
Tuesday, February 15, 2011

How Apple May Inadvertently Boost eBook Linking

"The net interprets censorship as damage and routes around it." John Gilmore, 1993
The official word from Apple finally came out today, in their press release announcing in-app subscriptions.
In addition, publishers may no longer provide links in their apps (to a web site, for example) which allow the customer to purchase content or subscriptions outside of the app.
Assuming that this limitation is applied to the Kindle app, the "Shop in Kindle Store" button will disappear, along with similar features in other ebook reader software such as the Nook, Sony, and Kobo apps.

You can go elsewhere if you want to read apocalyptic whining about Apple's imperious ways. What I want to focus on is how the net will route around this damage. The net will route around this damage by making more links.

At O'Reilly's Tools of Change for Publishing Conference, I was able to spend some time with Keith Fahlgren, a partner at ThreePress Consulting. He's part of a group that has worked on the improvement of linking capability in EPUB 3. A Public Draft of the specification was released today by IDPF.

Since EPUB3 is based on HTML5, all the outbound linking you would expect from a web page is already built into EPUB3 (as it was in earlier versions of EPUB). Ebook reader apps on iOS and Android use the WebKit rendering engine to display EPUB content. (Kindle devices use WebKit to render web pages, and Amazon uses WebKit to render Kindle ebooks, in Mobi format, on hardware other than its own.) So it's clear to me, at least, that even if ebook reader apps can't have "Kindle Store" buttons, the apps will be able to present "Kindle Store" links inside the ebook content. I'll bet you anything that Amazon is loading up ebook content with Kindle Store links: "If you like this book, perhaps you'd like this one". They'll even have specialized shop-books containing Kindle Store links available for free. Ditto the others.

Publishers aren't going to like Apple's power play. But neither will they like having their content hijacked to promote individual ebook stores. There will therefore be a great deal of pressure to create vendor-neutral, customer-friendly ways to link to ebooks from within ebooks, ways that Apple can't ban because doing so would break Safari.

Here's where it gets tricky. If a customer has already purchased the linked-to book, it's pointless to send them out to an ebook store; the link should connect to the copy they already own. But figuring out whether a consumer already has the book is messy, given the state of ebook identification. There are many other use cases as well, such as linking to a specific chapter or paragraph inside an ebook.

Unfortunately, this sort of linking is not a solved problem. EPUB3 adds one tool that will help: a new required metadata property, dcterms:modified, which identifies the particular revision of an EPUB file. In the past it was poorly specified what should happen to the EPUB identifier when the file was modified. With EPUB3, it's now clear that EPUB documents are identified internally at a level above the ISBN (different DRM wrappings of the same EPUB file often require different ISBNs) but below the "work".
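
To make that concrete, here's a minimal sketch, in Python, of how a reading system might pull out the two pieces of identity metadata EPUB3 relies on: the package identifier and the new dcterms:modified timestamp. It assumes a local file named book.epub and is an illustration of the idea, not production code.

```python
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "c": "urn:oasis:names:tc:opendocument:xmlns:container",
    "opf": "http://www.idpf.org/2007/opf",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def epub_identity(path):
    """Return (identifier, dcterms:modified) for an EPUB file."""
    with zipfile.ZipFile(path) as z:
        # META-INF/container.xml points at the package (OPF) document
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opf_path = container.find(".//c:rootfile", NS).get("full-path")
        package = ET.fromstring(z.read(opf_path))
        # a real implementation would follow the package's unique-identifier
        # attribute; grabbing the first dc:identifier is enough for a sketch
        identifier = package.findtext(".//dc:identifier", namespaces=NS)
        modified = None  # EPUB 2 files won't carry this property
        for meta in package.findall(".//opf:meta", NS):
            if meta.get("property") == "dcterms:modified":
                modified = (meta.text or "").strip()
        return identifier, modified

print(epub_identity("book.epub"))
```

Together, the identifier and the timestamp pin down a particular release of the file, which is exactly the granularity a reliable ebook link needs.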

There's still a lot of apparatus that will need to be built, both inside and outside of EPUB, for linking to work the way it should. Being able to decide which ebook to target will require external mechanisms. Perhaps a linking organization along the lines of Crossref will be formed; perhaps a more Wikipedia-ish database collaboration will suffice. In any case, something like xISBN, supercharged for ebooks, will be needed. Fahlgren told me that without a strong use case to drive the solution, the EPUB group has had a hard time going very far in their linking development.
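
Here's the kind of decision such a mechanism would make possible, sketched in Python. The identifiers and the SAME_WORK lookup table are invented stand-ins for whatever xISBN-like edition-clustering service eventually gets built.

```python
# stand-in for an external "which editions are the same book?" service
SAME_WORK = {
    "urn:isbn:9780000000001": {"urn:isbn:9780000000001", "urn:isbn:9780000000002"},
}

def resolve_ebook_link(target_id, owned_ids):
    """Open a local copy if the reader owns any equivalent edition,
    otherwise fall back to sending them to a store."""
    equivalents = SAME_WORK.get(target_id, {target_id})
    already_owned = equivalents & set(owned_ids)
    if already_owned:
        return ("open-local-copy", already_owned.pop())
    return ("go-to-store", target_id)

# the reader owns a differently-wrapped edition of the same book
print(resolve_ebook_link("urn:isbn:9780000000001", ["urn:isbn:9780000000002"]))
# -> ('open-local-copy', 'urn:isbn:9780000000002')
```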

A true ebook linking solution would need to include Amazon, of course, and since Amazon hasn't had much use for EPUB in the past, it seems to me that ebook linking won't get done by the EPUB group by itself. But now Apple may have handed the ebook technology community a giant use case for interoperable ebook linking.

Happy Day-After-Valentine's-Day, EPUB!

Sunday, September 5, 2010

What "Ping" Really Means

When I was little, some Swedish was spoken in my house. At some point, I realized that our bathroom words were different from those used by my Ohio playmates. In my house, we didn't do "poop" or "poo" and definitely not "crap" and most certainly not "shit". We did "bice". It may be a complete coincidence, but I have never dined at the Italian restaurant named "Bicé".

I know what you're thinking: OK, that's number two, so what did you call number one? No, not "pee" or "wee", and definitely not "piss". But I remember exactly when my mom explained to us that the word we used was not the one used by most English speakers. It was when she read us the book The Story about Ping by Marjorie Flack. My little sister roared with laughter, because "ping" was our word for urine.

I have no idea whether "ping" is a widely used bathroom word, here, in Sweden or anywhere; I don't go around talking much about ping. But I can tell you that Ping, Apple's new iTunes feature, is a piss-poor excuse for a social network.

I read Dave Winer's post describing the lameness of Ping, but I was still eager to try it for myself. Apple's ability to create things that "just work" is justifiably renowned. But having put my toes in the water, my reaction was more along the lines of Swizec's scatologically titled post.

I was stunned that Apple had not implemented the obvious functionality. When you listen to a song, you should be able to push a comment about it to your followers. In Ping, you can't. iTunes knows the songs I've rated most highly; inexplicably, these are not the songs it suggests for my profile. Ping seems interested only in things I've bought recently in the store, and even then it appears inept at using my iTunes Store-sanctioned activity in my profile. It's hard to believe that something so poorly executed could get released by Apple.

After thinking about it for a while, I realized what had happened. Imagine a system that can tell people what songs you have on your computer, and can connect you directly with the people interested in those songs. You want to share information about the songs and connect to people. Does that sound vaguely familiar? Do you remember Napster? The only difference between a well-implemented Ping and the legally challenged Napster is a way to push files around.

I think that Apple showed Ping to some music publishers, who flipped out at the possibility that it would be used for file sharing and forced Apple to cripple Ping. Or maybe Apple saw the file-sharing potential itself and worried that Ping could kill off its music-based revenue stream. Or could it possibly be that Apple is being incredibly devious, and is expecting that someone, somewhere will see how to add file sharing to Ping, resulting in huge popularity for Ping even while some elusive third party assumes all the Napster liability? We shall see.

The only thing I'm sure of is that Apple isn't likely to discuss what "Ping" really means – outside of its own bathroom.

Sunday, May 30, 2010

BookExpo, Digital Book 2010, and eBook Messes

When I was 5 years old, I wanted to be either a doctor or a garbage man. When I got my Ph.D., I thought that I had checked off the first option; that was stretching a 5-year-old's conception of a doctor pretty far. Looking back on my career so far, however, I see that using machines to get rid of dirt is a much more consistent theme. In graduate school, I used high vacuum systems to remove impurities from semiconductors; at Openly Informatics, we built software systems to clean up electronic resource metadata.

This week at IDPF's DigitalBook2010 Conference and BookExpo America, I was reminded over and over again how messy the so-called "supply chain" for books had become, and how the transition to ebooks from print is just making everything that much messier. I had known about the ebook ISBN mess, the book metadata mess, the territorial rights mess and of course the orphan works mess, but the presentations at IDPF on the "agency model" staggered me with the realization that big publishers were dumping a huge load of local sales-tax excrement on their channel partners. The last straw for me was a BISG presentation on rights standards. The speaker was trying to convince the audience that huge piles of money were to be made if only content rights could be efficiently chopped into smaller, more insidious pieces.

I hit the floor of the expo hoping to find solace in some shiny clean gizmos. I found all sorts of reader devices that I hadn't seen before, along with the alluring iPads and the competent Sony readers that I'd seen before. I didn't see a single Kindle on the entire show floor.

Well, maybe I wasn't looking very hard. But it was hard not to get the impression that IDPF and BookExpo were a gathering of the anti-Amazon forces of Openness.

It's easy to swallow the story line that Amazon is building a closed, sterile system with its Kindle and that B&N, Sony, and all the others are unleashing a torrent of innovation with their open EPUB standards and promises of interoperability. This story line usually makes an analogy with the early days of the PC, in which Apple's proprietary Mac system was swamped by a wave of innovation fostered by the PC's open design and Microsoft software that worked with all the hardware. The irony of Apple using EPUB for its iBookstore on the iPad is dutifully noted and left unexamined.

Somehow, BEA failed to sell me on the open vs. closed story line for ebooks. I don't see how open standards are going to clean up the scrapheaps on which the current book industry is built and in which the ebook industry is stuck.

I've mentioned that I've been reading about the early days of Intel. The Windows-Intel platform was never an open one; it was designed to sell Intel chips and Microsoft software. Apple's strategy, in contrast, was designed to sell computers and to avoid all the mess of keeping the hardware compatible.

After a day to reflect on Book Expo (and some time to sleep off a very nice party!) I came to a different story line that I find more useful in giving insight into the future course of the ebook industry. I think the key to understanding the different entrants in the ebook race is to understand which messes they're trying to tidy.

Amazon, with its Kindle, has focused on maintaining a clean shopping experience and a clean reading environment. By eliminating the computer tether with wireless Whispernet, they avoid a hardware compatibility mess. By choosing a proprietary file format, they avoid a document compatibility mess. By launching only in the US and extending to other territories slowly, they avoid all the territorial mess. Since their online bookstore had already addressed all the messy details of e-commerce for a huge catalog, the execution of Kindle was in large part an exercise in avoiding having to deal with any new messes.

Overdrive had a surprisingly large presence at BEA; they had two separate booths. Working with hardware makers such as Sony, Overdrive has attacked the problems of messy distribution channels, libraries and brick-and-mortar retailers in particular. The work of Overdrive has allowed publishers to pretty much forget that libraries use ebooks; the only place libraries were mentioned on the show floor in connection with ebooks was in the Sony booth, which works directly with Overdrive to surface the library channel to consumers through its Library Finder feature.
The company that made the biggest impression on me this week was Kobo, the ebook seller that spun out of Canada's Indigo Books. More than any of the current ebook players, Kobo is emphasizing an any-screen strategy. Unlike Amazon, Kobo is not afraid to tackle the mess of making a consumer's ebook work on all the devices they own. Kobo's $150 ebook reader device, which launches in the US on June 17, looks and feels like the device that Apple would have designed if Steve Jobs bothered to read books anymore. Perhaps most significantly, Kobo is tackling the ebook territorial rights mess. At IDPF, Michael Tamblyn, Kobo's Executive Vice President for Content, Sales and Merchandising, described Kobo's global reach. On one day last month, Kobo sold books to customers in 174 countries. Kobo does business in 6 different currencies and has agreements to sell books from 1,600 publishers.

Apple's Disneyfied approach, as expressed in the iPad, is to sweep all sorts of application messes into app sandboxes. Apple has done very little, though, to clean up ebook messes, and its complicity in letting the big six agency publishers dump on the supply chain suggests that it wants to be like Target and Walmart and just skim the blockbuster cream off the incumbent publishing ecosystem. I agree with Mike Cane that Apple will open the digital book floodgates by targeting the ebook creation tools mess.

There are still a lot of ebook messes looking for entrepreneurial sorts to show up with a broom, or perhaps a tsunami to just wash away the whole rickety shantytown. It should be an interesting couple of years.

I hope this piece has cleaned up the picture for you a bit!

Monday, March 22, 2010

Second Sourcing, Application Interfaces, and a 16-Bit Static RAM

While returning my Mac Plus to the attic, I decided to bring out some electronics relics I have from an even earlier era. The photo shows an undiced 1.25" wafer of integrated circuits and two packaged chips from around 1969. (The acorn hat is for scale.) There are about 200 transistors on each chip. I think it's a 16-bit static RAM chip; with a magnifying glass, I can see and count the 16 cells. For context, when the 128K Mac came out, it shipped with 64Kb DRAM chips, about 4,000 times the capacity. Today 4Gb chips are in production, hundreds of millions of times the capacity of my relic, and the silicon wafers are 30 cm in diameter.
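
If you want to check my arithmetic, the capacity ratios work out like this (a back-of-the-envelope Python calculation, taking my relic as a 16-bit SRAM):

```python
# capacity ratios for the chips mentioned above
relic_bits = 16               # the ~1969 16-bit static RAM
mac_era_bits = 64 * 1024      # a 64Kb DRAM chip from the 128K Mac era
modern_bits = 4 * 1024**3     # a 4Gb DRAM chip

print(mac_era_bits / relic_bits)   # 4096.0 -> roughly 4,000x
print(modern_bits / relic_bits)    # 268435456.0 -> a few hundred million x
```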

My father was one of the founders of Solid State Scientific, Inc. (SSSI), a company that made CMOS integrated circuits. SSSI, located in Montgomeryville, PA, started out as a second-source supplier for RCA's line of low-power CMOS logic chips. In the electronics industry, it has been a common practice for component manufacturers to license their circuit designs or specifications to other manufacturers so that their customers would be assured of an adequate supply. The second source company could compete on price or performance. For example, engineers could design systems with the 4060 14-bit ripple counter chip with internal oscillator, and know that they could buy a replacement chip from either RCA or SSSI. If RCA's fab was fully booked, SSSI would be able to fill the gap. There was no vendor lock-in.

Second source relationships could be tricky; AMD and Intel famously ended up litigating AMD's second-source status for the 8086 series of microprocessors. Logic family chips were commodities, and profit margins were thin. The second-source gambit was a judgment that a company could make more money by driving prices down and volume up. Companies like SSSI were always chasing after higher profit margins in new applications such as custom circuits for digital watches. The large volume parts would pay for their fabs, and the proprietary circuits would earn the profits, or at least that was the idea. Vendor lock-in, while it might discourage adoption and reduce volume, is good for profitability.

As chips have become more and more complicated, the chip manufacturing industry has realigned. Today, apart from giants like Intel, most chips are manufactured by foundry companies that don't do chip design at all. Chip design companies try to maintain high margins with exclusive intellectual property; the foundry companies aggregate volume and drive down cost by manufacturing chips from many different design companies.

I've been thinking about the way that the advance of technology moves application interfaces. In the days of the CMOS logic chips, the application interface was a spec sheet and a logic diagram. That was everything a circuit designer needed to include the component in a design. Today that interface has migrated onto the chip and into software; chip foundries provide software models for components ranging from transistors to processor blocks for designers to include in their products.

When software engineers talk about application interfaces, they're usually thinking about function calls and data structures that one block of software can use to interact with other blocks of software. These interfaces, once published and relied on, tend to be much more stable over time than the code hidden behind them. To some extent, software application interfaces can hide hardware implementations as easily as they can hide code. One result of this is that new chips may come with software interfaces that persist through different versions of the chip. In something of a paradox, the software interface is fixed while the hardware interface moves around.
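
Here's a toy illustration of that paradox, with invented class names rather than any real vendor's API: the published software interface stays put while the "hardware" behind it changes.

```python
from abc import ABC, abstractmethod

class TemperatureSensor(ABC):
    """The published interface: stable across hardware revisions."""
    @abstractmethod
    def read_celsius(self) -> float: ...

class RevASensor(TemperatureSensor):
    def read_celsius(self) -> float:
        # pretend this talks to the original chip
        return 21.5

class RevBSensor(TemperatureSensor):
    def read_celsius(self) -> float:
        # a later chip with a different register map; callers never notice
        return 21.5

def log_temperature(sensor: TemperatureSensor) -> str:
    # application code is written against the interface, not the hardware
    return f"{sensor.read_celsius():.1f} C"

print(log_temperature(RevASensor()))
print(log_temperature(RevBSensor()))
```

Application code written against TemperatureSensor never notices which revision it's talking to; that's the stability the spec sheet used to provide.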

Software has become more and more a part of our daily work, and interfaces have become important to non-engineers. File formats are a good example of application interfaces that are important to all of us. The files I produced on my Mac Plus 25 years ago are still with me and usable; because of that, you can read the Ph.D. dissertation I wrote using it. OpenOffice serves as a second source for Word, and I can use either program with some assurance that I will continue to be able to do so into the future.

There's some backstory there. The "interchange format" for the original Word was RTF. RTF is a reasonably good format, informed by Donald Knuth's TeX, but it was always a second-class citizen compared to the native "DOC" format. Microsoft published a spec, but it didn't follow the spec too closely and changed it with every new release of Word. One result was that it was difficult to use Word as part of a larger publishing system (which I tried to do back in my days as an e-journal developer). The last thing Microsoft wanted was for competition to Word to develop before Word grew to dominate the marketplace.

Cloud-based software (software as a service) depends in an interesting way on application interfaces. Consider Google Docs. You can send it a ".DOC" file created in Microsoft Word, do something with it, then export it. In a sense, Word is a "second source" for Google Docs, and consumers can use Docs without fear of lock-in. Docs adds its own web API so that developers can use it as a component of a larger web-based system. This is the "platform" strategy.

These new interfaces offer users a lock-in trade-off. While the customer gains the freedom to use a website's functionality with services from other companies, control of the interface leaves those other companies at the mercy of the company controlling the API. Developers coding to the interface are in the same situation as a second-source chip supplier: always exposed to competition, while the platform provider's lock-in grows stronger with every new component that plugs into it.

We now see a very interesting competition in platform strategies emerging. Apple's iPad/iPhone/iPod touch software platform tries to lock in consumers by opening an attractive set of APIs for app development. It goes further, though, by attempting to control a marketplace (the App Store) and imposing restrictive terms on app vendors. Google's Android platform tries to do the same thing in a much more open environment. Apple seems to have learned an important lesson, though. The biggest difficulty facing a company trying to plug into a platform is profitability, and the iPhone software marketplace appears to be offering viable business models for developers. It remains to be seen whether that condition will last, but it's clear that technology shifts are pushing services (such as phone service) that used to be stand-alone products into larger, more complex ecosystems.

Sunday, March 7, 2010

After 25 Years, My Mac Plus Still Works

On this day 25 years ago, I got my first Mac.
It had 128KB of RAM, a single-sided 3.5 inch internal floppy drive and a Motorola 68000 microprocessor running at 8 MHz. The black and white 9-inch CRT screen had a resolution of 512×342 pixels. My purchase was a bundle that included a 1200 baud modem, an Imagewriter printer, an external floppy drive, and a copy of MacPascal. As a Stanford student, I was eligible for a discount, so the whole package cost me $2,051.92, including sales tax. About a year later I got it upgraded to a Mac Plus.

I'm currently typing on the 8th Mac that I've used as my main computer. It's a MacBook Pro. It has 4 GB of RAM, a 320 GB hard drive, an Intel Core 2 Duo microprocessor running at 2.53 GHz, and a 15 inch color LCD screen with a resolution of 1440x900 pixels.

I never got rid of my original Mac. To celebrate its 25th birthday, I went up to the attic to bring it out for some air. My kids were excited to get a look at the antique. It still works.

What was interesting to me is that apart from being alarmed at the disk drive noises, and asking "is this what they called a floppy disk?", my teenagers sat down and immediately knew how to use MacPaint, MacDraw and Word 3.0. They understood how to interact with Ultima II. The graphical user interface notions introduced with the Mac are still alive and well.

This got me thinking about the longevity of user interfaces. For example, the rotary dial telephone that I grew up with was an interface introduced in the US in 1919; it lasted about 60 years. The Model T Ford that I wrote about last July had the same basic driver interface as my car does today, and that interface is still going strong, but the television I grew up with has almost nothing in common with the one I own today.

My all-time favorite YouTube video is taken from a Norwegian comedy show. It imagines what it might have been like for users when the new-fangled "book" came along:


The book's "user interface" (more precisely, the Codex) has had a pretty good run; it's in its third millenium. Kids 25 years from now will know how to use the codex interface, though I'm guessing they'll consider books to be hopelessly out of date, like the vinyl LPs that I had to move around to get at the Mac in my attic.

It won't be the Nook that replaces the book, though. I got to play with one the other night, and while it has some pretty interesting features, the user interface, which uses a small touch screen and a larger e-ink display, is not long for this world.

It's possible that my long run of Macs will eventually end with a touch-oriented device, such as the iPad. It's hard to imagine the devices that, 25 years from now, will make my very nice MacBook Pro seem as much an antique as my Mac Plus.

The kids lost interest in the Mac Plus after about 20 minutes. It had no internet.

More pictures of my Mac are on the Facebook fan page.

Saturday, January 2, 2010

Ten Predictions for the Next Ten Years


I didn't do so well in 2000 when I made predictions for the coming year; a year later, I determined that only one of my seven predictions came true.

I'm ten years older and wiser, and I guarantee, triple your money back, that at least 3 of this year's predictions will come true. In 2000 I didn't have Twitter to try my first draft on.
  1. The number of public libraries in 2020 will be less than half today's number. Addendum: the number of public library locations will be 50% more in 2020 than today.

    I will write a full post about this, but I believe the driving force for this will be e-books and book digitization, and the result will be consolidation, outsourcing and shuttering of public libraries. Update: I've written a full post.

  2. By the end of 2014, the world's largest aggregation of bibliographic metadata will not be WorldCat. By 2020, no one will care which aggregation is largest.

    Currently, the growth curve for LibraryThing makes it look like it will pass WorldCat in a few years. SerialsSolutions' Summon is definitely in the running. Google can't be discounted. But by the middle of the decade, the size question will seem silly, sort of like "What's the largest computer chip in the world?" or "Who has the most powerful nuclear bomb?" In 2010, we don't care about these questions. In 2020, data quality and currency will be much more important than data completeness. Also, see my article on "When are you collecting too much data?".

    Thanks, @DataG for the comments!

  3. In 2020, general purpose quantum computers will not be useful for any purpose.

    If there's one thing I learned from doing physics, it's that there ain't no such thing as a free lunch. If you spend a billion dollars on quantum computing, you might be able to factor an unfactorable integer or two by 2020.

  4. Open Linked Data will hockey-stick in 2012 on standardization of quad (named graphs?) transport.

    I've been meaning to write more about quad transport, but if you read my article on Pat Hayes' Surfaces, Leigh Dodds' article on Named Graphs, and the DERI proposal on N-quads, you'll know more than I do.

  5. In 2020, the search engine era will be ending. Search engines will give way to less centralized "knowledge fabrics".

    Search engines have a specific topology: spiders pull in data from millions of distributed sites and add it to one big pile that can be searched on. This topology works great if what you want to do is search, but have you ever noticed that Google can't count? Understanding the connections in rapidly changing data will require new topologies and new business models. In 2020, we'll know what they are.

  6. In 2020, China will be seen as having a more modern, sensible, and practical copyright regime than the US.

    In 2010, China has a poor reputation for enforcement of copyright. China will certainly mature in this respect, but to expect it to adopt the regime currently prevailing internationally is to ignore the best interests of China. I think that China will look to the original intent of the US Constitution and invent a copyright regime optimized "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."

  7. In 2020, more than half of the book industry's revenue will be facilitated by a Book Rights Registry.

    The Book Rights Registry that would be created by the Google Books Settlement Agreement is too good of an idea to be tied to the settlement agreement. It will happen whether the settlement is approved or not. People will complain about it... all the way to the bank. Note that my prediction uses the indefinite article. There may be more than one book rights registry!

  8. In 2020, the New York Times will be profitable, and will not have gone bankrupt.

    It's easy to predict that the newspaper industry will contract; it's already happening! But the New York Times is uniquely positioned to take advantage of the market gaps that will open when local newspapers fail. Because they do expensive original reporting, they will have little competition. Because they're family-controlled, like Ford, they won't fall victim to the stupidities of the equity markets.

  9. In 2020, Twitter will be a distant memory; Facebook will still be with us.

    Facebook has demonstrated the ability to purposefully evolve and extend. Twitter seems not to understand itself. While my neighbor David Carr thinks that Twitter Will Endure, his argument applies to the idea, not the company. Twitter the company will be squeezed between multipurpose networks like Facebook on the high end and non-proprietary protocols on the low end.

    Thanks, @CodyBrown for the comments!

  10. On January 1, 2020, when I review this list of predictions, I will use a Mac to do it.

    It's been almost 25 years that I've been using a Mac. Do you really think that the mythical Apple tablet of 2020 will not be a Mac?
