Tue, Oct 9, 12
In what passes as technology journalism, 3 months = 180° turn. (Why this particular author changed his heart, brain, spleen and testosterone level for this particular story is a matter for another day.) What is worrisome here is that such fickleness of opinion has become excruciatingly common in online journalism. It pays to shout, shout first, shout often, shout loud, shout different, but most familiarly, just shout.
Shouting sells. We’ve known this for a long time. If companies are daft enough to let their ad buyers talk them into spending money on those who shout the most, then publishers would be reckless to leave money on the table. Some publishers say they would like to steer their publications away from yellow journalism, but in a compensation system based solely on pageviews and clicks, they are beholden to a Romneyesque principle: “We’re not going to let our campaign be dictated by fact-checkers.”
It’s far less important how one author feels about the iPhone 5 than the alarming fact that Slate let this author publish a 1,200-word essay about a device he hadn’t used, nearly three months before it shipped. Why? Because shouting creates pageviews and clicks, and…well, there’s nothing more to say: shouting sells. If this author or another wants to be in the game, sooner rather than later, he or she will have to start shouting, louder and louder.
Paradoxically, some of the most thoughtful people around work in journalism. And yet all efforts to transition from print-based to online publishing without reliance on pageviews and clicks have essentially flopped. The current crop ranges from VC-supported publicity outlets masquerading as online newsdailies to those whose contribution to civilization stops at copy-and-paste aggregation in a slide show.
While what’s new may not be fully satisfying, there’s no going back to the old either. Regardless, all around the world and especially in Europe there are calls to subsidize old print by taxing new tech:
Mind you, these aren’t really calls to incentivize companies to create new models of service delivery online but to subsidize and sustain their existing operating structures during transition to an online regime that expects them to inevitably adopt, yes, pageview advertising for survival.
Nobody likes advertising, and yet we seem to be stuck with its corrupting effects on public discourse online. It corrupts news delivery, Facebook privacy, Twitter flow, Google search, Kindle reading and so on. There doesn’t seem to be any way to make profits online, or often just survive, without pageviews and clicks, and all the shouting that entails.
Sadly, publishing is not the only industry suffering the ravages of transition to digital. We want better and cheaper telephony, faster and more ubiquitous Internet access, digitally efficient health care, on-demand online education, 21st century banking, always-available music, TV and movies…
We believe the future is fully digital, and the future is now. And yet experiments with new digital models not based on advertising have not succeeded at any scale that matters. Entrenched players spend hundreds of millions to maintain their regulatory moats and leverage their concentrated distribution power. In Canada, just three publishing groups own 54% of newspapers. If allowed to merge, Universal and EMI would control 51 of 2011’s Billboard Hot 100 songs. Six Hollywood studios account for well over 3/4 of the market. AT&T and Verizon alone have over 440,000 employees. Predictably, the FCC remains the poster child of regulatory capture.
The un-digital camp is far from relinquishing their power. Models that can replace them aren’t here. Advertising online has been corruptive of user privacy and editorial integrity. I’m afraid it’ll be a miracle if the shouting subsides anytime soon.
Tue, Sep 11, 12
I am a phlegmatic man. But once, just once, I want to wake up and invent a new design philosophy, and acronymize it so sublimely even a sixth-grader can instantly grasp its exultation of the human spirit:
I want to get on every telescreen to explain The Theory and Practice of MUSE Design Philosophy:
I want to show everyone how hard our team worked:
Then, right after a Two Minutes Hate, I want to take the stage, hold the fruits of our labor in my hand and let everyone soak in its glory:
Yes, there will be doubters. And there will be haters. But we will deal with them…in Room 101:
In the fullness of time, there will be learning, there will be understanding, and there will be acceptance. One unperson after another.
One bright cold day in September when the clocks strike thirteen, I will come back and reassure everyone that we do what we do for the greater good.
Wed, Jun 13, 12
As a budding standup comedienne, Siri opened Apple’s WWDC 2012 Monday morning and concluded her act with the prophetic:
It’s really hard for me to get emotional, because as you can tell, my emotions haven’t been coded yet.
Clearly, Siri is a work in progress and she knows it. What others may not know, though, is that while Siri is a recent star in the iOS family, her genesis in the Apple constellation goes far back.
The Assistant and Assist
Nearly three decades ago, fluid access to linked data displayed in a friendly manner to mere mortals was an emerging area of research at Apple.
Samir Arora, a software engineer from India, was involved in R&D on application navigation and what was then called hypermedia. He wrote an important white paper entitled “Information Navigation: The Future of Computing.” In fact, working for Apple CEO John Sculley at the time, Arora had a hand in the making of the 1987 “Knowledge Navigator” video — Apple’s re-imagining of human-computer interaction into the future:
Unmistakably, the notion of Siri was firmly planted at Apple 25 years ago. But “Knowledge Navigator” was only a concept prototype, Apple’s last one to date. Functional code shipped to users along the same lines had to evolve gradually over the next few years.
After the “Knowledge Navigator,” Arora worked on important projects at Apple and ran the applications tools group that created HyperCard and 4th Dimension (one of the earliest GUI-based desktop relational databases). The group invented a new proprietary programming language called SOLO (Structure of Linked Objects) to create APIs for data access and navigation mostly for mobile devices.
In 1992, Arora and the SOLO group spun off from Apple as Rae Technology, headquartered on the Apple campus. A year later, Rae Assist, one of the first Personal Information Managers (PIMs), was introduced. Based on 4th Dimension DB, Assist combined contact management, scheduling and note taking in an integrated package (automatically linking contact and company information or categorizing scheduled items, etc) for PowerBook users on the go. Although three versions of Assist were released in the following two years, Rae didn’t make any money in the PIM business. But as Rae also worked with large enterprise customers like Chevron and Wells Fargo in database-centric projects, the company realized the SOLO frameworks could also be used to design large-scale commercial websites:
SOLO is based on a concept that any pieces of data must accommodate the requirement of navigation and contextual inheritance in a database environment. In layman terms, it means that every piece of text, graphics and page is embedded with an implicit navigation framework based on the groupings or order in which the items are organized. In other words, a picture, which is a data object, placed in this programming environment will automatically know the concept of ‘next’ and ‘previous’ without having to write an explicit line of code. This simplifies the coding process. Since the information and business logic organization models were already completed for the client-software, converting this to a web application was simply a recompilation of the codes for a different delivery platform. The project was completed within four weeks and we were stunned as to how simple it was. This was an important validation point illustrating the portability of our technology for cross-platform development.
It wasn’t long before we realized that SOLO, a technology based on information organization models, could be adapted and modified for an application to build web sites. A prototype was developed immediately and soon after a business plan was developed to raise venture funding. NetObjects was founded.
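The idea described above, that a data object inherits navigation from the grouping it sits in rather than from code written per object, can be sketched in a few lines. This is an illustrative toy in Python, not SOLO itself; the class and data names are invented for the example:

```python
# Illustrative sketch of SOLO's core idea (not actual SOLO code):
# objects placed in an ordered group know "next" and "previous"
# implicitly, from their position, without any per-object code.

class LinkedGroup:
    """A container whose members gain navigation purely from
    the order in which they were organized."""

    def __init__(self, items):
        self._items = list(items)

    def next_of(self, item):
        i = self._items.index(item)
        return self._items[i + 1] if i + 1 < len(self._items) else None

    def prev_of(self, item):
        i = self._items.index(item)
        return self._items[i - 1] if i > 0 else None

# A picture dropped into the group automatically "knows" its neighbors.
gallery = LinkedGroup(["cover.png", "chart.png", "photo.png"])
print(gallery.next_of("cover.png"))   # chart.png
print(gallery.prev_of("chart.png"))   # cover.png
```

Once navigation lives in the structure instead of the objects, retargeting the same content to a new delivery platform (a website, say) is largely a matter of recompiling, which is exactly the portability Arora describes.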
Rae quickly applied for patents for website design software and transferred its technology IP to NetObjects. With seed money and the core team from Rae, NetObjects had a splashy entry into what later came to be known as Content Management Systems (CMS). Unfortunately, the rest was rough going for the fledgling company. Not long after IBM invested about $100M for 80% of NetObjects, the company went public on NASDAQ in 1999. Heavily dependent on IBM, NetObjects never made a profit and it was delisted from NASDAQ. IBM sold it in 2001.
Outside Apple, SOLO traveled a meandering path into insignificance. Rae Technology became a venture capital firm and NetObjects eventually atrophied.
Flying through WWW
Only three years after the SOLO group left Apple for Rae, Ramanathan V. Guha, a researcher in Apple’s Advanced Technology Group, started work on the interactive display of structured, linkable data, from file system hierarchy to sitemaps on the emerging WWW. Guha had earlier worked on CycL knowledge representation language and created a database schema mapping tool called Babelfish, before moving to Apple to work for Alan Kay in 1994.
His new work at Apple, Project X (HotSauce, as it was later called), was based on 3D representation of data that a user could “fly through” and Meta-Content Format (MCF), a “language for representing a wide range of information about content” that defined relationships among individual pieces of data. At an Apple event at the time, I remember an evangelist telling me that HotSauce would do for datasets what HTML did for text on the web.
Apple submitted MCF to IETF as a standard for describing content and HotSauce (with browser plugins for Mac OS and Windows) found some early adopters. However, shortly after Steve Jobs’ return in 1997, it was a casualty of the grand house cleaning at Apple. Guha left Apple for Netscape, where he helped create an XML version of MCF, which later begot RDF (W3C’s Resource Description Framework) and the initial version of RSS standards.
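The thread running from MCF to RDF is the same simple idea: describe content as explicit relationships that software can traverse, rather than as opaque text. A hypothetical sketch in Python (the node names and `partOf` property are invented for illustration, not drawn from the actual MCF or RDF vocabularies):

```python
# Toy illustration of the MCF/RDF idea: content described as
# (subject, property, object) triples, so a browser like HotSauce
# can "fly through" the relationships instead of parsing prose.
triples = [
    ("hotsauce.html", "author", "R. V. Guha"),
    ("hotsauce.html", "partOf", "apple.com/research"),
    ("apple.com/research", "partOf", "apple.com"),
]

def ancestors(node):
    """Walk 'partOf' links upward to recover a node's context."""
    for s, p, o in triples:
        if s == node and p == "partOf":
            return [o] + ancestors(o)
    return []

print(ancestors("hotsauce.html"))  # ['apple.com/research', 'apple.com']
```

From relationships like these, a sitemap, a hierarchy view, or a 3D fly-through all fall out of the same data, which is what made the format attractive as a general standard.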
It’s the metadata, stupid!
Even in its most dysfunctional years in the mid-1990s, Apple had an abiding appreciation of the significance of metadata and the relationships among its constituent parts.
SOLO attempted to make sense of a user’s schedule by linking contacts and dates. HotSauce allowed users to navigate faceted metadata efficiently and with some measure of fun to find required information without having to become a data architect. The Assistant in the “Knowledge Navigator” had enough contextual data about its master to interpret temporal, geo-spatial, personal and other contextual bits of info to draw inferential conclusions to understand, recommend, guide, filter, alert, find or execute any number of actions automatically.
There is an app for that
A decade later, Apple was now in need of technology to counter Google’s dominance in search-driven ad revenue on its iOS platform. A frontal assault on Google Search would have been silly and suicidal, notwithstanding the fact that Apple had no relevant scalable search technology. But there was an app for that. And it was called Siri.
Siri was a natural language abstraction layer accessed through voice recognition technology from Nuance to extract information from primarily four service partners: OpenTable, Google Maps, MovieTickets and TaxiMagic. Siri was on the iPhone first but it was headed to BlackBerry and Android. Apple bought Siri on April 28, 2010 and that original app was discontinued on October 15, 2011. Now Siri is a deeply embedded part of iOS.
Of course, the Siri code and the team came to Apple from an entirely different trunk of the semantic forest, from SRI International’s DARPA-funded Artificial Intelligence Center projects: Personalized Assistant that Learns (PAL) and Cognitive Assistant that Learns and Organizes (CALO), with research also conducted at various universities.
What made Siri interesting to Apple wasn’t the speech recognition or the simple bypassing of browser-based search, but the semantic relationships in structured and linkable data accessed through natural language. It was SOLO redux at scale and HotSauce cloaked in speech. It wasn’t meant to compete with Google in search results but to provide something Google couldn’t: making contextual sense.
Unlike Google, Siri knows, for example, what “wife” or “son’s birthday” means and can thus provide, not a long list of departures for further clicks, but precise answers. Siri delivers on the wildest dreams of SOLO and HotSauce of an earlier generation. In two years, even as limited to just a few service partners, Siri progressed far more than the developers of SOLO or HotSauce could have imagined. It now speaks the vast majority of the world’s most prominent languages, with connections to local data providers around the globe.
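Why does knowing what “wife” means yield one precise answer instead of a page of links? Because the relationship itself is structured data that can be resolved before any search happens. A toy sketch, assumed for illustration and in no way Apple’s implementation (the contact names and numbers are invented):

```python
# Hypothetical sketch of semantic resolution: a relationship word
# in an utterance maps to structured personal data, producing one
# precise answer rather than a list of search results.
contacts = {"Anne Example": {"phone": "555-0100"}}
relationships = {"wife": "Anne Example"}  # invented user data

def resolve(utterance):
    """Resolve a relationship word in the utterance to a contact."""
    for relation, name in relationships.items():
        if relation in utterance:
            return name, contacts[name]["phone"]
    return None  # no known relationship in the utterance

print(resolve("call my wife"))  # ('Anne Example', '555-0100')
```

The contrast with keyword search is the point: a search engine matches strings, while an assistant with contextual data can answer the question the strings imply.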
Having intimate conversations with Samuel Jackson and John Malkovich, Siri has become a TV star. Most iOS users already think Siri has a personality, if not an attitude altogether. Hard to say what will happen when she actually gets her “emotions coded.”
Wed, Jun 6, 12
It has become a routine: rumors and speculations lead up to an Apple event wherein the company introduces products that “fail expectations” but go on to sell out and make huge profits. But why does this happen like clockwork, year after year?
To be sure, part of it comes from market speculators who make money from AAPL price swings, but the inability or unwillingness of analysts and pundits to understand how Apple works is the more likely reason.
The “failure” is declared by comparing Apple’s hardware specs against those of its competitors. Like “covering” a U.S. President through the confines of the White House press briefing room, hardware specs are the lazy person’s ideal tool: short, simple, often numerical, but ultimately not very illuminating.
One of the key ingredients of Apple’s spectacular success over the last decade has been the inability of its rivals to distinguish hardware from product. As the proverbial design adage goes, “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole.” Non-geeks, Apple’s primary audience, aren’t interested in what the hardware is, but how the product solves their specific problems. Hence they buy on demonstrable value, rather than on potential of hardware specs.
But Apple detractors ask why should the most valuable technology company on the planet — with a large patent portfolio, unrivaled in-house industrial design capabilities and enormous influence over its supply chain and component pricing — fail to offer the best hardware specs in the industry for its premium products? There are a few basic reasons why Apple doesn’t believe it’s in a hardware race.
Hit or miss, no surprises here
Like Hollywood, Apple is perceived to run a hit-driven business. Perhaps the single most important question affecting AAPL’s P/E compression is whether Apple can continue to generate blockbuster products with regularity, especially in its most profitable iOS line.
This requires a sufficient degree of surprise of the “One more thing…” variety and the secrecy that secures it. However, it’s a monumental task for Apple to coordinate the sourcing and assembly of countless parts for its iOS and Mac devices from three continents in total secrecy so that it can spring on its loyal users products to buy at specific annual intervals. No wonder Tim Cook recently said Apple is going to “double down on secrecy.”
While surprise has been essential to Apple’s success, total secrecy may no longer be attainable or even necessary.
From Retina displays to DRAM chips to CPUs, Apple’s principal component supplier for iOS devices is Samsung, accounting for about a quarter of the component cost of an iPhone. To add insult to injury, Samsung is also Apple’s biggest rival in consumer electronics and one that takes particular delight in aping every aspect of its products down to icons. Furthermore, as various investigations and court cases reveal, many people in Apple’s vast supply chain are fond of divulging its upcoming product secrets in exchange for money from stock manipulators and rivals. One way or another, rumors, tips, “supply chain checks” and “sources in Asia” turn into a steady stream of “Confirmed!” headlines that then precondition us to discount the significance of Apple’s offerings when they do in fact materialize.
Lately, it’s become very difficult for Apple to surprise us with breakthrough hardware. As a matter of fact, since the introduction of the iPhone, Apple has given us precious few surprising breakthroughs in hardware. To an average user, the differences between consecutive iPhone versions, from 3G to 4S, are purely incremental improvements or aesthetic embellishments, not hardware breakthroughs. Sure, better cameras, higher resolution screens, cases that feel richer to the touch, faster speeds…but no significant surprises or breakthroughs in hardware. In hardware terms, the iPad is indeed not much more than a large iPod touch. And yes, days before their likely introduction, we know and certainly expect Retina-like displays on upcoming Macs. They’ll surely be great, but hardly surprising hardware breakthroughs.
Lots of new technologies not yet in iOS products have already been deployed by an army of Apple rivals: larger phone screens, NFC, haptic displays, stylus, inductive charging, very high-resolution cameras and so on. So we couldn’t count their appearance on upcoming Apple products as surprising or hardware breakthroughs either.
But doesn’t Apple have a ton of interesting patents yet to be deployed? Indeed, Apple has a huge spectrum of hardware patents, ranging from illuminated hardware cases to password recovery information stored inside a charging adapter to optical stylus with advanced haptics to Thunderbolt interface on iOS devices to coded/secure magnets to ionic wind generator cooling systems.
We could certainly see in an upcoming iOS device some unforeseen application of Liquid Metal or a novel 3D camera setup or flawless bio-metric security or a one-week battery or a silky smooth digital pen with zero perceptible latency or wireless power transmission or bendable screens…We could, but we likely won’t any time soon.
Apple knows how to count SKUs
There are certain characteristics of Apple that put it in a different category than any other hardware manufacturer. Unlike others, Apple carries an extremely small number of products in each category, with minor and easily discernible differences. There’s really only one iPhone and one iPad. There are no pro, lite, region-specific or one-off versions. Samsung can introduce a phablet with a stylus. When it fails, not a big deal, Samsung has dozens of other models. Motorola can try a smartphone that forms the brains of a larger computer when docked into it. When it fails, it’s yet another Motorola model to be forgotten. Kyocera can try a dual-touchscreen phone. When it fails, Kyocera has scores of other models that will also fail.
When Apple introduces its annual phone, however, it’s a single product, with minor storage, radio and black|white SKU variations. These days, a new iPhone product has to sell 100-200 million units within 12-18 months. There’s no room for (what often seems) frivolous experimentation so prevalent in the industry. No other single product sells in such large numbers.
Apple is also unique in constantly moving millions of users into elevated patterns of computing behavior over time. Apple creates markets, others follow. As a market maker, if you will, Apple is in a unique position to create the rules and then educate its users in how to participate in the new paradigm. No other technology company has ever created so many “markets” and educated so many users. Historically, Apple has taught millions how to use GUI-based computers with a mouse. It transitioned personal storage from floppies to hard disk to optical disk to flash. It moved the notion of a cellphone from a device that makes phone calls to a diminutive personal computer with multiple sensors and multi-touch input for hundreds of millions of people. It got millions to pay for music online by the song. It educated people into buying billions of apps instead of using a web browser.
Of course, these new markets have been very good to Apple. But (as I explained four years ago in Why Apple doesn’t do “Concept Products”) with market making comes the responsibility of introducing new technologies with extreme deliberation and the willingness to educate tens of millions of users year after year. There’s no magic to introducing a new OS, for example, when your previous one is deployed by only 7% of your users. In market making, there are no shortcuts.
It may be a dilemma, but it’s not a weakness
The rumor-inflated expectations prior to Apple product introductions, followed by the inevitable “let down,” have become a familiar leitmotif. Doubled-down secrecy or not, this is unlikely to change in the near future.
But what the pundits may be missing is that Apple is hardly unhappy about this state of affairs. When Steve Jobs said in 2010 that “RIM would have a hard time catching up to Apple because it has been forced to move beyond its area of strength and into unfamiliar territory of trying to become a software platform company,” it was clear that the smartphone value proposition had transitioned from hardware to platforms — a clear Apple core competency.
Apple has the best hardware-software-service integration in the industry, bar none. So the fact that the new device wars are now actually fought not on hardware specs but on vertical integration accords Apple a unique advantage. Hardware discipline coupled with a constrained SKU count gives Apple enormous economies of scale which, in turn, provide depth, reach, staying power, unparalleled gross margins, service excellence and, ultimately, customer loyalty. Counterintuitive as it may seem, rivals may find out that too much hardware “innovation” can actually kill a company. And that is a dilemma.
Mon, May 17, 10
Last week, Forrester analyst Sarah Rotman Epps published Curated Computing: Designing For The Post-iPad Era where she observed:
“What’s revolutionary about the iPad is the experience that it delivers: The iPad is a new kind of PC that ushers in an era of Curated Computing.”
Not unexpectedly, this drew the attention of the anti-Apple echosystem that regards the Cupertino company as evil incarnate, hellbent on destroying the “open web” by curating its users’ experience on Apple devices.
Taking the baton of anti-Apple venom from Adobe’s Lee (Go screw yourself Apple) Brimelow, Google’s newest evangelist Tim (I hate, hate Apple) Bray responded to Forrester’s “Curated Computing” notion with élan:
I shudder to the core.
In a series of tweets on Twitter, Bray piled on Apple with escalating snarkiness. Let’s review his misdirections away from Google’s own sins:
Curated computing: Who needs complexity?
Exactly, who needs complexity? Who does need complexity other than those who profit from mediating its ill effects on consumers? Who, for example, needs Byzantine complexity purposely injected into our legal, tax or health care systems? Who profits from the shameful complexity of our IT universe? Who benefits from the anti-virus industry? Who profits from the complexity of Facebook’s privacy settings, Oracle’s pricing structure or Microsoft’s SharePoint hairball? Who needs the complexity of users being forced to navigate through six different Android OS versions against a permutation of dozens and dozens of carriers, handset manufacturers and devices? Google would like you to believe users are craving this complexity, just as Microsoft tried to convince you for the last two decades.
[John Gruber (@gruber) answers @timbray: “I think this one actually nails it: ‘Curated computing: Who needs complexity?’ Many use cases where we *don’t* need complexity.” Tim Bray responds:]
Agreed, many indeed, but freedom is too high a price.
Freedom? Whose freedom? The freedom of those who directly profit from the artificial complexity to continue as they please or the freedom of users who are being taxed by these parasites? Let’s ignore the absurdity of equating Apple’s banning of proprietary Flash with the abrogation of, say, the First Amendment, a real freedom.
Curated computing: Don’t bother your pretty little head, we’ll take care of what you see.
Just like Google telling the rest of the world: “If someone forced us to [disclose how our search advertising business works], it would destroy our product.” This from a company that’s currently being investigated by the European Commission for antitrust ramifications of its opaque search ranking algorithms and the resulting 90% monopolistic share of the European search market. Google knows best.
Curated computing: Pay no attention to the man behind the curtain.
Let’s open that curtain a bit. Here’s what Bray’s bosses and Google founders Sergey Brin and Larry Page said in their The Anatomy of a Large-Scale Hypertextual Web Search Engine a few years ago:
Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users.
We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.
It could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. We believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
It’s not as if, a decade later, the rest of the world can see what’s behind Google’s perfectly opaque and proprietary search and advertising curtain, is it? Can you say “link farms” and SEO? Do you really know what exactly Google does with your click-stream history? Did you know Google has been snooping on European WiFi transmissions until a few days ago even though the company denied it previously? Do you really know what the man behind the curtain is doing?
Curated computing: Admire the beautiful murals on the garden walls.
Or you can go “out there” to admire the graffiti on the…ground? In Google’s walled garden of advertising, for example, “cougars and cubs are out, but sugar daddies and sugar babies are in.” Google “will take care of” your sexual proclivities.
Curated computing: Freedom is over-rated.
So are utopias.
I, for one, welcome our new curatorial overlords.
Of course, no mention of our current overlords: complexity merchants.
Curated computing: What they have right now in China.
And what they also had in China just a few years ago when Bray’s employer Google went in three-monkey style to conduct commerce, despite all manner of people pleading with the overlord of the search/ad business not to.
Curated computing: Just fine if you’re the curator.
Google should know, its share of the search market hovers around 65-70% and its U.S. search advertising share is over 75%. If you’re the sole “curator” of AdSense/AdWords things should be just fine.
Curated computing: Your gated-exurban-community home on the Internet.
Perhaps the most pernicious proposition of the “everything must be open” crusade is the notion that curation is bad and anti-freedom. Soldiers of this crusade confuse freedom with competition. Our museums are not football-field sized warehouses where art objects are indiscriminately dumped and our magazines and blogs are not amorphous containers of randomly selected articles. Our classrooms, restaurants, hospitals and indeed all our civilized institutions are firmly reliant on curation of one kind or another. The goal should be for curators to compete, not for curation to be declared illegal and unholy by the “open” zealots.
Who’s behind the curtain?
Just as Adobe is desperately trying to yell at the world, “Don’t buy into Apple’s walled garden, get locked into our own proprietary Flash,” so is Google trying to misdirect consumers’ attention from its own monopolistic sins to Apple’s mobile platform where 100 million users voted with their own money to enjoy 200,000 apps. The evil man behind the curtain in this scenario is not Apple’s curation, it’s the frightening prospect of Google getting cut off from search and ad revenue derived from its naked domination of the search box on top of your web browser. That, unfortunately, doesn’t sound like an appealing public cry, hence the whiny “Curated Computing” misdirection.