Apple Maps: The FAQ

Q: Is this Apple’s Mapgate?

A: Yes.

Q: It is?

A: It most certainly is. Apple released a product that, on its very first day, didn’t have the coverage of Google Maps, which took about eight years to get here:

[Image: an early Google Maps failure]

Q: You’re exaggerating. Google Maps has the best user experience of any company in this business, does it not?

A: Yes it does, if you walk on water, like Google does from Alicante to Valencia in Spain:

[Image: Google Maps walking directions crossing the water from Alicante to Valencia]

Q: C’mon that must be old data.

A: Well, the map says it’s current:

[Image: Google Maps attribution showing a current map data date]

Q: Maybe Google just didn’t get to it yet. Google Maps is in beta anyhow.

A: Yes, it must be:

[Image: Google Maps directions marked as beta]

Q: This is confusing.

A: No it’s not. It simply means Google Maps can and likely will get better. Just like Apple Maps.

Q: But Google Maps has been around for the better part of a decade.

A: Yes, mapping is hard.

Q: Then why did Apple kick Google Maps off the iOS platform? Wouldn’t Apple have been better off offering Google Maps even while it was building its own map app? Shouldn’t Apple have waited?

A: Waited for what? For Google to strengthen its chokehold on a key iOS service? Apple has recognized the significance of mobile mapping and acquired several mapping companies, IP assets and talent in the last few years. Mapping is indeed one of the hardest of mobile services, involving physical terrestrial and aerial surveying, data acquisition, correction, tile making and layer upon layer of contextual info married to underlying data, all optimized to be served under often trying network conditions. Unfortunately, like dialect recognition or speech synthesis (think Siri), mapping is one of those technologies that can’t be fully incubated in a lab for a few years and then unleashed on several hundred million users in more than 100 countries in a “mature” state. Thousands of reports from individuals around the world, for example, have helped Google correct countless mapping failures over the last half decade. Without this public exposure and help in the field, a mobile mapping solution like Apple’s stands no chance of maturing.
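
To make “tile making” a bit more concrete, here is a minimal sketch of the standard Web Mercator “slippy map” tile math used across the industry (a generic illustration, not Apple’s or Google’s actual pipeline):

```python
import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
    """Map a WGS84 coordinate to its Web Mercator tile (x, y) at a zoom level.

    At zoom z the world is cut into a 2^z by 2^z grid of tiles; every extra
    zoom level quadruples the tile count, which hints at why building and
    correcting worldwide coverage takes years.
    """
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# The tile containing Valencia, Spain at a street-level zoom
print(latlon_to_tile(39.4699, -0.3763, 15))
```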

Q: So why not keep using a more established solution like Google’s?

A: Clearly, no one outside Mountain View and Cupertino can say what forced the parties into this state of affairs. Did Google, for example, want to extract onerous concessions from Apple involving more advertising leeway, user data collection, clickstream tracking and so on? Thanks to the largest fine in the FTC’s history, which Google had to pay (don’t laugh!), we already know how desperate Google is for users’ data and how cavalier it is with their privacy. Maybe Apple didn’t like Google’s terms, maybe it was the other way around, perhaps both parties agreed it was best to have two separate apps available…we don’t know. After well-known episodes with Microsoft, Adobe and others, what we do know is that Apple has a justifiable fear of key third parties dictating terms and hindering its rate of innovation. It’s thus understandable why Apple would want to wrest its independence from its chief rival on its most important product line.

Q: Does Apple have nothing but contempt for its users?

A: Yes, Apple’s evil. When Apple barred Flash from iOS, Flash was the best and only way to play .swf files. The alternative Apple favored, H.264 video, wasn’t nearly as widely used. Thus Apple’s solution was “inferior” and appeared to be against its own users’ interests. Sheer corporate greed! A trillion words have been written about just how misguided Apple was in denying its users the glory of Flash on iOS. Well, Flash is now dead on mobile. And yet the obliquity of the ecliptic is still about 23.4°. We seem to have survived that one.

Q: So all you’re saying is that Apple Maps was rushed out the door even though it wasn’t quite ready?

A: As they say, every turn-by-turn direction starts with the first step. The longer Apple waits the harder it gets. From iPods to iTunes to iPhones to iOS, Apple’s modus operandi has been to introduce products and continuously improve them into widely attractive maturity by adding value without increasing prices, enlarging ecosystems, deepening integration and generally delighting users with a constant stream of innovations. With a user base fast approaching half a billion and thousands waiting in line to buy its latest product at this very moment, we empirically know this to be true. Why should Apple Maps be any different?

Spirit of Siri at Apple 25 years ago


As a budding standup comedienne, Siri opened Apple’s WWDC 2012 Monday morning and concluded her act with the prophetic:

It’s really hard for me to get emotional, because as you can tell, my emotions haven’t been coded yet.

Clearly, Siri is a work in progress and she knows it. What others may not know, though, is that while Siri is a recent star in the iOS family, her genesis in the Apple constellation goes far back.

The Assistant and Assist

A quarter century ago, fluid access to linked data displayed in a friendly manner to mere mortals was an emerging area of research at Apple.

Samir Arora, a software engineer from India, was involved in R&D on application navigation and what was then called hypermedia. He wrote an important white paper entitled “Information Navigation: The Future of Computing.” In fact, working for Apple CEO John Sculley at the time, Arora had a hand in the making of the 1987 “Knowledge Navigator” video — Apple’s re-imagining of the future of human-computer interaction:

[Image: a still from Apple’s “Knowledge Navigator” video]

Unmistakably, the notion of Siri was firmly planted at Apple 25 years ago. But “Knowledge Navigator” was only a concept prototype, Apple’s last one to date. Functional code shipped to users along the same lines had to evolve gradually over the next few years.

After the “Knowledge Navigator,” Arora worked on important projects at Apple and ran the applications tools group responsible for HyperCard and 4th Dimension (one of the earliest GUI-based desktop relational databases). The group invented a new proprietary programming language called SOLO (Structure of Linked Objects) to create APIs for data access and navigation, mostly for mobile devices.

In 1992, Arora and the SOLO group spun off from Apple as Rae Technology, headquartered on the Apple campus. A year later, Rae introduced Assist, one of the first Personal Information Managers (PIMs). Built on the 4th Dimension database, Assist combined contact management, scheduling and note taking in an integrated package (automatically linking contact and company information, categorizing scheduled items, etc.) for PowerBook users on the go. Although three versions of Assist were released over the following two years, Rae didn’t make any money in the PIM business. But as Rae also worked with large enterprise customers like Chevron and Wells Fargo on database-centric projects, the company realized the SOLO frameworks could also be used to design large-scale commercial websites:

SOLO is based on a concept that any pieces of data must accommodate the requirement of navigation and contextual inheritance in a database environment. In layman terms, it means that every piece of text, graphics and page is embedded with an implicit navigation framework based on the groupings or order in which the items are organized. In other words, a picture, which is a data object, placed in this programming environment will automatically know the concept of ‘next’ and ‘previous’ without having to write an explicit line of code. This simplifies the coding process. Since the information and business logic organization models were already completed for the client-software, converting this to a web application was simply a recompilation of the codes for a different delivery platform. The project was completed within four weeks and we were stunned as to how simple it was. This was an important validation point illustrating the portability of our technology for cross-platform development.

It wasn’t long before we realized that SOLO, a technology based on information organization models, could be adapted and modified for an application to build web sites. A prototype was developed immediately and soon after a business plan was developed to raise venture funding. NetObjects was founded.
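
As a rough illustration of that “implicit navigation” idea, here is a minimal sketch in modern Python; SOLO’s actual syntax and APIs aren’t public, so every name below is hypothetical:

```python
class LinkedObject:
    """Any piece of content (text, graphic, page) placed in a group.

    Sketch of SOLO-style implicit navigation: an object learns 'next' and
    'previous' from the order of the group it joins, with no explicit
    linking code written per object.
    """
    def __init__(self, name: str):
        self.name = name
        self.group: list["LinkedObject"] = []

    def join(self, group: list["LinkedObject"]) -> "LinkedObject":
        group.append(self)
        self.group = group
        return self

    @property
    def next(self) -> "LinkedObject | None":
        i = self.group.index(self)
        return self.group[i + 1] if i + 1 < len(self.group) else None

    @property
    def previous(self) -> "LinkedObject | None":
        i = self.group.index(self)
        return self.group[i - 1] if i > 0 else None

slideshow: list[LinkedObject] = []
cover = LinkedObject("cover.png").join(slideshow)
chart = LinkedObject("chart.png").join(slideshow)
print(cover.next.name)      # chart.png -- inherited from group order
print(chart.previous.name)  # cover.png
```

Ordering objects within a group is the only “code” the author writes; the navigation falls out of the grouping itself.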

Rae quickly applied for patents for website design software and transferred its technology IP to NetObjects. With seed money and the core team from Rae, NetObjects made a splashy entry into what later came to be known as Content Management Systems (CMS). Unfortunately, the rest was rough going for the fledgling company. Not long after IBM invested about $100M for 80% of NetObjects, the company went public on NASDAQ in 1999. Heavily dependent on IBM, NetObjects never made a profit; it was delisted from NASDAQ, and IBM sold it in 2001.

Outside Apple, SOLO traveled a meandering path into insignificance. Rae Technology turned into a venture capital firm and NetObjects eventually atrophied.

Flying through WWW

Only three years after the SOLO group left Apple for Rae, Ramanathan V. Guha, a researcher in Apple’s Advanced Technology Group, started work on the interactive display of structured, linkable data, from file system hierarchies to sitemaps on the emerging WWW. Guha had earlier worked on the CycL knowledge representation language and created a database schema mapping tool called Babelfish before moving to Apple in 1994 to work for Alan Kay.

His new work at Apple, Project X (later renamed HotSauce), was based on a 3D representation of data that a user could “fly through” and on the Meta-Content Format (MCF), a “language for representing a wide range of information about content” that defined relationships among individual pieces of data. At an Apple event at the time, I remember an evangelist telling me that HotSauce would do for datasets what HTML did for text on the web.

[Image: the HotSauce 3D fly-through interface]

Apple submitted MCF to the IETF as a standard for describing content, and HotSauce (with browser plugins for Mac OS and Windows) found some early adopters. However, shortly after Steve Jobs’ return in 1997, the project became a casualty of the grand housecleaning at Apple. Guha left Apple for Netscape, where he helped create an XML version of MCF, which later begot RDF (W3C’s Resource Description Framework) and the initial version of the RSS standard.
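
The core idea survives in RDF’s data model: content described as subject-predicate-object triples. Here is a toy illustration in plain Python (hypothetical data, and not actual MCF or RDF syntax):

```python
# Content metadata as (subject, predicate, object) triples --
# the data model that MCF's descendant, RDF, standardized.
triples = [
    ("hotsauce.html", "author", "R.V. Guha"),
    ("hotsauce.html", "partOf", "atg-projects"),
    ("atg-projects", "partOf", "apple.com"),
]

def related(subject: str, predicate: str) -> list[str]:
    """All objects linked to a subject by a given relationship."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Walk "partOf" links to find every ancestor of a page
node, ancestors = "hotsauce.html", []
while parents := related(node, "partOf"):
    node = parents[0]
    ancestors.append(node)
print(ancestors)  # ['atg-projects', 'apple.com']
```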

It’s the metadata, stupid!

Even in its most dysfunctional years in the mid-1990s, Apple had an abiding appreciation of the significance of metadata and the relationships among its constituent parts.

SOLO attempted to make sense of a user’s schedule by linking contacts and dates. HotSauce allowed users to navigate faceted metadata efficiently, and with some measure of fun, to find information without having to become a data architect. The Assistant in the “Knowledge Navigator” had enough contextual data about its master to interpret temporal, geo-spatial, personal and other bits of info, draw inferences, and automatically understand, recommend, guide, filter, alert, find or execute any number of actions.

There is an app for that

A decade later, Apple found itself in need of technology to counter Google’s dominance in search-driven ad revenue on its iOS platform. A frontal assault on Google Search would have been silly and suicidal, especially since Apple had no relevant scalable search technology. But there was an app for that. And it was called Siri.

[Image: the original Siri app]

Siri was a natural language abstraction layer, accessed through voice recognition technology from Nuance, that extracted information from four main service partners: OpenTable, Google Maps, MovieTickets and TaxiMagic. Siri launched on the iPhone first but was headed to BlackBerry and Android. Apple bought Siri on April 28, 2010, and the original app was discontinued on October 15, 2011. Now Siri is a deeply embedded part of iOS.

Of course, the Siri code and the team came to Apple from an entirely different trunk of the semantic forest, from SRI International’s DARPA-funded Artificial Intelligence Center projects: Personalized Assistant that Learns (PAL) and Cognitive Assistant that Learns and Organizes (CALO), with research also conducted at various universities.

What made Siri interesting to Apple wasn’t the speech recognition or the simple bypassing of browser-based search, but the semantic relationships in structured and linkable data accessed through natural language. It was SOLO redux at scale and HotSauce cloaked in speech. It wasn’t meant to compete with Google in search results but to provide something Google couldn’t: making contextual sense.

Unlike Google, Siri knows, for example, what “wife” or “son’s birthday” means and can thus provide not a long list of links to click through, but precise answers. Siri delivers on the wildest dreams of SOLO and HotSauce from an earlier generation. In two years, even while limited to just a few service partners, Siri has progressed further than the developers of SOLO or HotSauce could have imagined. It now speaks many of the world’s most prominent languages, with connections to local data providers around the globe.
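
Purely as illustration, here is a toy sketch of that kind of semantic resolution in Python (invented names and data; it says nothing about Siri’s actual implementation):

```python
from datetime import date

# Toy knowledge graph: contacts plus explicit relationships --
# the kind of structured, linkable data a semantic assistant resolves.
contacts = {
    "Anna": {"relationship": "wife", "birthday": date(1975, 6, 2)},
    "Leo":  {"relationship": "son",  "birthday": date(2004, 9, 14)},
}

def resolve(term: str) -> dict:
    """Turn a relational word ('wife', 'son') into a concrete record."""
    for name, info in contacts.items():
        if info["relationship"] == term:
            return {"name": name, **info}
    raise KeyError(f"no contact with relationship {term!r}")

# "When is my son's birthday?" -> a precise answer, not a list of links
son = resolve("son")
print(f"{son['name']}'s birthday is {son['birthday']:%B %d}.")
```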

Having had intimate conversations with Samuel L. Jackson and John Malkovich, Siri has become a TV star. Most iOS users already think Siri has a personality, if not an attitude altogether. Hard to say what will happen when she actually gets her “emotions coded.”

Apple’s hardware “dilemma”


It has become routine: rumors and speculation lead up to an Apple event, at which the company introduces products that “fail expectations” but go on to sell out and make huge profits. But why does this happen like clockwork, year after year?

To be sure, part of it comes from market speculators who make money from AAPL price swings, but the inability or unwillingness of analysts and pundits to understand how Apple works is the more likely reason.

The “failure” is declared by comparing Apple’s hardware specs against those of its competitors. Like “covering” a U.S. President from the confines of the White House press briefing room, hardware specs are the lazy person’s ideal tool: short, simple, often numerical, but ultimately not very illuminating.

One of the key ingredients of Apple’s spectacular success over the last decade has been the inability of its rivals to distinguish hardware from product. As the proverbial design adage goes, “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole.” Non-geeks, Apple’s primary audience, aren’t interested in what the hardware is, but how the product solves their specific problems. Hence they buy on demonstrable value, rather than on potential of hardware specs.

But Apple detractors ask: why should the most valuable technology company on the planet — with a large patent portfolio, unrivaled in-house industrial design capabilities and enormous influence over its supply chain and component pricing — fail to offer the best hardware specs in the industry for its premium products? There are a few basic reasons why Apple doesn’t believe it’s in a hardware race.

Hit or miss, no surprises here

Like Hollywood, Apple is perceived to run a hit-driven business. Perhaps the single most important question affecting AAPL’s P/E compression is whether Apple can continue to generate blockbuster products with regularity, especially in its most profitable iOS line.

This requires a sufficient degree of surprise of the “One more thing…” variety, and the secrecy that secures it. However, it’s a monumental task for Apple to coordinate the sourcing and assembly of countless parts for its iOS and Mac devices across three continents in total secrecy, so that it can spring products on its loyal users at specific annual intervals. No wonder Tim Cook recently said Apple is going to “double down on secrecy.”

While surprise has been essential to Apple’s success, total secrecy may no longer be attainable or even necessary.

From Retina displays to DRAM chips to CPUs, Apple’s principal component supplier for iOS devices is Samsung, accounting for about a quarter of the component cost of an iPhone. To add insult to injury, Samsung is also Apple’s biggest rival in consumer electronics, and one that takes particular delight in aping every aspect of Apple’s products, down to the icons. Furthermore, as various investigations and court cases reveal, many people in Apple’s vast supply chain are fond of divulging its upcoming product secrets in exchange for money from stock manipulators and rivals. One way or another, rumors, tips, “supply chain checks” and “sources in Asia” turn into a steady stream of “Confirmed!” headlines that precondition us to discount the significance of Apple’s offerings when they do in fact materialize.

Lately, it’s become very difficult for Apple to surprise us with breakthrough hardware. In fact, since the introduction of the original iPhone, Apple has given us precious few surprising breakthroughs in hardware. To an average user, the differences between consecutive iPhone versions, from the 3G to the 4S, are purely incremental improvements or aesthetic embellishments, not hardware breakthroughs. Sure, better cameras, higher-resolution screens, cases that feel richer to the touch, faster speeds…but no significant surprises or breakthroughs in hardware. In hardware terms, the iPad is indeed not much more than a large iPod touch. And yes, days before their likely introduction, we already know about and certainly expect Retina-like displays on upcoming Macs. They’ll surely be great, but hardly surprising hardware breakthroughs.

Lots of new technologies not yet in iOS products have already been deployed by an army of Apple rivals: larger phone screens, NFC, haptic displays, styluses, inductive charging, very high-resolution cameras and so on. So we can’t count their appearance in upcoming Apple products as surprising hardware breakthroughs either.

But doesn’t Apple have a ton of interesting patents yet to be deployed? Indeed, Apple has a huge spectrum of hardware patents, ranging from illuminated hardware cases to password recovery information stored inside a charging adapter to optical stylus with advanced haptics to Thunderbolt interface on iOS devices to coded/secure magnets to ionic wind generator cooling systems.

[Image: an Apple stylus patent illustration]

We could certainly see in an upcoming iOS device some unforeseen application of Liquidmetal, or a novel 3D camera setup, or flawless biometric security, or a one-week battery, or a silky-smooth digital pen with zero perceptible latency, or wireless power transmission, or bendable screens…We could, but we likely won’t any time soon.

Apple knows how to count SKUs

There are certain characteristics that put Apple in a different category from any other hardware manufacturer. Unlike others, Apple carries an extremely small number of products in each category, with minor and easily discernible differences. There’s really only one iPhone and one iPad. There are no pro, lite, region-specific or one-off versions. Samsung can introduce a phablet with a stylus; when it fails, it’s not a big deal, because Samsung has dozens of other models. Motorola can try a smartphone that forms the brains of a larger computer when docked into it; when it fails, it’s yet another Motorola model to be forgotten. Kyocera can try a dual-touchscreen phone; when it fails, Kyocera has scores of other models that will also fail.

When Apple introduces its annual phone, however, it’s a single product, with minor storage, radio and black/white SKU variations. These days, a new iPhone has to sell 100-200 million units within 12-18 months. There’s no room for the (often seemingly) frivolous experimentation so prevalent in the industry. No other single product sells in such large numbers.

Follow me

Apple is also unique in constantly moving millions of users into elevated patterns of computing behavior over time. Apple creates markets, others follow. As a market maker, if you will, Apple is in a unique position to create the rules and then educate its users in how to participate in the new paradigm. No other technology company has ever created so many “markets” and educated so many users. Historically, Apple has taught millions how to use GUI-based computers with a mouse. It transitioned personal storage from floppies to hard disks to optical disks to flash. It moved the notion of a cellphone from a device that makes phone calls to a diminutive personal computer with multiple sensors and multi-touch input for hundreds of millions of people. It got millions to pay for music online by the song. It taught people to buy billions of apps instead of relying solely on a web browser.

Of course, these new markets have been very good to Apple. But (as I explained four years ago in Why Apple doesn’t do “Concept Products”) with market making comes the responsibility of introducing new technologies with extreme deliberation and the willingness to educate tens of millions of users year after year. There’s no magic to introducing a new OS, for example, when your previous one is deployed by only 7% of your users. In market making, there are no shortcuts.

It may be a dilemma, but it’s not a weakness

Rumor-inflated expectations before Apple product introductions, followed by the inevitable “letdown,” have become a familiar leitmotif. Doubled-down secrecy or not, this is unlikely to change in the near future.

But what the pundits may be missing is that Apple is hardly unhappy about this state of affairs. When Steve Jobs said in 2010 that RIM “would have a hard time catching up to Apple because it has been forced to move beyond its area of strength and into unfamiliar territory of trying to become a software platform company,” it was clear that the smartphone value proposition had transitioned from hardware to platforms — a clear Apple core competency.

Apple has the best hardware-software-service integration in the industry, bar none. So the fact that the new device wars are now actually fought not on hardware specs but on vertical integration accords Apple a unique advantage. Hardware discipline coupled with a constrained SKU count gives Apple enormous economies of scale which, in turn, provide depth, reach, staying power, unparalleled gross margins, service excellence and, ultimately, customer loyalty. Counterintuitive as it may seem, rivals may find out that too much hardware “innovation” can actually kill a company. And that is a dilemma.

Some questions on Google Project Glass

I haven’t tried on Google Project Glass, which attempts to overlay virtual data contextually on objects seen through a custom-manufactured ocular device that resembles eyeglasses. Here’s Google co-founder Sergey Brin wearing one:

[Image: Sergey Brin wearing Project Glass]

Recently, Google let non-Googlers (photographers) try it out publicly for the first time, as it begins its seeding program to get the concept out there. Some think “Google Glasses are Preparing us for the Post-Phone World” and that may be. Others wrote about how Project Glass might be a bit jarring and too in-your-face after the release of a demo video from Google.

I like the general premise, but ceding so much intimate information, with such precise temporal and spatial context, to an information monopolist with a very questionable regard for (and record on) personal privacy would be disturbing. On a more practical level, I had a few questions:

Optics — If you look at the photo above, you’ll see that Project Glass doesn’t require glasses. It does use the extremely familiar eyeglasses form factor, but without the glasses. Leaving aside aesthetic considerations, from an industrial design point of view, that’s a bit problematic. The Vision Council of America estimates “approximately 75% of adults use some sort of vision correction. About 64% of them wear eyeglasses, and about 11% wear contact lenses, either exclusively, or with glasses.”

Project Glass is said to cost between $250 and $600. I don’t know if it would even be possible to bolt Project Glass’ essential apparatus onto a user’s own prescription eyewear. Or how much more money might be required to somehow refit Project Glass with one’s own prescription lenses. Or how lenses customized for myopia, hyperopia, presbyopia, astigmatism or combinations thereof could be used in connection with Project Glass. Or whether those who also wear prescription sunglasses would have to duplicate their Project Glass setup. It sounds like the combined cost of custom-made prescription lenses and Project Glass may be prohibitively expensive for all but the geeky avant-garde. If prescription lenses and Project Glass can’t affordably interoperate, does that mean Google’s writing off 50-75% of the population as potential users right off the bat?

Input — Given its diminutive size, voice prompts seem to be the only method of input into the system. What happens when a user grows dependent on Project Glass for wayfinding or other semi-critical tasks, but voice input is impractical in a quiet zone or a noisy one, or unavailable due to network inaccessibility? It would be as frustrating as an obvious affordance, like a door handle, not responding as expected, only more so.

Output — The device allows easy capture of still photographs as well as video in real time. Over many decades, people have come to expect the presence of a camera or, more often than not, the raising of it to eye level for photo or video capture to take place. Once the physical presence of the camera is apparent, the subjects being captured can either ignore it or demand that the recording be stopped, for example in some conservative cultures or in protected areas. By being utterly stealthy, Project Glass sidesteps these century-old conventions and allows secret visual recording. Google Chairman Eric Schmidt had previously indicated that “The Google policy on a lot of things is to get right up to the creepy line and not cross it.” Is Project Glass crossing that “creepy” line?

Data — Speaking of creepy, it wouldn’t be an exaggeration to say that the data collected by Google of a Project Glass user (who’s not otherwise under broad surveillance) would far exceed any other attempt by a commercial entity in history.

Two years ago Eric Schmidt foretold this clearly: “It’s a future where you don’t forget anything…In this new future you’re never lost…We will know your position down to the foot and down to the inch over time…you’re never lonely…”

What happens when the state subpoenas your omnipresent and omniscient “invisible friend” Google for that extraordinarily comprehensive collection of data on you, where “nothing is forgotten”? What happens when your spouse does the same in divorce proceedings? In a country like the United States, where citizens don’t even have a national identity card, what happens when you can’t ever be practically “alone”? Exaggeration? In just a few years, people who do not use a cellphone have come to belong to what Businessweek dubbed “America’s Most Exclusive Club.” Data collected by Project Glass would dwarf cellphone tracking. Do we really have a legal framework for such commercial surveillance?

At this stage of development (from what can publicly be observed), it’s hard to tell if Project Glass is meant to be a mainstream product to be released by the end of this year, a Hollywood summer blockbuster or another windfarm-type pie-in-the-sky Google project.