An interim solution for iOS ‘multitasking’

There are many counterintuitive ‘rules’ in product design; these two are among the most intractable:

• The more successful a product, the harder it is to upgrade.

• The more users say they want a product update, the more they complain when the change arrives.

It wouldn’t be unkind to ascribe both to the iOS platform: spectacularly successful, and at the crossroads of the mother of all upgrades for both hardware and software, now commandeered for the first time by someone not named Steve Jobs. The financial impact of these design decisions is easily the 64-billion-dollar question at Cupertino.

What has changed?

Having already sold over 120 million iPads in less than three years, Apple’s now making the sales pitch to hundreds of millions of potential post-PC consumers that iPads may be ‘OR’ devices, not just ‘AND’ adjuncts to their desktops and notebooks of yesteryear.

The iPhone in 2007 and the iPad in 2010 created their respective industry segments, then went on to dominate what was mostly virgin territory with a simple proposition: One Device > One Account > One App > One Window.

Several years after those introductions, and now facing many competitors, Apple is under pressure to examine every link in that chain of platform definition. And the one most contested is the last: One Window. While it’s true that iOS apps can contain two (and sometimes even more) ‘views’ in one screen, like the standard Master-Detail views, two different apps cannot share the same window. A blog writing app on an iPad can, for example, dedicate portions of its single window to video, map, search engine results or tweet displays, but not specifically to the Vimeo, Google Maps, Bing or Twitter apps. In the sandboxed territories of iOS, ‘One Device > One Account > One App > One Window’ is still the law of the land.

As iPads move into business, education, healthcare and other vertical markets, however, expectations of what iPads should do beyond audio, video, ebook and simple app consumption have gone up dramatically. After all, users don’t just inertly read in one app at a time but write, code, design, compose, calculate, paint, clip, tweet, and, in general, perform multiple operations in multiple apps to complete a single task in one app.

In iOS, this involves double-clicking the Home button, swiping in the tray to find the other app, waiting for it to (re)load fully, locating the app view necessary to copy, double-clicking the Home button, finding the previous app in the tray and waiting for it to (re)load fully to paste the previously copied material. That’s just one operation between two apps. Composing a patient review for a doctor or creating a presentation for a student can easily involve many such operations among multiple apps.

Indeed, compared to the other major mobile platforms like Android, Metro and BlackBerry, iOS is the slowest and most cumbersome at inter-app navigation and task completion. There have been a few mitigating advances: gestural swipes, faster processors and more memory certainly help, but the inter-app task-sharing problem is becoming increasingly acute. Unfortunately, solving iOS’s multitasking problem in general involves many other considerations, including the introduction of UX complexity and thus considerable user re-education, to say nothing of major architectural OS changes. It may thus take Apple longer than expected to find an optimal solution. What can Apple do in the interim?

Is ‘Multi’ the opposite of ‘One’?

Systems designers know all too well: when you just don’t have the time, money, staff or technology to solve a given problem, there are ways to cheat. Steve Jobs would be the first to tell you: that’s OK. A well-executed cheat can be indistinguishable from a fundamental architectural transition.

From a design perspective, the weakest link in the one-task-many-operations-in-different-apps problem is the iOS clipboard. The single-slot clipboard. The one that forces the user to shuffle laboriously among apps to collect all the disparate items one. at. a. time.

But with a multi-slot clipboard, if you were writing a report, for example, you could go to a web page and copy the URL, a paragraph, maybe a photo and a person’s email address in one trip. A single trip back to the initial app and you’d have four items ready to be pasted into the appropriate places, with no more inter-app shuffle necessary. Instant 4X productivity gain. Simply put, a four-slot clipboard could quadruple your productivity for this kind of task; a ten-slot clipboard, 10X!

Well, obviously, it’s not that easy. First of all, Apple doesn’t believe in multi-slot clipboards and doesn’t even ship one with Mac OS X. Also, you couldn’t really have an ‘infinite-slot’ clipboard, for iOS would run out of memory quickly. Finally, a multi-slot clipboard would require a visible UI for the user to select the right content, thereby introducing some cognitive complexity.

None of these objections seems insurmountable, though. iOS already has similarly useful ‘option selectors’, like the recent ‘share sheets’ from which a user can send stuff to Twitter, Facebook, email, etc. Limiting the clipboard to four slots would enable at least 250-pixel-square previews of each slot’s contents for easy identification. The clipboard could pop, move up, slide in from the right or perform some other clever animated appearance. Yes, there could be a cognitive penalty for having to think about system-memory management, but a bit of user training in the concept of ‘First In, First Out’ or a little alert indicating memory-intensive copying would go a long way.
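The ‘First In, First Out’ behavior described above is easy to pin down precisely. Here’s a minimal sketch (hypothetical, in Python purely for illustration; a real implementation would sit behind iOS’s actual pasteboard APIs, which this does not use):

```python
from collections import deque

class MultiSlotClipboard:
    """A fixed-capacity clipboard: when full, the oldest item is
    evicted first (First In, First Out)."""

    def __init__(self, slots=4):
        # deque with maxlen silently drops the oldest item on overflow
        self._items = deque(maxlen=slots)

    def copy(self, item):
        self._items.append(item)

    def slots(self):
        """Current contents, oldest first -- what a preview UI would show."""
        return list(self._items)

    def paste(self, index):
        """Return the item in the given slot (0 = oldest)."""
        return self._items[index]

# One 'trip' to a web page collects four items at once:
cb = MultiSlotClipboard(slots=4)
for item in ["https://example.com", "a paragraph", "photo.jpg", "jane@example.com"]:
    cb.copy(item)

cb.copy("a fifth item")  # clipboard is full: the URL (oldest) is evicted
print(cb.slots())        # ['a paragraph', 'photo.jpg', 'jane@example.com', 'a fifth item']
```

The FIFO eviction is exactly the user-facing rule a little training or an alert would have to convey: copy a fifth item and the oldest slot quietly makes room for it.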

It’s not my job to suggest to Jony Ive how this might be implemented in UI and UX. But until Apple has a more general solution to multitasking and inter-app navigation, a four-slot clipboard with a visible UI should be announced at WWDC. I believe it would buy Ive another year for a more comprehensive architectural solution, and he’ll likely need it.

Things Apple Has Not Yet Done

It’s hard to like Apple. To the dismay of conventional thinkers everywhere, the fruit company sambas to its own tune: makes the wrong products, at the wrong prices, for the wrong markets, at the wrong time. And, infuriatingly, wins.

Some of Apple’s ill-advised moves are well known. When other PC companies were shuttering their retail stores, Apple opened dozens in the most expensive locales. During the post-dotcom crash, instead of layoffs, Apple spent millions to hire and boost R&D. To the “Show us a $500 netbook, now!” amen corner Apple gave the un-netbook iPad, not at $999 but $499. The App Store and iTunes are still not open. Google hasn’t been given the keys to iOS devices yet…Clearly, this is a company that hasn’t learned the market-share-über-alles lesson from the Wintel era and is repeating the same mistakes, again. Like these:

• Media company — The slick design of Apple gadgets wouldn’t be nearly enough if it weren’t for the fact that Apple has quietly become the world’s biggest digital content purveyor. The availability of a vast library of media, coupled with the ease of purchase and the lock-in effect these purchases create, could easily tempt a lesser outfit to fashionably declare itself a “media company”. After all, Macromedia tried that with its AtomFilms purchase in 2000. Real and Yahoo dabbled in various forms of media creation, acquisition and distribution. Microsoft fancied itself a part-media company with investments in publishing (Slate) and cable (MSNBC, Comcast). Amazon has several imprints of its own. Netflix is now an episodic TV producer. Google is investing hundreds of millions in original material for YouTube. Apple, on the other hand, has always resisted creating and owning content, because…

• Indies — … Apple plays for the fat middle part of the bell curve. Once a bit player in computers and consumer electronics, Apple’s now a giant. Whether it’s music, TV shows, movies or ebooks, Apple targets the mainstream, and the mainstream demands the availability of mainstream content from top labels, studios and publishers. It’s very tempting to urge Apple to sign deals right and left with independent producers in entertainment and publishing, to bypass traditional gatekeepers and ‘disrupt’ their respective industries, on the cheap. Unfortunately, beyond modest promotional efforts with indies, it doesn’t look like Apple’s likely to upset the mainstream cart from which it makes so much money.


• Multitasking — “One device. One account. One app. One window. One task.” seems to be Apple’s current approach to Post-PC computing. If iPads are going to cannibalize PCs in the workplace or schools, iOS workflow patterns will have to evolve. Bringing multiple user accounts to the same device, showing two windows from two different apps in the same view with interaction between the two or letting all/most apps work in the background would necessitate quite a bit of user re-education in the iOS camp. It’s not clear for how long Apple can afford not to provide such functionalities.

• PDF replacement — Apple’s tumultuous love affair with PDF goes back nearly 25 years to Display PostScript during its NeXT prequel. PDF may now be “native” to Mac OS X and the closest format of exchange for visual fidelity, but it’s become slow, fat, cumbersome and not well integrated with HTML, the lingua franca of the web. While PDF is too entrenched for the print world, ePub 3.0 seems to be emerging as an alternative standard for interactive media publishing. Apple does support it, even with Apple-created extensions, but composing and publishing polished ePub material is still a maddeningly complex, hit-and-miss affair. iBooks Author is a great start, but its most promising output is iTunes-only. If Apple has big ideas in this space, it’s not obvious from its available tools or investments.

• HTML 5 tools — While iBooks Author makes composing app-like interactive content possible without having to use Xcode, Apple has no comparable semi-professional-caliber tool for creating web sites/apps for the browser. Apple has resisted offering anything like a HyperCard-level tool for HTML that sits between the immense but disjointed JavaScript/CSS open ecosystem and the powerful but hard-to-master Xcode. It has killed iWeb and still keeps iAd Producer mostly out of sight. Clearly, Apple doesn’t want more apps but more unique apps to showcase the App Store. HTML isn’t much of a differentiator there, and until the ROI of HTML 5 vs. native apps becomes clearer to Apple, such tools are unlikely to arrive anytime soon.


• Discovery tools — Yes, Apple has Genius, but that’s a black box. Genius is simple and operates silently in the background. It doesn’t have a visual interface like Spotify, Aweditorium, Music Hunter, Pocket Hipster, Groovebug or Discovr Music, which let users actively move around a musical topology, aided by various social-network inputs. With its Ping attempt and its Twitter and Facebook tie-ups, Apple has shown it’s at least interested in the social angle, but a more dedicated, visual and fun discovery tool is still absent, not just for music but also for TV, movies, books and apps.


• Map layers — Over the last few years Apple has acquired several map-related companies, one of which, PlaceBase, was known for creating “layers” of data-driven visualizations over maps. Even before its messy divorce from Google, Apple had chosen not to offer any such map enhancements. When properly designed, maps are great base-level tools over which many different kinds of information can be interactively delivered, especially on touch-driven mobile devices where Siri also resides.

• iOS device attachments — One of the factors that made iPods and iPhones so popular has been the multi-billion dollar ecosystem of peripherals that wrap around or plug into them. However, besides batteries and audio equipment, there’s been a decided dearth of peripherals that connect to the 30-pin port to do useful things in medicine, education, automation, etc. Apple’s attention and investment in this area have been lackluster. Perhaps the new iPad mini coupled with the tiny Lightning Connector will rekindle interest by Apple and third parties in various domains.


• Wearables — Google Glass is slated for production in a year or so, while Apple’s known assets in wearable computing amount to a few patents. There’s much debate as to how this field will shape up. Apple may choose to augment iPhones with simpler and cheaper devices, like smart watches that work in tandem with the phone, instead of stand-alone but expensive devices like Google Glass. So far, ‘wearables’ doesn’t even register as a hobby in Apple’s interestgram.


• Stylus — Apple has successfully educated half a billion users in the art of multitouch navigation and general use of mobile devices. That war, waged against one-year-old babies and 90-year-old grandmas alike, has been decisively won. However, until Apple invents a more precise method, taking impromptu notes, sketching diagrams and annotating documents with a (pressure-sensitive) stylus remains a superior alternative to the finger. Some may consider the notion of a stylus (even one dedicated only to the specialized tasks cited above) a niche unworthy of Apple’s interest. And yet not too long ago, 5-7 inch mobile devices were also considered niches.

• Games — Apple’s on course to become the biggest gaming platform. This without any dedicated game controllers or motion-sensing input devices like the Xbox 360 Kinect, and despite half-hearted attempts like Game Center. Apple has been making steady progress on the CPU/GPU front in iOS devices, and now the new Apple TV is also getting an A5X-class chip, capable of handling many console-level games. It remains unclear, however, whether Apple has the desire or the dedicated resources to leapfrog not just Sony and Nintendo but also Microsoft in the games arena, with a strategy other than the steady, slow erosion of the incumbents’ base.

• iOS Pro devices — Apple has so far seen no reason to bifurcate its iOS product line into entry/pro models, like MacBooks/MacBook Pros. iOS devices sell in the tens of millions every quarter into many complex markets in over 100 countries; further complicating the SKU portfolio with more models is not the Apple way. So, even more than a “Pro” iPhone, an iPad with a “Pro” designation and specs to match has so far not been forthcoming. And yet several hundred million of these devices are now sold into business and education, where better security, provisioning, app distribution, mail handling, multitasking, hardware robustness, cloud connectivity, etc., will continue to be requested as check-mark items.

• Money — Apple hasn’t done much with money, other than accumulate about $140 billion in cash and marketable securities on its current balance sheet. It hasn’t yet made a device with NFC, operated as a bank, issued AppleMoney like Amazon Coins or Facebook Credits, offered a branded credit card or established a transactional platform (the ineptly introduced Passbook app aside). It has a tantalizing patent application for a virtual money-transfer service (like electronic hawala) whereby iOS users could securely send and receive cash anywhere, even from strangers. With close to half a billion credit card accounts, the largest such trove in the world, Apple has the best captive demographics for some sort of transactional sub-universe, but it’s anybody’s guess what it may actually do with it, or when.

Half empty or more to fill?

It would be easy and fun to spend another hour to triple this list of Things-Apple-Has-Not-Yet-Done. While not all of these would be easy to implement, none of them would be beyond Apple’s ability to execute. Most card-carrying AAPL doomsayers, however, would look at such a list and conclude: See, Apple’s fallen behind, Apple’s doomed!

There’s, of course, another way of interpreting the same list. Apple could spend a good part of the next decade bundling a handful of these Yet-To-Be-Done items annually into an exciting new iOS device/service to sell into its nearly half billion user base and beyond. Apple suffers from no saturation of market opportunities.

Apple will inevitably tackle most of these, but only in its own time and not when it’s yelled at. It’ll likely introduce products and services not on this or any other list that will end up rejiggering an industry or two. Apple will do so because it knows it won’t win by conventional means or obvious schedules…which makes it hard — for those who are easily distracted — to like Apple.

How many iDownloaders does it take to screw an App Store?

Search results for “iDownloader” at Apple App Store:


Yes, it’s a big operation. Yes, there’ll be a million apps soon. Yes, many apps will inevitably be similar. Yes, shady developers steal other developers’ IP. Yes, all sorts of people try to game it. Yes, the power law may apply to revenues. Yes, you can’t please everyone all the time. Yes, other app stores may be worse. Yes, the App Store was once the crown jewel of Apple’s mobile empire.

Yes, there are many ways to spin this… but none can mask how glaring Apple’s inability or unwillingness to fundamentally improve categorization, discovery, navigation, display, promotion, fraud prevention, pricing and reviews at the App Store has been.

Chomp change, indeed.

Can Siri go deaf, mute and blind?

Earlier in “Is Siri really Apple’s future?” I outlined Siri’s strategic promise as a transition from procedural search to task completion and transactions. This time, I’ll explore that future in the context of two emerging trends:

  • Internet of Things is about objects as simple as RFID chips slapped on shipping containers and as vital as artificial organs sending and receiving signals to operate properly inside our bodies. It’s about the connectivity of computing objects without direct human intervention.
  • The best interface is no interface is about objects and tools that we interact with that no longer require elaborate or even minimal user interfaces to get things done. Like self-opening doors, it’s about giving form to objects so that their user interface is hidden in their user experience.

Apple’s strength has always been the hardware and software it creates that we love to carry, touch, interact with and talk about lovingly — above their mere utility — like jewelry, as Jony Ive calls it. So, at first, these two trends — objects talking to each other and objects without discernible UIs — seem to constitute a potential danger for Apple, which thrives on the design of human touch and attention. What happens to Apple’s design advantage in an age of objects performing simple discrete tasks or “intuiting” and brokering our next command among themselves, without the need for our touch or gaze? Indeed, what happens to UI design in general in an ocean of “interface-less” objects inter-networked ubiquitously?

Looks good, sounds better

Fortunately, though a star in her own right, Siri isn’t wedded to the screen. Even though she speaks in many tongues, Siri doesn’t need to speak (or listen, for that matter) to go about her business, either. Yes, Siri uses interface props like fancy cards, torn printouts, maps and a personable voice, but what makes Siri different is neither visuals nor voice.

Despite the knee-jerk reaction to Siri as “voice recognition for search,” Siri isn’t really about voice. In fact, I’d venture to guess Siri initially didn’t even have a voice. Siri’s more significant promise is about correlation, decisioning, task completion and transaction. The fact that Siri has a sassy “voice” (unlike her competitors) is just endearing “attitude”.


Those who are enthusiastic about Siri see her eventually infiltrating many gadgets around us. Often seen liaising with celebrities on TV, Siri is thought to be a shoo-in for the Apple TV interface Oscars, maybe even licensed to other TV manufacturers, for example. And yet the question remains, is Siri too high maintenance? When the most expensive BOM item in an iPhone 5 is the touchscreen at $44, nearly 1/4 costlier than the next item, can Siri afford to live outside of an iPhone without her audio-visual appeal?

Well, she already has. Siri Eyes Free integration is coming to nine automakers early this year, allowing drivers to interact with Siri without having to use the connected iPhone screen.


Given Siri Eyes Free, it’s not that difficult to imagine Siri Touch Free (see and talk but not touch), Siri Talk Free (see and touch but not talk) and so on. People who are impatient with Apple’s often lethargic roll out plans have already imagined Siri in all sorts of places, from aircraft cockpits to smart wristwatches to its rightful place next to an Apple TV.

Over the last decade, enterprise has spent billions to get their “business intelligence” infrastructure to answer analysts’ questions against massive databases from months to weeks to days to hours and even minutes. Now imagine an analyst querying that data by having a “natural” conversation with Siri, orchestrating some future Hadoop setup, continuously relaying nested, iterative questions funneled towards an answer, in real time. Imagine a doctor or a lawyer querying case histories by “conversing” with Siri. Forget voice, imagine Siri’s semantic layer responding to 3D gestures or touches on glass or any sensitized surface. Set aside active participation of a “user” and imagine a monitor with Siri reading microexpressions of a sleeping or crying baby and automatically vocalizing appropriate responses or simply rocking the cradle faster.

Scenarios abound, but can Siri really afford to go fully “embedded”?

There is some precedent. Apple has already created relatively successful devices by eliminating major UI affordances, perhaps best exemplified by the iPod nano ($149) that can become an iPod shuffle ($49) by losing its multitouch screen, made possible by the software magic of Genius, multi-lingual VoiceOver, shuffle, etc. In fact, the iPod shuffle wouldn’t need any buttons whatsoever, save for on/off, if Siri were embedded in it. Any audio functionality it currently has, and much more, could be controlled bi-directionally with ease, wherever Siri is functional and socially acceptable. A 3G radio plus embedded Siri could also turn that tiny gadget into so many people’s dream of a sub-$100 iPhone.


Grounding Siri

Unfortunately, embedding Siri in devices that look like they may be great targets for Siri functionality isn’t without issues:

  • Offline — Although Siri requires a certain minimum horsepower to do its magic, much of that is spent ingesting and prepping audio to be transmitted to Apple’s servers, which do the heavy lifting. Bringing that processing down to an embedded device that doesn’t require a constant connection to Apple may be computationally feasible. However, Apple’s ability to advance Siri’s voice-decoding accuracy and pattern recognition depends on constantly sampling and adjusting input from tens of millions of Siri users. That would rule out Siri embedded in offline devices and create significant storage and syncing problems with seldom-connected devices.
  • Sensors — One of the key reasons Siri is such a good fit for smartphones is the number of on-device sensors and the virtually unlimited range of apps it’s surrounded with. Siri is capable of “knowing” not only that you’re walking, but that you’ve been walking wobbly, for 35 minutes, late at night, in a dark alley, in a dangerous part of a city, alone… and of silently sending a pre-designated alert on your behalf. While we haven’t yet seen examples of such deep integration from Apple, embedding Siri in devices that lack multiple sensors and apps would severely limit its potential utility.
  • Data — Siri’s utility is directly indexed to her access to data sources and, at this stage, to third parties’ search (Yelp), computation (WolframAlpha) and transaction (OpenTable) facilities. Apple adds such partners in different domains on a regular basis and is expected to continue. Siri embedded in devices lacking the radios to reach such data and processing may therefore be too crippled to be of interest.
  • Fragmentation — People expect to see Siri pop up in all sorts of places and Apple has taken the first step with Siri Eyes Free where Siri gives up her screen to capture the automotive industry. If Siri can drive in a car, does that also mean she can fly on an airplane, sail on a boat or ride on a train? Can she control a TV? Fit inside a wristwatch? Or a refrigerator? While Siri — being software — can technically inhabit anything with a CPU in it, the radio in a device is far more important to Siri than its CPU, for without connecting to Apple (and third party) servers, her utility is severely diminished.
  • Branding — Siri Eyes Free won’t light up the iPhone screen or respond to commands that would require displaying a webpage as an answer. What look like reasonable restrictions on Siri’s capabilities in this context shouldn’t, however, necessarily signal that Apple would create “subsets” of Siri for different domains. More people will use and become accustomed to Siri’s capabilities in iPhones than any other context. Degrading that familiarity significantly just to capture smaller markets wouldn’t be in Apple’s playbook. Instead of trying to embed Siri in everything in sight and thus diluting its brand equity, Apple would likely pair Siri with potential NFC or Bluetooth interfaces to devices in proximity.

What’s Act II for Siri?

In Siri’s debut, Apple harvested the lowest-hanging fruit, teaming up with just a handful of already available data services like Yelp and WolframAlpha, but it has not really taken full advantage of on-device data, sensor input or other novel information.

As seen from outside, Siri’s progress at Apple has been slow, especially compared to Google, which has had to play catch-up. But Google must recognize a strategically indispensable weapon in Google Now (a Siri-for-Android, for all practical purposes) as a hook into those Android device manufacturers that would prefer to bypass Google’s ecosystem. None of them can build anything like it for some time to come, Samsung’s subpar attempts aside.

If you thought Maps was hard, injecting relationship metadata into Siri — fact by fact, domain by domain — is likely an order of magnitude more laborious, so Apple has its work cut out for Siri. It’d be prudent not to expect Apple to rush to embed Siri in its non-signature devices just yet.

“The Creepy Line”

When asked in 2010 about the possibility of a Google “implant,” Google’s then-CEO Eric Schmidt famously said:

“Google policy is to get right up to the creepy line and not cross it.

With your permission you give us more information about you, about your friends, and we can improve the quality of our searches. We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”

Since that reassuring depiction of what awaits us, Google has danced energetically around “the creepy line” many times, from subverting users’ privacy preferences in Safari and paying the largest FTC fine in history to introducing the omniscient Google Glass, which gets as close to human tracking as possible without drilling into the brain.

When the internet behemoth raises the bar, others rush to conquer it, and some manage to surpass it. Buried in the minutiae of CES 2013, in a booth not much smaller than a 110-inch Samsung UHD TV, was Affectiva, showcasing its primary product, Affdex:

“Affdex tracks facial and head gestures in real-time using key points on viewers’ face to recognize a rich array of emotional and cognitive states, such as enjoyment, attention and confusion.”



Deciphering concealed emotions by “reading” facial microexpressions, popularized by Paul Ekman and the hit TV series Lie To Me, is nothing new, of course. What’s lifting us over the creepy line is the imminent ubiquity of this technology, all packaged into a web browser and a notebook with a webcam, no installation required.


Eyes Wide Shut

Today, Affectiva asks viewers’ permission to record them as they watch TV commercials. What happens tomorrow? After all, DNA evidence in courts was first used in the late 1980s and has been controversial ever since. It has been used to exonerate incarcerated people, as well as abused and misused to convict innocent ones. Like DNA analysis, facial-expression-reading technology will advance and may attain similar stature in law and in other fields… some day.

Currently, however, along with its twin brother, face recognition technology, microexpression reading isn’t yet firmly grounded in law. This uncertainty gives it the necessary space to evolve technologically, but also opens the door to significant privacy and security abuse.


The technology, when packaged into a smartphone, for example, can be used to help some of those with Asperger’s syndrome to read facial expressions. But it can also be used in a videotelephony app as a surreptitious “lie detector.” It could be a great tool during remote diagnosis and counseling in the hands of trained professionals. But it could also be used to record, analyze and track people’s emotional state in public venues: in front of advertising panels, as well as courtrooms or even job interviews. It can help overloaded elementary school teachers better decipher the emotional state of at-risk children. But it can also lead focus-group obsessed movie studios to further mechanize character and plot development.

The GPU in our computers is the ideal matrix-vector processing tool to decode facial expressions in real time in the very near future. It is quite conceivable, for instance, that a presidential candidate could peer into his teleprompter at a rolling score of a million viewers’ reactions, passively recorded and decoded in real time, allowing him to modulate his speech in sync with that feedback. Would that be truly “representative” democracy, or an abdication of leadership?
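At its core, the decoding step is the kind of operation GPUs parallelize massively: project a frame’s facial-feature measurements onto per-expression weight vectors in a single matrix-vector multiply. A toy illustration (all features, weights and labels here are hypothetical, in Python/NumPy purely to show the shape of the computation; real systems like Affdex use far richer models):

```python
import numpy as np

# One frame's facial-landmark features, e.g. brow raise, lip-corner pull,
# lip press -- values are made up for illustration.
features = np.array([0.8, 0.1, 0.4])

# One learned weight vector per expression (rows); numbers are hypothetical.
weights = np.array([
    [0.9, 0.0, 0.2],   # 'attention'
    [0.1, 0.8, 0.1],   # 'enjoyment'
    [0.2, 0.1, 0.9],   # 'confusion'
])

# One matrix-vector product per frame scores every expression at once;
# a GPU does this for thousands of faces per frame in parallel.
scores = weights @ features
labels = ["attention", "enjoyment", "confusion"]
print(labels[scores.argmax()])  # prints "attention"
```

The rolling audience score imagined above would just aggregate these per-viewer scores over time, which is again plain (and highly parallel) arithmetic.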

And if these are possible or even likely scenarios, why wouldn’t we have the technology embedded in a Google Glass-like device or an iPhone 7, available all the time and everywhere? If we can use these gadgets to decode other people’s emotional states, why couldn’t the gadgets decode our own and display them back to us? What happens when, for the first time in Homo sapiens’ history, we have constant (presumably unbiased) feedback on our own emotions? The distance from machines detecting emotional states to suggesting (and even administering) emotion-altering medicine can’t be that far, can it? How do we learn to live with that?

The technology is out there. With Apple’s Siri as a blueprint, Google already knows how to advance Google Now from search to transactions. One would think the recent hiring of Singularity promoter Ray Kurzweil as director of engineering points to much higher ambitions. Ambitions we’re not remotely prepared to parse yet. Much closer to that creepy line.