Can robots write sports previews?

Considered a creative skill, writing has long been seen as mostly immune to automation and commoditization — the seemingly inevitable end-state of anything touched by the Internet. Perhaps no longer.

What’s the score?

One of the more ubiquitous writing genres is sports reporting. Countless publications, portals, aggregators and distributors in print, radio, TV and Internet cover team rosters, game previews, schedules, results and all manner of short notices from Little League to college games to professional sports. An army of writers is routinely tasked with generating the base content for this wide spectrum of sports coverage.

Here’s a recent example. The Brooklyn Nets and the New York Knicks, promoted as championship contenders this year yet currently sitting at the very bottom of the NBA standings, recently met. The day before the game, as is customary, a “preview” of the upcoming game had to be written for general syndication. Something with a lede like this:

Lede1

Now remember, there are games in all sports. At all levels. Across the entire world. Every single day. There are also daily and hourly developments to be covered in finance, weather, healthcare, marketing, real estate, politics, entertainment, transportation, technology and myriad other fields. There’s always been an insatiable demand for expository writing across the board. While the domains are very different, to an analytical eye all such data-driven writing shares two important traits: it’s highly structured and highly automatable. Everything in the game preview above is simple prose, wrapped around stored data, shown in blue here:

Lede2

It turns out one NBA game preview is pretty much the same as any other. We could structurally separate the parts that can be swapped out for data about the other 28 teams while keeping roughly the same compositional logic:

Lede3

If we can now plug in team-specific names, places and data wherever there’s one of those blue-bracketed placeholders above, we could customize a game preview so specifically to a given event that I’m confident 95% of the reading public couldn’t tell whether those sentences were composed by a human writer or by an algorithm, like the one I pseudo-reverse-engineered and highly simplified below:

Lede4
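To make the compositional logic concrete, here’s a minimal sketch in Python of how such a template-filling algorithm might work. It’s a toy reconstruction, not Automated Insights’ actual pipeline; the team names, records and prose below are hypothetical stand-ins for a real stats feed:

```python
# A toy game-preview generator: fixed prose wrapped around stored data.
# All values below are hypothetical stand-ins for a real stats database.
game = {
    "away_team": "New York Knicks",
    "away_record": "4-19",
    "home_team": "Brooklyn Nets",
    "home_record": "8-15",
    "venue": "Barclays Center",
    "day": "Thursday",
}

# The prose skeleton; the {bracketed} placeholders correspond to the
# blue-bracketed slots in the preview above.
TEMPLATE = (
    "The {away_team} ({away_record}) visit the {home_team} "
    "({home_record}) at {venue} on {day}, with both teams trying "
    "to climb out of the bottom of the standings."
)

def render_preview(data: dict) -> str:
    """Substitute the stored data into the fixed prose template."""
    return TEMPLATE.format(**data)

print(render_preview(game))
```

Swap in another team’s names, record and venue, and the same skeleton produces a passable preview for any of the league’s other matchups.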

Fortunately, or unfortunately depending on your perspective and profession, such algorithmic writing is not some hovering, hyperlooping fantasy. Here’s the actual preview that ran across many sites on the Internet and elsewhere before the game:

Full Preview

And syndicated in one of the biggest such venues, Yahoo Sports:

Human Version

Who’s your daddy?

See the non-human byline below the headline, Automated Insights? That’s one of the new generation of companies involved in algorithmic writing. There are, and will be, others. For the initiated, the technology is quite straightforward. Often structured data is the gating factor, not compositional technology. Parsing and conditional templating technology is well understood by now. It’s tedious, but low-scale pieces could be done with procedural programming, larger ones with rules engines, and truly scalable and flexible ones with semantic coupling of the domain-specific data.

In fact, many aspects of the writing itself are amenable to conditional embellishment of the parts of speech. For example, in the piece above, we could have pre-programmed a list of synonyms for “struggled” and picked a substitute randomly, or one specific to geography, audience or sport, as in the sketch below. Lexical stylization can indeed get very sophisticated through contextual or randomized algorithms. Management of such conditional logic and metadata at scale has been possible for a couple of decades. When composing a personalized investment report or answering a question on your iPhone, your broker and Siri (though using different technologies underneath) already do something similar.
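Here’s what that kind of lexical stylization might look like in code. This is a minimal sketch: the synonym pool is invented for illustration, and a production system would key the choice to context rather than pure chance:

```python
import random

# Hypothetical synonym pool; a real system would also carry metadata
# keying each choice to geography, audience or sport.
SYNONYMS = {
    "struggled": ["struggled", "floundered", "sputtered", "labored"],
}

def stylize(sentence: str, rng: random.Random) -> str:
    """Swap known words for a randomly chosen synonym."""
    out = []
    for word in sentence.split():
        bare = word.strip(".,")  # ignore trailing punctuation when matching
        if bare in SYNONYMS:
            word = word.replace(bare, rng.choice(SYNONYMS[bare]))
        out.append(word)
    return " ".join(out)

rng = random.Random(7)  # seeded so the output is reproducible
print(stylize("The Nets have struggled on the road.", rng))
```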

The advantage

In our example, the day before the game there was another “Knicks-Nets Preview” written by a human, Associated Press basketball writer Brian Mahoney, also syndicated in Yahoo Sports. The two pieces clearly serve different purposes. Mahoney’s article is much longer and significantly more detailed, colorful and analytical. Automated Insights’ preview is all about brevity, information, timeliness and, ultimately, volume, coverage and cost-effectiveness. In one millionth of the time it takes Mahoney to write one of his NBA previews, Automated Insights can generate previews for all the games not just in the NBA but in all sports, anywhere on the planet, as long as there’s underlying data. And in a domain like sports, there’s plenty of data.

The differentiating cost of algorithmic writing is nearly all front-loaded into template and conditional-logic programming. When done properly, this can obviate post-production fact-checking and proofreading. Once set up, these pieces can be auto-produced whenever the underlying data changes or a schedule is triggered. Thus the marginal cost of each additional article approaches zero.
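A rough sketch of that production loop, with the data feed and the syndication endpoint as hypothetical stubs; once the template exists, each regeneration is just a comparison and a string fill:

```python
_last_seen = None  # the data snapshot behind the last published piece

def fetch_latest_stats() -> dict:
    """Hypothetical stub standing in for a live stats feed."""
    return {"away_team": "New York Knicks", "home_team": "Brooklyn Nets"}

def publish(article: str) -> None:
    """Hypothetical stub standing in for a syndication endpoint."""
    print(article)

def regenerate_if_changed() -> None:
    """Rebuild and republish the piece only when the data moves."""
    global _last_seen
    stats = fetch_latest_stats()
    if stats != _last_seen:
        publish("The {away_team} visit the {home_team} tonight.".format(**stats))
        _last_seen = stats

# Run on a timer or a data-change trigger; unchanged data costs nothing.
regenerate_if_changed()
```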

The day has arrived

Clearly, programmed robots can in fact write sports previews. And many other types of writing suitable for algorithmic automation. As is the case with the Internet, this will displace a lot of writers and also create concomitant technology jobs elsewhere.

It would be easy to dismiss this as procedural, utilitarian writing that doesn’t share much with literary prose. Granted. But such competition is not the focus of algorithmic writing. Not yet, anyway. Given enough nouns, verbs and associations in a specific knowledge domain, you’d be surprised how close you can come in compositional “believability” even today. Tomorrow, don’t be surprised if your next textbook or travel guide or cookbook is written mostly by domain-specific algorithms. And welcome to the [“brave” | “splendid” | “efficient” | “fearful” | “faceless” | “decimating”] new world of algorithms…eating yet another profession!


Things Apple Has Not Yet Done

It’s hard to like Apple. To the dismay of conventional thinkers everywhere, the fruit company sambas to its own tune: makes the wrong products, at the wrong prices, for the wrong markets, at the wrong time. And, infuriatingly, wins.

Some of Apple’s ill-advised moves are well known. When other PC companies were shuttering their retail stores, Apple opened dozens in the most expensive locales. During the post-dotcom crash, instead of layoffs, Apple spent millions to hire and boost R&D. To the “Show us a $500 netbook, now!” amen corner Apple gave the un-netbook iPad, not at $999 but $499. The App Store and iTunes are still not open. Google hasn’t been given the keys to iOS devices yet…Clearly, this is a company that hasn’t learned the market-share-über-alles lesson from the Wintel era and is repeating the same mistakes, again. Like these:

• Media company — The slick design of Apple gadgets wouldn’t be nearly enough if it weren’t for the fact that Apple has quietly become the world’s biggest digital content purveyor. The availability of a vast library of media, coupled with the ease of purchase and the lock-in effect these purchases create, could easily tempt a lesser outfit to fashionably declare itself a “media company”. After all, Macromedia tried that with its AtomFilms purchase in 2000. Real and Yahoo dabbled in various forms of media creation, acquisition and distribution. Microsoft fancied itself a part-media company with investments in publishing (Slate) and cable (MSNBC, Comcast). Amazon has several imprints of its own. Netflix is now an episodic TV producer. Google is investing hundreds of millions in original material for YouTube. Apple, on the other hand, has always resisted creating and owning content, because…

• Indies — … Apple plays for the fat middle part of the bell curve. Once a bit player in computers and consumer electronics, Apple’s now a giant. Whether it’s music, TV shows, movies or ebooks, Apple targets the mainstream, and the mainstream demands the availability of mainstream content from top labels, studios and publishers. It’s very tempting to urge Apple to sign deals right and left with independent producers in entertainment and publishing, to bypass traditional gatekeepers and ‘disrupt’ their respective industries, on the cheap. Unfortunately, beyond modest promotional efforts with indies, it doesn’t look like Apple’s likely to upset the mainstream cart from which it makes so much money.

Tapose

• Multitasking — “One device. One account. One app. One window. One task.” seems to be Apple’s current approach to Post-PC computing. If iPads are going to cannibalize PCs in the workplace or schools, iOS workflow patterns will have to evolve. Bringing multiple user accounts to the same device, showing two windows from two different apps in the same view with interaction between the two or letting all/most apps work in the background would necessitate quite a bit of user re-education in the iOS camp. It’s not clear for how long Apple can afford not to provide such functionalities.

• PDF replacement — Apple’s tumultuous love affair with PDF goes back nearly 25 years to Display PostScript during its NeXT prequel. PDF may now be “native” to Mac OS X and the closest format of exchange for visual fidelity, but it’s become slow, fat, cumbersome and not well integrated with HTML, the lingua franca of the web. While PDF is too entrenched for the print world, ePub 3.0 seems to be emerging as an alternative standard for interactive media publishing. Apple does support it, even with Apple-created extensions, but composing and publishing polished ePub material is still a maddeningly complex, hit-and-miss affair. iBooks Author is a great start, but its most promising output is iTunes-only. If Apple has big ideas in this space, it’s not obvious from its available tools or investments.

• HTML 5 tools — While iBooks Author makes composing app-like interactive content possible without having to use Xcode, Apple has no comparable semi-professional-caliber tool for creating web sites/apps for the browser. Apple has resisted offering anything like a HyperCard-level tool for HTML that sits in between the immense but disjointed JavaScript/CSS open ecosystem and the powerful but hard-to-master Xcode. It has killed iWeb and still keeps iAd Producer mostly out of sight. Clearly, Apple doesn’t want more apps but more unique apps to showcase the App Store. HTML isn’t much of a differentiator there, and until the ROI of HTML 5 vs. native apps becomes clearer to Apple, such tools are unlikely to arrive anytime soon.

Discovr

• Discovery tools — Yes, Apple has Genius, but that’s a black box. Genius is simple and operates silently in the background. It doesn’t have a visual interface like Spotify, Aweditorium, Music Hunter, Pocket Hipster, Groovebug or Discovr Music, allowing users to actively move around a musical topology visually, aided by various social network inputs. With its Ping attempt and its Twitter and Facebook tie-ups, Apple has shown it’s at least interested in the social angle, but a more dedicated, visual and fun discovery tool is still absent, not just for music but also for TV, movies, books and apps.

Pushpin

• Map layers — Over the last few years Apple has acquired several map-related companies, one of which, PlaceBase, was known for creating “layers” of data-driven visualizations over maps. Even before its messy divorce from Google, Apple chose not to offer any such map enhancements. When properly designed, maps are great base-level tools over which many different kinds of information can be interactively delivered, especially on touch-driven mobile devices where Siri also resides.

• iOS device attachments — One of the factors that made iPods and iPhones so popular has been the multi-billion dollar ecosystem of peripherals that wrap around or plug into them. However, besides batteries and audio equipment, there’s been a decided dearth of peripherals that connect to the 30-pin port to do useful things in medicine, education, automation, etc. Apple’s attention and investment in this area have been lackluster. Perhaps the new iPad mini coupled with the tiny Lightning Connector will rekindle interest by Apple and third parties in various domains.

Apple glasses

• Wearables — Google Glass is slated for production in a year or so, while Apple’s known assets in wearable computing devices amount to a few patents. There’s much debate as to how this field will shape up. Apple may choose to augment iPhones with simpler and cheaper devices like smart watches that work in tandem with the phone, instead of stand-alone but expensive devices like Google Glass. So far ‘wearables’ doesn’t even register as a hobby in Apple’s interestgram.

Stylus

• Stylus — Apple has successfully educated half a billion users in the art of multitouch navigation and general use of mobile devices. That war, waged against one-year-old babies and 90-year-old grandmas, has been decisively won. However, until Apple invents a more precise method, taking impromptu notes, sketching diagrams and annotating documents with a (pressure-sensitive) stylus remains a superior alternative to the finger. Some may consider the notion of a stylus (even one dedicated only to the specialized tasks cited above) a niche not worthy of Apple’s interest. And yet not too long ago 5-7 inch mobile devices were also considered niches.

• Games — Apple’s on course to become the biggest gaming platform. This without any dedicated game controllers or motion-sensing input devices like the Xbox 360 Kinect, and despite half-hearted attempts like Game Center. Apple has been making steady progress on the CPU/GPU front in iOS devices, and now the new Apple TV is also getting an A5X-class chip, capable of handling many console-level games. It remains unclear, however, whether Apple has the desire or the dedicated resources to leapfrog not just Sony and Nintendo but also Microsoft in the games arena, with a strategy other than the steady, slow erosion of the incumbents’ base.

• iOS Pro devices — Apple has so far seen no reason to bifurcate its iOS product line into entry/pro models, like MacBooks/MacBook Pros. iOS devices sell in the tens of millions every quarter into many complex markets in over 100 countries, and further complicating its SKU portfolio with more models is not the Apple way. Even more so than with iPhones, an iPad with a “Pro” designation and specs to match has so far not been forthcoming. And yet hundreds of millions of these devices are now sold into business and education, where better security, provisioning, app distribution, mail handling, multitasking, hardware robustness, cloud connectivity, etc., will continue to be requested as check-mark items.

• Money — Apple hasn’t done much with money, other than accumulating about $140 billion in cash and marketable securities on its current balance sheet. It hasn’t yet made any device with NFC, operated as a bank, issued AppleMoney like Amazon Coins or Facebook Credits, offered a branded credit card or established a transactional platform (ignoring the ineptly introduced Passbook app). It has a tantalizing patent application for a virtual money transfer service (like electronic hawala) whereby iOS users can securely send and receive cash anywhere, even from strangers. With close to half a billion credit card accounts, the largest such trove in the world, Apple has the best captive demographics for some sort of transactional sub-universe, but it’s anybody’s guess what it may actually end up doing with it, or when.

Half empty or more to fill?

It would be easy and fun to spend another hour to triple this list of Things-Apple-Has-Not-Yet-Done. While not all of these would be easy to implement, none of them would be beyond Apple’s ability to execute. Most card-carrying AAPL doomsayers, however, would look at such a list and conclude: See, Apple’s fallen behind, Apple’s doomed!

There’s, of course, another way of interpreting the same list. Apple could spend a good part of the next decade bundling a handful of these Yet-To-Be-Done items annually into an exciting new iOS device/service to sell into its nearly half billion user base and beyond. Apple suffers from no saturation of market opportunities.

Apple will inevitably tackle most of these, but only in its own time and not when it’s yelled at. It’ll likely introduce products and services not on this or any other list that will end up rejiggering an industry or two. Apple will do so because it knows it won’t win by conventional means or obvious schedules…which makes it hard — for those who are easily distracted — to like Apple.

Can Siri go deaf, mute and blind?

Earlier in “Is Siri really Apple’s future?” I outlined Siri’s strategic promise as a transition from procedural search to task completion and transactions. This time, I’ll explore that future in the context of two emerging trends:

  • Internet of Things is about objects as simple as RFID chips slapped on shipping containers and as vital as artificial organs sending and receiving signals to operate properly inside our bodies. It’s about the connectivity of computing objects without direct human intervention.
  • The best interface is no interface is about objects and tools that we interact with that no longer require elaborate or even minimal user interfaces to get things done. Like self-opening doors, it’s about giving form to objects so that their user interface is hidden in their user experience.

Apple’s strength has always been the hardware and software it creates that we love to carry, touch, interact with and talk about lovingly — above their mere utility — like jewelry, as Jony Ive calls it. So, at first, it seems these two trends — objects talking to each other and objects without discernible UIs — constitute a potential danger for Apple, which thrives on the design of human touch and attention. What happens to Apple’s design advantage in an age of objects performing simple discrete tasks or “intuiting” and brokering our next command among themselves without the need for our touch or gaze? Indeed, what happens to UI design in general, in an ocean of “interface-less” objects inter-networked ubiquitously?

Looks good, sounds better

Fortunately, though a star in her own right, Siri isn’t wedded to the screen. Even though she speaks in many tongues, Siri doesn’t need to speak (or listen, for that matter) to go about her business, either. Yes, Siri uses interface props like fancy cards, torn printouts, maps and a personable voice, but what makes Siri different is neither visuals nor voice.

Despite the knee-jerk reaction to Siri as “voice recognition for search,” Siri isn’t really about voice. In fact, I’d venture to guess Siri initially didn’t even have a voice. Siri’s more significant promise is about correlation, decisioning, task completion and transaction. The fact that Siri has a sassy “voice” (unlike her competitors) is just endearing “attitude”.

Siri2

Those who are enthusiastic about Siri see her eventually infiltrating many gadgets around us. Often seen liaising with celebrities on TV, Siri is thought to be a shoo-in for the Apple TV interface Oscars, maybe even licensed to other TV manufacturers, for example. And yet the question remains, is Siri too high maintenance? When the most expensive BOM item in an iPhone 5 is the touchscreen at $44, nearly 1/4 costlier than the next item, can Siri afford to live outside of an iPhone without her audio-visual appeal?

Well, she already has. Siri Eyes Free integration is coming to nine automakers early this year, allowing drivers to interact with Siri without having to use the connected iPhone screen.

Sirieyesfree

Given Siri Eyes Free, it’s not that difficult to imagine Siri Touch Free (see and talk but not touch), Siri Talk Free (see and touch but not talk) and so on. People who are impatient with Apple’s often lethargic rollout plans have already imagined Siri in all sorts of places, from aircraft cockpits to smart wristwatches to its rightful place next to an Apple TV.

Over the last decade, enterprises have spent billions to get their “business intelligence” infrastructure to answer analysts’ questions against massive databases, cutting turnaround from months to weeks to days to hours and even minutes. Now imagine an analyst querying that data by having a “natural” conversation with Siri, orchestrating some future Hadoop setup, continuously relaying nested, iterative questions funneled towards an answer, in real time. Imagine a doctor or a lawyer querying case histories by “conversing” with Siri. Forget voice: imagine Siri’s semantic layer responding to 3D gestures or touches on glass or any sensitized surface. Set aside active participation of a “user” and imagine a monitor with Siri reading the microexpressions of a sleeping or crying baby and automatically vocalizing appropriate responses or simply rocking the cradle faster.

Scenarios abound, but can Siri really afford to go fully “embedded”?

There is some precedent. Apple has already created relatively successful devices by eliminating major UI affordances, perhaps best exemplified by the iPod nano ($149) that can become an iPod shuffle ($49) by losing its multitouch screen, made possible by the software magic of Genius, multi-lingual VoiceOver, shuffle, etc. In fact, the iPod shuffle wouldn’t need any buttons whatsoever, save for on/off, if Siri were embedded in it. Any audio functionality it currently has, and much more, could be controlled bi-directionally with ease, wherever Siri is functional and socially acceptable. A 3G radio plus embedded Siri could also turn that tiny gadget into so many people’s dream of a sub-$100 iPhone.

Ipods2

Grounding Siri

Unfortunately, embedding Siri in devices that look like they may be great targets for Siri functionality isn’t without issues:

  • Offline — Although Siri requires a certain minimum horsepower to do its magic, much of that is spent ingesting and prepping audio to be transmitted to Apple’s servers, which do the heavy lifting. Bringing that processing down to an embedded device that doesn’t require a constant connection to Apple may be computationally feasible. However, Apple’s ability to advance Siri’s voice-decoding accuracy and pattern recognition depends on constantly sampling and adjusting to input from tens of millions of Siri users. This would rule out embedding Siri in offline devices and would create significant storage and syncing problems with seldom-connected devices.
  • Sensors — One of the key reasons Siri is such a good fit for smartphones is the number of on-device sensors and the virtually unlimited range of apps it’s surrounded with. Siri is capable of “knowing” not only that you’re walking, but that you’ve also been walking wobbly, for 35 minutes, late at night, in a dark alley, around a dangerous part of a city, alone… and of silently sending a pre-designated alert on your behalf. While we haven’t seen examples of such deep integration from Apple yet, embedding Siri in devices that lack multiple sensors and apps would severely limit its potential utility.
  • Data — Siri’s utility is directly indexed to her access to data sources and, at this stage, third parties’ search (Yelp), computation (WolframAlpha) and transaction (OpenTable) facilities. Apple adds such partners in different domains on a regular basis and is expected to continue doing so. Siri embedded in radio-lacking devices that don’t have access to such data and processing, therefore, may be too crippled to be of interest.
  • Fragmentation — People expect to see Siri pop up in all sorts of places and Apple has taken the first step with Siri Eyes Free where Siri gives up her screen to capture the automotive industry. If Siri can drive in a car, does that also mean she can fly on an airplane, sail on a boat or ride on a train? Can she control a TV? Fit inside a wristwatch? Or a refrigerator? While Siri — being software — can technically inhabit anything with a CPU in it, the radio in a device is far more important to Siri than its CPU, for without connecting to Apple (and third party) servers, her utility is severely diminished.
  • Branding — Siri Eyes Free won’t light up the iPhone screen or respond to commands that would require displaying a webpage as an answer. What look like reasonable restrictions on Siri’s capabilities in this context shouldn’t, however, necessarily signal that Apple would create “subsets” of Siri for different domains. More people will use and become accustomed to Siri’s capabilities in iPhones than any other context. Degrading that familiarity significantly just to capture smaller markets wouldn’t be in Apple’s playbook. Instead of trying to embed Siri in everything in sight and thus diluting its brand equity, Apple would likely pair Siri with potential NFC or Bluetooth interfaces to devices in proximity.

What’s Act II for Siri?

For Siri’s debut, Apple harvested the lowest-hanging fruit and teamed up with just a handful of already available data services like Yelp and WolframAlpha, but it has not really taken full advantage of on-device data, sensor input or other novel information.

As seen from outside, Siri’s progress at Apple has been slow, especially compared to Google, which has had to play catch-up. But Google must recognize Google Now (a Siri-for-Android, for all practical purposes) as a strategically indispensable weapon: a hook for those Android device manufacturers that would prefer to bypass Google’s ecosystem. None of them can build anything like it for some time to come, Samsung’s subpar attempts aside.

If you thought Maps was hard, injecting relationship metadata into Siri — fact by fact, domain by domain — is likely an order of magnitude more laborious, so Apple’s got her work cut out for Siri. It’d be prudent not to expect Apple to rush into embedding Siri in its non-signature devices just yet.

“The Creepy Line”

When asked in 2010 about the possibility of a Google “implant,” Google’s then-CEO Eric Schmidt famously said:

“Google policy is to get right up to the creepy line and not cross it.

With your permission you give us more information about you, about your friends, and we can improve the quality of our searches. We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”

Since that reassuring depiction of what awaits us in the future, Google has danced energetically around “the creepy line” many times, from subverting users’ privacy preferences in Safari and paying the largest FTC fine in history to introducing the omniscient Google Glass, which gets as close to human tracking as possible without drilling into the brain.

When the internet behemoth raises the bar, others rush to conquer it and some manage to surpass it. Buried in the minutiae of CES 2013, in a booth not much bigger than a 110-inch Samsung UHD TV, was Affectiva, showcasing its primary product, Affdex:

“Affdex tracks facial and head gestures in real-time using key points on viewers’ face to recognize a rich array of emotional and cognitive states, such as enjoyment, attention and confusion.”

affdex1

Affdex2

Deciphering concealed emotions by “reading” facial microexpressions, popularized by Paul Ekman and the hit TV series Lie To Me, is nothing new, of course. What’s lifting us over the creepy line is the imminent ubiquity of this technology, all packaged into a web browser and a notebook with a webcam, no installation required.

Affdex3

Eyes Wide Shut

Today, Affectiva asks viewers’ permission to record them as they watch TV commercials. What happens tomorrow? After all, DNA evidence in courts was first used in the late 1980s and has been controversial ever since: it’s been used to exonerate incarcerated people, and it’s been abused and misused to convict innocent ones. Like DNA analysis, facial expression reading technology will advance and may attain similar stature in law and in other fields…someday.

Currently, however, along with its twin, face recognition technology, microexpression reading isn’t yet firmly grounded in law. This uncertainty gives it the necessary space to evolve technologically, but it also opens the door to significant privacy and security abuse.

Janus-faced

The technology, when packaged into a smartphone, for example, can be used to help some of those with Asperger’s syndrome to read facial expressions. But it can also be used in a videotelephony app as a surreptitious “lie detector.” It could be a great tool during remote diagnosis and counseling in the hands of trained professionals. But it could also be used to record, analyze and track people’s emotional state in public venues: in front of advertising panels, as well as courtrooms or even job interviews. It can help overloaded elementary school teachers better decipher the emotional state of at-risk children. But it can also lead focus-group obsessed movie studios to further mechanize character and plot development.

The GPU in our computers is the ideal matrix-vector processing tool to decode facial expressions in real time, and it will do so in the very near future. It’s entirely conceivable, for instance, for a presidential candidate to be peering into his teleprompter to see a rolling score of a million viewers’ reactions, passively recorded and decoded in real time, allowing him to modulate his speech in sync with that feedback. Would that be truly “representative” democracy or an abdication of leadership?

And if these are possible or even likely scenarios, why wouldn’t we have the technology embedded in a Google Glass-like device or an iPhone 7, available all the time and everywhere? If we can use these gadgets to decode other people’s emotional state, why can’t these gadgets use the same to decode our own and display it back to us? What happens when, for the first time in Homo sapiens history, we have constant (presumably unbiased) feedback on our own emotions? The distance from machines detecting our emotional state to machines suggesting (and even administering) emotion-altering medicine can’t be that far, can it? How do we learn to live with that?

The technology is out there. In Apple’s Siri, Google already has the blueprint for advancing Google Now from searching to transactions. One would think the recent hiring of Singularity promoter Ray Kurzweil as director of engineering points to much higher ambitions. Ambitions we’re not remotely prepared to parse yet. Much closer to that creepy line.

What’s broken, patents or the legal system?

Referring to the America Invents Act (AIA), aimed at culling low-quality software patents, the head of the United States Patent and Trademark Office, David Kappos, says:

“Give it a rest already. Give the AIA a chance to work. Give it a chance to even get started.”

He’s mostly reacting to studies claiming that patent trolls enabled by the USPTO cost the economy upwards of $29 billion annually. While awards vary, what’s constant is the exorbitant cost of litigating patent cases: large-scale cases can easily run into the tens of millions of dollars and take months or years.

One way to make sense of this situation is to declare the very notion of (software) patents archaic and indefensible in the 21st century. But what if the problem isn’t the fundamental notion or the general utility of patents, but rather the inefficiencies of our legal system?

If the legal costs associated with getting and defending patents were 10X cheaper, and the process of adjudication much faster, more professional and more predictable, would we feel differently about patent claims?