5 distractions that cloud our thinking about AI

One of the main arguments the Israeli historian Yuval Noah Harari makes in Sapiens: A Brief History of Humankind is that humans differ from other species because we can cooperate flexibly in large numbers, united in cause and spirit not by anything real, but by the fictions of our collective imagination. Examples of these fictions include gods, nations, money, and human rights, which are supported by religions, political structures, trade networks, and legal institutions, respectively.

As an entrepreneur, I’m increasingly appreciative of and fascinated by the power of collective fictions. Building a technology company is hard. Incredibly hard. Lost deals, fragile egos, impulsive choices, bugs in the code, missed deadlines, frantic sprints to deliver on customer requests, the doldrums of execution: Any number of things can temper the initial excitement of starting a new venture. Mission is another fiction required to keep a team united and driven when the proverbial shit hits the fan.

While a strong, charismatic group of leaders is key to establishing and sustaining a company mission, companies don’t exist in a vacuum — they exist in a market, and they participate in the larger collective fictions of the zeitgeist in which they operate. The borders are fluid and porous, and leadership can use this porousness to energize a team to feel like they’re on the right track, like they’re fighting the right battle at the right time.

These days, it is incredibly energizing to work for a company building software products with artificial intelligence (AI). AI is shorthand for products that use data to provide a service or insight to a user — or, as I argued in a previous post, AI is whatever computers cannot do until they can. But there wouldn’t be so much frenzied fervor around AI if it were as boring as building a product using statistics and data.

Rather, what’s exciting the public are the collective fictions we’re building around what AI means — or could mean, or should mean — for society. It all becomes a lot more thrilling when we think about AI as computers doing things we’ve always thought only humans can do: when they start to speak, write, or even create art like we do, or when we no longer have to adulterate and contort our thoughts and language to speak Google or Excel.

The problem is that some of our collective fictions about AI, exciting though they may be, are distracting us from the real problems AI can and should be used to solve, as well as some of the real problems AI is creating and will only exacerbate if we’re not careful. In this post, I cover my top five distractions in contemporary public discourse around AI.

Distraction 1: The end of work

Anytime my father, who has 40 years of experience in software engineering, most recently in natural language processing and machine learning, hears rumblings that AI is going to replace the workforce as we know it, he placidly mentions Desk Set, a 1957 romantic comedy starring Spencer Tracy and Katharine Hepburn. Desk Set follows a group of librarians at a national broadcasting network who fear for their jobs when an engineer is brought in to install EMERAC (its name a nod to early computers like ENIAC), an “electronic brain” that promises to do a better job fielding consumer trivia questions than they do. The film is both charming and prescient, and will seem very familiar to anyone reading about a world without work. The best scene features a virtuoso feat of associative memory that shows the sheer brilliance of Katharine Hepburn’s character (winning Tracy’s heart in the process), a brilliance the primitive electronic brain has no chance of emulating. The movie ends with a literal deus ex machina: the machine accidentally prints pink slips firing the entire company, only to be shut down for its rogue disruption of operations.

Desk Set can teach us a lesson. The 1950s saw a surge of energy around AI. In 1952, Bell Labs’ Claude Shannon introduced Theseus, a maze-solving artificial mouse. In 1957, Frank Rosenblatt built his Mark I Perceptron, the grandfather of today’s neural networks. In 1958, H.P. Luhn wrote a groundbreaking paper about business intelligence proposing an information management system we’re still working to realize today. And Arthur Samuel coined the term “machine learning” upon the release of his checkers-playing program in 1959 (Tom Mitchell has my favorite contemporary manifesto on what machine learning is and means). The world was buzzing with excitement. Society was to be totally transformed. Work would end, or at least fundamentally change to feature collaboration with intelligent machines.

This didn’t happen. We hit an AI winter. Neural networks were ridiculed as useless. Technology went on to change how we work and live, but not as the AI luminaries of the 1950s predicted. Many new jobs were created. And no one in 1950 pictured a Bay Area full of silicon transistors and adolescent engineers making millions off mobile apps. No one imagined Mark Zuckerberg. No one imagined Peter Thiel.

We need to ask different questions and address different people-and-process challenges to make AI work in the enterprise. I’ve seen the capabilities of over 100 large enterprises over the past two years, and I can tell you we have a long way to go before smart machines outright replace people. AI products, based on data and statistics, produce probabilistic outputs whose accuracy and performance improve with exposure to more data over time. As cognitive scientist Amos Tversky said, “man is a deterministic device thrown into a probabilistic universe.”

People mistake correlation for causation. They prefer deterministic, clear instructions to uncertainties and confidence rates (I adore the first few paragraphs of this article, where Obama throws his hands up in despair after being briefed on the likely location of Osama bin Laden in 2011). Maura Grossman and Gordon Cormack have spent years marshaling evidence to show humans are not as thorough as they think, especially with the large volumes of electronic information we process today. As Intapp CEO John Hall notes, law firm risk departments struggle immensely to break their conditioning toward painstaking manual review when identifying a conflict or a potential piece of evidence. These habits must be broken to take advantage of the efficiencies AI can provide.

The moral of the story is, before we start pontificating about the end of work, we should start thinking about how to update our mental habits to get comfortable with probabilities and statistics. This requires training. It requires that senior management make decisions about their risk tolerance for uncertainty. It requires that management decide where transparency is necessary (situations where we need to know why the algorithm gave the answer it did, as in consumer credit) and where accuracy and speed are more important (as in self-driving cars, where it’s critical to make the right decision to save lives and less important that we know why a decision was made). It requires the art of figuring out where to put a human in the loop to bootstrap the data required for future automation. It requires a lot of work, and it’s creating new consulting and product management jobs to address the new AI workplace.
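
To make that concrete, here is a minimal sketch of the human-in-the-loop pattern this paragraph describes. It is purely illustrative, not any real system: the classifier is a stand-in, and the confidence threshold stands for exactly the kind of risk-tolerance decision senior management would have to make.

```python
# A minimal human-in-the-loop sketch. Everything here is hypothetical:
# classify() stands in for a trained model, and the 0.90 threshold
# represents a management decision about tolerance for uncertainty.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # probabilistic output in [0, 1]

def classify(document: str) -> Prediction:
    # Stand-in for a real model; a trained classifier would go here.
    return Prediction(label="potential_conflict", confidence=0.72)

def route(document: str, threshold: float = 0.90) -> str:
    """Automate only when the model is confident enough; otherwise
    escalate to a human, whose decision can later be fed back as
    training data to bootstrap future automation."""
    pred = classify(document)
    if pred.confidence >= threshold:
        return f"auto: {pred.label} ({pred.confidence:.0%})"
    return "escalate: route to human review queue"

print(route("Sample engagement letter ..."))
```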

Distraction 2: Conversational interfaces

Just about everyone can talk; very few people have something truly meaningful and interesting to say.

The same holds for chatbots, software systems whose front end is designed to engage with an end user as if it were another human in conversation. Conversational AI is extremely popular these days for customer service workflows (a next-generation version of the recorded options menus of airline, insurance, banking, or utility companies) and even for booking appointments at the hair salon or yoga studio. The principle behind conversational AI is sound: it makes technology friendlier, enables technophobes like my grandmother to benefit from internet services by shouting requests to her Amazon Alexa, and promises immense efficiencies for businesses that serve large consumer bases by automating and improving customer service (which, despite my first point about the end of work, will likely affect service departments significantly).

The problem, however, is that people get so caught up in the excitement of building a smart bot that they forget that being able to talk doesn’t mean you have anything useful or intelligent to say. Indeed, at Fast Forward Labs, we’ve encountered many startups so hyped by the promise of conversational AI that they neglect the less sexy but incontrovertibly more important backend work of building the intelligence that powers a useful front-end experience. This work includes collecting, cleaning, processing, and storing the data used to train the bot; scoping and understanding the domain of questions the bot should answer; creating recommendation algorithms to match service to customer where needed; designing for privacy; and building workflow capabilities to escalate to a human when the customer gets confused. And so on.
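
To illustrate why that backend work matters, here is a deliberately naive sketch of the unglamorous logic beneath a “smart” bot (all names and numbers are hypothetical): match the user’s question against a restricted set of known intents, and escalate to a human the moment the match is weak.

```python
# A deliberately naive chatbot backend sketch. The intents, keywords,
# and 0.25 cutoff are all hypothetical; a production system would use
# models trained on the collected, cleaned data described above.

KNOWN_INTENTS = {
    "book_appointment": {"book", "appointment", "schedule", "reserve"},
    "opening_hours": {"hours", "open", "close", "when"},
    "cancel_booking": {"cancel", "cancellation", "refund"},
}

def match_intent(utterance):
    """Score each intent by keyword overlap; return the best intent
    and a crude confidence in [0, 1]."""
    words = set(utterance.lower().split())
    best, best_score = None, 0.0
    for intent, keywords in KNOWN_INTENTS.items():
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best, best_score = intent, score
    return best, best_score

def respond(utterance):
    intent, confidence = match_intent(utterance)
    if intent is None or confidence < 0.25:
        # Out of scope or ambiguous: escalate rather than guess.
        return "Let me connect you with a person who can help."
    return f"[handling intent: {intent}]"

print(respond("Can I book an appointment for Friday?"))
print(respond("My printer is producing pink slips"))
```

The point is not the toy keyword matcher but the structure: the bot’s scope is explicit, and anything outside it falls through to a human rather than to a confident wrong answer.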

The more general point I’m making with this example is that AI is not magic. These systems are still early in their development and adoption, and very few off-the-shelf capabilities exist. In an early adopter phase, we’re still experimenting, still figuring out bespoke solutions on particular data sets, still restricting scope so we can build something useful that may not be nearly as exciting as our imagination desires.

When she gives talks about the power of data, my colleague Hilary Mason frequently references Google Maps as a paradigmatic data product. Why? Because it’s boring! The front end is meticulously designed to provide a useful, simple service, hiding the immense complexity and hard work that powers the application behind the scenes.

In short, conversation and language are not always the best way to present information. The best AI applications will come from designers who use judgment to interweave text, speech, image, and navigation through keys and buttons.

Distraction 3: Universal basic income

Universal basic income (UBI), a government program in which everyone, at every income level in society, receives a stipend on a regular basis — Andy Stern, author of Raising the Floor, suggests $1,000 per month per U.S. citizen — is a corollary of the world without work. UBI is interesting because it unites libertarians (be they technocrats in Silicon Valley or hyper-conservatives like Charles Murray, who envisions a Jeffersonian ideal of neighbors supporting neighbors with autonomy and dignity) with socialist progressives (Andy Stern is a true man of the people, who led the Service Employees International Union for years) and futurists like Peter Diamandis.

However, UBI is a distraction from a much more profound economic problem created by our current global, technology-driven economy: income inequality. Income inequality is widely seen as a root cause of Trumpism, Brexit, and many of the other nationalist, regressive political movements at play across the world today. It is critical that we address it.

Income inequality is not a simple issue. It involves a complex interplay of globalization, technology, government programs, education, and the stigma of vocational schools in the US. In The Seventh Sense, Joshua Cooper Ramo explains how network infrastructure has polarizing effects, concentrating massive power in the hands of a few (Google, Facebook, Amazon, Uber) while distributing micro power and expression to the many (the millions connected on these platforms). Nicholas Bloom covers similar ground in this HBR article about corporations in the age of inequality. The economic consequences of our networked world can be dire, and they must be checked by thinking and approaches that did not exist in the 20th century. Returning to mercantilism and protectionism is not a solution. It’s a salve that can only lead to violence.

That said, one argument for UBI my heart cannot help but accept is that it can restore dignity and opportunity for the poor. Imagine if, to eat, you had to wait in lines at food pantries and could only afford unhealthy food that promotes obesity and diabetes. Imagine how much time you would waste in government offices, and how that time could be better spent learning a new skill or creating a new idea! The 2016 film I, Daniel Blake is a must-see. It’s one of those movies that brings tears to my eyes just thinking of it. You watch a kind, hard-working, honest man go through the wringer of a bureaucratic system, pushed to the limits of his dignity before he eventually rebels. While UBI may not be the answer, we all have a moral obligation to empathize with those who might not share our political views. People who are scared and want a better life also have truths to tell.

Distraction 4: Existential risk

Enlightenment is man’s emergence from his self-imposed nonage. Nonage is the inability to use one’s own understanding without another’s guidance. This nonage is self-imposed if its cause lies not in lack of understanding but in indecision and lack of courage to use one’s own mind without another’s guidance. Dare to know! (Sapere aude.) “Have the courage to use your own understanding,” is therefore the motto of the enlightenment.

This is the first paragraph of Immanuel Kant’s 1784 essay “Answering the Question: What Is Enlightenment?” I cite it because contemporary discourse about AI becoming an existential threat reminds me of the Middle Ages, when philosophers like Thomas Aquinas and William of Ockham presented arguments on the basis of authority: “This is true because Aristotle once said that…” Descartes, Luther, Diderot, Galileo, and the other powerhouses of early modern thought considered this rubbish that led to all sorts of confusion. They toppled the old guard and placed authority in the individual, each and every one of us born with the same rational capacity to build arguments and arrive at conclusions.

Such radical self-reliance has waxed and waned throughout history, the Enlightenment offset by the pulsing passion of Romanticism, only to be resurrected in the more atavistic rationality of Thoreau or Emerson. It seems that the current pace of change in technology and society is tipping the scales back towards dependence and guidance. It’s so damn hard to keep up with everything that we can’t help but relinquish judgment to the experts. Which means that if Bill Gates, Stephen Hawking, and Elon Musk, the priests of engineering, science, and ruthless entrepreneurship, all think that AI is a threat to the human race, then we mere mortals may as well bow down to their authority.

The problem here is that the Chicken Little logic espoused by the likes of Nick Bostrom, which urges us to prepare for the worst of all possible outcomes, distracts us from the real social issues AI is already exacerbating. An alternative approach to the ethics of AI, however, is quickly gaining traction. The Fairness, Accountability, and Transparency in Machine Learning movement focuses not on rogue machines running amok (another old idea, this time from Goethe’s 1797 poem “The Sorcerer’s Apprentice”), but on understanding how algorithms perpetuate and amplify existing social biases, and on doing something to change that.

There’s strong literature focused on the practical ethics of AI. A current Fast Forward Labs intern just published a post about a tool called FairML, which he used to examine implicit racial bias in criminal sentencing algorithms. Mathematician Cathy O’Neil regularly writes articles about the evils of technology for Bloomberg (her rhetoric can be alienating to technologists and pragmatic moderates). Gideon Mann, who leads data science for Bloomberg, is working on a Hippocratic oath for data scientists. Software engineer Blaise Agüera y Arcas and his machine intelligence team at Google are constantly examining and correcting for potential bias creeping into their algorithms. Data scientist Clare Corthell is mobilizing practitioners in San Francisco to discuss and develop ethical data science practices. The list goes on.
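
For a feel of what such practical auditing can look like, here is a minimal, hypothetical sketch of one common check: comparing a model’s favorable-outcome rates across demographic groups to compute a disparate impact ratio. It illustrates the general technique only, not the FairML tool mentioned above, and the data is made up.

```python
# A minimal, hypothetical fairness-audit sketch: compare favorable-
# outcome rates across groups and compute a disparate impact ratio.
# Illustrative only; not the FairML tool, and the data is invented.

from collections import defaultdict

def favorable_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, where
    outcome 1 means a favorable result (e.g., a low risk score)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = favorable_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                   # {'A': 0.75, 'B': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")  # rule of thumb: flag < 0.8
```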

Designing ethical algorithms will be a marathon, not a sprint. Executives at large enterprises are just wrapping their heads around the financial potential of AI; ethics is not their first concern. I predict the dynamics will resemble those in information security, where fear of a tarnished reputation spurs corporations to act.

Distraction 5: Personhood

The language used to talk about AI, and the design efforts made to make AI feel human and real, invite anthropomorphism. In November, I spoke on a panel at a conference hosted by Google’s Artists and Machine Intelligence group in Paris. It was a unique event because it brought together highly technical engineers and highly non-technical artists, a wonderful way to see how people who don’t work in machine learning understand, interpret, and respond to the language and metaphors engineers use to describe the vectors and linear algebra powering their machines. Sometimes this is productive: Artists like Ross Goodwin and Kyle McDonald deliberately play with the idea of relinquishing autonomy to a machine, liberating the human artist from the burden of choice and control, and opening the potential for serendipity as a network shuffles the traces of prior human work to create something radical, strange, and new. Sometimes this is not productive: One participant, upon learning that Deep Dream is actually an effort to interpret the black-box impenetrability of neural networks, asked if AI might usher in a new wave of Freudian psychoanalysis. (This stuff tries my patience.)

It’s up for debate whether artists can derive more creativity from viewing an AI as an autonomous partner or an instrument whose pegs can be tuned like the strings of a guitar to change the outcome of the performance. I think both means of understanding the technology are valid, but ultimately produce different results.

The general point here is that how we speak about AI changes what we think it is and what we think it can or can’t do. Our ability to anthropomorphize matrices multiplying numbers as determined by some function is worthy of wonder. But I can’t help but furrow my brow when I read about robots having rights like humans and animals. This would all be fine if it were only the path to consumer adoption, but these ideas of personhood may have legal consequences for consumer privacy rights. For example, courts are currently assessing whether the police have the right to information about a potential murder collected from an Amazon Echo (the privacy precedent here comes from Katz v. United States, the grandfather case in adapting the Fourth Amendment to our new digital age).

Joanna Bryson, a computer scientist at the University of Bath and Princeton, has proposed one of the more interesting explanations for why it doesn’t make legal sense to imbue AI with personhood: “AI cannot be a legal person because suffering in well-designed AI is incoherent.” Suffering, says Bryson, is integral to our intelligence as social species.

The crux of her argument is that we humans understand ourselves not as discrete monads or brains in a vat, but as essentially and intrinsically intertwined with other humans around us. We play by social rules, and avoid behaviors that lead to ostracism and alienation from the groups we are part of. We can construct what appears to be an empathetic response in robots, but we cannot construct a self-conscious, self-aware being who exercises choice and autonomy to pursue reward and recognition while avoiding suffering. (Perhaps reinforcement learning can get us there: I’m open to being convinced.)

This argument goes much deeper than business articles arguing that work requiring emotional intelligence — such as sales, customer relationships, nursing, and education — will be more valued than quantitative and repetitive work in the future. It’s an incredibly exciting lens through which to understand our own morality and psychology.

Conclusion

As mentioned at the beginning of this post, collective fictions are the driving force of group alignment and activity. They are powerful, beautiful, the stuff of passion and motivation and awe. The fictions we create about the potential of AI may just be the catalyst to drive real impact throughout society. That’s nothing short of amazing, as long as we can step back and make sure we don’t forget the hard work required to realize these visions — and the risks we have to address along the way.

An earlier version of this article was published on Quam Proxime by Kathryn Hume of Fast Forward Labs.
