The real reason to be nervous about AI

An unlikely drama has unfolded in the media in recent weeks. The centerpiece of this drama is not a celebrity or a politician but a sprawling computer system created by Google called LaMDA (Language Model for Dialogue Applications). A Google engineer, Blake Lemoine, was suspended after claiming on Medium that LaMDA, with which he communicated via text, was “sentient.” That claim (and a subsequent Washington Post article) sparked a debate between those who think Lemoine is only stating an obvious truth, that machines can now, or soon will, demonstrate intelligence, autonomy, and sentience, and those who dismiss the claim as naive at best and deliberate misinformation at worst. Before explaining why I think those who oppose the sentience story are right, and why that story serves the power interests of the tech industry, let’s define what we’re talking about.

LaMDA is a Large Language Model (LLM). LLMs ingest enormous amounts of text, almost always from Internet sources such as Wikipedia and Reddit, and, by iteratively applying statistical and probabilistic analysis, identify patterns in that text. That text is the input. Once these patterns are “learned” (a loaded word in artificial intelligence, or AI), they can be used to produce plausible text as output. The ELIZA program, created in the mid-1960s by MIT computer scientist Joseph Weizenbaum, was a famous early example of software that converses this way. ELIZA didn’t have access to a vast ocean of text or fast processing the way LaMDA does, but the basic principle was the same. One way to get a better handle on LLMs is to note that AI researchers Emily M. Bender and Timnit Gebru refer to them as “stochastic parrots.”
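To make the “stochastic parrot” idea concrete, here is a minimal sketch of the pattern-then-generate principle: a toy bigram model in Python that counts which word tends to follow which in a small corpus and then samples plausible-sounding continuations. The corpus, the `parrot` function, and its parameters are illustrative inventions; real LLMs use vastly more data and neural networks rather than raw word counts, but the underlying move, producing output by reproducing statistical patterns found in the input, is the same.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the web-scale text (Wikipedia, Reddit, ...)
# that a real LLM would ingest; purely illustrative.
corpus = (
    "the model reads text and finds patterns . "
    "the model then produces plausible text as output . "
    "the output looks fluent because it repeats patterns found in the text ."
)

# "Training": record which words follow which, i.e. learn bigram statistics.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def parrot(start="the", length=12):
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:                # no observed continuation: stop
            break
        word = random.choice(followers)  # sampled in proportion to observed counts
        output.append(word)
    return " ".join(output)

print(parrot())
# Possible output: "the model reads text and finds patterns . the output looks fluent"
```

Nothing in this sketch understands what a “model” or a “pattern” is; it only echoes the statistics of its input, which is the point of the parrot metaphor.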

There are many disturbing aspects to the growing use of LLMs. Computing at the scale of LLMs requires huge amounts of electrical power, most of it generated from fossil fuels, which contributes to climate change. The supply chains behind these systems, and the human costs of mining the raw materials for computer components, are also cause for concern. And there are pressing questions about what such systems should be used for, and to whose benefit.

The goal of most AI (a field that began as a pure research aspiration announced at a Dartmouth conference in 1956 but is now dominated by Silicon Valley’s priorities) is to replace human effort and skill with thinking machines. So any time you hear about self-driving trucks or cars, instead of marveling at the technical achievement, you should look for the contours of an anti-labor program.

The futuristic promises about thinking machines never let up. This is hype, yes, but also a propaganda campaign by the tech industry to convince us that it has created, or is about to create, systems that can be doctors, cooks, and even life partners.

A simple Google search for the phrase “AI will…” yields millions of results, usually accompanied by images of ominous sci-fi-style robots, suggesting that AI will soon replace humans in a dizzying number of fields. What is missing is any examination of how these systems actually work and what their limitations are. Once you pull back the curtain and see the wizard pulling levers, straining to keep the illusion going, you have to wonder: why are we being told this?
