Reprinted with permission from The Book of Minds: How to Understand Ourselves and Other Beings, From Animals to AI to Aliens by Philip Ball, published by The University of Chicago Press. © 2022 by Philip Ball. All rights reserved.
In 1984, computer scientist Aaron Sloman of the University of Birmingham in England published a paper arguing for more systematic thinking about the vague but intuitive concept of mind. It was time, he said, to admit into the conversation what we’d learned about animal cognition, as well as what research into artificial intelligence and computer systems told us. Sloman’s paper was titled “The Structure of the Space of Possible Minds”.
“Clearly there is not just one sort of mind,” he wrote:
“Besides clear individual differences between adults, there are differences between adults, children of different ages and infants. There are intercultural differences. There are also differences between humans, chimpanzees, dogs, mice and other animals. And there are differences between all those and machines. Also, machines are not all the same, even if they are made on the same production line, because identical computers can have very different characteristics if they use different programs.”
Now a professor emeritus, Sloman is the kind of academic who can’t be pigeonholed. His ideas ricochet from information theory to philosophy to behavioral science, along a trajectory that can leave fellow travelers giddy. Ask him a question and you’ll find yourself whisked far from the starting point. He can sound dismissive, even despairing, about other attempts to ponder the mysteries of the mind. “Many facts are ignored or not noticed,” he told me, “either because the researchers don’t understand the concepts needed to describe them, or because the kind of research needed to investigate them isn’t taught in schools and universities.”
But Sloman shows deep humility about his own effort four decades ago to broaden the discourse on the mind. He feels his 1984 paper barely scratched the surface of the problem and had little impact. “My impression is that my thinking on these matters has been largely ignored,” he says, and understandably so, “because making real progress is very difficult, time-consuming, and too risky to attempt in the current climate of constant judgment through citations, funding and new demonstrations.”
But there he is wrong. Several researchers at the forefront of artificial intelligence now suggest that Sloman’s paper had a catalytic effect. Its mix of computer science and behavioral science must have seemed eccentric in the 1980s, but today it looks astonishingly prescient.
“We need to get rid of the idea that there is one big line between things with and without minds,” he wrote. “Instead, informed by the variety of types of computational mechanisms already explored, we have to recognize that there are many discontinuities or divisions within the space of possible systems: the space is not a continuum, nor is it a dichotomy.”
Part of this task of mapping the space of possible minds, Sloman said, was to survey and classify the kinds of things different kinds of minds can do:
“This is a classification of different kinds of skills, abilities or behavioral traits – remembering that some of the behavior can be internal, for example recognizing a face, solving a problem, appreciating a poem. Different kinds of minds can then be described in terms of what they can and cannot do.”
The task is to explain what it is that allows different minds to acquire their different skills.
“These explorations can be expected to reveal a very richly structured space,” Sloman wrote, “not one-dimensional, like a spectrum, nor any kind of continuum. There will be not two but many extremes.” These can range from mechanisms so simple, like thermostats or speed controllers on motorbikes, that we wouldn’t conventionally compare them to minds at all, to the kind of sophisticated, responsive and adaptive behavior exemplified even by simple organisms like bacteria and amoebae. “Rather than fruitless attempts to divide the world into things with and things without the essence of mind, or consciousness,” he wrote, “we should examine the many detailed similarities and differences between systems.”
This was a project for (among others) anthropologists and cognitive scientists, ethologists and computer scientists, philosophers and neuroscientists. Sloman thought AI researchers should focus less on how close artificial cognition could get to that of humans, and more on learning about how cognition evolved and how it manifests itself in other animals: squirrels, weaver birds, corvids, elephants, orangutans, cetaceans, spiders, and so on. “Today’s AI,” he said, “throws more and more memory and speed and increasing amounts of training data at the problem, allowing progress to be reported with little understanding or replication of natural intelligence.” That, in his view, is not the right way to approach it.
Although Sloman’s concept of a Space of Possible Minds prompted some researchers to think about intelligence and how it might be created, the cartography has barely begun. The relevant disciplines he listed were too far apart in the 1980s to find much common ground, and in any case, we were only beginning to make progress in unraveling the cognitive complexities of our own minds. In the mid-1980s, a burst of corporate interest in so-called expert-system AI soon faded, creating a lull in the field that lasted until the early 1990s. Talk of “machine minds” was widely regarded as hyperbole.
Now the wheel has turned and there’s never been a better time to think about what Sloman’s “mindspace” might look like. Not only is AI finally beginning to prove its worth, but there’s a widespread perception that further improvements, and perhaps even the creation of the kind of “artificial general intelligence” with human-like capabilities that the field’s founders envisioned, will require an accurate understanding of how today’s putative machine minds differ from our own.