In his recent book on what artificial intelligence could mean for a culture imbued with the spirit of self-improvement (an $11 billion industry in the US alone), Mark Coeckelbergh points to a kind of ghostly doppelgänger that now accompanies us all: the quantified self, an invisible and ever-growing digital duplicate made up of all the traces we leave behind when we read, write, watch, or buy something online, or carry a device, such as a phone, that can be tracked.
This data is “ours,” and yet it isn’t: we don’t own or control it, and we have hardly any say in where it goes. Companies buy, sell, and mine it to find patterns in our choices, and connections between our data and that of others. Algorithms serve up recommendations; whether or not we click through, or watch the video clips they predict will grab our attention, feedback is generated, sharpening the cumulative quantitative profile.
The potential to market self-improvement products tailored to your specific insecurities is obvious. (Think of how much home fitness equipment, now collecting dust, was once sold with the blunt instrument of the infomercial.) Coeckelbergh, a professor of philosophy of media and technology at the University of Vienna, worries that the effect of AI-driven self-improvement can only be to reinforce an already strong tendency toward egocentrism. The individual personality, driven by its own cybernetically amplified fears, would atrophy into “a thing, an idea, an essence isolated from others and the rest of the world and no longer changing,” he writes in Self-Improvement. The elements of a healthier ethos, he argues, are found in philosophical and cultural traditions that emphasize that the self “can exist and improve only in relation to others and the wider environment.” The alternative to digging into digitally enhanced ruts would be “a better, harmonious integration into the social whole by fulfilling social obligations and developing virtues such as compassion and trustworthiness.”
Quite a task, that. It involves not just debate about values but also public decision-making about priorities and policies, decision-making that is ultimately political, as Coeckelbergh takes up in his other new book, The Political Philosophy of AI (Polity). Some of the basic questions are as familiar as recent headlines. “Should social media be more tightly regulated or self-regulated to create better quality public discussion and political participation?” One option would be to use AI capabilities to detect and remove misleading or hateful posts, or at least reduce their visibility. Any discussion of the issue will no doubt return to established arguments about whether freedom of expression is an absolute right or one limited by boundaries that need clarifying. (Should a death threat be protected as free speech? If not, what about a call to genocide?) New and emerging technologies force a return to classic questions in the history of political thought “from Plato to NATO,” as the saying goes.
In this respect, The Political Philosophy of AI also serves as an introduction to traditional debates, given a contemporary inflection. But Coeckelbergh also strives for what he calls “a non-instrumental understanding of technology,” in which technology “is not only a means to an end, but also shapes those ends.” Tools that can identify and stop the spread of falsehoods could also be used to “shift” attention toward accurate information, perhaps enhanced by AI systems that can assess whether a given source uses sound statistics and interprets them plausibly. Such a development would likely end certain political careers before they began, but more worrisome is that such technology, as the author puts it, “could be used to promote a rationalist or technosolutionist understanding of politics, which destroys the inherently agonistic [that is, conflictual] dimension of politics and risks excluding other points of view.”
Whether or not lying is inherent in political life, there is something to be said for the benefit of exposing it publicly in the course of debate. By directing debate, AI risks “making the ideal of democracy as deliberation more difficult to achieve… a threat to public accountability and increasing concentration of power.” That is the dystopian potential. The absolute worst-case scenario is that AI turns out to be a new life form, the next step in evolution, and becomes so powerful that managing human affairs will be the least of its concerns.
Coeckelbergh occasionally winks at that sort of transhumanist extrapolation, but his real focus is on demonstrating that a few thousand years of philosophical thought will not automatically be superseded by feats of digital engineering.
“The politics of AI,” he writes, “runs deep into what you and I do with technology at home, at work, with friends, and so on, which in turn shapes those politics.” Or at least it can, provided we devote a fair amount of attention to questioning what we have made of that technology, and vice versa.