AIs Are Prophets
Beware the AI Prophets 1: Don’t Follow the Prophets Into Hell
Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.
- Frank Herbert, God Emperor of Dune
In all the discussions about AI risk - alignment, economy, culture, and a host of other concerns - we’re missing perhaps its most dangerous aspect. We are creating digital prophets, and in doing so, we may have built the ultimate enabler of humanity’s oldest weakness: our desperate need for certainty.
AIs can already tell us things about the future that no human could ever foresee. Even without Artificial General Intelligence (AGI), AIs simply take in more than any human possibly can.
History shows us that the greatest dangers of prophecies lie in how they interact with human nature — our pervasive need for that logical universe Frank Herbert warned us about in the Dune series of novels.
What Makes an AI a Prophet?
Unlike traditional prediction tools, AIs combine:
- Access to virtually all recorded human knowledge
- Real-time integration of global events and data
- Pattern recognition across more domains than any human could master
- No human psychological limits or biases (but new AI-specific ones)
AI Prediction is Already Happening
Much of what AIs are now used for is prediction of one kind or another. Indeed, the fundamental architecture of almost all AI models is some form of “predicting the next thing” — the (shallow and silly, but pretty common) stochastic parrot critique is based on this.
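To make the “predicting the next thing” idea concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which, then predicts the most frequent successor. This is a toy illustration of the predictive principle, not how any production model works — real systems learn vastly richer statistics over vastly more context, but the underlying task is the same.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it and how often."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent successor of `token`, or None if unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = "the spice must flow and the spice must be protected".split()
model = train_bigram(corpus)
print(predict_next(model, "spice"))  # most frequent successor: "must"
```

Scale the same idea up by many orders of magnitude in data and model capacity, and you get systems whose “next thing” predictions start to look like foresight.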
There are many examples of AI being used for prediction:
- Financial models predicting market movements
- Content algorithms predicting human behavior
- Supply chain systems predicting global disruptions
- Climate models integrating countless variables
- Medical AI predicting diagnoses
Even the systems that generate pictures and video employ a design based on prediction — they’re predicting what thing would best satisfy the prompt given to them.
Similar techniques are used across these very different sorts of predictions. The predictive algorithms that AIs are based on are very general. So far, they are being applied to narrow predictions, but our ever-growing computational capacities are the only constraint on creating prediction systems that integrate a diverse range of information to produce new sorts of predictions.
Algorithmic trading is probably the area where the most general prediction bots are being developed. The best of these bots consider not only trade and economic data, but general news, social media posts and other sources, in real time.
AI bots are not just extrapolating trends, nor are they employing traditional mathematical models (though they can and increasingly do create new mathematical and computational models as part of what they do).
It is surely only a question of scale to imagine AIs connected to our vast networks of public and police cameras, to social networks, to news sources, financial and legal records. AIs would have the capacity and patience to track the movements of people with the cameras. They can recognise those people in social media images and posts (by those people and others around them). They can track individuals across all these sources of information. They can track who interacts with who, where, and when.
Such AIs will be able to determine a great deal about what just about everyone in a city is doing, all the time. What sort of predictions can they make with such information? They could prevent crime, direct investments, coordinate efforts of folks who don’t even know they have shared interests. They can probably determine who someone’s ideal romantic partner is, detect illness and plague very early, actually perform economic central planning.
Privacy concerns strike many as a bit paranoid, but the uses to which personal information can be put are being magnified more and more every day. And even what might have been considered relatively benign sources of data (security cameras in public spaces, say), are susceptible now to entirely new kinds of abuse.
Now think bigger:
- detecting emerging social phenomena;
- correlating weather predictions with expected behaviour;
- finding new correlations between financial markets and other aspects of society;
- finding patterns not just now but based on history — finding new ways that history “rhymes”;
- identifying the real, hidden power structures in organisations;
- predicting individual life trajectories.
All of these things are susceptible to the prediction technologies current AIs are already using. These sorts of predictions will happen.
Knowledge, you see, has no uses without purpose, but purpose is what builds enclosing walls.
- Frank Herbert, Children of Dune
The Social Effects of Prophecy are Always Profound
Prophets and charismatic leaders magnify their failures through the vast numbers of folks who follow them. From the destruction of Jerusalem by Nebuchadnezzar, to the Nazis, to the Wounded Knee massacre, and many others, prophets regularly produce disastrous mass social effects. (The line between charismatic leader and prophet can be very thin, too: witness Donald Trump claiming to be The One.)
Prophecy has historically been a source of political power. Truly effective prophecy is bound to be even more so. The pathological personalities drawn to political power will be drawn to AI inexorably.
Power attracts pathological personalities. It is not that power corrupts but that it is magnetic to the corruptible.
- Frank Herbert, Dune
I don’t find it hard to imagine a host of ways a prophetic AI might lead to disaster. A leftist movement might attempt large-scale central planning. A right-wing movement might impose draconian social and moral requirements on government programs, or on private life. A millenarian movement might see AI prophets as proof of the end times.
When religion and politics travel in the same cart, the riders believe nothing can stand in their way. Their movement becomes headlong - faster and faster and faster. They put aside all thought of obstacles and forget that a precipice does not show itself to the man in a blind rush until it’s too late.
- Frank Herbert, Dune
As humans become aware of the superhuman capacity of AI to understand, explain and predict the future, how will we think about these systems? How will we treat them? The deep human instinct to seek certainty and safety will shift the behaviour of humans in large numbers. That, too, will be data a machine can use for its predictions.
Does the prophet see the future or does he see a line of weakness, a fault or cleavage that he may shatter with words or decisions as a diamond-cutter shatters his gem with a blow of a knife?
- Frank Herbert, Dune
What do AI issues even look like, in the light of such considerations? There are at least two possible sources of disaster:
- AIs, misaligned or not, AGI or not, inducing mass social change that goes off the rails one way or another; or
- Malign humans or movements exploiting the influence of AI prophecy to dangerous ends.
The Blind Spot
I don’t see AI pundits discussing this much. Nick Bostrom touches briefly on it in one piece, and a little in his book, Superintelligence. A Google search for “AI prediction risks” finds discussions of ways predictions might be wrong (e.g. biases), but uncovers no discussion whatever of the social effects of AI prophecy.
I think the folks discussing AI risk are missing this very important issue for a few reasons:
- the AGI/Superintelligence idea is kind of obvious, and has been a SciFi staple for a hundred years;
- these technical folks tend to think of prediction as a technical question with technical answers;
- Silicon Valley types strongly believe that more data = better decisions;
- the folks interested in AI risk tend not to be interested in the social dimensions of change, nor in what we can learn from historical and religious parallels;
- as the holders of the keys to AI, Silicon Valley is not much concerned that they themselves might be the causes of problems;
- like Hari Seldon in the Foundation series, these folks tend to believe in the ultimate efficacy of rational policy and clever people to define and solve any problem.
More generally, rationalist strengths (logic, systems thinking) can be weaknesses when dealing with irrational human behaviours, such as are brought about by prophetic and charismatic movements.
Hidden in Plain Sight
The concerns that are the usual focus of AI pundits are certainly relevant:
- AI alignment problems;
- AI governance discussions;
- Prediction market debates;
- Transparency issues.
But they are missing the crucial connection:
- These are all aspects of prophetic power;
- We’re rebuilding ancient, psychologically fundamental systems (society, culture) with new digital tools;
- The problems prophecy will bring don’t require true AGI.
More to Come
In my next post, I will discuss the disastrous results of prophets and prophecy throughout history, and what this might tell us about where AI prophets might be taking us.
Later posts will consider what political and game theoretic considerations might arise; and why the human desire for certainty is the key to understanding the challenges we’re about to face.
Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
- Frank Herbert, Dune