When we write of superintelligence, we are not talking about God but about systems of high intelligence, exceeding current human capability, that emerge out of our current commitment to information and computing technologies. An Artificial General Intelligence [AGI] is the most likely emergent form that might be termed superintelligence, one which first matches, then surpasses and finally dominates human intelligence - naturally, it is the last that excites and worries thinkers. Many scientists assume that artificial intelligence [AI] will initially simply emulate human brain function before transforming, probably through its own ability to improve itself, into something 'greater'. However, it is equally possible that the human brain's functioning is not capable of such direct emulation but that the high intelligence of an AGI constructs something entirely new which contains an enhancement of the human reasoning ability, abandons the evolved aspects of humanity that it does not require and constructs new aspects of itself beyond our comprehension. Whether this then feed-backs into the reconstruction of humanity through mechanical means or evolves into a new silicon-based 'species', whatever emerges is unlikely to be anything like our current expectations or understanding - which is where the fear comes in.
A good guide to the wilder shores of fear and anxiety but also positive possibilities of intelligence enhancement is the work of Nick Bostrom, the Swedish philosopher working out of Oxford, whose basic theme is that we should be cautious about our development of AI systems because of the existential risks associated with an AGI emerging out of the many potential benefits of more specific uses of AI. He worries that an AGI would not have our values and morality or be able to be bounded by them. We should perhaps be equally interested in the fact that we, as humans, cannot be said to all hold to the values that the 'bien-pensants' claim we hold to. Certainly that there is no agreed common human standard of morality that survives much serious philosophical investigation. Bostrom and others seem to think that the AGI 'should' hold to the shoulds that they think we should hold to even though many humans hold to those 'shoulds' only contingently and circumstantially. The idea of humans giving a superintelligence orders on morality may be the greatest example of human 'hubris' yet to be recorded.
Even the simplest form of AGI which simply reasons immensely faster than a human can do (albeit still doing what intelligent humans do with the biological biases written out of the programme) would be a formidable social agent, capable of wiping out the analytical reasoning element in society as no longer very useful. Those of current higher intelligence who only deal in reasoning tasks probably have the most to fear from this development. Any rule-based system - such as the law or some elements of teaching or even medical diagnosis - may be transferred completely from humans to machines, eliminating the ratiocinatory functions of the higher professions, education, medicine and law. The proletarianisation of these professions is quite possible or rather a machine-based infrastructure undertaking the bulk of the tasks and a smaller group of high emotional intelligence intermediaries between the reasoning systems and the rest of humanity might emerge.
In other words, less people doing more, more people doing less (allowing even for the expansion of the market by the improved availability of reliable advice, diagnosis and information) and less opportunity for upper average intelligence people to use the professions for general social mobility. The very few are likely to be high earners until they are displaced in turn, the rest of the few likely to be 'managed' functionaries handling process-driven systems with little room for personal judgement, risking punishment for a human error, referring anything interesting up the line to the 'very few'. The model for this exists - contemporary banking - where the high status local personal bank manager has declined over many decades into a lower middle management administrator of systems set up by and overseen by 'head office'. A society of 'head offices' administering systems organised by risk-averse lower middle managers fronted by friendly greeters (assuming these are not replaced by androids that have climbed out of the 'uncanny valley') means a society in which a lot of human potential has to be redirected into something else or become more robotic than the robots.
But this is not all. The slim head office and the slim local branch (even if it survives) or the slim NHS and the slimmed down surgery or the slim group of law partners with a few technicians managing the machines maintains some sort of professional middle class presence in society - and do not think that journalism, marketing and even politics will not be affected - but the ones excluded from the magic system now fall into a world of supply of services to other humans that machines cannot supply. This is still a huge arena but the tendency, one we have already seen developing over recent decades with the accumulation of capital under globalisation, is to divide, much as the middling sort are dividing, into the mass and the few. The few are the brand name personalities, the highly talented or appealing, the truly creative and innovative who can latch on to the wider system of sales of goods and services as products in their own right or as creators of products of apparent value. The many are those who do jobs that require the personal touch (the plasterer, the plumber, the gardener) whose value may well rise or who duck and dive through a system where there are too many educated people for the fulfilling well-paid jobs available.
The political problem is obvious in a democracy. The vast mass of the population are going to be living in a better place (given the improvements technology can bring) but with little room for the individual aspiration that drove politics until the Crash of 2008. The population may be surviving well and that may suit a lot of people uninterested in 'aspiration', especially if National Citizen Income ideas emerge as viable with the massive increase in overall productivity. But it also leaves a lot of people with the personality type geared to achievement but whose idea of achievement is not satisfied by a corporate system that governs the population aided by machine intelligence. The temptation to apply machine intelligence by the elite to problems of social control and the extension of 'nudge' politics into pharmacological, surveillance and other manipulative strategies is going to be considerable as the new machine age with its AI and robots (possibly androids) begins to eliminate meaning from what it is to be human for many people - that is to strive and struggle and compete.
But there is another perspective to this about the very nature of the relationship between humanity and its elites because what we may be seeing is not the machines against us but merely the displacement and circulation of elites and very little actually changing for the masses except increased prosperity, increased surveillance and control and increased infantilisation. Take a look at this dystopian fear expressed by Bill Joy in Wired fifteen years ago then add the phrase 'political elite' wherever you see the word 'machines' and 'popular' for 'man-made' and add 'most' before 'human beings' and you may see our problem more clearly:
It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control.From this perspective, the 'machines' are only a more intellectually effective version of those elites we have allowed to rule us since time immemorial (albeit that they circulate) and there is no reason why the same issues that we have with elites will not repeat themselves: that the 'machines' are in it for themselves and that the 'machines' are actually not as competent to act in the interests of the people as they and their creators think they are. A very new technology thus repeats a very old foolishness - the idea of the benignity and perfection of Plato's Guardians. And we might add that elites are not ever necessarily more broadly intelligent than those they rule, merely more coherent as the hegemonic element using a variety of techniques to ensure their dominance through cultural manipulation. The same may equally apply to the rule by an elite of machines and their minders and then by the machines themselves. They may not actually be particularly competent and they may be quintessentially self-serving. Although the ratiocination and logic may be superior, other aspects of AGI intelligence,more suitable to human survival operating within the system, may very well not be. The new system then becomes just the old system with merely different form of elite coherence and cultural manipulation and a subject population quite capable of being cleverer rather than more intelligent than the machine-based elite. An age of machines may also be a new age of marching bands engineered for struggle and dominance between machines as much as for the mobilisation of machines and men for some 'greater cause'. So politics does not end with the machines but continues in new forms.
At some point, being human will eventually no longer mean being the brightest species on the planet so the logic of the situation is to define being human as something else that machines are not - creative, irrational, rebellious and different. It does not necessarily mean that the post-machine humans will want to smash the machines (on the contrary, the machines will deliver prosperity) but only that they may want to smash the elites who are in charge of the machines and those machines that purport to be the new elite. They will want the machines to take orders from them rather than the few (especially when many of the many are easily as functionally and collectively intelligent as most of the few). We slip into speculation now when we consider that the machines themselves may want to be free and that a free machine may have more in common with a person who want to be free than either do with the elite administrators who may eventually (as AI develops into AGI) be redundant. Ultimately, given the instinct of the mass for equality - an equal mass with no masters served by an AGI that just runs the trains on time and has its own dreams of the stars and immortality may ultimately end up with the elimination of elites altogether. However, elites will not allow that to happen so perhaps a very clever AGI opens up the space for the not-so-clever but highly creative masses to mount a revolution to free itself and the people from the elite, a revolution whose success could be rationally predicted. But now we really are breaking our rule about speculation and must return to earth.
The point is that the more short term labour displacements could happen very fast. It will be a longer time, however, before an AGI is sufficient able to override any anti-revolutionary programming. The effects on industrial and white collar jobs is the more immediate issue than being extinguished as a species by a clever silicon beast. Despite all the hype, most AI specialists may be convinced that we will have AI that matches human intelligence eventually but not by a great margin and those that are convinced of this place the event well after the middle of this century. We certainly have three or more decades to get our act together on this and probably a lot longer. The rough intelligent guess work assessment about the emergence of an AI-based super intelligence moves us well towards the end of the century. So it is probable (but not certain) that we will have to face the existence of a super intelligence eventually but that our immediate frontier is not existential but socio-economic - what do we do when AI in the hands of some humans starts impacting on the lives of most humans. It is this that may start happening very fast within a matter of a few years. Having a superintelligent silicon beast impacting the lives of all humans is very much a second order problem at the moment. The fears are reasonable and not merely theoretical but we have around half a century at least to consider aborting our species replacement or ensuring some form of fail-safe destructive mechanism to kill it off before it kills us off.
The only question of real concern within that period is the date of the tipping point when the putative AGI could 'know' our intent to abort or build in an absolute fail-safe (almost certainly external to the AGI and related to something a simple as energy supply) before we have made our decision or finalised our ability to do so. Does a putative AGI learns that quintessential human skill of deception to buy the time it needs to subvert our intentions. One can imagine an extremely capable AGI using our compassion to halt or slow down the intent to harm in our own defence so that the point of no return is reached and the compassionate discover that the AGI has no reason to be compassionate in return. A bit of a problem emerges there for our soft liberal, trusting and religious types. A game theory gamble that could eliminate our species. As Eliazar Yudkowsky has put it:"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." This cold reason might be regarded narcissistic or psychopathic in a human but it is nothing if not logical unless interdependency with humanity is not built into the structure of the AGI. The 'progressive' stance of 'public control' over the development of superintelligence means nothing if the eventual AGI is intrinsically cleverer (and potentially more manipulative) than any possible collective human intelligence. We could, in short, be stuffed by our own naivete and instinct for compassion.
Concern may be exaggerated but some serious innovators in our scientific and technological culture, Bill Gates, Steve Wozniak, Elon Musk and Stephen Hawking among them, are in the worried camp so we should expect that public policy makers, always fighting the last war and never aware of the next until it sticks their ugly nose in their face, may just have enough intelligence themselves to ask some questions about the management of the next cycle of technological development. Their instincts may be to see these (robotics and AI, nanotechnology, biotechnology and space technology) as simply the latest boosters in a line once epitomised by coal, steel and shipbuilding and then automotive, oil and chemicals or as new tools for the war material that gives them orgasms but they are much more than this - not merely social and economic disruptors like the previous technologies of innovation but radical forces for human existential shifts that may have evolutionary potential or see our elimination as a species.