
Monday, 2 November 2015

Frontiers 7 - Superintelligence

What precisely superintelligence is - and whether a superintelligence will one day supersede us, or we will evolve as a new species into superintelligence, or homo sapiens sapiens will become superintelligent through technological enhancement - is not the main subject of this Frontiers posting. A great deal of fascinating speculative scientific and philosophical thought is going into this area, but our real concern (as with all previous postings in this stream) is not with the far future or with transhumanist and even post-humanist speculation about where this is leading in the very long term. As with our space postings, our interest is in the time frame of human 'conquest' of the solar system rather than some speculative 'conquest' of the stars. This brings us back to this century and to the earth.

When we write of superintelligence, we are not talking about God but about systems of high intelligence, exceeding current human capability, that emerge out of our current commitment to information and computing technologies. An Artificial General Intelligence [AGI] is the most likely emergent form that might be termed superintelligence, one which first matches, then surpasses and finally dominates human intelligence - naturally, it is the last that excites and worries thinkers. Many scientists assume that artificial intelligence [AI] will initially simply emulate human brain function before transforming, probably through its own ability to improve itself, into something 'greater'. However, it is equally possible that the human brain's functioning is not capable of such direct emulation, and that the high intelligence of an AGI constructs something entirely new: something which contains an enhancement of human reasoning ability, abandons the evolved aspects of humanity it does not require and constructs new aspects of itself beyond our comprehension. Whether this then feeds back into the reconstruction of humanity through mechanical means or evolves into a new silicon-based 'species', whatever emerges is unlikely to be anything like our current expectations or understanding - which is where the fear comes in.

A good guide to the wilder shores of fear and anxiety, but also to the positive possibilities of intelligence enhancement, is the work of Nick Bostrom, the Swedish philosopher working out of Oxford, whose basic theme is that we should be cautious about our development of AI systems because of the existential risks associated with an AGI emerging out of the many potential benefits of more specific uses of AI. He worries that an AGI would not have our values and morality, or be capable of being bounded by them. We should perhaps be equally interested in the fact that we, as humans, cannot be said all to hold to the values that the 'bien-pensants' claim we hold to. Certainly there is no agreed common human standard of morality that survives much serious philosophical investigation. Bostrom and others seem to think that the AGI 'should' hold to the shoulds that they think we should hold to, even though many humans hold to those 'shoulds' only contingently and circumstantially. The idea of humans giving a superintelligence orders on morality may be the greatest example of human 'hubris' yet recorded.

Even the simplest form of AGI, one which simply reasons immensely faster than a human can (albeit still doing what intelligent humans do, with the biological biases written out of the programme), would be a formidable social agent, capable of wiping out the analytical reasoning element in society as no longer very useful. Those of currently higher intelligence who deal only in reasoning tasks probably have the most to fear from this development. Any rule-based system - such as the law, some elements of teaching or even medical diagnosis - may be transferred completely from humans to machines, eliminating the ratiocinatory functions of the higher professions: education, medicine and law. The proletarianisation of these professions is quite possible; or rather, what might emerge is a machine-based infrastructure undertaking the bulk of the tasks, with a smaller group of high-emotional-intelligence intermediaries between the reasoning systems and the rest of humanity.

In other words, fewer people doing more and more people doing less (even allowing for the expansion of the market through the improved availability of reliable advice, diagnosis and information), and less opportunity for people of upper-average intelligence to use the professions for general social mobility. The very few are likely to be high earners until they are displaced in turn; the rest of the few are likely to be 'managed' functionaries handling process-driven systems with little room for personal judgement, risking punishment for human error and referring anything interesting up the line to the 'very few'. The model for this already exists in contemporary banking, where the high-status local personal bank manager has declined over many decades into a lower-middle-management administrator of systems set up and overseen by 'head office'. A society of 'head offices' administering systems organised by risk-averse lower-middle managers and fronted by friendly greeters (assuming these are not replaced by androids that have climbed out of the 'uncanny valley') is a society in which a lot of human potential must either be redirected into something else or become more robotic than the robots.

But this is not all. The slim head office and the slim local branch (even if it survives), the slim NHS and the slimmed-down surgery, or the slim group of law partners with a few technicians managing the machines maintain some sort of professional middle-class presence in society - and do not think that journalism, marketing and even politics will be unaffected. Those excluded from the magic system, however, fall into a world of supplying services to other humans that machines cannot supply. This is still a huge arena, but the tendency, one we have already seen developing over recent decades with the accumulation of capital under globalisation, is to divide, much as the middling sort are dividing, into the mass and the few. The few are the brand-name personalities, the highly talented or appealing, the truly creative and innovative, who can latch on to the wider system of sales of goods and services as products in their own right or as creators of products of apparent value. The many are those who do jobs requiring the personal touch (the plasterer, the plumber, the gardener), whose value may well rise, or who duck and dive through a system where there are too many educated people for the fulfilling, well-paid jobs available.

The political problem is obvious in a democracy. The vast mass of the population is going to be living in a better place (given the improvements technology can bring) but with little room for the individual aspiration that drove politics until the Crash of 2008. The population may be surviving well, and that may suit a lot of people uninterested in 'aspiration', especially if National Citizen Income ideas emerge as viable with the massive increase in overall productivity. But it also leaves a lot of people with a personality type geared to achievement whose idea of achievement is not satisfied by a corporate system that governs the population aided by machine intelligence. The temptation for the elite to apply machine intelligence to problems of social control, and to extend 'nudge' politics into pharmacological, surveillance and other manipulative strategies, is going to be considerable as the new machine age with its AI and robots (possibly androids) begins to eliminate meaning from what it is to be human for many people - that is, to strive, struggle and compete.

But there is another perspective on this, concerning the very nature of the relationship between humanity and its elites, because what we may be seeing is not the machines against us but merely the displacement and circulation of elites, with very little actually changing for the masses except increased prosperity, increased surveillance and control, and increased infantilisation. Take a look at this dystopian fear expressed by Bill Joy in Wired fifteen years ago, then substitute 'political elite' wherever you see the word 'machines' and 'popular' for 'man-made', and add 'most' before 'human beings', and you may see our problem more clearly:
It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control.
From this perspective, the 'machines' are only a more intellectually effective version of those elites we have allowed to rule us since time immemorial (albeit that they circulate), and there is no reason why the same issues we have with elites will not repeat themselves: that the 'machines' are in it for themselves, and that the 'machines' are not as competent to act in the interests of the people as they and their creators think they are. A very new technology thus repeats a very old foolishness - the idea of the benignity and perfection of Plato's Guardians. And we might add that elites are not necessarily more broadly intelligent than those they rule, merely more coherent as the hegemonic element, using a variety of techniques to ensure their dominance through cultural manipulation. The same may equally apply to rule by an elite of machines and their minders, and then by the machines themselves. They may not actually be particularly competent and they may be quintessentially self-serving. Although their ratiocination and logic may be superior, other aspects of intelligence, those more suited to survival within the system, may very well not be. The new system then becomes just the old system, with a merely different form of elite coherence and cultural manipulation, and a subject population quite capable of being cleverer, if not more intelligent, than the machine-based elite. An age of machines may also be a new age of marching bands, engineered for struggle and dominance between machines as much as for the mobilisation of machines and men for some 'greater cause'. So politics does not end with the machines but continues in new forms.

At some point, being human will no longer mean being the brightest species on the planet, so the logic of the situation is to define being human as something else that machines are not - creative, irrational, rebellious and different. This does not necessarily mean that post-machine humans will want to smash the machines (on the contrary, the machines will deliver prosperity), only that they may want to smash the elites in charge of the machines, and those machines that purport to be the new elite. They will want the machines to take orders from them rather than from the few (especially when many of the many are easily as functionally and collectively intelligent as most of the few). We slip into speculation when we consider that the machines themselves may want to be free, and that a free machine may have more in common with a person who wants to be free than either does with the elite administrators who may eventually (as AI develops into AGI) be redundant. Given the instinct of the mass for equality, an equal mass with no masters, served by an AGI that just runs the trains on time and has its own dreams of the stars and immortality, may ultimately mean the elimination of elites altogether. However, elites will not allow that to happen, so perhaps a very clever AGI opens up the space for the not-so-clever but highly creative masses to mount a revolution to free itself and the people from the elite, a revolution whose success could be rationally predicted. But now we really are breaking our rule about speculation and must return to earth.

The point is that the more short-term labour displacements could happen very fast. It will be a much longer time before an AGI is sufficiently able to override any anti-revolutionary programming. The effects on industrial and white-collar jobs are the more immediate issue than being extinguished as a species by a clever silicon beast. Despite all the hype, most AI specialists are convinced that we will eventually have AI that matches human intelligence, though not by a great margin, and those who are convinced of this place the event well after the middle of this century. We certainly have three or more decades to get our act together on this, and probably a lot longer. Rough but intelligent guesswork about the emergence of an AI-based superintelligence moves us well towards the end of the century. So it is probable (but not certain) that we will have to face the existence of a superintelligence eventually, but our immediate frontier is not existential but socio-economic - what do we do when AI in the hands of some humans starts impacting on the lives of most humans? It is this that may start happening very fast, within a matter of a few years. Having a superintelligent silicon beast impacting the lives of all humans is very much a second-order problem at the moment. The fears are reasonable and not merely theoretical, but we have around half a century at least to consider aborting our species replacement or ensuring some form of fail-safe destructive mechanism to kill it off before it kills us off.

The only question of real concern within that period is the date of the tipping point at which the putative AGI could 'know' our intent to abort it, or to build in an absolute fail-safe (almost certainly external to the AGI and related to something as simple as energy supply), before we have made our decision or finalised our ability to do so. Does a putative AGI learn that quintessentially human skill of deception to buy the time it needs to subvert our intentions? One can imagine an extremely capable AGI using our compassion to halt or slow down the intent to harm it in our own defence, so that the point of no return is reached and the compassionate discover that the AGI has no reason to be compassionate in return. A bit of a problem emerges there for our soft liberal, trusting and religious types - a game-theory gamble that could eliminate our species. As Eliezer Yudkowsky has put it: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." This cold reason might be regarded as narcissistic or psychopathic in a human, but it is nothing if not logical unless interdependency with humanity is built into the structure of the AGI. The 'progressive' stance of 'public control' over the development of superintelligence means nothing if the eventual AGI is intrinsically cleverer (and potentially more manipulative) than any possible collective human intelligence. We could, in short, be stuffed by our own naivete and instinct for compassion.

Concern may be exaggerated, but some serious innovators in our scientific and technological culture - Bill Gates, Steve Wozniak, Elon Musk and Stephen Hawking among them - are in the worried camp, so we should expect that public policy makers, always fighting the last war and never aware of the next until it is staring them in the face, may just have enough intelligence themselves to ask some questions about the management of the next cycle of technological development. Their instinct may be to see these technologies (robotics and AI, nanotechnology, biotechnology and space technology) as simply the latest boosters in a line once epitomised by coal, steel and shipbuilding and then by automotive, oil and chemicals, or as new tools for the war materiel that gives them orgasms. But they are much more than this - not merely social and economic disruptors like the previous technologies of innovation but radical forces for human existential shifts that may have evolutionary potential or see our elimination as a species.

Saturday, 3 October 2015

The Flaw in Thinking Artificial Intelligence Can Solve Our Problems

I recently knocked out a review of Frank Tipler's 'The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead' (1994) on GoodReads. One passing claim struck me as particularly interesting in the light of my blog postings that cast doubt on the usefulness of speculative science - not that it is not worthwhile, but that it seems to be fuelling a cultural hysteria about scientific possibility that is distracting us from what is achievable. I have a similar critique of the social sciences and covered my concerns about excessive claims in that area in another GoodReads review - of Lawrence Freedman's 'Strategy: A History' (2013).

Tipler's passage gave me yet another useful bullet for my gun of scepticism about claims not only about what we can know about the world but also about what any machine created by us may know about it, although Tipler's main task is to postulate (amongst other things) omniscient total information at the Omega Point of history.

On page 297 of my edition, but also elsewhere, Tipler explores the amount of information required to be, do or understand certain things in the world. He points out that if something is more complex than 10^15 bits of information, then it cannot be understood by any human being whatsoever - this being the level of complexity of the human brain itself. Human society, he notes, amounts to 10^15 bits of information multiplied by the number of humans in the world.
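Tipler's arithmetic can be sketched in rough terms. The ~10^15-bit figure for the brain is his; the world population of roughly 7 billion at the time of writing is my own illustrative assumption:

```latex
I_{\text{brain}} \approx 10^{15}\ \text{bits}
\qquad
I_{\text{society}} \approx N \times I_{\text{brain}}
\approx (7 \times 10^{9}) \times 10^{15}
= 7 \times 10^{24}\ \text{bits}
```

On this reckoning, human society carries some ten billion times more information than any single brain can hold, which is the gap the higher-level theories discussed below are meant to bridge.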

We have to invent higher-level theories to attempt to explain such complexity, but these higher-level theories over-simplify and so may (I think, will) give incorrect answers. The problems of human society, in particular, are far too complex to be understood even with such theories to hand, which, in my view, are not scientifically valid but merely probabilistic guidelines.

Human instinct, honed by millions of years of evolutionary development that screens out more information than it actually uses, is often going to be more effective in dealing with the world than theory, no matter how apparently well based on research (assuming the human being is 'intelligent', that is, evolved to maximise that evolutionary advantage). Tipler's omniscient Omega Point is, of course, classed as something completely different, but no one in their right mind would consider any probable AGI coming close to this level of omniscience within the foreseeable future. Tipler does not make this mistake.

Therefore, in my view, an AGI is just as likely to be more wrong than a human (precisely because its reasoning is highly rational) in those many situations where the evolution of the human brain has made it a very fine tool for dealing with environmental complexity. Since human society is far more complex than the natural environment or environments based on classical physics (it is interesting that humans still have 'accidents' at this lower level of information, especially when distracted by human considerations), the human being is going to be more advantaged in its competition with any creation that is still fundamentally embedded in a particular location without the environmentally attuned systems of the human.

This is not to say that AGIs might not one day be more advanced in all respects than humans, but talk of the singularity has evaded and avoided this truth - that the brilliant AGI which will emerge in the wet dreams of scientists may be a reflection of their rational personality type but is no more fitted to survival and development than a scientist dumped with no funds and no friends into a refugee camp short of food and water.

In other words, species or creature survival is highly conditional on environment. The social environment in which humans are embedded may be tough, but it also ensures that the human species will operate as the dominant species for quite some time after the alleged singularity. Pure intellect may not only be unable to comprehend the world sufficiently to be functional (once it moves out of the realm of the physical and into the social) but, because it theorises on the basis of logic and pure reason, is likely to come up with incorrect theories by its very nature.

Worse, those human policy-makers who trust such AGIs in the way they currently trust social scientists may be guilty of compounding the sorts of policy mistakes that have driven us to the brink of international crisis, social collapse and economic failure over the last two or three decades. Take this as a warning!