Introduction

Before starting the main thread of the discussion concerning Artificial Intelligence (AI), it is worth cross-referencing an earlier section of this website that discusses Artificial Life, as there is clearly a notion that AI might one day 'evolve' into an independent, intelligent, sentient lifeform that transcends the boundary of what we can really describe as technology. It may also be useful to cross-reference the section addressing Human Intelligence and the general discussion of Cognition, as a precursor to the complexity of any attempt to replicate intelligence by artificial means.

As we start this particular section, there might be some ambiguity as to whether AI should be discussed in terms of development or evolution. However, it will be suggested that even a discussion of AI purely in terms of technology needs to highlight its speculative nature; while AI is no longer pure science fiction, it still remains in our science future and its outcome is therefore uncertain. For this reason, this section will try to outline just one of many possible paths along which the development of AI might proceed, as a process closely entwined with the future evolution-by-design of homo sapiens. However, this evolutionary model is predicated on two caveats. First, it is assumed that true artificial intelligence will require some degree of sentience, which at this point in time has to be described as speculative within the constraints of current technology. Second, it is believed that the path to AI will not proceed as a natural progression of Moore's Law as applied to present-day computers but, as indicated, will become far more entwined with the demands of human evolution, at least in the short to medium term. Therefore, should sentient AI emerge at some point in the future, it will not supersede 'homo sapiens' as we understand it today, but rather some future descendant that has already become a hybrid of humanity and technology.

While not the focus of this discussion, now might be a good time to highlight the idea often referred to as 'the singularity', which is based on the near-exponential acceleration of AI technology leading to smarter-than-human intelligence, i.e. a sort of ultimate extrapolation of Moore's Law. The term is considered analogous to the breakdown of physics near a gravitational singularity, but in this case it would be the future of humanity that might disappear behind an event horizon of evolution. Of course, we might all question whether we would be able to understand the nature and aspirations of such an intelligence before its creation, but the more relevant question may be whether we will even be asked. As early as 1965, I. J. Good wrote of an 'intelligence explosion', by which machines that surpass human intellect, even by a small amount, could recursively improve their own designs in ways unforeseen by their original human designers and, in so doing, achieve far greater intelligence. It is then argued that this effect would lead to a cascade of self-improvements through which a super-intelligence might emerge. However, it was Vernor Vinge, in 1982, who suggested that the creation of smarter-than-human intelligence might trigger a breakdown in human society, as currently understood, and who named this event 'the singularity'. In this context, it was Vinge who appears to have drawn the analogy to the gravitational singularity that exists behind the event horizon of a black hole. Later, Ray Kurzweil broadened the concept of a singularity to include the exponential growth of any technology, not just AI, as the inevitable consequence of the accelerating change generalised by Moore's Law. By way of examples:

  • Aubrey de Grey applied the idea to medical technology, in which improvements occur so fast that human lifespan would increase by more than one year per year. However, it is not clear whether the implications of this prediction for world population were ever seriously considered.

  • In an even wider context, Robin Hanson has cited the agricultural and industrial revolutions as past 'singularities', with the next economic singularity possibly increasing economic growth by up to 250 times via the virtual replacement of all human labour. Again, there is the issue of what an idle world population, estimated to reach 9 billion by 2045, might do, especially given the speculation of an ever-increasing life expectancy.
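The intuition behind Good's 'intelligence explosion' can be made concrete with a toy calculation. The sketch below is purely illustrative, not a prediction: it compares steady, linear improvement with the compounding case, where each generation's improvement is assumed to be proportional to its current capability. All constants are arbitrary assumptions chosen for illustration.

```python
# Toy illustration of Good's 'intelligence explosion' argument.
# A 'capability' score improves over 50 generations under two regimes.
# All starting values and rates are arbitrary, for illustration only.

def linear_progress(start=1.0, gain=0.1, steps=50):
    """Capability improves by a fixed amount each generation."""
    c = start
    for _ in range(steps):
        c += gain
    return c

def recursive_progress(start=1.0, rate=0.1, steps=50):
    """Each generation's improvement is proportional to current
    capability: the better the designer, the bigger the next step."""
    c = start
    for _ in range(steps):
        c += rate * c  # improvement scales with capability itself
    return c

if __name__ == "__main__":
    print(f"linear after 50 steps:    {linear_progress():.1f}")
    print(f"recursive after 50 steps: {recursive_progress():.1f}")
```

With these arbitrary numbers, fifty linear steps reach a score of 6, while the compounding version grows geometrically (roughly 1.1 to the power 50, over 100). The point of the sketch is only the shape of the curve: a feedback loop in which improvement feeds on itself diverges from steady progress, which is the essence of the 'explosion' claim.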

As such, speculative predictions need to be put into some contextual model that then considers the social impact of such changes. So while the idea of a singularity is a possibility, its probability within any given timeframe is less clear, and it is unlikely to simply appear within a technology vacuum beyond the reach of social and political considerations. Therefore, this section will proceed on the basis that humanity will continue to evolve, not by natural selection, but by purposeful design in response to future social and environmental demands. In this context, AI is but one of several technologies that may ultimately converge to create new forms of intelligent life, which from our present-day perspective may appear closer to the description of the singularity than to homo sapiens.