Future Technology

In this section, we shall consider some further possibilities for how both the hybrid and technology-led approaches may continue to develop, and we will expand on these speculations in the sub-sections that follow.

Any extrapolation of evolution into the future is by its very nature speculative but, unlike in the previous sections on current and military technology, here we might wish to be more expansive in our predictions. However, history also suggests that there is more than a degree of random chance and natural chaos accompanying the evolution of life on Earth. As such, there is no reason to assume that the future will not hold just as many surprises. Clearly, there are multiple paths that AI evolution could take, and a single unforeseen event could deflect humanity in a different direction and, in so doing, affect AI. Therefore, with these notes of caution ringing in our ears, we shall proceed.

Now that we have had an opportunity to consider some of the ideas behind the hybrid AI paradigm, this is probably a good point at which to re-evaluate some of the AI evolutionary assumptions with a more critical eye. While chance and chaos are factors that can change the future, most predictions fail because they do not foresee the implications of the more practical issues that affect how the future unfolds:

  • Limits on material resources
  • Eventual economic costs
  • Changes in social structures
  • Technologies that revolutionise the solution
  • The problems are bigger than first realised
  • We are not as smart as we thought

Of course, the further a prediction ranges into the future, the greater the probability of error. The hybrid AI paradigm already outlined has six stages that stretch some 500 years into the future. In all honesty, the chance of the details of any prediction being accurate over such a period of time rapidly approaches zero.

Could even the great minds of the 16th and 17th centuries, such as Copernicus and Newton, have predicted the nature of life in the 21st century?

Therefore, our ability to foresee the scope of life in the 25th century and beyond is probably limited. If we add to this the possibility that AI does eventually change the very nature of intelligence, then life and society beyond the 25th century may be not only unrecognisable, but possibly incomprehensible to us today.

Which path might humanity take?

The timeline associated with our evolutionary AI paradigm shows that there are two paths leading to strong AI, or 'homo primus':

  • the hybrid path
  • the technology path

One of the main assumptions has been that the evolution towards AI could be a hybrid process entwined with humanity. The rationale for this position is that it provides a more stepwise approach towards creating artificial intelligence, plus greater social cohesion at each stage of the process. However, we still need to challenge this assumption, as it may only be wishful thinking that mankind remains central to the development of intelligent life on Earth.

The alternative path is essentially a technology-led process, where AI evolves based on continued improvements in computer technology, with weak AI simply getting stronger. While there are many detailed facets to the technology-led approach, its main assumption appears to be an extrapolation of processing power to a point where processing capacity exceeds that of the human brain. However, the review of human and computer intelligence has suggested that simply increasing sequential processing speed, even from millions to billions of instructions per second, will only result in faster computation, not necessarily intelligence, and certainly not sentience. Subsequently, it was recognised that advances in neural networks, which appear to be more analogous to the operation of the brain, may be a more appropriate approach to AI processing. However, on initial examination, the size of present-day neural networks appears minuscule in comparison to the human brain:

Table: Brain-Neural Net Comparison

                Neurons   Connections per neuron   Total connectivity
  Human Brain    10^11            10^4                   10^15
  Neural Net     10^3             10^1                   10^4

At first glance, the total connectivity of the brain and that of today's neural networks differs by a factor of the order of 100 billion (10^11), and one might assume that this colossal difference in capacity is simply too big to close in any near-term future. However, this position ignores the phenomenal effect of exponential growth associated with Moore's Law.
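As a rough, order-of-magnitude sketch (using the figures from the table above and assuming, purely for illustration, one doubling of capacity every two years), the size of this gap can be expressed as a number of doublings:

```python
import math

# Order-of-magnitude figures taken from the comparison table above
brain_connectivity = 1e15   # ~1e11 neurons x ~1e4 connections per neuron
net_connectivity = 1e4      # ~1e3 neurons  x ~1e1 connections per neuron

gap = brain_connectivity / net_connectivity   # factor of ~1e11, i.e. 100 billion
doublings = math.log2(gap)                    # capacity doublings needed to close it
years_at_two_year_doubling = doublings * 2    # illustrative Moore's-Law-style pace

print(f"gap: x{gap:.0e}, doublings: {doublings:.1f}, years: {years_at_two_year_doubling:.0f}")
# -> gap: x1e+11, doublings: 36.5, years: 73
```

On these admittedly crude assumptions, the gap corresponds to a few decades of sustained exponential growth rather than something forever out of reach, which is the point the following paragraphs develop.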

Figure: Exponential Growth based on Moore's Law

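To give a numerical feel for the curve in the figure (the doubling periods used here are illustrative assumptions rather than values taken from Moore's article), a short sketch of idealised exponential growth follows:

```python
import math

def growth_factor(years, doubling_period_years):
    """Idealised exponential growth: one doubling of capacity per period."""
    return 2 ** (years / doubling_period_years)

# The outcome over 40 years varies enormously with the assumed doubling period
for period in (1.0, 1.5, 2.0):
    print(f"doubling every {period} yr for 40 yr -> x{growth_factor(40, period):.0e}")
# -> roughly x1e+12, x1e+08 and x1e+06 respectively

# A further 1000-fold increase corresponds to about 10 additional doublings
print(f"{math.log2(1000):.1f} doublings")   # ~10.0
```

Over 40 years, the assumed doubling period alone shifts the result by several orders of magnitude, which is worth bearing in mind when reading the projections below.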

In 1965, Gordon Moore was Director of R&D at Fairchild Semiconductor and was asked to write an article predicting what would happen in the semiconductor industry over the next 10 years, i.e. up to 1975. In the article, entitled 'Cramming more components onto integrated circuits', Moore outlined the effect of what was to become known as Moore's Law. Possibly surprising even Moore himself, his law has held true for the last 40 years, over which time the growth in processing capacity has approached a factor of 1 billion. Based on this growth, the 0.1-micron wavelength limit, corresponding to light in the ultra-violet range, will be reached by 2020. At this point, Moore's Law could cease to be valid, unless a new architectural approach to fabricating processors is devised. However, this still accounts for a further 1000-fold increase by that date, and it does not preclude the possibility that an alternative approach will be found to maintain Moore's Law for another 50 years or longer. On this basis, many have predicted that computers will be smarter than humans by around 2030-2050. While it is difficult to refute the extrapolation of growth based on 40 years of tested observation, the prediction that computers will be more intelligent than humans in the next 50 years seems a little too one-dimensional given the complexities involved. The following quote reflects an alternative view:

Science's biggest mystery is the nature of consciousness. It is not that we possess bad or imperfect theories of human awareness; we simply have no such theories at all - Nick Herbert

While processing capacity is undoubtedly important to any future ability to support AI, in itself it may never be the whole answer. Clearly, there is a need for an architecture, spanning both hardware and software, that would describe how the higher functions of intelligence could be supported. At this point in time, AI is exploring two essentially different architectures, but each appears to provide only a partial solution:

The symbolic approach goes some way to explaining how humans use knowledge, but does not explain how it is acquired or learnt. In contrast, the neural network approach goes some way to explaining how knowledge could be acquired through the parallel processing of sensory information, but not necessarily how that information is then processed and stored as usable knowledge. Many still favour the neural network approach because it appears to be more representative of the human model, which is known to work. Therefore, this school of thought argues that it is only a matter of time before we discover how the brain solves the problem of supporting higher brain functions. Equally, given the potential advances in brain scanners over the next 50 years, we may well learn the secrets of the brain's internal architecture by taking it apart, neuron by neuron, if necessary.
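Purely as a toy illustration of this contrast (the 'bird' rule, the feature names and the single perceptron below are invented for this sketch and do not represent any particular AI system), the symbolic approach encodes knowledge as an explicit, hand-written rule, whereas the connectionist approach learns an equivalent decision from examples without ever stating the rule:

```python
# --- Symbolic approach: knowledge is hand-coded as an explicit rule ---
def is_bird_symbolic(has_feathers, lays_eggs):
    # The rule shows how knowledge is used, but says nothing about how it was acquired
    return has_feathers and lays_eggs

# --- Connectionist approach: a single perceptron learns the same distinction ---
# Training examples: (has_feathers, lays_eggs) -> is_bird
examples = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(50):                          # repeated presentation of the examples
    for (x1, x2), target in examples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output              # classic perceptron learning rule
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# The learned weights now embody the knowledge, but in a form that is hard to read off
print(is_bird_symbolic(True, True))                       # True
print(1 if weights[0] + weights[1] + bias > 0 else 0)     # 1 (classified as a bird)
```

The symbolic rule is transparent but had to be written by hand, while the perceptron acquired its behaviour from data yet its weights explain little about what it 'knows'; this is essentially the trade-off described above.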

In some respects, based only on the technical arguments, it is difficult to discount the possibility that some form of AI might start to emerge from the technology-led path within the next 100 years. However, the nature of this intelligence may be difficult to predict, especially if the goal of strong AI is not necessarily to replicate humanity, but ultimately to create an intelligence that is superior to humanity, e.g. the singularity hypothesis. Clearly, before embarking down this path, we need to ask ourselves some basic questions that go beyond just technology:

What is meant by superior intelligence?
What is the purpose of this intelligence?
What is its relationship to humanity?
What are the implications for humanity?

In an attempt to answer all these concerns broadly and in a positive way, some might argue that strong AI should be superior only in the sense that it is more capable of solving problems, which will be of benefit to humanity. As such, strong AI remains a subservient tool of humanity and there is nothing to worry about:

But does anybody really believe this line?