Stage-2: 100-200 Years

Although aspects of stage-2 will be developing in parallel to stage-1, there is an assumption that stage-2 will only start to take on any significance after stage-1 matures. Whether this takes 50 or 100 years is probably not the main issue. In comparison to natural evolution, artificial evolution may appear almost instantaneous but, in practice, it will still take time. Equally, artificial evolution will still be a transitional process and, in many cases, different solutions will exist at the same time, some in the ascendancy, others in decline. The timeline of our AI paradigm suggests that Homo Computerus will continue to evolve, as outlined, over the next 100 years and beyond. However, during this time, the seeds of the next step in hybrid AI evolution may also be germinating. Of all the evolutionary stages being proposed, Homo Cyberneticus could, in some ways, be the most controversial, both in terms of its technical complexity and its social acceptance. While Homo Computerus integrates AI technology into its society to the point that it becomes totally dependent on its capability, Homo Cyberneticus would attempt to physically extend the capacity of the human brain by integrating it with an AI system. In practice, while the external changes may be minimal, the internal changes could be a key step towards the re-engineering of the human brain, and so a possibly more important question needs to be raised:

Would Homo Cyberneticus end up as humanity with an AI extension or AI with biological peripherals?

In truth, there may be no definitive answer to this question at this stage. While, in the beginning, the impact of a brain-computer interface (BCI) may be minimal, there is clearly the potential for the ‘tail to wag the dog’. However, it should be highlighted that the path towards Homo Cyberneticus is still highly speculative, as amply underlined by the following, even more speculative, abilities:

  • The ability of the hybrid brain to process information becomes almost unbounded through the fusion of the brain’s creativity and imagination with the number-crunching and information-storage ability of sequential and neural processors.

  • Homo Cyberneticus will effectively become a society of telepaths, in the sense that its members can share thoughts and ideas in much the same way that mobile telephony currently allows people to exchange conversation.

At this point, let us again stop and reflect on the implications of these speculative abilities. In both cases, there is a suggestion that high-level thought processes in the brain, which we currently do not really understand, can be intercepted via neural implants and transferred either to a computer extension or another person.

Is this statement more science fiction than analytical prediction?
Is this approach socially acceptable?

These are the sort of questions that we must constantly ask ourselves, so let us take another reality check in order to assess what is required for the evolution of Homo Cyberneticus even to start. As a benchmark, we will consider a broad spectrum of developments that would need to take place:

  • The maturing of Homo Computerus to facilitate a new era of cross-discipline science and an acceleration of knowledge systems.
  • A major breakthrough in neural implants that allows human thoughts to be transferred as signals or thought patterns.
  • Applications to be found that would justify the cost of development and provide a motive for pioneering the implant process.
  • Society to be convinced of the benefits and accept the implication of further change.

The first bullet highlights the dependency on Homo Computerus being technically possible and socially acceptable. Although there are still major issues that prevent this evolutionary step from being a certainty, given that major aspects of Homo Computerus already exist, it would seem a reasonable assumption that this condition will be met.

Towards a Connected Society

Of course, any evolutionary step will take time to establish a critical mass, which could then consolidate to become a new, acceptable norm. However, Homo Cyberneticus is not a hereditary attribute, and therefore it may be more accurate to describe it as a social evolution. In either case, change is still invariably driven by need, and clearly such radical change would have to be driven by an equally compelling need. It is also reasonable to assume that such changes would not necessarily be pioneered within the normal framework of society, so we need to identify special cases, ones in which the quality of life, or survival itself, may be the compelling need:

  • Medical conditions
    Today, we are aware of many medical conditions that impair the quality of life to a point where individuals are already prepared to accept almost any risk to help alleviate their condition. Clearly, even today, this is one of the driving forces behind prosthetic implants, which could offer the possibility of overcoming some major physical disabilities. However, it is also possible that the quality of life of many seriously infirm or mentally disabled patients could be dramatically improved using a two-way BCI. For example, it has been suggested that a computer simulation of sensory inputs could allow an artificial reality to be created that provides a patient with some level of escape from the day-to-day confinement of their condition.

  • Armed conflicts
    At the other end of the spectrum, armed conflict can put the lives of groups of individuals under extreme threat. One of the main defences in modern warfare is information superiority, as characterised by military phrases such as network-centric warfare. The ability of specialist groups of soldiers to communicate and exchange information between themselves and new generations of AI military intelligence systems, at the level implied by the BCI, could be the difference between life and death. It is probably safe to assume that this strategic possibility is already under serious consideration within a defence-funded research programme somewhere in the world at this very moment.

  • Extreme environments
    This category also relates to an environment where human life is put at risk, although in this case it is associated with exploration. For example, the exploration of space is a challenge that has inspired many to risk their lives and, although the use of robotics will become increasingly important, the vast distances involved require either a high degree of AI autonomy or the closer proximity of human intelligence, due to signal latency. Bridging the divide between man and machine could be an essential prerequisite of further deep-space exploration.

Applications such as these could provide the initial financial and personal impetus required to develop implants to the level where they would be capable of supporting a brain-computer-interface. However, the broader acceptance of implants by a wider section of society would also depend on the process being relatively risk-free and inexpensive. Of course, what people will accept will still depend on the expectation of the benefits and their current circumstances. However, the initial acceptance of implants may take place in relatively low-key applications rather than as an immediate invitation to join ‘The Borg’.

  • Location Tracking
    Today, society is starting to tag certain types of criminals via mobile GPS trackers that have to be strapped to their person. It is not such a large step to imagine that a broader section of society might accept an ‘inoculation’ of nano-implants, which would allow them to be continuously tracked via GPS and the wireless network.

  • Micro-Positioning
    Research is already taking place into micro-positioning tags that could, in theory, have an accuracy measured in millimetres. If they could be injected into a person’s fingertips, such a system would allow a person to interact with virtual equipment in the home or at work. For example, a light switch could simply become a mark on the wall; when a positional implant within the fingertips comes into close proximity with it, it would trigger the activation of the lights. Clearly, hundreds of other applications of this type could be sufficient motivation to accept minor implants in order to be able to interact with the evolving environment.

  • Nano-Technology
    In the next 100 years, nano-technology could revolutionise the process required to insert implants in terms of risk and cost. Equally, these advances could go a long way in minimising the current concerns relating to power and radio emission associated with such devices, especially when in close proximity to the brain.
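The fingertip-positioning idea above reduces to a very simple proximity test. The following is a minimal sketch of that logic; the coordinates, trigger radius, and function name are purely illustrative assumptions, standing in for readings from the hypothetical implants:

```python
import math

# Hypothetical position of the "switch" mark on the wall, in millimetres.
SWITCH_POSITION = (1200.0, 850.0, 40.0)
# How close a fingertip must come to count as a "touch" (assumed value).
TRIGGER_RADIUS_MM = 10.0

def check_touch(fingertip_mm, lights_on):
    """Toggle the light state when a fingertip implant reports a
    position within the trigger radius of the virtual switch."""
    if math.dist(fingertip_mm, SWITCH_POSITION) <= TRIGGER_RADIUS_MM:
        return not lights_on
    return lights_on

# A reading about 6 mm from the mark toggles the lights on.
lights = check_touch((1205.0, 848.0, 42.0), lights_on=False)
```

In practice, the interesting engineering would lie in filtering noisy position readings and debouncing repeated "touches", but the core interaction is no more than this distance check.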

So there is an implication that the acceptance of implants, which leads to BCI, is itself an evolutionary process. Certain sections of society might begin to weigh up the increasing benefits against falling risks and decide that a break-even point for them has been reached. So, at this point, let us get a bit more adventurous in our speculations regarding how changes in society might lead to greater acceptance of neural implants.

  • Society and Prisons:
    Freedom of speech (and thought), in conjunction with a right to privacy, is considered a basic human right, which will be vigorously defended. Unfortunately, such freedoms and rights can be abused and used to attack the fabric of the society in which we live. As the population, and the expectation of life, increase through the 21st century, mounting social pressure leads to more aberrant criminal behaviour. However, the demands of political correctness and the need to treat prisoners humanely cause costs to spiral out of control. Equally, the authorities cannot reconcile the demands of ‘law-abiding’ voters to be protected with the apparent failure of the prison system to correct, or even deter, criminal behaviour. By the end of the 21st century, the system is in crisis and a new solution is required. Parole is offered to prisoners who will accept BCI implants, which allow vision and hearing to be intercepted and the overall mental state of the prisoner to be monitored by an AI system. Any unusual deviation in mental state, known to be reflective of aberrant behaviour, causes the AI system to escalate to actively processing vision and auditory data. Sensory data is not only recorded as evidence, but also interpreted in real-time to determine the danger level. Depending on the level of risk, the prisoner can simply be reminded that he is starting to infringe the conditions of his parole or, if necessary, physically disabled.

  • Society and Law Enforcement:
    The rise of global terrorism and organised crime continues to escalate throughout the 21st century. Politicians and law enforcement agencies see technology as one effective means to combat this threat. As in any conflict, the control and use of information is considered vital, and AI surveillance systems grow to actively monitor almost every major city and town. However, such systems do not make arrests, and the initial failure to disseminate information in real-time to law enforcement officers on the streets undermines the effectiveness of these systems. As a result, specialist units start to be formed that require their officers to accept BCI implants as a condition of the job. The interception of vision and auditory data, while on operational duty, is uploaded to the AI system, which carries out extensive analysis, such as database searches to match images with known criminals, plus real-time lie-detection based on facial patterns and voice inflection. In addition, such systems help purge the force of corruption, while monitoring the location and life-signs of all officers on active duty. Over time, the system comes to be seen as almost essential and spreads to many other associated security professions.

  • Society and Medical Services:
    Throughout the 21st century, the expectation of the quality and span of life increases. This expectation is underpinned by medical services that are under increasing pressure. The economics of a society with a growing population of elderly people is also putting more demand on the medical services. Medical expert systems develop throughout the 21st century, which go some way to satisfying the demand for more immediate access to medical advice, but unfortunately, society also becomes more litigious and the cost of medical insurance escalates. The stress level in doctors also increases as, after years of intensive training, they face a rate of change in medical technology that is almost impossible to keep track of, growing patient demands and the worry of legal action for malpractice, often caused by overwork. By the 22nd century, some doctors may begin to accept the necessity for implants. In one scenario, an elderly patient has collapsed; basic telemetry implants in the patient have detected the condition and raised the alarm. The AI system has determined which doctor can be contacted and provides immediate access via the BCI link to the patient’s records, condition and location. The doctor is able to make an initial diagnosis and asks the AI system to cross-reference the central medical expert system for the most appropriate drug available and a second opinion on the suspected condition. The drug and a rapid-response medical team are dispatched to the patient’s location.

The purpose of these examples has simply been to illustrate how implant technology might both evolve and be accepted into society. In some ways, like the special cases addressed earlier, people may be compelled by circumstance to accept the technology, not because they want to, but because they have to, in order to function in society. However, we should remember that we are speculating about developments set some 100-200 years in the future and, by this time, other fields of technology could provide alternative solutions to implants.

  • AI Systems:
    If the validity of Moore’s Law holds true for the next 100 years, processing and storage capacity would be 10^20 times greater than today. While this is more than a little optimistic, a more conservative estimate might assume that the law only holds good until 2020, after which, instead of increasing by 100% every 18 months, the increase is only 25%. If this were the case, processing power would still be around a billion times greater than today. Either way, it is probably reasonable to infer that AI systems, in conjunction with new architectures based on the brain, will be to some degree intelligent. The open question may then be: what would such systems require from humanity?

  • Virtual Reality:
    Assuming that AI systems are to some extent intelligent, but not sentient, and that humanity still has a role to play in terms of its creativity and imagination, then the issue of the man-machine interface is important. In the case of Homo Cyberneticus, we have assumed that this interface begins to be physically integrated into the body and brain via prosthetic and neural implants. While there is a reasonable probability that this option will develop, it may not necessarily be the preferred route for the majority of society. Based on Pareto’s principle, that 80% of the benefit can be had for 20% of the effort, virtual reality may continue to develop to the point that it competes with neural implants as the man-machine interface of choice. For example, text and voice recognition in combination with VR glasses and wireless technology could allow the majority of people to access AI systems via VR, whenever required.

  • Genetic Engineering:
    Our AI paradigm acknowledges that genetic engineering could allow the human blueprint to be changed in many ways. While it is believed that this field of science will eventually start to manipulate the human genome, as a means of dramatically increasing the intelligence and ability of humanity it cannot compete with AI and robotics. However, some groups will undoubtedly select this option as their preferred way to improve the human condition. In a wider context, the genetic engineering of agricultural crops will start to become commonplace, in part due to the ability of AI systems to accurately predict and model the impacts of genetic change.

  • Robotics:
    As a result of AI systems and advances in new materials and portable power sources, robotics comes of age in the 21st century. By the 22nd century, robotics and AI systems are causing profound changes in society by making many manual and professional job roles redundant. As a whole, humanity is forced to question its future role and purpose. As always, some people may embrace the changes, while others may violently reject the imposition that technology appears to be putting on their lives and the future of humanity.
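The growth figures quoted in the AI Systems bullet above follow from simple compound-growth arithmetic. The sketch below checks them, assuming a baseline year of 2000 and a fixed 18-month improvement period; both assumptions are illustrative rather than taken from the text:

```python
def growth_factor(years, rate_per_period, period_years=1.5):
    """Total capacity multiplier after `years`, when capacity is
    multiplied by `rate_per_period` every `period_years`."""
    return rate_per_period ** (years / period_years)

# Optimistic case: Moore's Law (doubling every 18 months) for a century,
# i.e. roughly 67 doublings, which works out to about 10^20.
optimistic = growth_factor(100, 2.0)

# Conservative case: doubling until 2020, then only +25% per 18-month
# period for the remaining 80 years of the century; the product is
# still on the order of a billion.
conservative = growth_factor(20, 2.0) * growth_factor(80, 1.25)
```

The point of the exercise is that even the deliberately pessimistic assumption still yields roughly nine orders of magnitude of growth, which is why the conclusion is not very sensitive to the exact rate chosen.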

The net result is that stage-2, as described, could be the most dangerous transition for intelligent life on Earth. While many technophiles may continue to enthuse about the exciting prospects new technology will bring, the reality could be that over the next 100-200 years, technology begins to simply overwhelm the majority of humanity, making many redundant and fearful of the future.

Of course, not all predictions come true, do they?