The Technology Model

 

Today, the scope of technology has become so broad that it is difficult to provide an executive overview of all its implications. However, we might attempt to summarise human evolutionary development in terms of the relationship between man and machine. Originally, man was the only ‘machine’ available to carry out most manual tasks, although early agricultural developments quickly ‘enlisted’ domestic animals to do some of the heavy lifting. The technology developments of the industrial revolution then started to create machines that could replace both man and animal in many aspects of industrialised manual labour. Then, in the 1970s, we saw the start of another new era in which ‘programmable computer systems’ began to replace man in many repetitive mental tasks by using human-coded decision trees that could operate both faster and on much larger volumes of data. Today, we are possibly at the start of yet another man-machine era in the form of ‘cognitive computing’, which holds the potential to process the huge increase in global data that is now swamping many aspects of modern life. However, in order to extrapolate such developments into the future, we really need to narrow the scope of the technology model to just a few key areas of development that might be seen as critical, e.g. cognitive computing, genetics, nanotechnology, robotics, energy production and space technology.

Might this list of technologies have some weighted significance?

For the purposes of this discussion, the scope of ‘cognitive computing’ will be described as encompassing the near-term developments in both weak AI and expert systems, where the mention of artificial intelligence (AI) does not yet imply man being replaced by machine, but rather a more symbiotic relationship, at least initially. However, the positioning of cognitive computing at the front of the list is based on the assumption that advances in this field of computing will also be the major facilitator of developments in all the other fields of technology. Next in the list are advances in genetic engineering, which hold out the promise of major improvements in healthcare, especially in terms of preventive medicine, although this is not the reason for its position in the list, for a deeper understanding of cellular and DNA mechanisms may also point the way towards major advances in the field of nanotechnology, where the scope of both will undoubtedly extend beyond the field of medicine into other areas. In this context, a more detailed understanding of cellular and molecular biology will be predicated on mechanisms that operate on the nanoscale, where the biological cell might be seen as a potential blueprint that nanotechnology might one day aspire to emulate, but initially only help to manipulate. However, the wider scope of such technology developments will be deferred to a later discussion. Next in the list is robotics, but not necessarily in the humanoid form usually envisaged, as incremental improvements in the design of any form of autonomous robot will depend heavily on developments in the previous fields, i.e. AI, genetics and nanotechnology. While, in all probability, the initial scope of future robotic designs will remain orientated towards commercial and industrial-scale manufacturing, robots could also start to operate in areas known to be hazardous to human physiology, e.g. mining, nuclear plants, firefighting and even space exploration, as well as start to exploit scalability beyond the human form, both larger and smaller.

But where do energy production and space exploration fit within this technology model?

Today, it may be stating the obvious that the modern world depends on cheap and reliable energy, which in the 20th century was primarily sourced from non-renewable resources, such as coal, oil and gas. However, while these resources were initially cheap and plentiful, it is now recognised that they are both finite and contribute to pollution and global climate change. As such, the 21st century has to seek alternative solutions, which might be supplied by energy sources such as solar, wind and tidal, or even a new generation of nuclear reactors. While the details of this debate will be deferred to some future discussion, two earlier discussions entitled ‘Other Obstacles to Progress’ and ‘Further Energy Considerations’ may provide some initial introduction to the scope of issues that our future technology model must resolve. Finally, we shall simply touch on the idea of future developments in the field of space exploration, which might be separated into a number of problem areas, e.g. launch systems, propulsion systems and habitats. Today, most launch systems are almost prohibitively expensive, ranging in cost from $2,500/kg up to $15,000/kg, where the cost of a NASA space shuttle launch was estimated to be $500 million with a maximum payload in the region of 5,000 kilograms. Of course, this cost is compounded when the specification requires the provisioning of an environment suitable for human habitation, which might one day be unnecessary if the tasks could be done by autonomous AI robots. Even so, having got into space, the next problem is providing a propulsion system to take you somewhere useful without the weight of the fuel having to be carried onboard. Again, the wider details of future space technology will be deferred to a later discussion, but should things start to go ‘badly wrong’ here on planet Earth, the priority of this field of research and development might quickly escalate.

So which of these technologies do we really need to focus on?

In just the last few years, huge strides have been taken in the field of weak AI and expert systems, where we might cite IBM’s Watson system as the current state of the art. However, there is still a lot of hype around the field of AI, which has seen many periods of ‘over-hype’ in the last 50 years or so, as the marketing of any perceived breakthrough was often seen as critical to securing the funding for further research and development. Putting aside this somewhat cynical viewpoint, it is clear that developments are fast approaching what might truly be called a weak AI/expert system, although this technology still needs to address many problem areas, e.g. reasoning and planning, information and knowledge structures, language and semantic processing, and visual perception with the possibility of both logical and emotional interpretations. This said, a very recent area of significant development relates to neural networks, e.g. convolutional and deep learning networks, which are now seen as a key technology for addressing pattern and visual recognition, through which images are first analysed by Graphics Processing Units (GPUs), then sorted into categories and subsequently organised into information and knowledge trees. This idea of pattern sorting is also being extended into other areas, e.g. language processing.

Note: Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of artificial neural networks. To date, the development of deep learning has progressed on the back of ever-faster processors and enormous increases in the data available to train ever-larger neural networks. Key to the idea of deep learning is that its performance continues to increase as the size of the neural network and the volume of data increase, while earlier machine learning techniques tended to reach a plateau in performance. The ‘deep’ hierarchy within the neural network allows the ‘learning’ of more complicated concepts by building on simpler ones, hence the label ‘deep learning’.
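
As a purely illustrative sketch of these ideas, the following Python fragment (assuming the PyTorch library) outlines a very small convolutional network of the kind used for image classification, where the early layers learn simple patterns and the later layers combine them into class-level concepts; it is not a representation of any specific system mentioned above.

```python
# Minimal sketch of a small convolutional network for image classification,
# assuming PyTorch and 28x28 greyscale inputs; purely illustrative of the
# 'deep' hierarchy that builds complex concepts from simpler ones.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Early layers learn simple patterns (edges, corners) ...
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        # ... while the final layer combines them into class-level concepts.
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallConvNet()
images = torch.randn(8, 1, 28, 28)     # a dummy batch of 8 images
logits = model(images)                 # raw class scores, shape (8, 10)
print(logits.shape)
```

In practice, such a network is trained by adjusting its weights against large volumes of labelled images, which is where the dependence on data volume and GPU processing arises.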

However, expert systems on the level of IBM’s Watson also require a range of other techniques, e.g. rule learning, decision trees, Bayesian learning and genetic algorithms, augmented by methods such as search optimisation, logic reasoning, probabilistic reasoning and even control theory.
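
By way of a toy example of just one of these techniques, the following sketch (assuming Python and the scikit-learn library, with invented data) fits a small decision tree to a handful of labelled cases and then classifies a new one; real expert systems obviously combine many such methods at vastly greater scale.

```python
# Toy illustration of one expert-system building block: a decision tree
# learned from a handful of labelled examples (assumes scikit-learn).
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical triage-style features: [temperature_C, heart_rate_bpm]
X = [[36.8, 70], [37.0, 75], [39.5, 110], [40.1, 120], [38.9, 105], [36.5, 65]]
y = ['routine', 'routine', 'urgent', 'urgent', 'urgent', 'routine']

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=['temperature_C', 'heart_rate_bpm']))
print(tree.predict([[38.2, 95]]))   # classify a new, unseen case
```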

So what is cognitive computing?

While many might describe cognitive computing as an attempt to simulate human thought processes, we shall start with a more pragmatic definition of a self-learning system that can use data mining, pattern recognition and natural language processing to do things that the human brain cannot do. However, it is important to recognise the limitations of cognitive systems like Watson and to understand that human intelligence is still required, although it is increasingly augmented by a weak AI ability to process and recall huge amounts of data, underpinned by an ever-increasing reasoning capability.
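
As a minimal sketch of this ‘self-learning’ idea, the following fragment (again assuming Python and scikit-learn, with invented example sentences) learns to sort short questions into categories from a few labelled examples, combining simple natural language processing with statistical pattern recognition.

```python
# Minimal sketch of a self-learning text classifier: it learns categories
# from example questions (assumes scikit-learn; the data is invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

questions = [
    "What dose of this drug is safe for children?",
    "Which symptoms suggest early-stage diabetes?",
    "How do I reset my online banking password?",
    "Why was my credit card payment declined?",
]
topics = ["medical", "medical", "banking", "banking"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(questions, topics)
# Classify a new, unseen question; expected to lean towards 'medical'.
print(model.predict(["Is this medication suitable for diabetic patients?"]))
```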

Note: At this point, care is required when using words like ‘intelligence’, ‘cognition’ and even ‘sentience’. If we accept a basic definition of ‘intelligence’ as an ability to use information to solve problems, then it might reasonably be argued that computers are becoming increasingly intelligent. If we then extend this definition to ‘cognition’ as an ability to acquire knowledge and understanding through intelligent thought, experience and senses, we might still question the current scope of cognitive computing. Finally, if we add the definition of ‘sentience’ as a capacity to feel, perceive or experience subjectively, we might realise why the term ‘weak AI’ is often preferred. However, the idea of sentience can be subjective, although many may accept that it is closely related to the idea of ‘self-awareness’, which in evolutionary terms is the boundary of that which seeks to survive. As such, it may one day be possible for technology to create a system that also seeks to survive and, if so, we would have to ask whether it has a degree of sentience.

In 2011, IBM’s cognitive computing system called Watson hit the headlines by beating the top two human contestants of all time on the TV show ‘Jeopardy’. In this context, Watson was primarily a question-answering computer system with a natural-language interface and access to 200 million pages of structured and unstructured content, requiring some four terabytes (1 TB = 10¹² bytes) of disk storage.
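
As a rough consistency check on these figures, 200 million pages stored in roughly four terabytes implies an average of about 20 kilobytes of text per page:

```python
# Back-of-envelope check on the Watson figures quoted above.
pages = 200_000_000
storage_bytes = 4e12                 # 4 TB (10**12 bytes per TB)
print(storage_bytes / pages)         # ~20,000 bytes, i.e. ~20 KB per page
```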

Since that date, IBM has made Watson 24 times faster, improving its overall performance by 2400%, while at the same time making it 90% smaller, so that what was once the size of a room can now fit into three pizza-sized boxes. Today, Watson-like systems are being developed using cloud-based APIs, which IBM hopes will create a $10 billion business model within the next ten years. According to another source, the market potential for cognitive computing systems will increase from $200 million in 2015 to over $11 billion by 2024. While most of the ideas within the machine learning that underpins cognitive computing are not entirely new, the application demand appears to be increasing in line with the exponential increases in data volumes. So, as machine learning algorithms continue to improve, especially in conjunction with their decision-making abilities, the practicality of cognitive computing systems will most likely spread into almost every aspect of the modern world.
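
On the stated market figures, growth from $200 million in 2015 to around $11 billion by 2024 would imply a compound annual growth rate of roughly 56%, as the simple calculation below suggests:

```python
# Implied compound annual growth rate (CAGR) of the market forecast quoted
# above: $200 million (2015) to ~$11 billion (2024).
start, end, years = 200e6, 11e9, 2024 - 2015
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%}")                 # roughly 56% per year
```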

So how might cognitive computing help in real-world applications?

Today, some 700,000 medical research papers are released each year, while a single MRI scan may consist of up to 5,000 images, such that a single patient may accumulate millions of gigabytes (>10¹⁵ bytes) of medical data during their lifetime. This volume of data can be disruptive on many levels, as human staff become increasingly swamped with data that they cannot process in real-time. Therefore, we might characterise this as the natural scope of a medical cognitive computing system.
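
To give a rough sense of scale, the sketch below works through two of these figures; the assumed size per MRI image is purely illustrative and not a quoted value:

```python
# Back-of-envelope scale of the medical data mentioned above.
papers_per_year = 700_000
print(papers_per_year / 365)                    # ~1,900 new papers per day

images_per_mri = 5000
bytes_per_image = 0.5e6                         # assume ~0.5 MB per image (illustrative)
print(images_per_mri * bytes_per_image / 1e9)   # ~2.5 GB per MRI scan
```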

Today, the basic idea of healthcare has grown into a $7-8 trillion global industry, with some $3-4 trillion being spent in the US alone, where some estimates suggest that 30-40% of this huge sum of money is being wasted on inefficiency in the overall system plus ineffective or even inappropriate treatments. Of course, with the volume of data doubling every five years across all services and industries, the application of cognitive computing can only expand beyond healthcare systems into almost every facet of modern life. For example, many large-scale industrial processes can require up to 80,000 sensors to be in operation, which conceptually all have to be monitored and analysed. The retail industry is trying to cope with over 500 million tweets and 55 million Facebook updates every single day and, without the ability to process this data, individual companies are often blind to new buying patterns and fashion trends within the society they seek to supply. It is estimated that the Internet of Things will grow from its current 1% market base to something closer to 50% by 2020, which will include increases in data volumes from applications ranging from city-wide traffic management through to home security. In this future world, security will no longer be predicated on only the ‘system firewall’, but rather develop to include behavioural analysis based on real-time traffic patterns in order to identify and nullify increasingly sophisticated cyber attacks. In the field of service utilities, it is estimated that 680 million smart meters may be fitted in the coming years, which will add another 280 petabytes (1 PB = 10¹⁵ bytes) of data to be processed. Equally, the development of driverless cars and trucks may produce another 350 MB of data every second, all needing to be analysed and stored. Today, 2.5 quintillion (10¹⁸) bytes of data are being created every day, where 90% of all data has been created within the last 2 years, although 80% of this data is now referred to as ‘dark data’ because it is essentially unstructured and unused by any analytical decision-making processes. Overall, what we might realise from this brief outline is that the world is rapidly approaching another paradigm shift, which will require the acceptance of a new technology with all of its social implications.
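
Again, as a rough sense-check of two of the figures quoted above, 280 petabytes spread across 680 million smart meters equates to roughly 400 megabytes per meter, while 350 MB per second from a driverless vehicle amounts to over a terabyte per hour:

```python
# Rough consistency checks on two of the data-volume figures quoted above.
meters = 680e6
meter_data_bytes = 280e15                    # 280 petabytes in total
print(meter_data_bytes / meters / 1e6)       # ~410 MB per smart meter

car_rate = 350e6                             # 350 MB of data per second
print(car_rate * 3600 / 1e12)                # ~1.26 TB per hour of driving
```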

So what are the implications of any future technology model?

In this section entitled ‘man-made models’, we have outlined how humanity has survived through a mixture of evolutionary chance plus social and technological developments. While this process stretches back to the emergence of homo sapiens some 200,000 years ago, in only the last 100 years we appear to have created a world in which many would not be able to survive without the support of technology in all its various forms. However, in this same timeframe, we have also created a world that is being increasingly threatened by the side-effects of technology in the form of over-population, depletion of natural resources plus global pollution and climate change. Nevertheless, as outlined, the future of humanity may still depend on a number of key developments, e.g. cognitive computing, genetics, nanotechnology, robotics, energy production and space technology, which, it is hoped, have the potential to change the world for the better. Whether this is the case, only time will tell, but it seems that humanity has little choice but to continue to follow the evolutionary development road that has brought it so far. On this basis, the next significant step along this road may be defined by the ‘cognitive era’ linked to developments in machine learning. As always, the decision to follow this path will not necessarily be taken by a majority vote, but rather guided by the interests of large corporations and the necessity of both politicians and economists to maintain GDP growth or, at least, the illusion of growth for as long as possible. If so, cognitive computing will continue to change the nature of work that can be done by people and, while technology has always had this sort of impact, it is possible that this time it will be far more ‘pervasive’. Of course, present-day PR marketing suggests that cognitive computing systems will help humanity perform many tasks both faster and more accurately, while also making them cheaper and more efficient. Even if true, it may only help to underwrite the belief that weak AI will simply do ever more things better than humans, such that we may only hope that enough of humanity can adapt to this brave new world, although there may be grounds to doubt this hope.

Today, cognitive computing systems can be trained to see images and hear sounds, plus have an increasing ability to read and write in any language, often better than most humans. Given that these skills underpin 80% of human employment in the global service sector, there is reason to believe these jobs could soon be under threat. Equally, the ability of cognitive computing systems could soon be extended using a potentially new generation of neuromorphic processors, which exhibit stochastic rather than deterministic behaviour. Neuromorphic processors are an attempt to mimic the neural network architecture of the brain. The premise of these processors is to replicate the functions of neurons and to build artificial neural systems, which might then lead to processors that can replicate some of the stochastic characteristics of the human brain. In deterministic models, the output is determined by the parameter values and initial conditions, while stochastic models possess some inherent randomness, such that the same set of parameter values and initial conditions can lead to different outputs (a toy illustration of this contrast is sketched below). While neuromorphic technology is still in its infancy, with many different approaches still being researched, a number of large-scale neuromorphic systems have already been developed. Today, neuromorphic technology already scales to neural networks encompassing millions of neurons with many billions of synapse connections, although their potential is still to be realised. However, the rate of progress being outlined would seem to suggest that humanity needs to seriously consider the implications of developing machines that may operate in direct competition with humans. While such systems may help humanity in many spheres of social, economic and political life, combining advanced cognitive systems with superior robotic functions might also prove to be a threat to human society, at least as we understand it today. Back in 1942, science fiction author Isaac Asimov defined three laws of robotics, as shown below, which he believed might protect humanity from any AI robotic developments:

  1.  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
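
Returning to the deterministic versus stochastic distinction drawn above, the following toy sketch in Python contrasts a ‘neuron’ whose output is fully determined by its input with one that includes inherent randomness; it is purely illustrative and does not represent any specific neuromorphic hardware design.

```python
# Contrast between a deterministic and a stochastic 'neuron', illustrating
# the distinction drawn above; a toy model, not real neuromorphic hardware.
import random

def deterministic_neuron(weighted_input, threshold=1.0):
    # The same input always produces the same output.
    return 1 if weighted_input >= threshold else 0

def stochastic_neuron(weighted_input, threshold=1.0, noise=0.3):
    # Inherent randomness: identical inputs can produce different outputs.
    return 1 if weighted_input + random.gauss(0, noise) >= threshold else 0

x = 0.9
print([deterministic_neuron(x) for _ in range(5)])   # always [0, 0, 0, 0, 0]
print([stochastic_neuron(x) for _ in range(5)])      # varies, e.g. [0, 1, 0, 1, 0]
```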

Of course, today, we might realise that Asimov’s laws are more a reflection of human-coded programming, which no longer appears relevant to the direction of cognitive computing developments. In an earlier note, it was suggested that it might one day be possible for technology to create a system that prioritises its own survival and, if so, we might then have to ask whether this system has more than a degree of sentience, such that Asimov’s laws would not necessarily be its prime directive.

But is the technology model the only consideration?

Obviously not, for there are equally important considerations in respect of other areas of social, economic and political change, which are rooted in the past history of some 196 nation states and may yet come to affect the future of humanity, although this is the focus of the next section entitled ‘Development Models’. Equally, the technology model may still advance within the framework of society in a series of ‘cause and effect’ steps, where new technology is first developed, then either accepted or rejected on the grounds of some social, economic and/or political consensus, although not necessarily implying any form of democratic voting. For once a technology is accepted, possibly by just a small but powerful minority, society will change and adapt to a new norm and, in so doing, pave the way for yet further technology change, which might have previously been rejected. So let us initially assume that a new era of cognitive AI continues to develop and leads to an exponential growth in knowledge and discovery, which then opens up new opportunities, many of which cannot be imagined today, let alone predicted with any certainty. Even so, we might still question whether this technology model holds out optimism for all, as it seems improbable that a growing global population can be a net beneficiary of the changes implied. So, while some sections of society may find new areas of work that are both rewarding and valuable, they may be in a minority. If so, we possibly need to think about the real issues in a more direct way, although it may lead us towards some very unpalatable conclusions.

What is the underlying problem that humanity needs to solve?

Despite all the complexity that will surround any actual solution, the basic answer can be succinctly summarised in one word: sustainability. For the growth of humanity, in terms of its population, resource needs and environmental impact, is simply not sustainable on its current trajectory and, as such, something has to change. Of course, if some small powerful minority can survive into the future with the help of cognitive-robotic systems that negate any dependency on the majority, would this minority still care about what happens to the majority? If so, the danger would not necessarily come from a rational form of AI, but rather from an irrational form of humanity.