Software Technology

While some may want to extrapolate the whole of computer science in line with Moore's law, this predictive rule of thumb was really only ever intended to apply to advances in hardware processing speed. The ability of a processor to execute billions, as opposed to millions, of instructions per second is only advantageous if you can write billions of lines of error-free code in a cost-effective timeframe. You only have to look at Microsoft struggling with Windows to get some idea of the scale of the potential problems. So despite all the ancillary developments in programming tools, the core principle of software programming has remained essentially unchanged for the last 60 years. As such, computers still generally act out only what we might call ‘human-coded logic’ that is, in essence, aligned to the following programming principle:

If (x=y) then do <this task>
Else do <this other task>

While this is a gross simplification of the real complexity within computer science, it is representative of the type of logic that drives a program to perform different tasks, and the limitations of this approach have already been outlined in the section entitled Programmed Intelligence. So while the development of high-level languages, software libraries and object-orientated programming has helped improve efficiency and the level of re-use, it is still unclear whether the current software development paradigm will ever be an appropriate approach for AI. If not, then AI may stall until 'machines' have the ability to both efficiently generate and test code that can perform a given function without the direct involvement of a human programmer. While this may seem like a 'chicken and egg' type of problem, it is possible that human-coded programs might still reach a level of sophistication that allows a semi-automated process to 'bootstrap' itself to the next level, which in turn facilitates the programming paradigm shift required for higher levels of AI.
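
To make this limitation concrete, the short Python sketch below, with a purely hypothetical function and set of rules, shows hand-coded if-else logic of exactly this kind: the program can only ever act out the cases its programmer anticipated.

def classify_temperature(reading_celsius):
    # Fixed, hand-written rules (hypothetical example): the program can only
    # respond to conditions its author explicitly anticipated.
    if reading_celsius > 30:
        return "switch on cooling"
    elif reading_celsius < 10:
        return "switch on heating"
    else:
        return "do nothing"

# A situation outside the scripted rules, such as a faulty sensor returning
# -999, is still forced down one of the same branches; the program cannot
# recognise, let alone adapt to, a case never envisaged by its programmer.
print(classify_temperature(-999))   # prints "switch on heating"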

Software Automation

Today, programming is still a skilled profession, although it can be very laborious to design, write and test millions of lines of coded instructions to meet specification requirements that often change throughout the process of development. As such, while program development aspires to be an engineering discipline, it is still very much dependent on human creativity and invention, but equally susceptible to human error. To help minimise human error, programming is becoming increasingly automated through the use of development tools with graphical interfaces, libraries of standard routines, compilers that detect static code errors and debuggers that help detect run-time errors. Of course, this is an area that weak AI could start to improve dramatically over the coming decades. Replacing, or minimising, human input at many of the stages, i.e. specification, design, coding and testing, may allow larger and more complex programs to be developed in a shorter time, at less cost and, more importantly, with fewer errors. This process could in turn lead to an even more sophisticated and automated system of program development through a process of positive feedback.
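
As a purely illustrative sketch of what such automation might involve, the following Python fragment, in which all names are hypothetical, replaces the manual test stage with a simple 'generate and test' harness that checks a candidate function against its specification over thousands of random inputs.

import random

def specification(x, y):
    # The required behaviour expressed as a checkable property (hypothetical).
    return max(x, y)

def candidate(x, y):
    # The implementation under test, whether human-written or machine-generated.
    return x if x > y else y

def automated_test(trials=10000):
    # Randomised checking replaces the manual writing of individual test cases.
    for _ in range(trials):
        x = random.randint(-1000, 1000)
        y = random.randint(-1000, 1000)
        if candidate(x, y) != specification(x, y):
            return "failure for inputs ({}, {})".format(x, y)
    return "all trials passed"

print(automated_test())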

Software Methodologies

Originally, most programs were designed and written in a data-centric fashion. In this method, data variables were defined and stored independently of the instructions that worked on the data. For example, a program to draw a circle might have a variable called ‘radius’ and a constant called ‘pi’. Separately, a function would be developed to use ‘radius’ and ‘pi’ and output an appropriately sized circle to the screen. However, even a small modification to colour the circle would still require an understanding of the original design of the data structures and programming before the modification could be considered. In recent years, there has been growing support for an alternative methodology called ‘object-orientated programming (OOP)’. Using the previous example, the original design would have produced an object called ‘circle’ that contained the variable ‘radius’, the constant ‘pi’ and the code to draw the circle. Any subsequent modification need only define a new object called ‘colour circle’ that inherited the attributes of the original object ‘circle’. There are a number of advantages to this methodology that go beyond the scope of this introduction, but suffice it to say that the containment of data and code, plus the inheritance mechanism, would at least appear more compatible with the biological development model; a short sketch of this circle example is given after the list below. Other potential development areas are listed for further reference:

  • Rule-based, Procedural Representation
  • Associative Databases, Semantic Indexing
  • Conceptual Dependency, Action Scripts
  • Fuzzy Logic, Belief Nets
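
Returning to the circle example above, the object-orientated approach might be sketched in Python as follows; the class and method names are illustrative only and not taken from any particular system.

import math

class Circle:
    # Data ('radius', 'pi') and the code that acts on it live in one object.
    def __init__(self, radius):
        self.radius = radius
        self.pi = math.pi

    def draw(self):
        return "circle of radius {}".format(self.radius)

class ColourCircle(Circle):
    # The modification inherits everything from Circle and only adds colour.
    def __init__(self, radius, colour):
        super().__init__(radius)
        self.colour = colour

    def draw(self):
        return self.colour + " " + super().draw()

print(Circle(2).draw())               # circle of radius 2
print(ColourCircle(2, "red").draw())  # red circle of radius 2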

Adaptive, Self-Learning

As outlined, the original ‘if-else’ logic is intrinsically limited by complexity, because it is virtually impossible for the programmer to envisage every possible permutation that might occur. This approach can only create the ‘illusion of intelligence’ through the brute force of hardware processing power, as the program learns nothing and so there can be no adaptation beyond the scripted logic. However, given developments to date, it is not inconceivable that programs may learn from experience and adapt their own code based on the results of real-time operation, and in so doing evolve, especially if fitted with sensory inputs. If so, AI may take the next important step towards cognitive intelligence through robotics.
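
As a final, heavily simplified sketch of this idea, again with purely hypothetical names, the Python fragment below adapts a single decision threshold from the feedback of its own operation rather than relying on logic fixed in advance; adapting actual code would clearly be a far harder step, but the feedback principle is the same.

class AdaptiveController:
    # A toy self-adjusting rule: rather than a threshold fixed in advance by
    # the programmer, the value is nudged by feedback from each observation.
    def __init__(self, threshold=0.5, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def decide(self, sensor_value):
        return "act" if sensor_value > self.threshold else "wait"

    def learn(self, sensor_value, decision_was_correct):
        # Adapt from experience: shift the threshold away from decisions
        # that real-time feedback reports as wrong.
        if not decision_was_correct:
            direction = 1 if self.decide(sensor_value) == "act" else -1
            self.threshold += direction * self.learning_rate

controller = AdaptiveController()
for value, was_correct in [(0.6, False), (0.7, False), (0.9, True)]:
    controller.decide(value)
    controller.learn(value, was_correct)
print("adapted threshold: {:.2f}".format(controller.threshold))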