There is broad agreement among modern artificial intelligence (AI) professionals that AI falls short of human capabilities in some critical sense, even though AI algorithms have beaten humans in many specific domains such as chess.
It has been suggested by some that as soon as AI researchers figure out how to do something, that capability ceases to be regarded as intelligent.
Chess was considered the epitome of intelligence until Deep Blue defeated world champion Garry Kasparov in 1997.
But even these researchers agree that something important is missing from modern AI.
As this subdivision of artificial intelligence is only just coalescing, “Artificial General Intelligence” (AGI) is the emerging term of art used to denote “real” AI.
As the name implies, the emerging consensus is that the missing characteristic is generality.
Current AI algorithms with human-equivalent or superior performance are characterised by a deliberately programmed competence only in a single, restricted domain.
Deep Blue became the world champion at chess, but it cannot even play checkers, let alone drive a car or make a scientific discovery.
In this respect, modern AI algorithms resemble all biological life, with the sole exception of Homo sapiens.
A bee exhibits competence at building hives; a beaver exhibits competence at building dams; but a bee doesn’t build dams, and a beaver can’t learn to build a hive.
A human, watching, can learn to do both; but this is a unique ability among biological lifeforms.
It is debatable whether human intelligence is truly general; we are certainly better at some cognitive tasks than others. But human intelligence is surely significantly more generally applicable than non-hominid intelligence.
It is usually easy to envisage the sort of safety issues that may result from AI operating only within a specific domain.
It is a qualitatively different class of problem to handle an AGI operating across many novel contexts that cannot be predicted in advance.
When human engineers build a nuclear reactor, they envision the specific events that could go on inside it – valves failing, computers failing, cores increasing in temperature – and engineer the reactor to render these events non-catastrophic.
Or, on a more mundane level, building a toaster involves envisioning bread, and envisioning the reaction of the bread to the toaster’s heating element.
The toaster itself does not know that its purpose is to make toast; that purpose is represented within the designer’s mind, but is not explicitly represented in computations inside the toaster. And so if you place cloth inside a toaster, it may catch fire, as the design executes in an unenvisioned context with an unenvisioned side effect.
Building an AGI that must act safely in contexts its programmers never envisioned therefore raises qualitatively new challenges:
- The local, specific behaviour of the AI may not be predictable apart from its safety, even if the programmers do everything right;
- Verifying the safety of the system becomes a greater challenge because we must verify what the system is trying to do, rather than being able to verify the system’s safe behaviour in all operating contexts;
- Ethical cognition itself must be taken as a subject matter of engineering.