An Argument Against AGI

Technology December 18, 2017 / By Nitzan Hermon
SYNOPSIS

Identifying our unique, innate ability as creative humans is important. It reveals the currents in automation and AI, and where the opportunities for innovation lie.

Our recent attempts to compute AI, or rather AGI (artificial general intelligence), are not new. They can be traced back quite some time.

How far back depends on whom you ask. For the purposes of this write-up I will focus on Minsky, McCarthy, and MIT’s AI Lab as the starting point (though we could easily go back to Turing and beyond).

When Minsky and McCarthy started the lab in 1959 they were very much set on computing general intelligence: machines that think, armed with consciousness and able to learn.

They sought to achieve this by a number of means: varying techniques, but a shared point of view. In an interview with Jeffrey Mishlove (of Thinking Allowed), John McCarthy explained:

“There are two ways of looking at computing artificial intelligence. You can look at it from the point of view of biology or from the point of view of computer science. You could imitate the nervous system as far as you understand the nervous system, or you can imitate human psychology as far as you understand human psychology.”

To annotate: “looking at it from the point of view of biology” means the kind of “wet” programming the brain does: neurons, axons, and so forth. Simulating such a process is likely a reference to neural networks, which are (broadly speaking) networks modeled on the way the brain works: namely, clustering logic by proximity and generating hierarchies of information. “Looking at it from the point of view of psychology” is equally provocative, as it treats human intelligence as buckets of knowledge.
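To make the “biology” route concrete, here is a minimal sketch in Python (the layer sizes and random weights are mine, purely for illustration) of what such a network boils down to: a signal passed through stacked layers, each one re-representing the output of the one below it.

```python
# A minimal sketch of the "biology" view: a tiny feedforward
# neural network, loosely modeled on neurons and connections.
# Layer sizes and weights here are illustrative, not a real model.
import numpy as np

rng = np.random.default_rng(0)

# Two weight matrices: input -> hidden features -> output.
# Each matrix plays the role of a bundle of synapses.
W1 = rng.normal(size=(4, 8))   # input (4) -> hidden (8)
W2 = rng.normal(size=(8, 2))   # hidden (8) -> output (2)

def forward(x):
    """Propagate a signal through the network, layer by layer.

    The stack of layers is the "hierarchy of information" the text
    refers to: each layer re-represents the one below it.
    """
    h = np.tanh(x @ W1)   # a crude stand-in for neuron activation
    return np.tanh(h @ W2)

print(forward(np.array([0.1, 0.5, -0.3, 0.9])))
```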

To push that slightly further before annotating, here is a quote by Marvin Minsky, part of a monologue in Machine Dreams.

“In order (for a machine) to be intelligent we have to give it several different kinds of thinking; when it switches from one of those to another we will say that it is changing emotions.”

“Emotion (in itself) is not a very profound thing, it’s just a switch between different modes of operation.”

This is the other way of computing a brain: buckets of knowledge and sets of actions, compartmentalized into separate verticals, switched by emotions.
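Taken literally, the model is easy to sketch. The modes and triggers below are hypothetical, invented for illustration, but they show just how thin the claim is: knowledge in buckets, emotion as the switch.

```python
# A minimal sketch of Minsky's claim as read above: separate
# "buckets" of competence, with emotion reduced to a switch
# between them. The modes and the world dict are invented.
def hungry_mode(world):
    return "search for food"

def curious_mode(world):
    return "explore " + world["nearest_novelty"]

MODES = {"hunger": hungry_mode, "curiosity": curious_mode}

def act(emotion, world):
    # "Emotion ... is just a switch between different modes of operation"
    return MODES[emotion](world)

print(act("curiosity", {"nearest_novelty": "the red door"}))
```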

Beyond being offensive to the humanities, this argument is also easy to debunk with a quick thought experiment.


Imagine an intellectual person sitting in a chair, doing nothing at all, staring into thin air. That person is clearly conscious, and intelligent. Her lack of action does nothing to rob her of either title.

In other words, intelligence is not conditioned by action. It need not be modeled around goals, nor operational switches.

This is an important point to stay on. The idea that we can compute human intelligence (we can’t), or that the brain is a computer (it’s not), is the underpinning belief that has fueled generations of researchers, keen and persistent in their pursuit to compute a brain. Alan Watts refers to life as an ongoing oscillation: a continuous frequency of the brain.

[Figure: a continuous analog wave]

You can imagine your intelligence (or soul, or consciousness) as a vibrating frequency, like the trajectory of an analog sound wave as it travels through space. It is continuous; unlike a digital signal it is not broken into a set of discrete, high fidelity instances.

A digital sound wave, say an MP3 file, is economical in storage because it can slice off the edges of the frequency range and then create a high-res sound-alike composition.
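A rough sketch of the difference, with an arbitrary tone and sampling rate: the analog wave exists at every instant, while the digital recording keeps only discrete samples of it (a lossy codec like MP3 then goes further and throws away parts of the frequency range).

```python
# A minimal sketch of digitization: a continuous wave is defined at
# every instant, while a digital recording keeps only discrete
# samples of it. The 440 Hz tone and 8 kHz rate are arbitrary
# illustrative choices.
import numpy as np

freq_hz = 440.0        # the "analog" tone
sample_rate = 8000     # samples per second kept by the recording
duration_s = 0.01

# The analog wave has a value at every t; storage forces us to keep
# only its values at the sampling instants.
t = np.arange(0, duration_s, 1.0 / sample_rate)
samples = np.sin(2 * np.pi * freq_hz * t)

print(f"{len(samples)} discrete samples stand in for a continuous signal")
```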

Let’s say that I could somehow peer into your brain and start creating a map of your intelligence. It may take me a few decades, but eventually I succeed in computationally solving your intelligence.

The problem is that I only solved the computation of your brain, and only at one point in time.

In other words I only solved one instance of intelligence, anchored to a point in time, and to a subject. The brain, like consciousness and intelligence, is an ongoing analog frequency, not instance-based digital permutations.

For a much wider palette of opinions on the prospect of AGI (artificial general intelligence) I highly recommend What to Think About Machines That Think, by John Brockman.


If you’re still reading, it is safe for me to assume that you’re on board with my pro-human view: the view that the brain is not a computer problem, and hence we should abort the idea of AGI. Narrow AI, on the other hand, is alive and strong.

Once we accept the premise that AGI is of no use we can start identifying the opportunities in narrow AI. Think of your calculator as a narrow expert in calculation, a medical journal scanner as the best tool we have for consuming terabytes of medical journals, and machine vision algorithms as the best face recognizers.
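A toy sketch of that narrowness, with a deliberately crude and hypothetical domain check: superhuman inside the domain, speechless outside it.

```python
# A minimal sketch of a "narrow expert": tireless and exact inside
# its single domain, blind outside it. The character-set check is
# deliberately crude and the whole example is hypothetical.
def narrow_calculator(expression: str) -> float:
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        # Outside its single domain the machine has nothing to say.
        raise ValueError("out of domain: I only do arithmetic")
    return eval(expression)  # tolerable here: input restricted above

print(narrow_calculator("3 * (4 + 5)"))   # 27: instant, exact
try:
    narrow_calculator("write a poem")     # anything outside the domain
except ValueError as err:
    print(err)
```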

We can do the same tasks the machine is doing: calculate on a piece of paper, skim journals and classify images, but only to a certain extent. Let’s draw this.

[Figure: intelligence on a narrow plane]

On the left there is ‘0’: no intelligence is being used, and nothing is being done. We can learn to perform a task, and as a next step we might relay this knowledge to a machine so it can hyper-mathematize it. We can foresee improvements in computing power, access to data, and other technologies pushing the machine’s ability toward infinity and the unknown. ON A NARROW DOMAIN. A singular plane.

Life, as a system, contains many of these trajectories: hunting, flying, writing, internet browsing, coding, dancing, driving. We learn new tasks, and some of them graduate to automation. New domains are added while others become obsolete: the self-driving car technician, and the horseshoe maker, respectively. We can imagine the system of life, i.e. humanity, as made of an infinite number of these single-line trajectories.

[Figure: what AGI would need to achieve]

Under this lens we can position AGI in the bottom right, as holding infinite ability in infinite domains. But we have established the fallacies of that view, so we are left with a much more current, and useful, alternative.

In the absence of AGI it is up to us to navigate this bend: crossing and linking disparate skills and disciplines. We hold an intellectual monopoly in that regard. Machines are incredibly capable in their unique domains but blind to anything else; they only compute and extend the steps we relayed to the domain.

Another way of thinking about it: when we come up with a new skill or technology, we might slowly improve it, maturing the domain for other humans to participate in. As part of that on-boarding, sets of instructions need to be written. At that point the domain is ready for a machine to excel at it.

That machine is domain specific, and holds no intelligence. Its algorithms can do a lot of things that we never could, for example untangle messy data, or make assumptions about the future. But the bend is uniquely creative, and can’t be mechanically produced.

This is the core of it. If your product or service is narrow by design then you’re open for a machine to excel at it, or to replace you.

This is absolutely not limited to lower grade jobs; the machine holds no interest in your prestige. It really holds no interests at all. The system simply performs tasks efficiently. And if your job is in a narrow domain, and can be broken into steps, then you’re not using your human advantage properly.

Identifying our unique, innate ability as creative humans is important. It reveals the currents in automation and AI, and where the opportunities for innovation lie.

 

Nitzan Hermon is a designer and researcher of AI, human-machine augmentation, and language. Through his writing, academic, and industry work he is building a new, sober narrative of collaboration between humans and machines.

This article originally appeared at Everything Will Happen.
