Robot, the stupid.

Artificial intelligence researchers foster society's hope for a quick robotic revolution. Optimistically, they announce that each new cyborg will change the world. But so far, none has.

People interested in cyborgs, robots, androids, and all the freaky futurist, technology-related ideas often scour the web for updates on the field's progress. There they meet Aiko. Clothed in a silicone body, weighing 30 kg and measuring 152 cm, she is a female android created by Canadian-Vietnamese designer Le Trung. Aiko speaks fluent English and Japanese and is very skillful at cleaning, washing windows, and vacuuming. She reads books, distinguishes colors, and knows how to learn and remember new things. What is more, when Le Trung tried to kiss her in public, she immediately hit him in the face. Undeterred by her violent behavior, Le Trung talked about her as his wife. Sometimes, however, he referred to her as his child, or as a project he would leave behind for posterity. Le Trung assures us that in a few years Aiko will become very much like a real woman. When this happens, he will win a life companion and leave a treasure for the next generations: a fully humanlike female android.

It all sounds very promising; unfortunately, there is one "but." Aiko cannot walk. She moves in a wheelchair, incapable of doing one very simple thing that every mature, healthy human being can easily do. Why is that? First of all, Le Trung could not afford to finance software for walking. The main reason, however, is that Aiko's current design is not compatible with any good walking software. If Le Trung wanted Aiko to walk, he would have to replace her with a new, better model. But then he would lose what he has achieved so far. A tough choice.

Looking for further examples, we encounter Kenji. Kenji belongs to a group of robots equipped with customized software enabling emotional responses to external stimuli. For Kenji's creators, the biggest success was Kenji's devotion to a doll kept in the room where he spent most of his time. When the doll was gone, Kenji immediately started asking where it was and when it would come back. He missed it. When it returned, he hugged it all the time. The scientists were overwhelmed by this success: they had their first emotional machine! Thanks to many weeks of iterated behavior based on complex code, Kenji had developed something that can roughly be described as a feeling of tenderness. His caretakers enjoyed Kenji's progress until his sensitivity began to be dangerous. At some point, Kenji began to display the level of commitment specific to psychopaths.

It all started one day when a young student appeared at the lab. Her task was to test new procedures and software for Kenji, and she regularly spent time with him. But on the day her internship ended, Kenji protested in a rather blunt way: he wouldn't let her leave the lab, hugging her so hard with his hydraulic arms that she could not get free. Fortunately, the student managed to escape when two staff members came to her rescue and turned the robot off.

Following the event, a worried Dr. Takahashi, Kenji's primary caretaker, confessed that the research group's enthusiasm had been premature. What is even worse, since the incident with the girl, every time Kenji is activated he reacts similarly toward the first person he encounters: he immediately wants to hug the victim and loudly articulates his love and affection through the 20-watt speaker he is equipped with. Still, Dr. Takahashi does not want to shut Kenji down for good. He believes the day will come when, thanks to various improvements, a less compulsive Kenji will be able to meet people without frightening them.

AI Winter, or "The vodka is good but the meat is spoiled"

Artificial intelligence (AI) researchers often do not demonstrate restraint in fostering society's hopes for a robotic revolution. On the contrary, they usually announce that each new cyborg will change the world. The stories of the handicapped super-robot Aiko and the hyper-emotional Kenji are representative of the entire field of AI. There is a long record of failed experiments, investment failures, and successes that in the end were not successes at all.

On the other hand, few research areas have experienced so many waves of enthusiasm interspersed with criticism, resulting in substantial funding cuts. In the history of artificial intelligence there is a phenomenon known as "AI winter," meaning a period of reduced funding for research. The term was coined by analogy to the idea of nuclear winter. It first appeared in 1984 as a subject of public debate at the annual meeting of the American Association for Artificial Intelligence. It captured the emotional state of the AI research community: a collapse of faith in the future of the field and an increasingly difficult-to-conceal pessimism.

This mood was related to the lack of success in machine translation. During the Cold War, the U.S. government became interested in the automatic translation of Russian documents and scientific reports. In 1954 it launched a program to support the construction of a machine translator. Initially, researchers were very optimistic. Noam Chomsky's groundbreaking work on generative grammar was harnessed to improve the process. But scientists were overmatched by the ambiguity and context-dependence of language.

Devoid of context, the machine committed comical errors. One example was a sentence translated from English into Russian and then back into English: "The spirit indeed is willing, but the flesh is weak" came back as "The vodka is good but the meat is spoiled." Similarly, "Out of sight, out of mind" became "The blind idiot."
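The mechanism behind such errors is easy to sketch. The toy translator below is a hypothetical illustration, not a reconstruction of any historical system: it maps each word to a single dictionary gloss with no regard for context, and the lexicon entries are invented for the example.

```python
# A context-free, word-by-word "translator." Each word gets one fixed
# gloss, so sense ambiguity (spirit the soul vs. spirits the liquor)
# is silently lost. The glosses below are contrived for illustration.
lexicon = {
    "spirit": "vodka",    # wrong sense chosen
    "willing": "good",    # loose, context-blind gloss
    "flesh": "meat",
    "weak": "spoiled",    # "weak" as in food gone bad
}

def naive_translate(sentence: str) -> str:
    words = sentence.lower().replace(",", "").split()
    # Unknown words pass through unchanged; known words get their
    # single gloss regardless of the surrounding sentence.
    return " ".join(lexicon.get(w, w) for w in words)

print(naive_translate("The spirit is willing, but the flesh is weak"))
# -> the vodka is good but the meat is spoiled
```

A system that resolves word sense from context would need far more than a lookup table, which is exactly what the 1950s and 60s efforts lacked.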

In 1964, the U.S. National Research Council (NRC), concerned by the lack of progress, created the Automatic Language Processing Advisory Committee (ALPAC) to take a closer look at the problem of translation. In 1966 the committee concluded that machine translation was not only more expensive but also less accurate and slower than human translation. Based on these conclusions, the NRC cut off all further support after having spent approximately $20 million. This was only the beginning of AI's funding problems.

Two major AI winters occurred in 1974-1980 and 1987-1993. In the 1960s the Defense Advanced Research Projects Agency (DARPA) spent millions of dollars on AI. J. C. R. Licklider, who headed DARPA's computing research, believed deeply that the agency should invest in people, not in specific projects. Public money was lavished on AI researchers: Marvin Minsky, John McCarthy, Herbert A. Simon, and Allen Newell. It was a time when confidence in the development of artificial intelligence and its potential for the army was unwavering. Artificial intelligence ruled everywhere, not only in the economy but also ideologically. Cybernetics became a new metaphysical paradigm.

AI proponents claimed that the ideal machine would soon surpass humans intellectually while still serving and protecting them. However, after nearly a decade of unlimited spending that had not led to any breakthrough, the government became impatient. The Senate passed an amendment requiring DARPA to fund specific research projects rather than researchers, and researchers were expected to demonstrate that their work would soon benefit the army. They failed. DARPA issued a scathing review of the AI efforts.

DARPA was deeply disappointed with the results of the scientists working on speech understanding at Carnegie Mellon University. DARPA had hoped for a system that could respond to remote voice commands. Although the research team developed a system that could recognize English, it worked properly only when the words were spoken in a specific order. DARPA felt cheated and in 1974 withdrew a three-million-dollar grant. The cuts in government-funded research affected all academic centers in the United States. It was not until many years later that speech recognition tools based on Carnegie Mellon technology finally celebrated success: the speech recognition market reached a value of $4 billion in 2001.

Lighthill report vs. fifth generation

The situation in the UK was similar. AI funding decreased sharply in response to the so-called Lighthill report of 1973. Professor Sir James Lighthill had been asked by Parliament to evaluate the state of AI research in the UK. Lighthill argued that AI was unnecessary: other areas of knowledge could achieve better results and needed the funding more. One problem Lighthill identified was the consistent difficulty of moving from theory to application: many AI algorithms that were spectacular in theory turned to dust in the face of reality. The machine's collision with the real world seemed an unsolvable problem. The report led to the almost complete dismantling of AI research in England.

A revival of interest in artificial intelligence began in the UK in 1983 with the launch of Alvey, a research project funded by the British government and worth 350 million pounds. Two years earlier, the Japanese Ministry of International Trade and Industry had allocated $850 million for so-called fifth-generation computers. The aim was to write programs and build machines that could carry on a conversation, translate easily between languages, and interpret photographs and paintings: in other words, achieve an almost human level of rationality. Alvey was Britain's response to that project. By 1991, however, few of the tasks set for Alvey had been achieved. A large part of them remain unrealized, and it's 2013. As with other AI projects, expectations were simply too high.

But let's go back to 1983, when DARPA resumed funding for AI research. The long-term goal was to establish so-called strong artificial intelligence (strong AI), which, according to John Searle, who coined the term, meant a machine with a genuine mind, the equal of a human being. It's worth noting that both Aiko and Kenji are attempts to implement this very concept. In 1985, the U.S. government granted one hundred million dollars to 92 projects at sixty institutions: in industry, universities, and government labs. Two leading AI researchers who had survived the first AI winter, Roger Schank and Marvin Minsky, warned government and business against excessive enthusiasm. They believed that the ambitions of AI had ballooned out of control, which would inevitably lead to disappointment. Hans Moravec, a well-known AI researcher and enthusiast, claimed that the crisis was caused by the unrealistic predictions of colleagues who kept repeating the story of a bright future. Just three years later, the billion-dollar AI industry began to decline.

A few projects survived the funding cuts. Moravec found himself among the survivors, working on DART, a combat logistics management system. It proved very successful and saved billions of dollars during the first Gulf War.

Fear of the next winter

Now, in the early twenty-first century, when AI has become commonplace, its successes are often marginalized, mainly because AI seems obvious to us. Nick Bostrom has said that even intelligent objects become so evident that one forgets they are intelligent. Rodney Brooks, an innovative and highly talented researcher and programmer, agrees. He points out that despite the general view of the fall of artificial intelligence, it surrounds us at every turn.

Technologies developed by artificial intelligence researchers have achieved commercial success in many areas: the once-discredited machine translation, data mining, industrial robotics, logistics, speech recognition, and medical diagnostics. Fuzzy logic (logic in which truth values are not limited to 0 and 1 but range across intermediate degrees) has been harnessed to build automatic transmission controllers in cars such as the Audi TT, the VW Touareg, and Škoda models.
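To make the parenthetical concrete, here is a minimal sketch of fuzzy membership in Python. The engine-load ranges and the rule names in the comments are invented for illustration; they are not taken from any actual transmission controller.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Degree of membership in a triangular fuzzy set:
    0 outside [a, c], rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets over engine load (0-100%):
load = 55.0
medium = triangular(load, 25, 50, 75)   # partially "medium"
high = triangular(load, 50, 75, 100)    # partially "high" at the same time

# A classical controller would force load into exactly one category;
# a fuzzy controller blends its rules (e.g. "hold gear" vs. "downshift")
# in proportion to these membership degrees.
print(medium, high)
```

The point is the in-between: a load of 55% belongs to "medium" to degree 0.8 and to "high" to degree 0.2, so the controller's output changes smoothly rather than jumping at a hard threshold.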

The fear of another winter has gradually receded. Some researchers continue to worry that a new AI winter could be triggered by yet another overambitious project, or by unrealistic promises made to the public by eminent scientists. There were, for example, fears that the robot Cog would ruin the barely rebounding reputation of AI. But that did not happen. Cog was a project carried out in the Humanoid Robotics Group at MIT by Rodney Brooks, together with a multi-disciplinary group of researchers that included the well-known philosopher Daniel Dennett. The project was based on the assumption that human-level intelligence requires experience gathered through contact with people. So Cog was to enter into such interactions and learn the way infants learn.

The aims of the Cog team were, among others, (1) to design and create a humanoid face that would help the robot establish and maintain social contact with people, (2) to create a robot able to act on people and objects the way a human does, (3) to build a mechanical proprioception system, and (4) to develop complex systems of sight, hearing, touch, and vocalization. The very list of tasks shows how ambitious the Cog project was! Yet once again (surprise, surprise!) Cog's funding was cut, in 2003, when the project failed to meet inflated expectations. This time, however, it was not the end for a partially successful project: Cog built up the collective imagination around the idea of something resembling a human being. Likewise Deep Blue, which, despite some setbacks, was an excellent chess champion, and that was all that mattered.

Waiting for spring

From the peak of inflated expectations to the deep pit of despair: emotions surrounding AI have risen and plummeted again and again. After observing these mood swings, I gradually changed my mind about the goal of creating an intelligent machine. For a long time I thought that the humanoid robot should be a mirror of man. But at some point I began asking myself: why do we want the robot to be more perfect than man? Why don't we give up on that idea? Why, when the ambitious idea fails yet again, do we stubbornly try again? After all, we have little reason to believe that technology will solve the riddles of humanity and create an artificial human being. It certainly will not happen soon.

So, are there other factors that push man to persist in this absurd field? After all, if you think about it, even the name, artificial intelligence, is quite grotesque. I think to myself: the machine is not exactly a mirror; it is rather something that must remain different, yet still surpass us. It is the otherness of a machine that allows human beings to feel safe. We don't like resemblance; it's disturbing. The various pop-culture visions of androids and cyborgs taking control of the world confirm this. Because the machine surpasses us at tasks we don't feel like doing while remaining quite different from us, we behave towards it like somewhat pathological parents. We challenge it, demand results, get offended by its failures, and then return with a new dose of energy and expectations.
So, to make a long story short, it doesn't matter that Aiko and Kenji get lost in the human world. They are the Others; they are weird, but they make a beautiful couple at what they do best. He hugs, and she punches.

Article Featured Image: Project Aiko

This article originally appeared at Biweekly
