For the first time, a computer has beaten a professional Go player at what is often called the most complex game ever devised, in what reports describe as a major breakthrough for artificial intelligence and neural networks. TEDxBrussels interviewed IT expert Oliver Wloch about the potential dangers and opportunities of a "machine world".
1/ If we follow the buzz around artificial intelligence lately, we may get the impression that machines might be able to take over the world and destroy it. What's your take on this matter?
In the foreseeable future, machines will not take over the world of their own volition. In fact, we are nowhere near the point where a machine would gather enough self-awareness to devise its own plans based on a "human" understanding of the world.
However, machines are becoming more connected and determine more and more aspects of our lives. As a result, systems are growing whose behavior we cannot simulate in advance. We are already dealing with the consequences on an almost daily basis: we have gotten used to credit card data being stolen because some attacker found a flaw in a human-made system. So we can hardly control even seemingly simple systems. If we start giving machines agency, we will have little to no control over their behavior when they face a new situation.
Imagine some bank robbers manipulating the traffic lights at a busy crossing. The automatic traffic control system notices the traffic jam but is blissfully unaware of the manipulation. The system's designer thought about malfunctions and built in self-test mechanisms to identify them. But neither he nor his system could foresee the clever idea of one of the bank robbers, who works for a traffic light manufacturer and knows how to circumvent those self-tests. Thus, the system concludes that everything is in order, yet its other sensors keep registering growing traffic jams.
The traffic lights appear to be working smoothly and no accident has been reported. (Cars have automatic accident reporters these days.) The traffic jam is growing quickly. The connected police communication system is running hot because officers are sharing information about the bank robbery and the sudden, inexplicable traffic congestion. The mix of hectic police chatter and the unexplainable situation in the streets leads the system to switch to emergency mode: no vehicle is allowed to enter the city any longer. Through omnipresent, fully automated communication systems, everybody in the city is asked to "stay calm" and "not open any windows". Schools close and people panic, for no reason. Frightened, people start acting irrationally; the number of car accidents rises, and other parts of the city become congested as well. Panic spreads while a few human beings pore over log files, trying to understand what happened and which events led to the current situation. But they have a hard time, because they cannot understand how the machine reached its judgment. The system's creator is called in to help. He lives outside the city and cannot enter, because the system has closed all ways in...
Despite significant advances in artificial intelligence, "common sense" and especially "intuition" are characteristics that are notoriously difficult to teach machines. In fact, even a network of computers that taught itself to identify cats has no understanding of anything other than cats in their various forms. The key message is that complex, automated systems are enough to cause serious trouble. Additionally, let us keep in mind that military forces around the world have a strong interest in autonomous fighting robots. Most of their behavior would be classified as "evil" if it were not covered by democratic legislation authorizing their use. But where is the line drawn? Would a fighting robot feel compassion? If so, could the "compassion mode" be used to trick the robot? There are more questions than answers, but yes, we have to be careful!
2/ What role does big data play in this?
Big data can be seen as a training set from which to identify and learn as many patterns as possible. The approach is used successfully in consumer research, scientific endeavors and espionage. It definitely works in making machines understand a little more about their environment, and it definitely helps people make sense of otherwise unused data. Here, I clearly see the problem more on the human side: whoever has access to a lot of personal, economic and other information has a tremendous edge.
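To make the idea of "big data as a training set" concrete, here is a minimal, self-contained sketch of pattern learning: a nearest-centroid classifier that averages labeled examples and assigns a new observation to the closest average. The customer segments and numbers are invented purely for illustration; real systems use far richer models and far more data.

```python
# Toy illustration of learning patterns from a training set:
# a nearest-centroid classifier. All data below is made up.

def train(samples):
    """samples: list of (label, feature_vector) pairs.
    Returns one centroid (average vector) per label."""
    sums, counts = {}, {}
    for label, vec in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is closest to vec."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(centroids[label], vec))

# Hypothetical purchase behavior: (segment, [visits per month, spend]).
training = [
    ("bargain", [2.0, 10.0]), ("bargain", [3.0, 12.0]),
    ("premium", [8.0, 90.0]), ("premium", [9.0, 110.0]),
]
model = train(training)
print(classify(model, [2.5, 11.0]))  # prints "bargain"
```

The more (and more varied) examples the training set contains, the more reliable the learned centroids become; that is the sense in which sheer volume of data gives its holder an edge.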
3/ As machines keep getting smarter and more powerful, will they ever develop so-called human values?
A lot of "so-called" human values can be derived logically from simple observations. If, for example, there is constant war, the cost of fighting quickly outgrows its profit potential, because more and more resources are allocated to unfruitful fighting. As long as rules can be derived this easily, machines could develop them as well; such rules would either have to be hard-coded by the programmer or arise as learned behavior. But there are more subtle things, such as love. The concept is so complex that generations of poets have failed to define it adequately. If the human mind does not understand it fully, we cannot come up with a set of rules describing it. Consequently, we cannot feed a machine such a rule set to make it "appear" human.
Could machines, however, become so developed on their own that they would be on a par with leading human thinkers, showing human traits such as intuition? I do not believe this will happen.
4/ What does your research focus on?
The hackability of networked systems.
5/ What is your vision of the Deeper Future?
There are a few scenarios. The nicest one is a future in which machines relieve human beings of boring, repetitive or tough jobs. Information flows freely and creativity reaches new heights. Robots are constantly at our disposal to support our endeavors. We will have identified many pathological patterns, and many illnesses that are incurable today will either become curable or be defeated before they develop. Machines rising in usefulness will also help human beings understand themselves better: if you can build a machine, you develop a deep understanding of it. What I actually fear, however, is that information will become concentrated in the hands of a few companies and state-sponsored agencies. Processing genuine information is not only expensive; you also need access to it. If the hurdles to using such technologies become too high, the prospects for a bright future worsen significantly.
Interview by Bibbi Abruzzini