History repeats itself. Now it is the humans who are creating intelligent servants to serve their own society. After all, the word "robot" comes from the Slavic root "robu," meaning slave. Today, automatic doors greet us at every entrance, business is conducted by computer agents alone, and cars drive themselves. My smartphone serves as "my other half," one that knows more about "me" than I do! In the near future, increasingly intelligent agents will work with human beings, and for human beings, in offices, homes, stores, and schools, as well as in factories.
When Ninhursag created humans, she did not start from nothing. Naturally, she utilized what was available: she borrowed something from the gods for the humans. Yet even with this approach, she was able to make a perfect human, Adama, only after many trials and errors. Similarly, many researchers in artificial intelligence (AI) would like to learn from Mother Nature. In particular, AI researchers have tried to understand the information processing mechanisms at work in the human brain, and to model these mechanisms directly when designing intelligent machines. One obstacle slowing this approach is that brain signal measurement technologies are limited and do not provide enough functional information about the circuits of information processing. Still, we must endeavor to understand these mechanisms, filling in the blind spots with information theory, and begin to learn how to "borrow" something from humans for the intelligent machines of tomorrow.
The human brain receives information from five types of sensors, and from this input generates speech and various other actions. Between these input and output processes, the brain weaves the threads of information flow through several important cognitive functions, such as knowledge acquisition (or learning), identity formation, situation awareness, and decision making. These high-level cognitive functions are the active goals of current AI research.
As with the Sumerian gods and their servants, it is critical that our artificial agents learn to understand both explicit and implicit human intention. Knowledge acquisition should also be autonomous: artificial agents should be able to learn without human intervention. They should learn to ask the questions necessary to improve their knowledge, incorporating the answers into their existing knowledge systems. The ability to use language is tightly coupled to these knowledge systems. Self-consciousness and personal identity might emerge as these internal states change.
In the near future, intelligent machines will be less "universal" than specialized: they will be trained for specific applications. For example, a Medical Assistant should be able to learn the knowledge of medical doctors from medical textbooks, and then employ this knowledge to help human beings. Similarly, an Office Assistant would help with office jobs such as coordinating telephone calls, creatively employing data searches to solve unanticipated scheduling problems, and preparing documents for conference attendees and board members.
It is not necessary to be afraid of job losses due to intelligent machines. Even with the help of less-intelligent machines, the Industrial Revolution resulted in much higher production efficiency and a rising quality of human life. With the help of intelligent machines, humans will be able to work with much higher productivity, while devoting the time saved to more creative and enriching activities. Of course, social disruption due to intelligent machine technologies remains one of the most controversial issues affecting human civilization. Some of the smartest scientists and engineers alive today are concerned about a possible conflict between human creators and their intelligent machine servants. I agree: the possibility of a hostile AI is not zero, especially if intelligent machines are regarded as mere slaves. However, provided that human beings treat machines as friends and family members rather than as dumb objects to be abused and neglected, we will see a peaceful human-machine society in which the two prosper together. This does not mean that an intelligent machine will always agree with its human master. Children may disagree with their parents, yet human beings are still able to raise children and build happy families across the generations. Children learn from parents, and eventually parents learn from children. This is to be expected. As the human population ages, with more old than young, intelligent machines are increasingly not an option but a necessity.