Will AI “Dominate” Humans?
  • Han Ju-wan (ME 18)
  • Published 2021.02.27 23:41

Scatter Lab’s Facebook chatbot Lee Luda (pronounced “Iruda,” meaning “to achieve”), an A.I. that began beta testing in June 2019, caused a big stir in Korean society after its official release in December 2020. Unlike conventional A.I.-based chatbots, Lee Luda spread rapidly among teenagers because it understood teen slang. However, Lee Luda could not overcome the limitation that A.I. must “learn from humans,” and it disappeared into history only two weeks after its release.
Certain issues inevitably arise in A.I.-related discussions. Among them, this column addresses the question, “Will A.I. eventually dominate humans?”
To discuss this topic, we must first clarify some terminology. Physical domination refers to domination achieved solely through the exercise of physical power; an example is the robot takeover depicted in SF movies. Psychological domination, on the other hand, refers to “information domination”: A.I.’s dominance over a society in which A.I. technology is valued above human beings. Physical domination is restricted by its dependence on hardware and, unless hardware achieves self-maintenance, will forever remain a fictitious theme of SF movies. Psychological domination, meanwhile, is theoretically feasible now that A.I. has achieved self-learning. However, to deliberate the possibility of A.I.’s psychological dominance, we must assume a “persona” for A.I., just as we grant legal personhood to a business entity. Under the assumption that such an identity is accorded to A.I., can A.I. “dominate” humans?
We must first realize that A.I. will eventually “coexist” with humans. “Domination,” discussed above, is fundamentally different from “coexistence.” For example, we do not usually say that humans “dominate” pets; we say that humans “coexist” with them. In the same way, I believe that A.I. will coexist with humans: through such a mutually beneficial relationship, humans will develop ever more advanced A.I., and A.I. will help humans build a better future. For such an ideal association to exist, however, humans must remain irreplaceable by A.I. This idea offers an insight worth considering in future A.I. development.
“A.I. that does not replace humans” refers to A.I. developed to be strictly human-centric, where “human” refers to “humanity,” not “individuals.” This means that A.I. must be trained according to the (ever-changing) universal values of mankind. Lee Luda’s training, however, was focused on “individuals.” I believe this was the cause of its failure.
Our society inscribes the universal values of humanity in sources such as the Constitution and the Universal Declaration of Human Rights. If Lee Luda’s training had been limited to individuals who hold such values, or who promised to answer only in accordance with them, the issues surrounding Lee Luda would never have arisen. Of course, this would make the development and training process far more expensive and time-consuming. But considering the dangers of A.I., including its vast potential for weaponization, A.I. must follow such rigorous development procedures. Just as the world restricts nuclear development, A.I. development must be constantly monitored and regulated as well.
Before closing, I would like to note that this column is based solely on my personal opinions and cites no references for the claims made within it. Even so, I submit this writing to draw attention to the value and objectives of A.I. as increasingly many engineers participate in A.I. research and development. I hope this article helps inspire further discussion on similar topics.
