Looking forward to "responsible AI" moving from vision to reality

  Liu Haiming (Professor and Doctoral Supervisor of Chongqing University)

  This year’s "Government Work Report" of the State Council clearly stated: formulate policies to support the high-quality development of the digital economy and carry out the "artificial intelligence+" action.

  AI technology is empowering many fields, and the economic and social benefits it brings are considerable. Amid the rapid iteration of AI, no one can accurately predict how much of the technology’s potential remains untapped. While AI scientists keep making agents "smarter", the security issues surrounding the technology have also become a public topic.

  At this year’s National People’s Congress, Qi Xiangdong, a member of the Chinese People’s Political Consultative Conference (CPPCC), focused on the innovation and development of "AI + security" and offered detailed views on how to deal with information security and cybersecurity threats in the AI era. Sharla Cheung, a deputy to the National People’s Congress (NPC), proposed strengthening the safe application of artificial intelligence technology, accelerating legislation of an Artificial Intelligence Law, and building a security barrier for key information infrastructure.

  The deputies’ suggestions on the safety of AI applications may, through legislation, rise to the level of mandatory protection for the safe application of artificial intelligence. This also shows that a consensus on the safe application of AI is taking shape.

  Just last week, Musk filed a lawsuit against Sam Altman and OpenAI, accusing the defendants of abandoning the core of OpenAI’s founding mission: developing artificial general intelligence that is useful and harmless. Facing the accusation, Altman responded that OpenAI’s mission remains to "ensure that artificial intelligence benefits all of humanity".

  Anxiety over the application of AI technology exists both at home and abroad, a normal reflection of human beings’ concern for their own interests and security. The vision of "responsible AI" was put forward precisely to ease the worries that accompany applying this technology. Such worries are hard to dispel because the technology’s future development is uncertain, and that uncertainty makes more people uneasy. Some even believe that "the development of AI shows how limited human capabilities are; it is only a matter of time before machines surpass people."

  The greatest hidden danger in the world is ignorance of, or indifference to, risk. Now that the safety of artificial intelligence has become a global public topic, many voices at home and abroad are calling for legislation on its development. This sense of crisis is encouraging. Since the sense of urgency has generally awakened, and the whole of society takes the security of artificial intelligence seriously, why worry that "responsible AI" will not turn from vision into reality?

  Turning "responsible AI" from vision into reality requires three indispensable steps, moving gradually from basic safety awareness to concrete actions in pursuit of safety.

  From vision to reality, "responsible AI" requires human beings to understand what true self-love is. Artificial intelligence technology is a human invention, and the value of intelligent agents lies in benefiting humanity: the more useful they are to people, the more worthy they are of being loved. To achieve this, it is above all the small number of scientists who hold the "button" of AI iteration who need to love themselves. For such "leaders", true self-love means making inventions that deserve the world’s love, and earning others’ love through their technical contributions.

  Judging from Musk’s legal challenge to Altman, the significance of this lawsuit lies in teaching the technical leaders who hold the "button" of AI development to learn true self-love. Once they understand true self-love, the public’s worries about the safety of AI technology will be much diminished.

  From vision to reality, "responsible AI" requires human beings to understand what real self-discipline is. Self-love is formed in the process of self-control, and a person who loves himself in the moral sense must be a truly self-disciplined person.

  "Responsible AI" has become a global public topic because those who hold the "button" of AI development have pursued technological iteration in research and development while, perhaps without noticing, neglecting AI security. Without such security considerations, the speed of AI development has increased, but the technology may also slip the moral reins, forgetting that technology exists to benefit human society, not to burden it.

  Self-discipline consists of the rules people make for themselves. In a self-disciplined AI team, "responsible AI" means that the design of intelligent agents must carry multiple layers of security constraints. Once an agent risks harming human feelings or interests in operation, it must be able to stop that behavioral tendency. Such a tendency may appear to be the agent’s own affair, but in fact it can only be a matter of the designer’s self-discipline.

  Only when those who create agents hold themselves to strict self-discipline, regard users’ safety as their own, and follow the maxim "do not do to others what you would not have done to yourself", can they make AI "responsible" from the moment of design.

  From vision to reality, "responsible AI" requires the whole of society to have a sense of responsibility.

  Responsibility carries an element of compulsion. Through external legislation and effective litigation, institutions and individuals who do not yet know how to love themselves in technology research and development can gradually learn to. Through moral self-awareness, more artificial intelligence researchers will establish a sense of responsibility for technological development, and more people of insight, including public figures such as NPC deputies and CPPCC members, will continually urge scientists, technical experts, entrepreneurs and society at large to attach great importance to AI security. The stronger the sense of responsibility, the safer future AI will be.

  To make AI "responsible", human beings must first consciously take responsibility themselves. The picture of AI security is slowly unfolding, and the day when "responsible AI" becomes reality is not far off.