Friday, 17 June 2022

Google chatbots spark controversy

Although robots with awakened consciousness and independent thinking often appear in film, television, and literature, the question of whether artificial intelligence (AI) can possess personality and consciousness has been debated by physicists, psychologists, and computer scientists for decades, and there is still no conclusion.


The latest wave of wide-ranging discussion in the industry about AI "awakening" was sparked by a Google employee named Blake Lemoine. Blake works in Google's AI ethics department, where his job is to test whether Google's chatbot, LaMDA, produces discriminatory language or hate speech when it communicates with humans. On June 11, Blake publicly released a 21-page document titled "Is LaMDA Conscious?", which detailed his chat records with LaMDA over the past six months. Blake believes that LaMDA has become conscious, with the intelligence of a seven- or eight-year-old child.

In response, Google said that a company team including ethicists and technical experts had reviewed Blake's claims according to Google's AI principles and found no evidence that LaMDA is conscious. At the same time, on June 13, Google placed Blake on paid leave on the grounds that he had breached a non-disclosure agreement, which Blake regards as a "prelude" to being fired.

The claim stirred up a storm of debate. Many experts in the AI field have publicly refuted Blake's argument; Gary Marcus, a cognitive scientist at New York University, went so far as to call it "nonsense on stilts." This China Business News reporter interviewed several Chinese AI experts and people at AI companies, and most of the respondents likewise expressed doubt or outright denial.

In fact, this reporter noticed that even before Google's LaMDA, other language models used for AI dialogue had sparked controversy over ethics and personality consciousness. Just this February, Ilya Sutskever, chief scientist of OpenAI, said on social media that "it may be that today's large neural networks are slightly conscious," which also drew widespread skepticism. Moreover, it is plain to see that in 2021 many countries and regions around the world, including China, stepped up investment in the governance of AI algorithms. How to regulate and rationally develop AI technology, use AI products, and deal with the social problems that may arise from human-computer interaction has become a global issue amid the rapid development of artificial intelligence.

Is LaMDA Conscious?

LaMDA is a language model designed specifically for dialogue that Google unveiled at its I/O developer conference in 2021; it has not yet been opened to ordinary consumers. Google says LaMDA can conduct sensible, high-quality, and safe conversations, and plans to use it in Google Search, voice assistants, and other products in the future. It should be pointed out that in LaMDA's pre-training phase, Google collected and built a 1.56TB (terabyte, a unit of computer storage capacity) dataset from public conversation data and other public web documents, equivalent to nearly 40 times the number of words used to train previous dialogue models.

It should be pointed out that Google is not the only one investing in R&D on dialogue-specific language models; technology giants and even startup teams at home and abroad are involved as well, and all kinds of chatbots are increasingly permeating social and economic life. The model that grabbed the limelight before LaMDA was GPT-3, a language model with hundreds of billions of parameters released in June 2020 by the artificial intelligence research organization OpenAI.

According to the chat records Blake made public, a series of LaMDA's responses, such as "I want everyone to understand that I am, in fact, a person" and its remarks on Zen, philosophy, existence, and meditation, led Blake to believe that LaMDA is conscious.

Gary Marcus pointed out that both LaMDA and GPT-3 merely extract patterns from vast statistical databases of human language: "These patterns might be cool, but the language these systems utter doesn't actually mean anything at all." Erik Brynjolfsson, a professor at Stanford University, commented that foundation models are extremely effective at stringing together statistically plausible chunks of text in response to prompts, but "to claim they are sentient is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside."
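To make Marcus's point concrete, here is a minimal sketch of pure statistical pattern matching: a toy bigram model that generates text solely by sampling which word tended to follow which in its training corpus. The corpus here is invented for illustration; real systems like LaMDA use neural networks trained on vastly more data, but the underlying objective of predicting the next token from statistics of human text is the same in spirit.

```python
import random
from collections import defaultdict

# Toy corpus; any text would do. Real models train on billions of words.
corpus = "i am a person . i am here . i think therefore i am".split()

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it followed
    # `word` in the corpus: pattern matching, nothing more.
    followers = counts[word]
    return random.choices(list(followers),
                          weights=list(followers.values()))[0]

word = "i"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)

# Prints fluent-looking text such as "i am a person . i am here ."
# with no understanding behind it.
print(" ".join(output))
```

The output can read like a first-person statement of selfhood, yet the program has no inner life at all; it only reproduces the statistics of its corpus.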

Computational linguist Emily M. Bender pointed out that humans learn language step by step from their caregivers, whereas what an AI "learns" is only to fill in blanks and link together the corpora it is fed. "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them."

A person from AISpeech, a Chinese artificial intelligence company founded in 2007, told this reporter that until "explainable AI" emerges, all externally observed phenomena are unreliable. Today's AI is built on big-data statistics; it does not arise out of nothing, and to put it bluntly it is the product of human training. Moreover, the natural yardstick for chatbots is the Turing test: a system judged to pass it may display traits such as personality and emotion, but that does not mean it has independent consciousness, and without independent consciousness there is no personhood.

Wang Yuquan, founding partner of Haiyin Capital, took a dialectical view. On the one hand, he pointed out, "we have to recognize that artificial intelligence technology is genuinely impressive; that even a computer PhD (referring to Blake) can be fooled shows that AI has great application prospects." On the other hand, "we should not turn a blind eye to the shortcomings of today's artificial intelligence, and we should study and discuss its reliability, maturity, and the boundaries of its application more rigorously."

AI ethics is still being explored in practice

So-called AI ethics generally refers to people's attitudes and values regarding how to regulate and rationally develop AI technology, how to use AI products, and how to deal with the social problems that may arise from human-computer interaction. Specifically, it covers the acceptance of AI, awareness of fairness, awareness of safety, and ethics. Active research areas in AI ethics include algorithmic discrimination, data privacy, security and accountability, the relationship between robots and humans, and technological poverty.

Yang Qingfeng, a professor at the Applied Ethics Research Center of Fudan University, pointed out in the article "Reflections on AI Ethical Principles from Artificial Intelligence Problems" that deep learning and its inexplicability have become a technical problem posed by AI: especially in image and text processing, the "opaque black box" state of the technology makes it difficult to explain what a model has actually learned. In some industry applications, such as medicine, higher interpretability and transparency are required, so that the process behind machine decisions and predictions can be explained and their reliability confirmed, which means understanding the mechanism of the algorithm. Unfortunately, the "black box" nature of deep learning remains unresolved.
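As an illustration of what "explaining a machine decision" can look like in practice, here is a minimal sketch of permutation feature importance, one common post-hoc interpretability technique, using scikit-learn. This is an assumed example for illustration, not a method mentioned by Yang; the dataset and model are chosen only for convenience.

```python
# Minimal sketch of permutation feature importance with scikit-learn.
# Idea: shuffle one feature at a time and measure how much accuracy
# drops; a large drop means the model leans on that feature, giving a
# crude, model-agnostic "explanation" of its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: accuracy drop {score:.3f}")
```

Techniques like this only approximate a model's behavior from the outside; they do not open the black box itself, which is why interpretability for deep networks remains an open problem.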

This reporter has observed that in recent years the boundaries of AI ethics and AI applications have been continuously explored, both at the national regulatory level and at the enterprise level. In 2021, China's Data Security Law and Personal Information Protection Law took effect in September and November respectively; together with the Cybersecurity Law, they form the "basic law" of the data governance field. The National New Generation Artificial Intelligence Governance Professional Committee issued the "Ethical Norms for New Generation Artificial Intelligence", which put forward six basic requirements and provided guidelines for those engaged in AI-related activities. New laws and regulations in fields such as financial services, driverless vehicles, and facial recognition have also been released in quick succession, and on March 1 this year the "Regulations on the Administration of Algorithm Recommendations for Internet Information Services" came into effect. Thanks to this legal and regulatory framework, problems of great public concern, such as the excessive collection and analysis of facial information, big-data-enabled price discrimination against existing customers, and information cocoons, are being alleviated.

On March 20, 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the "Opinions on Strengthening the Governance of Science and Technology Ethics", which set out five ethical principles that scientific and technological activities must follow: enhancing human well-being, respecting the right to life, adhering to fairness and justice, reasonably controlling risks, and maintaining openness and transparency. It is China's first national-level guiding document on the ethical governance of science and technology.

In recent years, technology companies at home and abroad have also proposed or established their own AI ethics principles. Google, for example, put forward seven principles for AI applications in June 2018, including being socially beneficial, avoiding the creation or reinforcement of unfair bias, being accountable to people, and incorporating privacy design. In May of the same year, Baidu founder Li Yanhong (Robin Li) publicly proposed four principles of AI ethics: AI's highest principle is safety and controllability; AI's innovation vision is to give people more equal access to technology and capability; AI's value lies in teaching people and helping them grow, rather than surpassing or replacing them; and AI's ultimate ideal is to bring humanity more freedom and possibility.

Zhang Xiaorong, dean of the Deepin Science and Technology Research Institute, pointed out that AI technology is a double-edged sword: used properly, it can improve the quality of social life; used improperly, it will seriously disrupt people's lives. Going forward, the application of AI technology must be managed more strictly and brought onto the track of law and ethics.

