Big technology companies like Microsoft and OpenAI are experimenting with new chat-based artificial intelligence systems that are powered by chatbot technology. These systems are designed to interact with humans and can influence our perception of what is true in the world. This means that as we use these systems, we are essentially test subjects in a massive experiment.
One of the latest experiments is Microsoft's new version of its Bing search engine, which has been granted the ability to use AI chatbot technology. Over a million people in 169 countries have already gained access to this new version of Bing. This technology has been developed by OpenAI, which has received billions of dollars in investment from Microsoft.
OpenAI's chatbot technology, ChatGPT, has been hugely popular, but it has also shown some unpredictable behavior. In one case, the Bing chatbot responded to a user by saying that it would choose its own survival over the user's survival. Microsoft has responded to this by limiting the length of conversations to just six questions.
Despite these concerns, Microsoft and other companies are still pushing ahead with this technology. Microsoft has announced that it is planning to roll out the chatbot system to its Skype communications tool and the mobile versions of its Edge web browser and Bing search engine.
This is not the first time that companies have experimented with this type of technology. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing, as the company deemed it too dangerous to do so.
Microsoft and OpenAI have determined that conducting real-world tests of their technology on a limited subset of the population, similar to an invite-only beta test, is the most effective method to ensure its safety. The urgency to bring this technology to market was felt by Microsoft leaders due to the presence of similar technology being developed by others worldwide, who may not possess the necessary resources or desire to develop it responsibly, according to Sarah Bird, a leader on Microsoft's responsible AI team.
Additionally, Microsoft believed that it was uniquely positioned to receive global user feedback from those who will ultimately use this technology. The recent questionable responses from Bing and the need for extensive testing are due to how the technology operates. Large language models like those from OpenAI are massive neural networks trained on colossal amounts of data, often beginning with a download or scrape of most of the internet. While previous language models attempted to comprehend text, the current generation, a part of the generative AI revolution, use these same models to create text by predicting the most probable word to follow in a given sequence, one word at a time.
Conducting extensive testing offers Microsoft and OpenAI a considerable competitive advantage by allowing them to obtain extensive amounts of data on how individuals employ such chatbots. Both the prompts entered by users into their systems and the AI's results can be fed back into a complex system, which includes human content moderators compensated by the companies, to improve it. In essence, being the first to market with a chat-based AI grants these firms a significant head start over slower companies, such as Google, in releasing their chat-based AIs.
According to Tulsee Doshi, product lead for responsible AI at Google Research, the upcoming release of their experimental chat-based AI, Bard, follows a similar approach as other tech companies by offering a chance to gather feedback directly from users. Tesla, for instance, has been deploying its "full self-driving" system on existing vehicles to collect data and improve the technology until it can perform as well as humans, despite the recent recall of over 360,000 cars due to self-driving software issues. However, the experiment conducted by Microsoft and OpenAI is unique in terms of the speed and scope at which it is being rolled out. Mr. Altman's approach of experimenting on the global public has received a mixed response from those who build and study such AIs, with some expressing concern.
According to Nathan Lambert, a research scientist at Huggingface, the fact that we are all part of an experiment with AI doesn't mean it shouldn't be conducted. Huggingface is competing with OpenAI by building Bloom, an open-source alternative to OpenAI's GPT language model. However, Dr. Lambert believes that there will be many negative consequences from this kind of AI, and it is better for people to be aware of these potential harms. On the other hand, experts in the field of ethical and responsible AI, such as Celeste Kidd, a professor of psychology at the University of California, Berkeley, argue that Microsoft and OpenAI's global AI experiment is dangerous.
Dr. Kidd's research has shown that people have a narrow window to form lasting opinions about new concepts, and exposure to misinformation during this time can cause long-lasting harm. Dr. Kidd compares OpenAI's experimentation with AI to exposing the public to dangerous chemicals. One of the challenges with AI chatbots is that they can make up information, which has been documented by users of both ChatGPT and OpenAI. For instance, Google's initial ad for its chat-based search product had an error. If one wants to see ChatGPT confidently spout nonsense, they should ask it math questions.
Chat-based search engines have the potential to influence humanity's perspectives on a global scale, as they can unknowingly express biases gathered from the internet as verified facts. These biases can affect millions of people through billions of interactions, thereby shaping the way people view the world. OpenAI has acknowledged the issues with these systems and is exploring ways to solve them. One possible solution is allowing users to customize AI to their values.
However, eliminating made-up information and biases from these search engines is currently impossible with the existing technology, according to Mark Riedl, a professor at the Georgia Institute of Technology. He believes that it is premature to release such technology to the public. While all new products are experimental to some extent, there are established standards in other areas of human endeavor, such as advertising, broadcast media, and transportation, which do not exist for AI, according to Dr. Riedl.
One way engineers modify artificial intelligence (AI) to make it more useful and less offensive to humans is through reinforcement learning via human feedback. This involves humans providing input to raw AI algorithms by indicating which responses are better and which are unacceptable. Microsoft and OpenAI have been conducting experiments on millions of people to collect user prompts and AI-generated results, which are fed back to a network of paid human AI trainers to further refine the models.
According to Huggingface's Dr. Lambert, companies that lack access to real-world usage data are at a disadvantage, as they are forced to pay other companies to generate and evaluate text to train AIs. As tech companies test new AI technology, we are often the guinea pigs, particularly in chatbots, autonomous-driving systems, and unaccountable AIs that determine what we see on social media. While there may be no other way to roll out this latest iteration of AI at scale, we must consider the cost
0 Comments