Generative Artificial Intelligence Dominated the Headlines This Year. Here’s Why and What Comes Next
People are increasingly using chatbots that respond in human-like ways. Like any technology, these systems have pros and cons. Generative artificial intelligence relies on a computational model that uses language patterns to predict the next words in a sentence, responding to a user query with a human-like reply. The model consists of multiple layers of interconnected nodes, vaguely inspired by the neural connections in the brain. During training, the model processes billions of pieces of written content aggregated from the internet, learning patterns by adjusting the strength of the connections between nodes. Other types of generative AI have been developed to produce images, videos, and more.
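The core idea, predicting the next word from patterns learned in text, can be illustrated with a toy sketch. Real systems use transformer networks with billions of adjustable connection strengths; the bigram counter below (corpus, function names, and all) is only a hypothetical, minimal illustration of "learn which word tends to follow which."

```python
# Toy illustration of next-word prediction, the idea behind large
# language models. Real systems use deep neural networks; this simple
# bigram counter is only a conceptual sketch.
from collections import Counter, defaultdict

# A tiny stand-in for the billions of pieces of training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

A real model does not store literal counts; it encodes such statistics, and far subtler patterns, in the strengths of its node connections, but the prediction task is the same.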
Why Has Generative AI Been So Big This Year?
We’ve had language models for many years. But the breakthrough with systems like ChatGPT is that they received additional training to act as dialog partners and assistants. They were trained on much larger data sets. They have far more connections, in the billions to trillions. They were also presented to the public with a very user-friendly interface. Together, these factors made them successful, and people were amazed at how human-like their responses were.
Where Do You Think Generative AI Will Have the Greatest Impact?
This is still a big open question. I can give ChatGPT a prompt saying, “Please write a summary of my research paper that includes these points,” and it will often generate a good summary. As an assistant, it is extremely useful. Image generators can produce stock images on demand: you can simply say you need an image of a robot walking with a dog, and the system will generate it. But these systems are not perfect. They make mistakes, sometimes producing “hallucinations,” content that is fabricated but presented as fact. If I ask ChatGPT to write an article about a subject and include some quotes, it sometimes makes up quotes that don’t exist. It can also generate text that is simply incorrect.
Are There Any Other Concerns?
It requires a lot of energy. These systems run in massive data centers containing vast numbers of computers that need lots of electricity and use a lot of water for cooling, so there is an environmental impact. They are also trained on human language, and human society has many biases that are reflected in the language these systems have absorbed: racial, gender, and other demographic biases.
What Do You Think of the Hype?
People should be aware that AI is a field that has tended to be overhyped since its inception in the 1950s, and they should be somewhat skeptical of the claims. We’ve seen time and time again that such claims are greatly exaggerated. These systems are not human. Even though they can seem human-like, they differ from us in many ways. People should see them as tools to enhance human intelligence, not to replace it, and ensure that a human stays in the loop rather than giving these systems too much autonomy.
What Are the Potential Implications of the Recent Turmoil at OpenAI on the Generative AI Landscape?
The [upheaval] shows something we already knew: there is a kind of extremism in the AI community, both in research and in commercial AI, about how to think about AI safety, how quickly to release these systems to the public, and what the necessary safeguards are. I think it makes very clear that we should not rely on the big corporations, where power is now concentrated, to make these huge decisions about how to safeguard AI systems. We really need independent parties, for example government oversight bodies or independent ethics committees, to have more authority.
What Is Expected to Happen Next?
We are in a state of uncertainty about what these systems are and what they can do, and how they will evolve. I hope we find some reasonable regulation that mitigates potential harms but does not overly restrict what could be very beneficial technology.
Source: https://www.sciencenews.org/article/generative-ai-chatgpt-safety