Did OpenAI invent artificial intelligence technology that could pose a “threat to humanity”? From some recent headlines, you might be inclined to think so.
Conflicting Reports
Reuters and The Information reported last week that several OpenAI staff members warned the company’s board of directors in a letter about the “capability” and “potential danger” of an internal research project known as “Q*”. According to the reports, this AI project could solve certain mathematics problems – albeit only at an elementary school level – but in the researchers’ opinion, it represents a step toward a technological breakthrough the field has yet to achieve.
The Controversy Over the Message
There is now controversy over whether the OpenAI board actually received such a letter – The Verge points to a source suggesting that it did not happen. But regardless of the substance of Q*, it may not be as significant or threatening as it seems. It might not even be new.
A Possible Interpretation of Q*
AI researchers on X (formerly known as Twitter), including Yann LeCun, the chief AI scientist at Meta, were quick to express skepticism that Q* is anything more than an extension of existing work at OpenAI – and at other AI research labs as well. In a post on X, Rick Lamerz, who writes the Substack newsletter Coding with Intelligence, referenced a guest lecture given seven years ago by OpenAI co-founder John Schulman in which he described a mathematical function called “Q*”.
The Possible Direction of Q*
Many researchers believe that the “Q” in “Q*” refers to “Q-learning”, an AI technique that helps a model learn and improve at a specific task by taking “correct” actions and receiving a reward for doing so. The star, researchers say, could in turn be a reference to A*, a graph-search algorithm that explores nodes in a graph to find the best path between them. Much of the remaining online speculation about Q*, these researchers caution, is simply noise.
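To make the Q-learning idea concrete, here is a minimal, purely illustrative sketch of classic tabular Q-learning on a toy environment. Nothing here reflects what OpenAI’s Q* actually is – the environment, hyperparameters, and reward are all invented for the example; it only shows the textbook technique the researchers are referring to, in which an agent improves by taking actions and receiving rewards.

```python
import random

# Toy environment: a 1-D corridor of 5 cells. The agent starts at
# cell 0 and earns a reward of 1.0 for reaching the last cell.
N_STATES = 5
ACTIONS = [-1, +1]               # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# The Q-table maps (state, action) pairs to estimated values.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; the episode ends with reward 1.0 at the last cell."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                       # training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # The Q-learning update: nudge Q(s, a) toward the reward plus
        # the discounted value of the best action in the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy moves right in every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

The point of the sketch is the update rule in the inner loop: the agent learns purely from rewards, with no labeled examples, which is why Q-learning is a staple of reinforcement learning rather than supervised learning.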
The Potential Impact of Q*
Lamerz believes that if Q* uses some of the techniques described in a paper published by OpenAI researchers in May, it could significantly increase the capabilities of language models. Based on the paper, OpenAI may have found a way to supervise the “chains of thought” of language models, guiding them along more desirable and logical reasoning “paths” to reach their conclusions.
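The idea of guiding a chain of thought can be sketched in a few lines. The toy below is an illustration only, loosely inspired by the step-by-step verification idea in the May paper: a verifier scores every intermediate reasoning step rather than just the final answer, and the chain whose steps all check out is preferred. Here the “verifier” is a trivial arithmetic checker invented for the example; a real system would use a learned reward model.

```python
def verify_step(step):
    """Return 1.0 if an 'a + b = c' step is arithmetically correct, else 0.0.
    (A stand-in for a learned step-level verifier.)"""
    left, _, result = step.partition("=")
    a, _, b = left.partition("+")
    try:
        return 1.0 if int(a) + int(b) == int(result) else 0.0
    except ValueError:
        return 0.0

def chain_score(chain):
    """Score a chain of thought as the product of its per-step scores,
    so a single faulty step sinks the whole chain."""
    score = 1.0
    for step in chain:
        score *= verify_step(step)
    return score

# Two hypothetical candidate reasoning chains for "What is 12 + 7 + 5?".
chains = [
    ["12 + 7 = 19", "19 + 5 = 24"],   # every step checks out
    ["12 + 7 = 18", "18 + 5 = 23"],   # first step is wrong
]

# Pick the chain whose intermediate steps score best.
best = max(chains, key=chain_score)
print(best)  # the fully correct chain wins
```

Scoring each step, rather than only the final answer, is what distinguishes this style of supervision: the second chain is internally consistent from its second step onward, yet it is rejected because one early step fails.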
Not a Threat to Humanity
But whatever Q* turns out to be, a system that solves simple mathematical equations will not pose a danger to humanity.