Keeping up with a field as fast-moving as artificial intelligence is a tall order. So until an AI can do it for you, here's a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we haven't covered on their own.
OpenAI Scandal Exposes Business Risks
This week, it was impossible to ignore the drama unfolding at OpenAI, however much fatigue the story inflicted on this journalist's sleep-deprived brain. The board ousted Sam Altman, the company's CEO and co-founder, over what it deemed a misalignment of his priorities: commercializing AI at the expense of safety.
Altman has since been reinstated as CEO, largely thanks to pressure from Microsoft, and most of the original board has been replaced. But the saga illustrates a risk facing AI companies, even large and influential ones like OpenAI: the temptation to lean on commercial funding sources only grows.
It's not that AI labs are eager to bind themselves to commerce-driven venture capital firms and tech giants. But the exorbitant cost of training and developing AI models makes that fate hard to avoid.
According to CNBC, training a large language model like GPT-3, the precursor to OpenAI’s flagship AI model GPT-4, can cost over $4 million. This estimate does not include the costs of hiring data scientists, AI experts, and software engineers, who command high salaries.
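To see where estimates of that magnitude come from, here is a back-of-envelope compute-cost calculation. This is not CNBC's methodology, and every figure below is an assumption chosen for illustration, but the rule of thumb that training takes roughly 6 FLOPs per parameter per training token is widely used:

```python
# Back-of-envelope LLM training cost (all figures are illustrative assumptions).
# Common rule of thumb: training FLOPs ≈ 6 * parameters * training tokens.

params = 175e9           # GPT-3-scale model: 175B parameters
tokens = 300e9           # ~300B training tokens, as reported for GPT-3
total_flops = 6 * params * tokens

gpu_peak_flops = 312e12  # assumed peak throughput of one modern accelerator, FLOP/s
utilization = 0.35       # assumed realistic hardware utilization
price_per_gpu_hour = 2.0 # assumed cloud price in USD

gpu_hours = total_flops / (gpu_peak_flops * utilization) / 3600
print(f"GPU-hours: {gpu_hours:,.0f}")
print(f"Estimated compute cost: ${gpu_hours * price_per_gpu_hour:,.0f}")
```

Under these assumptions the compute alone lands in the low millions of dollars, and real-world totals climb higher once failed runs, experiments, and data pipelines are counted, before a single salary is paid.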
It is no coincidence that many of the big AI labs have struck strategic agreements with public cloud providers. Compute, especially now that the chips used to train AI models are scarce (to the benefit of vendors like Nvidia), has become more valuable than gold to these labs. OpenAI's chief rival, Anthropic, is backed by both Google and Amazon, while Cohere and Character.ai are supported by Google Cloud, which also serves as their exclusive compute provider.
But as this week showed, those investments come with risk. Tech giants have agendas of their own, and enough weight to push their interests through.
OpenAI tried to preserve some independence through an unusual structure that caps investors' total returns. But Microsoft demonstrated that compute can work like capital in keeping a company responsive: a significant portion of its investment in OpenAI takes the form of Azure cloud credits, and the threat of withholding those credits is enough to get any board's attention.
Unless there is collective investment in capable public compute or in dedicated AI grant programs, this situation is unlikely to change anytime soon. AI companies past a certain scale, like most startups, must give up some control over their destiny if they want to grow. We can only hope that most of them, unlike OpenAI, strike a deal with a devil they know.
Other AI News
Here are some other AI news stories from the past few days:
OpenAI Will Not Destroy Humanity
Did OpenAI invent AI technology that could threaten humanity? Some recent headlines may lead one to believe so. But there’s no need to worry, experts say.
California Considers AI Regulations
The California Privacy Protection Agency is preparing to impose restrictions on AI. The agency has released guidelines on how individuals’ data can be used in AI, inspired by regulations in the European Union.
Bard Answers YouTube Questions
Google announced that its AI program Bard can now answer questions about YouTube videos.
X Launches Grok
Elon Musk, X's owner, confirmed that Grok will be available to all of the company's Premium+ subscribers sometime this week.
Stability AI Launches Video Generator
Stability AI has announced an AI model that generates videos by animating existing images.
Anthropic Releases Claude 2.1
Anthropic has released Claude 2.1, an update to its large language model that makes it a stronger competitor to OpenAI's GPT-series models.
OpenAI and Open Artificial Intelligence
The OpenAI scandal underscored how much power sits with the players steering the emerging AI revolution, prompting many to ask what happens when you rely entirely on a single central player that owns the information, and what happens when things go wrong.
AI21 Labs Raises Funds
AI21 Labs, the company developing generative AI products similar to OpenAI’s GPT-4 and ChatGPT models, raised $53 million last week, bringing its total funding to $336 million.
Machine Learning Insights
It's hard to get AI models to be explicit about when they need more information to produce a reliable answer, since a model has no built-in sense of right and wrong. But by getting a model to reveal a little about its internal workings, you can better understand when it is likely to be dishonest; one simple proxy is sketched below.
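One cheap stand-in for that kind of introspection, an assumption on my part rather than any specific paper's method, is to look at how spread out the model's next-token probability distribution is; a near-uniform distribution suggests the model is guessing:

```python
# Minimal sketch: the entropy of a model's next-token distribution as a
# rough uncertainty signal. Illustrative only; real calibration is harder.
import math

def token_entropy(logits):
    """Shannon entropy (in bits) of the softmax distribution over logits."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [9.0, 1.0, 0.5, 0.2]   # one token dominates: low entropy
uncertain = [2.0, 1.9, 1.8, 1.7]   # near-uniform: the model is guessing

print(f"confident: {token_entropy(confident):.2f} bits")
print(f"uncertain: {token_entropy(uncertain):.2f} bits")
```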
A Purdue team created a human-readable "Reeb map" of how a neural network represents visual concepts in its latent space. Similar items are clustered together, and overlap with other regions can indicate either genuine similarity between those groups or confusion within part of the model. Lead researcher David Gleich said: "What we're doing is taking these complex sets of information coming out of the network and giving people an insight into how the network sees the data at a macro level."
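The Purdue construction itself is topological and not reproduced here; below is only a minimal sketch, assuming scikit-learn, of the more basic idea of mapping a latent space by projecting embeddings to 2D and clustering them:

```python
# Minimal latent-space "map": reduce embeddings to 2D and cluster them.
# Illustrative only; the Purdue work builds topological Reeb maps, not
# a simple PCA/k-means plot.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for embeddings of images of three concepts (e.g. cats/dogs/cars).
embeddings = np.vstack([
    rng.normal(loc=c, scale=1.0, size=(100, 64))
    for c in (0.0, 3.0, 6.0)
])

coords = PCA(n_components=2).fit_transform(embeddings)   # the 2D "map"
labels = KMeans(n_clusters=3, n_init=10).fit_predict(coords)

# Regions where clusters run together hint at places where the
# representation blurs two concepts into one.
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} points")
```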
If your dataset is sparse, it's generally wise not to extrapolate too far beyond it; but if you must, a tool like the "Senseiver" from Los Alamos National Laboratory may be your best option. Built on Google's Perceiver architecture, the model takes a handful of scattered measurements and makes high-accuracy predictions by filling in the gaps.
That could mean climate measurements and other scientific readings, or even 3D data such as low-resolution maps produced by high-altitude scanners. The model is light enough to run on edge devices such as drones, which could then actively search for specific features (in the test case, methane leaks) rather than just collecting readings to hand back for later analysis; a rough sketch of the underlying pattern follows.
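To be clear, what follows is not the Senseiver itself but a toy, untrained PyTorch sketch of the Perceiver-style "sparse measurements in, dense field out" pattern it builds on; all layer sizes and names here are arbitrary assumptions:

```python
# Sketch of a Perceiver-style sparse-to-dense model: a small set of latent
# vectors attends to scattered measurements, then arbitrary query
# coordinates are decoded into dense predictions. Toy-scale, untrained.
import torch
import torch.nn as nn

class SparseToDense(nn.Module):
    def __init__(self, dim=64, n_latents=32):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, dim))
        self.enc_in = nn.Linear(3, dim)     # (x, y, value) per measurement
        self.encode = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.dec_in = nn.Linear(2, dim)     # (x, y) query coordinate
        self.decode = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.out = nn.Linear(dim, 1)        # predicted value at query point

    def forward(self, measurements, queries):
        # measurements: (B, N, 3); queries: (B, M, 2)
        m = self.enc_in(measurements)
        lat = self.latents.expand(measurements.shape[0], -1, -1)
        lat, _ = self.encode(lat, m, m)     # latents attend to sparse data
        q = self.dec_in(queries)
        q, _ = self.decode(q, lat, lat)     # queries attend to latents
        return self.out(q).squeeze(-1)      # (B, M) dense predictions

model = SparseToDense()
obs = torch.rand(1, 10, 3)      # 10 scattered sensor readings
grid = torch.rand(1, 100, 2)    # 100 points where we want estimates
print(model(obs, grid).shape)   # torch.Size([1, 100])
```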
Meanwhile, researchers are working on making the devices that run neural networks look more like a neural network themselves. One team built an array of 16 electrodes and covered it with conductive fibers laid down in a random yet consistently structured network. Where the fibers overlap, they can form or break connections depending on a number of factors, in a way that closely resembles how neurons in our brains form connections and then dynamically reinforce or discard them.
The UCLA/University of Sydney team reported that the network recognized handwritten digits with up to 93.4% accuracy, outperforming earlier approaches of the same kind. It's certainly intriguing, and still far from practical use, but self-organizing networks like this may eventually find their way into the toolbox.
Artificial Intelligence in the Service of Humanity
It is nice to see machine learning models helping people, and we have some examples of that this week.
GeoMatch Tool to Assist Refugees and Migrants
Researchers at Stanford University are working on a tool called GeoMatch that aims to help refugees and immigrants find the best place to settle given their circumstances and skills. The tool doesn't automate the decision: placement officers and other officials still make the call, but for all their expertise and knowledge, they can't be sure their choices are grounded in data. GeoMatch takes a number of features into account and suggests a location where the person is likely to find good job prospects; a stylized sketch of that pattern appears below.
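This is not GeoMatch's actual model or data; it's a minimal sketch, assuming synthetic records and a generic scikit-learn classifier, of the "score each candidate location, then rank" pattern the article describes:

```python
# Minimal sketch of "suggest a location from features": train on historical
# (features, location) -> outcome records, then score a person against
# every candidate location. Synthetic data; not GeoMatch itself.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
locations = ["Region A", "Region B", "Region C"]

feat = rng.random((600, 2))             # [skill_level, language_score]
loc_ids = rng.integers(0, 3, size=600)  # where each person was placed
X = np.column_stack([feat, loc_ids])
# Toy outcome rule: each region rewards a different feature mix.
y = ((feat[:, 0] * (loc_ids == 0) + feat[:, 1] * (loc_ids == 1)
      + 0.5 * (loc_ids == 2)) > 0.4).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Score one (hypothetical) person against every location and rank.
person = [0.8, 0.3]
for i, name in enumerate(locations):
    p = model.predict_proba([person + [i]])[0, 1]
    print(f"{name}: estimated employment probability {p:.2f}")
```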
Automated Feeding System for People Who Cannot Eat Independently
Robotics researchers at the University of Washington introduced an automated feeding system for people who are unable to eat on their own. The system was developed over several iterations and shaped by community feedback, and co-lead Ethan Gordon said: "We've reached the point where we can handle most types of food a person could eat with a fork. Soup, for example, we can't do. But the robot can manage everything from mashed potatoes or noodles to a fruit salad or an actual vegetable salad, as well as pre-cut pizza, a sandwich, or chunks of meat."
Assisting Visually Impaired Individuals
There are several projects aimed at helping visually impaired people navigate the world, from Be My AI to Microsoft's Seeing AI, a suite of models built for everyday tasks. Google had its own entry, Project Guideline, which helps guide someone walking or running along a marked path. Google has now open-sourced it, which usually means a project is being wound down, but its loss is other researchers' gain: the work of a billion-dollar company can now be put to use in personal projects.
FathomVerse Game
Finally, there's FathomVerse, a game/tool designed to identify marine creatures the way apps like iNaturalist identify leaves and plants. But it needs your help: animals like anemones and octopuses are tricky to recognize. So sign up for the beta and see if you can help the project get off the ground!