Self-Learning and the Future: How Artificial Neural Networks Reveal Learning Methods in Biological Minds

Over the past decade, we have witnessed remarkable advances in artificial intelligence, with most systems relying on vast amounts of labeled data to reach peak performance. This strategy, known as supervised learning, has been successful but has clear shortcomings, and it raises questions about how differently machines and brains learn. Animals, including humans, do not learn from labeled data; they explore their environment to make sense of the world around them. This is where "self-supervised learning" comes in, an approach that has become a focal point for many researchers in computational neuroscience. In this article, we discuss these strategies, how artificial intelligence is advancing through self-supervised methods, and how those methods may reveal new ways of understanding how our brains work.

History of Self-Supervised Learning and Challenges of Supervised Learning

Over the past decade, the most successful artificial intelligence systems have relied on enormous amounts of manually labeled data. This supervised learning strategy uses samples of labeled data to train artificial neural networks to distinguish between objects, for example, classifying an image as "tabby cat" or "tiger cat." The method can be remarkably successful, but it suffers from inherent flaws: it depends on data that takes significant human effort to label, and the networks often take shortcuts, learning to associate labels with superficial or irrelevant features rather than understanding the data at a deeper level.

Some scientists have issued warnings about this. Alexei Efros of the University of California, Berkeley, has likened these systems to college students who skip class all semester and cram the night before the exam: they do not really learn the material, but they manage to pass the test by latching onto superficial cues. How, then, might self-supervised learning reveal something about how the human brain learns? Researchers working at the intersection of machine learning and neuroscience now argue that it offers a more plausible model of how living creatures learn.

Self-Supervised Learning Techniques

Some computational neuroscientists have turned to self-supervised learning strategies that avoid relying on human-labeled data; instead, the model derives its labels from the data itself. Large language models, for example, need no external labels: they are trained simply to predict the next word in a sentence from the words that precede it. This approach has proven unprecedentedly successful in language processing and marks a qualitative leap toward models that distinguish more accurately between objects and concepts.
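The next-word objective can be illustrated with a toy model. The sketch below is a minimal illustration, not how real language models work (they use large neural networks trained on billions of tokens): the point is only that the training "labels" come from the raw text itself, since each word's label is simply the word that follows it.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Self-supervised training: the 'label' for each word is the next word."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently seen continuation from training."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = ["the cat sat on the mat", "the cat chased the mouse"]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (seen twice, vs. once for mat/mouse)
```

No human ever labeled anything here; the supervisory signal was extracted from the sentences themselves, which is the defining move of self-supervised learning.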

In the field of computer vision, a team led by Kaiming He introduced the "masked autoencoder" model, which builds on an earlier idea: the model conceals parts of an image and trains the neural network to restore the missing details. The idea is that the system learns the fundamentals of shape rather than superficial patterns, giving it a better grasp of the essence and characteristics of the image. This shift illustrates the central principle of self-supervised learning: building knowledge from the bottom up through self-generated tasks and interaction with the data, rather than relying on external labels.
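The two ingredients described above, hiding most of the input and scoring the reconstruction only on the hidden parts, can be sketched in a few lines. This is an illustrative fragment under simplifying assumptions (a flat list of pixel values stands in for an image, and no actual neural network is involved), not He's implementation:

```python
import random

def mask_pixels(pixels, mask_ratio=0.75, seed=0):
    """Hide a fraction of positions; the hidden values become the targets."""
    rng = random.Random(seed)
    hidden = set(rng.sample(range(len(pixels)), int(len(pixels) * mask_ratio)))
    visible = [v if i not in hidden else None for i, v in enumerate(pixels)]
    targets = {i: pixels[i] for i in hidden}
    return visible, targets

def reconstruction_loss(predicted, targets):
    """Mean squared error, computed only on the masked positions."""
    return sum((predicted[i] - t) ** 2 for i, t in targets.items()) / len(targets)

image = [0.1, 0.5, 0.9, 0.4, 0.2, 0.8, 0.3, 0.7]
visible, targets = mask_pixels(image)       # 75% of positions become None
loss = reconstruction_loss(image, targets)  # a perfect reconstruction scores 0
```

A network trained to drive this loss down cannot succeed by memorizing superficial cues; it must infer the hidden content from the visible context, which is why the technique encourages learning the structure of images.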

Simulating Human Learning in Computational Systems

Some neuroscientists believe that these computational systems mirror how humans learn. Evidence suggests that much of what the human brain does relies on a kind of self-supervised learning. When people interact with their surroundings, they continuously predict what will happen next, such as the position of a moving object, and this closely resembles what a self-supervised model does when it fills in gaps in its data. When a prediction turns out to be wrong, the error itself becomes a teaching signal, which is exactly what self-supervised learning in neural networks exploits.
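The moment-to-moment prediction described above can be caricatured in a few lines. This sketch uses a deliberately simple constant-velocity guess, an illustrative assumption rather than a model of any actual brain circuit: it predicts where a moving object will be next, and the prediction error is the "surprise" that would drive learning.

```python
def predict_next_position(positions):
    """Constant-velocity guess: next = last + (last - previous)."""
    if len(positions) < 2:
        return positions[-1]
    return positions[-1] + (positions[-1] - positions[-2])

def prediction_error(predicted, observed):
    """The surprise that would serve as a learning signal."""
    return abs(observed - predicted)

track = [0.0, 1.0, 2.0, 3.0]          # an object moving steadily
guess = predict_next_position(track)   # 4.0: steady motion is predictable
# An observation of 4.0 yields zero error; a sudden jump yields a large
# error, and it is precisely that error a predictive learner would use.
```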

Experiments have been conducted to reconstruct human learning pathways in computational systems. In one intriguing experiment, scientists developed a system with two distinct learning pathways: one for recognizing objects and the other for processing their motion. When the model was trained, the results showed that this architecture lets the two kinds of knowledge feed a shared, cumulative memory. What matters most is that all of this reduces reliance on learning schemes that depend solely on human-provided labels, paving the way for systems that learn more the way the human brain does.
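The two-pathway idea can be sketched abstractly. In the fragment below, the structure is hypothetical and chosen for illustration (the actual experiment used trained deep networks): one pathway summarizes what is in a frame, the other tracks how it moves between frames, and both outputs are pooled into a shared record.

```python
def what_pathway(frame):
    """'Ventral'-style appearance summary: here, just mean intensity."""
    return sum(frame) / len(frame)

def where_pathway(prev_frame, frame):
    """'Dorsal'-style motion signal: per-position change between frames."""
    return [b - a for a, b in zip(prev_frame, frame)]

def process_clip(frames):
    """Run both pathways over a clip and pool their outputs together."""
    memory = []
    for prev, cur in zip(frames, frames[1:]):
        memory.append({"what": what_pathway(cur),
                       "where": where_pathway(prev, cur)})
    return memory

clip = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]  # a bright spot drifting across
trace = process_clip(clip)                 # one what/where record per step
```

Keeping identity and motion in separate streams, then combining them downstream, is the architectural choice the experiment tested against the single-pathway alternative.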

Research on Human Brain Activity Compared to Artificial Intelligence

Recent research shows intriguing similarities between brain activity in animals such as mice and the internal activity of artificial intelligence systems exposed to the same visual stimuli. A group of researchers led by Richards recorded neural activity in the visual cortex of mice as they watched videos, making it possible to compare how the mouse brain and the artificial network respond to identical stimuli. The results pointed to specialized pathways in the brain, one tied to stationary objects and the other to motion, which together allow accurate predictions of upcoming visual input. This supports the hypothesis that having multiple pathways helps us understand the world around us, and that a single pathway is insufficient.

Self-Learning Techniques in Artificial Intelligence

Self-supervised learning is becoming more common in artificial intelligence models such as Wav2Vec 2.0, which a team led by Jean-Rémi King used in this line of research. The system shows an impressive ability to convert audio into latent representations without any prior labels. It was trained on 600 hours of audio data, roughly what a child is exposed to in the first two years of life. The team then played audio clips from audiobooks in English, French, and Mandarin to the model and compared its internal activity with brain recordings from 412 people listening to the same clips. The research showed that activity in the network's early layers aligned with activity in the primary auditory cortex, while activity in its deeper layers aligned with activity in higher-level brain regions.
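Brain-model comparisons of this kind are commonly quantified by correlating a model layer's responses with neural recordings across a shared set of stimuli. The metric below, plain Pearson correlation, is an assumed stand-in for illustration, not necessarily the study's exact analysis:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length response vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_matching_layer(layer_responses, brain_responses):
    """Index of the model layer whose responses best track a brain region."""
    scores = [pearson(layer, brain_responses) for layer in layer_responses]
    return scores.index(max(scores))

# Hypothetical responses to four stimuli: an early layer tracks the
# recording closely, a deep layer runs in the opposite direction.
layers = [[1, 2, 3, 4], [4, 3, 2, 1]]
print(best_matching_layer(layers, [1.1, 2.0, 2.9, 4.2]))  # 0
```

Running this layer by layer against recordings from different brain regions is what lets researchers say that early layers "align with" early auditory cortex and deep layers with higher regions.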

Challenges and Metrics in Artificial Intelligence Techniques

Despite these achievements, there is still debate about the effectiveness of these methods. Computational neuroscientist Josh McDermott notes that some experiments yield inadequate representations: the signals produced in a network's deeper layers may not accurately reflect the representations found in living brains. Many in the scientific community see self-supervised learning as an advance, but one that still inherits many of the issues of supervised models. More research is needed to make the models better reflect how brains process information, potentially by adding more feedback connections, which may help simulate the brain's complex dynamics.

Future Conclusions and Their Implications for Artificial Intelligence

Future projects will need to combine self-supervised learning with richer network architectures, such as recurrent neural networks, that can better simulate how brains work. If future research demonstrates systematic similarities across a variety of sensory systems, it will be strong evidence that brains and machines process information in related ways. Such findings may push us further toward understanding the neural processes that govern learning and intelligence, and may yield more powerful artificial intelligence models in new fields.
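Recurrence, the ingredient singled out above, simply means the network's state at one step feeds back in at the next, so the system carries a memory of its history. A minimal sketch, assuming a scalar state and hand-picked weights purely for illustration:

```python
import math

def rnn_step(state, x, w_state=0.5, w_input=1.0, bias=0.0):
    """One recurrent update: the previous state is part of the next input."""
    return math.tanh(w_state * state + w_input * x + bias)

def run_sequence(inputs, state=0.0):
    """Feed a sequence through; the final state summarizes the history."""
    for x in inputs:
        state = rnn_step(state, x)
    return state

early = run_sequence([1.0, 0.0, 0.0])  # same inputs, different order...
late = run_sequence([0.0, 0.0, 1.0])   # ...leave different final states
```

That order sensitivity, absent from a purely feedforward pass, is what makes recurrent architectures a closer match for the feedback-rich dynamics of real brains.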

Learning and Adaptation in Artificial Intelligence and Biological Systems

It is worth noting that biological systems possess an adaptability and flexibility that are central to understanding how information is processed. Despite advances in artificial intelligence, current methods and systems still fall short of the complexity found in biological systems. Since neural activity in the brain carries deep implications for how we process language and interact with our environment, understanding these complex processes holds promise for pushing past the current limits of artificial intelligence models.

Source: https://www.quantamagazine.org/self-taught-ai-shows-similarities-to-how-the-brain-works-20220811/
