Not so simple machines: Cracking the code for materials that can learn
It’s easy to think of machine learning as an entirely digital phenomenon, enabled by computers and algorithms that mimic brain-like behavior.
But the earliest computers were mechanical machines, and now a small but growing body of research shows that mechanical systems can also “learn.” Physicists at the University of Michigan have contributed the latest entry in this field.
The team of Shuaifeng Li and Xiaoming Mao at the University of Michigan has designed an algorithm that provides a mathematical framework for how learning works in a lattice known as a mechanical neural network.
“We saw that the material could learn tasks and perform calculations on its own,” Li said.
The researchers have shown how the algorithm can be used to “train” materials to solve problems, such as identifying different species of iris plants. Such materials could one day form structures that solve more advanced problems, such as airplane wings that optimize their shape for different wind conditions without help from a human or a computer.
That future is still a long way off, but insights from the new UM study could also provide more immediate inspiration for researchers outside the field, said postdoctoral researcher Li.
The algorithm is based on a method called backpropagation, which has been used to implement learning in digital and optical systems. Because the algorithm appears indifferent to how information is carried, it could also help open new avenues for exploring how living systems learn, the researchers say.
“We see backpropagation theory being successful in many physical systems,” Li said. “I think it might also help biologists understand how biological neural networks work in humans and other species.”
Li, a postdoctoral researcher, and Mao, a professor in the Department of Physics at the University of Michigan, published their new research in the journal Nature Communications.
MNN 101
The idea of using physical objects in computing has been around for decades. But the focus on mechanical neural networks is newer, and interest is growing along with other recent advances in artificial intelligence.
Most of these advances—certainly the most obvious ones—are in the field of computer technology. Hundreds of millions of people turn to AI-powered chatbots like ChatGPT every week for help writing emails, planning vacations, and more.
These artificial intelligence assistants are based on artificial neural networks. Although their workings are complex and largely invisible, they provide a useful analogy for understanding mechanical neural networks, Li said.
When using a chatbot, a user types in a command or question, which is interpreted by a neural network algorithm running on a computer network with massive processing power. Based on what the system has learned from vast amounts of data, it generates a response, or output, that pops up on the user’s screen.
Mechanical neural networks, or MNNs, have the same basic elements. For Li and Mao’s study, the input was a weight attached to a material that acted as the processing system. The output was the way the material changed its shape under that weight.
“Force is the input message, the material itself is like the processor, and the deformation of the material is the output or response,” Li said.
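To make that analogy concrete, here is a minimal sketch of the force-in, deformation-out pipeline, using an idealized linear spring as a stand-in “processor.” The Hookean behavior and the numbers are illustrative assumptions, not details from the study.

```python
# Minimal sketch: input force -> material "processor" -> output deformation.
# An idealized linear (Hookean) spring stands in for the lattice; the
# numbers are illustrative only.

def deformation(force: float, stiffness: float) -> float:
    """Displacement of a linear spring under an applied force (Hooke's law)."""
    return force / stiffness  # x = F / k

# Input message: a 2.0 N weight; processor: a spring with k = 50 N/m.
print(deformation(force=2.0, stiffness=50.0))  # output response: 0.04 m
```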
In this study, the “processor” material is a rubbery 3D-printed mesh composed of tiny triangles that form larger trapezoids. These materials learn by adjusting the stiffness or flexibility of specific parts of the lattice.
To enable future applications such as aircraft wings that adjust their properties mid-flight, MNNs will need to be able to adjust these parts on their own. Researchers are working on materials that can do this, but you can’t order them from a catalog yet.
So Li simulated this behavior by printing new versions of the processor with thicker or thinner segments to get the desired responses. The key contribution of Li and Mao’s work is the algorithm that dictates how those segments should be adjusted.
How to train your MNN
Although the math behind backpropagation theory is complicated, the idea itself is intuitive, Li said.
To start the process, you need to know what your input is and how you want the system to respond. You then apply the input and see how the actual response differs from the desired response. The network then takes that difference and uses it to inform how it changes itself to get closer to the desired output in subsequent iterations.
Mathematically, the difference between the actual output and the desired output is captured by an expression called the loss function. By computing mathematical quantities called gradients from this loss function, the network learns how to change.
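As a rough digital illustration of that loop, the sketch below trains a single adjustable parameter, standing in for a segment’s stiffness, by following the gradient of a squared-error loss. The toy linear response and all numbers are assumptions for the example, not values from the paper.

```python
# A toy training loop in the spirit of backpropagation: compare the actual
# response to the desired one, form a loss, and step against its gradient.
# The "material" is y = k * x, where k plays the role of a segment stiffness.

x, y_target = 1.0, 0.5             # input force and desired deformation
k = 2.0                            # initial stiffness parameter (illustrative)
lr = 0.1                           # learning-rate step size

for step in range(50):
    y = k * x                      # actual response of the toy material
    loss = (y - y_target) ** 2     # squared difference from desired output
    grad = 2 * (y - y_target) * x  # d(loss)/dk via the chain rule
    k -= lr * grad                 # adjust the parameter against the gradient

print(k)  # converges toward 0.5, the stiffness giving the desired response
```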
A notable feature of their MNN, Li said, is that the lattice itself provides that gradient information, if you know what to look for.
“It can display gradients automatically,” Li said, adding that cameras and computer code helped capture and process that information in this research. “It’s really convenient and efficient.”
Consider the case where the lattice consists entirely of segments with the same thickness and stiffness. If you hang a weight at the center node (the point where the line segments intersect), the adjacent nodes to its left and right will move down the same amount due to the symmetry of the system.
But let’s say you want to create a lattice that not only provides an asymmetric response, but the most asymmetric response possible. That is, you want to build a network that maximizes the difference in motion between the nodes to the left and right of the weight.
Li and Mao used their algorithm and a simple experimental setup to create a lattice that gave this solution. (Another similarity to biology, Li says, is that the method only cares about what nearby connections are doing, similar to how neurons operate.)
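A digital caricature of that optimization might look like the sketch below, which assumes, purely for illustration, that each neighboring node deflects independently as u = F / k and that stiffnesses are confined to a fixed range; the real lattice mechanics are more coupled than this.

```python
import numpy as np

# Toy version of the asymmetry task: gradient ascent on the difference
# between left and right node deflections, assuming (for illustration) the
# independent responses u = F / k and a bounded stiffness range.

F = 1.0                           # weight hung at the center node
k = np.array([1.0, 1.0])          # stiffness of the [left, right] segments
k_min, k_max = 0.5, 5.0           # allowed stiffness range
lr = 0.05                         # step size

for step in range(500):
    # Objective J = u_left - u_right = F/k[0] - F/k[1]
    grad = np.array([-F / k[0]**2, F / k[1]**2])  # dJ/dk
    k = np.clip(k + lr * grad, k_min, k_max)      # ascend, stay in range

print(k)                    # -> [0.5, 5.0]: softest left, stiffest right
print(F / k[0] - F / k[1])  # the maximal asymmetry these bounds allow
```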
Going a step further, the researchers also provided a large dataset of input forces, similar to the training data used in digital machine learning, to train their MNN.
In one example, different input forces corresponded to the different sizes of petals and sepals on iris plants, defining traits that help differentiate species. Li could then present flowers of unknown species to the trained lattice, which sorted them correctly.
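For comparison, the same sorting task is a classic exercise for digital neural networks. The sketch below trains a small network on the well-known Fisher iris measurements; it is an analogy to the mechanical experiment, not a model of the lattice itself.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Digital analogy: a small artificial neural network learns to sort iris
# species from four measurements per flower, the same kind of task the
# mechanical lattice was trained on with forces as inputs.

X, y = load_iris(return_X_y=True)  # sepal/petal lengths and widths
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)          # trained via backpropagation

print(clf.score(X_test, y_test))   # typically sorts most test flowers correctly
```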
Li is already working to build up the complexity of the system and the problems it can solve using MNNs that carry sound waves.
“We can encode more information into the input,” Li said. “With sound waves, you have amplitude, frequency and phase that encode the data.”
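As a hypothetical illustration of that richer encoding, the snippet below packs three data values into the amplitude, frequency, and phase of a single sine wave; the specific scheme is an assumption for the example, not the one used in Li’s ongoing work.

```python
import numpy as np

# Hypothetical encoding: carry one data value each in a wave's amplitude,
# frequency, and phase. The scheme is illustrative, not from the study.

amplitude, frequency, phase = 0.8, 440.0, np.pi / 4  # three data values
t = np.linspace(0.0, 0.01, 1000)                     # 10 ms of signal
signal = amplitude * np.sin(2 * np.pi * frequency * t + phase)

print(signal[:5])  # the waveform now carries all three values at once
```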
At the same time, the UM team is also working on a broader class of material networks, including polymer and nanoparticle components. With these, they can create new systems in which to apply their algorithms and work toward fully autonomous learning machines.
This work was supported by the Office of Naval Research and the National Science Foundation’s Center for Complex Particle Systems (COMPASS).