The new design is stackable and reconfigurable to swap out and build upon existing neural network sensors and processors

Imagine a more sustainable future where cellphones, smartwatches and other wearables don’t have to be shelved for a newer model or thrown away. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip, like LEGO bricks embedded into an existing build. These reconfigurable chips could keep devices updated while reducing our e-waste.

MIT engineers have now taken a step toward that modular vision with a LEGO-like design for a stackable and reconfigurable artificial intelligence chip.

The design includes alternating layers of sensing and processing elements, as well as light-emitting diodes (LEDs), which allow the layers of the chip to communicate optically. Other modular chip designs use conventional wiring to route signals between layers. Such complex connections are difficult, if not impossible, to cut and rewire, making these stackable designs unreconfigurable.

The MIT design uses light instead of physical wires to transmit information through the chip. The chip can therefore be reconfigured, swapping or stacking layers to add new sensors or updated processors, for example.

“You can add as many layers of computation and sensors as you want, for example for light, pressure and even smell,” says Jihoon Kang, a postdoctoral fellow at MIT. “We call it a reconfigurable LEGO-like AI chip because it’s infinitely expandable depending on the combination of layers.”

Researchers aim to apply the design to edge computing devices — autonomous sensors and other electronic devices that operate independently of centralized or distributed resources such as supercomputers or cloud computing.

“As we enter the era of the sensor network-based Internet of Things, the demand for advanced multifunctional computing devices will increase dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide great versatility for edge computing in the future.”

The team’s results will be published in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hyun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Light the way

The team’s design is currently configured to perform basic image recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses: arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on the chip, without the need for external software or an internet connection.

In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a traditional approach would relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and its artificial synapse array to enable communication between the layers, without the need for a physical connection.

“Other chips are physically wired through metal, which makes them difficult to rewire and redesign, so if you wanted to add a new function you would have to create a new chip,” says MIT postdoc Hyunseok Kim. “We’ve replaced that physical wired connection with an optical communications system that gives us the freedom to stack and add chips as we please.”

The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors constitute an image sensor for receiving data, and the LEDs transmit that data to the next layer. When a signal (say, an image of a letter) reaches the image sensor, the image’s light pattern encodes a particular configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incident LED light.
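The layer-to-layer relay described above can be loosely pictured in code. The following is a minimal, purely illustrative sketch, not the authors’ actual scheme: the function name, the 0–1 intensity scale, and the fixed threshold are all assumptions made for the example, which simply turns photodetector readings into an on/off LED pixel pattern for the next layer.

```python
import numpy as np

def optical_relay(incident_light, threshold=0.5):
    """Hypothetical sketch: threshold photodetector readings into an
    on/off LED pixel pattern that illuminates the next layer."""
    detected = np.clip(incident_light, 0.0, 1.0)        # photodetector response, clipped to [0, 1]
    led_pattern = (detected > threshold).astype(float)  # each LED pixel fires or stays dark
    return led_pattern

# A small "image" arriving at one layer's photodetectors (made-up values):
image = np.array([[0.9, 0.1, 0.8],
                  [0.7, 0.2, 0.6],
                  [0.9, 0.0, 0.95]])
print(optical_relay(image))  # the binary LED pattern passed onward
```

In the real chip this conversion happens in hardware, light in and light out, rather than in software; the sketch only conveys the idea that each layer re-emits what it detects.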

Stack up

The team fabricated a single chip with a core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition blocks, each comprising an image sensor, an optical communication layer, and an artificial synapse array trained to classify one of the three letters M, I, or T. They then projected a pixelated image of random letters onto the chip and measured the electrical current each neural network produced in response. (The larger the current, the more likely it is that the image is the letter that particular array is trained to recognize.)
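The readout logic in the parenthetical above amounts to a winner-take-all comparison: whichever letter-specific network draws the largest current is the prediction. The snippet below is a hedged sketch of that idea only; the current values are invented for illustration and do not come from the paper.

```python
# Hypothetical current readings (arbitrary units) from the three
# letter-specific artificial synapse arrays -- made-up numbers.
currents = {"M": 0.12, "I": 0.87, "T": 0.31}

# Winner-take-all readout: the network with the strongest response
# determines the classified letter.
predicted = max(currents, key=currents.get)
print(predicted)  # prints "I"
```

The actual chip performs this comparison implicitly in analog hardware; the sketch merely restates the decision rule in software terms.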

The researchers then swapped in a “denoising” processor, and found that the chip identified the images accurately.

“We demonstrated stackability, interchangeability, and the ability to build new features into the chip,” notes Min-Kyu Song, a postdoc at MIT.

Researchers plan to add more sensing and processing capabilities to the chip, and envision limitless applications.

“We can add layers to a cellphone’s camera so it can recognize more complex images, or make them into health monitors that can be embedded into a wearable electronic skin,” suggests Choi, who previously developed “smart” skins with Kim for monitoring vital signs.

Another idea, he adds, involves modular chips, built into electronics, that consumers can upgrade with the latest sensor and processor “building blocks.”

“We can create a general chip platform, and each layer could be sold separately, like a video game,” says Jeehwan Kim. “We could make different types of neural networks, like for image or speech recognition, and let the customer choose what they want and add to an existing chip, like a LEGO.”

This research was supported in part by the South Korean Ministry of Trade, Industry and Energy (MOTIE); the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.
