BrainChip’s second-generation Akida platform enables high-performance, intelligent edge devices
BrainChip Holdings Ltd, the producer of ultra-low-power, fully digital, neuromorphic AI IP, is launching the second generation of its Akida platform, an event-based digital neuromorphic processor IP that mimics the human brain for efficient data processing. The updates include faster processing, self-management features, on-device learning, and development and deployment enhancements, ultimately targeting model training efficiency and cost.
The second-generation Akida platform aims to enable intelligent edge devices for the Artificial Intelligence of Things (AIoT) solutions and services market with its advanced neural processing system, according to the vendor. Now underpinned by 8-bit processing, the platform continues to drive high performance and efficiency for edge devices.
BrainChip’s announcement features three central innovations that redefine the Akida platform: Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions; Vision Transformers (ViT) acceleration; and on-device learning for ongoing improvement and data-less configuration.
Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions consume raw data directly from sensors and improve the platform's ability to process raw time-continuous streaming data with high accuracy while reducing model size and operation count. Such data is central to applications including audio classification, MRI and CT scan analysis, target tracking, time series analytics, and video analytics, which are critical in industries such as automotive, digital health, and smart home/city. TENNs ultimately shorten design cycles and lower development costs through streamlined implementations.
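As a rough illustration of the general idea behind spatial-temporal convolutions on streaming data, the sketch below factors the operation into a per-frame spatial convolution followed by a causal temporal convolution, using standard Keras layers. The shapes and layer sizes are arbitrary assumptions for demonstration; this is not BrainChip's TENN implementation.

```python
# Illustrative only: a generic (2+1)D spatio-temporal convolution over a short
# window of streaming sensor frames, built with standard Keras layers.
# Shapes and hyperparameters are assumptions; this is NOT BrainChip's TENN.
import tensorflow as tf

frames, height, width, channels = 16, 32, 32, 1  # e.g. a short window of low-res frames

inputs = tf.keras.Input(shape=(frames, height, width, channels))

# Spatial convolution applied independently to each frame.
x = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Conv2D(8, kernel_size=3, padding="same", activation="relu")
)(inputs)
x = tf.keras.layers.TimeDistributed(tf.keras.layers.GlobalAveragePooling2D())(x)

# Causal temporal convolution: each output step sees only past frames,
# which is what makes frame-by-frame streaming inference possible.
x = tf.keras.layers.Conv1D(16, kernel_size=3, padding="causal", activation="relu")(x)

# Classify based on the most recent time step.
outputs = tf.keras.layers.Dense(10, activation="softmax")(x[:, -1, :])
model = tf.keras.Model(inputs, outputs)
model.summary()
```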
Akida’s Vision Transformers (ViT) acceleration builds on ViTs' strong performance on computer vision tasks such as image classification, object detection, and semantic segmentation. In conjunction with Akida’s simultaneous multi-layer processing and hardware support for skip connections, ViT acceleration enables Akida to self-manage the execution of complex networks such as ResNet-50 without CPU involvement, reducing the load on the host system.
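For context on what hardware support for skip connections implies, the minimal residual block below shows the structure in question, of the kind used throughout ResNet-50. It is a generic Keras sketch with assumed layer sizes, not a representation of how Akida executes it.

```python
# Illustrative only: a minimal residual block showing a skip connection,
# the structural feature (as in ResNet-50) referred to in the article.
# Layer sizes are arbitrary assumptions.
import tensorflow as tf

def residual_block(x, filters=64):
    shortcut = x  # the "skip" path carries the input forward unchanged
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    # The skip connection: add the original input back onto the conv output.
    return tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([shortcut, y]))

inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = residual_block(inputs)
model = tf.keras.Model(inputs, outputs)
```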
“We see an increasing demand for real-time, on-device, intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, senior vice president in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”
On-device learning, which allows the Akida IP platform to improve continuously and perform data-less customization, opens the door to solutions not previously possible, according to the vendor. These include small-form-factor devices, such as hearables and wearables that take in raw audio, and medical devices that monitor vital signs.
“This is truly event based, which means it only operates when needed; it only communicates when needed,” said Nandan Nayampally, chief marketing officer at BrainChip. “The notable features are that it handles many layers simultaneously, which is unlikely for the traditional deep learning accelerators. It handles long range skip connections, which means it can go across multiple layers, which is important for simplifying data propagation. And then it supports 8-, 4-, 2-, and 1-bit weights and activations, which makes it very scalable across most edge AI models.”
“The real thing is you can think of very disruptive edge solutions now,” said Nayampally. “And by that we mean you can think of cloudless edge solutions that are intelligent enough to handle all kinds of vision, sensory, and predictive capabilities.”
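To make the bit-width point from the quote above concrete, the generic sketch below quantizes a float weight tensor to 8, 4, 2, or 1 bits using a textbook symmetric scheme. It is illustrative only and makes no claim about BrainChip's actual quantizer.

```python
# Illustrative only: uniform symmetric quantization of a weight tensor to
# 8, 4, 2, or 1 bits, the kind of low-bit representation referenced above.
# A generic textbook scheme, not BrainChip's quantizer.
import numpy as np

def quantize_weights(w, bits):
    """Map float weights onto a signed integer grid with `bits` bits."""
    if bits == 1:
        # 1-bit: binarize to the sign, scaled by the mean magnitude.
        scale = np.mean(np.abs(w))
        return np.sign(w).astype(np.int8), scale
    levels = 2 ** (bits - 1) - 1                       # 127, 7, 1 for 8-, 4-, 2-bit
    scale = np.max(np.abs(w)) / levels
    q = np.clip(np.round(w / scale), -levels, levels)  # integer codes
    return q.astype(np.int8), scale                    # dequantize later as q * scale

w = np.random.randn(128, 64).astype(np.float32)
for bits in (8, 4, 2, 1):
    q, scale = quantize_weights(w, bits)
    err = np.mean(np.abs(w - q * scale))
    print(f"{bits}-bit: mean absolute error {err:.4f}")
```

As the printed errors show, fewer bits mean a coarser approximation of the original weights, which is the trade-off behind the scalability claim in the quote.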
Other advancements in Akida’s latest iteration include:
- MetaTF software that lets developers work within their framework of choice, such as TensorFlow/Keras or Edge Impulse, for streamlined development and deployment of AI solutions (see the sketch following this list).
- A runtime engine designed for high efficiency that transparently and autonomously manages model acceleration.
- Support for Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), Vision Transformers (ViTs), and Spiking Neural Networks (SNNs).
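As a rough sketch of the framework-of-choice workflow described above, the example below defines a small Keras model of the kind a developer might hand off to MetaTF. The architecture and sizes are arbitrary assumptions, and the MetaTF conversion step itself is not shown.

```python
# Illustrative only: a small Keras model built in the developer's framework of
# choice before handing it to MetaTF for deployment on Akida. The architecture
# and sizes are arbitrary; the MetaTF conversion step itself is not shown here.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),                # e.g. a small camera frame
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),           # e.g. four keyword/gesture classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
# From here, MetaTF's tooling would quantize and map the trained model onto Akida.
```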
To learn more about BrainChip’s Akida platform advancements, please visit https://brainchip.com/.