
BrainChip ASX: Akida neuromorphic processor. AI at the edge.

22 July 2020

A new silicon chip designed to move AI (artificial intelligence) processing out of server farms and into mobile and IoT devices has been developed by BrainChip, an Australian ASX-listed company.



Artificial intelligence has become hugely important because it hands off to high-speed electronics the building blocks of understanding, reasoning, planning, perception, and decision making. For specific applications, electronic devices can perform these tasks orders of magnitude faster than humans.


AI is essentially recognizing patterns in data.



The data can be conventional transactional data, or streaming data produced by sensors. Where AI is particularly useful is in finding patterns in complex data and/or massive data sets.

AI is routinely applied to everyday life. Search engines such as Google are a classic example of AI used at massive scale. Fraud detection in financial transactions, particularly credit cards, is another. Analyzing audio signals can detect cracked railway wheels and worn bearings in machinery. The list of existing and potential AI applications is enormous.

To date, AI has required massive, powerful data centres to house the machines that perform the AI grunt processing.


Artificial Intelligence and edge computing

The vast majority of AI is processed in server farms ("The Cloud") providing the computing grunt, power supply, and cooling needed to support the heavy-duty processing required to run traditional AI models.

The new frontier is AI at the edge - small powerful devices that can perform AI without relying on high-speed links to data centres.

BrainChip has developed and is commercializing a single-chip AI processor that delivers high-performance, small-footprint, ultra energy-efficient edge applications.

This new technology called "Akida™" will have dramatic ramifications by spawning the development of a new breed of smart devices including robots.


Akida™ is modeled on the architecture and data processing methods of the human brain.



Such devices are generically called neuromorphic processors (or neural network processors) and are the latest thing.

BrainChip's first Akida 1000™ chip is now in production, and technology companies have been working on applications for over 12 months.



However, BrainChip's revenue model is licensing IP: allowing others to incorporate its technology into ASICs (application-specific integrated circuits) rather than selling chips. While a slower path to market, the leverage effect is enormous.

I imagine actual chip sales will be mostly for evaluation and R&D purposes.




The Terminator CPU was a neural network processor

A pivotal plot element in the “Terminator” movie series is the fictional Cyberdyne Corporation responsible for developing the self-aware Skynet military command and control system that went rogue and set about killing-off the human race.

The first movie (released in 1984) and its sequels tell the story of the computer scientist Miles Dyson, whose work provided the technology that powered Skynet. Dyson developed his neural network processor by reverse engineering the CPU from the first Terminator (played by Arnold Schwarzenegger), which traveled back through time. The Terminator was crushed in a hydraulic press in a factory, leaving intact only a robot arm and the fabled broken silicon chip - the Terminator's neural network CPU - a gift from the future.

Miles Dyson said of the chip "It was a radical design, ideas we would never have thought of..."




36 years ago, when the first Terminator movie was released, a neural network based CPU was pure futuristic fantasy.


Neuromorphic processor - from science-fiction to reality




It seems we have arrived at the future. Australian company Brainchip Holdings is about to launch the first of a series of chips supporting brain-like neural processing that will likely change the world.

The new chip called Akida™ is designed to learn and adapt.


This could be a Skynet moment.




Artificial intelligence requires processing grunt

So what's the fuss? Why is the development of a neuromorphic processor important and what difference can it make to the processing of artificial intelligence workloads?

To understand this, we first need to look at what artificial intelligence is and how existing computers process it, and then at why the human brain does the same job so much better and more efficiently.

Artificial Intelligence is deep learning and decision making based on finding patterns in large scale data.

The data can be anything from financial transactions, audio signals, packet data traffic through routers, pictures, words – anything that can be transmitted, processed, or stored.


Powerful insights don’t come from hard-edged exact matches, but from fuzzy imperfect matches.



As one example, speech recognition requires tolerance for natural tonal variation, mispronunciations, sloppy grammar, stuttering, repetition, and other speech flaws. Traditional computing, developed over 60 years and part of our everyday lives, is based on precise matching. 1 + 1 always equals 2.

Finding close but not exact matches using computers built for unambiguous precision requires statistical calculations resulting in billions of high-speed computations. The computing power needed just can't fit in small packages or be powered by small batteries.
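
To make the contrast concrete, here is a tiny Python sketch (my illustration, nothing to do with BrainChip's code) of exact versus fuzzy matching. The exact comparison either hits or misses; the similarity score tolerates imperfect input the way speech recognition must:

    # Exact matching: any deviation at all is a complete miss.
    print("recognise" == "recognize")        # False - one character off, match fails

    # Fuzzy matching: score how close two patterns are instead.
    from difflib import SequenceMatcher

    def similarity(a, b):
        """Return a 0.0-1.0 score for how well two strings match."""
        return SequenceMatcher(None, a, b).ratio()

    print(similarity("recognise", "recognize"))   # ~0.89 - a strong, usable match
    print(similarity("recognise", "rhinoceros"))  # much lower - a poor match

Scale that scoring up to billions of comparisons over massive data sets and you have exactly the compute and power problem just described.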


But somehow, animal and human brains do it effortlessly.



And herein lies the motivation for pursuing neuromorphic computing.

If a brain the size of a bird's can outperform server-farm computers in many useful vision, sound, motion, and other sensor processing tasks - using vastly less energy, occupying vastly less volume, and doing it "on the fly" (see what I did there?) - then perhaps we should understand the method.

Neuromorphic hardware mimics brain methods to vastly increase AI computational capacity in a much smaller package using much less power.

BrainChip's Akida™ performs AI in a single chip and requires only a fraction of a watt to operate. It is highly energy efficient.


AI at the edge compared with AI processed in the cloud

The principal difference is where the compute to achieve AI takes place. The vast majority (I am guessing over 90%) of AI is processed in data centres not on the local device where the data has been acquired.

"The edge" is a term to describe devices that interface with the real world - like facial and object recognition cameras, sensors that "smell", audio sensors, industrial sensors and others.

An iPhone is a ubiquitous example of an edge device, but in industry it could be (for example) a vibration analyzer, a video camera that scans objects on a conveyor belt, or an artificial nose. In reality there are literally thousands of use cases for edge AI.


AI at the edge means analyzing sensor data at the point of acquisition avoiding transmission to cloud based data centres.



And that's a game changer because sending data back through the internet to servers that undertake the heavy AI processing has obvious limitations.

Data connections are not always available or reliable, and link speed prevents real-time results. And there are serious concerns that future bandwidth, even with the additional capacity provided by 5G networks, will be swamped by the proliferation of smart IoT devices attempting to connect to cloud computing.

For collision avoidance or steering around landscape features, decision making must be instant.




The fact that Akida™ can replace power hungry and comparatively large servers (for AI applications) with one chip consuming less than 1 watt illustrates the energy efficiency of neural network processing and provides clues as to why the human brain is so powerful.


Compact high speed neuromorphic hardware consuming low power enables powerful functionality in small portable devices. Low cost means large scale deployment of these devices.




Industrial sensors - power efficient edge AI use cases.

Sensors are devices that represent real world events using electrical signals.

We are familiar with simple sensors that detect (for example) voltage, current, resistance, temperature, linear or rotational speed.

However, complex sensors such as video cameras, vibration sensors, microphones, finger-print touch pads, and even olfactory (smell) sensors generate high volumes of data requiring serious computing grunt for analysis to produce actionable information.

Real-time decision making is needed for autonomous vehicle control, where streaming sensor data (from lidar, video, 3D accelerometers, and others) is used to model the landscape into which the vehicle is moving. Reliably detecting the DIFFERENCE between shadows from overhead tree foliage, or a bird flying directly at the vehicle, COMPARED TO a pedestrian is a non-trivial problem.

The challenge is that objects are almost never identically represented, so inference systems need wide tolerance. An object may be viewed at different angles, different magnifications, different sizes and positions within the viewing frame, and under different lighting - but must still be recognized as the same object.


Pattern recognition must detect only distinguishing features and not rely on pixel perfect matches.



Traditional (von Neumann) computer systems solve the problem through brute force; a neuromorphic processor applies a radically different architecture.




Brainchip: Replicating the human brain in silicon

The key to understanding BrainChip and neuromorphic computing is to look at how mammalian brains solve the problem of fast and efficient pattern recognition; there are five key principles...

  • When it comes to patterns, near enough is good enough
  • Simplifying through modelling and abstraction
  • We don’t live in the real world, but something near to it
  • We only process what’s different, we ignore sameness
  • Memory and processing happen in the same place



Let's examine each of these principles...

0. For Brains - patterns are EVERYTHING

Appreciating the nature of intelligence requires understanding the importance of patterns.

"Patterns" in the broadest sense; we are not talking checks, stripes, or polka-dots.


But, we are talking about (say) "I walked into the room and I immediately sensed that something was out of place."



The brain is a device that stores (learns) patterns and later matches them to new data, updating existing patterns or identifying new ones - getting smarter through experience.


Humans start loading patterns acquired from our senses from before birth. By the time we are five we have accumulated enough data and language to support accelerated learning. We build on foundations: starting with simple "real world" knowledge, then increasingly abstract concepts, to construct a skeleton to support the flesh of knowledge.

While this sounds allegorical, such knowledge structures actually build the neurological pathways to reach stored information.

Humans identify patterns in everything: language, math, facial recognition, personalities, situations, pain, movement - every piece of data that passes through our nervous system and brain is processed looking for and responding to patterns. It's the way we perceive the world and make decisions.


The word 'experience' is another way of saying 'accumulated a lot of patterns'.



Pattern recognition is not done by choice; it is automatic and fundamental for thought.



1. When it comes to patterns, near enough is good enough

The genius of the way humans process information, however, is that our pattern matching method is highly tolerant of ambiguity. Pattern recognition need not produce hard-edged exact matches; we both identify patterns and have a sense of how well matched they are. Hence, words like "hunch", "intuition", and "déjà vu" feature in our vernacular.

Further, patterns from one domain are recognized in another. Observing abstract art, a person might recognize similarities to signs of the zodiac, notice that the position of key visual elements is ordered by the prime number sequence, or see how natural patterns inspire the design of strong, lightweight engineered structures.

This cross-domain pattern matching is a key element in creativity and problem solving.

Animals and humans routinely search for patterns in everything; we can't help but do it - it's hard-wired. Higher intelligence is the greater capacity to notice useful patterns.

In the quest for low-powered, high-performance AI suitable for edge applications, the obvious path is to work out how the brain does it. BrainChip has studied the brain and its data processing methods. And while many others have been working on it...


Brainchip has successfully translated these learnings from the wet organic world of the human brain and applied them to the dry silicon world of electronics.




2. Simplifying through modelling and abstraction

The human brain is compact, with (comparatively) low power consumption, but punches above its weight in processing grunt. This is achieved by abstracting information (simplifying it) to reduce the workload.

Even though we think we “see”, “hear”, “taste”, “smell” and “feel” the world clearly – our brains convert signals received from our senses into simplified and abstract data.

A lot of automatic pre-processing takes place in the brain for all sensory inputs before our conscious mind receives the information.

Vision, for example, is automatically corrected to straighten lines distorted by the eye (the eye is actually a very poor camera). Visual data is compensated for the fact that only a tiny part of the central vision is high resolution. And the eyes themselves each split the field of vision into separate data streams representing the left and right fields of view; further down the preprocessing stream, the brain mixes the two streams back together. Thus, through brain trauma, it is possible to totally lose the left or right field of vision even though both eyes function perfectly.

Further, your brain incorporates information from other senses to "make sense” of the data the vision system has captured. For example, the brain knows from experience that the world is right way up. We have difficulty making visual sense of the world when it is upside down or the image defies physical laws. The sky is almost always at the top and water is almost always at the bottom, trees grow up not sideways or down.

Modelling and simplification are critical to reducing complex data to a form that can be stored, classified, and retrieved using simple and efficient methods.


3. We don’t live in the real world, but something near to it

The implication is both fascinating and scary; we don't actually see the world as it is.




Through prior learning and experience we build visual models of the physical world around us, and social models of people, stereotypes, and how we should respond to social interactions and situations. Behavioral scientists call this "our scripting."

Visual models are particularly important to enable rapid assessment and decision making. When scenes don’t conform to our visual models, the preprocessing system is temporarily thwarted, and our brains need to adjust to the new reality.

Further, during this preprocessing the brain predicts what is important and discards irrelevant information. Our senses are continually filtered to only deliver to our conscious information deemed relevant.


We don't just filter reality; we construct it, inferring an outside world from an ambiguous input.



Anil Seth, a neuroscientist at the University of Sussex, explains that we routinely fabricate our world view before all the facts are in, and most often base decisions on scant real data. We rely on a vast database of models to assist the construction of our own version of reality. We make assumptions.

Fun fact: in air crash investigations, the eyewitness testimony of younger people is considered more credible than that of older people, because they have less life experience upon which to fabricate a new version of events based on what they think they saw rather than what they actually saw (leaving aside vision problems). This is related to the observation that older people appear to think more slowly, where the true explanation is that they have far more life data to search through to find patterns.


The reason we turn grey as we grow older is because we no longer see the world in black and white.



While the above examples are visual, the same applies to all information we process - sight, smell, touch, hearing, and balance - and also to language and our world view. Prejudice comes from having existing experience and models; as noted above, we judge before all the facts are in.

Seth and the philosopher Andy Clark, a colleague in Sussex, refer to the predictions made by the brain as “controlled hallucinations”. The idea is that the brain is always building models of the world to explain and predict incoming information; it updates these models when prediction and experience obtained from sensory inputs disagree.


A model is a smaller, simpler representation of a real thing. Modelling is a data processing method aimed at making useful predictions from scant data.



This is the trade-off for high-speed decision making necessary for effective operation. The explanation is probably in evolution; those that acted on the assumption that it was a tiger outlived those that stood there thinking "hang on, is that really a tiger?"

Sometimes we get it wrong. This is a potential problem when applying the new world of AI to things like autonomous vehicle control. We accept humans having accidents but seem to have zero tolerance when driver-less vehicles do the same (even if the accident rate is orders of magnitude better).

Delivering processing speed from a compact, power-efficient package is the objective of both organic brains and neuromorphic computing - even if that means sometimes making an error.

The human mind is a device that models the real world, and neuromorphic computing is based on replicating this abstraction and simplification.

The human brain is inherently sparse in its approach to data processing, stripping out redundant information to increase speed and reduce storage overhead.

It seems that brevity truly is the soul of wit.




4. We only process what’s different, we ignore sameness


Spike neural processing - radical architecture




Neuromorphic processing is based on mimicking the sparse spiking neuronal synaptic architecture and organic self-organization of the human brain.

Easier said than done, and Brainchip has been working on it for over two decades.

The ability to capture masses of data, discern the salient features, and later instantaneously recognize those features is central to high-speed decision making. The sense of this can be summed up in a familiar phrase we have all uttered: "I've seen this somewhere before."



"Salient features" is the operative term in the above paragraph. It introduces the concept of spike neural processing, and it is where the name "Akida" comes from: Akida is Greek for spike. Whereas brute-force AI (using traditional von Neumann architecture) must process all data looking for patterns, brains cut the workload down through spiking events, where a neuron that reaches a set threshold in response to new sensor data transmits a signal to a cascading sequence of subsequent neurons (a neural network).


The neural spike thresholds are the triggers for recognizing the salient features.


Simplistically, the difference between traditional architecture based AI processing and neuromorphic AI processing is this...

The traditional computing approach employs very high speed sequential processing. It's like checking to see if a library has the same book by retrieving every book in the library one at a time and comparing it to the one in your hand.

A library search based on neural network principles would work very differently. As you walked into the library, a person might say "Fiction books are all in the back half of the library." This is analogous to a spiking event. Upon arriving there, another person would direct you to the shelves matching the first letter of the author's surname - a second spiking event. Now the search is narrowed to just a few shelves. Finally, a third person says "I know this book, I think you will find it on the last shelf on the right." At last, you see amongst the books one of a similar size and color. Pulling it out from the shelf, you confirm the match.

The time difference and energy required between the two search methods is huge.


But! Consider this. What if the first person said "That's a fiction book. This library only stocks non-fiction." Search over.



The sequential search method could only reach that conclusion after checking every book in the library. And that is where truly enormous time and power savings are made. The vast majority of AI processing involves data that contains nothing of interest; you have to sift a lot of dirt to find the gold. Neuromorphic artificial intelligence is highly efficient at ignoring irrelevant data.
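
To see the scale of the saving, here is a toy Python version of the library analogy (an illustration of the principle only, not how Akida actually stores or indexes data):

    # Toy version of the library analogy - illustrative only.

    def sequential_search(books, title):
        """Brute force: retrieve and compare every book, one at a time."""
        for i, book in enumerate(books, start=1):
            if book == title:
                return f"found after {i} comparisons"
        return f"not found after {len(books)} comparisons"

    def staged_search(sections, title, want_fiction=True):
        """Cascaded filters: each stage narrows the search or ends it outright."""
        if want_fiction and "fiction" not in sections:
            return "search over after 1 question"      # "we only stock non-fiction"
        shelf = sections["fiction"].get(title[0], [])  # jump straight to the shelf
        return sequential_search(shelf, title)         # tiny residual search

    books = [f"Book {i}" for i in range(100_000)] + ["Moby Dick"]
    sections = {"fiction": {"M": ["Moby Dick", "Middlemarch"]}}

    print(sequential_search(books, "Moby Dick"))  # found after 100001 comparisons
    print(staged_search(sections, "Moby Dick"))   # found after 1 comparison

The staged search does almost no work when the answer is "not here" - which, as noted above, is the common case.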



The flaw in my library book analogy is that books and libraries already provide efficient search methods through a universal classification system. However, neuromorphic methods classify and store data of any type on the fly by detecting salient features, building a searchable library about anything.

And in any case, if libraries did have people present who could reliably point you in the right direction as described above, the search method described would be miles faster than first using the catalogue - computerized or not.

The neuromorphic approach feeds data to be processed into a cascading network of neurons that are connected in a pattern analogous to the salient features of the stored item. Individual neurons spike when a match is detected in a salient feature. The propagation of electrical signals through this network rapidly spreads out exponentially in a massively parallel fashion. Electrical signals propagate at roughly 90% of the speed of light.

However, Akida™ only processes data that exceeds a spike threshold (set through learning). If an event in the spike neural network fails to exceed the threshold, propagation past that point ceases. Vastly less work is done, thus increasing pattern matching speed and saving power.


Neural processing through spiking events will identify a pattern match orders of magnitude faster than traditional computing methods and with far greater electrical efficiency.
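
At the level of a single neuron, the threshold mechanism can be sketched in a few lines. This is a generic integrate-and-fire model (a textbook abstraction, not BrainChip's internal design): each neuron accumulates weighted input and passes a spike downstream only when its threshold is crossed; sub-threshold activity simply dies out and costs nothing further.

    # Generic integrate-and-fire sketch - not BrainChip's implementation.

    class Neuron:
        def __init__(self, threshold, downstream=None):
            self.threshold = threshold
            self.downstream = downstream or []  # neurons this one feeds into
            self.potential = 0.0

        def receive(self, weight):
            """Accumulate input; fire and propagate only past the threshold."""
            self.potential += weight
            if self.potential >= self.threshold:
                self.potential = 0.0                  # reset after spiking
                print("spike!")
                for n in self.downstream:             # the spike cascades onward
                    n.receive(1.0)
            # ...otherwise nothing propagates and no further work is done

    output = Neuron(threshold=1.0)
    hidden = Neuron(threshold=2.0, downstream=[output])

    hidden.receive(0.9)   # sub-threshold: cascade stops, output never computes
    hidden.receive(1.2)   # 0.9 + 1.2 crosses 2.0: both neurons spike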


5. Memory and processing happen in the same place

A big difference between traditional computer (von Neumann) architecture and brain architecture (replicated in neuromorphic computing) is that MEMORY and PROCESSING happen in the same place. A von Neumann machine stores data separately, retrieves it from memory, routes it through the central processor, and then sends results back to memory. Memory calls are usually the biggest bottleneck (impediment to speed) in any traditional computer.

It is possible for a von Neumann machine to emulate neural networks. But neural network processors that could be classed as "neuromorphic" are in the minority.


In strictly "neuromorphic computing", both processor and memory are combined. Human brains are MASSIVELY parallel in their processing.



Using the library analogy, when searching for a book using von Neumann architecture you would not enter the library (memory) at all. The search would be orchestrated from outside the library by an agent (the CPU) that managed the sequential retrieval of books, comparing each to the search target until either a match is made or the end is reached. That's how traditional computing works.

You can appreciate therefore that traditional AI pattern matching methods that need to retrieve thousands (often millions) of words of information from core memory (or worse, hard-disc storage) will be both much slower and consume vastly more electrical power.


Processing spike events is a radically different architecture, as the spiking occurs within the memory bank. Processing is to a large extent decentralized and occurs throughout the brain. You don't retrieve from memory; electrical signals propagate through it.



This delivers an enormous speed gain and power efficiency when looking to find patterns.

Thus both human brains and Akida™ only process the spike events, and process them in parallel (simultaneously), which dramatically cuts the workload, speeds response time, and reduces power consumption.

Learning is about setting and fine-tuning spike thresholds and setting up a pattern of links to other neurons. This is how the brain operates, and Akida™ mimics this structure in the neuromorphic fabric at the core of the chip.

Even as I write this, I realise the above explanation is a bit like explaining motor cars as simply four tyres and a steering wheel. Obviously, the devil is in the detail.


Suffice to say Brainchip has worked the problem and the result is Akida™.


Introducing the JAST algorithm

If you wish to delve further into the inspiration and development of spike neural networks and how they work, there is an excellent presentation delivered by Simon Thorpe, who (along with three other colleagues) developed the JAST algorithm that was acquired by BrainChip and was important to the development of Akida™. The presentation provides great background (in simplified form) on figuring out how the brain processes pattern recognition and solves the problem of unsupervised learning.

The presentation also provides a more detailed insight into why Neuromorphic processors are inherently more efficient at pattern recognition (using Feedforward Convolutional Neural Networks) compared with previous AI methods that rely on "brute force" conventional computing architecture.






Machine learning on neuromorphic processors

Artificial Intelligence introduces the topic of machine learning.

It works like this. Instead of programming a video device to recognise a cat (for example), you point the device at a series of cats from various angles and tell the machine "these are all cats." Such learning usually also includes similar-sized animals such as small dogs and rabbits - "by the way, these are NOT cats" (this is called supervised learning; Akida™ is also capable of unsupervised learning).


The learning method identifies the salient and differentiating features; from then on, the machine will recognise cats on its own.
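
Since Akida™ is fed from TensorFlow (more on that below), the workflow looks much like any Keras training loop. Here is a minimal sketch of the "these are cats, these are not" idea; random tensors stand in for labelled photos so the example is self-contained:

    # Minimal supervised-learning sketch in TensorFlow/Keras - illustrative only.
    # Random tensors stand in for real labelled photos of cats and not-cats.
    import numpy as np
    import tensorflow as tf

    images = np.random.rand(100, 64, 64, 3).astype("float32")  # stand-in "photos"
    labels = np.random.randint(0, 2, size=(100,))              # 1 = cat, 0 = not cat

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),        # cat / not-cat
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    model.fit(images, labels, epochs=3)     # "these are cats, these are NOT cats"
    print(model.predict(images[:1]))        # from here on it decides by itself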



Such a device could be used in deluxe cat doors that let your cat through but keep other neighborhood moggies out (and, for the USA market, shoot them on sight). Today, video recognition exists, but for a cat door it would be stupidly expensive. But at $10 to $20 per chip, Akida™ will make such applications commonplace. Akida™ can also receive direct video input to process pixels on chip.




Human brains do not store pixel-perfect images; they store abstractions - the minimum number of data points needed to support future pattern recognition. Akida™ replicates this approach in silicon. Human vision is diabolically clever, having sufficient resolution to discern high detail only in a small area at the centre of the visual field (a small region of the retina called the macula). Apparent clarity and color depth across the entire visual field is a brain-generated illusion. This dramatically cuts down data, and abstraction within the brain cuts it down even further.

The analogy would be comparing a full color picture of a person's face to a very simple sketch with minimal lines - easily recognizable as the same person. But, the sketch requires vastly less work to draw. Interestingly, we see this all the time with political cartoonists who draw satirical pictures. Even grossly distorted cartoon versions of a person (caricatures) are easily recognizable. Clearly the brain doesn't rely on exact matching.

Recognising objects is nothing new in the world of AI - however, doing it "at the edge" (i.e. within a small, low-powered device not connected to a power-hungry server in the cloud) is revolutionary. Further, Akida™ is capable of learning and inference at the edge, which suggests edge devices will be able to self-learn and improve through "experience."

With a truly bewildering array of off-the-shelf sensors already on the market, astoundingly intelligent and useful enhancements will soon proliferate by including a neuromorphic processor like Akida™ within the sensor package.


This will enable the sensor to output not just data, but a decision.



Hybridised sensors will also emerge that look for patterns in the data generated by multiple sensors. This is, after all, what the human brain does. Taste, for example, is greatly dependent on smell and is influenced by visual appearance.


For building truly autonomous human like robots – this is the missing link.


Artificial intelligence: Narrow AI vs. General AI




Humanoid robots are just one of thousands of potential applications.

However, I have mischievously taken a bit of creative licence here. Talking about humanoid robots connotes autonomous thinking, self-direction, and perhaps even artificial consciousness (but if something is conscious - is its consciousness artificial?).

Robo-phobic thinking, inculcated through years of imaginative fiction, leads us to expect such robots might exceed their brief to the detriment of humanity.

No need to panic; Akida™ is not in this ballpark.

This is where the reader may wish to research the difference between Narrow AI and General AI. Akida™ is designed for task-specific (or narrow) AI, where its cleverness is applied to fast learning and processing for very specific, useful applications.

While it does apply brain architecture, neuromorphic computing is nowhere near capable of emulating the power of the human brain. Although it could well be the first crack in the lid of Pandora's box.

However, Akida™ could be applied to helping humanoid robots maintain balance, control their limbs and navigate around objects. It could also be used for very narrow autonomy like...


IF [Floor is Dirty] AND [Cat Outside] THEN START [Vacuuming Routine]
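
For fun, that rule is only a few lines of real code. The sensor functions below are hypothetical stand-ins for what an Akida-equipped device would infer from its camera and cat door:

    # Hypothetical edge-device logic; the sensor functions are stand-ins.
    def floor_is_dirty():
        return True   # in reality: camera input plus on-chip inference

    def cat_is_outside():
        return True   # in reality: a report from the smart cat door

    def start_vacuuming():
        print("Vacuuming routine started.")

    if floor_is_dirty() and cat_is_outside():
        start_vacuuming()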



Robotics is certainly an obvious field of application for Akida™, but probably more for robots that look like this...




On its website BrainChip has identified the obvious applications of this new technology, such as person-type recognition (businessman versus delivery driver, for example), hand gesture control, data-packet sniffing (looking for dodgy internet traffic), voice recognition, complex object recognition, financial transaction analysis, and autonomous vehicle control.

One can't help thinking that this list is mysteriously modest.


Broad spectrum breath testing - hand-held medical diagnostics

There is also mention of olfactory signal processing (smell), suggesting use in devices that can identify the presence of molecules in air. Such "bloodhound" devices could be deployed for advanced medical diagnostics, identifying an extraordinary range of diseases and medical conditions simply by sampling a person's breath.


The impact on the medical diagnostics industry could be, er - breath taking.



On 23 July 2020 the company announced that Nobel Laureate Professor Barry Marshall had joined BrainChip's scientific advisory board. Prof. Marshall was awarded the Nobel Prize in 2005 for his pioneering work in the discovery of the bacterium Helicobacter pylori and its role in gastritis and peptic ulcer disease. The gold-standard test for Helicobacter pylori involves analysing a person's breath.

Detection of endogenous (originating from within a person, animal or organism) volatile organic compounds resulting from various disease states has been one of the primary diagnostic tools used by physicians since the days of Hippocrates. With the advent of blood testing, biopsies, X-rays, and CT scans the use of breath to detect medical problems fell out of clinical practice.

The modern era of breath testing commenced in 1971, when Nobel Prize winner Linus Pauling demonstrated that human breath is complex, containing well over 200 different volatile organic compounds.

It certainly works for police breath testing to measure blood alcohol levels.

Sensing and quantifying the presence of organic compounds (and other molecules) using electronic olfactory sensors feeding into an AI processing system could be the basis for low-cost, immediate medical diagnostics - helping clinicians more quickly identify conditions warranting further investigation, and providing earlier indications long before symptoms are apparent.

[March 2021 update]

Semico published an article in its March 2021 IPI Newsletter detailing a joint announcement from NaNose (Nano Artificial Nose) Medical and BrainChip of a new system for electronic breath diagnosis of COVID-19. The system uses an artificially intelligent nano-array, based on molecularly modified gold nanoparticles and a random network of single-walled carbon nanotubes, paired with BrainChip's Akida neuromorphic AI processor.

Since then, NaNose has broadened the spectrum of diseases that can be detected and is working toward commercial release...





But, who knows what other applications will emerge when seasoned industrial engineers get their hands on Akida™?


History shows the launch of a first generation chip leads to rapid development

As a moment in the history of technological achievement, the launch of Akida™ could be likened to the launch of Intel's 8008 microprocessor in 1972 - the first 8-bit CPU, which spawned the development of a new generation of desktop computing, embedded device control, Industry 3.0 automation, and the ubiquity of the internet.

The microprocessor enabled practical computing at the edge. No need to submit data to a mainframe sitting in a large air-conditioned room tended to by chaps in white coats, and waiting hours or days for results to come back.

The Atmel ATTINY9-TS8R microcontroller costs around USD$0.38



Nearly fifty years later, we know how that has turned out. Barely a device exists that doesn't have at least one microprocessor chip in it. Recent estimates suggest the average household has 50 microprocessors tucked away in washing machines, air-conditioning systems, motor vehicles, game consoles, calculators, dishwashers, and almost anything that has electricity running through it.

The clock speed of the 8008 back in 1972 was a leisurely 0.8 MHz and the chip housed 3,500 transistors. Some 45 years later, 8th-generation Intel chips run at clock speeds around 5,000 times faster and house billions of transistors.

Price has been a key enabler for market proliferation. Putting a microprocessor in a washing machine (instead of an electro-mechanical cam wheel timer) is feasible because the microprocessor chip costs a few cents, is quicker and simpler to factory-program, is orders of magnitude more reliable, and delivers far greater functionality (although most of us wash clothes on the same settings all the time).

Akida™ could choose the optimum settings by scanning the clothes, weighing them, and perhaps even smelling them.

Machine contains 4.5 kg of ALL WHITES --> Add 35 ml of bleach.


The game changing significance of Brainchip's work cannot be overstated.


AI at the edge will dramatically improve the performance of artificial limbs.


AI Chip market size and growth

Attempting to forecast future market demand is always difficult particularly for ground-breaking new technology.

"I think there is a world market for maybe five computers." (Thomas Watson, president of IBM, 1943)

It's difficult to put an exact number on it because it involves seeing into the future; however, this author's regular trawls of the internet reveal that the estimated market size and growth rate are increasing all the time.

A recent update (July 2021) estimates the artificial intelligence chip market was valued at USD$6.31 billion in 2018 and is projected to reach...


USD$114.13 billion by 2026, growing at a CAGR of 43.39% from 2019 to 2026. That's huge!
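
As a sanity check (my arithmetic, not the report's), compound annual growth follows V_t = V_0 × (1 + r)^t, and the quoted figures hang together:

    # Sanity-checking the quoted CAGR: V_t = V_0 * (1 + r) ** t
    v0, vt, years = 6.31, 114.13, 8            # USD billions, 2018 -> 2026
    implied_cagr = (vt / v0) ** (1 / years) - 1
    print(f"{implied_cagr:.2%}")               # ~43.6%, near the quoted 43.39%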



McKinsey and Company in 2018 estimated that edge computing (devices and applications that use edge AI chips) represents a potential hardware market of USD$175 billion, growing to USD$215 billion by 2025.

Of course, like many markets, there exist specific segments. If you narrow the focus to the segment where BrainChip has its maximum competitive advantage then, according to Market Research Future's (MRFR) "Self-Learning Neuromorphic Chip Market Analysis by Application, by Vertical – Forecast 2030", the valuation is poised to reach USD$2.76 billion by 2030, registering a 26.23% CAGR throughout the forecast period (2022–2030).

However, like the 1943 IBM example, until a new technology is fully understood and its applications reach maturity, it is impossible to predict future market size. New technologies create new markets. This author (that's me) thinks the above estimates are modest. Edge AI processing chips, spawned by Akida™ and closely followed by IBM, Intel, and others, will see exponential growth.




Akida™ is not an AI accelerator

A new class of devices called "neural accelerators" is emerging that act as enhancers to existing microprocessor-based digital circuits. The heavy AI processing tasks (machine learning and subsequent machine pattern recognition) are handed off to the accelerator.

However, systems based on this architecture lack flexibility, are inherently power hungry, require more footprint, and are not an elegant solution.

The approach with Akida™ is to change all that by providing everything needed to support an optimised application in one integrated circuit package. With Akida™, all of the system components needed to build an application (CPU, memory, sensor interfaces, and data interfaces) are in one package.


Akida™ is NOT an accelerator; it is a processing system. Everything needed for AI in one chip.



This is leading edge technology and will enable future innovation to both improve existing devices and spawn an era of new thinking that will lead to - who knows?


Hand gesture control is one of many possible applications.

BrainChip's Akida™ system on a chip

The challenge with commercializing any new technology is packaging it so that it can be applied in the real world by others.


All very well developing Spike Neural Networking, but how do I plug it into my circuit?



BrainChip has addressed this problem by packaging the SNN into a neuromorphic chip that provides interfaces and management systems.

The chip contains all of the elements required to interface conventional digital systems to the neuromorphic fabric, thus providing a “system on a chip” rather than an inconvenient stand-alone neuromorphic device that might have engineers scratching their heads trying to figure out how to work with it.

Akida™ supports programming in existing high-level languages like Python (an open-source language gaining huge popularity due to its universality, simplicity, and power) and has onboard standard digital interfaces - PCI-Express 2.1, USB 3.0, I3C and I2S - as well as external memory expansion interfaces.

In addition, Akida™ is compatible with TensorFlow - a free and open-source software library for machine learning and artificial intelligence developed by Google.
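
In practice that means a trained Keras network can be mapped onto the chip with BrainChip's tooling. The sketch below follows the shape of BrainChip's published MetaTF flow; treat the package and function names as assumptions that may differ between releases:

    # Sketch of the TensorFlow-to-Akida flow (names assumed from BrainChip's
    # MetaTF documentation; check the current release before relying on them).
    import tensorflow as tf
    from cnn2snn import convert      # BrainChip's Keras-to-Akida converter

    keras_model = tf.keras.models.load_model("my_trained_model.h5")

    # In practice the network is quantized first; convert() then maps it
    # onto the spiking neuromorphic fabric.
    akida_model = convert(keras_model)
    akida_model.summary()            # inspect the converted spiking layers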

BrainChip has taken the heavy lifting out of interfacing standard digital signals with the analogue and "spike-event" neuromorphic core, and supports ganging chips together with built-in serial connectivity, allowing up to 64 devices to be arrayed for a single solution.



At a target price of roughly $10 to $20 per chip, the cost for tinkerers to play with the new technology will be very affordable, as will subsequent large-scale deployment in manufactured devices.

BrainChip announced to the market on 2 July 2020 that it had successfully produced the first Akida™ silicon wafers, and the company is now testing and evaluating the first batch of devices. Pending the need for further manufacturing fine-tuning, Akida™ is not far from commercial release.


Applications are already in development

Brainchip has seeded the market by making available development kits and a development environment for software engineers to begin "playing" with the system and skilling-up using the application development tools. Brainchip has been working with leading global technology companies through its Early Access Program (EAP). The first batch of Akida™ chips will soon be fitted to evaluation boards and shipped to Early Access Partners.

The complete application development environment - toolkit, guides, and examples - can be downloaded from BrainChip's website at no cost.

Before the first chips are available, there are likely already many AI applications in final design stages waiting for the hardware.

While BrainChip isn't the only organisation working on neuromorphic chips (for example, IBM's TrueNorth and Intel's Loihi), it looks like being the first to market.

The First Generation Akida™ chip isn't theory; it's practical reality.

And with other manufacturers' spike neural processing chips nearing final development, powerful AI applications will soon become commonplace.


Akida™ will soon be the Model-T of neuromorphic processors

The most exciting aspect of Brainchip's work is the pioneering nature of the technology.

A reminder: the Model-T disrupted the automotive world by introducing production-line manufacturing; it brought car ownership to the average person. However, the Model-T was quite quickly improved upon, and subsequent models were light years ahead. Great though Akida™ currently is, it is only the beginning.

The launch of the first microprocessors in the seventies put low-cost data processing into the hands of teenagers. Kids started building computers in their bedrooms and garages, and went on to create Microsoft and Apple.

Akida™ will provide the initial artificial intelligence building blocks for the next generation. BrainChip hasn't just created a chip; it has enabled an ecosystem by making available a free software suite of development tools.

And back at Brainchip, we have a new breed of engineers and scientists who have steeped themselves deeply in neuromorphic methods. They have opened the door into a whole new technological landscape.

While we think Akida™ is exciting, it's what comes after that will be truly amazing.


The world is about to change fast.


"Is the cat outside?"





By Justin Wearne



Further reading

  1. Brainchip Inc. official website
  2. Neuromorphic Chip Maker Takes Aim At The Edge [The Next Platform]
  3. Spiking Neural Networks, the Next Generation of Machine Learning
  4. Deep Learning With Spiking Neurons: Opportunities and Challenges
  5. How close are we to a real Star Trek style medical Tricorder? [Despina Moschou, The Independent]
  6. If only AI had a brain [Lab Manager]
  7. Intel’s Neuromorphic Chip Scales Up [and It Smells] [HPC Wire]
  8. Neuromorphic Computing - Beyond Today’s AI - New algorithmic approaches emulate the human brain’s interactions with the world. [Intel Labs]
  9. Neuromorphic Chips Take Shape [Communications of the ACM]
  10. Why Neuromorphic Matters: Deep Learning Applications - A must read.
  11. The VC’s Guide to Machine Learning
  12. What is AI? Everything you need to know about Artificial Intelligence.
  13. BRAINCHIP ASX: Revolutionising AI at the edge. Could this be the future SKYNET? [YouTube Video - a very good summary]
  14. In depth analysis of Brainchip as a stock [with discussion about the technology] [YouTube Video]
  15. Intel’s Neuromorphic System Hits 8 Million Neurons, 100 Million Coming by 2020 [The Spectrum]
  16. Bringing AI to the device: Edge AI chips come into their own TMT Predictions 2020 [Deloitte Insights]
  17. Inside Intel’s billion-dollar transformation in the age of AI [Fastcompany]
  18. Spiking Neural Networks for more efficient AI - [Chris Eliasmith Centre for Theoretical Neuroscience University of Waterloo], really fantastic explanation - Jan 21, 2020 (predates launch of Akida)
  19. Software Engineering in the era of Neuromorphic Computing - "Why is NASA interested in Neuromorphic Computing?" [Michael Lowry NASA Senior Scientist for Software Reliability]
  20. Could Breathalyzers Make Covid Testing Quicker and Easier? [Keith Gillogly, Wired - 15 Sept 2020]
  21. Artificial intelligence creates perfumes without being able to smell them [DW Made For Minds]
  22. Epileptic Seizure Detection Using a Neuromorphic-Compatible Deep Spiking Neural Network [Zarrin P.S., Zimmer R., Wenger C., Masquelier T. [2020]]
  23. Flux sur Autoroute, processing d’image [Highway traffic, image processing] [Simon Thorpe CerCo (Centre de Recherche Cerveau & Cognition, UMR 5549) & SpikeNet Technology SARL, Toulouse]
  24. Exclusive: US and UK announce AI partnership [AXIOS] 26 September 2020
  25. Breath analysis could offer a non-invasive means of intravenous drug monitoring if robust correlations between drug concentrations in breath and blood can be established. From: Volatile Biomarkers, 2013 [Science Direct]
  26. Moment of truth coming for the billion-dollar BrainChip [Sydney Morning Herald by Alan Kruger - note: might be pay-walled]
  27. Introducing a Brain-inspired Computer - TrueNorth's neurons to revolutionize system architecture [IBM website]
  28. Assessment of breath volatile organic compounds in acute cardiorespiratory breathlessness: a protocol describing a prospective real-world observational study [BMJ Journals]
  29. What’s Next in AI is Fluid Intelligence [IBM website]
  30. ARM sold to Nvidia for $40billion [Cambridge Independent]
  31. Interview with Brainchip CEO Lou Dinardo [16 September 2020] [Video: Pitt Street Research]
  32. BrainChip Holdings [ASX:BRN]: Accelerating Akida's commercialisation & collaborations [1 September 2020:Video: tcntv]
  33. BrainChip Awarded New Patent for Artificial Intelligence Dynamic Neural Network [Business Wire]
  34. BrainChip and VORAGO Technologies agree to collaborate on NASA project [Tech Invest]
  35. Intel inks agreement with Sandia National Laboratories to explore neuromorphic computing [Venture Beat - The Machine - Making Sense of AI]
  36. Congress Wants a 'Manhattan Project' for Military Artificial Intelligence [Military.com]
  37. Future Defense Task Force: Scrap obsolete weapons and boost AI [US Defense News]
  38. Is Brainchip A Buy? [ASX: BRN] | Stock Analysis | High Growth Tech Stock [YouTube - Project One]
  39. Is Brainchip Holdings [ASX: BRN] still a buy after its announcement today? | ASX Growth Stocks [YouTube ASX Investor]
  40. Hot AI Chips To Look Forward To In 2021 [Analytics India Magazine]
  41. Scientists linked artificial and biological neurons in a network — and it worked [The Mandarin]
  42. Scary applications of AI - "Slaughterbot" Autonomous Killer Drones | Technology [YouTube Video]
  43. BrainChip aims to cash in on industrial automation with neuromorphic chip Akida [Tech Channel - Naushad K. Cherrayil]
  44. How Intel Got Blindsided and Lost Apple’s Business [Marker]
  45. Neuromorphic Revolution - Will Neuromorphic Architectures Replace Moore’s Law? [Kevin Morris - Electronic Engineering Journal]
  46. Insights and approaches using deep learning to classify wildlife [Scientific Reports]
  47. Artificial Neural Nets Finally Yield Clues to How Brains Learn [Quanta Magazine]
  48. Rob May, General Partner PJC Venture Capital Conversation 181 with Inside Market Editor, Phil Carey - discuss edge computing on YouTube.
  49. Why Amazon, Google, and Microsoft Are Designing Their Own Chips [Bloomberg Businessweek: Ian King and Dina Bass]
  50. "Intelligent Vision Sensor" [You-Tube video - Sony]
  51. What is the Real Promise of Artificial Intelligence? [SEMICO Research and Consulting Group]
  52. NaNose Medical and BrainChip Innovation [SEMICO - Rich Wawrzyniak]
  53. NaNose - a practical application of Akida to detecting disease through breath analysis [You-Tube Video]
  54. Nanose company website
  55. What is the AI of things [AIoT]? [Embedded]
  56. Brainchip Holdings Ltd [BRN] acting CEO, Peter van der Made Interview June 2021 [YouTube]
  57. Spiking Neural Networks: Research Projects or Commercial Products? [Bryon Moyer, technology editor at Semiconductor Engineering]
  58. Our brains exist in a state of “controlled hallucination” [Insider Voice]
  59. BrainChip Launches Event-Domain AI Inference Dev Kits [EETimes: Sally Ward-Foxton]
  60. New demand, new markets: What edge computing means for hardware companies [McKinsey & Company]
  61. Brainchip - Where to next? An investment perspective [Youtube Video]
  62. Self-Learning Neuromorphic Chip Market To Grow At USD 2.76 Billion at a CAGR of 26.23% by 2030 - Report by Market Research Future [MRFR]

Tags: brainchip, neuromorphic, artificial intelligence, AI, system on a chip, akida, AI at the edge
