
JWPM Consulting

BrainChip ASX: neuromorphic system. AI at the edge.

22 July 2020

A new silicon chip designed to move AI (artificial intelligence) processing out of server farms and into mobile and IoT devices has been developed by Brainchip, an Australian ASX-listed company.

AI functions such as voice recognition and facial recognition aren't performed on your smartphone; they are handed off via internet connections to server farms, where power-hungry computers do the heavy lifting.

However, smartphones are just one common example of edge devices that use AI processing; there are hundreds of other applications.

This new chip will have dramatic ramifications for automation, high-speed machine learning, and decision making, spawning the development of a new breed of smart devices, including robots.


The chip is modeled on the architecture and data processing methods of the human brain.



Brainchip's first chip is now in production, and technology companies have been working on applications for over 12 months.




Brainchip ASX - computer chip with a human-like brain

A pivotal plot element in the "Terminator" movie series is the fictional Cyberdyne Systems Corporation, responsible for developing the self-aware Skynet system that went rogue.

The movie tells the story of the computer scientist Miles Dyson, who created Skynet by basing his work on the CPU from the first Terminator (played by Arnold Schwarzenegger), who traveled back through time. The Terminator was crushed in a hydraulic press in a factory, leaving intact only a robot arm and the fabled broken silicon chip - the Terminator's CPU - a gift from the future.

Miles Dyson said of the chip "It was a radical design, ideas we would never have thought of..."




According to the movie plot, the Terminator CPU was an artificial neural network with the ability to learn and adapt.



When the first Terminator film was released in 1984, a neural network based CPU was pure futuristic fantasy.


Neuromorphic system - from science-fiction to reality




It seems the future has arrived. Australian company Brainchip Holdings is about to launch the first of a series of chips supporting brain-like neural processing that will likely change the world.

The new chip called Akida™ is designed to learn and adapt.


This could be a Skynet moment.




Artificial intelligence requires processing grunt

Artificial Intelligence is learning and decision making based on finding patterns in masses of data.

The data can be anything from financial transactions, audio signals, packet data traffic through routers, pictures, words – anything that can be transmitted, processed, or stored.


The challenge is that powerful insights don’t come from hard-edged exact matches, but from fuzzy imperfect matches.



As one example, speech recognition requires tolerance for natural tonal variation, mispronunciations, and other variable characteristics common to speech. Traditional computing, which has been developed over 60 years and become part of our everyday lives, is based on precise matching. 1 + 1 always equals 2.
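
To make the distinction concrete, here is a toy comparison in Python, using cosine similarity as the fuzzy measure (a generic illustration only, not BrainChip's method):

    import numpy as np

    # Two feature vectors for the same spoken word: similar, but never identical.
    utterance_a = np.array([0.9, 0.1, 0.4, 0.7])
    utterance_b = np.array([0.8, 0.2, 0.5, 0.6])

    # Precise matching fails outright: the vectors are not equal.
    print(np.array_equal(utterance_a, utterance_b))    # False

    # Fuzzy matching scores similarity and applies a tolerance instead.
    similarity = utterance_a @ utterance_b / (
        np.linalg.norm(utterance_a) * np.linalg.norm(utterance_b))
    print(round(float(similarity), 3))                 # ~0.988
    print(similarity > 0.95)                           # True: close enough to match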

Getting it to find imperfect matches requires massive amounts of statistical calculation and billions of high-speed computations. It is being done, but it requires considerable computing power - computing power that just can't fit in small packages or be powered by small batteries.

But somehow, animal and human brains do it effortlessly.

The aim of neural processing is to mimic brain methods to vastly increase computational capacity in a much smaller package using much less power; processing AI data on a single chip that previously required a bank of servers.

Brainchip's Akida™ performs AI in a single chip and requires only a fraction of a watt to operate.


Powerful AI at the edge without remote data links

Their new Akida™ neuromorphic chip has a target price of $10 to $20 and will feature very low power consumption, making embedded AI applications at the edge a practical reality.

"The edge" is a term describing devices that interface with the real world - like facial and object recognition cameras, sensors that "smell", audio sensors, and others. An iPhone is a ubiquitous example of an edge device, but in industry it could be (for example) a vibration analyzer, a video camera that scans objects on a conveyor belt, or an aromatic hydrocarbon "sniffer" in a petrochemical plant.


AI at the edge means sensor inputs are analyzed at the point of acquisition rather than through transmission via the cloud to a data center.



And that's a game changer because sending data back through the internet to servers that undertake the heavy AI processing has obvious limitations.

Data connections are not always available or reliable, and link speed prevents real-time results. And there are serious concerns that future bandwidth, even with the additional capacity provided by 5G networks, will be swamped by the proliferation of smart IoT devices.

Real-time results are clearly important for systems that control motion. For collision avoidance or steering vehicles around landscape features, decision making needs to be instant. But even for AI that doesn't need to be real-time, the grunt needed exceeds the space and electrical power available on remote devices.




Akida™ has been designed to address this problem; the new chip supports brain-like, high-speed self-learning and pattern recognition in real time, with power consumption of less than 1 watt.

Akida™ supports both on-chip learning and inference, meaning learning mechanisms are built in (rather than being pre-programmed), with the learnings later used to drive intelligent pattern recognition (inference). Further, Akida™ supports methods that improve learning through experience, continuously developing more accurate inference models.

The fact that Akida™ can replace power-hungry and comparatively large servers (for AI applications) with one chip consuming less than 1 watt illustrates the efficiency of neural network processing and provides clues as to why the human brain is so powerful.


Compact high speed AI processing with low energy consumption enables powerful functionality in small portable devices. Low cost means widespread deployment of these devices.




AI at the edge means more powerful sensors

Sensors are devices that translate the real world into electrical signals.

We are familiar with simple sensors that detect (for example) voltage, current, resistance, temperature, linear or rotational speed.

However, complex sensors such as video cameras, vibration sensors, microphones, fingerprint touch pads, and even olfactory (smell) sensors generate high volumes of data requiring serious computing grunt to analyse into actionable information - information that can form the basis for automated decision making.

Information from multiple sensors can be collected to provide huge datasets describing the real world.

Interpreting information from these sensors for decision making IS NOT as simple as (say) interpreting a single voltage source, which might be...


If [voltage] > 12.6 then [switch on motor]
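
In Python, that single-sensor rule needs nothing more than a hard threshold (the sensor and actuator names here are hypothetical):

    def switch_on_motor():
        print("motor on")          # hypothetical actuator call

    def on_voltage_sample(voltage):
        if voltage > 12.6:         # one reading, one hard threshold
            switch_on_motor()

    on_voltage_sample(13.1)        # prints "motor on"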



However, when data sets become complex, the ability to recognise objects or other patterns requires first learning what those patterns are, and then scanning new data to see if they are present.

To be clear, we aren't just talking about recording the data (logging) or detecting simple trends; that's relatively easy. We are talking about deeply analysing the information to develop powerful insights.

Real-time decision making is needed for autonomous vehicle control, where streaming sensor data (from lidar, video, or some other sensor) is used to model the landscape into which the vehicle is moving. Reliably detecting the DIFFERENCE between shadows from overhead tree foliage, or a bird flying directly at the vehicle, COMPARED TO a pedestrian is a non-trivial problem.

The challenge is that patterns in any data are almost never identically represented, so inference systems need wide tolerance. An object, for example, may be viewed at different angles, different magnifications, and under different lighting - but must still be recognised as the same object. Clearly, the pattern recognition must detect only distinguishing features and not rely on pixel-perfect matches.

It's a tough gig, and just as humans don't always get it right - so too for AI machines.


Building a library of stored patterns that are later recognised in new data (pattern matching) is called Artificial Intelligence, or simply AI.



Traditional computing (von Neumann architecture) solves the problem through brute force: simply check every new pattern against every old pattern and use a fast computer to get it done quickly (with a few tricks to take shortcuts).

Neuromorphic computing is an architecture that takes a radically different approach to both simplifying the data and pattern matching through massive parallelism - methods developed from observing the human brain.




Brainchip: Replicating the human brain in silicon

It's called 'Artificial' Intelligence because the brain is essentially a device that does just that - it stores (learns) patterns and later matches new data to already-stored patterns, while at the same time tweaking existing patterns and identifying new ones; getting smarter through experience.


Humans start loading patterns about the world and about our bodies from before birth. By the time we are 5 we have accumulated enough data and a sufficient knowledge of language and the real world to support accelerated learning.

Humans identify patterns in everything: language, math, facial recognition, personalities, situations, pain, movement - every piece of data that passes through our nervous system and brain is processed looking for and responding to patterns. It's the way we perceive the world and make decisions. The word 'experience' is another way of saying 'accumulated a lot of patterns'.




The genius of the way humans process information, however, is that our pattern matching is highly tolerant of ambiguity. Pattern recognition need not rely on hard-edged exact matches; we not only identify patterns but also have a sense of how well matched they are. Words like "hunch", "intuition", and "déjà vu" come to mind.

Further, patterns from one domain are recognised in another. Observing a piece of abstract art, a person might recognise signs of the zodiac, see that the position of key visual elements is inspired by the prime number sequence, or realise that patterns occurring in nature can inspire the design of strong, lightweight engineered structures.

So, without going into great detail (because it's a big topic), AI ranges from something as simple as identifying things to something as complex as seeing similar patterns in unrelated sequences of data (inference).

In the quest for low-powered, high-performance AI suitable for edge applications, the obvious path is to work out how the brain does it. Brainchip has studied the brain and its data processing methods, and while many others have been working on it...


Brainchip has successfully translated these learnings from the wet organic world of the human brain and applied them to the dry silicon world of electronics.



The human brain is compact with low power consumption (comparatively) but punches above its weight in processing grunt. This is achieved by abstracting information (simplifying it) to reduce the workload.

Even though we think we “see”, “hear”, “taste”, “smell” and “feel” the world clearly – our brains convert signals received from our senses into simplified and abstract data.

A lot of automatic pre-processing takes place in the brain for all sensory inputs before our conscious mind receives the information.

Vision, for example, is automatically corrected to straighten lines distorted by the eye (the eye is actually a very poor camera), and visual data is compensated for the fact that only a tiny part of the central vision is high resolution.

Further, your brain incorporates information from other senses to "make sense" of the image the eyes have captured. For example, the brain knows from experience that the world is right way up. We have difficulty making visual sense of the world when it is upside down or the image defies physical laws. The sky is almost always at the top and water is almost always at the bottom; trees grow up, not sideways or down.




When scenes don’t conform to these norms, the pre-processing system is temporarily thwarted, and we quickly become confused.

Further, during this pre-processing the brain predicts what is important and discards irrelevant information. Our senses are continually filtered to deliver to our conscious mind only information deemed relevant.

All this pre-processing takes place at blindingly fast speed and is achieved through simplifying and abstracting data.


It seems that brevity truly is the soul of wit



Sparsity is the key



The human mind is a device that models the real world, and neuromorphic computing is based on replicating this abstraction and simplification. Scientists and engineers working on this problem seek to achieve the pattern matching efficiency of the brain by designing new data processing methods and circuitry using similar architectures and methods.


The human brain is inherently sparse in its approach to data processing, stripping out redundant information to increase speed and reduce storage overhead.




Spike neural processing - radical architecture




Neuromorphic processing is based on mimicking the sparse spiking neuronal synaptic architecture and organic self-organisation of the human brain.

Easier said than done, and Brainchip has been working on it for over two decades.

The ability to capture masses of data, discern the salient features, and later instantaneously recognise those features is central to high-speed decision making. The sense of this can be summed up in a familiar phrase we have all uttered: "I've seen this somewhere before".



"Salient features" is the operative term in the above paragraph. It introduces the concept of spike neural processing, and it is from this that the name "Akida" derives - Akida is Greek for spike. Whereas brute-force AI (using traditional von Neumann architecture) must process all data looking for patterns, brains cut the workload down through spiking events, in which a neuron that reaches a set threshold in response to new sensor data transmits a signal to a cascading sequence of subsequent neurons (a neural network).


The neural spike thresholds are the triggers for recognising the salient features.


Simplistically, the difference between traditional AI processing and neuromorphic AI processing is this...

The traditional computing approach employs very high-speed sequential processing. It's like checking to see if a library has the same book by retrieving every book in the library, one at a time, and comparing it to the one in your hand.

The neuromorphic approach feeds the data to be processed into a cascading network of neurons connected in a pattern analogous to the salient features of the stored item. Individual neurons spike when a match is detected in a salient feature. The propagation of electrical signals through this network spreads out exponentially in a massively parallel fashion, with the signals themselves travelling at a large fraction of the speed of light.

However, Akida™ only processes data that exceeds a spike threshold (set through learning). If an event in the spike neural network fails to exceed the threshold, propagation past that point ceases. Less work is done, thus saving power.
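
A minimal sketch of the idea in Python (a toy illustration only, not Akida's actual circuitry): each layer of threshold-gated neurons is evaluated in one parallel operation, emits binary spike events rather than dense values, and anything below threshold simply stops propagating:

    import numpy as np

    def spiking_layer(spikes_in, weights, thresholds):
        # Integrate weighted input spikes into membrane potentials for
        # every neuron in the layer at once (parallel, not sequential).
        potentials = weights @ spikes_in
        # Fire only where the learned threshold is crossed; sub-threshold
        # neurons emit nothing, so no downstream work is done for them.
        return (potentials >= thresholds).astype(np.int8)

    rng = np.random.default_rng(1)
    sensor_spikes = (rng.random(16) > 0.7).astype(np.int8)   # sparse input events
    w1, t1 = rng.random((8, 16)), np.full(8, 2.0)            # learned weights and thresholds
    w2, t2 = rng.random((4, 8)), np.full(4, 1.5)

    layer1 = spiking_layer(sensor_spikes, w1, t1)
    layer2 = spiking_layer(layer1, w2, t2)     # only layer1's spike events propagate
    print(sensor_spikes.sum(), layer1.sum(), layer2.sum())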


Neural processing through spiking events will identify a pattern match orders of magnitude faster than traditional computing methods and with far greater electrical efficiency.



The difference between the two architectures can be further illustrated using the fitting of geometric blocks into a shape board as an analogy...

  • Traditional (von Neumann) computing works sequentially – try every shape, one at a time, in the remaining holes until they are all fitted.

  • Spike Neural processing – pick them all up at once and insert them at the same time – parallel processing.



Parallel processing enabled by spiking neuronal networks is the second key.



Shape boards are used by cognitive psychologists to measure brain development in children. At a certain age (usually by 2 years) a child switches from trial and error to recognising which shapes fit in which holes. They only fit them one at a time because it's physically difficult to handle them all at once.

Thus both human brains and Akida™ only process the spike events, and process them in parallel (simultaneously), which dramatically cuts the workload, speeds up response time, and reduces power consumption.

Learning is about setting and fine-tuning spike thresholds and setting up a pattern of links to other neurons. This is how the brain operates, and Akida™ mimics this structure in the neuromorphic fabric at the core of the chip.

Even as I write this, I realise the above explanation is a bit like explaining motor cars as simply four tyres and a steering wheel. Obviously, the devil is in the detail.


Suffice to say Brainchip has worked the problem and the result is Akida™.




Machine learning on neuromorphic processors

Artificial Intelligence introduces the topic of machine learning.

It works like this. Instead of programming a video device to recognise a cat (for example), you point the device at a series of cats from various angles and tell the machine "these are all cats." Such learning usually also includes similar-sized animals such as small dogs and rabbits: "by the way, these are NOT cats".


The learning algorithm identifies the salient and differentiating features; from then on, the machine will recognise cats on its own.
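
As a toy stand-in for that process (a generic nearest-centroid rule, not BrainChip's algorithm), learning can be as simple as distilling labelled examples down to a salient centre, and inference as classifying new data by proximity:

    import numpy as np

    # Pretend feature vectors extracted from labelled training images.
    cats     = np.array([[0.9, 0.2], [0.8, 0.3], [0.85, 0.25]])
    not_cats = np.array([[0.2, 0.9], [0.3, 0.8], [0.25, 0.85]])

    # "Learning": reduce each class to its salient centre.
    cat_centre   = cats.mean(axis=0)
    other_centre = not_cats.mean(axis=0)

    def is_cat(features):
        # "Inference": whichever stored pattern the new data sits closer to wins.
        return (np.linalg.norm(features - cat_centre)
                < np.linalg.norm(features - other_centre))

    print(is_cat(np.array([0.82, 0.28])))   # True - recognised as a cat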



Such a device could be used in deluxe cat doors that let your cat through but keep other neighborhood moggies out (and, for the USA market, shoot them on sight). Video recognition exists today, but for a cat door it would be stupidly expensive. At $10 to $20 per chip, however, Akida™ will make such applications commonplace. Akida™ can also receive direct video input and process pixels on chip.




Human brains do not store pixel-perfect images; they store abstractions - the minimum number of data points needed to support future pattern recognition. Akida™ replicates this approach in silicon. Human vision is diabolically clever, having sufficient resolution to discern high detail only in a small area at the centre of the visual field (a small area of the retina called the macula). Apparent clarity across the entire visual field is a brain-generated illusion. This dramatically cuts down the data, and abstraction within the brain cuts it down even further.

Recognising objects is nothing new in the world of AI - however, doing it "at the edge" (i.e. within a small, low-powered device not connected to a power-hungry server in the cloud) is revolutionary. Further, Akida™ is capable of learning and inference at the edge, which suggests edge devices able to self-learn and improve through "experience."

With a truly bewildering array of off-the-shelf sensors already on the market, including a neuromorphic processor like Akida™ within the sensor package will soon spawn a proliferation of astoundingly intelligent and useful enhancements.


This will enable the sensor to output not just data, but a decision.



Hybridised sensors will also emerge that look for patterns in the data generated by multiple sensors. This is, after all, what the human brain does. Taste, for example, is greatly dependent on smell and is influenced by visual appearance.


For building truly autonomous, human-like robots, this is the missing link.


Artificial intelligence: Narrow AI vs. General AI




Humanoid robots are just one of thousands of potential applications.

However, I have mischievously taken a bit of creative licence here. Talking about humanoid robots connotes autonomous thinking, self-direction, and perhaps even artificial consciousness (but if something is conscious - is its consciousness artificial?).

Robo-phobic thinking, inculcated through years of imaginative fiction, leads us to expect that such robots might exceed their brief, to the detriment of humanity.

No need to panic; Akida™ is not in this ballpark.

This is where the reader may wish to research the difference between Narrow AI and General AI. Akida™ is designed for task-specific (or narrow) AI, where its cleverness is applied to fast learning and processing for very specific, useful applications.

While it does apply brain architecture, neuromorphic computing is nowhere near capable of emulating the power of the human brain - although it could well be the first crack in the lid of Pandora's box.

However, Akida™ could be applied to helping humanoid robots maintain balance, control their limbs and navigate around objects. It could also be used for very narrow autonomy like...


IF [Floor is Dirty] AND [Cat Outside] THEN START [Vacuuming Routine]
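
In conventional code, such narrow autonomy is just an if-statement wrapped around AI-derived decisions; every helper in the Python sketch below is hypothetical, each standing in for an on-chip inference:

    def floor_is_dirty(dirt_level):
        return dirt_level > 0.5     # hypothetical stand-in for a dirt-detection inference

    def cat_is_outside():
        return True                 # hypothetical stand-in for a cat-door camera inference

    def start_vacuuming_routine():
        print("vacuuming")

    if floor_is_dirty(0.8) and cat_is_outside():
        start_vacuuming_routine()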



Robotics is certainly an obvious field of application for Akida™, but probably more for robots that look like this...




On its website, Brainchip has identified the obvious applications of this new technology, such as: person-type recognition (businessman versus delivery driver, for example), hand-gesture control, data-packet sniffing (looking for dodgy internet traffic), voice recognition, complex object recognition, financial transaction analysis, and autonomous vehicle control.

One can't help thinking that this list is mysteriously modest.


Broad spectrum breath testing - hand-held medical diagnostics

There is also mention of olfactory signal processing (smell), suggesting use in devices that can identify the presence of molecules in air. Such "bloodhound" devices could be deployed for advanced medical diagnostics, identifying an extraordinary range of diseases and medical conditions simply by sampling a person's breath.


The impact on the medical diagnostics industry could be, er - breath taking.



On 23 July 2020 the company announced that Nobel Prize Laureate Professor Barry Marshall has joined Brainchip's scientific advisory board. Prof. Marshall was awarded the Nobel Prize in 2005 for his pioneering work in the discovery of the bacterium Helicobacter pylori and its role in gastritis and peptic ulcer disease. The gold-standard test for Helicobacter pylori involves analysing a person's breath.

Detection of endogenous (originating from within a person, animal or organism) volatile organic compounds resulting from various disease states has been one of the primary diagnostic tools used by physicians since the days of Hippocrates. With the advent of blood testing, biopsies, X-rays, and CT scans the use of breath to detect medical problems fell out of clinical practice.

The modern era of breath testing commenced in 1971, when Nobel Prize winner Linus Pauling demonstrated that human breath is complex, containing well over 200 different volatile organic compounds.

It certainly works for police breath testing to measure blood alcohol levels.

Sensing and quantifying the presence of organic compounds (and other molecules) using electronic olfactory sensors feeding into an AI processing system could be the basis for low-cost, immediate medical diagnostics - helping clinicians identify conditions warranting further investigation more quickly, and providing indications long before symptoms are apparent.

[March 2021 update]

Semico published an article in its March 2021 IPI Newsletter detailing a press release from NaNose (Nano Artificial Nose) Medical and BrainChip announcing a new system for electronic breath diagnosis of COVID-19. The system uses an artificially intelligent nano-array, based on molecularly modified gold nanoparticles and a random network of single-walled carbon nanotubes, paired with BrainChip's Akida neuromorphic AI processor.

Since then, NaNose has broadened the spectrum of diseases that can be detected and is working toward commercial release...





But, who knows what other applications will emerge when seasoned industrial engineers get their hands on Akida™?


History shows the launch of a first-generation chip leads to rapid development

As a moment in the history of technological achievement, the launch of Akida™ could be likened to the launch of Intel's 8008 microprocessor in 1972 - the first 8-bit CPU, which spawned the development of a new generation of desktop computing, embedded device control, Industry 3.0 automation, and the ubiquity of the internet.

The microprocessor enabled practical computing at the edge. There was no longer any need to submit data to a mainframe sitting in a large air-conditioned room tended by chaps in white coats, and to wait hours or days for results to come back.

The Atmel ATTINY9-TS8R microcontroller costs around USD $0.38



Nearly fifty years later, we know how that has turned out. Barely a device exists that doesn't have at least one microprocessor in it. Recent estimates suggest the average household has 50 microprocessors tucked away in washing machines, air-conditioning systems, motor vehicles, game consoles, calculators, dishwashers, and almost anything that has electricity running through it.

The clock speed of the 8008 back in 1972 was a leisurely 0.8 MHz, and the chip housed 3,500 transistors. Nearly fifty years later, 8th-generation Intel chips run at clock speeds 5,000 times faster and house billions of transistors.

Price has been a key enabler of market proliferation. Putting a microprocessor in a washing machine (instead of an electro-mechanical cam-wheel timer) is feasible because the microprocessor chip costs a few cents, is quicker and simpler to factory-program, is orders of magnitude more reliable, and delivers far greater functionality (although most of us wash clothes on the same settings all the time).

Akida™ could choose the optimum settings based on scanning the clothes, weighing them, and perhaps even smelling them.

Machine contains 4.5 kg of ALL WHITES --> Add 35 ml of bleach.


The game changing significance of Brainchip's work cannot be overstated.


AI at the edge will dramatically improve the performance of artificial limbs.


AI Chip market size and growth

Attempting to forecast future market demand is always difficult, particularly for ground-breaking new technology.

"I think there is a world market for maybe five computers." (Thomas Watson, president of IBM, 1943)

It's difficult to put an exact number on it, because it involves seeing into the future; however, this author's regular trawls of the internet reveal that estimates of market size and growth rate are increasing all the time.

A recent update (July 2021) estimates the Artificial Intelligence Chip Market was valued at USD 6.31 billion in 2018 and is projected to reach...


USD 114.13 billion by 2026, growing at a CAGR of 43.39% from 2019 to 2026. That's huge!
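
Those two figures are at least internally consistent - compounding the 2018 base at the quoted growth rate lands close to the 2026 projection:

    base, cagr, years = 6.31, 0.4339, 8     # USD billion in 2018; 43.39%; 2019-2026
    print(base * (1 + cagr) ** years)       # ~112.8 - close to the quoted 114.13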



However, as with the 1943 IBM example, until a new technology is fully understood and its applications reach maturity, it is impossible to predict future market size. New technologies create new markets. This author (that's me) thinks the above estimates are modest. Edge AI processing chips, spawned by Akida™ and closely followed by IBM, Intel, and others, will see exponential growth.




Akida™ is not an AI accelerator

A new class of devices called "neural accelerators" is emerging; these act as enhancers to existing microprocessor-based digital circuits. The heavy AI processing tasks (machine learning and subsequent machine pattern recognition) are handed off to the accelerator.

However, systems based on this architecture lack flexibility, are inherently power-hungry, require a larger footprint, and are not an elegant solution.

The approach with Akida™ is to change all that by providing everything needed in one integrated circuit package. With Akida™, all of the system components needed to build an application (CPU, memory, sensor interfaces, and data interfaces) are in one package.


Akida™ is NOT an accelerator; it is a processing system. Everything needed for AI in one chip.



This is leading-edge technology that will enable future innovation, both improving existing devices and spawning an era of new thinking that will lead to - who knows?


Hand gesture control is one of many possible applications.



Brainchip's Akida™ System on a chip

The chip contains all of the elements required to interface conventional digital systems to the neuromorphic fabric, thus providing a “system on a chip” rather than an inconvenient stand-alone neuromorphic device that might have engineers scratching their heads trying to figure out how to work with it.

Akida™ supports programming in existing high-level languages like Python (an open-source language gaining huge popularity due to its universality, simplicity, and power) and has onboard standard digital interfaces - PCI-Express 2.1, USB 3.0, I3S, and I2S - as well as external memory expansion interfaces.

Brainchip has taken the heavy lifting out of interfacing standard digital signals with the analogue, "spike-event" neuromorphic core, and supports ganging chips together via built-in serial connectivity, allowing up to 64 devices to be arrayed in a single solution.



At a target price of roughly $10 per chip, the cost for tinkerers to play with the new technology will be very affordable, as will subsequent deployment at scale in working devices.

Brainchip announced to the market on 2 July 2020 that it had successfully produced the first Akida™ silicon wafers, and the company is now testing and evaluating the first batch of devices. Pending any further manufacturing fine-tuning, Akida™ is not far from commercial release.


Applications are already in development

Brainchip has seeded the market by making available a development environment for software engineers to begin "playing" with the system and skilling-up using the application development tools. Brainchip has been working with leading global technology companies through its Early Access Program (EAP). The first batch of Akida™ chips will soon be fitted to evaluation boards and shipped to Early Access Partners.

The complete application development environment toolkit, guides and examples can be downloaded here at no cost.

Even before the first chips are available, there are likely already many applications in final design stages waiting for the hardware.

While Brainchip isn't the only organisation working on neuromorphic chips (IBM's TrueNorth and Intel's Loihi, for example), it looks like being the first to market.

The first-generation Akida™ chip isn't theory; it's practical reality.

And with other manufacturers' spike neural processing chips nearing final development, powerful AI applications will soon become commonplace.


Akida™ will soon be the Model-T of neuromorphic processors

The most exciting aspect of Brainchip's work is the pioneering nature of the technology. The launch of the first microprocessors in the seventies put low-cost data processing into the hands of teenagers. Kids started building computers in their bedrooms and garages and went on to found Microsoft and Apple.

Akida™ will put the initial building blocks of Artificial Intelligence into the hands of the next generation. Brainchip hasn't just created a chip; it has enabled an ecosystem by making available a free suite of software development tools.

And back at Brainchip, there is a new breed of engineers and scientists who have steeped themselves deeply in neuromorphic methods. They have opened the door to a whole new technological landscape.

While we think Akida™ is exciting, it's what comes after that will be truly amazing.


The world is about to change fast.


"Is the cat outside?"





By Justin Wearne


-----------------------------------------

Further reading

  1. Brainchip Inc. official website
  2. Neuromorphic Chip Maker Takes Aim At The Edge [The Next Platform]
  3. Spiking Neural Networks, the Next Generation of Machine Learning
  4. Deep Learning With Spiking Neurons: Opportunities and Challenges
  5. How close are we to a real Star Trek style medical Tricorder? [Despina Moschou, The Independent]
  6. If only AI had a brain [Lab Manager]
  7. Intel’s Neuromorphic Chip Scales Up [and It Smells] [HPC Wire]
  8. Neuromorphic Computing - Beyond Today’s AI - New algorithmic approaches emulate the human brain’s interactions with the world. [Intel Labs]
  9. Neuromorphic Chips Take Shape [Communications of the ACM]
  10. Why Neuromorphic Matters: Deep Learning Applications - A must read.
  11. The VC’s Guide to Machine Learning
  12. What is AI? Everything you need to know about Artificial Intelligence.
  13. BRAINCHIP ASX: Revolutionising AI at the edge. Could this be the future SKYNET? [YouTube Video - a very good summary]
  14. In depth analysis of Brainchip as a stock [with discussion about the technology] [YouTube Video]
  15. Intel's Neuromorphic System Hits 8 Million Neurons, 100 Million Coming by 2020 [IEEE Spectrum]
  16. Bringing AI to the device: Edge AI chips come into their own TMT Predictions 2020 [Deloitte Insights]
  17. Inside Intel’s billion-dollar transformation in the age of AI [Fastcompany]
  18. Spiking Neural Networks for more efficient AI - [Chris Eliasmith Centre for Theoretical Neuroscience University of Waterloo], really fantastic explanation - Jan 21, 2020 (predates launch of Akida)
  19. Software Engineering in the era of Neuromorphic Computing - "Why is NASA interested in Neuromorphic Computing?" [Michael Lowry NASA Senior Scientist for Software Reliability]
  20. Could Breathalyzers Make Covid Testing Quicker and Easier? [Keith Gillogly, Wired - 15 Sept 2020]
  21. Artificial intelligence creates perfumes without being able to smell them [DW Made For Minds]
  22. Epileptic Seizure Detection Using a Neuromorphic-Compatible Deep Spiking Neural Network [Zarrin P.S., Zimmer R., Wenger C., Masquelier T. [2020]]
  23. Flux sur Autoroute, processing d'image [Highway traffic, image processing] [Simon Thorpe, CerCo (Centre de Recherche Cerveau & Cognition, UMR 5549) & SpikeNet Technology SARL, Toulouse]
  24. Exclusive: US and UK announce AI partnership [AXIOS] 26 September 2020
  25. Breath analysis could offer a non-invasive means of intravenous drug monitoring if robust correlations between drug concentrations in breath and blood can be established. From: Volatile Biomarkers, 2013 [Science Direct]
  26. Moment of truth coming for the billion-dollar BrainChip [Sydney Morning Herald by Alan Kruger - note: might be pay-walled]
  27. Introducing a Brain-inspired Computer - TrueNorth's neurons to revolutionize system architecture [IBM website]
  28. Assessment of breath volatile organic compounds in acute cardiorespiratory breathlessness: a protocol describing a prospective real-world observational study [BMJ Journals]
  29. What’s Next in AI is Fluid Intelligence [IBM website]
  30. ARM sold to Nvidia for $40 billion [Cambridge Independent]
  31. Interview with Brainchip CEO Lou Dinardo [16 September 2020] [Video: Pitt Street Research]
  32. BrainChip Holdings [ASX:BRN]: Accelerating Akida's commercialisation & collaborations [1 September 2020:Video: tcntv]
  33. BrainChip Awarded New Patent for Artificial Intelligence Dynamic Neural Network [Business Wire]
  34. BrainChip and VORAGO Technologies agree to collaborate on NASA project [Tech Invest]
  35. Intel inks agreement with Sandia National Laboratories to explore neuromorphic computing [Venture Beat - The Machine - Making Sense of AI]
  36. Congress Wants a 'Manhattan Project' for Military Artificial Intelligence [Military.com]
  37. Future Defense Task Force: Scrap obsolete weapons and boost AI [US Defense News]
  38. Is Brainchip A Buy? [ASX: BRN] | Stock Analysis | High Growth Tech Stock [YouTube - Project One]
  39. Is Brainchip Holdings [ASX: BRN] still a buy after its announcement today? | ASX Growth Stocks [YouTube ASX Investor]
  40. Hot AI Chips To Look Forward To In 2021 [Analytics India Magazine]
  41. Scientists linked artificial and biological neurons in a network — and it worked [The Mandarin]
  42. Scary applications of AI - "Slaughterbot" Autonomous Killer Drones | Technology [YouTube Video]
  43. BrainChip aims to cash in on industrial automation with neuromorphic chip Akida [Tech Channel - Naushad K. Cherrayil]
  44. How Intel Got Blindsided and Lost Apple’s Business [Marker]
  45. Neuromorphic Revolution - Will Neuromorphic Architectures Replace Moore’s Law? [Kevin Morris - Electronic Engineering Journal]
  46. Insights and approaches using deep learning to classify wildlife [Scientific Reports]
  47. Artificial Neural Nets Finally Yield Clues to How Brains Learn [Quanta Magazine]
  48. Rob May, General Partner PJC Venture Capital Conversation 181 with Inside Market Editor, Phil Carey - discuss edge computing on YouTube.
  49. Why Amazon, Google, and Microsoft Are Designing Their Own Chips [Bloomberg Businessweek: Ian King and Dina Bass]
  50. "Intelligent Vision Sensor" [YouTube video - Sony]
  51. What is the Real Promise of Artificial Intelligence? [SEMICO Research and Consulting Group]
  52. NaNose Medical and BrainChip Innovation [SEMICO - Rich Wawrzyniak]
  53. NaNose - a practical application of Akida to detecting disease through breath analysis [YouTube video]
  54. Nanose company website
  55. What is the AI of things [AIoT]? [Embedded]
  56. Brainchip Holdings Ltd [BRN] acting CEO, Peter van der Made Interview June 2021 [YouTube]
  57. Spiking Neural Networks: Research Projects or Commercial Products? [Bryon Moyer, technology editor at Semiconductor Engineering]

