Edge computing and Artificial Intelligence: a new competitor for 5G

Pavel Konečný, CEO of Neuron soundware, explores the advantages offered by Artificial Intelligence (AI) in edge computing.

In 2016, I visited the CEBIT conference in Hannover. It was full of so-called ‘smart’ things which I did not find smart at all. In fact, much of this ‘smart things’ hype covered devices which were simply ‘connected’ and which, in most cases, delivered a narrowly defined, single-purpose benefit to the user. A few examples include:

  • A pipe valve allowing the user to monitor its position remotely (open/close);
  • A gas volume measurement device which, if secretly installed into a gas tank, could identify a truck driver stealing fuel; and
  • An electric plug which could be switched on and off via Wi-Fi.

However, one very special presentation at CEBIT influenced my views on how AI might be delivered in the future. IBM presented a research project called SyNAPSE, developing an AI chip named ‘TrueNorth’ which could deliver computing power equivalent to the brain of an ant while consuming just 73mW of power. The only clear disadvantage was that it cost about €897,660 per chip at that time.

This example proved that bringing AI to the edge of the network would be possible. It was also obvious that within a few years ‘Moore’s law’ – the observation that the number of transistors per square inch on an integrated circuit roughly doubles every two years – would cause the price to fall: the question was how fast that process would be and how many other similar solutions would emerge on the market. Already at that time, Neuron soundware started to pursue an Internet of Things (IoT) strategy, running AI algorithms at the edge of the network, and decided to develop its own IoT edge devices with audio recording and AI processing capabilities.

A few months later, I created a graph showing the relationship between energy consumption and intelligence, as a function of the computing power a piece of hardware can deliver:

  • With a few milliwatts, no intelligence could be achieved at a reasonable price at that time;
  • Smartphones consume several watts and provide enough computing power for basic AI object recognition in images roughly once per second; and
  • Narrow AI, such as the capability to drive a car, would need hardware with up to a few hundred watts’ power consumption. Analysing camera input around 10 times per second required roughly four trillion floating-point operations per second (4 TFLOPS), a standard measure of computer performance; a quick back-of-envelope estimate follows below.
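As a quick sanity check on that figure, the short Python sketch below simply multiplies an assumed per-frame inference cost by the frame rate; the 400 GFLOP per-frame figure is an illustrative assumption chosen to match the order of magnitude above, not a measured value.

  # Back-of-envelope estimate of the compute needed to analyse camera input.
  # The per-frame operation count is an illustrative assumption, not a measured figure.
  FRAMES_PER_SECOND = 10      # camera input analysed roughly 10 times per second
  OPS_PER_FRAME = 400e9       # assumed ~400 billion floating-point operations per frame

  required_ops_per_second = FRAMES_PER_SECOND * OPS_PER_FRAME
  print(f"Required throughput: {required_ops_per_second / 1e12:.1f} TFLOPS")
  # Prints: Required throughput: 4.0 TFLOPS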

Translating this to what we do at Neuron soundware: roughly the same computing performance is needed whether you are driving a car or analysing the sound of machines to detect an upcoming mechanical failure. Either task requires computing power comparable to the brain of an ant, and IBM made me see that a single ultra-low-power chip holding this capability is possible.

The recent rise of edge computing

Edge computing capability has been on the rise since then and I have kept an eye on several other AI hardware acceleration projects.

In 2017, the Movidius Neural Compute Stick offered 0.1 TFLOPS with about 0.5W power demand at a cost of less than $100 (€91.07). It is designed to extend the capability of less powerful boards, such as the Raspberry Pi, providing a computing power boost of around ten times the original level.
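To give a feel for how such a USB accelerator is driven from a small host board, here is a minimal sketch assuming Intel’s OpenVINO runtime; the model path is a placeholder and ‘MYRIAD’ is the device name OpenVINO has used for Movidius-class VPUs, so treat the details as indicative rather than definitive.

  # Minimal sketch: offloading neural-network inference from a small host board
  # to a Movidius-class USB accelerator via Intel's OpenVINO runtime (assumed setup).
  import numpy as np
  from openvino.runtime import Core

  core = Core()
  model = core.read_model("model.xml")            # placeholder OpenVINO IR model
  compiled = core.compile_model(model, "MYRIAD")  # "MYRIAD" targets the USB accelerator

  dummy_image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # example input tensor
  result = compiled([dummy_image])[compiled.output(0)]
  print(result.shape)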

In 2018, Huawei introduced its Kirin 980 processor, delivering almost 0.5 TFLOPS at 0.1W. Google announced its Edge TPU units, while Rockchip demonstrated the RK3399 equipped with a neural processing unit; both provided performance of about 3 TFLOPS and cost around €89.

In 2019, microcomputers with hardware accelerators for AI technologies (specifically neural networks) became generally available. All the key hardware players have released versions of their AI software stacks optimised for edge computing, which further increases performance. Google’s Edge TPU, for example, features a purpose-built ASIC designed to run AI inference workloads. Nvidia’s Jetson Nano brings 128 CUDA cores into action for less than €89. The ToyBrick RK3399 Pro, meanwhile, is one of the first developer boards to feature a neural processing unit: it slightly outperforms even the Nvidia Jetson.
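As an example of what those optimised stacks look like in practice, the sketch below (based on Google’s documented tflite_runtime API and Edge TPU delegate; the model file name is a placeholder) runs a quantised model locally on a Coral Edge TPU:

  # Sketch: running a quantised TensorFlow Lite model on a Coral Edge TPU.
  # "model_edgetpu.tflite" is a placeholder; libedgetpu must be installed on the device.
  import numpy as np
  from tflite_runtime.interpreter import Interpreter, load_delegate

  interpreter = Interpreter(
      model_path="model_edgetpu.tflite",
      experimental_delegates=[load_delegate("libedgetpu.so.1")],  # route ops to the Edge TPU
  )
  interpreter.allocate_tensors()

  inp = interpreter.get_input_details()[0]
  interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
  interpreter.invoke()
  out = interpreter.get_output_details()[0]
  print(interpreter.get_tensor(out["index"]).shape)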

Fast advancements in IoT technology allowed Neuron soundware to develop the nBox: an edge computing device capable not only of recording high-quality audio on up to 12 channels, but also of delivering AI through edge computing. By edge computing, we mean running only a few processes in the cloud or central platform, with the majority of processing done locally on the device instead.
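Purely as an illustration of that split (this is not the actual nBox implementation), a simplified edge-first loop might look like the sketch below, where the capture, scoring and upload helpers are hypothetical placeholders and only a small anomaly score ever leaves the device:

  # Illustrative edge-first audio pipeline (not the actual nBox implementation).
  # Raw audio stays on the device; only a compact anomaly score goes to the cloud.
  import time
  import numpy as np

  def record_audio_chunk(seconds=1.0, sample_rate=16_000, channels=12):
      """Hypothetical capture helper; a real device would read from its audio hardware."""
      return np.zeros((int(seconds * sample_rate), channels), dtype=np.int16)

  def anomaly_score(chunk):
      """Hypothetical local model; a real deployment would run a trained neural network."""
      return float(np.abs(chunk.astype(np.float32)).mean())

  def report_to_cloud(score, threshold=0.8):
      """Hypothetical uplink: send only the score (a few bytes), never the raw audio."""
      if score > threshold:
          print(f"Alert sent to cloud: score={score:.3f}")

  while True:
      report_to_cloud(anomaly_score(record_audio_chunk()))
      time.sleep(1.0)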

The importance of edge computing becomes obvious from Intel’s acquisition of Movidius, for a figure estimated to be over €350m, and of Mobileye, an autonomous car chip maker, for more than €13.38bn. Tesla has presented a purpose-built, AI-enhanced computer which powers its self-driving cars with 36 TFLOPS. That is enough computing power to process more than 2,000 high-resolution images from the car’s cameras per second, and Tesla claims it is sufficient to achieve autonomous driving.

Overall, edge computing presents four key advantages:

  • Safety – all processed data can be stored locally under tight control;
  • Speed – AI inference can process inputs in milliseconds, meaning minimal latency;
  • Efficiency – embedded microcomputers are low-power and affordably priced; and
  • Offline – the AI algorithm can be deployed in the field, where connectivity might be limited.

Advantages of edge computing over 5G

The question remains: why not avoid all this hardware effort by waiting for 5G networks, in order to leverage abundant cloud computing power and infrastructure? Here are a few reasons why waiting might not be the best strategy.

First, imagine you are sitting in a self-driving car when the car loses 5G connectivity. The car will not only go blind; it will literally lose its decision-making power. Why risk this when the computing capabilities required for high-bandwidth, low-latency communication might cost practically the same as an extra neural processing unit? In addition, the overall energy demand would be higher than for AI inference on dedicated hardware.

Second, mobile internet providers will want to recoup their investment in the development and deployment of 5G networks. Although unlimited data plans might be technically possible, they might not be commercially available any time soon. For example, our nBox with 12 acoustic sensors can produce up to 1 TB of audio data per month. At current prices per gigabyte on 4G networks, transferring this amount of data to the cloud would cost a fortune.
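As a rough sanity check on that figure (assuming, purely for illustration, uncompressed 16-bit audio sampled at 16kHz on all 12 channels; the real device parameters may differ):

  # Rough estimate of monthly raw-audio volume for a 12-channel sensor box.
  # Sample rate and bit depth here are illustrative assumptions, not nBox specifications.
  CHANNELS = 12
  SAMPLE_RATE_HZ = 16_000      # assumed sampling rate
  BYTES_PER_SAMPLE = 2         # 16-bit audio
  SECONDS_PER_MONTH = 60 * 60 * 24 * 30

  bytes_per_month = CHANNELS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_MONTH
  print(f"{bytes_per_month / 1e12:.2f} TB per month")  # prints: 1.00 TB per month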

Finally, network infrastructure will primarily be built in cities, leaving large parts of the country without 5G coverage. In contrast, edge computing devices can be deployed immediately in the right places, with a clear one-off cost which usually does not dramatically increase the overall cost of the IoT solution.

Edge computing combined with AI will allow enormous amounts of data to be processed locally. The additional cost of hardware accelerators is marginal, and the computing performance available for neural networks has been improving roughly tenfold every year. Because the data can be processed in parallel, such hardware outperforms traditional CPU designs.
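A tiny illustration of that data-parallel style, using NumPy: a whole batch of inputs goes through one matrix multiplication instead of being looped over item by item, which is exactly the pattern neural-network accelerators are built to exploit.

  # One vectorised matrix multiply processes a whole batch of inputs at once,
  # the data-parallel pattern that neural-network accelerators exploit.
  import numpy as np

  rng = np.random.default_rng(0)
  weights = rng.standard_normal((256, 64))   # one layer of a toy network
  batch = rng.standard_normal((1000, 256))   # 1,000 input vectors processed together

  outputs = batch @ weights                  # single parallel-friendly operation
  print(outputs.shape)                       # (1000, 64)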

The future is coming faster

The use of edge computing in applications such as self-driving cars, facial recognition or predictive maintenance is just the beginning. We will soon have enough computing power to build truly independently operating machines. They will be able to move safely through cities and factories, and be almost as competent in their work duties as humans.

Next year will mark 100 years since the word ‘robot’ was introduced in the 1920 science fiction play R.U.R. by the Czech writer Karel Čapek. His vision of humanoid robots quickly spread around the world. In the drama, robots become self-aware and capable of feeling emotions such as love. Seeing the pace at which computing power is increasing, alongside other IoT advancements, I think Čapek’s vision might come true much sooner than we expect.

Pavel Konečný

NeuronSW SE

+420 604 182 351

pavel.konecny@neuronsw.com

https://www.neuronsw.com/
