
AI, neural networks and the edge of the cloud


Feature articles | By eeNews Europe



A recent article in Business Insider stated, “AI isn’t part of the future of technology. AI is the future of technology”. AI and its related disciplines are moving rapidly from research labs to diverse real-world applications, with tangible benefits for consumers and innovative business models – from autonomous vehicles to natural language processing, and cognitive expert advisors.

The list of applications employing intelligence and computer vision crosses multiple industries and market segments. AI is already commonplace in both consumer applications, such as personal assistants, and in commercial use cases such as credit card fraud detection, security and robotics. AI is expected to be the next general purpose technology – a significantly disruptive, long-term source of broadly diffused growth that is likely to last for at least 75 years.

Much of this “intelligence” has taken the shape of neural networks (NNs) for processing, segmenting and classifying images. NNs have proven highly capable across many different tasks, producing fast, accurate results that in some cases exceed human capabilities. Open source frameworks such as Caffe and TensorFlow are enabling the dissemination and democratisation of NNs, creating a vibrant ecosystem of researchers and developers around them. The introduction of an NN API for Android will focus the industry and accelerate the adoption of NNs even further.
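To make this concrete, the snippet below is a minimal sketch of image classification with a pre-trained network in TensorFlow, one of the frameworks mentioned above. The choice of MobileNetV2 and the image file name are illustrative assumptions, not details from the article.

# Minimal sketch: classifying an image with a pre-trained network in TensorFlow.
# MobileNetV2 and "photo.jpg" are illustrative choices, not taken from the article.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")           # downloads pre-trained weights

img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

preds = model.predict(x)                          # run inference on the image
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")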

 

Processing in edge devices

For NNs to do their job correctly, they first need to be trained. Typically, this is done ‘offline’ in the cloud on powerful server hardware. Recognising patterns and objects with the trained model is known as inferencing, and it happens in real time: the trained neural network model is deployed and run against live data. Today, this stage is also typically performed in the cloud but, due to scalability issues and to fully realise AI’s potential, it will increasingly need to be done at the edge – for example, on mobile and embedded devices. The shift is also driven by the growing need for AI-enabled devices to operate remotely and/or untethered, such as drones, smartphones and augmented reality smart glasses.
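The split described here can be sketched very simply: a model trained in the cloud is converted into a compact on-device format and shipped to the edge, where only inferencing runs. The sketch below uses TensorFlow Lite as one illustrative on-device format; the file names are hypothetical.

# Sketch of the train-in-the-cloud, infer-at-the-edge split, using TensorFlow Lite
# as one possible on-device format. File names are hypothetical.
import tensorflow as tf

trained_model = tf.keras.models.load_model("model_trained_in_cloud.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(trained_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]    # e.g. post-training quantisation
tflite_model = converter.convert()

with open("model_edge.tflite", "wb") as f:              # this file is deployed to the edge device
    f.write(tflite_model)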

Looking at connectivity in more detail, mobile networks might not always be available, whether 3G, 4G or 5G, not to mention the prohibitive cost of streaming multiple simultaneous high-resolution video feeds. Sending data to the cloud and expecting a decision back in real time is therefore not realistic. As such, it is time to move the processing and deployment of NNs to edge devices: running them over the network is simply not practical because of scalability, latency, intermittent connectivity and a lack of suitable security.
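A back-of-the-envelope calculation makes the bandwidth point clear. The bitrate, per-frame result size and camera count below are assumed figures chosen for illustration, not numbers from the article.

# Rough comparison: streaming raw video to the cloud versus sending only
# on-device inference results. All figures are illustrative assumptions.
VIDEO_BITRATE_MBPS = 8.0        # assumed bitrate for one 1080p H.264 feed
RESULT_BYTES_PER_FRAME = 200    # assumed size of labels/bounding boxes per frame
FPS = 30
CAMERAS = 16

video_mbps = VIDEO_BITRATE_MBPS * CAMERAS
results_mbps = RESULT_BYTES_PER_FRAME * 8 * FPS * CAMERAS / 1e6

print(f"Streaming raw video: {video_mbps:.1f} Mbit/s")
print(f"Sending inference results only: {results_mbps:.2f} Mbit/s")

With these assumptions, sixteen cameras would need well over 100 Mbit/s of uplink to stream video, but under 1 Mbit/s to report inference results.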


Why dedicated hardware acceleration is needed

Deploying and running NNs on edge devices, on the other hand, brings its own unique challenges, such as limited compute resources, power and memory bandwidth.

To deliver the required level of performance within those constraints, dedicated silicon offering hardware acceleration for neural networks is needed. This provides the necessary leap in performance at much-reduced power consumption – something consumers care about considerably. While they will come to expect the benefits of AI, such as improved search, they will not want it at the cost of their device’s battery life.

 

New use cases

The high performance of dedicated, local hardware acceleration will enable new use cases, for example in areas such as smart security, where the key driver is reduced total cost of ownership. NNs can detect suspicious behaviour automatically, raise an alarm, engage operators to monitor the situation and then take action if needed. In addition, the bandwidth needed to transmit and store surveillance footage is significantly reduced.

Dedicated hardware will also enable security systems to perform on-device analytics, whether in a camera in a city centre, a stadium or a home security system. Such hardware can run multiple different network types, enabling more intelligent decision making, reducing the number of false positives and thereby saving time and power. And because of the low-power processing, these cameras can be powered over the data network or even battery operated, making them easier to deploy and manage.
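As a rough illustration of what running multiple network types on one device might look like, the sketch below runs two different TensorFlow Lite models on the same camera frame, a detector followed by a behaviour classifier. This is not the article's implementation, and the model files are hypothetical.

# Illustrative two-network pipeline on a single device. Model files are hypothetical,
# and a real system would crop detections before classifying behaviour.
import numpy as np
import tensorflow as tf

def run_tflite(model_path, input_tensor):
    interp = tf.lite.Interpreter(model_path=model_path)
    interp.allocate_tensors()
    interp.set_tensor(interp.get_input_details()[0]["index"], input_tensor)
    interp.invoke()
    return interp.get_tensor(interp.get_output_details()[0]["index"])

frame = np.zeros((1, 300, 300, 3), dtype=np.float32)           # stand-in for a camera frame

detections = run_tflite("person_detector.tflite", frame)       # first network: find people
behaviour = run_tflite("behaviour_classifier.tflite", frame)   # second network: classify activity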

Drones are another great example of AI and NNs working successfully together. They typically fly at speeds in excess of 150 mph, or roughly 67 m/s, so the vision algorithms need to run locally. Without dedicated hardware for NN acceleration, a drone would need to anticipate obstacles 10-15 metres ahead to avoid a collision, and network availability, bandwidth and latency make it impossible to do this over the cloud. With a true hardware solution, the drone can run multiple NNs to identify and track objects simultaneously at a distance of only one metre.
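The arithmetic behind this is simple: the distance a drone covers before a detection result arrives is its speed multiplied by the decision latency. The latency values below are assumptions chosen for illustration.

# Distance travelled before a detection result is available, at ~67 m/s (as quoted above).
# The two latency figures are illustrative assumptions.
SPEED_MPS = 67.0

for label, latency_s in [("cloud round-trip", 0.20), ("on-device NNA", 0.01)]:
    print(f"{label}: {SPEED_MPS * latency_s:.1f} m travelled before a result arrives")

At an assumed 200 ms cloud round-trip the drone has already covered more than 13 metres; at an assumed 10 ms on-device latency it has covered well under one metre.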

As part of my role at Imagination, I work on a dedicated hardware offering designed to deliver this leap in performance and power consumption: the PowerVR Series2NX neural network accelerator (NNA). Recently, a smartphone manufacturer announced that the hardware it uses for face detection to unlock the device delivers 600 billion operations per second; the Series2NX delivers up to 3.2 trillion operations per second in a single core. A smartphone will typically hold thousands of photos that are sorted automatically in a number of ways, for example by identifying all photos with a particular person in them. A GPU could process around 2,400 pictures using one per cent of the battery, but with the same amount of power the PowerVR Series2NX could handle 428,000 images, highlighting the full potential of NNs.
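Putting the quoted figures side by side as simple ratios (the numbers are from the article; the calculation itself is only an illustration):

# Ratios derived from the figures quoted above.
nna_ops, phone_ops = 3.2e12, 600e9              # operations per second
nna_images, gpu_images = 428_000, 2_400         # photos sorted per one per cent of battery

print(f"Throughput advantage: {nna_ops / phone_ops:.1f}x")            # ~5.3x
print(f"Images per unit of energy: {nna_images / gpu_images:.0f}x")   # ~178x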


Looking to the future – the right combination of AI and NN accelerators

As the world becomes increasingly dependent on AI, the need for NNs will grow with it – consumers will come to expect and demand the enhancements they bring, such as image recognition, voice processing, language translation and more, but without a negative impact on performance or power.

The processing will have to be done in edge devices, because latency, intermittent connectivity and a lack of suitable security simply won’t be acceptable to consumers and, in some instances, could ultimately put lives at risk.

However, without an NNA in place, some devices will struggle to keep up and will ultimately fail at their intended task. It is therefore clear that, to make AI practicable, an NN running on an NNA in an edge device should be the platform of choice for the future.

 

About the author:

Francisco Socal is Technology Marketing Manager for PowerVR at Imagination Technologies – www.imgtec.com

 

Related articles:

Imagination launches flexible neural network IP

ST preps second neural network IC

Intel launches self-learning processor

Ceva and Brodmann17 partner to make AI pervasive

Cognitive hearing aid puts DNNs to work with the wearer’s brain
