Forget the CSI effect: Put some real AI brains behind video surveillance

Neural networks, ML and real-time analytics

Video is a growing segment of the exploding IoT market – that is, CCTV for security, traffic, workplace monitoring and more. The market for video surveillance is expected to be worth $43bn by 2025 – a CAGR of 11.8 per cent, according to Research and Markets.

If there’s one thing facilitating that growth, it’s the adoption of IP as the world’s transport and communication mechanism, together with the increasing ubiquity of Wi-Fi.

This should be welcome news to OEMs practised in the art of building devices based on industry standards such as IP and x86. Surely it’s a simple matter of channelling what you already know into systems capable of processing, storing and communicating video traffic.

True, but video presents a twofold challenge: one of data volume and one of data processing. It’s a challenge that is changing customers’ IT infrastructure architectures and that, by extension, has implications for the kinds of products and systems OEMs supply.

Let’s start with data. A 2016 IHS study reckoned that by 2019 the world’s video cameras would be capable of generating 2,500PB of data in just a single day – up from 566PB in 2015, which was the equivalent of 1.3m double-layer Blu-ray film discs. Depending on frame rates and compression, one hour of high-definition digital video could create a file of 25GB.
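
As a back-of-the-envelope illustration of that maths – the bitrates below are assumptions for illustration, not figures from the IHS study – the storage arithmetic looks like this:

```python
# Back-of-the-envelope storage arithmetic for compressed HD video.
# The bitrates here are illustrative assumptions, not figures from the study.
BITRATE_MBPS = 8             # assumed 1080p surveillance stream, megabits/second
SECONDS_PER_HOUR = 3600

bytes_per_hour = BITRATE_MBPS * 1_000_000 / 8 * SECONDS_PER_HOUR
print(f"One camera, one hour: {bytes_per_hour / 1e9:.1f} GB")  # ~3.6 GB

# Working backwards, the 25GB/hour figure cited above implies a far
# richer stream of roughly 55 Mbit/s.
implied_mbps = 25e9 * 8 / SECONDS_PER_HOUR / 1e6
print(f"25 GB/hour implies ~{implied_mbps:.0f} Mbit/s")
```

Multiply either figure by thousands of cameras running around the clock and the petabyte-scale numbers above stop looking surprising.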

The sheer amount of processing, storage and network transportation required for video of this kind is staggering, and it raises the question of where the necessary analytics should run in order to deliver the kind of real-time response demanded of CCTV.

There is a further challenge. Monitoring and processing a constant stream of video surveillance is beyond the physical capability of humans, and most organisations will be unwilling to hire armies of staff simply to try to keep up with all that video.

Hence the marriage of video with a branch of artificial intelligence known as neural networks, used to analyse video in real time. Neural networks employ machine learning to teach the system to distinguish sets of actions as allowed and normal, or as unusual, different or wrong.

A neural network uses a set of highly connected, tiered nodes that can modify themselves to learn, going beyond their initial training. Neural networks are already used by cloud providers and others as part of their AI-branded image-recognition services.
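
As a minimal sketch of the idea – not any vendor’s actual system – a tiny network can be trained to label feature vectors derived from video frames as normal or unusual. The random data below stands in for real features, which a production system would extract with a vision model:

```python
import numpy as np
import tensorflow as tf

# Minimal sketch: a small feed-forward network learns to label per-frame
# feature vectors as normal (0) or unusual (1). The random features and
# labels below are placeholders for illustration only.
features = np.random.rand(1000, 128).astype("float32")
labels = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "unusual"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=5, batch_size=32)

# Score a new frame's features and flag anything the network finds unusual.
score = float(model.predict(features[:1])[0][0])
print("unusual" if score > 0.5 else "normal", score)
```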

These are early days for AI, though, and the phrase is widely overused, with “AI” credited with the ability to solve or simplify all kinds of human and business challenges.

That led a 2015 EU report, produced with the UK’s Centre for the Protection of National Infrastructure, to find that one of the biggest areas of public misconception was: “There are no limits to what video analytics can do” – it called this the Crime Scene Investigation effect. “It is true that with modern technologies, such as super-resolution and various forms of sensor fusion, it is possible to generate new views on existing, combined data, but it is against the laws of nature to generate new data out of thin air.”

Nevertheless, the number of cameras deployed, the resolution of the images, the networked nature of video processing and storage, and ultimately the AI-based analytics capabilities are pushing the boundaries. Video analytics for pattern, facial and object recognition, intrusion detection (in sterile zones) and threat assessment (abandoned baggage, a parked car) are just a few examples of a growing field of application.

Delivery, however, is not without its challenges – and these are challenges OEMs working with x86-based systems should understand.

Edge analytics

The infrastructure of a typical video network requires a fog of elements: large-scale deployments of fixed or moving cameras and sensors, wireless and fixed networking, local edge data centres, and fibre connectivity to cloud data centres with huge processing power.

With so many hours of video being produced at the edge, the question becomes where the analytics should take place. Given the tremendous amounts of data involved – the cost of transporting it, the need for massive central storage, and the problem of latency – the edge would seem the natural place.

Of these three factors, latency is the main hurdle to achieving real-time analytics. It cannot be avoided if huge data packets are moving over congested networks to some distant data centre, to be ingested across old network architectures by an AI engine that then pushes its findings back out of the data centre across the same infrastructure.
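
A rough, illustrative latency budget makes the point. Every figure below is an assumption, not a measurement:

```python
# Illustrative (assumed) latency budget: ship a frame to a distant data
# centre for analysis, versus analysing it on an edge box beside the camera.
frame_bytes = 0.5e6            # assumed ~0.5MB compressed HD frame
uplink_bytes_per_s = 20e6 / 8  # assumed 20 Mbit/s effective uplink
rtt_s = 0.08                   # assumed 80ms round trip to the data centre
central_inference_s = 0.05     # assumed model runtime in the data centre
edge_inference_s = 0.07        # e.g. the ~70ms GPU figure cited later

central_total = frame_bytes / uplink_bytes_per_s + rtt_s + central_inference_s
print(f"central: ~{central_total * 1000:.0f} ms")  # ~330 ms
print(f"edge:    ~{edge_inference_s * 1000:.0f} ms")
```

Even with generous assumptions, the round trip dominates – and that is before the network congests.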

According to an IEEE report, Real-Time Video Analytics: the Killer App for Edge Computing, the only feasible approach to delivering large-scale, live video analytics is a hierarchy of edge nodes linked by the public cloud. The IEEE reckons that this combination of high data volume, compute demands and latency requirements makes cameras among the most challenging parts of the Internet of Things. Get it right, and large-scale video analytics “could well be edge computing’s ‘killer app’. Tapping into the potential of recent dramatic increases in the capabilities of computer vision algorithms presents an exciting systems challenge.”

Away from the centre

There’s no way OEMs can avoid this shift away from the centre. HPE OEM Solutions provides world-class technology solutions, guidance and support to OEMs, and its chief technologist Rod Anliker concurs on just how far the edge is transforming the data centre and pushing compute processing outwards.

“The rapid digitisation of industry is exponentially increasing the amount of data produced at the edge. That is transforming the technology and infrastructure required to extract value,” Anliker said. “As the importance of fast, quality data insight grows, so does the demand for enhanced compute, storage and networking capabilities outside the data center. Analytics at the edge minimise latency and reduce the data transfer and storage costs associated with performing all analytics in the data center.”

Add to this AI – the requirements of neural networks and machine learning for video analytics – and you’re starting to look at the need for a tiered, distributed infrastructure running from the device through an AI-powered local edge to, possibly, some kind of central data centre hub.

Building the edge

If the trend is away from ever-bigger x86 server capabilities at the centre, and if you’re not in the business of making IP cameras, what does this edge look like for the typical OEM?

This edge sits between the device and the cloud or data centre, so it won’t enjoy a clean and spacious machine room. Edge systems must therefore be compact and ruggedised. They must be powerful, too: capable of running neural network and big-data APIs locally, or as part of a federated mesh with other distributed edge devices connected via the cloud. Increasingly, such systems will run multi-core x86 CPUs supplemented by GPUs.
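
What might such a box actually run? The sketch below assumes OpenCV for camera capture and a locally stored Keras model; the RTSP URL, model file and threshold are illustrative assumptions, not a reference design:

```python
import cv2                     # OpenCV, for pulling frames from an IP camera
import numpy as np
import tensorflow as tf

# Sketch of an edge analytics loop: grab frames locally, run inference
# locally, and escalate only the alerts upstream rather than raw video.
# The model path and camera URL are assumptions for illustration.
model = tf.keras.models.load_model("edge_anomaly_model.h5")
capture = cv2.VideoCapture("rtsp://camera.local/stream")

while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Resize and normalise to the (assumed) 224x224 input the model expects.
    batch = cv2.resize(frame, (224, 224))[np.newaxis].astype("float32") / 255.0
    score = float(model.predict(batch, verbose=0)[0][0])
    if score > 0.5:
        # Only the finding – not the footage – needs to cross the network.
        print(f"unusual activity detected, score={score:.2f}")
```

The design point is the last line: the heavy lifting happens beside the camera, and only small, structured findings travel to the centre.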

GPUs have their roots in gaming, but they are being used more widely as part of the hardware architecture underpinning AI, machine learning and neural networks because of their parallel-processing performance and efficiency. In one example, Nvidia’s Tesla P4 has been able to extract different attributes from a live video feed within 70 milliseconds, with an accuracy rate of around 90 per cent.

Another building block of such an edge system is SSD storage: fast to write and retrieve, dense, and a help in further shrinking the overall device footprint.

In terms of AI and neural networks, the CPU-and-GPU architecture would likely work with a Hadoop cluster – Hadoop being the open-source storage and processing framework for big data, whose roots lie in Google’s MapReduce. Programming AI and neural networks takes us into a rich realm of mathematics, but off-the-shelf elements exist to get people going, such as Google’s TensorFlow for ML. Also think databases: embedded, and capable of supporting structured and unstructured data.
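
To give a flavour of how far those off-the-shelf elements go, TensorFlow’s Keras API ships pretrained image classifiers that load in a couple of lines. The snippet below is a generic illustration – the random frame stands in for real camera input:

```python
import numpy as np
import tensorflow as tf

# Off-the-shelf building block: MobileNetV2, pretrained on ImageNet and
# bundled with Keras. No bespoke mathematics needed to get started.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Classify one 224x224 RGB frame (random data stands in for a real image).
frame = np.random.rand(1, 224, 224, 3).astype("float32") * 255
inputs = tf.keras.applications.mobilenet_v2.preprocess_input(frame)
preds = model.predict(inputs)

# Print the top three ImageNet labels and their probabilities.
top3 = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
for _, label, prob in top3:
    print(label, f"{prob:.2f}")
```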

The software in your stack will almost certainly run on Linux: embedded or server, it has proven itself in terms of scalability and reliability for IoT and edge devices.

End of the beginning

The data centre isn’t disappearing, but many of its functions are being reassigned to the edge thanks to IoT. Add real-time video analytics to the mix and you’re looking at an architecture that must be powerful enough to run complex queries locally while supporting AI.

For OEMs practised in the art of building systems based on standard architectures, this new world of the AI-powered edge is not a revolution but – rather – an evolution.

Sponsored by: Hewlett Packard Enterprise OEM Solutions.



