Digifort explores how the AI industrial revolution might impact CCTV

In this special blog, Nick Bowden, Managing Director of Digifort UK, looks at AI’s impact on the CCTV industry now and into the future, as we enter a modern industrial revolution.

The global rise of Artificial Intelligence (AI) has fundamentally changed the way many industries work and is revolutionising CCTV.

How will the AI industrial revolution impact CCTV?

AI describes a type of computer processing that ‘learns’ to improve its capability, often likened to human understanding. AI’s development is being driven by new and emerging, high-growth, technology-driven markets, such as datacoms, telecoms, networking, automotive, gaming, defence, consumer electronics and more; the CCTV industry is a beneficiary.

AI development is core to video analytics performance in CCTV, improving effectiveness, accuracy and channel density while reducing cost. However, the high-performance computing (HPC) hardware development that enables the processing escalation AI demands will also affect physical CCTV system capability, design, layout and cost.

AI now.
Software.

In CCTV applications, AI and analytics are synonymous. The video analytics technology used in CCTV solutions can broadly be split into three categories: Neural, Deep Learning and Binary. Digifort supports all three types, as well as integrating with the analytics in third-party NVRs, analytics boxes and cameras at the ‘edge’. Each has a cost-performance trade-off, so Digifort works with them all to give customers flexibility, for example not spending on advanced analytics where cheaper motion detection might do.

Leading analytics types use Machine Learning (ML) to train the software algorithm to recognise and interpret objects in a scene, including relevant movement and behaviour patterns. Like human recognition, many different objects can be identified, each with a ‘confidence’ figure, from a stored library of known objects learnt by the system over time. These objects might include people, vans, bikes, cars, trucks, groups of people, bags, cyclists and many more, including their colour profiles and movement directions.
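
As an illustration of how those ‘confidence’ figures are typically handled, here is a minimal sketch; the detection structure, class names and threshold below are illustrative assumptions, not Digifort’s actual API.

```python
from dataclasses import dataclass

# Hypothetical detection record: class label, confidence (0-1) and bounding box.
@dataclass
class Detection:
    label: str          # e.g. "person", "van", "cyclist"
    confidence: float   # how sure the model is, 0.0-1.0
    box: tuple          # (x, y, width, height) in pixels

def filter_detections(detections, wanted_labels, min_confidence=0.6):
    """Keep only objects of interest whose confidence clears a threshold."""
    return [d for d in detections
            if d.label in wanted_labels and d.confidence >= min_confidence]

# Example: only raise events for people and vans seen with >= 60% confidence.
frame_detections = [
    Detection("person", 0.91, (120, 80, 40, 110)),
    Detection("bag", 0.42, (300, 200, 25, 30)),
    Detection("van", 0.77, (500, 150, 180, 120)),
]
print(filter_detections(frame_detections, {"person", "van"}))
```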

State-of-the-art Deep Learning (DL), a category of ML, increases accuracy and effectiveness further. This includes the ability to automatically self-calibrate and discount scene items that are of no interest to the analytics, reducing false alarms. Another example is overlaying skeletal frames on people to track the position and movement of hands, arms and heads relative to the torso, along with speed of movement, to identify more complex behaviour patterns such as aggression or violence.
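
As a purely illustrative sketch of the skeletal-tracking idea: the keypoints, frame rate and ‘aggression’ threshold below are invented for the example and are not how any particular DL model actually works.

```python
import math

# Hypothetical skeletal keypoints per frame: joint name -> (x, y) in pixels.
frame_t0 = {"torso": (200, 300), "right_hand": (230, 260)}
frame_t1 = {"torso": (202, 301), "right_hand": (330, 180)}

def relative_speed(kp_a, kp_b, joint, anchor="torso", dt=0.04):
    """Speed of a joint relative to the torso between two frames, in pixels/second.
    dt = 0.04 s corresponds to 25 FPS."""
    ax0, ay0 = kp_a[joint][0] - kp_a[anchor][0], kp_a[joint][1] - kp_a[anchor][1]
    ax1, ay1 = kp_b[joint][0] - kp_b[anchor][0], kp_b[joint][1] - kp_b[anchor][1]
    return math.hypot(ax1 - ax0, ay1 - ay0) / dt

# Flag fast hand movement relative to the body as a *candidate* aggression event.
speed = relative_speed(frame_t0, frame_t1, "right_hand")
if speed > 1500:  # illustrative threshold in pixels/second
    print(f"possible aggressive movement: hand speed {speed:.0f} px/s")
```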

Hardware.

Current VMS applications with analytics use servers with CPUs (Central Processing Units) for the VMS operation, with enough grunt to process video from the system’s cameras. Servers or PCs with GPUs (Graphics Processing Units, or graphics cards) provide the computer processing capability needed for analytics. The Digifort VMS will easily process 100 to 200 cameras on a single 2U server with a mid-range Intel Xeon-class CPU, depending on the camera recording profiles. Digifort, an Nvidia partner, designs its analytics to run on Nvidia GPUs, where a mid-range graphics card, such as the RTX A2000, will process up to 60 channels of video analytics (VA), depending on the type. So both the VMS and VA processing are done using established, well-proven hardware to deliver cost-effective CCTV system building blocks.
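
A back-of-envelope sizing sketch using the figures quoted above (indicative only; real designs depend on recording profiles and the analytics type):

```python
import math

CAMERAS_PER_VMS_SERVER = 150   # mid-point of the 100-200 cameras quoted above
VA_CHANNELS_PER_GPU = 60       # e.g. an RTX A2000-class card, depending on analytics type

def rough_sizing(total_cameras, analytics_channels):
    """Very rough count of VMS servers and GPUs for a given system size."""
    servers = math.ceil(total_cameras / CAMERAS_PER_VMS_SERVER)
    gpus = math.ceil(analytics_channels / VA_CHANNELS_PER_GPU)
    return servers, gpus

# Example: 400 cameras, 120 of which run video analytics.
print(rough_sizing(400, 120))   # -> (3, 2): three 2U servers, two mid-range GPUs
```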

Network, storage and the cloud.

Perhaps the greatest limitation in CCTV applications currently, in both physical hardware terms and understanding, is network bandwidth and storage, both critical to AI and analytics. If a 4MP camera on a remote site is streamed at 25 FPS using an efficient H.265 compression algorithm, its bandwidth might be around 3 Mbps, just as an example (depending on scene activity, image quality and camera type). When that video is recorded for 31 days, it will need around 1.0 TB of storage. An 8-channel CCTV system will, for example, need a multiple of that: 24 Mbps and 8 TB. I am not sure what the average broadband connection delivers these days, but 100 Mbps down and 20 Mbps up is far better than my current residential connection, and even that would not be enough, as it is the up speed we need when streaming from a remote site to a central location. Digifort offers rental licence options and supports centralisation on a remote server or cloud (someone else’s server), but cloud storage and the broadband connection needed to stream effectively are expensive.
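
The arithmetic behind those storage figures, as a quick sketch (it assumes a constant 3 Mbps stream, whereas real bitrates vary with scene activity):

```python
# Storage and uplink needed for continuously recorded cameras (rough figures).
BITRATE_MBPS = 3          # one 4MP camera, 25 FPS, H.265, as in the example above
RETENTION_DAYS = 31
CHANNELS = 8

seconds = RETENTION_DAYS * 24 * 60 * 60
storage_per_camera_tb = BITRATE_MBPS / 8 * seconds / 1_000_000   # Mbps -> MB/s, then MB -> TB

print(f"per camera: ~{storage_per_camera_tb:.1f} TB over {RETENTION_DAYS} days")
print(f"{CHANNELS}-channel system: ~{BITRATE_MBPS * CHANNELS} Mbps uplink, "
      f"~{storage_per_camera_tb * CHANNELS:.0f} TB of storage")
```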

Intelligent VMS systems like Digifort have variable bit rate, adjusting camera bit rates in real time to deliver the bandwidth where it is needed most. However, for larger systems, unless the mindset moves from continuous recording to event recording triggered by AI-driven analytics in real time, this dog won’t hunt, as they say! And that’s before we factor in latency.
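
To make the continuous-versus-event-recording point concrete, here is a toy comparison; the event rate and clip length are invented purely to illustrate the scale of the saving.

```python
# Compare continuous recording with event-triggered recording for one camera.
BITRATE_MBPS = 3
HOURS_PER_DAY = 24

# Assumed analytics behaviour (illustrative only): 40 events/day, 60 s recorded per event.
EVENTS_PER_DAY = 40
SECONDS_PER_EVENT = 60

continuous_gb = BITRATE_MBPS / 8 * HOURS_PER_DAY * 3600 / 1000
event_gb = BITRATE_MBPS / 8 * EVENTS_PER_DAY * SECONDS_PER_EVENT / 1000

print(f"continuous: ~{continuous_gb:.1f} GB/day, event-driven: ~{event_gb:.1f} GB/day "
      f"({event_gb / continuous_gb:.0%} of the continuous figure)")
```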

As well as exploring the role of AI now, the blog goes into detail about the impact AI is likely to have in the future, including improved performance in video analytics and better predictive analysis. Bowden also explores how AI is creating an explosion in demand for greater bandwidth, lower latency and faster processing capability. He adds that AI-based analytics need near-zero network latency to avoid delays between streamed video and the triggering of analytics events.

To read the full article, click here

For more Digifort news, click here

