The Nx EVOS platform and Nx AI Manager from Network Optix are now validated on Intel AI Edge Systems.
The combined solution targets developers and technical decision-makers building computer vision and vision-language workloads who need to run AI across fleets of heterogeneous edge devices without maintaining a separate pipeline for every hardware configuration.
“Edge AI is only valuable when teams can deploy and manage it at scale,” said James Cox, VP of Business Development at Network Optix. “By bringing Nx EVOS and Nx AI Manager to Intel AI Edge Systems, we’re giving solution builders a unified way to deploy and manage AI-driven video across Intel CPUs, GPUs, and NPUs without rebuilding their pipeline for each device.
“That means less time spent wrestling with infrastructure and more time focusing on real-world use cases and outcomes.”
Tackling the Complexity of Edge AI Video
Deploying AI at the edge has traditionally required extensive customisation for each hardware platform, resulting in fragmented pipelines, duplicated effort, and long development cycles. This complexity slows time-to-market and makes it difficult to scale beyond pilots.
Nx EVOS addresses this challenge with Nx AI Manager, a universal AI inference pipeline that standardises model deployment and optimisation across Intel architectures, from Intel Core and Intel Xeon processors to Intel Arc graphics and integrated NPUs.
Using the Intel Distribution of OpenVINO toolkit and Intel media processing libraries such as the Intel Video Processing Library (VPL), the platform allows solution builders to run computer vision and vision-language model (VLM) workloads across cloud, data center, and edge devices using a single, hardware-agnostic approach.
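To illustrate the idea behind a hardware-agnostic approach, the sketch below shows one way a pipeline might choose an inference target from whatever accelerators a device reports. This is a hypothetical illustration, not Nx's or OpenVINO's actual API; the priority order and function name are assumptions. (OpenVINO itself offers a comparable mechanism through its "AUTO" device plugin, which selects among available CPU, GPU, and NPU targets.)

```python
# Illustrative sketch only: a simple device-selection fallback, in the
# spirit of running one pipeline across heterogeneous Intel hardware.
# The priority order below is an assumption for demonstration purposes.
DEVICE_PRIORITY = ["NPU", "GPU", "CPU"]

def select_device(available):
    """Return the highest-priority device present on this machine,
    falling back to CPU if nothing else is reported."""
    for dev in DEVICE_PRIORITY:
        if dev in available:
            return dev
    return "CPU"

print(select_device(["CPU", "GPU"]))  # -> GPU
print(select_device(["CPU"]))         # -> CPU
```

The point of such a fallback is that application code stays identical across device fleets; only the runtime's view of available accelerators changes.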
What Nx EVOS + Intel AI Edge Systems Enable
With Nx EVOS and Nx AI Manager running on Intel AI Edge Systems, solution builders gain:
- Cross-hardware AI flexibility: Run AI models across Intel CPUs, integrated and discrete GPUs, and NPUs through one inference pipeline, reducing hardware-specific engineering and accelerating time-to-market.
- Scalable model management at the edge: Deploy, update, and roll back AI models over the air via Nx Cloud across hundreds or thousands of distributed edge devices from a single interface, keeping systems current with minimal operational overhead.
- Continuous improvement loop: Use configurable post-processing to automatically collect targeted video samples for retraining and fine-tuning models. This helps teams “close the AI loop” and improve model accuracy and robustness over time.
Together, these capabilities shorten development cycles, make fleets of devices easier to manage, and lower the overall complexity of maintaining AI in production.