Cisco Unified Edge, equipped with Intel Xeon 6 SoCs, introduces a future-ready AI infrastructure that enhances performance and security while reducing network traffic by enabling real-time inferencing directly at the data source.
The edge is emerging as a key domain for computing, driven by the rising demand for agentic and physical AI workloads. Organizations now need flexible infrastructures capable of scaling across various sectors such as retail, manufacturing, and healthcare, while processing data closer to its origin.
Cisco and Intel have jointly announced an integrated platform designed specifically for distributed AI workloads. Built on the Intel® Xeon® 6 system-on-chip (SoC), the solution brings compute, networking, storage, and security closer to where edge data is generated, enabling efficient real-time AI inferencing and supporting agentic workloads.
“A systems approach to AI infrastructure – one which integrates hardware, software and an open ecosystem – is essential to the future of compute, from the smallest edge device to the most complex data center,” said Sachin Katti, Chief Technology and AI Officer and General Manager of Intel’s Network and Edge Group.
“Together with Cisco, we’re redefining what’s possible: delivering a unified, secure, and scalable infrastructure that is purpose-built to handle the next decade of complex AI workloads generating real-time intelligence where it is needed most.”
This collaboration marks a significant step in evolving AI infrastructure to meet the growing complexity and immediacy of workloads at the edge.
Author’s summary: Intel and Cisco have developed an integrated edge AI platform that combines hardware and software to enable secure, scalable, real-time inferencing close to data sources.