Publish date: 09.04.25

An AI PC—or Copilot+ PC, as per Microsoft’s terminology—is a new class of device engineered to natively execute AI workloads with greater speed and efficiency than legacy hardware lacking dedicated AI processing components.

Understanding NPUs

At the heart of an AI PC lies the Neural Processing Unit (NPU)—a processor purpose-built for accelerating AI and machine learning tasks. While GPUs have historically handled such workloads, they are not specifically optimised for them and are considerably less power-efficient.

Traditional processor performance is measured in gigahertz (GHz), or billions of cycles per second. In contrast, NPU performance is quantified in TOPS: trillions of operations per second. When working with Large Language Models (LLMs), performance is also considered in tokens—the fundamental units of data. A single token typically represents approximately four characters of text. Accordingly, when an LLM generates a response, it does so in terms of tokens rather than words.
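As a rough illustration, the four-characters-per-token rule of thumb above can be turned into a quick estimate. This is a sketch only, not a real tokenizer (actual tokenisation varies by model):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token-count estimate using the ~4 characters/token rule of thumb."""
    return max(1, round(len(text) / chars_per_token))

# A 31-character sentence works out to roughly 8 tokens
print(estimate_tokens("An AI PC runs models on-device."))  # 8
```

Real tokenizers split on learned subword boundaries, so actual counts will differ, but this approximation is good enough for capacity planning.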

The efficacy of an AI model depends on the volume and velocity with which it can process tokens, and this, in turn, is influenced by both the complexity of the model and the device’s NPU capability. Where LLMs are deployed locally—on-device—NPU performance, as measured in TOPS, directly affects response latency and throughput.
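The relationship between throughput and latency described above is simple division: the time a user waits for a full response is the number of tokens generated divided by the device's decode rate. The figures below are illustrative assumptions, not benchmarks of any particular NPU:

```python
def response_latency_s(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds to generate a response at a given decode throughput."""
    return num_tokens / tokens_per_second

# Illustrative: a 200-token answer at 20 tokens/s takes 10 seconds,
# while doubling throughput to 40 tokens/s halves the wait to 5 seconds.
print(response_latency_s(200, 20.0))  # 10.0
print(response_latency_s(200, 40.0))  # 5.0
```

This is why TOPS matters for locally deployed LLMs: a faster NPU raises tokens per second, which directly shortens perceived response time.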

Benchmarking NPU Performance

As noted, TOPS (Trillions of Operations Per Second) is the key performance indicator for NPUs. To support local inference for tools such as Microsoft Copilot, a minimum of 40 TOPS is typically required.

Hardware vendors such as Intel, AMD, Apple and Qualcomm often list NPU specifications in their technical documentation. Example figures include:

  • Apple M4 Pro: 38 TOPS
  • AMD Ryzen AI 9: 55 TOPS
  • AMD Ryzen 9 8945HS: 16 TOPS
  • Qualcomm Snapdragon X Elite: 45 TOPS
  • Intel Core Ultra 5 (245KF): 13 TOPS

It’s important to note that many of these devices are optimised for mobile platforms—particularly laptops—where power consumption is a critical design constraint. One prominent use case is Windows Studio Effects, a suite of AI-powered features that enhance video conferencing (e.g. background blur and replacement, noise suppression, and auto-framing using face detection and digital panning).

The AI PC and Windows 11

There is significant overlap between the specifications required for an AI PC and those mandated for devices running Windows 11, as the operating system now includes Copilot functionality natively.

However, the AI PC represents more than a simple upgrade path. It is increasingly viewed as a foundational platform for organisations seeking to optimise performance, security, and operational efficiency in an AI-driven landscape.

With support for Windows 10 ending in October 2025, IT decision-makers will be required to evaluate hardware investments strategically. If you refresh with hardware that doesn’t have native AI capabilities, you’ll miss out on potential efficiency gains for your workforce for years to come.

Luckily, this is where we can help. Our free Windows 11 Readiness Assessment can help you to understand where your organisation sits in terms of AI capabilities and readiness.

Edge AI and Sustainability

Executing AI workloads at the edge—on-device—also offers significant sustainability benefits.

Energy consumption in data centres is vast and is measured in kilowatts. NPU power draw, by contrast, is measured in milliwatts: orders of magnitude smaller. Shifting processing from the cloud to the local device therefore offers substantial gains in the energy efficiency of AI tasks.
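The scale gap above is easy to quantify: a kilowatt is a million milliwatts, so even modest figures produce a large ratio. The numbers below are illustrative assumptions chosen to show the arithmetic, not measurements of any specific hardware:

```python
def power_ratio(cloud_kw: float, npu_mw: float) -> float:
    """Ratio of cloud power draw (kilowatts) to NPU power draw (milliwatts)."""
    cloud_w = cloud_kw * 1_000   # kilowatts -> watts
    npu_w = npu_mw / 1_000       # milliwatts -> watts
    return cloud_w / npu_w

# Illustrative: a 1 kW server load vs a 500 mW NPU workload
print(power_ratio(1.0, 500.0))  # 2000.0
```

Even with generous assumptions for the NPU, the cloud-side figure is thousands of times larger, which is the core of the sustainability argument for edge AI.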

The Case for On-Device AI

While cloud-based AI tools (e.g. Microsoft Copilot, Bing Image Creator, Adobe Firefly, Runway, Udio, etc.) can be accessed from any device with a web browser, AI PCs offer distinct advantages, especially for enterprise environments.

From a data governance and compliance standpoint, organisations may prefer to keep AI workloads on-premises to reduce the risk of data leakage. While consumers might casually query an LLM about personal finances, businesses are understandably cautious about exposing sensitive strategic information to third-party platforms.

Running LLMs locally allows organisations to maintain data sovereignty and auditability. Additionally, users may appreciate the privacy afforded by local inference—whether experimenting with generative tools or engaging in personal queries they would prefer not to be stored in the cloud.

Conclusion

AI is reshaping the digital workplace at an unprecedented pace. For IT leaders, investing in NPU-equipped AI PCs is not only a question of performance but a strategic imperative encompassing security, efficiency, cost reduction, and future-readiness.

With the future of AI in business moving decisively towards endpoint computing, Xeretec are on hand to guide you and help you realise the full potential of AI within secure, high-performance, and energy-efficient IT environments.

Get in touch today to see how we can help.