Learning Velocity, Trust, and the Future of the AI PC at Work
Mar 25, 2026
As companies deploy AI more widely, they've begun to cast an eye towards how the technology will reshape both workplaces and workforces over the longer term. Forecasts of artificial intelligence's impact run the gamut from what I'll call 'business as usual' scenarios in which AI is impactful but not fundamentally transformative, to predictions that AI will redefine the nature and scope of work across virtually every industry.
A recent Gartner report weighs in on this idea, suggesting that AI is likely to reshape work without causing dramatic, sweeping job losses. Companies will still face challenges in deploying AI successfully over the next few years, however, and the Gartner report Predicts 2026: AI’s Impact on the Future of Workforce lays out a vision of how artificial intelligence will transform business expectations and corporate best practices over the next 2-4 years.
By 2030, Gartner predicts the half-life of technical skills will shrink to just 2-5 years from its current 8-12 years. This metric refers to the amount of time it takes for roughly half of someone's technical knowledge to become outdated. Such a sharp reduction in less than five years is a sign of just how significant AI is likely to be -- and it may signal a shift in how we think about operating systems and applications.
Other Gartner predictions include:
- By 2028, at least one large enterprise will expect to create and maintain digital AI avatars of its employees
- By 2028, organizations that deploy AI agents in team structures will have 15% lower employee engagement compared to those that don’t
- By 2029, 60% of digital products will be architected primarily for AI agent consumption, with human-facing UX becoming a secondary consideration
- By 2029, 30% of employees initially displaced by AI will be rehired, often at a higher cost, due to ineffective workforce transition strategies
Learning Velocity and Trust Underpin AI
Learning velocity is directly referenced in only one Gartner prediction, but I'd argue it's one of two powerful indirect themes that connect much of the overall piece. Websites and backends aren't going to rewrite themselves to adopt more agent-friendly UX principles -- at least not wholly -- and the shifting demands of e-commerce and internet best practices will require digital marketing and web development teams to reconsider the basic principles of web design.
Employees rarely enjoy ideal circumstances for learning a new product, platform, or piece of software. A sustained, company-wide reduction in skill longevity will pressure organizations to shorten the time between the initial debut of new capabilities and when employees can use them widely and safely. Continuing education and adaptive learning can help, but only if these tools mesh well with daily work and align with performance measurements.
Trust is another indirect theme of the Gartner report. As this AMD whitepaper discusses in more detail, trust is essential to long-term success of AI, as well as to the continuing education initiatives proposed above. Rolling out AI successfully will require inter-departmental cooperation. Companies will need to build trust internally and externally, often at the same time.
There's a connection between trust and speed that makes the word 'velocity' in "learning velocity" particularly apt. Velocity refers to the speed at which something moves, and people are generally willing to move more quickly when they feel safe and adequately supported. Thus, even learning velocity arguably depends on trust: it flourishes when people know the rules and understand how they will be evaluated, and falters when employees fear being penalized for experimentation or held to poorly communicated standards.
Building an AI-Ready Enterprise From the Endpoint Up
At AMD, while recognizing that cloud and data center infrastructure play essential roles, we also look at AI through an endpoint-centric lens, partly because local devices are where people interact with AI and draw conclusions about its usefulness and relevance. Responsiveness, privacy, manageability, and ease of use all influence whether an AI feature becomes part of routine work or remains an occasional demo, and all of these characteristics are substantially shaped by the underlying PC hardware.
This is where AI PCs enter the picture. An AI PC is a system with a dedicated neural processing unit (NPU) designed to run AI workloads efficiently on the device. Some AI tasks are too large to run anywhere but the cloud, but many can run locally, which reduces latency, limits unnecessary data movement, and keeps services available when an internet connection is limited or unavailable. Future AI deployments are expected to increasingly use hybrid models, which split processing between local devices and cloud environments. AI is changing where compute happens, and AMD is focused on understanding our customers' needs and equipping them for long-term success, as I discuss in the video below.
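The hybrid split described above can be illustrated with a toy routing policy. This is only a sketch of the general idea: the Task fields, the capacity threshold, and the tier names below are hypothetical illustrations, not an AMD or Gartner specification.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    model_size_gb: float   # hypothetical memory footprint of the model
    sensitive_data: bool   # does the task touch private or local data?

def route(task: Task, local_capacity_gb: float, online: bool) -> str:
    """Toy hybrid policy: prefer the local NPU when the model fits,
    keep sensitive or offline work on-device, and send only large,
    non-sensitive tasks to the cloud when a connection is available."""
    if task.model_size_gb <= local_capacity_gb:
        return "local"           # fits on-device: lowest latency, no data movement
    if task.sensitive_data or not online:
        return "local-degraded"  # keep data on-device or stay usable offline
    return "cloud"               # model too large locally, connection available

# Illustrative decisions under an assumed 8 GB on-device budget:
print(route(Task("summarize meeting", 4.0, True), 8.0, True))    # local
print(route(Task("fine-tune adapter", 40.0, False), 8.0, True))  # cloud
print(route(Task("draft email", 40.0, True), 8.0, False))        # local-degraded
```

The point of the sketch is that the routing decision weighs latency, data movement, and connectivity together, which is why the same task can land on-device in one context and in the cloud in another.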
AMD Ryzen™ AI PRO processors are designed to support a wide range of current and emerging AI workloads by delivering high performance locally. Ryzen AI PRO CPUs pair an advanced NPU with AMD PRO manageability and include features intended to help protect system data. Both traits are critical given the importance of endpoint security in the age of artificial intelligence in the enterprise.
AI tools increasingly record meeting transcripts, access customer records, write code, and create product plans. The PCs where these tools run need hardware-based security at the platform level to complement software-based protections.
AMD PRO manageability tools similarly help IT safeguard devices and recover from the non-security issues that can crop up, including boot loops, OS- and driver-level incompatibilities, unexpected remote imaging requirements, or the need to trigger a system restore. AMD PRO processors integrate open, standards-based fleet management through DASH and offer comprehensive support throughout the product lifecycle. These capabilities help IT maintain a consistently good user experience across far-flung locations and hybrid workforces.
Conclusion
Transformative technologies are rarely frictionless, but the partners and platforms organizations choose can influence how effectively new capabilities are adopted. AI PCs represent one of the most tangible ways employees experience enterprise AI strategies, and investments in these systems can help translate broader initiatives into everyday workflows.
Organizations that prioritize employee learning, establish clear guidelines for AI usage, and adopt consistent approaches to security and device management may be better positioned to realize value from AI adoption over time. AMD Ryzen AI PRO processors are designed to support these efforts, alongside the employees and IT teams working to scale AI across commercial environments.
Gartner, Predicts 2026: AI’s Impact on the Future of Workforce, Arun Chandrasekaran, Helen Poitevin, Tori Paulman, Brent Stewart, Shawn Murphy, Afraz Jaffri, 14 November 2025
GARTNER is a trademark of Gartner, Inc. and/or its affiliates.