Proof of Value, Not Just Proof of Concept: The Agentic AI Litmus Test
- Chris from Texas
- Aug 20
- 4 min read

Far from the machine learning of the past two decades, or the generative AI of five years ago for that matter, agentic AI is the next evolution in artificial intelligence, and it is already here. Agentic AI promises to reshape how people do work, and Workday® is positioning itself to do just that: Workday Illuminate (2024), the next generation of AI agents announced at DevCon 2025, and the recent acquisition of Flowise (2025) all promise to change the way people work.
It is clear Workday is doubling down on artificial intelligence. Since announcing Illuminate, Workday’s press releases have changed from describing “a leading enterprise platform that helps organizations manage their most important assets” to declaring that “Workday is the AI platform for managing people, money, and agents.”
This past week I unexpectedly spent more time learning about agentic artificial intelligence than I would have imagined. My company partnered with Google to host an Agentic AI Summit covering topics from architecture to production deployment, from idea to impact. Interesting and engaging presentations were coupled with hands-on lab work using Google’s Agent Development Kit (ADK). It was a fascinating, and full, three days of the latest in agentic artificial intelligence. Later in the week I participated in a webinar on Human Resources and the imperative of artificial intelligence. Where the three-day summit focused on the specifics of agentic AI, the webinar provided a broader, technology-agnostic perspective. Both contained insights I have been reflecting on for the past week and believe are key for HR pros responsible for their organization’s HR technology stack. Perhaps the key question is this: how do you ensure that the agents you are developing, deploying, or buying actually add value to your organization?
Proof of Value vs. Proof of Concept

One clear takeaway from both the Agentic AI Summit and the HR and the imperative of AI webinar is this: we need to be clear about the economic value that the agentic AI tools we buy, build, and deploy will add to our organizations. For software companies like Workday, this question may focus more on whether the agent will generate revenue. For most of us, especially in the HRIS space, it is the economic question of whether to buy or build, and whether the tool will add any value at all toward accomplishing our organization’s goals, whatever those may be. So how do we do that?
Understand the Goal

A history professor of mine once said that the most important question you can ask yourself is “So what?” This happens a lot with HR metrics and analytics. We run the same metrics because we have always run those metrics. We mine data based on what we’ve always done without asking if it matters to the business. Understand what your organization’s goals and values are and align to them. Don’t just create a science experiment “so you can do AI.”
Commit to the Problem
Understand the problem that you are trying to solve. Commit to solving the problem without committing prematurely to a (potential) solution. Agentic AI solutions are not cheap. Nor are they easy to get to production. And when they do get to production, they will not be without their issues. Ask yourself: Will this reduce tedious human work? Will this increase productivity? Or quality? Will this move our HR organization from reactive to proactive? Or give our managers usable decision support? Will this help us achieve a stated goal or an organizational value? Commit to understanding the problem before committing to a solution, and be clear on when AI is the right solution.
Understand the Risk
Understand what components your agent uses and their associated risks. As an example, one critical component of agentic AI is MCP, the Model Context Protocol. MCP bridges the gap between LLMs and external tools, APIs, and data; it is how real-time data gets into LLMs so that their actions and answers rely on current, accurate information. Risks include data leakage and the retrieval of malicious or manipulative content. Understand what protocols and components your agents use, and partner with your Cybersecurity and Governance teams to ensure you stay protected.
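To make the governance point concrete, here is a minimal sketch, in plain Python, of the kind of tool bridge MCP provides between an agent and external data. This is a hypothetical illustration, not the real MCP SDK: the `ToolRegistry`, `Tool`, and the allow-list are invented names showing one way to ensure an agent can only call components your security team has approved.

```python
# Hypothetical sketch, NOT the real MCP SDK: an agent-side tool registry
# with an allow-list, illustrating why governing agent components matters.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[str], str]  # takes a query, returns a result string


class ToolRegistry:
    """Holds the tools an agent may call; rejects anything not approved."""

    def __init__(self, allow_list: set):
        self.allow_list = allow_list          # approved by governance review
        self.tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        # Refuse tools that have not passed the governance allow-list.
        if tool.name not in self.allow_list:
            raise PermissionError(f"Tool '{tool.name}' is not approved")
        self.tools[tool.name] = tool

    def call(self, name: str, query: str) -> str:
        if name not in self.tools:
            raise KeyError(f"Unknown or unapproved tool: {name}")
        return self.tools[name].handler(query)


# Usage: a mock HR lookup stands in for a real external data source.
registry = ToolRegistry(allow_list={"headcount_lookup"})
registry.register(Tool("headcount_lookup", "Mock HR data", lambda q: "42"))
result = registry.call("headcount_lookup", "engineering")
```

The design choice to gate registration, not just invocation, mirrors the advice above: risk review happens before a component ever becomes reachable by the agent, not after something goes wrong.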
Understand that this is a Journey

Artificial intelligence is transforming and evolving at a breakneck pace. Think about this: only three years have passed since OpenAI released ChatGPT and made generative AI mainstream. We are now seeing significant development in the agentic AI space, where agents are not only gaining more autonomy but also the ability to collaborate with other agents, delegate work between agents, and self-heal (fix their own bugs, improving reliability and reducing downtime). The promises of agentic AI seem endless, and to be sure, many observers and industry experts remain divided on whether agentic AI will deliver on those promises on the promised timeline. Certainly, there seems to be an endless supply of use cases where AI will improve, if not outright revolutionize, tools and technologies. But one statement almost every speaker made was “agents are very, very hard to get into production.” That was sobering coming from a tech giant, and it doesn’t even speak to the unknowns that remain once you do. Just ask Delta Airlines.
What should HR technology professionals do when we’re overwhelmed by tech debt, optimization demands, and ongoing projects? It’s easy to fall into one of two camps: seeing agentic AI as a game-changer, or as just another shiny tool to figure out. What we should do is this: understand and align to value, understand the goal, commit to the problem, understand the risk, and get started.