
AI 101 with Mando

What is AI?

Artificial intelligence, or AI for short, is sometimes thought of as a new discipline, yet its roots go back to classical statistics. If you have ever calculated the odds of a particular outcome or drawn a trend line through a set of data points, you were using the same math that powers modern AI, albeit with far fewer parameters. At its core, a language model is a very large conditional probability function: it estimates the likelihood of the next word given all previous words and the surrounding context. The jump from running logistic regression over a few dozen data points to a 70-billion-parameter large language model (LLM) is mostly a difference of scale, not of underlying technique. If you recall terms like "priors," "likelihoods," and "overfitting" from your statistics class, you already speak the language of modern AI.
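To make the "conditional probability function" idea concrete, here is a minimal sketch: a bigram model that estimates P(next word | previous word) from raw counts over a toy corpus. Real LLMs condition on thousands of prior tokens with billions of parameters, but the statistical core is the same.

```python
from collections import Counter, defaultdict

# Toy "training data" for a minimal bigram language model.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each previous word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def p_next(prev, word):
    """Estimate P(word | prev) from the counts: a conditional probability."""
    total = sum(counts[prev].values())
    return counts[prev][word] / total if total else 0.0

print(p_next("the", "cat"))  # 0.5: "the" is followed by "cat" in 2 of 4 cases
```

Scaling this idea up, an LLM conditions on a whole context window rather than a single previous word, but it is still picking the next token from an estimated probability distribution.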


Why the sudden boom?

Three important breakthroughs drove the most recent AI wave. First, the transformer architecture (introduced in Google's 2017 paper "Attention Is All You Need") replaced recurrent neural networks by allowing every token (roughly, every word) to "attend" to every other token. This opened the door to extremely long "context windows," letting models accept massive amounts of user input at once. Second, hardware vendors began optimizing silicon for "inference" (the use of a trained model by an application) rather than the traditional graphics workload. NVIDIA, the dominant AI chip company, designs processors on which a single node can run inference over models with half a trillion parameters. Finally, the open-source community pushed the frontier by publishing quality benchmarks. For the first time, practitioners could quantify how useful a particular model was for a specific task, such as solving math problems or playing chess.
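The "every token attends to every other token" mechanism can be sketched in a few lines. This is a deliberately simplified toy (queries, keys, and values are all the raw embeddings, and the token vectors are made up), not the full multi-head attention from the paper:

```python
import math

# Made-up 2-dimensional embeddings for four tokens.
tokens = ["the", "cat", "sat", "down"]
E = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(E):
    """Each token's output is a weighted mix of EVERY token's vector,
    with weights from dot-product similarity (here Q = K = V = E)."""
    out = []
    for q in E:
        scores = [sum(a * b for a, b in zip(q, k)) for k in E]
        weights = softmax(scores)
        out.append([sum(w * v[d] for w, v in zip(weights, E))
                    for d in range(len(q))])
    return out

mixed = attend(E)
# Every output vector depends on all four inputs -- the property that lets
# transformers use long contexts, unlike step-by-step recurrent networks.
```

Because each position looks at all others in one step, the whole computation parallelizes well on GPUs, which connects directly to the hardware point above.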


Implications for the workplace and HR

The practical consequences in the workplace are already measurable. Field experiments in government and banking show workers immediately reclaiming 5-7% of the work week from chores like document drafting and summarization. For HR teams, the effect is twofold: repetitive tier-one queries and policy look-ups move from busy team members to AI, and analysts can shift to strategic work like review and decision-making.


The catch: bias and hallucinations

With great power comes great responsibility. While language models can handle nearly unbounded volumes of input and output, they tend to "hallucinate," inventing plausible but false statements. This happens most often when a niche question falls outside the statistical weight of the model's pre-training data. Stanford's empirical audit of AI legal assistants (published in 2024) found a false-citation rate of roughly one in six answers, a pattern that echoed across a variety of knowledge-work domains.


Bias is another problem with language models. Because most LLMs are trained on a "common crawl" of publicly available internet data, their answers skew toward the most prevalent responses and phrasings in that training corpus. LLMs have no intrinsic sense of what is authoritative, so they run a "popularity contest" across their priors to decide how to respond, rather than reasoning about the substance of a particular request.
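The "popularity contest" failure mode is easy to demonstrate with a hypothetical example. Suppose a question has a niche correct answer that appears rarely in the training data; an ungrounded model that mirrors frequency alone will confidently return the popular wrong one:

```python
from collections import Counter

# Hypothetical training corpus: answers observed for the same question.
# "B" is correct in our niche domain, but "A" dominates the public web.
seen_answers = ["A"] * 90 + ["B"] * 10

def ungrounded_answer(samples):
    """With no notion of authority, pick by frequency alone."""
    return Counter(samples).most_common(1)[0][0]

print(ungrounded_answer(seen_answers))  # "A" -- prevalence wins, not substance
```

The fix is not more raw data but grounding: constraining the model to authoritative sources, which is exactly what the architecture below addresses.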


An enterprise-grade AI architecture

The way to harness AI for maximum usefulness is to constrain its inputs and outputs to a particular domain. One of the most promising architectures in the industry is retrieval-augmented generation (RAG), which is the foundation of how Mando AI is built. Instead of depending on a language model's hazy recollection of its vast training data, RAG runs search over a highly curated set of domain-specific data to ground the model's outputs in evidence-based answers.
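A minimal RAG sketch looks like this: retrieve the most relevant curated passages, then build a grounded prompt for the language model. The tiny corpus, the keyword-overlap scoring, and the prompt format below are all illustrative assumptions for this article, not Mando's actual implementation (production systems typically use vector search rather than keyword overlap):

```python
# A hypothetical curated corpus of domain-specific passages.
CORPUS = [
    "Workday payroll runs are scheduled under Pay Cycle settings.",
    "Security groups in Workday control who can view compensation data.",
    "Business processes route approvals through defined steps.",
]

def score(query, doc):
    """Naive keyword-overlap relevance; a stand-in for semantic search."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def build_prompt(query, k=2):
    """Retrieve the top-k passages and ground the model's answer in them."""
    ranked = sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("who can view compensation data in Workday")
```

The key design point is that the model is told to answer from the retrieved context, so its output can cite specific passages instead of relying on diffuse pre-training priors.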


Mando AI builds on this by narrowing its corpus to Workday content from the vendor, experts in the ecosystem, and customer context. Every answer cites the paragraph and source that supports the claim, giving users confidence in the quality of the answers being generated. Personalization layers then re-rank results so that outputs grow more specific to a user or organization over time.
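Personalization re-ranking can be sketched as a second scoring pass over retrieved results. The source names, scores, and additive weighting below are hypothetical examples of the technique, not Mando's actual scheme:

```python
# Hypothetical retrieved results with base relevance scores.
results = [
    {"source": "vendor_docs", "base_score": 0.70},
    {"source": "customer_wiki", "base_score": 0.65},
    {"source": "community_forum", "base_score": 0.60},
]

# Learned per-user affinity: this user tends to engage with their own wiki.
user_affinity = {"customer_wiki": 0.2, "vendor_docs": 0.0, "community_forum": 0.0}

def rerank(results, affinity):
    """Final score = retrieval score + the user's learned source affinity."""
    return sorted(results,
                  key=lambda r: r["base_score"] + affinity.get(r["source"], 0.0),
                  reverse=True)

top = rerank(results, user_affinity)[0]["source"]
# "customer_wiki" now outranks "vendor_docs" (0.85 vs 0.70) for this user.
```

Over time, updating the affinity weights from user behavior is what makes the same query return increasingly organization-specific results.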


Curious for more?

As a member of the Customer Sharing Movement, you get to try Mando AI via the "SPEED SEARCH powered by Mando" exclusive offer. Your backlog will never be the same. Check it out!


Author: Adi from Texas

