Reddy Case Study · AI Implementation

What if, instead of training, you made AI-powered video games for call centers?

As part-time Director of Learning at Reddy, I helped build the systems that turn business logic into realistic practice environments — giving call center agents the experience of fifty AI conversations in dozens of scenarios before their first real call.


Why This Matters

Experiential learning used to be expensive. Now it's possible at scale.

Five years ago, building a custom training simulation for one call scenario at one company was practically impossible. Now, with intelligent systems that leverage AI and human insights into what makes great customer service, one person working 20 hours a week can build dozens of simulations.

What changed isn't just smarter models. It's smarter use of them — deterministic pipelines, well-organized data, and human operators who understand both the technology and the learning science behind it. That's where the work at Reddy sits.

I've always believed that people learn by doing. The research backs this up. But experiential learning has historically been limited by cost and logistics — you can't put every new hire through a hundred practice scenarios with a live trainer. AI changes that equation. Not by replacing the trainer's expertise, but by making it possible to deliver the kind of practice that expertise demands, at a scale that wasn't available before.


The Role

Half learning scientist, half forward-deployed engineer

Reddy builds AI coaching for call centers. My role bridged two worlds: at the strategic level, I worked with the founders on learning theory, LMS architecture, and making sure simulations drove real behavioral change. At the tactical level, I was writing code, building data pipelines, and deploying simulations directly to enterprise clients.

I was embedded with client teams as a forward-deployed engineer — the person who understood both the platform's capabilities and the pedagogical goals well enough to build simulations that actually worked. 110 commits and 400,000+ lines of code, all in 20 hours a week over 1.5 years.

Strategic

Learning science behind feedback and QA, LMS architecture, and ensuring simulations drove behavioral change — not just engagement metrics.

Technical

Coded simulations, built data pipelines, designed evaluation rubrics, and deployed directly to Fortune 100 clients as a forward-deployed engineer.


The Work

The pipeline: from raw business logic to working simulation

Every client's training needs are different — different call drivers, compliance rules, customer personas, skills, tools. The challenge was building a process that could take all of that raw knowledge and reliably produce training that was consistent, realistic, and pedagogically sound. Most simulation platforms on the market do either screen-based training or an AI customer voice — we were doing both, simultaneously, in a single integrated experience.

Neither the human nor the AI can do this alone. The AI can't understand why a particular escalation path matters to a particular customer segment. The human can't hand-build 45 scenario variations. The pipeline works because each step plays to the right strengths (a rough code sketch of the full flow follows the steps below):

01

Data

SOPs, call recordings, knowledge bases, scorecards — every client's source material is different. The first job is ingesting raw business logic and turning it into structured data that downstream systems can actually work with.

02

Organized patterns

Extract the decision trees, compliance rules, and conversation flows from the structured data. This is where the shape of the simulation starts to emerge — what scenarios matter, what branches exist, what a good outcome looks like.

03

Human review

A human who understands the business logic, the customer experience, and the learner's reality reviews the patterns before anything gets built. AI is good at finding structure. Humans are good at knowing which structure matters.

04

Transformation to code

AI generates functional HTML that recreates the learner's actual workspace — their CRM, their scripts, their tools — populated with AI customers who behave like real customers. Each simulation is closer to a custom video game than a training module.

05

Human review again

Another human pass. Does the AI customer behave realistically? Are the scenarios logically sound? Does the evaluation rubric capture what actually matters for this call driver? Simulations that train the wrong behavior are worse than no training at all.

06

Final craft

The last step is artistry — tuning feedback quality, adjusting difficulty curves, refining the AI evaluator with datasets of both positive and negative examples. This is where platform expertise and learning science meet. The final product couldn't exist without either the human or the AI.
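
To make the division of labor concrete, here is a minimal sketch of how a pipeline like this can be wired together. It assumes nothing about Reddy's actual codebase: every class, function, and field name below is illustrative. What it is meant to show is the alternation between automated transformation steps and explicit human review gates.

```python
# A minimal sketch, not Reddy's actual code: every name and data shape here is
# an assumption, chosen to show the shape of the pipeline, i.e. the alternation
# between automated transformation steps and human review gates.

from dataclasses import dataclass


@dataclass
class CallDriverPattern:
    """Structured business logic extracted from a client's source material (steps 01-02)."""
    call_driver: str              # e.g. "billing dispute"
    decision_tree: dict           # branch points and the agent actions they require
    compliance_rules: list[str]   # things the agent must always (or never) say
    success_criteria: list[str]   # what a good outcome looks like for this driver


@dataclass
class Simulation:
    """A playable scenario: a recreated workspace plus an AI customer (step 04)."""
    call_driver: str
    workspace_html: str           # generated mock of the agent's CRM, scripts, tools
    customer_persona: dict        # temperament, goal, escalation triggers
    rubric: list[str]             # what the AI evaluator scores the agent against


def extract_patterns(raw_sources: list[str]) -> list[CallDriverPattern]:
    """Steps 01-02: ingest SOPs, recordings, scorecards; pull out the structure.

    Stubbed with a canned example so the sketch runs; in practice this is
    where AI does the heavy lifting on messy, client-specific material.
    """
    return [
        CallDriverPattern(
            call_driver="billing dispute",
            decision_tree={"verify identity": ["account found", "account not found"]},
            compliance_rules=["Read the recorded-line disclosure before discussing charges."],
            success_criteria=["Dispute logged", "Customer told the resolution timeline"],
        )
    ]


def human_review(item) -> bool:
    """Steps 03 and 05: a person with domain knowledge signs off.

    Modeled as a function only to show where the pipeline stops and waits
    for judgment; auto-approving is purely for the sketch.
    """
    return True


def generate_simulation(pattern: CallDriverPattern) -> Simulation:
    """Step 04: generate the workspace and AI customer for one scenario (stubbed)."""
    return Simulation(
        call_driver=pattern.call_driver,
        workspace_html="<main><!-- mock CRM populated from the decision tree --></main>",
        customer_persona={"temperament": "frustrated", "goal": "remove a duplicate charge"},
        rubric=pattern.compliance_rules + pattern.success_criteria,
    )


def build_simulations(raw_sources: list[str]) -> list[Simulation]:
    patterns = [p for p in extract_patterns(raw_sources) if human_review(p)]  # steps 01-03
    sims = [generate_simulation(p) for p in patterns]                         # step 04
    return [s for s in sims if human_review(s)]                               # step 05; step 06 tuning follows


if __name__ == "__main__":
    print(build_simulations(["billing_sop.pdf", "qa_scorecard.xlsx"]))
```

The review gates are deliberately modeled as blocking filters: nothing downstream runs on a pattern or simulation a person hasn't signed off on.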

This reflects how I worked during my time at Reddy in 2024–2025. Since then, the team has automated significantly more of this pipeline — their AI now handles the vast majority of the heavy lifting and they're putting powerful tools directly in their customers' hands. What I built was part of the foundation that made that possible.


The Scale

Hundreds of simulations. Dozens of call drivers. Fortune 100 clients.

I was one of the first people at Reddy building simulations that performed deterministically — engineering pipelines that produced consistent, reliable training at scale rather than relying on open-ended AI generation.
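
One way to read "deterministic" here: scenario generation happens once, at build time, under pinned parameters, and its output has to pass explicit validation before any learner sees it. The sketch below illustrates that idea; `call_model`, the required keys, and the canned response are all assumptions standing in for whatever model API and scenario schema a real pipeline would use.

```python
# Sketch of the "deterministic pipeline" idea. `call_model`, the required keys,
# and the canned response are assumptions for illustration; the point is that
# generation happens at build time, with pinned parameters, and the output must
# pass validation before a learner ever sees it.

import json

REQUIRED_KEYS = {"opening_line", "escalation_triggers", "resolution", "rubric"}


def call_model(prompt: str, temperature: float = 0.0) -> str:
    """Stand-in for a real model call, pinned to temperature 0.

    Returns a canned response so the sketch runs end to end.
    """
    return json.dumps({
        "opening_line": "I've been charged twice this month and I want it fixed today.",
        "escalation_triggers": ["agent skips identity verification"],
        "resolution": "duplicate charge reversed within 3-5 business days",
        "rubric": ["verified identity", "acknowledged frustration", "stated a timeline"],
    })


def build_scenario(pattern: dict, max_attempts: int = 3) -> dict:
    """Generate one scenario spec and refuse to ship anything that fails validation."""
    prompt = (
        f"Return a JSON object with keys {sorted(REQUIRED_KEYS)} "
        f"for this call driver:\n{json.dumps(pattern)}"
    )
    for _ in range(max_attempts):
        try:
            scenario = json.loads(call_model(prompt))
        except json.JSONDecodeError:
            continue
        if REQUIRED_KEYS <= scenario.keys():
            return scenario  # the same vetted scenario ships to every learner
    raise ValueError("Generation never passed validation; a human reviews it before anything ships.")


if __name__ == "__main__":
    print(build_scenario({"call_driver": "billing dispute"}))
```

Pinning the parameters and validating at build time is what lets hundreds of learners practice against the same vetted scenario instead of whatever the model improvises that day.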

3.5x
return on investment in Reddy
+50%
improvement in ramp speed for new agents
+61%
increase in CSAT scores

Clients I worked with directly:

Morgan & Morgan · Fortune 500 Retail · Globe Food Delivery App · Affiliated Monitoring · ISG · Harte Hanks

The Takeaway

AI made the training possible. The learning science made it work.

The tools to create amazing AI simulations exist now, but tools don't automatically produce good learning outcomes — they reproduce whatever data you point them at. What made this work was combining AI's ability to generate and scale with a human understanding of how people actually learn, how customers actually behave, and what a new hire actually needs to feel prepared.

The learning outcomes matter more to me than the technology. Ramp speed, CSAT scores, and agent confidence are the measurements that count. The pipeline is just how we get there.

What I learned from Reddy is something I bring to every engagement now: AI expands what's possible in training, but only if you pair it with the right expertise and hold it to real standards. The models' capabilities will keep growing, and so will the need for experts who know how to use them.


See It in Action

Want to build AI-powered training for your team?

See how I work