
AI-Enhanced Approach to Building Scenario-Based Workflow Training 

I led the end‑to‑end design and development of a Date Spanner workflow training using the ADDIE model, with a strong emphasis on scenario‑based learning, assessment‑driven design, and thoughtful use of AI to accelerate production. The solution combines a demo‑style video and an interactive Articulate Rise course with a final competency assessment to validate real‑world decision‑making for complex reorder eligibility scenarios. 

The Problem I Solved

When I began supporting this initiative, Date Spanner was already a critical reference tool for the Diabetes Customer Service team—but training on how to use it effectively was inconsistent and heavily dependent on live SME facilitation. Agents were expected to apply complex eligibility rules, payer nuances, and coding decisions during live calls, yet existing materials focused more on reference information than on building a repeatable decision‑making process.

The goal of this project was to redesign the learning experience from the ground up: transforming a dense, high‑risk workflow into a concise, scalable training that demonstrates the process clearly and then assesses whether learners can apply it correctly in realistic scenarios. 

Project Overview

Role: Senior Instructional Designer (Lead)

Collaborators: Diabetes Customer Service Leadership; Subject Matter Expert; Customer Service Manager

Tools & Frameworks:

  • ADDIE Model (Analysis, Design, Development, Implementation, Evaluation)

  • Scenario‑Based Learning

  • Adult Learning Theory

  • AI‑Assisted Instructional Design

  • Articulate Rise 360 (course and assessment)

  • Rise AI (first‑draft course structure and assessment foundations)

  • Microsoft 365 Copilot (assessment storyboard drafting)

  • Synthesia (AI avatar and AI voice‑over)

  • Microsoft Excel (SME artifacts)

  • SharePoint (source materials)

Media & Deliverables: Demo‑style workflow video; scenario‑based Rise eLearning; final competency assessment

Delivery: Self‑paced digital training designed for onboarding and ongoing reinforcement 

Analyze

Translate SME Expertise Into Measurable Skills 

Discovery and intake revealed that most errors were not caused by lack of access to Date Spanner, but by inconsistent application of eligibility logic under pressure. Agents struggled to confidently identify required inputs, match the correct rule line, and apply the resulting reorder date accurately.

I analyzed SME materials and identified a core instructional opportunity: reduce cognitive load by centering training around a five‑input eligibility model and a single, repeatable decision process.
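
To make that decision process concrete, the pattern is essentially a lookup: gather the required inputs, match them to the governing rule line, and apply the resulting reorder date. Below is a minimal Python sketch of that idea; the input names and rule values are hypothetical, since the actual five inputs and rule lines live in Date Spanner and the SME materials.

    from datetime import date, timedelta

    # Hypothetical rule lines: (payer type, product category) -> fraction of
    # the days supplied that must elapse before a reorder is eligible. The
    # real values come from Date Spanner, not this sketch.
    RULE_LINES = {
        ("commercial", "cgm_sensor"): 0.9,   # reorder after 90% of supply used
        ("medicare", "cgm_sensor"): 1.0,     # must exhaust the full supply
    }

    def reorder_date(payer_type, product_category, last_ship_date,
                     days_supplied, has_active_rx):
        """Match the inputs to a rule line and return the earliest reorder date."""
        if not has_active_rx:
            return None  # not eligible until the prescription is renewed
        fraction = RULE_LINES[(payer_type, product_category)]
        return last_ship_date + timedelta(days=round(days_supplied * fraction))

    # Example: a 90-day commercial supply shipped March 1, 2025 becomes
    # eligible 81 days later, on May 21, 2025.
    print(reorder_date("commercial", "cgm_sensor", date(2025, 3, 1), 90, True))

Whatever the actual inputs are, the instructional value is the same: every call follows one repeatable lookup, which is the behavior the training drills and the assessment measures.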

AI was used during analysis to help synthesize dense documentation and surface recurring error patterns more efficiently—allowing faster alignment on the behaviors that truly needed to be trained. 

Design

Design for Application, Not Memorization 

After reviewing intake feedback and aligning on next steps with stakeholders, I intentionally designed the learning experience video‑first, followed by assessment. The goal was to ensure learners clearly understood the workflow before being asked to demonstrate competency.

I began by scheduling and facilitating a dedicated demo recording session with the SME. The SME was selected because he previously trained this process live; this training needed to capture and scale that expertise in a way live facilitation could no longer support.

Demo-Style Workflow Video (Designed First) 

To capture the authentic workflow, the SME walked through the full Date Spanner process via screen share while narrating each step as he would in a live session. I recorded this using Synthesia’s screen‑recording and transcription workflow, which generated a complete transcript of the presentation.

  • Raw input: ~45‑minute SME‑led walkthrough

  • Design decision: Create a concise 6‑minute overview video focused only on the highest‑value steps and decisions

  • Final cut: ~3 minutes of direct demo footage, supported by refined narration

AI supported this process by:

  • Transcribing the SME’s narration into editable text

  • Helping rewrite and tighten the script into clear, learner‑friendly language

  • Supporting storyboarding to align the video to key learning objectives

I then used Synthesia’s AI avatar and AI voice‑over to deliver the refined narration, creating a polished, consistent, and scalable demo while preserving the SME’s real workflow and teaching approach.

Date Spanner Workflow Overview designed with Synthesia 

Scenario-Based Competency Assessment (Designed Second) 

Only after the overview video was finalized did I design the scenario‑based competency assessment, ensuring every question aligned to decisions demonstrated in the video.

To accelerate development while maintaining rigor:

  • I used Microsoft 365 Copilot to generate an initial assessment storyboard (scenario framing and draft feedback language).

  • I used Rise AI to create a first‑pass course and assessment structure.

  • I then iterated heavily—refining scenarios, tightening distractors, validating answer logic, and ensuring each item measured a real job decision (a structural sketch of one such item follows below).
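
To show what targeted feedback tied to a real job decision means structurally, here is a hypothetical item shape sketched in Python; the scenario text, options, and feedback are illustrative, not actual course content.

    # One scenario-based assessment item (illustrative only). Each option
    # carries its own feedback, so a wrong choice teaches the underlying rule
    # instead of just reporting "incorrect".
    item = {
        "scenario": ("A member on a commercial plan calls to reorder sensors "
                     "shipped 75 days ago with a 90-day supply."),
        "prompt": "What should the agent do?",
        "options": [
            {"text": "Place the reorder now",
             "correct": False,
             "feedback": "Not yet: the required supply window has not elapsed."},
            {"text": ("Match the payer and product to the rule line, then "
                      "quote the earliest eligible reorder date"),
             "correct": True,
             "feedback": "Correct: inputs first, rule line second, date last."},
        ],
    }

    # The assessment logic simply keys off the flags above.
    chosen = item["options"][1]
    print(chosen["feedback"])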

Develop

Using AI to Accelerate—Not Replace—Instructional Design 

During development, AI functioned as a production accelerator rather than a design substitute. Key outcomes included:

  • A 6‑minute demo‑style video derived from SME expertise, refined through AI‑assisted script rewriting and storyboarding, plus AI avatar and AI voice‑over

  • A structured Rise course with clear transitions and a final recap video

  • A 10–15 question scenario‑based assessment with targeted feedback tied to real reorder decisions

By leveraging AI for drafting, editing, and production, I was able to focus my time on learning flow, assessment validity, and stakeholder alignment.

Implement 

Scalable, Assessment-Driven Deployment

The training is designed as a self‑paced Rise course with an embedded recap video and a required competency assessment. Learners must achieve an 80% passing score to complete the training (for example, at least 12 of 15 scenario items answered correctly), ensuring that completion reflects demonstrated understanding—not passive consumption.


The modular design supports reuse for onboarding and allows updates as payer rules and workflows change.

Evaluate 

Coming Soon

This training has not yet launched. A post‑implementation evaluation plan is in place to measure both learning effectiveness and operational impact.


Planned evaluation includes:

  • Learner completion and assessment performance

  • Analysis of scenario‑level question data to identify knowledge gaps

  • Post‑launch audits comparing reorder eligibility errors before and after training

  • Feedback from SMEs and customer service leadership

Results will be documented in a future update to this case study.

Impact Highlights 

Anticipated 

  • Reduced cognitive load through a clear, five‑input eligibility model

  • Faster time‑to‑competency via a concise, demo‑driven learning experience

  • Improved consistency in eligibility decision‑making

  • Transparent, responsible use of AI to enhance—not replace—human expertise

  • A future‑ready training asset built for scale and change
