Academy roadmap

AI Guidance Academy implementation plan

A structured, repeatable training and governance program designed to produce approvals, evidence, and measurable outcomes for regulated firms.

Phased delivery

Phase 1 — Foundation (0–6 weeks)

  • Curriculum v1: 6 learning paths (exec, risk, engineering, audit, data protection, EU AI Act)
  • Decision flow v1: Start‑Here wizard + zone outputs + controls checklist
  • Use‑case library v1: 20–30 use cases with risk ratings and deployment guidance (record schema sketched after this list)
  • Governance pack v1: policy templates + acceptable use matrix + vendor questionnaire
  • Assessment: short quiz per path + completion certificates (PDF/HTML)
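
To make the use‑case library concrete, the sketch below shows one possible record schema, with a risk rating mapped to a wizard zone. The field names, rating scale, and zone thresholds are assumptions for illustration, not the final data model.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One entry in the use-case library (illustrative schema)."""
    name: str
    department: str
    risk_rating: int                  # 1 (low) to 5 (high); placeholder scale
    deployment_guidance: str
    controls: list[str] = field(default_factory=list)

    @property
    def zone(self) -> str:
        """Map the risk rating to a wizard zone; thresholds are assumptions."""
        if self.risk_rating <= 2:
            return "green"            # proceed with standard controls
        if self.risk_rating <= 4:
            return "amber"            # proceed with extra controls + sign-off
        return "red"                  # full risk review required before use

example = UseCase(
    name="Summarise public regulatory guidance",
    department="Advisory/Tax",
    risk_rating=2,
    deployment_guidance="Approved tools only; no client data in prompts.",
)
print(example.zone)  # -> green
```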

Phase 2 — Role‑Based Academy (6–12 weeks)

  • Tracks: Exec, Risk/Compliance, Engineering, Audit/Assurance, Advisory/Tax
  • Role scenarios: “what to approve” + “what to block” + evidence examples
  • Rubrics: approval checklists and red flags per role
  • Live evaluation: scenario-based prompts; learners classify and defend decisions
  • Metrics: time‑to‑approval, reduction in unsafe prompts, adoption rate per team
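
As one illustration of how these metrics could be computed from approval records, here is a minimal sketch. The record shape, field names, dates, and headcounts are placeholders.

```python
from datetime import datetime
from statistics import median

# Hypothetical approval records: (submitted, decided, team, outcome)
records = [
    (datetime(2025, 5, 1), datetime(2025, 5, 4), "Tax", "approved"),
    (datetime(2025, 5, 2), datetime(2025, 5, 9), "Audit", "blocked"),
    (datetime(2025, 5, 3), datetime(2025, 5, 5), "Tax", "approved"),
]

# Time-to-approval: median days from submission to decision
print("median days:", median((d - s).days for s, d, _, _ in records))

# Adoption rate per team: approved use cases per headcount (placeholder sizes)
headcount = {"Tax": 40, "Audit": 25}
for team, size in headcount.items():
    approved = sum(1 for _, _, t, o in records if t == team and o == "approved")
    print(f"{team}: {approved} approved, {approved / size:.1%} of headcount")
```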

Phase 3 — Policy & Evidence Engine (12–20 weeks)

  • Policy builder: guided generator for AI Use Policy + Prompt Safety Rules
  • Audit evidence trail: store decisions, prompt templates, and approvals (entry format sketched after this list)
  • Vendor registry: central vendor risk records per model/tool
  • Controls mapping: align controls to EU AI Act, internal risk framework, and ISO/NIST
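
A minimal sketch of what one evidence-trail entry and a controls mapping could look like, assuming newline-delimited JSON storage. The approver role, template identifier, and framework references are illustrative examples, not a complete mapping.

```python
import json
from datetime import datetime, timezone

# One append-only evidence record: decision + prompt template + approval
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "use_case": "Summarise public regulatory guidance",
    "decision": "approved",
    "approver": "Head of Risk",        # placeholder role
    "prompt_template_id": "tmpl-001",  # hypothetical identifier
    "zone": "green",
}

# Controls mapping: internal control -> external framework references (examples)
controls_map = {
    "prompt-redaction": ["EU AI Act Art. 10", "NIST AI RMF: Map"],
    "human-review": ["EU AI Act Art. 14", "ISO/IEC 42001"],
}

# Append as newline-delimited JSON so the trail stays append-only
with open("evidence.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```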

Phase 4 — Enterprise Rollout (20+ weeks)

  • Certification: internal “AI Approved Practitioner” badge
  • Regulatory readiness: exportable audit packs by business line (export sketched after this list)
  • Continuous updates: quarterly refresh of policy, examples, and tool guidance
  • Benchmarking: compare adoption and risk posture across business lines
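
One way the exportable audit packs could work, assuming the newline-delimited evidence trail sketched in Phase 3. The grouping field and output format are illustrative.

```python
import json
from collections import defaultdict

def export_audit_packs(trail_path: str) -> dict[str, list[dict]]:
    """Group evidence records into one pack per business line (assumed field)."""
    packs: dict[str, list[dict]] = defaultdict(list)
    with open(trail_path) as f:
        for line in f:
            record = json.loads(line)
            packs[record.get("business_line", "unassigned")].append(record)
    return packs

# One JSON file per business line, ready to hand to reviewers
for name, pack in export_audit_packs("evidence.jsonl").items():
    with open(f"audit-pack-{name}.json", "w") as out:
        json.dump(pack, out, indent=2)
```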

Core products

  • Learning Paths (short, executive‑first)
  • Decision Cockpit (Start‑Here wizard; routing logic sketched after this list)
  • Use‑Case Library (industry + department)
  • Example Lab (safe/unsafe prompts + redaction)
  • Governance Pack (templates + checklists)
  • Tool Comparison (risk‑posture matrix)
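
To illustrate how the Decision Cockpit's Start‑Here wizard could route a request to a zone and a controls checklist, here is a minimal rule-based sketch. The question keys and routing rules are assumptions, not the final decision flow.

```python
def start_here(answers: dict[str, bool]) -> tuple[str, list[str]]:
    """Return (zone, controls checklist) from yes/no wizard answers.

    Keys are hypothetical wizard questions; rules are illustrative only.
    """
    controls = ["use approved tools only", "log the prompt template"]
    if answers.get("client_or_personal_data"):
        return "red", controls + ["full risk review", "DPO sign-off"]
    if answers.get("output_goes_to_client"):
        return "amber", controls + ["human review before release"]
    return "green", controls

zone, checklist = start_here({"client_or_personal_data": False,
                              "output_goes_to_client": True})
print(zone, checklist)  # amber, with the human-review control added
```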

Governance model

  • Owner: Risk/Compliance (policy + approvals)
  • Operators: Engineering leadership (controls + enforcement)
  • Advisors: Legal + Data Protection + Audit

Success metrics

  • Approval time per use case
  • % staff trained by business line
  • % prompts compliant with policy
  • Reduction in sensitive data leakage incidents
  • Audit readiness score per department
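
The audit readiness score is not defined in this plan; as one placeholder definition, it could be a weighted average of the other metrics, as in this sketch with arbitrary weights.

```python
def audit_readiness(trained: float, compliant: float, evidence: float) -> float:
    """Weighted average of coverage metrics (weights are placeholders)."""
    weights = {"trained": 0.3, "compliant": 0.4, "evidence": 0.3}
    score = (weights["trained"] * trained
             + weights["compliant"] * compliant
             + weights["evidence"] * evidence)
    return round(score, 2)

# e.g. 80% staff trained, 90% compliant prompts, 70% evidence completeness
print(audit_readiness(0.80, 0.90, 0.70))  # -> 0.81
```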