2026-04-25

What we actually test before shipping a SCORM package

Our pre-delivery checklist: manifest validation, completion tracking, suspend_data sizing, mobile rendering, and the iframe edge cases. Ops, not marketing.


We test every package in the LMSs our clients actually use, before we hand it over. This post is the literal checklist we run through, not a sales pitch. If a row in the table below doesn't get a green tick, the package doesn't ship.

It's documentation, not marketing. We're publishing it because "we test in real LMSs" is easy to claim and harder to substantiate — so here's the substance.

The checklist

Every package goes through these eight checks before delivery. The "tool" column is what we actually use, not what's theoretically available.

| Test | What we check | Tool |
| --- | --- | --- |
| Manifest validation | schemaversion is set, identifier values are unique, href paths resolve, adlcp:scormtype="sco" is present on the launchable resource | SCORM Cloud + Rustici manifest validator |
| Launch-in-iframe smoke test | The SCO loads inside a sandboxed iframe with no CORS errors, no X-Frame-Options blocks, no console exceptions on first paint | Chrome DevTools console |
| Completion tracking | LMSFinish() (1.2) or Terminate() (2004) actually fires on exit, and the status field is set to a valid value: cmi.core.lesson_status in 1.2, cmi.completion_status (plus cmi.success_status where relevant) in 2004 | DevTools console + network panel |
| suspend_data sizing | The serialised state stays well under the 4,096-character SCORM 1.2 cap across the longest plausible session | DevTools watch on LMSSetValue |
| Resume / bookmark | Close the course mid-deck, reopen, land on the right slide with the right state | Manual run-through |
| Mobile rendering | The deck reflows at narrow viewports, touch targets are reachable, no horizontal scroll on a phone-sized window | Chrome DevTools device mode |
| Accessibility spot-check | Keyboard-only navigation works, tab order is sensible, focus is visible, key slides announce something coherent to a screen reader | Keyboard + screen-reader spot-check |
| Scoring round-trip (if the course has a quiz) | Score posts to the LMS, the gradebook reflects it, a passing score actually unlocks completion | Manual verify in a test instance of the target LMS |
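
A couple of these rows are easier to show than to describe. For the manifest check, the validators do the heavy lifting, but a quick console-level sanity pass looks roughly like the sketch below. It assumes the unzipped package is served somewhere fetchable; the base path is made up, and the element and attribute names are the SCORM 1.2 ones.

```js
// Rough console-level manifest sanity pass (SCORM 1.2 names; the package
// path is made up, point it at wherever the unzipped package is served).
const base = '/packages/demo/';
const xml = new DOMParser().parseFromString(
  await (await fetch(base + 'imsmanifest.xml')).text(),
  'application/xml'
);

// schemaversion must be present and non-empty
console.log('schemaversion:', xml.querySelector('schemaversion')?.textContent);

// identifier values must be unique across the whole manifest
const ids = [...xml.querySelectorAll('[identifier]')]
  .map(el => el.getAttribute('identifier'));
console.log('duplicate identifiers:', ids.filter((id, i) => ids.indexOf(id) !== i));

// the launchable resource needs adlcp:scormtype="sco" and an href that resolves
for (const r of xml.querySelectorAll('resource')) {
  const href = r.getAttribute('href');
  const resolves = href ? (await fetch(base + href, { method: 'HEAD' })).ok : null;
  console.log(r.getAttribute('identifier'),
              'scormtype:', r.getAttribute('adlcp:scormtype'),
              'href:', href, 'resolves:', resolves);
}
```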

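The completion-tracking and suspend_data rows are both checked by watching the runtime API from the DevTools console. This is roughly the wrapper we paste in, written against the SCORM 1.2 API object; in 2004 the object is API_1484_11 and the calls are SetValue/Terminate. The frame lookup is illustrative, since where the API object lives varies by LMS.

```js
// Paste into the console of the frame that actually exposes the SCORM 1.2
// API object, after the SCO has loaded. window.parent.API is the common
// case, but the nesting varies by LMS.
const api = window.API || window.parent.API;

const origSetValue = api.LMSSetValue.bind(api);
api.LMSSetValue = (key, value) => {
  if (key === 'cmi.suspend_data') {
    console.log(`suspend_data: ${String(value).length} chars`,
                String(value).length > 4096 ? '*** over the 1.2 cap ***' : '');
  }
  if (key === 'cmi.core.lesson_status') {
    console.log('lesson_status set to:', value);
  }
  return origSetValue(key, value);
};

const origFinish = api.LMSFinish.bind(api);
api.LMSFinish = (arg) => {
  console.log('LMSFinish fired; final lesson_status:',
              api.LMSGetValue('cmi.core.lesson_status'));
  return origFinish(arg);
};
```

If lesson_status never moves off "incomplete", or LMSFinish never logs on exit, the completion row fails; if suspend_data creeps toward the cap on the longest plausible run-through, the sizing row fails.
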
That's it. There isn't a hidden ninth check we left off the list.

Per-LMS quirks

Each LMS has its own quirks. We keep a private catalogue per client because the list grows every time a new LMS version ships. The five most common ones — manifest strictness, suspend_data limits below the 4KB spec, completion semantics that gate on passed rather than completed, scoring fields that get rewritten on ingest, manifest title length caps — are documented publicly in 5 SCORM errors that break PowerPoint conversions.

If you're shipping into a specific LMS and you want to know what we already know about it, the fastest way is to ask.

Tooling

Nothing exotic. The combination matters more than any single tool.

  • SCORM Cloud for the broad "is this a valid SCORM package at all" check. It's permissive — passing here is necessary but not sufficient.
  • Rustici's manifest validator for the strict imsmanifest.xml read.
  • Chrome DevTools for everything runtime: watching LMSSetValue calls, catching iframe sandbox errors, checking responsive layout in device mode.
  • A keyboard and a screen reader for the accessibility spot-check. Not a full WCAG audit — a spot-check on the slides where it matters (anything interactive, anything time-bound, anything where a missed announcement loses the learner).
  • A test instance of the target LMS for the scoring round-trip. SCORM Cloud will not tell you that a real LMS will silently rewrite passed to completed.
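
For the scoring round-trip, the console side of the check is just watching for a sane call sequence at quiz submit. A SCORM 1.2 sketch with made-up values of what we expect to see; in 2004 the equivalents are cmi.score.raw/min/max plus cmi.success_status and cmi.completion_status, sent via SetValue/Commit.

```js
// What a healthy quiz submit looks like in the LMSSetValue log (SCORM 1.2;
// the values are illustrative).
API.LMSSetValue('cmi.core.score.min', '0');
API.LMSSetValue('cmi.core.score.max', '100');
API.LMSSetValue('cmi.core.score.raw', '85');
API.LMSSetValue('cmi.core.lesson_status', 'passed'); // some LMSs rewrite this to 'completed' on ingest
API.LMSCommit('');
```

The second half of the round-trip, confirming the gradebook reflects the score and that a pass actually unlocks completion, is still a manual check in the target LMS; that is exactly the step SCORM Cloud can't stand in for.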

What we don't test

Worth being explicit about the limits, because the list above sounds more comprehensive than it is.

  • We don't test every version of every LMS. We test the versions our clients are actually deployed on. New LMS versions ship every quarter; chasing every release is busywork.
  • We don't run an automated regression suite. Every package gets walked through manually because the things that break in SCORM (iframe sandboxes, screen-reader announcements, "did the resume button bring me back to the right slide") are not the things automation catches cheaply.
  • We don't run a full WCAG 2.1 AA audit on every package. The accessibility check is a spot-check, not certification. If you need a formal audit, that's a separate piece of work — say so up front.
  • We don't pen-test the LMS itself. If your LMS instance has security misconfigurations, we'll spot the obvious ones, but it's not what we're hired for.
  • We don't load-test the LMS's SCORM endpoints. If you're rolling out to 50,000 learners on day one, that's an LMS-vendor conversation.

Why publish this

Two reasons.

The first is for ourselves. Writing the checklist down forces it to stay honest. If we ever stop running one of these checks, the page either gets edited or it becomes a lie. That's a useful kind of accountability.

The second is for buyers. The people who hire us at the senior end of the market — L&D leads at organisations where a broken SCORM rollout turns into an internal incident — already know that "we test in real LMSs" is the easy claim. What they're looking for is the operational detail behind it. This is that detail.

If you'd rather hand the testing off, get a quote. If you want to know more about who we are, the about page covers it.