Prototype: This dashboard is in active development. Data and features will be refined through workshops with end users.

The Missing Layer in AI

This tool shows that the choices made before an AI system runs - what data to include, how to group it, and how to process it - shape what it finds, yet these choices usually remain invisible. Current AI guidance focuses on whether models are accurate, fair, and explainable, but misses a more basic question: would the results change if the system were set up differently?

To address this, these specification choices need to be recorded, tested, and shared as standard practice so that when AI informs real decisions, the assumptions behind it are visible and open to challenge.
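The idea of recording and testing specification choices can be sketched in code: run the same analysis under every combination of setup choices and report how much the headline result moves. This is a minimal illustration with a toy dataset and hypothetical choice names (include_group_C, min_score), not a real pipeline.

```python
# Minimal specification-sensitivity sketch: toy data and illustrative
# specification choices; all names and values are hypothetical.
from itertools import product
from statistics import mean

# Toy records: (group, score).
records = [
    ("A", 55), ("A", 61), ("A", 47),
    ("B", 72), ("B", 68), ("B", 80),
    ("C", 50), ("C", 52),
]

# Each specification choice is recorded explicitly, with its options.
SPECS = {
    "include_group_C": [True, False],  # what data to include
    "min_score": [0, 50],              # how to process it
}

def run_pipeline(include_group_C, min_score):
    """Compute the headline metric (mean score) under one specification."""
    data = [s for g, s in records
            if (include_group_C or g != "C") and s >= min_score]
    return mean(data)

# Run the identical analysis under every combination of choices.
results = {}
for combo in product(*SPECS.values()):
    spec = dict(zip(SPECS.keys(), combo))
    results[tuple(spec.items())] = run_pipeline(**spec)

spread = max(results.values()) - min(results.values())
print(f"{len(results)} specifications; headline mean varies by {spread:.1f} points")
```

Sharing the `results` table alongside the headline number makes the setup visible and open to challenge: a reader can see which choice drives the variation, not just the final figure.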

Why This Benefits Everyone

For Builders: Specification sensitivity isn't just a check - it's a discovery tool. Changing the setup exposes patterns that no single model run can reveal.
For Decision Makers: It provides evidence that choices were tested. When challenged, you can show what changed and what stayed consistent - making AI outputs more defensible.
For Those Affected: It lets people ask not just "why this result?" but "would this decision change under a different setup?"
What's Next → These issues aren't unique to education. See how they apply across other domains.