56 operational terms grounded in federal law, published governance frameworks, and The Language Firm's applied methodology. Every definition is anchored to a specific statute, framework, or service. Use these in leadership meetings, board presentations, vendor evaluation conversations, and incident response documentation.
The speed at which a school can produce, update, and locate compliance-critical documents. Low velocity is not just an operational inconvenience. FERPA requires schools to honor parent record access requests within 45 days (34 CFR §99.10), and IDEA requires Prior Written Notice before any change to a student's identification, evaluation, or placement (34 CFR §300.503). A school with low Documentation Velocity cannot meet either clock.
FERPA 34 CFR §99.10: 45 days to honor parent access requests. IDEA 34 CFR §300.503: PWN required before any proposed change.
Total staff time consumed by tasks performed manually that could be partially or fully streamlined. Measured in hours per week per role. In a federal compliance context, Manual Overhead is significant because most of it is not optional work; it is the discharge of federal legal obligations under IDEA (IEP documentation, progress monitoring, Prior Written Notice) and FERPA (record keeping, access requests, disclosure tracking). The problem is not that the work exists. The problem is that no system has been built to do it efficiently, and the documentation proving it was done may not survive review.
IDEA Part B requires documented IEP process at every stage. Time spent manually is time the system should absorb.
A structured review of how a specific task is actually performed step by step. Reveals hidden steps, staff workarounds, undocumented tools, and compliance gaps that org charts and policy manuals do not capture. A Process Audit is always conducted with the governing federal statute in view, because the question is not just "how is this done?" but "does how this is done satisfy what federal law requires, and can the district produce the documentation to prove it?"
The audit finds where daily practice diverges from federal obligation.
A repeatable administrative process in one of five categories: Documentation & Record-Keeping, Communication Routing, Compliance & Reporting, Scheduling & Coordination, or Data Entry & Transfer. When a workflow is documented and standardized, it moves from a staff-dependent task to a process that runs the same way every time regardless of who is in the role. Each workflow category maps to at least one federal obligation; the category determines which compliance review applies and which statute governs the documentation requirement.
Every workflow category has a governing statute. Documenting the workflow is how the school demonstrates it can meet that obligation consistently, not just when the right person is present.
The practice of identifying, measuring, and streamlining repeatable administrative processes in a school or district. Automation targets are evaluated against FERPA's school official exception before implementation. Tools that touch student data must have a signed DPA. Any automation that routes student data through a third party without a documented agreement is not an efficiency gain; it is a FERPA violation operating at speed.
FERPA 34 CFR §99.31(a)(1): automated workflows that access student records must qualify under the school official exception.
A score from -2 to +3 measuring how well a tool integrates into existing school workflows. Accounts for time saved, time added (overhead), and workflow categories affected. A tool cannot score above 0 if it lacks a signed DPA under FERPA, fails COPPA's opt-in consent requirement for student-facing features, or has not been reviewed against PPRA for profiling functions. Compliance status gates the rating. Efficiency gains do not override legal exposure.
No tool earns a positive rating without clearing FERPA §99.31, COPPA §6502, and PPRA §1232h review.
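For districts that track ratings in a spreadsheet or script, the gating rule can be sketched in a few lines. This is an illustrative example only; the function and field names are hypothetical and not part of TLF methodology. It shows the one rule that matters: a tool that has not cleared all three federal reviews cannot score above zero, regardless of its workflow value.

```python
from dataclasses import dataclass

@dataclass
class ComplianceReview:
    ferpa_dpa_signed: bool        # FERPA 34 CFR §99.31 school official exception
    coppa_consent_verified: bool  # COPPA 15 U.S.C. §6502 opt-in consent
    ppra_reviewed: bool           # PPRA 20 U.S.C. §1232h profiling review

def integration_score(raw_score: int, review: ComplianceReview) -> int:
    """Clamp the raw workflow score to the -2..+3 scale, then gate on compliance.

    Efficiency gains do not override legal exposure: without all three
    reviews cleared, the score is capped at 0.
    """
    score = max(-2, min(3, raw_score))
    cleared = (review.ferpa_dpa_signed
               and review.coppa_consent_verified
               and review.ppra_reviewed)
    return score if cleared else min(score, 0)
```

The gate is deliberately asymmetric: a compliance failure caps a positive score at zero but never improves a negative one.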
One of five operational areas classifying tool and process impact: Documentation & Record-Keeping, Communication Routing, Compliance & Reporting, Scheduling & Coordination, and Data Entry & Transfer. Each category maps to specific federal obligations: Documentation ties to FERPA and IDEA; Communication to FERPA disclosure rules and IDEA parental notification; Compliance & Reporting to IDEA monitoring and Title program accountability; Scheduling to IDEA procedural timelines; Data Entry to FERPA's school official exception and COPPA consent requirements.
Every workflow category has at least one governing statute. The category determines which compliance review applies.
Gap between when a tool is approved and when it is consistently used as intended. Most tools: 4 to 12 weeks. During this period the district pays without receiving the workflow benefit. In the compliance context, Adoption Lag also means the documentation the tool was supposed to produce is still not being produced, so federal obligations that depend on that documentation are still going unmet even though the tool is technically on the approved list.
IDEA procedural safeguard timelines do not pause for Adoption Lag. Title II staff development documentation should account for it.
Staff hours returned to higher-value work when a compliance gap is closed, a manual process is streamlined, or a governance system replaces ad hoc documentation. The emphasis is where recovered time goes, not just that it was freed. For federal reporting purposes, Capacity Recovery is the measurable outcome that justifies governance investment under Title I and Title III program accountability requirements, and under IDEA §616 state monitoring, which requires evidence of program effectiveness, not just expenditure documentation.
Title I §1111 accountability requires documented outcomes. Capacity Recovery is the unit of measurement.
The total time cost of adopting a new tool that vendors exclude from their marketing, both upfront and ongoing. Includes configuration, SSO provisioning, staff training, and the compliance documentation the district must create independently: the signed DPA (FERPA §99.31), the COPPA consent verification process, the PPRA review if the tool generates student profiles, and the board approval documentation if required by district policy. High Setup Tax tools are often "free" tools whose true cost is entirely compliance labor.
Every new tool requires a FERPA §99.31 DPA before student data is accessed. That process is part of Setup Tax whether vendors acknowledge it or not.
Net change in weekly hours for a specific role when a tool or process change is introduced. Hours saved minus hours added (setup, training, maintenance, compliance documentation). Expressed as a range with a Confidence Rating (H/M/L). For federal program reporting, Staff Time Impact for roles discharging federal obligations (SPED coordinators, Title I teachers, counselors) is the evidence base for demonstrating that governance investments improve program delivery, not just operational efficiency.
IDEA §616 state monitoring and Title program accountability require demonstrated impact on federally mandated roles.
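The arithmetic behind a Staff Time Impact figure is simple enough to sketch. The structure below is illustrative (the class and field names are hypothetical, not a TLF artifact): a saved-hours range, the hours added by setup, training, maintenance, and compliance documentation, and the H/M/L Confidence Rating carried alongside the number.

```python
from dataclasses import dataclass

@dataclass
class StaffTimeImpact:
    """Net weekly hours for one role: hours saved minus hours added."""
    role: str
    hours_saved_low: float
    hours_saved_high: float
    hours_added: float  # setup, training, maintenance, compliance documentation
    confidence: str     # "H" observed, "M" benchmarked, "L" feature-based estimate

    def net_range(self) -> tuple[float, float]:
        return (self.hours_saved_low - self.hours_added,
                self.hours_saved_high - self.hours_added)

    def report_line(self) -> str:
        low, high = self.net_range()
        return (f"{self.role}: {low:+.1f} to {high:+.1f} hrs/week "
                f"(confidence {self.confidence})")
```

Keeping the Confidence Rating attached to the number, rather than in a footnote, is what prevents an L-rated projection from being quoted later as an observed result.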
Hours or days required to bring a new tool into documented compliance with district, state, and federal requirements. High Time-to-Compliance is a large hidden cost regardless of sticker price. Under COPPA's 2025 amendments (effective June 23, 2025; full compliance required April 22, 2026), any student-facing tool requires documented opt-in parental consent before use, a process that takes time regardless of how good the tool is. A tool adopted before its Time-to-Compliance is complete is a tool in active federal violation during the gap period.
COPPA §6502(b)(1): no collection from children under 13 without verifiable parental consent. The clock starts at first student use.
Tools adopted by staff without district knowledge, approval, or vetting. Common in K-12 because teachers independently sign up for free-tier AI and edtech tools. Under federal law, each Shadow IT tool is a potential unauthorized disclosure of student education records under FERPA (34 CFR §99.31: no signed DPA means the vendor does not qualify as a school official). If students interact with the tool directly, each use is a potential COPPA violation with penalties up to $51,744 per affected child. "I didn't know" is not a federal defense.
FERPA 34 CFR §99.31(a)(1): third-party access to student data requires a written agreement before any data is shared. Shadow IT has none.
The full span of a tool's presence in a school: discovery, vetting, approval, deployment, adoption, maintenance, review, and retirement or renewal. Most districts manage only the middle stages; they approve and deploy, but rarely document the vetting that preceded approval or the review that should precede renewal. Under FERPA, COPPA, and PPRA, compliance documentation must exist at every stage, not just at the point of adoption. A tool that was compliant at approval may no longer be compliant after a vendor updates its data practices or switches the foundation model powering the product.
COPPA 2025 amendments (full compliance required April 22, 2026) require re-evaluation of any tool previously approved under the opt-out framework.
The accumulation of overlapping, redundant, or unvetted technology tools across a district. Occurs when staff adopt tools independently without centralized review. Under federal law, Tool Sprawl is not merely a governance inconvenience; it is the accumulation of potential unauthorized disclosures under FERPA and potential COPPA violations for every student-facing tool without a verified consent process. Tool Sprawl reduction is a federal compliance obligation, not a preference.
Every tool accessing student records without a DPA is a potential violation of FERPA 34 CFR §99.31(a)(1).
Count of all tools in active use compared against the district's official approved list. High index = governance gaps. In the federal compliance context, the Tool Sprawl Index is a direct measure of unauthorized FERPA disclosure risk: every tool above the approved list that accesses student data without a DPA is a potential §99.31 violation. A TSI above 2.0x indicates significant unmanaged federal exposure.
FERPA 34 CFR §99.31(a)(1): the school official exception requires a written agreement. TSI measures how many tools lack one.
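The index itself is a single ratio, computable from two lists the district already has (or should have). The sketch below is illustrative; the function names are hypothetical. The second function is the one that matters for exposure: it names the tools above the approved list, each a potential §99.31 disclosure without a written agreement.

```python
def tool_sprawl_index(tools_in_use: set[str], approved: set[str]) -> float:
    """TSI: count of tools in active use divided by the approved list count."""
    if not approved:
        raise ValueError("approved list is empty; TSI is undefined")
    return len(tools_in_use) / len(approved)

def unmanaged_tools(tools_in_use: set[str], approved: set[str]) -> list[str]:
    """Tools in use but not on the approved list: each one is a potential
    FERPA 34 CFR §99.31(a)(1) exposure if it accesses student data."""
    return sorted(tools_in_use - approved)
```

A district that cannot populate `tools_in_use` with confidence has a Shadow IT problem before it has a TSI problem; the index is only as accurate as the discovery behind it.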
The backlog of tools currently in use that have not undergone formal privacy, security, or workflow review. Every unvetted tool is a federal risk that accumulates. Vetting Debt grows faster than most districts can retire it, particularly with the introduction of AI tools, which may trigger FERPA (student data), COPPA (student-facing features for under-13 users), and PPRA (tools that generate behavioral or psychological profiles of students). The April 22, 2026 COPPA compliance deadline means existing Vetting Debt for student-facing AI tools is not deferred. It is overdue.
COPPA April 22, 2026 deadline: unvetted student-facing tools are currently in potential violation of the 2025 opt-in consent amendments.
The overall state of a district's policies, documentation, and procedures for managing AI tools. Ranges from "no formal governance" to "documented, reviewed, and actively maintained." Most districts are in the first category. Under the 2025 COPPA amendments, a district with no formal AI governance is a district without documented opt-in consent processes for student-facing AI: a direct federal exposure. Under PPRA (20 U.S.C. §1232h), AI tools that generate behavioral or psychological profiles of students require parental notification. Under FERPA, every AI tool that processes student records requires a DPA. AI Governance Posture is the measure of whether any of this is in place.
COPPA §6502(b): opt-in consent required by April 22, 2026 for every AI tool students interact with. PPRA §1232h: profiling tools require parental notification regardless of COPPA status.
The documented capacity of a school to fulfill IDEA's Child Find obligation: the requirement to actively identify, locate, and evaluate all children who may have a disability and be eligible for special education services. Child Find is not passive. IDEA 34 CFR §300.111 requires proactive outreach. Child Find Readiness measures whether the workflows, documentation chains, and referral-to-evaluation timelines that support this obligation are documented, staffed, and functioning, or exist only in someone's head.
IDEA 34 CFR §300.111: Child Find is a mandatory federal obligation. Failure to identify eligible students is an IDEA violation regardless of intent.
The total scope of federal and regulatory exposure across all technology tools, data systems, and AI-enabled processes. Expands with every tool adopted without corresponding documentation. Larger surface area = more audit risk across more statutes simultaneously. Under FERPA, each undocumented vendor relationship is one unit of surface area. Under COPPA, each student-facing tool without consent documentation is another. Under PPRA, each profiling or survey tool without parental notification is another. Under IDEA, each gap in IEP or Child Find documentation is another. Compliance Surface Area grows silently, until it does not.
Four statutes (FERPA, IDEA, COPPA, PPRA) can generate simultaneous exposure from a single undocumented tool adoption.
A specific instance where a federally required document does not exist, is not current, or cannot be located within a reasonable timeframe. Each gap has a governing statute. Missing DPA: FERPA 34 CFR §99.31 (unauthorized disclosure of student records). Missing AI Output Review Policy: COPPA §6502 (undocumented consent process). Missing Prior Written Notice template: IDEA 34 CFR §300.503 (denial of procedural safeguards). Missing incident response plan: FERPA breach notification + COPPA §6502. Missing Child Find procedure: IDEA 34 CFR §300.111 (active identification failure). Each gap is an audit finding waiting to happen. The Forensic Read names the statute, not just the document.
Every documentation gap has a corresponding federal violation if it surfaces during an audit or parent request.
The documented capacity to respond to technology or AI incidents involving students. Includes notification sequences, communication review chains, incident log templates, and coordination procedures. Assessed before an incident, not during one. Under FERPA, schools must have procedures for responding to unauthorized disclosures. Under the 2025 COPPA amendments, breach response procedures are now part of the formal security program required of districts handling student data. A school that discovers a breach during an incident and tries to build its response procedures at the same time has low Incident Response Readiness and high federal exposure.
COPPA 2025 §6502: formal written security program required. FERPA: unauthorized disclosure procedures must exist before a breach, not after.
A written school policy requiring staff to review AI-generated content (text, images, assessments, summaries) before use with or about students. Under COPPA's 2025 amendments, districts must document how student data is used in AI outputs and how that use has been consented to. Under PPRA, AI-generated profiles or analyses of individual students trigger parental notification requirements even when generated by an approved tool. An Output Review Policy operationalizes these obligations at the building level, where the actual decisions are made.
PPRA §1232h: surveys and analyses revealing personal student information require parental consent. AI-generated profiles can trigger this. COPPA §6502: documented consent process required for any student data use.
Documented time and process data for how a task is currently performed before any change is introduced. Without a baseline, there is no way to measure whether an intervention improved anything, and no way to meet federal reporting requirements that demand demonstrated outcomes, not just documented spending. Under Title I §1111 accountability requirements and IDEA §616 state monitoring, governance investments must be tied to measurable program outcomes. Baseline Measurement is what makes that connection possible.
Title I §1111: accountability requires demonstrated outcomes. IDEA §616: state monitoring requires evidence of program improvement. Neither is possible without a documented baseline.
Transparency indicator on every time-impact estimate. H = observed or verified data from this school. M = benchmarked against comparable tools or schools. L = feature-based estimation only. When presenting Capacity Recovery or Staff Time Impact figures in federal program reports or board documentation, the Confidence Rating signals how much evidentiary weight the number carries. An H-rated finding is defensible in a Title I audit. An L-rated finding is a projection that needs to be labeled as such.
Federal program reporting requires accurate representation of outcome data. Confidence Rating prevents overstatement.
Measurable return on a governance investment expressed in compliance gaps closed, documentation gaps resolved, audit exposure reduced, or staff hours recovered: not vendor engagement metrics. The question is not "are teachers using it?" but "what changed in the district's compliance posture, and can we document that change in the language federal program reporting requires?" Under Title I §1111, Title III §3121, and IDEA §616, expenditure must be connected to program outcomes. Learning ROI is the unit of that connection.
Title I §1111, Title III §3121, IDEA §616: all require demonstrated program outcomes, not just expenditure records.
The complete sequence of documents required to demonstrate that a school is providing a Free Appropriate Public Education (FAPE) for a specific student with a disability, from Child Find referral through IEP development, annual review, Prior Written Notice, progress reporting, and placement documentation. The FAPE Documentation Chain must be complete, current, and locatable. A broken or missing link at any stage is an IDEA violation, regardless of what services were actually provided.
IDEA 34 CFR §300.320 (IEP content), §300.503 (PWN), §300.111 (Child Find), §300.322 (parent participation). All four must be documented to demonstrate FAPE.
A maintained inventory of all Data Privacy Agreements between the school or district and every third-party vendor that accesses student education records. Under FERPA 34 CFR §99.31(a)(1), vendors only qualify as "school officials" with legitimate educational interest if they operate under a written agreement with specific data-use restrictions. A DPA Registry makes that compliance visible, auditable, and maintainable, and flags when agreements lapse or need updating as vendor data practices change. Under the 2025 COPPA amendments, the Registry should also document consent verification status for every student-facing tool.
FERPA 34 CFR §99.31(a)(1): the DPA is not optional. It is the legal mechanism that authorizes third-party access to student records.
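The minimum viable registry is a dated list that can answer one question on demand: which agreements have lapsed? A minimal sketch follows; the entry fields and function names are illustrative, not a prescribed TLF schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DPAEntry:
    vendor: str
    signed: date
    expires: date
    consent_verified: bool  # COPPA consent status, for student-facing tools

def lapsed_agreements(registry: list[DPAEntry], today: date) -> list[str]:
    """Vendors whose DPA has expired. An expired agreement means the vendor
    no longer qualifies as a school official under 34 CFR §99.31(a)(1),
    even if the tool is still in daily use."""
    return [entry.vendor for entry in registry if entry.expires < today]
```

Running this check on a schedule, rather than discovering a lapse during an audit, is the difference between a registry and a filing cabinet.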
The documented state of a school's verifiable parental consent processes for every student-facing tool that may collect personal data from children under 13. Under the January 2025 FTC amendments to COPPA, the consent framework shifted from opt-out to opt-in, meaning explicit, documented parental consent is required before data collection begins, not after. Full compliance is required by April 22, 2026. COPPA Consent Status is tracked per tool: Compliant, In Progress, Not Started, or Not Applicable (tool does not collect data from under-13 users). Any tool with In Progress or Not Started status after April 22, 2026 carries penalties up to $51,744 per affected child.
COPPA 15 U.S.C. §6502(b): opt-in parental consent required. FTC enforcement: penalties up to $51,744 per affected child per violation.
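Because the status is tracked per tool against a fixed deadline, the exposure check is mechanical. The sketch below is illustrative (the enum and function names are hypothetical): after April 22, 2026, any tool still at In Progress or Not Started is carrying exposure.

```python
from datetime import date
from enum import Enum

class ConsentStatus(Enum):
    COMPLIANT = "Compliant"
    IN_PROGRESS = "In Progress"
    NOT_STARTED = "Not Started"
    NOT_APPLICABLE = "Not Applicable"  # tool collects no data from under-13 users

COPPA_FULL_COMPLIANCE = date(2026, 4, 22)

def exposed_tools(statuses: dict[str, ConsentStatus], today: date) -> list[str]:
    """Tools carrying COPPA exposure once the full-compliance date has passed."""
    if today < COPPA_FULL_COMPLIANCE:
        return []
    at_risk = {ConsentStatus.IN_PROGRESS, ConsentStatus.NOT_STARTED}
    return sorted(tool for tool, status in statuses.items() if status in at_risk)
```

Note that a pre-deadline empty result is not a clean bill of health; it only means the clock has not run out on the In Progress entries yet.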
The exposure created when a vendor's own dependencies (foundation models, open-source libraries, cloud infrastructure providers) are compromised without the vendor's knowledge or the district's notification. Most Data Privacy Agreements address what the vendor does with student data. They do not address what happens when something the vendor is built on gets compromised. The March 2026 LiteLLM supply chain attack demonstrated the mechanism: a Python library with 95 million monthly downloads was backdoored with credential-stealing malware and automatically pulled into vendor environments that did not pin to verified versions. If an edtech vendor your school depends on was running that library, student data access credentials may have been exposed. The DPA did not cover it.
NIST AI RMF MAP 1.1, MAP 1.5: identify and document third-party dependencies. MIT AI Risk Repository Domain 2.1: AI system security vulnerabilities including supply chain compromise. FERPA §99.31: the DPA must cover the actual data path, not just the vendor's front door.
An inventory of all components, libraries, and dependencies that make up a vendor's product. An SBOM is the ingredient list for software. It shows what the product is built from, including components the vendor did not build itself but relies on. Without an SBOM, the district cannot evaluate what the vendor is built on, cannot assess supply chain risk, and cannot determine whether a reported vulnerability in an open-source library affects the tools in its environment. The Five Eyes joint guidance on AI/ML supply chain risks (March 2026) validates SBOM requirements as a national security priority that extends to education infrastructure.
NIST AI RMF MAP 3.4: document AI system dependencies. Five Eyes AI/ML Supply Chain Guidance (2026): SBOM requirements for AI infrastructure. A vendor that cannot produce an SBOM may not control what its product is built from.
The relationship between an edtech vendor's product and the underlying AI model (GPT, Claude, Gemini, Llama, or a proprietary model) that powers it. Many vendors market an "AI-powered" product without disclosing which foundation model powers it, where inference occurs, or what data flows to the model provider. A school district approves the vendor's product. It does not approve the hundreds of components underneath it. When the vendor switches foundation models mid-contract, the product's data handling, retention policies, and processing jurisdiction can change overnight. If the contract does not require notification of model changes, the district will not know.
NIST AI RMF MAP 3.4: document which models power the product and where inference occurs. FERPA §99.31: the DPA must cover the actual data path. COPPA §6502: if student input reaches the model provider, the model provider's retention policies apply.
A vendor re-evaluation initiated by a specific event rather than a calendar date. Five trigger types: supply chain incident (a component the vendor depends on is compromised), model or infrastructure change (the vendor switches foundation models or cloud providers), contract renewal or amendment, regulatory change (new federal or state requirements affecting the vendor relationship), and reported vulnerability in a known dependency. A calendar-only review cycle misses the events that actually change risk. Trigger-Based Review ensures the evaluation process re-enters at the relevant phase when conditions change, not twelve months later.
NIST AI RMF GOVERN 1.1: AI risk management is a continuous process. MEASURE 2.6: evaluation frequency should match the rate of change in the risk environment. Calendar-only review does not satisfy this.
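The five trigger types can be encoded so that each incoming event routes the vendor back into a specific phase of the evaluation lifecycle rather than onto next year's calendar. The mapping below is purely illustrative; which phase a given trigger should re-enter is a district judgment call, not a TLF prescription.

```python
from enum import Enum, auto

class ReviewTrigger(Enum):
    SUPPLY_CHAIN_INCIDENT = auto()   # a component the vendor depends on is compromised
    MODEL_OR_INFRA_CHANGE = auto()   # vendor switches foundation model or cloud provider
    CONTRACT_EVENT = auto()          # renewal or amendment
    REGULATORY_CHANGE = auto()       # new federal or state requirement
    REPORTED_VULNERABILITY = auto()  # disclosed flaw in a known dependency

# Illustrative mapping: which lifecycle phase the re-evaluation re-enters at.
REENTRY_PHASE = {
    ReviewTrigger.SUPPLY_CHAIN_INCIDENT: "vetting",
    ReviewTrigger.MODEL_OR_INFRA_CHANGE: "vetting",
    ReviewTrigger.CONTRACT_EVENT: "approval",
    ReviewTrigger.REGULATORY_CHANGE: "compliance review",
    ReviewTrigger.REPORTED_VULNERABILITY: "vetting",
}

def review_entry_point(trigger: ReviewTrigger) -> str:
    """Route an event to the phase where re-evaluation begins."""
    return REENTRY_PHASE[trigger]
```

The point of the structure is completeness: every trigger type has a defined destination, so no event can arrive without a corresponding review path.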
The practice of reading a district's full document ecosystem (Data Privacy Agreements, vendor terms of service, privacy policies, master service agreements, federal program requirements, and governance policies) against each other to identify contradictions, gaps, and silent overrides. A compliance review asks whether the right documents are on file. Compliance Forensics asks whether the language in those documents actually does what everyone in the building assumes it does: whether the definitions stay consistent across agreements, whether the commitments hold up when read against the vendor's own terms, and whether the language creates the appearance of protection without the enforceable obligation. In TLF practice, Compliance Forensics is operationalized through The Forensic Read™ (see Section 09).
Reads the DPA against the vendor's terms of service, privacy policy, and master service agreement to identify where definitions drift, commitments dissolve, and one document silently overrides another.
The condition where the same term carries different definitions across a vendor's DPA, terms of service, and privacy policy. "Personal information," "educational purpose," "school official," "de-identified data," and "aggregate data" are common drift points. When definitions drift, the protection the district believes it signed for may not apply to the data path that matters most. Definition Drift is invisible to a checklist review that confirms the DPA is signed. It is visible to a forensic reading that compares how the same term functions across every document in the vendor relationship.
FERPA §99.31 requires the written agreement to specify data-use restrictions. If the DPA's definitions are narrower than the vendor's actual data practices (as defined elsewhere in its own documents), the agreement does not cover what the district thinks it covers.
The signature page at the end of a governance workbook or compliance protocol where a named person signs and dates a statement confirming that they completed the documentation, that the entries are accurate, and that they understand the record may be reviewed in an audit or compliance inquiry. The Attestation Page is what converts a completed workbook from internal paperwork into filed evidence with a chain of accountability: a specific person, in a specific role, on a specific date, standing behind the record. Without it, the documentation is unsigned work product. With it, it is evidence.
USAC audits require documentation that a named person reviewed and approved. IDEA procedural safeguards require signed records. Every TLF governance workbook ends with an Attestation Page because compliance without a named, accountable person is not compliance.
An indication that a federal action (executive order, agency directive, DOJ litigation, FTC policy statement) may challenge or override a state law that a district currently relies on for AI governance. Executive Order 14365 (December 11, 2025) established a DOJ AI Litigation Task Force, directed the Commerce Department to identify "onerous" state AI laws, and directed the FTC to issue a preemption policy statement. State AI laws that regulate vendor transparency, algorithmic bias, and AI disclosure are potential targets. A Federal Preemption Signal does not mean a state law has been invalidated. It means the federal government has initiated a process that could lead to litigation, which could lead to injunctions, which could change compliance obligations. Until a court acts, every state law on the books is enforceable. Comply with current law. Prepare for change.
EO 14365, §3 (DOJ Task Force), §4 (Commerce evaluation), §7 (FTC statement). The child safety carve-out preserves state authority over student data privacy and COPPA-adjacent protections, but state laws regulating AI broadly may not be protected. Districts should track which of their state-level AI protections fall inside or outside that carve-out.
The Language Firm's registered four-stage investigative methodology for reading vendor and regulatory language as evidence. Where a compliance review asks whether a document exists, The Forensic Read asks what the language in that document actually does, where its commitments hold up, where they dissolve, and where one document silently overrides another. The methodology is applied across the full document ecosystem surrounding any tool, vendor, or federal obligation: DPAs, terms of service, privacy policies, master service agreements, vendor disclosures, and the federal statutes those documents are accountable to. Every TLF service is the application of The Forensic Read to a specific scope: the Clarifier Workshop teaches it, the Default Settings Briefing applies it to one to ten products, the Watchlist Subscription runs it weekly on a district's named tools, and the Tool Vault demonstrates it publicly through the District Filing, the Weekly Incident Bulletin, First Watch, and the Federal Findings Digest. The four stages are Read, Trace, Surface, and Build.
The Forensic Read™ is the methodology underneath every TLF service. It is what an applied linguist sees that a compliance checklist misses: function, not just content; what the language gets to do, not only what it says.
The first stage of The Forensic Read. The investigator reads each document in the vendor or regulatory ecosystem in full, in its own terms, before reading any document against any other. Definitions are catalogued. Commitments are separated from sentiment. Modal verbs (must, shall, will, may, should, can) are tagged because each one carries a different obligation strength. Reservations of rights, indemnification clauses, limitations of liability, and precedence-of-documents clauses are flagged for use in Stage 2. The Read Stage produces a structured inventory of what the document actually says, distinct from what the cover letter or sales conversation suggested it says.
"We take student privacy seriously" is sentiment. "We will notify the district within 72 hours of a confirmed breach" is a commitment. Stage 1 separates the two systematically across every document in the ecosystem.
The second stage. Definitions, commitments, and reservations are traced across documents to identify where meaning shifts between agreements. The DPA's definition of "student data" is read against the privacy policy's definition. The terms of service's notification commitment is read against the DPA's. The master service agreement's precedence clause is read against both. Where definitions drift, where commitments dissolve, and where one document silently overrides another, the trace records the location and quotes the conflict in the parties' own words. The Trace Stage produces a documented map of where the language ecosystem holds together and where it does not.
Most districts engage legal review at the point of contract signing, not as ongoing tracing. The Trace Stage is what catches the silent override no one was reading for at signing.
The third stage. The drifts, gaps, and silent overrides identified in the Trace Stage are surfaced as documented findings, each tied to the federal obligation or governance framework it intersects. A definition drift on "personal information" surfaces as a FERPA §99.31 finding. A missing notification commitment surfaces as a COPPA §6502 finding. A precedence clause that subordinates the DPA to a privacy policy surfaces as a FERPA written-agreement finding. Each surfaced finding includes the exact language from the documents, the statute it touches, and the exposure the district is currently carrying. The Surface Stage produces evidence, not impressions.
The Surface Stage is where forensic reading becomes reportable. Findings go to district leadership and legal counsel as documented evidence, not as advisory commentary.
The fourth stage. The findings surfaced in Stage 3 are converted into governance infrastructure: audit-ready documentation, vendor evaluation protocols, compliance workbooks (where applicable), incident response plans, and the weekly intelligence products that keep the building current as the language continues to shift. The Build Stage is what distinguishes The Forensic Read from a one-time audit. The investigation produces a finding; the build produces a system the district can defend on a Tuesday morning when a parent, a board member, or a federal monitor asks to see it. Without Stage 4, Stages 1 through 3 produce a report. With Stage 4, they produce governance.
The Build Stage is why The Forensic Read is a service category, not a deliverable. Districts do not need more reports. They need governance systems that hold up when the language changes again.
A facilitated three-session engagement (90 minutes per session, one per week, virtual) where district leadership teams learn to read vendor language forensically, evaluate AI and edtech tools against actual pedagogical needs, and identify the governance gaps in their current documentation. Each session applies TLF methodology to the district's own tools, documents, and context. Participants bring a list of their AI and edtech tools and have their DPAs accessible during sessions; no orientation packets, no pre-collected questionnaires. The engagement includes the Clarifier Reference Guide (TLF Integrity Cycle, Forensic Read worked example, EWA Lexicon terms covered, Educator Pedagogy Protocol template, structural overview of the Upstream Vendor Risk Evaluation Protocol), five months of scheduled office hours following the final session, and a 30-day complimentary Watchlist Subscription trial on up to ten of the district's tools. Cohort cap: 10. Modality: virtual. Pricing: $3,500 per cohort. ESA, BOCES, and consortium bookings are supported.
The Clarifier teaches your team to see what your vendors are telling you, what your documentation actually proves, and where the distance between the two creates exposure. Methodology stays in-house after the engagement.
A signed, dated, sourced governance record on the AI default behavior of one to ten products a district names. Drawn entirely from public sources: the vendor's privacy policy, terms of service, public documentation, support pages, and any public regulatory filings. No documents are required from the district's team. The Briefing applies The Forensic Read to the public-facing language of each named product and produces a record the district can file as evidence of due diligence on what those products were actually configured to do at the time of review. Useful before adopting a product, before renewing a contract, before answering a parent question, or before a board presentation about a tool already in use. The product-level companion to the Watchlist Subscription's ongoing monitoring.
The Briefing answers a single question on the record: what does this product do by default, and where does the public language commit to doing something different from what the marketing implies?
Weekly forensic monitoring of a district's named tool inventory for incidents, contract drift, vendor language changes, and federal regulatory developments that affect the district's specific stack. The Watchlist applies the same methodology used in the public Tool Vault, but pointed at the district's tools instead of TLF's editorial selections. The district names the tools to be watched; no DPAs, paperwork, or internal records are shared with TLF. The forensic read is on the vendor's public-facing language, not on the district's internal documents. Quarterly Drift Audits are included at the highest tier, applying the First Watch methodology to the district's specific inventory rather than to publicly covered tools. The Tool Vault is a public watchlist; a Subscription is the district's personal one.
What changes in vendor language six months after a contract is signed is what districts most often miss. The Watchlist is the standing presence that catches it weekly, not after a parent or auditor surfaces it.
The Language Firm's free public resource library: four publications that keep every K-12 district current on the AI tools, federal findings, and vendor incidents shaping the governance environment. The Vault is the public application of The Forensic Read to tools and policies TLF selects for editorial coverage. It exists so that any administrator, technology coordinator, or compliance lead can see the methodology at work before deciding whether to bring it inside their building. The four publications are the District Filing (weekly tool intelligence with Tool Spotlight Cards), the Weekly Incident Bulletin (weekly incident brief), First Watch: The Drift Audit (monthly governance audit revisiting previously covered tools), and the Federal Findings Digest (monthly federal regulatory rundown).
Free, always accessible, no paywall. The Tool Vault is the public face of TLF methodology and the on-ramp to every paid engagement.
The Language Firm's weekly tool intelligence publication. Each issue covers one AI tool in full operational depth, scored across vendor support, staff training, data handling, and workflow fit, with a decision label of Proceed, Caution, or Do Not Deploy. Each issue includes a Tool Spotlight (full operational review), a Tool Spotlight Card (downloadable, versioned one-page PDF of the tool review with the School Fit Snapshot, decision label, and evidence links, updated when vendor terms or data practices change), Ten Things Every Educator Needs to Know (ten field-by-field findings written for staff communication and internal documentation), an Endurance Skills Guide (six professional practices that apply regardless of which tool arrives next), a Skill in Focus (deep dive into one educator competency tied to the tool reviewed), and a Policy Update (one federal or state policy explained in plain language with direct application to the tool covered). Every issue is archived; every Tool Card is versioned; one tool, fully vetted, every week.
The District Filing is the weekly application of The Forensic Read to a publicly chosen tool. Districts use it directly for tools they have, and as a model for how to evaluate the ones they are considering.
A weekly free publication delivered every Monday surfacing three to five (sometimes more) education and AI incidents that create real audit risk. Each entry explains what broke, what it means, what a federal monitor would look for, and exactly what to do about it before anyone asks. Includes verified resources to back up future governance changes and reminders to verify security features and user policies in the tools the district already operates. Any administrator can Google the incident; what they cannot Google is whether it creates a documentation gap in their specific district, which federal requirement it touches, and what to put in writing. The Bulletin closes that gap weekly and is the most accessible entry point to TLF methodology.
Free for K-12 administrators, technology coordinators, and compliance leaders. Subscriber base grows weekly. The Bulletin is the short-form public proof that TLF watches what most districts cannot.
A monthly systematic, primary-source-verified governance audit revisiting tools previously covered in the District Filing's Tool Spotlight Cards. The Drift Audit tests whether the governance signals identified at the time of original coverage still hold: whether the privacy policy still says what it said, whether the terms of service have shifted, whether the public posture aligns with the documented language, and whether any of the changes affect the original Proceed, Caution, or Do Not Deploy label. Each finding is paired with a forensic pattern classification: policy-posture divergence, convergence, or asymmetric movement (each defined separately in this lexicon). The Drift Audit is the recurring reminder that what a tool committed to in March may not be what it commits to in September, and that the difference between the two is what the district has to govern.
First Watch is the monthly standing presence that prevents Tool Spotlight Cards from going stale. A governance signal is only useful as long as it still describes the tool the vendor is actually shipping.
A monthly publication summarizing the most relevant federal findings on AI published that month, selected for K-12 impact and organized by program area. Each entry includes a plain-language summary of what the federal document says, a TLF interpretation of the compliance implication for K-12 schools, a Your Action section with specific, concrete steps the district's team can take, and a clear flag when a federal notice does not apply to K-12 so districts do not spend time on it. Each entry cites a primary source. The Digest is the monthly companion to the Weekly Incident Bulletin: where the Bulletin tracks what broke, the Digest tracks what was published, what was ruled on, what was withdrawn, and what changed in the federal posture toward AI in education.
The Digest is the regulatory mirror of the Drift Audit. Tools shift; statutes and agency guidance shift too. Both shifts have to be tracked, and most districts cannot track either without a standing presence doing the reading.
The five-stage compliance operations framework underneath every Language Firm service. The cycle moves districts from unknown risk to audit-ready evidence and stays current because the routine does, not because a person remembers. The five stages are Document (map current tools, workflows, evidence holdings, and documentation practices to establish the baseline), Assess (score every tool and process against the specific federal programs, statutes, and data governance requirements applicable to the district), Build (turn findings into systems with named owners and deadlines, build evidence infrastructure inside platforms the district already uses), Maintain (quarterly evidence review, findings briefs with priority actions ranked by audit exposure, active credential maintenance), and Inform (weekly and monthly intelligence tracking AI tool risk, federal regulatory changes, and compliance calendar obligations). Each stage feeds the next. The cycle does not end.
The TLF Integrity Cycle is what the Clarifier Workshop teaches in Session 1. It is the operating logic the district carries forward into every future tool review, every contract renewal, and every federal monitoring response.
A forensic pattern classification used in the Drift Audit. Divergence occurs when a vendor's documented policy language and its public posture (marketing claims, support page statements, executive interviews, public commitments) move in different directions over time. The privacy policy quietly broadens what data the vendor can use; the marketing posture continues to claim "we don't sell student data." The terms of service add a clause permitting model training on user inputs; the homepage continues to say "your data stays with you." Divergence is the most common pattern, and the most dangerous, because the district's confidence is shaped by the posture while the obligations are shaped by the language. The Drift Audit names the divergence in the parties' own words.
Vendors do not announce divergence. The Drift Audit catches it because two evidentiary streams are tracked separately and compared. One stream alone hides it.
A forensic pattern classification used in the Drift Audit. Convergence occurs when a vendor's documented policy language and its public posture move in the same direction toward a stronger, clearer commitment. The privacy policy is tightened; the marketing language reflects the tightening. The terms of service add a clear notification commitment; the support documentation explains how it will be honored. Convergence is the pattern districts should reward and document: a vendor whose language and posture are aligning toward greater accountability is a vendor whose Tool Spotlight Card may move from Caution to Proceed in a future Drift Audit. Convergence is also a useful negotiation lever during contract renewal.
Convergence is the pattern that confirms a vendor is genuinely improving rather than only marketing improvement. It is also the pattern that justifies upgrading a tool's governance standing.
A forensic pattern classification used in the Drift Audit. Asymmetric movement occurs when one of the two evidentiary streams (policy language or public posture) shifts substantially while the other remains static. The terms of service are quietly rewritten while the marketing pages stay the same. The CEO publicly commits to a new privacy standard while the privacy policy remains unchanged. Asymmetric movement is the early-warning pattern: it usually precedes either divergence (if the static stream stays static) or convergence (if the static stream catches up). The Drift Audit flags asymmetric movement so districts can ask the vendor which direction the catch-up will go before the next contract renewal.
Asymmetric movement is the moment to ask. By the time the streams realign, the district has either gained or lost ground in the language without negotiating it.
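The three pattern classifications above reduce to a simple decision over the two evidentiary streams. The sketch below is an illustration under stated assumptions, not TLF's actual audit tooling: each stream's movement is simplified to one of three directions ("tighter", "looser", "static"), and a fourth label is added for the case the lexicon does not name, where both streams weaken together.

```python
def classify_drift(policy_move, posture_move):
    """Classify movement of the two evidentiary streams tracked by the
    Drift Audit: documented policy language vs. public posture.
    Each argument is one of "tighter", "looser", or "static"
    (a simplification of real movement, for illustration only)."""
    valid = {"tighter", "looser", "static"}
    if policy_move not in valid or posture_move not in valid:
        raise ValueError("movement must be 'tighter', 'looser', or 'static'")
    if policy_move == "static" and posture_move == "static":
        return "no movement"
    if "static" in (policy_move, posture_move):
        # One stream shifts substantially while the other stays put.
        return "asymmetric movement"
    if policy_move == "tighter" and posture_move == "tighter":
        # Both streams move toward a stronger, clearer commitment.
        return "convergence"
    if policy_move != posture_move:
        # The streams move in different directions.
        return "divergence"
    # Both streams loosen together: aligned, but away from commitment.
    # Not one of the three named patterns in this lexicon.
    return "joint loosening"
```

Read against the definitions: a privacy policy that quietly broadens while the marketing tightens is `classify_drift("looser", "tighter")`, which returns "divergence"; a rewritten terms of service beside static marketing pages is `classify_drift("looser", "static")`, which returns "asymmetric movement".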
A structured protocol for capturing what the teachers in a district's building know about their students' actual classroom needs, applied to evaluate whether a proposed or current tool is pedagogically appropriate before the district commits. Introduced in the Clarifier Workshop as the method that closes the gap between a tool's compliance paperwork and the question that compliance paperwork does not answer: will the teachers in this building actually use this tool for the purpose the vendor claims, and does the tool serve the students with the most legal protection (English learners, students with IEPs and 504 plans, twice-exceptional learners) rather than only the "average" student that edtech is typically designed for? The protocol captures three categories of input, all expressed in non-identifiable terms so that no individual student information can be traced back to any learner: pedagogical fit and intended use, student needs at the classroom level (in aggregate or by general profile), and red flags or concerns. Findings are aggregated during the workshop into an overarching pedagogy posture per subject and grade band, refreshable in-house from the questionnaire template whenever curriculum, teacher pedagogies, or student needs change.
The protocol exists because compliance paperwork alone does not predict adoption. Pedagogical fit does. Aggregated, non-identifiable, FERPA-clean, and refreshable in-house from the template provided in the Clarifier Reference Guide.
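The aggregation step can be sketched as a roll-up from individual questionnaire responses to a posture per subject and grade band. Everything below is hypothetical: the field names, response values, and sample data are invented for illustration and are not the actual questionnaire template. Note that the inputs already carry no student identifiers, and the roll-up reports only counts per group, never per-teacher answers.

```python
from collections import defaultdict

def aggregate_posture(responses):
    """Roll non-identifiable questionnaire responses up to an aggregate
    posture per (subject, grade band): fit answers become counts, and
    any red flag surfaces at the group level."""
    groups = defaultdict(lambda: {"fit_counts": defaultdict(int), "red_flags": 0})
    for r in responses:
        g = groups[(r["subject"], r["grade_band"])]
        g["fit_counts"][r["fit"]] += 1      # tally fit ratings per group
        g["red_flags"] += int(r["red_flag"])  # count concerns, not who raised them
    return dict(groups)

# Invented sample responses; no individual student or teacher is identifiable.
responses = [
    {"subject": "ELA", "grade_band": "6-8", "fit": "strong", "red_flag": False},
    {"subject": "ELA", "grade_band": "6-8", "fit": "weak", "red_flag": True},
    {"subject": "Math", "grade_band": "6-8", "fit": "strong", "red_flag": False},
]
posture = aggregate_posture(responses)
```

Because the posture is just a recomputation over the latest responses, refreshing it in-house when curriculum or student needs change means re-running the questionnaire and the roll-up, not re-engaging the workshop.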
A single-vendor evaluation document that captures what a vendor is built on (the foundation models, infrastructure, and third-party components beneath the surface product), the contractual and legal coverage of those upstream dependencies, the vendor's incident response posture, and the gaps between standard Data Privacy Agreement coverage and the actual upstream risks the district is taking on. Each vendor in the district's tool inventory can receive its own Upstream Vendor Risk Assessment. Assessments are signed, dated, and filed alongside the district's other governance records. The methodology is published in The Language Firm's Upstream Vendor Risk Evaluation Protocol, which is grounded in NIST AI RMF, the MIT AI Risk Repository, FERPA, COPPA, and the SDPC DPA Framework, and is referenced as a structural overview in the Clarifier Reference Guide. Upstream Vendor Risk Assessments fill the gap between what a standard DPA covers (the vendor's own data handling) and what districts actually need to know about the supply chain underneath the vendor: the foundation model the tool is built on, what happens when that foundation model changes, what happens when an upstream component is compromised, and whether the district's contract addresses any of it.
One assessment per vendor. Signed, dated, filed. The DPA covers the front door; this assessment covers everything underneath. Distinct from "Upstream Vendor Risk" (the exposure category in Section 07), this is the document that records and signs off on it.