The Language Firm · Education Workflow Automation

EWA Lexicon
Federal Compliance Edition · Version 5

56 operational terms grounded in federal law, published governance frameworks, and The Language Firm's applied methodology. Every definition is anchored to a specific statute, framework, or service. Use these in leadership meetings, board presentations, vendor evaluation conversations, and incident response documentation.

📋
For K-12 schools operating under federal law. Every term is grounded in a specific federal statute, published governance framework, or registered TLF methodology. Use the citations when someone asks why any of this matters. Print it. Share it. The more people in your district who speak this language, the faster you move from reactive to proactive, and from audit exposure to documented compliance.
What changed in Version 5. The lexicon now reflects the four current TLF service offerings on languagefirm.org and the registered methodology that runs through all of them. Five new entries name the products and their underlying logic; twelve new entries name the methodology, its stages, the publication artifacts, and the forensic pattern vocabulary used in Drift Audits. Several v4 entries were rewritten to align with how the work is actually delivered now.
What was removed in Version 5. Three v4 entries described services that are no longer offered as named engagements. The underlying ideas remain, but the delivery model has changed and the lexicon now reflects what TLF actually sells.
What changed in Version 4. Five terms formalized the District Filing methodology. Several of those terms have been rewritten in v5 to reflect current delivery. See the v5 changelog above for what was kept, what was reframed, and what was removed.
What changed in Version 3. Eight new terms across two new sections reflected how the compliance landscape shifted. Vendor accountability extended beyond the DPA to the vendor's entire supply chain. Forensic language analysis named the specific investigative practice that identifies where document language fails. Existing term descriptions were updated to align with TLF's positioning at that time.
Workflow & Process
Documentation Velocity
FERPA § 99.10 · IDEA § 300.503

The speed at which a school can produce, update, and locate compliance-critical documents. Low velocity is not just an operational inconvenience. FERPA requires schools to honor parent record access requests within 45 days (34 CFR §99.10), and IDEA requires Prior Written Notice before any change to a student's identification, evaluation, or placement (34 CFR §300.503). A school with low Documentation Velocity cannot meet either clock.

Federal Clock

FERPA 34 CFR §99.10: 45 days to honor parent access requests. IDEA 34 CFR §300.503: PWN required before any proposed change.
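
A minimal sketch of the federal clock as date arithmetic, assuming a simple calendar-day reading of the 45-day FERPA window; the function name and example dates are illustrative, not TLF tooling.

```python
from datetime import date, timedelta

FERPA_ACCESS_WINDOW = timedelta(days=45)  # 34 CFR §99.10: parent record access requests

def ferpa_due_date(request_received: date) -> date:
    """Latest date by which a parent record-access request must be honored."""
    return request_received + FERPA_ACCESS_WINDOW

# A request received March 1, 2026 must be honored by April 15, 2026.
# IDEA's PWN clock has no fixed day count: the notice must simply precede the change.
print(ferpa_due_date(date(2026, 3, 1)))  # 2026-04-15
```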

Workflow & Process
Manual Overhead
IDEA § 300.320 · FERPA § 99

Total staff time consumed by tasks performed manually that could be partially or fully streamlined. Measured in hours per week per role. In a federal compliance context, Manual Overhead is significant because most of it is not optional work; it is the discharge of federal legal obligations under IDEA (IEP documentation, progress monitoring, Prior Written Notice) and FERPA (record keeping, access requests, disclosure tracking). The problem is not that the work exists. The problem is that no system has been built to do it efficiently, and the documentation proving it was done may not survive review.

Example: A SPED coordinator spending 6 hrs/wk manually drafting IEP meeting notifications is discharging an IDEA obligation (34 CFR §300.322), not doing optional paperwork.
Federal Anchor

IDEA Part B requires documented IEP process at every stage. Time spent manually is time the system should absorb.

Workflow & Process
Process Audit
All governing statutes

A structured review of how a specific task is actually performed step by step. Reveals hidden steps, staff workarounds, undocumented tools, and compliance gaps that org charts and policy manuals do not capture. A Process Audit is always conducted with the governing federal statute in view, because the question is not just "how is this done?" but "does how this is done satisfy what federal law requires, and can the district produce the documentation to prove it?"

Example: Mapping how a SPED coordinator completes IEP documentation from referral to signed copy, against IDEA 34 CFR §300.320 requirements at each step.
Federal Anchor

The audit finds where daily practice diverges from federal obligation.

Workflow & Process
Workflow
FERPA · IDEA · COPPA · Title Programs

A repeatable administrative process in one of five categories: Documentation and Record-Keeping, Communication Routing, Compliance and Reporting, Scheduling and Coordination, or Data Entry and Transfer. When a workflow is documented and standardized, it moves from a staff-dependent task to a process that runs the same way every time regardless of who is in the role. Each workflow category maps to at least one federal obligation; the category determines which compliance review applies and which statute governs the documentation requirement.

Example: A SPED coordinator's IEP routing process is a workflow in the Documentation and Record-Keeping category, governed by IDEA 34 CFR §300.320. When documented and standardized, it runs the same way whether the coordinator who designed it is in the building or not.
Federal Anchor

Every workflow category has a governing statute. Documenting the workflow is how the school demonstrates it can meet that obligation consistently, not just when the right person is present.

Workflow & Process
Workflow Automation (EWA)
FERPA · IDEA · COPPA

The practice of identifying, measuring, and streamlining repeatable administrative processes in a school or district. Automation targets are evaluated against FERPA's school official exception before implementation. Tools that touch student data must have a signed DPA. Any automation that routes student data through a third party without a documented agreement is not an efficiency gain; it is a FERPA violation operating at speed.

Example: Automating attendance follow-up emails inside an approved platform recovers 3 hrs/wk for the front office without routing student data through an unapproved tool.
Federal Anchor

FERPA 34 CFR §99.31(a)(1): automated workflows that access student records must qualify under the school official exception.

Workflow & Process
Workflow Fit Rating
FERPA · COPPA · IDEA

A score from -2 to +3 measuring how well a tool integrates into existing school workflows. Accounts for time saved, time added (overhead), and workflow categories affected. A tool cannot score above 0 if it lacks a signed DPA under FERPA, fails COPPA's opt-in consent requirement for student-facing features, or has not been reviewed against PPRA for profiling functions. Compliance status gates the rating. Efficiency gains do not override legal exposure.

Example: A tool saving 5 hrs/wk but missing a DPA has a Workflow Fit Rating of 0 regardless of feature quality. The compliance gate must clear before the score counts.
Federal Anchor

No tool earns a positive rating without clearing FERPA §99.31, COPPA §6502, and PPRA §1232h review.
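
A hedged sketch of how the compliance gate interacts with the score. The -2 to +3 range and the rule that a tool cannot rate above 0 without clearing FERPA, COPPA, and PPRA review come from the definition above; the field names and inputs are illustrative assumptions, not the registered rubric.

```python
from dataclasses import dataclass

@dataclass
class ToolReview:
    raw_fit: int            # -2..+3 from time saved, overhead, and categories affected
    has_signed_dpa: bool    # FERPA §99.31
    coppa_opt_in_ok: bool   # opt-in consent verified, or tool is not student-facing
    ppra_reviewed: bool     # profiling functions reviewed, or none exist

def workflow_fit_rating(review: ToolReview) -> int:
    score = max(-2, min(3, review.raw_fit))
    if not (review.has_signed_dpa and review.coppa_opt_in_ok and review.ppra_reviewed):
        score = min(score, 0)  # the gate: efficiency gains never override legal exposure
    return score

# A tool saving 5 hrs/wk (raw fit +3) with no DPA rates 0, not +3.
print(workflow_fit_rating(ToolReview(3, False, True, True)))  # 0
```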

Workflow & Process
Workflow Category
FERPA · IDEA · Title Programs

One of five operational areas classifying tool and process impact: Documentation & Record-Keeping, Communication Routing, Compliance & Reporting, Scheduling & Coordination, and Data Entry & Transfer. Each category maps to specific federal obligations: Documentation ties to FERPA and IDEA; Communication to FERPA disclosure rules and IDEA parental notification; Compliance & Reporting to IDEA monitoring and Title program accountability; Scheduling to IDEA procedural timelines; Data Entry to FERPA's school official exception and COPPA consent requirements.

Federal Anchor

Every workflow category has at least one governing statute. The category determines which compliance review applies.

Time & Capacity
Adoption Lag
Title II · IDEA § 300

Gap between when a tool is approved and when it is consistently used as intended. Most tools: 4 to 12 weeks. During this period the district pays without receiving the workflow benefit. In the compliance context, Adoption Lag also means the documentation the tool was supposed to produce is still not being produced, so federal obligations that depend on that documentation are still going unmet even though the tool is technically on the approved list.

Example: A school approves an IEP-tracking tool in September. Staff are not using it consistently until November. IEP timeline compliance gaps in October are attributable to Adoption Lag, not policy failure.
Federal Anchor

IDEA procedural safeguard timelines do not pause for Adoption Lag. Title II staff development documentation should account for it.

Time & Capacity
Capacity Recovery
Title I § 1111 · IDEA § 616

Staff hours returned to higher-value work when a compliance gap is closed, a manual process is streamlined, or a governance system replaces ad hoc documentation. The emphasis is where recovered time goes, not just that it was freed. For federal reporting purposes, Capacity Recovery is the measurable outcome that justifies governance investment under Title I and Title III program accountability requirements, and under IDEA §616 state monitoring, which requires evidence of program effectiveness, not just expenditure documentation.

Example: Closing 4 documentation gaps and standardizing the vendor review process recovers 3 hrs/wk for the technology coordinator, redirected to active tool governance. Documented as a program outcome in Title I accountability reporting.
Federal Anchor

Title I §1111 accountability requires documented outcomes. Capacity Recovery is the unit of measurement.

Time & Capacity
Setup Tax
FERPA § 99.31 · COPPA § 6502

The total upfront and ongoing time cost of adopting a new tool, a cost vendors exclude from their marketing. Includes configuration, SSO provisioning, staff training, and the compliance documentation the district must create independently: the signed DPA (FERPA §99.31), the COPPA consent verification process, the PPRA review if the tool generates student profiles, and the board approval documentation if required by district policy. High Setup Tax tools are often "free" tools whose true cost is entirely compliance labor.

Example: A free-tier AI writing tool requires: DPA negotiation (4 hrs), COPPA consent review (2 hrs), staff training (3 hrs), admin setup (1 hr) = 10 hrs of Setup Tax before the first use.
Federal Anchor

Every new tool requires a FERPA §99.31 DPA before student data is accessed. That process is part of Setup Tax whether vendors acknowledge it or not.
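
The arithmetic from the example above as a runnable sketch; the line items and hours are the hypothetical figures given there.

```python
# Hypothetical Setup Tax line items for a "free" AI writing tool (from the example).
setup_tax_hours = {
    "DPA negotiation (FERPA §99.31)": 4,
    "COPPA consent review": 2,
    "staff training": 3,
    "admin setup": 1,
}

total = sum(setup_tax_hours.values())
print(f"Setup Tax before first use: {total} hrs")  # 10 hrs
```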

Time & Capacity
Staff Time Impact
IDEA § 300 · Title Programs

Net change in weekly hours for a specific role when a tool or process change is introduced. Hours saved minus hours added (setup, training, maintenance, compliance documentation). Expressed as a range with a Confidence Rating (H/M/L). For federal program reporting, Staff Time Impact for roles discharging federal obligations (SPED coordinators, Title I teachers, counselors) is the evidence base for demonstrating that governance investments improve program delivery, not just operational efficiency.

Example: A standardized vendor review process saves a technology coordinator approximately 3 to 5 hrs/wk net of overhead. Confidence: M. Documented as evidence of governance program improvement in state monitoring.
Federal Anchor

IDEA §616 state monitoring and Title program accountability require demonstrated impact on federally mandated roles.
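
A minimal sketch of a Staff Time Impact record: a net range with a Confidence Rating attached, as the definition describes. The dataclass shape and field names are assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    H = "observed or verified data from this school"
    M = "benchmarked against comparable tools or schools"
    L = "feature-based estimation only"

@dataclass
class StaffTimeImpact:
    role: str
    hours_saved: tuple[float, float]  # low/high weekly estimate
    hours_added: float                # setup, training, maintenance, compliance docs
    confidence: Confidence

    def net_range(self) -> tuple[float, float]:
        low, high = self.hours_saved
        return (low - self.hours_added, high - self.hours_added)

impact = StaffTimeImpact("technology coordinator", (4.0, 6.0), 1.0, Confidence.M)
print(impact.net_range())  # (3.0, 5.0): the "3 to 5 hrs/wk net of overhead" in the example
```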

Time & Capacity
Time-to-Compliance
FERPA · COPPA 2025 · IDEA

Hours or days required to bring a new tool into documented compliance with district, state, and federal requirements. High Time-to-Compliance is a large hidden cost regardless of sticker price. Under COPPA's 2025 amendments (effective June 23, 2025; full compliance required April 22, 2026), any student-facing tool requires documented opt-in parental consent before use, a process that takes time regardless of how good the tool is. A tool adopted before its Time-to-Compliance is complete is a tool in active federal violation during the gap period.

Example: A tool approved in January but not COPPA-compliant until March has a 60-day gap of potential COPPA exposure, at up to $51,744 per affected child.
Federal Anchor

COPPA §6502(b)(1): no collection from children under 13 without verifiable parental consent. The clock starts at first student use.
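
A sketch of the gap-period arithmetic, assuming the clock starts at first student use as the anchor states; the dates are hypothetical and the penalty figure is the one cited above.

```python
from datetime import date

PENALTY_PER_CHILD = 51_744  # maximum COPPA civil penalty cited in this lexicon

def exposure_days(first_student_use: date, compliant_on: date) -> int:
    """Days the tool operated on students before its compliance work was complete."""
    return max(0, (compliant_on - first_student_use).days)

# First used mid-January, COPPA-compliant mid-March: a 60-day gap.
print(exposure_days(date(2026, 1, 15), date(2026, 3, 16)))  # 60
```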

Tool Governance
Shadow IT
FERPA § 99.31 · COPPA § 6502

Tools adopted by staff without district knowledge, approval, or vetting. Common in K-12 because teachers independently sign up for free-tier AI and edtech tools. Under federal law, each Shadow IT tool is a potential unauthorized disclosure of student education records under FERPA (34 CFR §99.31: no signed DPA means the vendor does not qualify as a school official). If students interact with the tool directly, each use is a potential COPPA violation with penalties up to $51,744 per affected child. "I didn't know" is not a federal defense.

Example: A teacher signs up for a free AI grading tool using student work samples. No DPA. No COPPA review. Every student record processed = one FERPA exposure.
Federal Anchor

FERPA 34 CFR §99.31(a)(1): third-party access to student data requires a written agreement before any data is shared. Shadow IT has none.

Tool Governance
Tool Lifecycle
FERPA · COPPA · PPRA

The full span of a tool's presence in a school: discovery, vetting, approval, deployment, adoption, maintenance, review, and retirement or renewal. Most districts manage only the middle stages; they approve and deploy, but rarely document the vetting that preceded approval or the review that should precede renewal. Under FERPA, COPPA, and PPRA, compliance documentation must exist at every stage, not just at the point of adoption. A tool that was compliant at approval may no longer be compliant after a vendor updates its data practices or switches the foundation model powering the product.

Example: A tool approved in 2022 under the old COPPA opt-out framework requires re-review under the 2025 opt-in amendments before April 22, 2026.
Federal Anchor

COPPA 2025 amendments (effective June 23, 2025; full compliance required by April 22, 2026) require re-evaluation of any tool previously approved under the opt-out framework.

Tool Governance
Tool Sprawl
FERPA § 99.31 · COPPA § 6502

The accumulation of overlapping, redundant, or unvetted technology tools across a district. Occurs when staff adopt tools independently without centralized review. Under federal law, Tool Sprawl is not merely a governance inconvenience; it is the accumulation of potential unauthorized disclosures under FERPA and potential COPPA violations for every student-facing tool without a verified consent process. Tool Sprawl reduction is a federal compliance obligation, not a preference.

Example: A district with 45 tools in active use and 12 on the approved list has 33 tools operating without DPAs: 33 potential FERPA violations.
Federal Anchor

Every tool accessing student records without a DPA is a potential violation of FERPA 34 CFR §99.31(a)(1).

Tool Governance
Tool Sprawl Index
FERPA § 99.31 · COPPA § 6502

Count of all tools in active use compared against the district's official approved list. High index = governance gaps. In the federal compliance context, the Tool Sprawl Index is a direct measure of unauthorized FERPA disclosure risk: every tool beyond the approved list that accesses student data without a DPA is a potential §99.31 violation. A TSI above 2.0x indicates significant unmanaged federal exposure.

Example: 45 tools in use, 12 on approved list = TSI of 3.75x. 33 tools operating without FERPA-required DPAs.
Federal Anchor

FERPA 34 CFR §99.31(a)(1): the school official exception requires a written agreement. TSI measures how many tools lack one.
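
The TSI arithmetic as a sketch. The 45 and 12 figures come from the example; treating every approved tool as also in active use is the assumption that makes 33 the off-list count.

```python
def tool_sprawl_index(in_use: set[str], approved: set[str]) -> tuple[float, set[str]]:
    """Ratio of tools in active use to the approved list, plus the off-list set."""
    tsi = len(in_use) / len(approved)
    off_list = in_use - approved  # each one a potential FERPA §99.31(a)(1) exposure
    return tsi, off_list

in_use = {f"tool_{i}" for i in range(45)}    # hypothetical inventory
approved = {f"tool_{i}" for i in range(12)}  # hypothetical approved list
tsi, off_list = tool_sprawl_index(in_use, approved)
print(f"TSI {tsi:.2f}x, {len(off_list)} tools without DPAs")  # TSI 3.75x, 33 tools
```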

Tool Governance
Vetting Debt
FERPA · COPPA 2025 · PPRA

The backlog of tools currently in use that have not undergone formal privacy, security, or workflow review. Every unvetted tool is a federal risk that accumulates. Vetting Debt grows faster than most districts can retire it, particularly with the introduction of AI tools, which may trigger FERPA (student data), COPPA (student-facing features for under-13 users), and PPRA (tools that generate behavioral or psychological profiles of students). The April 22, 2026 COPPA compliance deadline means existing Vetting Debt for student-facing AI tools is not deferred. It is overdue.

Example: A district with 20 unvetted tools has 20 tools that may be generating FERPA violations, COPPA exposure, and PPRA notification gaps simultaneously.
Federal Anchor

COPPA April 22, 2026 deadline: unvetted student-facing tools are currently in potential violation of the 2025 opt-in consent amendments.

Compliance & Risk
AI Governance Posture
COPPA 2025 · FERPA · PPRA

The overall state of a district's policies, documentation, and procedures for managing AI tools. Ranges from "no formal governance" to "documented, reviewed, and actively maintained." Most districts are in the first category. Under the 2025 COPPA amendments, a district with no formal AI governance is a district without documented opt-in consent processes for student-facing AI: a direct federal exposure. Under PPRA (20 U.S.C. §1232h), AI tools that generate behavioral or psychological profiles of students require parental notification. Under FERPA, every AI tool that processes student records requires a DPA. AI Governance Posture is the measure of whether any of this is in place.

Example: A school where teachers use five AI tools independently, none with DPAs, none with consent documentation = AI Governance Posture: No formal governance. Active federal exposure.
Federal Anchor

COPPA §6502(b): opt-in consent required by April 22, 2026 for every AI tool students interact with. PPRA §1232h: profiling tools require parental notification regardless of COPPA status.

Compliance & Risk
Child Find Readiness
IDEA § 300.111 · 34 CFR § 300.111

The documented capacity of a school to fulfill IDEA's Child Find obligation: the requirement to actively identify, locate, and evaluate all children who may have a disability and be eligible for special education services. Child Find is not passive. IDEA 34 CFR §300.111 requires proactive outreach. Child Find Readiness measures whether the workflows, documentation chains, and referral-to-evaluation timelines that support this obligation are documented, staffed, and functioning, or exist only in someone's head.

Example: A school with no documented referral intake workflow, no timeline tracking, and no handoff procedure when the SPED coordinator is absent has low Child Find Readiness and active IDEA exposure.
Federal Anchor

IDEA 34 CFR §300.111: Child Find is a mandatory federal obligation. Failure to identify eligible students is an IDEA violation regardless of intent.

Compliance & Risk
Compliance Surface Area
FERPA · IDEA · COPPA · PPRA

The total scope of federal and regulatory exposure across all technology tools, data systems, and AI-enabled processes. Expands with every tool adopted without corresponding documentation. Larger surface area = more audit risk across more statutes simultaneously. Under FERPA, each undocumented vendor relationship is one unit of surface area. Under COPPA, each student-facing tool without consent documentation is another. Under PPRA, each profiling or survey tool without parental notification is another. Under IDEA, each gap in IEP or Child Find documentation is another. Compliance Surface Area grows silently, until it does not.

Federal Anchor

Four statutes (FERPA, IDEA, COPPA, PPRA) can generate simultaneous exposure from a single undocumented tool adoption.

Compliance & Risk
Documentation Gap
FERPA · IDEA · COPPA · PPRA

A specific instance where a federally required document does not exist, is not current, or cannot be located within a reasonable timeframe. Each gap has a governing statute. Missing DPA: FERPA 34 CFR §99.31 (unauthorized disclosure of student records). Missing AI Output Review Policy: COPPA §6502 (undocumented consent process). Missing Prior Written Notice template: IDEA 34 CFR §300.503 (denial of procedural safeguards). Missing incident response plan: FERPA breach notification + COPPA §6502. Missing Child Find procedure: IDEA 34 CFR §300.111 (active identification failure). Each gap is an audit finding waiting to happen. The Forensic Read names the statute, not just the document.

Example: No signed DPA on file for an AI tool processing student essays = one Documentation Gap under FERPA 34 CFR §99.31.
Federal Anchor

Every documentation gap has a corresponding federal violation if it surfaces during an audit or parent request.

Compliance & Risk
Incident Response Readiness
FERPA · COPPA 2025 · PPRA

The documented capacity to respond to technology or AI incidents involving students. Includes notification sequences, communication review chains, incident log templates, and coordination procedures. Assessed before an incident, not during one. Under FERPA, schools must have procedures for responding to unauthorized disclosures. Under the 2025 COPPA amendments, breach response procedures are now part of the formal security program required of districts handling student data. A school that discovers a breach during an incident and tries to build its response procedures at the same time has low Incident Response Readiness and high federal exposure.

Example: The December 2024 PowerSchool breach affected 62 million student records. Districts without documented incident response plans had no framework for meeting their FERPA and COPPA notification obligations.
Federal Anchor

COPPA 2025 §6502: formal written security program required. FERPA: unauthorized disclosure procedures must exist before a breach, not after.

Compliance & Risk
Output Review Policy
COPPA 2025 · FERPA · PPRA

A written school policy requiring staff to review AI-generated content (text, images, assessments, summaries) before use with or about students. Under COPPA's 2025 amendments, districts must document how student data is used in AI outputs and how that use has been consented to. Under PPRA, AI-generated profiles or analyses of individual students trigger parental notification requirements even when generated by an approved tool. An Output Review Policy operationalizes these obligations at the building level, where the actual decisions are made.

Example: A teacher uses an AI tool to summarize student reading performance. Without an Output Review Policy, there is no documented process for verifying the summary does not constitute an unauthorized FERPA disclosure or a PPRA-triggering student profile.
Federal Anchor

PPRA §1232h: surveys and analyses revealing personal student information require parental consent. AI-generated profiles can trigger this. COPPA §6502: documented consent process required for any student data use.

Measurement & Reporting
Baseline Measurement
Title I § 1111 · IDEA § 616

Documented time and process data for how a task is currently performed before any change is introduced. Without a baseline, there is no way to measure whether an intervention improved anything, and no way to meet federal reporting requirements that demand demonstrated outcomes, not just documented spending. Under Title I §1111 accountability requirements and IDEA §616 state monitoring, governance investments must be tied to measurable program outcomes. Baseline Measurement is what makes that connection possible.

Example: Before standardizing vendor review, document that the technology coordinator currently spends 8 hrs/wk on ad hoc tool evaluation with no documented output. That number becomes the program baseline.
Federal Anchor

Title I §1111: accountability requires demonstrated outcomes. IDEA §616: state monitoring requires evidence of program improvement. Neither is possible without a documented baseline.

Measurement & Reporting
Confidence Rating (H/M/L)
Title I · IDEA · Internal

Transparency indicator on every time-impact estimate. H = observed or verified data from this school. M = benchmarked against comparable tools or schools. L = feature-based estimation only. When presenting Capacity Recovery or Staff Time Impact figures in federal program reports or board documentation, the Confidence Rating signals how much evidentiary weight the number carries. An H-rated finding is defensible in a Title I audit. An L-rated finding is a projection that needs to be labeled as such.

Federal Anchor

Federal program reporting requires accurate representation of outcome data. Confidence Rating prevents overstatement.

Measurement & Reporting
Learning ROI
Title I § 1111 · Title III § 3121 · IDEA § 616

Measurable return on a governance investment expressed in compliance gaps closed, documentation gaps resolved, audit exposure reduced, or staff hours recovered: not vendor engagement metrics. The question is not "are teachers using it?" but "what changed in the district's compliance posture, and can we document that change in the language federal program reporting requires?" Under Title I §1111, Title III §3121, and IDEA §616, expenditure must be connected to program outcomes. Learning ROI is the unit of that connection.

Example: A governance investment closed 4 Documentation Gaps under IDEA and recovered 6 hrs/wk for the SPED coordinator. Board presentation: 4 federal compliance gaps resolved, 6 hrs/wk returned to student-facing services.
Federal Anchor

Title I §1111, Title III §3121, IDEA §616: all require demonstrated program outcomes, not just expenditure records.

Added in Version 2
FAPE Documentation Chain
IDEA § 300.320 · § 300.503 · § 300.111

The complete sequence of documents required to demonstrate that a school is providing a Free Appropriate Public Education (FAPE) for a specific student with a disability, from Child Find referral through IEP development, annual review, Prior Written Notice, progress reporting, and placement documentation. The FAPE Documentation Chain must be complete, current, and locatable. A broken or missing link at any stage is an IDEA violation, regardless of what services were actually provided.

Example: A SPED coordinator who has been providing services correctly but has no documented PWN for a placement change has a broken FAPE Documentation Chain and an active IDEA §300.503 violation.
Federal Anchor

IDEA 34 CFR §300.320 (IEP content), §300.503 (PWN), §300.111 (Child Find), §300.322 (parent participation). All four must be documented to demonstrate FAPE.

Added in Version 2
DPA Registry
FERPA § 99.31(a)(1) · COPPA § 6502

A maintained inventory of all Data Privacy Agreements between the school or district and every third-party vendor that accesses student education records. Under FERPA 34 CFR §99.31(a)(1), vendors only qualify as "school officials" with legitimate educational interest if they operate under a written agreement with specific data-use restrictions. A DPA Registry makes that compliance visible, auditable, and maintainable, and flags when agreements lapse or need updating as vendor data practices change. Under the 2025 COPPA amendments, the Registry should also document consent verification status for every student-facing tool.

Example: A DPA Registry with 45 tools listed, 12 with current signed DPAs, and 33 flagged as missing = immediate FERPA remediation priority.
Federal Anchor

FERPA 34 CFR §99.31(a)(1): the DPA is not optional. It is the legal mechanism that authorizes third-party access to student records.
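
A minimal sketch of a registry row and the remediation query, assuming hypothetical field names; the consent-status column reflects the 2025 COPPA amendments noted above.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DPAEntry:
    vendor: str
    tool: str
    dpa_signed: Optional[date]   # None = no written agreement on file
    dpa_expires: Optional[date]
    consent_status: str          # COPPA: Compliant / In Progress / Not Started / N/A

def remediation_queue(registry: list[DPAEntry], today: date) -> list[DPAEntry]:
    """Tools whose FERPA §99.31(a)(1) written agreement is missing or lapsed."""
    return [e for e in registry
            if e.dpa_signed is None
            or (e.dpa_expires is not None and e.dpa_expires < today)]
```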

Added in Version 2
COPPA Consent Status
COPPA 2025 · 15 U.S.C. § 6502 · FTC Rule

The documented state of a school's verifiable parental consent processes for every student-facing tool that may collect personal data from children under 13. Under the January 2025 FTC amendments to COPPA, the consent framework shifted from opt-out to opt-in, meaning explicit, documented parental consent is required before data collection begins, not after. Full compliance is required by April 22, 2026. COPPA Consent Status is tracked per tool: Compliant, In Progress, Not Started, or Not Applicable (tool does not collect data from under-13 users). Any tool with In Progress or Not Started status after April 22, 2026 carries exposure to penalties of up to $51,744 per affected child.

Example: A school with 8 student-facing AI tools, 2 with documented opt-in consent processes, 6 without = COPPA Consent Status: Critical. The April 22, 2026 deadline is a hard legal date.
Federal Anchor

COPPA 15 U.S.C. §6502(b): opt-in parental consent required. FTC enforcement: penalties up to $51,744 per affected child per violation.
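
The per-tool statuses and the hard date as a sketch; the enum mirrors the four values defined above, and the predicate name is an illustrative assumption.

```python
from datetime import date
from enum import Enum

class ConsentStatus(Enum):
    COMPLIANT = "Compliant"
    IN_PROGRESS = "In Progress"
    NOT_STARTED = "Not Started"
    NOT_APPLICABLE = "N/A"  # tool does not collect data from under-13 users

COPPA_DEADLINE = date(2026, 4, 22)

def carries_exposure(status: ConsentStatus, today: date) -> bool:
    """After the deadline, anything short of Compliant or N/A is live exposure."""
    return (today > COPPA_DEADLINE
            and status in {ConsentStatus.IN_PROGRESS, ConsentStatus.NOT_STARTED})
```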

Added in Version 3
Upstream Vendor Risk
NIST AI RMF · MIT AI Risk Repository · FERPA · COPPA

The exposure created when a vendor's own dependencies (foundation models, open-source libraries, cloud infrastructure providers) are compromised without the vendor's knowledge or the district's notification. Most Data Privacy Agreements address what the vendor does with student data. They do not address what happens when something the vendor is built on gets compromised. The March 2026 LiteLLM supply chain attack demonstrated the mechanism: a Python library with 95 million monthly downloads was backdoored with credential-stealing malware and automatically pulled into vendor environments that did not pin to verified versions. If an edtech vendor your school depends on was running that library, student data access credentials may have been exposed. The DPA did not cover it.

Example: A vendor's AI-powered reading tool uses a third-party foundation model for inference. That model provider suffers a data breach. The vendor's DPA covers the vendor's own systems. It says nothing about the model provider's breach. The district is exposed and was never notified.
Framework Anchor

NIST AI RMF MAP 1.1, MAP 1.5: identify and document third-party dependencies. MIT AI Risk Repository Domain 2.1: AI system security vulnerabilities including supply chain compromise. FERPA §99.31: the DPA must cover the actual data path, not just the vendor's front door.

Added in Version 3
Software Bill of Materials (SBOM)
NIST AI RMF · Five Eyes AI/ML Supply Chain Guidance (March 2026)

An inventory of all components, libraries, and dependencies that make up a vendor's product. An SBOM is the ingredient list for software. It shows what the product is built from, including components the vendor did not build itself but relies on. Without an SBOM, the district cannot evaluate what the vendor is built on, cannot assess supply chain risk, and cannot determine whether a reported vulnerability in an open-source library affects the tools in its environment. The Five Eyes joint guidance on AI/ML supply chain risks (March 2026) validates SBOM requirements as a national security priority that extends to education infrastructure.

Example: A district asks a vendor for its SBOM. The vendor cannot produce one. This means the vendor may not have visibility into its own supply chain, and the district cannot evaluate the risk that supply chain introduces to student data.
Framework Anchor

NIST AI RMF MAP 3.4: document AI system dependencies. Five Eyes AI/ML Supply Chain Guidance (2026): SBOM requirements for AI infrastructure. A vendor that cannot produce an SBOM may not control what its product is built from.
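
A sketch of the check an SBOM makes possible: cross-referencing a vendor's component inventory against a published vulnerability advisory. The inventory and advisory shapes are hypothetical simplifications; real SBOMs arrive in formats such as CycloneDX or SPDX.

```python
def affected_components(sbom: list[dict], advisory: dict) -> list[dict]:
    """Components in the vendor's inventory that match a vulnerability advisory."""
    return [c for c in sbom
            if c["name"] == advisory["package"]
            and c["version"] in advisory["vulnerable_versions"]]

# Hypothetical inventory and advisory, echoing the LiteLLM example above.
sbom = [{"name": "litellm", "version": "1.2.3"},
        {"name": "requests", "version": "2.32.0"}]
advisory = {"package": "litellm", "vulnerable_versions": {"1.2.3"}}
print(affected_components(sbom, advisory))  # [{'name': 'litellm', 'version': '1.2.3'}]
```

Without the inventory on the left, the query cannot be run at all, which is the governance point: no SBOM, no way to know whether a reported compromise touches student data.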

Added in Version 3
Foundation Model Dependency
NIST AI RMF · FERPA · COPPA

The relationship between an edtech vendor's product and the underlying AI model (GPT, Claude, Gemini, Llama, or a proprietary model) that powers it. Many vendors market an "AI-powered" product without disclosing which foundation model powers it, where inference occurs, or what data flows to the model provider. A school district approves the vendor's product. It does not approve the hundreds of components underneath it. When the vendor switches foundation models mid-contract, the product's data handling, retention policies, and processing jurisdiction can change overnight. If the contract does not require notification of model changes, the district will not know.

Example: A vendor switches from one foundation model to another. The new model provider retains student input for training. The DPA with the vendor does not address model provider data practices. The district's data governance posture changed without anyone in the building being notified.
Framework Anchor

NIST AI RMF MAP 3.4: document which models power the product and where inference occurs. FERPA §99.31: the DPA must cover the actual data path. COPPA §6502: if student input reaches the model provider, the model provider's retention policies apply.

Added in Version 3
Trigger-Based Review
NIST AI RMF GOVERN 1.1, MEASURE 2.6

A vendor re-evaluation initiated by a specific event rather than a calendar date. Five trigger types: supply chain incident (a component the vendor depends on is compromised), model or infrastructure change (the vendor switches foundation models or cloud providers), contract renewal or amendment, regulatory change (new federal or state requirements affecting the vendor relationship), and reported vulnerability in a known dependency. A calendar-only review cycle misses the events that actually change risk. Trigger-Based Review ensures the evaluation process re-enters at the relevant phase when conditions change, not twelve months later.

Example: A vendor's open-source dependency is compromised in a supply chain attack reported in the TLF Weekly Incident Bulletin. Trigger-Based Review activates: Sections 3 (Foundation Model Mapping), 4 (Data Flow Documentation), and 6 (Incident Monitoring) of the Upstream Vendor Risk Evaluation Protocol are re-evaluated immediately, not at the next annual review.
Framework Anchor

NIST AI RMF GOVERN 1.1: AI risk management is a continuous process. MEASURE 2.6: evaluation frequency should match the rate of change in the risk environment. Calendar-only review does not satisfy this.
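
A sketch of the five trigger types as a dispatch table. The sections reopened for a supply chain incident come from the example above; the other mappings are illustrative placeholders, not the published protocol.

```python
from enum import Enum, auto

class Trigger(Enum):
    SUPPLY_CHAIN_INCIDENT = auto()   # a component the vendor depends on is compromised
    MODEL_OR_INFRA_CHANGE = auto()   # foundation model or cloud provider switch
    CONTRACT_EVENT = auto()          # renewal or amendment
    REGULATORY_CHANGE = auto()       # new federal or state requirement
    REPORTED_VULNERABILITY = auto()  # disclosed flaw in a known dependency

SECTIONS_TO_REOPEN = {
    Trigger.SUPPLY_CHAIN_INCIDENT: [3, 4, 6],  # model mapping, data flow, incident monitoring
    Trigger.MODEL_OR_INFRA_CHANGE: [3, 4],     # illustrative
    Trigger.CONTRACT_EVENT: [2, 5],            # illustrative
    Trigger.REGULATORY_CHANGE: [1],            # illustrative
    Trigger.REPORTED_VULNERABILITY: [6],       # illustrative
}

def sections_for(trigger: Trigger) -> list[int]:
    """Protocol sections to re-evaluate immediately when the trigger fires."""
    return SECTIONS_TO_REOPEN[trigger]
```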

Added in Version 3
Compliance Forensics
FERPA · COPPA · State Law · NIST AI RMF

The practice of reading a district's full document ecosystem (Data Privacy Agreements, vendor terms of service, privacy policies, master service agreements, federal program requirements, and governance policies) against each other to identify contradictions, gaps, and silent overrides. A compliance review asks whether the right documents are on file. Compliance Forensics asks whether the language in those documents actually does what everyone in the building assumes it does: whether the definitions stay consistent across agreements, whether the commitments hold up when read against the vendor's own terms, and whether the language creates the appearance of protection without the enforceable obligation. In TLF practice, Compliance Forensics is operationalized through The Forensic Read™ (see Section 09).

Example: A vendor's DPA defines "personal information" to include student records. The same vendor's terms of service define "personal information" to exclude data processed by third-party AI models. The DPA appears to protect the district. The terms of service override it for the most sensitive data path. Compliance Forensics finds this before an auditor does.
Practice Anchor

Reads the DPA against the vendor's terms of service, privacy policy, and master service agreement to identify where definitions drift, commitments dissolve, and one document silently overrides another.

Added in Version 3
Definition Drift
FERPA · COPPA · SDPC DPA Framework

The condition where the same term carries different definitions across a vendor's DPA, terms of service, and privacy policy. "Personal information," "educational purpose," "school official," "de-identified data," and "aggregate data" are common drift points. When definitions drift, the protection the district believes it signed for may not apply to the data path that matters most. Definition Drift is invisible to a checklist review that confirms the DPA is signed. It is visible to a forensic reading that compares how the same term functions across every document in the vendor relationship.

Example: A DPA defines "student data" broadly. The vendor's privacy policy defines "student data" to exclude metadata, usage analytics, and behavioral signals. The DPA covers the narrow category. Everything else flows through the privacy policy, where the vendor retains broader rights. The district signed both documents. The definitions do not match.
Practice Anchor

FERPA §99.31 requires the written agreement to specify data-use restrictions. If the DPA's definitions are narrower than the vendor's actual data practices (as defined elsewhere in its own documents), the agreement does not cover what the district thinks it covers.
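
The drift check at its simplest, as a sketch: one term looked up across every document in the vendor relationship and flagged wherever the definitions disagree. Exact-string comparison is a deliberate simplification; the forensic version compares how the term functions, not just its wording.

```python
# term -> {document: definition}, with hypothetical text echoing the example above
definitions = {
    "student data": {
        "DPA": "all information directly related to a student",
        "privacy_policy": "account information, excluding metadata and usage analytics",
    },
    "school official": {
        "DPA": "a contractor performing an institutional service",
        "terms_of_service": "a contractor performing an institutional service",
    },
}

def drift_points(defs: dict[str, dict[str, str]]) -> dict[str, dict[str, str]]:
    """Terms whose definition is not identical across all documents."""
    return {term: docs for term, docs in defs.items() if len(set(docs.values())) > 1}

print(list(drift_points(definitions)))  # ['student data']
```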

Added in Version 3
Attestation Page
USAC · FCC Pilot Program · IDEA · FERPA

The signature page at the end of a governance workbook or compliance protocol where a named person signs and dates a statement confirming that they completed the documentation, that the entries are accurate, and that they understand the record may be reviewed in an audit or compliance inquiry. The Attestation Page is what converts a completed workbook from internal paperwork into filed evidence with a chain of accountability: a specific person, in a specific role, on a specific date, standing behind the record. Without it, the documentation is unsigned work product. With it, it is evidence.

Example: A completed Cybersecurity Governance Journal without a signed Attestation Page is a worksheet. The same journal with a signed Attestation Page, filed with the E-rate coordinator within five business days, is audit-ready documentation that proves due diligence.
Practice Anchor

USAC audits require documentation that a named person reviewed and approved. IDEA procedural safeguards require signed records. Every TLF governance workbook ends with an Attestation Page because compliance without a named, accountable person is not compliance.

Added in Version 3
Federal Preemption Signal
EO 14365 (December 2025) · Commerce Clause · FTC Act § 5

An indication that a federal action (executive order, agency directive, DOJ litigation, FTC policy statement) may challenge or override a state law that a district currently relies on for AI governance. Executive Order 14365 (December 11, 2025) established a DOJ AI Litigation Task Force, directed the Commerce Department to identify "onerous" state AI laws, and directed the FTC to issue a preemption policy statement. State AI laws that regulate vendor transparency, algorithmic bias, and AI disclosure are potential targets. A Federal Preemption Signal does not mean a state law has been invalidated. It means the federal government has initiated a process that could lead to litigation, which could lead to injunctions, which could change compliance obligations. Until a court acts, every state law on the books is enforceable. Comply with current law. Prepare for change.

Example: Your district relies on a state AI transparency law when evaluating vendors. The Commerce Department names that law as "onerous" in its March 2026 evaluation. The law has not been struck down. But the Federal Preemption Signal means you should prepare a contingency: what changes in your governance framework if that law is enjoined?
Regulatory Anchor

EO 14365, §3 (DOJ Task Force), §4 (Commerce evaluation), §7 (FTC statement). The child safety carve-out preserves state authority over student data privacy and COPPA-adjacent protections, but state laws regulating AI broadly may not be protected. Districts should track which of their state-level AI protections fall inside or outside that carve-out.

New · Version 5
The Forensic Read™
TLF Registered Methodology · FERPA · COPPA · NIST AI RMF

The Language Firm's registered four-stage investigative methodology for reading vendor and regulatory language as evidence. Where a compliance review asks whether a document exists, The Forensic Read asks what the language in that document actually does, where its commitments hold up, where they dissolve, and where one document silently overrides another. The methodology is applied across the full document ecosystem surrounding any tool, vendor, or federal obligation: DPAs, terms of service, privacy policies, master service agreements, vendor disclosures, and the federal statutes those documents are accountable to. Every TLF service is the application of The Forensic Read to a specific scope: the Clarifier Workshop teaches it, the Default Settings Briefing applies it to one to ten products, the Watchlist Subscription runs it weekly on a district's named tools, and the Tool Vault demonstrates it publicly through the District Filing, the Weekly Incident Bulletin, First Watch, and the Federal Findings Digest. The four stages are Read, Trace, Surface, and Build.

Practice Anchor

The Forensic Read™ is the methodology underneath every TLF service. It is what an applied linguist sees that a compliance checklist misses: function, not just content; what the language gets to do, not only what it says.

New · Version 5
Read Stage
The Forensic Read™ · Stage 1 of 4

The first stage of The Forensic Read. The investigator reads each document in the vendor or regulatory ecosystem in full, in its own terms, before reading any document against any other. Definitions are catalogued. Commitments are separated from sentiment. Modal verbs (must, shall, will, may, should, can) are tagged because each one carries a different obligation strength. Reservations of right, indemnification clauses, limitation of liability, and the precedence-of-documents clauses are flagged for use in Stage 2. The Read Stage produces a structured inventory of what the document actually says, distinct from what the cover letter or sales conversation suggested it says.

Practice Anchor

"We take student privacy seriously" is sentiment. "We will notify the district within 72 hours of a confirmed breach" is a commitment. Stage 1 separates the two systematically across every document in the ecosystem.

New · Version 5
Trace Stage
The Forensic Read™ · Stage 2 of 4

The second stage. Definitions, commitments, and reservations are traced across documents to identify where meaning shifts between agreements. The DPA's definition of "student data" is read against the privacy policy's definition. The notification commitment in the terms of service is read against the DPA's. The master service agreement's precedence clause is read against both. Where definitions drift, where commitments dissolve, and where one document silently overrides another, the trace records the location and quotes the conflict in the parties' own words. The Trace Stage produces a documented map of where the language ecosystem holds together and where it does not.

Practice Anchor

Most districts engage legal review at the point of contract signing, not as ongoing tracing. The Trace Stage is what catches the silent override no one was reading for at signing.

New · Version 5
Surface Stage
The Forensic Read™ · Stage 3 of 4

The third stage. The drifts, gaps, and silent overrides identified in the Trace Stage are surfaced as documented findings, each tied to the federal obligation or governance framework it intersects. A definition drift on "personal information" surfaces as a FERPA §99.31 finding. A missing notification commitment surfaces as a COPPA §6502 finding. A precedence clause that subordinates the DPA to a privacy policy surfaces as a FERPA written-agreement finding. Each surfaced finding includes the exact language from the documents, the statute it touches, and the exposure the district is currently carrying. The Surface Stage produces evidence, not impressions.

Practice Anchor

The Surface Stage is where forensic reading becomes reportable. Findings go to district leadership and legal counsel as documented evidence, not as advisory commentary.

New · Version 5
Build Stage
The Forensic Read™ · Stage 4 of 4

The fourth stage. The findings surfaced in Stage 3 are converted into governance infrastructure: audit-ready documentation, vendor evaluation protocols, compliance workbooks (where applicable), incident response plans, and the weekly intelligence products that keep the building current as the language continues to shift. The Build Stage is what distinguishes The Forensic Read from a one-time audit. The investigation produces a finding; the build produces a system the district can defend on a Tuesday morning when a parent, a board member, or a federal monitor asks to see it. Without Stage 4, Stages 1 through 3 produce a report. With Stage 4, they produce governance.

Practice Anchor

The Build Stage is why The Forensic Read is a service category, not a deliverable. Districts do not need more reports. They need governance systems that hold up when the language changes again.

TLF Service
The Clarifier Workshop
The Forensic Read™ · TLF Integrity Cycle · languagefirm.org/theclarifierworkshop

A facilitated three-session engagement (90 minutes per session, one per week) where district leadership teams learn to read vendor language forensically, evaluate AI and edtech tools against actual pedagogical needs, and identify the governance gaps in their current documentation. Each session applies TLF methodology to the district's own tools, documents, and context. Participants bring a list of their AI and edtech tools and have their DPAs accessible during sessions; no orientation packets, no pre-collected questionnaires. The engagement includes the Clarifier Reference Guide (TLF Integrity Cycle, Forensic Read worked example, EWA Lexicon terms covered, Educator Pedagogy Protocol template, structural overview of the Upstream Vendor Risk Evaluation Protocol), five months of scheduled office hours following the final session, and a 30-day complimentary Watchlist Subscription trial on up to ten of the district's tools. Cohort cap: 10. Modality: virtual. Pricing: $3,500 per cohort. ESA, BOCES, and consortium bookings are supported.

Service Anchor

The Clarifier teaches your team to see what your vendors are telling you, what your documentation actually proves, and where the distance between the two creates exposure. Methodology stays in-house after the engagement.

TLF Service
The Default Settings Briefing
The Forensic Read™ · languagefirm.org/the-default-settings-briefing

A signed, dated, sourced governance record on the AI default behavior of one to ten products a district names. Drawn entirely from public sources: the vendor's privacy policy, terms of service, public documentation, support pages, and any public regulatory filings. No documents are required from the district's team. The Briefing applies The Forensic Read to the public-facing language of each named product and produces a record the district can file as evidence of due diligence on what those products were actually configured to do at the time of review. Useful before adopting a product, before renewing a contract, before answering a parent question, or before a board presentation about a tool already in use. The product-level companion to the Watchlist Subscription's ongoing monitoring.

Service Anchor

The Briefing answers a single question on the record: what does this product do by default, and where does the public language commit to doing something different from what the marketing implies?

TLF Service
The Watchlist Subscription
The Forensic Read™ · languagefirm.org/the-watchlist-subscription

Weekly forensic monitoring of a district's named tool inventory for incidents, contract drift, vendor language changes, and federal regulatory developments that affect the district's specific stack. The Watchlist applies the same methodology used in the public Tool Vault, but pointed at the district's tools instead of TLF's editorial selections. The district names the tools to be watched; no DPAs, paperwork, or internal records are shared with TLF. The forensic read is on the vendor's public-facing language, not on the district's internal documents. Quarterly Drift Audits are included at the highest tier, applying the First Watch methodology to the district's specific inventory rather than to publicly covered tools. The Tool Vault is a public watchlist; a Subscription is the district's personal one.

Service Anchor

What changes in vendor language six months after a contract is signed is what districts most often miss. The Watchlist is the standing presence that catches it weekly, not after a parent or auditor surfaces it.

TLF Service
The Tool Vault
Free Public Resource · languagefirm.org/toolvault

The Language Firm's free public resource library: four publications that keep every K-12 district current on the AI tools, federal findings, and vendor incidents shaping the governance environment. The Vault is the public application of The Forensic Read to tools and policies TLF selects for editorial coverage. It exists so that any administrator, technology coordinator, or compliance lead can see the methodology at work before deciding whether to bring it inside their building. The four publications are the District Filing (weekly tool intelligence with Tool Spotlight Cards), the Weekly Incident Bulletin (weekly incident brief), First Watch: The Drift Audit (monthly governance audit revisiting previously covered tools), and the Federal Findings Digest (monthly federal regulatory rundown).

Resource Anchor

Free, always accessible, no paywall. The Tool Vault is the public face of TLF methodology and the on-ramp to every paid engagement.

Publication
The District Filing
Tool Vault · Weekly · languagefirm.org/districtfilingarchive

The Language Firm's weekly tool intelligence publication. Each issue covers one AI tool in full operational depth, scored across vendor support, staff training, data handling, and workflow fit, with a decision label of Proceed, Caution, or Do Not Deploy. Each issue includes a Tool Spotlight (full operational review), a Tool Spotlight Card (downloadable, versioned one-page PDF of the tool review with the School Fit Snapshot, decision label, and evidence links, updated when vendor terms or data practices change), Ten Things Every Educator Needs to Know (ten field-by-field findings written for staff communication and internal documentation), an Endurance Skills Guide (six professional practices that apply regardless of which tool arrives next), a Skill in Focus (deep dive into one educator competency tied to the tool reviewed), and a Policy Update (one federal or state policy explained in plain language with direct application to the tool covered). Every issue is archived; every Tool Card is versioned; one tool, fully vetted, every week.

Publication Anchor

The District Filing is the weekly application of The Forensic Read to a publicly chosen tool. Districts use it directly for tools they have, and as a model for how to evaluate the ones they are considering.

Publication
The Weekly Incident Bulletin
Tool Vault · Weekly · Free

A free publication delivered every Monday, surfacing three to five (sometimes more) education and AI incidents that create real audit risk. Each entry explains what broke, what it means, what a federal monitor would look for, and exactly what to do about it before anyone asks. Includes verified resources to back up future governance changes and reminders to verify security features and user policies in the tools the district already operates. Any administrator can Google the incident; what they cannot Google is whether it creates a documentation gap in their specific district, which federal requirement it touches, and what to put in writing. The Bulletin closes that gap weekly and is the most accessible entry point to TLF methodology.

Publication Anchor

Free for K-12 administrators, technology coordinators, and compliance leaders. Subscriber base grows weekly. The Bulletin is the short-form public proof that TLF watches what most districts cannot.

Publication
First Watch: The Drift Audit
Tool Vault · Monthly · Companion to the District Filing

A systematic, primary-source-verified monthly governance audit revisiting tools previously covered in the District Filing's Tool Spotlight Cards. The Drift Audit tests whether the governance signals identified at the time of original coverage still hold: whether the privacy policy still says what it said, whether the terms of service have shifted, whether the public posture aligns with the documented language, and whether any of the changes affect the original Proceed, Caution, or Do Not Deploy label. Each finding is paired with a forensic pattern classification: policy-posture divergence, convergence, or asymmetric movement (each defined separately in this lexicon). The Drift Audit is the recurring reminder that what a tool committed to in March may not be what it commits to in September, and that the difference between the two is what the district has to govern.

Publication Anchor

First Watch is the monthly standing presence that prevents Tool Spotlight Cards from going stale. A governance signal is only useful as long as it still describes the tool the vendor is actually shipping.

Publication
The Federal Findings Digest
Tool Vault · Monthly · languagefirm.org/digest-archive

A monthly publication summarizing the most relevant federal findings on AI published that month, selected for K-12 impact and organized by program area. Each entry includes a plain-language summary of what the federal document says, a TLF interpretation of the compliance implication for K-12 schools, a Your Action section with specific concrete steps the district's team can take, and a clear flag when a federal notice does not apply to K-12 so districts do not spend time on it. Each entry cites a primary source. The Digest is the monthly companion to the Weekly Incident Bulletin: where the Bulletin tracks what broke, the Digest tracks what was published, what was ruled on, what was withdrawn, and what changed in the federal posture toward AI in education.

Publication Anchor

The Digest is the regulatory mirror of the Drift Audit. Tools shift; statutes and agency guidance shift too. Both shifts have to be tracked, and most districts cannot track either without a standing presence doing the reading.

New · Version 5
TLF Integrity Cycle
TLF Operational Framework · v1.0, March 2026

The five-stage compliance operations framework underneath every Language Firm service. The cycle moves districts from unknown risk to audit-ready evidence and stays current because the routine does, not because a person remembers. The five stages are Document (map current tools, workflows, evidence holdings, and documentation practices to establish the baseline), Assess (score every tool and process against the specific federal programs, statutes, and data governance requirements applicable to the district), Build (turn findings into systems with named owners and deadlines, build evidence infrastructure inside platforms the district already uses), Maintain (quarterly evidence review, findings briefs with priority actions ranked by audit exposure, active credential maintenance), and Inform (weekly and monthly intelligence tracking AI tool risk, federal regulatory changes, and compliance calendar obligations). Each stage feeds the next. The cycle does not end.

Framework Anchor

The TLF Integrity Cycle is what the Clarifier Workshop teaches in Session 1. It is the operating logic the district carries forward into every future tool review, every contract renewal, and every federal monitoring response.

New · Version 5
Policy-Posture Divergence
First Watch: The Drift Audit · Forensic Pattern

A forensic pattern classification used in the Drift Audit. Divergence occurs when a vendor's documented policy language and its public posture (marketing claims, support page statements, executive interviews, public commitments) move in different directions over time. The privacy policy quietly broadens what data the vendor can use; the marketing posture continues to claim "we don't sell student data." The terms of service add a clause permitting model training on user inputs; the homepage continues to say "your data stays with you." Divergence is the most common pattern, and the most dangerous, because the district's confidence is shaped by the posture while the obligations are shaped by the language. The Drift Audit names the divergence in the parties' own words.

Practice Anchor

Vendors do not announce divergence. The Drift Audit catches it because two evidentiary streams are tracked separately and compared. One stream alone hides it.

New · Version 5
Convergence
First Watch: The Drift Audit · Forensic Pattern

A forensic pattern classification used in the Drift Audit. Convergence occurs when a vendor's documented policy language and its public posture move in the same direction toward a stronger, clearer commitment. The privacy policy is tightened; the marketing language reflects the tightening. The terms of service add a clear notification commitment; the support documentation explains how it will be honored. Convergence is the pattern districts should reward and document: a vendor whose language and posture are aligning toward greater accountability is a vendor whose Tool Spotlight Card may move from Caution to Proceed in a future Drift Audit. Convergence is also a useful negotiation lever during contract renewal.

Practice Anchor

Convergence is the pattern that confirms a vendor is genuinely improving rather than only marketing improvement. It is also the pattern that justifies upgrading a tool's governance standing.

New · Version 5
Asymmetric Movement
First Watch: The Drift Audit · Forensic Pattern

A forensic pattern classification used in the Drift Audit. Asymmetric movement occurs when one of the two evidentiary streams (policy language or public posture) shifts substantially while the other remains static. The terms of service are quietly rewritten while the marketing pages stay the same. The CEO publicly commits to a new privacy standard while the privacy policy remains unchanged. Asymmetric movement is the early-warning pattern: it usually precedes either divergence (if the static stream stays static) or convergence (if the static stream catches up). The Drift Audit flags asymmetric movement so districts can ask the vendor which direction the catch-up will go before the next contract renewal.

Practice Anchor

Asymmetric movement is the moment to ask. By the time the streams realign, the district has either gained or lost ground in the language without negotiating it.

Rewritten in v5
Educator Pedagogy Protocol
FERPA · IDEA · Section 504 · The Clarifier Workshop

A structured protocol for capturing what the teachers in a district's building know about their students' actual classroom needs, applied to evaluate whether a proposed or current tool is pedagogically appropriate before the district commits. Introduced in the Clarifier Workshop as the method that closes the gap between a tool's compliance paperwork and the question that compliance paperwork does not answer: will the teachers in this building actually use this tool for the purpose the vendor claims, and does the tool serve the students with the most legal protection (English learners, students with IEPs and 504 plans, twice-exceptional learners) rather than only the "average" student edtech is typically designed for? The protocol captures three categories of input, all expressed in non-identifiable terms so that no individual student information can be traced back to any learner: pedagogical fit and intended use, student needs at the classroom level (in aggregate or by general profile), and red flags or concerns. Findings are aggregated during the workshop into an overarching pedagogy posture per subject and grade band, refreshable in-house from the questionnaire template whenever curriculum, teacher pedagogies, or student needs change.

Visualization: The aggregated pedagogy posture and the vendor's stated claims are plotted together on a one-page radar chart across five dimensions (pedagogical alignment, average-student coverage, non-average student coverage, outcome alignment, trust posture), each rated 1 to 4. The visible gap between the two profiles shows where the tool fits the district's classrooms and where it fails them. A superintendent can read the chart in 30 seconds; a teacher can defend it in front of a board.
Practice Anchor

The protocol exists because compliance paperwork alone does not predict adoption. Pedagogical fit does. Aggregated, non-identifiable, FERPA-clean, and refreshable in-house from the template provided in the Clarifier Reference Guide.
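
A minimal sketch of the one-page radar chart, assuming matplotlib; the five dimensions and the 1-to-4 scale come from the Visualization note above, while the ratings, labels, and output filename are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

dims = ["Pedagogical\nalignment", "Average-student\ncoverage",
        "Non-average\nstudent coverage", "Outcome\nalignment", "Trust\nposture"]
district_posture = [3, 4, 2, 3, 2]  # aggregated teacher input (hypothetical)
vendor_claims    = [4, 4, 4, 4, 4]  # the vendor's stated profile (hypothetical)

angles = np.linspace(0, 2 * np.pi, len(dims), endpoint=False).tolist()
angles += angles[:1]  # close the polygon

fig, ax = plt.subplots(subplot_kw={"polar": True})
for label, vals in [("District posture", district_posture),
                    ("Vendor claims", vendor_claims)]:
    closed = vals + vals[:1]
    ax.plot(angles, closed, label=label)
    ax.fill(angles, closed, alpha=0.15)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims)
ax.set_ylim(0, 4)
ax.legend(loc="lower right")
plt.savefig("pedagogy_posture_radar.png", dpi=150, bbox_inches="tight")
```

The visible gap between the two polygons is the finding the entry describes: where the profiles separate is where the tool fails the district's classrooms.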

Retained from v4
Upstream Vendor Risk Assessment
NIST AI RMF · MIT AI Risk Repository · FERPA · COPPA · SDPC DPA Framework

A single-vendor evaluation document that captures what a vendor is built on (the foundation models, infrastructure, and third-party components beneath the surface product), the contractual and legal coverage of those upstream dependencies, the vendor's incident response posture, and the gaps between standard Data Privacy Agreement coverage and the actual upstream risks the district is taking on. Each vendor in the district's tool inventory can receive its own Upstream Vendor Risk Assessment. Assessments are signed, dated, and filed alongside the district's other governance records. The methodology is published in The Language Firm's Upstream Vendor Risk Evaluation Protocol, which is grounded in NIST AI RMF, the MIT AI Risk Repository, FERPA, COPPA, and the SDPC DPA Framework, and is referenced as a structural overview in the Clarifier Reference Guide. Upstream Vendor Risk Assessments fill the gap between what a standard DPA covers (the vendor's own data handling) and what districts actually need to know about the supply chain underneath the vendor: the foundation model the tool is built on, what happens when that foundation model changes, what happens when an upstream component is compromised, and whether the district's contract addresses any of it.

Practice Anchor

One assessment per vendor. Signed, dated, filed. The DPA covers the front door; this assessment covers everything underneath. Distinct from "Upstream Vendor Risk" (the exposure category in Section 07), this is the document that records and signs off on it.

56 terms
Operational definitions grounded in federal statute, governance frameworks, and TLF methodology
4 services
The Clarifier Workshop · The Default Settings Briefing · The Watchlist Subscription · The Tool Vault
17 new in v5
The Forensic Read™ methodology, services, publications, and forensic pattern vocabulary