
Protecting LCAs, EPDs, and Formulations in AI Workflows

Walker Ryan, CEO / Founder
March 5, 2026 · 5 min read

Draft LCAs, EPDs, and proprietary formulations are among the most sensitive assets a construction materials manufacturer holds. When AI tools enter the picture, fear of leakage stalls pilots and forces meetings with Legal, IT, and Sustainability. This guide shows how to evaluate LLM vendors, set "no‑train" and retention controls, use practical redaction and dummy data, and tighten contracts and internal policies. It is written for 2026 realities, with limited time, mixed data quality, and pressure from public procurement to publish more EPDs.


Why AI plus LCAs and EPDs is a special risk

Draft LCAs and EPDs often include unpublished recipes, supplier names, plant energy blends, and preliminary impact factors. If that content escapes, it can harm pricing power, future compliance positions, and procurement standing. Teams also worry that prompts or files might be used for model training, or retained for long periods, by default.

Unlike typical office documents, these files tie to regulatory claims and future audits. That makes traceability, reproducibility, and who-touched-what logs essential, not optional.

Vet LLM vendors with a standard pack your counsel trusts

Start every vendor conversation with a consistent security and data questionnaire. The Cloud Security Alliance’s AI‑CAIQ gives you a current, structured set of controls to ask about training use, data residency, encryption, logging, incident response, and sub‑processors. Point vendors to the official AI‑CAIQ self‑assessment and expect written answers your legal team can file.

Ask for third‑party attestations that are fresh, not just logos. SOC 2 Type II, ISO 27001, and a documented breach process matter. Require explicit statements on customer data not being used for model training, retention windows, and location of storage and backups.

Configure no‑train and retention the right way

Treat no‑train as a control you verify, not a marketing claim. Require a contract clause that prohibits training on customer content and a configuration proof such as a console screenshot or API parameter reference. Pair that with short log retention, private networking, and customer‑managed keys where available. Map these controls to the NIST AI RMF Playbook actions for govern, map, measure, and manage so auditors have common language.

Keep sensitive content out of prompts whenever possible. Store source documents in your own repository, then use retrieval that sends only the smallest necessary chunks. Turn on output and input filtering for secrets and known product codes, and block uploads to general chat surfaces that bypass logging.
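One way to enforce that filtering is a pre-send screen on every prompt chunk. The sketch below assumes hypothetical token formats, an internal product-code scheme like "PRD-1234" and exact formulation percentages; substitute the patterns your own product codes and documents actually use.

```python
import re

# Illustrative patterns only: adapt to your real product-code and
# formulation conventions before relying on this screen.
BLOCKED_PATTERNS = [
    re.compile(r"\bPRD-\d{4,}\b"),      # internal product codes (assumed format)
    re.compile(r"\b\d{1,3}\.\d+\s?%"),  # exact percentages such as "12.5 %"
]

def screen_prompt(text: str) -> str:
    """Raise before a prompt chunk containing known-sensitive tokens leaves the network."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            raise ValueError(f"blocked sensitive pattern: {pattern.pattern}")
    return text
```

A screen like this sits between your retrieval layer and the vendor API, so a chunk that slips past redaction still gets caught and logged before it leaves your boundary.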

Redaction and dummy data that still let you test

Create prompt and document templates with selective redaction. Mask supplier names, trade names, and exact percentages while keeping structure, ranges, and units. For formulations, replace one or two proprietary components with chemically similar stand‑ins and keep total mass balance to preserve LCA math. This preserves test utility without revealing the crown jewels.
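The stand-in substitution above can be sketched as a small transform. The component names, percentages, and stand-in mapping here are invented for illustration; the point is that names change while masses do not, so totals still sum to 100% and the LCA math stays intact.

```python
# Illustrative formulation only -- not a real recipe.
formulation = [
    {"component": "AcmeBind X-7", "mass_pct": 12.5},  # proprietary: mask the name
    {"component": "limestone filler", "mass_pct": 55.0},
    {"component": "water", "mass_pct": 32.5},
]

# Map proprietary components to chemically similar stand-ins.
STAND_INS = {"AcmeBind X-7": "generic polymer binder"}

def redact(rows):
    """Swap proprietary names for stand-ins but keep exact masses,
    so the redacted table still balances to 100% for LCA calculations."""
    return [
        {"component": STAND_INS.get(r["component"], r["component"]),
         "mass_pct": r["mass_pct"]}
        for r in rows
    ]
```

Run over a draft table, this yields a test artifact with the same structure, units, and mass balance as the original, minus the crown jewels.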

When evaluating classification, summarization, or gap‑check use cases, seed synthetic but realistic PCR sections and plant energy profiles. Validate that model performance on redacted sets tracks closely to full‑fidelity benchmarks before you expose any real data.
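That validation step can be as simple as comparing paired benchmark scores. The sketch below uses placeholder accuracy numbers and an arbitrary 0.05 tolerance; both are assumptions you would replace with your own benchmark results and acceptance threshold.

```python
def performance_gap(full_scores, redacted_scores):
    """Mean absolute gap between paired benchmark scores on
    full-fidelity vs redacted versions of the same test set."""
    assert len(full_scores) == len(redacted_scores)
    return sum(abs(f - r) for f, r in zip(full_scores, redacted_scores)) / len(full_scores)

full = [0.91, 0.88, 0.93]      # placeholder: accuracy on full-fidelity documents
redacted = [0.89, 0.87, 0.90]  # placeholder: accuracy on redacted equivalents

gap = performance_gap(full, redacted)
acceptable = gap <= 0.05  # hypothetical tolerance; set your own
```

If the gap stays within tolerance, the redacted set is a fair proxy and real data never needs to enter the evaluation loop.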

Contract the guardrails in NDAs and DPAs

Update NDAs and DPAs so AI uses are explicit. Include prohibitions on secondary use and model training, tight data retention, named data residency, sub‑processor pre‑approval, and audit rights. Add a requirement for secure deletion requests and a duty to notify on any incident involving prompts, embeddings, or chat logs that contain confidential business information.

For internal developers and partners, add a lightweight addendum that covers generated artifacts. Make clear that model outputs derived from confidential inputs inherit confidentiality and must remain in approved repositories.

Internal policies that unlock usage without surprises

Define data sensitivity tiers that your teams can actually remember. For example, published EPDs are Tier 1, draft EPDs and LCAs are Tier 2, and formulations and supplier pricing are Tier 3. Allow Tier 1 in approved hosted copilots, require retrieval with redaction for Tier 2, and restrict Tier 3 to isolated environments with additional review.
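The tier rules above reduce to a small routing table that tooling can enforce. The tier numbers and surface names below mirror the examples in the text; treat them as a sketch to adapt, not a finished policy engine.

```python
# Which AI surfaces each sensitivity tier may reach.
ALLOWED_SURFACES = {
    1: {"hosted_copilot", "retrieval_pipeline", "isolated_env"},  # published EPDs
    2: {"retrieval_pipeline", "isolated_env"},  # drafts: retrieval with redaction
    3: {"isolated_env"},                        # formulations, supplier pricing
}

def may_send(tier: int, surface: str) -> bool:
    """True if a document at this sensitivity tier may go to the given surface."""
    return surface in ALLOWED_SURFACES.get(tier, set())
```

Wiring a check like this into upload and retrieval paths turns the policy from a memo into a default, so a draft EPD physically cannot land in a general chat surface.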

Backstop usage with secure defaults for developers. Reference the 2025 OWASP Top 10 for LLM Applications to prevent common failure modes like prompt injection, excessive data exposure, and insecure output handling. Require human review for any customer‑facing text that cites unpublished impact numbers.

EPD programs and public procurement are tightening in 2026

Federal attention on embodied carbon continues to rise, and more bids ask for product‑specific EPDs. EPA signaled plans to set additional thresholds and support EPD adoption through 2025 activities, with program work rolling into 2026. See EPA’s update on cleaner construction materials, which describes the thresholds expected by the end of 2025.

A pragmatic rollout path

Pick one narrow workflow, like summarizing published EPDs or auto‑checking draft tables for missing fields. Run it with vetted vendors, no‑train configured, and templated redaction. Track cycle time saved and error rates. When the controls hold up and the utility is clear, expand to the next workflow that offers value without exposing formulations.

Frequently Asked Questions

How do we verify that an LLM vendor will not train on our data?

Request a contract clause banning training on your data, a configuration proof for no‑train, documented retention settings, and third‑party attestations. Use the Cloud Security Alliance AI‑CAIQ to structure the questions.

How can we test AI workflows without exposing proprietary formulations?

Redact supplier names and exact percentages, substitute one or two components with chemically similar stand‑ins, and keep mass balance. Validate performance against a private benchmark before allowing any real data.

Which security risks should our developers address first?

Focus on prompt injection, data leakage, insecure output handling, and poisoning controls. The 2025 OWASP Top 10 for LLM Applications provides a concise, current checklist.

How do we demonstrate our AI controls to auditors?

Map your controls to the NIST AI RMF Playbook. It gives shared language for govern, map, measure, and manage functions that auditors and regulators recognize.

Does the push for more public EPDs raise our exposure?

Yes. More public EPDs mean more drafts in circulation. Keep strict tiering and logging. EPA’s 2025 update on cleaner construction materials indicates continued expansion of thresholds into 2026, which raises the volume and sensitivity of preparatory data (EPA update).

Want to implement this at your facility?

Parq helps construction materials manufacturers deploy AI solutions like the ones described in this article. Let's talk about your specific needs.


