Human Oversight and Secured-Only Systems: Pennsylvania Supreme Court’s Interim Framework for Judicial Use of Generative AI

Introduction

On September 9, 2025, the Supreme Court of Pennsylvania issued an Interim Policy governing how judicial officers and court personnel across the Unified Judicial System (UJS) may use generative artificial intelligence (GenAI). Framed as a system-wide administrative directive, the Policy both authorizes and constrains GenAI use on UJS Technology Resources, emphasizing confidentiality, accuracy, ethical compliance, and institutional oversight. It applies to a broad set of “Personnel,” including state-level and county-level court employees, judicial officers (including senior judges) and their staff, and staff of Supreme Court-established boards and committees.

The central themes are clear: (1) GenAI may be used for specific, work-related tasks where it demonstrably aids efficiency and public service; (2) humans remain accountable for content and outcomes; and (3) any non-public information may only be processed by a “Secured AI System” that contractually guarantees confidentiality, non-retention, and non-disclosure. Leadership—defined to include the Chief Justice, appellate and district President Judges, and the Court Administrator of Pennsylvania—is charged with approval, oversight, and vendor-management duties.

This commentary explains the policy’s scope, the core rules it announces, and its practical implications for judges, law clerks, administrators, and court technologists. It also situates the Policy within the existing ethical and public-access framework that governs Pennsylvania courts.

Summary of the Policy

The Interim Policy contains carefully defined terms, a general applicability clause, and detailed rules on authorization, approved use cases, personnel responsibilities, data security, and implementation:

  • Scope and Applicability: The Policy governs Personnel using GenAI on UJS Technology Resources, expressly including personal devices used for work purposes.
  • Core Permission with Guardrails: Personnel are authorized to use GenAI for work as set forth in the Policy and only via tools approved by Leadership. Supervisory pre-approval or disclosure of GenAI use within work product may be required.
  • Permitted Use Cases: Summarizing documents; preliminary legal research (only with tools trained on comprehensive, up-to-date, reputable legal authorities); drafting initial document versions (e.g., communications, memoranda); editing/assessing readability of public documents; and providing public-facing chatbots or similar services.
  • Human Accountability and Competence: Personnel must comply with ethical rules and laws, maintain technical competence with GenAI, and remain responsible for the accuracy of any AI-assisted work. The Policy warns of “hallucinations,” bias, and limitations inherent to GenAI.
  • Confidentiality and Security: Personnel may share case or administrative records with a Secured AI System that guarantees confidentiality, non-retention, and no model training on user data. Sharing any Non-Public Information with a Non-Secured AI System is prohibited.
  • Leadership Oversight and Procurement: Leadership must ensure compliance, including thorough review of vendor contracts and end-user license agreements (EULAs), even retroactively for pre-existing agreements, to verify security, non-retention, and non-disclosure commitments.

Analysis

Precedents and Authorities Cited or Incorporated

Although not a traditional adjudicative opinion, the Policy roots its requirements in existing legal and ethical frameworks:

  • 42 Pa.C.S. § 102: Used to define who qualifies as “judicial officers” within the Unified Judicial System. This anchors the Policy’s personnel coverage in statute.
  • Case Records Public Access Policy of the Unified Judicial System of Pennsylvania (Sections 9.0 and 10.0): Referenced to delineate categories of restricted information in case records.
  • Electronic Case Record Public Access Policy (Section 3.00): Additional reference point for restrictions on access to electronic case data.
  • Codes and Conduct Rules: The Policy requires compliance with the Code of Judicial Conduct, the Rules Governing Standards of Conduct of Magisterial District Judges, the Code of Conduct for Employees of the Unified Judicial System, and the Rules of Professional Conduct. These authorities provide the ethical baseline—impartiality, confidentiality, competence, and accountability—onto which the AI governance layer is added.
  • Dictionary Definition of “Artificial Intelligence”: A definitional footnote citing Merriam-Webster reflects the Policy’s pragmatic approach to terminology in a fast-evolving field.

Notably, the Policy does not cite judicial case law, which is unsurprising given its administrative, system-governance character. Instead, it aligns GenAI usage with existing confidentiality and access restrictions and with the ethical obligations already binding on judges and court staff.

Legal Reasoning and Structure

The Policy’s reasoning proceeds from three pillars: duty of confidentiality, duty of competence and accuracy, and institutional oversight.

  • Confidentiality as the Non-Negotiable Floor: The crux is the Secured AI System vs. Non-Secured AI System distinction. To protect case integrity, litigant privacy, and privileged court materials, any AI system that retains, trains on, or exposes user inputs is categorically unsuitable for non-public content. By defining “Secured” to require non-retention, no training on user data, no sale/transfer, and no public exposure—extending these duties to subcontractors—the Policy establishes a high bar for permissible AI tools.
  • Competence and Human Accountability: Explicit statements that GenAI can hallucinate, carry bias, and overlook human-appreciated nuances undergird the requirement that personnel become technically proficient and review AI outputs. The human author remains responsible for the accuracy and lawfulness of work product. This echoes the broader legal-ethics trend recognizing “technological competence” as part of professional competence.
  • Institutional Oversight and Procurement Gatekeeping: Leadership approval for tools, potential supervisory approval or mandatory disclosure for specific uses, and rigorous contract/EULA vetting reinforce that AI choices are systemic risk decisions, not merely personal productivity choices. The Policy also anticipates county-level realities by instructing judicial Leadership to ensure compliance even where non-judicial county IT departments procure technology.
  • Permitted Uses Balanced with Safety: The permitted-use list is purposefully pragmatic: efficiency-enhancing functions (summarization, drafting, readability edits) and public service (chatbots) are greenlit, but only within security and competence confines. Preliminary legal research is permitted provided the tool’s training corpus is “comprehensive, up-to-date, [and] reputable”—a quality threshold that excludes many general-purpose tools for legal research purposes.

Impact and Forward-Looking Considerations

The Policy will shape court operations, procurement practices, and public-facing services across Pennsylvania’s judiciary:

  • Tool Selection and Market Signaling: By defining “Secured AI System” in stringent terms (no retention, no training on user data), the Policy effectively requires vendors to offer true non-retention, non-training, and strong contractual privacy protections. This will push courts toward enterprise-grade deployments or on-premises/isolated solutions.
  • Research and Drafting Practices: Judicial officers and staff may use GenAI to accelerate routine tasks but must verify content against primary sources. For legal research, many consumer-grade models will be disqualified unless they demonstrate reliable, current coverage of legal authorities.
  • Public-Facing Chatbots: The judiciary can deploy informational chatbots to assist self-represented litigants. To avoid risk (e.g., unintentionally offering legal advice or disseminating inaccurate information), Leadership will likely pair these services with clear disclaimers, scope limitations, and human-oversight escalation—consistent with the Policy’s emphasis on accuracy and ethical compliance.
  • Training and Competence Programs: Because personnel must be “proficient” in GenAI’s capabilities and limitations, courts will need ongoing training on prompt design, verification methods, bias detection, and safe data handling.
  • Contracting and Governance: Procurement will require careful due diligence: data residency and transmission security; discrete processing vs. model training; subcontractor obligations; audit rights; incident response; and verification that no content is exposed to the public domain. The Policy explicitly requires reviewing pre-existing contracts for compliance.
  • Transparency and Documentation: The authorization for supervisors to require disclosure of GenAI use will drive the development of standardized disclosure notations, internal logs, or metadata tags to make AI assistance transparent within the judiciary.
  • Ethical Alignment and Risk Reduction: Tying GenAI use to existing codes of conduct should reduce risks of biased or inaccurate outputs influencing decisions, preserve public confidence, and avoid leakage of restricted information into external systems.
  • Harmonization with Public Access Policies: Because “Non-Public Information” maps to the UJS Public Access Policies, the GenAI rules inherit that framework’s classifications. This creates a consistent, cross-policy approach to data handling in AI contexts.
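The "standardized disclosure notations, internal logs, or metadata tags" mentioned above could take many forms; the Policy prescribes none. As a purely illustrative sketch, a court might attach a small structured record to AI-assisted work product. Every field name below is a hypothetical choice, not a requirement drawn from the Policy.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class GenAIDisclosure:
    """Hypothetical internal notation recording GenAI assistance.

    The Policy authorizes supervisors to require disclosure but does not
    define a format; these fields are illustrative assumptions only.
    """
    document_id: str      # internal reference for the draft or memo
    tool_name: str        # Leadership-approved tool that was used
    secured_system: bool  # whether the tool meets the "Secured AI System" definition
    use_case: str         # e.g., "summarization", "initial draft", "readability edit"
    human_reviewer: str   # person accountable for the accuracy and bias review
    review_date: str      # date the human verification was completed

# Example record for a hypothetical administrative memo.
record = GenAIDisclosure(
    document_id="ADMIN-MEMO-0042",        # placeholder identifier
    tool_name="ExampleSecureLLM",         # placeholder; actual approved tools vary
    secured_system=True,
    use_case="initial draft",
    human_reviewer="J. Clerk",
    review_date=date(2025, 10, 1).isoformat(),
)
print(asdict(record))
```

A structured record like this would let administrators audit which approved tools were used, for what purpose, and who performed the mandatory human review.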

Complex Concepts Simplified

  • UJS Technology Resources: All court-owned or provided devices, software, networks, and storage—and also any personal device used for work purposes. If you use your personal phone/laptop for UJS work, this Policy applies.
  • Generative AI (GenAI): Systems that create text, audio, or images in response to prompts. They rely on patterns in training data and can produce convincing but sometimes inaccurate results.
  • “Hallucinations”: When an AI produces content that is inaccurate, fabricated, or unsupported. Human verification is mandatory.
  • Secured AI System: An AI service that:
    • Protects confidentiality and privilege of inputs;
    • Does not retain user-entered data or documents;
    • Does not train models on that data;
    • Does not transfer/sell data to third parties;
    • Does not expose data to the public domain;
    • Extends all these duties to any subcontractors.
  • Non-Secured AI System: Any AI service that retains user inputs, trains on them, or can disclose them to third parties or the public. Do not input non-public information into these systems.
  • Case Records vs. Administrative Records: Case Records are filings, orders, opinions, dockets, transcripts, exhibits, etc., maintained by the courts. Administrative Records are internal notes, drafts, memoranda, and work product. Both categories can contain non-public information.
  • Non-Public Information: Information restricted by law or policy (e.g., specific categories defined in the UJS Public Access Policies). Treat it as confidential unless clearly designated public.
  • Preliminary Legal Research with GenAI: Permissible only if the AI tool’s legal sources are comprehensive, current, and reputable. Always verify with primary sources.
  • Leadership: The Chief Justice, President Judges (appellate and district), and the Court Administrator of Pennsylvania, or their designees. They approve tools, enforce compliance, and ensure contracts meet security requirements.

Detailed Discussion of Key Sections

Section 3: Authorization and Use

  • Approved Tools Only: Personnel may not install or use unapproved GenAI tools on UJS Technology Resources.
  • Supervisory Control and Disclosure: Even if a tool is approved, supervisors may require case-by-case approval or disclosure within the work product that GenAI was used. This promotes transparency and quality control.
  • Permitted Uses:
    • Summaries: Drafting concise summaries of lengthy materials.
    • Preliminary legal research: Aiding early-stage issue spotting with reputable, up-to-date legal sources (not a substitute for authoritative research).
    • Drafting initial versions: Memos, emails, and similar documents—subject to human review and editing.
    • Readability edits of public documents: Improving clarity and accessibility of materials intended for public consumption.
    • Public chatbots: Offering information and triage for the public and self-represented litigants, with institutional oversight.

Section 4: Responsibilities and Ethics

  • Ethics First: All existing codes of conduct and professional rules apply when using GenAI. This includes impartiality, confidentiality, diligence, competence, and avoidance of bias.
  • Copyright: Personnel must ensure fair use and proper attribution. Be alert to GenAI outputs that may replicate copyrighted text; verify before reuse.
  • Competence: Users must know GenAI’s capabilities and limitations and stay current as tools evolve.
  • Human Accountability: People, not machines, are responsible for final work product. Every GenAI-assisted output requires a human accuracy and bias check.

Section 5: Data Security and Sharing

  • Secured-Only for Non-Public Content: Personnel may share case or administrative records with an AI tool only if it satisfies the Policy’s “Secured” definition. If a system retains inputs, trains on them, or allows third-party access, it is non-secured and off-limits for non-public data.
  • Presumption of Non-Confidentiality in Non-Secured Systems: The Commentary instructs personnel to assume that any information entered into a non-secured system becomes non-confidential. When in doubt, escalate through supervisory channels.

Section 6: Implementation and Enforcement

  • Contract and EULA Review: Leadership must review new and existing agreements to verify:
    • No data retention of user inputs;
    • No training on user data;
    • Security of transmission pathways;
    • No vendor or subcontractor rights to view/repurpose content;
    • No exposure of content to the public domain.
  • County IT Coordination: Where counties manage technology, judicial Leadership must ensure local solutions still comply before authorizing use.

Practical Implications and Implementation Guidance

  • For Judges and Law Clerks:
    • Use GenAI as an assistant, not an authority; verify citations and analysis with primary sources.
    • Do not paste confidential drafts or sealed materials into non-secured tools.
    • When required, disclose GenAI assistance in internal work product using standard notations.
    • Be mindful that AI outputs may embed biases; actively check for fairness and neutrality.
  • For Court Administrators and IT:
    • Create and maintain an approved-tool registry with clear “secured” status and permitted use cases.
    • Embed security and privacy terms in contracts: no retention, no training, subcontractor flow-down, breach notice, and audit rights.
    • Offer standardized trainings on safe prompting, data classification, and verification practices.
    • Develop logging or metadata standards for GenAI-assisted work and guidance for public-facing chatbot disclaimers.
  • For Communications and Public Information Officers:
    • Use GenAI to improve readability of public documents, followed by a human final review to preserve legal precision.
    • For chatbots, implement: scope limitations (information, not advice), clear disclaimers, escalation to human assistance, and continuous quality monitoring.
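To make the chatbot guardrails above concrete, here is a minimal sketch of a scope-limited triage layer, assuming a simple keyword screen. The disclaimer text, topic lists, and routing labels are all illustrative assumptions; a production deployment would use far more robust intent classification and the court's own approved language.

```python
# Hypothetical guardrails for a public-facing court chatbot.
# All names and rules here are illustrative, not taken from the Policy.

DISCLAIMER = (
    "This chatbot provides general court information only. "
    "It does not provide legal advice. For case-specific questions, "
    "please contact the clerk of courts or seek counsel."
)

# Topics the bot may address (information) vs. phrasing that must escalate
# to a human, since answering could amount to legal advice.
IN_SCOPE = {"filing fees", "court hours", "forms", "directions", "docket"}
ESCALATE_KEYWORDS = {"should i", "my case", "advice", "win", "sue"}

def triage(question: str) -> str:
    """Return a routing decision for a public query."""
    q = question.lower()
    if any(k in q for k in ESCALATE_KEYWORDS):
        # Escalate to human assistance rather than risk giving legal advice.
        return "escalate_to_human"
    if any(topic in q for topic in IN_SCOPE):
        return "answer_with_information"
    return "refer_to_resources"

print(DISCLAIMER)
print(triage("What are the court hours on Friday?"))  # informational
print(triage("Should I sue my landlord?"))            # escalated
```

The design choice reflected here matches the Policy's emphasis: the system defaults away from substantive answers, and anything resembling a request for advice routes to a human, preserving accountability.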

Open Questions and Areas to Monitor

  • Verification of “Secured” Status: What forms of assurance suffice (contractual attestations, third-party audits, technical certifications)? The Policy mandates due diligence but does not prescribe a particular assurance framework.
  • Disclosure Mechanics: How should “disclosure of GenAI use” be implemented (footnotes, internal cover sheets, document metadata, docket notes for public documents)? Local practices may vary pending further guidance.
  • Scope of “Preliminary” Research: Where is the line between preliminary AI-aided research and authoritative research requiring traditional databases? Courts may develop internal protocols.
  • Retention for Audit vs. Strict Non-Retention: The Policy’s “no retention” rule applies to user data entered into the AI system. Courts should reconcile this with the judiciary’s own record-keeping and audit needs without allowing vendors to retain content.

Key Takeaways

  • New Controlling Principle: Non-public judicial data may be used with GenAI only in a Secured AI System that guarantees confidentiality, non-retention, no model training on user data, no third-party sharing, and no exposure to the public domain.
  • Human Oversight is Mandatory: Personnel are accountable for accuracy and bias mitigation; GenAI outputs must be reviewed and verified.
  • Approved Tools and Possible Disclosure: Only Leadership-approved tools may be used; supervisors can require pre-approval and disclosure of GenAI assistance in work product.
  • Permitted Uses with Guardrails: Summarization, initial drafting, readability edits of public materials, preliminary legal research with reputable sources, and public-facing chatbots are allowed—subject to security and ethics constraints.
  • Leadership’s Enforcement Role: Leadership must ensure compliance, including rigorous review of vendor contracts and EULAs, and align county-managed technology with Policy requirements.

Conclusion

The Supreme Court of Pennsylvania’s Interim Policy establishes a comprehensive governance framework for GenAI in the courts, anchored by two central commitments: protecting non-public judicial information through secured-only AI environments and preserving human responsibility for accuracy, ethics, and fairness. By defining permitted uses, mandating competence, and assigning clear procurement and oversight duties, the Policy enables cautious, beneficial adoption of GenAI while guarding against data leakage, bias, and unreliable outputs.

In the broader legal context, this Policy exemplifies how courts can integrate emerging technologies without compromising institutional integrity or litigant privacy. As implementations mature and technologies evolve, further guidance may refine assurance mechanisms for “secured” systems, standardize disclosure practices, and delineate the contours of permissible research. For now, the message is unequivocal: leverage GenAI for efficiency and access to justice, but do so under strict confidentiality controls and with vigilant human oversight.

Note: This commentary is for informational purposes and is based on the text of the Interim Policy as provided. It does not constitute legal advice.
