

⚖️ AI Governance Best Practices for Law Firms

Feb 24, 2026 | Blog

Generative AI (GAI) is now embedded in the day-to-day practice of law, sometimes as an obvious “chat” interface, but increasingly as a quiet feature inside research platforms, document tools, contract analytics, eDiscovery, and even email and productivity suites. That reality creates a governance problem: firms need a repeatable way to control who can use AI, for what, with what data, and under what verification and supervision standards.

The ABA’s Standing Committee on Ethics and Professional Responsibility put a bright spotlight on these issues in Formal Opinion 512 (July 29, 2024), emphasizing that lawyers must account for duties of competence, confidentiality, communication, supervision, candor, and reasonable fees when using generative AI tools.

The practical message for firms is straightforward: “AI governance” is now part of professional responsibility risk management, not a discretionary tech initiative.

What “AI Governance” Means in a Law Firm

AI governance is the operating system that turns ethical duties into daily workflows. In a law-firm context, a credible program typically includes:

  • Rules (policies, standards, and client-facing commitments)
  • Process (intake, approvals, audits, incident response)
  • People (clear accountability and supervision)
  • Technology controls (access management, logging, data loss prevention (DLP), approved tools)
  • Verification discipline (how AI outputs are checked before they become advice, filings, or client deliverables)

Two widely used governance frameworks map well onto law-firm needs:

  • NIST AI Risk Management Framework (RMF) frames AI risk management as a lifecycle approach organized around Govern, Map, Measure, Manage.
  • ISO/IEC 42001 describes an “AI Management System” model (policies, objectives, roles, supplier controls, continuous improvement) that aligns naturally with firm governance structures.

You do not need a certification program to benefit from these frameworks. They function well as scaffolding for law-firm controls.

The Risk Categories That Drive Law-Firm Governance

Confidentiality and Data Exposure

Formal Opinion 512 is explicit that confidentiality duties apply when lawyers use GAI tools, and that lawyers must consider the risks associated with a tool’s operation, especially when client information is input into external systems.
Governance implication: firms must classify tools (public vs. enterprise vs. on-prem vs. vendor-embedded) and define what data may be shared with each class.
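
To make that classification usable day to day, it helps to express it as a simple lookup that intake reviewers (or automated DLP rules) can apply consistently. The sketch below is illustrative only: the tool classes, data classes, and permitted pairings are hypothetical placeholders, and each firm should substitute its own approved-tool inventory and data taxonomy.

    # Minimal sketch (hypothetical categories): each tool class gets an explicit
    # ceiling on the most sensitive data class it may receive.
    DATA_CLASSES = ["public", "internal", "client_confidential", "privileged"]  # least to most sensitive

    TOOL_CLASS_CEILING = {
        "public_consumer": "public",               # free tools with no enterprise terms
        "vendor_embedded": "internal",             # AI features inside existing platforms, pending review
        "enterprise_saas": "client_confidential",  # contract includes no-training and retention terms
        "on_prem":         "privileged",           # runs inside firm-controlled infrastructure
    }

    def sharing_permitted(tool_class, data_class):
        """True if data of the given class may be sent to a tool in the given class."""
        ceiling = TOOL_CLASS_CEILING.get(tool_class, "public")  # unknown tools default to public-only
        return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(ceiling)

    # Example: client-confidential material may go to an approved enterprise tool, but not a public one.
    assert sharing_permitted("enterprise_saas", "client_confidential")
    assert not sharing_permitted("public_consumer", "client_confidential")

The point is less the code than the discipline: every tool class has an explicit ceiling, and anything unclassified defaults to the most restrictive treatment.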

Accuracy, Hallucinations, and Citation Risk

The ABA warns that GAI can produce “hallucinations” and that lawyers must apply appropriate independent review to avoid incompetent work and misleading submissions.
Governance implication: firms need defined verification workflows for different use cases (research memos, contracts, filings, client advice, marketing).

Supervision, Agents, and Workflow Discipline

Formal Opinion 512 ties AI use to duties to supervise those assisting with legal services (including nonlawyers and “agents”) and to maintain overall accountability for the work product.
Governance implication: the firm must treat AI as a regulated capability, not an ad hoc personal preference.

Fees and Billing Judgment

The ABA flags that time spent learning a tool generally shouldn’t be billed to clients, while time spent using and verifying outputs may be billed if reasonable.
Governance implication: firms need consistent billing guidance and documentation expectations when AI is used.

The Core Governance Controls Every Firm Should Implement

Below is a practical “minimum viable governance” package, written for firms that want controls that auditors, clients, and risk committees can recognize as serious.

1) Adopt a Firm AI Use Policy (Compliance Doc #1)

This is the keystone document. It should be short enough to be used, but specific enough to be enforceable. It should include:

  • Tool categories & approval status (Approved / Conditional / Prohibited)
  • Data handling rules (client confidential info, PHI, trade secrets, internal firm strategy)
  • Use-case boundaries (brainstorming vs. drafting vs. research vs. filings)
  • Verification standards (what must be checked, and by whom)
  • Client communication triggers (when disclosure/consent may be required)
  • Recordkeeping (when prompts/outputs must be retained in the matter file)
  • Escalation (what to do when AI output appears wrong, biased, or risky)

Formal Opinion 512 is a strong backbone for the policy’s “why,” because it explicitly ties GAI use to competence, confidentiality, communication, supervision, candor, and fees.
NIST AI RMF’s GOVERN function provides a practical structure for assigning accountability and defining risk tolerance.

2) Create a Practice-Group AI Intake Questionnaire (Compliance Doc #2)

Firms usually underestimate how many “AI use cases” exist until they inventory them. A lightweight intake questionnaire (completed by each practice group and updated quarterly) should capture the following (a simple way to record the answers is sketched below):

  • What tools are being used (including vendor-embedded AI features)
  • Whether client data is input (and what types)
  • Whether outputs are client-facing or court-facing
  • Reliance level (idea generation vs. substantive legal conclusions)
  • Human review steps currently used
  • Known failure modes (e.g., hallucinated citations, drafting errors, confidentiality risk)

This document operationalizes the MAP step of NIST AI RMF, capturing context, stakeholders, intended use, and impact.
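
One way to keep the questionnaire’s answers comparable across practice groups is to record each use case in a fixed format. The sketch below is a minimal illustration; the field names and example values are hypothetical, not a prescribed schema.

    # Minimal sketch (hypothetical field names): one intake record per AI use case.
    from dataclasses import dataclass, field

    @dataclass
    class AIUseCaseIntake:
        practice_group: str
        tool_name: str                                          # include vendor-embedded AI features
        client_data_types: list = field(default_factory=list)  # empty list = no client data goes in
        output_audience: str = "internal"                       # "internal", "client_facing", or "court_facing"
        reliance_level: str = "idea_generation"                 # vs. "substantive_conclusions"
        human_review_steps: list = field(default_factory=list)
        known_failure_modes: list = field(default_factory=list)

    example = AIUseCaseIntake(
        practice_group="Litigation",
        tool_name="Research platform AI assistant",
        client_data_types=["case facts"],
        output_audience="court_facing",
        reliance_level="substantive_conclusions",
        human_review_steps=["cite check against primary sources", "partner review"],
        known_failure_modes=["hallucinated citations"],
    )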

3) Implement an AI Vendor Confidentiality & Risk Checklist (Compliance Doc #3)

Most AI governance failures come from vendor terms and product design, not lawyer intent. A standard checklist should be required for any AI tool approval (including “free” tools and AI features embedded in existing vendor platforms). It should cover the points below (a simple pass/fail gate is sketched after the list):

  • Training use: Are prompts/outputs used to train models? Is opt-out available?
  • Retention & deletion: How long are prompts/outputs stored? Can the firm delete?
  • Disclosure: Can the vendor share data with affiliates/subprocessors/regulators?
  • Security: Access controls, encryption, incident response obligations
  • Logging/auditing: Can the firm audit use and retrieve logs for investigations?
  • Data residency (if relevant to client requirements)
  • Subprocessor list and change controls
  • IP/ownership terms for outputs and any customer materials

ISO/IEC 42001’s emphasis on lifecycle and supplier controls is directly relevant here.
And Formal Opinion 512’s confidentiality and supervision themes give the risk rationale that partners and clients will understand.
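
A checklist is only effective if certain answers actually block approval. The sketch below shows one way to make that gating explicit; the question keys and the choice of hard blockers are hypothetical and would be set by the firm’s own risk committee.

    # Minimal sketch (hypothetical keys): some checklist findings are hard blockers for approval.
    HARD_BLOCKERS = {
        "prompts_used_for_training_without_opt_out",
        "no_deletion_on_request",
        "no_security_or_confidentiality_commitments",
    }

    def approval_status(findings):
        """findings maps a risk condition to True when the vendor review found it present."""
        blockers = [name for name, present in findings.items() if present and name in HARD_BLOCKERS]
        if blockers:
            return "Prohibited: " + ", ".join(blockers)
        return "Conditional: proceed to contract and security review"

    print(approval_status({
        "prompts_used_for_training_without_opt_out": True,
        "no_deletion_on_request": False,
    }))  # -> Prohibited: prompts_used_for_training_without_opt_out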

Verification

Formal Opinion 512 makes the key point: lawyers may use AI as a tool, but cannot offload professional judgment to it, and must independently review outputs to a degree appropriate to the task and the tool’s risk profile.

A governance-ready verification standard usually looks like this:

  • Research / citations: citations must be validated against primary sources (and quote-checked).
  • Factual statements: verify against the record and reliable sources; document checks.
  • Drafting: treat AI output as a “drafting aid,” not a final document; the lawyer must edit substantively.
  • Filings / tribunal-facing content: require a heightened review checkpoint and, where relevant, compliance with court rules and local orders about AI use.

This can be documented as a one-page “Verification Matrix” appended to the AI Use Policy.
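
As a rough illustration of what that matrix can look like in structured form, the sketch below maps use cases to the checks required before an output is released. The categories and checks are placeholders drawn from the list above, not a definitive standard.

    # Minimal sketch (placeholder categories): use case -> checks required before release.
    VERIFICATION_MATRIX = {
        "research_memo": [
            "validate every citation against the primary source",
            "quote-check all quoted language",
        ],
        "client_advice": [
            "verify factual statements against the record and reliable sources",
            "document the checks performed in the matter file",
        ],
        "drafting": [
            "treat output as a first draft only",
            "substantive lawyer edit before circulation",
        ],
        "tribunal_filing": [
            "heightened review checkpoint (second reviewer)",
            "confirm compliance with court rules and standing orders on AI use",
        ],
    }

    def required_checks(use_case):
        """Unknown use cases default to the strictest tier rather than to no review."""
        return VERIFICATION_MATRIX.get(use_case, VERIFICATION_MATRIX["tribunal_filing"])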

Conclusion

The ABA has made the direction clear: lawyers and law firms must treat generative AI as part of the ethical risk environment, especially for competence, confidentiality, communication, supervision, and billing.

For most firms, the fastest path to defensible governance is not a 50-page policy manual. It is a tight compliance “starter set” that can be implemented, audited, and improved.

 
