

⚖️ Navigating AI Compliance: Lessons from Oregon’s Regulatory Framework

Jan 12, 2025 | Blog

AI Legal Compliance

While the guidance issued by the Oregon Department of Justice on December 24, 2024 (https://www.doj.state.or.us/wp-content/uploads/2024/12/AI-Guidance-12-24-24.pdf) focuses on how existing Oregon law applies to AI, it serves as a valuable primer for businesses everywhere navigating the complexities of AI regulation.

Many of the principles outlined—such as consumer protection, data privacy, and anti-discrimination—reflect broader concerns that regulators globally are addressing as AI becomes more integrated into business operations.

By understanding how existing laws, even those not explicitly written with AI in mind, can apply to AI-driven tools and practices, companies can proactively manage risks, build trust, and align their innovations with ethical and legal standards.

  1. The Shift to Probabilistic Computing
    Traditional computing produces deterministic, predictable results, but AI’s reliance on probabilistic models introduces new risks, such as bias, lack of transparency, and unclear accountability. Companies must address these risks to maintain fairness, trust, and compliance.
  2. Core Risks in AI Deployment
  • Privacy Concerns: AI systems often process large amounts of personal data, raising the risk of breaches or misuse.
  • Bias and Discrimination: Training data reflecting historical biases can lead to discriminatory outcomes, especially in areas like lending and hiring.
  • Transparency Issues: AI decisions can be opaque, complicating consumer understanding and redress.
  3. Applicable Oregon Laws and Compliance Expectations
  • Unlawful Trade Practices Act (UTPA):
    • Misleading claims about AI capabilities or outcomes, whether intentional or not, can violate the law.
    • Failing to disclose known defects, misrepresenting endorsements, or using manipulative pricing strategies with AI tools may result in liability.
  • Oregon Consumer Privacy Act (OCPA):
    • Requires clear disclosure about the use of personal data in AI training. Sensitive data use demands explicit consumer consent.
    • Developers must offer mechanisms for consent withdrawal and deletion of consumer data.
    • Data Protection Assessments are mandatory for activities posing significant risks.
  • Oregon Equality Act:
    • AI systems must avoid discrimination in decision-making processes. For example, biased training data leading to discriminatory lending or housing outcomes may constitute a violation.
  4. Key Takeaways for Businesses and Developers
  • Ensure Transparency: Clearly communicate AI capabilities, limitations, and data practices to consumers.
  • Mitigate Bias: Regularly audit AI systems to identify and rectify biased outcomes (a simple illustrative check appears after this list).
  • Protect Data Privacy: Adhere to privacy laws by obtaining explicit consent for data use and ensuring compliance with data protection measures.
  • Stay Proactive with Compliance: Conduct regular assessments to identify risks and align with evolving laws.
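
For developers, the bias-audit takeaway can start with something as simple as comparing outcome rates across groups. The sketch below is illustrative only, not legal advice or a compliance standard: it assumes a pandas DataFrame of hypothetical model decisions with made-up column names ("group", "approved") and uses the four-fifths rule as a rough screening heuristic.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group (fraction of favorable outcomes)."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest; values below 0.8 are a common screening flag."""
    return rates.min() / rates.max()

# Hypothetical lending decisions produced by an AI model (1 = approved).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 0, 1],
})

rates = selection_rates(decisions, "group", "approved")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: outcomes may merit a closer bias audit.")
```

A check like this is only a first-pass screen; a defensible audit would also examine the training data, the features the model relies on, and the real-world outcomes the statutes above are concerned with.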

How Can GalkinLaw Help?
