If you use AI to screen job applicants, personalize marketing, analyze employee performance, or automate any decision that affects a person in California, new laws that took effect January 1, 2026 may already apply to your business.
California's regulatory updates aren't aspirational—they are in force now. And given California's economic size and the geographic distribution of digital commerce, compliance attorneys are treating these rules as the effective national standard, regardless of where your company is incorporated.
Here is what changed, who is affected, and what you specifically need to do.
The CCPA's Expanded Automated Decisionmaking Rules
The California Privacy Protection Agency finalized sweeping updates to its regulations under the California Consumer Privacy Act that specifically govern Automated Decisionmaking Technology (ADMT), a category now defined extraordinarily broadly.
Under the updated regulations, ADMT means any technology that processes personal information using computation to replace or substantially replace human decision-making. That definition captures:
- AI-assisted resume screening and hiring tools
- Employee performance monitoring and productivity tracking software
- Behavioral advertising platforms using AI-driven targeting
- AI-powered credit and loan assessment tools
- Any algorithmic customer segmentation or scoring system
If your business uses any of the above, and any person affected by those decisions is a California resident, these rules apply to you.
What the Rules Require
Mandatory Pre-Use Notification. Before using ADMT to make a significant decision about someone, you must provide a plain-language, clearly visible notice explaining: what the system does, the fundamental logic it uses, and the individual's right to opt out. This notice must be accessible before the processing occurs—not buried in a terms of service document.
Two Accessible Opt-Out Methods. You must offer at least two clear, accessible mechanisms through which individuals can opt out of automated processing. A broken form or a link that requires five clicks to find does not meet this standard.
Formal Risk Assessments. You must conduct and document exhaustive assessments that weigh the privacy harms and discriminatory risks of your AI system against its operational benefits. These assessments must be updated whenever material changes are made to the system. Formal submissions to the California Privacy Protection Agency will be required annually starting in 2028.
Meaningful Human Oversight. The regulations provide a limited exemption from some ADMT obligations if a human is genuinely involved in the final decision—but the bar for "genuine" is high. The human reviewer must understand how to interpret the AI's output, must actively analyze it alongside other relevant information, and must have actual authority to override the AI's recommendation. A rubber-stamp process does not qualify.
Critical for outsourced AI: The regulations explicitly state that delegating AI operations to a third-party vendor does not transfer your compliance liability. Your business remains fully responsible for the practices of every vendor whose AI tools touch your customers' or employees' data.
The Cybersecurity Audit Threshold
Separate provisions within the updated regulations require covered businesses with under $50 million in annual revenue to complete comprehensive cybersecurity audits by April 1, 2030; larger businesses face earlier deadlines. For smaller businesses, 2030 may feel distant, but the preparation required (documenting AI systems, establishing governance policies, and verifying vendor compliance) takes considerably longer than most owners anticipate.
Assembly Bill 2013: AI Training Data Disclosure
Also effective January 1, 2026, California's Assembly Bill 2013 mandates unprecedented transparency from any developer that creates or substantially modifies a generative AI system made publicly available in California.
Required public disclosures include:
- The origins and ownership of training datasets
- The presence of copyrighted, trademarked, or patented material in the training data
- Whether personal information was included
- The volume of data points used
- Whether synthetic data generation was applied during development
For small businesses and startups building proprietary AI tools as a competitive advantage, AB 2013 creates an immediate tension: the law requires a public "high-level summary" of training datasets but provides no explicit intellectual property exemption. A business attempting to comply in good faith may inadvertently expose its "secret recipe" of curated training data to better-funded competitors.
The statute also does not define exactly how granular that summary must be, leaving businesses to navigate ambiguity with real legal exposure on both sides: insufficient disclosure risks regulatory action; excessive disclosure risks competitive harm.
How Cyber Insurance Is Changing
Regulatory pressure is reshaping cyber insurance underwriting at the same time new threats are emerging. For small businesses that historically relied on a standard cyber policy as a financial backstop, several things have changed.
Insurance carriers now require AI governance documentation. To qualify for a viable policy in 2026, carriers want explicit evidence of data provenance tracking, model explainability practices, and human-in-the-loop oversight mechanisms. If you cannot demonstrate that you test your AI systems for discriminatory bias, or cannot trace the lineage of the data behind automated decisions, you may face denial of coverage or significantly elevated premiums.
Coverage gaps around "machine-driven errors" are real. When an AI agent autonomously executes a flawed financial transaction, hallucinates a defamatory statement, or scrapes copyrighted content, the resulting liability often straddles the boundary between a standard cyber policy (covering data breaches and network disruptions) and an Errors & Omissions policy (covering professional negligence). Many legacy policies have no explicit language for AI-generated incidents—meaning your claim may not be covered at all.
What to do: Request explicit policy language addressing AI-generated incidents before renewing or purchasing a cyber policy. Ask specifically about coverage for prompt injection attacks, AI-generated hallucinations that result in financial loss, and recommendation poisoning that leads to business decisions based on corrupted advice.
Your Practical Compliance Roadmap
Compliance doesn't require an enterprise budget. It requires a structured starting point.
Step 1: Inventory Your AI Systems (This Week)
Create a simple register of every AI tool your business uses. For each one, document:
- What personal data it processes
- What decisions it influences or makes
- Who the vendor is and what their data retention policy states
- Whether a California resident could be affected
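The register does not need to be elaborate; a spreadsheet works, and so does a few lines of code your team can query. Here is a minimal sketch in Python covering the four fields above (the tool names, vendors, and retention terms are hypothetical placeholders, not recommendations):

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in the AI system inventory."""
    name: str                  # the tool or integration
    vendor: str                # who operates it
    personal_data: list[str]   # categories of personal data it processes
    decisions: list[str]       # decisions it influences or makes
    retention_policy: str      # the vendor's stated data retention policy
    california_exposure: bool  # could a California resident be affected?

# Hypothetical example entries; substitute your actual tools.
register = [
    AIToolRecord(
        name="ResumeRanker",
        vendor="ExampleHR Inc.",
        personal_data=["names", "employment history"],
        decisions=["interview shortlisting"],
        retention_policy="retained 24 months, used for model training",
        california_exposure=True,
    ),
    AIToolRecord(
        name="AdSegmenter",
        vendor="ExampleAds LLC",
        personal_data=["browsing behavior"],
        decisions=["ad targeting"],
        retention_policy="retained 12 months, not used for training",
        california_exposure=False,
    ),
]

# Tools with California exposure are the ones the ADMT rules reach first.
in_scope = [r.name for r in register if r.california_exposure]
print(in_scope)
```

Sorting the register by California exposure gives you an immediate priority list for the notice, opt-out, and risk-assessment work that follows.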
Step 2: Implement an AI Acceptable Use Policy (This Month)
A written policy that specifies approved tools, prohibited data types, and required human oversight checkpoints is the foundation of both compliance and basic security. The policy should explicitly require human review of any AI-generated decision that affects hiring, performance, credit, or customer service outcomes.
Step 3: Review Vendor Agreements (This Quarter)
Every SaaS contract and automation platform agreement should be reviewed for clauses covering:
- Data retention and model training use
- Breach notification responsibilities
- AI bias and audit rights
- Compliance representations for CCPA and AB 2013
Pro Tip: Under the updated CCPA, "we are not responsible for our vendors' practices" is not a legal defense. Include explicit AI governance representations in your vendor contracts, and terminate agreements with vendors who cannot provide them.
Step 4: Align With the NIST AI Risk Management Framework
The National Institute of Standards and Technology's AI Risk Management Framework is a voluntary but highly practical tool for small businesses. It provides a structured methodology for identifying, assessing, and mitigating AI-specific risks, and its language maps closely to what regulators and insurance carriers expect to see in a compliance program.
For healthcare or financial services businesses specifically, the companion NIST Privacy Framework closes the gap between cybersecurity (preventing unauthorized access) and privacy compliance (ensuring authorized AI processing doesn't cause harmful inferences about individuals).
Step 5: Address Technical Controls
Four technical controls that provide disproportionate risk reduction:
- Automated dependency scanning in any software development pipeline to catch slopsquatting before a hallucinated package installs malware
- Strict input sanitization for any AI system that processes external documents, emails, or customer inputs
- API permission audits to enforce least-privilege access on every AI integration
- Differential privacy implementation for any model fine-tuned on sensitive customer or patient data
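The first control above can start as something very simple: a gate that checks requested package names against a vetted allowlist before anything is installed. A minimal sketch, where the allowlist and package names are hypothetical:

```python
# Minimal dependency gate: reject any package not on a vetted allowlist.
# This catches "slopsquatting," where an attacker registers a package name
# an AI assistant is likely to hallucinate, before it reaches `pip install`.
APPROVED = {"requests", "numpy", "pandas"}  # hypothetical vetted list

def vet_dependencies(requested: list[str]) -> tuple[list[str], list[str]]:
    """Split requested packages into approved and blocked lists."""
    approved = [p for p in requested if p.lower() in APPROVED]
    blocked = [p for p in requested if p.lower() not in APPROVED]
    return approved, blocked

# "reqeusts" is the kind of typo-adjacent name an AI assistant might invent.
ok, flagged = vet_dependencies(["requests", "reqeusts", "numpy"])
print("install:", ok)
print("review:", flagged)
```

In practice you would wire a check like this into your CI pipeline or pre-commit hooks so that any flagged name is reviewed by a human before installation.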
The Cost of Waiting
An estimated 47% of small businesses currently operate with no dedicated cybersecurity budget—and the threats covered in this series are specifically designed to remain invisible until the damage is done.
The regulatory penalties compound the financial risk. CCPA violations carry fines of up to $7,500 per intentional violation. At scale, across a customer list of even moderate size, non-compliance with the ADMT rules is an existential financial exposure for a small business.
The good news: the foundational steps—policy documentation, vendor review, AI inventory, NIST alignment—cost time, not money. The businesses that start now will be positioned for the 2028 mandatory reporting requirements from a position of strength rather than crisis response.
This concludes our five-part AI security for small business series. Start from the beginning with Shadow AI: The Invisible Risk Hiding Inside Your Own Team, or review the full series:
- Shadow AI and the Hidden Productivity Trap
- Slopsquatting and AI Supply Chain Risks
- Prompt Injection, AI Worms, and Autonomous Agent Attacks
- AI Recommendation Poisoning and Model Inversion
- California's New AI Laws and Your Compliance Roadmap (this article)
Need help building an AI governance and compliance program for your business? Schedule a consultation with the SafeLab team—we'll start with an AI inventory and give you a prioritized action plan.