California’s Bold Move to Lay Down The Law in AI
California is the fifth largest economy in the world, larger than the UK, France or India. When the state legislates against Big Tech, it has a knock-on effect that reaches well beyond its borders. The latest act of legislation, under the guise of protecting privacy for Californians, is to put guardrails on AI.
The California Privacy Protection Agency (CPPA) has unveiled draft regulations aiming to shape the landscape of automated decision-making technology (ADMT) — in plain terms, AI.
This move by the CPPA marks a significant leap in setting comprehensive rules for the AI space. Taking cues from the European Union’s General Data Protection Regulation (GDPR), California seeks to enhance individual rights over automated decisions, providing a robust framework that tech giants can’t easily sidestep.
The proposed regulations include opt-out rights, pre-use notice requirements, and access rights, empowering Californians to have meaningful control over how their data fuels automation and AI tech. If these rules come into effect, major players like Meta, heavily reliant on user tracking for targeted ads, may face substantial challenges, just as they are in Europe.
The proposed definition for ADMT in the draft framework is:
“any system, software, or process — including one derived from machine-learning, statistics, other data-processing or artificial intelligence — that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision making.”
The definition also includes a reference to “profiling,” which is defined as:
“any form of automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements”
California’s risk-based approach aligns with the EU’s AI Act, emphasising the need for responsible AI use. However, with the EU struggling to reach a consensus on AI regulations, California could emerge as a global leader in shaping the AI landscape.
Here’s The Thing ➜ There’s a flaw in this approach: the tech companies can simply wrap the technology in layers of consent, neutralising the regulations.
We see this today with the abomination that is cookie controls. Does anyone actually read the cookie consent agreement before hitting the “accept” button? Of course not; we just hit whatever button will clear the consent banner from the page (and why are there cookie controls on EVERY PAGE!?)
Meta’s approach to getting around EU regulations on data privacy is to charge $15/month for an “ad-free” Instagram or Facebook experience (although the user still gets ads, just not the targeted ones).
So, whilst the world talks about building safety controls into AI, it’s clear that the regulators’ approach is to constrain the use and application of tech they deem harmful.
The regulators’ message is clear: you can build what you want, but you can’t use it if it’s harmful.
Further Reading
FREE TO WISER! READERS ➜ If you’re new to ChatGPT and want to know the basics of how to get started, what you can do with AI, and get sample prompts to get you going… sign up for the free weekly newsletter and get your free copy of The Beginner’s Guide To ChatGPT.
About The Author
Rick Huckstep has worked in technology his entire career, as a corporate sales leader, investor in tech startups and keynote speaker. From his home in Spain, Rick is a thought leader in artificial intelligence, emerging technologies and the future of work.
🤔 Join The Mailing List and Get Wiser! every week (and your free eBooks and resources)