Colorado lawmakers, watched across the country, scale back artificial intelligence law
Pushback from Gov. Polis led to proposed changes that would delay the law by one year and exempt businesses with fewer than 500 employees for the first 15 months. All companies using AI must still abide by state anti-discrimination laws.


With about a week left in this year’s lawmaking session, a new bill was introduced Monday that would change Colorado’s controversial artificial intelligence law — or at least its impact on small businesses.
Senate Bill 318 would reduce the administrative steps smaller companies must take to protect consumers against discrimination when their AI systems are used to decide who gets a job, housing, a personal loan, health care, insurance coverage, educational opportunities, or legal or essential government services.
The measure would make the resource-intensive parts of the AI law initially apply to companies with 500 or more employees worldwide, instead of 50 or more. That threshold would step down gradually until April 1, 2029, when companies with fewer than 100 workers would be exempt.
Without the revisions, any company covered by Colorado’s current AI law, even one that didn’t develop its AI, must complete risk assessments, notify consumers that AI is used to make critical decisions, and respond to consumers who appeal critical decisions influenced by AI. That law goes into effect Feb. 1.
“Drafting policy around AI is an uphill battle,” Senate Majority Leader Robert Rodriguez, a Denver Democrat and the bill’s prime sponsor, said in a statement. “As soon as you land on workable policy, you realize you’re two steps behind where the technology currently is.”
Senate Bill 318 builds on last year’s “first-of-its-kind legislation to implement commonsense guardrails, address concerns that I’m hearing from stakeholders, and ensure we’re keeping up with this evolving AI landscape,” he added. “The ultimate goal with this policy is to ensure that we’re protecting consumers that — whether they like it or not — are along for the AI ride.”
Consumer advocates were reluctantly OK with how the law ended up last year because it aimed to protect consumers from computer systems trained on biased data.
But the changes in the bill were disappointing, said Matthew Scherer, who focuses on workers’ rights as senior policy counsel at the Center for Democracy & Technology.
“Industry got nearly all of the changes it wanted, while public interest groups got only a fraction of what we wanted,” he said. “That said, while the bill strips the law down to its foundation, that foundation is still there and it’s still strong. Labor, consumer and civil rights groups are still processing, but I think there’s an understanding that the tech industry has spent a year trying to make an example out of Colorado and is feeling buoyed by their power in D.C., and this might be the best we can get right now.”
But for AI developers and pretty much any company that uses AI, including technology it didn’t develop, the law still imposes a big burden on small businesses. Some of it is just delayed, said Chris Erickson, cofounder of venture capital firm Range Ventures in Denver.
“The first few years gives them some relief from some parts of the bill but that does ramp up pretty significantly. And over time, you are still left with companies of a pretty small size having to implement a bunch of these things,” Erickson said. “Deployers in a lot of cases are not going to be tech savvy. They have purchased a piece of software that’s doing a lot of stuff in the background.”
Proposed changes
The measure would change Senate Bill 205, which faced an outcry from the tech and business community when it was passed by the legislature last year and signed into law by Gov. Jared Polis.
Some said it would hurt AI development in Colorado and that companies would go elsewhere, since no similar law exists in the U.S. Others complained about the heavier burden on small companies and tech startups, which, along with the big guys, must notify consumers when AI is used for critical decisions and provide explanations when asked.
They also took issue with the term “deployers,” meaning any company that uses its own AI or someone else’s to make consequential decisions. Deployers could be held liable if an AI-driven decision fell into one of eight covered categories: education, employment, financial services, health care, housing, insurance, legal services and essential government services.
The pushback prompted Polis, along with Rodriguez and state Attorney General Phil Weiser, to pursue revisions. A legislative task force was created, and volunteers from the business and consumer advocacy communities began meeting in August to find compromises. They found few, which led Rodriguez to spend weeks drafting an update.
The bill clarifies the definition of “algorithmic discrimination” as a decision made by AI that violates any local, state or federal anti-discrimination law. It also adds a new term, “principal basis,” to clarify that the law affects only AI systems that “make consequential decisions without meaningful human involvement.”
And it emphasizes that technologies in which AI isn’t a substantial factor in consequential decisions are excluded. Some of those are cited in the current law, which doesn’t consider tools like spell check, generative AI systems like ChatGPT or video games capable of making such decisions.
But the revisions add complexity to an already lengthy law that now seems to favor the business community, which was Polis’ intent when he asked for changes last year.
The bill strips out some of the language the industry felt was vague, like the requirement to use “reasonable care” to protect consumers from algorithmic discrimination.
It proposes limiting what consumers can appeal to decisions “based on incorrect personal data or unlawful information,” and excluding decisions constrained by deadlines or competition, like a job offer that is no longer available.
It narrows what “housing” and “financial or lending services” mean. Housing decisions would refer only to a person’s primary residence, and financial or lending services decisions would cover only personal and household financial services, according to the bill.
Also exempt are technology startups that have raised less than $10 million from third-party investors, have revenue below $5 million and have existed for less than five years.
Nonexempt companies would have to create a risk-management policy that spells out potential discriminatory risks and how they would mitigate them, and update it annually. Companies must let consumers know in a statement how decisions are made and what personal information may be considered by the AI system.
Trade secrets, as before, are still protected.
It will still be up to the attorney general to set rules, investigate violations and enforce civil penalties of up to $20,000 per violation starting Jan. 1, 2027. Companies and developers that find and fix inadvertent violations affecting fewer than 1,000 customers would not be liable.
An explanation of the bill shows Rodriguez essentially kept to his original premise: the law is meant to add guardrails so companies aren’t overly reliant on a technology system that could decide a person’s fate unfairly.
“This is a quintessential compromise bill,” said Grace Gedye, a policy analyst at Consumer Reports, which advocated for more consumer protections in the bill. “I suspect no one feels like they are getting everything they want here.”
Politics reporter Jesse Paul contributed to this story. This is a developing story and may be updated.