When Washington Fell Short, Colorado Took the Lead: States Are Now Driving AI Workplace Rules


By Zenia Pearl V. Nicolas

There was a real push in Washington earlier this year to lay down some rules around the use of AI in hiring, employee monitoring, and workplace decisions. The proposal had potential: it could have been the country’s first serious step toward regulating how technology influences people’s careers. But it never made it through Congress.

With the bill off the table, there’s no national rulebook. And for HR leaders, that raises a big question: What now?

The answer, at least for now, seems to be unfolding state by state. And Colorado has jumped in first.

Colorado’s AI Law Is Setting a New Standard


Colorado just passed a law that’s turning heads not just for how broad it is, but for who it holds accountable. This law doesn’t only focus on the tech companies building AI. It also puts responsibility on the companies using it.

Here’s what the law asks for:

Effective evaluations. If a company uses AI to help make personnel decisions, such as who gets hired or promoted, it must carefully examine how that system operates. Could it be biased against certain individuals? Could it miss something important? That kind of scrutiny is now required.

Clear notice. People are entitled to know when an algorithm, not just a human, is assessing them.

Shared responsibility. Even if an AI tool was built by a third party, the employer using it must still answer for how it works in practice.

It’s a shift from past tech regulations, which mostly focused on the creators. This time, users, especially employers, have skin in the game too.

Source: HR Dive – Colorado AI Law Sets High Bar, Analysts Say

HR Can’t Wait on Federal Rules


Artificial intelligence is already part of how many organizations hire, track performance, and shape workforce decisions. But without clear federal guidance, there’s growing pressure on HR to figure things out at the local level.

According to SHRM, states like California, New York, and Illinois are watching Colorado closely—and some are already drafting their own legislation. What’s happening now isn’t a one-off. It’s likely the start of a bigger shift.

For HR professionals, this means more than compliance. It means adapting fast, even when the rules change from state to state.

What HR Teams Should Be Asking

If you’re in HR, this is the moment to ask yourself and your team:

  • Are we using AI in any part of our hiring or employee review process?
  • Do we actually know how those tools work behind the scenes?
  • Have we ever looked into whether they could be biased—or legally risky?

Even if your company doesn’t operate in Colorado, these are smart questions to start asking now.
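One concrete way to start answering the bias question is a simple selection-rate comparison. The sketch below applies the "four-fifths rule" long used in US adverse-impact analysis: if the group with the lowest selection rate falls below 80% of the group with the highest, that is a common flag for further review. The numbers here are made up for illustration; this is a starting point for an internal audit, not a legal test.

```python
# Minimal adverse-impact check for a hiring tool's outcomes,
# using the four-fifths rule. All data below is illustrative.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total if total else 0.0

def adverse_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.
    A ratio below 0.8 is a common flag for potential adverse impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from an AI screening tool, by applicant group.
outcomes = {
    "group_a": selection_rate(selected=48, total=120),  # 0.40
    "group_b": selection_rate(selected=18, total=60),   # 0.30
}

ratio = adverse_impact_ratio(outcomes)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.30 / 0.40 = 0.75
if ratio < 0.8:
    print("Below the four-fifths threshold: review this tool's outcomes.")
```

A real audit would go further, looking at which features drive the tool's scores and whether the underlying data reflects past bias, but a quick check like this can surface problems before a regulator does.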

This Goes Deeper Than Policy

It’s easy to treat this as just a regulatory hurdle, but that’s only part of the story. These new laws push us to think harder about how decisions get made, and who should be accountable when outcomes go wrong.

HR plays a central role in this. It’s the team that brings people and systems together. If technology is becoming more involved in the hiring and evaluation process, then HR needs to be the voice that keeps fairness and human impact in focus.

This type of change isn’t new. California’s consumer privacy law sparked a wave of similar legislation across the country.

Colorado may be the first to act on AI in the workplace, but it’s probably not the last.

Here’s Where to Start

You don’t have to change everything at once, but you should begin establishing the foundation.

  • Consult with your legal and compliance departments. See what tools you’re currently using and whether they fall under these new rules.
  • Reach out to your vendors. Ask how their AI systems are built. What data do they use? How do they avoid bias?
  • Create your own internal standards. Think through how AI should (and shouldn’t) be used in your organization.

Decide what’s acceptable before you’re told what has to change.


Starting early puts your team in control—before the regulations catch up to you.

We tend to look to Washington for big policy shifts. But this time, it’s the states taking action. For HR, that means staying alert, staying flexible, and taking the lead on responsible AI adoption. Ultimately, the tools may be machine-driven, but the consequences still fall on people.

Source: States to Lead AI Workplace Regulation After Federal Ban Fails

This moment is more than a shift in policy; it’s a shift in accountability.

We’re entering an era where the decisions that shape careers may be made by machines, but the responsibility remains deeply human. As HR professionals, leaders, and changemakers, it’s on us to ask the harder questions, to push for transparency, and to advocate for fairness before regulations force our hand.

Because at the end of the day, behind every algorithm is a real person whose future could be rewritten in milliseconds.

Start the conversations. Audit the systems. And most importantly, lead with intention, because when we shape how AI is used in the workplace, we’re not just building compliance.

We’re protecting people.
