Human + AI: Being Smart on AI Regulation
Striking the Balance: Navigating the Complexities of AI Regulation in a Human-Centric World
Google is one of the world’s most recognizable brands and a trailblazer in the field of artificial intelligence (AI). They acquired AI pioneer DeepMind in 2014 and have prioritized AI ever since, investing billions of dollars and leading the pack in the creation of foundation models. According to the Stanford AI Index Report for 2024, they have also spent the most on their models, in training costs and compute.
And yet, they’ve often made the news for public AI blunders in their rush to keep up. As Bloomberg reported in April 2023, internal testers at Google warned about the poor performance of their then-current chatbot, Bard, calling it “a pathological liar” and “cringe-worthy.”
Despite pledging in 2021 to double their AI ethics team, Google’s leadership has instead felt compelled to embrace risk more aggressively to keep up with the competition.
In 2024, these woes have continued, with an uproar over inaccurate image generation that portrayed Nazis as members of the global majority, and with the current AI Overviews flubs: advising people to eat rocks, suggesting glue on pizza, and declaring that US President Andrew Jackson graduated from college in 2005.
As one of the world’s private-sector leaders in AI, Alphabet, the company behind Google, will get these issues fixed, but each episode brings a hit to their credibility as the world’s foremost expert on search. As a bridge to information, how much falsity can Google output and still retain the public’s trust?
For me, this is at the heart of our need for good AI regulation: smart checks and balances to protect all of us, from CEO to end-user, from the dangers of a runaway market and an insufficiently tested, or safeguarded, product.
In this article, we consider this need, the regulation that exists today (including Colorado’s new law), and how organizations can best prepare for and make use of it.
Why We Need Smart AI Regulation
It recently made news across media outlets that AI firm Anthropic had gained some insight into what goes on inside a generative AI model when it creates its outputs.
“What's going on inside of them?” AI researcher Chris Olah told Wired’s Steven Levy. “We have these systems, we don't know what's going on. It seems crazy.”
Their research revealed combinations of artificial neurons, which the researchers call features, activating as certain topics were considered, and showed how these could be manipulated, or even potentially controlled.
This demonstrates that even the savviest AI developer does not fully understand how the technology currently works.
At a time when Google is racing to ship product and OpenAI is disbanding its own AI risk team, researchers like Chris Olah at Anthropic are working against significant resistance to get us the information we need. Resources, he tells Levy, have been hard to come by.
“It’s not cheap.”
This is where good regulation can help the industry reach even greater heights. In the Human + AI framing, we need human action to reap these benefits of AI regulation:
Safety: Protection against the use of AI for malicious ends (such as generating weaponry, cyberattacks, and criminal disinformation) and against dangerous system development.
Privacy: When AI systems have extensive access to personal data and we poorly understand their behavior, how can we define and ensure privacy protections for anyone?
Bias: From the start, we’ve seen AIs amplify bias alongside their incredible capabilities. Regulation can help organizations promote fairness and prioritize diverse and representative datasets.
Trust: With ignorance about AI rampant, building public trust in AI grows only more essential. Clear, effective regulation helps provide the foundation that trust is built on.
Legal Recourse: All technological innovations carry a risk of harm, and regulations help organizations and individuals alike understand their exposure and recourse when issues arise. Smart regulation provides protection as well as legal recourse for AI harms.
Balance and Accountability in AI: Good regulation protects companies by enforcing (and thus helping them fund) rigorous testing, documentation, and continuous monitoring of their AI systems.
AI Regulations as They Stand Now
EU
The European Union’s AI Act was proposed in 2021, approved in 2024, and becomes fully applicable in 2026, making it the world’s most prominent AI regulation. The stated goal is to establish standards and regulations for AI within the EU, setting risk categories, requiring transparency and labeling, and governing data use to ensure proper privacy and disposal.
Its risk categories determine the obligations businesses must meet and the penalties for non-compliance, and they will no doubt influence other laws to come. The categories are listed below, followed by a minimal sketch of how an organization might encode them for internal compliance tracking:
Unacceptable: AI in this category is prohibited outright and faces the strictest penalties in the law. Examples include systems that do (or seek to do) harm, manipulate people or groups through subliminal techniques, exploit vulnerabilities, or score people based on behavior or personal characteristics.
High: Systems that fall under the EU’s product safety legislation, such as AI in toys, aircraft, and medical technology, as well as those used in education and in public services like law enforcement. Companies must register, establish quality systems, conduct evaluations with stringent record-keeping, provide proper disclosure and transparency, report incidents, and meet data-management requirements for systems in this category.
Limited: With limited potential for harm or infringement of consumer rights, this category includes chatbots and other forms of AI-generated content. The main requirements here involve transparency and disclosure.
Minimal: The lowest category is unregulated and covers systems deemed to pose no threat at all, such as spam filters, ad blockers, and internal video game features like inventory managers.
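As a rough illustration of how these tiers might drive day-to-day compliance work, here is a minimal sketch in Python that maps each tier to an internal checklist. The tier names follow the Act, but the obligation lists, names, and function are illustrative assumptions on my part, not legal guidance.

```python
from enum import Enum


class EUAIRiskTier(Enum):
    """The EU AI Act's four risk tiers (tier names per the Act; everything else illustrative)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # registration, quality systems, record-keeping, reporting
    LIMITED = "limited"            # transparency and disclosure obligations
    MINIMAL = "minimal"            # unregulated


# Illustrative mapping from tier to internal compliance tasks -- not legal advice.
OBLIGATIONS = {
    EUAIRiskTier.UNACCEPTABLE: ["halt deployment", "escalate to legal"],
    EUAIRiskTier.HIGH: [
        "register system",
        "establish quality management system",
        "maintain technical documentation and logs",
        "report serious incidents",
    ],
    EUAIRiskTier.LIMITED: ["label AI-generated content", "disclose AI interaction to users"],
    EUAIRiskTier.MINIMAL: [],
}


def compliance_tasks(tier: EUAIRiskTier) -> list:
    """Return the internal checklist associated with a system's assessed risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    # Example: a customer-support chatbot would typically fall under the limited tier.
    for task in compliance_tasks(EUAIRiskTier.LIMITED):
        print(task)
```

In practice, legal counsel would define both the classification and the obligations; the point is simply that encoding the tiers early makes later audits and reporting easier.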
Recently, we’ve seen the EU press Microsoft to disclose information on generative AI features in Bing, including Copilot, citing the potential for hallucination and concern that risks were not properly disclosed, with the threat of fines for failure to comply.
USA
In the US, the AI regulatory push is ongoing, driven so far by the EU’s law, the Biden Administration’s Executive Order on AI, and the Federal Trade Commission (FTC). Numerous US states have task forces looking into legislation, though as of May 2024, only Colorado has passed a comprehensive AI law. The primary focus at the state level is usually on privacy rights, including AI use in hiring and medicine, with clear disclosure requirements and opt-out options almost certain to follow.
Executive Order and the White House: The executive order lays out goals and guidance, requiring that developers share safety test results, directing NIST to set standards and red-team testing for public safety, and directing the Departments of Commerce, Homeland Security, Energy, and others to protect against AI-driven threats. It urges Congress to pass legislation on privacy and equity, to protect students, and to support workers, while not stifling innovation and competition. Since then, we’ve seen a December 1 deadline issued for agencies to implement AI safeguards, along with requirements for internal transparency and for designated AI leaders to be in place across government.
Colorado: With a grace period of nearly two years, this new law takes full effect in February 2026. It requires compliance from creators and users of AI alike, with most obligations falling on “high-risk” systems: those that influence decisions in areas like employment, finance, government, healthcare, insurance, and legal services. Developers must disclose information on a system’s effects, conduct an annual review for discrimination, and report any failures to the state attorney general. Users must adhere to a risk management policy, conduct system reviews within 90 days of modifications, and notify consumers when AI has been used in decision-making. Consumers are given the right to opt out of profiling.
Colorado’s governor signed the bill into law in May, but stated that it was his goal not to stifle innovation in the process. It remains to be seen how many other states will follow this path in the months to come.
How To Stay Ahead
With reasonable regulation needed and, as we’ve seen, rapidly coming online, it’s up to businesses to stay ahead.
My advice is largely practical:
Implementing Proactive Compliance: I anticipate the EU regulations will spread, and I recommend maintaining clear audit trails for all AI use above the EU’s limited-risk classification. This includes documenting AI development and implementation, conducting internal risk assessments, and being transparent about data usage. (A minimal sketch of one such audit record follows these recommendations.)
Promoting Ethical AI Practices: Ethics are central to AI regulation, so maintain clear standards on bias tracking, privacy protection, and lines of accountability. As the new Colorado law requires, it’s wise to conduct an annual review of potential discrimination from AI wherever possible. Adapt when bias is discovered, and be ready to disclose any potential issues. (A sketch of one simple bias metric for such a review also follows these recommendations.)
Engaging with AI Regulators and Industry Groups: Regulation that doesn’t stifle innovation is the goal, so actively participate in discussions with regulators and industry groups to influence policy in this direction. Collaborate with other leaders to share your steps and gain insight into theirs.
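To make the audit-trail advice concrete, here is a minimal sketch, in Python, of what one append-only audit record for an AI-assisted decision could look like. The field names, file format, and the resume-screening example are illustrative assumptions, not requirements drawn from any specific regulation.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AIUsageAuditRecord:
    """One audit-trail entry for a decision assisted by an AI system.
    Field names are illustrative, not taken from any regulation."""
    system_name: str     # internal identifier of the AI system
    risk_tier: str       # e.g. "high" or "limited", from your own assessment
    purpose: str         # what the system was used for
    model_version: str   # which model/version produced the output
    data_sources: list   # datasets or inputs involved
    human_reviewer: str  # who reviewed or signed off on the output
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def append_audit_record(record: AIUsageAuditRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record as one JSON line; a real system would use tamper-evident storage."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")


# Example usage: logging an AI-assisted resume-screening decision (hypothetical values).
append_audit_record(AIUsageAuditRecord(
    system_name="resume-screener",
    risk_tier="high",
    purpose="shortlisting candidates for interview",
    model_version="v2.3",
    data_sources=["applicant_resumes_2024"],
    human_reviewer="hiring.manager@example.com",
))
```

A real deployment would likely write to access-controlled, tamper-evident storage and tie each record to the internal risk assessment it references.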
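And for the annual discrimination review, here is an equally minimal sketch of one common fairness check: the gap in selection rates between demographic groups (demographic parity). The sample data and the 0.1 threshold mentioned in the comment are illustrative; real reviews typically combine several metrics with legal and domain expertise.

```python
from collections import defaultdict


def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}


def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups (0 = perfect parity)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())


# Illustrative data: (demographic group, whether the AI-assisted decision was favorable).
sample = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)
print(f"Selection-rate gap: {gap:.2f}")  # flag for review if above your chosen threshold, e.g. 0.1
```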
Conclusion
At the time of this writing, Google’s AI issues are still front and center, even in a fast-moving media cycle. That is an unfortunate position for a company at the forefront of research and development in the world’s hottest field, not to mention the potential impact on user trust.
These failures are not a surprise; rather, they were deemed a necessary risk, on the logic that it is better to ask forgiveness than to risk falling behind in output. They also increase both the need for, and the oncoming pace of, broader regulation.
In the Human + AI series, we look at the actions we need to take in the ongoing human-AI collaboration, and that includes being smart on AI regulation.
It is not just about compliance: businesses that embrace their own role in regulation and successfully anticipate it will be better positioned as leaders of this AI-partnered future.