
One of President Donald Trump’s first executive orders in his second term called for developing an AI action plan. Photo by Anna Moneymaker/Getty Images
Imagine a not-too-distant future where you let an intelligent robot manage your finances. It knows everything about you. It follows your moves, analyzes markets, adapts to your goals and invests faster and smarter than you can. Your investments soar. But then one day, you wake up to a nightmare: Your savings have been transferred to a rogue state, and they’re gone.
You seek remedies and justice but find none. Who’s to blame? The robot’s developer? The artificial intelligence company behind the robot’s “brain”? The bank that approved the transactions? Lawsuits fly, fingers point, and your lawyer searches for precedents, but finds none. Meanwhile, you’ve lost everything.
This is not the doomsday scenario of human extinction that some people in the AI field have warned could arise from the technology. It is a more realistic one and, in some cases, already present. AI systems are already making life-altering decisions for many people, in areas ranging from education to hiring and law enforcement. Health insurance companies have used AI tools to determine whether to cover patients’ medical procedures. People have been arrested based on faulty matches by facial recognition algorithms.
By bringing government and industry together to develop policy solutions, it is possible to reduce these risks and future ones. I am a former IBM executive with decades of experience in digital transformation and AI. I now focus on tech policy as a senior fellow at Harvard Kennedy School’s Mossavar-Rahmani Center for Business and Government. I also advise tech startups and invest in venture capital.
Drawing from this experience, my team spent a year researching a way forward for AI governance. We conducted interviews with 49 tech industry leaders and members of Congress, and analyzed 150 AI-related bills introduced in the last session of Congress. We used this data to develop a model for AI governance that fosters innovation while also offering protections against harms, like a rogue AI draining your life savings.
Striking a balance
The increasing use of AI in all aspects of people’s lives raises a new set of questions to which history has few answers. At the same time, the urgency to address how it should be governed is growing. Policymakers appear paralyzed, debating whether to let innovation flourish without controls or to risk slowing progress through regulation. However, I believe the binary choice between regulation and innovation is a false one.
Instead, it’s possible to chart a different approach that can help guide innovation in a direction that adheres to existing laws and societal norms without stifling creativity, competition and entrepreneurship.
The U.S. has consistently demonstrated its ability to drive economic growth. The American tech innovation system is rooted in entrepreneurial spirit, public and private investment, an open market and legal protections for intellectual property and trade secrets. From the early days of the Industrial Revolution to the rise of the internet and modern digital technologies, the U.S. has maintained its leadership by balancing economic incentives with strategic policy interventions.
In January 2025, President Donald Trump issued an executive order calling for the development of an AI action plan for America. My team and I have developed an AI governance model that can underpin an action plan.
A new governance model
Previous presidential administrations have waded into AI governance, including the Biden administration’s since-rescinded executive order. There has also been an increasing number of regulations concerning AI passed at the state level. But the U.S. has mostly avoided imposing regulations on AI. This hands-off approach stems in part from a disconnect between Congress and industry, with each doubting the other’s understanding of the technologies requiring governance.
The industry is divided into distinct camps, with smaller companies allowing tech giants to lead governance discussions. Other contributing factors include ideological resistance to regulation, geopolitical concerns and insufficient coalition-building that have marked past technology policymaking efforts. Yet, our study showed that both parties in Congress favor a uniquely American approach to governance.
Congress agrees on extending American leadership, addressing AI’s infrastructure needs and focusing on specific uses of the technology – instead of trying to regulate the technology itself. How to do it? My team’s findings led us to develop the Dynamic Governance Model, a policy-agnostic and nonregulatory method that can be applied to different industries and uses of the technology. It starts with a legislative or executive body setting a policy goal and consists of three subsequent steps:
Establish a public-private partnership in which public and private sector experts work together to identify standards for evaluating the policy goal. This approach combines industry leaders’ technical expertise and innovation focus with policymakers’ agenda of protecting the public interest through oversight and accountability. By integrating these complementary roles, governance can evolve together with technological developments.
Create an ecosystem for audit and compliance mechanisms. This market-based approach builds on the standards from the previous step and executes technical audits and compliance reviews. Setting voluntary standards and measuring against them is good, but it can fall short without real oversight. Private sector auditing firms can provide that oversight so long as the auditors meet established ethical and professional standards.
Set up accountability and liability for AI systems. This step outlines the responsibilities that a company must bear if its products harm people or fail to meet standards. Effective enforcement requires coordinated efforts across institutions. Congress can establish legislative foundations, including liability criteria and sector-specific regulations. It can also create mechanisms for ongoing oversight or rely on existing government agencies for enforcement. Courts will interpret statutes and resolve conflicts, setting precedents. Judicial rulings will clarify ambiguous areas and contribute to a sturdier framework.
Benefits of balance
I believe that this approach offers a balanced path forward, fostering public trust while allowing innovation to thrive. In contrast to conventional regulatory methods that impose blanket restrictions on industry, like the one adopted by the European Union, our model:
is incremental, integrating learning at each step.
draws on the existing approaches used in the U.S. for driving public policy, such as competition law, existing regulations and civil litigation.
can contribute to the development of new laws without imposing excessive burdens on companies.
draws on past voluntary commitments and industry standards, and encourages trust between the public and private sectors.
The U.S. has long led the world in technological growth and innovation. Pursuing a public-private partnership approach to AI governance should enable policymakers and industry leaders to advance their goals while balancing innovation with transparency and responsibility. We believe that our governance model is aligned with the Trump administration’s goal of removing barriers for industry but also supports the public’s desire for guardrails.
Carvão advises tech startups and invests in venture capital.