AI governance will never be a solved problem: Owen Larter of Google DeepMind

By Anand Kumar, Senior Journalist

At a time when artificial intelligence is no longer a laboratory achievement but a geopolitical and developmental force, Google DeepMind is recalibrating how it works with governments. This week the company announced national AI partnerships with the Indian government, which include working with the Anusandhan National Research Foundation (ANRF) to make its scientific AI models more accessible, as well as supporting IIT Bombay with a $50,000 grant to use Gemma to process health policy and administration documents in Hindi and build a new “India-centric trait database.”

Owen Larter, senior director and head of frontier policy and public affairs at Google DeepMind. (official photo)

Owen Larter, senior director and head of frontier policy and public affairs at Google DeepMind, says this is not just an expansion into a large market; it reflects a view that India occupies a unique position, with strong ties to the developing world, and must play a role in shaping how the benefits of AI are distributed more equitably across geographies. However, Larter argues that the responsibility for ensuring that AI works safely for everyone also falls on the companies building frontier systems, which must ensure that governments understand what these technologies can do. Transparency, he points out, is a prerequisite for effective regulation. Edited excerpts:

Q: Google DeepMind has been vocal about the extreme risks posed by advanced artificial intelligence. How do you prioritize near-term harms versus long-term existential concerns in your policy work?

Owen Larter: This is a really important conversation, and clearly our mission is to develop advanced AI and put it into the world responsibly. We’re excited about how people can use this technology, like leading Indian scientists who are using AlphaFold to develop new types of cancer treatments. If we want to continue to make progress in this area, we need to ensure that this technology is trustworthy and we need to continue to build governance frameworks.

There is little value in compartmentalizing the different risks that we need to address. Coming up with solid frameworks will be an ongoing journey, but there are some principles we should work from. We need to continue to build a really strong scientific understanding of the technology: what it can do, its capabilities, and its limitations. It is then important to work with partners to understand the impact this technology will have when used in the real world, and to test mitigations.

This is really an approach that we need to apply whatever the risk set, whether it’s protecting the safety of children or ensuring that our systems are useful in different languages, all the way to the critical risk of advanced frontier systems developing capabilities that could be misused by bad actors to commit a cyberattack or create a bioweapon. DeepMind has had a frontier safety framework since 2024, which we iterate on over time. AI governance will never be a solved problem, but rather an ongoing journey.

Q: Are we seeing the emergence of different regulatory philosophies globally? Is convergence desirable, or should we expect regulatory pluralism?

Owen Larter: I think there’s definitely convergence in certain places. All of these different regulatory philosophies are trying to do the same thing; every country wants to use AI in its economy. But there are risks that must be understood and addressed. We are seeing some different approaches, with the EU moving first and going a little further than other jurisdictions. The United States is taking a slightly different approach, and there are now some regulations addressing frontier models in California and New York. This will continue to evolve.

We want to work with governments around the world and help them understand the technology. It is a responsibility on our part to share information about what the technology can do today and where it is headed. One part of the conversation that was really encouraging this week was the attention given to the importance of developing standardized best practices for testing systems for risk, and applying mitigations before a system goes into the world.

Q: The AI Safety Summit series is an exercise in international coordination. What mechanisms have proven effective in translating high-level commitments into policy action?

Owen Larter: The AI Impact Summit in India in particular was really important in highlighting some important issues that hadn’t been addressed much in previous summits, particularly the importance of spreading access and opportunity with AI and making sure it is put into people’s hands. The multilingual discussions that have taken place are essential. That is something we are leaning into and trying to do more of with the grant we gave to IIT Bombay. Regular discussion at the global level is really important, and I am really happy that the series will continue in Switzerland and then in the UAE.

Q: Given India’s strengths in digital public infrastructure and the scale of its reach, what unique contributions can it make to global AI governance discussions?

Owen Larter: It has been absolutely essential that as the technology matures, discussions about how to use it mature alongside it. It’s great that this summit series is being expanded a bit. I think India is going to be absolutely key to how this technology is developed and used. It is clear that India will become an AI superpower in its own right. That’s why we continue to invest here.

Q: At what point does a model become a “frontier” model for regulatory purposes?

Owen Larter: We need to think about different types of systems, to build understanding of how they work and the risks they pose. From a legal perspective, definition is important, but it’s easy to get caught up in semantics. We think of frontier systems as the most advanced systems that exist at any one point. Of course, the frontier continues to advance as systems become more capable.

One reason we care about frontier systems is that they may develop capabilities that pose risks. Our framework is a monitoring mechanism: as we continue to push forward, we test these systems and see whether they develop capabilities that might pose some of these risks around bioweapons or cybersecurity, or acquire capabilities that need attention to ensure that humans continue to manage these systems safely. It’s interesting to see this becoming an increasing standard across the industry, and we are proud to have acted early on frontier safety. Conversations between industry, government, and civil society about how to improve will be crucial to continued progress.
