Understanding the Global AI Summits: Safety, Innovation and Inclusivity


There have now been three international AI Summits. Since the third summit in February this year, they have been all over the news. But what are the AI Summits? Why are they in the news? And what do you really need to know? In this post I will highlight all the takeaways you need.

Everyone knows that AI is being adopted quickly and in many different ways. We are in the midst of the AI revolution, and governments from all across the world acknowledge that this fast adoption brings several challenges. Ways of dealing with AI safety and security are not advancing at the same rate. The digital divide between countries (and within countries) is growing larger. Governments do not want to slow the rate of development, but they do want to ensure that the challenges are being considered by everyone.

There is a general consensus that this is an international effort; no single nation can take the lead. Hence the creation of the global AI Summits. The series began life primarily as an AI Safety summit, calling for an urgent global response to AI safety and security. In 2023 the UK held the first of the series. By the end of the event, thirty countries had signed the Bletchley Declaration, declaring a general consensus that AI safety needs urgent attention.

The following year, at the 2024 AI Seoul Summit, 29 countries signed the Seoul Ministerial Statement, which largely builds upon the Bletchley Declaration. Moreover, at this event several large, leading AI companies signed the Frontier AI Safety Commitments, demonstrating that large companies are critical to ensuring AI safety, and that they are largely in agreement with the outcomes.

Then, finally, in February 2025 France held the AI Action Summit. As in previous years, the event ended with an official statement, The Statement on Inclusive and Sustainable AI. 62 countries signed this document; however, somewhat controversially, the UK and the US did not.

In this post we take a journey back through the previous summits, then compare them with the most recent summit to see where it went wrong.

The Summits of 2023 and 2024

These summits bring people together from all over the world, acknowledging the global effort required. Governments, academics, engineers and large tech companies are all in attendance. Knowledge is not distributed equally, so an international effort with attendance from all is crucial.

I think these summits are essential. AI has the power to drive huge economic growth and scientific progress, and to be of huge benefit to the general public. However, it also poses huge safety risks if not developed or used properly. The use of AI for generating disinformation and propaganda is a current problem, e.g. the deepfake videos of President Zelensky which appeared in the news in 2022.

Frontier AI models have been the biggest target of all discussions, with a general feeling that these systems carry the largest AI safety risk.

I believe it has started off on the right path: global support and commitment to AI safety, from governments and private companies. It is welcome to see private companies, including the leading ones, present and open-minded.

AI Safety Summit 2023

The first AI Safety Summit, held in the UK, saw great attendance from all over the world. It concluded with a positive commitment from international governments to ensure AI is developed and deployed responsibly.

The UK led the way with the AI Summit series. The first, the AI Safety Summit, was held in November 2023 at Bletchley Park. This is old news now, and was even held under the previous, Sunak, government. The summit brought together international governments, leading AI companies and research experts for the first time to discuss AI and how we should respond to its safety.

The main aims were all centred on AI safety, this being the purpose of the Summit after all: to reach a shared understanding of the risks of Frontier AI and how to collaborate internationally; to take appropriate measures to increase Frontier AI safety; and to showcase how ensuring the safe development of AI will enable it to be used for good globally.

The Bletchley Declaration

As a result of this Summit, the Bletchley Declaration was signed by 30 countries (including the US and UK). I have read through the Bletchley Declaration, and I believe it takes a fair and balanced view of AI safety. It focuses on the innovation vs safety trade-off, encouraging risk-based control of Frontier AI capabilities rather than blanket rules to cover all. It acknowledges the potential benefits of AI across multiple aspects of society, including housing, employment, education, health and justice, as the motivation for continued investment in innovation.

the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed

However, it also addresses the potential risks, both known and unknown, associated with AI. In particular, the declaration addresses Frontier AI. It appears the primary global concerns are the use of AI in cybersecurity, biotechnology and disinformation.

The declaration calls for pro-innovation and proportionate governance, which essentially means that, while highlighting safety risks, nations do not want to stunt innovation with overly restrictive safety regulation. Instead, nations should create risk-based assessments, so that the positive impact is maximised whilst risks are still taken into account.

Essentially a ‘don’t stop, keep innovating’ message, where risk is managed depending on the application. For example, the specific call-out of cybersecurity and biotechnology would require a much higher level of safety assurance for AI in these areas. I believe this is the right message; an overly restrictive environment will stunt development and research, particularly as AI safety is still a maturing area.

We also see the nations directly calling on the private companies developing Frontier AI to take responsibility for its safety, with transparent plans for measuring, monitoring and mitigating harmful capabilities. Since most advanced AI technology lies within large corporations, rather than governments, this is a sensible approach. The governments’ role will be to ensure that this transparency happens.

The agreement ends with a call for regular, global AI summits to continue the dialogue on responsible and good AI. This paved the way for the AI Summit of 2024, held in Seoul, the Republic of Korea.

In one sentence: safety is important, but should be balanced carefully with risk so as not to stunt innovation.

AI Seoul Summit 2024

Fast forward a year: the UK co-hosted the AI Seoul Summit of 2024 with Korea. Up front we can see the name change, dropping ‘Safety’. Is this an early sign of what is to come?

Overall, the summit was still focused on AI safety and innovation, but this year the theme of AI inclusivity was specifically added to the list of goals. The summit discussed:

  • Safety: To reaffirm the commitment to AI safety and to further develop a roadmap for ensuring AI safety
  • Innovation: To emphasise the importance of promoting innovation within AI development
  • Inclusivity: To champion the equitable sharing of AI’s opportunities and benefits

Frontier AI Safety Commitments

Interestingly, the Seoul Summit saw explicit written commitments from leading AI companies. Several high-profile AI companies (including Amazon, Anthropic, Meta, Microsoft and OpenAI) signed the Frontier AI Safety Commitments. These commitments essentially confirm that the companies hold responsibility for safety risk management of their Frontier AI models and should continue to invest in risk management.

This is quite an achievement, and I believe is a positive indication that there is shared agreement between public and private bodies that AI safety is a concern and a priority. Maybe at another time I will explore in more detail whether these companies have stuck to their word, but for now have a look at OpenAI’s Safety Approach.

Seoul Ministerial Statement

Several international governments signed the Seoul Ministerial Statement. This statement, unlike the Bletchley Declaration, contains more specific details and actions. It follows the same three themes above: safety, innovation and inclusivity. As in the Bletchley Declaration, innovation is still very much promoted, and there is additional detail on how safety risk management should operate.

We recognise the importance of governance approaches that foster innovation and the development of AI industry ecosystems with the goal of maximising the potential benefits of AI for our economies and societies.

Benefits of AI for education, healthcare and administration are specifically mentioned. We might see additional investment and uptake in these areas in the future.

Moreover, the inclusivity of AI is called out. There is a consensus that the benefits of AI should be shared equally across the world, and that additional efforts are required to make this happen. This was briefly mentioned in the Bletchley Declaration, but is much more prominent in this Statement. It’s not entirely clear what this entails, but I can imagine there is a divide in computing hardware infrastructure (the equipment you need to run AI), a divide in language support (English dominates) and other cultural differences that aren’t considered by developers from a different cultural background to those using the systems.

In our efforts to foster an inclusive digital transformation, we recognise that the benefits of AI should be shared equitably. We seek to promote our shared vision to leverage the benefits of AI for all, including vulnerable groups

Everyone is in agreement. Innovation is still important, as is Frontier AI safety. Companies: it’s on you to be accountable. But also, world, we should make sure everyone benefits equally from AI.

AI Action Summit 2025

So here we are: these summits have all led us to this moment, the AI Action Summit of 2025, held by France. This year the UK and the USA did not sign the Summit Statement. What does this mean?

Several important outputs have come out of this summit, which might be overshadowed by the Summit Statement news: firstly, The Statement on Inclusive and Sustainable AI; secondly, the International AI Safety Report; thirdly, the launch of CurrentAI; and finally, the Coalition for Environmentally Sustainable AI.

To be clear, although the UK has not signed the Statement, it does support the Coalition for Environmentally Sustainable AI, which calls for a more environmentally sustainable use of AI.

The Statement On Inclusive and Sustainable AI

I have read the statement, and I can see where perhaps some misalignment is coming from. The major points raised at previous events have focused on Frontier AI safety and innovation. This feels somewhat diluted in this statement. The previous outcomes are referenced at a high level in one (of six) paragraphs; the rest of the document is primarily focussed on inclusivity and diversity. Even the reaffirmed priorities (listed below) seem light on the safety front.

  • Promoting AI accessibility to reduce digital divides;
  • Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all
  • Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development
  • Encouraging AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth
  • Making AI sustainable for people and the planet
  • Reinforcing international cooperation to promote coordination in international governance

When the issue of Safety hasn’t yet been resolved, are we muddying the waters too soon? Or is it that the world feels diversifying the market and reducing the digital divide is more important?

Diversification over Safety?

Firstly, there is a major point about diversification of AI capabilities, ensuring developing countries have the ability to build their own. I certainly agree with the need for diversification of AI building, but is this priority more urgent than safety and innovation? I’m less sure.

Moreover, the definition of inclusivity has subtly changed. The outcome of the Seoul Summit discusses inclusivity as ensuring everyone has access to the benefits of AI. In this outcome, inclusivity means helping developing countries to create their own capabilities. These are subtle changes in definition, but to me they have quite different needs.

Urgency to narrow the inequalities and assist developing countries in artificial intelligence capacity-building so they can build AI capacities.

If we are actively seeking more variety in AI capability building from more groups, then AI safety is going to be more complex and more important.

Innovate-last?

Secondly, I think the lack of mention of protecting and encouraging innovation is also in contrast to previous outcomes. The only reference to innovation is in the context of enabling it by “avoiding market concentration”, which to me feels like a dig at the very large US tech companies who hold something of a monopoly over the AI landscape. No wonder they did not sign this agreement.

We will have to see what happens as a result. Diversity in innovation is always a good thing, as long as we do not impact the innovation of the leading companies. If we are being critical, we need the high level of investment that these companies can make to deliver ground-breaking developments. I’m not sure stopping this is a good thing.

Sustainable development – yes please

Interestingly, this year there were discussions on the energy usage of AI, which hasn’t been explicit in previous Statements. I feel this topic is very important, as the energy consumption needed to run the buildings and computers that make AI accessible to all is extremely high. Should we look to improve this? Yes, we definitely should.

Both the UK and the USA declined to sign this statement. The UK states that this is because of concerns about national security and “global governance”; the US states that “pro-growth AI policies” should be prioritised over safety.

The Coalition for Environmentally Sustainable AI

The Coalition for Environmentally Sustainable AI was launched at the Summit, with support from countries and companies; in short, develop AI in line with environmental goals. You can see their official website here.

It aims at building a global community of stakeholders willing to contribute to initiatives for aligning AI development with global sustainability goals and fostering responsible AI that supports the environmental policies

I’m so happy to see an environmental concern raised too. It is not just about safety but about making sure we are being responsible with the environmental aspects as well. This is being supported by both governments and companies.

So now we must balance AI capabilities around the world; we must ensure it is inclusive and accessible to all; it should be safe and used responsibly. Oh, and we should also make sure we are developing in a sustainable manner. And maybe we should look at global AI governance?

Where does that leave us?

I think it is great to see a sustained, global effort on the safety of AI. In 2023 and 2024, the message with regard to safety was to prioritise it in a balanced way, ensuring we can still reap the benefits of AI.

Moreover, the world is calling for a bridge across the divide between countries that can create their own AI systems and those that cannot.

The Coalition for Environmentally Sustainable AI is great, and the fact it has been supported by a huge number of companies and countries (including the UK) is really reassuring. It’s one to keep an eye on. We will watch this space, and follow closely in the run-up to the AI Summit of 2026.

In the meantime, I will follow up here with some details about how well the major AI companies are living up to their commitments, and what the outcomes of the International AI Safety Report were.

Conclusions

Firstly, thank you for getting to the end. This piece ended up being a lot longer than I had anticipated! I have thrown in several of my opinions in this post, and why not end with some more.

Environmental impact is also very, very important. We don’t want to be creating in a way that is not sustainable, so now we need to act. AI Revolution 2.0 – The Greener Way.

Safety is important, but it is extremely important where AI is actually being used. We have not solved this yet; we are still learning, and we can’t lose sight of that.

Innovation is important, without innovation none of this would have happened. We must keep innovating.

At the end of the day, I think most people agree we want to use AI to benefit everyone, and we want to do that in a way which is safe, responsible and respectful of the planet. People have different opinions on how we get there and which challenge to prioritise first, but I believe we are trying.
