AI Action Summit review: Differing views cast doubt on AI’s ability to benefit whole of society


During the third global artificial intelligence (AI) summit in Paris, dozens of governments and companies outlined their commitments to making the technology open and sustainable, and to ensuring it works in the “public interest”, but AI experts believe there is a clear tension in the direction of travel.

Speaking with Computer Weekly, AI Action Summit attendees highlighted how AI is caught between competing rhetorical and developmental imperatives.

They noted, for example, that while the emphasis on AI as an open, public asset is promising, there is worryingly little in place to prevent further centralisation of power around the technology, which is still largely dominated by a handful of powerful corporations and countries.

They added that key political and industry figures – despite their apparent commitments to more positive, socially useful visions of AI – are making a worrying push towards deregulation, which could undermine public trust and create a race to the bottom in terms of safety and standards.

Despite the tensions present, there is consensus that the summit opened more room for competing visions of AI, even if there is no guarantee these will win out in the long run.

The Paris summit follows the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in November 2023, and the second AI Seoul Summit in South Korea in May 2024, both of which largely focused on risks associated with the technology and placed an emphasis on improving its safety through international scientific cooperation and research.

To expand the scope of discussions, the AI Action Summit was organised around five dedicated work streams: public service AI, the future of work, innovation and culture, trust in AI, and global governance.

During the previous summit in Seoul, tech experts and civil society groups said that while there was a positive emphasis on expanding AI safety research and deepening international scientific cooperation, they had concerns about the domination of the AI safety field by narrow corporate interests.

In particular, they stressed the need for mandatory AI safety commitments from companies; socio-technical evaluations of systems that take into account how they interact with people and institutions in real-world situations; and wider participation from the public, workers and others affected by AI-powered systems.

However, despite the expanded scope of the AI Action Summit, many of these concerns remain in some form.

AI Action Summit developments

Over the course of the two-day summit, two major initiatives were announced: the Coalition for Environmentally Sustainable AI, which aims to bring together “stakeholders across the AI value chain for dialogue and ambitious collaborative initiatives”; and Current AI, a “public interest” foundation launched by French president Emmanuel Macron that seeks to steer the development of the technology in more socially beneficial directions.

Backed by governments including Finland, France, Germany, Chile, India, Kenya, Morocco, Nigeria, Slovenia and Switzerland – as well as an assortment of philanthropic bodies and private companies (including Google and Salesforce, which are listed as “core partners”) – Current AI aims to “reshape” the AI landscape by expanding access to high-quality datasets; investing in open source tooling and infrastructure to improve transparency around AI; and measuring its social and environmental impact.

European governments and private companies also partnered to commit around €200bn to AI-related investments, currently the largest public-private AI investment commitment in the world. In the run-up to the summit, Macron announced that France would attract €109bn worth of private investment in datacentres and AI projects “in the coming years”.

The summit ended with 61 countries – including France, China, India, Japan, Australia and Canada – signing the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, which affirmed a number of shared priorities.

These include promoting AI accessibility to reduce digital divides between rich and developing countries; “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all”; avoiding market concentrations around the technology; reinforcing international cooperation; making AI sustainable; and encouraging deployments that “positively” shape labour markets.

However, the UK and US governments refused to sign the joint declaration. While it is still not clear exactly why, a spokesperson for prime minister Keir Starmer said at the time that the government would “only ever sign up to initiatives that are in UK national interests”.

Throughout the course of the event, AI developers and key political figures from the US and Europe – including US vice-president JD Vance, Macron, and European Commission president Ursula von der Leyen – decried regulatory “red tape” around AI, arguing it is holding back innovation.

Vance, for example, said “excessive regulation of the AI sector could kill a transformative industry”, while both Macron and European Union (EU) digital chief Henna Virkkunen strongly indicated that the bloc would simplify its rules and implement them in a business-friendly way to help AI on the continent scale. “We have to cut red tape – and we will,” von der Leyen added.

There were also several developments in the immediate wake of the summit. These include the EU scrapping its AI liability directive, which focused on providing recourse to people whose rights have been infringed by AI systems, and the rebranding of the UK’s AI Safety Institute as the AI Security Institute (AISI), which means it will no longer consider bias and freedom of expression issues, focusing instead more narrowly on the security of the technology.

AI at a crossroads

Many of those Computer Weekly spoke with identified a clear tension in the direction of travel set for the technology over the course of the summit.

For example, although key political figures were espousing rhetoric about the need for open, inclusive, sustainable and public interest AI in one breath, in the next they were decrying regulatory red tape, while committing hundreds of billions to proliferating the technology without clear guardrails.

For Sandra Wachter – a professor of technology and regulation, with a focus on AI ethics and law, at the Oxford Internet Institute (OII) – it is unclear which red tape political figures such as Macron, Vance and von der Leyen were even referring to.

“I often ask people to list the laws that are standing in the way of progress,” she said. “In many areas, we don’t even have laws, or when we do have laws they aren’t good enough to actually address this, so I don’t see how any of this is holding AI back.”

Highlighting common AI booster rhetoric championed by political and industry figures, Wachter said she would like to see the conversation flipped on its head: “If my technology is so beneficial, if my products are so good for everyone, why wouldn’t I guarantee its safety by holding myself to account?”

Commenting on the EU’s decision to quietly rescind its AI liability directive in the wake of the summit, Wachter said that while other avenues still exist to challenge harmful automated decision-making, the decision represents a worrying potential sea change for AI regulation.

“It worries me a lot because it’s been done under the ‘We need to foster innovation’ banner, but what type of innovation? For whom? Who wins if we have biased, unsustainable, misleading, deceptive AI?” she said, adding that it is not clear to her how the lives of every citizen will be improved by people not being able to get their day in court if AI has harmed them.

“Is it the eight billionaires, or the other eight billion people? It’s very clear that most people will not benefit from a system that isn’t tested, that isn’t safe, that is racist, and that destroys the planet … so this idea that regulation is holding back innovation is completely misguided.”


Wachter added that AI is an “inherently problematic technology, in that its problems are rooted in how it works”, meaning that if it is going to be used for any kind of greater good, “then you have to make sure that you hold those negative side effects back as much as possible”.

Further warning against the dangers of creating a false dichotomy between innovation and regulation, Linda Griffin, vice-president of global affairs at Mozilla, said: “We should be very sceptical of claims against regulation.”

She added that she personally finds the anti-regulation rhetoric worrying: “Innovation, growth and profits for a handful of the biggest companies in the world does not mean innovation and growth for the rest of us.”

Gaia Marcus, director of the Ada Lovelace Institute (ALI), also came away from the summit “feeling like we’re at something of a crossroads when it comes to AI development and deployment”, arguing that governments need to build out the incentives to make sure any AI systems deployed in their jurisdictions are both safe and trustworthy.

She added that it was especially important to ensure alternative models and systems are built outside the walled gardens of big tech, so governments “won’t be paying extortionate rents to a few technology companies for a generation”, and that incentives are introduced to ensure the safety of general-purpose AI systems at the bottom of the technology stack, on which everything else is built.

Commenting on the current inflection point of AI, Marcus said: “One path is really about winners and losers, about pushing corporate interests or a narrow set of national interests ahead of the public interest, which we’d say is a path to nowhere, and then the other path is about nations working together to build a world where AI works for people in society.”

For international cooperation to be successful, Marcus said that – in the same way there are shared standards and norms around aviation or pharmaceuticals – it is key to create “shared infrastructure for building and testing AI systems”.

She added: “There’ll be no greater barrier to the transformative potential of AI than fading public confidence”, and that like-minded countries which recognise the costs of unaddressed risks must find other forums to continue building the safety agenda. “For a summit that was framed around action, we really wanted to see governments urgently coming together to start building the incentives, institutions and alternatives that will enable broad access and enjoyment of the benefits of AI.”

However, Marcus acknowledged that the current geopolitical situation between the US, China and the EU makes it harder to ensure pre-deployment safety, at least in the short term.

Despite the geopolitical tensions present and the calls for deregulation, Mike Bracken, a founding partner at digital transformation consultancy Public Digital, was more optimistic about the prospects for international collaboration going forward, arguing that AI’s “constituent elements” mean it always requires a mixture of sovereign action and collaboration.

“Each country needed its own datacentres. Where you locate them, how you fund them, who operates them, what tooling they run and what power they use – that is an almost entirely sovereign question for each country,” he said.

“But once you’ve got all that set up, you still need to collaborate around data. The data structures that helped create AlphaFold were essentially the creation of international collaboration. We have some sovereign data, but for this to be a truly global play, we’re going to have to share and that means understanding diverse regulatory environments and having a place to share them.”

Public interest vs corporate interest AI

For Bracken, the major success of the summit was in how it managed to reset the narrative around AI by casting it as a public asset.

“Resetting AI as a public good and basically as a team sport is good for all of us,” he said. “The realpolitik of that, the technology involved, the players – that makes it a messy business, and it’s an inexact science, but when we look back, we’ll look back at this as the moment where AI was reset as a public asset.”

Bracken also praised Current AI as “a really strong outcome” of the summit: “I’ve attended many government-backed events which result in statements and handshakes and warm words – the ones that really matter are the ones that result in institutions, money, change and delivery.

“What Macron has done is change the weather. We’re now talking about AI as a public asset – it’s there to help with health and education and all these other sectors, and is not simply seen as an extension of monopolistic technology providers.”

Commenting on the launch of Current AI, Nyalleng Moorosi – a research fellow at the Distributed AI Research Institute (DAIR) who previously worked for Google as a software engineer – said that while it is promising to see public resources being committed to develop AI as a shared resource, what exactly constitutes “public interest” still needs to be properly defined to avoid capture by narrow corporate interests.

“It depends on how the models get built. You certainly have to worry about representation and bias and inclusion, but then it’s also about what architectures you choose. We’re going to want tools that are auditable, that have some transparency,” she said.

“You also have to be very careful about what you outsource and the kinds of contracts you sign, because even if it’s ‘public AI’, you might still be using cloud compute from private companies, and you want to make sure you don’t get locked into contracts where it’s not very clear about who owns the public data or what’s sharable, and the kinds of security guarantees in place. Private industry does want this data, and we should not forget how powerful private industry is.”

Marcus said that while the emphasis on sustainable, public interest AI was a positive development of the summit – in that it pushes a vision of the technology where the tooling and infrastructure underpinning it are widely accessible – Current AI will need to maintain a wide range of funding sources to ensure its ongoing independence and avoid the risks of corporate capture, as well as be very transparent about its aims and its allocation of resources, time and money.

Echoing Moorosi, Marcus added that it is also currently not clear what is meant by “public interest” exactly: “That could mean so many different things, and public interest AI should continue to mean loads of different things, as long as that doesn’t lead to a sort of ‘public interest washing’.”


On public participation in AI development and regulation, Marcus concluded: “Hopefully this has given us the floor and not the ceiling in terms of wider civil society participation … you need the voices of people that represent various publics to know that you’ve got that vision piece.”

Andrew Strait, associate director at the ALI, said that although it is commendable that Current AI is steering the technology towards public interest use cases and wider accessibility – particularly through its emphasis on open source approaches – it will likely face conflicting pressure from its funders: “I think the challenge will be who sits in their governance board, who sits in the steering committee, and how well can they keep the focus on non-profit, public interest projects.”

In a blog post published in the wake of the summit, Abeba Birhane, founder and principal investigator at the Artificial Intelligence Accountability Lab (AIAL), also questioned what “public interest” means in the context of AI: “There is nothing that makes AI systems inherently good. Without intentional rectification and proper guardrails, AI often leads to surveillance, manipulation, inequity and erosion of fundamental rights and human agency while concentrating power, wealth and influence in the hands of AI developers and vendors.”

She added that current “public interest” approaches – characterised by a focus on equipping public institutions with AI tools and “AI-for-good” initiatives that seek to use the technology to “solve” social, cultural or political issues; and improving existing systems by, for example, reducing their bias – boil down to “giving the ‘public’ more AI or feeding existing corporate models with more or ‘better’ data”.

Birhane said that while many of these approaches are well-meaning, and while “it might be a mistake to cast all these initiatives as unproductive and unhelpful”, they fall largely within what she called the techno-solutionist paradigm – the belief that all or most problems can be solved through technology.

“This approach is unlikely to bring about any meaningful change to the public, as these techno-centric solutions are never developed outside of corporate silos. Any meaningful intervention that aims to centre the interest of the ‘public’ needs to go beyond aligning with the ‘innovation’ narrative.”

Market and geopolitical power concentrations

Most of those Computer Weekly spoke with characterised market concentration as the defining issue around AI.

“It’s probably the most important thing because it’s about power, and that’s what it all boils down to – who has power and who is dependent on whom,” said Wachter, adding that it is vital to decrease dependency on a select few cloud compute providers or chip manufacturers.

She further added that aside from the market power wielded by a few companies, discussions around AI are largely dominated internationally by the US and China, and – to a lesser extent – the EU: “That’s not all of the world, a lot of other countries are affected by AI, but don’t have a voice in shaping it.”

Commenting on the central role played by the Indian government during the AI Action Summit, Bracken said “this was a France-India summit”, adding that the subcontinent’s “highly centralised technology estate”, which has brought hundreds of millions of people into the formal economy, was largely built via open source tech deployments.

“They’re not using proprietary licences like the G7 – they’ve done it themselves. And, of course, they are incredibly well-positioned. They’re already delivering AI-based services in many sectors and regions. Macron was smart to invite them in.”

He added that given the sheer size of India’s population, on top of the 500 million in Europe, “suddenly you’re talking real numbers… we might just look back at this as the moment where public AI became a thing set for billions of people”.

On the next summit, which Macron confirmed will be hosted in India, Griffin said: “The French did a good job of broadening the tent and making it more inclusive, so if India can double down on that, it will be really important – not being in Europe, not being in the US, not being in China, it’s got a chance to really think about how the rest of us kind of fit into this.”

However, Wachter warned that to effectively bring more voices into conversations around AI and help positively shape the technology’s direction of travel, there needs to be a rejection of “arms-race” rhetoric, which only serves as a negative mental model in which the only direction of travel is downward.

“That’s the only direction that you can go if you think you have to constantly underbid your opponent, it’s just a race to the bottom otherwise,” she said. “It’s not about throwing all our values overboard and saying, ‘Well, they are jumping off the bridge, let’s beat them to the punch and jump off the bridge faster’, it’s about fostering technologies that adhere to our stated values.”

Highlighting how consumers are more likely to buy Fairtrade products because they know they are ethically sourced, Wachter said governments similarly need to incentivise and invest in ethical AI approaches that prove systems have been built with the social and safety consequences of the technology in mind. “That’s the only way of changing course away from racing into the abyss,” she said.

Many of those Computer Weekly spoke with said that one of the most positive aspects of the summit wasn’t the main conference, but the fringe events happening on the margins. For Griffin, these events allowed for “freer conversations” in which people were able to express their “deep concerns” and “anger” over the current degree of market concentration around AI.

“I’ve not come across any other big AI gatherings where people were able to really sharply define and call out how it’s not in anyone’s interest to have this market so concentrated like it is now,” she said, highlighting how the renewed emphasis on open source and public interest AI “is a seed” that can help move us away from proprietary, black-box AI development, and the “pure for-profit motive”.

Open source, smaller models

In attempting to solve the problem of market concentration, European leaders at the summit laid out how open source would be an integral part of their AI approaches.

Marcus said this new emphasis on open source AI – while not a silver bullet – can help to undermine the “monoculture underpinning the development and deployment of AI tools” by introducing greater plurality to the mix, while simultaneously pushing the dial in terms of what’s expected of large companies developing proprietary systems by improving their degrees of openness to allow for more meaningful evaluations and audits.

However, Strait warned that open source approaches can also be leveraged by corporate incumbents to their advantage, pointing to how open source communities have previously been tapped by large corporates as a source of free labour that they’ve used to build up their walled gardens and dependencies.

“Just because you make something accessible doesn’t mean it automatically creates public benefit or avoids further entrenching the power of major players – who have every right to use the same open source projects for their own purposes – but if you use it to give other organisations access to something private companies already have, it can help produce a more level playing field,” he said.

Strait agreed that while open source AI is hardly going to solve all the issues around the technology, making cutting-edge tools more accessible to a wider pool of people can help challenge AI’s market concentration, as well as reverse the “behind-closed-doors” development trend that currently characterises the technology.

For Moorosi, the turn towards open source tooling and architectures during the Action Summit can help place a greater emphasis on smaller, more tailored and context-specific AI models – which are less resource-intensive than the big Silicon Valley models – as well as empower a greater diversity of developers to influence and control the direction of AI.

“Lots of progress happens when multiple people are able to tinker with these technologies,” she said, adding that it’s imperative to support localised AI model building that is directly tied to people’s needs in a specific context, rather than trying to foist privately controlled large language model (LLM) infrastructure onto every situation. 

“There’s so much that can be done with small models, and so many times, when it’s a really critical application, you do want small, because you want auditability and explainability,” she said, adding that smaller models can be particularly powerful when underpinned by tailored, high-quality data sets.

“One of the things you find with these massive models is that they are optimised over too many things, so it does pretty good on a lot of things, but when it really matters you need excellence – you need to minimise error, and you need to be able to track and catch any errors fast.”

Highlighting the current situation where AI development is centred around creating the biggest models with as much data as possible before “throwing it out into the world”, Griffin agreed there needs to be a move towards more precise applications of AI, especially for public service delivery.

“You can’t just have proprietary, walled garden models where no one understands what’s going into them, because there’s no public trust,” she said. “There’s a lot of talk in the UK and other countries about AI manifestly changing public services, brilliant, I’m up for that – but there’s no public trust, that’s the missing element … that’s why open source is really important and breaking this market concentration is important too.”

However, despite her concerns, Griffin said that the summit was a resounding success when it came to changing the conversation around open source, which during the previous two summits was treated exclusively as a risk.

“Open for open’s sake is bad,” said Griffin, “but the current leaders in the market have their proprietary systems and open source isn’t in their commercial interests, so they’ve been very good at briefing against it and making policy-makers worry. All AI is tricky and dangerous, but it’s not a particular characteristic that belongs to open source, and even open source needs guardrails.”

Similar points were made by Wachter, who argued that it is important to look at who is arguing against open source, and why they may have an interest in keeping systems closed and inaccessible to others.

She added that although the question of open source is nuanced, in that you must consider things such as infrastructural dependencies and data access, “the general idea of open source is great because it allows others to enter the market and develop new things. It also makes auditing easier. Are there risks? Yes, of course. There is risk with everything, but just because there’s risk doesn’t mean that this would outweigh the benefits that come from it.”

Griffin said that there has been a realisation in Europe – which was vocalised during the event by head of Current AI, Martin Tisne – that unless you’re the US or China, “we’re all in the same boat with AI, and we can’t play in this arena unless we have open systems”.

Griffin said governments outside of these two major power blocs should think about the levers they have available to move the dial on open source even further, which could include building their own national models as Greece and Spain are doing; providing material support to businesses building in the open or otherwise contributing to open data sets; and placing open source requirements in AI procurement rules. “It’s not headline-grabbing, but I think that’s what needs to happen,” she said.


Moorosi further argued that a greater emphasis on smaller AI models can also reduce the negative environmental and social externalities associated with the development of LLMs, which are rarely accounted for by Silicon Valley firms as they don’t internalise the costs themselves.

“They’re not factoring in the cost of having a whole community without water, a whole community without electricity, or the mental health impacts of data workers,” she said, adding that mass web scraping means they’re not even paying for the public or copyrighted data fuelling their models.

“If you’re not paying for the data and you’re not paying for any of your externalities, then obviously it feels like you can access infinite resources to build the infinite machine. Africans have to think about cost – we don’t have infinite money, we don’t have infinite compute – and it forces you to think differently and be creative.”

On eliminating the unfair labour practices by multinational tech firms – without which the technology would not exist in its current state – Moorosi said that their ability to outsource AI work to jurisdictions with lax labour regulations should be restricted, which could be done by implementing laws prohibiting the differential pricing that allows them to pay people less because of where they’re based in the world.

“If you work in a developing country, you don’t get paid as much as if you work in Mountain View or Zurich, even if you do the same job,” she said.

Major concerns and moving forward

Despite some positive steps made during the summit, most of those Computer Weekly spoke with said that “the intense concentration of the whole AI stack” remains their most pressing concern that needs to be resolved.

Highlighting the history of the internet – which was initially developed for military communications before being opened up, and then closed off again via a process of corporate enclosure in the late 1990s – Griffin, for example, said there has been “a lot of collective amnesia … I don’t think people really understand or think enough about the steps that happened or didn’t happen to keep the internet open … We have interoperable email. That was not a certainty.”

She added that “open source is a key ingredient in the antidote” to market concentration, and creating a future where control over the development and deployment of AI technology is much more distributed.

Bracken said that while there is now a clear emphasis, especially in Europe, on the need for open source tooling and sovereign capabilities outside the purview of large American technology firms, achieving economic growth with AI will depend on the willingness of governments to actively intervene in markets.

“You’ve got to be active, you’ve got to shape the market to the outcomes that you want,” he added. “The characterisation of AI’s importance to society so far has been too often on either end of an extreme – the first around existential safety, and another end around wildly buoyant enthusiasm from those who seek to capture the regulatory environment, as if there are only five or six companies that can really give us AI.”

Bracken concluded that while he understands both the exuberance of the market and concerns about the potentially existential risk of the technology, “both of those positions are now untenable”.

For Marcus, concentration has created a “lack of political vision” around the future of AI, as the domination of the technology by relatively narrow national or corporate interests means there is currently a lack of “credible alternatives” being built.

“We need to know there are credible attempts to broaden the universe of those potential futures by various people having a stake in the technologies that could get built and the data that underpin them,” she said. “[We also] need to ask the fundamental question whether states are in the position to manage the incentives around what technologies get deployed in their jurisdictions … we’ve got a lot of drive to build, but we don’t know if the roof is going to hold and the walls are safe, and that’s pretty important.”

Moorosi added that she is particularly concerned about AI’s market concentration in the context of the technology’s increasing militarisation, arguing that the trend towards both tech giants and small AI startups hawking their wares to defence contractors or state military bodies is creating a literal arms race. This militarisation could undermine efforts towards responsible and public interest AI, as it will likely prioritise power concentration and secrecy over inclusivity, she said.

“Contracts in the military and warfare are so massive that I feel like there wouldn’t be much of an incentive to develop for anything else, except a little bit on the side here and there,” she said, noting the use of AI tools in Gaza by the Israeli military – which reportedly have high error rates and have contributed to the indiscriminate killing of civilians – means any claims to greater “precision” should be challenged. “AI in warfare is currently a really crude science.”

Going into the next summit, Strait said that we need to rethink the current emphasis on deregulation: “What the public and even businesses need is reassurance the technology is safe, effective and reliable – you can’t do that without regulation, and reputational pressure will only get you so far.

“There is a lot more to do in terms of how you can create a more equitable and thriving market of AI that is more internationally inclusive and not just dominated by a handful of large US technology companies. Fundamentally it’s never good when you have a handful of technology companies based in Silicon Valley deciding a technology that is changing our energy policy, climate policy, foreign policy and security policy – that’s a very unhealthy environment.”


