乌鸦传媒

Skip to Content

AI Action Summit 2025: Should we worry about the harms AI might cause?

乌鸦传媒
Lucy Mason and James Wilson
Feb 10, 2025

World leaders meet in Paris this week to discuss the future of AI. We need a shared vision for responsible use: both to maximize the benefits for society, and reduce AI鈥檚 potential for online harms.

While the emergence of advanced and highly capable artificial intelligence (AI) models and systems is likely to lead to huge benefits for society, it is also likely they will lead to large increases in online crime through the malicious misuse of AI tools, products and services, as well as potentially accidental harms (mistakes or unintended consequences from non-malicious actors). Of course, many different technologies can be exploited to cause harms as well as for their positive benefits, but often these harms are limited in their impact by requiring a high level of expertise, access to the technology, and money. In the case of generative AI however, these factors are much less constraining. We are now seeing the development of generative AI tools which are free or very cheap, very widely available online, and which have the potential to catalyse several forms of harm, including financial crime, cyber-attacks, and online targeting of individuals or groups (cyberstalking, harassment, or political disinformation). The barrier to entry for committing such crimes is now merely access to a laptop and wifi.

AI鈥檚 specific strengths in handling unstructured and variable datasets, data pattern recognition, replication at scale, tailoring content to the individual level, and predictive analytics, which are already being used by businesses to predict demand, anticipate user trends and identify gaps in offerings, make it uniquely valuable for all sorts of tasks, and also uniquely capable at being used as a tool to commit online crime. AI can facilitate the development of off-the-shelf 鈥渃rime-as-a-service鈥 products which vastly reduce the 鈥渂arriers to entry鈥, for those so inclined. AI software can be used to conceal the perpetrator鈥檚 identity and location, making it hard to investigate. AI models can also themselves be targets for crimes such as hacking, through prompt injection for instance, and data manipulation or poisoning, causing them to make mistakes, or to react in specific ways given the right trigger. These types of manipulation would be of particular concern in areas of critical national infrastructure and autonomous weapons systems.

New potential for abuses of trust

Some of the most concerning types of harm which generative AI tools, such as social 鈥渂ots鈥, may facilitate are crimes of persuasion and influence: exploiting an individual鈥檚 psychology or personal circumstances or actually affecting someone鈥檚 mental state to convince them to act in a way they may not have done otherwise, possibly using deepfakes (audio and visual media purporting to show a person or event that in reality never occurred), misinformation and disinformation. These effects could be exploited for deception, phishing, radicalisation and encouragement of social unrest. Early experiences show that our natural tendency to anthropomorphize means that people can become emotionally attached to AI-generated bots, divulge personal information to them, and that they may create an echo-chamber effect which normalizes harmful behaviours such as sex crimes. Even more subtly, the development and awareness of AI products generally creates an environment where people may expect or imply AI-related crimes, even if no AI was involved 鈥 for example threatening to use specific tools, making someone believe a certain effect was possible using AI even if it is not feasible, or making people believe a genuine video or photo was faked. It is also important to note that this type of criminality need not only be targeted at an individual. The capabilities of generative AI can be implemented just as easily for a large target audience, while still being personalized to the individual user to encourage their engagement.

There is also the potential for emergent criminal behaviours, as AI agents become more sophisticated and interact in increasingly complex ways. They may autonomously commit crimes going beyond the user鈥檚 initial expectations or moral compass. An AI system has no innate understanding of ethics, pain, truth, or compassion, and is without human limitations of strength, tiredness and pace. It may propose or take actions which are unacceptable, too complex to comprehend, or too fast to prevent. As AI agents start to be entrusted with acting autonomously on our behalf, we will need to incorporate increasing levels of safeguards to prevent them from over-reaching; but because deploying bots is cheaper and easier than implementing effective governance to control their actions, it is likely to be very difficult to monitor and mitigate all risks and impacts.

How can governments and business leaders take action?

As senior leaders gather in Paris to debate AI safety standards 鈥 amid a complex multi-national arms-race of AI development 鈥 they need urgently to discuss and agree measures to prevent AI-enabled harms occurring. These measures could be defined and coordinated across state boundaries, implementing governance in a similar way to global civil aviation, which is effectively governed by the International Civil Aviation Organization (ICAO), with state-level measures that ensure adherence to these global standards. The United Nations is well-positioned and prepared to coordinate the required oversight. Such measures need to be thought of in three layers:

  1. Technical measures to prevent harm: removing any datasets from training data which contain harmful content; vetting datasets; fine-tuning models using reinforcement learning techniques to avoid harmful outputs; adversarial testing and evaluations; stress-testing to identify potential vulnerabilities such as prompt injection; developing explainable AI models; and guardrails to prevent certain types of output generation.
  2. Organizational approaches to deter harm: minimum safety standards; terms and conditions; user verification; content moderation and screening (including AI tools to automate content identification and removal); watermarking, labelling and tracking metadata; tagging verifiable data; correcting or flagging fake news; user behavior analysis; blocking, alerts and reporting mechanisms; education, training and awareness; restricted access; governance policies including ethics; and developing appropriate and proportionate law, guidance and regulation.
  3. Law enforcement responses to harm: monitoring and intelligence-gathering; detection tools (including AI tools to automate detection); investigatory processes; agreements with technology companies to access evidence; identifying high-risk individuals and communities; accessing technical skillsets; increasing capacity to address the growth in online harms; deploying counter-influence AI tools to mitigate the effects (for example redirecting to counter-radicalization resources); and working with technology companies to respond to emerging criminal behaviors.

In conclusion, generative AI tools can provide great benefits; but could also lead to exponential increases in online harms. Technology companies, governments, and law enforcement agencies are working together to anticipate, understand and prevent such harms occurring, but ultimately everyone will need to be conscious and responsible in their use of AI.

Authors

Dr. Lucy Mason

Innovation Lead, 乌鸦传媒 Invent Public Sector
鈥淚nnovation is key to the future of public sector organizations. I鈥檓 passionate about helping them get there, to keep people safe and secure and to build a people-centered, technology-enabled world together. We need to build innovation cultures, upskill people in how to innovate effectively 鈥 how to apply great ideas successfully 鈥 and leverage rapidly evolving technologies, such as quantum and AI, for the public good.鈥

James Wilson

I&D Advisory, 乌鸦传媒 & Data, 乌鸦传媒聽
James is the AI Ethicist in the AI Labs at 乌鸦传媒, and the Lead Gen AI Architect in the UK 乌鸦传媒 and Data Team (I&D). He focuses on the safe and ethical implementation of Artificial Intelligence and has over 30 year’s experience in industry.