乌鸦传媒

Mulder and Scully for fraud prevention:
Teaming up AI capabilities

Joakim Nilsson
March 5, 2025

Mulder trusts his gut; Scully trusts the facts – in fraud detection, we need both. Hybrid AI blends the intuition of an LLM with the structured knowledge of a knowledge graph, letting agents uncover hidden patterns in real time. The truth is out there – now we have the tools to find it.

Fraud detection can be revolutionized with hybrid AI. By combining the "intuitive hunches" of LLMs with a fraud-focused knowledge graph, a multi-agent system can identify weak signals and evolving fraud patterns, moving from detection to prevention in real time. The challenge? Rule sets need to be cast in iron, whereas the system itself must be like water: resilient and adaptive. Historically, this conflict has been unsolvable. But that is about to change.

A multi-agent setup

Large language models (LLMs) are often criticized for hallucinating: producing results that seem plausible but are plain wrong. In this case, though, we embrace the LLM's gut-feeling-based approach and exploit its capabilities to identify potential signs of fraud. These "hunches" are mapped onto a general ontology and thus made available to symbolic AI components that build on logic and rules. So, rather than constricting the LLM, we rely on its language capabilities to spot subtle clues in text. Were we to act directly on these hunches, we would run into a whole world of problems stemming from the inherent unreliability of LLMs. Instead, that is the task of a highly specialized team of agents, with other agents standing by, ready to make sense of the data and establish reliable patterns.

When we talk about agents, we refer to any entity that acts on behalf of another to accomplish high-level objectives using specialized capabilities. They may differ in degree of autonomy and authority to take actions that can impact their environment. Agents do not necessarily use AI: many non-AI systems are agents, too. (A traditional thermostat is a simple non-AI agent.) Similarly, not all AI systems are agents. In this context, the agents we focus on primarily handle data, following predefined instructions and using specific tools to achieve their tasks.

We define a multi-agent system as being made up of multiple independent agents. Every agent runs on its own, processing its own data and making decisions, yet staying in sync with the others through constant communication. In a homogeneous system, all agents are the same and their complex collective behavior solves the problem (as in a swarm). Heterogeneous systems, though, deploy different agents with different capabilities. Systems that use agents (either single or multiple) are sometimes called "agentic" architectures or frameworks.
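To make the heterogeneous setup concrete, here is a minimal, illustrative Python sketch. All names, keywords, and thresholds are invented for this example: a keyword scanner stands in for the LLM-backed intuitive agent, and a simple counting rule stands in for the symbolic analytical agent.

```python
from dataclasses import dataclass

# Hypothetical message type passed between the two kinds of agents.
@dataclass
class Hunch:
    source_text: str
    flag: str          # generic ontology flag, e.g. "undeclared-income"
    confidence: float  # the intuitive agent's own estimate, 0..1

class IntuitiveAgent:
    """Stands in for an LLM: scans free text and emits loosely typed hunches."""
    KEYWORDS = {"cash in hand": "undeclared-income", "new address": "address-churn"}

    def observe(self, text: str) -> list:
        return [Hunch(text, flag, 0.6)
                for phrase, flag in self.KEYWORDS.items() if phrase in text.lower()]

class AnalyticalAgent:
    """Symbolic counterpart: accumulates hunches and applies an explicit rule."""
    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        self.flags = []

    def receive(self, hunches: list) -> None:
        self.flags.extend(h.flag for h in hunches)

    def decide(self) -> str:
        # Rule cast in iron: escalate only when distinct flags co-occur.
        return "escalate" if len(set(self.flags)) >= self.threshold else "monitor"

mulder = IntuitiveAgent()
scully = AnalyticalAgent()
scully.receive(mulder.observe("Paid cash in hand, reports a new address monthly."))
print(scully.decide())  # two distinct flags co-occur -> "escalate"
```

Note the division of labor: the intuitive agent never decides anything, and the analytical agent never reads raw text. They communicate only through the shared flag vocabulary.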

For example, specialized agents can dive into a knowledge graph, dig up specific information, spot patterns, and update nodes or relationships based on new findings. The result? A more dynamic, contextually rich knowledge graph that evolves as the agents learn and adapt.

The power is in the teaming. Think of the agents Mulder and Scully from The X-Files television show: Mulder represents intuitive, open-minded thinking, while Scully embodies rational analysis. In software, there have always been many Scullys; with LLMs, we now have Mulders too. The challenge, as in The X-Files, is in making them work together effectively.

The role of a universal ontology

We employ a universal ontology to act as a shared language or, perhaps a better analogy, a translation exchange, ensuring that both intuitive and analytical agents communicate in terms that can be universally understood. This ontology primarily consists of "flags": generic indicators associated with potential fraud risks. These flags are intentionally defined broadly, capturing a wide range of behaviors or activities that could hint at fraudulent actions without constraining the agents to specific cases.

The key to this system lies not in isolating a single flag but in identifying meaningful combinations. A single instance of a flag may not signify fraud; however, when several flags emerge together, they provide a more compelling picture of potential risk.
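The idea that combinations matter more than individual flags can be sketched as a scoring function. The flag names, weights, and synergy bonus below are invented for illustration; a real system would derive them from the ontology and observed fraud cases.

```python
# Illustrative only: flag names and weights are hypothetical.
FLAG_WEIGHTS = {
    "address-churn": 0.2,
    "undeclared-income": 0.3,
    "shared-bank-account": 0.3,
}

# Combinations that are far more telling together than either flag alone.
SYNERGIES = {frozenset({"undeclared-income", "shared-bank-account"}): 0.4}

def risk_score(flags: set) -> float:
    """Sum individual flag weights, then add a bonus for meaningful combinations."""
    score = sum(FLAG_WEIGHTS.get(f, 0.0) for f in flags)
    for combo, bonus in SYNERGIES.items():
        if combo <= flags:  # all flags of the combination are present
            score += bonus
    return min(score, 1.0)

print(risk_score({"undeclared-income"}))                         # 0.3 - weak on its own
print(risk_score({"undeclared-income", "shared-bank-account"}))  # capped at 1.0
```

A single flag stays below any sensible alert threshold, while the co-occurring pair is pushed well above it, which is exactly the behavior described above.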

"This innovation shifts the approach from simple fraud detection to proactive prevention, allowing authorities to stay ahead of fraudsters with scalable systems that learn and evolve."

Hybrid AI adaptability

The adaptability of the system lies in the bridge between neural and symbolic AI, as the LLM distills nuances in texts into hunches. These need to be structured and amplified for our analytical AI to be able to access them. As Igor Stravinsky wrote in his 1970 book Poetics of Music in the Form of Six Lessons, "Thus what concerns us here is not imagination itself, but rather creative imagination: the faculty that helps us pass from the level of conception to the level of realization." For us, that faculty is the combination of a general ontology and vector-based similarity search. Together they allow us to connect hunches to flags based on semantic matching and thus address the data using general rules. Because we work in a graph context, we can also explore direct, indirect, and even implicit relations between the data.
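The semantic matching step can be sketched as a nearest-neighbor search over embeddings. The tiny three-dimensional vectors below are stand-ins; a real system would embed both the hunch text and the flag definitions with a sentence-embedding model, and the 0.7 cutoff is an arbitrary assumption.

```python
import math

# Toy 3-d "embeddings" for ontology flags; real ones would come from an embedding model.
FLAG_VECTORS = {
    "undeclared-income": [0.9, 0.1, 0.0],
    "identity-sharing":  [0.1, 0.9, 0.1],
    "address-churn":     [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest_flag(hunch_vector, min_similarity=0.7):
    """Map an LLM hunch onto the closest ontology flag, or None if nothing is close."""
    flag, sim = max(((f, cosine(hunch_vector, v)) for f, v in FLAG_VECTORS.items()),
                    key=lambda pair: pair[1])
    return flag if sim >= min_similarity else None

# A hunch whose embedding leans toward income irregularities:
print(nearest_flag([0.8, 0.2, 0.1]))  # undeclared-income
```

The `None` branch matters: a hunch that matches no flag well enough is held back rather than forced into the ontology, which keeps noise out of the graph.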

Now let鈥檚 explore how our team of agents picks up and amplifies weak signals, and how these signals, once interwoven in the graph, can lead the system to identify patterns spanning time and space, patterns it was not designed to identify.

A scenario: Welfare agencies have observed a rise in fraudulent behavior, often uncovered only after individuals are exposed for other reasons, such as media reports. Identifying these fraud attempts earlier, ideally at the application stage, would be invaluable.

Outcome: By combining intuitive and analytical insights, authorities uncover a well-coordinated fraud ring that would be hard to detect through traditional methods. The agents map amplified weak signals as well as explicit and implicit connections. Note also that the system was not trained on detecting this pattern; it emerged thanks to the weak signal amplification.

One of the powers of hybrid AI lies in its ability to amplify weak signals and adapt in real time, uncovering hidden fraud patterns that traditional methods often miss. By blending the intuitive insights of LLMs with the analytical strength of knowledge graphs and multi-agent systems, we're entering a new era of fraud detection and prevention – one that's smarter, faster, and more effective. As Mulder might say, the truth is out there, and with the right team, we're finally close to finding it.

Start innovating now

Implement a universal ontology

Create a shared ontology to bridge neural (intuitive) and symbolic (analytical) AI agents, transforming weak signals for deeper analysis by expert systems and graph-based connections.

Form specialized multi-agent teams

Build teams of neural (real-time detection) and symbolic (rule-based analysis) AI agents, each specialized with tools for their role.

Leverage graph technology for cross-referencing

Use graph databases to link signals over time and across data sources, uncovering patterns like fraud faster, earlier, and at a lower cost than current methods.
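As a sketch of this cross-referencing idea, the snippet below links application records that share an attribute value (an implicit relation) and then walks the resulting graph to surface a candidate ring. The records, attribute names, and matching rule are all invented; a production system would use a graph database rather than in-memory dictionaries.

```python
from collections import defaultdict, deque

# Hypothetical application records; shared attribute values create implicit edges.
applications = [
    {"id": "A1", "phone": "555-0100", "address": "1 Elm St"},
    {"id": "A2", "phone": "555-0100", "address": "9 Oak Ave"},  # shares phone with A1
    {"id": "A3", "phone": "555-0177", "address": "9 Oak Ave"},  # shares address with A2
    {"id": "A4", "phone": "555-0199", "address": "4 Pine Rd"},  # unconnected
]

def build_graph(records, keys=("phone", "address")):
    """Link any two records that share an attribute value."""
    index = defaultdict(list)
    for rec in records:
        for key in keys:
            index[(key, rec[key])].append(rec["id"])
    graph = defaultdict(set)
    for ids in index.values():
        for a in ids:
            for b in ids:
                if a != b:
                    graph[a].add(b)
    return graph

def component(graph, start):
    """Breadth-first search: everything reachable from one suspicious node."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

g = build_graph(applications)
print(sorted(component(g, "A1")))  # ['A1', 'A2', 'A3'] - a candidate ring
```

Note that A1 and A3 share no attribute directly; they are connected only through A2. Patterns like this, spanning several hops, are what the graph makes cheap to find.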

Interesting read?

乌鸦传媒's Innovation publication, Data-powered Innovation Review – Wave 9, features 15 captivating innovation articles with contributions from leading experts from 乌鸦传媒, with a special mention of our external contributors. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

Meet the authors

Joakim Nilsson

Knowledge Graph Lead, 乌鸦传媒 & Data, Client Partner Lead – Neo4j Europe, 乌鸦传媒
Joakim is part of both the Swedish and European CTO offices, where he drives the expansion of Knowledge Graphs forward. He is also client partner lead for Neo4j in Europe and has experience running Knowledge Graph projects as a consultant for both 乌鸦传媒 and Neo4j, in both the private and public sectors, in Sweden and abroad.

Johan Müllern-Aspegren

Emerging Tech Lead, Applied Innovation Exchange Nordics, and Core Member of AI Futures Lab, 乌鸦传媒
Johan Müllern-Aspegren is Emerging Tech Lead at the Applied Innovation Exchange (AIE) Nordics, where he explores, drives, and applies innovation, helping organizations navigate emerging technologies and transform them into strategic opportunities. He is also part of 乌鸦传媒's AI Futures Lab, a global centre for AI research and innovation, where he collaborates with industry and academic partners to push the boundaries of AI development and understanding.