Neuramancer Raises €1.7M to Advance Forensic AI for Deepfake Detection

The Bavarian startup is targeting insurance fraud first, and sees Europe’s push for explainable AI as a competitive edge

Neuramancer AI Solutions GmbH has secured a €1.7 million pre-seed funding round aimed at speeding up the commercialisation of its deepfake detection platform, initially focusing on the insurance sector.

The Bavarian firm, previously known as Neuraforge, was established on the premise that AI-generated media manipulation poses a genuine risk.

The German Insurance Association (GDV) reports billions of euros in annual losses to insurance fraud, and notes the problem is growing as generative AI makes it easy to alter damage photos and manipulate video calls.

Neuramancer is wagering that as the problem grows, so does the value of a solution.

The investment round is led by Vanagon Ventures, joined by Bayern Kapital (through its Innovationsfonds EFRE II), Nuremberg’s ZOHO.VC, and the family office Lightfield Equity.

Senior figures from financial services and major tech companies, alongside seasoned platform founders, have also joined as angel investors.

Neuramancer’s method is described as focusing on forensic depth rather than pattern matching. Its detection system scrutinises statistical anomalies in image and video noise, concentrating on structural artefacts rather than semantic content.

This approach allows it to identify manipulations that conventional AI detectors might miss, including those made with the latest generative models, and to produce forensic analysis reports revealing not only whether media has been altered but also where and how.
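The article does not disclose Neuramancer’s actual algorithms, but the general idea behind noise-level forensics can be sketched: extract a high-pass residual from the image, then look for regions whose noise statistics deviate from the rest of the frame, since spliced or AI-generated patches often carry a different noise signature than camera-captured pixels. The following Python sketch is purely illustrative, uses a deliberately crude denoiser, and should not be read as the company’s method:

```python
import numpy as np

def noise_residual(img: np.ndarray, k: int = 3) -> np.ndarray:
    """High-pass residual: image minus a local mean (a crude denoiser).

    Real forensic tools use far stronger denoisers; this is a toy stand-in.
    """
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    # Local mean via an accumulated sliding window (naive but dependency-free).
    smoothed = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smoothed += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smoothed /= k * k
    return img.astype(float) - smoothed

def blockwise_variance(residual: np.ndarray, block: int = 8) -> np.ndarray:
    """Variance of the residual in non-overlapping blocks.

    Blocks whose noise variance is an outlier relative to the rest of the
    frame hint at *where* an image was altered, not just *whether* -- the
    kind of localisation the article describes.
    """
    h, w = residual.shape
    h, w = h - h % block, w - w % block
    blocks = residual[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3))
```

A detector built on this idea would compare each block’s statistic against the frame-wide distribution and flag outliers; because the evidence is a concrete per-region measurement rather than a black-box score, it is also the kind of output that can be explained in an audit or a courtroom.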

“While many rely on opaque black-box models, we adopt a scientifically grounded, fully transparent approach,” said co-founder Anika Gruner.

“Explainable AI from Europe will offer a strategic advantage for companies needing protection against synthetic manipulation.”

This transparency is significant. As regulatory demands for auditable AI systems increase across the EU, particularly under the AI Act and sector-specific frameworks, Neuramancer is presenting explainability as a compliance benefit.

Insurance companies assessing fraud prevention tools will face growing pressure to ensure their detection methods are understandable to regulators and courts.

The new funding will support platform development, team growth, and market entry, beginning with the German insurance industry before wider commercialisation.

Neuramancer is entering a market that was not significant until generative AI developed, presenting both opportunities and challenges. Detection tools must evolve alongside generation tools in a race with no signs of slowing down.
