
AI makes it possible to create highly convincing deepfakes (images, videos, and audio).

These tools can be used to manipulate public opinion, spread false information, and mislead the public, posing a serious risk to society.

Deepfakes can discredit individuals or organizations by convincingly imitating a person’s voice and making them appear to say things they never said.
This damages their reputation and credibility and can destabilize companies or institutions.

AI can also impersonate a person using a synthetic voice.

This impersonation can lead to financial fraud, unauthorized access to sensitive information, or targeted attacks against individuals or companies.

Technology
  • High-Performance Algorithms: Development of advanced algorithms for accurate deepfake detection.
  • Robustness: Deep learning models trained on large proprietary corpora of synthetic voices (a minimal illustrative sketch follows this list).
Efficiency
  • Maximum Efficiency: Our tools are designed to achieve the highest possible detection accuracy.
  • Scalability: Solutions structured to evolve over time and adapt to new threats.
Use Cases
  • Public Security: Authentication of public figures’ speeches to prevent manipulation.
  • Corporate Protection: Strengthening our voice authentication and identification products against deepfake attacks.
  • Investigations: Support for investigative work to detect mass manipulation attempts.
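
To make the approach in the Technology bullets concrete, here is a minimal, purely illustrative sketch (not our production system) of a deep learning detector that scores short audio clips as genuine or synthetic from log-mel spectrogram features. The PyTorch/torchaudio pipeline, the small CNN architecture, the 16 kHz sample rate, and the label convention are all assumptions made for this example.

```python
import torch
import torch.nn as nn
import torchaudio


class SpoofDetector(nn.Module):
    """Toy binary classifier: label 0 = genuine speech, 1 = synthetic speech."""

    def __init__(self, n_mels: int = 64):
        super().__init__()
        # Log-mel spectrogram front end (mono 16 kHz input assumed).
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=16_000, n_mels=n_mels)
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        # Small CNN encoder, pooled over time and frequency.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) mono audio at 16 kHz.
        feats = self.to_db(self.melspec(waveform)).unsqueeze(1)  # (batch, 1, mels, frames)
        h = self.encoder(feats).flatten(1)                       # (batch, 32)
        return self.classifier(h)                                # logits over {genuine, synthetic}


if __name__ == "__main__":
    model = SpoofDetector()
    clips = torch.randn(4, 16_000)      # four dummy 1-second clips
    scores = model(clips).softmax(dim=-1)
    print(scores[:, 1])                 # probability each clip is synthetic
```

In practice, such a model would be trained on large corpora of genuine and synthetic speech, which is where the proprietary synthetic voice data mentioned above matters most.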

Our commitment to advancing audio deepfake detection has been recognized internationally. We’re proud to have secured 4th place globally in the prestigious ASVspoof 2024 competition (open conditions), a testament to the effectiveness and reliability of our cutting-edge technology. Our solution is designed to protect against the growing threat of audio deepfakes, ensuring the authenticity and security of your communications and authentication processes.

We benefit from the support of prestigious partners who believe in our expertise and innovative solutions.
Through collaboration with major players such as the Directorate General of Armaments (DGA) and the Defense Innovation Agency (AID), we develop cutting-edge technologies to anticipate and counter AI-related threats.
Their funding and trust attest to the credibility and effectiveness of our solutions.
The Institute for Research in Computer Science and Random Systems (IRISA) is also a key partner in carrying out our innovative projects.
