Claim: France 24 reported on Indian and Afghan networks posing as Iranian users on social media to execute “a highly-coordinated” disinformation campaign against Pakistan. A clip from the outlet shows a presenter talking about the “recurring” strategy that specifically aims to derail ongoing, sensitive diplomatic mediation facilitated by Islamabad between the US and the West Asian country by injecting “false narratives”.
Fact: France 24 never aired such a report nor did the presenter — identified as William Hilderbrandt — make such remarks. The video is doctored.
On 28 March 2026, multiple media outlets reported that French state-run television network France 24 had uncovered a nexus of Indian and Afghan accounts running a disinformation campaign against Pakistan (archived here, here, and here, respectively).
In the video report attributed to France 24, the presenter states:
“Cyber intelligence has identified a highly-coordinated and ongoing misinformation campaign targeting Pakistan. Data reveals that operators based in India and Afghanistan are systematically posing as Iranian users across social media platforms, specifically X, formally known as Twitter. This is a recurring tactic. These specific networks frequently run continuous disinformation operations against Pakistan. However, this current surge is specifically timed to disrupt a critical geopolitical development. Pakistan is currently facilitating sensitive mediation efforts between the United States and Iran. By adopting fake Iranian identities, these Indian and Afghan networks are attempting to inject false narratives and deliberately derail this US-Iran diplomatic channel. Intelligence monitors confirm this is not an isolated incident, but part of a persistent, coordinated strategy by these operators to manipulate regional diplomacy through digital impersonation.”
Pakistan ‘uncovered’ disinformation campaign
On 28 March 2026, the state-run Associated Press of Pakistan (APP) reported (archive) that a “well-orchestrated disinformation campaign has been uncovered”, showing the involvement of Indian and Afghan operators who used fake Iranian identities on social media to target Islamabad.
It said the campaign primarily peddled false claims that Pakistan had betrayed Iran by allowing its vessels to transport oil to the latter’s enemies through the Strait of Hormuz, a strategic maritime route currently blocked due to the US-Israeli war against Tehran.
The alleged operators behind this campaign created new accounts or rebranded old ones on X (formerly Twitter) to appear as Iranian news outlets or commentators, spreading false narratives aimed at damaging the diplomatic relationship between Islamabad and Tehran during a period of regional tensions, the publication added.
The APP also said an investigation into these accounts revealed suspicious patterns, such as location shifts from India and Afghanistan, with several profiles previously linked to Indian identities before being repurposed for the effort.
However, the state-run outlet did not clarify who conducted the investigation or whether it did so itself.
Pakistan TV, the official English news channel of the state broadcaster Pakistan Television Corporation (PTV), also shared the same claims on its social media accounts without mentioning who conducted the investigation. The posts can be viewed here, here, here, and here.
Pakistan mediates ceasefire
Towards the end of the US-Israel war against Iran that started on 28 February and lasted over five weeks, US President Donald Trump repeatedly threatened the Islamic Republic with destruction.
On 7 April, he went as far as to say that “a whole civilisation will die tonight, never to be brought back again” if Iran did not give in.
However, Pakistan-led mediation efforts eventually culminated in a two-week ceasefire on 8 April, with the “Islamabad Talks” scheduled for 10 April, according to an announcement by Pakistan’s Prime Minister Shehbaz Sharif.
Sharif expressed “deepest and sincere gratitude to our brotherly countries” — Türkiye, China, Egypt, and Saudi Arabia — “for extending invaluable and all out support” in achieving the ceasefire. He also thanked members of the Gulf Cooperation Council (GCC).
Fact or Fiction?
Soch Fact Check first observed that the language used by the France 24 presenter, William Hilderbrandt, is blunt and unlike the neutral tone the French state-run news outlet typically adopts.
We also noticed several tell-tale signs of content generated or manipulated using artificial intelligence (AI) tools. For example:
- In the first seven seconds of the video, Hilderbrandt’s mouth, especially the lower half, appears blurred;
- At the 0:26 mark, the Eiffel Tower in the background vanishes;
- At the 0:33 and 0:54 marks, his lips go out of sync with what he is saying;
- At the 0:55 mark, there is an unnatural pause and emphasis when saying “persistent, coordinated strategy”;
- At the 0:57 mark, as he says “operators”, his lips wobble.
Soch Fact Check also observed that the chyron at the bottom appears in a solid vanadyl-blue colour, instead of France 24’s typical translucent one, visible in authentic segments here and here.
Deepfake detection tools
To corroborate whether the video was indeed made or doctored using AI tools, we checked it in various detectors such as Deepfake-O-Meter, InVID-WeVerify, and Global Online Deepfake Detection System (GODDS).
In Deepfake-O-Meter, we tested the content twice: once using the video and the second time using only the audio.
- For the video, the seven detectors we selected returned probabilities of AI generation or manipulation of 99.9%, 79.2%, 67.3%, 65.2%, 64.5%, 48.5%, and 47.9%.
- For the audio, the six detectors we selected returned probabilities of 100%, 100%, 99.5%, 99.1%, 98.5%, and 1.1%.
According to InVID-WeVerify, which “uses a machine learning classifier” that checks for “face swapping and face reenactment” using AI, the probability of the video being created synthetically is 96%. “The algorithms of the verification plugin find very strong evidence suggesting that this video contains AI-manipulated faces,” it said.
We also tested the video in the Global Online Deepfake Detection System (GODDS), a tool developed by Northwestern University’s Security & AI Lab (NSAIL) that uses a combination of various models along with human analysis to provide a holistic summary of the results.
GODDS used 22 deepfake detection algorithms for the visual content and 70 for the audio component. Two trained analysts also examined the clip.
All predictive models rated both the visual and audio content as “likely to be fake”, though with varying confidence:
- Eight of the 22 visual models rated the video likely fake with a probability above 0.5; the remaining 14 did so with a probability below 0.5.
- Sixty-five of the 70 audio models rated the audio likely fake with a probability above 0.5; the remaining five did so with a probability below 0.5.
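As a rough illustration of how such an ensemble tally is produced, the split of models above and below a probability threshold can be computed as follows. The 0.5 threshold and the per-model scores are assumptions for illustration only; GODDS does not publish its individual model outputs in this report.

```python
# Illustrative sketch (not GODDS's actual methodology): count how many
# ensemble members score a clip's fake-probability above a threshold.

def split_by_threshold(probabilities, threshold=0.5):
    """Return (above, at_or_below) counts of model scores versus threshold."""
    above = sum(1 for p in probabilities if p > threshold)
    return above, len(probabilities) - above

# Hypothetical per-model scores shaped like the article's 8-of-22 visual
# and 65-of-70 audio splits (the actual scores were not published).
visual_scores = [0.9] * 8 + [0.3] * 14
audio_scores = [0.95] * 65 + [0.2] * 5

print(split_by_threshold(visual_scores))  # (8, 14)
print(split_by_threshold(audio_scores))   # (65, 5)
```

Note that each model can still label a clip “likely fake” while reporting a confidence below 0.5, which is why the headline verdict and the threshold split differ.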
According to the human analysts, the video contains “several indicators” that show it may be digitally manipulated via AI.
“As the subject speaks, his teeth appear to blur and change shape (e.g., 0:01, 0:27, 0:32, 0:33, etc.). Additionally, the subject’s teeth seem to disappear from 0:27-0:28, and reappear shortly afterward,” they said, adding that Hilderbrandt’s eyes also appear to blur while blinking at the following timestamps: 0:03, 0:06, 0:24, 0:25, 0:33, and 1:01.
The analysts explained that the journalist’s face “appears unnaturally smooth and blurred throughout the video”, indicating that occasional facial manipulations may have been “hidden” via blurring.
“The subjects’ voices seem to lack natural tonal and cadence variations characteristic of human voices,” they added. “We believe this media is likely manipulated via artificial intelligence.”
Sound engineer’s analysis
Soch Fact Check also sought a comment from Shaur Azher, a lecturer who teaches sound design and sound recording at the University of Karachi and the Shaheed Zulfikar Ali Bhutto Institute of Science and Technology (SZABIST). He also works as an audio engineer at our sister organisation, Soch Videos, and specialises in mixing and mastering audio.
Azher explained that for comparison purposes, Sample A is the claim and Sample B is an audio clip extracted from an authentic France 24 broadcast, with durations of one minute and three seconds and 29 seconds, respectively.
“Sample A is definitely a fabricated audio file utilising an AI voice model,” he said. “Its failure to meet EBU R128 standards, combined with its lack of organic jitter/shimmer, missing breath signatures, and an impossible -57 decibels (dB) noise floor, proves it was generated digitally and never broadcast by a legitimate European news network.”
He first provided a baseline technical assessment:
- Loudness and dynamics: Sample A features a highly-compressed dynamic range, peaking at -3.3 dB with a short-term Loudness Units relative to Full Scale (LUFS) of -12.0 and an integrated LUFS of -14.1. However, Sample B peaks at -9.3 dB with an integrated LUFS of -22.4.
- Broadcast compliance: Like most European broadcasters, France 24 strictly adheres to the EBU R128 loudness standard (target level of -23 LUFS ±0.5). Sample A’s -14.1 LUFS is drastically non-compliant, proving it did not pass through France 24’s mastering chain.
- Frequency response: The spectrogram for Sample A shows artificially-elevated energy between 100 and 3,700 hertz (Hz). This “boosted” mid-range is a common artefact of text-to-speech (TTS) models attempting to maximise vocal intelligibility.
- Sibilance: Sample A contains harsh, synthetic sibilance transients, whereas Sample B contains natural de-essed — or desibilised — vocal transients, which are typical of a professional broadcast microphone.
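The loudness-compliance point above can be sketched in a few lines. The LUFS figures are the ones reported by the engineer; the helper function itself is an illustrative assumption, not France 24’s actual mastering chain.

```python
# Hedged sketch: EBU R128 targets an integrated loudness of -23 LUFS,
# with a tolerance of +/-0.5 Loudness Units (LU). This helper reports
# how far a measured value deviates from that target.

def r128_deviation(integrated_lufs, target=-23.0):
    """Signed deviation, in LU, of a measured integrated loudness
    from the EBU R128 target."""
    return round(integrated_lufs - target, 1)

print(r128_deviation(-14.1))  # Sample A (viral clip): 8.9 LU over target
print(r128_deviation(-22.4))  # Sample B (authentic broadcast): 0.6 LU
```

A deviation of nearly 9 LU, as in Sample A, is far outside the ±0.5 LU tolerance, which is what the engineer means by “drastically non-compliant”.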
Then, he provided the following observations to support his conclusion:
- Jitter and shimmer measurement
Sample A: Jitter and shimmer are unnaturally low. The fundamental frequency is perfectly stable, lacking the organic micro-fluctuations present in human vocal folds. This mechanical perfection is a primary hallmark of vocoder-based generative audio.
Sample B: Both amplitude and pitch modulation display standard, natural variations, consistent with human speech mechanics.
- Phase coherence analysis
Sample A: AI voices often struggle with phase coherence, especially in the higher frequency registers, because the audio is reconstructed from a mel-spectrogram rather than recorded by a physical diaphragm. Listening to the high-end frequencies reveals mild phasing issues and smearing typical of neural vocoders (like HiFi-GAN).
Sample B: The phase coherence is completely intact across the spectrum, indicative of a single-point acoustic recording (a presenter speaking into a microphone).
- Breath signature comparison
Sample A: The respiratory mechanics are deeply flawed, with the file lacking natural inhalation phases. Where pauses do occur, they are dead-silent digital dropouts rather than actual physical breaths. The spectrogram visually confirms rigid, block-like silences.
Sample B: The natural respiratory cycles are audible. One can hear the broadcaster intake air naturally before articulating the next sentence, which bridges the phrases organically.
- Room tone fingerprint comparison
Sample A: The noise floor sits at an unnaturally clean -57 dB, practically an anechoic digital vacuum. It lacks any acoustic fingerprint, ambient room reflections, or equipment hum.
Sample B: The noise floor is consistent at around -40 dB. This includes the natural room tone of the broadcast studio, subtle low-end hum from the heating, ventilation, and air conditioning (HVAC) system, and the self-noise of the microphone preamp, which remains continuous even when the speaker pauses.
- Cepstral coefficient deviation testing (analysis of mel-frequency cepstral coefficients, or MFCCs)
Sample A: Aural analysis indicates a static vocal tract simulation. The transitions between formants are overly smooth and lack the physical friction of a human tongue, teeth, and palate interacting.
Sample B: The formant transitions are highly dynamic and complex, reflecting the rapid physical movement of a natural human vocal tract.
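The noise-floor measurement described above can be sketched with the standard library alone: slice the signal into short windows, take the quietest window’s root-mean-square (RMS) level, and express it in decibels relative to full scale (dBFS). The window size and the silence guard are illustrative assumptions, not the engineer’s actual tooling.

```python
import math

# Hedged sketch of a noise-floor estimate: the quietest RMS window of
# a signal, in dBFS. Input samples are floats in [-1.0, 1.0].

def noise_floor_dbfs(samples, window=1024):
    """Return the dBFS level of the quietest RMS window in `samples`."""
    quietest = min(
        math.sqrt(sum(x * x for x in samples[i:i + window]) / window)
        for i in range(0, len(samples) - window + 1, window)
    )
    # Guard against digitally silent windows (log of zero): clamp at -180 dBFS.
    return 20 * math.log10(max(quietest, 1e-9))

# A tone followed by dead-silent pauses, mimicking Sample A's "digital
# vacuum": the silent windows dominate and the floor collapses.
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 48000) for t in range(4096)]
silence = [0.0] * 2048
print(noise_floor_dbfs(tone + silence))  # -180.0: digitally silent floor
```

A genuine studio recording, by contrast, never reaches such a floor: room tone, HVAC rumble, and preamp self-noise keep the quietest windows around -40 dB, as in Sample B.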
Original unrelated clip
We also reverse-searched keyframes from the viral video but only found posts that rehashed the clip on social media. Interestingly, some of them carried not France 24’s logo but those of CNN and the BBC; they can be seen here, here, here, and here.
When we searched Hilderbrandt’s France 24 profile and his X account, we found a report about how “Israel tighten(ed) media restrictions on war coverage” shared here and here on 27 and 30 March 2026, respectively, depicting the journalist in the same clothes and sporting a red ribbon on his lapel — a symbol for HIV/AIDS awareness. It was also posted on YouTube.
Hilderbrandt’s body movements in the first seven seconds of the authentic France 24 report match those in the viral doctored video. This could explain why Deepfake-O-Meter and GODDS yielded comparatively lower probabilities for the video component than for the audio, which is unequivocally fake.
Soch Fact Check also sought a comment from France 24, which said, “The video is a fake using the images of one of France 24’s journalists, William Hilderbrandt, who anchors the Paris Direct news segment from Friday to Sunday, between 10AM and 2PM Paris time.
“The on-screen graphics are also inconsistent with France 24’s editorial design,” it added.
Soch Fact Check, therefore, concludes that the video is doctored.
Virality
Soch Fact Check found the claim circulating here, here, here, here, here, and here on Facebook, here, here, here, here, and here on Instagram, and here and here on Threads.
It was also shared by the ruling Pakistan Muslim League-Nawaz (PML-N) Senator Abid Sher Ali, HUM News Investigations Editor Zahid Gishkori, and Express Media Group Associate Marketing Director Shirmeen Khurram.
The video was posted by X users @ZardSi, @PakVocals, and @EPropoganda1, all of which have previously shared misinformation debunked by Soch Fact Check.
The claim circulated here, here, and here on YouTube and here, here, and here on TikTok.
Some news outlets — such as Daily Jang, Times of Islamabad, and GTV Network HD — also shared the claim as is.
Conclusion: France 24 never aired this report nor did journalist William Hilderbrandt make such remarks. The video is doctored.
Background image in cover photo: Wikimedia Commons
To appeal against our fact-check, please send an email to appeals@sochfactcheck.com