How digital forensics could prove what’s real in the age of deepfakes
NEWS | 25 January 2026
As deepfakes blur the line between truth and fiction, we’ll need a new class of forensic experts to determine what’s real, what’s fake and what can be proved in court
Imagine this scenario. The year is 2030; deepfakes and artificial-intelligence-generated content are everywhere, and you are a member of a new profession—a reality notary. From your office, clients ask you to verify the authenticity of photos, videos, e-mails, contracts, screenshots, audio recordings, text message threads, social media posts and biometric records. People arrive desperate to protect their money, reputation and sanity—and also their freedom.
All four are at stake on a rainy Monday when an elderly woman tells you her son has been accused of murder. She carries the evidence against him: a USB flash drive containing surveillance footage of the shooting. It is sealed in a plastic bag stapled to an affidavit, which explains that the drive contains evidence the prosecution intends to use. At the bottom is a string of numbers and letters: a cryptographic hash.
The Sterile Lab
Your first step isn’t to look at the video—that would be like traipsing through a crime scene. Instead you connect the drive to an offline computer with a write blocker, a hardware device that prevents any data from being written back to the drive. This is like bringing evidence into a sterile lab. The computer is where you hash the file. Cryptographic hashing, an integrity check in digital forensics, has an “avalanche effect” so that any tiny change—a deleted pixel or audio adjustment—results in an entirely different code. If you open the drive without protecting it, your computer could quietly modify metadata—information about the file—and you won’t know whether the file you received was the same one that the prosecution intends to present. When you hash the video, you get the same string of numbers and letters printed on the affidavit.
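To see the avalanche effect concretely, here is a minimal sketch in Python using the standard hashlib module; the byte strings are invented stand-ins for file contents. Hashing two inputs that differ by a single character yields SHA-256 digests that share nothing recognizable.

```python
# A minimal sketch of the "avalanche effect": two inputs that differ by
# one byte produce completely unrelated digests. The byte strings below
# are placeholders, not real evidence data.
import hashlib

original = b"surveillance_footage_frame_data..."
tampered = b"surveillance_footage_frame_datb..."  # one character changed

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tampered).hexdigest())
# The two hex strings look nothing alike, which is why even a one-pixel
# edit to the video would break the match with the hash on the affidavit.
```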
Next you create a copy and hash it, checking that the codes match. Then you lock the original in a secure archive. You move the copy to a forensic workstation, where you watch the video—what appears to be security camera footage showing the woman’s adult son approaching a man in an alley, lifting a pistol and firing a shot. The video is convincing because it’s boring—no cinematic angles, no dramatic lighting. You’ve actually seen it before—it recently began circulating online, weeks after the murder. The affidavit notes the exact time the police downloaded it from a social platform.
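The copy-and-verify step amounts to hashing both files and comparing the digests with the value printed on the affidavit. A minimal sketch, with placeholder file paths and a placeholder affidavit hash:

```python
# Sketch of the verification step: hash the evidence file and its working
# copy in chunks, then compare both digests with the affidavit value.
# File paths and the affidavit hash are placeholders.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

AFFIDAVIT_HASH = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

original_hash = sha256_of_file("evidence/surveillance.mp4")
copy_hash = sha256_of_file("work/surveillance_copy.mp4")

print("drive matches affidavit:", original_hash == AFFIDAVIT_HASH)
print("working copy is bit-identical:", copy_hash == original_hash)
```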
Watching the grainy footage, you remember why you do this. You were still at university in the mid-2020s when deepfakes went from novelty to big business. Verification firms reported a 10-fold jump in deepfakes between 2022 and 2023, and face-swap attacks surged by more than 700 percent in just six months. By 2024 a deepfake fraud attempt occurred every five minutes. You had friends whose bank accounts were emptied, and your grandparents wired thousands to a virtual-kidnapping scammer after receiving altered photos of your cousin while she traveled through Europe. You entered this profession because you saw how a single fabrication could ruin a life.
Digital Fingerprints
The next step in analyzing the video is to run a provenance check. In 2021 the Coalition for Content Provenance and Authenticity (C2PA) was founded to develop a standard for tracking a file’s history. C2PA Content Credentials work like a passport, collecting stamps as the file moves through the world. If the video has any, you could track its creation and modifications. But adoption of the standard has been slow, and Content Credentials are often stripped as files circulate online. In a 2025 Washington Post test, journalists attached Content Credentials to an AI-generated video, but every major platform where they uploaded it stripped the data.
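One way to run such a check today is with c2patool, the C2PA project’s open-source command-line utility, which prints any Content Credentials manifest embedded in a file. The sketch below simply shells out to it from Python; the tool must be installed separately, and the file name is a placeholder.

```python
# A sketch of a provenance check, assuming the open-source c2patool CLI
# (from the C2PA project) is installed and on PATH. The file name is a
# placeholder; c2patool prints the embedded Content Credentials manifest
# or reports that none is present.
import subprocess

result = subprocess.run(
    ["c2patool", "surveillance_copy.mp4"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```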
Next you open the file’s metadata, though it rarely survives online transfers. The time stamps don’t match the time of the murder. They were reset at some point—all are now listed as midnight—and the device field is blank. The software tag tells you the file was last saved by the kind of common video encoder used by social platforms. Nothing indicates the clip came directly from a surveillance system.
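A metadata dump of this kind can be produced with ffprobe, which ships with FFmpeg. The sketch below uses a placeholder file name and pulls the container and stream tags, where fields such as creation time, encoder and device information live when they survive at all.

```python
# Sketch of a container-metadata dump using ffprobe (part of FFmpeg,
# installed separately). The file name is a placeholder. Time stamps,
# encoder and device tags, when present, appear in the "format" and
# "streams" sections of the JSON output.
import json
import subprocess

out = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", "surveillance_copy.mp4"],
    capture_output=True,
    text=True,
    check=True,
)
info = json.loads(out.stdout)
print(info["format"].get("tags", {}))            # e.g. creation_time, encoder
for stream in info["streams"]:
    print(stream.get("codec_name"), stream.get("tags", {}))
```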
When you look up the public court filings in the homicide case, you learn that the owner of the property with the security camera was slow to respond to the police request. The surveillance system was set to overwrite data every 72 hours, and by the time the police accessed it, the footage was gone. This is what made the video’s anonymous online appearance—with the murder shown from the exact angle of that security camera—a sensation.
The Physics of Deception
You begin the Internet sleuthing that investigators call open-source intelligence, or OSINT. You instruct an AI agent to search for an earlier copy of the video. After eight minutes, it delivers the results: a copy posted two hours before the police download carries a partial Content Credentials record indicating that the recording was made with a phone.
The reason you are finding the C2PA data is that companies such as Truepic and Qualcomm developed ways for phones and cameras to cryptographically sign content at the point of capture. What’s clear now is that the video didn’t come from a security camera.
You watch it again for physics that don’t make sense. The slowed frames pass like a flip-book. You stare at shadows, at the lines of an alley door. Then, at the edge of a wall, light that shouldn’t be there pulses. It’s not a light bulb’s flicker but a rhythmic shimmer. Someone filmed a screen.
The flicker is the sign of two clocks out of sync. A phone camera scans the world line by line, top to bottom, many times each second, whereas a screen refreshes in cycles—60, 90 or 120 times per second. When a phone records a screen, it can capture the shimmer of the screen updating. But this still doesn’t tell you if the recorded screen showed the truth. Someone might have simply recorded the original surveillance monitor to save the footage before it was overwritten. To prove a deepfake, you have to look deeper.
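A simplified way to think about the shimmer is as a beat between those two clocks: the visible band drifts at roughly the difference between the screen’s refresh rate and the nearest whole multiple of the camera’s frame rate. The toy calculation below illustrates the idea; real banding also depends on rolling-shutter readout time and backlight dimming, so this is an illustration, not a detector.

```python
# A simplified model of why filmed screens shimmer: the screen refresh
# and the camera frame rate are unsynchronized clocks, so the pattern
# drifts at their beat frequency. Illustrative numbers only.

def beat_frequency(screen_hz: float, camera_fps: float) -> float:
    """Drift rate (Hz) between the screen refresh and the nearest
    whole multiple of the camera frame rate."""
    k = max(1, round(screen_hz / camera_fps))
    return abs(screen_hz - k * camera_fps)

print(beat_frequency(60.0, 30.0))    # 0.0  -> locked, little visible drift
print(beat_frequency(60.0, 29.97))   # 0.06 -> a band crawling past every ~17 s
print(beat_frequency(120.0, 29.97))  # 0.12
```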
Artifacts of the Fake
You check for watermarks now—invisible statistical patterns inside the image. For instance, SynthID is Google DeepMind’s watermark for Google-made AI content. Your software finds hints of what might be a watermark but nothing certain. Cropping, compression or filming a screen can damage watermarks, leaving only traces, like those of erased words on paper. This doesn’t mean that AI generated the whole scene; it suggests an AI system may have altered the footage before the screen was recorded.
Next you run it through a deepfake detector like Reality Defender. The analysis flags anomalies around the shooter’s face. You break the video apart into stills. You use the InVID-WeVerify plug-in to pull clear frames and do reverse-image searches on the accused son’s face to see if it appeared in another context. Nothing comes up.
On the drive is other evidence, including more recent footage from the same camera. The brickwork lines up with the video. This isn’t a fabricated scene.
You return to the shooter’s face. The alley’s harsh lighting gives the footage a distinct grain. His jacket, his hands and the wall behind him all share that coarse digital noise, but his face doesn’t. It is slightly smoother, as if it came from a cleaner source.
Security cameras give moving objects a distinct blur, and their footage is compressed. The shooter has that blur and blocky quality except for his face. You watch the video again, zoomed in on only the face. The outline of the jaw jitters faintly—two layers are ever so slightly misaligned.
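One crude way to quantify that mismatch is to compare the strength of the fine-grained residual noise inside the face with a same-size patch of wall from the same frame. The sketch below assumes the frame has already been loaded as a grayscale array and that the two crops are picked by hand; the coordinates are placeholders, and a face pasted in from a cleaner source tends to show markedly less residual noise than the surrounding grain.

```python
# A minimal noise-consistency sketch: compare the high-frequency residual
# (pixel minus local mean) inside a hand-picked face crop with a same-size
# background crop from the same frame. Crop coordinates are placeholders.
import numpy as np

def residual_noise(region: np.ndarray) -> float:
    """Standard deviation of the residual after a crude 3x3 mean filter."""
    h, w = region.shape
    padded = np.pad(region.astype(float), 1, mode="edge")
    local_mean = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    return float(np.std(region - local_mean))

# frame = ...  # 2-D grayscale frame, e.g. loaded with an image library
# face = frame[120:180, 200:260]   # hand-picked face crop (placeholder)
# wall = frame[120:180, 40:100]    # same-size background crop (placeholder)
# print(residual_noise(face), residual_noise(wall))
```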
The Final Calculation
You move back to when the shooter appears. He raises the weapon in his left hand. You call the woman. She tells you her son is right-handed and sends you videos of him playing sports as a teenager.
Lastly you go to the alley. The building’s maintenance records list the camera at 12 feet high. You measure its mounting height and downward angle, note where the shooter’s feet and head fall in the frame, and use basic trigonometry to estimate the shooter’s height: three inches taller than the woman’s son.
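The geometry is simple enough to do on the back of an envelope: the angle of depression to the shooter’s feet gives his horizontal distance from the camera, and the angle to the top of his head gives how far below the camera his head sits. The angles in the sketch below are illustrative placeholders, not measurements from the case.

```python
# Back-of-the-envelope height estimate, assuming a level alley floor, a
# camera at a known mounting height, and measured angles of depression to
# the shooter's feet and to the top of his head. All angles here are
# illustrative placeholders.
import math

CAMERA_HEIGHT_FT = 12.0        # from the building's maintenance records
angle_to_feet_deg = 31.0       # depression angle to the feet (placeholder)
angle_to_head_deg = 16.7       # depression angle to the top of the head (placeholder)

# Horizontal distance from the camera to the shooter, from the feet angle.
distance_ft = CAMERA_HEIGHT_FT / math.tan(math.radians(angle_to_feet_deg))

# The head sits (distance * tan(head angle)) below the camera lens.
shooter_height_ft = CAMERA_HEIGHT_FT - distance_ft * math.tan(math.radians(angle_to_head_deg))

print(f"estimated distance: {distance_ft:.1f} ft")   # ~20 ft with these angles
print(f"estimated height:   {shooter_height_ft:.1f} ft")  # ~6.0 ft with these angles
```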
The video makes sense now: it was made by cloning the son’s face, using an AI generator to superimpose it on the shooter and recording the screen with a phone to obscure the generator’s watermark. Cleverly, whoever did this chose a phone that would generate Content Credentials, so viewers would see a cryptographically signed claim that the clip was recorded on that phone and that no edits were declared after capture. By doing this, the video’s maker essentially forged a certificate of authenticity for a lie.
The notarized document you will send to the public defender won’t read like a thriller but like a lab report. In 2030 a “reality notary” is no longer science fiction; it is the person whose services we use to ensure that people and institutions are what they appear to be.
Author: Eric Sullivan. Deni Ellis Béchard.