FBI Warns That Deepfakes Will Be Used Increasingly in Foreign Influence Ops (lawfareblog.com)
53 points by CapitalistCartr on March 22, 2021 | hide | past | favorite | 46 comments


I get that this is juicy for news and journos, but deepfakes are overrated, especially in influence ops.

You don't need fake content or lies to create disinformation. In fact, you want to use the truth as much as possible. Just not the whole truth. Selective editing is an extremely simple technique that has worked for decades. The real casualties of deepfakes aren't political adversaries but young women (something that isn't talked about enough).

edit: here is a really good take on this by grugq: https://medium.com/@thegrugq/cheap-fakes-beat-deep-fakes-b1a...


This is incredibly true. And the thing is, sometimes you don't even need selective editing, so much as framing the footage before someone sees it.

The aftermath of the 2020 U.S. presidential election is replete with examples of footage from election centers and dueling narratives emerging from people interpreting what the staffers were doing. I remember back at the 2004 presidential debates, people were accusing Bush of having a listening device strapped to his back based on a few dark pixels that looked like a bump. And 9/11 footage must have been pored over for millions of man-hours by now, with conspiracists seeing what they want to see.

I'm not sure if even "concrete" evidence like video proof is worth anything anymore. People will choose to believe what they want to. Evidence isn't the problem. Consensus is. Like the reproducibility crisis in science, the communal framework for evaluating news (usually for political or other tribal ends) is broken. It doesn't matter how real evidence is if people are going to reject anything that conflicts with their story, prima facie.

Deepfake footage is bad, but as you say, simpler techniques like selective editing have always existed. And you can provide the purest and most uncut evidence, but if there's no mutually agreed-upon protocol among observers, they will walk away with their own interpretations.


> I get that this is juicy for news and journos. but deepfakes are overrated especially in influence ops.

This warning isn't about what is, but what will be. I don't see how one can be familiar with the recent history of disinformation created using relatively unsophisticated methods, and simultaneously not understand the Pandora's Box being opened by cheap, easy, and perfectly-convincing synthetic content.


We're in an uncanny valley. When deepfakes are good enough to fool people consistently, and cheap and low-skill enough to become ubiquitous, people will stop trusting any video or audio by default, and it becomes futile to waste influence ops on them. For now, good ones are difficult enough to produce that they remain plausible in the right context.

Given that legislative and law enforcement efforts are unlikely to roll back this tide, especially from foreign actors, the better path is forward: more deep fakes, cheaper and easier tools, lower friction methods of vouching for content, yielding less gullible consumers.


Before we had digital cameras, anyone could manipulate images in a darkroom. We did it in high school, it's not hard. The world didn't end with this disinformation, we just labeled the moon landing deniers as conspiracy theorists and moved on with our lives.


I'm not that optimistic; people will just believe video/images provided by their trusted source (like they do now), but the source will have more power over what reality it presents to its followers.


And more research into deepfake generation and detection methods. I'm sure there will be an arms race of sorts between detection and methods to reduce the signatures of deepfakes in generated images.


This comment somewhat contradicts its parent comment. Deep fake detection research will postpone the day nobody trusts video and audio.

I don't know which is better. It's very hard for me to imagine the world where nobody believes such things. Will we continue to have a third of US society believing in an alternate reality? Will people start signing every statement cryptographically, so that nobody can put fake words in their mouths? Will the need to track the reputations of news providers (rather than taking the videos they provide at face value) render spewers of nonsense less influential?


Deepfake detection will not be a force that advances in lockstep with deepfake ubiquity. It doesn't necessarily postpone that day; it will have to coexist with deepfakes and act as an "immune system," if you will. The first successful deepfake that fools us all and proves the warnings correct will certainly ignite fervent research (and maybe policy and law) into mitigation and control of the technology. Deepfakes could certainly be used as misinformation to legitimize an egregious act, leaving everyone dumbfounded for however long that act needs to succeed. Or there could be small-scale, minor alterations of speech that disrupt automated systems meant to scour the internet for such-and-such person talking about such-and-such thing. Deepfaking CEOs, quarterly reviews, etc.

But, alas, yes, even without deepfakes, society is susceptible to reality distortion.


Just imagine all of the conspiracies and "fake news" backed up by somewhat realistic-looking videos. The concept of truth and shared realities was already under attack, so maybe deepfakes will be the final blow in the arms race. Taken to an extreme, this could cause even non-deepfaked video evidence to come under question. That would be unfortunate, as things like body cams and smartphone cameras have given the average person a kind of power they never had before. I guess the pendulum always swings back.


In my experience it seems like you could upload a video where nothing actually happens and if you put the narrative you want in the title people will just run with it.


To me, this means that we vitally need a cryptographic verification method that can track an image as produced by a verified microchip, with a custodial blockchain recording each edit of that image file, and then give all media players the ability to verify it and produce a certificate of authenticity.

I bet there are some security challenges there, but this is the only way I can think of to protect the Republic from purposeful misinformation: differentiating between real and edited/faked/synthesized footage going forward.
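To make the "custodial chain of edits" idea concrete, here is a minimal stdlib-only sketch: a hash chain where each edit record commits to the previous record and to the new content. The record fields and edit actions are made up for illustration; a real system would also need hardware-backed signatures at capture time.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_record(prev_hash: str, action: str, content: bytes) -> dict:
    """One link in the custody chain: ties an edit to everything before it."""
    record = {
        "prev": prev_hash,
        "action": action,             # e.g. "capture", "crop", "color-correct"
        "content_hash": sha256_hex(content),
    }
    record["record_hash"] = sha256_hex(json.dumps(
        {k: record[k] for k in ("prev", "action", "content_hash")},
        sort_keys=True).encode())
    return record

def verify_chain(chain: list, contents: list) -> bool:
    """Recompute every link; a tampered frame or reordered edit breaks it."""
    prev = "0" * 64
    for record, content in zip(chain, contents):
        expected = make_record(prev, record["action"], content)
        if expected["record_hash"] != record["record_hash"]:
            return False
        prev = record["record_hash"]
    return True

# Original capture, then one edit:
raw = b"raw sensor frames"
edited = b"cropped frames"
chain = [make_record("0" * 64, "capture", raw)]
chain.append(make_record(chain[-1]["record_hash"], "crop", edited))

assert verify_chain(chain, [raw, edited])
# Doctoring the footage after the fact is detectable:
assert not verify_chain(chain, [raw, b"doctored frames"])
```

A media player could walk such a chain back to the capture record and refuse a certificate of authenticity for anything that doesn't verify.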


I want something like this, but given the already enormous power cost of Bitcoin to maintain a paltry few GB of data, and the petabytes of video humanity produces every hour, it's not obviously possible to me even with the cleverest hash-tree algorithms imaginable.


I appreciate the concern, but it has to be said that there are lots of blockchain solutions many orders of magnitude more efficient than Bitcoin. There are also crypto projects that already deal with streaming video, for example: https://www.thetatoken.org/ Bitcoin takes up all the air in the room, but it doesn't have the most advanced tech.


You're right, proof of work is not necessary. But I still can't imagine a blockchain that scales to encompass all the media humanity produces. (I'm not saying it's impossible -- math can be surprising.)
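For what it's worth, the usual answer to the scaling worry is that the media itself never goes on-chain: you anchor only a Merkle root, so the on-chain footprint stays at 32 bytes no matter how much data it commits to. A toy sketch, assuming SHA-256 and a simple pairwise tree (chunk contents here are stand-ins):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Pairwise-hash leaf hashes up to a single 32-byte root."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Ten thousand video chunks still commit to one 32-byte root:
chunks = [b"chunk-%d" % i for i in range(10_000)]
root = merkle_root(chunks)
assert len(root) == 32
```

The video stays off-chain wherever it normally lives; the chain only stores roots, and a short Merkle proof ties any single chunk back to its root.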


This is my fear. Currently, deepfakes are fun. We share clips of Donald Trump singing YMCA and everyone has a good laugh. But as you said, we're also a nation that easily falls for conspiracies, everything from QAnon to the moon landings being faked to 9/11 being spearheaded by George W. Bush. Someone mentioned you can be the next multi-billionaire by creating a cryptographic watermark with validation, which at first sounds good. However, the obvious issue is that those who believe in these conspiracy theories won't trust "cryptographic watermarks".

Basically, anything and everything can be faked. And people will trust what confirms their biases.

Unfortunately, I personally see no solution to deepfakes. The worms are out of the can, and we now have to question everything we see, which is a lot to ask of people. Even if governments band together to "ban" deepfakes, tightly regulate the internet to track deepfake origins, or take a more sinister "1984" approach, you still have the obvious issue that deepfakes can be created by those who regulate the masses (our governments).


Cryptographic watermarking and validation will become increasingly necessary for all AV capture and distribution in the years to come. Want to be the next multi-billionaire? Solve this problem at scale.


Deepfake technology will kill the video star.

Movies will be made more and more with this tech to save $$ on casts.

There is already so much noise pretending to be news that few people pay attention anymore to the media.

Just another version of click bait.


Maybe, but maybe not. The only thing that has stopped all our jobs from being replaced by some machine 50 years ago is that humans work cheap, often cheaper than it would cost to service that would-be burger-flipping machine. It might still be cheaper in the future to hire an actor than to hire a technician who can produce an equivalent deepfake, along with the cost of the hardware resources. It was one thing when we were crafting entire worlds out of practical effects vs. CGI, but this is literally CGI on one individual. Even today, CGI isn't always cheaper than practical effects, and big shoots sometimes still use practical effects.


Saving $$ on casts is just the start. It also saves you the cost of locations and probably 100 other people involved in making movies.


The next level of blue screen. I suspect stars of the future will be similar to the "manufactured," business-formed bands of the past.

Sign up a youngster for a small fee. Scan the shit out of his body, voice, etc.

A free actor for life.

The kid gets a $500 lifetime payment.


Why sign a kid when a 3D artist, a psychologist, and a writer, working in tandem with an economist and a publicist, will get you much better results?

If you sign the kid, you still need the artist, the psychologist, the writer, the economist, and the publicist anyway. Why not simply go all the way and cut out any potential name-and-likeness disputes?

OK, if the kid is an athlete, fine, I get that; EA has to do that now. But if I just want to create a bubble-gum pop idol, my incentive is to be greedy and not share with anyone if possible.

AI technologies, with a little help from specialists like psychologists ensuring the product is addictive, make it possible for me to be greedy. It's happening in porn now, and if it's happening in porn, it'll happen everywhere else soon enough.


Because people don't like fake people posing as real people. This may change, but manufacturing a "real" person currently goes over like a lead balloon.


You don't need AI technology to make a successful cartoon.


Not just blue screen: these new AI actors could do things no real human being could. One reason Game of Thrones had only eight seasons is that the crew got bored and the kid actors grew up.

With deepfakes you can have Peter Pan-like actors who never age, or even reverse-age. Everyone will have eight-pack abs and 36D busts (or whatever the audience likes). In fact, there could be A/B testing of how actors should look and behave.

The next obvious step is people like me volunteering to be the "extra" behind the key actor and engaging in a bidding war. All I have to do is provide my body scan, and one of the zombies killed by Rick in The Walking Dead could be me.

Bulbs did not replace candles, they created possibilities that candles could not.


I suspect that within the next 5 years we will see an incredibly sophisticated deepfake of some rising star politician saying something abhorrent. There will be a media firestorm and their career will be ruined. Even if the media issues a retraction and describes how the deepfake was made, I still suspect there would be no recovering from the initial controversy.


More likely it will be used to cast doubt on the reality of a real abhorrent thing that some well-established politician said or did.

In reality it will be both and no one will change because people don’t believe anything from the “other” side anyway.


This makes me think videos will be seen not as a source of truth but as a source of hearsay: "this is exactly what they would say anyway, so who cares if they did?"

Following this line of thought, why not release deepfakes of yourself to provide plausible deniability for your statements? If people believe it - good! You meant to say it. If people decide it's not what you would have said, you didn't say it anyway.


Another possibility is that we have people defending the use of such fake videos using the reasoning "that's what this person is already saying! that's what this person already thinks! this video represents everything we already believe to be true about this person"


Exactly, just as they deny grainy videos today.


I think the opposite, a deep fake will say something abhorrent, but radicalized supporters will agree with what was said and push it as truth. The politician will neither confirm nor deny the deep fake, and the Overton window will shift.


What if discrediting the media is the point? Certain politicians say abhorrent things all of the time and go out of their way to vilify the media. A well-planted deepfake that makes a news cycle before being discredited would give them a permanent excuse to dismiss anything as "fake news". I'm surprised that it didn't happen in 2020.


What can a politician even say to appear abhorrent to his tribe?


My guess is, even official media and government organizations will take advantage of deepfakes if they think it will increase their profits or the penetration of their messaging. A year ago, the government and media teamed up to lie to the public and claim masks weren't effective, "for our own good," so medical personnel could obtain masks without economic competition from the plebeian hordes.

Nobody apologized and it seems the public didn't care either. So we can conclude that truthful information isn't always desirable, especially when it comes to feeding the masses. Anybody with a brain already realizes this and understands that deepfakes can and will be used on the general public, "for our own good."


Remember that if the FBI is warning Americans about this technology, the US government is already using it against other countries.


Indeed, and if we use the Snowden revelations as a guide, it would not be surprising if they have used or will use this against their own citizens.


*in their foreign influence ops

Fixed that for you


Yeah, that's a much more realistic picture of the US State Department's international role


Anyone have thoughts on NFTs in this space as a way to ensure that a credible author signed a video, and therefore that it's a credible video? NFT noob here :)
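For the signing part you don't strictly need an NFT: a plain digital signature over the video's hash already binds an author to that exact content, and an NFT would at most publish the signed hash. A toy stdlib-only sketch, using HMAC as a stand-in for a real public-key signature (a deployed system would use something like Ed25519 so anyone can verify without the secret key; the key and video bytes here are invented):

```python
import hashlib
import hmac

AUTHOR_KEY = b"author-secret-key"   # stand-in; real systems use a private key

def sign_video(video_bytes: bytes) -> str:
    """Sign the hash of the video: the signature is what binds the
    author to this exact sequence of bytes."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(AUTHOR_KEY, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_video(video_bytes), signature)

video = b"frames of the original footage"
sig = sign_video(video)
assert verify_video(video, sig)
assert not verify_video(b"deepfaked frames", sig)   # any edit invalidates it
```

The hard part isn't the crypto; it's key distribution and getting people to check signatures at all, which is where the skepticism elsewhere in this thread comes in.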


I envision a new James Bond series, featuring Max Headroom


Or more realistically, starring Sean Connery.


would love to see Sean Connery impersonating Mike Myers


... by the FBI.


The general public must be protected from this threat through censorship and moderation of information by the technocrats, the state-approved media, and the intelligentsia... maybe "thought police" should also be considered.


Every time I see articles like this I think the same, sarcastically.

I also wonder: who is going to be the first major public figure to claim some evidence of his/her wrongdoing is a deepfake and part of a foreign influence op?


Trump?




