
Meta thinks social media can protect us from deep fakes
Deepfakes are arguably the most dangerous aspect of artificial intelligence. Creating fake photos, audio, and even video is relatively easy these days; see the Morgan Freeman and Tom Cruise deepfakes below, for example.
But while social media has so far mostly served as a mechanism for spreading deepfakes, Instagram head Adam Mosseri thinks it could actually play a key role in exposing their lies…
How deepfakes are made
To date, the main method used to create deepfake videos is an approach called generative adversarial networks (GANs).
One AI model generates fake video clips, which are shown to a second AI model alongside real ones; the second model tries to identify which are fake. Running this process repeatedly trains the first model to produce increasingly convincing fakes.
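To make the idea concrete, here is a rough sketch of that adversarial training loop in PyTorch. The network sizes, data shapes, and optimizer settings are placeholder assumptions; real deepfake systems operate on video frames with far larger models, but the back-and-forth between generator and discriminator is the same.

```python
# Rough sketch of the adversarial loop: a generator learns to produce
# fakes while a discriminator learns to spot them. Sizes are placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # assumed sizes (e.g. a small flattened frame)

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh()
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator to separate real samples from generated ones.
    fake = generator(torch.randn(n, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator into saying "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Repeating train_step over many batches is what pushes the generator
# toward fakes the discriminator can no longer tell apart from real data.
```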
However, diffusion models like the one behind DALL-E 2 are now becoming dominant. These take real footage and then generate countless variations of it. Text prompts can be used to steer the AI model toward the result we want, making these tools easier to use; and the more people use them, the better they get.
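For a sense of how text-prompt-driven generation looks in practice, here is a minimal sketch using the open-source Hugging Face diffusers library (not DALL-E 2 itself). The checkpoint name is an assumption; any compatible Stable Diffusion weights would work.

```python
# Minimal sketch of text-guided image generation with a diffusion model.
# The checkpoint below is an assumption; any compatible weights work.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint name
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on a GPU; drop this line (and float16) for CPU

# The prompt steers the iterative denoising process toward the image we
# describe, with no manual editing of footage required.
image = pipe("a photorealistic portrait of an actor on a film set").images[0]
image.save("generated.png")
```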
Examples of deepfake videos
Here’s a famous example of a Morgan Freeman deepfake, created three years ago when the technology was far less sophisticated than it is today:
Another shows Tom Cruise playing Iron Man:
Brits may also recognize Martin Lewis, known for his financial advice, whose likeness was deepfaked here to promote a cryptocurrency scam:
Meta executive Adam Mosseri argues that social media can actually make things better rather than worse by helping to flag false content – although he points out that social media isn’t perfect in this regard, and that each of us needs to consider the source.
Over the years, our ability to produce photorealistic imagery, both still and moving, has gotten better and better. Jurassic Park blew my mind when I was ten years old, but it was a $63 million movie. Four years later, GoldenEye for the N64 impressed me even more. Looking back on that media now, it seems crude at best. Whether you’re bullish or bearish on the technology, it’s clear that generative AI is producing content that’s difficult to distinguish from recordings of reality, and it’s improving rapidly.
A friend, @Lessing, prompted me to realize, perhaps a decade ago, that any claim should be evaluated not only on its content, but also on the credibility of the person or institution making it. That may have been years ago, but collectively we are only now realizing that it has become more important to consider who is saying something than what they are saying when judging the validity of a statement.
As an online platform, our role is to label AI-generated content as best we can. But some content will inevitably slip through the cracks, and not all misrepresentations will be generated by AI, so we must also provide context about who is sharing it, so that you can judge for yourself how much you trust their content.
It will become increasingly important for viewers and readers to bring a discerning mind when they consume content claiming to be an account or recording of reality. My advice is to always consider who is speaking.
Image: shamuk