• by HPsquared on 5/30/2025, 11:22:58 AM

    There was a brief window where photography and videos became widespread so events could be documented and evidenced. Generative AI is drawing that period to an end, and we're returning to "who said / posted this" and having to trust the source rather than the image/message itself. It's a big regression.

  • by IshKebab on 5/30/2025, 12:11:58 PM

    Interesting question, but this article completely failed to answer it and really went off the rails halfway through.

    Ars answered this much much better:

    https://arstechnica.com/ai/2025/05/ai-video-just-took-a-star...

    > As these tools become more powerful and affordable, skepticism in media will grow. But the question isn't whether we can trust what we see and hear. It's whether we can trust who's showing it to us. In an era where anyone can generate a realistic video of anything for $1.50, the credibility of the source becomes our primary anchor to truth. The medium was never the message—the messenger always was.

  • by dsign on 5/30/2025, 12:13:59 PM

    It's one thing to see Sam Altman peddling his wares; it's another altogether to hear politicians and big-corp executives treating AI as if it were something that should be adopted post-haste in the name of progress. I don't get it.

  • by Applejinx on 5/30/2025, 11:45:10 AM

    One point to bear in mind is, lies have proven more effective in the ABSENCE of evidence. I don't know how many times I've run across the idea of 'guess what, Portland (or New York City, or whatever) is burned to the ground because of the enemies!'

    This gets believed not because there's evidence, but because it's making a statement about enemies that is believed.

    So for whoever finds lies compelling, I don't think it's about evidence or lack of evidence. It's about why they want to believe in those enemies, and evidence just gets in the way.

  • by Elaris on 5/30/2025, 12:27:34 PM

    This got me thinking. Sometimes it feels like a story doesn’t have to be true: as long as it feels right, people believe it. And if it spreads fast and sounds good, it becomes “truth” for many. That’s kind of scary. Now that anyone can easily make something look real and convincing, it’s harder to tell what’s real anymore. Maybe the best thing we can do is slow down a bit, ask more questions, and not trust something just because it fits what we already believe.

  • by psychoslave on 5/30/2025, 1:25:49 PM

    >Once we take the counterfeits for granted, we may begin doubting the veracity of all the information presented to us through media.

    Hmm, that's not totally new. I mean, anyone taking the time to learn how mass media work should already be acquainted with the fact that anything you get from them is either bare propaganda or some eye-catching trigger, void of any information, designed to attract an audience.

    There is no way an independent professional can make a living while keeping their integrity and the will to provide relevant reporting without falling into this or that truism. The audience is already captured by other giant dumping schemes.

    Think "manufacturing consent".

    So the big change that might occur here is in the distribution of how many people believe what gets thrown in their face.

    Also, previously the only thing you could take as reliable information in a publication was that the utterer of a sentence knew the words emitted, or at least had the ability to utter its form. Now you still don't know whether the utterer grasped the sense of the sentence spoken, but you don't even know whether the person could actually utter it at all, let alone whether they were ever aware of the associated concepts and notions.

  • by metalman on 5/30/2025, 12:04:34 PM

    "lies" are always more compelling than the truth. truth = what is

    vs. a whole wide range of "wouldn't it be nice... if" and "can't we just...", plus the massive background of myth, legend, fantasy, dreaming, etc. And so, into this, we have created a mega-capable, machine-rendered virtual surreality... much like the ancient myths and legends where Odysseus sits down to a fantastic feast and nothing is as it seems.

  • by 0xbadcafebee on 5/30/2025, 11:58:04 AM

    When something new is happening (or new information comes to light), and that thing has the potential to do harm, people come out of the woodwork to make doomsday predictions. Usually the doomsday predictions are wrong. A lot of these predictions involve technologies we all take for granted today.

    Like the telephone. People were terrified when they first heard about it. How will I know who's really on the other end? Won't it ruin our lives, making it impossible to leave the house, because people will be calling at all hours? Will it electrocute me? Will it burn down my house? Will evil spirits be attracted to it, and seep out of the receiver? (that was a real concern)

    It turns out we just adapt to technology and then forget we were ever concerned. Sometimes that's not a great thing... but it doesn't bring about doomsday.

  • by keiferski on 5/30/2025, 12:13:23 PM

    Can someone tell me why this idea isn’t workable and wouldn’t solve most deepfake issues?

    All camera and phone manufacturers embed a code in each photo / video they produce.

    All social media channels prioritize content that has these codes, and either block or de-prioritize content without them.

    Result: the internet is filled with a vast amount of AI generated nonsense, but it’s mostly not treated as anything but entertainment. Any real content can be traced back to physical cameras.

    The main issue I see is if the validation code is hacked at the camera level. But that is at least as preventable as, say, preventing printers from counterfeiting money.
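    The scheme above can be sketched as attestation at capture time: the device signs the raw bytes, and platforms re-check the signature before ranking content. Here is a minimal Python sketch using an HMAC as a stand-in for real signing; `DEVICE_KEY`, `sign_capture`, and `verify_capture` are all hypothetical names, and an actual scheme (e.g. C2PA) would use an asymmetric key pair held in the camera's secure element so verifiers never hold the signing secret:

```python
import hashlib
import hmac

# Hypothetical per-device secret. A real provenance system would burn an
# asymmetric key into tamper-resistant hardware instead of sharing a secret.
DEVICE_KEY = b"example-device-key"

def sign_capture(image_bytes: bytes, device_id: str) -> dict:
    """Attach a provenance tag to the image at capture time."""
    digest = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return {"device_id": device_id, "signature": digest}

def verify_capture(image_bytes: bytes, tag: dict) -> bool:
    """A platform re-derives the signature; any edit to the bytes breaks it."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["signature"])

photo = b"raw sensor bytes from capture"
tag = sign_capture(photo, device_id="cam-001")
verify_capture(photo, tag)           # True: untouched capture verifies
verify_capture(photo + b"x", tag)    # False: any alteration fails
```

    The weak point named in the comment is visible here too: anyone who extracts the device key can sign arbitrary pixels, which is why real proposals lean on tamper-resistant hardware.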

  • by heresie-dabord on 5/30/2025, 12:07:59 PM

    From TFA:

        Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square. The reason extraordinarily strange conspiracy theories have spread so widely in recent years may have less to do with the nature of credulity than with the nature of faith. 
    
    The reason why strange and even outright deranged notions have spread so widely is that they have been monetised. It is a Gibberish Economy.

  • by indest on 5/30/2025, 12:05:55 PM

    lies have always been more compelling.

  • by drweevil on 5/30/2025, 2:25:28 PM

    What is a lie, and what is the truth? These are age-old questions, not some recent phenomenon. The Spanish-American War was at least in part precipitated by the infamous "yellow journalism" of the time. Propaganda and disinformation played a large role in the events leading to and including WWII. And what is The Truth? Using photography as an example, lies can easily be told by omission, even without any dark-room chicanery. What is the photographer's subject? What is off-frame? Which photographs did the editor select for publication? What story is not being told?

    If anything, the idea that one can take information as "true" based on trust alone (what does the photograph show, what did the New York Times publish) seems to be a recent aberration. AI will be doing us a favor if it destroys this notion, and encourages people to be more skeptical, and to sharpen their critical thinking skills. Forget about what is "true" or "false." Information may be believed on a provisional basis. But it must "make sense" (a whole subject in itself), and it must be corroborated. If not, it is not actionable. There simply is no silver bullet, AI or no AI. Iain M. Banks's Culture series provides an interesting treatment of this subject, if anyone is interested.

  • by logic_node on 5/30/2025, 12:45:05 PM

    It is unsettling how AI can create lies that are more persuasive than the truth. This truly challenges our ability to differentiate fact from fiction in the digital age.

  • by IanCal on 5/30/2025, 12:20:17 PM

    > OpenAI CEO Sam Altman has voiced worries about the use of AI to influence elections, but he says the threat will go away once “everyone gets used to it.”

    Then he's lying or a complete moron.

    People have been able to fake things for ages, since you can entirely fabricate any text because you can just type it. The same as you can pass on any rumour by speaking it.

    People are fundamentally aware of this. Nobody is confused about whether or not you can make up "X said Y".

    *AND YET* people fall for this stuff all the time. Humans are bad at this and the ways in which we are bad at this is extensively documented.

    The idea that once you can very quickly and cheaply generate fake images that somehow people will treat it with drastically more scepticism than text or talking is insane.

    Frankly the side I see as more likely is what's in the article: just as real reporting is dismissed as fake news, legitimate images will be decried as AI if they don't fit someone's narrative. It's a super easy get-out clause mentally. We see this now with people commenting that someone else's comment simply cannot be from a real person because they used the word "delve", or structured things, or had an em dash. Hank Green has a video I can't find now where people looked at a SpaceX explosion and said it was fake and AI and CGI, because it was filmed well with a drone, so it looks just like fake content.

  • by ImHereToVote on 5/30/2025, 12:18:48 PM

    Lies are already more compelling than the truth. The difference is whether you like rebel lies, or establishment lies.

  • by titouanch on 5/30/2025, 11:54:36 AM

    Gen AI could have us headed towards a Cartesian crisis.

  • by intended on 5/30/2025, 12:12:06 PM

    So this is an actual problem I am considering, and I have an approach. Talking essentially about our inability to know:

    1) if a piece of content is a fact or not.

    2) if the person you are interacting with is a human or a bot.

    I think it's easier if you take the most nihilistic view possible, as opposed to the optimistic or average case:

    1) Everything is content. Information/Facts are simply a privileged version of content.

    2) Assume all participants are bots.

    The benefit is that we reduce the total amount of issues we are dealing with. We don’t focus on the variants of content being shared, or conversation partners, but on the invariant processes, rules and norms we agree upon.

    So we may not be able to agree on the facts - but what we can agree on is that the norms or process were followed.

    The alternative, to hold on to some semblance of (or desire for) the assumption that people are people and the inputs are factual, was possible to an extent in an earlier era. The issue is that at this juncture, our easy BS filters are insufficient, and verification is increasingly computationally, economically, and energetically taxing.

    I’m sure others have had better ideas, but this is the distance I have been able to travel and the journey I can articulate.

    Side note

    There are a few Harvard professors who have written about misinformation, pointing out that the total amount of misinfo consumed isn't that high - essentially, that demand for misinformation is limited. I find that this is true, but sheer quantity isn't the problem with misinfo; it's amplification by trusted sources.

    What GenAI does is different: it makes it easier to produce more content, but it also makes it easier to produce better-quality content.

    Today it’s not an issue of the quantity of misinformation going up, it’s an issue of our processes to figure out BS getting fooled.

    This is all putting pressure on fact finding processes, and largely making facts expensive information products - compared to “content” that looks good enough.

  • by bluebarbet on 5/30/2025, 12:38:56 PM

    The article raised interesting questions but suggested no answers.

    To the extent there's a technical fix to this problem of mass gaslighting, surely it's cryptography.

    Specifically, the domain name system and TLS certificates, functioning on the web-of-trust principle. It's already up and running. It's good enough to lock down money, so it should be enough to suggest whether a video is legit.

    We decide which entities are trustworthy (say: reuters.com, cbc.ca), they vouch for the veracity of all their content, and the rest we assume is fake slop. Done.
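    That vouching step can be sketched as an allow-list of publisher keys. This is a hypothetical toy in Python (HMAC shared secrets standing in for TLS-style certificate chains; `TRUSTED_PUBLISHERS`, `vouch`, and `is_legit` are invented names), but it shows the policy: anything not signed by a trusted entity is treated as slop by default:

```python
import hashlib
import hmac

# Hypothetical shared secrets standing in for each outlet's key material.
# A real deployment would verify certificate chains, not hold raw secrets.
TRUSTED_PUBLISHERS = {
    "reuters.com": b"reuters-signing-key",
    "cbc.ca": b"cbc-signing-key",
}

def vouch(domain: str, video_bytes: bytes) -> str:
    """The outlet signs the content it publishes, vouching for it."""
    key = TRUSTED_PUBLISHERS[domain]
    return hmac.new(key, video_bytes, hashlib.sha256).hexdigest()

def is_legit(domain: str, video_bytes: bytes, signature: str) -> bool:
    """Anything from an unknown or unverifiable source is assumed fake."""
    key = TRUSTED_PUBLISHERS.get(domain)
    if key is None:
        return False  # unknown publisher: treat as slop
    expected = hmac.new(key, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

clip = b"raw video bytes"
sig = vouch("reuters.com", clip)
is_legit("reuters.com", clip, sig)         # True: vouched for
is_legit("randomslop.example", clip, sig)  # False: not on the trust list
```

    The hard part, as with DNS and TLS, is not the crypto but deciding who belongs on the trust list in the first place.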

  • by JimDabell on 5/30/2025, 12:22:15 PM

    The good news is that AI has been shown to be effective at debunking things too:

    > Durably reducing conspiracy beliefs through dialogues with AI

    > Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.

    https://pubmed.ncbi.nlm.nih.gov/39264999/

    A huge part of the problem with disinformation on the Internet is that it takes far more work to debunk a lie than it does to spread one. AI seems to be an opportunity to at least level the playing field. It’s always been easy to spread lies online. Now maybe it will be easy to catch and correct them.

  • by ThinkBeat on 5/30/2025, 12:11:57 PM

    Then the AI in question is qualified to become a politician. With Congress these days it can't get much worse.

  • by ToucanLoucan on 5/30/2025, 11:29:53 AM

    > The concern is valid. But there’s a deeper worry, one that involves the enlargement not of our gullibility but of our cynicism. OpenAI CEO Sam Altman has voiced worries about the use of AI to influence elections, but he says the threat will go away once “everyone gets used to it.”

    > Some experts believe the opposite is true: The risks will grow as we acclimate ourselves to the presence of deepfakes. Once we take the counterfeits for granted, we may begin doubting the veracity of all the information presented to us through media. We may, in the words of the mathematics professor and deepfake authority Noah Giansiracusa, start to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence to one where our bias is to take nothing as evidence.

    It is journalistic malpractice that these viewpoints are presented as though the former has anything to actually say. Of course Altman says it's no big deal, he's selling the fucking things. He is not an engineer, he is not a sociologist, he is not an expert at anything except some vague notion of businessness. Why is his opinion next to an expert's, even setting aside his flagrant and massive bias in the discussion at hand!?

    "The owner of the orphan crushing machine says it'll be fine once we adjust to the sound of the orphans being crushed."

    > “Every expert I spoke with,” reports an Atlantic writer, “said it’s a matter of when, not if, we reach a deepfake inflection point, after which forged videos and audio spreading false information will flood the internet.”

    Depending where you go this is already true. Facebook is absolutely saturated in the shit. I have to constantly mute accounts and "show less like" on BlueSky posts because it's just AI generated allegedly attractive women (I personally prefer the ones that look... well, human, but that's just me). Every online art community either is trying to remove the AI garbage or has just given up and categorized it, asking users uploading it to please tag it so their other users who don't want to see it can mute it, and of course they don't because AI people are lazy.

    Also I'd be remiss to not point out that this is, yet again, something I and many many others predicted back when this shit started getting going properly, and waddaya know.

    That said, to be honest, I'm not that worried about the political angle. The politics of fakery, deep or otherwise, has always meant it's highly believable and consumable for the audience it's intended for because it's basically an amped-up version of political cartoons. Conservatives don't need their "Obama is destroying America!" images to be photorealistic to believe them, they just need them to stroke their confirmation bias. They're fine believing it even if it's flagrantly fake.