2 Comments

Totally agree. Badly engineered software like AI detectors, combined with malicious human behavior, will increase the volume of misinformation and misinterpreted content.

Reading Sam Altman's interview about AGI and misinformation leaves me a bit worried. Hallucinating LLMs, flawed AI-generated code, and human behavior will be the challenges we have to cope with.

"I'm particularly worried that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks." (https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-society-acknowledges/story?id=97897122)


Love this topic. I've wanted to write about the good systems of human verification out there in the world today, and how they could potentially be amplified in a post-AI, media-driven world.

Getty has built a strong business on it. One of Twitter's best innovations, Community Notes, has somehow withstood Elon's housecleaning. Wikipedia's talk pages and public edit history. Bellingcat's citizen journalism. ProPublica's investigative journalism for the public good. Even Reddit's broad use as a search engine.
