
Google's New Research Shows How Generative AI Harms the Internet

Basically, the overabundance of fake AI-made content online is by design.

It seems that in the ongoing conflict between AI zealots and anti-AI activists, Google has decided to play both sides, pushing the AI agenda with Gemini, Vertex, and a slew of other projects while simultaneously warning the public about the dangers of generative artificial intelligence. First, YouTube made it simpler for users to request the removal of AI content simulating their voice and appearance, and now a new report from Google's AI research team has surfaced online, detailing how the technology in question is already harming the internet.

Image credit: Avishek Das/Getty

First spotted by 404 Media, a research paper from Google DeepMind delves into the "misuse" of generative AI, analyzing existing academic sources and 200 news articles to illustrate how artificial intelligence is being employed to make the web a worse place.

According to the report, the most common real-world misuse tactics involve manipulating human likeness (e.g., deepfakes) and falsifying evidence, mostly deployed to influence public opinion, enable scams or fraud, or generate profit. The research also finds that one doesn't need to be Albert Einstein to abuse the system: most cases involve easily accessible generative AI models that require minimal technical expertise.

"The widespread availability, accessibility, and hyperrealism of GenAI outputs across modalities has also enabled new, lower-level forms of misuse that blur the lines between authentic presentation and deception," the report concludes, highlighting that the primary harm of AI is its ability to obscure the distinction between what is real and what is fake online.

Interestingly, despite the report's insistence on labeling these instances as "misuse," it acknowledges that most cases of using AI to flood the internet with machine-generated content, fake or otherwise, are "often neither overtly malicious nor explicitly violate these tools' content policies or terms of services". In layman's terms, such uses are by design, and the AI tools are functioning as intended, a fact that, let's be honest, is pretty obvious even without any scientific papers.

Even more interesting is the fact that both the report and the aforementioned YouTube content policy update identify deepfakes as the most harmful application of generative AI. While it's hard to argue with this (just imagine AI-generated videos of politicians declaring war on each other), Google's focus on warning the public about deepfakes in particular is somewhat puzzling, considering the billions the tech giant itself has already poured into AI research, which, among other things, also makes it easier to create deepfakes.

Do they know something we don't? Should we brace ourselves for a wave of nearly indistinguishable deepfakes causing real-life chaos? Is there any truth to the theory that current-generation artificial intelligence was developed years before the 2022 AI boom and was only released to the public as part of an experiment, which is entering a new stage? Tell us what you think in the comments!

You can read the full 29-page report by clicking this link. Don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Instagram, Twitter, LinkedIn, TikTok, and Reddit, where we share breakdowns, the latest news, awesome artworks, and more.
