Tag: generative ai (Page 2 of 4)

Deezer and Ipsos study: AI fools 97% of listeners

https://no-flux.beaude.net/wp-content/uploads/2025/11/EN-CAROUSEL-1-1-2048x2048-1.png

“Initially, all participants were asked to listen to three tracks and determine whether or not they were fully AI-generated – 97% of the respondents failed. A majority (71%) of the respondents were surprised by these results and more than half (52%) felt uncomfortable by not being able to tell the difference. All key results from the survey can be found below.
Deezer has taken an industry-unique approach towards AI music, championing the rights of artists, while ensuring transparency for music fans – the company is so far the only streaming platform to detect and clearly tag 100% AI-generated content for its users.”

Source : Deezer and Ipsos study: AI fools 97% of listeners

Migros: a biscuit tin adorned with a five-legged reindeer

https://no-flux.beaude.net/wp-content/uploads/2025/10/5WcNIn6KqdGAjZbb_VsXMl.jpg

“Clearly, this detail escaped everyone,” responded Sarah Reusser, spokesperson for Migros. “A five-legged reindeer is faster and can deliver Migros products to our customers even more efficiently.” The spokesperson, however, declined to comment on whether artificial intelligence may have been used to generate the botched image of Santa’s reindeer.
As for what will become of the 10,776 copies of the tin, Migros said it is examining “different options for putting them to good use, whether through a promotion, as a flagship item in an outlet store, or even for a charitable initiative.” It concluded: “Who knows, it may well become a collector’s item, with this manufacturing flaw making it unique.”

Source : Migros: une boîte à biscuits ornée d’un renne à cinq pattes | 24 heures

An Opinionated Guide to Using AI Right Now

“The chart at the top of this post shows what people use AI for today. But I’d bet that in two years, that chart looks completely different. And that isn’t just because AI changed what it can do, but also because users figured out what it should do. So, pick a system and start with something that actually matters to you, like a report you need to write, a problem you’re trying to solve, or a project you have been putting off. Then try something ridiculous just to see what happens. The goal isn’t to become an AI expert. It’s to build intuition about what these systems can and can’t do, because that intuition is what will matter as these tools keep evolving.
The future of AI isn’t just about better models. It’s about people figuring out what to do with them.”

Source : An Opinionated Guide to Using AI Right Now | Ethan Mollick (Wharton School of the University of Pennsylvania)

Deloitte and Anthropic Alliance


“Together, Deloitte and Anthropic help organizations deploy trustworthy AI at scale to improve efficiency, elevate user experiences, and address complex challenges. With Claude by Anthropic now available to 470,000 Deloitte professionals worldwide, we’re setting a new standard for industry-specific AI adoption, innovation, and service.
Deloitte’s holistic approach, anchored in its Trustworthy AI™ (TAI) Framework, integrates seamlessly with Anthropic’s Constitutional AI (CAI) deployment, leveraging Deloitte’s experience in large-scale system integration to help clients develop operational safeguards that manage risk and foster innovation through Anthropic’s class-leading models. Deloitte stands ready to implement Claude in any industry, with more than 5,000 delivery centers, 10,000 strategy and analytics practitioners, and over 800 professionals certified through the first formal training and certification program introduced by any Anthropic alliance.”

Source : Deloitte and Anthropic Alliance | Deloitte US

Deloitte Australia to partially refund $290,000 report filled with suspected AI-generated errors

https://dims.apnews.com/dims4/default/5b5de5d/2147483647/strip/true/crop/2687x1791+0+0/resize/1440x960!/format/webp/quality/90/?url=https%3A%2F%2Fassets.apnews.com%2Fb1%2F05%2F53eba17b75c37b990a2329d99a31%2F9fd4117fdc80492e97d7c5746f639b34

“Deloitte Australia will partially refund the 440,000 Australian dollars ($290,000) paid by the Australian government for a report that was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic research papers. The financial services firm’s report to the Department of Employment and Workplace Relations was originally published on the department’s website in July.
A revised version was published Friday after Chris Rudge, a Sydney University researcher of health and welfare law, said he alerted the media that the report was “full of fabricated references.” Deloitte had reviewed the 237-page report and “confirmed some footnotes and references were incorrect,” the department said in a statement Tuesday. “Deloitte had agreed to repay the final instalment under its contract,” the department said. The amount will be made public after the refund is reimbursed.
Asked to comment on the report’s inaccuracies, Deloitte told The Associated Press in a statement the “matter has been resolved directly with the client.” Deloitte did not respond when asked if the errors were generated by AI.”

Source : Deloitte Australia to partially refund $290,000 report filled with suspected AI-generated errors | AP News

Sora 2 Watermark Removers Flood the Web

https://images.unsplash.com/photo-1684493735679-359868df0e18?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fE9wZW4lMjBBSXxlbnwwfHx8fDE3NTk4NDM5MTd8MA&ixlib=rb-4.1.0&q=80&w=2000

“Tobac showed 404 Media a few horrifying videos to illustrate her point. In one, a child pleads with their parents for bail money. In another, a woman tells the local news she’s going home after trying to vote because her polling place was shut down. In a third, Sam Altman tells a room that he can no longer keep OpenAI afloat because the copyright cases have become too much to handle. All of the videos looked real. None of them have a watermark. “All of these examples have one thing in common,” Tobac said. “They’re attempting to generate AI content for use off Sora 2’s platform on other social media to create mass or targeted confusion, harm, scams, dangerous action, or fear for everyday folk who don’t understand how believable AI can look now in 2025.””

Source : Sora 2 Watermark Removers Flood the Web

18% of news sites and 33% of the tech sites most recommended by Google are AI-generated

Google Discover - Fake news from GenAI media

“Google’s Discover content-recommendation algorithm, the main source of traffic for French news sites, has become a cash machine for ad-monetized sites, which are largely funded by… Google’s own advertising network. To the point that nearly 20% of the 1,000 news sites most recommended by Google Discover, and 33% of the 120 sites most recommended by Google News in its Technology section, are AI-generated.”

Source : 18% des médias et 33% des sites tech les plus recommandés par Google sont générés par IA – Next

Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits

Two examples of interactions users have had with chatbots from the company Character.AI.

“It is simply a terrible harm these defendants and others like them are causing and concealing as a matter of product design, distribution and programming,” the lawsuit states. The suit argues that the concerning interactions experienced by the plaintiffs’ children were not “hallucinations,” a term researchers use to refer to an AI chatbot’s tendency to make things up. “This was ongoing manipulation and abuse, active isolation and encouragement designed to and that did incite anger and violence.” According to the suit, the 17-year-old engaged in self-harm after being encouraged to do so by the bot, which the suit says “convinced him that his family did not love him.”

Source : Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits : NPR

A forensic expert accused of using AI to generate his report on deepfakes


“…in his text, Jeff Hancock also explains that ‘the difficulty in identifying deepfakes stems from the sophisticated technology used to create seamless, realistic reproductions of a person’s appearance and voice.’ To back this up, he relies on a study that ‘showed that even when individuals are informed of the existence of deepfakes, they can struggle to distinguish between real and manipulated content. This difficulty is exacerbated on social media platforms, where deepfakes can spread quickly before being identified and removed.’ Except that this study does not appear to exist.”

Source : Un expert judiciaire accusé d’utiliser une IA pour générer son rapport sur les deepfakes – Next


© 2026 no-Flux
