Tag: deep learning (Page 2 of 11)

Releasing Re-LAION 5B: transparent iteration on LAION-5B with additional safety fixes

“Re-LAION-5B fixes the issues as reported by Stanford Internet Observatory in December 2023 for the original LAION-5B and is available for download in two versions, Re-LAION-5B research and Re-LAION-5B research-safe. The work was completed in partnership with the Internet Watch Foundation (IWF), the Canadian Center for Child Protection (C3P), and Stanford Internet Observatory. For the work, we utilized lists of link and image hashes provided by our partners, as of July 2024. In all, 2236 links were removed after matching with the lists of link and image hashes provided by our partners. These links also subsume 1008 links found by the Stanford Internet Observatory report in Dec 2023. Note: A substantial fraction of these links known to IWF and C3P are most likely dead (as organizations make continual efforts to take the known material down from public web), therefore this number is an upper bound for links leading to potential CSAM. Total number of text-link to images pairs in Re-LAION-5B: 5.5 B (5,526,641,167)”

Source: Releasing Re-LAION 5B: transparent iteration on LAION-5B with additional safety fixes | LAION

Is generative AI a bubble? Goldman Sachs wades into the debate

https://no-flux.beaude.net/wp-content/uploads/2024/09/braedon-mcleod-unsplash.jpg

“While other Goldman Sachs analysts remain optimistic about AI’s ability to generate returns on investment once its most effective applications (“killer apps”) are found, Daron Acemoglu urges against assessing the technology’s costs from a purely financial standpoint: “GDP is not everything.” Without declaring himself specifically worried about the profusion of deepfakes, the economist notes that generative AI fuels disinformation and can likewise be used maliciously. And “a trillion-dollar investment in deepfakes would add a trillion dollars to GDP, but I don’t think most people would be happy about that or benefit from it.””

Source: Is generative AI a bubble? Goldman Sachs wades into the debate – Next

Generative AI: which models are really open?

“In generative AI, as in other digital domains, the word “open” can attract with its promises of transparency, traceability, security, or potential reuse. While it is widely used as a marketing term, the new European AI Act provides exemptions for “open” models. Dutch researchers have ranked 40 text-generation models and six image-generation models claiming to be “open” by their actual degree of openness.”

Source: Generative AI: which models are really open? – Next

OpenAI’s News Corp deal licenses content from WSJ, New York Post, and more

https://no-flux.beaude.net/wp-content/uploads/2024/05/STK149_AI_02.jpg

“OpenAI has struck a deal with News Corp, the media company that owns The Wall Street Journal, the New York Post, The Daily Telegraph, and others. As reported by The Wall Street Journal, OpenAI’s deal with News Corp could be worth over $250 million in the next five years “in the form of cash and credits for use of OpenAI technology.””

Source: OpenAI’s News Corp deal licenses content from WSJ, New York Post, and more – The Verge

Spotify’s AI Voice Translation Pilot Means Your Favorite Podcasters Might Be Heard in Your Native Language

“This Spotify-developed tool leverages the latest innovations—one of which is OpenAI’s newly released voice generation technology—to match the original speaker’s style, making for a more authentic listening experience that sounds more personal and natural than traditional dubbing. A podcast episode originally recorded in English can now be available in other languages while keeping the speaker’s distinctive speech characteristics.”

Source: Spotify’s AI Voice Translation Pilot Means Your Favorite Podcasters Might Be Heard in Your Native Language — Spotify

Do Foundation Model Providers Comply with the EU AI Act? – Stanford CRFM

https://no-flux.beaude.net/wp-content/uploads/2023/06/results.png

“We find that foundation model providers unevenly comply with the stated requirements of the draft EU AI Act. Enacting and enforcing the EU AI Act will bring about significant positive change in the foundation model ecosystem. Foundation model providers’ compliance with requirements regarding copyright, energy, risk, and evaluation is especially poor, indicating areas where model providers can improve. Our assessment shows sharp divides along the boundary of open vs. closed releases: we believe that all providers can feasibly improve their conduct, independent of where they fall along this spectrum. Overall, our analysis speaks to a broader trend of waning transparency: providers should take action to collectively set industry standards that improve transparency, and policymakers should take action to ensure adequate transparency underlies this general-purpose technology.”

Source: Stanford CRFM

Sam Altman, ChatGPT Creator and OpenAI CEO, Urges Senate for AI Regulation

“Some of the toughest questions and comments toward Mr. Altman came from Dr. Marcus, who noted OpenAI hasn’t been transparent about the data it uses to develop its systems. He expressed doubt in Mr. Altman’s prediction that new jobs will replace those killed off by A.I. “We have unprecedented opportunities here but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” Dr. Marcus said. Tech companies have argued that Congress should be careful with any broad rules that lump different kinds of A.I. together. In Tuesday’s hearing, Ms. Montgomery of IBM called for an A.I. law that is similar to Europe’s proposed regulations, which outline various levels of risk. She called for rules that focus on specific uses, not regulating the technology itself.”

Source: Sam Altman, ChatGPT Creator and OpenAI CEO, Urges Senate for AI Regulation – The New York Times

The staggering hypocrisy of tech giants on the regulation of artificial intelligence

https://no-flux.beaude.net/wp-content/uploads/2023/05/58b91be_1490690420.jpg

“With a smile on his lips, Sam Altman must have come home very satisfied from his trip to Washington. One can picture the head of OpenAI, maker of ChatGPT, high-fiving his colleagues back in San Francisco. Mission accomplished, no doubt beyond his expectations, after spending three hours Tuesday evening before a congressional committee. Not only did Sam Altman charm the senators; he also cast himself as the noble champion of regulating artificial intelligence (AI). The height of hypocrisy, in our view. The game was far too easy for the head of OpenAI. This was far, very far, from the tough grillings that the heads of other tech giants have faced in recent years over the harmful effects of social networks or over data leaks. No, Sam Altman was able to calmly lay out the advantages and drawbacks of AI and once again call for its regulation.”

Source: The staggering hypocrisy of tech giants on the regulation of artificial intelligence – Le Temps

Google: “We Have No Moat, And Neither Does OpenAI”

https://no-flux.beaude.net/wp-content/uploads/2023/05/https3A2F2Fsubstack-post-media.s3.amazonaws.com2Fpublic2Fimages2F241fe3ef-3919-4a63-9c68-9e2e77cc2fc0_1366x588.webp

“At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given. A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other. Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

Source: Google: “We Have No Moat, And Neither Does OpenAI”

© 2025 no-Flux
