AI has better ‘bedside manner’ than some doctors, study finds

Doctor & Patient ChatGPT

“ChatGPT appears to have a better ‘bedside manner’ than some doctors – at least when their written advice is rated for quality and empathy, a study has shown. The findings highlight the potential for AI assistants to play a role in medicine, according to the authors of the work, who suggest such agents could help draft doctors’ communications with patients. “The opportunities for improving healthcare with AI are massive,” said Dr John Ayers, of the University of California San Diego. However, others noted that the findings do not mean ChatGPT is actually a better doctor and cautioned against delegating clinical responsibility given that the chatbot has a tendency to produce “facts” that are untrue.”

Source : AI has better ‘bedside manner’ than some doctors, study finds | Artificial intelligence (AI) | The Guardian

Google « We Have No Moat, And Neither Does OpenAI »

https://no-flux.beaude.net/wp-content/uploads/2023/05/https3A2F2Fsubstack-post-media.s3.amazonaws.com2Fpublic2Fimages2F241fe3ef-3919-4a63-9c68-9e2e77cc2fc0_1366x588.webp

“At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given. A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other. Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

Source : Google « We Have No Moat, And Neither Does OpenAI »
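
A minimal sketch, not taken from the memo itself, of the kind of low-cost tinkering it describes: loading a LLaMA-style checkpoint with 8-bit quantization and attaching LoRA adapters so that only a small fraction of the weights is trained. It assumes the Hugging Face transformers, peft and bitsandbytes libraries; the checkpoint name and hyperparameters are illustrative placeholders.

```python
# Minimal sketch, assuming transformers + peft + bitsandbytes are installed.
# The checkpoint name and hyperparameters are illustrative, not from the memo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "huggyllama/llama-7b"  # placeholder LLaMA-style checkpoint

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    load_in_8bit=True,   # quantize weights to 8-bit to fit consumer hardware
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adapt the attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Training only the adapter weights is what brings instruction tuning within reach of “one person, an evening, and a beefy laptop”.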

How Europe wants to regulate artificial intelligence very quickly

https://no-flux.beaude.net/wp-content/uploads/2023/05/f67e068_2022-03-23t101753z-1994423264-rc2a8t9ais9g-rtrmadp-3-ukraine-crisis-eu-stateaid.jpg

“Their designers will notably have to prevent the creation of illegal content, prevent summaries of copyrighted data, and refrain from training algorithms on protected content. OpenAI, the publisher of ChatGPT, as well as its competitors, will also have to assess and limit risks and register in the EU database.
New bans have been decided for other AI systems: no emotion-recognition systems, and no harvesting of biometric data from social media or video surveillance to build facial-recognition databases. The fields of application are therefore very broad.”

Source : How Europe wants to regulate artificial intelligence very quickly – Le Temps

30 years of a free and open Web | CERN

“Exactly 30 years ago, on 30 April 1993, CERN made an important announcement. Walter Hoogland and Helmut Weber, respectively the Director of Research and Director of Administration at the time, decided to publicly release the tool that Tim Berners-Lee had first proposed in 1989 to allow scientists and institutes working on CERN data all over the globe to share information accurately and quickly. Little did they know how much it would change the world. On this day in 1993, CERN released the World Wide Web to the public. Now, it is an integral feature of our daily lives: according to the International Telecommunications Union, more than 5 billion people, two thirds of the worldwide population, rely on the internet regularly for research, industry, communications and entertainment. “Most people would agree that the public release was the best thing we could have done, and that it was the source of the success of the World Wide Web,” says Walter Hoogland, co-signatory of the document that proclaimed the Web’s release, “apart from, of course, the World Wide Web itself!””

Source : 30 years of a free and open Web | CERN

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead

Geoffrey Hinton, wearing a dark sweater.

“Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop”

Source : ‘The Godfather of AI’ Quits Google and Warns of Danger Ahead – The New York Times

Yann Le Cun, ‘godfather of AI’: ‘Artificial intelligence may lead to a new Age of Enlightenment’

https://no-flux.beaude.net/wp-content/uploads/2023/04/V4VMZ3OSMRFBTIISSODVQUZWQQ.jpg

“‘Some people have spoken in overly excessive terms about the possible dangers of AI systems, going as far as the ‘destruction of humanity’. But AI, as an amplifier of human intelligence, may lead to a kind of new Renaissance, a new Age of Enlightenment with an acceleration of scientific progress, and perhaps of social progress. That is frightening, like any technology that risks destabilizing and changing society.
‘That was the case with the printing press. The Catholic Church said it would destroy society, but society rather improved. It did, of course, give rise to the Protestant movement and to one or two centuries of religious wars in Europe, but it also enabled the rise of the Enlightenment, of philosophy, rationalism, science, democracy, the American Revolution and the French Revolution… None of that would have happened without the printing press. At the same time, in the fifteenth century, the Ottoman Empire banned the use of the printing press. They were too afraid of a possible destabilization of society and religion. As a consequence, the Ottoman Empire fell 250 years behind in scientific and social progress, which contributed greatly to its decline, whereas in the Middle Ages the Ottoman Empire had been dominant in science. So there is a risk, certainly in Europe but also in some other parts of the world, of facing a new decline if we are too timid about deploying artificial intelligence.’”

Source : Yann Le Cun, ‘godfather of AI’: ‘Artificial intelligence may lead to a new Age of Enlightenment’ – Libération

Are universities too slow to cope with Generative AI?

https://no-flux.beaude.net/wp-content/uploads/2023/04/AI-Policy-LSE-Impact.png

“Part of the problem is that even a singular system like ChatGPT encompasses a dizzying array of use cases for academics, students and administrators that are still in the process of being discovered. Its underlying capacities are expanding at a seemingly faster rate than universities are able to cope with, evidenced in the launch of GPT-4 (and the hugely significant ChatGPT plug in architecture), all while universities are still grappling with GPT-3.5. Furthermore, generative AI is a broader category than ChatGPT with images, videos, code, music and voice likely to hit mainstream awareness with the same force over the coming months and years.
In what Filip Vostal and I have described as the Accelerated Academy, the pace of working life increases (albeit unevenly), but policymaking still moves too slowly to cope. In siloed and centralised universities there is a recurrent problem of a distance from practice, where policies are formulated and procedures developed with too little awareness of on the ground realities. When the use cases of generative AI and the problems it generates are being discovered on a daily basis, we urgently need mechanisms to identify and filter these issues from across the university in order to respond in a way which escapes the established time horizons of the teaching and learning bureaucracy”

Source : Are universities too slow to cope with Generative AI? | Impact of Social Sciences

The head of a cryptocurrency platform ordered to pay more than 3 billion euros

“It is a historically large penalty for a cryptocurrency scam: Cornelius Johannes Steynberg, head of the Mirror Trading International (MTI) platform, was ordered on Thursday 27 April to pay more than 3.4 billion dollars (3 billion euros), announced the Commodity Futures Trading Commission (CFTC), the US financial regulator. Placed in liquidation in July 2021, MTI guaranteed investors stratospheric potential returns of more than 100% per year, thanks to an algorithm that supposedly traded on the foreign-exchange market and which turned out to be imaginary. According to the CFTC, MTI and Mr Steynberg, a South African national, are alleged to have accepted more than 1.7 billion dollars in deposits in the form of bitcoin, part of which came from some 23,000 US residents. His sentence, handed down by a judge of a federal court in western Texas, requires him to reimburse all depositors and to pay a fine of an equivalent amount to the CFTC.”

Source : The head of a cryptocurrency platform ordered to pay more than 3 billion euros

OpenAI’s CEO Says the Age of Giant AI Models Is Already Over

https://no-flux.beaude.net/wp-content/uploads/2023/04/Sam-Altman-OpenAI-MIT-Business-1246870629.jpg

“Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.”

Source : OpenAI’s CEO Says the Age of Giant AI Models Is Already Over | WIRED
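
To make the “diminishing returns” point concrete, here is a minimal sketch using the published Chinchilla scaling-law fit (Hoffmann et al., 2022) rather than OpenAI’s own, unpublished GPT-4 estimates; the constants below are the approximate published values.

```python
# Minimal sketch of diminishing returns from model size alone, using the
# approximate Chinchilla parametric fit (Hoffmann et al., 2022).
def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params parameters and n_tokens training tokens."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

tokens = 1.4e12  # hold the training data fixed (roughly Chinchilla's budget)
for n_params in (7e9, 70e9, 700e9):
    print(f"{n_params:.0e} params -> predicted loss {predicted_loss(n_params, tokens):.2f}")
# Each 10x increase in parameter count buys a smaller absolute loss reduction,
# which is the sense in which scaling model size alone shows diminishing returns.
```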
