“I want to stress again that EA is a very serious and intelligent movement promoted by very serious and intelligent people because, to the untrained eye, it can sometimes look like a cult of unhinged narcissists. That Nauru project, for example? That wasn’t the only weird idea the folk at FTX had dreamed up in the name of effective altruism. According to the court filings, the FTX Foundation, the non-profit arm of FTX, had authorised a $300,000 (£230,000) grant to an individual to “write a book about how to figure out what humans’ utility function is (are)”. The foundation also made a $400,000 grant “to an entity that posted animated videos on YouTube related to ‘rationalist and [effective altruism] material’, including videos on ‘grabby aliens’”.
So there you go. Some of the best minds of our generation (or so they’d have you believe) are busying themselves with strategies on grabby aliens and Pacific island bunkers. Is this effective? Is this altruism? I can’t tell you for sure what the future of effective altruism is, but the road to hell is paved with good intentions. ”

“Eric Ghysels, an economics professor at the University of North Carolina at Chapel Hill, noted that while an AI can be speedier than human investors moment-to-moment, it’s sluggish to adapt to “paradigm-shifting events” like the war in Ukraine — or maybe even the rise of AI. Meaning, in his opinion, an AI can’t beat human investors over time. “Maybe one day it will, but for now AI is limited to plagiarizing history,” Ghysels told the WSJ.”
Source : AI Is Doing a Terrible Job Trading Stocks in the Real World
“Imagine a game in which you could have intelligent, unscripted and dynamic conversations with non-playable characters (NPCs) with persistent personalities that evolve over time and accurate facial animations and expressions, all in your native tongue.”
“Peter Thiel says he’s going to be cryogenically preserved, though he isn’t sure if the tech works. He told independent journalist Bari Weiss that it’s “the sort of thing we’re supposed to try to do.” Thiel, worth $8.31 billion, regularly expresses interest in anti-aging and curing chronic diseases.”
Source : Peter Thiel Says He’ll Be Frozen After Death, but Doubts It Works

“At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given. A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other. Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”
Source : Google “We Have No Moat, And Neither Does OpenAI”
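The leaked memo’s claim that “anyone can tinker” rests largely on quantization. As a back-of-the-envelope sketch (mine, not from the article), weight memory is roughly parameter count times bytes per weight, which shows why a 7B-parameter LLaMA-class model shrinks from GPU territory to laptop territory when quantized from 16-bit to 4-bit weights:

```python
def model_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight-only memory footprint in gigabytes
    (ignores activations, KV cache, and runtime overhead)."""
    return n_params * bits_per_weight / 8 / 1e9

llama_7b = 7e9  # parameter count of the smallest LLaMA model

print(model_memory_gb(llama_7b, 16))  # fp16: ~14 GB of weights
print(model_memory_gb(llama_7b, 4))   # 4-bit quantized: ~3.5 GB
```

The real memory savings are slightly smaller than this estimate, since quantization schemes store extra per-block scale factors, but the order of magnitude is what made “one person, an evening, and a beefy laptop” plausible.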

“Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop.”
Source : ‘The Godfather of AI’ Quits Google and Warns of Danger Ahead – The New York Times

““Some people have spoken in overly excessive terms about the possible dangers of AI systems, up to and including the ‘destruction of humanity’. But AI as an amplifier of human intelligence may lead to a kind of new renaissance, a new Age of Enlightenment with an acceleration of scientific progress, perhaps of social progress. That is frightening, like any technology that risks destabilizing and changing society.
“It was the same with the printing press. The Catholic Church said it would destroy society, but society rather improved. The press did of course give rise to the Protestant movement and to a century or two of religious wars in Europe, but it also enabled the flourishing of the Enlightenment, of philosophy, rationalism, science, democracy, the American Revolution and the French Revolution… None of that would have happened without the printing press. In the same era, in the fifteenth century, the Ottoman Empire banned the use of the printing press. They were too afraid of possibly destabilizing society and religion. The consequence is that the Ottoman Empire fell 250 years behind in scientific and social progress, which contributed greatly to its decline — even though in the Middle Ages the Ottoman Empire had been dominant in science. So we run the risk, certainly in Europe but also in other parts of the world, of facing a new decline if we are too timid about deploying artificial intelligence.””

“Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.”
Source : OpenAI’s CEO Says the Age of Giant AI Models Is Already Over | WIRED

“Disinformation, polarization of opinions, filter bubbles, harassment, surveillance, and so on. Social networks are much talked about, and often for the wrong reasons. Since its takeover by Elon Musk in October 2022, Twitter has been plunged into operational chaos. Facebook, for its part, is regularly criticized for having neglected issues of disinformation, privacy, and the spread of hateful content. Not to mention TikTok, accused of complicity in espionage on behalf of China. We asked six witnesses, journalists and researchers specializing in digital media, what the ideal social network might look like.”
Source : À quoi ressemblerait votre réseau social idéal ? | la revue des médias

“Human beings are complex. Even though our neocortex is highly developed, we remain animals, with an ancient part of the brain and instincts for survival, reproduction… Our higher aspirations and philosophical questions are often in conflict with our drives and desires, which can push us to lie, to steal, to wish harm on others. Most people struggle to separate these two things. They think one is not possible without the other. But the kind of intelligent machine we want to create will not be subject to these drives, and it will not be able to develop them. AIs cannot pose an existential risk to us, want to destroy us, or take control of the world. The real danger of this technology is that it be used for bad ends: disinformation, abuse of power, and the like. That risk, of course, is real.” – Jeff Hawkins, formerly of Palm, now at Numenta
Source : Progrès technologiques: «Les machines n’ont pas d’intelligence. Mais elles en auront» | 24 heures
