Tag: deep learning (Page 8 of 12)

“Google and DeepMind agreed to strictly delimit the scope of the optimisation algorithms in order to avoid an operational incident. ‘Our operators are always in control and can choose to exit AI control mode at any time. In these scenarios, the control system will transfer from AI control to the on-site rules and heuristics that define the automation industry today,’ the company explains. By taking this route, Google has cut its electricity bill: on average, the company says it has achieved energy savings of 30%, even though the mechanism has only been deployed for a few months.”

Source : IA : Google confie les clés du refroidissement de ses data centers à ses algorithmes – Tech – Numerama


“OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20-35 minutes of both games”.

Source : The International 2018: Results

“We don’t just want this to be an academically interesting result – we want it to be used in real treatment. So our paper also takes on one of the key barriers for AI in clinical practice: the “black box” problem. For most AI systems, it’s very hard to understand exactly why they make a recommendation. That’s a huge issue for clinicians and patients who need to understand the system’s reasoning, not just its output – the why as well as the what.
Our system takes a novel approach to this problem, combining two different neural networks with an easily interpretable representation between them. The first neural network, known as the segmentation network, analyses the OCT scan to provide a map of the different types of eye tissue and the features of disease it sees, such as haemorrhages, lesions, irregular fluid or other symptoms of eye disease. This map allows eyecare professionals to gain insight into the system’s “thinking.” The second network, known as the classification network, analyses this map to present clinicians with diagnoses and a referral recommendation. Crucially, the network expresses this recommendation as a percentage, allowing clinicians to assess the system’s confidence in its analysis”.

Source : A major milestone for the treatment of eye disease | DeepMind
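The two-stage pipeline described above — a segmentation network that produces an interpretable tissue map, followed by a classification network that turns that map into a referral recommendation with a confidence percentage — can be sketched as follows. This is a toy illustration only: the function names, the four tissue classes, the four referral categories, and the random stand-in "networks" are assumptions for the sketch, not DeepMind's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def segmentation_net(oct_scan):
    """Stand-in for the segmentation network: assigns each pixel of the
    OCT scan one of a few tissue/pathology classes, yielding the
    human-inspectable intermediate map."""
    n_classes = 4  # e.g. healthy tissue, fluid, haemorrhage, lesion (illustrative)
    logits = rng.normal(size=oct_scan.shape + (n_classes,))
    return logits.argmax(axis=-1)  # one class label per pixel

def classification_net(tissue_map):
    """Stand-in for the classification network: reads the tissue map and
    outputs referral probabilities, i.e. the 'percentage' confidence the
    quote mentions."""
    # Toy features: fraction of the map occupied by each tissue class.
    feats = np.bincount(tissue_map.ravel(), minlength=4) / tissue_map.size
    logits = feats * 10.0
    p = np.exp(logits) / np.exp(logits).sum()  # softmax over referral options
    return dict(zip(["observe", "routine", "semi-urgent", "urgent"], p))

scan = rng.normal(size=(64, 64))        # fake 2-D OCT slice
tissue_map = segmentation_net(scan)     # interpretable intermediate output
recommendation = classification_net(tissue_map)
```

The key design point is the explicit intermediate representation: because the tissue map sits between the two networks, a clinician can inspect *what* the system saw before deciding whether to trust *why* it recommends a referral.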

Four video still images of Tucker Carlson speaking

Four video still images that mirror the original Tucker Carlson video. The face on the speaker appears to be that of actor Nicolas Cage.

“Lyu says a skilled forger could get around his eye-blinking tool simply by collecting images that show a person blinking. But he adds that his team has developed an even more effective technique, but says he’s keeping it secret for the moment. “I’d rather hold off at least for a little bit,” Lyu says. “We have a little advantage over the forgers right now, and we want to keep that advantage.””

Source : The Defense Department has produced the first tools for catching deepfakes – MIT Technology Review

“The problem is distinct from AI’s reproducibility issue, in which researchers cannot reproduce each other’s results because of flawed experimental and publication practices. It also differs from machine learning’s ‘black box’ or ‘interpretability’ problem — the difficulty of explaining how a particular AI arrived at its conclusions. As Rahimi puts it, ‘I’m trying to draw a distinction between a machine learning system that’s a black box and an entire field that’s become a black box.’”

Source : Le deep learning est-il autre chose que de « l’alchimie » ? | InternetActu.net

“DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform. If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.”

Source : Media Forensics

“People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally “muting” all other voices and sounds. Known as the cocktail party effect, this capability comes natural to us humans. However, automatic speech separation — separating an audio signal into its individual speech sources — while a well-studied problem, remains a significant challenge for computers. In “Looking to Listen at the Cocktail Party”, we present a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise.”

Source : Research Blog: Looking to Listen: Audio-Visual Speech Separation
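The core idea behind this kind of speech separation — apply a time-frequency mask to the mixture's spectrogram to keep only the target speaker's energy — can be sketched with synthetic signals. In Google's model the mask is predicted by a neural network from joint audio-visual features; here, as a simplifying assumption, an *ideal* binary mask computed from oracle knowledge of the sources stands in for that network, purely to illustrate the masking principle.

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Naive short-time Fourier transform with a Hann window."""
    w = np.hanning(win)
    return np.array([np.fft.rfft(w * x[i:i + win])
                     for i in range(0, len(x) - win + 1, hop)])

# Two synthetic "speakers": pure tones at different frequencies.
t = np.arange(8000) / 8000.0
s1 = np.sin(2 * np.pi * 440 * t)   # target speaker
s2 = np.sin(2 * np.pi * 1760 * t)  # interfering speaker
mix = s1 + s2

S1, S2, M = stft(s1), stft(s2), stft(mix)

# Ideal binary mask: keep time-frequency bins where the target dominates.
mask = (np.abs(S1) > np.abs(S2)).astype(float)
est = M * mask  # estimated target spectrogram

# Masking should bring the estimate far closer to the target than the raw mix is.
err_mix = np.abs(M - S1).sum()
err_est = np.abs(est - S1).sum()
```

Replacing the oracle mask with one predicted from the mixture (plus, in the audio-visual case, the speaker's face) is exactly what turns this sketch into a learned separation model.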

“The most basic problem is that researchers often don’t share their source code. At the AAAI meeting, Odd Erik Gundersen, a computer scientist at the Norwegian University of Science and Technology in Trondheim, reported the results of a survey of 400 algorithms presented in papers at two top AI conferences in the past few years. He found that only 6% of the presenters shared the algorithm’s code. Only a third shared the data they tested their algorithms on, and just half shared “pseudocode” — a limited summary of an algorithm. (In many cases, code is also absent from AI papers published in journals, including Science and Nature.)”

Source : Missing data hinder replication of artificial intelligence studies | Science | AAAS


© 2025 no-Flux
