“Google and DeepMind have agreed to strictly delimit the scope of the optimisation algorithms in order to avoid an operational incident. “Our operators are always in control and can choose to exit AI control mode at any time. In these scenarios, the control system will transfer from AI control to the on-site rules and heuristics that define the automation industry today,” the company explains. By taking this route, Google has cut its electricity bill: on average, the company says it has achieved energy savings of 30%, even though the mechanism has only been deployed for a few months”.
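To make this handover concrete, here is a minimal, purely illustrative sketch (every name, setpoint, and safety limit below is invented; this is not DeepMind's implementation): the AI's recommendation is applied only while operators keep AI mode enabled and the value stays inside the site's safety envelope; otherwise the conventional rule-based setpoint is used.

```python
# Hypothetical illustration of the handover described above: the AI recommends
# a cooling setpoint, but the plant only follows it while operators keep AI
# mode enabled and the recommendation stays inside the site's safety envelope.
def choose_setpoint(ai_recommendation, rule_based_setpoint,
                    ai_mode_enabled, safety_min=18.0, safety_max=27.0):
    """Return the cooling setpoint (degrees C) actually sent to the plant."""
    if ai_mode_enabled and safety_min <= ai_recommendation <= safety_max:
        return ai_recommendation        # AI control mode
    return rule_based_setpoint          # fall back to on-site rules/heuristics

# Example: the operator has exited AI mode, so the rule-based value is applied.
print(choose_setpoint(ai_recommendation=19.5, rule_based_setpoint=22.0,
                      ai_mode_enabled=False))   # -> 22.0
```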
“OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20-35 minutes of both games”.
“We don’t just want this to be an academically interesting result – we want it to be used in real treatment. So our paper also takes on one of the key barriers for AI in clinical practice: the “black box” problem. For most AI systems, it’s very hard to understand exactly why they make a recommendation. That’s a huge issue for clinicians and patients who need to understand the system’s reasoning, not just its output – the why as well as the what. Our system takes a novel approach to this problem, combining two different neural networks with an easily interpretable representation between them. The first neural network, known as the segmentation network, analyses the OCT scan to provide a map of the different types of eye tissue and the features of disease it sees, such as haemorrhages, lesions, irregular fluid or other symptoms of eye disease. This map allows eyecare professionals to gain insight into the system’s “thinking.” The second network, known as the classification network, analyses this map to present clinicians with diagnoses and a referral recommendation. Crucially, the network expresses this recommendation as a percentage, allowing clinicians to assess the system’s confidence in its analysis”.
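The quote above describes a two-stage architecture. The sketch below illustrates only that structure (layer sizes, class counts, and tensor shapes are hypothetical, not DeepMind's published model): a segmentation network produces an interpretable per-voxel tissue map, and a separate classification network turns that map into referral probabilities that can be read as confidence percentages.

```python
import torch
import torch.nn as nn

class SegmentationNet(nn.Module):
    """Stage 1: turns a raw OCT volume into an interpretable tissue/pathology map."""
    def __init__(self, in_channels=1, n_tissue_classes=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, n_tissue_classes, kernel_size=1),
        )

    def forward(self, scan):                    # scan: (B, 1, D, H, W)
        return self.net(scan).softmax(dim=1)    # per-voxel class probabilities

class ClassificationNet(nn.Module):
    """Stage 2: maps the tissue map to referral-decision probabilities."""
    def __init__(self, n_tissue_classes=15, n_referral_classes=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(n_tissue_classes, n_referral_classes)

    def forward(self, tissue_map):
        feats = self.pool(tissue_map).flatten(1)
        return self.head(feats).softmax(dim=1)  # confidence per recommendation

scan = torch.randn(1, 1, 16, 64, 64)            # dummy OCT scan
tissue_map = SegmentationNet()(scan)            # interpretable intermediate map
referral = ClassificationNet()(tissue_map)      # e.g. urgent / routine referral
print([f"{p:.0%}" for p in referral[0].tolist()])
```

The key design point the quote emphasises is the intermediate map: because the classifier consumes the tissue map rather than raw pixels, a clinician can inspect what the system "saw" before trusting its recommendation.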
“Lyu says a skilled forger could get around his eye-blinking tool simply by collecting images that show a person blinking. But he adds that his team has developed an even more effective technique, which he’s keeping secret for the moment. “I’d rather hold off at least for a little bit,” Lyu says. “We have a little advantage over the forgers right now, and we want to keep that advantage.””
“Creating slow-motion footage is all about capturing a large number of frames per second. If you don’t record enough, it becomes choppy and unwatchable as soon as you slow down your video — unless, that is, you use artificial intelligence to imagine the extra frames”.
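As a rough illustration of the frame-synthesis idea (published interpolation models typically also estimate motion between the frames; the layer sizes and function names below are invented): a small network predicts an intermediate frame at time t between two recorded frames, and inserting several such frames between every recorded pair yields smooth slow motion.

```python
import torch
import torch.nn as nn

class FrameInterpolator(nn.Module):
    """Toy network that synthesizes a frame at time t between two real frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame0, frame1, t):
        # t in (0, 1): how far between the two recorded frames to synthesize
        b, _, h, w = frame0.shape
        t_map = torch.full((b, 1, h, w), t)
        return self.net(torch.cat([frame0, frame1, t_map], dim=1))

def slow_motion(frames, model, factor=8):
    """Insert factor-1 synthetic frames between each recorded pair, so 30 fps
    footage can be played back as e.g. 8x slow motion without looking choppy."""
    out = []
    for f0, f1 in zip(frames[:-1], frames[1:]):
        out.append(f0)
        for k in range(1, factor):
            out.append(model(f0, f1, k / factor))
    out.append(frames[-1])
    return out
```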
« The problem is distinct from AI’s reproducibility problem, in which researchers cannot reproduce each other’s results because of flawed experimental and publication practices. It also differs from machine learning’s “black box” or “interpretability” problem: the difficulty of explaining how a particular AI has reached its conclusions. As Rahimi puts it, “I’m trying to draw a distinction between a machine learning system that is a black box and an entire field that has become a black box.” »
«DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform. If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video».
«People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally “muting” all other voices and sounds. Known as the cocktail party effect, this capability comes natural to us humans. However, automatic speech separation — separating an audio signal into its individual speech sources — while a well-studied problem, remains a significant challenge for computers. In “Looking to Listen at the Cocktail Party”, we present a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise».
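A hedged sketch of that audio-visual idea follows (this is not Google's "Looking to Listen" architecture; the dimensions and module choices are invented): visual embeddings of the target speaker's face are fused with the spectrogram of the noisy mixture to predict a time-frequency mask that keeps only that speaker's voice.

```python
import torch
import torch.nn as nn

class AudioVisualSeparator(nn.Module):
    """Toy audio-visual separator: face features steer a spectrogram mask."""
    def __init__(self, n_freq_bins=257, face_dim=128, hidden=256):
        super().__init__()
        self.audio_enc = nn.Linear(n_freq_bins, hidden)
        self.video_enc = nn.Linear(face_dim, hidden)
        self.fuse = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.mask_head = nn.Linear(hidden, n_freq_bins)

    def forward(self, mix_spec, face_feats):
        # mix_spec: (B, T, F) magnitude spectrogram of the noisy mixture
        # face_feats: (B, T, face_dim) per-frame embeddings of the target face
        a = self.audio_enc(mix_spec)
        v = self.video_enc(face_feats)
        h, _ = self.fuse(torch.cat([a, v], dim=-1))
        mask = torch.sigmoid(self.mask_head(h))    # per-bin soft mask in [0, 1]
        return mask * mix_spec                      # estimated target spectrogram

mix = torch.randn(2, 100, 257).abs()    # (batch, time, freq) mixture spectrogram
faces = torch.randn(2, 100, 128)        # embeddings of the speaker to isolate
clean_est = AudioVisualSeparator()(mix, faces)
```

Conditioning the mask on the face embeddings is what lets the model pick out the person being looked at, mirroring the "cocktail party" behaviour described above.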
«The most basic problem is that researchers often don’t share their source code. At the AAAI meeting, Odd Erik Gundersen, a computer scientist at the Norwegian University of Science and Technology in Trondheim, reported the results of a survey of 400 algorithms presented in papers at two top AI conferences in the past few years. He found that only 6% of the presenters shared the algorithm’s code. Only a third shared the data they tested their algorithms on, and just half shared “pseudocode”—a limited summary of an algorithm. (In many cases, code is also absent from AI papers published in journals, including Science and Nature.)».
«The time between calling 911 and the ambulance arriving can be critical for saving heart attack victims, but the person on the phone may not know what’s happening: this AI parses non-verbal clues to help diagnose from a distance».