Tag: deep learning (Page 7 of 11)

Google LYNA

“In both datasets, LYNA was able to correctly distinguish a slide with metastatic cancer from a slide without cancer 99% of the time. Further, LYNA was able to accurately pinpoint the location of both cancers and other suspicious regions within each slide, some of which were too small to be consistently detected by pathologists. As such, we reasoned that one potential benefit of LYNA could be to highlight these areas of concern for pathologists to review and determine the final diagnosis.”

Source: Google AI Blog: Applying Deep Learning to Metastatic Breast Cancer Detection

“In a world where surveillance technology is being deployed everywhere from airports and stadiums to public schools and hotels and raising a plethora of privacy concerns, it’s perhaps inevitable that farms on land and at sea would find ways to exploit it to improve productivity. Just this year, American agribusiness giant Cargill Inc. said it was working with an Irish tech start-up on a facial-recognition system to monitor cows so farmers can adjust feeding regimens to enhance milk production. Scanners will allow them to track food and water intake and even detect when females are having fertile days. Salmon farming may be next in line. As fish vies with beef and chicken as the global protein food of choice, exporters like Norway, the world’s biggest producer of the pinkish-orange fish, have become the focal point for radical marine-farming methods designed to help the $232 billion aquaculture industry feed the world.”

Source: Salmon Farmers Are Scanning Fish Faces to Fight Killer Lice – Bloomberg

“With a unified model for a large number of languages, we run the risk of being mediocre for each language, which makes the problem challenging. Moreover, it’s difficult to get human-annotated data for many of the languages. Although SynthText has been helpful as a way to bootstrap training, it’s not yet a replacement for human-annotated data sets. We are therefore exploring ways to bridge the domain gap between our synthetic engine and real-world distribution of text on images”.

Source: Rosetta: Understanding text in images and videos with machine learning – Facebook Code

“Google and DeepMind agreed to strictly delimit the scope of the optimization algorithms in order to avoid an operational incident. ‘Our data centre operators are always in control and can choose to exit AI control mode at any time. In these scenarios, the control system will transfer from AI control to the on-site rules and heuristics that define the automation industry today,’ the company explains. By following this path, Google has cut its electricity bill: on average, the company says it has achieved energy savings of 30%, even though the mechanism has only been deployed for a few months.”

Source: AI: Google hands its algorithms the keys to cooling its data centers – Tech – Numerama
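The fallback pattern the quote describes (AI suggestions applied only inside a pre-verified envelope, with the site's rule-based heuristics as the unconditional default) can be sketched as follows. All names, bounds, and the heuristic itself are illustrative assumptions, not details of Google's actual system:

```python
# Illustrative sketch (not Google's actual system) of the safety pattern
# described in the quote: AI-suggested actions are applied only inside
# pre-verified bounds, and control falls back to the site's rule-based
# heuristics whenever the operator exits AI mode or a suggestion is
# out of range.

SAFE_TEMP_RANGE = (18.0, 27.0)  # hypothetical cold-aisle setpoint bounds (°C)

def heuristic_setpoint(load):
    """Stand-in for the on-site rules that predate the AI controller."""
    return 24.0 if load < 0.7 else 22.0

def choose_setpoint(ai_suggestion, load, ai_mode_enabled):
    lo, hi = SAFE_TEMP_RANGE
    if ai_mode_enabled and lo <= ai_suggestion <= hi:
        return ai_suggestion          # AI stays within the delimited envelope
    return heuristic_setpoint(load)   # fallback to site heuristics

choose_setpoint(21.5, load=0.5, ai_mode_enabled=True)   # AI value accepted
choose_setpoint(12.0, load=0.9, ai_mode_enabled=True)   # out of bounds -> 22.0
choose_setpoint(21.5, load=0.5, ai_mode_enabled=False)  # operator exited -> 24.0
```

The point of the design is that the heuristic path is the default: the AI can only act inside an envelope the operators already trust.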


“OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20-35 minutes of both games”.

Source: The International 2018: Results

“We don’t just want this to be an academically interesting result – we want it to be used in real treatment. So our paper also takes on one of the key barriers for AI in clinical practice: the “black box” problem. For most AI systems, it’s very hard to understand exactly why they make a recommendation. That’s a huge issue for clinicians and patients who need to understand the system’s reasoning, not just its output – the why as well as the what.
Our system takes a novel approach to this problem, combining two different neural networks with an easily interpretable representation between them. The first neural network, known as the segmentation network, analyses the OCT scan to provide a map of the different types of eye tissue and the features of disease it sees, such as haemorrhages, lesions, irregular fluid or other symptoms of eye disease. This map allows eyecare professionals to gain insight into the system’s “thinking.” The second network, known as the classification network, analyses this map to present clinicians with diagnoses and a referral recommendation. Crucially, the network expresses this recommendation as a percentage, allowing clinicians to assess the system’s confidence in its analysis”.

Source: A major milestone for the treatment of eye disease | DeepMind
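The two-stage design described in the quote (an interpretable tissue map sitting between a segmentation stage and a classification stage) can be sketched with toy stand-ins. Everything below — the class names, thresholds, and rule-based "networks" — is a hypothetical illustration of the structure, not DeepMind's model:

```python
# Hypothetical sketch of the two-stage design in the quote: a
# segmentation stage produces an interpretable tissue map, and a
# separate classification stage turns that map into a referral
# recommendation with an explicit confidence percentage.

TISSUE_CLASSES = ["healthy", "haemorrhage", "lesion", "fluid"]

def segment(scan):
    """Toy 'segmentation network': map each OCT voxel intensity to a
    tissue class. A real model would be a learned 3D segmenter."""
    def classify_voxel(v):
        if v < 0.2: return "healthy"
        if v < 0.5: return "fluid"
        if v < 0.8: return "lesion"
        return "haemorrhage"
    return [[classify_voxel(v) for v in row] for row in scan]

def classify(tissue_map):
    """Toy 'classification network': score the map and express the
    recommendation as a percentage, so clinicians see the system's
    confidence rather than a bare label."""
    flat = [c for row in tissue_map for c in row]
    diseased = sum(c != "healthy" for c in flat)
    confidence = 100.0 * diseased / len(flat)
    recommendation = "urgent referral" if confidence > 50 else "routine"
    return recommendation, confidence

scan = [[0.1, 0.6, 0.9],
        [0.3, 0.7, 0.1],
        [0.9, 0.1, 0.6]]
tissue_map = segment(scan)          # interpretable intermediate output
rec, conf = classify(tissue_map)    # diagnosis layered on top of the map
```

The structural point is that the intermediate `tissue_map` is human-readable: a clinician can inspect it even if they ignore the final classification.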

Four video still images of Tucker Carlson speaking

Four video still images that mirror the original Tucker Carlson video. The face on the speaker appears to be that of actor Nicolas Cage.

“Lyu says a skilled forger could get around his eye-blinking tool simply by collecting images that show a person blinking. But he adds that his team has developed an even more effective technique, but says he’s keeping it secret for the moment. “I’d rather hold off at least for a little bit,” Lyu says. “We have a little advantage over the forgers right now, and we want to keep that advantage.””

Source: The Defense Department has produced the first tools for catching deepfakes – MIT Technology Review
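The "eye-blinking tool" mentioned above flagged early deepfakes by their unnaturally infrequent blinking; the mechanism can be illustrated with a toy eye-aspect-ratio (EAR) time series. The EAR values, thresholds, and frame rate below are assumptions for the sketch; a real pipeline would derive EAR from facial landmarks:

```python
# Illustrative sketch of the blink-frequency cue: count eye closures in
# a per-frame eye-aspect-ratio (EAR) series and flag clips that blink
# far less often than humans do. All thresholds are assumed values.

EAR_CLOSED = 0.2        # EAR below this counts as a closed eye
MIN_BLINKS_PER_MIN = 4  # humans typically blink far more often

def count_blinks(ear_series):
    """Count closed-to-open transitions in a per-frame EAR series."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < EAR_CLOSED:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks

def looks_synthetic(ear_series, fps=30):
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / minutes < MIN_BLINKS_PER_MIN

# A 4-second clip that blinks twice vs. one that never blinks:
real = [0.3] * 50 + [0.1] * 5 + [0.3] * 50 + [0.1] * 5 + [0.3] * 10
fake = [0.3] * 120
```

The quote's caveat applies directly: a forger who trains on footage that includes blinks defeats this cue, which is why Lyu treats it as one signal among several.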

“The problem is distinct from AI's reproducibility problem, in which researchers cannot reproduce one another's results because of flawed experimental and publication practices. It also differs from machine learning's ‘black box’ or ‘interpretability’ problem: the difficulty of explaining how a particular AI has reached its conclusions. As Rahimi puts it, ‘I'm trying to draw a distinction between a machine learning system that's a black box and an entire field that's become a black box.’”

Source: Is deep learning anything more than "alchemy"? | InternetActu.net


© 2025 no-Flux

Theme by Anders Noren