Tag: artificial intelligence (Page 13 of 24)

Microsoft - Zo

“California governor Jerry Brown signed regulations into law last Friday (Sept. 30) that should make it easier for Californians to know whether they’re speaking to a human or a bot. The new law goes into effect on July 1, 2019—Botageddon, as we’re going to call it—and could have far-reaching consequences for how automated systems communicate with people online. It will require companies to disclose whether they are using a bot to communicate with the public on the internet (something like “Hi, I’m a bot.”)”

Source : A new law means California’s bots have to disclose they’re not human — Quartz

Google Home

“Ryan Germick is in charge of developing the artificial intelligence behind the Google Home speaker. To give it empathy and humour, he surrounded himself with comedians, improv artists, satirical journalists and dialogue writers recruited from the Pixar studio. ‘It takes a village to raise a virtual assistant,’ he explains on CNBC’s website. With his team, he has managed to create a friendly personality, which they describe as a kind of eccentric, geeky librarian.”

Source : Sous le charme de l’assistante virtuelle de Google – Tendances Web

“With a unified model for a large number of languages, we run the risk of being mediocre for each language, which makes the problem challenging. Moreover, it’s difficult to get human-annotated data for many of the languages. Although SynthText has been helpful as a way to bootstrap training, it’s not yet a replacement for human-annotated data sets. We are therefore exploring ways to bridge the domain gap between our synthetic engine and real-world distribution of text on images”.

Source : Rosetta: Understanding text in images and videos with machine learning – Facebook Code
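
As a rough sketch of the bootstrapping idea mentioned in the quote, one way to combine a large synthetic pool with scarce human-annotated data is a weighted mixture sampler. Everything below (pool names, the 0.7 weight, the labels) is an illustrative assumption, not Facebook's published Rosetta pipeline.

```python
import random

# Hypothetical mixture sampling: synthetic (SynthText-style) images are cheap
# and plentiful, human-annotated images are scarce; each batch draws from both.
synthetic_pool = [("synthetic_img_%d.jpg" % i, "SYNTH_LABEL") for i in range(1000)]
annotated_pool = [("annotated_img_%d.jpg" % i, "HUMAN_LABEL") for i in range(100)]

def sample_batch(batch_size=32, synthetic_weight=0.7):
    """Draw a mixed batch; `synthetic_weight` controls how much the synthetic
    data dominates over the human-annotated data."""
    batch = []
    for _ in range(batch_size):
        pool = synthetic_pool if random.random() < synthetic_weight else annotated_pool
        batch.append(random.choice(pool))
    return batch

if __name__ == "__main__":
    batch = sample_batch()
    n_synth = sum(1 for _, label in batch if label == "SYNTH_LABEL")
    print(f"{n_synth}/{len(batch)} examples in this batch are synthetic")
```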

“DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools. Towards this end, DARPA research and development in human-machine symbiosis sets a goal to partner with machines. Enabling computing systems in this manner is of critical importance because sensor, information, and communication systems generate data at rates beyond which humans can assimilate, understand, and act. Incorporating these technologies in military systems that collaborate with warfighters will facilitate better decisions in complex, time-critical, battlefield environments; enable a shared understanding of massive, incomplete, and contradictory information; and empower unmanned systems to perform critical missions safely and with high degrees of autonomy. DARPA is focusing its investments on a third wave of AI that brings forth machines that understand and reason in context”.

Source : AI Next Campaign

“Google and DeepMind agreed to clearly delimit the scope of the optimisation algorithms in order to avoid an operational incident. ‘Our operators are always in control and can choose to exit AI control mode at any time. In those scenarios, the control system will switch from AI control to the on-site rules and heuristics that define the automation industry today,’ the company explains. Along the way, Google has reduced its electricity bill: on average, the company says it has achieved energy savings of 30%, even though the mechanism has only been deployed for a few months”.

Source : IA : Google confie les clés du refroidissement de ses data centers à ses algorithmes – Tech – Numerama
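
A minimal sketch of the fallback behaviour described in the quote: the AI recommendation is applied only while AI mode is on and the value stays inside an operator-defined envelope, otherwise the site heuristics take over. The setpoint range, heuristic and names below are hypothetical, not Google's or DeepMind's actual control system.

```python
# Illustrative only: AI recommendations are clamped to a safe envelope, with a
# fallback to on-site rules when operators exit AI mode or the value is unsafe.
SAFE_SETPOINT_RANGE = (16.0, 24.0)  # °C bounds the AI is allowed to act within

def heuristic_setpoint(outside_temp_c: float) -> float:
    """Site rules/heuristics used when AI control is disengaged."""
    return 20.0 if outside_temp_c < 25.0 else 18.0

def choose_setpoint(ai_recommendation: float, outside_temp_c: float, ai_mode: bool) -> float:
    """Use the AI recommendation only when AI mode is on and the value is
    within the safe envelope; otherwise fall back to the heuristics."""
    low, high = SAFE_SETPOINT_RANGE
    if ai_mode and low <= ai_recommendation <= high:
        return ai_recommendation
    return heuristic_setpoint(outside_temp_c)

if __name__ == "__main__":
    print(choose_setpoint(17.5, outside_temp_c=28.0, ai_mode=True))   # AI value used
    print(choose_setpoint(12.0, outside_temp_c=28.0, ai_mode=True))   # out of bounds, heuristic used
    print(choose_setpoint(17.5, outside_temp_c=28.0, ai_mode=False))  # operator exited AI mode
```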

“We don’t just want this to be an academically interesting result – we want it to be used in real treatment. So our paper also takes on one of the key barriers for AI in clinical practice: the “black box” problem. For most AI systems, it’s very hard to understand exactly why they make a recommendation. That’s a huge issue for clinicians and patients who need to understand the system’s reasoning, not just its output – the why as well as the what.
Our system takes a novel approach to this problem, combining two different neural networks with an easily interpretable representation between them. The first neural network, known as the segmentation network, analyses the OCT scan to provide a map of the different types of eye tissue and the features of disease it sees, such as haemorrhages, lesions, irregular fluid or other symptoms of eye disease. This map allows eyecare professionals to gain insight into the system’s “thinking.” The second network, known as the classification network, analyses this map to present clinicians with diagnoses and a referral recommendation. Crucially, the network expresses this recommendation as a percentage, allowing clinicians to assess the system’s confidence in its analysis”.

Source : A major milestone for the treatment of eye disease | DeepMind
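
A minimal sketch of the two-stage design described above: a segmentation network produces an interpretable tissue/pathology map, and a separate classification network reads that map and outputs referral confidences as percentages. Layer sizes, class counts and names are assumptions for illustration, not DeepMind's published architecture.

```python
import torch
import torch.nn as nn

N_TISSUE_CLASSES = 15   # e.g. tissue types plus pathologies such as haemorrhage or fluid
N_REFERRAL_CLASSES = 4  # e.g. urgent, semi-urgent, routine, observation only

class SegmentationNet(nn.Module):
    """Maps an OCT scan (simplified here to a 2D slice) to a per-pixel map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, N_TISSUE_CLASSES, kernel_size=1),
        )

    def forward(self, scan):
        return self.body(scan)  # (batch, N_TISSUE_CLASSES, H, W) interpretable map

class ClassificationNet(nn.Module):
    """Reads the interpretable map and outputs referral probabilities."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(N_TISSUE_CLASSES, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, N_REFERRAL_CLASSES),
        )

    def forward(self, tissue_map):
        return torch.softmax(self.body(tissue_map), dim=1)  # confidences that sum to 1

if __name__ == "__main__":
    scan = torch.randn(1, 1, 128, 128)          # one grayscale OCT slice
    tissue_map = SegmentationNet()(scan)        # interpretable intermediate representation
    referral = ClassificationNet()(tissue_map)  # referral probabilities
    print((referral * 100).round())             # shown to clinicians as percentages
```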

Two women outdoors looking at a mobile device using facial recognition technology.

“Microsoft announced Tuesday that it has updated its facial recognition technology with significant improvements in the system’s ability to recognize gender across skin tones. That improvement addresses recent concerns that commercially available facial recognition technologies more accurately recognized gender of people with lighter skin tones than darker skin tones, and that they performed best on males with lighter skin and worst on females with darker skin.
With the new improvements, Microsoft said it was able to reduce the error rates for men and women with darker skin by up to 20 times. For all women, the company said the error rates were reduced by nine times. Overall, the company said that, with these improvements, they were able to significantly reduce accuracy differences across the demographics”.

Source : Microsoft improves facial recognition to perform well across all skin tones, genders
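
To illustrate the kind of disaggregated evaluation implied here, the short sketch below computes error rates per (skin tone, gender) group on made-up predictions; only the bookkeeping, not the numbers, is the point.

```python
from collections import defaultdict

# Toy data: (skin_tone, gender, predicted_gender). Values are invented.
predictions = [
    ("darker", "female", "male"),
    ("darker", "female", "female"),
    ("darker", "male", "male"),
    ("lighter", "female", "female"),
    ("lighter", "male", "male"),
    ("lighter", "male", "male"),
]

errors = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for skin_tone, gender, predicted in predictions:
    group = (skin_tone, gender)
    errors[group][1] += 1
    if predicted != gender:
        errors[group][0] += 1

for group, (wrong, total) in sorted(errors.items()):
    print(f"{group}: error rate {wrong / total:.0%} ({wrong}/{total})")
```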

“Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2. While today we play with restrictions, we aim to beat a team of top professionals at The International in August subject only to a limited set of heroes. We may not succeed: Dota 2 is one of the most popular and complex esports games in the world, with creative and motivated professionals who train year-round to earn part of Dota’s annual $40M prize pool.
OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores — a larger-scale version of the system we built to play the much-simpler solo variant of the game last year. Using a separate LSTM for each hero and no human data, it learns recognizable strategies. This indicates that reinforcement learning can yield long-term planning with large but achievable scale — without fundamental advances, contrary to our own expectations upon starting the project”.

Source : OpenAI Five
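
A very rough sketch of the “separate LSTM for each hero” idea, with a self-play rollout driven by five recurrent policies. Observation and action sizes are placeholders, the environment is random noise, and the PPO update itself is omitted; this is not OpenAI's training code.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HIDDEN, N_HEROES = 64, 8, 128, 5  # placeholder sizes

class HeroPolicy(nn.Module):
    """One recurrent policy per hero; LSTM state is carried across timesteps."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(OBS_DIM, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, ACT_DIM)

    def forward(self, obs, state=None):
        out, state = self.lstm(obs, state)
        return torch.softmax(self.head(out), dim=-1), state

def self_play_episode(team, steps=10):
    """Both sides are driven by the same set of policies (self-play)."""
    states = [None] * N_HEROES
    for _ in range(steps):
        for i, policy in enumerate(team):
            obs = torch.randn(1, 1, OBS_DIM)  # placeholder observation from the game
            action_probs, states[i] = policy(obs, states[i])
            action = torch.multinomial(action_probs.squeeze(0), 1)  # sampled action
            # ...send `action` to the game, collect reward, store for the PPO update...

if __name__ == "__main__":
    team = [HeroPolicy() for _ in range(N_HEROES)]
    self_play_episode(team)
    print("rolled out one self-play episode with", N_HEROES, "hero policies")
```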
