Tag: deep learning (Page 6 of 11)

“We are building the next generation of media through the power of AI. Copyrights, distribution rights, and infringement claims will soon be things of the past. To give you a glimpse of what we have been working on we created a free resource of 100k high-quality faces. Every image was generated by our internal AI systems as it continually improves. Use them in your presentations, projects, mockups or wherever — all for just a link back to us!”

Source: 100,000 AI-Generated Faces – Free to Download!

“The Cerebras Wafer Scale Engine is 46,225 mm² with 1.2 trillion transistors and 400,000 AI-optimized cores. By comparison, the largest Graphics Processing Unit is 815 mm² and has 21.1 billion transistors.”

Source: Home – Cerebras
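
Taken at face value, the quoted figures put the wafer-scale part at roughly 57 times the die area and 57 times the transistor count of the GPU it is compared against. A quick back-of-the-envelope check in plain Python, using only the numbers from the quote above:

# Back-of-the-envelope comparison using only the figures quoted above.
wse_area_mm2 = 46_225          # Cerebras Wafer Scale Engine die area
wse_transistors = 1.2e12       # 1.2 trillion transistors

gpu_area_mm2 = 815             # "largest GPU" die area cited in the quote
gpu_transistors = 21.1e9       # 21.1 billion transistors

print(f"area ratio:       {wse_area_mm2 / gpu_area_mm2:.1f}x")        # ~56.7x
print(f"transistor ratio: {wse_transistors / gpu_transistors:.1f}x")  # ~56.9x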

“We present a method for detecting one very popular Photoshop manipulation — image warping applied to human faces — using a model trained entirely using fake images that were automatically generated by scripting Photoshop itself. We show that our model outperforms humans at the task of recognizing manipulated images, can predict the specific location of edits, and in some cases can be used to ‘undo’ a manipulation to reconstruct the original, unedited image.”

Source: Detecting Photoshopped Faces by Scripting Photoshop
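
The setup the abstract describes — a detector trained only on fakes produced by scripting Photoshop — boils down to a binary classifier over original and automatically warped face images. Below is an illustrative PyTorch sketch of that training loop, not the authors' released code; the folder layout (data/real, data/warped), the ResNet-18 backbone, and the hyperparameters are all assumptions.

# Minimal sketch of the training setup described in the abstract:
# a binary classifier trained on original faces vs. faces warped by
# a Photoshop script. Illustrative only; paths, model choice, and
# hyperparameters are assumptions, not the paper's actual settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a folder layout like data/real/... and data/warped/...,
# where the "warped" images were produced automatically.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # real vs. manipulated

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()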

“After training our agents for an additional week, we played against MaNa, one of the world’s strongest StarCraft II players, and among the 10 strongest Protoss players. AlphaStar again won by 5 games to 0, demonstrating strong micro and macro-strategic skills. “I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected,” he said. “I’ve realised how much my gameplay relies on forcing mistakes and being able to exploit human reactions, so this has put the game in a whole new light for me. We’re all excited to see what comes next.”

Source: AlphaStar: Mastering the Real-Time Strategy Game StarCraft II | DeepMind

“We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.”

via Tero Karras FI (YouTube)
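
The “scale-specific control of the synthesis” mentioned in the abstract comes from injecting a learned style vector at every resolution of the generator, rather than feeding the latent code only at the input. Below is a heavily stripped-down PyTorch sketch of that idea (a mapping network plus AdaIN-style modulation); the layer sizes, depth, and tiny 4×4 output are arbitrary placeholders, not the paper's actual architecture.

# Stripped-down sketch of the style-based generator idea: a mapping
# network turns the latent code into a style vector, and that style
# modulates the feature maps at every scale (AdaIN). All sizes here
# are arbitrary placeholders.
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    def __init__(self, channels, style_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(style_dim, channels * 2)  # per-channel scale and bias

    def forward(self, x, w):
        scale, bias = self.affine(w).chunk(2, dim=1)
        x = self.norm(x)
        return x * (1 + scale[:, :, None, None]) + bias[:, :, None, None]

class TinyStyleGenerator(nn.Module):
    def __init__(self, latent_dim=128, style_dim=128, channels=64):
        super().__init__()
        # Mapping network: latent z -> intermediate style w.
        self.mapping = nn.Sequential(
            nn.Linear(latent_dim, style_dim), nn.ReLU(),
            nn.Linear(style_dim, style_dim),
        )
        # Learned constant input, as in the style-based design.
        self.const = nn.Parameter(torch.randn(1, channels, 4, 4))
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.adain1 = AdaIN(channels, style_dim)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.adain2 = AdaIN(channels, style_dim)
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, z):
        w = self.mapping(z)
        x = self.const.expand(z.size(0), -1, -1, -1)
        x = torch.relu(self.adain1(self.conv1(x), w))  # first style injection
        x = torch.relu(self.adain2(self.conv2(x), w))  # second style injection
        return self.to_rgb(x)

g = TinyStyleGenerator()
img = g(torch.randn(2, 128))  # two 4x4 RGB samples from random latents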

Obvious Art

“The members of Obvious don’t deny that they borrowed substantially from Barrat’s code, but until recently, they didn’t publicize that fact either. This has created unease for some members of the AI art community, which is open and collaborative and taking its first steps into mainstream attention. Seeing an AI portrait on sale at Christie’s is a milestone that elevates the entire community, but has this event been hijacked by outsiders?”

Source: How three French students used borrowed code to put the first AI portrait in Christie’s – The Verge
