Tag: vulnerability (Page 3 of 38)

Responsible AI at Google Research: Context in AI Research (CAIR)

https://no-flux.beaude.net/wp-content/uploads/2023/11/CAIR.png

« Artificial intelligence (AI) and related machine learning (ML) technologies are increasingly influential in the world around us, making it imperative that we consider the potential impacts on society and individuals in all aspects of the technology that we create. To these ends, the Context in AI Research (CAIR) team develops novel AI methods in the context of the entire AI pipeline: from data to end-user feedback. The pipeline for building an AI system typically starts with data collection, followed by designing a model to run on that data, deployment of the model in the real world, and lastly, compiling and incorporation of human feedback ».

Source : Responsible AI at Google Research: Context in AI Research (CAIR)

Stars of David graffiti: France denounces a Russian digital interference campaign

https://no-flux.beaude.net/wp-content/uploads/2023/11/d0f1d36_2023-11-07t173948z-1827778678-rc2o34ansloz-rtrmadp-3-israel-palestinians-france-antisemitism.jpg

“French diplomats explain that numerous social media accounts attributed “with a high degree of confidence” to the Doppelgänger network amplified the photos of these tags, and above all that they were the first to spread them online. The investigation was conducted by Viginum, the French agency responsible for combating influence operations.
As Le Monde had observed, at least two photos of the tags taken at night on Rue de Rocroy (10th arrondissement) had been massively circulated on Facebook and Twitter by Doppelgänger accounts. While these automated accounts are crude, the photos they circulated could not be found anywhere else, suggesting they had been taken or received by someone connected to the disinformation network.”

Source : Stars of David graffiti: France denounces a Russian digital interference campaign

Here’s How Violent Extremists Are Exploiting Generative AI Tools

https://no-flux.beaude.net/wp-content/uploads/2023/11/security_extremists_ai.jpg

“Beyond detailing the threat posed by generative AI tools that can tweak images, Tech Against Terrorism has published a new report citing other ways in which gen AI tools can be used to help extremist groups. These include the use of autotranslation tools that can quickly and easily convert propaganda into multiple languages, or the ability to create personalized messages at scale to facilitate recruitment efforts online. But Hadley believes that AI also provides an opportunity to get ahead of extremist groups and use the technology to preempt what they will use it for.
“We’re going to partner with Microsoft to figure out if there are ways using our archive of material to create a sort of gen AI detection system in order to counter the emerging threat that gen AI will be used for terrorist content at scale,” Hadley says. “We’re confident that gen AI can be used to defend against hostile uses of gen AI.””

Source : Here’s How Violent Extremists Are Exploiting Generative AI Tools | WIRED
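The article does not say how such a detection system would be built. One common building block in this space is matching new uploads against an archive of known material via perceptual hashing; the Python sketch below is purely illustrative, assumes the Pillow and imagehash packages, and uses an invented archived hash and threshold. It addresses re-shared known content rather than novel generative output, but shows the role an archive can play.

  # Illustrative only: flag images that closely match an archive of known material.
  # Assumes the Pillow and imagehash packages; archive contents are placeholders.
  from PIL import Image
  import imagehash

  # Perceptual hashes precomputed from archived material (placeholder value).
  archive_hashes = [imagehash.hex_to_hash("c3d4e5f6a7b80912")]

  def matches_archive(path, threshold=8):
      # A small Hamming distance to any archived hash suggests a near-duplicate.
      h = imagehash.phash(Image.open(path))
      return any(h - known <= threshold for known in archive_hashes)

  print(matches_archive("incoming_image.jpg"))  # hypothetical input file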

A source you can trust – M11-P | Leica

https://no-flux.beaude.net/wp-content/uploads/2023/11/leica_m11-p_website_home_hero_3840x1780-scaled.jpg

“A source you can trust – Inviting Clarity & Context. The M11-P pioneers the use of encrypted metadata in compliance with the Content Authenticity Initiative (CAI). This feature provides an additional layer of transparency to the conception and modifications of an image file, bringing awareness to the file’s provenance. With the Leica Content Credentials, each image you capture receives a digital signature backed by a CAI-compliant certificate. You can easily verify the authenticity of your images at any time by visiting contentcredentials.org/verify or in the Leica FOTOS app.”

Source : Details – M11-P | Leica Camera US
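At bottom, the Content Credentials described above are a public-key signature binding capture metadata to the image content. The Python sketch below, using the cryptography package, is only a minimal illustration of that general idea, not the actual CAI/C2PA manifest format or Leica’s key and certificate handling; the field names and payload layout are invented.

  # Minimal sketch of signed provenance metadata (NOT the real CAI/C2PA format).
  import hashlib, json
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  def sign_provenance(image_bytes, metadata, private_key):
      # Bind metadata to the exact pixels via a SHA-256 digest, then sign the
      # combined payload so any later modification invalidates the signature.
      payload = json.dumps(
          {"image_sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
          sort_keys=True,
      ).encode()
      return payload, private_key.sign(payload)

  def verify_provenance(payload, signature, public_key):
      try:
          public_key.verify(signature, payload)
          return True
      except InvalidSignature:
          return False

  # Hypothetical usage: a camera-side key signs at capture time; anyone holding
  # the public key (e.g. via a certificate) can later check integrity.
  key = Ed25519PrivateKey.generate()
  payload, sig = sign_provenance(b"...raw image bytes...", {"camera": "M11-P"}, key)
  print(verify_provenance(payload, sig, key.public_key()))  # True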

Meta Sued Over Features That Hook Children to Instagram, Facebook

https://no-flux.beaude.net/wp-content/uploads/2023/10/24METAKIDS-vgph-superJumbo.jpg

“It’s unusual for so many states to come together to sue a tech giant for consumer harms. The coordination shows states are prioritizing the issue of children and online safety and combining legal resources to fight Meta, just as states had previously done for cases against Big Tobacco and Big Pharma companies. “Just like Big Tobacco and vaping companies have done in years past, Meta chose to maximize its profits at the expense of public health, specifically harming the health of the youngest among us,” Phil Weiser, Colorado’s attorney general, said in a statement.
Lawmakers around the globe have been trying to rein in platforms like Instagram and TikTok on behalf of children. Over the past few years, Britain, followed by states like California and Utah, passed laws to require social media platforms to boost privacy and safety protections for minors online. The Utah law, among other things, would require social media apps to turn off notifications by default for minors overnight to reduce interruptions to children’s sleep.”

Source : Meta Sued Over Features That Hook Children to Instagram, Facebook – The New York Times

This Tool Could Protect Artists From A.I. Image Generators

A hand gesturing in front of a computer screen showing examples of paintings imitated by A.I.

“To the human eye, the Glazed image still looks like her work, but the computer-learning model would pick up on something very different. It’s similar to a tool the University of Chicago team previously created to protect photos from facial recognition systems.
When Ms. Ortiz posted her Glazed work online, an image generator trained on those images wouldn’t be able to mimic her work. A prompt with her name would instead lead to images in some hybridized style of her works and Pollock’s.
“We’re taking our consent back,” Ms. Ortiz said. A.I.-generating tools, many of which charge users a fee to generate images, “have data that doesn’t belong to them,” she said. “That data is my artwork, that’s my life. It feels like my identity.”
The team at the University of Chicago admitted that their tool does not guarantee protection and could lead to countermeasures by anyone committed to emulating a particular artist. “We’re pragmatists,” Professor Zhao said. “We recognize the likely long delay before law and regulations and policies catch up. This is to fill that void.””

Source : This Tool Could Protect Artists From A.I. Image Generators – The New York Times
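The excerpt does not detail Glaze’s algorithm, but the general idea of such a “cloak” is to add a small, visually negligible perturbation that shifts an image’s feature representation toward a decoy style, so that models trained on the cloaked copies learn the wrong features. The PyTorch sketch below is a generic, hypothetical illustration of that idea, not the published Glaze method; feature_extractor and style_target stand in for a frozen vision model and a decoy-style image.

  # Hypothetical "style cloak" sketch (NOT the published Glaze algorithm).
  import torch

  def cloak(image, style_target, feature_extractor, eps=0.03, steps=50, lr=0.01):
      # image, style_target: float tensors in [0, 1]; feature_extractor: frozen model.
      delta = torch.zeros_like(image, requires_grad=True)
      target_feat = feature_extractor(style_target).detach()
      opt = torch.optim.Adam([delta], lr=lr)
      for _ in range(steps):
          # Pull the cloaked image's features toward the decoy style's features.
          loss = torch.nn.functional.mse_loss(feature_extractor(image + delta), target_feat)
          opt.zero_grad()
          loss.backward()
          opt.step()
          with torch.no_grad():
              # Keep the change small (ideally imperceptible) and the pixels valid.
              delta.clamp_(-eps, eps)
              delta.copy_((image + delta).clamp(0, 1) - image)
      return (image + delta).detach()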

Frontier risk and preparedness


“To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge.
The team will help track, evaluate, forecast and protect against catastrophic risks spanning multiple categories including:

  • Individualized persuasion
  • Cybersecurity
  • Chemical, biological, radiological, and nuclear (CBRN) threats
  • Autonomous replication and adaptation (ARA)”

Source : Frontier risk and preparedness

38TB of data accidentally exposed by Microsoft AI researchers

“Microsoft’s AI research team, while publishing a bucket of open-source training data on GitHub, accidentally exposed 38 terabytes of additional private data — including a disk backup of two employees’ workstations. The backup includes secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages. The researchers shared their files using an Azure feature called SAS tokens, which allows you to share data from Azure Storage accounts.”

Source : 38TB of data accidentally exposed by Microsoft AI researchers | Wiz Blog
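According to Wiz, the root cause was an overly permissive, long-lived SAS token covering far more than the intended dataset. By contrast, here is a minimal sketch, using the azure-storage-blob Python SDK, of issuing a read-only, single-blob, short-lived SAS URL; the account, container, blob, and key values are placeholders, and in practice a user-delegation SAS would be preferable to an account-key SAS.

  # Sketch: a narrowly scoped, short-lived, read-only SAS URL for one blob.
  # Assumes the azure-storage-blob SDK; all account values are placeholders.
  from datetime import datetime, timedelta, timezone
  from azure.storage.blob import BlobSasPermissions, generate_blob_sas

  ACCOUNT = "examplestorageaccount"                        # placeholder
  CONTAINER = "public-datasets"                            # placeholder
  BLOB = "training-data.tar"                               # placeholder
  ACCOUNT_KEY = "<kept in a secret store, never in a repo>"  # placeholder

  sas = generate_blob_sas(
      account_name=ACCOUNT,
      container_name=CONTAINER,
      blob_name=BLOB,
      account_key=ACCOUNT_KEY,
      permission=BlobSasPermissions(read=True),                # read-only, one blob
      expiry=datetime.now(timezone.utc) + timedelta(hours=1),  # short-lived
  )
  print(f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas}")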

With 0-days hitting Chrome, iOS, and dozens more this month, is no software safe?

https://no-flux.beaude.net/wp-content/uploads/2023/09/zeroday-800x534-1.jpg

“End users, admins, and researchers better brace yourselves: The number of apps being patched for zero-day vulnerabilities has skyrocketed this month and is likely to get worse in the following weeks. People have worked overtime in recent weeks to patch a raft of vulnerabilities actively exploited in the wild, with offerings from Apple, Microsoft, Google, Mozilla, Adobe, and Cisco all being affected since the beginning of the month.
The number of zero-days tracked this month is considerably higher than the monthly average this year. September so far is at 10, compared with a total of 60 from January through August, according to security firm Mandiant. The company tracked 55 zero-days in 2022 and 81 in 2021.
A sampling of the affected companies and products includes iOS and macOS, Windows, Chrome, Firefox, Acrobat and Reader, the Atlas VPN, and Cisco’s Adaptive Security Appliance Software and its Firepower Threat Defense. The number of apps is likely to grow because a single vulnerability that allows hackers to execute malicious code when users open a booby-trapped image included in a message or web page is present in possibly hundreds of apps.”

Source : With 0-days hitting Chrome, iOS, and dozens more this month, is no software safe? | Ars Technica
