In partnership with NVIDIA and HiddenLayer, as part of the Open Source Security Foundation, we are now launching the first stable version of our model signing library. Using digital signatures like those from Sigstore, we allow users to verify that the model used by the application is exactly the model that was created by the developers. In this blog post we will illustrate why this release is important from Google’s point of view.
With the advent of LLMs, the ML field has entered an era of rapid evolution. We have seen remarkable progress, with weekly launches of applications that incorporate ML models for tasks ranging from customer support and software development to security-critical operations.
However, this has also opened the door to a new wave of security threats. Model and data poisoning, prompt injection, prompt leaking, and prompt evasion are just a few of the risks that have recently been in the news. Garnering less attention are the risks around the ML supply chain: since models are an uninspectable collection of weights (sometimes bundled with arbitrary code), an attacker can tamper with them and have a significant impact on those who use them. Users, developers, and practitioners need to examine an important question during their risk assessment process: “can I trust this model?”
Since its launch, Google’s Secure AI Framework (SAIF) has provided guidance and technical solutions for creating AI applications that users can trust. A first step in establishing that trust is letting users verify a model’s integrity and provenance via cryptographic signing, preventing tampering at every stage from training to usage.
To understand the need for the model signing project, let’s look at the way ML powered applications are developed, with an eye to where malicious tampering can occur.
Applications that use advanced AI models are typically developed in at least three stages. First, a large foundation model is trained on large datasets. Next, a separate ML team fine-tunes the model to achieve good performance on application-specific tasks. Finally, the fine-tuned model is embedded into an application.
The three steps involved in building an application that uses large language models.
These three stages are usually handled by different teams, and potentially even different companies, since each stage requires specialized expertise. To make models available from one stage to the next, practitioners leverage model hubs, which are repositories for storing models. Kaggle and HuggingFace are popular open source options, although internal model hubs could also be used.
This separation into stages creates multiple opportunities where a malicious user (or an external threat actor who has compromised the internal infrastructure) could tamper with the model. This could range from a slight alteration of the model weights that control model behavior, to injecting architectural backdoors — completely new model behaviors and capabilities triggered only on specific inputs. It is also possible to exploit the serialization format and inject code that executes arbitrarily when the model is loaded from disk — our whitepaper on AI supply chain integrity goes into more detail on how popular model serialization libraries can be exploited. The following diagram summarizes the risks across the ML supply chain for developing a single model, as discussed in the whitepaper.
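To make the serialization risk concrete, here is a minimal, self-contained sketch of how a pickle-based model format can carry attacker code. The harmless echo command stands in for an attacker's payload; in a real attack, the payload would be hidden inside an otherwise legitimate model file.

```python
import os
import pickle

class MaliciousPayload:
    """A stand-in for attacker-controlled content embedded in a pickled model file."""
    def __reduce__(self):
        # pickle will call os.system(...) while deserializing this object
        return (os.system, ("echo arbitrary code ran at load time",))

# "Saving the model": the payload is baked into the serialized bytes.
tainted_bytes = pickle.dumps(MaliciousPayload())

# "Loading the model": deserialization alone runs the attacker's command,
# before any weights are ever used.
pickle.loads(tainted_bytes)
```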
The supply chain diagram for building a single model, illustrating some supply chain risks (oval labels) and where model signing can defend against them (check marks)
The diagram shows several places where the model could be compromised. Most of these could be prevented by signing the model during training and verifying its integrity before any usage, at every step: the signature would have to be verified when the model is uploaded to a model hub, when it is selected for deployment into an application (embedded or via remote APIs), and when it is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model.
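As a conceptual illustration of what such verification checks, here is a minimal sketch that hashes a model directory deterministically and signs the digest, using SHA-256 and Ed25519 from the cryptography package. This is not the model signing library's actual implementation, which additionally handles per-file manifests, very large files, and Sigstore identities rather than raw keys; the my_model directory name is a placeholder.

```python
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def digest_model_tree(root: Path) -> bytes:
    """Hash every file in the model directory, in a deterministic order."""
    h = hashlib.sha256()
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h.update(path.relative_to(root).as_posix().encode())  # bind file names
            h.update(path.read_bytes())                           # bind file contents
    return h.digest()

# Training side: sign the digest once the model is written to disk.
key = Ed25519PrivateKey.generate()
signature = key.sign(digest_model_tree(Path("my_model")))

# Consumer side (hub upload, deployment, further training): verify before use.
try:
    key.public_key().verify(signature, digest_model_tree(Path("my_model")))
    print("model is exactly what was signed")
except InvalidSignature:
    print("model was tampered with; refuse to load it")
```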
Signing models is inspired by code signing, a critical step in traditional software development. A signed binary artifact helps users identify its producer and prevents tampering after publication. The average developer, however, would not want to manage keys and rotate them on compromise.
These challenges are addressed by Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore removes the need to manage or rotate long-lived secrets. Furthermore, signing is transparent: signatures over malicious artifacts can be audited by anyone in a public transparency log. This prevents split-view attacks, so every user gets the exact same model. These features are why we recommend Sigstore’s signing mechanism as the default approach for signing ML models.
Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore as well as traditional signing methods. The library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components) and signs models represented as a directory tree. The package provides CLI utilities so that users can sign and verify signatures for individual models. It can also be used as a library, which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
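As a quick orientation, the sketch below shows the library-level pattern for Sigstore-based signing and verification, adapted from the package's documented usage. The exact module and method names are an assumption to check against the current model-signing documentation, and the model path and identity values are placeholders.

```python
import model_signing

# Sign a model directory with Sigstore (triggers an OIDC identity flow).
# Method names follow the package's documented pattern; verify against current docs.
model_signing.signing.Config().use_sigstore_signer().sign(
    "path/to/model", "model.sig"
)

# Verify the signature against the expected signer identity.
model_signing.verifying.Config().use_sigstore_verifier(
    identity="developer@example.com",            # placeholder identity
    oidc_issuer="https://accounts.example.com",  # placeholder OIDC issuer
).verify("path/to/model", "model.sig")
```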
We can view model signing as the foundation of trust in the ML ecosystem. We envision extending this approach to datasets and other ML-related artifacts. Then, we plan to build on top of signatures toward fully tamper-proof metadata records that can be read by both humans and machines. This has the potential to automate a significant fraction of the incident-response work needed after a compromise in the ML world. Ideally, an ML developer would not need to change the training code at all; the framework itself would handle model signing and verification transparently.
Apple released security updates Monday to address software defects in the latest version of the company’s Safari browser and other applications across iOS, iPadOS and macOS.
The security issues addressed across the latest versions of Apple’s most popular platforms include 62 vulnerabilities affecting iOS 18.4 and iPadOS 18.4, 131 vulnerabilities affecting macOS Sequoia 15.4 and 14 vulnerabilities affecting Safari 18.4.
The batch of software defects addressed by Apple includes CVE-2025-24221, which could make sensitive keychain data accessible from an iOS backup, and CVE-2025-24245, which could allow an attacker to use a malicious application to access a user’s saved passwords in macOS.
Apple also released security updates for older versions of its operating systems to address two actively exploited zero-day vulnerabilities that it identified and issued emergency patches for on March 11.
A zero-day vulnerability in the company’s WebKit web browser engine, tracked as CVE-2025-24201, can allow an attacker to break out of WebKit’s Web Content sandbox and potentially conduct unauthorized actions. The second zero-day, CVE-2025-24200, can allow an attacker with physical access to disable USB Restricted Mode on a locked device.
Apple said both zero-days were actively exploited in an “extremely sophisticated attack against specific target individuals.” Apple released security updates Monday to address the zero-days in iOS 15.8.4 and 16.7.11, and iPadOS 15.8.4 and 16.7.11, versions of the company’s operating systems that power previous generation iPhones and iPads.
More information about Apple’s latest security updates is available on its website.
Take a moment to list all the digital accounts you've signed up for, and it's probably more than you realized: email, social media, banking, streaming services, cloud storage, music, gaming, and fitness... it adds up. If you reuse the same login credentials across those accounts, the simplest advice you should take away from this article is: don't. But, of course, it's nearly impossible to remember as many unique usernames and secure passwords as you need for your various accounts. That's where password managers come in.
Password managers hide your various login credentials behind one main username and password so that logging into the password manager gives you access to everything else. It's a secure alternative to writing your passwords down or saving them in a spreadsheet, and more reliable than your memory. They can often store other data, too—think credit card numbers, PIN codes, and authenticator keys—and may also give you extra features like scanning data breaches for your credentials. If you've yet to switch to a password manager, consider this a sign to get started. It can be intimidating at first, but getting started may be easier than you think.
Password managers are all slightly different, but you'll find many of the same features across brands. First and foremost, they store your passwords—often popping up inside web browsers and on phones whenever you need to log into an account—and provide you with your login credentials with one click or tap. As sign-in technologies have evolved, though, so have password managers. Many can now also help with two-factor authentication codes and passkeys for websites or apps that need more than just a username and password. At the same time, these password managers are secured with a main username and password you need to remember—and often with biometric authentication, too.
Most password managers will also suggest strong passwords for new accounts: passwords that mix random special characters, letters, and numbers, so they're extremely difficult to crack. With a password manager, you don't actually need to know what your passwords are—the program handles everything. You'll often see password managers offer additional security features as well, ranging from notifying you of duplicate passwords to dark web monitoring for your email addresses, usernames, or passwords. If your login details appear in a data breach, you get an alert and can change them.
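For a sense of what that suggestion feature does under the hood, here is a minimal sketch of a strong-password generator using Python's standard secrets module; the character set and length are illustrative choices, not any particular product's algorithm.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and special characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run, e.g. 'k@7Qz!%T...'
```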
You might wonder how password managers make sure your passwords are securely and privately locked away. Details vary between software packages, but they invariably use end-to-end encryption with your main password as the decryption key, meaning no one else—from hackers to password manager developers to government agencies—can access your details without that password. Additional security measures are often implemented as well. Take 1Password as an example: it uses PBKDF2 (Password-Based Key Derivation Function 2) key strengthening, which, in simple terms, makes your main password so expensive to brute-force that cracking it would take decades. It also gives each user a secret key, known only to them, that works as an extra security layer on top of the main password.
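For a sense of what PBKDF2 key strengthening involves, here is a minimal sketch using Python's standard library. The password, salt handling, and iteration count are illustrative only, not 1Password's actual parameters.

```python
import hashlib
import os

# A per-user random salt prevents precomputed (rainbow-table) attacks.
salt = os.urandom(16)

# Hundreds of thousands of HMAC-SHA256 iterations make every
# brute-force guess expensive for an attacker.
key = hashlib.pbkdf2_hmac(
    "sha256",
    b"correct horse battery staple",  # the user's main password
    salt,
    600_000,                          # illustrative iteration count
)
print(key.hex())  # the derived key that actually encrypts the vault
```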
In other words, you can't just use your pet's name as your password manager password. Extra security layers, including two-factor authentication and biometric scans, are often added too. Where your credentials need to be synced across multiple devices, strong encryption protocols are again deployed. Without your password, the data is useless, and only you know your password.
Most password managers now combine local and cloud storage options, because we all need our passwords on so many devices. However, it's worth bearing in mind that the fewer places you have your password manager installed, the less chance there is of someone else gaining access to it—so some users just keep their password manager on their phone.
Simply put, using a password manager is a whole lot more secure than the alternatives, like listing your passwords in a Google Doc. Say, for example, that you left your laptop unlocked and someone sat down at it. That person could open a password document straight away, whereas a password manager would still demand authentication before revealing anything.
The free offerings from Google and Apple have improved significantly in recent years, but they still don't quite match the level of protection, breadth of features, and cross-platform support of the best dedicated password managers. One example: in Google Password Manager, on-device encryption (meaning that you manage the decryption key locally, as with a dedicated password manager, rather than Google managing it) remains an optional extra you have to enable, rather than being on by default.
Given the protection and features that come with dedicated password managers, they're typically worth the investment for most people. Some packages offer a free tier, but these may be limited in features and in the number of devices you can use them on. Expect to pay a few bucks per month for most apps, though you can also look for bundled deals that include VPNs and ad blockers, for instance. Whatever brand or package you choose, you should begin using a password manager: you get a private password vault, a host of protections to keep it safe, and added features like data breach monitoring and strong password generation. Plus, the best password managers sync seamlessly across all of your devices, ready when you need them.