Microsoft’s AI researchers have made a huge mistake.
According to a new report from the cloud security company Wiz, the Microsoft AI research team accidentally leaked 38TB of the company’s private data.
38 terabytes. That’s a lot of data.
The exposed data included full backups of two employees’ computers. These backups contained sensitive personal data, including passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from more than 350 Microsoft employees.
So, how did this happen? The report explains that Microsoft’s AI team uploaded a bucket of training data containing open-source code and AI models for image recognition. Users who found the GitHub repository were given a link to Azure, Microsoft’s cloud storage service, to download the models.
One problem: The link provided by Microsoft’s AI team gave visitors complete access to the entire Azure storage account. And not only could visitors see everything in the account, they could also upload, overwrite, or delete files.
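For context, here is a minimal sketch, using the azure-storage-blob Python SDK, of what an overly permissive, account-level shared URL can look like. The exact token Microsoft’s team generated is not public, and every name below is a placeholder; the point is only that a single signed URL can grant read, list, write, and delete rights over an entire storage account.

```python
# Illustrative sketch only: an account-level SAS token with very broad
# permissions. All names are hypothetical placeholders, not Microsoft's
# actual resources.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    ResourceTypes,
    generate_account_sas,
)

sas_token = generate_account_sas(
    account_name="examplestorageacct",    # hypothetical storage account
    account_key="<storage-account-key>",  # never embed real keys in code
    resource_types=ResourceTypes(service=True, container=True, object=True),
    # Read + list + write + delete across the whole account: anyone holding
    # the URL can browse, overwrite, or remove any file it contains.
    permission=AccountSasPermissions(read=True, list=True, write=True, delete=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365 * 10),  # long-lived
)
```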
Wiz said this happened as a result of an Azure feature called Shared Access Signature (SAS) tokens; a SAS token is “a signed URL that provides access to data in Azure Storage.” The SAS token could have been set up with limitations on which file or files could be accessed. However, this particular link was configured with full access.
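By contrast, a properly scoped SAS token can be limited to a single file, read-only access, and a short expiry window. Here is a hedged sketch using the same SDK; again, the account, container, and blob names are illustrative, not the real repository’s.

```python
# Minimal sketch: generating a narrowly scoped, read-only SAS URL for one blob
# with the azure-storage-blob Python SDK. Names are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account_name = "examplestorageacct"        # hypothetical storage account
container_name = "public-models"           # hypothetical container
blob_name = "image-recognition-model.zip"  # hypothetical file to share
account_key = "<storage-account-key>"      # never commit real keys

# Scope the token to a single blob, read-only, with a short expiry.
sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),  # read-only: no write/delete
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

download_url = (
    f"https://{account_name}.blob.core.windows.net/"
    f"{container_name}/{blob_name}?{sas_token}"
)
print(download_url)  # safe to hand out: one file, read-only, expires in a week
```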
Adding to the potential issues, according to Wiz, is that this data has been exposed since 2020.
Wiz contacted Microsoft earlier this year, on June 22, to warn the company about the discovery. Two days later, Microsoft invalidated the SAS token, cutting off outside access. Microsoft conducted and completed an investigation into potential impacts in August.
Microsoft gave TechCrunch a statement claiming that “no customer data was exposed, and no other internal services were put at risk because of this issue.”