A research team patched one vulnerability. Safeguarding against the next ones will require constant vigilance.
In October 2023, two scientists at Microsoft discovered a startling vulnerability in a safety net intended to prevent bad actors from using artificial intelligence tools to concoct hazardous proteins for warfare or terrorism.
Those gaping security holes and how they were discovered were kept confidential until Thursday, when a report in the journal Science detailed how researchers generated thousands of AI-engineered versions of 72 toxins that escaped detection. The research team, a group of leading industry scientists and biosecurity experts, designed a patch to fix the problem, which was found in four different screening methods. But they warn that experts will have to keep searching for future breaches in this safety net.
“This is like a Windows update model for the planet. We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can,” Eric Horvitz, chief scientific officer of Microsoft and one of the leaders of the work, said at a press briefing.
The team considered the incident the first AI and biosecurity “zero day” — borrowing a term from cybersecurity for flaws that software developers don’t yet know about, leaving their systems open to attack.