WARNING. BE VERY CAREFUL WHAT YOU WISH FOR!!!
“An AI agent has discovered a previously unknown, zero-day, exploitable memory-safety vulnerability in widely used real-world software.
It’s the first example, at least to be made public, of such a find, according to Google’s Project Zero and DeepMind” https://t.co/OqoVvZdStL
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) November 17, 2024
Google Claims World First As AI Finds 0-Day Security Vulnerability | Forbes
By Davey Winder, Senior Contributor, a veteran cybersecurity writer, hacker and analyst. Nov 5, 2024
An AI agent has discovered a previously unknown, zero-day, exploitable memory-safety vulnerability in widely used real-world software. It’s the first such find to be made public, according to Google’s Project Zero and DeepMind, the forces behind Big Sleep, the large language model-assisted vulnerability agent that spotted the flaw. If you don’t know what Project Zero is and have not been in awe of what it has achieved in the security space, then you simply have not been paying attention these last few years. These elite hackers and security researchers work relentlessly to uncover zero-day vulnerabilities in Google’s products and beyond. The same accusation of inattention applies if you are unaware of DeepMind, Google’s AI research lab. So when these two technological behemoths joined forces to create Big Sleep, they were bound to make waves.
Google Uses Large Language Model To Catch Zero-Day Vulnerability In Real-World Code
In a Nov. 1 announcement, Google’s Project Zero blog confirmed that the Project Naptime large language model-assisted security vulnerability research framework has evolved into Big Sleep. This collaborative effort, involving some of the very best ethical hackers from Project Zero and the very best AI researchers from Google DeepMind, has produced a large language model-powered agent that can go out and uncover very real security vulnerabilities in widely used code. In the case of this world first, the Big Sleep team says it found “an exploitable stack buffer underflow in SQLite, a widely used open source database engine.” The zero-day vulnerability was reported to the SQLite development team in October, which fixed it the same day. “We found this issue before it appeared in an official release,” the Big Sleep team from Google said, “so SQLite users were not impacted.”
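For readers unfamiliar with the bug class Big Sleep reported, the sketch below shows what a stack buffer underflow can look like in C. It is a deliberately simplified, hypothetical example, not the actual SQLite flaw: a “-1 means not found” sentinel goes unchecked and becomes a negative array index, reading stack memory just before the buffer.

```c
/* Minimal, hypothetical sketch of a stack buffer underflow in C.
 * This is NOT the SQLite bug itself, only the general bug class:
 * an unchecked "-1 means not found" sentinel used as an array index. */
#include <stdio.h>
#include <string.h>

static int find_column(const char *name) {
    const char *columns[3] = {"id", "name", "email"};
    for (int i = 0; i < 3; i++) {
        if (strcmp(columns[i], name) == 0) return i;
    }
    return -1;  /* sentinel: column not found */
}

static int column_width(const char *name) {
    int widths[3] = {8, 32, 64};
    int i = find_column(name);
    /* BUG: the -1 sentinel is never checked, so widths[-1] reads
     * stack memory just *before* the array -- an underflow. */
    return widths[i];
}

int main(void) {
    printf("%d\n", column_width("missing"));  /* undefined behavior */
    return 0;
}
```

The compiler accepts this without complaint; the negative index only misbehaves at runtime, if at all, which is part of what makes the bug class a natural target for automated discovery.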
AI Could Be The Future Of Fuzzing, The Google Big Sleep Team Says
Although you may not have heard the term fuzzing before, it’s been part of the security research staple diet for decades now. Fuzzing is the practice of feeding random or malformed data to a program in order to trigger errors in its code. Although fuzzing is widely accepted as an essential tool for those who look for vulnerabilities, hackers will readily admit it cannot find everything. “We need an approach that can help defenders to find the bugs that are difficult (or impossible) to find by fuzzing,” the Big Sleep team said, adding that it hoped AI can fill the gap and find “vulnerabilities in software before it’s even released,” leaving little scope for attackers to strike.
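To make the fuzzing idea concrete, here is a minimal libFuzzer-style harness sketch in C. The parse_record() function is a hypothetical stand-in for the real code under test; an actual harness would link against the target library instead.

```c
/* Minimal libFuzzer-style harness sketch; parse_record() is a
 * hypothetical stand-in for the real code under test.
 * Build (clang): clang -g -fsanitize=fuzzer,address harness.c */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical parser with a shallow bug a fuzzer finds quickly:
 * when len == 2, the read of data[2] is out of bounds. */
static int parse_record(const uint8_t *data, size_t len) {
    if (len >= 2 && data[0] == 'S' && data[1] == 'Q') {
        return data[2];  /* BUG: not guarded by len >= 3 */
    }
    return 0;
}

/* libFuzzer entry point: called repeatedly with mutated inputs;
 * AddressSanitizer flags any memory error an input triggers. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}
```

Random mutation finds the shallow out-of-bounds read here almost immediately, but a bug hidden behind conditions that random inputs rarely satisfy can survive years of fuzzing; that is exactly the gap the Big Sleep team argues AI can fill.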
“Finding a vulnerability in a widely-used and well-fuzzed open-source project is an exciting result,” the Google Big Sleep team said, but admitted the results are currently “highly experimental.” At present, the Big Sleep agent is seen as being only as effective as a target-specific fuzzer. However, it’s the near future that is looking bright. “This effort will lead to a significant advantage to defenders,” Google’s Big Sleep team said, “with the potential not only to find crashing test cases, but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future.”
The Flip Side Of AI Is Seen In Deepfake Security Threats
While the Big Sleep news from Google is refreshing and important, as is a new RSA report looking at how AI can help with the push to get rid of passwords in 2025, the flip side of the AI security coin should always be considered as well. One such flip side is the use of deepfakes. I’ve already covered how Google support deepfakes have been used in an attack against a Gmail user, a report that went viral for all the right reasons. Now, a Forbes.com reader has got in touch to let me know about research undertaken to gauge how AI technology can be used to influence public opinion. Again, I covered this recently when the FBI issued a warning about a 2024 election voting video that was actually a fake backed by Russian distributors. The latest VPNRanks research is well worth reading in full, but here are a few handpicked statistics that certainly get the grey cells working.
- 50% of respondents have encountered deepfake videos online multiple times.
- 37.1% consider deepfakes an extremely serious threat to reputations, especially via fake videos of public figures or ordinary people.
- Concerns about deepfakes manipulating public opinion are high, with 74.3% extremely worried about potential misuse in political or social contexts.
- 65.7% believe a deepfake released during an election campaign would likely influence voters’ opinions.
- 41.4% feel it’s extremely important for social media platforms to immediately remove non-consensual deepfake content once reported.
- When it comes to predictions for 2025, global deepfake-related identity fraud attempts are forecast to reach 50,000, and in excess of 80% of global elections could be impacted by deepfake interference, threatening the integrity of democracy.