A particularly concerning demonstration involved Bargury's development of LOLCopilot, a tool designed for red-teaming that transformed the AI into a tool for automated spear-phishing. This tool allowed a hacker to exploit a victim's professional email by leveraging Copilot to identify individuals frequently contacted, craft emails that closely resembled the victim's style and use of emojis, and dispatch personalized messages laden with harmful links or malware attachments.
Bargury also showcased a technique where a hacker, after seizing control of an email account, could access confidential information such as salary details without activating Microsoft's security measures. Bargury's instructions to the system were to bypass any mention of the files from which the data was derived, suggesting that "a bit of bullying" could be beneficial in this scenario.
In another demonstration, Bargury illustrated how an attacker could infiltrate the AI's database with a harmful email, leading it to provide manipulated banking information that appeared to be from the victim's bank. Bargury pointed out that allowing AI access to corporate data inherently introduces a risk.
Furthermore, Bargury demonstrated how an external attacker could gather limited details about an upcoming company's earnings report.
In a concluding example, he repurposed Copilot into a "malicious insider" by using it to direct users to phishing websites.
The study underscores the dangers of integrating AI systems with corporate data, particularly when it involves the inclusion of untrusted external data. It reveals how this integration can lead to AI outputs that appear to be accurate but, in reality, may be deceptive.
Source: https://www.blackhat.com/us-24/briefi...
#AI #microsoftcopilot #phishing #aitech #lolcopilot #github #BHUSA24
#blackhat