Citiraj:
Chatbots and other AI services are increasingly making life easier for cybercriminals. A recently disclosed attack demonstrates how ChatGPT can be exploited to steal API keys and other sensitive data stored on popular cloud platforms.
A newly discovered prompt injection attack threatens to turn ChatGPT into a cybercriminal's best ally in the data theft business. Dubbed AgentFlayer, the exploit uses a single document to conceal "secret" prompt instructions targeting OpenAI's chatbot. A malicious actor could simply share the seemingly harmless document with their victim via Google Drive – no clicks required.
|
> Techspot
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
|