Yesterday, 11:01
#735
White Rabbit
Registration date: May 2006
Location: -
Posts: 4,953
Quote:
Roughly 200,000 Linux-based Framework laptops shipped with a signed UEFI shell whose memory-modify (mm) command can be abused to bypass Secure Boot protections -- allowing attackers to load persistent bootkits like BlackLotus or HybridPetya. Framework has begun patching affected models, though some fixes and DBX updates are still pending. BleepingComputer reports: According to firmware security company Eclypsium, the problem stems from including a 'memory modify' (mm) command in legitimately signed UEFI shells that Framework shipped with its systems. The command provides direct read/write access to system memory and is intended for low-level diagnostics and firmware debugging. However, it can also be leveraged to break the Secure Boot trust chain by targeting gSecurity2, the global pointer to the security handler that verifies the signatures of UEFI modules.
The mm command can be abused to overwrite gSecurity2 with NULL, effectively disabling signature verification. "This command writes zeros to the memory location containing the security handler pointer, effectively disabling signature verification for all subsequent module loads." The researchers also note that the attack can be automated via startup scripts to persist across reboots.
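For context on why a single memory write is enough: below is a minimal sketch of the check being neutered, modeled on the open-source EDK2 reference code that most UEFI firmware derives from. The function name and simplifications are mine, and Framework's exact firmware may differ.

/*
 * Minimal sketch of the trust-chain link the attack targets, modeled on
 * the open-source EDK2 reference implementation (DxeCore's image loader),
 * not Framework's exact firmware. gSecurity2 points to the Security2
 * Architectural Protocol, which performs the Secure Boot signature check
 * on every image the firmware loads.
 */
#include <Uefi.h>
#include <Protocol/Security2.h>

EFI_SECURITY2_ARCH_PROTOCOL  *gSecurity2 = NULL;   // located by DxeCore at boot

EFI_STATUS
LoadImageSketch (        // hypothetical name; stands in for the core image loader
  IN EFI_DEVICE_PATH_PROTOCOL  *FilePath,
  IN VOID                      *FileBuffer,
  IN UINTN                     FileSize,
  IN BOOLEAN                   BootPolicy
  )
{
  EFI_STATUS  SecurityStatus = EFI_SUCCESS;

  //
  // The signature check is guarded by a NULL test. Using the signed
  // shell's mm command to write zeros over gSecurity2 makes every
  // subsequent image load skip authentication silently -- and a
  // startup script can repeat the write on each boot for persistence.
  //
  if (gSecurity2 != NULL) {
    SecurityStatus = gSecurity2->FileAuthentication (
                       gSecurity2,
                       FilePath,
                       FileBuffer,
                       FileSize,
                       BootPolicy
                       );
  }

  if (EFI_ERROR (SecurityStatus)) {
    return SecurityStatus;    // unsigned or tampered image is rejected
  }

  // ... map, relocate, and start the image as usual ...
  return EFI_SUCCESS;
}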
---
Quote:
Bruce Schneier and Barath Raghavan say agentic AI is already broken at the core. In their IEEE Security & Privacy essay, they argue that AI agents run on untrusted data, use unverified tools, and make decisions in hostile environments. Every part of the OODA loop (observe, orient, decide, act) is open to attack. Prompt injection, data poisoning, and tool misuse corrupt the system from the inside. The model's strength -- treating all input as equal -- is also what makes it exploitable. They call this the AI security trilemma: fast, smart, or secure. Pick two. Integrity isn't a feature you bolt on later. It has to be built in from the start. "Computer security has evolved over the decades," the authors wrote. "We addressed availability despite failures through replication and decentralization. We addressed confidentiality despite breaches using authenticated encryption. Now we need to address integrity despite corruption."
"Trustworthy AI agents require integrity because we can't build reliable systems on unreliable foundations. The question isn't whether we can add integrity to AI but whether the architecture permits integrity at all."