Copy Fail (CVE-2026-31431): a 732-byte Linux LPE — straight-line, no race, no per-distro offsets. The same Python script roots Ubuntu, Amazon Linux, RHEL, and SUSE releases going back to 2017. A page-cache write bypasses on-disk file-integrity tools and crosses container boundaries. Found by Xint Code.
On April 21, 2026, a major breakthrough in cybersecurity happened: leading standardization initiatives gathered in Washington DC and agreed to begin coordinating collectively on AI security. A personal dream come true. The result is MOSAIC: Multi-Organization Secure AI Coordination. The goal: turn a fragmented landscape into clear, consistent standards and guidelines to deal with the mounting risks of AI.
This important step was taken at the AI Security Policy Forum, organised and led by the OWASP AI Exchange with SANS Institute as co-host, convening standards makers and policy stakeholders.
The initiatives at the table included: • BIML (Berryville Institute of Machine Learning) • Center for Internet Security (CIS) • Cloud Security Alliance (CSA) • Coalition for Secure AI (CoSAI) • National Institute of Standards and Technology (NIST) • OWASP AI Exchange (AIX) • OWASP GenAI Security Project • SANS Institute
The group agreed that it is now more important than ever to coordinate around the rapidly evolving possibilities and challenges of AI, as AI security risks mount.
One of the next steps is to provide a standardized map of the participating initiatives and a communication platform for exchanging insights on a first list of identified topics (e.g., aligning with other initiatives such as SC42, building on OpenCRE, reaching consensus on definitions). The aim is to improve consistency, clarity, and quality, and to prevent unnecessary duplication. The idea is to move fast while maintaining independence, with lightweight coordination, not to add more committees.
In addition to the organizations mentioned, the discussion also included journalists, representatives from the International Telecommunication Union (ITU), The Aspen Institute, academia, and government — providing valuable perspectives on developments in both policy and industry. This helped prioritize the topics to focus on.
In the picture, from left to right, standing to sitting: Disesdi Shoshana Cox (AIX), Gary McGraw (BIML), Rob van der Veer (AIX), Anonymous, Duncan Sparrell, John Yeoh (CSA), Rock Lambros (GenAI), Norma Krayem, Brian Calkin (CIS), Matt Altomare (Aspen), Omar Santos (CoSAI), Aruneesh Salhotra (AIX), Jonathan Gibson (The Dispatch), Apostol Vassilev (NIST), Rhea Nygard, Ken Huang, Lav Varshney (Stony Brook University), Sounil Yu, and Sharon Goldman (Fortune)
Not in the picture, but involved, in alphabetical order: Rob T. Lee (SANS), Ryan Galluzzo (NIST), Soribel F.
A big thank you to: • Disesdi Shoshana Cox for her idea to bring everybody together in a room to fulfil the connecting mission of the Exchange • The amazing thinktank at the AI Exchange • Spyros Gasteratos for his work on OpenCRE • Violeta Klein, CISSP, CEFA for shaping the story for the Forum • Straiker, Casco (YC X25), AI Security Academy, and SANS for supporting the Forum • Software Improvement Group for donating the original threat model and initiating the AI Exchange
Let's make AI a success!
For 21 years, fast16 corrupted nuclear research calculations without anyone noticing. It predates Stuxnet by five years. The math was always wrong.
Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser.
Multi-lens code audit tool — 280 expert AI agents for code review, security testing, and infrastructure auditing - TheMorpheus407/RepoLens
Is security spending more tokens than your attacker?
The Red Sun vulnerability repository. - Nightmare-Eclipse/RedSun
A linter-fast, local-first security scanning tool written in Rust. - PwnKit-Labs/foxguard
Why the moat is the system, not the model
A new open-source penetration testing framework called METATRON is gaining attention in the security research community for its fully offline, AI-driven approach to vulnerability assessment.
A security researcher found Anthropic's full CLI source code exposed through a source map file. 1,900 files. 512,000+ lines. Everything.
The named Lockheed Martin employees have been given a deadline of 48 hours to "cease cooperation with the Zionist regime and leave the occupied territories immediately".
The Handala hackers associated with Iran have breached the personal email account of FBI Director Kash Patel and published photos and documents.
Welcome to Wikimedia's home for real-time and historical data on system performance.
A prompt injection in a GitHub issue triggered a chain reaction that ended with 4,000 developers getting OpenClaw installed without consent. The attack composes well-understood vulnerabilities into something new: one AI tool bootstrapping another.
SBOM 1.0: A specification for sandwich supply chain transparency.
On January 14, 2026, global telnet traffic observed by GreyNoise sensors fell off a cliff. A 59% sustained reduction, eighteen ASNs going completely silent, five countries vanishing from our data entirely. Six days later, CVE-2026-24061 dropped. Coincidence is one explanation.