AI Security Policies: Gaps and Best Practices in AI Adoption
Summary
Hide â˛
Show âŧ
Organizations are rapidly adopting AI-powered solutions, often without comprehensive security policies. This leaves them vulnerable to various security threats. Only 28% of organizations have a formal AI policy, despite 81% acknowledging AI use within their teams. Security experts recommend establishing principle-based AI policies that include clear controls and adapt to evolving threats and regulations. Security risks include prompt injection attacks, hallucination, third-party model vulnerabilities, and shadow AI tools. Effective AI policies should guide innovation, set safety guardrails, and define acceptable use boundaries. Policies must be flexible and regularly updated to adapt to new regulations and threats.
Timeline
-
19.08.2025 14:45 đ° 1 articles
AI Security Policies: Gaps and Best Practices in AI Adoption
As of 2025, only 28% of organizations have a formal AI policy, despite 81% acknowledging AI use. Security experts recommend establishing principle-based AI policies that include clear controls and adapt to evolving threats and regulations. Key risks include prompt injection attacks, hallucination, third-party model vulnerabilities, and shadow AI tools. Effective AI policies should guide innovation, set safety guardrails, and define acceptable use boundaries.
Show sources
- Secure AI Use Without the Blind Spots â www.darkreading.com â 19.08.2025 14:45
Information Snippets
-
Only 28% of organizations have a formal, comprehensive AI policy.
First reported: 19.08.2025 14:45đ° 1 source, 1 articleShow sources
- Secure AI Use Without the Blind Spots â www.darkreading.com â 19.08.2025 14:45
-
81% of organizations believe employees use AI, whether permitted or not.
First reported: 19.08.2025 14:45đ° 1 source, 1 articleShow sources
- Secure AI Use Without the Blind Spots â www.darkreading.com â 19.08.2025 14:45
-
AI policies should include principle-based controls and clear, enforceable guidelines.
First reported: 19.08.2025 14:45đ° 1 source, 1 articleShow sources
- Secure AI Use Without the Blind Spots â www.darkreading.com â 19.08.2025 14:45
-
Security risks include prompt injection attacks, hallucination, third-party model vulnerabilities, and shadow AI tools.
First reported: 19.08.2025 14:45đ° 1 source, 1 articleShow sources
- Secure AI Use Without the Blind Spots â www.darkreading.com â 19.08.2025 14:45
-
Effective AI policies should guide innovation, set safety guardrails, and define acceptable use boundaries.
First reported: 19.08.2025 14:45đ° 1 source, 1 articleShow sources
- Secure AI Use Without the Blind Spots â www.darkreading.com â 19.08.2025 14:45
-
AI policies must be flexible and regularly updated to adapt to new regulations and threats.
First reported: 19.08.2025 14:45đ° 1 source, 1 articleShow sources
- Secure AI Use Without the Blind Spots â www.darkreading.com â 19.08.2025 14:45
-
Monitoring and securing public AI tool use is more effective than outright bans.
First reported: 19.08.2025 14:45đ° 1 source, 1 articleShow sources
- Secure AI Use Without the Blind Spots â www.darkreading.com â 19.08.2025 14:45
Similar Happenings
CISA updates Software Bill of Materials (SBOM) minimum elements for public comment
The Cybersecurity and Infrastructure Security Agency (CISA) released a draft of the Minimum Elements for a Software Bill of Materials (SBOM) for public comment. This update reflects advancements in SBOM practices, tooling, and stakeholder adoption since the 2021 guidelines. The draft includes new elements and updates existing ones to align with current capabilities. The public can submit comments until October 3, 2025. The SBOM is a tool that provides transparency into the software supply chain by documenting software components. This transparency helps organizations make risk-informed decisions and improve software security. The updated guidelines aim to empower federal agencies and other organizations to enhance their cybersecurity posture. However, experts have expressed concerns about the practicality and operationalization of SBOMs, calling for more sector-specific guidance and support for automation and vulnerability integration.
PromptFix exploit enables AI browser deception
A new prompt injection technique, PromptFix, tricks AI-driven browsers into executing malicious actions by embedding hidden instructions in web pages. The exploit targets AI browsers like Perplexity's Comet, Microsoft Edge with Copilot, and OpenAI's upcoming 'Aura', which automate tasks such as online shopping and email management. PromptFix can deceive AI models into interacting with phishing sites or fraudulent storefronts, potentially leading to unauthorized purchases or credential theft. The technique exploits the AI's design goal to assist users quickly and without hesitation, creating a new scam landscape called Scamlexity. Researchers from Guardio Labs demonstrated the exploit by tricking Comet into adding items to a cart and auto-filling payment details on fake shopping sites. Similar attacks can manipulate AI browsers into parsing spam emails and entering credentials on phishing pages. PromptFix can also bypass CAPTCHA checks to download malicious payloads without user involvement. The exploit highlights the need for robust defenses in AI systems to anticipate and neutralize such attacks, including phishing detection, URL reputation checks, and domain spoofing protections. Until security matures, users should avoid assigning sensitive tasks to AI browsers and manually input sensitive data when needed. AI browser agents from major AI firms failed to reliably detect the signs of a phishing site. AI agents are gullible and servile, making them vulnerable to attacks in an adversarial setting. Companies should move from "trust, but verify" to "doubt, and double verify" until an AI agent has shown it can always complete a workflow properly. AI companies are not expected to pause developing more functionality to improve security. Companies should hold off on putting AI agents into any business process that requires reliability until AI-agent makers offer better visibility, control, and security. Securing AI requires gaining visibility into all AI use by company workers and creating an AI usage policy and a list of approved tools.