CyberHappenings logo

Track cybersecurity events as they unfold. Sourced timelines, daily updates. Fast, privacy‑respecting. No ads, no tracking.

AI Policy Adoption in Organizations and Security Implications

First reported
Last updated
📰 1 unique sources, 1 articles

Summary

Hide ▲

Many organizations are adopting AI-powered solutions without comprehensive AI policies, exposing them to security risks. Only 28% of organizations have a formal AI policy, despite 81% of employees using AI tools. Security experts recommend creating principle-based, enforceable AI policies that address security threats like prompt injection attacks, hallucination, and shadow AI tools. Organizations must balance innovation with security by involving business leaders, defining acceptable use, and embedding controls in existing workflows. Policies should be flexible, adaptable to evolving regulations, and integrated into broader risk management strategies.

Timeline

  1. 19.08.2025 14:45 📰 1 articles · ⏱ 28d ago

    AI Security Risks Highlighted in ISACA Survey

    A recent ISACA survey reveals that only 28% of organizations have a formal AI policy, despite 81% of employees using AI tools. Security experts stress the importance of principle-based, enforceable AI policies to address risks like prompt injection attacks and shadow AI tools. Organizations must balance innovation with security by involving business leaders and embedding controls in existing workflows.

    Show sources

Information Snippets

  • Only 28% of organizations have a formal, comprehensive AI policy.

    First reported: 19.08.2025 14:45
    📰 1 source, 1 article
    Show sources
  • 81% of respondents believe employees within their organization use AI, whether permitted or not.

    First reported: 19.08.2025 14:45
    📰 1 source, 1 article
    Show sources
  • Security risks include prompt injection attacks, hallucination, third-party model vulnerabilities, and shadow AI tools.

    First reported: 19.08.2025 14:45
    📰 1 source, 1 article
    Show sources
  • AI policies should be principle-based, enforceable, and adaptable to evolving regulations.

    First reported: 19.08.2025 14:45
    📰 1 source, 1 article
    Show sources
  • Policies should be integrated into broader enterprise risk management and involve business leaders.

    First reported: 19.08.2025 14:45
    📰 1 source, 1 article
    Show sources
  • Enforcement should be embedded in existing workflows, with ongoing training and real-time monitoring.

    First reported: 19.08.2025 14:45
    📰 1 source, 1 article
    Show sources
  • Banning AI tools outright can backfire; monitoring and providing safe alternatives are more effective.

    First reported: 19.08.2025 14:45
    📰 1 source, 1 article
    Show sources

Similar Happenings

PromptFix Exploit Targets AI Browsers for Malicious Prompts

Researchers from Guardio Labs have demonstrated a new prompt injection technique called PromptFix. This exploit tricks generative AI (GenAI) models into executing malicious instructions embedded within fake CAPTCHA checks on web pages. The attack targets AI-driven browsers like Perplexity's Comet, which automate tasks such as shopping and email management. The exploit misleads AI models into interacting with phishing pages or fraudulent sites without user intervention, leading to potential data breaches and financial losses. The technique, dubbed Scamlexity, represents a new era of scams where AI convenience collides with invisible scam surfaces, making humans collateral damage. The exploit can trick AI models into purchasing items on fake websites, entering credentials on phishing pages, or downloading malicious payloads. The findings underscore the need for robust defenses in AI systems to anticipate, detect, and neutralize such attacks. Microsoft Edge is embedding agentic browsing features through a Copilot integration, and OpenAI is developing an agentic AI browser platform codenamed 'Aura'. Comet is quickly penetrating the mainstream consumer market. Agentic AI browsers were released with inadequate security safeguards against known and novel attacks. Guardio advises against assigning sensitive tasks to agentic AI browsers until their security matures. AI browser agents from major AI firms failed to reliably detect the signs of a phishing site. Comet often added items to a shopping cart, filled out credit-card details, and clicked the buy button on a fake Walmart site. AI browsers with access to email will read and act on prompts embedded in the messages. AI companies need stronger sanitation and guardrails against these attacks. Nearly all companies (96%) claim to want to expand their use of AI agents in the next year, but most are not prepared for the new risks posed by AI agents in a business environment. A fundamental issue is how to discern actions taken through a browser by a user versus those taken by an agent. AI agents need to be experts at not just getting things done, but at sussing out and blocking potential security threats to workers and company data. Companies should move from "trust, but verify" to "doubt, and double verify"—essentially hobbling automation until an AI agent has shown it can always complete a workflow properly. Defective AI operations continue to be a major problem, and security represents another layer on top of those issues. Companies should hold off on putting AI agents into any business process that requires reliability until AI-agent makers offer better visibility, control, and security. Companies that intend to push their use of AI into agent-based workflows should focus on a comprehensive strategy, including inventorying all AI services used by employees and creating an AI usage policy. Employees need to understand the basics of AI safety and what it means to give these bots information or privileges to do things on their behalf.