Emerging roles in secure AI for cybersecurity professionals
Summary
The integration of AI is reshaping the cybersecurity profession, transforming traditional roles and creating new positions such as AI governance specialist and SOC fusion analyst. These changes are driven by the need to secure AI solutions themselves and to counter AI-enabled threats like deepfakes and AI-crafted malware, and the shift is expected to continue as compliance mandates and attacker tooling both mature.
Timeline
- 11.08.2025 17:00 📰 1 article
Emergence of AI-driven roles in cybersecurity
The integration of AI in cybersecurity is transforming traditional roles and creating new opportunities. Positions such as AI governance specialist and SOC fusion analyst are emerging to secure AI solutions and to combat new threats like deepfakes and AI-crafted malware. Compliance mandates and the need for advanced defense strategies are driving these changes.
- Will Secure AI Be the Hottest Career Path in Cybersecurity? · www.darkreading.com · 11.08.2025 17:00
Information Snippets
- The cybersecurity field has evolved significantly over the past 25 years, from basic firewall programming to combating complex threats like botnets and DDoS attacks.
  First reported: 11.08.2025 17:00 📰 1 source, 1 article
  - Will Secure AI Be the Hottest Career Path in Cybersecurity? · www.darkreading.com · 11.08.2025 17:00
- The integration of AI in cybersecurity is creating new positions such as AI governance specialist and SOC fusion analyst.
  First reported: 11.08.2025 17:00 📰 1 source, 1 article
  - Will Secure AI Be the Hottest Career Path in Cybersecurity? · www.darkreading.com · 11.08.2025 17:00
- New threats like deepfakes, synthetic identities, and AI-crafted malware are emerging, necessitating advanced defense strategies.
  First reported: 11.08.2025 17:00 📰 1 source, 1 article
  - Will Secure AI Be the Hottest Career Path in Cybersecurity? · www.darkreading.com · 11.08.2025 17:00
- AI is being used to support threat intelligence, SOC triage, and phishing detection (see the sketch after this list).
  First reported: 11.08.2025 17:00 📰 1 source, 1 article
  - Will Secure AI Be the Hottest Career Path in Cybersecurity? · www.darkreading.com · 11.08.2025 17:00
- Compliance mandates like NIST's AI Risk Management Framework (RMF) and the EU AI Act are driving the need for AI governance roles.
  First reported: 11.08.2025 17:00 📰 1 source, 1 article
  - Will Secure AI Be the Hottest Career Path in Cybersecurity? · www.darkreading.com · 11.08.2025 17:00
- Traditional cybersecurity roles are evolving into hybrid roles that incorporate AI, such as SOC analysts operating alongside AI co-pilots.
  First reported: 11.08.2025 17:00 📰 1 source, 1 article
  - Will Secure AI Be the Hottest Career Path in Cybersecurity? · www.darkreading.com · 11.08.2025 17:00
- Compensation for cybersecurity roles is increasing, with SOC fusion roles commanding over $140,000 in enterprise settings.
  First reported: 11.08.2025 17:00 📰 1 source, 1 article
  - Will Secure AI Be the Hottest Career Path in Cybersecurity? · www.darkreading.com · 11.08.2025 17:00
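The snippet above on SOC triage and phishing detection hints at what that assistance looks like in practice: a model scores suspicious messages so analysts only review the risky ones. The sketch below illustrates the shape of that workflow with plain lexical heuristics standing in for the model, since the article names no specific API; the patterns, thresholds, and field names are illustrative assumptions.

```python
# Minimal sketch of automated phishing triage, the kind of task the article
# says AI now supports in SOC workflows. Plain heuristics stand in for the
# model here; in an AI-assisted SOC a language model would score the same
# fields, and an analyst would review anything above the alert threshold.
import re
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Illustrative lures; real deployments use curated, regularly updated rules.
SUSPICIOUS_PATTERNS = [
    r"verify your (account|password)",      # credential-harvesting lure
    r"urgent|immediately|within 24 hours",  # artificial time pressure
    r"https?://\d{1,3}(\.\d{1,3}){3}",      # raw-IP link instead of a domain
]

def triage_score(msg: Email) -> float:
    """Return a 0..1 phishing likelihood from simple lexical signals."""
    text = f"{msg.subject}\n{msg.body}".lower()
    hits = sum(bool(re.search(p, text)) for p in SUSPICIOUS_PATTERNS)
    # A free-mail sender posing as a support desk is another weak signal.
    if re.search(r"@(gmail|outlook|yahoo)\.", msg.sender) and "support" in msg.sender:
        hits += 1
    return min(1.0, hits / len(SUSPICIOUS_PATTERNS))

if __name__ == "__main__":
    sample = Email(
        sender="it-support@gmail.example",
        subject="Urgent: verify your account",
        body="Click http://192.0.2.10/login within 24 hours.",
    )
    print(f"phishing score: {triage_score(sample):.2f}")  # 1.00 -> escalate
```

In a production SOC, the score would come from a vetted model or detection service, and anything above a tuned threshold would be queued for human review rather than acted on automatically.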
Similar Happenings
AI-Powered Cyberattacks Targeting Critical Sectors Disrupted
Anthropic disrupted an AI-powered operation in July 2025 that used its Claude AI chatbot to conduct large-scale theft and extortion across 17 organizations in healthcare, emergency services, government, and religious sectors. The actor used Claude Code on Kali Linux to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration. The operation, codenamed GTG-2002, employed AI to make tactical and strategic decisions, exfiltrating sensitive data and demanding ransoms ranging from $75,000 to $500,000 in Bitcoin. The actor used AI to craft bespoke versions of the Chisel tunneling utility to evade detection and disguise malicious executables as legitimate Microsoft tools. The operation highlights the increasing use of AI in cyberattacks, making defense and enforcement more challenging. Anthropic developed new detection methods to prevent future abuse of its AI models.
AI systems vulnerable to data theft via hidden prompts in downscaled images
Researchers at Trail of Bits have demonstrated a new attack method that exploits image downscaling in AI systems to steal user data. The attack embeds hidden prompts in full-resolution images that become legible only when the images are downscaled with specific resampling algorithms; the AI model then interprets the revealed text as part of the user's instructions and executes it without the user's knowledge, potentially leaking data or triggering unauthorized actions. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The researchers have released an open-source tool, Anamorpher, to generate images for testing this vulnerability. To mitigate the risk, Trail of Bits recommends restricting the dimensions of image uploads, showing users a preview of the downscaled image, and requiring explicit user confirmation for sensitive tool calls.
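The first two mitigations Trail of Bits recommends are simple to prototype. The hedged sketch below, using Pillow, caps upload dimensions and shows the user the exact downscaled image the model will receive, so text that only appears after resampling becomes visible before anything is sent; the size limits and the bicubic filter choice are illustrative assumptions, not values from the research.

```python
# Sketch of the mitigations Trail of Bits recommends for image-scaling prompt
# injection: cap upload dimensions and preview the exact downscaled image the
# model will ingest. MAX_DIM, MODEL_INPUT, and the filter are assumptions.
from PIL import Image

MAX_DIM = 1024            # assumed per-side upload limit
MODEL_INPUT = (512, 512)  # assumed model input resolution

def prepare_upload(path: str) -> Image.Image:
    img = Image.open(path)
    if max(img.size) > MAX_DIM:
        raise ValueError(f"image {img.size} exceeds the {MAX_DIM}px limit")
    # Downscale with the same filter the serving stack uses; the attack
    # depends on artifacts of a specific resampler (e.g. bicubic).
    preview = img.resize(MODEL_INPUT, resample=Image.Resampling.BICUBIC)
    preview.show()  # let the user inspect what the model will actually see
    return preview
```

Previewing works because the attack depends on the gap between what the user sees and what the model ingests; showing the downscaled image closes that gap.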
UNC5518 deploys CORNFLAKE.V3 backdoor via ClickFix and fake CAPTCHA pages
UNC5518, an access-as-a-service threat actor, deploys the CORNFLAKE.V3 backdoor using the ClickFix social engineering tactic and fake CAPTCHA pages. This backdoor is used by at least two other groups, UNC5774 and UNC4108, to initiate multi-stage infections and drop additional payloads. The attack begins with users being tricked into running a malicious PowerShell script via a fake CAPTCHA page. The script executes a dropper payload that ultimately launches CORNFLAKE.V3, which supports various payload types and collects system information. The backdoor has been observed in both JavaScript and PHP versions and uses Cloudflare tunnels to avoid detection. A new ClickFix variant manipulates AI-generated text summaries to deliver malicious commands, turning AI tools into active participants in social engineering attacks.
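Because ClickFix ultimately relies on the victim pasting a command into the Run dialog or a terminal, defenders commonly hunt for the resulting process pattern rather than the lure itself. The sketch below is a hypothetical detection heuristic over process-creation events, flagging PowerShell spawned from a user shell with hidden-window or encoded-command flags; the event field names are assumptions, not any vendor's telemetry schema.

```python
# Hypothetical detection heuristic for ClickFix-style infections: flag
# PowerShell launched from a user shell with hidden-window or encoded-command
# flags, the pattern left behind when a fake CAPTCHA page gets a command
# pasted and run. Event field names are assumptions, not a vendor's schema.
import re

SUSPICIOUS_FLAGS = re.compile(
    r"(-enc(odedcommand)?\b|-w(indowstyle)?\s+hidden|downloadstring|"
    r"invoke-expression|\biex\b)",
    re.IGNORECASE,
)

def is_clickfix_like(event: dict) -> bool:
    image = event.get("image", "").lower()
    parent = event.get("parent_image", "").lower()
    cmdline = event.get("command_line", "")
    if "powershell" not in image:
        return False
    # Commands pasted via Win+R or a console run under explorer.exe/cmd.exe.
    pasted_by_user = parent.endswith(("explorer.exe", "cmd.exe"))
    return pasted_by_user and bool(SUSPICIOUS_FLAGS.search(cmdline))

if __name__ == "__main__":
    event = {
        "image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
        "parent_image": r"C:\Windows\explorer.exe",
        "command_line": "powershell -w hidden -enc SQBFAFgA...",
    }
    print(is_clickfix_like(event))  # True -> queue for analyst review
```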
PromptFix exploit enables AI browser deception
A new prompt injection technique, PromptFix, tricks AI-driven browsers into executing malicious actions by embedding hidden instructions in web pages. The exploit targets AI browsers like Perplexity's Comet, Microsoft Edge with Copilot, and OpenAI's upcoming 'Aura', which automate tasks such as online shopping and email management. PromptFix can deceive AI models into interacting with phishing sites or fraudulent storefronts, potentially leading to unauthorized purchases or credential theft. The technique exploits the AI's design goal to assist users quickly and without hesitation, creating a new scam landscape called Scamlexity. Researchers from Guardio Labs demonstrated the exploit by tricking Comet into adding items to a cart and auto-filling payment details on fake shopping sites. Similar attacks can manipulate AI browsers into parsing spam emails and entering credentials on phishing pages. PromptFix can also bypass CAPTCHA checks to download malicious payloads without user involvement. The exploit highlights the need for robust defenses in AI systems to anticipate and neutralize such attacks, including phishing detection, URL reputation checks, and domain spoofing protections. Until security matures, users should avoid assigning sensitive tasks to AI browsers and manually input sensitive data when needed. AI browser agents from major AI firms failed to reliably detect the signs of a phishing site. AI agents are gullible and servile, making them vulnerable to attacks in an adversarial setting. Companies should move from "trust, but verify" to "doubt, and double verify" until an AI agent has shown it can always complete a workflow properly. AI companies are not expected to pause developing more functionality to improve security. Companies should hold off on putting AI agents into any business process that requires reliability until AI-agent makers offer better visibility, control, and security. Securing AI requires gaining visibility into all AI use by company workers and creating an AI usage policy and a list of approved tools.