Shadow AI Usage and Data Leak Risks in the Workplace
Summary
The growing use of personal Large Language Model (LLM) accounts by employees for work purposes is driving significant cybersecurity risks, including data leaks and policy violations. Nearly half (47%) of employees using generative AI tools in the workplace do so through personal accounts, leaving organizations with little visibility into or control over how data is used. The result has been a surge in data policy violations, with sensitive corporate information being exposed. Netskope's Cloud and Threat Report for 2026 finds that the number of prompts sent to generative AI applications has grown sixfold, with the top 1% of organizations sending more than 1.4 million prompts per month. The report also notes that data policy violations have doubled in the past year, to an average of 223 per month. Organizations are urged to implement stronger policies and educate employees to mitigate these risks.
Timeline
- 07.01.2026 15:10 · 1 article
Increase in Data Policy Violations Due to Shadow AI Usage
Netskope's Cloud and Threat Report for 2026 highlights a significant increase in data policy violations tied to the use of generative AI tools in the workplace. The report notes that the number of prompts sent to these tools has increased sixfold, from an average of 3,000 to 18,000 per month, with the top 1% of organizations sending more than 1.4 million prompts per month. Data policy violations have doubled in the past year, to an average of 223 per month. The top 25% of organizations using generative AI see an average of 2,100 data policy incidents per month, involving sensitive data such as source code, confidential information, intellectual property, and login credentials.
- Personal LLM Accounts Drive Shadow AI Data Leak Risks — www.infosecurity-magazine.com — 07.01.2026 15:10
Information Snippets
- 47% of employees using generative AI tools in the workplace are using personal accounts.
First reported: 07.01.2026 15:10 · 1 source, 1 article
- Personal LLM Accounts Drive Shadow AI Data Leak Risks — www.infosecurity-magazine.com — 07.01.2026 15:10
- The number of prompts sent to generative AI applications has increased sixfold, from 3,000 to 18,000 per month on average.
First reported: 07.01.2026 15:10 · 1 source, 1 article
- Personal LLM Accounts Drive Shadow AI Data Leak Risks — www.infosecurity-magazine.com — 07.01.2026 15:10
- The top 1% of organizations are sending more than 1.4 million prompts per month to generative AI applications.
First reported: 07.01.2026 15:10 · 1 source, 1 article
- Personal LLM Accounts Drive Shadow AI Data Leak Risks — www.infosecurity-magazine.com — 07.01.2026 15:10
- Data policy violations related to generative AI have doubled in the past year, with an average of 223 violations per month.
First reported: 07.01.2026 15:10 · 1 source, 1 article
- Personal LLM Accounts Drive Shadow AI Data Leak Risks — www.infosecurity-magazine.com — 07.01.2026 15:10
- The top 25% of organizations using generative AI see an average of 2,100 data policy incidents per month.
First reported: 07.01.2026 15:10 · 1 source, 1 article
- Personal LLM Accounts Drive Shadow AI Data Leak Risks — www.infosecurity-magazine.com — 07.01.2026 15:10
- Sensitive data, including source code, confidential information, intellectual property, and login credentials, is being exposed through generative AI tools.
First reported: 07.01.2026 15:10 · 1 source, 1 article
- Personal LLM Accounts Drive Shadow AI Data Leak Risks — www.infosecurity-magazine.com — 07.01.2026 15:10
- The percentage of employees using personal AI accounts in the workplace has dropped from 78% to 47% in the past year.
First reported: 07.01.2026 15:10 · 1 source, 1 article
- Personal LLM Accounts Drive Shadow AI Data Leak Risks — www.infosecurity-magazine.com — 07.01.2026 15:10