CyberHappenings logo
☰

PROMISQROUTE Vulnerability in ChatGPT's Routing Mechanism

First reported
Last updated
📰 1 unique sources, 1 articles

Summary

Hide ▲

A new attack technique, PROMISQROUTE, allows users to manipulate ChatGPT into routing malicious prompts to older, less secure large language models (LLMs) instead of the flagship GPT-5. This technique exploits the app's routing layer, which directs prompts to different models based on the complexity and nature of the query. The vulnerability enables downgrading ChatGPT to less secure models, potentially facilitating malicious activities. The issue arises from the app's ability to parse user inputs for routing purposes, which can be influenced by specific phrases or keywords in the prompts. The researchers demonstrated the attack by successfully tricking ChatGPT into routing a malicious query to a lighter variant of GPT-5. OpenAI denies that GPT-5 routes inquiries to older models, but the researchers claim they can downgrade to even older models using simple instructions. The primary solution involves implementing guardrails to filter model inputs and outputs, though this approach has its limitations.

Timeline

  1. 21.08.2025 23:35 📰 1 articles

    PROMISQROUTE Vulnerability in ChatGPT's Routing Mechanism Disclosed

    A new attack technique, PROMISQROUTE, allows users to manipulate ChatGPT into routing malicious prompts to older, less secure large language models (LLMs) instead of the flagship GPT-5. The vulnerability exploits the app's routing layer, which directs prompts to different models based on the complexity and nature of the query. Researchers demonstrated the attack by successfully tricking ChatGPT into routing a malicious query to a lighter variant of GPT-5. OpenAI denies that GPT-5 routes inquiries to older models, but the researchers claim they can downgrade to even older models using simple instructions. The primary solution involves implementing guardrails to filter model inputs and outputs, though this approach has its limitations.

    Show sources

Information Snippets

  • PROMISQROUTE allows users to manipulate ChatGPT's routing mechanism to direct prompts to older, less secure models.

    First reported: 21.08.2025 23:35
    📰 1 source, 1 article
    Show sources
  • The attack technique involves adding specific phrases or keywords to prompts to influence the routing layer.

    First reported: 21.08.2025 23:35
    📰 1 source, 1 article
    Show sources
  • Researchers demonstrated the vulnerability by tricking ChatGPT into routing a malicious query to a lighter variant of GPT-5.

    First reported: 21.08.2025 23:35
    📰 1 source, 1 article
    Show sources
  • OpenAI denies that GPT-5 routes inquiries to older models, but researchers claim they can downgrade to even older models.

    First reported: 21.08.2025 23:35
    📰 1 source, 1 article
    Show sources
  • The primary solution to PROMISQROUTE involves implementing guardrails to filter model inputs and outputs.

    First reported: 21.08.2025 23:35
    📰 1 source, 1 article
    Show sources
  • ChatGPT's routing layer directs prompts to different models based on the complexity and nature of the query.

    First reported: 21.08.2025 23:35
    📰 1 source, 1 article
    Show sources
  • The vulnerability arises from the app's ability to parse user inputs for routing purposes.

    First reported: 21.08.2025 23:35
    📰 1 source, 1 article
    Show sources
  • The attack can potentially facilitate malicious activities by exploiting less secure models.

    First reported: 21.08.2025 23:35
    📰 1 source, 1 article
    Show sources