Lost in the Prompt Order: Revealing the Limitations of Causal Attention in Language Models
Abstract
Research reveals that causal attention in language models creates an information bottleneck when the context follows the question and answer options, leading to performance drops of over 14 percentage points compared to the reversed prompt ordering.
Large language models exhibit surprising sensitivity to the structure of the prompt, but the mechanisms underlying this sensitivity remain poorly understood. In this work, we conduct an in-depth investigation of a striking case: in multiple-choice question answering, placing the context before the question and options (CQO) outperforms the reverse order (QOC) by over 14 percentage points, consistently across a wide range of models and datasets. Through systematic architectural analysis, we identify causal attention as the core mechanism: in QOC prompts, the causal mask prevents option tokens from attending to the context, creating an information bottleneck in which the context becomes invisible to the options.
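The bottleneck the abstract describes follows directly from the lower-triangular causal mask: a token can only attend to tokens that precede it. A minimal sketch (with hypothetical two-token segments; segment names and sizes are illustrative, not from the paper) shows why option tokens see the context under CQO ordering but not under QOC:

```python
import numpy as np

def causal_mask(n):
    # Lower-triangular mask: position i may attend to positions j <= i.
    return np.tril(np.ones((n, n), dtype=bool))

# Toy prompt with three segments of 2 tokens each (hypothetical sizes).
# QOC order: [question][options][context]; CQO order: [context][question][options]
n = 6
mask = causal_mask(n)

qoc = {"question": [0, 1], "options": [2, 3], "context": [4, 5]}
cqo = {"context": [0, 1], "question": [2, 3], "options": [4, 5]}

def can_attend(mask, src, tgt):
    # True iff every src (query) token can attend to every tgt (key) token.
    return all(mask[i, j] for i in src for j in tgt)

# In QOC, option tokens precede the context, so the causal mask
# blocks them from attending to any context token.
print(can_attend(mask, qoc["options"], qoc["context"]))  # False
# In CQO, the context comes first, so option tokens see it fully.
print(can_attend(mask, cqo["options"], cqo["context"]))  # True
```

Under this view the ordering effect is not a quirk of training data but a hard architectural constraint: no attention head, at any layer, can route context information directly into option-token representations when the context appears later in the sequence.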
Community
Prompt order can break LM performance, even with identical content.
Related papers recommended by the Semantic Scholar API:
- Reasoning Beyond Chain-of-Thought: A Latent Computational Mode in Large Language Models (2026)
- Enhancing Instruction-Following Capabilities in Seq2Seq Models: DoLA Adaptations for T5 (2025)
- ContextFocus: Activation Steering for Contextual Faithfulness in Large Language Models (2026)
- Debiasing Large Language Models via Adaptive Causal Prompting with Sketch-of-Thought (2026)
- Failure Modes in Multi-Hop QA: The Weakest Link Law and the Recognition Bottleneck (2026)
- Behavior-Equivalent Token: Single-Token Replacement for Long Prompts in LLMs (2025)
- LatentRefusal: Latent-Signal Refusal for Unanswerable Text-to-SQL Queries (2026)