HTTP/2 Fingerprinting Evasion
Beyond TLS, the HTTP/2 layer ALSO fingerprints: SETTINGS frames, window sizes, pseudo-header order, and priorities are all distinguishable. Here's the modern arms race.
What you’ll learn
- Explain HTTP/2 fingerprinting and what it captures.
- Recognise when you've evaded TLS but not HTTP/2 fingerprinting.
- Use libraries that emulate HTTP/2 settings (curl-cffi includes this).
- Survey the limits of HTTP-layer evasion.
You evaded TLS fingerprinting with curl-cffi. The request now reaches the server. It still 403s. What now?
HTTP/2 fingerprinting. The next layer.
What HTTP/2 fingerprints
HTTP/2 is a binary protocol with negotiable settings. Each client sends a SETTINGS frame on connection establishment, specifying:
- SETTINGS_HEADER_TABLE_SIZE
- SETTINGS_ENABLE_PUSH
- SETTINGS_MAX_CONCURRENT_STREAMS
- SETTINGS_INITIAL_WINDOW_SIZE
- SETTINGS_MAX_FRAME_SIZE
- SETTINGS_MAX_HEADER_LIST_SIZE
Different clients use different defaults: Chrome sends one set of values, Python's hpack/h2 another, plain curl yet another. The combination is a fingerprint.
Plus:
- WINDOW_UPDATE values. Chrome's flow control quirks.
- HEADERS frame priority. Chrome sets specific stream weights.
- HTTP/2 pseudo-header order: `:method`, `:authority`, `:scheme`, `:path`. The order varies by client.
Akamai's HTTP/2 fingerprint became the de facto standard name for this analysis; many vendors run similar checks.
The fingerprint format
The Akamai HTTP/2 fingerprint format is:
<SETTINGS>|<WINDOW_UPDATE>|<PRIORITY>|<HEADER_ORDER>
Example:
1:65536;3:1000;4:6291456;6:262144|15663105|0|m,a,s,p
- 1:65536 → HEADER_TABLE_SIZE = 65536
- 3:1000 → MAX_CONCURRENT_STREAMS = 1000
- 4:6291456 → INITIAL_WINDOW_SIZE = 6 MB
- 6:262144 → MAX_HEADER_LIST_SIZE = 256 KB
- 15663105 → WINDOW_UPDATE delta
- 0 → no PRIORITY frames
- m,a,s,p → pseudo-header order: :method, :authority, :scheme, :path
Chrome and Python httpx have different values for almost every field.
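To make the format concrete, here is a minimal sketch of a parser for the Akamai-style fingerprint string. The function name and return shape are illustrative, not from any library:

```python
# Parse an Akamai-style HTTP/2 fingerprint string into its four components.
# Illustrative helper, not a library API.
def parse_akamai_h2(fp: str) -> dict:
    settings_part, window_update, priority, header_order = fp.split("|")
    settings = {}
    for pair in settings_part.split(";"):
        key, value = pair.split(":")
        settings[int(key)] = int(value)   # keyed by SETTINGS identifier
    return {
        "settings": settings,                  # SETTINGS frame values
        "window_update": int(window_update),   # connection WINDOW_UPDATE delta
        "priority": priority,                  # PRIORITY summary ("0" = none)
        "header_order": header_order.split(","),  # pseudo-header order
    }

chrome_like = parse_akamai_h2("1:65536;3:1000;4:6291456;6:262144|15663105|0|m,a,s,p")
print(chrome_like["settings"][4])     # INITIAL_WINDOW_SIZE
print(chrome_like["header_order"])
```

Run it against the fingerprint your own client produces and the differences from Chrome's values jump out immediately.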
How to evade
curl-cffi impersonates not just TLS but also HTTP/2 settings:
```python
from curl_cffi import requests

r = requests.get(
    "https://target.example.com/api/data",
    impersonate="chrome120",
    http_version="2.0",
)
```
Behind the scenes, libcurl-impersonate's HTTP/2 settings frame matches Chrome 120's exactly. JA4-H (HTTP-level JA4) and Akamai HTTP/2 fingerprints both pass.
Diagnosing HTTP/2 vs TLS fingerprinting
If TLS impersonation alone isn't enough, you're likely facing HTTP/2 too:
- TLS fixed → request reaches server → still 403.
- Switch to HTTP/1.1: `requests.get(url..., http_version="1.1")`. If it works, HTTP/2 fingerprinting was the cause.
- If it still fails, there are other layers (JS challenges, header order, body fingerprinting).
curl-cffi's default impersonation matches HTTP/2 settings to the chosen browser, so for most cases the right impersonation profile fixes both.
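The diagnostic flow above can be sketched as a small function. `fetch` stands in for any callable that takes an HTTP version string and returns a status code (e.g. a thin wrapper around `curl_cffi.requests.get`); the names here are illustrative:

```python
# Decide which layer is blocking, given a fetcher we can run at
# different HTTP versions. Illustrative sketch, not a library API.
def diagnose_h2_block(fetch) -> str:
    if fetch("2.0") == 200:
        return "no block"              # TLS impersonation alone was enough
    if fetch("1.1") == 200:
        return "http2 fingerprinting"  # downgrade worked: HTTP/2 was the layer
    return "other layers"              # JS challenges, header order, body, ...

# Simulated target that only rejects the HTTP/2 fingerprint:
print(diagnose_h2_block(lambda version: 200 if version == "1.1" else 403))
```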
Header order on HTTP/2
HTTP/2 pseudo-headers (:method, :authority, :scheme, :path) AND regular headers each have an order. Real Chrome sends:
:method, :authority, :scheme, :path
sec-ch-ua, sec-ch-ua-mobile, sec-ch-ua-platform
upgrade-insecure-requests
user-agent
accept
sec-fetch-site, sec-fetch-mode, sec-fetch-user, sec-fetch-dest
accept-encoding
accept-language
Python's requests doesn't even speak HTTP/2 by default; with curl-cffi's impersonation, both the pseudo-header and regular header order match Chrome.
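A quick way to sanity-check your own requests is to verify they preserve Chrome's relative header ordering. This is an illustrative check, not a library API; the baseline list mirrors the Chrome order shown above:

```python
# Chrome's regular-header order, as listed above.
CHROME_ORDER = [
    "sec-ch-ua", "sec-ch-ua-mobile", "sec-ch-ua-platform",
    "upgrade-insecure-requests", "user-agent", "accept",
    "sec-fetch-site", "sec-fetch-mode", "sec-fetch-user", "sec-fetch-dest",
    "accept-encoding", "accept-language",
]

def matches_chrome_order(sent_headers: list) -> bool:
    # Headers not in the baseline are ignored; the ones that are must
    # appear in non-decreasing baseline rank.
    ranks = [CHROME_ORDER.index(h.lower()) for h in sent_headers
             if h.lower() in CHROME_ORDER]
    return ranks == sorted(ranks)

print(matches_chrome_order(["user-agent", "accept", "accept-encoding"]))  # True
print(matches_chrome_order(["accept", "user-agent"]))                     # False
```

Feed it the header list an echo endpoint (e.g. one that reflects received headers) reports for your client, and a `False` tells you ordering is leaking before you ever look at SETTINGS frames.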
tls-client also handles HTTP/2
```python
import tls_client

s = tls_client.Session(
    client_identifier="chrome_120",
    h2_settings={
        "HEADER_TABLE_SIZE": 65536,
        "MAX_CONCURRENT_STREAMS": 1000,
        "INITIAL_WINDOW_SIZE": 6291456,
        "MAX_HEADER_LIST_SIZE": 262144,
    },
    pseudo_header_order=[":method", ":authority", ":scheme", ":path"],
    header_order=["sec-ch-ua", "user-agent", "accept"],  # ... full order elided
)
```
r = s.get("https://target.example.com/api/data")
More fine-grained than curl-cffi, but easier to misconfigure.
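Because manual settings drift easily, a quick sanity check against the Chrome-like values from the example above can catch misconfiguration before you burn requests. The baseline dict below simply restates those example values; the helper is illustrative:

```python
# Chrome-like HTTP/2 SETTINGS baseline, as used in the tls-client example.
CHROME120_H2 = {
    "HEADER_TABLE_SIZE": 65536,
    "MAX_CONCURRENT_STREAMS": 1000,
    "INITIAL_WINDOW_SIZE": 6291456,
    "MAX_HEADER_LIST_SIZE": 262144,
}

def h2_settings_drift(settings: dict) -> dict:
    # Return {key: (actual, expected)} for every value that differs
    # from the baseline (missing keys count as drift too).
    return {k: (settings.get(k), v) for k, v in CHROME120_H2.items()
            if settings.get(k) != v}

print(h2_settings_drift({**CHROME120_H2, "INITIAL_WINDOW_SIZE": 65535}))
```

Run it over whatever `h2_settings` dict you pass to the session; an empty result means your SETTINGS at least match the baseline you intended.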
What still fails
Even with TLS + HTTP/2 impersonation matching Chrome:
- JS challenges: the page needs JS execution to set a cookie or solve a challenge. No HTTP-layer impersonation helps.
- Browser-fingerprinting JS: code that probes `navigator`, `screen`, `WebGLRenderingContext`, etc. Needs a real browser.
- Behavioral signals: mouse movement, scroll, focus events.
- IP reputation: datacenter IPs are flagged regardless of fingerprint.
At that point, you need:
- Playwright with patches (playwright-stealth, undetected-chromedriver).
- Residential proxies for IP reputation.
- A headless browser with fingerprint randomization (nodriver, botasaurus).
The cost curve
Each evasion layer adds cost and complexity:
| Evasion | Library | Cost | Complexity |
|---|---|---|---|
| Headers | requests | $0 | Trivial |
| TLS fingerprint | curl-cffi | $0 | Low |
| HTTP/2 fingerprint | curl-cffi | $0 | Low |
| Headless browser | Playwright | $0.50–$5/1k | Medium |
| Stealth browser | Patched Playwright | $1–$10/1k | High |
| Residential proxies | Bright Data, etc. | $5–$15/GB | Medium |
| Full anti-bot bypass | Specialized providers | $5–$20/1k | Low (in $$$) |
At some point the cost of evasion exceeds the cost of using a specialized anti-bot bypass service (FlareSolverr-style). Know when to stop building and start buying.
A complete bypass workflow
For a Cloudflare-protected target:
- Try curl-cffi with impersonate="chrome120". If 200, done. ~80% of mid-tier Cloudflare sites work here.
- Add a residential proxy. If 200, done.
- Add Playwright with playwright-stealth. If 200, done. ~95% of Cloudflare targets fall by this point.
- Use a specialized service (ZenRows, ScrapingBee with stealth). 99%+ at this point.
- Accept and move on.
Each layer up costs more. Don't escalate prematurely.
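The escalation ladder reduces to a few lines of code: try each tier in increasing cost order and stop at the first 200. The tier fetchers below are stubs for illustration; real ones would wrap curl-cffi, a proxied session, Playwright, and a paid service:

```python
# Walk the tiers cheapest-first; return the first one that succeeds.
# Illustrative sketch: `tiers` is a list of (name, fetcher) pairs where
# each fetcher takes a URL and returns an HTTP status code.
def escalate(url: str, tiers) -> str:
    for name, fetcher in tiers:
        if fetcher(url) == 200:
            return name
    return "give up"

# Stub tiers simulating a target that needs a stealth browser:
tiers = [
    ("curl-cffi chrome120", lambda url: 403),
    ("+ residential proxy", lambda url: 403),
    ("playwright-stealth",  lambda url: 200),
    ("paid service",        lambda url: 200),
]
print(escalate("https://target.example.com", tiers))  # stops at playwright-stealth
```

The point of encoding it this way is that the cheap tiers always run first, so you only pay browser or service costs for the requests that actually need them.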
Catalog108 lab
The /challenges/antibot/tls-fingerprint endpoint primarily checks TLS, with optional HTTP/2 checks. Pass with curl-cffi and impersonate="chrome120". Some variants of the lab also check HTTP/2 settings; verify explicitly by rerunning with http_version="1.1" to see whether HTTP/2 was the layer at play.
PHP context
Same gap as TLS: PHP's standard cURL has no equivalent to curl-cffi. Workarounds:
- Build libcurl-impersonate from source, replace the linked libcurl.
- Shell out to Python for the impersonating fetches.
- Use a service (ScrapingBee, ZenRows) that handles fingerprinting server-side.
For pure PHP scraping against fingerprinting targets, the service route is often the pragmatic choice.
What "perfect" evasion looks like
A truly indistinguishable scraper would:
- Match TLS handshake exactly.
- Match HTTP/2 settings exactly.
- Match header content AND order exactly.
- Execute JS in a real browser environment.
- Pass behavioral fingerprinting (real mouse movements).
- Use a clean residential IP that's geographically appropriate.
That's expensive. For most targets, 70-80% of the way is enough; the target isn't perfectly defended either.
Hands-on lab
You're at the end of the sub-path. Hit /challenges/antibot/tls-fingerprint with:
- requests: should fail.
- curl-cffi with impersonate="chrome120": should succeed.
- curl-cffi with http_version="1.1": should still succeed (no HTTP/2 layer to check).
Note the JA3 and Akamai HTTP/2 fingerprints in the response when present. You've now closed the loop on the entire anti-bot HTTP-layer stack: TLS, HTTP/2, and the libraries that defeat them. The next defense layer (JS challenges, behavior) is in Sub-Path 5 on Production & Scale.
Practice this lesson on Catalog108, our first-party scraping sandbox.
Open lab target → /challenges/antibot/tls-fingerprint
Quiz: check your understanding
Pass mark is 70%. Pick the best answer; you’ll see the explanation right after.