
Lesson 3.11 · Intermediate · 5 min read

Async API Consumption with `httpx`

Synchronous scraping leaves performance on the table. `httpx.AsyncClient`, a semaphore, and `asyncio.gather` together form the modern Python pattern.

What you’ll learn

  • Switch a synchronous `requests` scraper to `httpx.AsyncClient`.
  • Use `asyncio.gather` and a semaphore to control concurrency.
  • Combine async with backoff for 429-aware behavior.
  • Know when async actually helps and when it doesn't.

`requests` is synchronous. If you need to fetch 1,000 pages and each takes 200 ms, that's 200 seconds of waiting, one request at a time. Async lets you overlap those waits: at a concurrency of 10–20, the same 1,000 fetches finish in roughly 10–20 seconds. The library is httpx; the pattern is straightforward.

Install and import

pip install "httpx[http2]"

import httpx, asyncio

httpx ships sync and async APIs in the same package. The sync API is a near drop-in replacement for requests (the lesson 3.9 client works almost unchanged). The async API is what we're after here.

Hello, async

import httpx, asyncio

async def fetch_products(client, page):
    r = await client.get(
        "https://practice.scrapingcentral.com/api/products",
        params={"page": page, "per_page": 50},
    )
    r.raise_for_status()
    return r.json()["products"]

async def main():
    async with httpx.AsyncClient(timeout=10.0) as client:
        # Fetch 5 pages concurrently
        results = await asyncio.gather(
            *(fetch_products(client, p) for p in range(1, 6))
        )
    all_products = [p for page in results for p in page]
    print(len(all_products))

asyncio.run(main())

asyncio.gather runs the five coroutines concurrently. The five await client.get calls overlap in time; total wall time is roughly the longest single request, not the sum.
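You can observe the overlap without any network at all. The sketch below (names are mine) simulates three 100 ms "requests" with `asyncio.sleep` and times both approaches:

```python
import asyncio, time

async def fake_request(delay: float) -> float:
    # Stand-in for an HTTP call: just waits, like a network round-trip
    await asyncio.sleep(delay)
    return delay

async def sequential():
    for d in (0.1, 0.1, 0.1):
        await fake_request(d)

async def concurrent():
    await asyncio.gather(*(fake_request(d) for d in (0.1, 0.1, 0.1)))

start = time.perf_counter()
asyncio.run(sequential())
seq_time = time.perf_counter() - start   # ~0.3s: the sum of the waits

start = time.perf_counter()
asyncio.run(concurrent())
conc_time = time.perf_counter() - start  # ~0.1s: the longest single wait

print(f"sequential {seq_time:.2f}s, concurrent {conc_time:.2f}s")
```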

Controlling concurrency with a semaphore

Unbounded concurrency is a recipe for being rate-limited. A Semaphore caps how many requests are in flight:

async def fetch(client, sem, page):
    async with sem:
        r = await client.get(
            "https://practice.scrapingcentral.com/api/products",
            params={"page": page, "per_page": 50},
        )
        r.raise_for_status()
        return r.json()["products"]

async def main():
    sem = asyncio.Semaphore(10)  # at most 10 in flight
    async with httpx.AsyncClient(timeout=10.0, http2=True) as client:
        results = await asyncio.gather(
            *(fetch(client, sem, p) for p in range(1, 101))
        )
    return [p for page in results for p in page]

products = asyncio.run(main())
print(len(products))

Ten concurrent requests is a sane default for most public APIs. Authenticated APIs often allow more; anti-bot ones often allow fewer.

Adding retries to async

A reusable async retry wrapper:

import httpx, asyncio, random

class PermanentError(Exception): pass

async def with_retry_async(coro_fn, max_attempts=5, base=1.0, cap=30.0):
    for attempt in range(max_attempts):
        try:
            return await coro_fn()
        except httpx.HTTPStatusError as e:
            status = e.response.status_code
            if status in (400, 404, 422):
                raise PermanentError(f"{status} {e}") from e
            if status == 429 or 500 <= status < 600:
                ra = e.response.headers.get("Retry-After")
                if ra and ra.isdigit():
                    # Honor the server's requested wait exactly
                    delay = float(ra)
                else:
                    # Exponential backoff with full jitter
                    delay = random.uniform(0, min(cap, base * (2 ** attempt)))
                await asyncio.sleep(delay)
                continue
            raise
        except (httpx.ConnectError, httpx.ReadTimeout):
            delay = random.uniform(0, min(cap, base * (2 ** attempt)))
            await asyncio.sleep(delay)
    raise RuntimeError("max retries exceeded")

Usage:

async def fetch(client, page):
    async def do():
        r = await client.get(".../api/products", params={"page": page})
        r.raise_for_status()
        return r.json()
    return await with_retry_async(do)
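The retry wrapper composes naturally with the semaphore pattern from earlier. Here is a minimal offline sketch of the combination; it substitutes a generic `TransientError` for httpx's exceptions and a deterministic flaky fake for the real endpoint, so every name below is illustrative:

```python
import asyncio, random

class TransientError(Exception): pass

async def with_retry(coro_fn, max_attempts=5, base=0.01, cap=0.1):
    # Simplified stand-in for with_retry_async: retry transient
    # failures with jittered exponential backoff
    for attempt in range(max_attempts):
        try:
            return await coro_fn()
        except TransientError:
            await asyncio.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
    raise RuntimeError("max retries exceeded")

calls: dict[int, int] = {}

async def flaky_fetch(page: int) -> int:
    # Deterministically fail the first two calls per page, like an
    # endpoint that returns 503 twice before recovering
    calls[page] = calls.get(page, 0) + 1
    if calls[page] <= 2:
        raise TransientError
    return page

async def fetch(sem, page):
    # Hold the semaphore across retries so the total number of
    # in-flight *attempts* stays bounded, not just first tries
    async with sem:
        return await with_retry(lambda: flaky_fetch(page))

async def main():
    sem = asyncio.Semaphore(3)
    return await asyncio.gather(*(fetch(sem, p) for p in range(1, 11)))

print(asyncio.run(main()))
```

Acquiring the semaphore outside the retry loop is a deliberate choice: backoff sleeps then count against your concurrency budget, which keeps pressure off a struggling server.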

A complete async client class

class AsyncCatalog108:
    BASE_URL = "https://practice.scrapingcentral.com"

    def __init__(self, concurrency: int = 10, timeout: float = 10.0):
        self._client: httpx.AsyncClient | None = None
        self._sem = asyncio.Semaphore(concurrency)
        self._timeout = timeout
        self.token: str | None = None

    async def __aenter__(self):
        self._client = httpx.AsyncClient(
            base_url=self.BASE_URL,
            timeout=self._timeout,
            http2=True,
        )
        return self

    async def __aexit__(self, *exc):
        if self._client:
            await self._client.aclose()

    async def _request(self, method: str, path: str, **kwargs):
        async with self._sem:
            if self.token:
                kwargs.setdefault("headers", {})
                kwargs["headers"].setdefault("Authorization", f"Bearer {self.token}")
            r = await self._client.request(method, path, **kwargs)
            r.raise_for_status()
            return r.json()

    async def login(self, email, password):
        data = await self._request("POST", "/api/auth/login",
                                   json={"email": email, "password": password})
        self.token = data["access_token"]

    async def products(self, page=1, per_page=12):
        return await self._request("GET", "/api/products",
                                   params={"page": page, "per_page": per_page})

    async def all_products(self):
        first = await self.products(page=1, per_page=50)
        total = first["pagination"]["total"]
        pages = (total + 49) // 50  # ceiling division: e.g. 108 products -> 3 pages
        results = await asyncio.gather(
            *(self.products(page=p, per_page=50) for p in range(2, pages + 1))
        )
        return first["products"] + [p for r in results for p in r["products"]]

Usage:

async def main():
    async with AsyncCatalog108(concurrency=10) as client:
        await client.login("student@practice.scrapingcentral.com", "practice123")
        products = await client.all_products()
        print(len(products))

asyncio.run(main())

When async doesn't help

Async pays off when you're I/O-bound and parallelism is allowed. It doesn't help when:

  • The API rate-limits you to 1 request per second. Concurrency 10 gets throttled to 1.
  • You only have one or two URLs to hit. Setup overhead dominates.
  • CPU is the bottleneck (e.g. heavy parsing). Use a process pool instead.
  • The API requires strict ordering (auth → query → mutation). The first call's result feeds the next; you can't gather them.

In those cases, sync is simpler and just as fast.
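The strict-ordering case is worth a concrete sketch. When each result feeds the next call, you simply await them in sequence; `gather` buys nothing. The coroutines below are hypothetical stand-ins for a real auth → query → mutation flow:

```python
import asyncio

# Fake stand-ins for a strictly ordered auth -> query -> mutation flow
async def login() -> str:
    await asyncio.sleep(0.01)
    return "token-abc"

async def find_item(token: str) -> int:
    await asyncio.sleep(0.01)
    assert token == "token-abc"  # the query needs the auth result
    return 42

async def update_item(token: str, item_id: int) -> str:
    await asyncio.sleep(0.01)
    return f"updated {item_id}"

async def main() -> str:
    # Each result feeds the next call, so these cannot be gathered:
    token = await login()
    item_id = await find_item(token)
    return await update_item(token, item_id)

print(asyncio.run(main()))
```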

httpx vs aiohttp

Both are excellent. httpx advantages:

  • Same API for sync and async: prototype sync, then scale to async.
  • HTTP/2 built-in (http2=True).
  • More requests-like ergonomics.

aiohttp advantages:

  • More mature ecosystem for very high concurrency (10k+ connections).
  • Better integration with some async web frameworks.

For scraping, httpx is the better default. Switch to aiohttp only if you've measured a real reason.

Hands-on lab

Take the synchronous Catalog108 client from lesson 3.9 and convert it to async using the pattern above. Add a concurrency limit of 10. Fetch all products across all pages (use the /api/products pagination). Compare wall time against the synchronous version; you should see a 5–10x speedup, which is exactly the gain async is meant to deliver.


Quiz: check your understanding

Pass mark is 70%. Pick the best answer; you’ll see the explanation right after.

Question 1 of 8

What does `asyncio.gather(*coros)` do?
