Scraping Central is reader-supported. When you buy through links on our site, we may earn an affiliate commission.

Beginner · 5 min read

Console, Sources, Application Tabs

The three other DevTools panels every scraper should know, for interactive prototyping, reading minified JS, and inspecting client-side storage.

What you’ll learn

  • Use the Console as an interactive REPL against the live DOM.
  • Open Sources, find the JS file that produced an API call, set a breakpoint to inspect runtime values.
  • Read the Application tab to see cookies, localStorage, sessionStorage, IndexedDB, and Service Workers.
  • Recognise when client-side storage (not cookies) holds the auth token your scraper needs.

DevTools has more than Elements and Network. The other three panels (Console, Sources, and Application) turn DevTools from a debugger into a full reverse-engineering kit.

Console: your live REPL

The Console isn't just for console.log output. It's a JavaScript REPL that has full access to the page's DOM and runtime.

Five things you'll actually do

// 1. Run a selector and count
document.querySelectorAll('article.product').length

// 2. Pull all titles
[...document.querySelectorAll('article.product h2')].map(el => el.innerText)

// 3. Access the most-recently-inspected element
$0.dataset.productId

// 4. Run an XPath
$x('//article[@class="product"][1]/h2')[0].innerText

// 5. Tab through the page's global variables
window.  // tab-complete the most useful global; some sites expose their state object here

The last one is a quiet superpower. Many React/Vue/Angular apps stash their state on a global (e.g. window.__NUXT__, window.__NEXT_DATA__, window._sharedData). Type window. and tab-complete; you might find the entire page's data sitting in one JSON object.
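A quick way to probe for those globals from the Console. The candidate key names below are common framework conventions, not a guaranteed list; the helper name is ours:

```javascript
// Check a global object for well-known framework state keys.
// The candidate names are common conventions, not a complete list.
function findStateGlobals(globalObj) {
  const candidates = [
    '__NEXT_DATA__', '__NUXT__', '_sharedData',
    '__INITIAL_STATE__', '__APOLLO_STATE__',
  ];
  return candidates.filter((k) => k in globalObj);
}

// In the Console: findStateGlobals(window)
```

Any key it returns is worth expanding in the Console — it's often the whole page's data as one object.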

monitorEvents: see what the page is listening for

monitorEvents($0, 'click')  // logs every click on the currently-selected element
unmonitorEvents($0)  // stop

Useful when a button clearly does something but you can't work out which handler fires, or what you could intercept.

Sources: read the JS that runs

Sources is where the JS lives. Three reasons a scraper opens it:

1. Finding where an XHR is built

From Network → click a request → Initiator tab → click the file link. You land in Sources at the exact line that fired the request.

2. Pretty-printing minified code

Almost every production site ships minified JS. Click the {} button at the bottom of the Sources pane to pretty-print the current file. t.value=k.encrypt(c) becomes readable across multiple lines. Won't restore variable names, but the structure becomes visible.

3. Setting a breakpoint to capture a runtime value

This is the killer feature for reverse-engineering. Suppose a page signs every request with an HMAC built from a secret. The secret lives in the JS bundle, but it's computed at runtime, so you can't grep for it. Instead:

  1. Find the file in Sources that builds the HMAC.
  2. Click the gutter next to the relevant line to set a breakpoint.
  3. Click around the page to trigger the call.
  4. When execution pauses at your breakpoint, hover over variables to see their values. Or use the Scope pane on the right.

You just read out a runtime secret without executing any of your own code. This technique (and its variants: XHR breakpoints, event listener breakpoints, conditional breakpoints) is the heart of Sub-Path 3.

XHR breakpoints

In Sources → right pane → XHR/fetch Breakpoints → +. Type a substring of the URL pattern (e.g. api/checkout). Now any fetch matching that substring pauses execution. You're frozen at the exact moment the call is being constructed; every variable in scope is your data.

DOM breakpoints

Right-click an element in the Elements panel → Break on → Subtree modifications / Attribute modifications / Node removal. The JS that mutates that element pauses on the modification. Useful when you can't figure out what code is hiding/revealing content.

Application: cookies, storage, service workers

This is the home of every piece of client-side state. Three areas matter:

Cookies

Domains in the left pane → click one → see every cookie's name, value, domain, path, expires, size, flags. Edit values inline. Right-click a cookie → Delete to test what happens without it.

This is where you copy auth cookies from your logged-in browser for your scraper's session.
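One hedged sketch of that copy step: parse what the Console can see into name/value pairs you can lift into your scraper's cookie jar. `parseCookies` is a helper name invented here:

```javascript
// Parse a cookie string (the format document.cookie returns) into an object.
// Note: HttpOnly cookies never appear in document.cookie — read those from
// the Application tab instead.
function parseCookies(cookieString) {
  return Object.fromEntries(
    cookieString
      .split('; ')
      .filter(Boolean)
      .map((pair) => {
        const i = pair.indexOf('=');
        return [pair.slice(0, i), decodeURIComponent(pair.slice(i + 1))];
      })
  );
}

// In the Console: parseCookies(document.cookie)
```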

Local & session storage

Application → Local Storage → https://example.com → ...
Application → Session Storage → https://example.com → ...

localStorage survives browser restart; sessionStorage doesn't. Many modern SPAs store auth tokens (JWTs) here instead of cookies, especially when the API is on a different origin.
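A quick Console filter for token-looking keys. The regex is a heuristic (sites name these keys however they like), and the helper name is ours:

```javascript
// List storage keys whose names suggest auth material.
function findAuthKeys(storage) {
  return Object.keys(storage).filter((k) => /token|auth|jwt|session/i.test(k));
}

// In the Console: findAuthKeys(localStorage) or findAuthKeys(sessionStorage)
```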

When you can't find the auth token in cookies, check localStorage. Pattern looks like:

key: "auth.token"
value: "eyJhbGciOiJIUzI1NiIs..."  (looks like a JWT)
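If the value really is a JWT, its payload is just base64url-encoded JSON — you can read the claims in the Console without verifying anything. A minimal sketch:

```javascript
// Decode a JWT's payload (the middle segment). This does NOT verify the
// signature; it only reads the claims.
function decodeJwtPayload(token) {
  const b64 = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(atob(b64));
}

// In the Console: decodeJwtPayload(localStorage.getItem('auth.token'))
// Typical claims: exp (expiry timestamp), sub (user id), iss (issuer)
```

The exp claim is especially useful: it tells you how long a replayed token stays valid before your scraper needs a fresh one.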

For a scraper, you can read these from the browser DevTools, then plug the value into an Authorization: Bearer ... header on your scraper's requests. The token-replay recipe is identical to cookies.
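In scraper code that can look like the following. The header shape is the standard Bearer scheme; the helper name and the example URL are ours:

```javascript
// Build request headers that replay a token lifted from localStorage.
function bearerHeaders(token, extra = {}) {
  return { Authorization: `Bearer ${token}`, ...extra };
}

// Usage (Node 18+, global fetch):
// fetch('https://example.com/api/products', { headers: bearerHeaders(token) })
```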

IndexedDB

A larger client-side database. Few sites use it for data that scrapers care about, but offline-first apps and chat clients sometimes cache entire message histories here. Browseable as a tree.

Service Workers

Service workers intercept network requests. If a site has one registered and your curl requests behave differently from your browser's, the SW is the culprit. Application → Service Workers → see what's registered, unregister to test the page without it.
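You can also check from the Console; navigator.serviceWorker.getRegistrations() is the standard API. A small sketch (the wrapper name is ours):

```javascript
// List the scopes of all service workers registered for this origin.
async function listServiceWorkerScopes(nav = navigator) {
  const regs = await nav.serviceWorker.getRegistrations();
  return regs.map((r) => r.scope);
}

// In the Console: await listServiceWorkerScopes()
// An empty array means no service worker is intercepting your requests.
```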

A real reverse-engineering recipe

You want to scrape a SPA's product data. The Network tab shows /api/products?token=ZZZZZZZZ where ZZZZZZZZ is a long random string that changes on every page load.

Where does the token come from?

  1. Application → Local Storage. Yes, authToken is there.
  2. Where did it come from? Search Sources for the literal string authToken. Find a file like auth-init.min.js. Pretty-print.
  3. Set a breakpoint at the line that writes localStorage.setItem('authToken'...).
  4. Reload. Execution pauses. Inspect the call stack and scope: the token came from an earlier /api/auth/init response.
  5. Now your scraper recipe: POST /api/auth/init → read token from response → use it in Authorization: Bearer token.

Twenty minutes of DevTools work, no Playwright needed, scraper runs at HTTP speed.
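The whole recipe, sketched as scraper code (Node 18+, global fetch). The endpoint paths and the `{ token }` response shape come from this example walkthrough and are assumptions for any real site:

```javascript
// Steps 1-5 from the walkthrough above, as one function.
async function scrapeProducts(base) {
  // 1. Replay the call that mints the token (found via the breakpoint).
  const init = await fetch(`${base}/api/auth/init`, { method: 'POST' });
  const { token } = await init.json();

  // 2. Spend the token the same way the page does: query param + header.
  const res = await fetch(`${base}/api/products?token=${token}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
```

Because the token is minted per session, call the function fresh each run rather than hard-coding a captured value.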

Hands-on lab

Open practice.scrapingcentral.com/account/login and log in with the demo credentials. Then:

  1. Console: type document.cookie. What's there?
  2. Application → Local Storage. Is anything stored?
  3. Application → Cookies → find the session cookie; note its name and HttpOnly flag.
  4. Select the login form in Elements, then type $0 in the Console. What do you have?

By the end you'll know exactly how Catalog108's auth state lives in the browser, and you'll know what to copy into your scraper to replay that state.
