Elements Panel, Inspect, Edit, Copy Selectors
Your eyes on a live page. How to use the Elements panel to plan a scrape before writing a single line of code.
What you’ll learn
- Open Elements, search the DOM, and locate any visible feature on a page.
- Use 'Copy selector' and 'Copy XPath' sensibly, and know why their output is usually too brittle to ship.
- Edit the DOM live to test what your scraper will see.
- Pin a specific element to a console variable for ad-hoc, REPL-style poking.
DevTools is your reconnaissance kit. Before you write a scraper for a new site, you should spend more time in DevTools than in your editor. The Elements panel (Chrome, Firefox, Safari, and Edge all have an equivalent) is where you'll plan every scrape.
Opening it
Three ways, in order of usefulness:
- Right-click any visible thing → Inspect. Drops you in the Elements panel with the clicked node already highlighted. This is 90% of how you'll open it.
- Ctrl + Shift + C / Cmd + Option + C. Enters "inspect" mode: hover any element, click to pin.
- F12 / Cmd + Option + I. Opens DevTools with whichever panel was last active.
Get muscle memory for "right-click → inspect." That single habit shaves hours off scraping sessions.
What you see
The panel shows the current DOM, not the page source. Three sub-areas matter:
- The DOM tree on the left. Expandable. Bold-highlighted parts are what's actually visible.
- Computed styles / Styles on the right (less relevant for scrapers, except for `display: none` debugging).
- The breadcrumb trail at the bottom: `html > body > main > section.products > article.product`. Click any crumb to jump.
Searching the DOM
Ctrl+F (or Cmd+F) inside the Elements panel opens a search box that takes any of these:
- A plain string (matches text content)
- A CSS selector
- An XPath expression
The same search bar accepts all three. This is the single most useful feature in DevTools. Type your CSS or XPath, see live how many matches and which ones, before you write a line of scraper code.
```
.product .price                          ← CSS, ~24 matches on a listing page
//article[contains(@class, "product")]   ← XPath, same elements
$14.99                                   ← text search
```
The match count is your selector's hit count. If you expected 12 products and got 24, your selector is too greedy.
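The same count-before-extract check can be mirrored in a script. A minimal sketch using only the standard library, with made-up markup standing in for the real listing page (real pages are rarely well-formed XML, so a production scraper would use an HTML parser instead):

```python
import xml.etree.ElementTree as ET

# Invented fragment of a listing page (well-formed so ElementTree can parse it).
html = """
<main>
  <section class="products">
    <article class="product"><h2>Mug</h2><span class="price">$14.99</span></article>
    <article class="product"><h2>Bowl</h2><span class="price">$9.99</span></article>
    <article class="promo"><h2>Sponsored</h2></article>
  </section>
</main>
"""
root = ET.fromstring(html)

# Same idea as typing a selector into the Elements search box and reading
# the match count: count first, extract second.
products = root.findall('.//article[@class="product"]')
print(len(products))  # 2 -- the promo card is correctly excluded
```

If the count surprises you here, it will surprise you in production too.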
"Copy selector" and "Copy XPath": the trap
Right-click any element in the Elements tree → Copy → Copy selector / Copy XPath. DevTools generates a selector for you.
The trap: the auto-generated selectors are usually unusable. Examples of what Chrome generates:
```
#__next > div > div > main > section:nth-child(2) > div > div:nth-child(3) > article > div.styles__Body-sc-12abc.dRfXyZ > h2
```
This walks every level from the root. The slightest layout change breaks it.
What to do instead: use it as a starting hint, then refactor by hand:
- Look at the generated path; find the most stable anchor (a class with a meaningful name, a `data-*` attribute, an `id`).
- Strip everything above that anchor.
- Replace auto-generated class names with stable ones.
The auto-generated XPath is usually even worse, typically the full child-by-position path:
```
/html/body/div[1]/div/main/section[2]/div/div[3]/article/div[2]/h2
```
Never ship this. Auto-paths are first drafts only.
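To see why the anchored version survives layout changes, here is a sketch contrasting the two styles on a hypothetical fragment (standard library only; the structure and class names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Invented structure standing in for a real page.
html = """
<body>
  <div>
    <main>
      <section>
        <article class="product"><div class="meta"><h2>Red Mug</h2></div></article>
      </section>
    </main>
  </div>
</body>
"""
root = ET.fromstring(html)

# Full positional path, like the auto-generated XPath: any wrapper
# added or removed above the <h2> breaks it.
brittle = root.find("./div/main/section/article/div/h2")

# Anchored on one meaningful class: layout shuffles above the anchor
# no longer matter.
card = root.find('.//article[@class="product"]')
stable = card.find(".//h2")

print(brittle is stable, stable.text)  # True Red Mug
```

Both paths currently hit the same node, but only the anchored one will still hit it next week.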
Editing the DOM live
Double-click any tag, attribute, or text node and it becomes editable. Useful tests:
- Delete a wrapper div to see if removing it changes layout (helps you understand which elements are structural vs. cosmetic).
- Edit a class to see if the element disappears (confirms which class is the visibility hook).
- Type into an input, hit Enter, watch the network panel, does it fire an XHR? (Gives you the URL for API scraping later.)
Edits don't persist: refresh and they're gone. Treat the panel as a sandbox.
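For the third test, once the Network panel reveals the XHR, you can often reconstruct its URL and call the endpoint directly. A sketch in which the path and parameters are invented for illustration (only the practice host appears in this lesson):

```python
from urllib.parse import urlencode, urlunsplit

# Hypothetical endpoint spotted in the Network panel after typing
# into the search box; "/api/search" and its params are assumptions.
params = {"q": "mug", "page": 1}
url = urlunsplit((
    "https",
    "practice.scrapingcentral.com",
    "/api/search",
    urlencode(params),
    "",
))
print(url)  # https://practice.scrapingcentral.com/api/search?q=mug&page=1
```

Scraping the JSON behind a page is almost always easier than scraping its HTML.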
Pin to $0, $1...
Click an element in the Elements tree to select it. In the Console panel, the variable $0 now refers to that element. The previously selected element is $1, then $2, $3, $4: five slots in all.
```js
$0                                   // currently selected element
$0.getAttribute("data-id")           // attribute value
$0.innerText                         // visible text
$0.querySelectorAll(".price").length // count of descendants

// XPath via the special $x helper
$x('//article[@class="product"]')    // returns an array of matches
$$('article.product')                // CSS shortcut, also returns an array
```
This is your interactive scraper-prototyping shell. Test your selector here, confirm the count and content, then transplant into your script.
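The transplant step might look like this: a sketch reusing the console-verified selectors in Python, with toy markup standing in for the live page (a real scraper would use an HTML parser rather than ElementTree):

```python
import xml.etree.ElementTree as ET

# Stand-in markup; assume the console session already confirmed that
# $$('article.product') matches the cards and '.price' holds the price.
html = """
<section class="products">
  <article class="product"><h2>Mug</h2><span class="price">$14.99</span></article>
  <article class="product"><h2>Bowl</h2><span class="price">$9.99</span></article>
</section>
"""
root = ET.fromstring(html)

# $$('article.product')                 ~ findall(...)
# $0.querySelector('.price').innerText  ~ find(...).text
prices = [card.find('.//span[@class="price"]').text
          for card in root.findall('.//article[@class="product"]')]
print(prices)  # ['$14.99', '$9.99']
```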
Force-state and pseudo-classes
The :hov button (top right of the Styles pane) lets you force :hover, :focus, :active, :visited, and :focus-within. Useful when content only appears on hover (tooltips, dropdown menus): pin the state so you can inspect the revealed content.
Two habits to build
- Always confirm selector count in Elements search before scripting. If your selector matches 47 items but you expected 12, you've found a bug before writing the loop.
- Inspect first, code second. Spend 5 minutes mapping the page in DevTools (which selectors are stable, which classes look auto-generated, which data lives in attributes vs. text) before writing any Python. It saves an hour of debugging.
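The first habit can be baked into the scraper itself as a fail-fast guard. A sketch, with the selector and expected count as placeholders:

```python
import xml.etree.ElementTree as ET

def checked_cards(root, expected):
    """Fail fast if the selector's hit count drifts from what DevTools showed."""
    cards = root.findall('.//article[@class="product"]')
    if len(cards) != expected:
        raise ValueError(
            f"selector matched {len(cards)} cards, expected {expected}"
        )
    return cards

# Toy page with three cards; pretend the Elements search box showed 3 matches.
html = "<main>" + '<article class="product"/>' * 3 + "</main>"
cards = checked_cards(ET.fromstring(html), expected=3)
print(len(cards))  # 3
```

A loud error the day the site redesigns beats silently scraping zero rows.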
Hands-on lab
Open practice.scrapingcentral.com/products in your browser, open DevTools, and:
- Use Elements search to count the `article.product` cards.
- Right-click one card → Copy selector. Note how brittle it is.
- Refactor that selector by hand into something resilient; it should be 2–3 tokens.
- Pin the card with `$0` in the console; extract `innerText`, then run `$0.querySelector('.price').innerText` to confirm you can get just the price.
By the end you'll have planned your /products scraper without writing a single line of Python.
Practice this lesson on Catalog108, our first-party scraping sandbox.
Open lab target → /products

Quiz: check your understanding
Pass mark is 70%. Pick the best answer; you'll see the explanation right after.