Symfony Panther: Playwright/ChromeDriver for PHP
PHP's first-class browser automation library. Same model as Playwright, friendly Symfony integration, real browser control.
What you’ll learn
- Install Panther alongside Symfony or in any composer-managed PHP project.
- Drive Chromium or Firefox from PHP with a familiar Crawler API.
- Wait for JS-rendered content with Panther's explicit-wait helpers.
- Recognise the two cases where Panther beats calling Playwright through a Python subprocess.
If your stack is PHP, you have three options for browser automation: shell out to a Python script, embed a headless tool over CDP, or use Symfony Panther. Panther is the cleanest of the three: a native PHP library that drives Chromium or Firefox via WebDriver, with an API that feels like Goutte/DomCrawler but executes real JavaScript.
Install
composer require symfony/panther
Panther can use either Chromium (via ChromeDriver) or Firefox (via geckodriver). The driver binary is downloaded automatically the first time you run a script that needs it, into your project's drivers/ folder.
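If you want to manage the browser or driver yourself, Panther reads a handful of environment variables (these names are from Panther's README); a quick sketch, assuming a hypothetical scrape.php script:

```
# Watch the browser instead of running headless (useful while debugging)
PANTHER_NO_HEADLESS=1 php scrape.php

# Point Panther at a chromedriver binary you installed yourself
PANTHER_CHROME_DRIVER_BINARY=/usr/local/bin/chromedriver php scrape.php
```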
Symfony users get extra wiring (test-framework integration, Symfony Console support). Non-Symfony projects work fine too: Panther is composer require-able from any PHP project.
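To make the Symfony test-framework integration concrete, here is a minimal functional-test sketch. It assumes a Symfony app with Panther installed as a dev dependency; the route, class name, and selectors are made up for illustration:

```php
<?php

use Symfony\Component\Panther\PantherTestCase;

class ProductListTest extends PantherTestCase
{
    public function testGridRenders(): void
    {
        // Boots your app behind a local web server and drives it with a real browser
        $client = static::createPantherClient();
        $client->request('GET', '/products');

        // Wait for the JS-rendered grid before asserting
        $client->waitFor('.product-grid');
        $this->assertSelectorExists('.product-card');
    }
}
```

The same assertions you know from Symfony's WebTestCase work here, except they run against a browser-rendered DOM. (No runnable test is included: the example needs a live Symfony app and a browser.)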
Your first scraper
<?php
require_once __DIR__ . '/vendor/autoload.php';
use Symfony\Component\Panther\Client;
$client = Client::createChromeClient();
$crawler = $client->request('GET', 'https://practice.scrapingcentral.com/');
echo $crawler->filter('h1')->first()->text() . PHP_EOL;
$client->quit();
Client::createChromeClient() launches headless Chromium. $client->request() is the navigation method (note the HTTP-verb-style API, same as Symfony's BrowserKit). The returned $crawler is a DomCrawler instance: the same one you've used for static scraping in Sub-Path 1.
For Firefox: Client::createFirefoxClient().
What Panther actually does
Under the hood, Panther:
- Spawns ChromeDriver (or geckodriver) as a subprocess.
- Speaks the WebDriver protocol to it (the same protocol Selenium uses).
- Exposes a PHP API that mimics Symfony's BrowserKit\Client and DomCrawler\Crawler.
Result: you write PHP that looks like a Symfony Crawler scrape, but the page is being driven by a real browser that runs JS, executes XHRs, and updates the DOM. Same conceptual model as Playwright, different transport.
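You can see the real browser at work by evaluating JavaScript in the page with Panther's executeScript(); a small sketch (the URL is reused from the example above, the script body is illustrative):

```php
<?php
require_once __DIR__ . '/vendor/autoload.php';

use Symfony\Component\Panther\Client;

$client = Client::createChromeClient();
$client->request('GET', 'https://practice.scrapingcentral.com/');

// executeScript() runs JS in the page and returns the result to PHP,
// something a static HTTP client simply cannot do
$title = $client->executeScript('return document.title;');
echo $title . PHP_EOL;

$client->quit();
```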
Crawler + WebDriver
Two APIs are available on every Panther response:
$crawler = $client->request('GET', 'https://practice.scrapingcentral.com/products');
// DomCrawler-style (read-only, familiar from Sub-Path 1)
foreach ($crawler->filter('.product-card') as $node) {
    echo $node->getElementsByTagName('h2')->item(0)->textContent . PHP_EOL;
}
// WebDriver-style (interactive); needs `use Facebook\WebDriver\WebDriverBy;` at the top of the file
$client->getWebDriver()->findElement(WebDriverBy::cssSelector('button.load-more'))->click();
For pure extraction, stay in the Crawler API; it's the PHP idiom. For interactions (click, fill, scroll), reach for WebDriver. Panther blends them seamlessly.
Waiting for JS-rendered content
use Symfony\Component\Panther\Client;
$client = Client::createChromeClient();
$crawler = $client->request('GET', 'https://practice.scrapingcentral.com/challenges/dynamic/spa-pure');
// Wait up to 10 seconds for the .product-grid to appear
$client->waitFor('.product-grid', 10);
// Now extract
$cards = $crawler->filter('.product-card');
echo "Found " . $cards->count() . " products" . PHP_EOL;
$client->quit();
waitFor(selector, timeout) polls until the selector resolves to at least one element. Variants:
$client->waitForVisibility('.modal');
$client->waitForInvisibility('.spinner');
$client->waitForElementToContain('.status', 'Loaded');
$client->waitForAttributeToContain('input', 'value', 'success');
These are Panther's equivalents to Playwright's wait_for_selector(state=...). Same idea, different API surface.
Filling forms and clicking
$client = Client::createChromeClient();
$crawler = $client->request('GET', 'https://practice.scrapingcentral.com/account/login');
$form = $crawler->filter('form')->form([
    'email' => 'demo@example.com',
    'password' => 'password',
]);
$client->submit($form);
$client->waitFor('.welcome-message');
echo $crawler->filter('.welcome-message')->text();
$client->quit();
The form API is straight from Symfony BrowserKit: the same code that works against a static crawler also works against a JS-driven page, with the addition of waitFor() to handle async post-submit redirects.
For more direct control:
$crawler->selectButton('Login')->click();
$crawler->filter('input[name="email"]')->sendKeys('demo@example.com');
sendKeys() is the WebDriver way: it simulates keystrokes one at a time. The Symfony form helpers are higher-level; pick whichever fits your interaction shape.
When Panther beats Playwright-via-subprocess
You could call Playwright Python from PHP via exec() or Symfony\Process. Panther is better when:
- Your project is already Symfony. Panther integrates with Symfony's test framework (PantherTestCase), the dependency injection container, console commands, and dotenv config. Lesson 2.13 builds a Symfony Console scraper that's idiomatic in a way subprocess-Playwright never could be.
- You need transactional control. Panther lives in your PHP process, sharing config, services, and exceptions. A subprocess Python script is a black box: you communicate over stdout and hope for the best.
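For contrast, here is what the subprocess approach looks like; a sketch assuming symfony/process is installed and a hypothetical scrape.py script that prints JSON to stdout:

```php
<?php
require_once __DIR__ . '/vendor/autoload.php';

use Symfony\Component\Process\Process;
use Symfony\Component\Process\Exception\ProcessFailedException;

// Everything crosses the process boundary as text: arguments in, stdout back
$process = new Process(['python3', 'scrape.py', 'https://practice.scrapingcentral.com/']);
$process->run();

if (!$process->isSuccessful()) {
    // The scraper's failure reaches PHP only as an exit code and stderr,
    // never as a typed exception you can catch and inspect
    throw new ProcessFailedException($process);
}

$data = json_decode($process->getOutput(), true);
```

Compare this with Panther, where a failed wait or a missing element surfaces as a PHP exception inside your own process.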
When Playwright (Python or Node) is still better:
- The team's primary language is Python/JS and PHP is one consumer.
- You need Playwright-specific features (browser context proxies, request interception, codegen).
- You're scraping at high volume where every millisecond counts: Panther's WebDriver protocol has more round-trips than CDP.
Cleanup
$client = Client::createChromeClient();
try {
    // ... scraping ...
} finally {
    $client->quit();
}
ChromeDriver and Chromium are subprocesses. Forgetting quit() leaves them running, sometimes outliving your PHP process. Always wrap in try/finally.
A note on environment
Headless Chromium needs:
- A modern Linux/macOS/Windows kernel.
- For CI/Docker: ensure the standard fontconfig + libnss libraries are installed; otherwise Chromium errors on startup.
- For dockerised PHP: use a base image that's known to work (the webdevops/php-nginx series ships everything; vanilla php:fpm does not).
The first run downloads ~150 MB of driver/browser. Subsequent runs are instant.
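If you do start from a vanilla php image, the Dockerfile additions look roughly like this; a sketch for a Debian-based image, and package names vary by distro release:

```
FROM php:8.2-cli
# Chromium plus the shared libraries headless Chromium typically needs
RUN apt-get update && apt-get install -y --no-install-recommends \
        chromium chromium-driver \
        fontconfig libnss3 \
    && rm -rf /var/lib/apt/lists/*
```

Installing chromium and chromium-driver from apt also sidesteps the first-run download entirely.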
Hands-on lab
Open /challenges/dynamic/spa-pure. Write a Panther scraper that: (1) navigates to the page, (2) waits for the product grid to render, (3) extracts the name and price of every product card. Then add a click on "Load more" if the lab supports it, and re-extract. You should have a working browser-driven PHP scraper in under thirty lines.
Practice this lesson on Catalog108, our first-party scraping sandbox.
Open lab target → /challenges/dynamic/spa-pure
Quiz: check your understanding
Pass mark is 70%. Pick the best answer; you’ll see the explanation right after.