How to Set Up Proxy in Puppeteer

If you run Puppeteer at scale without proxies, you risk IP bans, missing data, and broken tests. This guide shows how to set up proxies in Puppeteer, including configuration, rotation, and troubleshooting.
3 Types of Proxy Setup in Puppeteer
Static proxy via --proxy-server
The simplest way to use a proxy in Puppeteer is with the --proxy-server option. This sets the proxy for the entire browser, and all tabs or contexts inherit it. Unlike other methods, this approach is native and doesn’t require any additional packages.
To use it, pass the argument when creating the browser instance:
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch({
  args: ['--proxy-server=http://HOST:PORT']
});
const page = await browser.newPage();
await page.goto('https://httpbin.org/ip', { waitUntil: 'domcontentloaded' });
console.log(await page.evaluate(() => document.body.innerText));
await browser.close();
Page-level proxy with request interception
Puppeteer lacks a native per-page proxy. Use puppeteer-page-proxy to route individual page requests through different proxies.
Install the following package:
npm i puppeteer-page-proxy
Usage example:
import puppeteer from 'puppeteer';
import useProxy from 'puppeteer-page-proxy';

const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.setRequestInterception(true);
page.on('request', req => useProxy(req, 'http://user:pass@HOST:PORT'));
await page.goto('https://httpbin.org/ip');
await browser.close();
Browser context-level proxy
Puppeteer doesn’t support proxies per browserContext. The --proxy-server option still applies to the entire browser.
To handle this, you can:
- Launch separate browser instances, each with its own --proxy-server.
- Use request interception (see the previous section).
Keep in mind that running multiple browser processes increases CPU and memory usage. Recent Puppeteer versions also expose a proxyServer option on browser.createBrowserContext(), which scopes a proxy to a single context; check your version’s documentation before relying on it.
Example with two browser instances:
const browser = await puppeteer.launch({ args: ['--proxy-server=http://p1:port'] });
const browser2 = await puppeteer.launch({ args: ['--proxy-server=http://p2:port'] });
Handling Proxy Authentication
First, note that the following method does not work in Puppeteer:
--proxy-server=http://user:pass@host:port
Puppeteer controls Chromium, and Chromium does not support credentials in the --proxy-server URL. This is documented in the Chromium proxy docs. So Puppeteer cannot pass username/password through the URL either.
Using page.authenticate()
Puppeteer handles proxy auth through page.authenticate(). This is the official way to pass proxy credentials. It works with HTTP/HTTPS proxies and must run before navigation:
// Launch browser with global proxy
const browser = await puppeteer.launch({ args: ['--proxy-server=http://host:port'] });
const page = await browser.newPage();
// Authenticate proxy
await page.authenticate({ username: 'user', password: 'pass' });
await page.goto('https://example.com');
Third-party libraries and workarounds
There are workarounds for proxies that require authentication:
- Use third-party libraries that attach credentials at the request or page level.
- Localize (anonymize) an upstream proxy, then pass the local proxy URL to --proxy-server.
Example with puppeteer-page-proxy (per-page proxy with credentials):
import puppeteer from 'puppeteer';
import useProxy from 'puppeteer-page-proxy';
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.setRequestInterception(true);
page.on('request', req => useProxy(req, 'http://USER:PASS@HOST:PORT'));
await page.goto('https://httpbin.org/ip');
await browser.close();
Example with @extra/proxy-router (routing and dynamic IP changes; the plugin also handles proxy authentication for you):
import puppeteer from 'puppeteer-extra';
import ProxyRouter from '@extra/proxy-router';

puppeteer.use(ProxyRouter({
  proxies: { DEFAULT: 'http://USER:PASS@HOST:PORT' }
}));

const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://httpbin.org/ip');
await browser.close();
You can also use the proxy-chain package or create your own anonymizer: a local HTTP proxy that injects credentials into headers when forwarding traffic.
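As a sketch of the DIY route, here is a minimal local anonymizer built only on Node’s http and net modules. It listens locally, attaches a Proxy-Authorization header, and forwards everything to an authenticated upstream. The UPSTREAM values and the local port are placeholders, and real code needs proper error handling:

```javascript
import http from 'http';
import net from 'net';

// Hypothetical authenticated upstream proxy; replace with your own.
const UPSTREAM = { host: 'HOST', port: 8080, user: 'USER', pass: 'PASS' };
const auth = 'Basic ' + Buffer.from(`${UPSTREAM.user}:${UPSTREAM.pass}`).toString('base64');

// Plain HTTP: forward the request to the upstream proxy with credentials attached.
const server = http.createServer((req, res) => {
  const proxyReq = http.request({
    host: UPSTREAM.host,
    port: UPSTREAM.port,
    method: req.method,
    path: req.url, // a proxy client sends the absolute URL here
    headers: { ...req.headers, 'Proxy-Authorization': auth },
  }, proxyRes => {
    res.writeHead(proxyRes.statusCode, proxyRes.headers);
    proxyRes.pipe(res);
  });
  proxyReq.on('error', () => res.destroy());
  req.pipe(proxyReq);
});

// HTTPS: answer CONNECT by tunneling through the upstream proxy.
// (Simplified: the upstream's "200 Connection Established" is piped straight back.)
server.on('connect', (req, clientSocket, head) => {
  const upstream = net.connect(UPSTREAM.port, UPSTREAM.host, () => {
    upstream.write(`CONNECT ${req.url} HTTP/1.1\r\nProxy-Authorization: ${auth}\r\n\r\n`);
    upstream.write(head);
    clientSocket.pipe(upstream).pipe(clientSocket);
  });
  upstream.on('error', () => clientSocket.destroy());
});

server.listen(8000); // then launch Puppeteer with --proxy-server=http://127.0.0.1:8000
```

Since the local proxy requires no authentication, the plain --proxy-server flag works again and Chromium never sees the credentials.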
Advanced Proxy Techniques
Use extra measures like setting headers, hiding the automated browser, and rotating proxies regularly to reduce the risk of bans and make proxies last longer.
Building proxy rotation scripts
Proxy rotation is simple. Use a pool of proxies and switch between them either continuously or when bans/errors occur.
const pool = ['http://host1:port','http://host2:port'];
let i = 0;
const next = () => pool[(i++) % pool.length];
const proxy = next();
You can also use a rotating endpoint from a proxy provider. This is easier to implement and usually comes with access to a much larger pool, with the main downside being increased costs.
Track banned proxies to improve reliability. Keep metrics and skip bad proxies:
const pool = new Set(['proxy1', 'proxy2', 'proxy3']);
const banned = new Set();

function next() {
  const candidates = [...pool].filter(x => !banned.has(x));
  return candidates[Math.floor(Math.random() * candidates.length)];
}

function markBad(p) { banned.add(p); }
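Putting these pieces together, here is a hedged sketch of a retry loop that bans failing proxies as it goes. The proxy URLs are placeholders, and fetchPage stands in for whatever Puppeteer navigation you run per proxy:

```javascript
const pool = new Set(['http://host1:port', 'http://host2:port', 'http://host3:port']);
const banned = new Set();

function nextProxy() {
  const alive = [...pool].filter(p => !banned.has(p));
  if (alive.length === 0) throw new Error('No working proxies left');
  return alive[Math.floor(Math.random() * alive.length)];
}

// fetchPage(proxy) is whatever you run per proxy, e.g. launch Puppeteer with
// --proxy-server=proxy and navigate; it should throw on proxy failures.
async function withRetry(fetchPage, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    const proxy = nextProxy();
    try {
      return await fetchPage(proxy);
    } catch (err) {
      lastError = err;
      banned.add(proxy); // assume the proxy is bad; skip it next time
    }
  }
  throw lastError;
}
```

On success the result is returned immediately; after the given number of failures the last error is rethrown, so the caller can decide whether to refill the pool.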
Dynamic user-agent and header spoofing
Always use realistic User-Agent, Accept-Language, and other headers when switching IPs. Proxies alone are not enough. You can find the latest User Agents in our blog.
Example:
await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...');
await page.setExtraHTTPHeaders({ 'Accept-Language': 'en-US,en;q=0.9' });
Headless detection evasion
To hide that you are using an automated browser, use puppeteer-extra with the puppeteer-extra-plugin-stealth plugin, which applies a set of evasion patches (but doesn’t guarantee 100% protection from bans).
These modules work like standard Puppeteer but include stealth features to better hide automation. Use them when scraping fails with plain Puppeteer.
Run this in your terminal:
npm i puppeteer-extra puppeteer-extra-plugin-stealth
Usage example:
import puppeteer from 'puppeteer-extra';
import StealthPlugin from 'puppeteer-extra-plugin-stealth';
puppeteer.use(StealthPlugin());
const browser = await puppeteer.launch({ headless: true, args: ['--proxy-server=...'] });
Troubleshooting Proxy Issues
Fixing authentication and connection errors
Most issues come from incorrect usage or authentication problems. Here are the common Puppeteer errors and fixes:
| Error | Meaning / Cause | Fix |
|---|---|---|
| net::ERR_PROXY_CONNECTION_FAILED | Wrong host / port / protocol | Double-check the proxy string (http://host:port) |
| net::ERR_NO_SUPPORTED_PROXIES | Unsupported scheme | Use a correct scheme: http://, https://, socks4://, socks5:// |
| net::ERR_TUNNEL_CONNECTION_FAILED | Proxy can’t reach the target site | Test the proxy externally (curl -x proxy …), switch to another one |
| net::ERR_TIMED_OUT | Proxy is alive but too slow | Drop slow IPs, add timeout & retry logic |
| net::ERR_HTTP_RESPONSE_CODE_FAILURE (407) | Proxy auth required, but credentials missing | Call page.authenticate({ username, password }) or use a proxy lib |
Remember that Chromium does not support embedding credentials in the proxy URL.
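A small helper can turn the table above into retry policy: classify a navigation error by its message and decide whether to ban the proxy or only retry. This is a sketch; the strings match the Chromium error codes listed above:

```javascript
// Errors that usually mean the proxy itself is broken or misconfigured.
const PROXY_ERROR_CODES = [
  'net::ERR_PROXY_CONNECTION_FAILED',
  'net::ERR_NO_SUPPORTED_PROXIES',
  'net::ERR_TUNNEL_CONNECTION_FAILED',
  'net::ERR_HTTP_RESPONSE_CODE_FAILURE',
];

function isProxyError(err) {
  return PROXY_ERROR_CODES.some(code => err.message.includes(code));
}

// Timeouts suggest a slow proxy: retry elsewhere rather than ban immediately.
function isTimeout(err) {
  return err.name === 'TimeoutError' || err.message.includes('net::ERR_TIMED_OUT');
}
```

In a scraping loop, a proxy error marks the proxy as bad, while a timeout can simply trigger a retry on another node.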
Resolving misconfigurations and inconsistent output
Don’t mix global --proxy-server with per-request interception. Use only one method per browser to avoid unpredictable IP behavior. Mixing them splits traffic: some requests go through host1, some through host2, and some bypass the proxy, making debugging unreliable.
// Wrong: Mixing global proxy and per-request proxying
await puppeteer.launch({ args: ['--proxy-server=http://host1:port'] });
page.on('request', req => useProxy(req, 'http://host2:port'));
Monitoring proxy health at scale
Some proxies will fail at scale. If you don’t check them, jobs hang. Test each proxy with a simple site like httpbin, which returns the IP used for the request. Track success rate and latency, and remove bad nodes to keep scraping stable.
try {
  await page.goto('https://httpbin.org/ip', { timeout: 2000 });
} catch {
  console.log('Proxy dead, skip');
}
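The spot check above can be extended to the whole pool: measure latency per proxy and keep only the healthy nodes. Here, probe is a placeholder for whatever request you run through each proxy, such as loading https://httpbin.org/ip through it:

```javascript
// Run one probe through a proxy and record whether it worked and how long it took.
async function checkProxy(proxy, probe) {
  const start = Date.now();
  try {
    await probe(proxy);
    return { proxy, ok: true, latencyMs: Date.now() - start };
  } catch {
    return { proxy, ok: false, latencyMs: Infinity };
  }
}

// Probe every proxy in parallel and drop the dead or slow ones.
async function healthyProxies(pool, probe, maxLatencyMs = 5000) {
  const results = await Promise.all(pool.map(p => checkProxy(p, probe)));
  return results.filter(r => r.ok && r.latencyMs <= maxLatencyMs).map(r => r.proxy);
}
```

Running this on a schedule keeps the pool clean, so scraping jobs never pick a node that is already known to be down or too slow.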
Alternative to rotating proxies in Puppeteer
Managing your own proxy pool is fine — until it’s not. Credentials, rotation logic, retries, health checks… all that eats time.
To get results quickly, offload proxy handling to a service like HasData. Its SDK hides the proxy and auth headaches behind a single API call: no IP rotation, no page.authenticate(), no retry loops. You can even capture a screenshot if needed; all proxy logic is handled automatically.
import ScrapeitSDK from '@scrapeit-cloud/scrapeit-cloud-node-sdk';

const main = async () => {
  // To get an API key, sign up at https://app.hasdata.com/sign-up
  const scrapeit = new ScrapeitSDK('YOUR_API_KEY');
  try {
    const response = await scrapeit.scrape({
      url: "https://httpbin.org/ip",
      proxy_country: "US",
      proxy_type: "datacenter",
    });
    console.log(response);
  } catch (e) {
    console.error(e.message);
  }
};

main();
