How to Set Up a Proxy with Selenium in Python

Last edit: May 06, 2024

Selenium is a popular open-source library for browser automation, testing, and scraping. Bindings are available for most popular programming languages, including Python. Thanks to its ease of use and active community, Selenium is a go-to choice for web automation tasks.

In this article, we will look at how to use proxies with Selenium to make your web scraping more secure and reliable, covering everything from the fundamentals of proxy usage to more advanced techniques such as proxy rotation.

Why Use Proxies with Selenium?

A proxy server acts as an intermediary between you and the internet resources you access. When you request a website, your computer sends the request to the proxy server instead of directly to the website. The proxy server then forwards the request to the website and receives the response. Finally, the proxy server sends the response back to your computer.

There are many different types of proxy servers, each with its advantages and disadvantages. Some proxy servers are designed to improve security, while others are designed to enhance anonymity. Some proxy servers are also designed to cache content, which can improve performance. But we have already discussed the different types of proxies in detail, so we will not dwell on them here.

There are several crucial reasons why using proxies is essential for web scraping:

  1. Prevent IP address blocks and CAPTCHAs.
  2. Bypass geo-restrictions and localize requests.
  3. Conceal your real IP address and enhance anonymity.

Let's delve into each of these points in more detail. 

Preventing IP Bans and Captchas

When it comes to scraping, using proxies is primarily essential to circumvent IP address blocks and CAPTCHA interruptions. As mentioned earlier, a proxy server acts as an intermediary between you and the target website. This way, if your IP address gets blocked, the access restriction applies to the proxy server and not your actual IP. To resume scraping, you can simply switch to a different proxy.

Dealing with CAPTCHAs is even simpler. Instead of solving CAPTCHAs during scraping, you can try to avoid them altogether by simply changing proxies when they appear. However, it's important to note that for this method to work, the proxies you use must be high-quality, preferably residential ones.

Circumventing Geo-Restrictions

Proxies enable you to bypass geographical restrictions and access content that might be blocked or restricted based on your location. By routing your traffic through a proxy server located in another region or country, you can make it appear as if you're connecting to the internet from that location, thereby circumventing geo-blocks.

Enhancing Anonymity and Security

Another reason to use proxies is to enhance security and anonymity while scraping. However, it's important to note that not all proxies can improve your security. For instance, free proxies typically do more harm than good in this case. They are often unprotected, unstable, have low data transfer speeds, and may even monitor your traffic and sell it to third parties.

On the other hand, high-quality proxies can make your online presence anonymous and secure. Some proxy servers offer encryption, which scrambles your data into an unreadable format, protecting it from interception by third parties. This is particularly crucial when using public Wi-Fi networks, where your data may be vulnerable.

Prerequisites

Before we move on to examples of using proxies with Selenium, we need to make sure that all the necessary components are installed on your computer. For this article, you will need Python 3; a full installation tutorial can be found in our article on Python scraping basics. Additionally, if you're interested in using proxies with the Requests library, you can see how to use a proxy in Python Requests.

To install Selenium, use the pip package manager and run the following command in the terminal:

pip install selenium

You will then need a chromedriver or any other web driver of the same version as the browser installed on your computer. In the article on Selenium scraping, you can find detailed instructions and all the necessary links to web drivers for different browsers.
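
Note that recent Selenium releases (4.6 and later) ship with Selenium Manager, which downloads a matching driver automatically, so manual driver setup is often unnecessary. A minimal smoke test, assuming Chrome is installed:

from selenium import webdriver

# Selenium Manager (bundled since Selenium 4.6) fetches a matching driver
driver = webdriver.Chrome()
driver.get('https://example.com')
print(driver.title)  # prints "Example Domain" if everything works
driver.quit()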

Setting Up a Proxy in Selenium

Let's explore the different ways to use proxies with Selenium, along with the distinctions in usage based on the type of proxies chosen. There are two primary approaches to connect proxies in Selenium:

  1. Leveraging Selenium's built-in capabilities and adding proxies using Options.
  2. Employing third-party libraries for proxy management, such as Selenium Wire.

In this article, we'll delve into both methods, but the choice ultimately depends on your proficiency and project requirements.

Built-in Selenium Proxy Configuration

To begin, let's explore how to utilize Selenium's built-in functionality to establish a proxy connection. First, we'll import the necessary libraries:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

Next, we'll create a variable to store the proxy data:

proxy_server = "proxy_address:port"

Let's create an options object and populate it with the proxy information:

options = Options()
options.add_argument('--proxy-server=%s' % proxy_server)

Next, create an instance of the web driver with specified options:

driver = webdriver.Chrome(options=options)

The remainder of the process for working with a web driver remains the same as described in our article on Selenium scraping.

Utilizing Third-party Libraries

For more advanced proxy management, including intercepting and modifying network requests, you can use third-party libraries like Selenium Wire. To use it, you need to install an additional module:

pip install selenium-wire

To use this package, you need to have Python 3.7 or higher and Selenium 4.0.0 or higher installed on your computer. We will replace the import of the webdriver and pass the proxy settings through Selenium Wire's own options dictionary:

from seleniumwire import webdriver

proxy_server = "proxy_address:port"

# Selenium Wire accepts proxy settings via the seleniumwire_options argument
seleniumwire_options = {
    'proxy': {
        'http': 'http://' + proxy_server,
        'https': 'https://' + proxy_server,
    }
}

driver = webdriver.Chrome(seleniumwire_options=seleniumwire_options)

Once this is done, page navigation and data processing become possible.
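
Selenium Wire also records the requests the browser makes, which is handy for confirming that traffic really flows through the proxy. A minimal sketch using its request-capture API:

# Navigate, then inspect the captured requests
driver.get('https://httpbin.org/ip')

for request in driver.requests:
    if request.response:
        print(request.url, request.response.status_code)

driver.quit()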

Configuring HTTP, HTTPS, and SOCKS5 Proxies

Proxy configuration in Selenium WebDriver allows you to route your web traffic through a proxy server using different protocols. This section will not delve into the details of these protocols but will focus on how to use proxies regardless of the protocol.

HTTP and HTTPS proxies work in much the same way, except that the connection to an HTTPS proxy is encrypted while the connection to an HTTP proxy is not. They are also configured almost identically; just pick one scheme or the other:

proxy_server = "116.203.28.43:80"

options = Options()

# Use one of these, not both; the scheme tells Chrome which protocol to use
options.add_argument('--proxy-server=http://%s' % proxy_server)
# options.add_argument('--proxy-server=https://%s' % proxy_server)

Unlike HTTP and HTTPS proxies, SOCKS5 proxies can also handle UDP traffic, making them more versatile. To use one, specify the scheme when configuring the options:

socks5_proxy = "proxy_address:1080"  # substitute your own host:port

options.add_argument('--proxy-server=socks5://%s' % socks5_proxy)

As you can see, Selenium supports all of these proxy types, and they are configured in the same way. The only difference is the scheme you prefix to the proxy address.
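
One caveat with SOCKS5 in Chrome: by default, DNS lookups are still performed locally. Chromium's proxy documentation recommends pairing the proxy flag with --host-resolver-rules so that name resolution also goes through the proxy; here is a sketch with a placeholder address:

from selenium import webdriver

socks5_proxy = "proxy_address:1080"  # placeholder host:port

options = webdriver.ChromeOptions()
options.add_argument('--proxy-server=socks5://%s' % socks5_proxy)
# Fail local DNS resolution for everything except the proxy host itself,
# forcing lookups through the SOCKS5 proxy (per Chromium's proxy docs)
options.add_argument('--host-resolver-rules=MAP * ~NOTFOUND , EXCLUDE %s'
                     % socks5_proxy.split(':')[0])

driver = webdriver.Chrome(options=options)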

Using Selenium with a Proxy

Let's explore examples of using proxies with and without authentication. To make the examples more illustrative, we will make requests to httpbin.org, which returns a JSON response containing the IP address it sees. This lets us verify that the proxy is actually working.

Unauthenticated Proxies

Unauthenticated proxies are those that don't require a username and password for access; most free public proxies fall into this category. This is the type of proxy used in the previous examples. While convenient, they are often unreliable and easily blocked.

Let's modify one of the previously discussed scripts to access the website httpbin. This will demonstrate how to use a free proxy to make a request:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

proxy_server = "116.203.28.43:80"

options = Options()
options.add_argument('--proxy-server=http://%s' % proxy_server)
driver = webdriver.Chrome(options=options)
driver.get('https://httpbin.org/ip')

Next, we will retrieve the entire web page content and display it on the screen:

page_source = driver.page_source
print("Page source:", page_source)

Ensure the web driver is properly closed at the end of the script:

driver.quit()

Upon running the script, a browser window controlled by the webdriver will open, and the result will be displayed in the command line or terminal:

The result

To test this script, you can utilize our list of free and up-to-date proxies.

Authenticated Proxies

Proxy authentication in Selenium involves providing credentials (username and password) to access the proxy before it can be used to route web traffic. These are typically specified in a URL format like:

http://username:password@proxy_address:port

Instead of HTTP, you can specify another protocol type, such as HTTPS or SOCKS5. Let's take the previous example and use an authenticated proxy:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

proxy_server = "hasdata:^G*[email protected]:3132"

options = Options()
options.add_argument('--proxy-server=https://%s' % proxy_server)
driver = webdriver.Chrome(options=options)

driver.get('https://httpbin.org/ip')

page_source = driver.page_source
print("Page source:", page_source)
driver.quit()

As a result we will get:

The result

Using an authenticated proxy is significantly more secure, as it eliminates the possibility of unauthorized access by third parties. This enhanced security stems from the implementation of an authentication mechanism that verifies user credentials before granting access to the proxy server.
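
One practical caveat: stock Chrome builds typically ignore credentials embedded in the --proxy-server flag and pop up an authentication dialog instead, which stalls an automated script. If you run into this, Selenium Wire (covered earlier) passes the credentials along reliably; a minimal sketch with placeholder credentials:

from seleniumwire import webdriver

# Placeholder credentials and address; substitute your own
proxy = 'http://username:password@proxy_address:port'

seleniumwire_options = {
    'proxy': {
        'http': proxy,
        'https': proxy,
    }
}

driver = webdriver.Chrome(seleniumwire_options=seleniumwire_options)
driver.get('https://httpbin.org/ip')
print(driver.page_source)
driver.quit()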

Advanced Topics

In addition to the basic examples of working with proxies, let's explore advanced topics that may require additional skills and knowledge but can significantly enhance your script's capabilities when using proxies in Selenium. 

Debugging

Debugging is an essential part of script development, as it helps identify and fix errors and analyze the script's behavior in different scenarios. For instance, to prevent the script from halting if an error occurs during execution (a non-functional proxy, an exceeded timeout, or any other reason), you can employ a try/except block:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

proxy_server = "193.242.145.106:3132"
options = Options()
options.add_argument('--proxy-server=http://%s' % proxy_server)

driver = None
try:
    driver = webdriver.Chrome(options=options)
    driver.get('https://httpbin.org/ip')
    ip_json = driver.find_element(By.TAG_NAME, 'pre').text
    print(ip_json)
except Exception as e:
    print(e)
finally:
    # Guard against the case where the driver failed to start
    if driver:
        driver.quit()

The provided code captures and displays all encountered errors. However, you can customize error handling to suit your needs. For instance, you can filter errors to display only network errors or specific error codes. Additionally, you can tailor the output to display only the error code or other relevant information.

The finally block ensures that the browser is closed regardless of any errors or exceptions. Additionally, extracting only the text content from the page eliminates unnecessary information and streamlines the process.

Incorporating logging into the script further enhances error tracking and debugging. Utilize a logging library to record errors, their descriptions, and relevant timestamps. This structured log can be analyzed to identify patterns, recurring issues, and areas for improvement:

import logging

logging.basicConfig(level=logging.DEBUG)
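
For example, a minimal configuration that writes timestamped errors to a log file (the file name and message format here are illustrative choices):

import logging

logging.basicConfig(
    filename='scraper.log',  # illustrative log file name
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)

# Inside the except block of the previous example you could then write:
# logging.error("Proxy %s failed: %s", proxy_server, e)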

By implementing these enhancements, you can create robust Selenium scripts that effectively handle errors, provide valuable insights, and streamline the debugging process.

Proxy Rotation

Proxy rotation is a technique that involves periodically changing the proxy server used to make requests. This can be useful for bypassing website blocks, increasing the reliability of requests, and protecting your anonymity. You can either purchase rotating proxies or implement a proxy rotation system from a pool of IPs.

With proxy rotation, you have a pool of available proxies, and you cycle through them for each request. This reduces the number of requests coming from the same IP address, making it appear to the target website that the requests are coming from different devices. 

To implement proxy rotation, you can employ various strategies, including:

  1. Changing proxies after each request. This method offers the highest level of anonymity but may not be suitable for high-volume requests.
  2. Changing proxies after a specific number of requests. This approach balances anonymity with performance, making it suitable for moderate traffic scenarios (a sketch of this strategy follows the list).
  3. Selecting random proxies for each request. This strategy provides a balance between anonymity and efficiency, making it ideal for general-purpose applications.
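
Here is a brief sketch of the second strategy: rotating to the next proxy after a fixed number of requests. The threshold and the list of URLs are illustrative, and since plain Selenium fixes the proxy for the lifetime of the browser, rotating means restarting the driver:

import itertools
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

proxy_pool = itertools.cycle([
    "116.203.28.43:80",
    "117.250.3.58:8080",
])

REQUESTS_PER_PROXY = 10  # illustrative threshold
urls = ['https://httpbin.org/ip'] * 25  # illustrative workload

driver = None
for i, url in enumerate(urls):
    # Move to the next proxy every REQUESTS_PER_PROXY requests
    if i % REQUESTS_PER_PROXY == 0:
        if driver:
            driver.quit()  # close the browser tied to the previous proxy
        current_proxy = next(proxy_pool)
        options = Options()
        options.add_argument('--proxy-server=http://%s' % current_proxy)
        driver = webdriver.Chrome(options=options)
    driver.get(url)
    print(driver.find_element(By.TAG_NAME, 'pre').text)

driver.quit()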

Let's implement the last option. First, we will import the necessary libraries and modules and declare a variable holding a list of proxies:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
import random

proxy_servers = [
    "116.203.28.43:80",
    "117.250.3.58:8080",
    "111.206.0.99:8181"
]

To make the example more illustrative, let's define a loop that will iterate five times:

for i in range(5):

Make requests to the httpbin website, randomly selecting proxies from a pool of available ones:

    driver = None
    try:
        options = Options()
        options.add_argument("--proxy-server=http://{}".format(random.choice(proxy_servers)))
        driver = webdriver.Chrome(options=options)

        driver.get('https://httpbin.org/ip')
        ip_json = driver.find_element(By.TAG_NAME, 'pre').text
        print(ip_json)
    except Exception as e:
        print(e)
    finally:
        # Guard against the case where the driver failed to start
        if driver:
            driver.quit()

Run the script and get the result:

The result

In this way, the script randomly selects proxies from a list each time and makes a request. This approach improves scraping quality and increases the reliability of your scripts.

Conclusion

In this article, we explored the fundamental principles of using proxies with Selenium, enabling you to mask your actual IP address while scraping data and automating browser actions. This approach offers enhanced security and anonymity on the internet, reducing the risk of your real IP address being blocked.

Proxies can also be beneficial in bypassing geo-restrictions, request limits, and other restrictions imposed by websites. Proxy rotation, on the other hand, can enhance the reliability and anonymity of your script while ensuring even load distribution across proxy servers. While proxies offer these advantages, using them effectively can be challenging, especially when dealing with complex scraping tasks. For a hassle-free and reliable scraping experience, consider using HasData’s web scraping API.

Valentina Skakun

I'm a technical writer who believes that data parsing can help in collecting and analyzing data. I'll tell you about what parsing is and how to use it.