Web Scraping with Go: Complete Guide 2024
Go, also known as Golang, is an open-source programming language developed by Google. With its avoidance of complex language constructs and a minimal set of keywords and built-in types, Go has gained popularity even among beginners.
Furthermore, Go provides built-in support for concurrency through goroutines and channels. Goroutines are lightweight threads managed by the Go runtime, while channels provide a safe means of communication and data synchronization between them. Together they make it easy to write concurrent, scalable programs.
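Purely as an illustration of goroutines and channels (this snippet is not part of the scrapers below), here is a minimal sketch that fetches two pages concurrently and collects their HTTP statuses through a channel:

package main

import (
    "fmt"
    "net/http"
)

// fetchStatus requests a URL and sends the result to the shared channel.
func fetchStatus(url string, results chan<- string) {
    resp, err := http.Get(url)
    if err != nil {
        results <- url + ": " + err.Error()
        return
    }
    defer resp.Body.Close()
    results <- url + ": " + resp.Status
}

func main() {
    urls := []string{"https://example.com", "https://demo.opencart.com/"}
    results := make(chan string)
    // Each request runs in its own goroutine; the channel safely collects the results.
    for _, u := range urls {
        go fetchStatus(u, results)
    }
    for range urls {
        fmt.Println(<-results)
    }
}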
In addition to its simplicity and built-in support for parallelism, Go boasts many available libraries and tools that greatly enhance its capabilities. These resources address various areas, including web scraping, allowing developers to efficiently extract and process data from websites and online sources.
Getting Started with Go
Before scraping data step-by-step and exploring various libraries, let’s prepare and set up our environment. First and foremost, let’s install Git so we can directly fetch Go libraries from GitHub. Download the required version from the official website and install it. If you are a beginner and haven’t used Git before, we recommend keeping the default settings unchanged.
Now it's time to actually install Go. Simply visit the official Go website, download the installation file, and follow the instructions provided by the installer.
To confirm that Go was installed successfully, run the command "go version":
C:\Scripts>go version
go version go1.20.5 windows/amd64
You can use any text editor to write code, but it’s better to use specialized tools for convenience and syntax highlighting. We will use Visual Studio Code.
Now that the environment is set up, let's look at the example pages and the Go libraries for scraping. We will cover how to install each library and build simple scrapers with it to showcase its functionality.
Inspecting the Target Website
Before scraping web pages, it's important to analyze the target website to find exactly where the information we need is located. We should know which tags and classes contain the required elements. For example, we can look for data inside <div> tags with a specific class, or use other selectors to pinpoint the desired information on the page. As examples, we will use two websites: example.com and a demo online store.
Example Page
To analyze the webpage “example.com,” we must study its structure and content. By examining the page’s HTML code, we can identify the tags and classes that contain the necessary information. To do this, go to the page and open the DevTools (press F12 or right-click on the screen and select “Inspect”).
As we can see, the page title is located within the h1 tag, and the rest of the text is stored within the p tags. We can now use CSS selectors or XPATH to extract the desired information.
Example Store
This website has much more data and a structure closer to reality. Each item is a div element with the class "col", and inside this div you can find the following information:
The image is inside a div tag with the class "image", in a nested "a" tag with an "href" attribute.
The product name is in an "h4" tag. The link to the product page is in a nested "a" tag (with an "href" attribute) inside it.
The product description is in a "p" tag.
The price is in a div tag with the class "price", which contains the following nested tags:
The original price is in a "span" tag with the class "price-old".
The discounted price is in a "span" tag with the class "price-new".
The tax information is in a "span" tag with the class "price-tax".
Now that we know the structure of both websites we will be scraping, we can select the libraries.
Best Go Web Scraping Libraries
Go has a growing and active community of developers and a strong ecosystem of libraries and frameworks that covers a wide range of applications, including web development, networking, databases, and web scraping. Today, however, let's focus on three popular Go libraries for scraping: GoQuery, Colly, and Pholcus.
Depending on the chosen library, you can use simple request functions or more advanced features like dynamic page rendering, regular expression extraction, and distributed scraping.
To make Golang web scraping easier for beginners and provide them with versatile examples, in addition to the three listed libraries, we will also demonstrate the usage of our web scraping API. It allows you to extract data automatically, handles proxy usage, solves JavaScript rendering issues, and bypasses captchas and blocks.
Get Data with HasData API
The web scraping API with rotating proxies from HasData has some great benefits. It makes scraping data from websites easier because you don’t need to worry about proxies. The API is easy to use and works with different programming languages such as Golang, Python, or NodeJS.
It can handle websites that use JavaScript so that you can scrape dynamic content. The rotating proxies feature helps you stay anonymous and avoid getting blocked. The service can handle both small and large-scale scraping tasks. The pricing plans are flexible; you can even try them for free. Overall, it’s a simple and convenient solution for data extraction from websites.
Prepare to Scrape
We will use the net/http library to make HTTP requests to the HasData API. net/http is a built-in package of the Go standard library, so it requires no separate installation: once Go is installed on your system, you can use it in your programs right away.
We will also need an API key, which you can get after signing up at HasData, along with a few free credits to use the API.
Usage Example
Let’s get the data for example.com first. To begin, declare the libraries:
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"strings"
)
We will write all the code inside the main function, so let’s declare it:
func main() { }
Now, in this function, we need to access the HasData API with the necessary parameters, get a JSON response with the data, and display the required information on the screen.
Let’s declare the request type and set its parameters (request body):
url := "https://api.hasdata.com/scrape"
method := "POST"
payload := strings.NewReader(`{
"extract_rules": {
"Title": "h1",
"Description": "p"
},
"wait": 0,
"screenshot": true,
"block_resources": false,
"url": "https://example.com/"
}`)
client := &http.Client{}
req, err := http.NewRequest(method, url, payload)
if err != nil {
fmt.Println(err)
return
}
In addition to the request body, we will also declare the request headers.
req.Header.Add("x-api-key", "YOUR-API-KEY")
req.Header.Add("Content-Type", "application/json")
Next, we can make a request and retrieve the data:
res, err := client.Do(req)
if err != nil {
fmt.Println(err)
return
}
defer res.Body.Close()
Now we just need to parse the JSON we received and extract only the elements located at ["scrapingResult"]["extractedData"]["Title"] and ["scrapingResult"]["extractedData"]["Description"]. To make sure we're targeting the right attributes, you can print the JSON response to the screen or refer to the example response in our API documentation.
body, err := ioutil.ReadAll(res.Body)
if err != nil {
fmt.Println(err)
return
}
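// To inspect the raw JSON mentioned above, you can print it here before parsing:
// fmt.Println(string(body))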
var response map[string]interface{}
err = json.Unmarshal(body, &response)
if err != nil {
fmt.Println(err)
return
}
if response["status"] != "ok" {
fmt.Println("Error: Request failed")
return
}
scrapingResult := response["scrapingResult"].(map[string]interface{})
extractedData := scrapingResult["extractedData"].(map[string]interface{})
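Note that these type assertions will panic if the response does not have the expected structure. If you prefer the script to fail gracefully instead, you can use the comma-ok form of the assertion, for example:

scrapingResult, ok := response["scrapingResult"].(map[string]interface{})
if !ok {
    fmt.Println("Error: unexpected response format")
    return
}
extractedData, ok := scrapingResult["extractedData"].(map[string]interface{})
if !ok {
    fmt.Println("Error: unexpected response format")
    return
}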
Now all we have to do is to display these variables on the screen:
fmt.Println("Title:", extractedData["Title"])
fmt.Println("Description:", extractedData["Description"])
To show how simple this approach is, let's also gather data from the second website. We won't need significant changes to the code: we only have to replace the request body and add variables for the new data. We'll skip the parts of the code that stay the same and provide the complete script for the product website at the end.
So, let’s modify the request body:
payload := strings.NewReader(`{
"extract_rules": {
"Title":"h4",
"Link":"h4 > a @href",
"Description":"p",
"Old":"span.price-old",
"New":"span.price-new",
"Tax":"span.price-tax",
"Image":".image > a @href"
},
"wait": 0,
"screenshot": true,
"block_resources": false,
"url": "https://demo.opencart.com/"
}`)
Add the output of new data at the end:
fmt.Println("Titles:", extractedData["Title"])
fmt.Println("Links:", extractedData["Link"])
fmt.Println("Descriptions:", extractedData["Description"])
fmt.Println("Old Prices:", extractedData["Old"])
fmt.Println("New Prices:", extractedData["New"])
fmt.Println("Taxes:", extractedData["Tax"])
fmt.Println("Images:", extractedData["Image"])
That’s all the script changes done. Let’s run it and get all the data we need.
D:\scripts>go run scraper.go
Titles: [MacBook iPhone Apple Cinema 30" Canon EOS 5D]
Links: [https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=43 https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=40 https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=42 https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=30]
Descriptions: [Your shopping cart is empty!
Intel Core 2 Duo processor
Powered by an Intel Core 2 Duo processor at speeds up to 2.1..
iPhone is a revolutionary new mobile phone that allows you to make a call by simply tapping a nam..
The 30-inch Apple Cinema HD Display delivers an amazing 2560 x 1600 pixel resolution. Designed sp..
Canon's press material for the EOS 5D states that it 'defines (a) new D-SLR category', while we'r.. Powered By OpenCart Your Store © 2023]
Old Prices: [$122.00 $122.00]
New Prices: [$602.00 $123.20 $110.00 $98.00]
Taxes: [Ex Tax: $500.00 Ex Tax: $101.00 Ex Tax: $90.00 Ex Tax: $80.00]
Images: [https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=43 https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=40 https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=42 https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=30]
Full code:
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"strings"
)
func main() {
url := "https://api.hasdata.com/scrape"
method := "POST"
payload := strings.NewReader(`{
"extract_rules": {
"Title":"h4",
"Link":"h4 > a @href",
"Description":"p",
"Old":"span.price-old",
"New":"span.price-new",
"Tax":"span.price-tax",
"Image":".image > a @href"
},
"wait": 0,
"screenshot": true,
"block_resources": false,
"url": "https://demo.opencart.com/"
}`)
client := &http.Client{}
req, err := http.NewRequest(method, url, payload)
if err != nil {
fmt.Println(err)
return
}
req.Header.Add("x-api-key", "YOUR-API-KEY")
req.Header.Add("Content-Type", "application/json")
res, err := client.Do(req)
if err != nil {
fmt.Println(err)
return
}
defer res.Body.Close()
body, err := ioutil.ReadAll(res.Body)
if err != nil {
fmt.Println(err)
return
}
var response map[string]interface{}
err = json.Unmarshal(body, &response)
if err != nil {
fmt.Println(err)
return
}
if response["status"] != "ok" {
fmt.Println("Error: Request failed")
return
}
scrapingResult := response["scrapingResult"].(map[string]interface{})
extractedData := scrapingResult["extractedData"].(map[string]interface{})
fmt.Println("Titles:", extractedData["Title"])
fmt.Println("Links:", extractedData["Link"])
fmt.Println("Descriptions:", extractedData["Description"])
fmt.Println("Old Prices:", extractedData["Old"])
fmt.Println("New Prices:", extractedData["New"])
fmt.Println("Taxes:", extractedData["Tax"])
fmt.Println("Images:", extractedData["Image"])
}
As you can see, it’s pretty straightforward, and even beginners can modify this example for their own purposes.
Easy Parsing with GoQuery
GoQuery is a popular Go library that provides a convenient way to parse HTML or XML documents and extract data using CSS selectors. Its API is modeled after jQuery, the widely used JavaScript library for manipulating and traversing HTML documents.
Install GoQuery Library
First of all, to use the GoQuery library, we need to install it. You can do this by using the following command in the terminal:
go get github.com/PuerkitoBio/goquery
And now you can use it in your projects.
Usage Example
As in the previous example, let’s start by including the necessary libraries and declaring the main function:
package main
import (
"fmt"
"log"
"net/http"
"github.com/PuerkitoBio/goquery"
)
func main() { }
Let’s make a request and save the data we receive into a variable. It’s also important not to forget error checking.
url := "https://example.com"
resp, err := http.Get(url)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
doc, err := goquery.NewDocumentFromReader(resp.Body)
if err != nil {
log.Fatal(err)
}
Now, using the built-in functions of the GoQuery library, let’s find the desired data using CSS selectors.
doc.Find("h1").Each(func(_ int, s *goquery.Selection) {
fmt.Println(s.Text())
})
As a result, we will obtain the page title.
D:\scripts>go run scraper.go
Example Domain
Full code:
package main
import (
"fmt"
"log"
"net/http"
"github.com/PuerkitoBio/goquery"
)
func main() {
url := "https://example.com"
resp, err := http.Get(url)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
doc, err := goquery.NewDocumentFromReader(resp.Body)
if err != nil {
log.Fatal(err)
}
doc.Find("h1").Each(func(_ int, s *goquery.Selection) {
fmt.Println(s.Text())
})
}
We have already covered scraping example.com with both the API and this library, and since that example is very simple, we won't revisit it. Instead, let's write a scraper for the demo store.
The imports, like most of the script, remain unchanged. We just need to replace the page URL and the selectors used to extract the data. First, let's change the URL:
url := "https://demo.opencart.com/"
We need to iterate over each product card and extract the elements we need from it. To keep things simple, we will store the data in variables and print them to the screen.
doc.Find("div.col").Each(func(_ int, s *goquery.Selection) {
image := s.Find(".image a").AttrOr("href", "")
productName := s.Find("h4 a").Text()
productLink := s.Find("h4 a").AttrOr("href", "")
description := s.Find("p").Text()
oldPrice := s.Find(".price-old").Text()
newPrice := s.Find(".price-new").Text()
tax := s.Find(".price-tax").Text()
fmt.Println("Image:", image)
fmt.Println("Product Name:", productName)
fmt.Println("Product Link:", productLink)
fmt.Println("Description:", description)
fmt.Println("Old Price:", oldPrice)
fmt.Println("New Price:", newPrice)
fmt.Println("Tax:", tax)
fmt.Println()
})
The script displays all the products, leaving an empty line between them for convenience. Here is an example of the output for one of the items:
Image: https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=30
Product Name: Canon EOS 5D
Product Link: https://demo.opencart.com/index.php?route=product/product&language=en-gb&product_id=30
Description:
Canon's press material for the EOS 5D states that it 'defines (a) new D-SLR category', while we'r..
Old Price: $122.00
New Price: $98.00
Tax: Ex Tax: $80.00
As you can see, creating a scraping script in Go using the GoQuery library is a relatively simple task.
However, GoQuery has some drawbacks that may push you toward more feature-rich tools. One of them is that GoQuery supports only a limited set of CSS selectors, which can restrict how precisely you can target data in an HTML document.
Additionally, GoQuery only works with static HTML code. If your target page uses dynamic JavaScript to generate or modify content, GoQuery won’t be able to handle that content since it doesn’t have a built-in JavaScript engine.
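If you do need data from a JavaScript-rendered page, a common workaround is to render it with a headless browser first and then hand the resulting HTML to GoQuery. Below is a rough sketch of this idea using the chromedp package (an extra dependency, installed with go get github.com/chromedp/chromedp, that is not part of the setup above); treat it as a starting point rather than a drop-in solution:

package main

import (
    "context"
    "fmt"
    "log"
    "strings"

    "github.com/PuerkitoBio/goquery"
    "github.com/chromedp/chromedp"
)

func main() {
    // Start a headless Chrome session.
    ctx, cancel := chromedp.NewContext(context.Background())
    defer cancel()

    // Navigate to the page and grab the fully rendered HTML.
    var html string
    err := chromedp.Run(ctx,
        chromedp.Navigate("https://demo.opencart.com/"),
        chromedp.OuterHTML("html", &html),
    )
    if err != nil {
        log.Fatal(err)
    }

    // Parse the rendered HTML with GoQuery as usual.
    doc, err := goquery.NewDocumentFromReader(strings.NewReader(html))
    if err != nil {
        log.Fatal(err)
    }
    doc.Find("h4 a").Each(func(_ int, s *goquery.Selection) {
        fmt.Println(s.Text())
    })
}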
So, let’s go to the next library.
Fast Scraping with Colly
The Colly library is another popular tool for scraping web pages in Go. It supports navigating pages, extracting data, handling errors, working with forms, and more.
Additionally, Colly allows for the asynchronous execution of web page requests. This means you can scan and process multiple pages simultaneously, improving performance and reducing scraping time.
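For example, here is a minimal sketch (separate from the main example below) that enables asynchronous collection and caps the number of parallel requests; the selector and URLs are just placeholders:

package main

import (
    "fmt"
    "log"

    "github.com/gocolly/colly/v2"
)

func main() {
    // Async(true) makes Visit return immediately; requests run in parallel.
    c := colly.NewCollector(colly.Async(true))

    // Limit parallelism so the target site is not overwhelmed.
    if err := c.Limit(&colly.LimitRule{DomainGlob: "*", Parallelism: 2}); err != nil {
        log.Fatal(err)
    }

    c.OnHTML("h4 a", func(e *colly.HTMLElement) {
        fmt.Println(e.Text)
    })

    for _, u := range []string{"https://demo.opencart.com/", "https://example.com"} {
        if err := c.Visit(u); err != nil {
            log.Println(err)
        }
    }

    // Wait blocks until all asynchronous requests have finished.
    c.Wait()
}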
Note, however, that Colly does not ship with a JavaScript engine, so it cannot render dynamic content on its own; for JavaScript-heavy pages it is usually paired with a headless browser (such as chromedp, sketched earlier) that produces the rendered HTML. On top of that, Colly offers user-friendly methods for page navigation, following links, submitting forms, and other website interactions.
Install Colly Library
Colly is built on top of Go's standard net/http client and does not depend on any external browser or JavaScript runtime. So, to use Colly, we just need to install the library:
go get -u github.com/gocolly/colly/v2
Now we can start writing a scraper using Colly.
Usage Example
As mentioned before, we won’t be using example.com anymore, so let’s jump straight into scraping a product page from a test online store. To begin, let’s declare libraries, specify the webpage address and create a collector that will be used for navigation and gathering data from the webpage.
package main
import (
"fmt"
"log"
"github.com/gocolly/colly/v2"
)
func main() {
url := "https://demo.opencart.com/"
c := colly.NewCollector()
}
Next, we will define event handlers for different elements on the page using the OnHTML method. In these handlers, we specify which data we need to extract from the corresponding HTML elements.
c.OnHTML("div.col", func(e *colly.HTMLElement) {
image := e.ChildAttr("div.image a", "href")
productName := e.ChildText("h4 a")
productLink := e.ChildAttr("h4 a", "href")
description := e.ChildText("p")
oldPrice := e.ChildText(".price-old")
newPrice := e.ChildText(".price-new")
tax := e.ChildText(".price-tax")
fmt.Println("Image:", image)
fmt.Println("Product Name:", productName)
fmt.Println("Product Link:", productLink)
fmt.Println("Description:", description)
fmt.Println("Old Price:", oldPrice)
fmt.Println("New Price:", newPrice)
fmt.Println("Tax:", tax)
fmt.Println()
})
After defining the handlers, we call the Visit method to send the collector to the URL. At this point, the collector requests the page, triggers the relevant handlers, and extracts the data.
err := c.Visit(url)
if err != nil {
log.Fatal(err)
}
As a result, we get the same output as with the GoQuery library, but with more speed and functionality. That power also makes Colly slightly more complex than GoQuery, which remains a good fit for parsing simple, static pages.
Functional Scraping with Pholcus Framework
Pholcus framework (also known as “Pholcus-WebCrawler”) is a versatile web scraping framework built with the Go programming language. Its purpose is to simplify the creation and management of web scrapers.
Pholcus offers a comprehensive set of tools for extracting data from web pages. It supports techniques such as regular expressions, XPath, and CSS selectors. Additionally, the framework provides the ability to use proxy servers, which can help scrape websites with IP restrictions or ensure anonymity.
Install Pholcus Framework
To install Pholcus and its dependencies, you can use the following command in the command prompt:
go get -u github.com/henrylee2cn/pholcus
Once installed, you can import and use the library in your scripts.
Usage Example
Using this framework is very similar to using the Colly library, so if you have followed the previous example, you won't have any difficulty writing a similar Go scraper or crawler with Pholcus.
To begin, let’s import the necessary libraries and create the main function:
package main
import (
"fmt"
"github.com/henrylee2cn/pholcus/app"
"github.com/henrylee2cn/pholcus/logs"
)
func main() { }
Now let's set the parameters for the task, such as the base URLs that will be scraped:
task := app.NewTask()
task.SetBaseUrls("https://demo.opencart.com/")
Next, we create a collector and define handlers for the HTML elements we want to extract. Inside each handler, we retrieve the specific data we need:
collector := app.NewCollector()
collector.OnHTML("div.col", func(element *app.HTMLElement) {
image := element.ChildAttr(".image a", "href")
productName := element.ChildText("h4 a")
productLink := element.ChildAttr("h4 a", "href")
description := element.ChildText("p")
oldPrice := element.ChildText(".price-old")
newPrice := element.ChildText(".price-new")
tax := element.ChildText(".price-tax")
fmt.Println("Image:", image)
fmt.Println("Product Name:", productName)
fmt.Println("Product Link:", productLink)
fmt.Println("Description:", description)
fmt.Println("Old Price:", oldPrice)
fmt.Println("New Price:", newPrice)
fmt.Println("Tax:", tax)
fmt.Println()
})
Once the task and the collector are set up, we initiate the task execution using the function app.Run(task). The task will trigger requests to the specified URLs and process the received responses according to the defined handlers.
task.Collector(collector)
if err := app.Run(task); err != nil {
logs.Log.Error(err)
}
This way, we obtained the same data but used a new tool. Unfortunately, beginners may find it quite challenging to use this framework because it has fewer examples, less detailed documentation, and a much smaller community than GoQuery or Colly.
Data Storage and Processing
To round off these examples, let's look at how to save the retrieved data to a CSV file. We'll take the Colly script we wrote earlier as a starting point. Since we no longer need to print the data to the screen, we'll remove the output statements and instead append each product's data, row by row, to a variable for later writing to a file.
package main
import (
"encoding/csv"
"fmt"
"log"
"os"
"github.com/gocolly/colly/v2"
)
func main() {
url := "https://demo.opencart.com/"
c := colly.NewCollector()
var data [][]string
c.OnHTML("div.col", func(e *colly.HTMLElement) {
image := e.ChildAttr("div.image a", "href")
productName := e.ChildText("h4 a")
productLink := e.ChildAttr("h4 a", "href")
description := e.ChildText("p")
oldPrice := e.ChildText(".price-old")
newPrice := e.ChildText(".price-new")
tax := e.ChildText(".price-tax")
data = append(data, []string{image, productName, productLink, description, oldPrice, newPrice, tax})
})
err := c.Visit(url)
if err != nil {
log.Fatal(err)
}
Next, create a CSV file and set ";" as the delimiter. We'll also write a header row with the column names and then save the collected rows to the file.
file, err := os.Create("data.csv")
if err != nil {
log.Fatal(err)
}
defer file.Close()
writer := csv.NewWriter(file)
writer.Comma = ';'
defer writer.Flush()
header := []string{"Image", "Product Name", "Product Link", "Description", "Old Price", "New Price", "Tax"}
err = writer.Write(header)
if err != nil {
log.Fatal(err)
}
err = writer.WriteAll(data)
if err != nil {
log.Fatal(err)
}
After running the script, we obtain a CSV file saved in the same folder as the script, containing all the gathered data.
Usually, at this stage, in addition to saving the data, we also process it: removing empty cells, fixing incorrect values, and stripping unnecessary symbols. This step is called data cleaning, and it largely determines the quality of the output data and the accuracy of any analysis based on it.
This is particularly important for those who work with large amounts of data or conduct analysis for making important decisions. Clean and accurate data allows for more reliable results and informed decision-making.
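As a simple illustration, here is a small sketch of such a cleaning pass over rows like the ones collected above; it collapses stray whitespace in every cell and drops rows whose cells are all empty (the helper name and sample data are purely illustrative):

package main

import (
    "fmt"
    "strings"
)

// cleanRows normalizes whitespace in every cell and drops completely empty rows.
func cleanRows(rows [][]string) [][]string {
    var cleaned [][]string
    for _, row := range rows {
        empty := true
        for i, cell := range row {
            row[i] = strings.Join(strings.Fields(cell), " ")
            if row[i] != "" {
                empty = false
            }
        }
        if !empty {
            cleaned = append(cleaned, row)
        }
    }
    return cleaned
}

func main() {
    data := [][]string{
        {"  MacBook ", "Intel Core 2 Duo\nprocessor", "$602.00"},
        {"", "", ""},
    }
    fmt.Println(cleanRows(data)) // [[MacBook Intel Core 2 Duo processor $602.00]]
}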
Best Practices and Tips
Following a few simple rules during web scraping makes the process easier and safer. It's also preferable to scrape a resource during its least busy hours to avoid overwhelming it: not every site can handle heavy traffic, and some may go down under the load.
Use Go Functions Effectively
Use meaningful names that accurately describe the purpose of variables, functions, and types. This improves code readability and makes it easier for others to understand your code.
Also, use goroutines and channels to run independent requests concurrently; this is what makes Go scrapers efficient at scale.
Organize your code into reusable packages to enhance code modularity and facilitate reusability.
Utilize Well-Documented Libraries
In general, the better the documentation, the larger the user community for that library. This means you can always find usage examples of the functions you need and support from real users if you encounter problems you can’t solve alone.
Follow Efficient Scraping Rules
If you don't follow certain rules, you may run into issues such as IP blocks or captchas. It's therefore worth adding small delays between requests, setting a realistic User-Agent header, avoiding scraping excessive amounts of data, and using proxies where appropriate.
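As a rough sketch of what such a polite setup might look like with Colly (the User-Agent string and delay values here are arbitrary examples, not recommendations):

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/gocolly/colly/v2"
)

func main() {
    // Identify the client with a realistic User-Agent header.
    c := colly.NewCollector(
        colly.UserAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"),
    )

    // Add a fixed delay plus a random component between requests to the same domain.
    if err := c.Limit(&colly.LimitRule{
        DomainGlob:  "*",
        Delay:       2 * time.Second,
        RandomDelay: 1 * time.Second,
    }); err != nil {
        log.Fatal(err)
    }

    c.OnHTML("h4 a", func(e *colly.HTMLElement) {
        fmt.Println(e.Text)
    })

    if err := c.Visit("https://demo.opencart.com/"); err != nil {
        log.Fatal(err)
    }
}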
Alternatively, these issues can be handled by a web scraping API, such as the Google SERP API for scraping Google search results, which takes care of bypassing blocks during scraping.
Conclusion and Takeaways
Golang is a powerful and beginner-friendly programming language developed by Google. Its simplicity, built-in support for concurrency, and vast collection of libraries make it an excellent choice for web scraping tasks.
With popular libraries like GoQuery, Colly, and Pholcus, the HasData API, and this tutorial, developers can easily extract and process data from websites. Whether you are a beginner or an experienced developer, Go provides a seamless and efficient environment for web scraping projects.