How to Scrape Google Trends using Python

Last edit: Jul 11, 2024

Google Trends stands out as an invaluable tool for gathering insights into search queries. It empowers users to stay ahead of trends, identify emerging keywords and themes, and gain a deeper understanding of current audience interests.

However, effectively harnessing Google Trends often entails processing a substantial volume of keywords and queries within a short timeframe. This task can prove challenging when done manually. To address this, we'll delve into various methods for data scraping automation, catering to users with varying skill levels.


Recognizing that skill levels vary, we will present a range of options, from those that require no programming expertise to those that necessitate a solid understanding of programming and script creation. 

Google Trends offers a wealth of data with diverse applications across various fields. Before delving into automated data collection methods, let's explore the potential uses and applications of Google Trends data.

Market Research

One of the most common and effective ways to utilize Google Trends data is for market research. Google Trends offers valuable insights into search query volumes across different regions and time periods.

Analyzing search trends over time can reveal valuable information about the fluctuating interest in a product or service. This information can be used to strategically plan marketing campaigns during periods of peak interest. Additionally, Google Trends allows for comparative analysis of brand popularity, enabling businesses to identify industry leaders and potential areas of growth.

Regional search data can also be leveraged to understand geographical variations in demand. This can be influenced by factors beyond product awareness and seasonality, such as regional preferences and cultural trends. By understanding these variations, businesses can optimize their marketing strategies to effectively target specific regions.

SEO and Content Strategy

Scraping Google Trends data can be an invaluable addition to your SEO strategy. Here are some ways you can use it to improve your website's search rankings and organic traffic:

  • Identify trending keywords. By analyzing Google Trends data, you can uncover keywords that are gaining popularity in your niche. Incorporating these keywords into your content can help you attract more relevant traffic to your site.
  • Optimize content timing. Google Trends can reveal when certain topics are spiking in interest. Use this information to schedule the publication of your blog posts, articles, and other content to coincide with these peaks in search demand.
  • Research competitors. Conduct competitive analysis using Google Trends to identify gaps in your content strategy. See which keywords and topics your competitors are ranking for that you're not, and create content to target those areas.

In short, Google Trends data can help you attract more organic traffic to your website.

Other Use Cases

The applications of scraped Google Trends data are virtually limitless. Regardless of your industry, you can find a suitable use for the data you collect, from financial research and sentiment analysis to competitor tracking and trend forecasting.

In any case, Google Trends data provides a wide range of opportunities for analysis and for finding ideas and solutions in various fields, making it a valuable tool for business, research, and planning. Scraping this data lets you retrieve the most up-to-date figures automatically.

Choosing a Web Scraping Method

Since scraping Google Trends data is in high demand among people with varying programming skills, we will explore various options, starting with those that require no programming skills at all and ending with more complex methods for creating your scraper.


Option 1: Using a No-Code Scraper

As mentioned earlier, even without coding skills, you can easily access Google Trends data. You can do this by using services that offer automated data collection and provide ready-to-use datasets in a format that suits your needs.

This method is straightforward and convenient, but it also offers the least flexibility. You cannot customize the scraper or modify the data format. Since these services provide a complete and packaged solution, you are limited to the features they offer.

Therefore, this approach is recommended if you need data quickly, do not require ongoing data collection, and are satisfied with the filter options and data format provided by the chosen Google Trends no-code scraper.

Option 2: Using a Web Scraping API

In addition to no-code scrapers, another option for accessing Google Trends data is to use a Google Trends API. While Google Trends doesn't offer an official API, several third-party APIs can provide you with Google Trends data.

Let’s take a look at the benefits of using a Google Trends API:

  1. Simplifies the data scraping process. APIs provide a structured and organized way to access Google Trends data, eliminating the need for complex web scraping techniques.
  2. Enhances flexibility. APIs offer more flexibility compared to no-code scrapers, allowing you to customize your data requests and retrieve specific data points.
  3. Reduces development time. By utilizing an API, you can save time and effort that would otherwise be spent on developing and maintaining your web scraping solution.
  4. Mitigates common scraping challenges. APIs handle tasks like proxy management and captcha solving, which can be time-consuming and challenging when web scraping manually.

Overall, using a Google Trends API is an efficient and effective approach for developers who want to streamline the process of collecting and analyzing Google Trends data. 

Option 3: Using a Web Scraping Library

The next option is to use a specialized library to scrape Google Trends data. The most popular is pytrends; however, before using it, you should weigh its pros and cons.

Its clear advantage is that it makes obtaining data fairly simple compared to building a scraper from scratch. On the other hand, you will have to handle blocks and captchas yourself, and for reliable use you will almost certainly need proxies.

This option suits those who are already comfortable with programming and ready to face some difficulties, but who don't want to write a scraper from scratch and are looking for a library to help retrieve data from the Google Trends page.

Option 4: Using a Headless Browser

Using headless browser libraries to create your Google Trends scraper is the most complex option of the three. It assumes you're comfortable handling all the steps involved in gathering and processing the necessary data, requiring strong programming skills.

If you're unfamiliar with such libraries but interested in learning, check out our article on headless browser scraping. If you're confident in your skills and want to build your Google Trends scraping tool, this article will also provide guidance.

In summary, this is the most challenging approach but offers the most customization, allowing you to extract any data you need.

Prerequisites

Node.js and Python are the two most popular programming languages for web scraping. However, Python is generally considered easier to learn and use, even for beginners. Therefore, we will be using Python in this tutorial.

To follow along with the examples in this tutorial, you will need the following:

  • Python 3.10 or higher;
  • A Python IDE or any code editor with syntax highlighting.

If you are new to Python programming, you can refer to our introductory article, which covers the installation process and how to create your first web scraper.

In addition to Python, we will also need to install the following libraries (the json and csv modules ship with Python's standard library, so they don't need to be installed separately):

pip install requests pytrends selenium

For full functionality with Selenium, you may also need additional files, such as a web driver (only necessary for older versions of Selenium). You can find more information in our guide to web scraping with Selenium.

As promised, we'll start with the simplest data scraping methods that don't even require coding skills, and then move on to more complex and advanced ones. Therefore, let's begin with the easiest method and show you how to use ready-made tools to get Google Trends data.

For this example, we'll use HasData's Google Trends no-code scraper. To begin, you'll need to sign up on our website. Once registered, log in and navigate to the no-code scrapers marketplace.

No-Code Scrapers

Find and navigate to the page of a Google Trends no-code scraper.

Google Trends Scraper page

Let's break down the elements on this page:

  1. Keywords. Enter your target keywords here, one per line.
  2. Geo. Specify the region you want to target.
  3. Timeframe. Set the date range for your search results.
  4. Run Scraper. Once you've set all the parameters, click this button to start the scraping process. Your progress will be displayed on the right side of the screen.
  5. Scraping Results. This section shows you all the results from your no-code scraping runs. You can also download the data in a convenient format: JSON, CSV, or XLSX.

As you can see, setting up and running this tool is quite simple. Here's an example of the data you can expect to get:

Google Trends results

As you can see, with the Google Trends no-code scraper, you can access search data for any time period that interests you, even without any programming skills.

If you'd like to build your own script, let's use HasData's Google Trends API instead of the no-code scraper. To do this, sign up on our website to obtain an API key, just as in the previous method. You can find it on the main page of your account.

Copy your API key

Next, you can use the API Playground to set your parameters and generate code in any programming language you like, or you can use the documentation and write the code yourself.

Unlike a no-code scraper, the Google Trends API provides a wider range of parameters that allow you to customize your query. Using the API, you can specify the following parameters:

  1. Search Query. The heart of your search, defining the keyword or phrase you want to investigate.
  2. Location. Specify the geographical area for your query, restricting results to a specific country, region, or city.
  3. Region. Utilize the "region search" feature, allowing you to target specific areas like cities, metropolitan areas, countries, or subregions.
  4. Data Type. Choose the type of data to retrieve: interest over time, interest by region, related topics, or related queries.
  5. Time Zone. Define the relevant time zone to accurately interpret search patterns across different regions.
  6. Category. Narrow down your search by selecting a specific category, similar to the functionality on the Google Trends website.
  7. Google Property. Filter results based on the search source, such as Google Search, News, Images, or YouTube.
  8. Date Range. Specify the time period for which you want to retrieve data, ensuring you capture the most pertinent trends.

These parameters let you tailor the request to your needs. As mentioned earlier, you can set all the necessary parameters in the API Playground and simply copy ready-made code in the language of your choice; here, as an example, we will write that code from scratch.

You can also view and run a ready-made version of this script on Google Colab.

To begin, create a new file with the extension .py and import the necessary libraries. Since most of the work is done by the API, we will only need a library to make requests and work with JSON data, as the API returns data in JSON format.

import requests
import json

Next, we'll define variables with the parameters we want to set. You can find a full list of parameters in our official documentation.

query = "Coffee"
geo = "US-NY"
region = "dma"
data_type = "geoMap"
category = "65"
date_range = "now 7-d".replace(" ", "+")
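As an aside, manually replacing spaces with + works, but urllib.parse.urlencode from the standard library builds the same query string and escapes any special characters automatically. A small sketch with the same parameters:

```python
from urllib.parse import urlencode

# Same parameters as above; urlencode converts the space in the date for us
params = {
    "q": "Coffee",
    "geo": "US-NY",
    "region": "dma",
    "dataType": "geoMap",
    "cat": "65",
    "date": "now 7-d",
}
url = "https://api.hasdata.com/scrape/google-trends/search?" + urlencode(params)
print(url)
```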

Then compose the link:

url = f"https://api.hasdata.com/scrape/google-trends/search?q={query}&geo={geo}&region={region}&dataType={data_type}&cat={category}&date={date_range}"

Put your API key to the request headers:

headers = {
  'Content-Type': 'application/json',
  'x-api-key': 'PUT-YOUR-API-KEY'
}

Make the request and print the response on the screen:

response = requests.get(url, headers=headers)
print(response.text)

As a result, you will get data like in this example:

The resulting JSON

As you can see, in addition to the data itself, the API also returns a request URL and a screenshot of the query page that was executed.

With this information, you can either process the retrieved data or save it for later use. For example, let's save the data to a JSON file:

data = response.json()
with open("google_trends_data.json", "w") as json_file:
    json.dump(data, json_file, indent=4)

You can also save the data to a file in any other format that is convenient for you, such as CSV.  
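For example, saving to CSV with the standard library might look like this. This is only a sketch: the rows below are made up, since the real field names depend on the dataType you requested, so inspect the JSON response and adjust the fieldnames accordingly.

```python
import csv

# Hypothetical rows; replace with values extracted from the API response
rows = [
    {"geo": "New York NY", "value": 100},
    {"geo": "Buffalo NY", "value": 64},
]

with open("google_trends_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["geo", "value"])
    writer.writeheader()
    writer.writerows(rows)
```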

Another method involves the PyTrends library, which streamlines the data scraping process. It is built on straightforward requests and relies on the Requests and BeautifulSoup libraries under the hood, so it comes with certain limitations.

Scrape the Data with PyTrends

Let's create a simple web scraper to extract data about interests by region. To do this, create a new Python script and import the necessary libraries into the project:

from pytrends.request import TrendReq

Then create a pytrends session object:

pytrend = TrendReq()

Then build the payload, which makes a request to Google to obtain a token:

pytrend.build_payload(kw_list=['coffee', 'green tea'])

After that, you can access the keyword data you need. For example, to get data by region and display it, use the following code:

interest_by_region_df = pytrend.interest_by_region()
print(interest_by_region_df.head())

As a result, you will get the following data:

PyTrends Scraping Result

Unfortunately, as we mentioned before, using the PyTrends library has its drawbacks and limitations. For full functionality, you will need to use proxies. If you choose not to use them, your script will fail with a 429 error.

429 Error for PyTrends

The complete error message is "pytrends.exceptions.TooManyRequestsError: The request failed: Google returned a response with code 429". This typically indicates that Google has flagged your request as suspicious and is refusing to return data. To resolve this issue, you'll need to utilize proxies within your script.
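On top of proxies, one common way to soften sporadic 429s is to retry with an increasing delay. Below is a generic sketch, not a pytrends feature: fetch is any zero-argument callable, such as a lambda wrapping pytrend.interest_by_region().

```python
import time

def fetch_with_retry(fetch, attempts=3, base_delay=2.0):
    """Call fetch(); on failure, wait and retry with exponential backoff.

    `fetch` is any zero-argument callable; pytrends raises
    TooManyRequestsError here, caught by the broad except below.
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; re-raise the last error
            time.sleep(base_delay * (2 ** attempt))
```

Note that backoff alone will not rescue a blocked IP; it only smooths over occasional rate limiting.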

We've previously discussed how to use proxies in Python. However, it's worth noting that PyTrends offers dedicated functions for this purpose. Simply specify the required parameters during object creation:

pytrend = TrendReq(hl='en-US', tz=360, proxies=['https://128.3.21.11:8080',])

It's important to highlight that PyTrends only supports HTTPS proxies. You can either find free ones on our free proxy list page to test this functionality or purchase proxies from a reliable provider.
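pytrends accepts a list of proxies and switches between them itself. If you are instead writing a custom scraper, a simple round-robin pool can be sketched with itertools.cycle (the addresses below are placeholders):

```python
from itertools import cycle

# Placeholder HTTPS proxy endpoints; substitute your own
proxy_pool = cycle([
    "https://128.3.21.11:8080",
    "https://203.0.113.5:3128",
])

def next_proxy():
    """Return the next proxy in round-robin order."""
    return next(proxy_pool)
```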

The most complex yet flexible way to scrape data from Google Trends is to create your tool using a library like Selenium or any other library that supports headless browsers.

Here, we will discuss two options for obtaining the necessary data:

  1. Standard method. This method involves navigating to the page and parsing its content. This is a more complex method, as it can be a bit difficult to find the necessary selectors.
  2. A little trick to get the data you need. This will allow you to get all the data you need even if you don't want to deal with selectors.

In any case, whichever method you choose, you will be able to get all the data you need from Google Trends.

Standard Scraping Method

This method involves using a headless browser, such as Selenium, to simulate user interactions and extract data from web pages. While this approach is relatively simple, it can be time-consuming. To use it, just follow this algorithm:

  1. Formulating the URL. Construct the URL based on the specified parameters. This involves understanding the URL structure and incorporating the desired search terms, time range, and location filters.
  2. Navigating to the Page. Utilize a headless browser, such as Selenium, to simulate a user visit to the constructed URL. This enables interaction with the dynamic web page without a physical browser.
  3. Scraping the Data. Employ web scraping techniques to extract the relevant data from the HTML content of the page. This may involve identifying and parsing specific elements using XPath or CSS selectors.
  4. Processing and Saving the Data. Clean, organize, and format the extracted data into a structured format, such as CSV or JSON. Save the processed data for further analysis or visualization.

Now, let's implement the algorithm we discussed. First, we need to identify the patterns that govern the formation of the URL. To do this, we'll visit the Google Trends page and examine all the available filters. Then, we'll analyze how the URL changes based on the selected parameters.

Research the Google Trends Page

For instance, using the parameters we've discussed (country - US, category - food and drinks, time - last 7 days, keyword - coffee), the following link would be relevant:

https://trends.google.com/trends/explore?cat=71&date=now%207-d&geo=US&q=coffee

Let's start by creating a script and importing the necessary libraries and modules:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time

Next, we'll configure the settings and generate a link:

category = "71"
date_range = "now 7-d"
geo = "US"
query = "coffee"

url = f"https://trends.google.com/trends/explore?cat={category}&date={date_range}&geo={geo}&q={query}"
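Note that date_range contains a space; the browser encodes it as %20, and it is safer to encode it explicitly than to leave it raw in the f-string. A sketch using urllib.parse.quote:

```python
from urllib.parse import quote

category = "71"
date_range = quote("now 7-d")  # -> "now%207-d", matching the browser's encoding
geo = "US"
query = "coffee"

url = (
    "https://trends.google.com/trends/explore"
    f"?cat={category}&date={date_range}&geo={geo}&q={query}"
)
print(url)
```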

Then, create a WebDriver object:

chrome_options = Options()

driver = webdriver.Chrome(options=chrome_options)

Now you can proceed to the page. However, there is a small nuance: if you navigate directly to the URL with the required parameters, you will receive the following error:

429 Error

This error can be easily avoided by entering the page from the main Google Trends page or by simply loading the link twice:

driver.get(url)
driver.get(url)

Let's take a closer look at the page and use one of the blocks to illustrate the data scraping process. As an example, let's gather data about "Related queries". Let's examine this section in more detail:

The related queries

Inspecting the page's HTML using DevTools (press F12, or right-click and select Inspect) reveals that the "Related queries" section is contained within a div element with the class .fe-related-queries. The individual related query items within this section have the class .item.

Let's proceed with scraping this data:

related_queries_div = driver.find_element(By.CSS_SELECTOR, '.fe-related-queries')
items = related_queries_div.find_elements(By.CSS_SELECTOR, '.item')

To keep the data organized, let's store it as a list of dictionaries:

related_queries = []
for item in items:
    parts = item.text.split('\n')
    if len(parts) == 4:
        rank, title_category, score, more = parts
        title, category = title_category.split(' - ')
        related_queries.append({
            'rank': rank,
            'title': title,
            'category': category,
            'score': score
        })
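Since this parsing relies on the exact text layout of each item, it is worth sanity-checking the logic against a sample string. The sample below is an assumption of what item.text looks like, not captured output:

```python
# Assumed shape of item.text: rank, "title - category", score, and a trailing line
sample = "1\niced coffee - Beverage\n100\nmore"
parts = sample.split("\n")
rank, title_category, score, more = parts
title, category = title_category.split(" - ")
print(rank, title, category, score)
```

If Google changes the block's layout, the length check (len(parts) == 4) silently skips items, so it may be worth logging the skipped ones while debugging.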

Print the Google Trends data on the screen and close the webdriver:

print(related_queries)
driver.quit()

As a result, you will get the following data:

The resulting list

You can also utilize Selenium for pagination to gather the most comprehensive data. Additionally, data from other blocks can be extracted similarly, as only the selectors will differ.

A Trick to Get Data Fast and Easy

As we mentioned at the beginning, there is a much simpler way to get Google Trends data: simply download it. Each block on the page has an export button that lets you save its data:

Download button

Instead of manually collecting this data, we can simply download it all at once. The script structure, up to the page navigation part, will remain the same:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time

category = "71"
date_range = "now 7-d"
geo = "US"
query = "coffee"

url = f"https://trends.google.com/trends/explore?cat={category}&date={date_range}&geo={geo}&q={query}"

chrome_options = Options()

driver = webdriver.Chrome(options=chrome_options)
driver.get(url)
driver.get(url)

Then download the available Google Trends data:

export_buttons = driver.find_elements(By.CSS_SELECTOR, '.widget-actions-item.export')
for button in export_buttons:
    button.click()
    time.sleep(1) 

As a result, we get four files with all the data from the page:

The resulting files

This approach allows you to obtain all the data quickly and easily. Moreover, the data is well-structured and can be used for further processing.
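By default, Chrome drops the exported CSV files into the regular downloads folder. If you want them in a predictable location, you can pass a prefs dictionary when creating the driver. The preference names below are standard Chrome options; the directory path is a placeholder:

```python
# Pass to the driver with:
#   chrome_options.add_experimental_option("prefs", download_prefs)
download_prefs = {
    "download.default_directory": "/tmp/trends_exports",  # placeholder path
    "download.prompt_for_download": False,
    "download.directory_upgrade": True,
}
```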

Conclusion

This article delved into the practical applications of Google Trends data across various industries and use cases. We explored methodologies ranging from simple no-code solutions to advanced web scraping techniques utilizing APIs and libraries.

Additionally, we provided algorithms and examples for building your scrapers, empowering you to independently gather data in the future. To conclude, we unveiled a nifty trick that allows you to effortlessly acquire all the necessary data with relative ease.

In this guide, we strived to cover all possible Google Trends scraping methods, highlighting the pros and cons of each approach. Moreover, by encompassing even non-coding options, we ensured that everyone, regardless of their technical expertise, can find a suitable method.

Valentina Skakun

I'm a technical writer who believes that data parsing can help in collecting and analyzing data. I write about what parsing is and how to use it.