How to Scrape Google Maps Data Using Python

Google Maps isn’t simple to scrape. Dynamic JavaScript loading, frequent DOM changes, and anti-bot protections (rate limits, fingerprinting, token-based requests) all stand in the way.
Common tools like Python's Requests library or other simple scraping libraries usually don't cut it: they either miss dynamically loaded data or quickly hit rate limits.
In this article, we’ll look at a more reliable approach to scraping Google Maps using Python, covering how to extract structured data such as places, ratings, and contact details.
Building a Google Maps Scraper with Python
Code Overview
Since Google frequently changes its class names and HTML structure, double-check the selectors and update them as needed before running the script.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
import time
import json
import pandas as pd
import re
def init_driver():
    options = Options()
    driver = webdriver.Chrome(options=options)
    return driver

def search_query(driver, query: str):
    driver.get("https://www.google.com/maps")
    time.sleep(5)
    search = driver.find_element(By.ID, "searchboxinput")
    search.send_keys(query)
    search.send_keys(Keys.ENTER)
    time.sleep(5)

def scroll_results(driver, max_scrolls: int = 10, scroll_pause: int = 2):
    scrollable = driver.find_element(By.CSS_SELECTOR, 'div[role="feed"]')
    for _ in range(max_scrolls):
        driver.execute_script('arguments[0].scrollTop = arguments[0].scrollHeight', scrollable)
        time.sleep(scroll_pause)

def parse_cards(driver):
    feed_container = driver.find_element(
        By.CSS_SELECTOR, 'div.m6QErb.DxyBCb.kA9KIf.dS8AEf.XiKgde.ecceSd[role="feed"]'
    )
    cards = feed_container.find_elements(By.CSS_SELECTOR, "div.Nv2PK.THOPZb.CpccDe")
    data = []
    for card in cards:
        name_el = card.find_elements(By.CLASS_NAME, "qBF1Pd")
        name = name_el[0].text if name_el else ""
        rating_el = card.find_elements(By.XPATH, './/span[contains(@aria-label, "stars")]')
        rating = ""
        if rating_el:
            match = re.search(r"([\d.]+)", rating_el[0].get_attribute("aria-label"))
            rating = match.group(1) if match else ""
        reviews_el = card.find_elements(By.CLASS_NAME, "UY7F9")
        reviews = ""
        if reviews_el:
            match = re.search(r"([\d,]+)", reviews_el[0].text)
            reviews = match.group(1).replace(",", "") if match else ""
        category_el = card.find_elements(By.XPATH, './/div[contains(@class, "W4Efsd")]/span[1]')
        category = category_el[0].text if category_el else ""
        services_el = card.find_elements(By.XPATH, './/div[contains(@class, "ah5Ghc")]/span')
        services = ", ".join([s.text for s in services_el]) if services_el else ""
        image_el = card.find_elements(By.XPATH, './/img[contains(@src, "googleusercontent")]')
        image_url = image_el[0].get_attribute("src") if image_el else ""
        link_el = card.find_elements(By.CSS_SELECTOR, 'a.hfpxzc')
        detail_url = link_el[0].get_attribute("href") if link_el else ""
        data.append({
            "Name": name,
            "Rating": rating,
            "Reviews": reviews,
            "Category": category,
            "Services": services,
            "Image": image_url,
            "Detail URL": detail_url
        })
    return data

def save_data(data, csv_filename="maps_data.csv", json_filename="maps_data.json"):
    df = pd.DataFrame(data)
    df.to_csv(csv_filename, index=False)
    print(f"Saved {len(df)} records to {csv_filename}")
    with open(json_filename, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=4)
    print(f"Saved {len(data)} records to {json_filename}")

def main():
    query = "restaurants in New York"
    max_scrolls = 10
    scroll_pause = 2
    driver = init_driver()
    try:
        search_query(driver, query)
        scroll_results(driver, max_scrolls, scroll_pause)
        data = parse_cards(driver)
        save_data(data)
    finally:
        driver.quit()

if __name__ == "__main__":
    main()
Tools and Setup
If you're new to web scraping, we recommend starting with our Python scraping introduction guide. Otherwise, begin by installing the required libraries:
pip install selenium pandas
Import required modules and libraries:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
import time
import json
import pandas as pd
import re
Page Structure Analysis
The easiest way to collect business data from Google Maps is to scrape the search results page, which already contains names, ratings, categories, and addresses:
Open DevTools (press F12 or right-click and Inspect), and find the relevant CSS selectors or XPath expressions for the data you want to extract.
Google often uses dynamic class names that will change even after small interface updates. If you rely only on these class names, your scraper will quickly break, especially if you need to run it regularly.
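One way to hedge against class-name churn is to prefer stable attributes (like role or aria-label) and fall back through several selectors before giving up. The helper below is a hypothetical sketch, not part of the final script; the selector strings are simply the ones from the table that follows and may need updating:

from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_first(root, locators):
    # Try each (By, selector) pair in order and return the first match,
    # so a single class-name change doesn't break the whole scraper.
    for by, selector in locators:
        try:
            return root.find_element(by, selector)
        except NoSuchElementException:
            continue
    return None

# Example: locate the results feed by its stable role attribute first,
# and only fall back to the long class-based selector.
# feed = find_first(driver, [
#     (By.CSS_SELECTOR, 'div[role="feed"]'),
#     (By.CSS_SELECTOR, 'div.m6QErb.DxyBCb.kA9KIf.dS8AEf.XiKgde.ecceSd[role="feed"]'),
# ])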
There are plenty of tutorials on working with CSS selectors and XPath, so here we'll just share a table of ready-to-use selectors for this project.
Field | Description | CSS | XPath |
---|---|---|---|
Name | Name of the place | .qBF1Pd | .//div[contains(@class, "qBF1Pd")] |
Rating | Star rating (e.g., 4.7 stars) | span[aria-label*="stars"] | .//span[contains(@aria-label, "stars")] |
Reviews | Number of user reviews | .UY7F9 | .//span[contains(@class, "UY7F9")] |
Category | Type of place (e.g., Restaurant) | div.W4Efsd > span:first-child | .//div[contains(@class, "W4Efsd")]/span[1] |
Image | Image preview of the place | img[src*="googleusercontent"] | .//img[contains(@src, "googleusercontent")] |
Feed Container | Container holding the list of results | div.m6QErb.DxyBCb.kA9KIf.dS8AEf.XiKgde.ecceSd[role="feed"] | //div[@role="feed"] |
Scrollable | Scrollable div that loads more results | div[role="main"] | //div[@role="main"]//div[@tabindex="-1"] |
Card | Single business listing card | div.Nv2PK.THOPZb.CpccDe | .//div[contains(@class, "Nv2PK")] |
Search Input | Input field for search queries | #searchboxinput | //*[@id="searchboxinput"] |
HasData’s Google Maps API provides structured access to map data through a consistent interface. This approach is generally easier to use and maintain, because it handles dynamic content loading, anti-bot protections, and data formatting for you.
Data Extraction
Start Chrome with Selenium and get the browser ready for scraping:
def init_driver():
    # Initialize Selenium Chrome driver with options.
    options = Options()
    driver = webdriver.Chrome(options=options)
    return driver
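If you want the browser to run without a visible window, you can pass extra Chrome options here. The flags below are standard Chrome arguments rather than anything Google Maps requires; treat this as an optional variant of init_driver:

def init_driver(headless: bool = False):
    # Initialize Selenium Chrome driver, optionally in headless mode.
    options = Options()
    if headless:
        options.add_argument("--headless=new")    # headless mode in recent Chrome versions
    options.add_argument("--lang=en-US")          # keep labels like "stars" in English
    options.add_argument("--window-size=1280,900")
    driver = webdriver.Chrome(options=options)
    return driver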
Open Google Maps, type your search term, and run the search.
def search_query(driver, query: str):
    # Open Google Maps and search for a query.
    driver.get("https://www.google.com/maps")
    time.sleep(5)
    search = driver.find_element(By.ID, "searchboxinput")
    search.send_keys(query)
    search.send_keys(Keys.ENTER)
    time.sleep(5)
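The fixed time.sleep(5) calls work, but they either waste time or fail on slow connections. A variant of search_query using Selenium's explicit waits (WebDriverWait and expected_conditions ship with Selenium; the selectors are the same ones used above) could look like this:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def search_query(driver, query: str, timeout: int = 15):
    # Open Google Maps and wait for the search box instead of sleeping blindly.
    driver.get("https://www.google.com/maps")
    wait = WebDriverWait(driver, timeout)
    search = wait.until(EC.presence_of_element_located((By.ID, "searchboxinput")))
    search.send_keys(query)
    search.send_keys(Keys.ENTER)
    # Wait until the results feed appears before moving on.
    wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'div[role="feed"]')))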
Go through each result card and extract the name, rating, reviews, category, services, image, and link.
def parse_cards(driver):
    # Extract data from result cards
    feed_container = driver.find_element(
        By.CSS_SELECTOR, 'div.m6QErb.DxyBCb.kA9KIf.dS8AEf.XiKgde.ecceSd[role="feed"]'
    )
    cards = feed_container.find_elements(By.CSS_SELECTOR, "div.Nv2PK.THOPZb.CpccDe")
    data = []
    for card in cards:
        # Name of the place
        name_el = card.find_elements(By.CLASS_NAME, "qBF1Pd")
        name = name_el[0].text if name_el else ""
        # Rating (from aria-label, e.g. "4.5 stars")
        rating_el = card.find_elements(By.XPATH, './/span[contains(@aria-label, "stars")]')
        rating = ""
        if rating_el:
            match = re.search(r"([\d.]+)", rating_el[0].get_attribute("aria-label"))
            rating = match.group(1) if match else ""
        # Number of reviews (e.g. "1,234 reviews")
        reviews_el = card.find_elements(By.CLASS_NAME, "UY7F9")
        reviews = ""
        if reviews_el:
            match = re.search(r"([\d,]+)", reviews_el[0].text)
            reviews = match.group(1).replace(",", "") if match else ""
        # Category (e.g. "Italian restaurant")
        category_el = card.find_elements(By.XPATH, './/div[contains(@class, "W4Efsd")]/span[1]')
        category = category_el[0].text if category_el else ""
        # Services (e.g. "Dine-in, Takeout, Delivery")
        services_el = card.find_elements(By.XPATH, './/div[contains(@class, "ah5Ghc")]/span')
        services = ", ".join([s.text for s in services_el]) if services_el else ""
        # Image (URL of the thumbnail from Google Maps)
        image_el = card.find_elements(By.XPATH, './/img[contains(@src, "googleusercontent")]')
        image_url = image_el[0].get_attribute("src") if image_el else ""
        # Detail page link
        link_el = card.find_elements(By.CSS_SELECTOR, 'a.hfpxzc')
        detail_url = link_el[0].get_attribute("href") if link_el else ""
        # Collect all fields into one record
        data.append({
            "Name": name,
            "Rating": rating,
            "Reviews": reviews,
            "Category": category,
            "Services": services,
            "Image": image_url,
            "Detail URL": detail_url
        })
    return data
Infinite Scrolling Implementation
We covered infinite scrolling in detail in another article, but here’s the basic idea:
- Identify the scrollable container.
- Scroll to the bottom of that element.
- Wait for a few seconds.
- Repeat until no new results appear or you hit a stopping point.
Scroll through the results to load more places:
def scroll_results(driver, max_scrolls: int = 10, scroll_pause: int = 2):
    # Scroll the results feed to load more places.
    scrollable = driver.find_element(By.CSS_SELECTOR, 'div[role="feed"]')
    for _ in range(max_scrolls):
        driver.execute_script('arguments[0].scrollTop = arguments[0].scrollHeight', scrollable)
        time.sleep(scroll_pause)
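The loop above always scrolls a fixed number of times. If you'd rather stop as soon as no new results load (the last step in the list above), one possible variant compares the feed's scroll height between iterations:

def scroll_results_until_done(driver, max_scrolls: int = 30, scroll_pause: int = 2):
    # Scroll the results feed and stop early once the height stops growing.
    scrollable = driver.find_element(By.CSS_SELECTOR, 'div[role="feed"]')
    last_height = driver.execute_script("return arguments[0].scrollHeight", scrollable)
    for _ in range(max_scrolls):
        driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scrollable)
        time.sleep(scroll_pause)
        new_height = driver.execute_script("return arguments[0].scrollHeight", scrollable)
        if new_height == last_height:
            break  # no new cards were loaded, so we've reached the end
        last_height = new_height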
Store results in CSV/JSON format
Save the scraped data to CSV and JSON files and print how many records were saved:
def save_data(data, csv_filename="maps_data.csv", json_filename="maps_data.json"):
    # Save to CSV
    df = pd.DataFrame(data)
    df.to_csv(csv_filename, index=False)
    print(f"Saved {len(df)} records to {csv_filename}")
    # Save to JSON
    with open(json_filename, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=4)
    print(f"Saved {len(data)} records to {json_filename}")
Using API to Access Google Maps Data
If you want to collect data faster and more reliably, HasData’s Google Maps Scraping API can help. It manages browser actions, proxy rotation, and anti-bot defenses for you.
HasData’s API offers a cost-effective option for large-scale extraction. Using it to process 200,000 detailed place lookups costs $99 (check our pricing page for details), compared to about $850 with Google.
The following script sends a request to HasData’s API using your API key and a search query. It reads the JSON response and saves the important information to CSV and JSON files.
import requests
import json
import pandas as pd

# To get an API key, sign up at https://app.hasdata.com/sign-up
api_key = 'YOUR-API-KEY'

# What we want to search for in Google Maps
query = 'Pizza'

# Documentation with all parameters: https://docs.hasdata.com/apis/google-maps/search
url = f"https://api.hasdata.com/scrape/google-maps/search?q={query}"

# Headers for the API request
headers = {
    'Content-Type': 'application/json',
    'x-api-key': api_key
}

# Send GET request to HasData API
response = requests.get(url, headers=headers)

# Parse JSON response
data = response.json()

# Save full response to a JSON file
with open('output.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False, indent=2)

# Extract the main results list from the response
results = data.get("localResults", [])

# Filter and normalize results
filtered = [
    {
        "title": r.get("title"),
        "address": r.get("address"),
        "phone": r.get("phone"),
        "website": r.get("website"),
        "rating": r.get("rating"),
        "reviews": r.get("reviews"),
        "type": r.get("type"),
        "price": r.get("price"),
        # Some fields may be nested, e.g., GPS coordinates
        "latitude": r.get("gpsCoordinates", {}).get("latitude"),
        "longitude": r.get("gpsCoordinates", {}).get("longitude")
    }
    for r in results
]

# Convert to DataFrame for easy analysis
df = pd.DataFrame(filtered)

# Save to CSV
df.to_csv('output.csv', index=False)
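One caveat: the query is interpolated straight into the URL, which is fine for a single word like "Pizza" but will break on spaces or special characters. A safer option is to let requests build and encode the query string for you, as in this small variation of the request above:

# Safer: let requests handle URL encoding of the search query.
url = "https://api.hasdata.com/scrape/google-maps/search"
response = requests.get(url, params={"q": query}, headers=headers)
data = response.json()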
Conclusion
Scraping data from Google Maps can be difficult because of dynamic content and anti-scraping measures.
If you build your own Python scraper, you get full control over what you collect and how. But it takes more time, needs frequent updates, and can be hard to scale to higher data volumes.
Using a scraping API makes things easier. It handles browser automation, proxy rotation, and blocks for you. APIs are more reliable, especially for large-scale or regular data collection.
