How do I use soup.find_all() and soup.find() in BeautifulSoup?
soup.find_all() returns a list of every matching element. soup.find() returns only the first match, or None if nothing matches. Pass a tag name and a class_ argument (with the underscore, because class is a reserved word in Python) to filter what you want.
products = soup.find_all('div', class_='product')
for p in products:
    print(p.get_text(strip=True))

find_all() always returns a ResultSet (which behaves like a list). If nothing matches, you get [], not None, so iterating is always safe. Add limit=N to cap the number of results.
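Here is a minimal runnable sketch of both behaviors, using made-up sample HTML; the class names and strings are only for illustration:

```python
from bs4 import BeautifulSoup

html = """
<div class="product">Widget</div>
<div class="product">Gadget</div>
<div class="product">Gizmo</div>
"""
soup = BeautifulSoup(html, "html.parser")

# limit=2 stops searching after the first two matches
first_two = soup.find_all('div', class_='product', limit=2)
print(len(first_two))  # 2

# A selector that matches nothing yields an empty ResultSet,
# so this loop simply never runs -- no None check needed
for item in soup.find_all('div', class_='missing'):
    print(item)
```

Because a miss is an empty sequence rather than None, you can loop over find_all() results unconditionally.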
Use find() when you expect exactly one element, like a page title or a unique container.
title = soup.find('h1')
if title:
    print(title.get_text(strip=True))

Always check for None after find(). Calling .get_text() or .find() on a missing tag raises AttributeError, which is the most common BeautifulSoup bug in production scrapers.
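A compact way to handle the None case is a conditional expression with a fallback value; this sketch uses a hypothetical "N/A" placeholder as the fallback:

```python
from bs4 import BeautifulSoup

html = "<p>No heading on this page</p>"
soup = BeautifulSoup(html, "html.parser")

# Guard against None before touching the tag
title = soup.find('h1')
text = title.get_text(strip=True) if title else "N/A"
print(text)  # N/A

# Without the guard, chaining directly would raise:
# soup.find('h1').get_text()
# AttributeError: 'NoneType' object has no attribute 'get_text'
```

The guard keeps the scraper running on pages that lack the expected element instead of crashing mid-crawl.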