How to Make HTTP Requests in Node.js With Fetch API
This article will discuss the basics of using the Fetch API, a simple and intuitive interface for making HTTP requests. It is essential for working with data from remote servers, scraping data for further processing, and providing a convenient and flexible way to interact with external resources.
This guide delves into the fundamentals of utilizing Fetch in Node.js, encompassing installation and exploring essential methods like GET and POST. We’ll provide practical examples of interacting with a web scraping API using the POST method and showcase the execution results. Additionally, we’ll furnish code snippets for all frequently used HTTP methods and delve into the intricacies of handling responses, logging requests, and effortlessly sending files with the Node Fetch API.
Understanding the Fetch API
The Fetch API is a simple, intuitive interface for making asynchronous HTTP requests. Because it is built on promises, it fits naturally into modern asynchronous JavaScript code and provides a convenient, flexible way to work with data from remote servers and other external resources.
Back in April 2022, Node.js 18.0.0 added support for fetch() as a built-in alternative to third-party HTTP clients, but only as an experimental feature. It wasn't until version 21.0.0, rolled out in October 2023, that fetch() became a stable, fully-fledged built-in function. No extra flags, no third-party packages, just ready to go right out of the box.
But here's the catch: if you're stuck using a Node.js version older than 18.0.0, you're out of luck when it comes to native fetch. You'll need to rely on external libraries like node-fetch or axios to handle your HTTP requests. Sure, it's not ideal, but it's a solid workaround.
In contrast to other ways of making HTTP requests in server-side JavaScript, the Fetch API has several advantages. Its syntax is concise and understandable, making the code more readable. It provides built-in JSON parsing through response.json(), simplifying working with data in JSON format. Finally, Fetch supports streaming response bodies, which is useful for working with large files. If you’re interested in how Fetch compares to other libraries like Axios, check out our article on Axios vs. fetch() for making HTTP requests.
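The streaming support mentioned above can be sketched briefly. With native fetch (Node.js 18+), response.body is a web ReadableStream, which is async-iterable in Node.js, so a large payload can be processed chunk by chunk instead of buffered whole. To keep the sketch self-contained, it streams a locally constructed Response; the body of a real fetch() response behaves the same way.

```javascript
// Consume a Response body as a stream instead of buffering it whole.
// Works with any web ReadableStream body (Node.js 18+).
async function countBytes(response) {
  let bytes = 0;
  for await (const chunk of response.body) {
    bytes += chunk.length; // each chunk arrives as a Uint8Array
  }
  return bytes;
}

// Self-contained demo: a locally constructed Response stands in for
// the result of a real fetch() call.
countBytes(new Response('a'.repeat(1024)))
  .then(bytes => console.log(`Streamed ${bytes} bytes chunk by chunk`));
```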
Basic Usage of Fetch API in Node.js
To use the Fetch API on Node.js versions older than 18, you should install the node-fetch npm package (on Node.js 18+, the built-in fetch works without any installation). This requires having Node.js and npm installed on your computer. We previously covered how to install Node.js in our introductory article on scraping using NodeJS.
To install node-fetch, navigate to the folder of your project and run the following in the command prompt or terminal:
npm install node-fetch
Additionally, make sure your package.json marks the project as an ES module and lists the dependency:
{
  "type": "module",
  "dependencies": {
    "node-fetch": "^3.3.1"
  }
}
After that, you can start using it in your project. Fetch API supports all HTTP methods, including GET, POST, PUT, and DELETE. Let’s take a look at examples of using each method.
Fetch for GET Requests
GET requests are the simplest and most common type of HTTP request. They allow you to easily extract data from web pages, making them a popular choice for web scraping.
For example, let’s use GET requests to fetch the HTML code of a page using the Fetch API. To make the examples more precise, we’ll look at both a basic GET request and a GET request with additional parameters.
Making Basic GET Request with Fetch
Now, create a file with the *.js extension and import the Fetch API:
import fetch from 'node-fetch';
Then, specify the URL of the page from which you want to get the data:
const url = 'https://demo.opencart.com/';
Finally, make the fetch request, chaining the operations that retrieve the page’s HTML code and print the received data to the screen, and add error handling in case the request fails:
fetch(url)
  .then(response => response.text())
  .then(data => console.log('Your data:', data))
  .catch(error => console.error('Error:', error));
Save the changes in the project and run the script with the node command.
As you can see, we got the necessary data in the form we expected. If you need the JSON body of a response rather than the page’s HTML code, simply replace response.text() with response.json(), which retrieves the body and parses it as a JSON object.
Making GET Request with Additional Parameters using Fetch
Using additional parameters in a GET request is very simple. For this example, we will use Google SERP. First, we will import the module and define the base URL:
import fetch from 'node-fetch';
const baseUrl = 'https://www.google.com/search';
Next, we’ll define the necessary parameters, including the query, the domain, the language, and the localization country:
const queryParams = '?q=Coffee&domain=google.com&gl=us&hl=en';
Then we will put together the entire link:
const url = `${baseUrl}${queryParams}`;
Keep the fetch and results output to the screen unchanged:
fetch(url)
  .then(response => response.text())
  .then(data => console.log('Your data:', data))
  .catch(error => console.error('Error:', error));
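Hand-concatenating query strings works for simple cases, but any value containing spaces or special characters must be URL-encoded. The built-in URLSearchParams class handles that automatically; the sketch below builds a query equivalent to the one above (the 'Coffee & Tea' value is just an illustration of a string that needs encoding):

```javascript
// Build a query string with URLSearchParams, which URL-encodes
// each value automatically (spaces, "&", "?", etc.).
const baseUrl = 'https://www.google.com/search';

const params = new URLSearchParams({
  q: 'Coffee & Tea', // "&" and the space get encoded safely
  gl: 'us',
  hl: 'en',
});

const url = `${baseUrl}?${params.toString()}`;
console.log(url); // the query part is q=Coffee+%26+Tea&gl=us&hl=en
```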
GET requests are the easiest to understand and process. Let’s move on to more complex methods that support more parameters.
Fetch for POST Requests
POST requests send data, create new resources, or update existing ones on a server. They differ from GET requests, which are only used to retrieve data from a server.
In a POST request, data is sent in the request body, making it especially well-suited for sending large amounts of data. Additionally, data in a POST request can be sent in various formats, such as JSON, XML, or URL-encoded data, depending on the server’s requirements.
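For the URL-encoded format mentioned above, fetch accepts a URLSearchParams object directly as the request body; per the Fetch specification, the Content-Type header is then set to application/x-www-form-urlencoded automatically. A minimal sketch (the field names are placeholders):

```javascript
// URL-encoded POST body: pass URLSearchParams as the body and fetch
// sets Content-Type: application/x-www-form-urlencoded for you.
const body = new URLSearchParams({ key1: 'value1', key2: 'value2' });

const requestOptions = {
  method: 'POST',
  body, // serialized as key1=value1&key2=value2
};

console.log(body.toString());
// fetch('https://example.com/', requestOptions) would send it URL-encoded.
```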
Using Fetch for Basic POST Requests
Let’s look at a basic example of how to make a POST request using the Fetch API. First, we’ll import the module and declare the base URL for the request:
import fetch from 'node-fetch';
const url = 'https://example.com/';
Next, we’ll define the parameters we need to pass in the request body:
const postData = {
  key1: 'value1',
  key2: 'value2'
};
Finally, we’ll assemble the entire request, specifying the HTTP method, request body, and headers object:
const requestOptions = {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(postData)
};
Executing the request and displaying the data on the screen is almost identical. The only change is that in the fetch command, we need to specify not only the URL, but also additional parameters:
fetch(url, requestOptions)
  .then(response => response.json())
  .then(data => console.log('Your data:', data))
  .catch(error => console.error('Error:', error));
For example, developers can use this approach to send a user’s login and password with the POST method for authentication. The server then responds with a message indicating whether the login was successful.
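As a sketch of that authentication scenario (the endpoint URL and field names here are hypothetical, not a real API):

```javascript
// Build a JSON POST request carrying credentials; the endpoint and
// field names are placeholders for whatever your server expects.
function buildLoginRequest(login, password) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ login, password }),
  };
}

const requestOptions = buildLoginRequest('user@example.com', 'secret');
console.log(requestOptions.body);

// A real call would then look like:
// fetch('https://example.com/login', requestOptions)
//   .then(response => response.json())
//   .then(result => console.log(result));
```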
Sending Complex POST Request using Fetch
The previous example was very simple and more theoretical. Let’s use the Web Scraping API with a POST request to get a list of all the titles on the demo site page. To do this, we import the fetch module:
import fetch from 'node-fetch';
Then, we specify the endpoint for the Web Scraping API and the unique API key:
const apiKey = "YOUR-API-KEY";
const url = "https://api.hasdata.com/scrape";
Next, we specify the method, headers, and body for the fetch() request. In this case, we will use extraction rules to extract only the product titles from the HTML page of the site:
const requestOptions = {
  method: 'POST',
  headers: {
    'x-api-key': apiKey,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: "https://demo.opencart.com/",
    js_rendering: false,
    extract_emails: false,
    extract_rules: {
      title: "h4"
    },
    proxy_type: "datacenter",
    proxy_country: "US"
  })
};
Finally, we execute the request and print the result of the extraction rules to the screen:
fetch(url, requestOptions)
  .then(response => response.json())
  .then(result => console.log(result.scrapingResult?.extractedData))
  .catch(error => console.log('error', error));
Running this code will output a list of all the product titles on the demo site page:
As you can see, we only got the necessary data using the Web Scraping API and a fairly simple POST request.
Fetch for Other HTTP Requests
As mentioned earlier, the Fetch API supports all major HTTP methods. In addition to GET and POST, PUT and DELETE are commonly used. To use these methods, create a new *.js file and import the node-fetch module. Then, specify the method type and execute the request.
Make PUT Request using Fetch
First, define the parameters you want to update with the PUT method.
const updatedData = { key: 'updatedValue' };
Then, simplify the previous code by specifying the URL and other parameters directly in the Fetch command.
fetch('https://example.com', {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(updatedData)
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error('Error:', error));
As you can see, the request is not much different from previous examples.
Make DELETE Request using Fetch
The last method is used to delete data. Let’s modify the previous example slightly:
fetch('https://api.example.com', {
  method: 'DELETE'
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error('Error:', error));
In addition to these methods, Fetch supports others, such as PATCH, HEAD, and OPTIONS. Their usage is similar to the methods we have already discussed.
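A HEAD request, for instance, follows exactly the same pattern; it returns the status and headers but never a body, which makes it handy for checking a resource before downloading it. A minimal sketch (example.com is a placeholder URL):

```javascript
// HEAD request: same fetch pattern, but the response carries only
// status and headers, no body.
fetch('https://example.com/', { method: 'HEAD' })
  .then(response => {
    console.log('Status:', response.status);
    console.log('Content-Type:', response.headers.get('Content-Type'));
  })
  .catch(error => console.error('Error:', error));
```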
Handling Fetch API Responses
Previous examples have covered the basics of processing NodeJS Fetch API responses. However, there are a few additional things to consider. First, you should check the response status to ensure the request succeeded. Second, you should handle both JSON and text responses differently. Third, you can also process the response headers.
Processing the Response
We can use a ternary operator to assign the correct processing logic in a scenario where the response format is unknown (JSON or text). The response will be processed as JSON in case of a successful request. In case of an error, the response will be processed as text.
fetch(url)
  .then(response => response.ok ? response.json() : response.text())
  .then(data => console.log('Data:', data))
  .catch(error => console.error('Error:', error));
As a result, we achieved dynamic processing that automatically adapts to the type of HTTP response.
Getting the Response Status Code
Handling status codes is an important part of writing good code. The status code indicates the outcome of a request. For example, a 200 status code indicates that the request was successful. A 500 status code indicates a server error. A 404 status code indicates that the requested page was not found. Now, let’s put the above example into practice:
fetch(url)
  .then(response => {
    if (response.ok) {
      // Successful response
      return response.json();
    } else if (response.status === 500) {
      // Server error: a retry could be attempted here
      return null;
    } else if (response.status === 404) {
      console.log('Page not found.');
      return null;
    } else {
      // Any other error
      throw new Error(`HTTP error! Status: ${response.status}`);
    }
  })
  .catch(error => {
    console.error('Error:', error);
  });
It is important to note that the .catch() at the end of the function handles HTTP request errors and any errors that may occur during processing. Following these best practices can make your code more resilient, flexible, and reliable.
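The same status handling reads naturally with async/await as well; here try/catch takes over the role of .catch(), covering both network failures and processing errors. This is a style sketch of the logic above, not a change in behavior:

```javascript
// async/await version of the status-code handling above.
async function fetchJson(url) {
  try {
    const response = await fetch(url);
    if (response.ok) {
      return await response.json(); // successful response
    }
    if (response.status === 404) {
      console.log('Page not found.');
      return null;
    }
    throw new Error(`HTTP error! Status: ${response.status}`);
  } catch (error) {
    // Network errors, invalid JSON, and thrown HTTP errors all land here.
    console.error('Error:', error);
    return null;
  }
}
```

Calling `fetchJson(url)` then returns either the parsed data or null, with every failure path already logged.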
Working with Response Headers
Working with response HTTP headers may sometimes be necessary. For example, let’s consider an example where we get the value of the Content-Type header:
fetch(url)
  .then(response => {
    const contentType = response.headers.get('Content-Type');
    console.log('Content-Type:', contentType);
    return response.text();
  })
  .then(body => console.log('Body:', body))
  .catch(error => {
    console.error('Error:', error);
  });
The rest of the headers can be retrieved similarly.
Best Practices and Tips
The more of these features you use, the more practical, functional, and user-friendly your code becomes. As additional ways to use the Fetch API, let’s consider logging asynchronous web requests and transferring files.
Logging HTTP Requests
Logging HTTP requests is essential for debugging and monitoring application performance. Typically, this involves writing logs to the console or a central log file. We can create a separate function to log into the console for convenience.
function logRequest(url, method, status) {
  console.log(`[HTTP Request] ${method} ${url} - Status: ${status}`);
}
To log data, simply call a pre-defined function in the desired location and pass it the link, method, and response status code.
fetch(url)
  .then(response => {
    logRequest(url, 'GET', response.status);
  })
  .catch(error => {
    console.error('Error:', error);
  });
In the future, you can customize the logging function to your needs. For example, instead of displaying logs on the screen, you can implement logging to a file.
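For instance, a file-based variant of the logging function might look like this (the requests.log file name is an arbitrary choice):

```javascript
import fs from 'fs';

// Append one timestamped line per request to a log file instead of
// printing to the console.
function logRequestToFile(url, method, status) {
  const line = `[${new Date().toISOString()}] ${method} ${url} - Status: ${status}\n`;
  fs.appendFileSync('requests.log', line);
  return line;
}

logRequestToFile('https://example.com/', 'GET', 200);
```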
Send File Using Fetch
To send a file, you can use the fs module to read the file into memory and fetch to make the HTTP request (built-in on Node.js 18+, or provided by node-fetch on older versions). First, import the fs and node-fetch modules into your project:
import fetch from 'node-fetch';
import fs from 'fs';
Next, specify the path to the file you want to send and the URL of the page that will receive the file:
const url = 'https://example.com';
const filePath = 'path/file.txt';
Then, read the file into memory and attach it to a FormData object. Note that FormData and Blob are built-in globals in Node.js 18+; the value passed to append() must be a Blob, with the file name as the third argument:
const fileData = fs.readFileSync(filePath);
const formData = new FormData();
formData.append('file', new Blob([fileData]), 'file.txt');
Set the request options, including the POST method and the request body with the file:
const options = {
  method: 'POST',
  body: formData,
};
And finally, execute the request:
fetch(url, options)
  .then(response => response.ok ? response.json() : Promise.reject('HTTP error!'))
  .then(data => console.log('Response:', data))
  .catch(error => console.error('Error:', error));
If the request is successful, the file will be sent to the page specified by the URL.
Conclusion and Takeaways
The Node Fetch API provides a simple and efficient way to scrape HTML pages with various HTTP requests in Node.js. Its advantages include clear and concise syntax, automatic JSON parsing, and streaming support, which helps work with large files.
This article provides a comprehensive overview of the primary usage of Fetch in Node.js, starting with installing the required package with npm and examples of the main methods, such as GET and POST. In addition, we discussed an example of interacting with a web scraping API using POST requests. We also provided examples of other methods, such as PUT and DELETE.
An important aspect of the article is handling Fetch API responses. We cover methods for handling response statuses, different response formats (JSON and text), and working with response headers. Finally, we provide practical tips, such as logging HTTP requests for debugging and monitoring, and examples of sending files using Fetch API.