1. Introduction

We see web crawlers in use every time we use our favorite search engine. They’re also commonly used to scrape and analyze data from websites.

In this tutorial, we’re going to learn how to use crawler4j to set up and run our own web crawlers. crawler4j is an open source Java project that allows us to do this easily.

2. Setup

Let’s use Maven Central to find the most recent version and bring in the Maven dependency:

<dependency>
    <groupId>edu.uci.ics</groupId>
    <artifactId>crawler4j</artifactId>
    <version>4.4.0</version>
</dependency>

3. Creating Crawlers

3.1. Simple HTML Crawler

We’re going to start by creating a basic crawler that crawls the HTML pages on https://www.baeldung.com.

Let’s create our crawler by extending WebCrawler in our crawler class and defining a pattern to exclude certain file types:

public class HtmlCrawler extends WebCrawler {

    private static final Pattern EXCLUSIONS
      = Pattern.compile(".*(\\.(css|js|xml|gif|jpg|png|mp3|mp4|zip|gz|pdf))$");

    // more code
}

In each crawler class, we must override two methods: shouldVisit and visit.

Let’s create our shouldVisit method now using the EXCLUSIONS pattern we created:

@Override
public boolean shouldVisit(Page referringPage, WebURL url) {
    String urlString = url.getURL().toLowerCase();
    return !EXCLUSIONS.matcher(urlString).matches() 
      && urlString.startsWith("https://www.baeldung.com/");
}
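
For example, with this filter in place (the sample URLs below are our own), an article URL on the site is visited, while a static resource is skipped because its extension matches EXCLUSIONS:

String articleUrl = "https://www.baeldung.com/java-collections";
String imageUrl = "https://www.baeldung.com/images/logo.png";

// no excluded extension and on the seed domain -> visited
boolean visitsArticle = !EXCLUSIONS.matcher(articleUrl).matches()
  && articleUrl.startsWith("https://www.baeldung.com/");   // true

// ".png" matches EXCLUSIONS -> skipped
boolean visitsImage = !EXCLUSIONS.matcher(imageUrl).matches()
  && imageUrl.startsWith("https://www.baeldung.com/");     // false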

Then, we can do our processing for visited pages in the visit method:

@Override
public void visit(Page page) {
    String url = page.getWebURL().getURL();

    if (page.getParseData() instanceof HtmlParseData) {
        HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
        String title = htmlParseData.getTitle();
        String text = htmlParseData.getText();
        String html = htmlParseData.getHtml();
        Set<WebURL> links = htmlParseData.getOutgoingUrls();

        // do something with the collected data
    }
}

Once we have our crawler written, we need to configure and run it:

File crawlStorage = new File("src/test/resources/crawler4j");
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder(crawlStorage.getAbsolutePath());

int numCrawlers = 12;

PageFetcher pageFetcher = new PageFetcher(config);
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

controller.addSeed("https://www.baeldung.com/");

CrawlController.WebCrawlerFactory<HtmlCrawler> factory = HtmlCrawler::new;

controller.start(factory, numCrawlers);

We configured a temporary storage directory, specified the number of crawling threads, and seeded the crawler with a starting URL.

We should also note that the CrawlController.start() method is a blocking operation. Any code after that call will only execute after the crawler has finished running.
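
For instance, a log line placed right after the call (our own addition) won’t run until every crawler thread is done:

controller.start(factory, numCrawlers);

// reached only once all 12 crawler threads have finished
System.out.println("Crawl complete");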

3.2. ImageCrawler

By default, crawler4j doesn’t crawl binary data. In this next example, we’ll turn on that functionality and crawl all the JPEGs on Baeldung.

Let’s start by defining the ImageCrawler class with a constructor that takes a directory for saving images:

public class ImageCrawler extends WebCrawler {
    private static final Pattern EXCLUSIONS
      = Pattern.compile(".*(\\.(css|js|xml|gif|png|mp3|mp4|zip|gz|pdf))$");
    
    private static final Pattern IMG_PATTERNS = Pattern.compile(".*(\\.(jpg|jpeg))$");
    
    private File saveDir;
    
    public ImageCrawler(File saveDir) {
        this.saveDir = saveDir;
    }

    // more code

}

Next, let’s implement the shouldVisit method:

@Override
public boolean shouldVisit(Page referringPage, WebURL url) {
    String urlString = url.getURL().toLowerCase();
    if (EXCLUSIONS.matcher(urlString).matches()) {
        return false;
    }

    if (IMG_PATTERNS.matcher(urlString).matches() 
        || urlString.startsWith("https://www.baeldung.com/")) {
        return true;
    }

    return false;
}

Now, we’re ready to implement the visit method:

@Override
public void visit(Page page) {
    String url = page.getWebURL().getURL();
    if (IMG_PATTERNS.matcher(url).matches() 
        && page.getParseData() instanceof BinaryParseData) {
        String extension = url.substring(url.lastIndexOf("."));
        int contentLength = page.getContentData().length;

        // write the content data to a file in the save directory
    }
}
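
The actual file writing is left out here; as one possible way to fill in that placeholder, a small saveImage helper inside ImageCrawler could write the raw bytes to the save directory under a random name using java.nio.file.Files and java.util.UUID (the helper and its naming scheme are our own, not part of crawler4j):

private void saveImage(String url, String extension, byte[] contentData) {
    // pick a unique file name in the configured save directory
    Path target = saveDir.toPath().resolve(UUID.randomUUID() + extension);
    try {
        Files.write(target, contentData);
    } catch (IOException e) {
        throw new UncheckedIOException("Could not save image from " + url, e);
    }
}

We’d then call saveImage(url, extension, page.getContentData()) from visit in place of the placeholder comment.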

Running our ImageCrawler is similar to running the HtmlCrawler, but we need to configure it to include binary content:

CrawlConfig config = new CrawlConfig();
config.setIncludeBinaryContentInCrawling(true);

// ... same as before
        
CrawlController.WebCrawlerFactory<ImageCrawler> factory = () -> new ImageCrawler(saveDir);
        
controller.start(factory, numCrawlers);
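
Here, saveDir is simply the directory we want the images written to, passed into the ImageCrawler constructor; for example (the path is just an illustration):

// make sure the target directory exists before the crawl starts
File saveDir = new File("src/test/resources/crawler4j/images");
saveDir.mkdirs();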

3.3. Collecting Data

Now that we’ve looked at a couple of basic examples, let’s expand on our HtmlCrawler to collect some basic statistics during our crawl.

First, let’s define a simple class to hold a couple of statistics:

public class CrawlerStatistics {
    private int processedPageCount = 0;
    private int totalLinksCount = 0;
    
    public void incrementProcessedPageCount() {
        processedPageCount++;
    }
    
    public void incrementTotalLinksCount(int linksCount) {
        totalLinksCount += linksCount;
    }
    
    // standard getters
}
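
One thing to keep in mind: the controller runs several crawler threads that will all share one CrawlerStatistics instance, so the plain int counters above can lose updates. If exact counts matter, a thread-safe variant based on AtomicInteger (our own adaptation) could look like this:

public class CrawlerStatistics {
    private final AtomicInteger processedPageCount = new AtomicInteger();
    private final AtomicInteger totalLinksCount = new AtomicInteger();

    public void incrementProcessedPageCount() {
        processedPageCount.incrementAndGet();
    }

    public void incrementTotalLinksCount(int linksCount) {
        totalLinksCount.addAndGet(linksCount);
    }

    public int getProcessedPageCount() {
        return processedPageCount.get();
    }

    public int getTotalLinksCount() {
        return totalLinksCount.get();
    }
}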

Next, let’s modify our HtmlCrawler to accept a CrawlerStatistics instance via a constructor:

private CrawlerStatistics stats;
    
public HtmlCrawler(CrawlerStatistics stats) {
    this.stats = stats;
}

With our new CrawlerStatistics object, let’s modify the visit method to collect what we want:

@Override
public void visit(Page page) {
    String url = page.getWebURL().getURL();
    stats.incrementProcessedPageCount();

    if (page.getParseData() instanceof HtmlParseData) {
        HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
        String title = htmlParseData.getTitle();
        String text = htmlParseData.getText();
        String html = htmlParseData.getHtml();
        Set<WebURL> links = htmlParseData.getOutgoingUrls();
        stats.incrementTotalLinksCount(links.size());

        // do something with collected data
    }
}

Now, let’s head back to our controller and provide the HtmlCrawler with an instance of CrawlerStatistics:

CrawlerStatistics stats = new CrawlerStatistics();
CrawlController.WebCrawlerFactory<HtmlCrawler> factory = () -> new HtmlCrawler(stats);
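
Because start is blocking, we can read the collected totals through the standard getters as soon as it returns, for example:

controller.start(factory, numCrawlers);

System.out.println("Pages processed: " + stats.getProcessedPageCount());
System.out.println("Links found: " + stats.getTotalLinksCount());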

3.4. Multiple Crawlers

Building on our previous examples, let’s now have a look at how we can run multiple crawlers from the same controller.

It’s recommended that each crawler use its own temporary storage directory, so we need to create separate configurations for each one we’ll be running.

The CrawlControllers can share a single RobotstxtServer, but otherwise, we basically need a copy of everything.

So far, we’ve used the CrawlController.start method to run our crawlers and noted that it’s a blocking method. To run multiple crawlers, we’ll use CrawlController.startNonBlocking in conjunction with CrawlController.waitUntilFinish.

Now, let’s create a controller to run HtmlCrawler and ImageCrawler concurrently:

File crawlStorageBase = new File("src/test/resources/crawler4j");
CrawlConfig htmlConfig = new CrawlConfig();
CrawlConfig imageConfig = new CrawlConfig();
        
// Configure storage folders and other configurations
        
PageFetcher pageFetcherHtml = new PageFetcher(htmlConfig);
PageFetcher pageFetcherImage = new PageFetcher(imageConfig);
        
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcherHtml);

CrawlController htmlController
  = new CrawlController(htmlConfig, pageFetcherHtml, robotstxtServer);
CrawlController imageController
  = new CrawlController(imageConfig, pageFetcherImage, robotstxtServer);
        
// add seed URLs
        
CrawlerStatistics stats = new CrawlerStatistics();
CrawlController.WebCrawlerFactory<HtmlCrawler> htmlFactory = () -> new HtmlCrawler(stats);
        
File saveDir = new File("src/test/resources/crawler4j");
CrawlController.WebCrawlerFactory<ImageCrawler> imageFactory
  = () -> new ImageCrawler(saveDir);
        
imageController.startNonBlocking(imageFactory, 7);
htmlController.startNonBlocking(htmlFactory, 10);

htmlController.waitUntilFinish();
imageController.waitUntilFinish();
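
For completeness, the two elided placeholders above might be filled in along these lines (the sub-folder names are our own choice; the important part is that each crawler gets its own storage folder):

// give each crawler its own storage sub-folder
htmlConfig.setCrawlStorageFolder(new File(crawlStorageBase, "html").getAbsolutePath());
imageConfig.setCrawlStorageFolder(new File(crawlStorageBase, "image").getAbsolutePath());
imageConfig.setIncludeBinaryContentInCrawling(true);

// seed both controllers with the starting URL
htmlController.addSeed("https://www.baeldung.com/");
imageController.addSeed("https://www.baeldung.com/");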

4. Configuration

We’ve already seen some of what we can configure. Now, let’s go over some other common settings.

Settings are applied to the CrawlConfig instance we specify in our controller.

4.1. Limiting Crawl Depth

By default, our crawlers will crawl as deep as they can. To limit how deep they’ll go, we can set the crawl depth:

crawlConfig.setMaxDepthOfCrawling(2);

Seed URLs are considered to be at depth 0, so a crawl depth of 2 will go two layers beyond the seed URL.

4.2. Maximum Pages to Fetch

Another way to limit how many pages our crawlers will cover is to set the maximum number of pages to crawl:

crawlConfig.setMaxPagesToFetch(500);

4.3. Maximum Outgoing Links

We can also limit the number of outgoing links followed off each page:

crawlConfig.setMaxOutgoingLinksToFollow(2000);

4.4. Politeness Delay

Since very efficient crawlers can easily be a strain on web servers, crawler4j has what it calls a politeness delay. By default, it’s set to 200 milliseconds. We can adjust this value if we need to:

crawlConfig.setPolitenessDelay(300);

4.5. Include Binary Content

We already used the option for including binary content with our ImageCrawler:

crawlConfig.setIncludeBinaryContentInCrawling(true);

4.6. Include HTTPS

By default, crawlers will include HTTPS pages, but we can turn that off:

crawlConfig.setIncludeHttpsPages(false);

4.7. Resumable Crawling

If we have a long-running crawler that might be stopped or crash partway through, we can enable resumable crawling so that it picks up where it left off when restarted. Turning it on may make the crawl run slower:

crawlConfig.setResumableCrawling(true);

4.8. User-Agent String

The default user-agent string for crawler4j is crawler4j. Let’s customize that:

crawlConfig.setUserAgentString("baeldung demo (https://github.com/yasserg/crawler4j/)");

We’ve just covered some of the basic configurations here. We can look at the CrawlConfig class if we’re interested in some of the more advanced or obscure configuration options.
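
Putting several of these options together, a fully configured CrawlConfig might look like the following sketch (the specific values are only for illustration):

CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder("src/test/resources/crawler4j");
config.setMaxDepthOfCrawling(2);
config.setMaxPagesToFetch(500);
config.setMaxOutgoingLinksToFollow(2000);
config.setPolitenessDelay(300);
config.setIncludeBinaryContentInCrawling(true);
config.setIncludeHttpsPages(true);
config.setResumableCrawling(false);
config.setUserAgentString("baeldung demo (https://github.com/yasserg/crawler4j/)");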

5. Conclusion

In this article, we’ve used crawler4j to create our own web crawlers. We started with two simple examples of crawling HTML and images. Then, we built on those examples to see how we can gather statistics and run multiple crawlers concurrently.

The full code examples are available over on GitHub.
