https://www.linkedin.com/pulse/java-how-use-headless-browsers-crawling-web-scraping-data-taluyev/
Have you ever thought about writing software to scrape data from web pages? I guess everyone who works with the web has considered crawling it at some point.
The simplest way to get data from a remote page is to open your favourite web browser, load the target page, select the text you need, then copy and paste it into a text editor for further transformation. Joke :)
So, seriously: how do we automate this routine process? Let's list the primary tasks our crawler has to solve:
- Load the data from the remote host. It is no secret how to do this...
- Parse the loaded HTML and build a DOM (Document Object Model).
- Extract data by traversing the DOM or using CSS selectors.
- Save the data or pass it on to other tasks.
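The first step, loading the page, needs nothing beyond the JDK: since Java 11 the built-in java.net.http.HttpClient covers it. A minimal sketch (the target URL here is just a placeholder for whatever page you want to crawl):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PageLoader {
    public static void main(String[] args) throws Exception {
        // Build a reusable client that follows redirects
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL)
                .build();

        // example.com is a placeholder target; replace it with your page
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com"))
                .header("User-Agent", "my-crawler/1.0")
                .GET()
                .build();

        // Fetch the raw HTML as a String
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status: " + response.statusCode());
        System.out.println(response.body());
    }
}
```

The String body then goes straight into the parsing step below.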
Parsing static HTML is a fairly easy task. There are Java libraries that do it very well; I would recommend taking a look at http://jsoup.org. In simple cases it is all you need.
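For instance, pulling all links out of a static page with jsoup takes only a few lines. The inline HTML below stands in for a page you have already fetched, and the base URL is an assumed placeholder used to resolve relative links (the library is the org.jsoup:jsoup artifact):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class LinkExtractor {
    public static void main(String[] args) {
        // Inline HTML stands in for a page fetched from a remote host
        String html = "<html><body>"
                + "<a href='/one'>First</a>"
                + "<a href='/two'>Second</a>"
                + "</body></html>";

        // jsoup parses the markup and builds a DOM we can query;
        // the base URL lets absUrl() resolve relative hrefs
        Document doc = Jsoup.parse(html, "https://example.com/");

        // CSS selectors pick out the elements we care about
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.text() + " -> " + link.absUrl("href"));
        }
    }
}
```

The same select() call works on a Document fetched directly with Jsoup.connect(url).get().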
But what about HTML that is generated by JavaScript and therefore hidden from a plain HTTP fetch? We need a browser, or we would need to implement one :) Fortunately, we do not have to write our own browser just to build a crawler: headless browsers already exist. Our heroes: http://phantomjs.org, https://slimerjs.org
How do we organize communication between a Java program and a headless browser? This is where Ghost Driver enters the stage. Both browsers support this driver out of the box. Ghost Driver is a "relative" of WebDriver, which is well known among test engineers, so there are plenty of code examples and manuals. And we are free to use Maven to integrate Ghost Driver into the crawler application.
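Wiring this together looks roughly like the sketch below. It assumes the PhantomJS binary is installed and on the PATH, and that the Ghost Driver Java bindings are pulled in via Maven; treat the coordinates and version in the comment as assumptions to verify against your build:

```java
// Assumed Maven dependency (verify the version before use):
//   <dependency>
//     <groupId>com.github.detro</groupId>
//     <artifactId>phantomjsdriver</artifactId>
//     <version>1.2.0</version>
//   </dependency>
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.phantomjs.PhantomJSDriver;

public class HeadlessCrawler {
    public static void main(String[] args) {
        // Launches a PhantomJS process and talks to it via Ghost Driver
        WebDriver driver = new PhantomJSDriver();
        try {
            // The target URL is a placeholder
            driver.get("https://example.com");

            // By now the page's JavaScript has run, so generated HTML is
            // visible to the same WebDriver calls test engineers use
            WebElement heading = driver.findElement(By.cssSelector("h1"));
            System.out.println(heading.getText());
        } finally {
            // Always shut the browser process down
            driver.quit();
        }
    }
}
```

Because this is plain WebDriver, swapping the browser later means changing only the driver construction line.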
There are differences between http://phantomjs.org and https://slimerjs.org; they are well documented on the FAQ page of the SlimerJS project.
It also makes sense to consider CasperJS (casperjs.org), a navigation scripting and testing utility for PhantomJS and SlimerJS, written in JavaScript.
What if we want to use neither PhantomJS nor SlimerJS? There are alternatives.
At this point I propose to pause. We now have enough information to dive into implementing web crawler applications.
After all, analytics starts with gulps of data :)
Please like and share if you found my article useful :-)