Search engine scraping is the process of harvesting URLs, descriptions, or other information from search engines. This is a specific form of screen scraping or web scraping dedicated to search engines only.
Most commonly, larger search engine optimization (SEO) providers depend on regularly scraping search results to monitor the competitive position of their customers' websites for relevant keywords, or to check their indexing status.
The process of entering a website and extracting data in an automated fashion is also often called "crawling". Search engines get almost all their data from automated crawling bots.
Google is by far the largest search engine, both in number of users and in advertising revenue, which makes Google the most important search engine to scrape for SEO-related companies.[1]
Although Google does not take legal action against scraping, it uses a range of defensive methods that make scraping its results a challenging task, even when the scraping tool is realistically spoofing a normal web browser:
When the search engine's defenses suspect that an access is automated, it can react in several ways.
The first layer of defense is a captcha page[4] where the user is prompted to verify they are a real person and not a bot or tool. Solving the captcha will create a cookie that permits access to the search engine again for a while. After about one day, the captcha page is displayed again.
The second layer of defense is a similar error page, but without a captcha. In this case the user is completely blocked from using the search engine until the temporary block is lifted or the user changes their IP address.
The third layer of defense is a long-term block of an entire network segment. Google has blocked large network ranges for months. This sort of block is likely triggered by an administrator and only happens when a scraping tool sends a very high number of requests.
All these forms of detection may also happen to a normal user, especially users sharing the same IP address or network class (IPv4 ranges as well as IPv6 ranges).
To scrape a search engine successfully, the two major factors are timing and volume: how quickly requests are sent and how many keywords and result pages are requested.
The more keywords a user needs to scrape and the shorter the time window for the job, the more difficult scraping will be and the more developed a scraping script or tool needs to be. For example, scraping 10,000 keyword result pages at a conservative rate of one request every 30 seconds from a single IP address would take roughly three and a half days; compressing the same job into a few hours requires many parallel IP addresses and correspondingly more sophisticated tooling.
Scraping scripts need to overcome a few technical challenges: rotating IP addresses or proxies to avoid blocks, managing the timing of requests, emulating the headers and cookies of a normal browser, handling errors and detection pages, and parsing the returned HTML to extract results. A minimal sketch of a few of these points is shown below.
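The following Python sketch illustrates request throttling, header rotation, and simple detection handling under assumed values; the endpoint URL, user-agent strings, delay intervals, and detection heuristics are placeholders for illustration, not details of any real search engine.

```python
# Minimal sketch of a throttled results fetcher. The endpoint and header pool
# below are placeholders; nothing here is specific to a real search engine.
import random
import time

import requests

# Hypothetical pool of common desktop browser identifiers to rotate through.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]


def fetch_results_page(query, session):
    """Fetch one results page for `query`, backing off if detection is suspected."""
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    response = session.get(
        "https://search.example.com/search",  # placeholder endpoint
        params={"q": query},
        headers=headers,
        timeout=10,
    )
    # A 429 status or a redirect to a captcha-like URL suggests detection:
    # back off for a long interval instead of retrying immediately.
    if response.status_code == 429 or "captcha" in response.url.lower():
        time.sleep(300)
        return None
    response.raise_for_status()
    # Randomized pause between requests to spread out the request rate.
    time.sleep(random.uniform(10, 30))
    return response.text


if __name__ == "__main__":
    with requests.Session() as session:
        html = fetch_results_page("example keyword", session)
        print(len(html) if html else "blocked or detected")
```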
When developing a scraper for a search engine, almost any programming language can be used, although, depending on performance requirements, some languages are more favorable than others.
PHP is a commonly used language for writing scraping scripts for websites or backend services, since it has powerful capabilities built in (DOM parsers, libcURL); however, its memory usage is typically about ten times that of comparable C/C++ code. Ruby on Rails as well as Python are also frequently used to automate scraping jobs.
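As an illustration of the DOM-parsing step, a Python sketch using BeautifulSoup is shown below; the assumed markup (each result link wrapping an <h3> title) is an example only, since real result-page HTML differs between engines and changes frequently.

```python
# Illustrative DOM-parsing step: pull titles and URLs out of a fetched results
# page. The assumed markup is an example; real result pages vary and change.
from bs4 import BeautifulSoup


def parse_results(html):
    """Return a list of {'title': ..., 'url': ...} dicts found in the page."""
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for link in soup.find_all("a", href=True):
        title = link.find("h3")
        if title and link["href"].startswith("http"):
            results.append({
                "title": title.get_text(strip=True),
                "url": link["href"],
            })
    return results
```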
Additionally, bash scripting can be used together with cURL as a command line tool to scrape a search engine.
When scraping websites and services, legality is often a major concern for companies; in the case of web scraping it depends greatly on the country the scraping user or company is based in, as well as which data or website is being scraped, and there have been many different court rulings all over the world.[5][6]
However, when it comes to scraping search engines the situation is different: search engines usually cannot claim the listed content as their own intellectual property, as they merely repeat or summarize information they scraped from other websites.
The largest publicly known incident of a search engine being scraped happened in 2011, when Microsoft was caught scraping unknown keywords from Google for its own, relatively new Bing service,[7] but even this incident did not result in a court case.