Dynamically loaded pages are tricky to scrape. The two common approaches are scrapy_splash and selenium. scrapy_splash seems to be the more capable of the two: for scraping this Meituan site, scrapy_splash worked while my selenium attempt did not. It may just be that my selenium setup was wrong, since in principle both should work.
First you need to set up Splash for scrapy_splash, which requires Docker. Tutorials are easy to find online and it only takes a couple of commands, assuming you are on Linux. Once the Splash container is running, visit http://127.0.0.1:8050/ to confirm the service is up.
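For reference, the two commands usually look like the following, a sketch assuming Docker is already installed and port 8050 is free:

    # pull the official Splash image and run it, exposing the HTTP API on port 8050
    sudo docker pull scrapinghub/splash
    sudo docker run -d -p 8050:8050 scrapinghub/splash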
Then install the Python module: pip3 install scrapy-splash
scrapy-splash needs a Lua script, for example to add parameters or request headers. Look up the exact syntax yourself; I am not entirely clear on it either.
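As a rough illustration, the bare minimum an "execute" script needs is a main function that loads the page, waits for it to render, and returns the HTML. This is only a sketch; the full script actually used for Meituan appears further below:

    function main(splash, args)
        -- load the target URL passed in via args
        assert(splash:go(args.url))
        -- give the JavaScript time to render
        assert(splash:wait(1))
        -- hand the rendered HTML back to the spider
        return {html = splash:html()}
    end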
After creating the Scrapy project, a few settings need to be added to settings.py:
    # scrapy_splash downloader middlewares
    DOWNLOADER_MIDDLEWARES = {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
    }

    SPIDER_MIDDLEWARES = {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    }

    # scrapy_splash's deduplication class
    DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

    # finally, configure a cache storage backend
    HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

    # address of the Splash service
    SPLASH_URL = 'http://localhost:8050'
Then write the spider itself, which is also straightforward: issue a SplashRequest instead of a normal Request and point it at the Lua script to execute. The response that comes back is the fully rendered page.
    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy_splash import SplashRequest

    # Lua script executed by Splash: set request headers, load the page,
    # wait for the JavaScript to render, then return the result.
    script = """
    function main(splash, args)
        assert(splash:wait(0.5))
        splash:set_custom_headers({
            ['Accept'] = '*/*',
            ['Accept-Language'] = 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
            ['Cache-Control'] = 'max-age=0',
            ['Connection'] = 'keep-alive',
            ['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'
        })
        splash.private_mode_enabled = false
        assert(splash:go(args.url))
        assert(splash:wait(10))
        return {
            html = splash:html(),
            png = splash:png(),
            har = splash:har(),
        }
    end
    """


    class MeituanspiderSpider(scrapy.Spider):
        name = 'meituanSpider'
        # allowed_domains = ['zz.meituan.com']
        start_urls = ['http://zz.meituan.com/meishi/pn/']

        def start_requests(self):
            # use the 'execute' endpoint so Splash runs the Lua script above
            yield SplashRequest(self.start_urls[0], callback=self.parse,
                                endpoint='execute',
                                args={'lua_source': script, 'wait': 7})

        def parse(self, response):
            # shop names
            names = response.xpath('//*[@id="app"]/section/div/div[2]/div[2]/div[1]/ul/li/div[2]/a/h4/text()').extract()
            for name in names:
                print(name)
Without the script the request does not succeed, presumably because Meituan blocks the default Splash requests. The script itself is simple; it mostly just sets some request headers.
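With the settings and spider in place, running it is the usual Scrapy command, where the spider name comes from the name attribute in the code above:

    scrapy crawl meituanSpider

If everything is set up correctly, the shop names extracted by the XPath in parse() are printed to the console.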