• Scrapy Framework (7) -- Middleware and Using Selenium


    Middleware

    Downloader middlewares are a layer of components sitting between the Scrapy engine and the downloader.

    Purpose: intercept, in one place, every request and response that flows through the project.

    - Intercepting requests:
      - UA spoofing: process_request
      - Proxy IP: process_exception (return the request)

    - Intercepting responses:
      - Tamper with the response data / response object, e.g. to handle dynamically loaded data.

    UA pool: a pool of User-Agent strings

    Purpose: disguise the requests of a Scrapy project as coming from as many different browser identities as possible.
    Workflow:

      1. Intercept the request in the downloader middleware

      2. Replace the User-Agent in the intercepted request's headers with a spoofed one

      3. Enable the downloader middleware in the settings file

    Encapsulating the UA pool:

    user_agent_list = [
                  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
                  "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
                  "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
                  "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
                  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
                  "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
                  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
                  "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
                  "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
                  "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
                  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
                  "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
                  "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
                  "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
                  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
                  "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
                  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
                  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
                  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
                  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
                  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
                  "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
                  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
                  "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
                  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
                  "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
                  "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
                  "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
    ]
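    A minimal sketch of how such a pool is typically consumed, assuming user_agent_list is defined at module level as above (the class name RandomUserAgentMiddleware is illustrative, not part of the original project):

    import random

    class RandomUserAgentMiddleware:
        def process_request(self, request, spider):
            # pick a different browser identity for every outgoing request
            request.headers['User-Agent'] = random.choice(user_agent_list)
            return None  # let the request continue down the normal download path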

    Proxy pool

    Purpose: send the requests of a Scrapy project from as many different IPs as possible.
    Workflow:

      1. Intercept the request in the downloader middleware

      2. Route the intercepted request through a proxy IP

      3. Enable the downloader middleware in the settings file

    Example:

     import random

     # candidate proxy IPs
     PROXY_http = [
         '153.180.102.104:80',
         '195.208.131.189:56055',
     ]
     PROXY_https = [
         '120.83.49.90:9000',
         '95.189.112.214:35508',
     ]

     class Proxy(object):
         def process_request(self, request, spider):
             # check the scheme of the intercepted request's url (http or https)
             # request.url looks like: http://www.xxx.com
             h = request.url.split(':')[0]  # scheme of the request
             if h == 'https':
                 ip = random.choice(PROXY_https)
                 request.meta['proxy'] = 'https://' + ip
             else:
                 ip = random.choice(PROXY_http)
                 request.meta['proxy'] = 'http://' + ip
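    Like any downloader middleware, it only takes effect once it is registered in settings.py; a minimal sketch, assuming the project module is called myproject (the path is illustrative):

    DOWNLOADER_MIDDLEWARES = {
        'myproject.middlewares.Proxy': 543,
    }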

    Middleware example

    Settings file

    DOWNLOADER_MIDDLEWARES = {
       'Img.middlewares.ImgDownloaderMiddleware': 543,
    }

    process_request / process_exception

    import random

    from scrapy import signals


    class ImgDownloaderMiddleware:
        # Not all methods need to be defined. If a method is not defined,
        # scrapy acts as if the downloader middleware does not modify the
        # passed objects.
    
        user_agent_list = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
            "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
            "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
            "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
            "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
            "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
            "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
            "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
            "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
            "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
            "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]
        proxy_http = [
            '60.188.2.46:3000',
            '110.243.16.20:9999'
        ]
        proxy_https = [
            '60.179.201.207:3000',
            '60.179.200.202:3000'
        ]
        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create your spiders.
            s = cls()
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s
    
        def process_request(self, request, spider):
            # UA spoofing: pick a random User-Agent for every request
            request.headers['User-Agent'] = random.choice(self.user_agent_list)
            # fixed proxy for every request; the commented block below would
            # instead pick one from the pools according to the request's scheme
            request.meta['proxy'] = 'http://60.188.2.46:3000'
            # if request.url.split(':')[0] == 'http':
            #     request.meta['proxy'] = 'http://' + random.choice(self.proxy_http)
            # if request.url.split(':')[0] == 'https':
            #     request.meta['proxy'] = 'https://' + random.choice(self.proxy_https)
            return None
    
        def process_response(self, request, response, spider):
            # Called with the response returned from the downloader.
    
            # Must either;
            # - return a Response object
            # - return a Request object
            # - or raise IgnoreRequest
            return response
    
        def process_exception(self, request, exception, spider):
            # when a request fails (e.g. the current proxy is blocked), switch
            # to a random proxy that matches the request's scheme
            if request.url.split(':')[0] == 'http':
                request.meta['proxy'] = 'http://' + random.choice(self.proxy_http)
            if request.url.split(':')[0] == 'https':
                request.meta['proxy'] = 'https://' + random.choice(self.proxy_https)
            return request  # re-schedule the corrected request
    
    
        def spider_opened(self, spider):
            spider.logger.info('Spider opened: %s' % spider.name)

    process_response

    from scrapy.http import HtmlResponse

    def process_response(self, request, response, spider):  # spider is the instantiated spider object from the spider file
        bro = spider.bro  # the Selenium browser object created by the spider
        if request.url in spider.urls:  # only the pages whose data is loaded dynamically
            bro.get(request.url)  # let Selenium request the dynamically loaded page
            page_text = bro.page_source
            # wrap the rendered page into a new response object and return it
            new_response = HtmlResponse(url=request.url, body=page_text, encoding='utf-8', request=request)
            return new_response
        else:
            return response

    Using Selenium in Scrapy
    When crawling certain sites with Scrapy, you frequently run into pages whose data is loaded dynamically. If Scrapy requests such a URL directly, the dynamically loaded portion of the data will not appear in the response.

    A browser, however, does load that dynamic data when it requests the same URL. So if we want Scrapy to obtain the dynamically loaded data as well, we have to create a browser object with Selenium and send the request through that browser object to retrieve the dynamically loaded values.


    Case analysis
    Requirement: crawl the news data under the "domestic" (国内) section of NetEase News.

    Analysis: when you click the "domestic" link and enter the corresponding page, you will find that the news shown there is loaded dynamically. Requesting the URL directly from the program will not return that dynamically loaded news data.

    So we need Selenium to instantiate a browser object and send the URL request through that object to obtain the dynamically loaded news data.


    How Selenium works inside Scrapy
    After the engine submits the request for the "domestic" section URL to the downloader, the downloader downloads the page data, wraps it into a response, and hands it back to the engine, which forwards the response to the Spiders.

    The page data stored in the response object the Spiders receive does not contain the dynamically loaded news. To obtain it, we intercept, in the downloader middleware, the response the downloader hands to the engine, tamper with the page data stored inside so that it carries the dynamically loaded news, and finally pass the tampered response on to the Spiders for parsing.


    Workflow for using Selenium in Scrapy (a spider sketch follows the list)
    1. Override the spider's constructor and instantiate a Selenium browser object in it (the browser object only needs to be instantiated once).
    2. Override the spider's closed(self, spider) method and close the browser object inside it. This method is called when the spider finishes.
    3. Override the downloader middleware's process_response method so that it intercepts the response object and tampers with the page data stored in the response.
    4. Enable the downloader middleware in the settings file.
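
    A minimal sketch of such a spider, assuming chromedriver is available on the PATH (the spider name, start URL and urls attribute are illustrative; the full case is linked below):

    import scrapy
    from selenium import webdriver

    class NewsSpider(scrapy.Spider):
        name = 'news'
        start_urls = ['https://news.163.com/']
        urls = []  # filled with the section URLs whose pages are loaded dynamically

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.bro = webdriver.Chrome()  # instantiated once; used by the middleware as spider.bro

        def closed(self, spider):
            self.bro.quit()  # called once when the spider finishes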

    Example: https://www.cnblogs.com/sxy-blog/p/13216168.html



