• 10 Distributed crawling based on scrapy-redis


    1. Installation

      pip install scrapy_redis

    2. Spider files

      scrapy-redis provides two spider classes: RedisSpider and RedisCrawlSpider.

      The first, RedisSpider, reads its start URLs from a Redis list:

    from scrapy_redis.spiders import RedisSpider
    
    
    class MySpider(RedisSpider):
        """Spider that reads urls from redis queue (myspider:start_urls)."""
        name = 'myspider_redis'
        redis_key = 'myspider:start_urls'
    
        def __init__(self, *args, **kwargs):
            # Dynamically define the allowed domains list. Wrap in list():
            # on Python 3, filter() returns a one-shot iterator, not a list.
            domain = kwargs.pop('domain', '')
            self.allowed_domains = list(filter(None, domain.split(',')))
            super().__init__(*args, **kwargs)
    
        def parse(self, response):
            return {
                'name': response.css('title::text').extract_first(),
                'url': response.url,
            }

      The second, RedisCrawlSpider, additionally follows links using CrawlSpider-style rules:

    from scrapy.spiders import Rule
    from scrapy.linkextractors import LinkExtractor
    
    from scrapy_redis.spiders import RedisCrawlSpider
    
    
    class MyCrawler(RedisCrawlSpider):
        """Spider that reads urls from redis queue (myspider:start_urls)."""
        name = 'mycrawler_redis'
        redis_key = 'mycrawler:start_urls'
    
        rules = (
            # follow all links
            Rule(LinkExtractor(), callback='parse_page', follow=True),
        )
    
        def __init__(self, *args, **kwargs):
            # Dynamically define the allowed domains list. Wrap in list():
            # on Python 3, filter() returns a one-shot iterator, not a list.
            domain = kwargs.pop('domain', '')
            self.allowed_domains = list(filter(None, domain.split(',')))
            super().__init__(*args, **kwargs)
    
        def parse_page(self, response):
            return {
                'name': response.css('title::text').extract_first(),
                'url': response.url,
            }
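
      Both spiders take an optional domain argument that limits the crawl to the listed domains. It can be supplied through Scrapy's -a flag; a minimal sketch, assuming the spider file is saved as myspider.py:

    scrapy runspider myspider.py -a domain=example.com,example.org

      If no domain argument is given, allowed_domains stays empty and no offsite filtering is applied.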

    3. settings configuration

    # Use the item pipeline provided by scrapy-redis (stores scraped items in Redis)
    ITEM_PIPELINES = {
        'scrapy_redis.pipelines.RedisPipeline': 400,
    }

    # Use the duplicate filter provided by scrapy-redis
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

    # Use the scheduler provided by scrapy-redis
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"

    # Allow pausing and resuming: keep the request queue and the
    # dupefilter set in Redis when the spider closes
    SCHEDULER_PERSIST = True

    REDIS_HOST = 'IP address of the Redis server'
    REDIS_PORT = 6379
    REDIS_ENCODING = 'utf-8'
    REDIS_PARAMS = {'password': '123456'}

    4. Start redis-server and redis-cli

    5. Start the distributed crawl with scrapy runspider myspider.py; run the same command on every node, and all instances share the Redis queue

    6. Seed the scheduler by pushing a start URL onto the spider's key: lpush redis_key url
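
      For example, with the RedisSpider above (redis_key = 'myspider:start_urls') and Redis running locally, the seed can be pushed from redis-cli; http://example.com/ is only a placeholder URL:

    lpush myspider:start_urls http://example.com/

      The same seed can also be pushed from Python with the redis-py client; a minimal sketch, assuming redis-py is installed (pip install redis) and the connection details match the settings above:

    import redis

    # Connection parameters must match REDIS_HOST / REDIS_PORT / REDIS_PARAMS in settings.
    r = redis.Redis(host='127.0.0.1', port=6379, password='123456')

    # The key must match the spider's redis_key attribute.
    r.lpush('myspider:start_urls', 'http://example.com/')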

      
