Python Crawler, Part 7: Scrapy


    Scrapy

    Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, from data mining and information processing to archiving historical data.
    It was originally designed for page scraping (more precisely, web scraping), but it can also fetch data returned by APIs (such as Amazon Associates Web Services) or act as a general-purpose web crawler. Scrapy is broadly useful for data mining, monitoring, and automated testing.

    Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture wires the components below around the engine.

    Scrapy mainly consists of the following components:

      • Engine (Scrapy)
        Handles the data flow of the whole system and triggers events (the framework core).
      • Scheduler
        Accepts requests from the engine, pushes them onto a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses to crawl): it decides which URL is fetched next and removes duplicate URLs.
      • Downloader
        Downloads page content and hands it back to the spiders. (The downloader is built on Twisted's efficient asynchronous model.)
      • Spiders
        Spiders do the main work: they extract the information you need from specific pages, the so-called items. You can also extract links from a page so that Scrapy keeps crawling the next one.
      • Item Pipeline
        Processes the items the spiders extract. Its main jobs are persisting items, validating them, and stripping unwanted data. Once a page has been parsed by a spider, its items are sent to the pipeline and pass through the stages in a defined order.
      • Downloader Middlewares
        A hook framework between the engine and the downloader that processes the requests and responses passing between them.
      • Spider Middlewares
        A hook framework between the engine and the spiders that processes the spiders' response input and request output.
      • Scheduler Middlewares
        Middleware between the engine and the scheduler that processes the requests and responses sent from the engine to the scheduler.

    The Scrapy run loop is roughly:

    1. The engine takes a URL from the scheduler for the next crawl.
    2. The engine wraps the URL in a Request and passes it to the downloader.
    3. The downloader fetches the resource and wraps it in a Response.
    4. A spider parses the Response.
    5. If items are parsed out, they go to the item pipeline for further processing.
    6. If URLs are parsed out, they go to the scheduler to await crawling.

    I. Installation

    Linux:

    pip3 install scrapy

    Windows:

          a. pip3 install wheel
          b. Download the Twisted wheel from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
          c. cd into the download directory and run: pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl
          d. pip3 install scrapy
          e. Download and install pywin32: https://sourceforge.net/projects/pywin32/files/

    II. Basic Usage

    1. Creating a project

    scrapy startproject <project_name>
        - creates a project in the current directory (similar to Django)
    
    scrapy genspider [-t template] <name> <domain>
        - creates a spider inside the project
        e.g.: scrapy genspider -t basic oldboy oldboy.com
              scrapy genspider -t xmlfeed autohome autohome.com.cn
    
    List the available templates:  scrapy genspider -l
    Show a template's content:     scrapy genspider -d <template_name>
    
    scrapy list
        - lists the project's spiders
    
    scrapy crawl <spider_name>
        - runs a single spider

    Example:

    Create the project
    shuais-MacBook-Pro:~ dandyzhang$ scrapy startproject scrapy_test
    
    New Scrapy project 'scrapy_test', using template directory '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scrapy/templates/project', created in:
        /Users/dandyzhang/scrapy_test
    
    You can start your first spider with:
        cd scrapy_test
        scrapy genspider example example.com
    
    
    Enter the new project
    shuais-MacBook-Pro:~ dandyzhang$ cd scrapy_test/
    
    Create spider #1
    shuais-MacBook-Pro:scrapy_test dandyzhang$ scrapy genspider chouti chouti.com
    
    Created spider 'chouti' using template 'basic' in module:
      scrapy_test.spiders.chouti
    
    Create spider #2
    shuais-MacBook-Pro:scrapy_test dandyzhang$ scrapy genspider cnblogs cnblogs.com
    Created spider 'cnblogs' using template 'basic' in module:
      scrapy_test.spiders.cnblogs

    2. Project structure and spider basics

    The commands above created a complete project, laid out roughly like this (the standard scrapy project template):
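
    scrapy_test/
    ├── scrapy.cfg
    └── scrapy_test/
        ├── __init__.py
        ├── items.py
        ├── pipelines.py
        ├── settings.py
        └── spiders/
            ├── __init__.py
            ├── chouti.py
            └── cnblogs.py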

    File descriptions:

    • scrapy.cfg   the project's main configuration. (The real crawler settings live in settings.py.)
    • items.py     data-storage templates for structuring data, like Django's Model
    • pipelines    data-processing behavior, e.g. persisting the structured data
    • settings.py  configuration such as recursion depth, concurrency, download delay, etc.
    • spiders      the spiders directory: create files here and write the crawl rules

    Note: spider files are usually named after the target site's domain.

    The two spiders created by the commands above both live in the spiders folder. Taking chouti as the example, let's write the first spider:

    import scrapy
    
    
    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'  # the spider name used when invoking scrapy from outside
        allowed_domains = ['chouti.com']  # allowed domains
        start_urls = ['http://dig.chouti.com/']  # start URLs
    
        def parse(self, response):  # callback invoked once the start URL has been fetched
            print(response.text)  # response is the returned result

    Run it and check the output:

    Windows users may run into an encoding problem:

    import sys, io
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

    3. A first real crawl

    We want to grab every hot-list title on chouti: starting from the content-list div, take the share-title attribute of the part2 element inside each item.

    import scrapy
    from scrapy.selector import Selector
    
    
    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'
        allowed_domains = ['chouti.com']
        start_urls = ['http://dig.chouti.com/']
    
        def parse(self, response):
            """
            1.获取想要的内容
            2.如果分页,继续下载内容
    
            :param response:
            :return:
            """
            # 获取当前页的内容
            item_list = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')
            # /子标签
            # //起始位置时,是在全局进行查找;非起始位置是在当前标签的子子孙孙内部找
            # ./当前对象下面找
    
            # 获取index为0的对象中的第一个满足条件的文本
            # obj = item_list[0].xpath('./div[@class="news-content"]//div[@class="part2"]/@share-title').extract_first()
            obj_list = item_list.xpath('./div[@class="news-content"]//div[@class="part2"]/@share-title').extract()
            print(obj_list)  # 获取的结果是列表

    If you want a tag's text content rather than an attribute:

    obj = item_list[0].xpath('./div[@class="news-content"]//div[@class="show-content"]/text()').extract()

    Run the command:

    shuais-MacBook-Pro:scrapy_test dandyzhang$ scrapy crawl chouti --nolog

    Result: the list of share-titles is printed.

    Now, what if the paginated pages need crawling too?

    First, pull out the pagination URLs:

    import scrapy
    from scrapy.selector import Selector, HtmlXPathSelector
    from scrapy.http import Request
    
    
    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'
        allowed_domains = ['chouti.com']
        start_urls = ['http://dig.chouti.com/']
    
        def parse(self, response):
            """
            1. Grab the content we want.
            2. If there is pagination, keep downloading.
    
            :param response:
            :return:
            """
            url_list = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/@href').extract()
            print(url_list)

    Output:

    shuais-MacBook-Pro:scrapy_test dandyzhang$ scrapy crawl chouti --nolog
    ['/all/hot/recent/2', '/all/hot/recent/3', '/all/hot/recent/4', '/all/hot/recent/5', '/all/hot/recent/6', '/all/hot/recent/7', '/all/hot/recent/8', '/all/hot/recent/9', '/all/hot/recent/10', '/all/hot/recent/2']

    The URLs are relative, so join them first and then crawl:

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.selector import Selector, HtmlXPathSelector
    from scrapy.http import Request  # Request is imported so we can yield follow-up requests
    
    
    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'
        allowed_domains = ['chouti.com']
        start_urls = ['http://dig.chouti.com/']
    
        def parse(self, response):
            """
            1. Grab the content we want.
            2. If there is pagination, keep downloading.
    
            :param response:
            :return:
            """
            item_list = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')
    
            obj_list = item_list.xpath('./div[@class="news-content"]//div[@class="part2"]/@share-title').extract()
            print(obj_list)
    
            url_list = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/@href').extract()
            for url in url_list:
                url = 'http://dig.chouti.com' + url
                yield Request(url=url)  # hand the follow-up request back to the engine
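
    As a side note, instead of concatenating strings you can let Scrapy resolve relative URLs: Response.urljoin builds an absolute URL against the page's own address. A small sketch of the same loop (to sit inside parse, with Request imported as above):

    url_list = response.xpath('//div[@id="dig_lcpage"]//a/@href').extract()
    for url in url_list:
        # urljoin resolves '/all/hot/recent/2' against http://dig.chouti.com/
        yield Request(url=response.urljoin(url))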

    The drill-down depth can be set in the settings file:

    DEPTH_LIMIT = 2
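
    Each response also records its crawl depth in meta (set by the built-in DepthMiddleware), which is a handy way to verify the limit; a quick sketch inside parse:

    def parse(self, response):
        # depth is 0 for the start page, 1 for the first layer of pagination, ...
        print(response.url, response.meta.get('depth', 0))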

    Running the crawl now prints several of these title lists, one per page.

    a. Request is a class that wraps a user request; yielding one from a callback means "keep visiting".
    
    b. HtmlXpathSelector structures HTML and provides selector functionality.

    4. Selectors

    from scrapy.selector import Selector, HtmlXPathSelector  # Selector is the current API; HtmlXPathSelector is legacy and will eventually be removed
    from scrapy.http import HtmlResponse
    
    html = """<!DOCTYPE html>
    <html>
        <head lang="en">
            <meta charset="UTF-8">
            <title></title>
        </head>
        <body>
            <ul>
                <li class="item-"><a id='i1' href="link.html">first item</a></li>
                <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
                <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
            </ul>
            <div><a href="llink2.html">second item</a></div>
        </body>
    </html>
    """
    response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
    # hxs = HtmlXPathSelector(response)  # selector object
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a')  # all <a> tags in the document
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[2]')  # the <a> tag at index 2
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@id]')  # all <a> tags that have an id attribute
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@id="i1"]')  # all <a> tags with id="i1"
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')  # all <a> tags with href="link.html" and id="i1"
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')  # all <a> tags whose href contains "link"
    # print(hxs)
    # hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')  # all <a> tags whose href starts with "link"
    # print(hxs)
    # hxs = Selector(response=response).xpath(r'//a[re:test(@id, "i\d+")]')  # regex: all <a> tags whose id is "i" plus digits
    # print(hxs)
    # hxs = Selector(response=response).xpath(r'//a[re:test(@id, "i\d+")]/text()').extract()  # regex: the text inside those tags
    # print(hxs)
    # hxs = Selector(response=response).xpath(r'//a[re:test(@id, "i\d+")]/@href').extract()  # regex: the href of those tags
    # print(hxs)
    # hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
    # print(hxs)
    # hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
    # print(hxs)
    
    # ul_list = Selector(response=response).xpath('//body/ul/li')
    # for item in ul_list:
    #     v = item.xpath('./a/span')
    #     # or
    #     # v = item.xpath('a/span')
    #     # or
    #     # v = item.xpath('*/a/span')
    #     print(v)
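
    Scrapy selectors also accept CSS expressions; a few rough equivalents of the XPath queries above, using the same response object:

    # all <a> tags
    Selector(response=response).css('a').extract()
    # the <a> tag with id "i1"
    Selector(response=response).css('a#i1').extract()
    # href attribute values and text of all <a> tags
    Selector(response=response).css('a::attr(href)').extract()
    Selector(response=response).css('a::text').extract()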

    Upvoting on chouti:

    import scrapy
    from scrapy.selector import HtmlXPathSelector
    from scrapy.http.request import Request
    from scrapy.http.cookies import CookieJar
    from scrapy import FormRequest
    
    
    class ChouTiSpider(scrapy.Spider):
        # the spider name, used to launch the crawl command
        name = "chouti"
        # allowed domains
        allowed_domains = ["chouti.com"]
    
        cookie_dict = {}
        has_request_set = {}  # requests that have already been sent
    
        def start_requests(self):  # Spider subclasses begin by running start_requests
            url = 'http://dig.chouti.com/'
            # return [Request(url=url, callback=self.login)]
            yield Request(url=url, callback=self.login)  # fetch the page with an explicit callback; Request's default callback is parse,
            # which is why generated spiders all contain a def parse(self, response) method. Overriding start_requests like this lets you pick the callback.
    
        def login(self, response):
            cookie_jar = CookieJar()
            cookie_jar.extract_cookies(response, response.request)
            for k, v in cookie_jar._cookies.items():
                for i, j in v.items():
                    for m, n in j.items():
                        self.cookie_dict[m] = n.value
    
            req = Request(
                url='http://dig.chouti.com/login',
                method='POST',
                headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
                body='phone=8615131255089&password=pppppppp&oneMonth=1',
                cookies=self.cookie_dict,
                callback=self.check_login  # specify the callback
            )
            yield req
    
        def check_login(self, response):
            req = Request(
                url='http://dig.chouti.com/',
                method='GET',
                callback=self.show,  # set the callback
                cookies=self.cookie_dict,
                dont_filter=True  # do not let the dedup filter drop this request
            )
            yield req
    
        def show(self, response):
            # print(response)
            hxs = HtmlXPathSelector(response)  # build a selector over the response
            news_list = hxs.select('//div[@id="content-list"]/div[@class="item"]')
            for new in news_list:
                # temp = new.xpath('div/div[@class="part2"]/@share-linkid').extract()
                link_id = new.xpath('*/div[@class="part2"]/@share-linkid').extract_first()  # get the link id
                yield Request(  # upvote
                    url='http://dig.chouti.com/link/vote?linksId=%s' %(link_id,),
                    method='POST',
                    cookies=self.cookie_dict,
                    callback=self.do_favor
                )
            # collect the pagination URLs
            page_list = hxs.select(r'//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
            for page in page_list:
    
                page_url = 'http://dig.chouti.com%s' % page
                import hashlib
                hash = hashlib.md5()
                hash.update(bytes(page_url,encoding='utf-8'))
                key = hash.hexdigest()
                if key in self.has_request_set:  # the hashed URL was already requested, skip it
                    pass
                else:  # not requested yet, send it
                    self.has_request_set[key] = page_url
                    yield Request(
                        url=page_url,
                        method='GET',
                        callback=self.show
                    )
    
        def do_favor(self, response):
            print(response.text)  # print the response returned after upvoting

    Handling cookies:

    import scrapy
    from scrapy.http.response.html import HtmlResponse
    from scrapy.http import Request
    from scrapy.http.cookies import CookieJar
    
    
    class ChoutiSpider(scrapy.Spider):
        name = "chouti"
        allowed_domains = ["chouti.com"]
        start_urls = (
            'http://www.chouti.com/',
        )
    
        def start_requests(self):
            url = 'http://dig.chouti.com/'
            yield Request(url=url, callback=self.login, meta={'cookiejar': True})  # with cookiejar set in meta, cookies are handled automatically
    
        def login(self, response):
            print(response.headers.getlist('Set-Cookie'))
            req = Request(
                url='http://dig.chouti.com/login',
                method='POST',
                headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
                body='phone=8613121758648&password=woshiniba&oneMonth=1',
                callback=self.check_login,
                meta={'cookiejar': True}
            )
            yield req
    
        def check_login(self, response):
            print(response.text)
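
    Note that the cookiejar meta key is handled by the built-in CookiesMiddleware, which is controlled by these settings (cookies are enabled by default):

    # settings.py
    COOKIES_ENABLED = True  # enable cookie support
    COOKIES_DEBUG = True    # log Cookie / Set-Cookie headers while debugging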

    Note: set DEPTH_LIMIT = 1 in settings.py to bound the "recursion" depth.

    A quick walkthrough of the code above, starting with the basic flow:

    A freshly created spider inherits from the Spider class, whose start_requests method is where execution begins: if start_urls is non-empty, each URL there is fetched, essentially via yield Request(url, dont_filter=True), as in the sketch below. That also explains why continuing into the pagination is simply yield Request(url). But if we never call parse, why does it run? The initial Request has a default callback=parse, which makes the whole flow of a generated spider clear.
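
    For reference, the default Spider.start_requests looks roughly like this (a simplified sketch of Scrapy's source; details vary by version):

    class Spider(object):
        def start_requests(self):
            for url in self.start_urls:
                # start URLs bypass the dedup filter; Request's callback defaults to self.parse
                yield Request(url, dont_filter=True)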

    Now the upvote spider. Since it inherits Spider and start_requests runs first, we can override that method: set the URL inside it (and drop the class-level start_urls), then call Request with an explicit callback, so the callbacks chain the crawl into a series. One more point about yield: it makes the method a generator, and inside Scrapy the spider's yield Request calls are only part of the story. The other part also goes through yield, but for persistence, i.e. processing and saving the scraped data. That is covered next.

    5. Structured data (items and pipelines)

    The earlier examples did only light processing, so everything happened directly in parse. For heavier processing, use Scrapy's items to structure the data and hand it all to the pipelines.

    Back in the original parse code, grab the hot-list titles and links:

    chouti.py

    import scrapy
    from scrapy.selector import HtmlXPathSelector, Selector
    from ..items import ScrapyTestItem
    
    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'
        allowed_domains = ['chouti.com']
        start_urls = ['http://dig.chouti.com/']
    
        def parse(self, response):
            item_list = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')
    
            for item in item_list:
                t = item.xpath('./div[@class="news-content"]//div[@class="part1"]/a/text()').extract()
                h = item.xpath('./div[@class="news-content"]//div[@class="part1"]/a/@href').extract()
                item_obj = ScrapyTestItem(title=t, href=h)   # build the Item
                yield item_obj  # this goes to the other scheduler: the persistence pipeline

    items.py

    import scrapy
    
    
    class ScrapyTestItem(scrapy.Item):
        # define the fields for your item here, i.e. the fields to scrape and store:
        title = scrapy.Field()
        href = scrapy.Field()

    pipelines.py

    class ScrapyTestPipeline(object):
        def process_item(self, item, spider):
            print(item, spider)
            return item

    Note that, as in Django, the pipeline has to be registered:

    In the settings file, find the block below and uncomment it. The 300 is a priority; we will experiment with that number shortly.

    ITEM_PIPELINES = {
       'scrapy_test.pipelines.ScrapyTestPipeline': 300,
    }

    Now run the spider:

    (The XPath was a bit sloppy and caught two hrefs; don't mind that detail.)

    So this is yield's second role: yielding an Item object dispatches it to the pipelines for persistence. Above we only printed the result; item holds the fields, and spider is the spider instance doing the crawling.

    Different processing stages can thus be written straight into pipelines:

    class ScrapyTestPipeline1(object):
        def process_item(self, item, spider):
            print('step 1: print to screen')
            return item
    
    
    class ScrapyTestPipeline2(object):
        def process_item(self, item, spider):
            print('step 2: save to file')
            return item
    
    
    class ScrapyTestPipeline3(object):
        def process_item(self, item, spider):
            print('step 3: save to database')
            return item

    Register them:

    ITEM_PIPELINES = {
        'scrapy_test.pipelines.ScrapyTestPipeline1': 100,
        'scrapy_test.pipelines.ScrapyTestPipeline2': 200,
        'scrapy_test.pipelines.ScrapyTestPipeline3': 300,
    }

    Run it and note the order of the output.

    If the step-3 class were not registered, only step 1 and step 2 would run.

    And if you want processing to stop at a particular pipeline?

    from scrapy.exceptions import DropItem  # import DropItem
    
    class ScrapyTestPipeline1(object):
        def process_item(self, item, spider):
            print('step 1: print to screen')
            raise DropItem()
    
    
    class ScrapyTestPipeline2(object):
        def process_item(self, item, spider):
            print('step 2: save to file')
            return item
    
    
    class ScrapyTestPipeline3(object):
        def process_item(self, item, spider):
            print('step 3: save to database')
            return item

    So what is the spider parameter for?

    Say items from the spider named chouti should not be processed any further:

    from scrapy.exceptions import DropItem
    
    
    class ScrapyTestPipeline1(object):
        def process_item(self, item, spider):
            print('step 1: print to screen')
            if spider.name == 'chouti':
                raise DropItem()
            return item
    
    class ScrapyTestPipeline2(object):
        def process_item(self, item, spider):
            print('step 2: save to file')
            return item
    
    
    class ScrapyTestPipeline3(object):
        def process_item(self, item, spider):
            print('step 3: save to database')
            return item

    More on pipelines:

    To write data to a file, the first approach that comes to mind is:

    class ScrapyTestPipeline(object):
        def process_item(self, item, spider):
            with open('***', 'a+') as f:
                f.write('***')
            print('step 2: save to file')
            return item

    But this reopens the file for every item in a single crawl, wasting IO.

    A better approach:

    from scrapy.exceptions import DropItem
    
    class CustomPipeline(object):
        def __init__(self, v):  # v is the val returned from the classmethod below
            self.value = v
            print(self.value)
    
        def process_item(self, item, spider):
            # process and persist
    
            # returning item lets later pipelines keep processing it
            print('**** processing ****')
            return item
    
            # raising DropItem instead discards the item so later pipelines never see it
            # raise DropItem()
    
    
        @classmethod
        def from_crawler(cls, crawler):
            """
            Called at initialization to create the pipeline object.
            :param crawler: 
            :return: 
            """
            val = crawler.settings.get('MYPATH')  # the classmethod reads a value from the settings
            print(val)
            return cls(val)
    
        def open_spider(self, spider):
            """
            Called when the spider starts.
            :param spider: 
            :return: 
            """
            print('000000')
    
        def close_spider(self, spider):
            """
            Called when the spider closes.
            :param spider: 
            :return: 
            """
            print('111111')

    Then just set the path in settings:

    MYPATH = '***path***'

    Settings keys must be ALL CAPS; a lowercase key fails the lookup and nothing is fetched.

    Run it and watch the hook output.

    So from now on, the file path can come from settings via from_crawler, the file can be opened once in open_spider, appended to in process_item through the retained handle, and closed once in close_spider, avoiding the repeated opens; a minimal sketch follows.
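
    (FilePipeline is a made-up name here, and MYPATH our hypothetical settings key.)

    import json
    
    
    class FilePipeline(object):
        def __init__(self, path):
            self.path = path
            self.f = None
    
        @classmethod
        def from_crawler(cls, crawler):
            return cls(crawler.settings.get('MYPATH'))  # file path from settings
    
        def open_spider(self, spider):
            self.f = open(self.path, 'a', encoding='utf-8')  # open once when the crawl starts
    
        def process_item(self, item, spider):
            self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
            return item
    
        def close_spider(self, spider):
            self.f.close()  # close once when the crawl ends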

    6. Middleware

    The Django posts already covered the general flow of middleware; the core idea in Scrapy is the same.
    
    Where Django's middleware wraps the view in layers, Scrapy's middleware wraps the engine in the same way:

    Spider middleware

    class SpiderMiddleware(object):
    
        def process_spider_input(self,response, spider):
            """
            Called once the download finishes; the response is then handed to parse (with start_urls, parse is the default callback).
            :param response: 
            :param spider: 
            :return: 
            """
            pass
    
        def process_spider_output(self,response, result, spider):
            """
            Called with the results once the spider has processed the response.
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request or Item objects
            """
            return result
    
        def process_spider_exception(self,response, exception, spider):
            """
            Called on exceptions.
            :param response:
            :param exception:
            :param spider:
            :return: None to pass the exception on to later middleware; or an iterable containing Response or Item objects, handed to the scheduler or pipelines
            """
            return None
    
    
        def process_start_requests(self,start_requests, spider):
            """
            Called when the spider starts.
            :param start_requests:
            :param spider:
            :return: an iterable of Request objects, handed to the scheduler for download or parsing
            """
            return start_requests

    At startup the engine pulls the start_requests built from the spider's start_urls and puts them into the scheduler; the downloader then takes tasks from the scheduler and executes them.

    Downloader middleware

    class DownMiddleware1(object):
        def process_request(self, request, spider):
            """
            Called for each request that needs downloading, through every downloader middleware's process_request.
            :param request: 
            :param spider: 
            :return:  
                None: continue through the remaining middleware to the download
                a Response object: stop running process_request and start running process_response
                a Request object: stop the middleware chain; the Request goes back to the scheduler
                raise IgnoreRequest: stop running process_request and start running process_exception
            """
            pass
    
    
    
        def process_response(self, request, response, spider):
            """
            Called with the response returned from the download.
            :param request:
            :param response:
            :param spider:
            :return: 
                a Response object: handed on to the other middlewares' process_response
                a Request object: stop the middleware chain; the request is rescheduled for download
                raise IgnoreRequest: Request.errback is called
            """
            print('response1')
            return response
    
        def process_exception(self, request, exception, spider):
            """
            Called when the download handler or a downloader middleware's process_request() raises an exception.
            :param request:
            :param exception:
            :param spider:
            :return: 
                None: pass the exception on to later middleware
                a Response object: stop later process_exception methods
                a Request object: stop the middleware chain; the request will be rescheduled for download
            """
            return None
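
    To activate middleware, register the classes in settings; a sketch, assuming they live in scrapy_test/middlewares.py (the priority numbers are arbitrary):

    # settings.py
    SPIDER_MIDDLEWARES = {
        'scrapy_test.middlewares.SpiderMiddleware': 543,
    }
    
    DOWNLOADER_MIDDLEWARES = {
        'scrapy_test.middlewares.DownMiddleware1': 100,
    }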

    7. Custom commands

    a. Create a directory at the same level as spiders, e.g. commands

    b. Inside it, create crawlall.py (the file name becomes the command name); creating several files creates several commands

    from scrapy.commands import ScrapyCommand
    from scrapy.utils.project import get_project_settings
    
    
    class Command(ScrapyCommand):
    
        requires_project = True
    
        def syntax(self):
            return '[options]'
    
        def short_desc(self):
            return 'Runs all of the spiders'
    
        def run(self, args, opts):
            spider_list = self.crawler_process.spiders.list()  # collect every spider under the spiders folder
            for name in spider_list:
                self.crawler_process.crawl(name, **opts.__dict__)  # schedule a crawl for each spider
            self.crawler_process.start()  # run them all concurrently

    c. In settings.py add: COMMANDS_MODULE = '<project_name>.<directory_name>'

    d. From the project directory run: scrapy crawlall

    PS: when reading scrapy's source, run is a good place to start.

    Running a single spider from a script:

    import sys
    from scrapy.cmdline import execute
    
    if __name__ == '__main__':
        execute(["scrapy", "crawl", "github", "--nolog"])

    8. Custom extensions

    A custom extension uses signals to hook a chosen operation in at a given point (much like Django's signals).

    from scrapy import signals
    
    
    class MyExtension(object):
        def __init__(self, value):
            self.value = value
    
        @classmethod
        def from_crawler(cls, crawler):
            val = crawler.settings.get('MMMM')
            ext = cls(val)
    
            crawler.signals.connect(ext.openn, signal=signals.spider_opened)
            crawler.signals.connect(ext.closee, signal=signals.spider_closed)
    
            return ext
    
        def openn(self, spider):
            print('open')
    
        def closee(self, spider):
            print('close')
    """
    Scrapy signals
    
    These signals are documented in docs/topics/signals.rst. Please don't add new
    signals here without documenting them there.
    """
    
    engine_started = object()
    engine_stopped = object()
    spider_opened = object()
    spider_idle = object()
    spider_closed = object()
    spider_error = object()
    request_scheduled = object()
    request_dropped = object()
    response_received = object()
    response_downloaded = object()
    item_scraped = object()
    item_dropped = object()
    
    # for backwards compatibility
    stats_spider_opened = spider_opened
    stats_spider_closing = spider_closed
    stats_spider_closed = spider_closed
    
    item_passed = item_scraped
    
    request_received = request_scheduled

    As with pipelines, the extension class must be registered in settings, for example:
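
    # settings.py  (assuming the class lives in scrapy_test/extensions.py)
    EXTENSIONS = {
        'scrapy_test.extensions.MyExtension': 500,
    }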

    9. Avoiding duplicate visits

    By default scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter; the related settings are:

    DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
    DUPEFILTER_DEBUG = False
    JOBDIR = "path for the visited-requests log, e.g. /root/"  # the final path is /root/requests.seen
    
    A custom dedup class:
    
    class RepeatUrl:
        def __init__(self):
            self.visited_url = set()
    
        @classmethod
        def from_settings(cls, settings):
            """
            Called at initialization.
            :param settings: 
            :return: 
            """
            return cls()
    
        def request_seen(self, request):
            """
            Check whether this request has been visited before.
            :param request: 
            :return: True if already visited; False otherwise
            """
            if request.url in self.visited_url:
                return True
            self.visited_url.add(request.url)
            return False
    
        def open(self):
            """
            Called when the crawl starts.
            :return: 
            """
            print('open replication')
    
        def close(self, reason):
            """
            Called when the crawl ends.
            :param reason: 
            :return: 
            """
            print('close replication')
    
        def log(self, request, spider):
            """
            Log a duplicate request.
            :param request: 
            :param spider: 
            :return: 
            """
            print('repeat', request.url)
    
    Register the custom filter in settings:
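
    # settings.py  (assuming the class lives in scrapy_test/duplication.py)
    DUPEFILTER_CLASS = 'scrapy_test.duplication.RepeatUrl'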

    10. Other settings

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for step8_king project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    
    # 1. bot name
    BOT_NAME = 'scrapy_test'
    
    # 2. spider module paths
    SPIDER_MODULES = ['scrapy_test.spiders']
    NEWSPIDER_MODULE = 'scrapy_test.spiders'
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    # 3. default client User-Agent header; can also be set per Request
    # USER_AGENT = 'scrapy_test (+http://www.yourdomain.com)'
    
    # Obey robots.txt rules
    # 4. whether to obey robots.txt rules
    # ROBOTSTXT_OBEY = False
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    # 5. max concurrent requests
    # CONCURRENT_REQUESTS = 4
    
    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    # 6. download delay in seconds (to dodge anti-crawler measures; here every download waits 2 seconds)
    # DOWNLOAD_DELAY = 2
    
    
    # The download delay setting will honor only one of:
    # 7. max concurrent requests per domain; the download delay is also applied per domain
    # CONCURRENT_REQUESTS_PER_DOMAIN = 2
    # max concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the delay is applied per IP
    # CONCURRENT_REQUESTS_PER_IP = 3
    
    # Disable cookies (enabled by default)
    # 8. cookie support; cookies are handled via cookiejar
    # COOKIES_ENABLED = True
    # COOKIES_DEBUG = True
    
    # Disable Telnet Console (enabled by default)
    # 9. the Telnet console inspects and controls the running crawler
    #    connect with telnet <ip> <port> and issue commands
    # TELNETCONSOLE_ENABLED = True
    # TELNETCONSOLE_HOST = '127.0.0.1'
    # TELNETCONSOLE_PORT = [6023,]
    
    
    # 10. default request headers
    # Override the default request headers:
    # DEFAULT_REQUEST_HEADERS = {
    #     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #     'Accept-Language': 'en',
    # }
    
    
    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    # 11. item pipelines for processing scraped items
    # ITEM_PIPELINES = {
    #    'scrapy_test.pipelines.JsonPipeline': 700,
    #    'scrapy_test.pipelines.FilePipeline': 500,
    # }
    
    
    
    # 12. custom extensions, invoked via signals
    # Enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    # EXTENSIONS = {
    #     # 'scrapy_test.extensions.MyExtension': 500,
    # }
    
    
    # 13. max crawl depth; the current depth is available via meta; 0 means unlimited
    # DEPTH_LIMIT = 3
    
    # 14. crawl order: 0 = depth-first LIFO (default); 1 = breadth-first FIFO
    
    # last-in first-out, depth-first
    # DEPTH_PRIORITY = 0
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
    # first-in first-out, breadth-first
    
    # DEPTH_PRIORITY = 1
    # SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
    # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'
    
    # 15. scheduler queues
    # SCHEDULER = 'scrapy.core.scheduler.Scheduler'
    # from scrapy.core.scheduler import Scheduler
    
    
    # 16. URL dedup filter
    # DUPEFILTER_CLASS = 'scrapy_test.duplication.RepeatUrl'
    
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
    
    """
    17. AutoThrottle algorithm
        from scrapy.contrib.throttle import AutoThrottle
        Auto-throttling settings:
        1. minimum delay: DOWNLOAD_DELAY
        2. maximum delay: AUTOTHROTTLE_MAX_DELAY
        3. initial download delay: AUTOTHROTTLE_START_DELAY
        4. when a request finishes downloading, take its "latency": the time from connecting until the response headers arrive
        5. used in the computation: AUTOTHROTTLE_TARGET_CONCURRENCY
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0  # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
    """
    
    # enable auto-throttling
    # AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    # initial download delay
    # AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    # maximum download delay
    # AUTOTHROTTLE_MAX_DELAY = 10
    # The average number of requests Scrapy should be sending in parallel to each remote server
    # average number of requests sent in parallel to each remote server
    # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    
    # Enable showing throttling stats for every response received:
    # whether to show throttling stats
    # AUTOTHROTTLE_DEBUG = True
    
    # Enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    
    
    """
    18. HTTP caching
        Caches requests and responses that have already been sent, for later reuse.
        
        from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
        from scrapy.extensions.httpcache import DummyPolicy
        from scrapy.extensions.httpcache import FilesystemCacheStorage
    """
    # whether to enable caching
    # HTTPCACHE_ENABLED = True
    
    # policy: cache every request; subsequent identical requests are served from the cache
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
    # policy: cache according to the HTTP response headers (Cache-Control, Last-Modified, etc.)
    # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"
    
    # cache expiration in seconds
    # HTTPCACHE_EXPIRATION_SECS = 0
    
    # cache directory
    # HTTPCACHE_DIR = 'httpcache'
    
    # HTTP status codes excluded from caching
    # HTTPCACHE_IGNORE_HTTP_CODES = []
    
    # cache storage backend
    # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    
    """
    19. proxies; configured via environment variables or a custom middleware
        from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware
        
        Option 1: use the default middleware; the keys cannot be renamed
            os.environ
            {
                http_proxy:http://root:woshiniba@192.168.11.11:9999/
                https_proxy:http://192.168.11.11:9999/
            }
        Option 2: write a custom downloader middleware
        
        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)
            
        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ') + encoded_user_pass
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
        
        DOWNLOADER_MIDDLEWARES = {
           'step8_king.middlewares.ProxyMiddleware': 500,
        }
        
    """
    
    """
    20. HTTPS access
        There are two cases when accessing HTTPS sites:
        1. the target site uses a trusted certificate (supported by default)
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
            
        2. the target site uses a custom certificate
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy_test.https.MySSLFactory"
            
            # https.py
            from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
            from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)
            
            class MySSLFactory(ScrapyClientContextFactory):
                def getCertificateOptions(self):
                    from OpenSSL import crypto
                    v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                    v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                    return CertificateOptions(
                        privateKey=v1,  # a pKey object
                        certificate=v2,  # an X509 object
                        verify=False,
                        method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                    )
        Other:
            related classes
                scrapy.core.downloader.handlers.http.HttpDownloadHandler
                scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
                scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
            related settings
                DOWNLOADER_HTTPCLIENTFACTORY
                DOWNLOADER_CLIENTCONTEXTFACTORY
    
    """
    
    
    
    """
    21. spider middleware
        class SpiderMiddleware(object):
    
            def process_spider_input(self,response, spider):
                '''
                Called once the download finishes; the response is then handed to parse.
                :param response: 
                :param spider: 
                :return: 
                '''
                pass
        
            def process_spider_output(self,response, result, spider):
                '''
                Called with the results once the spider has processed the response.
                :param response:
                :param result:
                :param spider:
                :return: must return an iterable containing Request or Item objects
                '''
                return result
        
            def process_spider_exception(self,response, exception, spider):
                '''
                Called on exceptions.
                :param response:
                :param exception:
                :param spider:
                :return: None to pass the exception on to later middleware; or an iterable containing Response or Item objects, handed to the scheduler or pipelines
                '''
                return None
        
        
            def process_start_requests(self,start_requests, spider):
                '''
                Called when the spider starts.
                :param start_requests:
                :param spider:
                :return: an iterable of Request objects
                '''
                return start_requests
        
        Built-in spider middleware:
            'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
            'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
            'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
            'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
            'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
    
    """
    # from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    SPIDER_MIDDLEWARES = {
       # 'scrapy_test.middlewares.SpiderMiddleware': 543,
    }
    
    
    """
    22. downloader middleware
        class DownMiddleware1(object):
            def process_request(self, request, spider):
                '''
                Called for each request that needs downloading, through every downloader middleware's process_request.
                :param request:
                :param spider:
                :return:
                    None: continue through the remaining middleware to the download
                    a Response object: stop running process_request and start running process_response
                    a Request object: stop the middleware chain; the Request goes back to the scheduler
                    raise IgnoreRequest: stop running process_request and start running process_exception
                '''
                pass
        
        
        
            def process_response(self, request, response, spider):
                '''
                Called with the response returned from the download.
                :param response:
                :param result:
                :param spider:
                :return:
                    a Response object: handed on to the other middlewares' process_response
                    a Request object: stop the middleware chain; the request is rescheduled for download
                    raise IgnoreRequest: Request.errback is called
                '''
                print('response1')
                return response
        
            def process_exception(self, request, exception, spider):
                '''
                Called when the download handler or a downloader middleware's process_request() raises an exception.
                :param response:
                :param exception:
                :param spider:
                :return:
                    None: pass the exception on to later middleware
                    a Response object: stop later process_exception methods
                    a Request object: stop the middleware chain; the request will be rescheduled for download
                '''
                return None
    
        
        Default downloader middleware:
        {
            'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
            'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
            'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
            'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
            'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
            'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
            'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
            'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
            'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
            'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
            'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
            'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
            'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
            'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
        }
    
    """
    # from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    # DOWNLOADER_MIDDLEWARES = {
    #    'scrapy_test.middlewares.DownMiddleware1': 100,
    #    'scrapy_test.middlewares.DownMiddleware2': 500,
    # }

    11. Simulating the scrapy framework

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    from twisted.web.client import getPage, defer
    from twisted.internet import reactor
    import queue
    
    
    class Response(object):
        def __init__(self, body, request):
            self.body = body
            self.request = request
            self.url = request.url
    
        @property
        def text(self):
            return self.body.decode('utf-8')
    
    
    class Request(object):
        def __init__(self, url, callback=None):
            self.url = url
            self.callback = callback
    
    
    class Scheduler(object):
        def __init__(self, engine):
            self.q = queue.Queue()
            self.engine = engine
    
        def enqueue_request(self, request):
            self.q.put(request)
    
        def next_request(self):
            try:
                req = self.q.get(block=False)
            except Exception as e:
                req = None
    
            return req
    
        def size(self):
            return self.q.qsize()
    
    
    class ExecutionEngine(object):
        def __init__(self):
            self._closewait = None
            self.running = True
            self.start_requests = None
            self.scheduler = Scheduler(self)
    
            self.inprogress = set()
    
        def check_empty(self, response):
            if not self.running:
                self._closewait.callback('......')
    
        def _next_request(self):
            while self.start_requests:
                try:
                    request = next(self.start_requests)
                except StopIteration:
                    self.start_requests = None
                else:
                    self.scheduler.enqueue_request(request)
    
            while len(self.inprogress) < 5 and self.scheduler.size() > 0:  # max concurrency of 5
    
                request = self.scheduler.next_request()
                if not request:
                    break
    
                self.inprogress.add(request)
                d = getPage(bytes(request.url, encoding='utf-8'))
                d.addBoth(self._handle_downloader_output, request)
                d.addBoth(lambda x, req: self.inprogress.remove(req), request)
                d.addBoth(lambda x: self._next_request())
    
            if len(self.inprogress) == 0 and self.scheduler.size() == 0:
                self._closewait.callback(None)
    
        def _handle_downloader_output(self, body, request):
            """
            Take the downloaded body, run the callback, and enqueue any requests the callback yields.
            :param body: 
            :param request: 
            :return: 
            """
            import types
    
            response = Response(body, request)
            func = request.callback or self.spider.parse
            gen = func(response)
            if isinstance(gen, types.GeneratorType):
                for req in gen:
                    self.scheduler.enqueue_request(req)
    
        @defer.inlineCallbacks
        def start(self):
            self._closewait = defer.Deferred()
            yield self._closewait
    
        def open_spider(self, spider, start_requests):
            self.start_requests = start_requests
            self.spider = spider
            reactor.callLater(0, self._next_request)
    
    
    class Crawler(object):
        def __init__(self, spidercls):
            self.spidercls = spidercls
    
            self.spider = None
            self.engine = None
    
        @defer.inlineCallbacks
        def crawl(self):
            self.engine = ExecutionEngine()
            self.spider = self.spidercls()
            start_requests = iter(self.spider.start_requests())
            self.engine.open_spider(self.spider, start_requests)
            yield self.engine.start()
    
    
    class CrawlerProcess(object):
        def __init__(self):
            self._active = set()
            self.crawlers = set()
    
        def crawl(self, spidercls, *args, **kwargs):
            crawler = Crawler(spidercls)
    
            self.crawlers.add(crawler)
            d = crawler.crawl(*args, **kwargs)
            self._active.add(d)
            return d
    
        def start(self):
            dl = defer.DeferredList(self._active)
            dl.addBoth(self._stop_reactor)
            reactor.run()
    
        def _stop_reactor(self, _=None):
            reactor.stop()
    
    
    class Spider(object):
        def start_requests(self):
            for url in self.start_urls:
                yield Request(url)
    
    
    class ChoutiSpider(Spider):
        name = "chouti"
        start_urls = [
            'http://dig.chouti.com/',
        ]
    
        def parse(self, response):
            print(response.text)
    
    
    class CnblogsSpider(Spider):
        name = "cnblogs"
        start_urls = [
            'http://www.cnblogs.com/',
        ]
    
        def parse(self, response):
            print(response.text)
    
    
    if __name__ == '__main__':
    
        spider_cls_list = [ChoutiSpider, CnblogsSpider]
    
        crawler_process = CrawlerProcess()
        for spider_cls in spider_cls_list:
            crawler_process.crawl(spider_cls)
    
        crawler_process.start()

    Reference: http://www.cnblogs.com/wupeiqi/articles/6229292.html
