• Advanced web scraping


    1. Capturing mobile (app) traffic

    Basic Fiddler configuration: Tools -> Options -> HTTPS -> Decrypt HTTPS traffic
    
    Basic Fiddler configuration: Tools -> Options -> Connections -> Allow remote computers to connect
    
    http://<ip of the PC running Fiddler>:<port>/ serves a page where the Fiddler certificate can be downloaded
    
    With the phone on the same network segment as the Fiddler machine, open http://<ip of the Fiddler PC>:8888/ in the phone's browser
    
    Download and install the certificate from that page (and trust it)
    
    Configure the phone's proxy: point it at the ip of the PC running Fiddler and Fiddler's port
    
    Fiddler can then capture the http and https requests issued by the phone
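
    As a quick sanity check from the PC itself (a sketch, assuming Fiddler is listening on its default 127.0.0.1:8888), you can route a request through the proxy and watch it appear in Fiddler's session list:

    import requests

    # Assumes Fiddler listens on the default port 8888 on this machine
    proxies = {
        "http": "http://127.0.0.1:8888",
        "https": "http://127.0.0.1:8888",
    }

    # verify=False because Fiddler's root certificate is usually not in Python's CA bundle
    resp = requests.get("https://www.baidu.com", proxies=proxies, verify=False)
    print(resp.status_code)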
    

    2. The scrapy framework

    A framework is a project template that integrates many features and is highly generic (it can be applied to many different needs).
    

    What functionality does scrapy integrate:

    High-performance data parsing, persistent storage, high-performance downloading, ...
    

    3. Environment setup:

    pip3 install wheel
    
    Download the Twisted wheel from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
    
    From the download directory, run
    pip3 install Twisted-20.3.0-cp37-cp37m-win_amd64.whl
    
    pip3 install pywin32
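
    The steps above are the classic Windows recipe and end just before installing scrapy itself (pip3 install scrapy). On current versions of pip, Twisted ships prebuilt wheels, so in most environments a plain pip install scrapy is sufficient.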
    
    

    4. Basic usage of scrapy

    Create a project: scrapy startproject zbb
    The spider file must be created inside the spiders directory:
    cd zbb
    scrapy genspider first www.baidu.com
    
    import scrapy
    
    class FirstSpider(scrapy.Spider):
        # Name of the spider file: its unique identifier (multiple spider files can be created under the spiders subdirectory)
        name = 'first'
        # Allowed domains
        # allowed_domains = ['www.baidu.com']
        # List of start urls: scrapy automatically sends a request for every url in this list
        start_urls = ['https://www.baidu.com/', 'https://www.sogou.com/']

        # Used for data parsing: parses the response received for each url in start_urls
        def parse(self, response):
            pass
    

    Run the project

    scrapy crawl first
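
    The crawl can also be launched from a Python script instead of the shell (the same pattern the run.py files later in this post use); a minimal sketch:

    from scrapy.cmdline import execute

    execute(["scrapy", "crawl", "first"])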
    

    settings.py

    #Do not obey the robots protocol
    ROBOTSTXT_OBEY = False  
    #UA spoofing
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36'
    #Set the log level
    #(alternative: run with scrapy crawl first --nolog)
    LOG_LEVEL = 'ERROR'
    

    5. Persistent storage

    1. Via terminal command:

    Limitation: only the return value of the parse method can be stored to a local disk file
    Command: scrapy crawl first -o qiubai.csv
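    Note: the output file's extension determines the export format, and only formats with a built-in exporter are supported (json, jsonlines/jl, csv, xml, marshal, pickle).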
    
    import scrapy
    
    
    class FirstSpider(scrapy.Spider):
       name = 'first'
       start_urls = ['https://www.qiushibaike.com/text/']
    
       def parse(self, response):
           div_list = response.xpath('//*[@id="content"]/div/div[2]/div')
           all_data = []
           for div in div_list:
               # The list elements returned by xpath are always Selector objects
               # The data to extract is stored inside that object
               # extract() pulls the value out of the data attribute
               # author = div.xpath("./div[1]/a[2]/h2/text()")[0].extract()
               author = div.xpath("./div[1]/a[2]/h2/text()").extract_first()
               # Calling extract() directly on the list applies it to every element
               con = div.xpath('./a[1]/div/span//text()').extract()
               # Join the list into a single string
               con = ''.join(con)
               dic = {
                   'author': author,
                   'content': con
               }
               all_data.append(dic)
           return all_data
    

    2. Pipeline-based:

    1. Parse the data
    2. Define the corresponding fields in the item class
    3. Store the parsed data in an item-type object (an object of the class in the items file)
    4. Submit the item to the pipeline
    5. Receive the item in the pipeline file's process_item method and persist it
    6. Enable the pipeline in the settings file
    ITEM_PIPELINES = {
       'zbb.pipelines.ZbbPipeline': 300, # 300 is the priority value
    }
    

    item

    import scrapy
    
    
    class ZbbItem(scrapy.Item):
        # define the fields for your item here like:
        author = scrapy.Field()
        con = scrapy.Field()
    

    first.py

    import scrapy
    from zbb.items import ZbbItem
    class FirstSpider(scrapy.Spider):
        name = 'first'
        start_urls = ['https://www.qiushibaike.com/text/']
    
        def parse(self, response):
            div_list = response.xpath('//*[@id="content"]/div/div[2]/div')
            all_data = []
            for div in div_list:
                author = div.xpath("./div[1]/a[2]/h2/text()")[0].extract()
                con = div.xpath('./a[1]/div/span//text()').extract()
                con = ''.join(con)
                #Store the parsed data in the item object
                item = ZbbItem()
                item['author'] = author
                item['con'] = con
                #Submit the item to the pipeline
                yield item
    
    

    pipelines.py

    class ZbbPipeline:
        fp = None
        def open_spider(self, spider):
            print('Spider started......')
            self.fp = open('qiushibaike.txt', 'w', encoding='utf-8')

        # Used to receive the items submitted by the spider file and persist them in any form
        # Parameter item: the received item object
        # This method is called once for each item received
        def process_item(self, item, spider):
            author = item['author']
            con = item['con']
            self.fp.write(author + ':' + con + '\n')
            return item  # the item is passed on to the next pipeline class to be executed

        def close_spider(self, spider):
            print('Spider finished!')
            self.fp.close()
    

    3. Persisting the same data to different platforms

    • Analysis:

      • 1. Each pipeline class in the pipelines file handles one form of persistence
      • 2. The item submitted by the spider is only handed to the pipeline class with the highest priority
      • 3. return item in a pipeline class's process_item passes the item the current pipeline received on to
        the next pipeline class to be executed

      settings configuration

    ITEM_PIPELINES = {
        'zbb.pipelines.ZbbPipeline': 300,  # 300 is the priority value
        'zbb.pipelines.MysqlPL': 301,  # lower values run first
        'zbb.pipelines.RedisPL': 302,
    
    }
    

    pipelines

    import pymysql
    from redis import Redis
    
    class ZbbPipeline:
        fp = None
    
        def open_spider(self, spider):
            print('Spider started......')
            self.fp = open('qiushibaike.txt', 'w', encoding='utf-8')

        # Used to receive the items submitted by the spider file and persist them in any form
        # Parameter item: the received item object
        # This method is called once for each item received
        def process_item(self, item, spider):
            author = item['author']
            con = item['con']
            self.fp.write(author + ':' + con + '\n')
            return item  # the item is passed on to the next pipeline class to be executed

        def close_spider(self, spider):
            print('Spider finished!')
            self.fp.close()
    
    
    class MysqlPL:
        conn = None
        cursor = None
    
        def open_spider(self, spider):
            self.conn = pymysql.Connect(host='127.0.0.1', port=3306, user='root', password='123', db='spider',
                                        charset='utf8')
            print(self.conn)
    
        def process_item(self, item, spider):
            author = item['author']
            con = item['con']
            sql = 'insert into qiubai values ("%s","%s")'%(author, con)
            self.cursor = self.conn.cursor()
            try:
                self.cursor.execute(sql)
                self.conn.commit()
            except Exception as e:
                print(e)
                self.conn.rollback()
            return item
    
        def close_spider(self, spider):
            self.cursor.close()
            self.conn.close()
    
    class RedisPL:
        conn = None
    
        def open_spider(self, spider):
            self.conn = Redis(host='127.0.0.1', port=6379)
            print(self.conn)
    
        def process_item(self, item, spider):
            self.conn.lpush('all_data', item)
            # Note: if writing a dict to redis raises an error, run: pip install -U redis==2.10.6
    

    6. Manual request sending in scrapy (GET)

    • Use case: crawling the page source of multiple page numbers
    • yield scrapy.Request(url,callback)
    import scrapy
    from zbb.items import ZbbItem
    
    
    class FirstSpider(scrapy.Spider):
        name = 'first'
        start_urls = ['https://www.qiushibaike.com/text/']
        # Crawl and parse the data of multiple page numbers
        url = 'https://www.qiushibaike.com/text/page/%d/'  # generic url template
        pageNum = 1
    
        def parse(self, response):
            div_list = response.xpath('//*[@id="content"]/div/div[2]/div')
            all_data = []
            for div in div_list:
                author = div.xpath("./div[1]/a[2]/h2/text()")[0].extract()
                con = div.xpath('./a[1]/div/span//text()').extract()
                con = ''.join(con)
                # Store the parsed data in the item object
                item = ZbbItem()
                item['author'] = author
                item['con'] = con
                # Submit the item to the pipeline
                yield item

            if self.pageNum <= 5:
                self.pageNum += 1
                new_url = format(self.url % self.pageNum)
                # Manually send a GET request
                yield scrapy.Request(new_url, callback=self.parse)
    
    

    7. Manual request sending in scrapy (POST)

    Rarely needed in practice; it is cumbersome

    data = {  # request parameters of the POST request
        'kw':'aaa'
    }
    yield scrapy.FormRequest(url, formdata=data, callback=callback)
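
    A minimal sketch of a complete spider built around this (the URL here is a placeholder, not from the original example):

    import scrapy


    class PostDemoSpider(scrapy.Spider):
        name = 'post_demo'

        def start_requests(self):
            # Hypothetical POST endpoint; replace with the real one
            url = 'https://httpbin.org/post'
            data = {  # request parameters of the POST request
                'kw': 'aaa'
            }
            yield scrapy.FormRequest(url, formdata=data, callback=self.parse)

        def parse(self, response):
            print(response.text)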
    

    8. The workflow of scrapy's five core components:

    Engine (Scrapy)
    Handles the data flow of the whole system and triggers events (the core of the framework)
    Scheduler
    Accepts requests from the engine and pushes them into a queue, returning them when the engine asks again. Think of it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to fetch next and also removes duplicate URLs
    Downloader
    Downloads page content and hands it back to the spiders (the downloader is built on Twisted, an efficient asynchronous model)
    Spiders
    The spiders do the main work: they extract the information they need, i.e. the items, from specific pages. They can also extract links so that scrapy continues crawling the next page
    Item Pipeline
    Processes the items the spiders extract from pages; its main jobs are persisting items, validating them, and discarding unneeded data. After a page has been parsed by a spider, the items are sent to the pipeline and processed through several specific steps in order.
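
    In short, a request flows Spider -> Engine -> Scheduler -> Engine -> Downloader; the response flows back Downloader -> Engine -> Spider (parse); and the items yielded by the spider flow Engine -> Item Pipeline.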

    9. Scraping image data with scrapy

    In the spider file you only need to parse out the image url and submit it to the pipeline
    In the settings file: IMAGES_STORE = './imgsLib'
        Define the pipeline class in the pipelines file:
        1.from scrapy.pipelines.images import ImagesPipeline
        2.Change the pipeline class's parent class to ImagesPipeline
        3.Override three methods of the parent class:
    

    1. Scraping images from www.521609.com (xiaohua gallery)

    Step 1: create a project

    scrapy startproject zxy
    

    Step 2: create a spider file

    scrapy genspider img www.baidu.com
    

    Step 3: configure settings

    #UA spoofing
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36'
    #Do not obey the robots protocol
    ROBOTSTXT_OBEY = False
    #Log level
    LOG_LEVEL = 'ERROR'
    #Directory where downloaded images are stored
    IMAGES_STORE = './imgsLib'
    #Enable the pipeline
    ITEM_PIPELINES = {
       'zxy.pipelines.ZxyPipeline': 300,
    }
    
    

    img.py

    import scrapy
    from zxy.items import ZxyItem
    
    class ImgSpider(scrapy.Spider):
        name = 'img'
        # allowed_domains = ['www.baidu.com']
        start_urls = ['http://www.521609.com/daxuemeinv/']
        url = 'http://www.521609.com/daxuemeinv/list8%d.html'
        pageNum = 1
        def parse(self, response):
            li_list = response.xpath('//*[@id="content"]/div[2]/div[2]/ul/li')
            for li in li_list:
                img_src = "http://www.521609.com/" + li.xpath("./a[1]/img/@src").extract_first()
                item = ZxyItem()
                item['src'] = img_src
                yield item
            if self.pageNum < 3:
                self.pageNum += 1
                new_url = format(self.url%self.pageNum)
                # Manually send a GET request
                yield scrapy.Request(new_url, callback=self.parse)
    
    
    

    item.py

    import scrapy
    
    
    class ZxyItem(scrapy.Item):
        # define the fields for your item here like:
        src = scrapy.Field()
    
    

    pipelines.py

    from scrapy.pipelines.images import ImagesPipeline
    import scrapy
    # class ZxyPipeline:
    #     def process_item(self, item, spider):
    #         return item
    class ZxyPipeline(ImagesPipeline):
        # Send a request for a single media resource
        # item is the item received from the spider
        def get_media_requests(self, item, info):
            yield scrapy.Request(item['src'])
        # Specify the file name used to store the media data
        def file_path(self, request, response=None, info=None):
            name = request.url.split('/')[-1]
            print("go" + name)
            return name
        # When finished, pass the item on to the next pipeline class
        # def item_completed(self, results, item, info):
        #     return item
    

    10. Improving scrapy's crawling efficiency

    Just add the following five settings to the configuration file

    Increase concurrency:
    By default scrapy allows 16 concurrent requests, which can be raised. In the settings file set CONCURRENT_REQUESTS = 100 to raise the concurrency to 100.

    Lower the log level
    Running scrapy produces a lot of log output. To reduce CPU usage, set the log output to INFO or ERROR. In the settings file: LOG_LEVEL = 'INFO'

    Disable cookies
    If cookies are not actually needed, disable them to reduce CPU usage and speed up crawling. In the settings file: COOKIES_ENABLED = False

    Disable retries:
    Re-requesting failed HTTP requests (retries) slows crawling down, so retries can be disabled. In the settings file: RETRY_ENABLED = False

    Reduce the download timeout:
    When crawling a very slow link, a smaller download timeout lets stuck links be abandoned quickly, improving efficiency. In the settings file: DOWNLOAD_TIMEOUT = 10 (a 10-second timeout)
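
    Collected in one place, the settings above (values as suggested in the text):

    # settings.py -- efficiency-related options
    CONCURRENT_REQUESTS = 100   # raise concurrency
    LOG_LEVEL = 'ERROR'         # or 'INFO'; less logging, less CPU overhead
    COOKIES_ENABLED = False     # skip cookie processing
    RETRY_ENABLED = False       # do not retry failed requests
    DOWNLOAD_TIMEOUT = 10       # give up on slow downloads after 10 seconds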

    11. Passing data between requests (deep crawling)

    Deep crawling: crawl data spread across multiple levels of pages
    Use case: the data to crawl is not all on one page (like the Boss Zhipin example earlier)

    #Pass the item along with a manual request: yield scrapy.Request(url,callback,meta={'item':item})
    #The meta dict is passed to the callback
    #Receive meta inside the callback: item = response.meta['item']
    

    1. Scraping www.4567kan.com

    Step 1: create a project

    scrapy startproject mv
    

    Step 2: create a spider file

    scrapy genspider movie www.baidu.com
    

    Step 3: configure settings

    #UA spoofing
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36'
    #Do not obey the robots protocol
    ROBOTSTXT_OBEY = False
    #Log level
    LOG_LEVEL = 'ERROR'
    #Enable the pipeline
    ITEM_PIPELINES = {
       'mv.pipelines.MvPipeline': 300,
    }
    
    

    movie.py

    import scrapy
    from mv.items import MvItem
    
    
    class MovieSpider(scrapy.Spider):
        name = 'movie'
        start_urls = ['https://www.4567kan.com/index.php/vod/show/class/%E5%8A%A8%E4%BD%9C/id/1/page/1.html']
    
        url = 'https://www.4567kan.com/index.php/vod/show/class/%E5%8A%A8%E4%BD%9C/id/1/page/%d.html'
        pageNum = 1
        def parse(self, response):
            li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
            for li in li_list:
                title = li.xpath('./div[1]/a/@title').extract_first()
                href = 'https://www.4567kan.com/' + li.xpath('./div[1]/a/@href').extract_first()
                item = MvItem()
                item['title'] = title
                #meta is a dict; it gets passed to the callback function specified in the request
                yield scrapy.Request(href, callback=self.parse_detail, meta={'item': item})
            if self.pageNum <5:
                self.pageNum+=1
                new_url = format(self.url%self.pageNum)
                yield scrapy.Request(new_url,callback=self.parse)
    
        def parse_detail(self, response):
            item = response.meta['item']
            desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[2]/text()').extract_first()
            item['desc'] = desc
            yield  item
    

    item.py

    import scrapy
    
    
    class MvItem(scrapy.Item):
        # define the fields for your item here like:
        title = scrapy.Field()
        desc = scrapy.Field()
    
    

    pipelines.py

    class MvPipeline:
        def process_item(self, item, spider):
            print(item)
            return item
    

    12. Middleware

    Downloader middleware: operates in batches

    Purpose: intercept requests and responses in batches
    

    1. Intercepting requests: process_request

    UA spoofing:

    Give the requests as many different User-Agent identities as possible (usually this is simply set once in settings rather than here)
    request.headers['User-Agent'] = 'xxx'
    

    Batch implementation

    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
        "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
    ]
    
        def process_request(self, request, spider):
            # pick a random entry from the list (requires: import random)
            request.headers['User-Agent'] = random.choice(user_agent_list)
    

    Proxy rotation

    PROXY_http = [
        '153.180.102.104:80',
        '195.208.131.189:56055',
    ]
    PROXY_https = [
        '120.83.49.90:9000',
        '95.189.112.214:35508',
    ]
    
            if request.url.split(':')[0] == 'http':
                request.meta['proxy'] = 'http://' + random.choice(PROXY_http)  
            else:
                request.meta['proxy'] = 'https://' + random.choice(PROXY_https)
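
    Putting the pieces together, a minimal downloader middleware sketch (the class name is illustrative; it reuses the user_agent_list, PROXY_http and PROXY_https defined above, and the proxy addresses are only examples that will likely be stale):

    import random


    class RandomUAProxyMiddleware:
        def process_request(self, request, spider):
            # UA spoofing: pick a random User-Agent
            request.headers['User-Agent'] = random.choice(user_agent_list)
            # Proxy rotation: choose a proxy matching the request scheme
            if request.url.split(':')[0] == 'http':
                request.meta['proxy'] = 'http://' + random.choice(PROXY_http)
            else:
                request.meta['proxy'] = 'https://' + random.choice(PROXY_https)
            return None  # continue normal processing

    Remember to register the class in DOWNLOADER_MIDDLEWARES in settings.py (as the wangyiPro example below does).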
                
    

    2. Intercepting exceptions: process_exception

    If a proxy IP fails, the request can be corrected and re-sent
    
        def process_exception(self, request, exception, spider):
            print('i am process_exception')
            # Intercept the failed request, fix it, then re-send it
            # Proxy rotation
            if request.url.split(':')[0] == 'http':
                request.meta['proxy'] = 'http://' + random.choice(PROXY_http)
            else:
                request.meta['proxy'] = 'https://' + random.choice(PROXY_https)

            return request  # re-send the corrected request
    

    3. Intercepting responses: process_response

    Tamper with the response data, or replace the response object outright
    

    Using selenium inside scrapy:

    Instantiate the browser object: in the spider class's constructor
    Close the browser: in the spider class's closed(self, spider) method
    Run the browser automation steps inside the middleware
    

    13. Scraping NetEase News

    Scrape the news titles and contents of five sections: domestic, international, military, aviation and drones

    Analysis:

    The news data on each section's page is loaded dynamically
    

    Step 1: create the project

    scrapy startproject wangyiPro 
    scrapy genspider wangyi www.baidu.com
    

    Step 2: edit the files

    wangyi.py

    import scrapy
    from selenium import webdriver
    from wangyiPro.items import WangyiproItem
    
    
    class WangyiSpider(scrapy.Spider):
        name = 'wangyi'
        # allowed_domains = ['www.xxx.com']
        start_urls = ['https://news.163.com']
        five_model_urls = []
        bro = webdriver.Chrome(executable_path=r'C:\Users\zhui3\Desktop\chromedriver.exe')

        # Parse the urls of the five sections, then send manual requests for them
        def parse(self, response):
            model_index = [3, 4, 6, 7, 8]
            li_list = response.xpath('//*[@id="index2016_wrap"]/div[1]/div[2]/div[2]/div[2]/div[2]/div/ul/li')
            for index in model_index:
                li = li_list[index]
                # Grab the url of each of the five sections
                model_url = li.xpath('./a/@href').extract_first()
                self.five_model_urls.append(model_url)
                # Send a manual request for each section's url
                yield scrapy.Request(model_url, callback=self.parse_model)
    
        # Parse the news title and detail-page url on each section page
        # Problem: this response does not contain the news data that each section loads dynamically (it does not meet our needs)
        def parse_model(self, response):
            div_list = response.xpath('/html/body/div[1]/div[3]/div[4]/div[1]/div/div/ul/li/div/div')
            for div in div_list:
                title = div.xpath('./div/div[1]/h3/a/text()').extract_first()
                detail_url = div.xpath('./div/div[1]/h3/a/@href').extract_first()
                item = WangyiproItem()
                item['title'] = title
                # Request the detail page and parse the news content
                yield scrapy.Request(detail_url, callback=self.parse_new_content, meta={'item': item})

        def parse_new_content(self, response):  # parse the news content
            item = response.meta['item']
            content = response.xpath('//*[@id="endText"]//text()').extract()
            content = ''.join(content)
    
            item['content'] = content
    
            yield item
    
        # Runs last: close the browser when the spider finishes
        def closed(self, spider):
            self.bro.quit()
    

    items.py

    import scrapy
    
    class WangyiproItem(scrapy.Item):
        # define the fields for your item here like:
        title = scrapy.Field()
        content = scrapy.Field()
    

    middlewares.py

    from time import sleep
    from scrapy import signals
    from scrapy.http import HtmlResponse
    
    
    class WangyiproDownloaderMiddleware(object):
    
        def process_request(self, request, spider):
            return None
    
        def process_response(self, request, response, spider):  # spider is the instantiated spider object from the spider file
            # Intercept every response object
            # 1. Find the five responses (the five sections') that do not meet our needs
                # 1. Every response corresponds to exactly one request
                # 2. Once we locate the five requests behind those responses, we can locate the responses through them
                # 3. The request objects can be located via the urls of the five sections
                # Summary: url ==> request ==> response

            # 2. Fix (replace) the five unsatisfactory response objects
            # spider.five_model_urls: the urls of the five sections
            bro = spider.bro
            if request.url in spider.five_model_urls:
                bro.get(request.url)
                sleep(1)
                page_text = bro.page_source  # contains the dynamically loaded news data
                # If the condition holds, this response belongs to one of the five sections
                # HtmlResponse: build a replacement response object
                new_response = HtmlResponse(url=request.url, body=page_text, encoding='utf-8', request=request)
                return new_response
            return response
    
        def process_exception(self, request, exception, spider):
    
            pass
    
    

    pipelines.py: classification with Baidu AI (AipNlp)

    from aip import AipNlp
    
    """ 你的 APPID AK SK """
    APP_ID = '219518'
    API_KEY = 'rXTO5pFiBSoEtwYVl8cKH'
    SECRET_KEY = 'oyxpRL7qyb9ubQC8nbsHpPGSfUV '
    
    
    class WangyiproPipeline:
        client = AipNlp(APP_ID, API_KEY, SECRET_KEY)
        def process_item(self, item, spider):
            title = item['title']
            content = item['content']
            #UnicodeEncodeError: 'gbk' codec can't encode character 'xa0' in position 242: illegal multibyte sequence
            #报错说不能被编码,所以替换掉
            content = content.replace(u'xa0',u'')
            title = title.replace(u'xa0',u'')
            wd_dic = self.client.keyword(title,content)
            tp_dic = self.client.topic(title,content)
            print(wd_dic,tp_dic)
            return item
    
    

    setting.py

    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
    ROBOTSTXT_OBEY = False
    LOG_LEVEL = 'ERROR'
    DOWNLOADER_MIDDLEWARES = {
       'wangyiPro.middlewares.WangyiproDownloaderMiddleware': 543,
    }
    ITEM_PIPELINES = {
       'wangyiPro.pipelines.WangyiproPipeline': 300,
    }
    

    run.py

    from scrapy.cmdline import execute
    
    execute(["scrapy", "crawl", "wangyi"])
    

    14. Whole-site crawling with CrawlSpider

    CrawlSpider is a subclass of Spider

    Straight to a project:

    Scrape the title, processing status and text content from the sun0769 (阳光热线) complaints site

    1. Create the project

    scrapy startproject sumpro
    

    2. Create a spider file:

    scrapy genspider -t crawl sun www.xxxx.com
    

    3. Constructing the link extractor and the rule parser

    3.1 Link extractor:

    Purpose: extract links according to a specified rule
    Extraction rule: allow='regular expression'
    

    3.2 Rule parser:

    Purpose: take the links extracted by the link extractor, send requests for them, and parse the
    resulting page source data according to the specified rule
    

    follow=True:

    Keep applying the link extractor to the pages reached through the page-number links it extracted
    Note: link extractors and rule parsers are paired one-to-one
    

    4. Project code

    sun.py

    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule
    from sumpro.items import SumproItem, SumproItem_second
    
    
    class SunSpider(CrawlSpider):
        name = 'sun'
        # allowed_domains = ['www.xxxx.com']
        start_urls = ['http://wz.sun0769.com/political/index/politicsNewest?id=1&page=1']
    
        # Link extractors
        Link = LinkExtractor(allow=r'id=1&page=\d+')
        Link_detail = LinkExtractor(allow=r'index\?id=\d+')
        rules = (
            # Instantiate Rule (rule parser) objects
            Rule(Link, callback='parse_item', follow=True),
            Rule(Link_detail, callback='parse_detail'),
        )
    
        def parse_item(self, response):
            li_list = response.xpath('/html/body/div[2]/div[3]/ul[2]/li')
            for i in li_list:
                title = i.xpath('./span[3]/a[1]/text()').extract_first()
                status = i.xpath('./span[2]/text()').extract_first()
                num = i.xpath('./span[1]/text()').extract_first()
                item = SumproItem_second()
                item['title'] = title
                item['status'] = status
                item['num'] = num
                yield item
    
        def parse_detail(self, response):
            content = response.xpath('/html/body/div[3]/div[2]/div[2]/div[2]/pre//text()').extract()
            content = ''.join(content)
            num = response.xpath('/html/body/div[3]/div[2]/div[2]/div[1]/span[4]/text()').extract_first()
            #num may be missing on the detail page
            if num:
                num = num.split(':')[-1]
                item = SumproItem()
                item['content'] = content
                item['num'] = num
                yield item
    
    

    item.py

    import scrapy
    
    # num is added so that the content can be matched with the title/status record when stored
    class SumproItem(scrapy.Item):
        # define the fields for your item here like:
        content = scrapy.Field()
        num = scrapy.Field()
    
    class SumproItem_second(scrapy.Item):
        title = scrapy.Field()
        status = scrapy.Field()
        num = scrapy.Field()
    

    pipelines.py

    class SumproPipeline:
        def process_item(self, item, spider):
            if item.__class__.__name__ == 'SumproItem':
                content = item['content']
                num = item['num']
                print("内容" + content) #执行sql
    
            else:
                title = item['title']
                status = item['status']
                num = item['num']
                print("1" + title, "2"+status,"3"+num)
    
            return item
    

    middlewares.py: the site's anti-scraping measure is IP banning, so a proxy IP must be set

        def process_request(self, request, spider):
            request.meta['proxy'] = 'http://' + "218.91.7.82:43413"
    
    

    settings: also enable the middleware and the pipeline, set the log level, ignore robots.txt, and spoof the UA

    BOT_NAME = 'sumpro'
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
    ROBOTSTXT_OBEY = False
    LOG_LEVEL = 'ERROR'
    SPIDER_MODULES = ['sumpro.spiders']
    NEWSPIDER_MODULE = 'sumpro.spiders'
    ########################
    ITEM_PIPELINES = {
       'sumpro.pipelines.SumproPipeline': 300,
    }
    DOWNLOADER_MIDDLEWARES = {
       'sumpro.middlewares.SumproDownloaderMiddleware': 543,
    }
    

    run.py

    from scrapy.cmdline import execute
    
    execute(["scrapy", "crawl", "sun"])
    

    15. Distributed crawling

    What is a distributed crawler?

    Build a cluster out of multiple machines, have every machine in the cluster run the same program,
    and let them crawl the data of the same website in a distributed fashion
    

    Why use a distributed crawler?

    To improve crawling efficiency
    

    How is a distributed crawler implemented?

    Distribution is implemented with scrapy + redis,
    i.e. scrapy combined with the scrapy-redis component
    

    Why can't native scrapy be distributed on its own?

    The scheduler cannot be shared across the cluster
    The pipeline cannot be shared
    

    What the scrapy-redis component provides:

    A scheduler and a pipeline that can be shared
    

    1. Environment setup:

    redis
    pip install scrapy-redis
    

    2. Coding workflow:

    1. Create a project

    scrapy startproject fbsPro
    

    2. Create a spider file

    A CrawlSpider-based spider file
    scrapy genspider -t crawl fbs www.xxxx.com
    

    3. Modify the spider file

    - Import: from scrapy_redis.spiders import RedisCrawlSpider
    - Change the spider class's parent class to RedisCrawlSpider
    - Replace start_urls with redis_key = 'xxx'  # the name of the queue in the shared scheduler
    - Write the spider's data-parsing logic
    

    4. Configure settings:

    - UA
    	USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
    - Specify the pipeline:
    # Enable the shareable pipeline
    ITEM_PIPELINES = {
        'scrapy_redis.pipelines.RedisPipeline': 400
    }
    - Specify the scheduler:
    # Add a dedup container class that stores request fingerprints in a Redis set, making deduplication persistent
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
    # Use scrapy-redis's own scheduler
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    # Whether the scheduler persists, i.e. whether the request queue and the dedup fingerprint set in Redis are kept when the crawl ends. True means persist (do not clear the data); False means clear it
    SCHEDULER_PERSIST = True
    # Point to the redis server:
    REDIS_HOST = 'ip address of the redis server'
    REDIS_PORT = 6379
    # REDIS_PARAMS = {
    #     'password': 'redisPasswordTest666666',
    # }
    
    # Throttle the crawl speed
    #CONCURRENT_REQUESTS = 2
    

    5. Redis configuration

    Edit the configuration: redis.conf
    #bind 127.0.0.1
    
    # Turn off protected-mode so that external machines can connect
    protected-mode no
    
    Start the redis server with this configuration file
    ./redis-server redis.conf
    
    Start the redis client
    redis-cli
    

    6. Run the project

    From the directory that contains the spider file:
    scrapy runspider fbs.py
    

    7. Push a start url into the scheduler's queue:

    Where is the queue?

    Answer: the queue lives in redis

    lpush fbsQueue http://wz.sun0769.com/political/index/politicsNewest?id=1&page=1
    

    8. After the crawl finishes, inspect the stored items

    lrange fbs:items 0 -1
    

    9. Code

    fbs.py

    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule
    from scrapy_redis.spiders import RedisCrawlSpider,RedisSpider
    from fbsPro.items import FbsproItem
    
    
    class FbsSpider(RedisCrawlSpider):
        name = 'fbs'
        # allowed_domains = ['www.xxxx.com']
        # start_urls = ['http://www.xxxx.com/']
        redis_key = 'fbsQueue'
        rules = (
            Rule(LinkExtractor(allow=r'id=1&page=\d+'), callback='parse_item', follow=True),
        )
    
        def parse_item(self, response):
            li_list = response.xpath('/html/body/div[2]/div[3]/ul[2]/li')
            for i in li_list:
                title = i.xpath('./span[3]/a[1]/text()').extract_first()
                status = i.xpath('./span[2]/text()').extract_first()
                item = FbsproItem()
                item['title'] = title
                item['status'] = status
                yield item
    
    

    items.py

    import scrapy


    class FbsproItem(scrapy.Item):
        # define the fields for your item here like:
        title = scrapy.Field()
        status = scrapy.Field()
    

    settings.py

    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
    
    BOT_NAME = 'fbsPro'
    
    SPIDER_MODULES = ['fbsPro.spiders']
    NEWSPIDER_MODULE = 'fbsPro.spiders'
    ROBOTSTXT_OBEY = False
    CONCURRENT_REQUESTS = 2
    ITEM_PIPELINES = {
        'scrapy_redis.pipelines.RedisPipeline': 400
    }
    
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    SCHEDULER_PERSIST = True
    REDIS_HOST = '127.0.0.1'
    REDIS_PORT = 6379
    # REDIS_PARAMS = {
    #     'password': 'redisPasswordTest666666',
    # }
    
    

    16. Incremental crawling

    Concept:

    Monitor a website for newly added data.
    

    Core:

    Deduplication!!!
    

    Deep-crawl type:

    For deep-crawl sites, the detail-page urls need to be recorded and checked
    Record: save the urls of the detail pages that have already been crawled
    (store the urls in a redis set)
    Check: before requesting a detail-page url, look it up in the record first; if it is there,
    the url has already been crawled.
    

    Code

    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule
    from redis import Redis
    from zjs_moviePro.items import ZjsMovieproItem
    
    
    class MovieSpider(CrawlSpider):
        name = 'movie'
        conn = Redis(host='127.0.0.1', port=6379)
        # allowed_domains = ['www.xxx.com']
        start_urls = ['https://www.4567tv.tv/index.php/vod/show/id/6.html']
        rules = (  # /index.php/vod/show/id/6/page/2.html
            Rule(LinkExtractor(allow=r'id/6/page/\d+\.html'), callback='parse_item', follow=False),
        )
    
        def parse_item(self, response):
            li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
            for li in li_list:
                name = li.xpath('./div/div/h4/a/text()').extract_first()
                detail_url = 'https://www.4567tv.tv' + li.xpath('./div/div/h4/a/@href').extract_first()
                ex = self.conn.sadd('movie_detail_urls', detail_url)
                if ex == 1:  # detail_url was successfully added to the redis set
                    print('New data to crawl......')
                    item = ZjsMovieproItem()
                    item['name'] = name
                    yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
                else:
                    print('This entry has already been crawled!')
    
        def parse_detail(self, response):
            item = response.meta['item']
            desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[2]/text()').extract_first()
            item['desc'] = desc
    
            yield item
    
    
    class ZjsMovieproItem(scrapy.Item):
        # define the fields for your item here like:
        name = scrapy.Field()
        desc = scrapy.Field()
    
    
    class ZjsMovieproPipeline(object):
        def process_item(self, item, spider):
            conn = spider.conn
            conn.lpush('movie_data',item)
            return item
    
    

    Non-deep-crawl sites:

    Term: data fingerprint
    
    A unique identifier derived from a record of data
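
    A minimal sketch of the idea, assuming the same local redis connection as above: hash the record's fields into a fingerprint and let a redis set decide whether the record is new.

    import hashlib

    from redis import Redis

    conn = Redis(host='127.0.0.1', port=6379)

    def is_new_record(item: dict) -> bool:
        # Build the data fingerprint: a hash of the record's content
        raw = ''.join(str(item[k]) for k in sorted(item))
        fingerprint = hashlib.md5(raw.encode('utf-8')).hexdigest()
        # sadd returns 1 only if the fingerprint was not already in the set
        return conn.sadd('record_fingerprints', fingerprint) == 1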
    

  • Original post: https://www.cnblogs.com/zdqc/p/13628070.html