• Scrapy Project





    I. Project directory structure

    The spiders folder contains doubanSpider.py. For how the project is created and how its pieces fit together, see the environment-setup post. A sketch of the layout follows below.

    [Screenshot of the project directory structure; the original image did not survive.]
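    Since the screenshot is lost, here is the layout that `scrapy startproject douban` generates by default, plus the files this post adds (main.py's exact location is an assumption; it is typically placed next to scrapy.cfg):

    douban/
    ├── scrapy.cfg            # deploy/run configuration
    ├── main.py               # the author's IDE launcher (see below)
    └── douban/
        ├── __init__.py
        ├── items.py          # DoubanItem definition
        ├── middlewares.py
        ├── pipelines.py      # DoubanPipeline (Excel export)
        ├── settings.py       # project settings
        └── spiders/
            ├── __init__.py
            └── doubanSpider.py   # the spider itself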



    II. Project source code

    1. doubanSpider.py

    # -*- coding: utf-8 -*-
    import scrapy
    from douban.items import DoubanItem
    
    # Spider class
    class DoubanspiderSpider(scrapy.Spider):
        name = 'doubanSpider'                   # spider name
        allowed_domains = ['movie.douban.com']  # domains the spider may crawl
        # Starting URL, built from a page offset
        offset = 0
        url = 'https://movie.douban.com/top250?start='
    
        start_urls = [url + str(offset)]        # first URL to crawl
    
        def parse(self, response):
            # Dump the raw page for debugging if needed:
            # with open("douban.html", "w", encoding="utf-8") as f:
            #     f.write(str(response.body, encoding="utf-8"))
    
            # Root nodes: one <div class="info"> per movie
            movies = response.xpath("//div[@class='info']")
    
            for each in movies:
                # Create a fresh item for every movie; reusing a single
                # instance across iterations can leak data between items
                item = DoubanItem()
                # Title (the first <span class="title">; the second one
                # holds the original-language title)
                item['title'] = each.xpath(".//span[@class='title'][1]/text()").extract()[0]
                # Info line: the class attribute here is whitespace-only,
                # so match it with normalize-space() rather than @class=''
                item['info'] = each.xpath(".//div[@class='bd']/p[normalize-space(@class)='']/text()[2]").extract()[0]
                # Rating
                item['star'] = each.xpath(".//div[@class='bd']/div[@class='star']/span[@class='rating_num']/text()").extract()[0]
                # One-line quote (some movies have none)
                quote = each.xpath(".//div[@class='bd']/p[@class='quote']/span/text()").extract()
    
                # Guard the missing-quote case so the pipeline never
                # hits a KeyError
                item['quote'] = quote[0] if quote else ''
    
                print(item)
                yield item
    
            # Top 250 = 10 pages: start = 0, 25, ..., 225
            if self.offset < 225:
                self.offset += 25
                # After finishing a page, request the next one: bump the
                # offset, build the new URL, and let parse() handle it
                yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
    
    
    
    
    
    
    
            # ---- Commented-out reference code from an earlier exercise
            # (a teacher-list spider), kept by the author ----
    
            # Match the root-node list for all teachers with Scrapy's xpath
            # teacher_list = response.xpath("//div[@class='teacher-text']")
    
            # Collected items
            # teacherItem = []
    
            # Iterate over the root nodes
            # for each in teacher_list:
                # An Item object to hold the data
                # item = GecspiderItem()
    
                # Without extract() the result is a list of selector objects
                # name = each.xpath('./h4/text()').extract()
                # Job title
                # title = each.xpath('./h6/text()').extract()
                # Bio
                # info = each.xpath('./p/text()').extract()
                # item['name'] = name[0]
                # item['title'] = title[0]
                # item['info'] = info[0]
                # Pause here, hand back the item, and resume on the next call
                # yield item
    
                # teacherItem.append(item)
    
                # print(name[0])
                # print(title[0])
                # print(info[0])
            # return teacherItem
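    As a side note, instead of computing the offset by hand, the pagination could follow the page's own "next" link. A minimal sketch of what that would look like inside parse() (the span.next selector is an assumption about douban's markup, not taken from the original post):

    # Inside parse(), after yielding the items for the current page:
    next_page = response.xpath("//span[@class='next']/a/@href").extract_first()
    if next_page is not None:
        # response.follow resolves the relative href against the current URL
        yield response.follow(next_page, callback=self.parse)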



    2. items.py

    # -*- coding: utf-8 -*-
    
    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://doc.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class DoubanItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        # Title
        title = scrapy.Field()
        # Info line
        info = scrapy.Field()
        # Rating
        star = scrapy.Field()
        # One-line quote
        quote = scrapy.Field()
        # Declared but never populated by the spider
        pingjia = scrapy.Field()

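    Note that a Scrapy Item behaves like a dict, but reading a field that was never assigned raises KeyError instead of returning None; that is why the spider guards the quote field. A quick standalone illustration (assuming the DoubanItem above):

    from douban.items import DoubanItem
    
    item = DoubanItem(title='肖申克的救赎', star='9.7')
    print(item['title'])           # -> 肖申克的救赎
    print(item.get('quote', ''))   # unset field: .get() returns the default
    # print(item['quote'])         # would raise KeyError: 'quote'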


    3. main.py

    from scrapy import cmdline
    
    # Launch the spider from inside an IDE instead of a shell
    cmdline.execute("scrapy crawl doubanSpider".split())
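    For reference, the same thing can be done with Scrapy's CrawlerProcess API, which gives more programmatic control (a minimal sketch, not from the original post):

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings
    
    # Load the project's settings.py and run the spider by name
    process = CrawlerProcess(get_project_settings())
    process.crawl('doubanSpider')
    process.start()  # blocks until the crawl finishes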



    4. pipelines.py

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    from openpyxl import Workbook
    
    class DoubanPipeline(object):
        wb = Workbook()
        ws = wb.active
        # Header row: title / rating / info / quote
        ws.append(['标题', '评分', '信息', '简介'])
    
        def process_item(self, item, spider):
            # Append one row per item
            line = [item['title'], item['star'], item['info'], item['quote']]
            self.ws.append(line)
            # Saving after every item is simple but slow for large crawls;
            # moving the save into a close_spider() hook would write once
            self.wb.save('douban.xlsx')
            return item

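    The original file imported json without using it (the unused import is dropped above); if a JSON copy of the data is also wanted, a second pipeline along the lines of the Scrapy docs example would do it (a sketch; remember to register it in ITEM_PIPELINES):

    import json
    
    class JsonWriterPipeline(object):
        # open_spider/close_spider are standard pipeline hooks, called
        # once per crawl, so the file is opened and closed exactly once
        def open_spider(self, spider):
            self.file = open('douban.json', 'w', encoding='utf-8')
    
        def close_spider(self, spider):
            self.file.close()
    
        def process_item(self, item, spider):
            line = json.dumps(dict(item), ensure_ascii=False) + '\n'
            self.file.write(line)
            return item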


    5. settings.py

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for douban project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://doc.scrapy.org/en/latest/topics/settings.html
    #     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'douban'
    
    SPIDER_MODULES = ['douban.spiders']
    NEWSPIDER_MODULE = 'douban.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
    
    # Obey robots.txt rules
    #ROBOTSTXT_OBEY = True
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'douban.middlewares.DoubanSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'douban.middlewares.DoubanDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://doc.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
        'douban.pipelines.DoubanPipeline': 300,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
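
    One caveat: with ROBOTSTXT_OBEY commented out above, the spider falls back to Scrapy's default of not checking robots.txt. When running against the live site, throttling is worth turning on; one conservative combination (the values are suggestions, not from the original post):

    DOWNLOAD_DELAY = 1                  # pause between requests
    CONCURRENT_REQUESTS_PER_DOMAIN = 8  # cap parallel requests per domain
    AUTOTHROTTLE_ENABLED = True         # adapt the delay to server latency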
  • Related reading:
    iOS UI development: creating a navigation controller with storyboard, and the controller lifecycle
    iOS UI development: a brief introduction to UIWindow
    Third-party frameworks, code-only layout: basic usage of Masonry
    (Repost) Foundation performance optimization: NSDateFormatter
    Foundation framework: NSDateFormatter output time formats
    IoT MQTT protocol analysis and open-source Mosquitto deployment verification
    Playing with IoT: MQTT
    Push solutions for Android
    How internet push services work: long connections + heartbeats (the MQTT protocol)
    Migrating a MySQL database to SQL Server 2008
    将MySQL数据库转移到SqlServer2008数据库
  • Original post: https://www.cnblogs.com/Raodi/p/11187864.html