• [python][scrapy] Usage Overview (Part 4)


    [Written as a reference for beginners; experienced users need not waste their time on it]

    In the previous article we collected a large batch of proxy IPs. This article shows how to implement a downloader middleware so that each request to the target site goes out through a randomly chosen proxy IP.

    The crawl target is the currently red-hot travel site www.qunar.com; the goal is to collect all of qunar's SEO pages along with each page's SEO-related information.

    qunar does not provide the robots.txt file that most sites have, so there is no ready-made list of URLs to start from. However, its SEO pages are almost all deployed under http://www.qunar.com/routes/. Treating that page as the entry point, we recursively follow every link, on it and on the pages it leads to, whose URL contains routes/.
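    The filtering rule is deliberately crude: keep an href if it contains routes, and prefix relative links with the site root. A minimal sketch of that rule (the helper names are mine, not from the project; the spider below does the same thing inline):

    def should_follow(href):
        # Keep only links that point into qunar's SEO section
        return 'routes' in href

    def absolutize(href):
        # Prefix relative links with the site root
        if 'http' not in href:
            return 'http://www.qunar.com' + href
        return href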

    Let's get started.

    The target information is each page's SEO data, namely the keywords and description meta fields in the page head. The item definition:

    # Define here the models for your scraped items
    #
    # See documentation in:
    # http://doc.scrapy.org/topics/items.html

    from scrapy.item import Item, Field

    class SitemapItem(Item):
        # define the fields for your item here like:
        # name = Field()
        url = Field()
        keywords = Field()
        description = Field()
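    Scrapy items behave like dicts, so the fields declared above are set and read by key; a quick illustration with made-up values:

    item = SitemapItem()
    item['url'] = 'http://www.qunar.com/routes/'
    item['keywords'] = 'qunar,routes'
    print item['url']    # http://www.qunar.com/routes/
    print dict(item)     # {'url': '...', 'keywords': 'qunar,routes'}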

    Because we want to crawl through proxy IPs, we need to implement our own downloader middleware. Its job is to pick a random ip,port pair from the proxy-list file and set it as the proxy for the request. The code is as follows:

    import random

    class ProxyMiddleware(object):
        def process_request(self, request, spider):
            # Pick one random 'ip,port' line from the proxy list for this request
            fd = open('/home/xxx/services_runenv/crawlers/sitemap/sitemap/data/proxy_list.txt', 'r')
            data = fd.readlines()
            fd.close()
            index = random.randint(0, len(data) - 1)
            arr = data[index].strip().split(',')  # strip() drops the trailing newline
            request.meta['proxy'] = 'http://%s:%s' % (arr[0], arr[1])
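    One obvious weakness of the version above is that the proxy file is re-read on every request. A minimal variant, assuming the same one-ip,port-per-line file format, loads the list once when the middleware is instantiated (the class name is mine):

    import random

    class CachedProxyMiddleware(object):
        def __init__(self):
            # Load the proxy list once instead of on every request
            fd = open('/home/xxx/services_runenv/crawlers/sitemap/sitemap/data/proxy_list.txt', 'r')
            self.proxies = [line.strip().split(',') for line in fd if line.strip()]
            fd.close()

        def process_request(self, request, spider):
            ip, port = random.choice(self.proxies)[:2]
            request.meta['proxy'] = 'http://%s:%s' % (ip, port)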

    The heart of the project is the spider. It extracts every link on a page, wraps the qualifying URLs in Request objects and yields them, and at the same time extracts the page's keywords and description, yielding them as an item:

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.http import Request
    from scrapy.selector import HtmlXPathSelector
    from sitemap.items import SitemapItem

    class SitemapSpider(CrawlSpider):
        name = 'sitemap_spider'
        allowed_domains = ['qunar.com']
        start_urls = ['http://www.qunar.com/routes/']

        # Link following is done by hand in parse() below, so rules stay empty
        rules = (
            #Rule(SgmlLinkExtractor(allow=(r'http://www.qunar.com/routes/.*')), callback='parse'),
            #Rule(SgmlLinkExtractor(allow=('http:.*/routes/.*')), callback='parse'),
        )

        def parse(self, response):
            item = SitemapItem()
            x = HtmlXPathSelector(response)

            # Collect every link containing 'routes', prefixing relative URLs
            raw_urls = x.select("//a/@href").extract()
            urls = []
            for url in raw_urls:
                if 'routes' in url:
                    if 'http' not in url:
                        url = 'http://www.qunar.com' + url
                    urls.append(url)

            # Recurse into every qualifying link
            for url in urls:
                yield Request(url)

            # Extract the page's SEO fields, tolerating pages without them
            item['url'] = response.url.encode('UTF-8')
            arr_keywords = x.select("//meta[@name='keywords']/@content").extract()
            item['keywords'] = arr_keywords[0].encode('UTF-8') if arr_keywords else ''
            arr_description = x.select("//meta[@name='description']/@content").extract()
            item['description'] = arr_description[0].encode('UTF-8') if arr_description else ''

            yield item
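    The two commented-out rules hint at the more idiomatic CrawlSpider style: let a link extractor follow the routes/ links automatically and keep the callback purely for extraction. A sketch of what that could look like with the same item, untested against qunar (note the callback must not be named parse, which CrawlSpider reserves for itself):

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.selector import HtmlXPathSelector
    from sitemap.items import SitemapItem

    class SitemapRuleSpider(CrawlSpider):
        name = 'sitemap_rule_spider'
        allowed_domains = ['qunar.com']
        start_urls = ['http://www.qunar.com/routes/']

        rules = (
            # follow=True keeps the recursion going; the callback only extracts
            Rule(SgmlLinkExtractor(allow=(r'/routes/',)), callback='parse_seo', follow=True),
        )

        def parse_seo(self, response):
            x = HtmlXPathSelector(response)
            item = SitemapItem()
            item['url'] = response.url
            item['keywords'] = ''.join(x.select("//meta[@name='keywords']/@content").extract())
            item['description'] = ''.join(x.select("//meta[@name='description']/@content").extract())
            return item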

    The pipeline file is simple; it just appends the scraped data to a file:

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: http://doc.scrapy.org/topics/item-pipeline.html

    class SitemapPipeline(object):
        def process_item(self, item, spider):
            # Append each item as one '#$#'-delimited line
            data_path = '/home/xxx/services_runenv/crawlers/sitemap/sitemap/data/output/sitemap_data.txt'
            fd = open(data_path, 'a')
            line = str(item['url']) + '#$#' + str(item['keywords']) + '#$#' + str(item['description']) + '\n'
            fd.write(line)
            fd.close()
            return item
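    Because '#$#' is used as the field separator, the stored lines can be read back with a plain split; a quick sketch against the same (placeholder) path:

    for line in open('/home/xxx/services_runenv/crawlers/sitemap/sitemap/data/output/sitemap_data.txt'):
        url, keywords, description = line.rstrip('\n').split('#$#')
        print url, keywords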

    Finally, the settings.py file:

    # Scrapy settings for sitemap project
    #
    # For simplicity, this file contains only the most important settings by
    # default. All the other settings are documented here:
    #
    #     http://doc.scrapy.org/topics/settings.html
    #
    
    BOT_NAME = 'sitemap hello,world~!'
    BOT_VERSION = '1.0'
    
    SPIDER_MODULES = ['sitemap.spiders']
    NEWSPIDER_MODULE = 'sitemap.spiders'
    USER_AGENT = '%s/%s' % (BOT_NAME, BOT_VERSION)
    
    DOWNLOAD_DELAY = 0
    
    ITEM_PIPELINES = [
        'sitemap.pipelines.SitemapPipeline'
    ]
    
    DOWNLOADER_MIDDLEWARES = {
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
        'sitemap.middlewares.ProxyMiddleware': 100,
    }
    
    CONCURRENT_ITEMS = 128
    CONCURRENT_REQUESTS = 64
    CONCURRENT_REQUESTS_PER_DOMAIN = 64
    
    
    LOG_ENABLED = True
    LOG_ENCODING = 'utf-8'
    LOG_FILE = '/home/xxx/services_runenv/crawlers/sitemap/sitemap/log/sitemap.log'
    LOG_LEVEL = 'DEBUG'
    LOG_STDOUT = False
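    With the item, middleware, spider, pipeline and settings in place, the crawl can be started from the project root in the usual way (assuming the standard Scrapy project layout):

    scrapy crawl sitemap_spider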

    That wraps up this introduction to scrapy. I haven't touched its more advanced uses yet; once I finish reading the redis source code I plan to dig into scrapy's source as well. I hope this series helps those who are just getting started with scrapy!

    I like everything simple and practical and have no taste for the complex and flashy. I'm not a GEEK.