• Scrapy URLError


    The error message is as follows:

    2015-12-03 16:05:08 [scrapy] INFO: Scrapy 1.0.3 started (bot: LabelCrawler)
    2015-12-03 16:05:08 [scrapy] INFO: Optional features available: ssl, http11, boto
    2015-12-03 16:05:08 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'LabelCrawler.spiders', 'SPIDER_MODULES': ['LabelCrawler.spiders'], 'BOT_NAME': 'LabelCrawler'}
    2015-12-03 16:05:08 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
    2015-12-03 16:05:09 [boto] DEBUG: Retrieving credentials from metadata server.
    2015-12-03 16:05:09 [boto] ERROR: Caught exception reading instance data
    Traceback (most recent call last):
      File "D:Anacondalibsite-packagesotoutils.py", line 210, in retry_url
        r = opener.open(req, timeout=timeout)
      File "D:Anacondaliburllib2.py", line 431, in open
        response = self._open(req, data)
      File "D:Anacondaliburllib2.py", line 449, in _open
        '_open', req)
      File "D:Anacondaliburllib2.py", line 409, in _call_chain
        result = func(*args)
      File "D:Anacondaliburllib2.py", line 1227, in http_open
        return self.do_open(httplib.HTTPConnection, req)
      File "D:Anacondaliburllib2.py", line 1197, in do_open
        raise URLError(err)
    URLError: <urlopen error [Errno 10051] >
    2015-12-03 16:05:09 [boto] ERROR: Unable to read instance data, giving up
    2015-12-03 16:05:09 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2015-12-03 16:05:09 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2015-12-03 16:05:09 [scrapy] INFO: Enabled item pipelines: 
    2015-12-03 16:05:09 [scrapy] INFO: Spider opened
    2015-12-03 16:05:09 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2015-12-03 16:05:09 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
    2015-12-03 16:05:09 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
    2015-12-03 16:05:09 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
    

     The cause:

      That particular error message is generated by boto (boto 2.38.0 py27_0), which Scrapy uses to connect to Amazon S3; Scrapy does not enable this by default. As the log shows, boto tries at startup to retrieve credentials from the EC2 instance metadata server, and on a machine that is not running inside EC2 that address is unreachable, which is what [Errno 10051] (WSAENETUNREACH on Windows) means.
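
    For illustration only (this snippet is not from the original post), here is a minimal reproduction of what boto attempts: an HTTP request to the EC2 metadata endpoint, using the same urllib2 stack shown in the traceback. Run outside EC2, it fails with the same kind of URLError:

    import urllib2

    # EC2 instance metadata endpoint; reachable only from inside EC2.
    METADATA_URL = 'http://169.254.169.254/latest/meta-data/'

    try:
        urllib2.urlopen(METADATA_URL, timeout=2)
    except urllib2.URLError as e:
        # Outside EC2 this prints something like:
        #   <urlopen error [Errno 10051] ...>
        print(e)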

    The fix:

    1. In settings.py, add:

    DOWNLOAD_HANDLERS = {'S3': None,}
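
    A possible reason the setting above had no effect: DOWNLOAD_HANDLERS keys are URI schemes, and in Scrapy's default handler table the S3 scheme is the lowercase 's3', so the lowercase form is likely what is needed. A sketch based on the documented default handlers:

    # settings.py -- disable the S3 download handler so boto is never touched.
    # Note the lowercase 's3': DOWNLOAD_HANDLERS keys are URI schemes.
    DOWNLOAD_HANDLERS = {
        's3': None,
    }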
    

    But following that method didn't work for me, so instead I added the following to spider.py:

    from scrapy import optional_features
    # Drop 'boto' from Scrapy's optional feature set so boto is never initialized.
    optional_features.remove('boto')
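
    For context, here is a minimal sketch of where those two lines belong in the spider module; the spider name and parse logic below are placeholders, not from the original post. The important point is that optional_features.remove('boto') runs at import time, before the crawl starts:

    # spider.py -- minimal sketch; spider name and parse logic are placeholders.
    from scrapy import optional_features
    optional_features.remove('boto')  # remove boto before Scrapy wires up S3 support

    import scrapy

    class BooksSpider(scrapy.Spider):
        name = 'books'
        start_urls = [
            'http://www.dmoz.org/Computers/Programming/Languages/Python/Books/',
        ]

        def parse(self, response):
            # Placeholder extraction: yield the page title.
            for title in response.css('title::text').extract():
                yield {'title': title}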
    

      Problem solved.

    To be honest, even with this error the crawler runs fine; I just couldn't stand leaving it there...

  • Original post: https://www.cnblogs.com/tina-smile/p/5016599.html