• Log levels


    Log levels (types), from most to least severe:

      ERROR: error messages

      WARNING: warnings

      INFO: general information

      DEBUG: debugging information (the default level)

    To output only log messages at or above a given level, add to settings.py:

      LOG_LEVEL = "ERROR"

    To write the log to a specified file instead of printing it to the terminal, set in settings.py:

      LOG_FILE = "log.txt"
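
    Log messages can also be emitted from inside a spider through its built-in self.logger (a standard logging.Logger named after the spider). A minimal sketch, with a hypothetical spider name 'leveldemo': with LOG_LEVEL = "ERROR" in effect, only the error() call below would actually be recorded.

    # -*- coding: utf-8 -*-
    import scrapy


    class LevelDemoSpider(scrapy.Spider):
        # hypothetical spider used only to illustrate the four levels
        name = 'leveldemo'
        start_urls = ['http://www.baidu.com/']

        def parse(self, response):
            self.logger.debug("debug message")       # DEBUG: shown only at the default level
            self.logger.info("general information")  # INFO
            self.logger.warning("warning message")   # WARNING
            self.logger.error("error message")       # ERROR: survives LOG_LEVEL = "ERROR"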

    settings.py

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for logtest project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://doc.scrapy.org/en/latest/topics/settings.html
    #     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'logtest'
    
    SPIDER_MODULES = ['logtest.spiders']
    NEWSPIDER_MODULE = 'logtest.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    LOG_LEVEL = "ERROR"
    LOG_FILE = "log.txt"
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'logtest.middlewares.LogtestSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'logtest.middlewares.LogtestDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://doc.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    #ITEM_PIPELINES = {
    #    'logtest.pipelines.LogtestPipeline': 300,
    #}
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
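
    Besides setting LOG_LEVEL and LOG_FILE project-wide in settings.py, Scrapy also lets a single spider override them through its custom_settings class attribute. A minimal sketch, assuming a hypothetical spider name 'quietdemo':

    # -*- coding: utf-8 -*-
    import scrapy


    class QuietDemoSpider(scrapy.Spider):
        # hypothetical spider: overrides the log settings only for itself
        name = 'quietdemo'
        start_urls = ['http://www.baidu.com/']
        custom_settings = {
            'LOG_LEVEL': 'ERROR',     # only ERROR (and above) is recorded
            'LOG_FILE': 'quiet.txt',  # written here instead of the terminal
        }

        def parse(self, response):
            self.logger.error("recorded in quiet.txt")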

    spiders/logtest.py

    # -*- coding: utf-8 -*-
    import scrapy
    
    
    class LogdemoSpider(scrapy.Spider):
        name = 'logdemo'
        allowed_domains = ['www.baidu.com']
        start_urls = ['http://www.baidu.com/']
    
        def parse(self, response):
            # use a context manager so the file is closed even if writing fails
            with open("demo.txt", "w", encoding="utf-8") as fp:
                fp.write(response.text)
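
    Running scrapy crawl logdemo then writes the fetched page source to demo.txt. Because LOG_LEVEL is "ERROR" and LOG_FILE is "log.txt", the terminal stays silent and only ERROR-level (or more severe) messages end up in log.txt, so a trouble-free run leaves that file nearly empty.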