• Calling Scrapy from Celery


      My environment: Celery 3.1.25, Python 3.6.9, Windows 10

    The Celery task code is as follows; QuotesSpider is the spider class in my Scrapy project:

    from celery_app import app
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings
    from tutorial.spiders.quotes import QuotesSpider
    
    def crawl_run():
        scope = 'all'
        process = CrawlerProcess(settings=get_project_settings())
        process.crawl(QuotesSpider, scope)
        process.start()  # blocks until the crawl finishes, so no join() is needed
    
    @app.task(queue='default')
    def execute_task():
        return crawl_run()


    Later I found that when this runs repeatedly as a scheduled task, it raises a ReactorNotRestartable error: Twisted's reactor cannot be started a second time in the same process once CrawlerProcess.start() has stopped it. Rewriting it as below (using crochet to run the reactor in a background thread) solved the problem. This class must sit in the same directory as the project's scrapy.cfg:
    
    
    from crawler.tutorial.crawler.tutorial.spiders.quotes import QuotesSpider
    from scrapy.utils.project import get_project_settings
    import scrapy.crawler as crawler
    from crochet import setup, wait_for
    setup()  # start the Twisted reactor in a background thread
    
    class Scraper():
        @wait_for(timeout=3600)  # block the caller until the crawl's Deferred fires
        def crawl_run(self):
            settings = get_project_settings()
            runner = crawler.CrawlerRunner(settings)
            runner.crawl(QuotesSpider, 'all')  # pass the spider class, not an instance
            return runner.join()
    
    
    if __name__ == '__main__':
        scraper = Scraper()
        scraper.crawl_run()
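Besides the crochet approach, another common workaround (my assumption, not from the original post) is to launch each crawl in a fresh child process: the reactor is created and destroyed inside a short-lived interpreter, so it never hits the restart restriction. A minimal, self-contained sketch, with a stand-in function in place of the real CrawlerProcess call:

```python
import multiprocessing

def crawl_in_child(result_queue):
    # In the real task this would build a CrawlerProcess, call process.start(),
    # and put the outcome on the queue. A stand-in keeps the sketch runnable.
    result_queue.put('crawl finished')

def execute_task():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=crawl_in_child, args=(q,))
    p.start()
    p.join()  # safe to repeat: the reactor lives and dies in the child process
    return q.get()
```

Each invocation of the task pays the cost of spawning a process, but the parent worker never touches the reactor, so the task can be scheduled as often as needed.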
  • Original post: https://www.cnblogs.com/WalkOnMars/p/11558560.html