• Installing Scrapy on Windows


    Installation reference: http://davenzhang.com/scrapy_install.htm

    I installed Scrapy first, but found that import scrapy raised an error. I then installed the .exe installers for the related dependency packages, and after that the import worked.
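
    A quick way to confirm the install is importable is to run a check like the one below from the Python prompt (a minimal sketch; it assumes only that Scrapy itself is installed):

    # Sanity check: import Scrapy and print the installed version.
    import scrapy

    print(scrapy.__version__)  # expected to print something like 0.18.1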

    At that point import scrapy worked fine, but scrapy startproject demo failed, and scrapy version errored out as well:

    D:\Just4Study\Python\TestProgram>C:\Python27\Scripts\scrapy version
    Traceback (most recent call last):
      File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
        "__main__", fname, loader, pkg_name)
      File "C:\Python27\lib\runpy.py", line 72, in _run_code
        exec code in run_globals
      File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 5, in <module>
        import pkg_resources
    ImportError: No module named pkg_resources

    Following the error, the explanation I found was that setuptools was either missing or not installed properly. The fix I found:
    First download:
    wget http://peak.telecommunity.com/dist/ez_setup.py
    then install it:
    python ez_setup.py

    D:\Just4Study\Python\TestProgram>python ez_setup.py
    Downloading http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
    Processing setuptools-0.6c11-py2.7.egg
    Copying setuptools-0.6c11-py2.7.egg to c:\python27\lib\site-packages
    Adding setuptools 0.6c11 to easy-install.pth file
    Installing easy_install-script.py script to C:\Python27\Scripts
    Installing easy_install.exe script to C:\Python27\Scripts
    Installing easy_install.exe.manifest script to C:\Python27\Scripts
    Installing easy_install-2.7-script.py script to C:\Python27\Scripts
    Installing easy_install-2.7.exe script to C:\Python27\Scripts
    Installing easy_install-2.7.exe.manifest script to C:\Python27\Scripts

    Installed c:\python27\lib\site-packages\setuptools-0.6c11-py2.7.egg
    Processing dependencies for setuptools==0.6c11
    Finished processing dependencies for setuptools==0.6c11
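
    With setuptools installed, the missing pkg_resources module should now be importable. A quick check (a minimal sketch using only the standard setuptools API):

    # pkg_resources ships with setuptools; if this import succeeds, the
    # "No module named pkg_resources" error above is resolved.
    import pkg_resources

    print(pkg_resources.get_distribution("setuptools").version)  # e.g. 0.6c11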

    Then run it again:
    D:\Just4Study\Python\TestProgram>C:\Python27\Scripts\scrapy version
    Traceback (most recent call last):
      File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
        "__main__", fname, loader, pkg_name)
      File "C:\Python27\lib\runpy.py", line 72, in _run_code
        exec code in run_globals
      File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 8, in <module>
        from scrapy.crawler import CrawlerProcess
      File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 5, in <module>
        from scrapy.core.engine import ExecutionEngine
      File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 14, in <module>
        from scrapy.core.downloader import Downloader
      File "C:\Python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 13, in <module>
        from .middleware import DownloaderMiddlewareManager
      File "C:\Python27\lib\site-packages\scrapy\core\downloader\middleware.py", line 7, in <module>
        from scrapy.http import Request, Response
      File "C:\Python27\lib\site-packages\scrapy\http\__init__.py", line 8, in <module>
        from scrapy.http.headers import Headers
      File "C:\Python27\lib\site-packages\scrapy\http\headers.py", line 1, in <module>
        from w3lib.http import headers_dict_to_raw
    ImportError: No module named w3lib.http

    To install w3lib, download it from https://github.com/scrapy/w3lib and install it:

    D:\Just4Study\Python\TestProgram>python C:\Python27\w3lib-master\setup.py install
    running install
    running build
    running build_py
    error: package directory 'w3lib' does not exist

    D:\Just4Study\Python\TestProgram>c:

    C:\Python27\Scrapy-0.18.1>cd C:\Python27\w3lib-master

    C:\Python27\w3lib-master>python setup.py install
    running install
    running build
    running build_py
    creating build
    creating build\lib
    creating build\lib\w3lib
    copying w3lib\encoding.py -> build\lib\w3lib
    copying w3lib\form.py -> build\lib\w3lib
    copying w3lib\html.py -> build\lib\w3lib
    copying w3lib\http.py -> build\lib\w3lib
    copying w3lib\url.py -> build\lib\w3lib
    copying w3lib\util.py -> build\lib\w3lib
    copying w3lib\__init__.py -> build\lib\w3lib
    running install_lib
    creating C:\Python27\Lib\site-packages\w3lib
    copying build\lib\w3lib\encoding.py -> C:\Python27\Lib\site-packages\w3lib
    copying build\lib\w3lib\form.py -> C:\Python27\Lib\site-packages\w3lib
    copying build\lib\w3lib\html.py -> C:\Python27\Lib\site-packages\w3lib
    copying build\lib\w3lib\http.py -> C:\Python27\Lib\site-packages\w3lib
    copying build\lib\w3lib\url.py -> C:\Python27\Lib\site-packages\w3lib
    copying build\lib\w3lib\util.py -> C:\Python27\Lib\site-packages\w3lib
    copying build\lib\w3lib\__init__.py -> C:\Python27\Lib\site-packages\w3lib
    byte-compiling C:\Python27\Lib\site-packages\w3lib\encoding.py to encoding.pyc
    byte-compiling C:\Python27\Lib\site-packages\w3lib\form.py to form.pyc
    byte-compiling C:\Python27\Lib\site-packages\w3lib\html.py to html.pyc
    byte-compiling C:\Python27\Lib\site-packages\w3lib\http.py to http.pyc
    byte-compiling C:\Python27\Lib\site-packages\w3lib\url.py to url.pyc
    byte-compiling C:\Python27\Lib\site-packages\w3lib\util.py to util.pyc
    byte-compiling C:\Python27\Lib\site-packages\w3lib\__init__.py to __init__.pyc
    running install_egg_info
    Writing C:\Python27\Lib\site-packages\w3lib-1.3-py2.7.egg-info
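
    Before rerunning scrapy, it is easy to confirm that the module the traceback complained about is now importable (a minimal sketch; the header dict is just an arbitrary example):

    # w3lib.http was the missing import in the earlier traceback; this exercises
    # the exact function that scrapy\http\headers.py pulls in.
    from w3lib.http import headers_dict_to_raw

    print(headers_dict_to_raw({"Content-Type": ["text/html"]}))
    # expected output: Content-Type: text/html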

    Then run it once more:
    C:\Python27\w3lib-master>D:

    D:\Just4Study\Python\TestProgram>C:\Python27\Scripts\scrapy version
    Scrapy 0.18.1

    D:\Just4Study\Python\TestProgram>C:\Python27\Scripts\scrapy startproject demo

    D:\Just4Study\Python\TestProgram>dir
    Volume in drive D is work
    Volume Serial Number is F4A9-7648

    Directory of D:\Just4Study\Python\TestProgram

    2013/09/06 11:36 <DIR> .
    2013/09/06 11:36 <DIR> ..
    2013/02/06 09:58 140 AddressBook.data
    2013/02/06 10:12 1,081 AddressBook.py
    2013/02/04 17:25 156 backup_ver1.py
    2013/09/06 11:36 <DIR> demo
    2013/09/06 11:24 10,240 ez_setup.py
    2013/08/28 22:07 1,042 getPhoneNumber.py
    2009/07/17 14:35 1,719 oracle_export.py
    2013/03/11 22:02 269 python_debug.py
    2013/08/28 22:19 375 test_urllib2.py
    2013/02/03 15:31 182 using_sys.py
    9 File(s) 15,204 bytes
    3 Dir(s) 124,353,531,904 bytes free

    Never mind the other files; the important thing is that the demo directory is there. Let's take a closer look.

    D:\Just4Study\Python\TestProgram>cd demo

    D:\Just4Study\Python\TestProgram\demo>dir
    Volume in drive D is work
    Volume Serial Number is F4A9-7648

    Directory of D:\Just4Study\Python\TestProgram\demo

    2013/09/06 11:36 <DIR> .
    2013/09/06 11:36 <DIR> ..
    2013/09/06 11:36 <DIR> demo
    2013/09/06 11:36 250 scrapy.cfg
    1 File(s) 250 bytes
    3 Dir(s) 124,353,531,904 bytes free
    D:\Just4Study\Python\TestProgram\demo>cd demo

    D:\Just4Study\Python\TestProgram\demo\demo>dir
    Volume in drive D is work
    Volume Serial Number is F4A9-7648

    Directory of D:\Just4Study\Python\TestProgram\demo\demo

    2013/09/06 11:36 <DIR> .
    2013/09/06 11:36 <DIR> ..
    2013/09/06 11:36 265 items.py
    2013/09/06 11:36 258 pipelines.py
    2013/09/06 11:36 448 settings.py
    2013/09/06 11:02 <DIR> spiders
    2013/08/28 05:46 0 __init__.py
    4 File(s) 971 bytes
    3 Dir(s) 124,353,531,904 bytes free

    This finally matches the project layout described at http://doc.scrapy.org/en/latest/intro/tutorial.html.

    At this point, I believe the installation is done.
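
    As a final end-to-end check, a minimal spider can be added to the generated project and run with scrapy crawl. The file name, spider name, and URL below are placeholders of my own, not files generated by startproject, and the import reflects the Scrapy 0.18 API (newer releases use scrapy.Spider); treat it as a sketch rather than part of the official tutorial:

    # demo\demo\spiders\example_spider.py  (hypothetical file name)
    from scrapy.spider import BaseSpider  # Scrapy 0.18-era base class


    class ExampleSpider(BaseSpider):
        name = "example"
        start_urls = ["http://doc.scrapy.org/en/latest/intro/tutorial.html"]

        def parse(self, response):
            # Log the fetched URL to prove crawling works after the install.
            self.log("Visited %s" % response.url)

    Running C:\Python27\Scripts\scrapy crawl example from inside the project directory should download the page and log its URL.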

  • Original post: https://www.cnblogs.com/Alex-Zeng/p/3305183.html