• Python Crawler 04: a Tieba spider, and the difference between GET and POST



    1. The makeup of a URL


    Chinese characters in a URL are percent-encoded with URL encode (UTF-8); the result consists entirely of escaped bytes.

    If you copy and paste the address below, what comes out is not the Chinese characters but their encoded bytes:
    https://www.baidu.com/s?wd=编程吧

    We can do the same conversion in Python with urllib.parse.urlencode:

    import urllib.parse
    url = "http://www.baidu.com/s?"
    wd = {"wd": "编程吧"}
    out = urllib.parse.urlencode(wd)
    print(out)
    

    The output is: wd=%E7%BC%96%E7%A8%8B%E5%90%A7
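
    To confirm that the encoding is reversible, urllib.parse.unquote turns the escaped bytes back into the original characters. A quick sketch using the same query as above:

    ```python
    import urllib.parse

    # Percent-encode the query: each UTF-8 byte of a Chinese character
    # becomes one %XX escape.
    encoded = urllib.parse.urlencode({"wd": "编程吧"})
    print(encoded)   # wd=%E7%BC%96%E7%A8%8B%E5%90%A7

    # unquote reverses the escaping and recovers the characters.
    decoded = urllib.parse.unquote(encoded)
    print(decoded)   # wd=编程吧
    ```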

    2. The tieba spider

    2.1. Crawling only the first page

    import urllib.parse
    import urllib.request
    
    url = "http://www.baidu.com/s?"
    keyword = input("Please input query: ")
    
    wd = {"wd": keyword}
    wd = urllib.parse.urlencode(wd)
    
    fullurl = url + wd  # url already ends with "?"
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"}
    request = urllib.request.Request(fullurl, headers = headers)
    response = urllib.request.urlopen(request)
    html = response.read()
    
    print(html)
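
    Since the base URL already ends with "?", the encoded query string is appended directly. urlencode also joins several parameters with "&" when the dict has more than one entry; a small sketch (the "ie" parameter is assumed from Baidu's own search URLs):

    ```python
    import urllib.parse

    url = "http://www.baidu.com/s?"
    params = {"wd": "编程吧", "ie": "utf-8"}

    # urlencode escapes each value and joins the pairs with "&".
    fullurl = url + urllib.parse.urlencode(params)
    print(fullurl)
    ```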
    

    2.2. Crawling several pages of a tieba

    For one tieba (the 编程 forum) the spider should page through the results; comparing the URLs of successive pages reveals the pattern:

    page 1: http://tieba.baidu.com/f?kw=%E7%BC%96%E7%A8%8B&ie=utf-8&pn=0 
    page 2: http://tieba.baidu.com/f?kw=%E7%BC%96%E7%A8%8B&ie=utf-8&pn=50
    page 3: http://tieba.baidu.com/f?kw=%E7%BC%96%E7%A8%8B&ie=utf-8&pn=100
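    
    The pn parameter above grows by 50 per page, so the offset for page n is (n - 1) * 50. A one-line helper makes the rule explicit:

    ```python
    # Each tieba result page holds 50 posts, so page n starts at post (n-1)*50.
    def page_offset(page):
        return (page - 1) * 50

    for page in (1, 2, 3):
        print(page, page_offset(page))   # 1 0, 2 50, 3 100
    ```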
    
    import urllib.request
    import urllib.parse
    
    def loadPage(url, filename):
        """
            Send a request to the given url.
            url: the address to request
            filename: the name of the file being processed
        """
        print("正在下载", filename)
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"}
        request = urllib.request.Request(url, headers=headers)
        response = urllib.request.urlopen(request)
        html = response.read()
        return html
    
    
    
    def writePage(html, filename):
        """
            Write the html content to a local file.
            html: the body of the server response
            filename: the file to write
        """
        print("正在保存", filename)
        with open(filename, "wb") as f:
            f.write(html)
        print("-"*30)
    
    
    def tiebaSpider(url, beginPage, endPage):
        """
            The spider scheduler: builds the url for each page and dispatches the requests.
        """
        for page in range(beginPage, endPage + 1):
            pn = (page - 1) * 50
            filename = "第" + str(page) + "页.html"
            fullurl = url + "&pn=" + str(pn)
            html = loadPage(fullurl,filename)
            writePage(html,filename)
    
    
    if __name__ == "__main__":
        kw = input("Please input query: ")
        beginPage = int(input("Start page: "))
        endPage = int(input("End page: "))
    
        url = "http://tieba.baidu.com/f?"
        key = urllib.parse.urlencode({"kw":kw})
        fullurl = url + key
        tiebaSpider(fullurl, beginPage, endPage)
    

    The output is:

    Please input query: 编程吧
    Start page: 1
    End page: 5
    正在下载 第1页.html
    正在保存 第1页.html
    ------------------------------
    正在下载 第2页.html
    正在保存 第2页.html
    ------------------------------
    正在下载 第3页.html
    正在保存 第3页.html
    ------------------------------
    正在下载 第4页.html
    正在保存 第4页.html
    ------------------------------
    正在下载 第5页.html
    正在保存 第5页.html
    ------------------------------
    

    3. The difference between GET and POST


    • GET: the query parameters are appended to the request URL
    • POST: the request URL carries no parameters; the data goes in the request body

    3.1. GET requests

    For a GET request, the query parameters are kept in the URL's query string (shown as "Query String" in the browser's developer tools).
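
    Because GET parameters travel in the URL itself, urllib.parse can recover them from any of the page URLs above. A sketch using the page-2 URL:

    ```python
    import urllib.parse

    url = "http://tieba.baidu.com/f?kw=%E7%BC%96%E7%A8%8B&ie=utf-8&pn=50"

    # urlparse splits off the query string; parse_qs decodes the
    # percent-escapes and returns each parameter as a list of values.
    query = urllib.parse.urlparse(url).query
    params = urllib.parse.parse_qs(query)
    print(params)   # {'kw': ['编程'], 'ie': ['utf-8'], 'pn': ['50']}
    ```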

    3.2. POST requests

    For a POST request, the query parameters are carried in the request body (shown as "Web Forms" in the developer tools).
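
    With urllib this distinction comes down to the data argument of Request: no body means GET, a bytes body means POST. A minimal sketch (the URL is a placeholder and no request is actually sent):

    ```python
    import urllib.request
    import urllib.parse

    url = "http://example.com/api"   # placeholder address, never contacted

    # Without a data argument, urllib prepares a GET request.
    get_req = urllib.request.Request(url)
    print(get_req.get_method())      # GET

    # A bytes body switches the same call to POST.
    body = urllib.parse.urlencode({"i": "love"}).encode("utf-8")
    post_req = urllib.request.Request(url, data=body)
    print(post_req.get_method())     # POST
    ```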



    3.3. Simulating Youdao Translate with a POST request

    1. First, capture the request with a packet-capture tool:
    POST http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=null HTTP/1.1
    Host: fanyi.youdao.com
    Connection: keep-alive
    Content-Length: 254
    Accept: application/json, text/javascript, */*; q=0.01
    Origin: http://fanyi.youdao.com
    X-Requested-With: XMLHttpRequest
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
    Content-Type: application/x-www-form-urlencoded; charset=UTF-8
    Referer: http://fanyi.youdao.com/
    Accept-Encoding: gzip, deflate
    Accept-Language: zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7,en-CA;q=0.6
    Cookie: OUTFOX_SEARCH_USER_ID=-1071824454@10.169.0.83; OUTFOX_SEARCH_USER_ID_NCOO=848207426.083082; JSESSIONID=aaaiYkBB5LZ2t6rO6rCGw; ___rl__test__cookies=1546662813170
    x-hd-token: rent-your-own-vps
    # The line below is the form data; it is the important part
    i=love&from=AUTO&to=AUTO&smartresult=dict&client=fanyideskweb&salt=15466628131726&sign=63253c84e50c70b0125b869fd5e2936d&ts=1546662813172&bv=363eb5a1de8cfbadd0cd78bd6bd43bee&doctype=json&version=2.1&keyfrom=fanyi.web&action=FY_BY_REALTIME&typoResult=false
    
    2. Extract the key form fields:
    i=love
    doctype=json
    version=2.1
    keyfrom=fanyi.web
    action=FY_BY_REALTIME
    typoResult=false  
    
    3. Simulate the Youdao request:
    import urllib.request
    import urllib.parse
    
    # Obtained by packet capture; this is not the URL in the browser's address bar
    url = "http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=null"
    
    # The complete headers
    headers = {
        "Accept" : "application/json, text/javascript, */*; q=0.01",
        "X-Requested-With" : "XMLHttpRequest",
        "User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
        "Content-Type" : "application/x-www-form-urlencoded; charset=UTF-8"
    }
    
    # Prompt the user for input
    key = input("Please input english: ")
    
    # The form data the Youdao page submits.
    # This is the body POSTed to the web server: a POST carries data to the server and the response depends on it, while a GET sends no body.
    formdata = {
        "i":key,
        "doctype":"json",
        "version":"2.1",
        "keyfrom":"fanyi.web",
        "action":"FY_BY_REALTIME",
        "typoResult": "false"
    }
    
    # Encode the form and convert it to bytes
    data = urllib.parse.urlencode(formdata).encode("utf-8")
    # With data and headers we can build the request: if the data argument is present, the request is a POST; otherwise it is a GET
    request = urllib.request.Request(url, data=data, headers=headers)
    response = urllib.request.urlopen(request)
    html = response.read()
    
    print(html)
    

    The output is:

    Please input english: hello
    b'{"type":"EN2ZH_CN","errorCode":0,"elapsedTime":1,"translateResult":[[{"src":"hello","tgt":"\xe4\xbd\xa0\xe5\xa5\xbd"}]]}\n'
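
    The bytes returned by response.read() are JSON, so json.loads makes the translation easy to extract. The sketch below parses a literal shaped like the response above rather than contacting the server:

    ```python
    import json

    # A response body with the same shape as the capture above.
    raw = b'{"type":"EN2ZH_CN","errorCode":0,"translateResult":[[{"src":"hello","tgt":"\xe4\xbd\xa0\xe5\xa5\xbd"}]]}'

    result = json.loads(raw.decode("utf-8"))
    # Walk the nested lists down to the translated text.
    print(result["translateResult"][0][0]["tgt"])   # 你好
    ```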
    
  • Original post: https://www.cnblogs.com/haochen273/p/10220816.html