Day 1: A Lightweight Python Crawler


A crawler is a program that automatically scrapes information from the internet.

Starting from one URL, it visits the URLs linked from it to reach its targets. Its value: putting the internet's data to work for you!

A complete crawler architecture has a scheduler that coordinates three parts: a URL manager, a page downloader, and a page parser.
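To make that architecture concrete, here is a minimal sketch of what the pieces could look like. The class and method names (UrlManager, crawl, downloader.download, parser.parse) are my own illustrative assumptions, not code from the course:

# Minimal sketch of the crawler architecture; names are illustrative.
class UrlManager:
    """Tracks which URLs are waiting to be crawled and which are done."""

    def __init__(self):
        self.new_urls = set()   # discovered but not yet crawled
        self.old_urls = set()   # already crawled

    def add_new_url(self, url):
        if url and url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def has_new_url(self):
        return bool(self.new_urls)

    def get_new_url(self):
        url = self.new_urls.pop()
        self.old_urls.add(url)
        return url

def crawl(root_url, manager, downloader, parser, limit=10):
    """Scheduler loop: fetch a URL, parse out data and new URLs, repeat."""
    manager.add_new_url(root_url)
    crawled = 0
    while manager.has_new_url() and crawled < limit:
        url = manager.get_new_url()
        html = downloader.download(url)           # hypothetical: returns page HTML
        new_urls, data = parser.parse(url, html)  # hypothetical: links + payload
        for u in new_urls:
            manager.add_new_url(u)
        crawled += 1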

Below are two quick tests:

On urllib2 (in Python 3 it became urllib.request, which is what the code below uses):

# -*- coding:utf8 -*-

'''
Created on 2020-01-09

@author: long.19981105
'''
import http.cookiejar
import urllib.request

url = "http://www.baidu.com"

print('Method 1: plain urlopen')
resp = urllib.request.urlopen(url)
print(resp.getcode())    # HTTP status code; 200 means success
print(len(resp.read()))  # size of the page body in bytes

print('Method 2: Request object with a custom User-Agent')
request = urllib.request.Request(url)
request.add_header("user-agent", "Mozilla/5.0")  # pretend to be a browser
resp2 = urllib.request.urlopen(request)
print(resp2.getcode())
print(len(resp2.read()))

print('Method 3: opener with cookie handling')
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
urllib.request.install_opener(opener)  # make the cookie-aware opener the default
resp3 = urllib.request.urlopen(url)
print(resp3.getcode())
print(cj)  # cookies collected during the request
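All three methods assume the request succeeds. In practice a downloader should guard against timeouts and network errors; here is a minimal sketch using urllib's timeout parameter and error classes (the download helper itself is a hypothetical name, not part of the original post):

import urllib.error
import urllib.request

def download(url, timeout=5):
    # Return the page body as bytes, or None if the fetch fails.
    try:
        resp = urllib.request.urlopen(url, timeout=timeout)
        if resp.getcode() != 200:
            return None
        return resp.read()
    except (urllib.error.URLError, TimeoutError) as err:
        print('download failed:', err)
        return None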

On bs4 (BeautifulSoup):

# -*- coding:utf8 -*-
'''
Created on 2020-01-09

@author: long.19981105
'''
import re

from bs4 import BeautifulSoup
    html_doc = """
    <html><head><title>The Dormouse's story</title></head>
    <body>
    <p class="title"><b>The Dormouse's story</b></p>

    <p class="story">Once upon a time there were three little sisters; and their names were
    <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
    and they lived at the bottom of a well.</p>

    <p class="story">...</p>
    """
soup = BeautifulSoup(html_doc, 'html.parser')  # html_doc is already str, so no from_encoding is needed

print('Get all the links')
links = soup.find_all('a')
for link in links:
    print(link.name, link['href'], link.get_text())

print("Get Lacie's link")
link_node = soup.find('a', href='http://example.com/lacie')
print(link_node.name, link_node['href'], link_node.get_text())

print('Regex match')
link_node = soup.find('a', href=re.compile(r"ill"))
print(link_node.name, link_node['href'], link_node.get_text())
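find and find_all can also filter on CSS class; note the trailing underscore in class_, which BeautifulSoup uses because class is a Python keyword. A short follow-up against the same html_doc:

print('Search by CSS class')
title_node = soup.find('p', class_='title')  # class_ avoids the keyword clash
print(title_node.get_text())

# Attribute filters can be combined to pin down a single node
link_node = soup.find('a', class_='sister', id='link1')
print(link_node['href'], link_node.get_text())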

Tomorrow we continue with a hands-on exercise: crawling data from 1,000 Baidu Baike pages.

Disclaimer: this post documents my own hands-on practice while following a video tutorial.
