• Python Web Scraping in Practice: Scraping News Articles


    Preface

    The text and images in this article come from the internet and are for learning and exchange only; they have no commercial use. Copyright remains with the original authors; if there is any problem, please contact us promptly so we can handle it.

    This is a simple Python news-collection case study: from the list page to the detail pages, through to saving the data as txt files. The site's page structure is fairly regular, simple and clear, which makes it a straightforward example of collecting and saving news content.


    Libraries used

    requests, time, re, UserAgent (from fake_useragent), etree (from lxml)

    import requests,time,re
    from fake_useragent import UserAgent
    from lxml import etree
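
    requests, fake_useragent, and lxml are third-party packages; if they are missing, they can usually be installed with pip install requests fake-useragent lxml (time and re are in the standard library).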

    List page


    On the list page, the article links are parsed out with XPath:

    href_list=req.xpath('//ul[@class="news-list"]/li/a/@href')
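
    To illustrate what this query returns, the fragment below runs the same XPath against a hand-written sample of the list markup (the HTML snippet and its hrefs are assumptions for demonstration, not the site's exact source):

    from lxml import etree

    # Simplified stand-in for the site's list-page markup (structure assumed)
    sample_html = '''
    <ul class="news-list">
      <li><a href="/kyzx/jyxd/news-one.html">News one</a></li>
      <li><a href="/kyzx/jyxd/news-two.html">News two</a></li>
    </ul>
    '''

    req = etree.HTML(sample_html)
    print(req.xpath('//ul[@class="news-list"]/li/a/@href'))
    # ['/kyzx/jyxd/news-one.html', '/kyzx/jyxd/news-two.html']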

    Detail page


    On the detail page, the title, source, and body paragraphs are extracted with XPath:

    h2=req.xpath('//div[@class="title-box"]/h2/text()')[0]
    author=req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
    details=req.xpath('//div[@class="content-l detail"]/p/text()')
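
    The hrefs collected from the list page are site-relative, so they are joined with the domain before the detail page is requested. The full source below does this with an f-string; the standard library's urljoin is an equivalent alternative (a sketch, with a hypothetical href):

    from urllib.parse import urljoin

    href = '/kyzx/jyxd/news-one.html'  # hypothetical relative link from the list page
    detail_url = urljoin('https://yz.chsi.com.cn', href)
    print(detail_url)  # https://yz.chsi.com.cn/kyzx/jyxd/news-one.html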

    Content formatting: the paragraph texts are joined with newlines

    detail='\n'.join(details)

    Title formatting: replace characters that are illegal in filenames

    pattern = r'[\\/:*?"<>|]'
    new_title = re.sub(pattern, "_", title)  # replace with underscores
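
    For example, a title containing path separators or other reserved characters becomes safe to use as a filename:

    import re

    pattern = r'[\\/:*?"<>|]'
    print(re.sub(pattern, "_", '考研资讯: 2020/07 "最新"'))
    # -> 考研资讯_ 2020_07 _最新_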

    Save the data as a txt file

    def save(self, h2, author, detail):
        with open(f'{h2}.txt', 'w', encoding='utf-8') as f:
            f.write('%s%s%s%s%s' % (h2, '\n', detail, '\n', author))
        print(f"Saved {h2}.txt successfully!")

    Iterate over the collection tasks with yield

    def get_tasks(self):
        data_list = self.parse_home_list(self.url)
        for item in data_list:
            yield item
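
    Since parse_home_list is itself a generator, the same method can be written more compactly with yield from (an equivalent rewrite, not from the original post):

    def get_tasks(self):
        # delegate straight to the parse_home_list generator
        yield from self.parse_home_list(self.url)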

    Program run output

    (screenshot)

    Collected files

    (screenshot)

    Full source code for reference:

    # -*- coding: UTF-8 -*-
    # Scraper for postgraduate admissions news from yz.chsi.com.cn (研招网)
    # 20200710 by WeChat: huguo00289

    import requests, time, re
    from fake_useragent import UserAgent
    from lxml import etree


    class RandomHeaders(object):
        ua = UserAgent()

        @property
        def random_headers(self):
            return {
                'User-Agent': self.ua.random,
            }


    class Spider(RandomHeaders):
        def __init__(self, url):
            self.url = url

        def parse_home_list(self, url):
            response = requests.get(url, headers=self.random_headers).content.decode('utf-8')
            req = etree.HTML(response)
            href_list = req.xpath('//ul[@class="news-list"]/li/a/@href')
            print(href_list)
            for href in href_list:
                item = self.parse_detail(f'https://yz.chsi.com.cn{href}')
                yield item

        def parse_detail(self, url):
            print(f">> Scraping {url}")
            try:
                response = requests.get(url, headers=self.random_headers).content.decode('utf-8')
                time.sleep(2)
            except Exception as e:
                print(e.args)
                return self.parse_detail(url)
            else:
                req = etree.HTML(response)
                try:
                    h2 = req.xpath('//div[@class="title-box"]/h2/text()')[0]
                    h2 = self.validate_title(h2)
                    author = req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
                    details = req.xpath('//div[@class="content-l detail"]/p/text()')
                    detail = '\n'.join(details)
                    print(h2, author, detail)
                    self.save(h2, author, detail)
                    return h2, author, detail
                except IndexError:
                    print(">>> Parse error, retrying in 5s..")
                    time.sleep(5)
                    return self.parse_detail(url)

        @staticmethod
        def validate_title(title):
            pattern = r'[\\/:*?"<>|]'  # characters that are illegal in filenames
            new_title = re.sub(pattern, "_", title)  # replace with underscores
            return new_title

        def save(self, h2, author, detail):
            with open(f'{h2}.txt', 'w', encoding='utf-8') as f:
                f.write('%s%s%s%s%s' % (h2, '\n', detail, '\n', author))
            print(f"Saved {h2}.txt successfully!")

        def get_tasks(self):
            data_list = self.parse_home_list(self.url)
            for item in data_list:
                yield item


    if __name__ == "__main__":
        url = "https://yz.chsi.com.cn/kyzx/jyxd/"
        spider = Spider(url)
        for data in spider.get_tasks():
            print(data)
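
    One caveat in the code above: failed requests are retried by calling parse_detail recursively, which keeps no retry count and can deepen the call stack if a page keeps failing. A bounded, iterative retry loop is a safer pattern; a minimal sketch (fetch_with_retries and its parameters are assumptions, not part of the original code):

    import time
    import requests

    def fetch_with_retries(url, headers, max_retries=3, delay=5):
        """Fetch a URL, retrying up to max_retries times instead of recursing."""
        for attempt in range(1, max_retries + 1):
            try:
                return requests.get(url, headers=headers, timeout=10).content.decode('utf-8')
            except Exception as e:
                print(f"Attempt {attempt} failed: {e.args}; retrying in {delay}s..")
                time.sleep(delay)
        return None  # caller decides how to handle a permanent failure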
  • Original article: https://www.cnblogs.com/zwhy8/p/13285708.html