• Crawl all campus news


    Assignment link: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE1/homework/3002

    0. Get the click count from a news URL, and wrap the steps into a function

    • newsUrl
    • newsId (re.search())
    • clickUrl (str.format())
    • requests.get(clickUrl)
    • re.search() / .split()
    • str.lstrip(), str.rstrip()
    • int
    • wrap the above into a function
    • also wrap fetching the publish time, and its type conversion, into a function
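The first two bullets (newsId via re.search(), clickUrl via str.format()) are pure string work and can be sketched without any network access. The click-count API endpoint is the one used in the full script further down; the example news URL is a hypothetical stand-in:

```python
import re

def make_click_url(news_url):
    # Pull the numeric news id out of a detail-page URL such as .../11086.html.
    news_id = re.search(r'(\d+)\.html', news_url).group(1)
    # Format the id into the click-count API URL.
    return 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(news_id)

print(make_click_url('http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0404/11086.html'))
```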

    1. Get the news details from a news URL: a dictionary, anews

    2. Get the news URLs from a list-page URL: list append(dict), alist

    3. Generate all the list-page URLs and fetch all the news: list extend(list), allnews

    *Each student crawls the 10 list pages starting from the last digit of their student ID
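Step 3 and the per-student note can be sketched together. `pages_for_student` is a hypothetical helper name, the student ID is made up, and the fallback to page 1 when the ID ends in 0 is an assumption (there is no page `0.html`); the per-page parsing is faked with one stand-in dictionary so the sketch runs offline:

```python
BASE = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'

def pages_for_student(student_id, count=10):
    # Take the last digit of the student id as the starting page number;
    # assume a fallback to page 1 when that digit is 0 (page 0 does not exist).
    start = int(str(student_id)[-1]) or 1
    return [BASE + str(i) + '.html' for i in range(start, start + count)]

# allnews collects every page's results; extend() merges each page's list of
# dictionaries into one flat list (append() would nest each page list instead).
allnews = []
for page_url in pages_for_student('2018061229'):  # hypothetical student id
    # In the real crawler, the alist-style parsing of page_url would go here;
    # fake one dictionary per page to keep the sketch self-contained.
    page_news = [{'newsUrl': page_url}]
    allnews.extend(page_news)

print(len(allnews))  # 10 pages, one stand-in item each
```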

    4. Set a reasonable crawl interval

    import time

    import random

    time.sleep(random.random()*3)

    5. Do simple data processing with pandas and save the result

    Save to a csv or excel file
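The save step can be sketched with in-memory sample rows standing in for the scraped news dictionaries (the rows and the `utf-8-sig` encoding choice are assumptions; `utf-8-sig` simply helps Excel display Chinese text in the csv correctly):

```python
import pandas as pd

# Hypothetical sample rows standing in for the scraped news dictionaries.
rows = [
    {'newsTitle': 'Title A', 'newsClick': 321, 'newsUrl': 'http://example.com/a.html'},
    {'newsTitle': 'Title B', 'newsClick': 57, 'newsUrl': 'http://example.com/b.html'},
]
newsdf = pd.DataFrame(rows)
newsdf.to_csv('gzccnews.csv', index=False, encoding='utf-8-sig')

# to_excel works the same way (it requires the openpyxl package):
# newsdf.to_excel('gzccnews.xlsx', index=False)
```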

    newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')

    import time
    import random
    import re
    import requests
    import pandas as pd
    from bs4 import BeautifulSoup
    from datetime import datetime

    def htmlsurl():
        # Generate the 10 list-page URLs to crawl (pages 20-29).
        url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
        htmlurls = []
        for i in range(20, 30):
            htmlurls.append(url + str(i) + '.html')
        return htmlurls

    def getclicktime(url):
        # Extract the news id from the URL and query the click-count API.
        newsid = re.search(r'(\d+)\.html', url).group(1)
        clickurl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsid)
        clickhtml = requests.get(clickurl)
        clickhtml.encoding = 'utf-8'
        # The response looks like $('#hits').html('123'); pull out the number.
        return int(re.search(r"hits'\)\.html\('(\d+)'\)", clickhtml.text).group(1))

    def getDt(FbSj):
        # FbSj is e.g. ['发布时间:2019-04-01', '11:30:25'];
        # join the two parts, drop the 5-character '发布时间:' label, then parse.
        FbSj = ' '.join(FbSj)[5:]
        return datetime.strptime(FbSj, '%Y-%m-%d %H:%M:%S')

    def anews(url):
        # Fetch one news detail page and return its fields as a dictionary.
        newsDetail = {}
        res = requests.get(url)
        res.encoding = 'utf-8'
        soup = BeautifulSoup(res.text, 'html.parser')
        newsDetail['newsTitle'] = soup.select('.show-title')[0].text
        newsDetail['newsClick'] = getclicktime(url)
        FbSj = soup.select('.show-info')[0].text.split()[0:2]
        newsDetail['newsDate'] = getDt(FbSj)
        return newsDetail

    def alist():
        # Walk every list page and collect one dictionary per news item.
        newsList = []
        for url in htmlsurl():
            html = requests.get(url)
            html.encoding = 'utf-8'
            soup = BeautifulSoup(html.text, 'html.parser')
            for news in soup.select('li'):
                if len(news.select('.news-list-title')) > 0:
                    newsurl = news.select('a')[0]['href']
                    newsDict = anews(newsurl)
                    newsDict['newsUrl'] = newsurl
                    newsDict['description'] = news.select('.news-list-description')[0].text
                    newsList.append(newsDict)
            time.sleep(random.random() * 3)  # polite crawl interval (step 4)
        return newsList

    newsdf = pd.DataFrame(alist())
    newsdf.to_csv(r'E:\gzccnews.csv')

  • Original post: https://www.cnblogs.com/kevinShem/p/10697540.html