• Python web scraper -- crawling real-estate listing data and saving it locally


    
    

    import csv

    import requests
    from bs4 import BeautifulSoup

    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36 Maxthon/5.2.6.1000'}

    # Open the output file once, write the header row a single time,
    # then append one row per listing while looping over pages 1-9.
    with open('test.csv', 'a', newline='', encoding='utf-8-sig') as csvfile:
        w = csv.writer(csvfile)
        w.writerow(('标题', '价格', '均价', '面积', '楼层'))  # header row
        for i in range(1, 10):
            link = 'https://fz.anjuke.com/sale/p' + str(i) + '/#filtersort'
            r = requests.get(link, headers=headers)
            print(str(i), '页响应状态码:', r.status_code)  # response status code for page i
            soup = BeautifulSoup(r.text, 'lxml')
            house_list = soup.find_all('li', class_='list-item')
            for house in house_list:
                name = house.find('div', class_='house-title').a.text.strip()
                price = house.find('div', class_='pro-price').contents[1].text.strip()
                price_ave = house.find('div', class_='pro-price').contents[2].text.strip()
                area = house.find('div', class_='details-item').span.text
                floor = house.find('div', class_='details-item').contents[5].text
                temp = [name, price, price_ave, area, floor]
                print(temp)
                w.writerow(temp)
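
    A note on the .contents[1] / .contents[2] indexing above: .contents lists every child node of a tag, including bare text nodes, which is why the price and average-price spans can sit at indices 1 and 2. The following is a minimal sketch using a hypothetical HTML fragment that only mirrors the class name used above; the real anjuke markup may differ, so the indices are illustrative only.

    from bs4 import BeautifulSoup

    # Hypothetical fragment (not the real anjuke page): the leading text node
    # occupies index 0 of .contents, so the two spans land at indices 1 and 2.
    html = '<div class="pro-price">约<span>120万</span><span>15000元/m²</span></div>'
    price_div = BeautifulSoup(html, 'lxml').find('div', class_='pro-price')
    print(price_div.contents)          # ['约', <span>120万</span>, <span>15000元/m²</span>]
    print(price_div.contents[1].text)  # 120万
    print(price_div.contents[2].text)  # 15000元/m²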

     

    A few points to note (a short sketch illustrating both follows this list):

    1. In with open('test.csv', 'a', newline='', encoding='utf-8-sig') as csvfile:, the utf-8-sig encoding matters; without it the data saved locally shows up as garbled text.

    2. How the header row is inserted (a single writerow call before the loop) and how each record is written out as a list.
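
    A minimal, standalone sketch of both points, using a hypothetical demo.csv file and made-up sample rows; only the column names are taken from the scraper above.

    import csv

    # 'utf-8-sig' writes a BOM so spreadsheet tools such as Excel detect the
    # encoding and the Chinese text is not garbled. demo.csv and the two sample
    # rows below are made up for illustration.
    rows = [
        ['某小区 三室两厅', '120万', '15000元/m²', '89平米', '低层(共30层)'],
        ['某小区 两室一厅', '95万', '14000元/m²', '68平米', '高层(共18层)'],
    ]
    with open('demo.csv', 'w', newline='', encoding='utf-8-sig') as f:
        w = csv.writer(f)
        w.writerow(('标题', '价格', '均价', '面积', '楼层'))  # header row, passed as a tuple
        w.writerows(rows)                                      # each listing row is a list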

  • Original post: https://www.cnblogs.com/leon507/p/10401091.html