When downloading a very large file with Python, fetching the whole response in one go can exhaust memory. Instead, use the stream mode of requests and write the file to disk chunk by chunk.
The main code is shown below. Two iterator methods on the response object are relevant:
iter_content: iterates over the content to be downloaded chunk by chunk (yields bytes)
iter_lines: iterates over the content to be downloaded line by line
```python
import requests


def download_file(url, file_pname, chunk_size=1024 * 4):
    """
    url: file url
    file_pname: file save path
    chunk_size: chunk size in bytes
    """
    # Method 1: stream the response and write it to disk chunk by chunk
    response_data_file = requests.get(url, stream=True)
    with open(file_pname, 'wb') as f:
        for chunk in response_data_file.iter_content(chunk_size=chunk_size):
            if chunk:  # skip keep-alive chunks
                f.write(chunk)

    # Method 2: same idea, but the `with` block closes the connection automatically
    with requests.get(url, stream=True) as req:
        with open(file_pname, 'wb') as f:
            for chunk in req.iter_content(chunk_size=chunk_size):
                if chunk:
                    f.write(chunk)
```
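For reference, a quick call might look like the sketch below; the URL and output file name are placeholders for illustration, not part of the original code. A larger chunk_size means fewer write calls but more memory held per chunk; values from a few KB up to about 1 MB are common.

```python
if __name__ == '__main__':
    # Placeholder URL and output path, used only to illustrate the call
    download_file('https://example.com/big-file.zip', 'big-file.zip')
```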
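The function above only uses iter_content. For line-oriented data (for example a large log file served over HTTP), iter_lines can be used in the same streaming fashion. A minimal sketch, with a placeholder URL:

```python
import requests

# Minimal iter_lines sketch: stream a text response line by line.
# The URL and file name are placeholders, not from the original post.
with requests.get('https://example.com/big-file.log', stream=True) as resp:
    with open('big-file.log', 'w', encoding='utf-8') as f:
        for line in resp.iter_lines(decode_unicode=True):
            if line:  # iter_lines may yield empty keep-alive lines
                f.write(line + '\n')
```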
Corrections are welcome if you spot any mistakes.