GET request

# encoding: UTF-8
import urllib.parse
import urllib.request

params = {'name': 'aaa'}
url_params = urllib.parse.urlencode(params)  # -> "name=aaa"
url = "http://xxxxxx?"
all_url = url + url_params
data = urllib.request.urlopen(all_url).read()  # read() returns bytes
record = data.decode('UTF-8')
print(record)
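What urlencode produces can be checked without any network access: it percent-encodes each value and joins the pairs with '&' (the extra keys below are made up for illustration):

```python
import urllib.parse

# urlencode percent-encodes values and joins pairs with '&';
# spaces become '+' because it uses quote_plus under the hood.
params = {'name': 'aaa', 'page': 2, 'q': 'hello world'}
query = urllib.parse.urlencode(params)
print(query)  # name=aaa&page=2&q=hello+world
```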
POST request

# encoding: UTF-8
import urllib.parse
import urllib.request

url = 'http://xxxxxx'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {'name': 'aaa'}
headers = {'User-Agent': user_agent}
# The POST body must be bytes; passing data= is what makes this a POST
# (appending it to the URL would send a GET instead).
data = urllib.parse.urlencode(values).encode('UTF-8')
req = urllib.request.Request(url, data=data, headers=headers)
response = urllib.request.urlopen(req)
the_page = response.read()
print(the_page.decode('UTF-8'))
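The GET/POST distinction above can be verified without hitting any server: a Request built with a data= body reports itself as a POST, one without it as a GET (the placeholder URL is kept from the examples):

```python
import urllib.parse
import urllib.request

# The same URL yields a GET without a body and a POST with one.
body = urllib.parse.urlencode({'name': 'aaa'}).encode('UTF-8')
get_req = urllib.request.Request('http://xxxxxx')
post_req = urllib.request.Request('http://xxxxxx', data=body)
print(get_req.get_method())   # GET
print(post_req.get_method())  # POST
```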
The examples above use urllib, which is rarely used for this nowadays; the requests library is the popular choice. GET and POST requests with requests look like this:

GET request
import requests
r = requests.get("http://xxxxx", params={'name': 'aaa'})  # params builds the query string for you
print(r.text)
POST request
import requests
postdata = {'name': 'aaa'}
r = requests.post("http://xxxxx", data=postdata)  # data= is sent as the form-encoded body
print(r.text)
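You can inspect what requests would actually send without touching the network by preparing a Request instead of sending it (placeholder URL kept from the examples above):

```python
import requests

# Prepare (but do not send) a GET with query params and a form POST,
# to see the URL and body requests would put on the wire.
get_req = requests.Request('GET', 'http://xxxxx', params={'name': 'aaa'}).prepare()
post_req = requests.Request('POST', 'http://xxxxx', data={'name': 'aaa'}).prepare()
print(get_req.url)    # query string appended by requests
print(post_req.body)  # form-encoded body: name=aaa
```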
For web scraping, it is generally recommended to use a Session together with header information; a Session automatically keeps track of cookies across requests:
s = requests.Session()
headers = {'Host': 'www.xxx.com'}
postdata = {'name': 'aaa'}
url = "http://xxxxx"
s.headers.update(headers)       # these headers are sent on every request from this session
r = s.post(url, data=postdata)  # cookies from earlier responses are replayed automatically
print(r.text)
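"Automatically records cookies" can be demonstrated end to end without any external site: the stdlib equivalent of a Session's cookie handling is an opener with a CookieJar. A self-contained sketch against a throwaway local server (the cookie name and responses below are invented for the demo):

```python
import http.cookiejar
import http.server
import threading
import urllib.request

# Minimal local server: always sets a cookie, and reports whether
# the client sent that cookie back on the current request.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        got_cookie = 'sid=abc123' in self.headers.get('Cookie', '')
        body = b'cookie-seen' if got_cookie else b'no-cookie'
        self.send_response(200)
        self.send_header('Set-Cookie', 'sid=abc123')
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = 'http://127.0.0.1:%d/' % server.server_address[1]

# An opener with a CookieJar is what a requests.Session does for
# cookies under the hood: responses fill the jar, later requests replay it.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

first = opener.open(base).read().decode()
second = opener.open(base).read().decode()
print(first)   # no-cookie: nothing in the jar on the very first request
print(second)  # cookie-seen: the jar replayed the cookie automatically
server.shutdown()
```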
Author: 凛华夜子
Link: http://www.jianshu.com/p/9e50c58dabdd