Hi, I'd like to know whether it is possible to connect to an HTTP host (for example google.com) and download the source of the page.
Thanks in advance.
Download the page with urllib2.
Google may reject the default request, since it blocks requests that look like bots. Add a user agent to the request:
import urllib2

user_agent = 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_4; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.472.63 Safari/534.3'
headers = {'User-Agent': user_agent}
req = urllib2.Request('http://www.google.com', None, headers)
response = urllib2.urlopen(req)
page = response.read()
response.close()  # always close the connection when done
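For Python 3 readers: urllib2 was folded into urllib.request, so the same user-agent trick looks slightly different. A minimal sketch (the URL and user-agent string are just placeholders, not anything the answer above prescribes):

```python
# Python 3 sketch: urllib2's Request/urlopen now live in urllib.request.
from urllib.request import Request, urlopen

user_agent = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) '
              'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0 Safari/537.36')

# Build the request with a User-Agent header, as in the urllib2 example above.
req = Request('http://www.google.com', headers={'User-Agent': user_agent})

# Uncomment to actually fetch the page (requires network access):
# page = urlopen(req).read().decode('utf-8', errors='replace')
```

Note that in Python 3 `read()` returns bytes, so the response has to be decoded before it can be treated as text.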
You can also use pycurl:
import pycurl

class ContentCallback:
    def __init__(self):
        self.contents = ''

    def content_callback(self, buf):
        self.contents = self.contents + buf

t = ContentCallback()
curlObj = pycurl.Curl()
curlObj.setopt(curlObj.URL, 'http://www.google.com')
curlObj.setopt(curlObj.WRITEFUNCTION, t.content_callback)
curlObj.perform()
curlObj.close()
print t.contents
You can use the urllib2 module:
import urllib2

url = "http://somewhere.com"
page = urllib2.urlopen(url)
data = page.read()
print data
See the urllib2 documentation for more examples.