Python Crawler Series: How to Set Up Proxies in Python

HTTP Proxies

Getting a Proxy IP

First you need a working proxy IP and port. You can get free ones from sites such as:

西刺 (Xici), 快代理 (Kuaidaili), 站大爷 (Zhandaye)

Of course, you can also buy more stable proxy IPs from other providers.
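Free-proxy sites typically publish proxies in an HTML table, so in practice you scrape the page and pull out the ip:port pairs. A minimal sketch using a regular expression (the sample HTML below is made up for illustration, not taken from any of the sites above):

```python
import re

def extract_proxies(html):
    """Pull ip:port pairs out of raw HTML with a regex.

    Matches an IPv4 address followed by a port, separated by arbitrary
    markup, which covers simple <td>ip</td><td>port</td> table rows.
    """
    pattern = re.compile(r'(\d{1,3}(?:\.\d{1,3}){3})\D+?(\d{2,5})')
    return [ip + ':' + port for ip, port in pattern.findall(html)]

# Made-up sample resembling a proxy-list table
sample = ('<tr><td>116.209.59.41</td><td>9999</td></tr>'
          '<tr><td>106.185.45.153</td><td>8080</td></tr>')
print(extract_proxies(sample))
# → ['116.209.59.41:9999', '106.185.45.153:8080']
```

A real proxy-list page will need a parser tuned to its actual markup (e.g. BeautifulSoup); the regex above is just the simplest thing that works on table-like HTML.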

Urllib

Grab a proxy IP from one of the sites above. Here I got IP 116.209.59.41, port 9999.

Let's start with the basics and look at how to set a proxy with urllib:

```python
from urllib.error import URLError
from urllib.request import ProxyHandler, build_opener

def http_proxy(ip, port):
    proxy = ip + ':' + port
    proxy_handler = ProxyHandler({
        'http': 'http://' + proxy,
        'https': 'https://' + proxy
    })
    opener = build_opener(proxy_handler)
    return opener

if __name__ == "__main__":
    opener = http_proxy('116.209.59.41', '9999')
    try:
        response = opener.open('http://httpbin.org/get')
        print(response.read().decode('utf-8'))
    except URLError as e:
        print(e.reason)
```

Running it prints:

```json
{
    "args": {},
    "headers": {
        "Accept-Encoding": "identity",
        "Connection": "close",
        "Host": "httpbin.org",
        "User-Agent": "Python-urllib/3.6"
    },
    "origin": "116.209.59.41",
    "url": "http://httpbin.org/get"
}
```

The response is JSON, and the origin field shows the IP you are requesting from. It is the proxy IP rather than your real IP, so the proxy is set up correctly and you can use opener to open any URL through it.
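If you want the proxy to apply to every urllib call instead of going through opener.open() each time, you can install the opener globally. A minimal sketch, reusing the same placeholder IP and port as above:

```python
from urllib.request import ProxyHandler, build_opener, install_opener, urlopen

proxy = '116.209.59.41:9999'
opener = build_opener(ProxyHandler({
    'http': 'http://' + proxy,
    'https': 'https://' + proxy,
}))

# After install_opener, a plain urlopen() call goes through the proxy too
install_opener(opener)
```

From here on, `urlopen('http://httpbin.org/get')` uses the proxy without any extra wiring, which is convenient when the crawler makes urllib calls in many places.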

Requests

Proxy setup with Requests looks like this:

```python
import requests

if __name__ == "__main__":
    ip = '116.209.59.41'
    port = '9999'
    proxy = ip + ':' + port
    proxies = {
        'http': 'http://' + proxy,
        'https': 'https://' + proxy,
    }
    try:
        response = requests.get('http://httpbin.org/get', proxies=proxies)
        print(response.text)
    except requests.exceptions.ConnectionError as e:
        print('Error', e.args)
```

Output:

```json
{
    "args": {},
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Connection": "close",
        "Host": "httpbin.org",
        "User-Agent": "python-requests/2.18.1"
    },
    "origin": "106.185.45.153",
    "url": "http://httpbin.org/get"
}
```
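A single free proxy tends to die quickly, so in practice you rotate through a pool. A minimal sketch of building the proxies dict Requests expects from a randomly chosen pool entry (the pool contents are made-up placeholders):

```python
import random

# Made-up pool; in practice, fill this by scraping a proxy-list site
PROXY_POOL = [
    '116.209.59.41:9999',
    '106.185.45.153:9999',
]

def random_proxies(pool):
    """Pick one proxy from the pool and build a Requests proxies dict."""
    proxy = random.choice(pool)
    return {
        'http': 'http://' + proxy,
        'https': 'https://' + proxy,
    }

proxies = random_proxies(PROXY_POOL)
# Then each request picks up whichever proxy was chosen:
# requests.get('http://httpbin.org/get', proxies=proxies)
```

Calling random_proxies() before every request gives you crude rotation; a real crawler would also drop proxies that start failing.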

Selenium

Selenium can use a proxy as well. The setup code for Chrome looks like this:

```python
from selenium import webdriver

def get_driver_with_proxy(ip, port):
    proxy = ip + ':' + port
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_argument('--proxy-server=http://' + proxy)
    browser = webdriver.Chrome(chrome_options=chrome_options)
    return browser

if __name__ == "__main__":
    browser = get_driver_with_proxy('106.185.45.153', '9999')
    browser.get('http://httpbin.org/get')
```

Now every site you visit through browser goes through the proxy IP.


You can verify the setup by loading httpbin.org/get in the browser and checking that origin shows the proxy IP.

PhantomJS

For PhantomJS, the proxy setup code looks like this:

```python
from selenium import webdriver

def get_PhantomJS_driver(ip, port):
    proxy = ip + ':' + port
    args = [
        f'--proxy={proxy}',
        '--proxy-type=http'
    ]
    browser = webdriver.PhantomJS(service_args=args)
    return browser

if __name__ == "__main__":
    browser = get_PhantomJS_driver('106.185.45.153', '9999')
    browser.get('http://httpbin.org/get')
    print(browser.page_source)
```

Here the proxy is configured through PhantomJS's own command-line arguments, passed in via the service_args parameter.

Output:

```json
{
    "args": {},
    "headers": {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate",
        "Accept-Language": "zh-CN,en,*",
        "Connection": "close",
        "Host": "httpbin.org",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.1.0 Safari/538.1"
    },
    "origin": "106.185.45.153",
    "url": "http://httpbin.org/get"
}
```

The origin in the output again matches the proxy IP, so the proxy is working.

That's a simple overview of setting up proxies in Python. Combine it with a crawler that scrapes free proxy-list sites to build your own IP pool, and you can switch IPs at will for any crawling job.
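The IP-pool idea in that closing paragraph can be sketched as a small class: scraped proxies go in, callers pull a random one out, and proxies that fail get dropped. A minimal illustration, not a production pool:

```python
import random

class ProxyPool:
    """Minimal in-memory proxy pool: add scraped proxies, hand out a
    random one, and discard a proxy once requests through it fail."""

    def __init__(self):
        self._proxies = set()

    def add(self, proxy):
        self._proxies.add(proxy)

    def get(self):
        if not self._proxies:
            raise LookupError('proxy pool is empty')
        return random.choice(sorted(self._proxies))

    def discard(self, proxy):
        # Call this when a request through `proxy` times out or errors
        self._proxies.discard(proxy)

pool = ProxyPool()
pool.add('116.209.59.41:9999')
pool.add('106.185.45.153:9999')
proxy = pool.get()      # plug into any of the methods shown above
pool.discard(proxy)     # drop it once it stops working
```

A production pool would also re-validate proxies periodically (e.g. by hitting httpbin.org/get through each one), but the add/get/discard cycle above is the core of it.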