Abuyun offers a tunnel proxy IP service. The dynamic edition of its HTTP tunnel lets a crawler use rotating proxy IPs transparently, and the requests integration code looks like this:
```python
# -*- coding:utf-8 -*-
import requests

# Target URL to fetch
url = ""

# Proxy server; check and adjust according to the purchased plan
proxy_host = "http-dyn.abuyun.com"
# Proxy port
proxy_port = "9020"

# Tunnel authentication
proxy_user = "H123D"  # tunnel certificate (username)
proxy_pass = "12345"  # tunnel key (password)

proxy_meta = "http://%(user)s:%(pass)s@%(host)s:%(port)s" % {
    "host": proxy_host,
    "port": proxy_port,
    "user": proxy_user,
    "pass": proxy_pass,
}

proxies = {
    "http": proxy_meta,
    "https": proxy_meta,
}

response = requests.get(url=url, proxies=proxies)
print(response.status_code)
print(response.text)
```

Running it yields:
```
200
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Connection": "close",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.18.1"
  },
  "origin": "60.207.237.111",
  "url": ""
}
```

Finally, because Abuyun's proxy address itself never changes (it is the exit IP behind it that rotates), once the `proxies` dict above has been built it can be reused as-is: simply pass `proxies=proxies` on every request.
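Since only the exit IP rotates while the tunnel endpoint stays fixed, the proxy configuration can be built once and reused everywhere. A minimal sketch of such a helper (the `build_proxies` name and defaults are ours for illustration, not part of the Abuyun code):

```python
# Hypothetical helper that builds the reusable proxies dict for requests.
# Credentials and host mirror the snippet above; replace with your own.

def build_proxies(user, password, host="http-dyn.abuyun.com", port=9020):
    """Return a proxies mapping suitable for requests' `proxies=` argument."""
    meta = "http://%s:%s@%s:%d" % (user, password, host, port)
    # The same tunnel URL serves both schemes; the exit IP rotates server-side.
    return {"http": meta, "https": meta}

proxies = build_proxies("H123D", "12345")
print(proxies["http"])  # http://H123D:12345@http-dyn.abuyun.com:9020
```

Any number of `requests.get(url, proxies=proxies)` calls can then share this one dict.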
Abuyun also provides integration code for the Scrapy crawler framework:
```python
import base64

# Proxy server; check and adjust according to the purchased plan
proxyServer = "http://http-dyn.abuyun.com:9020"

# Tunnel authentication
proxy_user = "H123D"  # tunnel certificate (username)
proxy_pass = "12345"  # tunnel key (password)

proxyAuth = "Basic " + base64.urlsafe_b64encode(
    bytes(proxy_user + ":" + proxy_pass, "ascii")
).decode("utf8")


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta["proxy"] = proxyServer
        request.headers["Proxy-Authorization"] = proxyAuth
```

The plan purchased from Abuyun is the most basic one, allowing 5 requests per second, while Scrapy's default concurrency is 16, so the request rate has to be throttled. Setting the delay between requests to 0.2 s gives exactly 5 requests per second. Finally, enable the proxy middleware class above:
```python
AUTOTHROTTLE_ENABLED = True
DOWNLOAD_DELAY = 0.2  # delay between consecutive requests

# Enable the Abuyun proxy middleware
DOWNLOADER_MIDDLEWARES = {
    "maoyan.middlewares.ProxyMiddleware": 301,
}
```
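The `Proxy-Authorization` value the middleware sets is ordinary HTTP Basic auth, so it can be sanity-checked offline before wiring it into Scrapy. A brief standalone sketch (the `basic_proxy_auth` helper is ours, not part of the Abuyun code):

```python
import base64


def basic_proxy_auth(user, password):
    """Build the value placed in the Proxy-Authorization header."""
    token = base64.urlsafe_b64encode(("%s:%s" % (user, password)).encode("ascii"))
    return "Basic " + token.decode("utf8")


auth = basic_proxy_auth("H123D", "12345")
# Round-trip check: the base64 payload decodes back to "user:password"
assert base64.urlsafe_b64decode(auth[len("Basic "):]) == b"H123D:12345"
print(auth)
```

If the tunnel rejects requests with a 407, decoding the header this way is a quick way to confirm the credentials were encoded correctly.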