Crawlers fear IP bans: here is how a programmer builds a proxy IP pool.

1. Set the User-Agent

Scrapy's documentation recommends keeping a pool of common browser User-Agent strings and rotating through them, picking one for each request. By default, the User-Agent header on HTTP requests that Scrapy sends is Scrapy/VERSION; we need to override that field so the crawler looks like a browser visiting the site.

Likewise, define the list that stores the User-Agent pool in settings.py:

UserAgent_List = [
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.1 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2226.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.4; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2224.3 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.93 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 4.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.67 Safari/537.36",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.67 Safari/537.36",
    "Mozilla/5.0 (X11; OpenBSD i386) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1944.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.3319.102 Safari/537.36",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.2309.372 Safari/537.36",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.2117.157 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1866.237 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.137 Safari/4E423F",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1",
    "Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10; rv:33.0) Gecko/20100101 Firefox/33.0",
    "Mozilla/5.0 (X11; Linux i586; rv:31.0) Gecko/20100101 Firefox/31.0",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0",
    "Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0",
    "Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16",
    "Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14",
    "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 12.14",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14",
    "Opera/9.80 (Windows NT 5.1; U; zh-sg) Presto/2.9.181 Version/12.00",
]
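If maintaining a hard-coded list feels tedious, a rotating User-Agent can also come from the third-party fake-useragent package instead. A minimal sketch, assuming pip install fake-useragent (this library is not part of the steps above):

from fake_useragent import UserAgent

ua = UserAgent()
print(ua.random)  # a random real-world browser User-Agent string
print(ua.chrome)  # a random Chrome User-Agent string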

2. In middlewares.py, create a downloader middleware class named RandomUserAgentMiddleware

import random

from scrapy_demo.settings import UserAgent_List


class RandomUserAgentMiddleware(object):
    """Dynamically set a random User-Agent per request."""

    def process_request(self, request, spider):
        ua = random.choice(UserAgent_List)
        if ua:
            request.headers.setdefault('User-Agent', ua)
            print(request.headers)  # debug output; remove in production

3. Configure RandomUserAgentMiddleware in settings.py to activate the middleware

DOWNLOADER_MIDDLEWARES = {
    # Key format: yourproject.middlewares(the module name).MiddlewareClass
    # Disable Scrapy's built-in User-Agent middleware
    # (on old Scrapy versions the path is
    #  scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware)
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'scrapy_demo.middlewares.RandomUserAgentMiddleware': 400,  # scrapy_demo is your project name
}
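To confirm the rotation actually works, a throwaway spider can hit httpbin.org/user-agent, which echoes back whatever User-Agent it received. This is my own sketch, not part of the original steps; the spider name and request count are arbitrary:

import scrapy

class UACheckSpider(scrapy.Spider):
    name = 'ua_check'

    def start_requests(self):
        # dont_filter=True, otherwise the dupefilter drops the repeated URL
        for _ in range(3):
            yield scrapy.Request('https://httpbin.org/user-agent', dont_filter=True)

    def parse(self, response):
        # Each response body should echo a different entry from UserAgent_List
        self.logger.info(response.text)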

4. Disable cookies

Some sites use cookies to spot a crawler's tracks, so it is best to disable them.

Add the following setting in settings.py.

# Commented out by default, i.e. crawls normally run with cookies on
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
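If only part of a crawl must avoid cookies (say, the same site needs a login elsewhere), Scrapy's standard dont_merge_cookies meta key can opt out per request instead of globally. A small sketch with placeholder URLs:

import scrapy

class PartialCookieSpider(scrapy.Spider):
    name = 'partial_cookies'
    start_urls = ['https://example.com/']  # placeholder

    def parse(self, response):
        # This follow-up request bypasses the cookie jar entirely
        yield scrapy.Request(
            'https://example.com/listing',  # placeholder
            meta={'dont_merge_cookies': True},
            callback=self.parse_listing,
        )

    def parse_listing(self, response):
        self.logger.info('fetched %s without cookies', response.url)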

5. Set a download delay: the time Scrapy's downloader waits before downloading the next page from the same website. A delay keeps the downloader from executing a download the instant it gets the URL, which limits the crawl speed and eases the pressure on the server.

Add the following setting in settings.py.

# Commented out by default
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3  # in seconds; this waits 3 s between downloads
# Fractional values also work, e.g. 0.3 for a 300 ms delay
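The comment above points at AutoThrottle: rather than a fixed delay, Scrapy can adapt the delay to the server's observed latency. A sketch of the relevant settings.py knobs (the setting names are standard Scrapy; the values here are illustrative):

# With a fixed delay, Scrapy already randomizes each wait to 0.5x-1.5x
# of DOWNLOAD_DELAY (this is on by default)
RANDOMIZE_DOWNLOAD_DELAY = True

# Or let AutoThrottle adjust the delay from response times
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 5            # initial delay, in seconds
AUTOTHROTTLE_MAX_DELAY = 60             # ceiling the delay can grow to
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0   # average parallel requests per server
AUTOTHROTTLE_DEBUG = False              # set True to log every adjustment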

6. Set up a proxy

Some sites run anti-crawler mechanisms, so our crawler may stall after fetching a certain number of pages. We need to dress the crawler up so that its access pattern looks more human; an IP proxy pool gets around the restrictions of most sites.

Collect some proxy addresses yourself and save them as a list in settings.py:

# Proxy addresses have a limited lifespan; no guarantee the ones below still work.
# Each entry must be a full address such as "http://host:port"; the hosts are omitted here.
PROXY_LIST = [
    ":80",
    ":80",
    ":3128",
    ":61234",
    ":8080",
    ":808",
    ":4386",
    ":808",
]
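Because the addresses expire, it pays to filter out dead proxies before crawling. A minimal sketch using the third-party requests library (assumes the list holds full http://host:port URLs; httpbin.org/ip simply echoes the caller's IP):

# check_proxies.py
import requests

from scrapy_demo.settings import PROXY_LIST

def alive(proxy, timeout=5):
    # A proxy counts as alive if it relays a trivial request within `timeout` seconds
    try:
        r = requests.get('http://httpbin.org/ip',
                         proxies={'http': proxy, 'https': proxy},
                         timeout=timeout)
        return r.ok
    except requests.RequestException:
        return False

if __name__ == '__main__':
    good = [p for p in PROXY_LIST if alive(p)]
    print('%d/%d proxies usable: %s' % (len(good), len(PROXY_LIST), good))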

7. In middlewares.py, create a proxy middleware class named ProxyMiddleware

import random

from scrapy_demo.settings import PROXY_LIST


class ProxyMiddleware(object):
    # Overwrite process_request
    def process_request(self, request, spider):
        # Route the request through a proxy; a fixed one would look like:
        # request.meta['proxy'] = ":80"
        request.meta['proxy'] = random.choice(PROXY_LIST)
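Dead proxies surface at crawl time as connection errors. One possible refinement (my own sketch, not from the original article; the class name RetryProxyMiddleware is made up) is to catch the failure in process_exception, swap in a fresh proxy, and re-schedule the request:

import random

from scrapy_demo.settings import PROXY_LIST


class RetryProxyMiddleware(object):
    # Like ProxyMiddleware, but rotates to a new proxy when one fails

    def process_request(self, request, spider):
        # setdefault keeps the proxy chosen by a previous retry, if any
        request.meta.setdefault('proxy', random.choice(PROXY_LIST))

    def process_exception(self, request, exception, spider):
        retries = request.meta.get('proxy_retries', 0)
        if retries >= 3:
            return None  # give up and let Scrapy's normal error handling run
        spider.logger.warning('proxy %s failed: %s',
                              request.meta.get('proxy'), exception)
        request.meta['proxy'] = random.choice(PROXY_LIST)
        request.meta['proxy_retries'] = retries + 1
        request.dont_filter = True  # keep the dupefilter from eating the retry
        return request  # returning a Request re-schedules it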

8. Add the proxy configuration in settings.py:

DOWNLOADER_MIDDLEWARES = {
    # Key format: yourproject.middlewares(the module name).MiddlewareClass
    # Enable the proxy middlewares: ours runs first (100) and sets
    # request.meta['proxy'], which HttpProxyMiddleware (110) then applies
    # (on old Scrapy versions the path is
    #  scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware)
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'scrapy_demo.middlewares.ProxyMiddleware': 100,  # scrapy_demo is your project name
}

Beyond all this, if you want to play hardball, you can combine a VPN with Tor to defeat anti-crawler mechanisms.
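For reference, Tor exposes a SOCKS5 proxy on 127.0.0.1:9050 by default. Scrapy does not speak SOCKS natively (an HTTP-to-SOCKS bridge such as Privoxy is usually placed in front of it), but a quick sanity check of the Tor route is easy with requests; a sketch assuming a local Tor daemon and pip install requests[socks]:

import requests

# socks5h:// resolves DNS through Tor as well, avoiding DNS leaks
TOR = {'http': 'socks5h://127.0.0.1:9050',
       'https': 'socks5h://127.0.0.1:9050'}

print(requests.get('https://httpbin.org/ip', proxies=TOR, timeout=10).text)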
