Abuyun offers a tunnel-proxy IP service: the dynamic tier of its HTTP tunnel lets a crawler use rotating proxy IPs without managing an IP pool itself.
Based on their documentation, the requests integration code looks like this:
```python
# -*- coding:utf-8 -*-
import requests

# Target page to fetch
url = "http://httpbin.org/get"

# Proxy server; check and adjust according to the plan you purchased
proxy_host = "http-dyn.abuyun.com"

# Proxy port
proxy_port = "9020"

# Tunnel authentication credentials
proxy_user = "H01234567890123D"  # tunnel license (username)
proxy_pass = "0123456789012345"  # tunnel secret (password)

proxy_meta = "http://%(user)s:%(pass)s@%(host)s:%(port)s" % {
    "host": proxy_host,
    "port": proxy_port,
    "user": proxy_user,
    "pass": proxy_pass,
}

proxies = {
    "http": proxy_meta,
    "https": proxy_meta,
}

response = requests.get(url=url, proxies=proxies)
print(response.status_code)
print(response.text)
```
The output is:
```
200
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Connection": "close",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.18.1"
  },
  "origin": "60.207.237.111",
  "url": "http://httpbin.org/get"
}
```
Note that the Abuyun proxy address itself never changes; the rotation happens behind it, so each request exits from a different IP. In practice, once you have built the `proxies` dict above, you can simply reuse it for every request with `proxies=proxies`.
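Since the tunnel endpoint is fixed, the proxy configuration can be built once and shared across the whole program. A minimal sketch of such a helper (the function name `build_tunnel_proxies` is my own, not part of any Abuyun SDK):

```python
def build_tunnel_proxies(host, port, user, password):
    """Build a requests-style proxies dict for a fixed tunnel endpoint.

    The endpoint URL never changes; the provider rotates the exit IP
    behind it, so the same dict can be reused for every request.
    """
    proxy_url = "http://%s:%s@%s:%s" % (user, password, host, port)
    return {"http": proxy_url, "https": proxy_url}


# Build once, then pass proxies=proxies to every requests call.
proxies = build_tunnel_proxies(
    "http-dyn.abuyun.com", "9020",
    "H01234567890123D", "0123456789012345",
)
```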
Abuyun also provides integration code for the Scrapy crawler framework:
```python
import base64

# Proxy server; check and adjust according to the plan you purchased
proxyServer = "http://http-dyn.abuyun.com:9020"

# Tunnel authentication credentials
proxy_user = "H01234567890123D"  # tunnel license (username)
proxy_pass = "0123456789012345"  # tunnel secret (password)

proxyAuth = "Basic " + base64.urlsafe_b64encode(
    bytes(proxy_user + ":" + proxy_pass, "ascii")
).decode("utf8")


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta["proxy"] = proxyServer
        request.headers["Proxy-Authorization"] = proxyAuth
```
The plan purchased here is the most basic one, which allows 5 requests per second, while Scrapy's default concurrency is 16 requests. The request rate therefore needs to be throttled: setting a delay of 0.2 s between requests yields exactly 5 requests per second. Finally, enable the proxy middleware class above in the project settings:
```python
AUTOTHROTTLE_ENABLED = True
DOWNLOAD_DELAY = 0.2  # delay between consecutive requests

# Enable the Abuyun proxy middleware
DOWNLOADER_MIDDLEWARES = {
    'maoyan.middlewares.ProxyMiddleware': 301,
}
```
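The 0.2 s figure comes directly from the plan's rate limit: delay = 1 / (requests per second). If the plan is upgraded later, the delay can be derived instead of hard-coded. A small sketch (the helper `delay_for_rate` is hypothetical, not a Scrapy or Abuyun API; note also that with AutoThrottle enabled, Scrapy treats `DOWNLOAD_DELAY` as a lower bound, so the actual delay may be longer):

```python
def delay_for_rate(requests_per_second):
    """Return the DOWNLOAD_DELAY (in seconds) that caps Scrapy at the
    given request rate, e.g. 5 req/s -> 0.2 s between requests."""
    if requests_per_second <= 0:
        raise ValueError("rate must be positive")
    return 1.0 / requests_per_second


DOWNLOAD_DELAY = delay_for_rate(5)  # basic plan: 5 requests per second
```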