Using Proxies with Python's Scrapy Crawling Framework
Updated: 2016-02-18 15:13:51 Contributed by: goldensun
This article explains how to route Scrapy requests through a proxy, and how to rotate through a predefined list of user-agents so different pages are fetched with different identities.
1. Create a file named "middlewares.py" in your Scrapy project
# Importing the base64 library because we'll need it ONLY if the
# proxy we are going to use requires authentication
import base64

# Start your middleware class
class ProxyMiddleware(object):
    # overwrite process_request
    def process_request(self, request, spider):
        # Set the location of the proxy
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"

        # Use the following lines only if your proxy requires authentication
        proxy_user_pass = "USERNAME:PASSWORD"
        # set up basic authentication for the proxy; strip() removes the
        # trailing newline that encodestring() appends
        encoded_user_pass = base64.encodestring(proxy_user_pass).strip()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
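The same header can be built with the stdlib alone. The sketch below shows the equivalent construction in Python 3, where `base64.encodestring` no longer exists and `b64encode` works on bytes; the helper name `make_proxy_auth_header` is illustrative, not part of Scrapy:

```python
import base64

def make_proxy_auth_header(user, password):
    # Build the value for a Proxy-Authorization header.
    # b64encode takes and returns bytes, so encode/decode around it.
    creds = "%s:%s" % (user, password)
    return "Basic " + base64.b64encode(creds.encode("ascii")).decode("ascii")

print(make_proxy_auth_header("USERNAME", "PASSWORD"))
```

Unlike `encodestring`, `b64encode` adds no trailing newline, so no `strip()` is needed.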
2. Add the following to the project settings file (./project_name/settings.py)
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    'project_name.middlewares.ProxyMiddleware': 100,
}
Just two steps, and requests now go through the proxy. Let's test it ^_^
from scrapy.contrib.spiders import CrawlSpider

class TestSpider(CrawlSpider):
    name = "test"
    allowed_domains = ["whatismyip.com"]
    # The following url is subject to change, you can get the last updated one from here:
    # http://www.whatismyip.com/faq/automation.asp
    start_urls = ["http://xujian.info"]

    def parse(self, response):
        # save the fetched page so we can check which IP the site saw
        with open('test.html', 'wb') as f:
            f.write(response.body)
3. Using a random user-agent
By default, Scrapy crawls with a single user-agent, which makes it easy for a site to block the crawler. The code below picks a user-agent at random from a predefined list, so different pages are fetched with different user-agents.
Add the following to settings.py:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'Crawler.comm.rotate_useragent.RotateUserAgentMiddleware': 400,
}
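The numbers control ordering: Scrapy applies downloader middlewares to outgoing requests in ascending priority, and an entry set to None disables that middleware. A minimal standalone sketch of how such a dict resolves (the dict values are from the settings above; the sorting itself is plain Python, not Scrapy internals):

```python
# Middleware settings as in settings.py: None disables an entry,
# smaller numbers run earlier for outgoing requests.
middlewares = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'Crawler.comm.rotate_useragent.RotateUserAgentMiddleware': 400,
}

# Keep only enabled middlewares, ordered by priority.
enabled = sorted(
    (name for name, prio in middlewares.items() if prio is not None),
    key=lambda name: middlewares[name],
)
print(enabled)
```

Disabling the built-in UserAgentMiddleware matters: otherwise it could overwrite the header our custom middleware sets.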
Note: "Crawler" is the name of your project (it is also the name of a directory). Below is the middleware code (Crawler/comm/rotate_useragent.py):
#!/usr/bin/python
# -*- coding: utf-8 -*-
import random
from scrapy.contrib.downloadermiddleware.useragent import UserAgentMiddleware

class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        # pick a user-agent at random from the list below
        ua = random.choice(self.user_agent_list)
        if ua:
            request.headers.setdefault('User-Agent', ua)

    # the default user_agent_list covers Chrome, IE, Firefox, Mozilla,
    # Opera and Netscape; for more user-agent strings, see
    # http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
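The selection logic itself needs nothing from Scrapy and can be tried on its own. A minimal sketch, with a shortened illustrative list and a hypothetical `pick_user_agent` helper:

```python
import random

# Shortened list for illustration; use the full list from the middleware above.
user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
]

def pick_user_agent(agents, rng=random):
    # random.choice raises IndexError on an empty list, so guard it;
    # returning None mirrors the "if ua:" check in the middleware.
    return rng.choice(agents) if agents else None

# Same pattern as the middleware: set the header only if not already set.
headers = {}
headers.setdefault('User-Agent', pick_user_agent(user_agent_list))
print(headers['User-Agent'])
```

Note that `setdefault` is what keeps a user-agent set explicitly on a request from being overwritten by the rotation.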