Python Frameworks: the Scrapy Crawler (Part 2)

Scrapy is an application framework written to crawl websites and extract structured data. It can be used in a wide range of programs, from data mining and information processing to archiving historical data. It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy's uses are broad: data mining, monitoring, and automated testing.

Scrapy uses the Twisted asynchronous networking library to handle network communication. Its overall architecture is built from the components listed below.

Scrapy consists mainly of the following components:

  • Engine (Scrapy)
    Handles the data flow for the entire system and triggers events (the core of the framework).
  • Scheduler
    Accepts requests from the engine, pushes them onto a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses to crawl): it decides which URL to fetch next and removes duplicate URLs.
  • Downloader
    Downloads page content and returns it to the spiders (the downloader is built on Twisted, an efficient asynchronous model).
  • Spiders
    The spiders do the actual work: they extract the information you need, the so-called items, from specific pages. They can also extract links so that Scrapy keeps crawling the next pages.
  • Item Pipeline
    Processes the items the spiders extract. Its main jobs are persisting items, validating them, and discarding unneeded data. Once a page has been parsed by a spider, its items are sent to the pipeline and pass through several processing steps in a fixed order.
  • Downloader Middlewares
    A hook framework between the Scrapy engine and the downloader; it processes the requests and responses passing between them.
  • Spider Middlewares
    A hook framework between the Scrapy engine and the spiders; it processes the spiders' response input and request output.
  • Scheduler Middlewares
    A hook framework between the Scrapy engine and the scheduler; it processes the requests and responses passing between them.

The Scrapy crawl flow is roughly as follows:

  1. The engine takes a URL from the scheduler for the next crawl.
  2. The engine wraps the URL in a Request and hands it to the downloader.
  3. The downloader fetches the resource and wraps it in a Response.
  4. The spider parses the Response.
  5. Items parsed out of the response are handed to the item pipeline for further processing.
  6. URLs parsed out of the response are handed back to the scheduler to wait for crawling.

I. Installation

Linux:
    pip3 install scrapy

Windows:
    a. pip3 install wheel
    b. Download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
    c. In the download directory, run: pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl
    d. pip3 install scrapy
    e. Download and install pywin32: https://sourceforge.net/projects/pywin32/files/

II. Basic Usage

1. Basic commands

1. scrapy startproject <project_name>
   # Create a project in the current directory (similar to Django)

2. scrapy genspider [-t template] <name> <domain>
   # Create a spider
   scrapy genspider -t basic oldboy oldboy.com
   scrapy genspider -t xmlfeed autohome autohome.com.cn
   PS: list available templates: scrapy genspider -l
       show a template's content: scrapy genspider -d <template_name>

3. scrapy list
   # List the spiders in the project

4. scrapy crawl <spider_name>
   # Run a single spider (must be executed inside the project)

2. Project structure and a quick look at a spider

project_name/
    scrapy.cfg          # Main project configuration (the real crawler settings live in settings.py)
    project_name/
        __init__.py
        items.py        # Data models for structured data, similar to Django's Model
        pipelines.py    # Data-processing behavior, e.g. persisting the structured data
        settings.py     # Configuration: recursion depth, concurrency, download delay, etc.
        spiders/        # Spider directory: create files here and write the crawling rules
            __init__.py
            spider1.py
            spider2.py
            spider3.py

Note: spider files are usually named after the domain of the site being crawled.

spider1.py

import scrapy


class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "spidername"                 # spider name (required)
    allowed_domains = ["spider.com"]    # allowed domains
    start_urls = [
        "http://www.flepeng.com/",      # start URLs
    ]

    def parse(self, response):
        # callback invoked with the result of each start URL
        pass

About output encoding on Windows

import io
import sys
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

3. A first example

import scrapy
from scrapy.selector import HtmlXPathSelector  # deprecated in newer versions; use Selector instead
from scrapy.http.request import Request


class DigSpider(scrapy.Spider):
    name = "dig"                                # spider name; used when starting the crawl from the command line
    allowed_domains = ["chouti.com"]            # allowed domains
    start_urls = ['http://dig.chouti.com/']     # start URLs

    has_request_set = {}

    def parse(self, response):
        print(response.url)
        hxs = HtmlXPathSelector(response)
        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key not in self.has_request_set:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

To run this spider, open a terminal, cd into the project directory, and execute:

scrapy crawl dig --nolog   # --nolog suppresses log output

The important points in the code above are:

  • Request is a class that wraps a user request; yielding a Request object from a callback tells Scrapy to keep crawling that URL.
  • HtmlXPathSelector parses the HTML and provides the selector functionality (a sketch of the same spider using the newer Selector API follows below).
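Since HtmlXPathSelector is deprecated in newer Scrapy releases, the same spider can be written against response.xpath() directly. A minimal sketch under that assumption (the built-in RFPDupeFilter already drops duplicate requests, so the manual md5 bookkeeping is omitted):

import scrapy


class DigSpider(scrapy.Spider):
    name = "dig"
    allowed_domains = ["chouti.com"]
    start_urls = ['http://dig.chouti.com/']

    def parse(self, response):
        # response.xpath() returns a SelectorList; extract() gives the matched strings
        page_list = response.xpath(
            '//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href'
        ).extract()
        for page in page_list:
            # response.urljoin() resolves the relative href against the current page URL
            yield scrapy.Request(url=response.urljoin(page), callback=self.parse)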

4. Selectors

XPath path expressions:

Expression    Description
nodename      Selects all child nodes of the named node.
/             Selects from the root node.
//            Selects matching nodes anywhere in the document, regardless of their position.
.             Selects the current node.
..            Selects the parent of the current node.
@             Selects attributes.

The following table lists some path expressions and their results:

Path expression      Result
bookstore            Selects all child nodes of the bookstore element.
/bookstore           Selects the root element bookstore.
                     (Note: if a path starts with a slash ( / ), it always represents an absolute path to an element.)
bookstore/book       Selects all book elements that are children of bookstore.
//book               Selects all book elements, wherever they are in the document.
bookstore//book      Selects all book elements that are descendants of the bookstore element, wherever they are below it.
//@lang              Selects all attributes named lang.
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.selector import Selector, HtmlXPathSelector  # HtmlXPathSelector is deprecated in newer versions; Selector works the same way
from scrapy.http import HtmlResponse

html = """<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
    <ul>
        <li class="item-"><a id='i1' href="link.html">first item</a></li>
        <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
        <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
    </ul>
    <div><a href="llink2.html">second item</a></div>
</body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')  # all a elements in the document
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)
# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#     v = item.xpath('./a/span')
#     # or
#     # v = item.xpath('a/span')
#     # or
#     # v = item.xpath('*/a/span')
#     print(v)
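Selector also understands CSS expressions, which can be terser than XPath for attribute and text extraction. A few equivalents of the queries above, run against the same response object (a sketch, not from the original post):

hxs = Selector(response=response).css('a')                        # every a element
hxs = Selector(response=response).css('a#i1')                     # the a element with id="i1"
hxs = Selector(response=response).css('a[href*=link]')            # a elements whose href contains "link"
hxs = Selector(response=response).css('a::attr(href)').extract()  # all href attribute values
hxs = Selector(response=response).css('ul li a::text').extract_first()  # text of the first match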

Example: log in to Chouti automatically and upvote posts

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class ChouTiSpider(scrapy.Spider):
    # spider name, used to start the crawl from the command line
    name = "chouti"
    # allowed domains
    allowed_domains = ["chouti.com"]

    cookie_dict = {}
    has_request_set = {}

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        # return [Request(url=url, callback=self.login)]
        yield Request(url=url, callback=self.login)

    def login(self, response):
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value
        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8615131255089&password=pppppppp&oneMonth=1',
            cookies=self.cookie_dict,
            callback=self.check_login
        )
        yield req

    def check_login(self, response):
        req = Request(
            url='http://dig.chouti.com/',
            method='GET',
            callback=self.show,
            cookies=self.cookie_dict,
            dont_filter=True
        )
        yield req

    def show(self, response):
        # print(response)
        hxs = HtmlXPathSelector(response)
        news_list = hxs.select('//div[@id="content-list"]/div[@class="item"]')
        for new in news_list:
            # temp = new.xpath('div/div[@class="part2"]/@share-linkid').extract()
            link_id = new.xpath('*/div[@class="part2"]/@share-linkid').extract_first()
            yield Request(
                url='http://dig.chouti.com/link/vote?linksId=%s' % (link_id,),
                method='POST',
                cookies=self.cookie_dict,
                callback=self.do_favor
            )

        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            import hashlib
            hash = hashlib.md5()
            hash.update(bytes(page_url, encoding='utf-8'))
            key = hash.hexdigest()
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                yield Request(
                    url=page_url,
                    method='GET',
                    callback=self.show
                )

    def do_favor(self, response):
        print(response.text)
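FormRequest is imported above but never used; the hand-assembled POST body could also be expressed with it. A hedged sketch, assuming the site accepts an ordinary form-encoded login (the field names phone/password/oneMonth are taken from the body string above):

def login(self, response):
    yield FormRequest(
        url='http://dig.chouti.com/login',
        formdata={
            'phone': '8615131255089',   # same credentials as in the body string above
            'password': 'pppppppp',
            'oneMonth': '1',
        },
        callback=self.check_login,
    )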

Handling cookies

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http.response.html import HtmlResponse
from scrapy.http import Request
from scrapy.http.cookies import CookieJar


class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com"]
    start_urls = (
        'http://www.chouti.com/',
    )

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        yield Request(url=url, callback=self.login, meta={'cookiejar': True})

    def login(self, response):
        print(response.headers.getlist('Set-Cookie'))
        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8613121758648&password=woshiniba&oneMonth=1',
            callback=self.check_login,
            meta={'cookiejar': True}
        )
        yield req

    def check_login(self, response):
        print(response.text)

Note: set DEPTH_LIMIT = 1 in settings.py to limit how many levels of "recursion" the crawl follows.

5. Structured processing with pipelines

The examples above only do simple processing, so everything happens directly in the parse method. If you want to do more with the data you collect, you can use Scrapy's items to structure it and then hand it over to pipelines for unified processing.

spiders/xiahuar.py

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class XiaoHuarSpider(scrapy.Spider):
    name = "xiaohuar"
    allowed_domains = ["xiaohuar.com"]
    start_urls = [
        "http://www.xiaohuar.com/list-1-1.html",
    ]
    # per-spider pipeline configuration (overrides settings.py)
    # custom_settings = {
    #     'ITEM_PIPELINES': {
    #         'spider1.pipelines.JsonPipeline': 100
    #     }
    # }

    has_request_set = {}

    def parse(self, response):
        # Analyse the page:
        # find the content that matches the rules (the pictures) and save it,
        # then follow every matching link and repeat, level by level
        hxs = HtmlXPathSelector(response)

        items = hxs.select('//div[@class="item_list infinite_scroll"]/div')
        for item in items:
            src = item.select('.//div[@class="img"]/a/img/@src').extract_first()
            name = item.select('.//div[@class="img"]/span/text()').extract_first()
            school = item.select('.//div[@class="img"]/div[@class="btns"]/a/text()').extract_first()
            url = "http://www.xiaohuar.com%s" % src
            from ..items import XiaoHuarItem
            obj = XiaoHuarItem(name=name, school=school, url=url)
            yield obj

        urls = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href')
        for url in urls:
            key = self.md5(url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = url
                req = Request(
                    url=url,
                    method='GET',
                    callback=self.parse
                )
                yield req

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

items

import scrapy


class XiaoHuarItem(scrapy.Item):
    name = scrapy.Field()
    school = scrapy.Field()
    url = scrapy.Field()

pipelines

import json
import os
import requests


class JsonPipeline(object):
    def __init__(self):
        self.file = open('xiaohua.txt', 'w')

    def process_item(self, item, spider):
        v = json.dumps(dict(item), ensure_ascii=False)
        self.file.write(v)
        self.file.write('\n')
        self.file.flush()
        return item


class FilePipeline(object):
    def __init__(self):
        if not os.path.exists('imgs'):
            os.makedirs('imgs')

    def process_item(self, item, spider):
        response = requests.get(item['url'], stream=True)
        file_name = '%s_%s.jpg' % (item['name'], item['school'])
        with open(os.path.join('imgs', file_name), mode='wb') as f:
            f.write(response.content)
        return item
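Note that JsonPipeline opens xiaohua.txt in __init__ and never closes it. One option (a sketch, not part of the original code) is to manage the file with open_spider/close_spider:

import json


class JsonPipeline(object):
    def open_spider(self, spider):
        # open the output file when the spider starts
        self.file = open('xiaohua.txt', 'w')

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # flush and close the output file when the spider finishes
        self.file.close()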

settings

ITEM_PIPELINES = {
    'spider1.pipelines.JsonPipeline': 100,
    'spider1.pipelines.FilePipeline': 300,
}
# The integer values determine the order in which the pipelines run: items pass through
# pipelines from the lowest number to the highest. These numbers are conventionally
# kept in the 0-1000 range.

Pipelines can do much more, for example:

Custom pipeline skeleton

from scrapy.exceptions import DropItem


class CustomPipeline(object):
    def __init__(self, v):
        self.value = v

    def process_item(self, item, spider):
        # called for every item the spider yields; do the processing / persistence here
        # returning the item passes it on to the next pipeline
        return item
        # raising DropItem discards the item so later pipelines never see it
        # raise DropItem()

    @classmethod
    def from_crawler(cls, crawler):
        # called once at start-up to create the pipeline object
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self, spider):
        # called when the spider starts
        print('000000')

    def close_spider(self, spider):
        # called when the spider closes
        print('111111')
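from_crawler above reads a setting named MMMM. To use such a pipeline it has to be registered in settings.py together with that value; a sketch, where the module path spider1.pipelines.CustomPipeline and the value 10 are placeholders for illustration:

# settings.py
ITEM_PIPELINES = {
    'spider1.pipelines.CustomPipeline': 200,
}
MMMM = 10   # read by crawler.settings.getint('MMMM') in from_crawler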

6. Middleware

Spider middleware

class SpiderMiddleware(object):
    def process_spider_input(self, response, spider):
        """
        Called for each response after download, before it is handed to parse().
        :param response:
        :param spider:
        :return:
        """
        pass

    def process_spider_output(self, response, result, spider):
        """
        Called with the results the spider returns.
        :param response:
        :param result:
        :param spider:
        :return: must return an iterable of Request and/or Item objects
        """
        return result

    def process_spider_exception(self, response, exception, spider):
        """
        Called when an exception is raised.
        :param response:
        :param exception:
        :param spider:
        :return: None to let later middleware handle the exception, or an iterable of
                 Response/Item objects to hand to the scheduler or the pipelines
        """
        return None

    def process_start_requests(self, start_requests, spider):
        """
        Called with the spider's start requests when the crawl starts.
        :param start_requests:
        :param spider:
        :return: an iterable of Request objects
        """
        return start_requests

Downloader middleware

class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        Called for every request that needs to be downloaded, before the download happens.
        :param request:
        :param spider:
        :return: None to continue through the remaining middleware to the downloader;
                 a Response object to stop process_request and jump to process_response;
                 a Request object to stop the middleware chain and reschedule the request;
                 raise IgnoreRequest to stop process_request and run process_exception
        """
        pass

    def process_response(self, request, response, spider):
        """
        Called with the downloaded response before it is passed to the spider.
        :param request:
        :param response:
        :param spider:
        :return: a Response object, passed on to the other middlewares' process_response;
                 a Request object, which stops the chain and reschedules the request for download;
                 raise IgnoreRequest to call Request.errback
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when the download handler or a process_request() (downloader middleware) raises an exception.
        :param request:
        :param exception:
        :param spider:
        :return: None to let later middleware handle the exception;
                 a Response object to stop the remaining process_exception methods;
                 a Request object to stop the chain and reschedule the request for download
        """
        return None
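Middleware classes only take effect once they are enabled in settings.py. Assuming the two classes above live in the project's middlewares.py module, the registration would look roughly like this (the module path and the number 543 are placeholders):

# settings.py
SPIDER_MIDDLEWARES = {
    'project_name.middlewares.SpiderMiddleware': 543,
}
DOWNLOADER_MIDDLEWARES = {
    'project_name.middlewares.DownMiddleware1': 543,
}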

7. Custom commands

  • Create a directory at the same level as spiders, e.g. commands
  • Create a crawlall.py file inside it (the file name becomes the command name)

crawlall.py

from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings


class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):        # command arguments
        return '[options]'

    def short_desc(self):    # command description
        return 'Runs all of the spiders'

    def run(self, args, opts):
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()
  • Add COMMANDS_MODULE = '<project name>.<directory name>' to settings.py (see the example after this list)
  • Run the command from the project directory: scrapy crawlall
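For example, assuming the directory is called commands and the project is step8_king (the project name used in the settings section further below), the setting would be:

# settings.py
COMMANDS_MODULE = 'step8_king.commands'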

Running a single spider from a script

import sys
from scrapy.cmdline import execute

if __name__ == '__main__':
    execute(["scrapy", "crawl", "github", "--nolog"])
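Assuming the snippet is saved as, say, start.py next to scrapy.cfg in the project root, running python start.py has the same effect as running scrapy crawl github --nolog from the command line (the spider name github is just the example used above).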

8. Custom extensions

A custom extension uses signals to register the desired action at specific points of the crawl.

from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        # register signal handlers
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')
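As with pipelines, an extension only runs after it is enabled in settings.py; the step8_king.extensions.MyExtension path matches the commented example in the settings section below, and MMMM is again the placeholder setting read by from_crawler:

# settings.py
EXTENSIONS = {
    'step8_king.extensions.MyExtension': 500,
}
MMMM = 10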

9. Avoiding duplicate requests

By default Scrapy uses scrapy.dupefilter.RFPDupeFilter for de-duplication. The related settings are:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "保存范文记录的日志路径,如:/root/"  # 最终路径为 /root/requests.seen

Custom URL de-duplication

class RepeatUrl:
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        Called at start-up to create the filter.
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has already been visited.
        :param request:
        :return: True if the request was already visited, False otherwise
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when the crawl starts.
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        Called when the crawl ends.
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log a duplicate request.
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)
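To make Scrapy use this filter instead of the default RFPDupeFilter, point DUPEFILTER_CLASS at it; the step8_king.duplication.RepeatUrl path below matches the commented example in the settings section that follows:

# settings.py
DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'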

10. Miscellaneous

settings

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. robots.txt compliance; should normally be enabled so the crawl checks what it is allowed to fetch
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Concurrent requests per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored
# and the download delay is applied per IP instead
# CONCURRENT_REQUESTS_PER_IP = 3
# Disable cookies (enabled by default)
# 8. Whether cookie support is enabled (cookies are handled through cookiejars)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console lets you inspect and control the running crawler:
#    connect with `telnet <ip> <port>` and issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Pipelines that process the items
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, invoked through signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }

# 13. Maximum crawl depth; the current depth is available via meta; 0 means unlimited
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 = depth-first LIFO (default); 1 = breadth-first FIFO
# Last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# First in, first out: breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL de-duplication for visited requests
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
"""
17. The auto-throttle algorithm
    from scrapy.contrib.throttle import AutoThrottle
    How the delay is adjusted:
    1. read the minimum delay            DOWNLOAD_DELAY
    2. read the maximum delay            AUTOTHROTTLE_MAX_DELAY
    3. set the initial download delay    AUTOTHROTTLE_START_DELAY
    4. when a request finishes downloading, take its latency, i.e. the time between
       sending the request and receiving the response headers
    5. combine it with                   AUTOTHROTTLE_TARGET_CONCURRENCY
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0   # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
"""
# Enable auto-throttling
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
"""
18. HTTP caching
    Caches requests/responses that have already been made so they can be reused later.
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# Enable the cache
# HTTPCACHE_ENABLED = True
# Cache policy: cache every request; subsequent identical requests are served from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# Cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"
# Cache expiration time
# HTTPCACHE_EXPIRATION_SECS = 0
# Cache directory
# HTTPCACHE_DIR = 'httpcache'
# HTTP status codes that are never cached
# HTTPCACHE_IGNORE_HTTP_CODES = []
# Cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
"""
19. Proxies: either set them in the environment or use a custom downloader middleware.
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    Option 1: use the default middleware and os.environ
        os.environ['http_proxy']  = "http://root:woshiniba@192.168.11.11:9999/"
        os.environ['https_proxy'] = "http://192.168.11.11:9999/"

    Option 2: use a custom downloader middleware

        import base64
        import random
        import six

        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)

        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    # base64.encodebytes on newer Python versions
                    encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

        DOWNLOADER_MIDDLEWARES = {
            'step8_king.middlewares.ProxyMiddleware': 500,
        }
"""

"""
20. HTTPS access. There are two cases:
    1. the site uses a trusted certificate (supported by default)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
    2. the site uses a custom certificate
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,   # pKey object
                    certificate=v2,  # X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )

    Related classes:
        scrapy.core.downloader.handlers.http.HttpDownloadHandler
        scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
        scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
    Related settings:
        DOWNLOADER_HTTPCLIENTFACTORY
        DOWNLOADER_CLIENTCONTEXTFACTORY
"""

"""
21. Spider middleware: the same SpiderMiddleware skeleton shown in section 6 above
    (process_spider_input / process_spider_output / process_spider_exception / process_start_requests).

    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    # 'step8_king.middlewares.SpiderMiddleware': 543,
}

"""
22. Downloader middleware: the same DownMiddleware1 skeleton shown in section 6 above
    (process_request / process_response / process_exception).

    Default downloader middlewares:
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }
"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'step8_king.middlewares.DownMiddleware1': 100,
#    'step8_king.middlewares.DownMiddleware2': 500,
# }

This post is reposted from https://www.cnblogs.com/wupeiqi/articles/6229292.html
