Python Crawlers: the Scrapy Framework

  Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, such as data mining, information processing, and archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or to build general-purpose web crawlers. Scrapy is versatile and can be used for data mining, monitoring, and automated testing.

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is outlined below.

Scrapy mainly consists of the following components:

  • Engine (Scrapy)
    Handles the data flow of the whole system and triggers events (the core of the framework).
  • Scheduler
    Accepts requests sent by the engine, pushes them onto a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to crawl next and removes duplicate URLs.
  • Downloader
    Downloads page content and returns it to the spiders (the Scrapy downloader is built on Twisted, an efficient asynchronous model).
  • Spiders
    Spiders do the main work: they extract the information you need from specific pages, i.e. the so-called items. You can also extract links from a page so that Scrapy goes on to crawl the next one.
  • Item Pipeline
    Processes the items the spiders extract from pages. Its main jobs are persisting items, validating them, and cleaning out unwanted data. Once a page has been parsed by a spider, the items are sent to the pipeline and pass through several components in a fixed order.
  • Downloader Middlewares
    A hook framework between the Scrapy engine and the downloader; it mainly processes the requests and responses exchanged between the engine and the downloader.
  • Spider Middlewares
    A hook framework between the Scrapy engine and the spiders; it mainly processes the spiders' response input and request output.
  • Scheduler Middlewares
    Middleware between the Scrapy engine and the scheduler; it processes the requests and responses sent from the engine to the scheduler.

The Scrapy run flow is roughly as follows (a minimal spider sketch illustrating the flow is shown after the list):

    1. The engine takes a URL from the scheduler for the next crawl
    2. The engine wraps the URL in a Request and passes it to the downloader
    3. The downloader fetches the resource and wraps it in a Response
    4. The spider parses the Response
    5. If items are parsed out, they are handed to the item pipeline for further processing
    6. If URLs are parsed out, they are handed to the scheduler to wait for crawling
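A minimal spider sketch that maps onto this flow: parse() receives the Response (steps 3-4), yielded items go on to the item pipeline (step 5), and yielded Requests go back to the scheduler (step 6). The spider name, start URL, and XPath expressions below are placeholders, not part of the original examples.

import scrapy


class MinimalSpider(scrapy.Spider):
    name = "minimal"                       # name used by `scrapy crawl minimal`
    start_urls = ["http://example.com/"]   # seed URLs handed to the scheduler

    def parse(self, response):
        # Step 5: extracted data is yielded and sent on to the item pipelines
        for title in response.xpath('//h1/text()').extract():
            yield {'title': title}

        # Step 6: new Requests are yielded back to the scheduler to be crawled later
        for href in response.xpath('//a/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)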

一、Installation

Linux

pip3 install scrapy

Windows

# Some of Scrapy's dependencies: pywin32, pyOpenSSL, Twisted, lxml, zope.interface. (Watch the error messages while installing.)

# Install wheel
pip3 install wheel -i http://pypi.douban.com/simple --trusted-host pypi.douban.com

 

# Install this dependency package first; Twisted cannot be installed without it
pip3 install Incremental -i http://pypi.douban.com/simple --trusted-host pypi.douban.com

 

# Then try installing Twisted with pip3. It will still fail with an error, but this step resolves the other dependencies.
pip3 install Twisted -i http://pypi.douban.com/simple --trusted-host pypi.douban.com

 

# Then cd into the directory where the downloaded wheel file is stored and install it from there; this time it succeeds.
pip3 install Twisted-17.1.0-cp35-cp35m-win32.whl

 

# Install Scrapy
pip3 install scrapy -i http://pypi.douban.com/simple --trusted-host pypi.douban.com

 

#pywin32
Download: https://sourceforge.net/projects/pywin32/files/

Check whether pywin32 was installed successfully:

C:\Users\Administrator>python
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.>>> import win32api
>>> import win32con
>>> win32api.MessageBox(win32con.NULL, 'Python 你好!', '你好', win32con.MB_OK)

二、Basic Usage

1. Basic commands

# Create a project
scrapy startproject xiaohuar

# Enter the project directory
cd xiaohuar

# Create a spider application
scrapy genspider xiaohuar xiaohuar.com

# Run the spider
scrapy crawl xiaohuar --nolog

2. Project structure and spider application overview

File overview:
scrapy.cfg   main project configuration (the real crawler-related settings live in settings.py)
items.py     templates for structured data storage, similar to Django models
pipelines.py data processing behaviour, e.g. persisting the structured data
settings.py  configuration file, e.g. recursion depth, concurrency, download delay
spiders/     spider directory: create spider files here and write the crawl rules

Note: spider files are usually named after the website domain.
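For reference, a project generated by the commands above looks roughly like this (the exact file list may vary slightly between Scrapy versions):

xiaohuar/
├── scrapy.cfg
└── xiaohuar/
    ├── __init__.py
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── xiaohuar.py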

# spider1.py
import scrapy

class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                       # spider name *****
    allowed_domains = ["xiaohuar.com"]      # allowed domains
    start_urls = [
        "http://www.xiaohuar.com/hua/",     # start URLs
    ]

    def parse(self, response):
        # callback invoked after the start URL has been fetched
        pass

3. Finding tags with selectors

# -*- coding: utf-8 -*-
import scrapy
import sys
import io
from scrapy.http import Request
from scrapy.selector import Selector, HtmlXPathSelector
from ..items import ChoutiItem

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com"]
    start_urls = ['http://dig.chouti.com/']
    visited_urls = set()

    # def start_requests(self):
    #     for url in self.start_urls:
    #         yield Request(url, callback=self.parse)

    def parse(self, response):
        # content = str(response.body, encoding='utf-8')

        # Find all A tags in the document
        # hxs = Selector(response=response).xpath('//a')  # list of selector objects
        # for i in hxs:
        #     print(i)  # selector object

        # Convert the objects to strings
        # hxs = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]').extract()  # list of strings
        # hxs = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')  # list of selector objects
        # for obj in hxs:
        #     a = obj.xpath('.//a[@class="show-content"]/text()').extract_first()
        #     print(a.strip())

        # Selectors:
        """
        //              among all descendants
        .//             among descendants of the current object
        /               direct children
        /div            div tags among the children
        /div[@id="i1"]  div children with id="i1"
        obj.extract()         # convert every object in the list to a string  => []
        obj.extract_first()   # convert to strings and take the first element of the list
        //div/text()    get the text of a tag
        """

        # Get all page numbers on the current page
        # hxs = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/text()')
        # hxs = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        # hxs = Selector(response=response).xpath('//a[starts-with(@href, "/all/hot/recent/")]/@href').extract()

        hxs1 = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')  # list of selector objects
        for obj in hxs1:
            title = obj.xpath('.//a[@class="show-content"]/text()').extract_first().strip()
            href = obj.xpath('.//a[@class="show-content"]/@href').extract_first().strip()
            item_obj = ChoutiItem(title=title, href=href)
            # hand the item object over to the pipelines
            yield item_obj

        hxs2 = Selector(response=response).xpath('//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for url in hxs2:
            md5_url = self.md5(url)
            if md5_url in self.visited_urls:
                pass
            else:
                self.visited_urls.add(md5_url)
                url = "http://dig.chouti.com%s" % url
                # add the new URL to the scheduler
                yield Request(url=url, callback=self.parse)

        # a/@href                                              get an attribute
        # //a[starts-with(@href, "/all/hot/recent/")]/@href    starts with xx
        # //a[re:test(@href, "/all/hot/recent/\d+")]           regular expression
        # yield Request(url=url, callback=self.parse)          add the new URL to the scheduler

        # Override start_requests to specify which method handles the initial requests
    # def show(self, response):
    #     print(response.text)

    def md5(self, url):
        import hashlib
        obj = hashlib.md5()
        obj.update(bytes(url, encoding='utf-8'))
        return obj.hexdigest()

4. A quick first run

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request

class DigSpider(scrapy.Spider):
    # spider name, used to start the crawl command
    name = "dig"

    # allowed domains
    allowed_domains = ["chouti.com"]

    # start URLs
    start_urls = [
        'http://dig.chouti.com/',
    ]

    has_request_set = {}

    def parse(self, response):
        print(response.url)

        hxs = HtmlXPathSelector(response)
        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

To run this spider, go to the project directory in a terminal and execute the following command:

scrapy crawl dig --nolog

The key points in the code above are:

  • Request is a class that wraps a user request; yielding a Request object in a callback means "continue crawling this URL".
  • HtmlXPathSelector structures the HTML and provides selector functionality.

5. Selectors

#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import HtmlResponse
html = """<!DOCTYPE html>
<html>
    <head lang="en">
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <ul>
            <li class="item-"><a id='i1' href="link.html">first item</a></li>
            <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
            <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
        </ul>
        <div><a href="llink2.html">second item</a></div>
    </body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html,encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')  # find the a tags
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')  # the 2nd a tag
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')  # a tags that have an id attribute
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]') #找到ID=他的值
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')  # regular expression
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract() 
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)

# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#     v = item.xpath('./a/span')
#     # or
#     # v = item.xpath('a/span')
#     # or
#     # v = item.xpath('*/a/span')
#     print(v)

Example: automatically log in to Chouti and upvote posts

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest

class ChouTiSpider(scrapy.Spider):
    # spider name, used to start the crawl command
    name = "chouti"
    # allowed domains
    allowed_domains = ["chouti.com"]

    cookie_dict = {}
    has_request_set = {}

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        # return [Request(url=url, callback=self.login)]
        yield Request(url=url, callback=self.login)

    def login(self, response):
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value

        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8615131255089&password=pppppppp&oneMonth=1',
            cookies=self.cookie_dict,
            callback=self.check_login
        )
        yield req

    def check_login(self, response):
        req = Request(
            url='http://dig.chouti.com/',
            method='GET',
            callback=self.show,
            cookies=self.cookie_dict,
            dont_filter=True
        )
        yield req

    def show(self, response):
        # print(response)
        hxs = HtmlXPathSelector(response)
        news_list = hxs.select('//div[@id="content-list"]/div[@class="item"]')
        for new in news_list:
            # temp = new.xpath('div/div[@class="part2"]/@share-linkid').extract()
            link_id = new.xpath('*/div[@class="part2"]/@share-linkid').extract_first()
            yield Request(
                url='http://dig.chouti.com/link/vote?linksId=%s' % (link_id,),
                method='POST',
                cookies=self.cookie_dict,
                callback=self.do_favor
            )

        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            import hashlib
            hash = hashlib.md5()
            hash.update(bytes(page_url, encoding='utf-8'))
            key = hash.hexdigest()
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                yield Request(
                    url=page_url,
                    method='GET',
                    callback=self.show
                )

    def do_favor(self, response):
        print(response.text)

Note: set DEPTH_LIMIT = 1 in settings.py to limit the number of levels of "recursion".
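For example, the corresponding line in the project's settings.py (a minimal sketch):

# settings.py
DEPTH_LIMIT = 1   # follow links at most one level deep from the start URLs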

6. Structured item processing

The examples above only do simple handling, so everything happens directly in the parse method. If you want to do more with the scraped data, you can use Scrapy's items to structure it and then hand it over to the pipelines for unified processing.

spiders/xiaohuar.py

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest

class XiaoHuarSpider(scrapy.Spider):
    # spider name, used to start the crawl command
    name = "xiaohuar"
    # allowed domains
    allowed_domains = ["xiaohuar.com"]

    start_urls = [
        "http://www.xiaohuar.com/list-1-1.html",
    ]

    # custom_settings = {
    #     'ITEM_PIPELINES': {
    #         'spider1.pipelines.JsonPipeline': 100
    #     }
    # }

    has_request_set = {}

    def parse(self, response):
        # Analyse the page
        # Find the content on the page that matches the rules (photos) and save it
        # Find all a tags, visit the pages behind them, and keep going level by level

        hxs = HtmlXPathSelector(response)

        items = hxs.select('//div[@class="item_list infinite_scroll"]/div')
        for item in items:
            src = item.select('.//div[@class="img"]/a/img/@src').extract_first()
            name = item.select('.//div[@class="img"]/span/text()').extract_first()
            school = item.select('.//div[@class="img"]/div[@class="btns"]/a/text()').extract_first()

            url = "http://www.xiaohuar.com%s" % src
            from ..items import XiaoHuarItem
            obj = XiaoHuarItem(name=name, school=school, url=url)
            yield obj

        urls = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href')
        for url in urls:
            key = self.md5(url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = url
                req = Request(url=url, method='GET', callback=self.parse)
                yield req

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

items

import scrapy

class XiaoHuarItem(scrapy.Item):
    name = scrapy.Field()
    school = scrapy.Field()
    url = scrapy.Field()

pipelines

import json
import os
import requests

class JsonPipeline(object):
    def __init__(self):
        self.file = open('xiaohua.txt', 'w')

    def process_item(self, item, spider):
        v = json.dumps(dict(item), ensure_ascii=False)
        self.file.write(v)
        self.file.write('\n')
        self.file.flush()
        return item

class FilePipeline(object):
    def __init__(self):
        if not os.path.exists('imgs'):
            os.makedirs('imgs')

    def process_item(self, item, spider):
        response = requests.get(item['url'], stream=True)
        file_name = '%s_%s.jpg' % (item['name'], item['school'])
        with open(os.path.join('imgs', file_name), mode='wb') as f:
            f.write(response.content)
        return item

settings

ITEM_PIPELINES = {
    'spider1.pipelines.JsonPipeline': 100,
    'spider1.pipelines.FilePipeline': 300,
}
# The integer after each entry determines the running order: items pass through the pipelines from the lowest number to the highest. These numbers are usually defined in the 0-1000 range.

Custom pipeline

from scrapy.exceptions import DropItem

class CustomPipeline(object):
    def __init__(self, v):
        self.value = v

    def process_item(self, item, spider):
        # do the work and persist the item
        # returning the item lets the following pipelines keep processing it
        return item

        # to drop the item so that no later pipeline processes it:
        # raise DropItem()

    @classmethod
    def from_crawler(cls, crawler):
        """
        Called at initialisation time to create the pipeline object.
        :param crawler:
        :return:
        """
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self, spider):
        """
        Called when the spider starts.
        :param spider:
        :return:
        """
        print('000000')

    def close_spider(self, spider):
        """
        Called when the spider is closed.
        :param spider:
        :return:
        """
        print('111111')
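To actually use this pipeline it still has to be registered, and the MMMM value read in from_crawler has to exist. A sketch of the corresponding settings.py entries; the module path and the value are illustrative, not from the original text:

# settings.py (module path and value are illustrative)
ITEM_PIPELINES = {
    'spider1.pipelines.CustomPipeline': 300,
}
MMMM = 123   # read by from_crawler via crawler.settings.getint('MMMM')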

7. Middlewares

Spider middleware

class SpiderMiddleware(object):

    def process_spider_input(self, response, spider):
        """
        Called after the download has finished, before the response is handed to parse.
        :param response:
        :param spider:
        :return:
        """
        pass

    def process_spider_output(self, response, result, spider):
        """
        Called when the spider has finished processing and returns its results.
        :param response:
        :param result:
        :param spider:
        :return: must return an iterable containing Request or Item objects
        """
        return result

    def process_spider_exception(self, response, exception, spider):
        """
        Called on exceptions.
        :param response:
        :param exception:
        :param spider:
        :return: None to let the following middlewares keep handling the exception;
                 an iterable containing Response or Item objects to hand over to the scheduler or pipelines
        """
        return None

    def process_start_requests(self, start_requests, spider):
        """
        Called when the spider starts.
        :param start_requests:
        :param spider:
        :return: an iterable containing Request objects
        """
        return start_requests

Downloader middleware

class DownMiddleware1(object):

    def process_request(self, request, spider):
        """
        Called for every request that needs to be downloaded, through all downloader middlewares.
        :param request:
        :param spider:
        :return: None to continue to the next middleware and download;
                 a Response object to stop process_request and start process_response;
                 a Request object to stop the middleware chain and reschedule the Request;
                 raise IgnoreRequest to stop process_request and start process_exception
        """
        pass

    def process_response(self, request, response, spider):
        """
        Called when the downloaded response is returned.
        :param request:
        :param response:
        :param spider:
        :return: a Response object is passed on to the other middlewares' process_response;
                 a Request object stops the middleware chain and the request is rescheduled for download;
                 raise IgnoreRequest to call Request.errback
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when the download handler or process_request (downloader middleware) raises an exception.
        :param request:
        :param exception:
        :param spider:
        :return: None to let the following middlewares keep handling the exception;
                 a Response object stops the remaining process_exception methods;
                 a Request object stops the middleware chain and the request is rescheduled for download
        """
        return None

8. Custom commands

  • Create a directory (any name) at the same level as spiders, e.g. commands
  • Create a crawlall.py file inside it (the file name becomes the custom command)

crawlall.py

from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings

class Command(ScrapyCommand):

    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()
  • Add COMMANDS_MODULE = '<project name>.<directory name>' to settings.py (see the sketch below)
  • Run the command from the project directory: scrapy crawlall
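As a concrete sketch, assuming the project is the xiaohuar project from earlier and the directory is named commands (both names are assumptions here):

# settings.py (project and directory names are assumptions)
COMMANDS_MODULE = 'xiaohuar.commands'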

9. Custom extensions

A custom extension uses signals to register specific operations at specific points in the crawl.

from scrapy import signals

class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')
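The extension also has to be enabled in settings.py. A minimal sketch, assuming the class lives in an extensions.py module of the step8_king project used in the settings reference below (module path and priority are illustrative):

# settings.py (module path and priority are illustrative)
EXTENSIONS = {
    'step8_king.extensions.MyExtension': 500,
}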

10. Avoiding duplicate requests

By default, Scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter. The related settings are:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "path of the directory that stores the seen-requests log, e.g. /root/"  # the final path is /root/requests.seen

Custom URL deduplication

class RepeatUrl:
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        Called at initialisation time.
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has already been visited.
        :param request:
        :return: True if it has been visited; False if not
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when crawling starts.
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        Called when crawling finishes.
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log a duplicate request.
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)
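To make Scrapy use this filter instead of the default, point DUPEFILTER_CLASS at it in settings.py; the module path shown here is the one used in the settings reference in the next section:

# settings.py
DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'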

11. Miscellaneous

Notes on some of the settings parameters:

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. Whether to obey robots.txt
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Concurrent requests per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the download delay is applied per IP
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are enabled; cookies are handled via a cookiejar
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console can be used to inspect and operate the running crawler
#    connect with `telnet ip port` and then issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Define the item pipelines
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, invoked via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }

# 13. Maximum crawl depth allowed; the current depth can be read from meta; 0 means unlimited
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 means depth-first LIFO (default); 1 means breadth-first FIFO

# last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# first in, first out: breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL deduplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
"""
17. AutoThrottle algorithm
    from scrapy.contrib.throttle import AutoThrottle
    Auto-throttling settings:
    1. get the minimum delay:          DOWNLOAD_DELAY
    2. get the maximum delay:          AUTOTHROTTLE_MAX_DELAY
    3. set the initial download delay: AUTOTHROTTLE_START_DELAY
    4. when a request finishes downloading, take its "connection" latency, i.e. the time
       between opening the connection and receiving the response headers
    5. the target concurrency used in the computation: AUTOTHROTTLE_TARGET_CONCURRENCY
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0  # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
"""

# Enable auto-throttling
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
"""
18. Enable HTTP caching
    The goal is to cache requests that have already been sent, and their responses, for later reuse.
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# Whether to enable caching
# HTTPCACHE_ENABLED = True
# Cache policy: cache every request; the next time it is made, the cached copy is served directly
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# Cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"
# Cache expiration time
# HTTPCACHE_EXPIRATION_SECS = 0
# Cache directory
# HTTPCACHE_DIR = 'httpcache'
# HTTP status codes not to cache
# HTTPCACHE_IGNORE_HTTP_CODES = []
# Cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

"""
19. Proxies; set via environment variables or a custom downloader middleware
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    Option 1: use the default os.environ
        {
            http_proxy: http://root:woshiniba@192.168.11.11:9999/
            https_proxy: http://192.168.11.11:9999/
        }

    Option 2: use a custom downloader middleware

        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)

        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

        DOWNLOADER_MIDDLEWARES = {
            'step8_king.middlewares.ProxyMiddleware': 500,
        }
"""

"""
20. HTTPS access
    There are two cases when accessing HTTPS sites:
    1. the target site uses a trusted certificate (supported by default)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
    2. the target site uses a custom (self-signed) certificate
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,   # pKey object
                    certificate=v2,  # X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )

    Other related classes:
        scrapy.core.downloader.handlers.http.HttpDownloadHandler
        scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
        scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
    Related settings:
        DOWNLOADER_HTTPCLIENTFACTORY
        DOWNLOADER_CLIENTCONTEXTFACTORY
"""
"""
21. Spider middleware

    class SpiderMiddleware(object):

        def process_spider_input(self, response, spider):
            '''
            Called after the download has finished, before the response is handed to parse.
            :param response:
            :param spider:
            :return:
            '''
            pass

        def process_spider_output(self, response, result, spider):
            '''
            Called when the spider has finished processing and returns its results.
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request or Item objects
            '''
            return result

        def process_spider_exception(self, response, exception, spider):
            '''
            Called on exceptions.
            :param response:
            :param exception:
            :param spider:
            :return: None to let the following middlewares keep handling the exception;
                     an iterable containing Response or Item objects to hand over to the scheduler or pipelines
            '''
            return None

        def process_start_requests(self, start_requests, spider):
            '''
            Called when the spider starts.
            :param start_requests:
            :param spider:
            :return: an iterable containing Request objects
            '''
            return start_requests

    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    # 'step8_king.middlewares.SpiderMiddleware': 543,
}

"""
22. Downloader middleware

    class DownMiddleware1(object):

        def process_request(self, request, spider):
            '''
            Called for every request that needs to be downloaded, through all downloader middlewares.
            :param request:
            :param spider:
            :return: None to continue to the next middleware and download;
                     a Response object to stop process_request and start process_response;
                     a Request object to stop the middleware chain and reschedule the Request;
                     raise IgnoreRequest to stop process_request and start process_exception
            '''
            pass

        def process_response(self, request, response, spider):
            '''
            Called when the downloaded response is returned.
            :param request:
            :param response:
            :param spider:
            :return: a Response object is passed on to the other middlewares' process_response;
                     a Request object stops the middleware chain and the request is rescheduled for download;
                     raise IgnoreRequest to call Request.errback
            '''
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            '''
            Called when the download handler or process_request (downloader middleware) raises an exception.
            :param request:
            :param exception:
            :param spider:
            :return: None to let the following middlewares keep handling the exception;
                     a Response object stops the remaining process_exception methods;
                     a Request object stops the middleware chain and the request is rescheduled for download
            '''
            return None

    Default downloader middlewares:
        {
            'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
            'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
            'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
            'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
            'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
            'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
            'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
            'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
            'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
            'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
            'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
            'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
            'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
            'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
        }
"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'step8_king.middlewares.DownMiddleware1': 100,
#    'step8_king.middlewares.DownMiddleware2': 500,
# }

12. TinyScrapy

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import types
from twisted.internet import defer
from twisted.web.client import getPage
from twisted.internet import reactor

class Request(object):
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback
        self.priority = 0

class HttpResponse(object):
    def __init__(self, content, request):
        self.content = content
        self.request = request

class ChouTiSpider(object):

    def start_requests(self):
        url_list = ['http://www.cnblogs.com/', 'http://www.bing.com']
        for url in url_list:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        print(response.request.url)
        # yield Request(url="http://www.baidu.com", callback=self.parse)

from queue import Queue
Q = Queue()

class CallLaterOnce(object):
    def __init__(self, func, *a, **kw):
        self._func = func
        self._a = a
        self._kw = kw
        self._call = None

    def schedule(self, delay=0):
        if self._call is None:
            self._call = reactor.callLater(delay, self)

    def cancel(self):
        if self._call:
            self._call.cancel()

    def __call__(self):
        self._call = None
        return self._func(*self._a, **self._kw)

class Engine(object):
    def __init__(self):
        self.nextcall = None
        self.crawlling = []
        self.max = 5
        self._closewait = None

    def get_response(self, content, request):
        response = HttpResponse(content, request)
        gen = request.callback(response)
        if isinstance(gen, types.GeneratorType):
            for req in gen:
                req.priority = request.priority + 1
                Q.put(req)

    def rm_crawlling(self, response, d):
        self.crawlling.remove(d)

    def _next_request(self, spider):
        if Q.qsize() == 0 and len(self.crawlling) == 0:
            self._closewait.callback(None)

        if len(self.crawlling) >= 5:
            return
        while len(self.crawlling) < 5:
            try:
                req = Q.get(block=False)
            except Exception as e:
                req = None
            if not req:
                return
            d = getPage(req.url.encode('utf-8'))
            self.crawlling.append(d)
            d.addCallback(self.get_response, req)
            d.addCallback(self.rm_crawlling, d)
            d.addCallback(lambda _: self.nextcall.schedule())

    @defer.inlineCallbacks
    def crawl(self):
        spider = ChouTiSpider()
        start_requests = iter(spider.start_requests())
        flag = True
        while flag:
            try:
                req = next(start_requests)
                Q.put(req)
            except StopIteration as e:
                flag = False

        self.nextcall = CallLaterOnce(self._next_request, spider)
        self.nextcall.schedule()

        self._closewait = defer.Deferred()
        yield self._closewait

    @defer.inlineCallbacks
    def pp(self):
        yield self.crawl()

_active = set()
obj = Engine()
d = obj.crawl()
_active.add(d)

li = defer.DeferredList(_active)
li.addBoth(lambda _, *a, **kw: reactor.stop())
reactor.run()


For more documentation, see: http://scrapy-chs.readthedocs.io/zh_CN/latest/index.html

 
