Reference: https://blog.csdn.net/s150503/article/details/72571680
CONCURRENT_REQUESTS and DOWNLOAD_DELAY
How do CONCURRENT_REQUESTS and DOWNLOAD_DELAY interact in Scrapy? To find out, build a small project and vary the two settings.
The Douban Movie Top 250 list is used as the example.
douban_spider.py
# -*- coding: utf-8 -*-
import scrapy
import time
import re
from lxml import etree

"""
Garbled response body when logging in to Douban with Scrapy:
https://www.jianshu.com/p/9974fc338242
"""


class ExampleSpider(scrapy.Spider):
    name = 'douban'
    allowed_domains = ['example.com']
    # start_urls = ['https://movie.douban.com/top250?start={}&filter='.format(i) for i in range(0, 250, 25)]
    start_urls = ['https://movie.douban.com/top250?start={}&filter='.format(i) for i in range(10000)]
    custom_settings = {
        'DEFAULT_REQUEST_HEADERS': {
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,'
                      '*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
            "Accept-Encoding": "gzip, deflate",
            "Accept-Language": "zh-CN,zh;q=0.9",
            "Connection": "keep-alive",
            "Host": "movie.douban.com",
            "Upgrade-Insecure-Requests": "1",
            "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
                          ' (KHTML, like Gecko) Chrome/81.0.4044.113 Safari/537.36',
        },
        'CONCURRENT_REQUESTS': 10,
        'DOWNLOAD_DELAY': 0.01,
        'CONCURRENT_REQUESTS_PER_IP': 0,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 10000,
        'FEED_EXPORT_ENCODING': 'utf-8'
    }

    def parse(self, response):
        current_url = response.url
        print(current_url)
        time.sleep(3)
        # Return early on purpose: for this experiment only the request URL and its
        # arrival time matter, so the extraction code below is never reached.
        return
        offset = re.findall(r'start=(\d+)', current_url)[0]
        page_num = int(offset) // 25
        html = etree.HTML(text=response.text)
        # Locate the li tags first; data is a list of 25 li elements,
        # i.e. the 25 movies on the current page
        data = html.xpath('//ol[@class="grid_view"]/li')
        index = 0
        for d in data:
            data_title = d.xpath('div/div[2]/div[@class="hd"]/a/span[1]/text()')
            data_info = d.xpath('div/div[2]/div[@class="bd"]/p[1]/text()')
            data_quote = d.xpath('div/div[2]/div[@class="bd"]/p[2]/span/text()')
            data_score = d.xpath('div/div[2]/div[@class="bd"]/div/span[@class="rating_num"]/text()')
            data_num = d.xpath('div/div[2]/div[@class="bd"]/div/span[4]/text()')
            data_pic_url = d.xpath('div/div[1]/a/img/@src')
            print(f"No: {page_num * 25 + index + 1} {data_title}")
            index += 1


if __name__ == '__main__':
    from scrapy import cmdline
    cmdline.execute('scrapy crawl douban'.split())
Verification 1:
'CONCURRENT_REQUESTS': 10,
'DOWNLOAD_DELAY': 0.01,
With CONCURRENT_REQUESTS set to 10, up to 10 requests can theoretically be in flight at once. With DOWNLOAD_DELAY set to 0.01, the delay alone would allow roughly 1 / 0.01 = 100 requests per second. Taking the smaller of the two values gives 10, so about 10 requests are sent concurrently.
Observed: roughly 10 requests are fired within almost the same second.
Verification 2:
'CONCURRENT_REQUESTS': 10,
'DOWNLOAD_DELAY': 0.5,
With CONCURRENT_REQUESTS set to 10, up to 10 requests could theoretically be in flight at once. But with DOWNLOAD_DELAY set to 0.5, the delay only allows about 1 / 0.5 = 2 requests per second. Taking the smaller of the two values gives 2, so only about 2 requests are sent concurrently.
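Both verifications use the same back-of-the-envelope model: the request rate permitted by DOWNLOAD_DELAY, capped by CONCURRENT_REQUESTS. The sketch below merely restates that arithmetic; it is not how Scrapy actually schedules requests (the real downloader applies the delay per slot and, by default, RANDOMIZE_DOWNLOAD_DELAY varies it).

# Back-of-the-envelope model from the two verifications above
# (an illustration of the article's reasoning, not Scrapy's internal algorithm).
def effective_concurrency(concurrent_requests, download_delay):
    if download_delay <= 0:
        return concurrent_requests               # no delay: only the concurrency cap applies
    return min(concurrent_requests, 1 / download_delay)

print(effective_concurrency(10, 0.01))  # Verification 1 -> 10
print(effective_concurrency(10, 0.5))   # Verification 2 -> 2.0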
Summary:
DOWNLOAD_DELAY constrains CONCURRENT_REQUESTS: if the delay is large enough, the configured concurrency is never actually reached.
Thoughts:
1. With CONCURRENT_REQUESTS set but no DOWNLOAD_DELAY, the server receives a large burst of requests at virtually the same moment.
'CONCURRENT_REQUESTS': 10,
# 'DOWNLOAD_DELAY': 0.5,
With DOWNLOAD_DELAY commented out, its default value of 0 is used.
2. With both CONCURRENT_REQUESTS and DOWNLOAD_DELAY set, the server does not receive a large burst of requests at the same moment.
# 'CONCURRENT_REQUESTS': 0,
'DOWNLOAD_DELAY': 0.5,
With CONCURRENT_REQUESTS commented out, its default value of 16 is used.
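For reference, the defaults that the two cases above fall back to can also be written out explicitly. The values below are the ones documented in the Scrapy settings reference; spelling them out in custom_settings is just an illustration, not something the original experiment requires.

# Scrapy's documented defaults for the settings used in this experiment.
custom_settings = {
    'CONCURRENT_REQUESTS': 16,             # global concurrency cap (default)
    'CONCURRENT_REQUESTS_PER_DOMAIN': 8,   # per-domain cap (default)
    'CONCURRENT_REQUESTS_PER_IP': 0,       # 0 means the per-domain cap is used instead
    'DOWNLOAD_DELAY': 0,                   # no delay between requests by default
}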