Advanced Web Scraping: Simulating a Browser with Selenium

  • Introduction
  • Environment setup
    • 1. Install conda first (recommended)
    • 2. Create a virtual environment and install the packages
    • 3. Download the Chrome driver and a matching browser
  • Code
    • settings.py configuration
    • Scrapy spider example
    • Middleware: middlewares.py
  • Appendix: Selenium tutorials

Introduction

Selenium is a tool for automating browser interactions and is normally used for testing web applications. It can also serve as a scraper: by simulating a user's actions in the browser, it can extract data from web pages. The basics:

  1. Browser automation: Selenium lets you control a browser programmatically: opening pages, clicking buttons, filling in forms, and so on, which simulates what a real user does in the browser.

  2. Multiple browsers: Selenium supports the mainstream browsers, including Chrome, Firefox, and Edge, so you can pick whichever fits your needs.

  3. Data extraction: With Selenium you can load a page and extract data from it, which is especially useful for pages that load content dynamically or require user interaction.

  4. Waiting for elements: Because pages may load asynchronously, Selenium provides wait mechanisms that hold execution until a specific element has finished loading.

  5. Selectors: Selenium supports the usual locator strategies, such as CSS selectors and XPath, for finding elements on a page.

  6. Dynamic pages: For pages whose content is generated by JavaScript, Selenium is a powerful tool because it executes the JavaScript and returns the rendered result (see the minimal sketch after this list).
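
To make these points concrete, here is a minimal sketch of plain Selenium usage (the URL and the waited-for element are placeholders, not from any real target site): open a page, wait for an element to appear, and read its text.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
try:
    driver.get("https://example.com")  # placeholder URL
    # block for up to 10 seconds until the <h1> element is present in the DOM
    heading = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, "h1"))
    )
    print(heading.text)  # text of the rendered element
finally:
    driver.quit()  # always release the browser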

Although Selenium brings a lot of convenience to scraping, there are caveats. First, crawling with Selenium is slow, because it simulates the actions of a real user. Second, websites may detect the automated browser and take countermeasures against crawlers, so use Selenium with care and comply with each site's terms of use and policies.

Using Selenium here requires prior knowledge of the Scrapy framework: Selenium only does useful crawling work once it is wired into a Scrapy middleware. For the prerequisites, see: https://blog.csdn.net/shizuguilai/article/details/135554205

Environment setup

1. Install conda first (recommended)

Reference: https://blog.csdn.net/Q_fairy/article/details/129158178

2. Create a virtual environment and install the packages

# Create a conda environment named scrapy
conda create -n scrapy
# Activate the environment
conda activate scrapy
# Install the required packages
pip install scrapy
pip install selenium
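
A quick sanity check that both packages are importable in the new environment (a throwaway snippet, nothing project-specific):

import scrapy
import selenium

# both packages expose a __version__ attribute
print("scrapy", scrapy.__version__)
print("selenium", selenium.__version__)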

3. Download the Chrome driver and a matching browser

Reference: https://zhuanlan.zhihu.com/p/665018772
Remember to add the driver to your PATH environment variable.
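
If the driver matches the browser and is on the PATH, a bare launch should succeed. This small check (an illustration, not part of the project code) prints the browser and driver versions and exits:

from selenium import webdriver

driver = webdriver.Chrome()  # resolves chromedriver via the PATH
caps = driver.capabilities
print("browser:", caps.get("browserVersion"))
print("driver:", caps.get("chrome", {}).get("chromedriverVersion"))
driver.quit()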

Code

Directory layout: the Scrapy spider scripts go under spiders.
(Figure: project directory structure)

settings.py configuration

# Scrapy settings for sw project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "sw"

SPIDER_MODULES = ["sw.spiders"]
NEWSPIDER_MODULE = "sw.spiders"

DOWNLOAD_DELAY = 3
RANDOMIZE_DOWNLOAD_DELAY = True
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
COOKIES_ENABLED = True

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = "sw (+http://www.yourdomain.com)"

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# ----------- Selenium settings -------------
SELENIUM_TIMEOUT = 25           # Selenium page-load timeout, in seconds
LOAD_IMAGE = True               # whether to load images
WINDOW_HEIGHT = 900             # browser window size
WINDOW_WIDTH = 900

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "sw.middlewares.SwSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    "sw.middlewares.SwDownloaderMiddleware": 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    "sw.pipelines.SwPipeline": 300,
#}

# DB_SETTINGS = {
#     'host': '127.0.0.1',
#     'port': 3306,
#     'user': 'root',
#     'password': '123456',
#     'db': 'scrapy_news_2024_01_08',
#     'charset': 'utf8mb4',
# }

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
# REDIRECT_ENABLED = False

Scrapy spider example

"""
Created on 2024/01/06 14:00 by Fxy
"""
import scrapy
from sw.items import SwItem
import time
from datetime import datetime
import locale
from scrapy_splash import SplashRequest
# scrapy 信号相关库
from scrapy.utils.project import get_project_settings
# 下面这种方式,即将废弃,所以不用
# from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals
# scrapy最新采用的方案
from pydispatch import dispatcher
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWaitclass NhcSpider(scrapy.Spider):'''scrapy变量'''# 爬虫名称name = "1000_nhc"# 允许爬取的域名allowed_domains = ["xxxx.cn"]# 爬虫的起始链接start_urls = ["xxxx.shtml"]# 创建一个VidoItem实例item = SwItem()custom_settings = {'LOG_LEVEL':'INFO','DOWNLOAD_DELAY': 0,'COOKIES_ENABLED': False,  # enabled by default'DOWNLOADER_MIDDLEWARES': {# SeleniumMiddleware 中间件'sw.middlewares.SeleniumMiddleware': 543, # 这个数字是启用的优先级# 将scrapy默认的user-agent中间件关闭'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,}}'''自定义变量'''# 机构名称org = "xxxx数据"# 机构英文名称org_e = "None"# 日期格式site_date_format = '发布时间:\n            \t%Y-%m-%d\n            ' # 网页的日期格式date_format = '%d.%m.%Y %H:%M:%S' # 目标日期格式# 网站语言格式language_type = "zh2zh"  # 中文到中文的语言代码, 调用翻译接口时,使用# 模拟浏览器格式meta = {'usedSelenium': name, 'dont_redirect': True}# 将chrome初始化放到spider中,成为spider中的元素def __init__(self, timeout=40, isLoadImage=True, windowHeight=None, windowWidth=None):# 从settings.py中获取设置参数self.mySetting = get_project_settings()self.timeout = self.mySetting['SELENIUM_TIMEOUT']self.isLoadImage = self.mySetting['LOAD_IMAGE']self.windowHeight = self.mySetting['WINDOW_HEIGHT']self.windowWidth = self.mySetting['windowWidth']# 初始化chrome对象options = webdriver.ChromeOptions()options.add_experimental_option('useAutomationExtension', False) # 隐藏selenium特性options.add_experimental_option('excludeSwitches', ['enable-automation']) # 隐藏selenium特性options.add_argument('--ignore-certificate-errors') # 忽略证书错误options.add_argument('--ignore-certificate-errors-spki-list')options.add_argument('--ignore-ssl-errors') # 忽略ssl错误# chrome_options = webdriver.ChromeOptions()# chrome_options.binary_location = "E:\\学校的一些资料\\文档\研二上\\chrome-win64\\chrome.exe"  # 替换为您的特定版本的Chrome浏览器路径#1.创建Chrome或Firefox浏览器对象,这会在电脑上在打开一个浏览器窗口# browser = webdriver.Chrome(executable_path ="E:\\chromedriver\\chromedriver", chrome_options=chrome_options) #第一个参数为驱动的路径,第二个参数为对应的应用程序地址self.browser = webdriver.Chrome(chrome_options=options)self.browser.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", { # 隐藏selenium特性"source": """Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"""})if self.windowHeight and self.windowWidth:self.browser.set_window_size(900, 900)self.browser.set_page_load_timeout(self.timeout)        # 页面加载超时时间self.wait = WebDriverWait(self.browser, 30)             # 指定元素加载超时时间super(NhcSpider, self).__init__()# 设置信号量,当收到spider_closed信号时,调用mySpiderCloseHandle方法,关闭chromedispatcher.connect(receiver = self.mySpiderCloseHandle,signal = signals.spider_closed)# 信号量处理函数:关闭chrome浏览器def mySpiderCloseHandle(self, spider):print(f"mySpiderCloseHandle: enter ")self.browser.quit()def start_requests(self):yield scrapy.Request(url = self.start_urls[0],meta = self.meta,callback = self.parse,# errback = self.error)#爬虫的主入口,这里是获取所有的归档文章链接, 从返回的resposedef parse(self,response):# locale.setlocale(locale.LC_TIME, 'en_US') #本地语言为英语 //*[@id="538034"]/divachieve_links = response.xpath('//ul[@class="zxxx_list"]/li/a/@href').extract()print("achieve_links",achieve_links)for achieve_link in achieve_links:full_achieve_link = "http:/xxxx.cn" + achieve_linkprint("full_achieve_link", full_achieve_link)# 进入每个归档链接yield scrapy.Request(full_achieve_link, callback=self.parse_item,dont_filter=True, meta=self.meta)#翻页逻辑xpath_expression = f'//*[@id="page_div"]/div[@class="pagination_index"]/span/a[text()="下一页"]/@href'next_page = response.xpath(xpath_expression).extract_first()print("next_page = ", next_page)# 翻页操作if next_page != None:# print(next_page)# print('next page')full_next_page 
= "http://xxxx/" + next_pageprint("full_next_page",full_next_page)meta_page = {'usedSelenium': self.name, "whether_wait_id" : True} # 翻页的meta和请求的meta要不一样yield scrapy.Request(full_next_page, callback=self.parse, dont_filter=True, meta=meta_page)#获取每个文章的内容,并存入itemdef parse_item(self,response):source_url = response.urltitle_o = response.xpath('//div[@class="tit"]/text()').extract_first().strip()# title_t = my_tools.get_trans(title_o, "de2zh")publish_time = response.xpath('//div[@class="source"]/span[1]/text()').extract_first()date_object = datetime.strptime(publish_time, self.site_date_format) # 先读取成网页的日期格式date_object = date_object.strftime(self.date_format) # 转换成目标的日期字符串publish_time = datetime.strptime(date_object, self.date_format) # 从符合格式的字符串,转换成日期content_o = [content.strip() for content in response.xpath('//div[@id="xw_box"]//text()').extract()]# content_o = ' '.join(content_o) # 这个content_o提取出来是一个字符串数组,所以要拼接成字符串# content_t = my_tools.get_trans(content_o, "de2zh")print("source_url:", source_url)print("title_o:", title_o)# print("title_t:", title_t)print("publish_time:", publish_time) #15.01.2008print("content_o:", content_o)# print("content_t:", content_t)print("-" * 50)page_data = { 'source_url': source_url,'title_o': title_o,# 'title_t' : title_t,'publish_time': publish_time,'content_o': content_o,# 'content_t': content_t,'org' : self.org,'org_e' : self.org_e,}self.item['url'] = page_data['source_url']self.item['title'] = page_data['title_o']# self.item['title_t'] = page_data['title_t']self.item['time'] = page_data['publish_time']self.item['content'] = page_data['content_o']# self.item['content_t'] = page_data['content_t']# 获取当前时间current_time = datetime.now()# 格式化成字符串formatted_time = current_time.strftime(self.date_format)# 将字符串转换为 datetime 对象datetime_object = datetime.strptime(formatted_time, self.date_format)self.item['scrapy_time'] = datetime_objectself.item['org'] = page_data['org']self.item['trans_org'] = page_data['org_e']yield self.item

Middleware: middlewares.py

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter


class SwSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info("Spider opened: %s" % spider.name)


class SwDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info("Spider opened: %s" % spider.name)


# ---------- Selenium middleware ----------
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from scrapy.http import HtmlResponse
from logging import getLogger
import time


class SeleniumMiddleware():
    # The middleware is handed the spider object, so the Chrome instance
    # created in the spider's __init__ is available here.
    def process_request(self, request, spider):
        '''
        Fetch the page with Chrome
        :param request: the Request object
        :param spider: the Spider object
        :return: an HtmlResponse
        '''
        print(f"chrome is getting page = {request.url}")
        # the meta flag decides whether this request is fetched with Selenium
        usedSelenium = request.meta.get('usedSelenium', None)  # read usedSelenium from the request meta; defaults to None if absent
        # print("reached the middleware?")
        if usedSelenium == "1000_nhc":
            try:
                spider.browser.get(request.url)
                time.sleep(4)
                if request.meta.get('whether_wait_id', False):  # read whether_wait_id from the request meta; defaults to False if absent
                    print("Waiting for the pagination element to appear...")
                    # use WebDriverWait to wait for the page to finish loading
                    wait = WebDriverWait(spider.browser, 20)  # maximum wait of 20 seconds
                    # example: wait until a given element is present; adjust to the actual page
                    wait.until(EC.presence_of_element_located((By.ID, "page_div")))  # wait for pagination before moving on
            except TimeoutException:  # the element never showed up: retry the request
                print("Timeout waiting for element. Retrying the request.")
                # return the retried request so Scrapy reschedules it
                # (retry_request is sketched after this listing)
                return self.retry_request(request, spider)
            except Exception as e:
                print(f"chrome getting page error, Exception = {e}")
                return HtmlResponse(url=request.url, status=500, request=request)
            else:
                time.sleep(4)
                # the page was fetched: build a successful Response (HtmlResponse is a subclass)
                return HtmlResponse(url=request.url,
                                    body=spider.browser.page_source,
                                    request=request,
                                    # ideally set this from the page's actual encoding
                                    encoding='utf-8',
                                    status=200)
        # commented-out example: type a query into a search box and wait for results
        # try:
        #     spider.browser.get(request.url)
        #     # has the search box appeared?
        #     input = spider.wait.until(
        #         EC.presence_of_element_located((By.XPATH, "//div[@class='nav-search-field ']/input"))
        #     )
        #     time.sleep(2)
        #     input.clear()
        #     input.send_keys("iphone 7s")
        #     # press Enter to run the search
        #     input.send_keys(Keys.RETURN)
        #     # have the search results appeared?
        #     searchRes = spider.wait.until(
        #         EC.presence_of_element_located((By.XPATH, "//div[@id='resultsCol']"))
        #     )
        # except Exception as e:
        #     print(f"chrome getting page error, Exception = {e}")
        #     return HtmlResponse(url=request.url, status=500, request=request)
        # else:
        #     time.sleep(3)
        #     # page fetched: build a successful Response (HtmlResponse is a subclass)
        #     return HtmlResponse(url=request.url,
        #                         body=spider.browser.page_source,
        #                         request=request,
        #                         # ideally set this from the page's actual encoding
        #                         encoding='utf-8',
        #                         status=200)
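
One caveat: the timeout branch above returns self.retry_request(request, spider), but the original post never defines that helper. A minimal sketch of what it could look like (an assumption on my part, not the author's code): return a copy of the request with dont_filter=True so the scheduler re-queues it, and track a retry counter in meta so the loop terminates.

# Hypothetical helper for the SeleniumMiddleware above; one plausible version.
class SeleniumRetryMixin:
    def retry_request(self, request, spider, max_retries=3):
        # count the retries in meta so the crawl eventually gives up
        retries = request.meta.get('selenium_retry_times', 0) + 1
        if retries > max_retries:
            spider.logger.warning("Giving up on %s after %d attempts", request.url, retries - 1)
            return None  # returning None lets the request continue down the normal download path
        spider.logger.info("Retrying %s (attempt %d)", request.url, retries)
        new_meta = dict(request.meta, selenium_retry_times=retries)
        # dont_filter=True so the scheduler's duplicate filter does not drop the retried URL
        return request.replace(meta=new_meta, dont_filter=True)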

Appendix: Selenium tutorials

Reference 1, waiting for specific elements in Selenium: https://selenium-python-zh.readthedocs.io/en/latest/waits.html
Reference 2, Selenium usage in detail: https://pythondjango.cn/python/tools/7-python_selenium/#%E5%85%83%E7%B4%A0%E5%AE%9A%E4%BD%8D%E6%96%B9%E6%B3%95
Reference 3, a hands-on walkthrough by another author: https://blog.csdn.net/zwq912318834/article/details/79773870
