Hue 4.11 containerized deployment, integrated with Hive and Hadoop

This guide pairs well with the companion article "Hue 部署过程中的报错处理" (Handling Errors During Hue Deployment).

Official connector configuration docs:
https://docs.gethue.com/administrator/configuration/connectors/
Official hue.ini reference:
https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini

Docker Deployment

Notes

  • For the first deployment, comment out the hue.ini volume mapping; once the container is up, copy the configuration file out to the target directory, then uncomment the mapping and redeploy.
    Alternatively (recommended), copy the default hue.ini from the address below, save it under /data/hue/, and keep the mapping in place (a fetch sketch follows this list). https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini
  • If Hive and Hadoop are already deployed, finish the volume-mapping configuration before deploying the Hue container.
  • If Hive and Hadoop are not deployed yet, ignore those mappings for now; come back later, adjust the configuration, and redeploy the Hue container when needed.
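A minimal sketch of the recommended approach, assuming curl is available on the host and that the raw-file URL below (inferred from the GitHub repository link above) is reachable:

# Create the host-side config directory
mkdir -p /data/hue
# Fetch the default hue.ini shipped with Hue (raw URL assumed from the repo link above)
curl -fL -o /data/hue/hue.ini https://raw.githubusercontent.com/cloudera/hue/master/desktop/conf.dist/hue.ini
# Alternative: copy the file out of a temporarily started container
# docker run -d --name hue-tmp gethue/hue:4.11.0
# docker cp hue-tmp:/usr/share/hue/desktop/conf/hue.ini /data/hue/hue.ini
# docker rm -f hue-tmp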
version: '3.3'  # Docker Compose file format version
services:  # Service definitions
  hue:  # Service name
    image: gethue/hue:4.11.0  # Image and tag to use
    container_name: hue  # Container name
    restart: always  # Always restart the container
    privileged: true  # Run the container in privileged mode, granting extra permissions
    hostname: hue  # Container hostname
    ports:  # Port mappings
      - "9898:8888"  # Map container port 8888 to host port 9898
    environment:  # Environment variables
      - TZ=Asia/Shanghai  # Set the time zone to Asia/Shanghai
    volumes:  # Volume mappings
      - /data/hue/hue.ini:/usr/share/hue/desktop/conf/hue.ini  # Map the host hue.ini into the container
      - /etc/localtime:/etc/localtime  # Keep container time in sync with the host
      - /opt/hive-3.1.3:/opt/hive-3.1.3  # Map the host Hive directory into the container
      - /opt/hadoop-3.3.0:/opt/hadoop-3.3.0  # Map the host Hadoop directory into the container
    networks:
      hue_default:
        ipv4_address: 172.15.0.2  # Static IP address
networks:  # Network definitions
  hue_default:
    driver: bridge  # Use the bridge driver
    ipam:
      config:
        - subnet: 172.15.0.0/16  # Subnet range
# 1. Create the Hue deployment file
vi docker-compose-hue.yml
# 2. Paste the deployment content above into docker-compose-hue.yml
# 3. Deploy Hue with docker compose
docker compose -f docker-compose-hue.yml up -d
# 4. Verify the deployment: open Hue from a browser on the same LAN
# Since this is your first login, choose any username and password. Be sure to remember them,
# as they become your Hue superuser credentials.
<host-ip>:9898
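A quick command-line check, in case a browser is not handy (a minimal sketch; 192.168.10.75 stands in for your own host IP):

# Confirm the container is up and the web UI answers on the mapped port
docker ps --filter name=hue
curl -I http://192.168.10.75:9898/   # expect an HTTP response, typically a redirect to the login page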

Hue Initialization

  1. Modify the hue.ini configuration file
vi /data/hue/hue.ini
# Change the webserver listen address
http_host=127.0.0.1
# Change the time zone
time_zone=Asia/Shanghai

# Configure the database that stores Hue metadata
[[database]]
engine=mysql
host=192.168.10.75
port=3306
user=root
password=HoMf@123
name=hue

# Configure a Hue connector for querying the MySQL database
[[interpreters]]
# Define the name and how to connect and execute the language.
# https://docs.gethue.com/administrator/configuration/editor/
[[[mysql]]]
name = MySQL
interface=sqlalchemy
## https://docs.sqlalchemy.org/en/latest/dialects/mysql.html
options='{"url": "mysql://root:HoMf@123@192.168.10.75:3306/hue_meta"}'
## options='{"url": "mysql://${USER}:${PASSWORD}@localhost:3306/hue"}'
  2. Create the databases
# Log in to MySQL
mysql -uroot -pHoMf@123

# Create the hue database
mysql> create database `hue` default character set utf8mb4 default collate utf8mb4_general_ci;
Query OK, 1 row affected (0.00 sec)

# Create the hue_meta database
mysql> create database `hue_meta` default character set utf8mb4 default collate utf8mb4_general_ci;
Query OK, 1 row affected (0.00 sec)
  3. Apply the configuration file, restart the container, enter it, and initialize the database
# Restart the hue container
docker restart hue
# Enter the hue container
docker exec -it hue bash
# Run the database initialization
/usr/share/hue/build/env/bin/hue syncdb
/usr/share/hue/build/env/bin/hue migrate
# Leave the container with Ctrl+D, or
exit
  • Detailed database initialization commands and output
hue@hue:/usr/share/hue$ /usr/share/hue/build/env/bin/hue syncdb
[22/Nov/2024 15:38:07 +0800] settings     INFO     Welcome to Hue 4.11.0
[22/Nov/2024 15:38:08 +0800] conf         WARNING  enable_extract_uploaded_archive is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  enable_new_create_table is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  force_hs2_metadata is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:08 +0800] conf         WARNING  show_table_erd is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[21/Nov/2024 23:38:08 -0800] backend      WARNING  mozilla_django_oidc module not found
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: BEGIN LOG
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: Using django-axes version 5.13.0
[21/Nov/2024 23:38:09 -0800] apps         INFO     AXES: blocking by IP only.
[21/Nov/2024 23:38:09 -0800] api3         WARNING  simple_salesforce module not found
[21/Nov/2024 23:38:09 -0800] jdbc         WARNING  Failed to import py4j
[21/Nov/2024 23:38:10 -0800] schemas      INFO     Resource 'XMLSchema.xsd' is already loaded
No changes detected
hue@hue:/usr/share/hue$ /usr/share/hue/build/env/bin/hue migrate
[22/Nov/2024 15:38:33 +0800] settings     INFO     Welcome to Hue 4.11.0
[22/Nov/2024 15:38:33 +0800] conf         WARNING  enable_extract_uploaded_archive is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  enable_new_create_table is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  force_hs2_metadata is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[22/Nov/2024 15:38:33 +0800] conf         WARNING  show_table_erd is of type bool. Resetting it as type 'coerce_bool'. Please fix it permanently
[21/Nov/2024 23:38:33 -0800] backend      WARNING  mozilla_django_oidc module not found
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: BEGIN LOG
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: Using django-axes version 5.13.0
[21/Nov/2024 23:38:34 -0800] apps         INFO     AXES: blocking by IP only.
[21/Nov/2024 23:38:34 -0800] api3         WARNING  simple_salesforce module not found
[21/Nov/2024 23:38:34 -0800] jdbc         WARNING  Failed to import py4j
[21/Nov/2024 23:38:35 -0800] schemas      INFO     Resource 'XMLSchema.xsd' is already loaded
Operations to perform:
  Apply all migrations: auth, authtoken, axes, beeswax, contenttypes, desktop, jobsub, oozie, pig, sessions, sites, useradmin
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying auth.0012_alter_user_first_name_max_length... OK
  Applying authtoken.0001_initial... OK
  Applying authtoken.0002_auto_20160226_1747... OK
  Applying authtoken.0003_tokenproxy... OK
  Applying axes.0001_initial... OK
  Applying axes.0002_auto_20151217_2044... OK
  Applying axes.0003_auto_20160322_0929... OK
  Applying axes.0004_auto_20181024_1538... OK
  Applying axes.0005_remove_accessattempt_trusted... OK
  Applying axes.0006_remove_accesslog_trusted... OK
  Applying axes.0007_add_accesslog_trusted... OK
  Applying axes.0008_remove_accesslog_trusted... OK
  Applying beeswax.0001_initial... OK
  Applying beeswax.0002_auto_20200320_0746... OK
  Applying beeswax.0003_compute_namespace... OK
  Applying desktop.0001_initial... OK
  Applying desktop.0002_initial... OK
  Applying desktop.0003_initial... OK
  Applying desktop.0004_initial... OK
  Applying desktop.0005_initial... OK
  Applying desktop.0006_initial... OK
  Applying desktop.0007_initial... OK
  Applying desktop.0008_auto_20191031_0704... OK
  Applying desktop.0009_auto_20191202_1056... OK
  Applying desktop.0010_auto_20200115_0908... OK
  Applying desktop.0011_document2_connector... OK
  Applying desktop.0012_connector_interface... OK
  Applying desktop.0013_alter_document2_is_trashed... OK
  Applying jobsub.0001_initial... OK
  Applying oozie.0001_initial... OK
  Applying oozie.0002_initial... OK
  Applying oozie.0003_initial... OK
  Applying oozie.0004_initial... OK
  Applying oozie.0005_initial... OK
  Applying oozie.0006_auto_20200714_1204... OK
  Applying oozie.0007_auto_20210126_2113... OK
  Applying oozie.0008_auto_20210216_0216... OK
  Applying pig.0001_initial... OK
  Applying pig.0002_auto_20200714_1204... OK
  Applying sessions.0001_initial... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
  Applying useradmin.0001_initial... OK
  Applying useradmin.0002_userprofile_json_data... OK
  Applying useradmin.0003_auto_20200203_0802... OK
  Applying useradmin.0004_userprofile_hostname... OK
[21/Nov/2024 23:38:42 -0800] models       INFO     HuePermissions: 34 added, 0 updated, 0 up to date, 0 stale, 0 deleted
  4. Check that the tables were created, to confirm the database initialization finished
# Log in to MySQL
mysql -uroot -pHoMf@123

# Switch to the hue database
mysql> use hue;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

# List the tables in the hue database
mysql> show tables;
+--------------------------------+
| Tables_in_hue                  |
+--------------------------------+
| auth_group                     |
| auth_group_permissions         |
| auth_permission                |
| auth_user                      |
| auth_user_groups               |
| auth_user_user_permissions     |
| authtoken_token                |
| axes_accessattempt             |
| axes_accesslog                 |
| beeswax_compute                |
| beeswax_metainstall            |
| beeswax_namespace              |
| beeswax_queryhistory           |
| beeswax_savedquery             |
| beeswax_session                |
| defaultconfiguration_groups    |
| desktop_connector              |
| desktop_defaultconfiguration   |
| desktop_document               |
| desktop_document2              |
| desktop_document2_dependencies |
| desktop_document2permission    |
| desktop_document_tags          |
| desktop_documentpermission     |
| desktop_documenttag            |
| desktop_settings               |
| desktop_userpreferences        |
| django_content_type            |
| django_migrations              |
| django_session                 |
| django_site                    |
| documentpermission2_groups     |
| documentpermission2_users      |
| documentpermission_groups      |
| documentpermission_users       |
| jobsub_checkforsetup           |
| jobsub_jobdesign               |
| jobsub_jobhistory              |
| jobsub_oozieaction             |
| jobsub_ooziedesign             |
| jobsub_ooziejavaaction         |
| jobsub_ooziemapreduceaction    |
| jobsub_ooziestreamingaction    |
| oozie_bundle                   |
| oozie_bundledcoordinator       |
| oozie_coordinator              |
| oozie_datainput                |
| oozie_dataoutput               |
| oozie_dataset                  |
| oozie_decision                 |
| oozie_decisionend              |
| oozie_distcp                   |
| oozie_email                    |
| oozie_end                      |
| oozie_fork                     |
| oozie_fs                       |
| oozie_generic                  |
| oozie_history                  |
| oozie_hive                     |
| oozie_java                     |
| oozie_job                      |
| oozie_join                     |
| oozie_kill                     |
| oozie_link                     |
| oozie_mapreduce                |
| oozie_node                     |
| oozie_pig                      |
| oozie_shell                    |
| oozie_sqoop                    |
| oozie_ssh                      |
| oozie_start                    |
| oozie_streaming                |
| oozie_subworkflow              |
| oozie_workflow                 |
| pig_document                   |
| pig_pigscript                  |
| useradmin_grouppermission      |
| useradmin_huepermission        |
| useradmin_ldapgroup            |
| useradmin_userprofile          |
+--------------------------------+
80 rows in set (0.01 sec)

Deployment Verification

At this point you should be able to open the Hue UI normally, and MySQL should appear in the Sources list.

Configuring Hive

First add the volume mappings so the Hive configuration directory is also accessible inside the container.
Modify the hue.ini configuration file, then restart the container.
Once configured, Hive appears in the Sources list of the Hue UI.

# Modify the Hue configuration file
vi /data/hue/hue.ini

# Just uncomment these entries
[[[hive]]]
name=Hive
interface=hiveserver2

[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
# Hive host address
hive_server_host=192.168.10.75
# Binary thrift port for HiveServer2.
# Must match hive.server2.thrift.port in hive-site.xml
hive_server_port=10000
# Hive configuration directory, where hive-site.xml is located
# For a containerized deployment, this directory must be mapped into the container
hive_conf_dir=/opt/hive-3.1.3/conf
# Timeout in seconds for thrift calls to Hive service
server_conn_timeout=120
# Override the default desktop username and password of the hue user used for authentications with other services.
# e.g. Used for LDAP/PAM pass-through authentication.
auth_username=hue
auth_password=root

[metastore]
# Flag to turn on the new version of the create table wizard.
enable_new_create_table=true
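Before restarting the container, it can help to confirm that HiveServer2 is actually reachable on the host and port configured above. A minimal sketch, assuming beeline from the mapped Hive installation is usable on the host and that the port matches hive.server2.thrift.port in your hive-site.xml:

# Check that the HiveServer2 Thrift port is open
nc -zv 192.168.10.75 10000
# Or attempt a full JDBC connection with beeline (adjust the port to your hive-site.xml)
/opt/hive-3.1.3/bin/beeline -u "jdbc:hive2://192.168.10.75:10000" -n hue -e "show databases;"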
  • My hive-site.xml, for reference
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive-3.1.3/warehouse</value>
    <description/>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://Linux-Master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
    <description>Metastore database connection</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description/>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description/>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>HoMf@123</value>
    <description/>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/home/hadoop/logs/hive-3.1.3/job-logs/${user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive-3.1.3/tmp</value>
  </property>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>11000</value>
  </property>
</configuration>

Configuring Hadoop

Configuring HDFS

Modify hue.ini

# Modify the Hue configuration file
vi /data/hue/hue.ini

# HDFS cluster configuration
[[hdfs_clusters]]
# HA support by using HttpFs
[[[default]]]
fs_defaultfs=hdfs://192.168.10.75:9000
webhdfs_url=http://192.168.10.75:9870/webhdfs/v1
hadoop_conf_dir=/opt/hadoop-3.3.0/etc/hadoop

# YARN cluster configuration
[[yarn_clusters]]
[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=192.168.10.75
# The port where the ResourceManager IPC listens on
resourcemanager_port=8032
# Whether to submit jobs to this cluster
submit_to=True
# Resource Manager logical name (required for HA)
## logical_name=
# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false
# URL of the ResourceManager API
resourcemanager_api_url=http://192.168.10.75:8088
# URL of the ProxyServer API
proxy_api_url=http://192.168.10.75:8088
# URL of the HistoryServer API
history_server_api_url=http://localhost:19888
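Before restarting Hue, the two endpoints configured above can be probed directly (a minimal sketch with curl; the user.name passed to WebHDFS is assumed to be a valid HDFS user on your cluster):

# WebHDFS: list the HDFS root through the NameNode HTTP port
curl "http://192.168.10.75:9870/webhdfs/v1/?op=LISTSTATUS&user.name=root"
# ResourceManager REST API: basic cluster information
curl "http://192.168.10.75:8088/ws/v1/cluster/info"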

Modify Hadoop's core-site.xml

<!-- Hosts allowed to access HDFS through httpfs -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<!-- User groups allowed to access HDFS through httpfs -->
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
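For these proxy-user settings to take effect they must be loaded by the NameNode. A minimal sketch, assuming the Hadoop installation from the volume mapping above and sufficient HDFS superuser privileges:

# Distribute the updated core-site.xml to all nodes, then either reload the
# proxy-user settings in place...
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
# ...or restart HDFS entirely
/opt/hadoop-3.3.0/sbin/stop-dfs.sh
/opt/hadoop-3.3.0/sbin/start-dfs.sh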
  • My core-site.xml, for reference
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-3.3.0/tmp</value>
    <description>Abase for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <!-- The IP address of the master node -->
    <value>hdfs://192.168.10.75:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
  </property>
</configuration>

Modify Hadoop's hdfs-site.xml

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
  • My hdfs-site.xml, for reference
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- NameNode web UI address -->
  <property>
    <name>dfs.namenode.http-address</name>
    <!-- Change this to your master node hostname -->
    <value>linux-master:9870</value>
  </property>
  <!-- Secondary NameNode web UI address -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <!-- Change this to your first worker node hostname -->
    <value>linux-slave01:9868</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

Configuring the YARN Cluster

  • Configure Hadoop's yarn-site.xml
<!-- Enable log aggregation -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<!-- Log server address -->
<property>
  <name>yarn.log.server.url</name>
  <!-- The IP address of the master node -->
  <value>http://192.168.10.75:19888/jobhistory/logs</value>
</property>
<!-- Keep aggregated logs for 7 days -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
  • My yarn-site.xml, for reference
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <!-- Site specific YARN configuration properties -->
  <!-- Use shuffle as the MapReduce auxiliary service -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- ResourceManager address -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <!-- Change this to your master node hostname -->
    <value>linux-master</value>
  </property>
  <!-- Environment variables inherited by containers -->
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
  <!-- Enable log aggregation -->
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <!-- Log server address -->
  <property>
    <name>yarn.log.server.url</name>
    <!-- The IP address of the master node -->
    <value>http://192.168.10.75:19888/jobhistory/logs</value>
  </property>
  <!-- Keep aggregated logs for 7 days -->
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
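The log-aggregation settings only apply after YARN is restarted, and yarn.log.server.url above assumes a running MapReduce JobHistory server on port 19888. A minimal sketch using the standard Hadoop 3 scripts, assuming the installation path from the volume mapping above:

# Restart YARN so the new configuration is loaded
/opt/hadoop-3.3.0/sbin/stop-yarn.sh
/opt/hadoop-3.3.0/sbin/start-yarn.sh
# Start the JobHistory server that backs yarn.log.server.url
mapred --daemon start historyserver
# Finally, restart the Hue container so the hue.ini changes take effect
docker restart hue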
