Hadoop summary notes

1. Hadoop configuration files: core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, capacity-scheduler.xml, workers

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>xiemeng-01:9870</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>xiemeng-02:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>xiemeng-01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>xiemeng-02:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://xiemeng-01:8485;xiemeng-02:8485;xiemeng-03:8485/mycluster</value>
  </property>
  <!-- JournalNode working directory -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/xiemeng/software/hadoop-3.2.0/journalnode/data</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Two fencing methods, one per line: try sshfence first, then fall back to shell(/bin/true) -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/xiemeng/software/hadoop-3.2.0/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/xiemeng/software/hadoop-3.2.0/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>192.168.64.128:2181,192.168.64.130:2181,192.168.64.131:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/xiemeng/software/hadoop-3.2.0/var</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>100</value>
    <description>Number of retries a client makes to establish a server connection.</description>
  </property>
  <property>
    <name>ipc.client.connect.retry.interval</name>
    <value>10000</value>
    <description>Milliseconds a client waits before retrying to establish a server connection.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.xiemeng.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.xiemeng.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.native.lib</name>
    <value>true</value>
    <description>Should native hadoop libraries, if present, be used.</description>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1</value>
  </property>
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>1</value>
  </property>
</configuration>

yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>xiemeng-01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>xiemeng-02</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>xiemeng-01:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>xiemeng-02:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>192.168.64.128:2181,192.168.64.130:2181,192.168.64.131:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
  </property>
  <!-- Whether to run a thread that checks each task's virtual-memory usage and kills tasks that exceed the limit; default is true -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
  <!-- Enable node labels -->
  <property>
    <name>yarn.node-labels.enabled</name>
    <value>true</value>
  </property>
  <!-- Where node-label data is stored -->
  <property>
    <name>yarn.node-labels.fs-store.root-dir</name>
    <value>hdfs://mycluster/yn/node-labels/</value>
  </property>
  <!-- Enable the preemption monitor -->
  <property>
    <name>yarn.resourcemanager.scheduler.monitor.enable</name>
    <value>true</value>
  </property>
  <!-- Fraction of resources that may be preempted in one round; default is 0.1 -->
  <property>
    <name>yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round</name>
    <value>0.3</value>
  </property>
</configuration>
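
Note: yarn.node-labels.enabled only switches the feature on; the labels themselves still have to be created and attached to nodes, for example with yarn rmadmin -addToClusterNodeLabels "highmem" and yarn rmadmin -replaceLabelsOnNode "xiemeng-03=highmem" (the label name and host here are just examples), and a queue is then granted access to them through yarn.scheduler.capacity.<queue-path>.accessible-node-labels in capacity-scheduler.xml.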

workers

xiemeng-01
xiemeng-02
xiemeng-03

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/home/xiemeng/software/hadoop-3.2.0/etc/hadoop,/home/xiemeng/software/hadoop-3.2.0/share/hadoop/common/*,/home/xiemeng/software/hadoop-3.2.0/share/hadoop/common/lib/*,/home/xiemeng/software/hadoop-3.2.0/share/hadoop/hdfs/*,/home/xiemeng/software/hadoop-3.2.0/share/hadoop/hdfs/lib/*,/home/xiemeng/software/hadoop-3.2.0/share/hadoop/mapreduce/*,/home/xiemeng/software/hadoop-3.2.0/share/hadoop/mapreduce/lib/*,/home/xiemeng/software/hadoop-3.2.0/share/hadoop/yarn/*,/home/xiemeng/software/hadoop-3.2.0/share/hadoop/yarn/lib/*</value>
  </property>
</configuration>

capacity-scheduler.xml

<configuration>
  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.1</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.leaf-queue-template.ordering-policy</name>
    <value>fair</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,low</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>40</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.low.capacity</name>
    <value>60</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.low.user-limit-factor</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>60</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.low.maximum-capacity</name>
    <value>80</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.default-application-priority</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.low.default-application-priority</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.low.acl_administer_queue</name>
    <value>xiemeng,root</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.low.acl_submit_applications</name>
    <value>xiemeng,root</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
    <value>xiemeng,root</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value>xiemeng,root</value>
  </property>
</configuration>
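
With the default and low queues defined above, a job only runs in low if the submitter names that queue. One way is a single extra line in the MapReduce driver (mapreduce.job.queuename is the standard property; the queue name comes from this file):

config.set("mapreduce.job.queuename", "low");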

Starting the Hadoop HA cluster:
Step 1: On every JournalNode host, start the JournalNode daemon: sbin/hadoop-daemon.sh start journalnode
Step 2: On [nn1], format the NameNode and start it: bin/hdfs namenode -format, then sbin/hadoop-daemon.sh start namenode
Step 3: On [nn2], copy nn1's metadata to the standby NameNode: hdfs namenode -bootstrapStandby
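Step 4: Start the NameNode on [nn2] as well: sbin/hadoop-daemon.sh start namenode
Step 5: Since dfs.ha.automatic-failover.enabled is true, initialize the failover-controller znode once from one NameNode with hdfs zkfc -formatZK, then bring the remaining daemons up with sbin/start-dfs.sh and sbin/start-yarn.sh.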

View the logs of a submitted application:

yarn logs -applicationId application_1607776903207_0002

2. Basic architecture: ResourceManager, NodeManager, ApplicationMaster, and the main job-submission flow

3. Hadoop command-line operations

hdfs dfs -put [-f] [-p] <localsrc> ... <dst>
hdfs dfs -get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>
hdfs dfs -put [local dir] [HDFS dir]
hadoop fs -mkdir -p <hdfs dir>
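
A few more everyday commands in the same style (the paths reuse the word-count example later in these notes):

hdfs dfs -ls /wordcount/input
hdfs dfs -cat /wordcount2/output/part-r-00000
hdfs dfs -rm -r /wordcount2/output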

4. Hadoop Java API operations

  Mapper, Reducer, InputFormat, OutputFormat, Comparator, Partitioner, compression — examples for each follow below.

public class WordCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    /** Initialization hook, called once per map task. */
    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        super.setup(context);
    }

    /** User logic: split each line into words and emit (word, 1). */
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String str = value.toString();
        String[] words = StringUtils.split(str);
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }
    }

    /** Cleanup hook, called once per map task. */
    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        super.cleanup(context);
    }
}
public class WordCountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        context.write(key, new LongWritable(count));
    }
}
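
The driver below registers a WordCountCombiner class. A minimal sketch of it, assuming the combiner simply pre-aggregates counts on the map side the same way the reducer does:

public class WordCountCombiner extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        // Runs on the map side after spill; sums local counts so less data crosses the network.
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        context.write(key, new LongWritable(count));
    }
}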


public class WordCountDriver {

    public static void main(String[] args) {
        Configuration config = new Configuration();
        System.setProperty("HADOOP_USER_NAME", "xiemeng");
        config.set("fs.defaultFS", "hdfs://192.168.64.128:9870");
        config.set("mapreduce.framework.name", "yarn");
        config.set("yarn.resourcemanager.hostname", "192.168.64.128");
        config.set("mapreduce.app-submission.cross-platform", "true");
        config.set("mapreduce.job.jar", "file:/D:/code/hadoop-start-demo/target/hadoop-start-demo-1.0-SNAPSHOT.jar");
        try {
            Job job = Job.getInstance(config);
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCountMapper.class);
            job.setCombinerClass(WordCountCombiner.class);
            job.setReducerClass(WordCountReducer.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(LongWritable.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.setInputPaths(job, new Path("/wordcount/input"));
            FileOutputFormat.setOutputPath(job, new Path("/wordcount2/output"));
            // job.setGroupingComparatorClass(OrderGroupintComparator.class); // belongs to the OrderBean example below, not to word count
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);
            boolean complete = job.waitForCompletion(true);
            System.exit(complete ? 0 : 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
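
The driver above only compresses the final output with BZip2; intermediate map output can be compressed as well, which usually matters more for shuffle-heavy jobs. A minimal sketch of the two extra driver settings (standard MapReduce property names; the Snappy codec choice is just an example and needs the native library):

config.setBoolean("mapreduce.map.output.compress", true);
config.setClass("mapreduce.map.output.compress.codec", SnappyCodec.class, CompressionCodec.class);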
public class OrderGroupintComparator extends WritableComparator {

    public OrderGroupintComparator() {
        super(OrderBean.class, true);
    }

    @Override
    public int compare(Object o1, Object o2) {
        OrderBean orderBean = (OrderBean) o1;
        OrderBean orderBean2 = (OrderBean) o2;
        if (orderBean.getOrderId() > orderBean2.getOrderId()) {
            return 1;
        } else if (orderBean.getOrderId() < orderBean2.getOrderId()) {
            return -1;
        } else {
            return 0;
        }
    }
}
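
The grouping comparator above expects an OrderBean key, which these notes don't define elsewhere. A minimal sketch, assuming the bean carries an order id and a price (field names and types beyond getOrderId() are assumptions):

public class OrderBean implements WritableComparable<OrderBean> {

    private long orderId;   // assumed field: which order the record belongs to
    private double price;   // assumed field: line-item price

    public long getOrderId() {
        return orderId;
    }

    @Override
    public int compareTo(OrderBean o) {
        // Group by order id; within an order, the most expensive item sorts first.
        if (this.orderId != o.orderId) {
            return Long.compare(this.orderId, o.orderId);
        }
        return Double.compare(o.price, this.price);
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(orderId);
        out.writeDouble(price);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        orderId = in.readLong();
        price = in.readDouble();
    }
}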
public class FilterOutputFormat extends FileOutputFormat<Text, NullWritable> {

    @Override
    public RecordWriter<Text, NullWritable> getRecordWriter(TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        return new CustomWriter(taskAttemptContext);
    }

    protected static class CustomWriter extends RecordWriter<Text, NullWritable> {

        private FileSystem fs;
        private FSDataOutputStream fos;
        private TaskAttemptContext context;

        public CustomWriter(TaskAttemptContext context) {
            this.context = context;
        }

        @Override
        public void write(Text text, NullWritable nullWritable) throws IOException, InterruptedException {
            // Route each record to a different output file depending on its prefix.
            fs = FileSystem.get(context.getConfiguration());
            String key = text.toString();
            Path path;
            if (StringUtils.startsWith(key, "137")) {
                path = new Path("file:/D:/hadoop/output/format/out/137/");
            } else {
                path = new Path("file:/D:/hadoop/output/format/out/138/");
            }
            fos = fs.create(path, true);
            fos.write(text.toString().getBytes());
        }

        @Override
        public void close(TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
            IOUtils.closeQuietly(fos);
            IOUtils.closeQuietly(fs);
        }
    }
}
public class WholeFileInputFormat extends FileInputFormat<Text, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        return false;
    }

    @Override
    public RecordReader<Text, BytesWritable> createRecordReader(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        WholeRecordReader reader = new WholeRecordReader();
        reader.initialize(inputSplit, taskAttemptContext);
        return reader;
    }
}
@Data
public class FlowBeanObj implements Writable, WritableComparable<FlowBeanObj> {

    private long upFlow;
    private long downFlow;
    private long sumFlow;

    // Writable needs an explicit serialization order; readFields must mirror write.
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    // Sort by total flow (sumFlow), smallest first.
    @Override
    public int compareTo(FlowBeanObj o) {
        if (o.getSumFlow() > this.getSumFlow()) {
            return -1;
        } else if (o.getSumFlow() < this.getSumFlow()) {
            return 1;
        } else {
            return 0;
        }
    }
}
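
The component list in section 4 also mentions a Partitioner. A minimal sketch that would pair with FlowBeanObj, assuming the map key is the phone number as Text and records are routed to reducers by its prefix (class name and prefixes are illustrative); the driver would also need job.setPartitionerClass(ProvincePartitioner.class) and job.setNumReduceTasks(4):

public class ProvincePartitioner extends Partitioner<Text, FlowBeanObj> {

    @Override
    public int getPartition(Text key, FlowBeanObj value, int numPartitions) {
        // Pick a reducer by the first three digits of the phone number (assumes 4 reduce tasks).
        String phone = key.toString();
        String prefix = phone.length() >= 3 ? phone.substring(0, 3) : phone;
        switch (prefix) {
            case "136": return 0;
            case "137": return 1;
            case "138": return 2;
            default:    return 3; // everything else goes to the last bucket
        }
    }
}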

public class WholeRecordReader extends RecordReader<Text, BytesWritable> {

    private Configuration config;
    private FileSplit fileSplit;
    private boolean isProgress = true;
    private BytesWritable value = new BytesWritable();
    private Text k = new Text();
    private FileSystem fs;
    private FSDataInputStream fis;

    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext context) throws IOException, InterruptedException {
        fileSplit = (FileSplit) inputSplit;
        this.config = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        try {
            if (isProgress) {
                // Read the entire file into a single value; the key is the file path.
                byte[] contents = new byte[(int) fileSplit.getLength()];
                Path path = fileSplit.getPath();
                fs = path.getFileSystem(config);
                fis = fs.open(path);
                IOUtils.readFully(fis, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
                k.set(fileSplit.getPath().toString());
                isProgress = false;
                return true;
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            IOUtils.closeQuietly(fis);
        }
        return false;
    }

    @Override
    public Text getCurrentKey() throws IOException, InterruptedException {
        return k;
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return 0;
    }

    @Override
    public void close() throws IOException {
        fis.close();
        fs.close();
    }
}
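
The custom input and output formats above still have to be registered in a driver; a minimal sketch of the two extra lines, assuming a Job variable named job as in the word-count driver:

job.setInputFormatClass(WholeFileInputFormat.class);   // read each input file whole
job.setOutputFormatClass(FilterOutputFormat.class);    // route output records by key prefix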
public class HdfsClient {

    public static void main(String[] args) throws URISyntaxException, IOException, InterruptedException {
        Configuration config = new Configuration();
        config.set("fs.defaultFS", "hdfs://localhost:9000");
        config.set("dfs.replication", "2");
        FileSystem fs = FileSystem.get(new URI("hdfs://localhost:9000"), config, "xieme");

        // Basic directory and file operations.
        fs.mkdirs(new Path("/hive3"));
        fs.copyFromLocalFile(new Path("file:/d:/elasticsearch.txt"), new Path("/hive3"));
        fs.copyToLocalFile(false, new Path("/hive3/elasticsearch.txt"), new Path("file:/d:/hive3/elasticsearch2.txt"));
        fs.rename(new Path("/hive3/elasticsearch.txt"), new Path("/hive3/elasticsearch2.txt"));

        // Recursively list files with their block locations.
        RemoteIterator<LocatedFileStatus> locatedFileStatusRemoteIterator = fs.listFiles(new Path("/"), true);
        while (locatedFileStatusRemoteIterator.hasNext()) {
            LocatedFileStatus next = locatedFileStatusRemoteIterator.next();
            System.out.print(next.getPath().getName() + "\t");
            System.out.print(next.getLen() + "\t");
            System.out.print(next.getGroup() + "\t");
            System.out.print(next.getOwner() + "\t");
            System.out.print(next.getPermission() + "\t");
            System.out.print(next.getPath() + "\t");
            BlockLocation[] blockLocations = next.getBlockLocations();
            for (BlockLocation queue : blockLocations) {
                for (String host : queue.getHosts()) {
                    System.out.print(host + "\t");
                }
            }
            System.out.println("");
        }
        // fs.delete(new Path("/hive3"), true);
        /* FileStatus[] fileStatuses = fs.listStatus(new Path("/"));
        for (FileStatus fileStatus : fileStatuses) {
            if (fileStatus.isDirectory()) {
                System.out.println(fileStatus.getPath().getName());
            }
        } */

        // Stream copy: upload via raw streams.
        FileInputStream fis = new FileInputStream("d:/elasticsearch.txt");
        FSDataOutputStream fos = fs.create(new Path("/hive/elasticsearch.txt"));
        IOUtils.copyBytes(fis, fos, config);
        IOUtils.closeStream(fis);
        IOUtils.closeStream(fos);

        // Stream copy: download from a given offset.
        FSDataInputStream fis2 = fs.open(new Path("/hive/elasticsearch.txt"));
        FileOutputStream fos2 = new FileOutputStream("d:/elasticsearch.tar.gz.part1");
        fis2.seek(1);
        IOUtils.copyBytes(fis2, fos2, config);
        /* byte[] buf = new byte[1024];
        for (int i = 0; i < 128; i++) {
            while (fis2.read(buf) != -1) {
                fos2.write(buf);
            }
        } */
        IOUtils.closeStream(fis2);
        IOUtils.closeStream(fos2);
        fs.close();
    }
}

5. Hadoop optimization
