MNN createSession: Creating the Pipeline Backend (Part 4)


Series Table of Contents

MNN createFromBuffer (Part 1)
MNN createRuntime (Part 2)
MNN createSession: Schedule (Part 3)
MNN createSession: Creating the Pipeline Backend (Part 4)
MNN Session::resize: Pipeline Encoding (Part 5)
MNN Session: Creating Executors (Part 6)


Contents

  • Series Table of Contents
  • 1. createSession
    • 1.1 createMultiPathSession
    • 1.1.1 The Session Class and ModeGroup
    • 1.1.2 Session::Session
    • 1.1.2.1 _createPipelineBackend
    • 1.1.2.1.1 VulkanRuntime::onCreate
    • 1.1.2.1.1.1 VulkanBackend::VulkanBackend
    • 1.1.2.1.2 CPURuntime::onCreate
    • 1.1.2.1.2.1 CPUBackend::CPUBackend
    • 1.1.2.2 The Pipeline Class: TuningAttr and UnitInfo
    • 1.1.2.3 Pipeline::Pipeline
    • 1.1.2.3.1 GeometryComputer::Context


1. createSession


    Creates a session from the given ScheduleConfig and RuntimeInfo.

// source/core/Interpreter.cpp
Session* Interpreter::createSession(const ScheduleConfig& config, const RuntimeInfo& runtime) {
    return createMultiPathSession({config}, runtime);
}
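
Before diving in, here is a minimal usage sketch of how an application reaches this entry point. This is not from the MNN source; the model path and thread count are placeholder values:

#include <MNN/Interpreter.hpp>
#include <memory>

int main() {
    // "model.mnn" is a placeholder path for this sketch.
    std::shared_ptr<MNN::Interpreter> net(MNN::Interpreter::createFromFile("model.mnn"));
    if (net == nullptr) {
        return 1;
    }
    MNN::ScheduleConfig config;
    config.type       = MNN_FORWARD_VULKAN; // requested backend
    config.backupType = MNN_FORWARD_CPU;    // fallback if Vulkan is unavailable
    config.numThread  = 4;

    // This call lands in Interpreter::createSession above.
    MNN::Session* session = net->createSession(config);
    return session != nullptr ? 0 : 1;
}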

1.1 createMultiPathSession

// source/core/Interpreter.cpp
Session* Interpreter::createMultiPathSession(const std::vector<ScheduleConfig>& configs, const RuntimeInfo& runtime) {
    // ...
    auto newSession =
        std::unique_ptr<Session>(new Session(std::move(info), mNet->modes, std::move(rt)));
    if (!newSession->valid()) {
        MNN_PRINT("Invalide Session!!\n");
        return nullptr;
    }
    auto result         = newSession.get();
    auto validForResize = info.validForResize;
    if (validForResize && mNet->modes.inputMode == Session_Input_Inside
        && mNet->modes.resizeMode == Session_Resize_Direct) {
        result->resize();
    }
    if ((!mNet->cacheFile.empty()) && (!valid) && mNet->modes.backendMode == Session_Backend_Fix) {
        // Try to save extra cache
        auto buffer = result->getCache();
        if (buffer.first != nullptr && buffer.second > 0) {
            MNN_PRINT("Write cache to %s, size = %zu\n", mNet->cacheFile.c_str(), buffer.second);
            writeCacheFile(mNet, buffer);
            mNet->lastCacheSize = buffer.second;
            // Write Cache
            cacheMode = cacheMode | 2;
        }
    }
    // Reset cache
    result->loadCache(nullptr, 0);

    mNet->sessions.emplace_back(std::move(newSession));
#ifdef MNN_INTERNAL_ENABLED
    int precision = BackendConfig::Precision_Normal;
    if (nullptr != configs[0].backendConfig) {
        precision = configs[0].backendConfig->precision;
    }
    int mode = configs[0].mode;
    mNet->sessionInfo.insert(std::make_pair(result, std::make_tuple(precision, mode)));
    if (shouldLog(FREQ_HIGH)) {
        std::map<std::string, std::string> metrics = mNet->basicLogginData;
        metrics.emplace("UUID", mNet->uuid);
        metrics.emplace("Time", std::to_string((float)_timer.durationInUs() / 1024.0f));
        metrics.emplace("Backend", std::to_string(configs[0].type));
        metrics.emplace("Precision", std::to_string(precision));
        metrics.emplace("Mode", std::to_string(mode));
        metrics.emplace("Cache", std::to_string(cacheMode));
        metrics.emplace("CacheSize", std::to_string((float)(mNet->lastCacheSize / 1024.0f)));
        metrics.emplace("ModelSize", std::to_string((float)mNet->buffer.size() / 1024.0f / 1024.0f));
        metrics.emplace("Usage", std::to_string((int)mNet->net->usage()));
        metrics.emplace("API", "Interpreter::createMultiPathSession");
        logAsync(metrics);
    }
#endif // MNN_INTERNAL_ENABLED
    return result;
}
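
Note that the cache-writing branch above only runs when a cache file has been registered on the Interpreter. Continuing the usage sketch from above (the file name is a placeholder):

// Register a cache file before creating the session; the branch above can
// then persist tuning results (useful for GPU backends).
net->setCacheFile("mnn_cache.bin"); // placeholder file name
MNN::Session* session = net->createSession(config);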

1.1.1 Session 类 ModeGroup

// source/core/Session.hpp
class MNN_PUBLIC Session {
public:
    struct ModeGroup {
        Interpreter::SessionMode callBackMode    = Interpreter::Session_Debug;
        Interpreter::SessionMode inputMode       = Interpreter::Session_Input_Inside;
        Interpreter::SessionMode outputMode      = Interpreter::Session_Output_Inside;
        Interpreter::SessionMode backendMode     = Interpreter::Session_Backend_Fix;
        Interpreter::SessionMode resizeMode      = Interpreter::Session_Resize_Direct;
        Interpreter::SessionMode memoryUsageMode = Interpreter::Session_Memory_Collect;
        Interpreter::SessionMode codegenMode     = Interpreter::Session_Codegen_Disable;
        int memoryAllocatorType = 0;
        int maxTuningNumber     = MNN_DEFAULT_TUNING_NUMBER;
    };
    Session(Schedule::ScheduleInfo&& info, const ModeGroup& mode, RuntimeInfo&& runtime);
    ~Session();
    Session* clone(RuntimeInfo&& runtime, std::shared_ptr<Schedule::ScheduleInfo> sharedConst);

public:
    /**
     * @brief infer.
     * @return result code.
     */
    ErrorCode run() const;
    /**
     * @brief infer with callbacks and sync option.
     * @param enterCallback callback before each op.
     * @param exitCallback  callback after each op.
     * @param sync          wait until all ops done before return or not.
     * @return result code.
     */
    ErrorCode runWithCallBack(const TensorCallBackWithInfo& enterCallback,
                              const TensorCallBackWithInfo& exitCallback, bool sync = false) const;

    bool getInfo(Interpreter::SessionInfoCode code, void* ptr) const;

public:
    /**
     * @brief resize tensors and buffers responding to input changes.
     * @return result code.
     */
    ErrorCode resize();

    /**
     * @brief set if needs resize.
     * @param flag  needs resize or not.
     */
    void setNeedResize(bool flag = true) {
        mNeedResize = flag;
    }

    void setNeedMalloc(bool flag = true) {
        mNeedMalloc = flag;
    }

    Runtime* getCPURuntime() {
        return mRuntime.second.get();
    }

public:
    /**
     * @brief get backend that create the tensor.
     * @param tensor    given tensor.
     * @return backend that create the tensor, NULL if the tensor is created by default backend (CPU backend).
     */
    const Backend* getBackEnd(const Tensor* tensor) const;

    /**
     * @brief get input tensor for given op name.
     * @param name given op name. if NULL, return first input tensor.
     * @return input tensor if found, NULL otherwise.
     */
    Tensor* getInput(const char* name) const;

    /**
     * @brief get output tensor for given op name.
     * @param name given op name. if NULL, return first output tensor.
     * @return output tensor if found, NULL otherwise.
     */
    Tensor* getOutput(const char* name) const;

    /**
     * @brief get output tensors map.
     * @return get output tensors map.
     */
    const std::map<std::string, Tensor*>& getOutputAll() const;
    const std::map<std::string, Tensor*>& getInputAll() const;

    /**
     * @brief check session is valid or not.
     * @return session is valid or not.
     */
    inline bool valid() const {
        return mValid;
    }

    /**
     * @brief update the session's const value to origin model's const blob.
     * @return errorcode
     */
    ErrorCode updateToModel(Net* net) const;

    void waitAsyncResize();
    bool hasAsyncWork();
    bool loadCache(const void* buffer, size_t size);
    std::pair<const void*, size_t> getCache();

    Tensor* getTensor(int index) const;
    Schedule::PipelineInfo& getPipelineInfo(int index) const;

protected:
    const std::vector<std::shared_ptr<Pipeline>>& getPipelines() const {
        return this->mPipelines;
    }

private:
    void _clearCache();
    void _setUpTensorInfo(const Schedule::ScheduleInfo& info);

private:
    RuntimeInfo mRuntime;
    std::vector<std::shared_ptr<Pipeline>> mPipelines;
    bool mNeedResize = true;
    bool mValid      = true;
    bool mNeedMalloc = true;
    Interpreter::SessionMode mCallBackMode;
    Interpreter::SessionMode mMemoryUsageMode;
    Interpreter::SessionMode mCodegenMode;
    Schedule::ScheduleInfo mInfo;
    ModeGroup mMode;
};
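
The ModeGroup defaults above are exactly what Interpreter::setSessionMode overrides before a session is created. A small sketch, reusing `net` from the earlier example:

// Must be called before createSession to affect the new session's ModeGroup.
net->setSessionMode(MNN::Interpreter::Session_Release);    // skip per-op debug callbacks
net->setSessionMode(MNN::Interpreter::Session_Input_User); // caller manages input copies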

1.1.2 Session::Session

// source/core/Session.cpp
Session::Session(Schedule::ScheduleInfo&& info, const ModeGroup& mode, RuntimeInfo&& runtime) {
    mMode    = mode;
    mRuntime = std::move(runtime);
    if (info.pipelineInfo.empty()) {
        mValid = false;
        return;
    }
    mInfo = std::move(info);
    for (auto& iter : mInfo.pipelineInfo) {
        _createPipelineBackend(iter, mRuntime);
        Pipeline::TuningAttr attr;
        attr.maxTuningNumber = mode.maxTuningNumber;
        attr.autoSetOpType   = mode.backendMode == Interpreter::Session_Backend_Auto;
        auto rt         = mRuntime.first.find(iter.first.info.type)->second.get();
        auto cpuRuntime = mRuntime.second;
        std::shared_ptr<Pipeline> newPipeline(new Pipeline(
            std::move(iter), mode.inputMode == Interpreter::Session_Input_Inside,
            mode.outputMode == Interpreter::Session_Output_User, attr, rt, cpuRuntime.get()));
        mPipelines.emplace_back(std::move(newPipeline));
    }
    mCallBackMode    = mode.callBackMode;
    mMemoryUsageMode = mode.memoryUsageMode;
    mCodegenMode     = mode.codegenMode;
}

1.1.2.1 _createPipelineBackend

    Creates the backends for a pipeline. The relevant types (PipelineInfo, BackendCache, RuntimeInfo) are reproduced in the comments below.

// source/core/Session.cpp
// typedef std::pair<BackendCache, std::vector<OpCacheInfo>> PipelineInfo;
//
//   struct BackendCache {
//      Backend::Info info;
//      BackendConfig config;
//      std::pair<std::shared_ptr<Backend>, std::shared_ptr<Backend>> cache;
//      bool needComputeShape = true;
//      bool needComputeGeometry = true;
//      bool reportError = true;
//      std::map<Tensor*, TENSORCACHE> inputTensorCopyCache;
//  };
//
// typedef std::pair<std::map<MNNForwardType, std::shared_ptr<Runtime>>,
//                   std::shared_ptr<Runtime>> RuntimeInfo;
//
static void _createPipelineBackend(Schedule::PipelineInfo& iter, RuntimeInfo& runtime) {
    // iter.first is a BackendCache
    if (iter.first.cache.first != nullptr) {
        return;
    }
    // runtime.first is a std::map<MNNForwardType, std::shared_ptr<Runtime>>;
    // look up the Runtime for the requested MNNForwardType
    // (e.g. MNN_FORWARD_VULKAN -> VulkanRuntime)
    auto rt         = runtime.first.find(iter.first.info.type)->second.get();
    // runtime.second is the default Runtime (CPURuntime)
    auto cpuRuntime = runtime.second;
    bool specialUsage = false;
    if (iter.first.info.user != nullptr) {
        specialUsage = iter.first.info.user->flags > 0;
    }
    // Runs e.g. VulkanRuntime::onCreate, creating the matching Backend (VulkanBackend).
    // iter.first.cache is a std::pair<std::shared_ptr<Backend>, std::shared_ptr<Backend>>
    iter.first.cache.first.reset(rt->onCreate(iter.first.info.user));
    std::shared_ptr<Backend> second;
    if (iter.first.cache.first->type() == MNN_FORWARD_CPU && (!specialUsage)) {
        iter.first.cache.second = iter.first.cache.first;
    } else {
        // Const Backend shouldn't be used as default backend
        // The session may be schedule multi-thread but const backend is the same
        // We need create a new backend to do size compute / not support op compute
        // Create the default Backend (CPUBackend)
        BackendConfig defaultConfig;
        defaultConfig.flags = 4;
        iter.first.cache.second.reset(cpuRuntime->onCreate(&defaultConfig));
    }
}
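
So every pipeline ends up holding a pair of backends: the requested one for execution (cache.first) and a CPU one for shape computation and unsupported ops (cache.second). The self-contained toy sketch below mirrors just that selection logic; the types and names are illustrative stand-ins, not MNN's:

#include <functional>
#include <map>
#include <memory>

// Illustrative stand-ins only -- not MNN types.
enum ForwardType { FORWARD_CPU, FORWARD_VULKAN };
struct ToyBackend { ForwardType type; };

int main() {
    // Mirrors RuntimeInfo.first: one runtime (here: a factory) per forward type.
    std::map<ForwardType, std::function<std::shared_ptr<ToyBackend>()>> runtimes{
        {FORWARD_VULKAN, [] { return std::make_shared<ToyBackend>(ToyBackend{FORWARD_VULKAN}); }},
        {FORWARD_CPU,    [] { return std::make_shared<ToyBackend>(ToyBackend{FORWARD_CPU}); }},
    };

    // Mirrors BackendCache::cache: <execution backend, shape-compute backend>.
    std::pair<std::shared_ptr<ToyBackend>, std::shared_ptr<ToyBackend>> cache;
    cache.first = runtimes.at(FORWARD_VULKAN)();
    // A non-CPU execution backend gets a dedicated CPU partner for shape
    // computation and for ops it cannot run itself.
    cache.second = (cache.first->type == FORWARD_CPU) ? cache.first
                                                      : runtimes.at(FORWARD_CPU)();
    return 0;
}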

1.1.2.1.1 VulkanRuntime::onCreate

// source/backend/vulkan/runtime/VulkanRuntime.cpp
Backend* VulkanRuntime::onCreate(const BackendConfig* config) const {
    // FIXME: Use config
    return new VulkanBackend(this, mInfo);
}

1.1.2.1.1.1 VulkanBackend::VulkanBackend

// source/backend/vulkan/image/backend/VulkanBackend.cpp
VulkanBackend::VulkanBackend(const VulkanRuntime* runtime, const Backend::Info& info)
    : Backend(MNN_FORWARD_VULKAN) {
    mRuntime = runtime;
    mDirect  = Backend::Info::INDIRECT != info.mode;
    mDynamicMemoryPool.reset(new VulkanMemoryPool(runtime->mMemoryPool.get()));
    auto& dev = device();
    mFence    = std::make_shared<VulkanFence>(dev);
    if (!mDirect) {
        mCmdBuffer.reset(runtime->mCmdPool->allocBuffer());
    }
    mInitBuffer.reset(runtime->mCmdPool->allocBuffer());
}

1.1.2.1.2 CPURuntime::onCreate

// source/backend/cpu/CPUBackend.cpp
Backend* CPURuntime::onCreate(const BackendConfig* config) const {
    auto precision = mPrecision;
    auto memory    = mMemory;
    size_t flags   = mFlags;
    if (nullptr != config) {
        precision = config->precision;
        flags     = config->flags;
        memory    = config->memory;
    }
#ifdef LOG_VERBOSE
    MNN_PRINT("cpu backend was created by runtime:%p\n", this);
#endif
#ifdef MNN_USE_ARMV82
    auto core = MNNGetCoreFunctions();
    if (core->supportFp16arith && precision == BackendConfig::Precision_Low) {
        return new Arm82Backend(this, memory);
    }
#endif
#ifdef MNN_SUPPORT_BF16
    if (precision == BackendConfig::Precision_Low_BF16 && BF16Functions::get()) {
        return new BF16Backend(this);
    }
#endif
    if (flags == MNN_CPU_USE_DEFAULT_BACKEND) {
        return new CPUBackend(this, precision, memory, MNN_FORWARD_CPU, 0);
    }
#ifdef MNN_USE_SSE
    if (AVX2Backend::isValid()) {
        return new AVX2Backend(this, memory, flags);
    }
#endif
    return new CPUBackend(this, precision, memory, MNN_FORWARD_CPU, flags);
}
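
As the dispatch above shows, which concrete CPU backend you get (Arm82Backend, BF16Backend, AVX2Backend, or plain CPUBackend) is steered by BackendConfig. A minimal sketch of requesting low precision through the public API:

MNN::BackendConfig backendConfig;
backendConfig.precision = MNN::BackendConfig::Precision_Low; // may select Arm82Backend on ARMv8.2 CPUs
backendConfig.memory    = MNN::BackendConfig::Memory_Normal;

MNN::ScheduleConfig config;
config.type          = MNN_FORWARD_CPU;
config.backendConfig = &backendConfig; // arrives here as the `config` parameter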

1.1.2.1.2.1 CPUBackend::CPUBackend

// source/backend/cpu/CPUBackend.cpp
CPUBackend::CPUBackend(const CPURuntime* runtime, BackendConfig::PrecisionMode precision,
                       BackendConfig::MemoryMode memory, MNNForwardType type, size_t flags)
    : Backend(type) {
#ifdef LOG_VERBOSE
    MNN_PRINT("cpu backend create\n");
#endif
    mMemory  = memory;
    mRuntime = const_cast<CPURuntime*>(runtime);
    std::shared_ptr<BufferAllocator::Allocator> defaultAlloc(
        BufferAllocator::Allocator::createRecurse(runtime->mStaticAllocator.get()));
    if (mRuntime->getAllocatorType() == Runtime::Allocator_Defer) {
        mDynamicAllocator.reset(new DeferBufferAllocator(defaultAlloc));
    } else {
        mDynamicAllocator.reset(new EagerBufferAllocator(defaultAlloc));
    }
    mStaticAllocator   = runtime->mStaticAllocator;
    mPrecisionMode     = precision;
    mCoreFunctions     = MNNGetCoreFunctions();
    mInt8CoreFunctions = MNNGetInt8CoreFunctions();
    mCache = new CPUResizeCache;
}
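
The constructor chooses between a deferred and an eager dynamic allocator. The toy sketch below illustrates the distinction only (it is not MNN's BufferAllocator): an eager allocator returns usable memory per request, while a deferred one merely records requests so the total can be planned and allocated in one shot later.

#include <cstddef>
#include <vector>

// Toy illustration only -- not MNN's implementation.
struct EagerAlloc {
    std::vector<std::vector<char>> blocks;
    void* alloc(std::size_t size) {          // memory is usable immediately
        blocks.emplace_back(size);
        return blocks.back().data();
    }
};

struct DeferAlloc {
    std::vector<std::size_t> requests;       // request time: bookkeeping only
    std::vector<char> arena;
    std::size_t request(std::size_t size) {  // returns a slot id, not memory
        requests.push_back(size);
        return requests.size() - 1;
    }
    void commit() {                          // later: plan the total once, allocate once
        std::size_t total = 0;
        for (auto s : requests) total += s;
        arena.resize(total);
    }
};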

1.1.2.2 The Pipeline Class: TuningAttr and UnitInfo

// source/core/Pipeline.hpp
/** pipeline. one session may contains multiple pipeline, and one pipeline may contains more than one unit. */
class Pipeline : public NonCopyable {
public:
    struct TuningAttr {
        bool autoSetOpType;
        int maxTuningNumber;
    };
    Pipeline(Schedule::PipelineInfo&& info, bool allocInput, bool outputStatic,
             const TuningAttr& tune, const Runtime* rt, const Runtime* cpuRt);
    ~Pipeline();
    class UnitInfo : public OperatorInfo {
    public:
        UnitInfo()          = default;
        virtual ~UnitInfo() = default;
        void setUp(const Command& cmd, int index, const Op* originOp, int totalIndex);
    };

public:
    /** encode :
        1. compute shape for every op's inputs and outputs;
        2. geometry transform;
        3. copy op, inputs and outputs tensor info to mBuffer
        static_model: 3; dynamic_model: 1,2,3
    */
    ErrorCode encode(bool supportDebug = false, bool permitCodegen = false);
    /** allocMemory: create Execution and alloc memory for every op */
    ErrorCode allocMemory(bool firstMalloc, bool permitCodegen);
    /** execute this pipline */
    ErrorCode execute();
    ErrorCode executeCallBack(const TensorCallBackWithInfo& before, const TensorCallBackWithInfo& after);
    Schedule::PipelineInfo& getPipelineInfo() {
        return mInfo;
    }
    float flops() const {
        return mFlops;
    }
    friend class Session;
    MNNForwardType getMainForwardType() const {
        return mInfo.first.cache.first->type();
    }

private:
    void _copyInputs();
    void _pushTuningTask(std::vector<Schedule::OpCacheInfo>&& initInfos);
    void _recycleDynamicMemory(Command* command);
    Schedule::PipelineInfo mInfo;
    bool mAllocInput;
    bool mOutputStatic;
    TuningAttr mTuneAttr;
    float mFlops = 0.0f;
    bool mIsQuantModel = false;
    // For gpu or other backend
    std::map<Tensor*, std::shared_ptr<Tensor>> mCacheConstTensors;
    std::map<Tensor*, std::shared_ptr<Tensor>> mShapeFixConstCache;
#ifndef MNN_BUILD_MINI
    GeometryComputer::Context mContext;
    Runtime::CompilerType mUseGeometry;
#endif
    const Runtime* mRuntime;
    const Runtime* mCpuRuntime;
};
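
Applications never call encode() or allocMemory() directly; they are driven by Session::resize (covered in Part 5), which the public API reaches via Interpreter::resizeSession. A short sketch of the usual flow, reusing `net` and `session` from the earlier examples (the input shape is illustrative):

// Changing an input shape marks the session dirty; resizeSession then re-runs
// Pipeline::encode (shape + geometry) and Pipeline::allocMemory.
auto input = net->getSessionInput(session, nullptr);
net->resizeTensor(input, {1, 3, 224, 224}); // example shape
net->resizeSession(session);
net->runSession(session);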

1.1.2.3 Pipeline::Pipeline

The per-op scheduling structure OpCacheInfo is reproduced in the comments below.

// source/core/Pipeline.cpp
// typedef std::pair<BackendCache, std::vector<OpCacheInfo>> PipelineInfo;
//
//    /** pipeline info */
//    struct OpCacheInfo {
//        /** op */
//        const Op* op;
//        /** input tensors */
//        std::vector<Tensor*> inputs;
//        /** output tensors */
//        std::vector<Tensor*> outputs;
//        /** schedule type*/
//        Schedule::Type type = Schedule::Type::SEPARATE;
//
//        /**Command buffer for cache*/
//        CommandBuffer cacheBuffer;
//
//        /**Command buffer for execute*/
//        CommandBuffer executeBuffer;
//        
//        std::map<const Op*, std::shared_ptr<Execution>> executionCache;
//    };
//
Pipeline::Pipeline(Schedule::PipelineInfo&& info, bool allocInput, bool outputStatic, const TuningAttr& tune, const Runtime* rt, const Runtime* cpuRt)
#ifndef MNN_BUILD_MINI
    // mContext is a GeometryComputer::Context
    : mContext(info.first.cache.second, info.first.cache.first->type(),
               info.first.info.user ? info.first.info.user->precision : BackendConfig::Precision_Normal),
      mUseGeometry(rt->onGetCompilerType()) {
#else
{
#endif
    rt->onCheckInfo(info.first.info);
    mRuntime      = rt;
    mCpuRuntime   = cpuRt;
    mTuneAttr     = tune;
    mAllocInput   = allocInput;
    mOutputStatic = outputStatic;
    mInfo         = std::move(info);
    mIsQuantModel = false;
    // mInfo.second is a std::vector<OpCacheInfo>
    for (auto& iter : mInfo.second) {
        for (auto t : iter.outputs) {
            if (TensorUtils::getDescribe(t)->quantAttr.get() != nullptr) {
                // quantized model?
                mIsQuantModel = true;
                break;
            }
        }
        for (auto t : iter.inputs) {
            if (TensorUtils::getDescribe(t)->quantAttr.get() != nullptr) {
                mIsQuantModel = true;
                break;
            }
        }
        if (mIsQuantModel) {
            break;
        }
    }
}

1.1.2.3.1 GeometryComputer::Context

// source/geometry/GeometryComputer.hpp
class GeometryComputer {
public:
    virtual ~GeometryComputer() {
        // Do nothing
    }
    class MNN_PUBLIC Context {
    public:
        Context(std::shared_ptr<Backend> allocBackend, MNNForwardType type = MNN_FORWARD_CPU,
                BackendConfig::PrecisionMode precision = BackendConfig::Precision_Normal);
        ~Context();

        void clear();
        void setBackend(Backend* backend);
        void getRasterCacheCreateRecursive(Tensor* src, CommandBuffer& cmd);

        // If has cache, return. Otherwise create cache
        const std::vector<std::shared_ptr<Tensor>>& searchConst(const Op* op);
        std::shared_ptr<Tensor> allocConst(const Op* key, const std::vector<int>& shape, halide_type_t type,
                                           Tensor::DimensionType dimType = Tensor::TENSORFLOW);
        bool allocTensor(Tensor* tenosr);
        inline MNNForwardType forwardType() const {
            return mForwardType;
        }
        inline BackendConfig::PrecisionMode precisionType() const {
            return mPrecision;
        }
        void pushCache(const CommandBuffer& buffer);
        std::shared_ptr<BufferStorage> mRasterOp;

    private:
        void getRasterCacheCreate(Tensor* src, CommandBuffer& cmd);
        std::map<const Op*, std::vector<std::shared_ptr<Tensor>>> mConstTensors;
        std::vector<std::shared_ptr<Tensor>> mEmpty;
        std::vector<std::shared_ptr<Tensor>> mTempConstTensors;
        std::shared_ptr<Backend> mBackend;
        MNNForwardType mForwardType;
        BackendConfig::PrecisionMode mPrecision;
        std::vector<SharedPtr<Command>> mRasterCmdCache;
    };
    static void init();
    MNN_PUBLIC static const GeometryComputer* search(int opType, Runtime::CompilerType compType);
    static void registerGeometryComputer(std::shared_ptr<GeometryComputer> comp, std::vector<int> type,
                                         Runtime::CompilerType compType = Runtime::Compiler_Geometry);
    virtual bool onCompute(const Op* op, const std::vector<Tensor*>& inputs, const std::vector<Tensor*>& outputs,
                           Context& context, CommandBuffer& cmd) const = 0;
    virtual bool onRecompute(const Op* op, const std::vector<Tensor*>& inputs, const std::vector<Tensor*>& outputs,
                             Context& context, CommandBuffer& cmd) const {
        return false;
    }
};
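
To make the pure-virtual interface concrete, here is a minimal sketch of a custom geometry computer based only on the declarations above; the class name and op type value are hypothetical:

class MyGeometryComputer : public GeometryComputer {
public:
    virtual bool onCompute(const Op* op, const std::vector<Tensor*>& inputs,
                           const std::vector<Tensor*>& outputs, Context& context,
                           CommandBuffer& cmd) const override {
        // Decompose `op` into basic/raster commands appended to `cmd`,
        // allocating any needed constants via context.allocConst (omitted).
        return true;
    }
};

// Hypothetical registration: op type 42 under the default geometry compiler.
// GeometryComputer::registerGeometryComputer(
//     std::make_shared<MyGeometryComputer>(), {42});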

   
