Practical Guide | Quickly Build an AI Front-End Demo with Streamlit

----------------------- 🎈 Related API posts 🎈--------------------------

🚀 Gradio: Practical Guide | Everything You Want to Know About Quickly Building AI Model Interfaces with Gradio - CSDN Blog

🚀 Streamlit: Practical Guide | Quickly Build an AI Front-End Demo with Streamlit - CSDN Blog

🚀 Flask: Practical Guide | Learn to Write Flask APIs for AI in One Article (templates included) - CSDN Blog

Streamlit is a Python framework for machine learning and data visualization: it can build a polished online app in just a few lines of code. Compared with Gradio, it can present a wider range of functionality.

Table of Contents

1. Installing Streamlit

2. Streamlit Syntax

2.1. Basic Syntax

2.2. Intermediate Syntax

2.2.1. Images, Audio, and Video

2.2.2. Progress and Status Indicators

2.3. Advanced Syntax

2.3.1. @st.cache_data

2.3.2. st.cache_resource

3. Building a Simple App

Loading data and plotting in real time

4. Streamlit Examples for AI Deep-Learning Projects

4.1. Example 1: Text Generation

4.1.1. Chatting with ChatGLM

4.1.2. Chatting with OpenAI

4.2. Image Tasks

4.2.1. Image Classification

4.2.2. Image Generation

4.3. Speech Tasks

4.3.1. Text-to-Speech

4.3.2. Speech-to-Text


Official docs: Get started - Streamlit Docs

1. Installing Streamlit

# Install
pip install streamlit
pip install streamlit-chat

# Test the installation
streamlit hello

This opens a demo app with a set of examples.

2. Streamlit Syntax

2.1. Basic Syntax

import streamlit as st

The most commonly used components (a minimal sketch putting them together follows the list):

  • Title st.title(): st.title("Title")
  • Write st.write(): st.write("Hello world")
  • Text st.text(): single-line text
  • Multi-line text box st.text_area(): st.text_area("Text box", value='', key=None)
  • Slider st.slider(): st.slider("Slider")
  • Button st.button(): st.button("Button")
  • Text input st.text_input(): st.text_input("Ask the user for input")
  • Radio button st.radio()
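
Here is a minimal sketch of these components in one script (the labels, keys, and default values are illustrative):

import streamlit as st

st.title("My first app")
st.write("Hello world")
st.text("Single-line text")
bio = st.text_area("Text box", value="", key="bio")
number = st.slider("Slider", 0, 100, 25)
name = st.text_input("Ask the user for input")
choice = st.radio("Pick one", ["A", "B", "C"])

# Widgets return their current value on every rerun
if st.button("Button"):
    st.write(f"{name or 'Anonymous'} picked {choice} with value {number}")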

2.2. Intermediate Syntax

2.2.1. Images, Audio, and Video

All three accept array values, raw bytes, file objects, or file paths (a short sketch follows the list):

  • st.image()
  • st.audio()
  • st.video()
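
For example, a minimal sketch, assuming sample.png, sample.wav, and sample.mp4 exist next to the script (the file names are illustrative):

import numpy as np
import streamlit as st

# From a file path
st.image("sample.png", caption="Loaded from a path")
# From an in-memory array (random RGB noise)
st.image(np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8))

# Audio and video from raw bytes
with open("sample.wav", "rb") as f:
    st.audio(f.read(), format="audio/wav")
with open("sample.mp4", "rb") as f:
    st.video(f.read())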

2.2.2. Progress and Status Indicators

  • st.progress() shows progress
  • st.spinner() shows a running state
  • st.error() shows an error message
  • st.warning() shows a warning message (see the sketch after this list)
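
A minimal sketch of the four indicators around a simulated long-running task (the sleeps are stand-ins for real work):

import time

import streamlit as st

# Progress bar updated inside the loop
bar = st.progress(0)
for i in range(100):
    time.sleep(0.01)
    bar.progress(i + 1)

# Spinner shown while the block is running
with st.spinner("Working..."):
    time.sleep(1)

st.error("Something went wrong")
st.warning("This might be a problem")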

2.3. Advanced Syntax

2.3.1. @st.cache_data

When you mark a function with Streamlit's caching decorator, it tells Streamlit that whenever the function is called it should check two things:

  • The input parameters of the function call
  • The code inside the function

If neither has changed since the last call, Streamlit skips executing the function and returns the cached value instead (see the sketch below).
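
A minimal sketch, with an artificially slow function standing in for an expensive computation:

import time

import streamlit as st

@st.cache_data
def slow_square(x: int) -> int:
    time.sleep(2)  # stand-in for expensive work
    return x * x

# The first call with x=4 takes ~2 s; later reruns with the
# same argument and unchanged code return instantly from cache.
st.write(slow_square(4))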

2.3.2. st.cache_resource

A decorator for caching functions that return global resources (e.g., database connections, ML models).

Cached objects are shared across all users, sessions, and reruns. They must be thread-safe, because they can be accessed from multiple threads at the same time. If thread safety is a concern, consider using st.session_state to store resources per session instead.

By default, all parameters of a cache_resource function must be hashable; any parameter whose name starts with _ is not hashed (see the sketch below).
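
A minimal sketch, assuming a Hugging Face transformers pipeline as the shared resource (the model task is illustrative):

import streamlit as st
from transformers import pipeline

@st.cache_resource
def get_classifier():
    # Loaded once, then shared across all users, sessions, and reruns
    return pipeline("sentiment-analysis")

clf = get_classifier()
text = st.text_input("Enter a sentence")
if text:
    st.write(clf(text))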

3. Building a Simple App

Load data and plot it in real time:

import streamlit as st
import pandas as pd
import numpy as np

st.title('Uber pickups in NYC')

DATE_COLUMN = 'date/time'
DATA_URL = ('https://s3-us-west-2.amazonaws.com/'
            'streamlit-demo-data/uber-raw-data-sep14.csv.gz')

# Cache the download
@st.cache_data
def load_data(nrows):
    # Read the CSV file
    data = pd.read_csv(DATA_URL, nrows=nrows)
    # Rename the columns to lowercase
    lowercase = lambda x: str(x).lower()
    data.rename(lowercase, axis='columns', inplace=True)
    # Parse the date column as pandas datetimes
    data[DATE_COLUMN] = pd.to_datetime(data[DATE_COLUMN])
    # Return the final data
    return data

# Show a plain-text status message
data_load_state = st.text('Loading data...')
# Load 10,000 rows of data
data = load_data(10000)
# Update the status message when done
data_load_state.text("Done! (using st.cache_data)")

# Inspect the raw data
if st.checkbox('Show raw data'):
    st.subheader('Raw data')
    st.write(data)

# Draw a histogram; add a subheader first
st.subheader('Number of pickups by hour')
# Use numpy to bin the pickups by hour
hist_values = np.histogram(data[DATE_COLUMN].dt.hour, bins=24, range=(0, 24))[0]
# Draw the histogram with Streamlit's st.bar_chart() method
st.bar_chart(hist_values)

# Filter results with a slider
hour_to_filter = st.slider('hour', 0, 23, 17)
# Updates live as the slider moves
filtered_data = data[data[DATE_COLUMN].dt.hour == hour_to_filter]
# Add a subheader for the map
st.subheader('Map of all pickups at %s:00' % hour_to_filter)
# Plot the data with st.map()
st.map(filtered_data)

Run it:

streamlit run demo.py

4. Streamlit Examples for AI Deep-Learning Projects

4.1. Example 1: Text Generation

4.1.1. Chatting with ChatGLM

from transformers import AutoModel, AutoTokenizer
import streamlit as st
from streamlit_chat import message

st.set_page_config(
    page_title="ChatGLM-6b Demo",
    page_icon=":robot:"
)

@st.cache_resource
def get_model():
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
    model = model.eval()
    return tokenizer, model

MAX_TURNS = 20
MAX_BOXES = MAX_TURNS * 2

def predict(input, max_length, top_p, temperature, history=None):
    tokenizer, model = get_model()
    if history is None:
        history = []

    with container:
        if len(history) > 0:
            if len(history) > MAX_BOXES:
                history = history[-MAX_TURNS:]
            for i, (query, response) in enumerate(history):
                message(query, avatar_style="big-smile", key=str(i) + "_user")
                message(response, avatar_style="bottts", key=str(i))

        message(input, avatar_style="big-smile", key=str(len(history)) + "_user")
        st.write("The AI is replying:")
        with st.empty():
            # Stream tokens as the model generates them
            for response, history in model.stream_chat(tokenizer, input, history,
                                                       max_length=max_length, top_p=top_p,
                                                       temperature=temperature):
                query, response = history[-1]
                st.write(response)

    return history

container = st.container()

# Create a prompt text box for the text generation
prompt_text = st.text_area(label="User input",
                           height=100,
                           placeholder="Type your prompt here")

max_length = st.sidebar.slider('max_length', 0, 4096, 2048, step=1)
top_p = st.sidebar.slider('top_p', 0.0, 1.0, 0.6, step=0.01)
temperature = st.sidebar.slider('temperature', 0.0, 1.0, 0.95, step=0.01)

if 'state' not in st.session_state:
    st.session_state['state'] = []

if st.button("Send", key="predict"):
    with st.spinner("The AI is thinking, please wait........"):
        # Text generation
        st.session_state["state"] = predict(prompt_text, max_length, top_p,
                                            temperature, st.session_state["state"])

4.1.2. Chatting with OpenAI

from openai import OpenAI
import streamlit as st

with st.sidebar:
    openai_api_key = st.text_input("OpenAI API Key", key="chatbot_api_key", type="password")
    "[Get an OpenAI API key](https://platform.openai.com/account/api-keys)"
    "[View the source code](https://github.com/streamlit/llm-examples/blob/main/Chatbot.py)"
    "[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/streamlit/llm-examples?quickstart=1)"

st.title("💬 Chatbot")
st.caption("🚀 A streamlit chatbot powered by OpenAI LLM")

if "messages" not in st.session_state:
    st.session_state["messages"] = [{"role": "assistant", "content": "How can I help you?"}]

for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input():
    if not openai_api_key:
        st.info("Please add your OpenAI API key to continue.")
        st.stop()

    client = OpenAI(api_key=openai_api_key)
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=st.session_state.messages)
    msg = response.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": msg})
    st.chat_message("assistant").write(msg)

4.2. Image Tasks

4.2.1. Image Classification

import base64

import cv2
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.optimizers import Adam

st.markdown('<h1 style="color:black;">Vgg 19 Image classification model</h1>', unsafe_allow_html=True)
st.markdown('<h2 style="color:gray;">The image classification model classifies image into following categories:</h2>', unsafe_allow_html=True)
st.markdown('<h3 style="color:gray;"> street, buildings, forest, sea, mountain, glacier</h3>', unsafe_allow_html=True)

# Background image for the page
@st.cache_data
def get_base64_of_bin_file(bin_file):
    # Send the file as base64
    with open(bin_file, 'rb') as f:
        data = f.read()
    return base64.b64encode(data).decode()

# Set the page background image, colors, etc.
def set_png_as_page_bg(png_file):
    bin_str = get_base64_of_bin_file(png_file)
    page_bg_img = '''
    <style>
    .stApp {
        background-image: url("data:image/png;base64,%s");
        background-size: cover;
        background-repeat: no-repeat;
        background-attachment: scroll; /* doesn't work */
    }
    </style>
    ''' % bin_str
    st.markdown(page_bg_img, unsafe_allow_html=True)
    return

set_png_as_page_bg('/content/background.webp')

# Upload a png/jpg image
upload = st.file_uploader('Insert image for classification', type=['png', 'jpg'])
c1, c2 = st.columns(2)
if upload is not None:
    im = Image.open(upload)
    img = np.asarray(im)
    image = cv2.resize(img, (224, 224))
    img = preprocess_input(image)
    img = np.expand_dims(img, 0)
    c1.header('Input Image')
    c1.image(im)
    c1.write(img.shape)

    # Load the pre-trained model
    # Input size
    input_shape = (224, 224, 3)
    # Optimizer
    optim_1 = Adam(learning_rate=0.0001)
    # Number of classes
    n_classes = 6
    # Build the model; `model` is the project's own VGG-builder function
    # and `classes` its label list, both defined elsewhere in the original project
    vgg_model = model(input_shape, n_classes, optim_1, fine_tune=2)
    # Load the fine-tuned weights
    vgg_model.load_weights('/content/drive/MyDrive/vgg/tune_model19.weights.best.hdf5')
    # Predict
    vgg_preds = vgg_model.predict(img)
    vgg_pred_classes = np.argmax(vgg_preds, axis=1)
    c2.header('Output')
    c2.subheader('Predicted class :')
    c2.write(classes[vgg_pred_classes[0]])

4.2.2. Image Generation

import os

import openai
import streamlit as st
import torch
from diffusers import StableDiffusionPipeline
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

# Function to generate AI-based images using OpenAI DALL-E
# (uses the pre-1.0 openai SDK interface)
def generate_images_using_openai(text):
    response = openai.Image.create(prompt=text, n=1, size="512x512")
    image_url = response['data'][0]['url']
    return image_url

# Function to generate AI-based images using Huggingface Diffusers
def generate_images_using_huggingface_diffusers(text):
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    prompt = text
    image = pipe(prompt).images[0]
    return image

# Streamlit code
choice = st.sidebar.selectbox("Select your choice", ["Home", "DALL-E", "Huggingface Diffusers"])

if choice == "Home":
    st.title("AI Image Generation App")
    with st.expander("About the App"):
        st.write("This is a simple image generation app that uses AI to generate images from a text prompt.")

elif choice == "DALL-E":
    st.subheader("Image generation using Open AI's DALL-E")
    input_prompt = st.text_input("Enter your text prompt")
    if input_prompt is not None:
        if st.button("Generate Image"):
            image_url = generate_images_using_openai(input_prompt)
            st.image(image_url, caption="Generated by DALL-E")

elif choice == "Huggingface Diffusers":
    st.subheader("Image generation using Huggingface Diffusers")
    input_prompt = st.text_input("Enter your text prompt")
    if input_prompt is not None:
        if st.button("Generate Image"):
            st.info("Generating image.....")
            image_output = generate_images_using_huggingface_diffusers(input_prompt)
            st.success("Image Generated Successfully")
            st.image(image_output, caption="Generated by Huggingface Diffusers")

4.3. Speech Tasks

4.3.1. Text-to-Speech

import os

import streamlit as st
import torch
# Using coqui-tts here; install it with `pip install tts`
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pick a model
model_name = 'tts_models/en/jenny/jenny'
tts = TTS(model_name).to(device)

st.title('Coqui TTS')

# Input text
text_to_speak = st.text_area('Entire article text here:', '')

# Synthesize and play when the button is clicked
if st.button('Listen'):
    if text_to_speak:
        # Temp path needed for audio to listen to
        temp_audio_path = './temp_audio.wav'
        # Synthesize with the library's tts_to_file function
        tts.tts_to_file(text=text_to_speak, file_path=temp_audio_path)
        # Play the synthesized audio
        st.audio(temp_audio_path, format='audio/wav')
        os.unlink(temp_audio_path)


4.3.2. Speech-to-Text

import logging
import logging.handlers
import os
import queue
import threading
import time
import urllib.request
from collections import deque
from pathlib import Path
from typing import List

import av
import numpy as np
import pydub
import streamlit as st
from twilio.rest import Client

from streamlit_webrtc import WebRtcMode, webrtc_streamer

HERE = Path(__file__).parent

logger = logging.getLogger(__name__)


# This code is based on https://github.com/streamlit/demo-self-driving/blob/230245391f2dda0cb464008195a470751c01770b/streamlit_app.py#L48  # noqa: E501
def download_file(url, download_to: Path, expected_size=None):
    # Don't download the file twice.
    # (If possible, verify the download using the file length.)
    if download_to.exists():
        if expected_size:
            if download_to.stat().st_size == expected_size:
                return
        else:
            st.info(f"{url} is already downloaded.")
            if not st.button("Download again?"):
                return

    download_to.parent.mkdir(parents=True, exist_ok=True)

    # These are handles to two visual elements to animate.
    weights_warning, progress_bar = None, None
    try:
        weights_warning = st.warning("Downloading %s..." % url)
        progress_bar = st.progress(0)
        with open(download_to, "wb") as output_file:
            with urllib.request.urlopen(url) as response:
                length = int(response.info()["Content-Length"])
                counter = 0.0
                MEGABYTES = 2.0 ** 20.0
                while True:
                    data = response.read(8192)
                    if not data:
                        break
                    counter += len(data)
                    output_file.write(data)

                    # We perform animation by overwriting the elements.
                    weights_warning.warning(
                        "Downloading %s... (%6.2f/%6.2f MB)"
                        % (url, counter / MEGABYTES, length / MEGABYTES)
                    )
                    progress_bar.progress(min(counter / length, 1.0))
    # Finally, we remove these visual elements by calling .empty().
    finally:
        if weights_warning is not None:
            weights_warning.empty()
        if progress_bar is not None:
            progress_bar.empty()


# This code is based on https://github.com/whitphx/streamlit-webrtc/blob/c1fe3c783c9e8042ce0c95d789e833233fd82e74/sample_utils/turn.py
@st.cache_data  # type: ignore
def get_ice_servers():
    """Use Twilio's TURN server because Streamlit Community Cloud has changed
    its infrastructure and WebRTC connection cannot be established without TURN server now.  # noqa: E501

    We considered Open Relay Project (https://www.metered.ca/tools/openrelay/) too,
    but it is not stable and hardly works as some people reported like
    https://github.com/aiortc/aiortc/issues/832#issuecomment-1482420656  # noqa: E501

    See https://github.com/whitphx/streamlit-webrtc/issues/1213
    """
    # Ref: https://www.twilio.com/docs/stun-turn/api
    try:
        account_sid = os.environ["TWILIO_ACCOUNT_SID"]
        auth_token = os.environ["TWILIO_AUTH_TOKEN"]
    except KeyError:
        logger.warning(
            "Twilio credentials are not set. Fallback to a free STUN server from Google."  # noqa: E501
        )
        return [{"urls": ["stun:stun.l.google.com:19302"]}]

    client = Client(account_sid, auth_token)
    token = client.tokens.create()
    return token.ice_servers


def main():
    st.header("Real Time Speech-to-Text")
    st.markdown(
        """
This demo app is using [DeepSpeech](https://github.com/mozilla/DeepSpeech),
an open speech-to-text engine.

A pre-trained model released with
[v0.9.3](https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3),
trained on American English is being served.
"""
    )

    # https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3
    MODEL_URL = "https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.pbmm"  # noqa
    LANG_MODEL_URL = "https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.scorer"  # noqa
    MODEL_LOCAL_PATH = HERE / "models/deepspeech-0.9.3-models.pbmm"
    LANG_MODEL_LOCAL_PATH = HERE / "models/deepspeech-0.9.3-models.scorer"

    download_file(MODEL_URL, MODEL_LOCAL_PATH, expected_size=188915987)
    download_file(LANG_MODEL_URL, LANG_MODEL_LOCAL_PATH, expected_size=953363776)

    lm_alpha = 0.931289039105002
    lm_beta = 1.1834137581510284
    beam = 100

    sound_only_page = "Sound only (sendonly)"
    with_video_page = "With video (sendrecv)"
    app_mode = st.selectbox("Choose the app mode", [sound_only_page, with_video_page])

    if app_mode == sound_only_page:
        app_sst(
            str(MODEL_LOCAL_PATH), str(LANG_MODEL_LOCAL_PATH), lm_alpha, lm_beta, beam
        )
    elif app_mode == with_video_page:
        app_sst_with_video(
            str(MODEL_LOCAL_PATH), str(LANG_MODEL_LOCAL_PATH), lm_alpha, lm_beta, beam
        )


def app_sst(model_path: str, lm_path: str, lm_alpha: float, lm_beta: float, beam: int):
    webrtc_ctx = webrtc_streamer(
        key="speech-to-text",
        mode=WebRtcMode.SENDONLY,
        audio_receiver_size=1024,
        rtc_configuration={"iceServers": get_ice_servers()},
        media_stream_constraints={"video": False, "audio": True},
    )

    status_indicator = st.empty()

    if not webrtc_ctx.state.playing:
        return

    status_indicator.write("Loading...")
    text_output = st.empty()
    stream = None

    while True:
        if webrtc_ctx.audio_receiver:
            if stream is None:
                from deepspeech import Model

                model = Model(model_path)
                model.enableExternalScorer(lm_path)
                model.setScorerAlphaBeta(lm_alpha, lm_beta)
                model.setBeamWidth(beam)

                stream = model.createStream()

                status_indicator.write("Model loaded.")

            sound_chunk = pydub.AudioSegment.empty()
            try:
                audio_frames = webrtc_ctx.audio_receiver.get_frames(timeout=1)
            except queue.Empty:
                time.sleep(0.1)
                status_indicator.write("No frame arrived.")
                continue

            status_indicator.write("Running. Say something!")

            for audio_frame in audio_frames:
                sound = pydub.AudioSegment(
                    data=audio_frame.to_ndarray().tobytes(),
                    sample_width=audio_frame.format.bytes,
                    frame_rate=audio_frame.sample_rate,
                    channels=len(audio_frame.layout.channels),
                )
                sound_chunk += sound

            if len(sound_chunk) > 0:
                sound_chunk = sound_chunk.set_channels(1).set_frame_rate(
                    model.sampleRate()
                )
                buffer = np.array(sound_chunk.get_array_of_samples())
                stream.feedAudioContent(buffer)
                text = stream.intermediateDecode()
                text_output.markdown(f"**Text:** {text}")
        else:
            status_indicator.write("AudioReceiver is not set. Abort.")
            break


def app_sst_with_video(
    model_path: str, lm_path: str, lm_alpha: float, lm_beta: float, beam: int
):
    frames_deque_lock = threading.Lock()
    frames_deque: deque = deque([])

    async def queued_audio_frames_callback(
        frames: List[av.AudioFrame],
    ) -> av.AudioFrame:
        with frames_deque_lock:
            frames_deque.extend(frames)

        # Return empty frames to be silent.
        new_frames = []
        for frame in frames:
            input_array = frame.to_ndarray()
            new_frame = av.AudioFrame.from_ndarray(
                np.zeros(input_array.shape, dtype=input_array.dtype),
                layout=frame.layout.name,
            )
            new_frame.sample_rate = frame.sample_rate
            new_frames.append(new_frame)

        return new_frames

    webrtc_ctx = webrtc_streamer(
        key="speech-to-text-w-video",
        mode=WebRtcMode.SENDRECV,
        queued_audio_frames_callback=queued_audio_frames_callback,
        rtc_configuration={"iceServers": get_ice_servers()},
        media_stream_constraints={"video": True, "audio": True},
    )

    status_indicator = st.empty()

    if not webrtc_ctx.state.playing:
        return

    status_indicator.write("Loading...")
    text_output = st.empty()
    stream = None

    while True:
        if webrtc_ctx.state.playing:
            if stream is None:
                from deepspeech import Model

                model = Model(model_path)
                model.enableExternalScorer(lm_path)
                model.setScorerAlphaBeta(lm_alpha, lm_beta)
                model.setBeamWidth(beam)

                stream = model.createStream()

                status_indicator.write("Model loaded.")

            sound_chunk = pydub.AudioSegment.empty()

            audio_frames = []
            with frames_deque_lock:
                while len(frames_deque) > 0:
                    frame = frames_deque.popleft()
                    audio_frames.append(frame)

            if len(audio_frames) == 0:
                time.sleep(0.1)
                status_indicator.write("No frame arrived.")
                continue

            status_indicator.write("Running. Say something!")

            for audio_frame in audio_frames:
                sound = pydub.AudioSegment(
                    data=audio_frame.to_ndarray().tobytes(),
                    sample_width=audio_frame.format.bytes,
                    frame_rate=audio_frame.sample_rate,
                    channels=len(audio_frame.layout.channels),
                )
                sound_chunk += sound

            if len(sound_chunk) > 0:
                sound_chunk = sound_chunk.set_channels(1).set_frame_rate(
                    model.sampleRate()
                )
                buffer = np.array(sound_chunk.get_array_of_samples())
                stream.feedAudioContent(buffer)
                text = stream.intermediateDecode()
                text_output.markdown(f"**Text:** {text}")
        else:
            status_indicator.write("Stopped.")
            break


if __name__ == "__main__":
    DEBUG = os.environ.get("DEBUG", "false").lower() not in ["false", "no", "0"]

    logging.basicConfig(
        format="[%(asctime)s] %(levelname)7s from %(name)s in %(pathname)s:%(lineno)d: "
        "%(message)s",
        force=True,
    )

    logger.setLevel(level=logging.DEBUG if DEBUG else logging.INFO)

    st_webrtc_logger = logging.getLogger("streamlit_webrtc")
    st_webrtc_logger.setLevel(logging.DEBUG)

    fsevents_logger = logging.getLogger("fsevents")
    fsevents_logger.setLevel(logging.WARNING)

    main()

