Steps for Using GPUs in a Kubernetes Cluster and Installing Kubeflow 1.0 RC

 

Kubeflow use cases

  • You want to train TensorFlow models and publish them as application services in a Kubernetes environment (local, on-prem, or cloud)

  • You want to use Jupyter notebooks to develop and debug code, with a multi-user notebook server

  • You need CPU and GPU resources scheduled and orchestrated for training jobs

  • You want to combine TensorFlow with other components to publish services

Dependencies

  • ksonnet 0.11.0 or later (download directly from GitHub, then scp the ks binary to /usr/local/bin)

  • Kubernetes 1.8 or later (CCE service nodes are used directly here: create a CCE cluster with several nodes and bind an EIP to one of them)

  • kubectl tools

 1. Install ksonnet

 ksonnet installation; check the releases page for the latest ks version.

wget https://github.com/ksonnet/ksonnet/releases/download/v0.13.0/ks_0.13.0_linux_amd64.tar.gz
tar -vxf ks_0.13.0_linux_amd64.tar.gz
cd ks_0.13.0_linux_amd64
sudo cp ks /usr/local/bin

After the installation completes, the ks binary should be available on the PATH.
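A quick way to verify the install (the version strings below are illustrative):

ks version
# ksonnet version: 0.13.0
# jsonnet version: v0.11.2
# client-go version: kubernetes-1.10.4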

 

Install the GPU driver

Install CUDA

sudo yum-config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo
sudo yum clean all
sudo yum -y install nvidia-driver-latest-dkms cuda
sudo yum -y install cuda-drivers

If gcc or other kernel build dependencies are missing, run the following:

  yum install kernel-devel kernel-doc kernel-headers gcc\* glibc\* glibc-\*

Install the NVIDIA driver

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum install -y kmod-nvidia

Disable nouveau

### Add rdblacklist=nouveau to GRUB_CMDLINE_LINUX
echo -e "blacklist nouveau\noptions nouveau modeset=0" > /etc/modprobe.d/blacklist.conf

Reboot, then check whether nouveau has been disabled successfully:

lsmod | grep nouv
# No output means nouveau has been disabled
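If nouveau still shows up after the reboot, the module may be baked into the initramfs; rebuilding it is a common extra step on CentOS/RHEL (a sketch, assuming dracut is available):

# Back up the current initramfs, then regenerate it with nouveau blacklisted
mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
dracut /boot/initramfs-$(uname -r).img $(uname -r)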

Check the server's GPU information

[root@master ~]# nvidia-smi
Tue Jan 14 03:46:41 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:18:00.0 Off |                    0 |
| N/A   29C    P8    10W /  70W |      0MiB / 15109MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:86:00.0 Off |                    0 |
| N/A   25C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Install nvidia-docker

Download the nvidia-docker.repo file:

curl -s -L https://nvidia.github.io/nvidia-docker/centos7/x86_64/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo

  • List the available nvidia-docker versions

yum search --showduplicates nvidia-docker

  • Install nvidia-docker

Docker here is version 18.09.7.ce, so install the matching nvidia-docker2 build:

yum install -y nvidia-docker2
pkill -SIGHUP dockerd

nvidia-docker version shows the installed nvidia-docker version.

Set the default Docker runtime to nvidia

[root@ks-allinone ~]# cat /etc/docker/daemon.json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "registry-mirrors": ["https://o96k4rm0.mirror.aliyuncs.com"]
}

Restart Docker and the kubelet

systemctl daemon-reload
systemctl restart docker.service
systemctl restart kubelet
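With nvidia as the default runtime, an ordinary docker run can already see the GPUs. A quick smoke test (the CUDA image tag is an assumption; any CUDA base image compatible with the driver works):

docker run --rm nvidia/cuda:10.2-base nvidia-smi
# Should print the same GPU table as running nvidia-smi on the host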

Install gpushare-scheduler-extender

cd /etc/kubernetes/
curl -O https://raw.githubusercontent.com/AliyunContainerService/gpushare-scheduler-extender/master/config/scheduler-policy-config.json
cd /tmp/
curl -O https://raw.githubusercontent.com/AliyunContainerService/gpushare-scheduler-extender/master/config/gpushare-schd-extender.yaml
kubectl create -f gpushare-schd-extender.yaml
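Downloading scheduler-policy-config.json alone has no effect: per the upstream gpushare-scheduler-extender guide, kube-scheduler must be pointed at it. On a kubeadm cluster this is typically done by editing the static pod manifest (a sketch; paths follow kubeadm defaults, and the extender service must already be running):

# In /etc/kubernetes/manifests/kube-scheduler.yaml, add to the kube-scheduler command:
#   - --policy-config-file=/etc/kubernetes/scheduler-policy-config.json
# and make sure /etc/kubernetes is mounted into the scheduler pod.
# The kubelet restarts kube-scheduler automatically once the manifest changes.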

Install the device-plugin RBAC

kubectl create -f device-plugin-rbac.yaml

# rbac.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gpushare-device-plugin
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - update
  - patch
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
  - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gpushare-device-plugin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gpushare-device-plugin
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gpushare-device-plugin
subjects:
- kind: ServiceAccount
  name: gpushare-device-plugin
  namespace: kube-system

Install the device-plugin-ds DaemonSet

kubectl create -f device-plugin-ds.yaml

# device-plugin-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: gpushare-device-plugin-ds
  namespace: kube-system
spec:
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        component: gpushare-device-plugin
        app: gpushare
        name: gpushare-device-plugin-ds
    spec:
      serviceAccount: gpushare-device-plugin
      hostNetwork: true
      nodeSelector:
        gpushare: "true"
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/acs/k8s-gpushare-plugin:v2-1.11-35eccab
        name: gpushare
        # Make this pod a Guaranteed pod which will never be evicted because of node's resource consumption.
        command:
        - gpushare-device-plugin-v2
        - -logtostderr
        - --v=5
        #- --memory-unit=Mi
        resources:
          limits:
            memory: "300Mi"
            cpu: "1"
          requests:
            memory: "300Mi"
            cpu: "1"
        env:
        - name: KUBECONFIG
          value: /etc/kubernetes/kubelet.conf
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
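Once the DaemonSet is created and the GPU nodes are labeled (see the labeling step below), a plugin pod should be running on each of them; a quick check:

kubectl -n kube-system get pods -l app=gpushare -o wide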
References
https://github.com/AliyunContainerService/gpushare-scheduler-extender
https://github.com/AliyunContainerService/gpushare-device-plugin

Label the nodes that will share GPUs with gpushare=true

kubectl label node mynode gpushare=true
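To confirm the label took effect:

kubectl get nodes -L gpushare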

Install the kubectl inspect extension

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.1/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/bin/kubectl
cd /usr/bin/
wget https://github.com/AliyunContainerService/gpushare-device-plugin/releases/download/v0.3.0/kubectl-inspect-gpushare
chmod u+x /usr/bin/kubectl-inspect-gpushare
kubectl inspect gpushare  ## check cluster GPU usage

Install a k8s load balancer, MetalLB (optional; the manifest used below is v0.7.3)

wget https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
kubectl apply -f metallb.yaml

metallb-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.18.5.30-10.18.5.50

kubectl apply -f metallb-config.yaml

Test TensorFlow

kubectl apply -f tensorflow.yaml

# tensorflow.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tensorflow-gpu
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: tensorflow-gpu
    spec:
      containers:
      - name: tensorflow-gpu
        image: tensorflow/tensorflow:1.15.0-py3-jupyter
        imagePullPolicy: Never
        resources:
          limits:
            aliyun.com/gpu-mem: 1024
        ports:
        - containerPort: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: tensorflow-gpu
spec:
  ports:
  - port: 8888
    targetPort: 8888
    nodePort: 30888
    name: jupyter
  selector:
    name: tensorflow-gpu
  type: NodePort
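Note that the deployment requests 1024 MiB of GPU memory through the aliyun.com/gpu-mem resource rather than a whole GPU. Once the pod is running, the Jupyter login token can be read from its log, and the notebook is reachable on any node at NodePort 30888:

kubectl logs deploy/tensorflow-gpu | grep token
# then open http://<node-ip>:30888 and paste the token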

Check cluster GPU usage

[root@master ~]# kubectl inspect gpushare
NAME    IPADDRESS   GPU0(Allocated/Total)  GPU1(Allocated/Total)  GPU Memory(MiB)
master  10.18.5.20  1024/15109              0/15109                1024/30218
node    10.18.5.21  0/15109                0/15109                0/30218
------------------------------------------------------------------
Allocated/Total GPU Memory In Cluster:
1024/60436 (1%)
[root@master ~]#

GPU usage can be tested by scaling the number of tensorflow-gpu replicas and by changing the per-pod GPU memory request:

 kubectl scale --current-replicas=1 --replicas=100 deployment/tensorflow-gpu

Testing produced the following results.

Environment

Node    GPUs   GPU memory
master  2      15109M * 2 = 30218M
node    2      15109M * 2 = 30218M

Results

Per-pod gpu-mem   Pods   GPU utilization
256M              183    77%
512M              116    98%
1024M             56     94%

Install Kubeflow (v1.0 RC)

Install ks

  tar -vxf ks_0.12.0_linux_amd64.tar.gz
  cp ks_0.12.0_linux_amd64/* /usr/local/bin/

Install Kubeflow

Download the installation package:

kfctl_v1.0-rc.3-1-g24b60e8_linux.tar.gz

tar -zxvf kfctl_v1.0-rc.3-1-g24b60e8_linux.tar.gz
cp kfctl  /usr/bin/
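A quick sanity check that the binary is on the PATH:

kfctl version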

Preparation: create PVs and PVCs, using NFS as the file store

Create a StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  namespace: kubeflow
#provisioner: example.com/nfs
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

kubectl create -f storage.yml
yum install nfs-utils rpcbind
# Create the NFS export directory (at least four exports are needed)
mkdir -p /data/nfs
vim /etc/exports
# Add the export directory above
/data/nfs 192.168.122.0/24(rw,sync)
systemctl restart nfs-server.service
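To reload /etc/exports without a full service restart and confirm the export is visible:

exportfs -ra
showmount -e localhost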

Create the PVs. Because multiple pods may mount files with the same names, it is best to create several PVs and let each pod choose its own mount (at least 4, for katib-mysql, metadata-mysql, minio, and mysql respectively).

[root@master pv]# cat mysql-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-path   # use a different name for each PV
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: local-path
  nfs:
    path: /data/nfs   # use a different export path for each PV
    server: 10.18.5.20
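Since four near-identical PVs are needed, a small shell loop can stamp them out; a sketch, assuming exports /data/nfs1 through /data/nfs4 exist on the NFS server (PV names and paths are illustrative):

for i in 1 2 3 4; do
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kubeflow-pv-$i
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: local-path
  nfs:
    path: /data/nfs$i
    server: 10.18.5.20
EOF
done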

Create the kubeflow-anonymous namespace

kubectl create namespace kubeflow-anonymous

Download the Kubeflow 1.0 RC yml file: https://github.com/kubeflow/manifests/blob/v1.0-branch/kfdef/kfctl_k8s_istio.yaml

[root@master 2020-0219]# cat kfctl_k8s_istio.yaml
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  clusterName: kubernetes
  creationTimestamp: null
  name: 2020-0219
  namespace: kubeflow
spec:
  applications:
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: istio-system
      repoRef:
        name: manifests
        path: istio/istio-crds
    name: istio-crds
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: istio-system
      repoRef:
        name: manifests
        path: istio/istio-install
    name: istio-install
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: istio-system
      repoRef:
        name: manifests
        path: istio/cluster-local-gateway
    name: cluster-local-gateway
  - kustomizeConfig:
      parameters:
      - name: clusterRbacConfig
        value: "OFF"
      repoRef:
        name: manifests
        path: istio/istio
    name: istio
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: istio-system
      repoRef:
        name: manifests
        path: istio/add-anonymous-user-filter
    name: add-anonymous-user-filter
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: application/application-crds
    name: application-crds
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: application/application
    name: application
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: cert-manager
      repoRef:
        name: manifests
        path: cert-manager/cert-manager-crds
    name: cert-manager-crds
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: kube-system
      repoRef:
        name: manifests
        path: cert-manager/cert-manager-kube-system-resources
    name: cert-manager-kube-system-resources
  - kustomizeConfig:
      overlays:
      - self-signed
      - application
      parameters:
      - name: namespace
        value: cert-manager
      repoRef:
        name: manifests
        path: cert-manager/cert-manager
    name: cert-manager
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: metacontroller
    name: metacontroller
  - kustomizeConfig:
      overlays:
      - istio
      - application
      repoRef:
        name: manifests
        path: argo
    name: argo
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: kubeflow-roles
    name: kubeflow-roles
  - kustomizeConfig:
      overlays:
      - istio
      - application
      repoRef:
        name: manifests
        path: common/centraldashboard
    name: centraldashboard
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: admission-webhook/bootstrap
    name: bootstrap
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: admission-webhook/webhook
    name: webhook
  - kustomizeConfig:
      overlays:
      - istio
      - application
      parameters:
      - name: userid-header
        value: kubeflow-userid
      repoRef:
        name: manifests
        path: jupyter/jupyter-web-app
    name: jupyter-web-app
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: spark/spark-operator
    name: spark-operator
  - kustomizeConfig:
      overlays:
      - istio
      - application
      - db
      repoRef:
        name: manifests
        path: metadata
    name: metadata
  - kustomizeConfig:
      overlays:
      - istio
      - application
      repoRef:
        name: manifests
        path: jupyter/notebook-controller
    name: notebook-controller
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pytorch-job/pytorch-job-crds
    name: pytorch-job-crds
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pytorch-job/pytorch-operator
    name: pytorch-operator
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: usageId
        value: <randomly-generated-id>
      - name: reportUsage
        value: "true"
      repoRef:
        name: manifests
        path: common/spartakus
    name: spartakus
  - kustomizeConfig:
      overlays:
      - istio
      repoRef:
        name: manifests
        path: tensorboard
    name: tensorboard
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: tf-training/tf-job-crds
    name: tf-job-crds
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: tf-training/tf-job-operator
    name: tf-job-operator
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: katib/katib-crds
    name: katib-crds
  - kustomizeConfig:
      overlays:
      - application
      - istio
      repoRef:
        name: manifests
        path: katib/katib-controller
    name: katib-controller
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/api-service
    name: api-service
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: minioPvcName
        value: minio-pv-claim
      repoRef:
        name: manifests
        path: pipeline/minio
    name: minio
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: mysqlPvcName
        value: mysql-pv-claim
      repoRef:
        name: manifests
        path: pipeline/mysql
    name: mysql
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/persistent-agent
    name: persistent-agent
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/pipelines-runner
    name: pipelines-runner
  - kustomizeConfig:
      overlays:
      - istio
      - application
      repoRef:
        name: manifests
        path: pipeline/pipelines-ui
    name: pipelines-ui
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/pipelines-viewer
    name: pipelines-viewer
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/scheduledworkflow
    name: scheduledworkflow
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/pipeline-visualization-service
    name: pipeline-visualization-service
  - kustomizeConfig:
      overlays:
      - application
      - istio
      parameters:
      - name: admin
        value: johnDoe@acme.com
      repoRef:
        name: manifests
        path: profiles
    name: profiles
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: seldon/seldon-core-operator
    name: seldon-core-operator
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: namespace
        value: knative-serving
      repoRef:
        name: manifests
        path: knative/knative-serving-crds
    name: knative-crds
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: namespace
        value: knative-serving
      repoRef:
        name: manifests
        path: knative/knative-serving-install
    name: knative-install
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: kfserving/kfserving-crds
    name: kfserving-crds
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: kfserving/kfserving-install
    name: kfserving-install
  repos:
  - name: manifests
    uri: https://github.com/kubeflow/manifests/archive/master.tar.gz
    version: master
status:
  reposCache:
  - localPath: '"../.cache/manifests/manifests-master"'
    name: manifests
[root@master 2020-0219]#
# Enter your kubeflow app directory and run:
kfctl apply -V -f kfctl_k8s_istio.yaml
# The installer downloads config files from GitHub and may fail; retry on failure

A kustomize folder is generated next to the kubeflow app directory. To keep pods from failing on image pulls after a restart, change every image pull policy in it to IfNotPresent,
then run kfctl apply -V -f kfctl_k8s_istio.yaml again.

Check the running status

kubectl get all -n kubeflow

Access the Kubeflow UI through the Istio ingress gateway

# Change the ingress-gateway service type to LoadBalancer
kubectl -n istio-system edit svc istio-ingressgateway
# Set type to LoadBalancer here:
selector:
  app: istio-ingressgateway
  istio: ingressgateway
  release: istio
sessionAffinity: None
type: LoadBalancer

Save, then inspect the service again:

[root@master 2020-0219]# kubectl -n istio-system get svc istio-ingressgateway
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.98.19.247   10.18.5.30    15020:32230/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31908/TCP,15030:31864/TCP,15031:31315/TCP,15032:30372/TCP,15443:32631/TCP   42h
[root@master 2020-0219]#

EXTERNAL-IP is the externally reachable address; open http://10.18.5.30 to reach the Kubeflow home page.
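If no LoadBalancer implementation is available (for example, MetalLB was skipped), a port-forward is a quick alternative for reaching the dashboard (the local port 8080 is arbitrary):

kubectl -n istio-system port-forward svc/istio-ingressgateway 8080:80
# Then open http://localhost:8080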

On image pulls: gcr.io images cannot be pulled from inside mainland China; they can be fetched as follows:

curl -s https://zhangguanzhang.github.io/bash/pull.sh | bash -s -- <image>
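For example (the image name below is purely illustrative, not a required component):

curl -s https://zhangguanzhang.github.io/bash/pull.sh | bash -s -- gcr.io/kubeflow-images-public/centraldashboard:v1.0.0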

If that also fails, Alibaba Cloud's manual image build service can be used to build the image on overseas build machines.

