6. Services in Detail

Table of Contents

  • 6. Services in Detail
    • 6.1 Introduction to Services
    • 6.2 Service Types
    • 6.3 Using Services
      • 6.3.1 Preparing the experiment environment
      • 6.3.2 ClusterIP Services
      • 6.3.3 Headless Services
        • 6.3.3.1 Differences between Deployment and StatefulSet
        • 6.3.3.2 StatefulSet vs. Deployment
      • 6.3.4 NodePort Services
      • 6.3.5 LoadBalancer Services
      • 6.3.6 ExternalName Services
    • 6.4 Introduction to Ingress
    • 6.5 Using Ingress
      • 6.5.1 Environment preparation
        • 6.5.1.1 HTTP proxy: setting up forwarding
        • 6.5.1.2 Deploying the controller on multiple nodes
        • 6.5.1.3 HTTPS proxy

6.1 Introduction to Services

In Kubernetes, the pod is the carrier of an application: an application can be reached through its pod IP, but pod IP addresses are not fixed, which makes it inconvenient to access a service directly via pod IPs.

To solve this problem, Kubernetes provides the Service resource. A Service aggregates the pods that provide the same service and exposes a single entry address; by accessing the Service's entry address you reach the pods behind it.

img

In many cases a Service is only a concept; what actually does the work is the kube-proxy process, one of which runs on every node. When a Service is created, its definition is written to etcd through the api-server. kube-proxy watches for such Service changes and converts the latest Service information into the corresponding forwarding rules.

img

# 10.97.97.97:80 is the access entry provided by the service.
# Behind this entry there are three pods waiting to be called;
# kube-proxy distributes requests to one of them using the rr (round-robin) policy.
# The same rules are generated on every node in the cluster, so the entry works from any node.
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

kube-proxy currently supports three working modes:

userspace mode

In userspace mode, kube-proxy opens a listening port for every Service. Requests sent to the Cluster IP are redirected by iptables rules to that listening port; kube-proxy then picks a backing Pod according to its LB algorithm, connects to it, and forwards the request. In this mode kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding adds extra copies between kernel space and user space; the mode is stable but relatively inefficient.

img

iptables mode

In iptables mode, kube-proxy creates iptables rules for every Pod behind a service, redirecting traffic sent to the Cluster IP directly to a Pod IP. Here kube-proxy does not act as a layer-4 load balancer itself; it only maintains the iptables rules. This mode is more efficient than userspace mode, but it cannot offer flexible LB policies and cannot retry when a backend Pod is unavailable.

img
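While a cluster is still running in iptables mode, the rules kube-proxy maintains can be inspected directly on a node. A quick sketch (the KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-* chain names are the ones kube-proxy's iptables proxier creates; the exact rules depend on your services):

# List the NAT rules kube-proxy maintains for Services (iptables mode only)
iptables -t nat -L KUBE-SERVICES -n | head -n 20
# Each Service gets a KUBE-SVC-* chain, whose KUBE-SEP-* entries each point at one pod IP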

ipvs mode

ipvs mode is similar to iptables mode: kube-proxy watches Pod changes and creates the corresponding ipvs rules. ipvs forwards traffic more efficiently than iptables and also supports more LB algorithms.

img
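Before switching kube-proxy to ipvs mode, the ipvs kernel modules must be loadable on every node. A minimal sketch (module names can vary slightly between kernel versions; recent kernels use nf_conntrack rather than nf_conntrack_ipv4):

# Load the ipvs-related kernel modules on each node
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
# Verify that the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack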

# ipvs mode requires the ipvs kernel modules; otherwise kube-proxy falls back to iptables
# cm is the short name for ConfigMap
[root@k8s-master inventory]# kubectl api-resources | grep cm
configmaps                        cm           v1                                     true         ConfigMap
# Enable ipvs: edit the kube-proxy ConfigMap and change mode to "ipvs"
[root@k8s-master inventory]# kubectl edit configmaps kube-proxy -n kube-system
configmap/kube-proxy edited
# Find the kube-proxy pods by their label k8s-app=kube-proxy
[root@k8s-master kubernetes]# kubectl get pods -n kube-system --show-labels | grep k8s-app=kube-proxy
kube-proxy-n28gm                     1/1     Running   0                5m52s   controller-revision-hash=65cf5dddb9,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-q6clr                     1/1     Running   0                5m52s   controller-revision-hash=65cf5dddb9,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-vvkbn                     1/1     Running   0                5m53s   controller-revision-hash=65cf5dddb9,k8s-app=kube-proxy,pod-template-generation=1
# Delete the kube-proxy pods so they are recreated with the new mode
[root@k8s-master kubernetes]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
pod "kube-proxy-n28gm" deleted
pod "kube-proxy-q6clr" deleted
pod "kube-proxy-vvkbn" deleted
# Install ipvsadm on the nodes to inspect the ipvs rules
[root@k8s-node1 ~]# yum -y install ipvsadm-1.31-1.el8.x86_64
[root@k8s-node1 kubernetes]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:32149 rr
TCP  192.168.207.11:32149 rr
TCP  10.96.0.1:443 rr
  -> 192.168.207.10:6443          Masq    1      1          0
TCP  10.96.0.10:53 rr
TCP  10.96.0.10:9153 rr
TCP  10.110.182.125:80 rr
TCP  10.244.1.0:32149 rr
UDP  10.96.0.10:53 rr
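To confirm that kube-proxy really came up in ipvs mode (and did not silently fall back to iptables), its logs can be checked. The exact wording of the log line varies between versions, so this is only a rough sketch:

# Check which proxier the recreated kube-proxy pods report using
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i proxier
# A line mentioning the ipvs proxier indicates ipvs mode is active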

6.2 Service Types

The Service resource manifest file:

kind: Service        # resource type
apiVersion: v1       # resource version
metadata:            # metadata
  name: service      # resource name
  namespace: dev     # namespace
spec:                # description
  selector:          # label selector, determines which pods this service proxies
    app: nginx
  type:              # Service type, specifies how the service is accessed
  clusterIP:         # IP address of the virtual service
  sessionAffinity:   # session affinity, supports ClientIP and None
  ports:             # port information
  - protocol: TCP
    port: 3017       # service port
    targetPort: 5003 # pod port
    nodePort: 31122  # node (host) port
  • ClusterIP: the default; a virtual IP automatically assigned by Kubernetes, reachable only from inside the cluster
  • NodePort: exposes the Service on a specified port of each Node, so it can be reached from outside the cluster
  • LoadBalancer: uses an external load balancer to distribute traffic to the service; this mode requires support from the external (cloud) environment
  • ExternalName: brings a service that lives outside the cluster into the cluster so it can be used directly
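Besides writing a manifest, a Service can also be created imperatively with kubectl expose. A small sketch, assuming the pc-deployment created in the next section already exists (the service name svc-demo is just illustrative):

# Create a ClusterIP Service for an existing deployment without writing YAML
kubectl expose deployment pc-deployment --name=svc-demo --port=80 --target-port=80 --type=ClusterIP -n dev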

6.3 Using Services

6.3.1 Preparing the experiment environment

Before using Services, first create 3 pods with a Deployment; note that the pods must carry the label app=nginx-pod.

Create deployment.yaml with the following content:

[root@k8s-master inventory]# vi deployment.yaml
[root@k8s-master inventory]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
[root@k8s-master inventory]# kubectl apply -f deployment.yaml 
deployment.apps/pc-deployment created
# Check the pod details
[root@k8s-master inventory]# kubectl get -f deployment.yaml 
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   3/3     3            3           7s
[root@k8s-master inventory]# kubectl get -f deployment.yaml -o wide
NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
pc-deployment   3/3     3            3           12s   nginx        nginx:1.17.1   app=nginx-pod
[root@k8s-master inventory]# kubectl get pods -n dev -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
pc-deployment-6fdb77cb4-4c5zq   1/1     Running   0          16s   10.244.2.42   k8s-node2   <none>           <none>
pc-deployment-6fdb77cb4-4prlq   1/1     Running   0          16s   10.244.1.32   k8s-node1   <none>           <none>
pc-deployment-6fdb77cb4-jq776   1/1     Running   0          16s   10.244.2.43   k8s-node2   <none>           <none>
# To make the later tests easier to read, change the page served by each of the three nginx pods
[root@k8s-master inventory]# kubectl exec -itn dev pc-deployment-6fdb77cb4-4c5zq -- /bin/sh
# echo "page 111" > /usr/share/nginx/html/index.html 
# exit
[root@k8s-master inventory]# kubectl exec -itn dev pc-deployment-6fdb77cb4-4prlq -- /bin/sh
# echo "page 222" > /usr/share/nginx/html/index.html
# exit
[root@k8s-master inventory]# kubectl exec -itn dev pc-deployment-6fdb77cb4-jq776 -- /bin/sh
# echo "page 333" > /usr/share/nginx/html/index.html
# exit
# Test the three pods
[root@k8s-master ~]# curl 10.244.2.42
page 111
[root@k8s-master ~]# curl 10.244.1.32
page 222
[root@k8s-master ~]# curl 10.244.2.43
page 333

6.3.2 ClusterIP Services

Create the service-clusterip.yaml file:

[root@k8s-master inventory]# vi service-clusterip.yaml
[root@k8s-master inventory]# cat service-clusterip.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
[root@k8s-master inventory]# kubectl apply -f service-clusterip.yaml 
service/service-clusterip created
# Check the service
[root@k8s-master inventory]# kubectl get -f service-clusterip.yaml 
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service-clusterip   ClusterIP   10.110.199.147   <none>        80/TCP    5s
[root@k8s-master inventory]# kubectl get svc -n dev -o wide
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
nginx               NodePort    10.110.182.125   <none>        80:32149/TCP   5d22h   app=nginx-pod
service-clusterip   ClusterIP   10.110.199.147   <none>        80/TCP         9s      app=nginx-pod
# Check the service details
# Note the Endpoints list: these are the backend endpoints this service can balance traffic to
[root@k8s-master inventory]# kubectl describe svc service-clusterip -n dev
Name:              service-clusterip
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.110.199.147
IPs:               10.110.199.147
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.32:80,10.244.2.42:80,10.244.2.43:80
Session Affinity:  None
Events:            <none>
# Check the ipvs forwarding rules
[root@k8s-master inventory]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:32149 rr
  -> 10.244.1.32:80               Masq    1      0          0
  -> 10.244.2.42:80               Masq    1      0          0
  -> 10.244.2.43:80               Masq    1      0          0
TCP  192.168.207.10:32149 rr
  -> 10.244.1.32:80               Masq    1      0          0
  -> 10.244.2.42:80               Masq    1      0          0
  -> 10.244.2.43:80               Masq    1      0          0
TCP  10.96.0.1:443 rr
  -> 192.168.207.10:6443          Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.4:9153              Masq    1      0          0
  -> 10.244.0.5:9153              Masq    1      0          0
TCP  10.110.182.125:80 rr
  -> 10.244.1.32:80               Masq    1      0          0
  -> 10.244.2.42:80               Masq    1      0          0
  -> 10.244.2.43:80               Masq    1      0          0
TCP  10.110.199.147:80 rr
  -> 10.244.1.32:80               Masq    1      0          0
  -> 10.244.2.42:80               Masq    1      0          0
  -> 10.244.2.43:80               Masq    1      0          0
TCP  10.244.0.0:32149 rr
  -> 10.244.1.32:80               Masq    1      0          0
  -> 10.244.2.42:80               Masq    1      0          0
  -> 10.244.2.43:80               Masq    1      0          0
TCP  10.244.0.1:32149 rr
  -> 10.244.1.32:80               Masq    1      0          0
  -> 10.244.2.42:80               Masq    1      0          0
  -> 10.244.2.43:80               Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0
# Access 10.110.199.147 and observe the result
[root@k8s-master inventory]# kubectl get svc -n dev -o wide
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
nginx               NodePort    10.110.182.125   <none>        80:32149/TCP   5d22h   app=nginx-pod
service-clusterip   ClusterIP   10.110.199.147   <none>        80/TCP         48s     app=nginx-pod
[root@k8s-master inventory]# curl 10.110.199.147
page 333
[root@k8s-master inventory]# curl 10.110.199.147
page 111
[root@k8s-master inventory]# curl 10.110.199.147
page 222
[root@k8s-master inventory]# curl 10.110.199.147
page 333
[root@k8s-master inventory]# curl 10.110.199.147
page 111
[root@k8s-master inventory]# curl 10.110.199.147
page 222
[root@k8s-master inventory]# curl 10.110.199.147
page 333

Endpoint

Endpoint is a Kubernetes resource object, stored in etcd, that records the access addresses of all the pods backing a service; it is generated from the selector described in the service's configuration.

A Service is backed by a group of Pods that are exposed through Endpoints, the set of endpoints that actually implement the service. In other words, the link between a service and its pods is realised through endpoints.

img

[root@k8s-master inventory]# kubectl get endpoints -n dev -o wide
NAME                ENDPOINTS                                      AGE
service-clusterip   10.244.1.32:80,10.244.2.42:80,10.244.2.43:80   3m41s

Load distribution policies

Requests to a Service are distributed across its backend Pods. Kubernetes currently offers two distribution policies:

  • If nothing is specified, kube-proxy's default policy is used, e.g. random or round-robin.

  • Session affinity based on the client address: all requests coming from the same client are forwarded to one fixed Pod.

    This mode is enabled by adding sessionAffinity: ClientIP to the spec.

    # Check the ipvs rules (rr = round-robin)
    # grep -A 3 prints the 3 lines following the match
    [root@k8s-master inventory]#  ipvsadm -Ln|grep -A 3 '10.110.199.147'
    TCP  10.110.199.147:80 rr
      -> 10.244.1.32:80               Masq    1      0          0
      -> 10.244.2.42:80               Masq    1      0          0
      -> 10.244.2.43:80               Masq    1      0          0
    # Test with a loop of requests
    [root@k8s-master inventory]# while true;do curl 10.110.199.147:80; sleep 2; done;
    page 111
    page 222
    page 333
    page 111
    page 222
    page 333
    page 111
    page 222
    page 333
    page 111
    ^C
    [root@k8s-master inventory]# kubectl delete -f service-clusterip.yaml 
    service "service-clusterip" deleted
    # Change the distribution policy: add sessionAffinity: ClientIP
    [root@k8s-master inventory]# vi service-clusterip.yaml 
    [root@k8s-master inventory]# cat service-clusterip.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: service-clusterip
      namespace: dev
    spec:
      sessionAffinity: ClientIP
      selector:
        app: nginx-pod
      type: ClusterIP
      ports:
      - port: 80        # Service port
        targetPort: 80  # pod port
    [root@k8s-master inventory]# kubectl apply -f service-clusterip.yaml 
    service/service-clusterip created
    [root@k8s-master inventory]# 
    [root@k8s-master inventory]# kubectl get svc -n dev
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    service-clusterip   ClusterIP   10.111.115.232   <none>        80/TCP    12s
    # Check the ipvs rules (persistent means the connection is sticky)
    [root@k8s-master inventory]# ipvsadm -Ln|grep -A 3 '10.111.115.232'
    TCP  10.111.115.232:80 rr persistent 10800
      -> 10.244.1.32:80               Masq    1      0          0
      -> 10.244.2.42:80               Masq    1      0          0
      -> 10.244.2.43:80               Masq    1      0          0
    # Test with a loop of requests
    [root@k8s-master inventory]#  while true;do curl 10.111.115.232; sleep 2; done;
    page 333
    page 333
    page 333
    [root@k8s-master inventory]# kubectl delete -f service-clusterip.yaml 
    service "service-clusterip" deleted
    

6.3.3 Headless Services

  • In some scenarios developers do not want the load balancing that a Service provides and prefer to control the balancing policy themselves. For this case Kubernetes offers the headless Service: it is not assigned a Cluster IP, and the service can only be reached by resolving its DNS name.

  • Resolving a headless service's DNS name returns the addresses and DNS names of all its Pods (only Pods created by a StatefulSet get their own DNS names).

    Resolving the DNS name of an ordinary service only returns the service's ClusterIP.

6.3.3.1 Differences between Deployment and StatefulSet
Deployment and StatefulSet are the two most common ways to deploy applications in Kubernetes, and they differ in characteristics and use cases.

A Deployment is the most common deployment method. It manages stateless applications and uses a ReplicaSet to implement scaling, rolling upgrades and rollbacks. A Deployment does not need to maintain stable network identities or persistent volumes; every Pod is an independent, interchangeable instance, and when a Pod fails or the workload is scaled, the Deployment simply creates or destroys Pods.

By contrast, a StatefulSet is designed for deploying and managing stateful applications. It guarantees stable identifiers and ordered deployment, scale-up and scale-down. Every Pod in a StatefulSet has a unique network identity and its own persistent volume, and it keeps the same identity and data even after being recreated. This matters for applications that depend on stable identity and persistent state, such as databases or distributed storage systems.

The two also differ in update strategy. A Deployment performs rolling upgrades, gradually creating new Pods and removing old ones to keep the application available. A StatefulSet upgrades and rolls back by creating and deleting Pods in order, to preserve data continuity and consistency.

In short, a Deployment suits stateless applications and a StatefulSet suits stateful ones; choosing the deployment method that matches the application's needs makes the best use of what Kubernetes offers.


6.3.3.2 StatefulSet vs. Deployment
StatefulSet and Deployment are two commonly used resource objects in Kubernetes, with different characteristics and purposes for deploying and managing applications.

A Deployment is a stateless resource object used to manage stateless applications. Stateless means the application instances have no dependencies on each other and can be created, deleted and replaced at will. A Deployment uses a ReplicaSet to provide scaling and high availability: to scale horizontally or perform a rolling update, simply change the replica count or the image version, and Kubernetes handles creating and removing Pods automatically.

A StatefulSet is a stateful resource object used to manage stateful applications. Stateful means the instances depend on each other and need ordered, unique names and network identities. A StatefulSet achieves this with ordered Pod names and persistent storage; when a stateful application is scaled up, scaled down or rolled, the StatefulSet preserves the order and uniqueness of the instances, keeping the data consistent and reliable.

To summarise, a Deployment suits stateless applications and supports horizontal scaling and rolling updates, while a StatefulSet suits stateful applications and guarantees unique, ordered instances and consistent, reliable data. Choose the resource object that matches the nature and needs of the application.
# Create statefulset.yaml
[root@k8s-master inventory]# cat statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: b1
        image: busybox:latest
        command: ["/bin/sleep", "6000"]
        ports:
        - containerPort: 80
[root@k8s-master inventory]# kubectl apply -f statefulset.yaml 
statefulset.apps/pc-deployment created
[root@k8s-master inventory]# kubectl get -f statefulset.yaml -o wide
NAME            READY   AGE   CONTAINERS   IMAGES
pc-deployment   3/3     21s   b1           busybox:latest
[root@k8s-master inventory]# kubectl get pods -n dev
NAME              READY   STATUS    RESTARTS   AGE
pc-deployment-0   1/1     Running   0          25s
pc-deployment-1   1/1     Running   0          15s
pc-deployment-2   1/1     Running   0          11s
# Create service-headless.yaml
[root@k8s-master inventory]# cat service-headless.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None  # setting clusterIP to None creates a headless Service (no ClusterIP is auto-assigned)
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
[root@k8s-master inventory]# kubectl apply -f service-headless.yaml 
service/service-headliness created
# Get the service: note that no CLUSTER-IP has been allocated
[root@k8s-master inventory]# kubectl get svc service-headliness -n dev -o wide
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-headliness   ClusterIP   None         <none>        80/TCP    27m   app=nginx-pod
[root@k8s-master inventory]#  kubectl get -f service-headless.yaml 
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service-headliness   ClusterIP   None         <none>        80/TCP    28m
# Check the service details
[root@k8s-master inventory]# kubectl describe svc service-headliness -n dev
Name:              service-headliness
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.36:80,10.244.1.37:80,10.244.2.48:80
Session Affinity:  None
Events:            <none>
# Check how the domain name resolves
[root@k8s-master inventory]# kubectl get pods -n dev
NAME              READY   STATUS    RESTARTS   AGE
pc-deployment-0   1/1     Running   0          36s
pc-deployment-1   1/1     Running   0          35s
pc-deployment-2   1/1     Running   0          33s
[root@k8s-master inventory]# kubectl exec -itn dev pc-deployment-0 -- /bin/sh
/ # 
/ # cat /etc/resolv.conf
search dev.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
/ # nslookup service-headliness.dev.svc.cluster.local
Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	service-headliness.dev.svc.cluster.local
Address: 10.244.1.36
Name:	service-headliness.dev.svc.cluster.local
Address: 10.244.1.37
Name:	service-headliness.dev.svc.cluster.local
Address: 10.244.2.48

6.3.4 NodePort Services

In the previous examples the created Service's IP could only be reached from inside the cluster. To expose a Service to the outside world, another Service type is needed: NodePort. A NodePort service works by mapping the service's port to a port on every Node, so the service can then be reached via NodeIP:NodePort.

img

Create service-nodeport.yaml:

[root@k8s-master inventory]# cat service-nodeport.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort
  ports:
  - port: 80
    nodePort: 30002   # node port to expose
    targetPort: 80
[root@k8s-master inventory]# kubectl apply -f service-nodeport.yaml 
service/service-nodeport created
# Check the service
[root@k8s-master inventory]# kubectl get -f service-nodeport.yaml 
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service-nodeport   NodePort   10.104.111.160   <none>        80:30002/TCP   11s
[root@k8s-master inventory]# kubectl get -f service-nodeport.yaml -o wide
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service-nodeport   NodePort   10.104.111.160   <none>        80:30002/TCP   16s   app=nginx-pod
# You can now open port 30002 of any node's IP from a browser on your workstation and reach the pods
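The same can be done from the command line; the node IP below is one of this cluster's nodes and is only an example, so substitute one of your own node IPs:

# Access the NodePort service from outside the cluster
curl http://192.168.207.11:30002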


6.3.5 LoadBalancer Services

LoadBalancer is similar to NodePort in that both expose a port to the outside. The difference is that LoadBalancer also sets up a load-balancing device outside the cluster, which requires support from the external environment; requests sent to that device are balanced and then forwarded into the cluster.

img
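No hands-on example is given here because it needs cloud-provider support. For reference only, a minimal sketch of what such a manifest might look like (the service name service-loadbalancer is made up for illustration; on a bare-metal cluster EXTERNAL-IP will stay pending):

# Sketch of a LoadBalancer Service (an external IP is only assigned on a supported cloud)
cat > service-loadbalancer.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: service-loadbalancer   # illustrative name
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
EOF
kubectl apply -f service-loadbalancer.yaml
# On a supported cloud, EXTERNAL-IP is eventually filled in by the provider
kubectl get svc service-loadbalancer -n dev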

6.3.6 ExternalName Services

An ExternalName service is used to bring a service from outside the cluster into the cluster: the externalName field points at the address of the external service, and accessing this service from inside the cluster then reaches the external service.

img

[root@k8s-master inventory]# cat service-externalname.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName
  externalName: www.baidu.com
[root@k8s-master inventory]# kubectl apply -f service-externalname.yaml 
service/service-externalname created
[root@k8s-master inventory]# kubectl get -f service-externalname.yaml -o wide
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)   AGE   SELECTOR
service-externalname   ExternalName   <none>       www.baidu.com   <none>    11s   <none>
[root@k8s-master inventory]# kubectl exec -itn dev pc-deployment-0 -- /bin/sh
/ # 
/ # cat /etc/resolv.conf
search dev.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
/ # nslookup service-externalname.dev.svc.cluster.local
Server:		10.96.0.10
Address:	10.96.0.10:53

service-externalname.dev.svc.cluster.local	canonical name = www.baidu.com
www.baidu.com	canonical name = www.a.shifen.com
Name:	www.a.shifen.com
Address: 183.2.172.185
Name:	www.a.shifen.com
Address: 183.2.172.42

service-externalname.dev.svc.cluster.local	canonical name = www.baidu.com
www.baidu.com	canonical name = www.a.shifen.com
Name:	www.a.shifen.com
Address: 240e:ff:e020:966:0:ff:b042:f296
Name:	www.a.shifen.com
Address: 240e:ff:e020:9ae:0:ff:b014:8e8b

6.4 Introduction to Ingress

As mentioned earlier, the two main ways a Service can be exposed outside the cluster are NodePort and LoadBalancer, and both have drawbacks:

  • NodePort occupies a port on the cluster's machines for every service, a drawback that becomes more obvious as the number of services grows
  • LoadBalancer (LB) requires one load balancer per service, which is wasteful and cumbersome, and needs devices outside of Kubernetes

To address this, Kubernetes provides the Ingress resource. A single NodePort or a single LB is enough for an Ingress to expose many Services. The mechanism works roughly as shown below:

img

In effect, an Ingress is a layer-7 load balancer and is Kubernetes' abstraction of a reverse proxy. Its working principle is similar to Nginx: many mapping rules are defined in Ingress objects, and the Ingress Controller watches these rules and turns them into the reverse proxy's configuration (e.g. Nginx configuration), which then serves external traffic. Two core concepts are involved:

  • Ingress: a Kubernetes object whose purpose is to define the rules for how requests are forwarded to Services
  • Ingress Controller: the program that actually implements the reverse proxying and load balancing; it parses the rules defined by Ingress objects and forwards requests accordingly. There are many implementations, e.g. Nginx, Contour, HAProxy

The working principle of Ingress (taking Nginx as the example) is:

  1. The user writes Ingress rules stating which domain name maps to which Service in the cluster
  2. The Ingress controller watches for changes to the Ingress rules and generates the corresponding piece of Nginx reverse-proxy configuration
  3. The controller writes the generated configuration into a running Nginx service and updates it dynamically
  4. From then on, the component doing the real work is just an Nginx instance configured with the user-defined forwarding rules

img

6.5 Using Ingress

6.5.1 Environment preparation

Setting up the ingress environment

[root@k8s-master ~]#  mkdir ingress-controller
# Fetch the ingress-nginx manifest; this walkthrough uses ingress-nginx v1.3.1
# Change the image repository in deploy.yaml:
# replace quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
# with dyrnq/ingress-nginx-controller:v1.3.1
# Reference: https://blog.csdn.net/mushuangpanny/article/details/126964281
[root@k8s-master ingress-controller]# vi deploy.yaml 
[root@k8s-master ingress-controller]# cat deploy.yaml 
apiVersion: v1
kind: Namespace
metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxname: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginxnamespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admissionnamespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginxnamespace: ingress-nginx
rules:
- apiGroups:- ""resources:- namespacesverbs:- get
- apiGroups:- ""resources:- configmaps- pods- secrets- endpointsverbs:- get- list- watch
- apiGroups:- ""resources:- servicesverbs:- get- list- watch
- apiGroups:- networking.k8s.ioresources:- ingressesverbs:- get- list- watch
- apiGroups:- networking.k8s.ioresources:- ingresses/statusverbs:- update
- apiGroups:- networking.k8s.ioresources:- ingressclassesverbs:- get- list- watch
- apiGroups:- ""resourceNames:- ingress-controller-leaderresources:- configmapsverbs:- get- update
- apiGroups:- ""resources:- configmapsverbs:- create
- apiGroups:- coordination.k8s.ioresourceNames:- ingress-controller-leaderresources:- leasesverbs:- get- update
- apiGroups:- coordination.k8s.ioresources:- leasesverbs:- create
- apiGroups:- ""resources:- eventsverbs:- create- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admissionnamespace: ingress-nginx
rules:
- apiGroups:- ""resources:- secretsverbs:- get- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx
rules:
- apiGroups:- ""resources:- configmaps- endpoints- nodes- pods- secrets- namespacesverbs:- list- watch
- apiGroups:- coordination.k8s.ioresources:- leasesverbs:- list- watch
- apiGroups:- ""resources:- nodesverbs:- get
- apiGroups:- ""resources:- servicesverbs:- get- list- watch
- apiGroups:- networking.k8s.ioresources:- ingressesverbs:- get- list- watch
- apiGroups:- ""resources:- eventsverbs:- create- patch
- apiGroups:- networking.k8s.ioresources:- ingresses/statusverbs:- update
- apiGroups:- networking.k8s.ioresources:- ingressclassesverbs:- get- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission
rules:
- apiGroups:- admissionregistration.k8s.ioresources:- validatingwebhookconfigurationsverbs:- get- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginxnamespace: ingress-nginx
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: ingress-nginx
subjects:
- kind: ServiceAccountname: ingress-nginxnamespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admissionnamespace: ingress-nginx
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: ingress-nginx-admission
subjects:
- kind: ServiceAccountname: ingress-nginx-admissionnamespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: ingress-nginx
subjects:
- kind: ServiceAccountname: ingress-nginxnamespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: ingress-nginx-admission
subjects:
- kind: ServiceAccountname: ingress-nginx-admissionnamespace: ingress-nginx
---
apiVersion: v1
data:allow-snippet-annotations: "true"
kind: ConfigMap
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-controllernamespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-controllernamespace: ingress-nginx
spec:ipFamilies:- IPv4ipFamilyPolicy: SingleStackports:- appProtocol: httpname: httpport: 80protocol: TCPtargetPort: http- appProtocol: httpsname: httpsport: 443protocol: TCPtargetPort: httpsselector:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtype: NodePort
---
apiVersion: v1
kind: Service
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-controller-admissionnamespace: ingress-nginx
spec:ports:- appProtocol: httpsname: https-webhookport: 443targetPort: webhookselector:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtype: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-controllernamespace: ingress-nginx
spec:minReadySeconds: 0revisionHistoryLimit: 10selector:matchLabels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtemplate:metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxspec:hostNetwork: truecontainers:- args:- /nginx-ingress-controller- --election-id=ingress-controller-leader- --controller-class=k8s.io/ingress-nginx- --ingress-class=nginx- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller- --validating-webhook=:8443- --validating-webhook-certificate=/usr/local/certificates/cert- --validating-webhook-key=/usr/local/certificates/keyenv:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: LD_PRELOADvalue: /usr/local/lib/libmimalloc.soimage: dyrnq/ingress-nginx-controller:v1.3.1imagePullPolicy: IfNotPresentlifecycle:preStop:exec:command:- /wait-shutdownlivenessProbe:failureThreshold: 5httpGet:path: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10successThreshold: 1timeoutSeconds: 1name: controllerports:- containerPort: 80name: httpprotocol: TCP- containerPort: 443name: httpsprotocol: TCP- containerPort: 8443name: webhookprotocol: TCPreadinessProbe:failureThreshold: 3httpGet:path: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10successThreshold: 1timeoutSeconds: 1resources:requests:cpu: 100mmemory: 90MisecurityContext:allowPrivilegeEscalation: truecapabilities:add:- NET_BIND_SERVICEdrop:- ALLrunAsUser: 101volumeMounts:- mountPath: /usr/local/certificates/name: webhook-certreadOnly: truednsPolicy: ClusterFirstnodeSelector:kubernetes.io/os: linuxserviceAccountName: ingress-nginxterminationGracePeriodSeconds: 300volumes:- name: webhook-certsecret:secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission-createnamespace: ingress-nginx
spec:template:metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission-createspec:containers:- args:- create- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc- --namespace=$(POD_NAMESPACE)- --secret-name=ingress-nginx-admissionenv:- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespaceimage: dyrnq/kube-webhook-certgen:v1.3.0imagePullPolicy: IfNotPresentname: createsecurityContext:allowPrivilegeEscalation: falsenodeSelector:kubernetes.io/os: linuxrestartPolicy: OnFailuresecurityContext:fsGroup: 2000runAsNonRoot: truerunAsUser: 2000serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission-patchnamespace: ingress-nginx
spec:template:metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission-patchspec:containers:- args:- patch- --webhook-name=ingress-nginx-admission- --namespace=$(POD_NAMESPACE)- --patch-mutating=false- --secret-name=ingress-nginx-admission- --patch-failure-policy=Failenv:- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespaceimage: dyrnq/kube-webhook-certgen:v1.3.0 imagePullPolicy: IfNotPresentname: patchsecurityContext:allowPrivilegeEscalation: falsenodeSelector:kubernetes.io/os: linuxrestartPolicy: OnFailuresecurityContext:fsGroup: 2000runAsNonRoot: truerunAsUser: 2000serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: nginx
spec:controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:- v1clientConfig:service:name: ingress-nginx-controller-admissionnamespace: ingress-nginxpath: /networking/v1/ingressesfailurePolicy: FailmatchPolicy: Equivalentname: validate.nginx.ingress.kubernetes.iorules:- apiGroups:- networking.k8s.ioapiVersions:- v1operations:- CREATE- UPDATEresources:- ingressessideEffects: None
[root@k8s-master ingress-controller]# grep image deploy.yaml 
        image: dyrnq/ingress-nginx-controller:v1.3.1
        imagePullPolicy: IfNotPresent
        image: dyrnq/kube-webhook-certgen:v1.3.0
        imagePullPolicy: IfNotPresent
        image: dyrnq/kube-webhook-certgen:v1.3.0
        imagePullPolicy: IfNotPresent

# Add hostNetwork: true to the controller Deployment's pod spec, at this position in deploy.yaml:
400 spec:
401   minReadySeconds: 0
402   revisionHistoryLimit: 10
403   selector:
404     matchLabels:
405       app.kubernetes.io/component: controller
406       app.kubernetes.io/instance: ingress-nginx
407       app.kubernetes.io/name: ingress-nginx
408   template:
409     metadata:
410       labels:
411         app.kubernetes.io/component: controller
412         app.kubernetes.io/instance: ingress-nginx
413         app.kubernetes.io/name: ingress-nginx
414     spec:
415       hostNetwork: true

[root@k8s-master ingress-controller]# kubectl apply -f deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
# Check the ingress-nginx namespace
[root@k8s-master ingress-controller]# kubectl get ns
NAME              STATUS   AGE
default           Active   138d
dev               Active   20d
ingress-nginx     Active   86s
kube-flannel      Active   19d
kube-node-lease   Active   138d
kube-public       Active   138d
kube-system       Active   138d
[root@k8s-master ingress-controller]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-wlzfp        0/1     Completed   0          95s
ingress-nginx-admission-patch-7t9zq         0/1     Completed   1          95s
ingress-nginx-controller-687f6cb7c5-l6cwl   1/1     Running     0          95s
# Check the services
[root@k8s-master ingress-controller]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.102.7.12    <none>        80:31310/TCP,443:31868/TCP   103s
ingress-nginx-controller-admission   ClusterIP   10.100.63.61   <none>        443/TCP                      103s
  • A note on the manifest variants: "cloud" means deployment at a cloud provider (Alibaba Cloud hosts, Tencent Cloud hosts, and so on), where the provider's load balancer can be used for load balancing
  • A self-built cluster sits on bare metal (bare-metal machines)
  • The "baremetal" variant targets bare-metal machines

Preparing the Services and Pods

To make the following experiments easier, create the model shown in the figure below.

img

Create tomcat-nginx.yaml:

[root@k8s-master ingress-controller]# vi tomcat-nginx.yaml
[root@k8s-master ingress-controller]# cat tomcat-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat-pod
  template:
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-jre10-slim
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: dev
spec:
  selector:
    app: tomcat-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
[root@k8s-master ingress-controller]# kubectl apply -f tomcat-nginx.yaml 
deployment.apps/nginx-deployment created
deployment.apps/tomcat-deployment created
service/nginx-service created
service/tomcat-service created
# Check
[root@k8s-master ingress-controller]# kubectl get pods -n dev
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6fdb77cb4-dmsk4    1/1     Running   0          3m32s
nginx-deployment-6fdb77cb4-hdbh2    1/1     Running   0          3m32s
nginx-deployment-6fdb77cb4-xbs5h    1/1     Running   0          3m32s
tomcat-deployment-96986c7bd-4lg4r   1/1     Running   0          3m32s
tomcat-deployment-96986c7bd-6n8rn   1/1     Running   0          3m32s
tomcat-deployment-96986c7bd-z8h89   1/1     Running   0          3m32s
[root@k8s-master ingress-controller]# kubectl get pods -n dev -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
nginx-deployment-6fdb77cb4-dmsk4    1/1     Running   0          3m34s   10.244.1.46   k8s-node1   <none>           <none>
nginx-deployment-6fdb77cb4-hdbh2    1/1     Running   0          3m34s   10.244.2.56   k8s-node2   <none>           <none>
nginx-deployment-6fdb77cb4-xbs5h    1/1     Running   0          3m34s   10.244.2.55   k8s-node2   <none>           <none>
tomcat-deployment-96986c7bd-4lg4r   1/1     Running   0          3m34s   10.244.1.47   k8s-node1   <none>           <none>
tomcat-deployment-96986c7bd-6n8rn   1/1     Running   0          3m34s   10.244.2.57   k8s-node2   <none>           <none>
tomcat-deployment-96986c7bd-z8h89   1/1     Running   0          3m34s   10.244.1.48   k8s-node1   <none>           
[root@k8s-master ingress-controller]# kubectl run centos2 --image centos -n dev -- /bin/sleep 9000
pod/centos2 created
[root@k8s-master ingress-controller]# kubectl get pods -n dev 
NAME                                READY   STATUS    RESTARTS   AGE
centos2                             1/1     Running   0          2m5s
nginx-deployment-6fdb77cb4-dmsk4    1/1     Running   0          23m
nginx-deployment-6fdb77cb4-hdbh2    1/1     Running   0          23m
nginx-deployment-6fdb77cb4-xbs5h    1/1     Running   0          23m
tomcat-deployment-96986c7bd-4lg4r   1/1     Running   0          23m
tomcat-deployment-96986c7bd-6n8rn   1/1     Running   0          23m
tomcat-deployment-96986c7bd-z8h89   1/1     Running   0          23m
# Access the resources through the services
[root@k8s-master ingress-controller]# kubectl get svc -n dev
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
nginx-service    ClusterIP   None         <none>        80/TCP     23m
tomcat-service   ClusterIP   None         <none>        8080/TCP   23m
# Exec into a container
[root@k8s-master ingress-controller]# kubectl exec -itn dev centos2 -- /bin/bash
[root@centos2 /]# curl nginx-service
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# The services work and can be reached from inside the cluster; next, set up the forwarding
6.5.1.1 HTTP proxy: setting up forwarding
  • Create ingress-http.yaml
[root@k8s-master ingress-controller]# cat ingress-http.yaml 
apiVersion: networking.k8s.io/v1 
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.wjmz.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.wjmz.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080
# ingressClassName selects which Ingress Controller handles this Ingress by name; it is mainly useful when several controllers are installed.
# The Nginx Ingress Controller deployed above registers the class name "nginx".
[root@k8s-master ingress-controller]# kubectl get -f ingress-http.yaml 
NAME           CLASS   HOSTS                            ADDRESS          PORTS   AGE
ingress-http   nginx   nginx.wjmz.com,tomcat.wjmz.com   192.168.207.11   80      95s
[root@k8s-master ingress-controller]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.102.7.12    <none>        80:31310/TCP,443:31868/TCP   92m
ingress-nginx-controller-admission   ClusterIP   10.100.63.61   <none>        443/TCP                      92m
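If you are unsure what to put in ingressClassName, the installed IngressClass objects can be listed; with the deploy.yaml used above this should show a class named nginx:

# List the available IngressClass objects
kubectl get ingressclass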


  • Edit the hosts file on the client so the two domain names can be reached without adding a port number; a sketch follows below.
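A minimal sketch from a Linux client, assuming your setup matches the one above: the node IP is one of this cluster's nodes and the NodePort 31310 comes from the earlier kubectl get svc -n ingress-nginx output, so substitute your own values.

# Point the test domain names at a node running the ingress controller
# (on Windows, edit C:\Windows\System32\drivers\etc\hosts instead)
echo "192.168.207.11 nginx.wjmz.com tomcat.wjmz.com" >> /etc/hosts
# With hostNetwork: true the controller listens on port 80 of its node, so no port is needed
curl http://nginx.wjmz.com
# Without hostNetwork, use the NodePort of the ingress-nginx-controller service instead
curl http://nginx.wjmz.com:31310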
6.5.1.2 Deploying the controller on multiple nodes
  • Delete what was created before
[root@k8s-master ingress-controller]# kubectl delete -f tomcat-nginx.yaml 
deployment.apps "nginx-deployment" deleted
deployment.apps "tomcat-deployment" deleted
service "nginx-service" deleted
service "tomcat-service" deleted[root@k8s-master ingress-controller]# kubectl delete -f deploy.yaml 
namespace "ingress-nginx" deleted
serviceaccount "ingress-nginx" deleted
serviceaccount "ingress-nginx-admission" deleted
role.rbac.authorization.k8s.io "ingress-nginx" deleted
role.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
clusterrole.rbac.authorization.k8s.io "ingress-nginx" deleted
clusterrole.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
rolebinding.rbac.authorization.k8s.io "ingress-nginx" deleted
rolebinding.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
clusterrolebinding.rbac.authorization.k8s.io "ingress-nginx" deleted
clusterrolebinding.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
configmap "ingress-nginx-controller" deleted
service "ingress-nginx-controller" deleted
service "ingress-nginx-controller-admission" deleted
deployment.apps "ingress-nginx-controller" deleted
job.batch "ingress-nginx-admission-create" deleted
job.batch "ingress-nginx-admission-patch" deleted
ingressclass.networking.k8s.io "nginx" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "ingress-nginx-admission" deleted
[root@k8s-master ingress-controller]# kubectl get pods -n ingress-nginx
No resources found in ingress-nginx namespace.
[root@k8s-master ingress-controller]# kubectl get pods -n dev
NAME      READY   STATUS    RESTARTS   AGE
centos2   1/1     Running   0          101m
[root@k8s-master ingress-controller]# cp deploy.yaml deploy.yaml_bak
# Goal: run a controller on every selected node, i.e. as many controllers as there are nodes
[root@k8s-master ingress-controller]# vi deploy.yaml
[root@k8s-master ingress-controller]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxname: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginxnamespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admissionnamespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginxnamespace: ingress-nginx
rules:
- apiGroups:- ""resources:- namespacesverbs:- get
- apiGroups:- ""resources:- configmaps- pods- secrets- endpointsverbs:- get- list- watch
- apiGroups:- ""resources:- servicesverbs:- get- list- watch
- apiGroups:- networking.k8s.ioresources:- ingressesverbs:- get- list- watch
- apiGroups:- networking.k8s.ioresources:- ingresses/statusverbs:- update
- apiGroups:- networking.k8s.ioresources:- ingressclassesverbs:- get- list- watch
- apiGroups:- ""resourceNames:- ingress-controller-leaderresources:- configmapsverbs:- get- update
- apiGroups:- ""resources:- configmapsverbs:- create
- apiGroups:- coordination.k8s.ioresourceNames:- ingress-controller-leaderresources:- leasesverbs:- get- update
- apiGroups:- coordination.k8s.ioresources:- leasesverbs:- create
- apiGroups:- ""resources:- eventsverbs:- create- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admissionnamespace: ingress-nginx
rules:
- apiGroups:- ""resources:- secretsverbs:- get- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx
rules:
- apiGroups:- ""resources:- configmaps- endpoints- nodes- pods- secrets- namespacesverbs:- list- watch
- apiGroups:- coordination.k8s.ioresources:- leasesverbs:- list- watch
- apiGroups:- ""resources:- nodesverbs:- get
- apiGroups:- ""resources:- servicesverbs:- get- list- watch
- apiGroups:- networking.k8s.ioresources:- ingressesverbs:- get- list- watch
- apiGroups:- ""resources:- eventsverbs:- create- patch
- apiGroups:- networking.k8s.ioresources:- ingresses/statusverbs:- update
- apiGroups:- networking.k8s.ioresources:- ingressclassesverbs:- get- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission
rules:
- apiGroups:- admissionregistration.k8s.ioresources:- validatingwebhookconfigurationsverbs:- get- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginxnamespace: ingress-nginx
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: ingress-nginx
subjects:
- kind: ServiceAccountname: ingress-nginxnamespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admissionnamespace: ingress-nginx
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: ingress-nginx-admission
subjects:
- kind: ServiceAccountname: ingress-nginx-admissionnamespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: ingress-nginx
subjects:
- kind: ServiceAccountname: ingress-nginxnamespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: ingress-nginx-admission
subjects:
- kind: ServiceAccountname: ingress-nginx-admissionnamespace: ingress-nginx
---
apiVersion: v1
data:allow-snippet-annotations: "true"
kind: ConfigMap
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-controllernamespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-controllernamespace: ingress-nginx
spec:ipFamilies:- IPv4ipFamilyPolicy: SingleStackports:- appProtocol: httpname: httpport: 80protocol: TCPtargetPort: http- appProtocol: httpsname: httpsport: 443protocol: TCPtargetPort: httpsselector:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtype: NodePort
---
apiVersion: v1
kind: Service
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-controller-admissionnamespace: ingress-nginx
spec:ports:- appProtocol: httpsname: https-webhookport: 443targetPort: webhookselector:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtype: ClusterIP
---
apiVersion: apps/v1
kind: Deployment 
metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-controllernamespace: ingress-nginx
spec:replicas: 2minReadySeconds: 0revisionHistoryLimit: 10selector:matchLabels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtemplate:metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxspec:hostNetwork: truecontainers:- args:- /nginx-ingress-controller- --election-id=ingress-controller-leader- --controller-class=k8s.io/ingress-nginx- --ingress-class=nginx- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller- --validating-webhook=:8443- --validating-webhook-certificate=/usr/local/certificates/cert- --validating-webhook-key=/usr/local/certificates/keyenv:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: LD_PRELOADvalue: /usr/local/lib/libmimalloc.soimage: dyrnq/ingress-nginx-controller:v1.3.1imagePullPolicy: IfNotPresentlifecycle:preStop:exec:command:- /wait-shutdownlivenessProbe:failureThreshold: 5httpGet:path: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10successThreshold: 1timeoutSeconds: 1name: controllerports:- containerPort: 80name: httpprotocol: TCP- containerPort: 443name: httpsprotocol: TCP- containerPort: 8443name: webhookprotocol: TCPreadinessProbe:failureThreshold: 3httpGet:path: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10successThreshold: 1timeoutSeconds: 1resources:requests:cpu: 100mmemory: 90MisecurityContext:allowPrivilegeEscalation: truecapabilities:add:- NET_BIND_SERVICEdrop:- ALLrunAsUser: 101volumeMounts:- mountPath: /usr/local/certificates/name: webhook-certreadOnly: truednsPolicy: ClusterFirstnodeSelector:kubernetes.io/os: linuxapp: ingress-controllerserviceAccountName: ingress-nginxterminationGracePeriodSeconds: 300volumes:- name: webhook-certsecret:secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission-createnamespace: ingress-nginx
spec:template:metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission-createspec:containers:- args:- create- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc- --namespace=$(POD_NAMESPACE)- --secret-name=ingress-nginx-admissionenv:- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespaceimage: dyrnq/kube-webhook-certgen:v1.3.0imagePullPolicy: IfNotPresentname: createsecurityContext:allowPrivilegeEscalation: falsenodeSelector:kubernetes.io/os: linuxrestartPolicy: OnFailuresecurityContext:fsGroup: 2000runAsNonRoot: truerunAsUser: 2000serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission-patchnamespace: ingress-nginx
spec:template:metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.3.1name: ingress-nginx-admission-patchspec:containers:- args:- patch- --webhook-name=ingress-nginx-admission- --namespace=$(POD_NAMESPACE)- --patch-mutating=false- --secret-name=ingress-nginx-admission- --patch-failure-policy=Failenv:- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespaceimage: dyrnq/kube-webhook-certgen:v1.3.0 imagePullPolicy: IfNotPresentname: patchsecurityContext:allowPrivilegeEscalation: falsenodeSelector:kubernetes.io/os: linuxrestartPolicy: OnFailuresecurityContext:fsGroup: 2000runAsNonRoot: truerunAsUser: 2000serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None

# Make sure the controller runs with two replicas, and add a node label for the nodeSelector (deploy.yaml edits below)
493       nodeSelector:
494         kubernetes.io/os: linux
495         app: ingress-controller    ### added
496       serviceAccountName: ingress-nginx
497       terminationGracePeriodSeconds: 300
498       volumes:
499       - name: webhook-cert
500         secret:
501           secretName: ingress-nginx-admission
502 ---
503 apiVersion: batch/v1

[root@k8s-master ingress-controller]# kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
[root@k8s-master ingress-controller]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-ld9h5       0/1     Completed   0          24s
ingress-nginx-admission-patch-q85pl        0/1     Completed   1          24s
ingress-nginx-controller-67cc8878b-mb4mw   0/1     Pending     0          24s
ingress-nginx-controller-67cc8878b-smfjk   0/1     Pending     0          24s

# The controller Pods stay Pending because no node carries the label required by the nodeSelector; once the nodes are labeled they can be scheduled.
# Label node1: the first controller Pod starts running.
[root@k8s-master ingress-controller]# kubectl label node k8s-node1 app=ingress-controller
node/k8s-node1 labeled
[root@k8s-master ingress-controller]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS              RESTARTS   AGE
ingress-nginx-admission-create-ld9h5       0/1     Completed           0          51s
ingress-nginx-admission-patch-q85pl        0/1     Completed           1          51s
ingress-nginx-controller-67cc8878b-mb4mw   0/1     ContainerCreating   0          51s
ingress-nginx-controller-67cc8878b-smfjk   0/1     Pending             0          51s
[root@k8s-master ingress-controller]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-ld9h5       0/1     Completed   0          52s
ingress-nginx-admission-patch-q85pl        0/1     Completed   1          52s
ingress-nginx-controller-67cc8878b-mb4mw   0/1     Running     0          52s
ingress-nginx-controller-67cc8878b-smfjk   0/1     Pending     0          52s

# Label node2: the second controller Pod starts running.
[root@k8s-master ingress-controller]# kubectl label node k8s-node2 app=ingress-controller
node/k8s-node2 labeled
[root@k8s-master ingress-controller]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS              RESTARTS   AGE
ingress-nginx-admission-create-ld9h5       0/1     Completed           0          60s
ingress-nginx-admission-patch-q85pl        0/1     Completed           1          60s
ingress-nginx-controller-67cc8878b-mb4mw   0/1     Running             0          60s
ingress-nginx-controller-67cc8878b-smfjk   0/1     ContainerCreating   0          60s
[root@k8s-master ingress-controller]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-ld9h5       0/1     Completed   0          61s
ingress-nginx-admission-patch-q85pl        0/1     Completed   1          61s
ingress-nginx-controller-67cc8878b-mb4mw   0/1     Running     0          61s
ingress-nginx-controller-67cc8878b-smfjk   0/1     Running     0          61s

[root@k8s-master ingress-controller]# kubectl get nodes --show-labels
NAME         STATUS   ROLES           AGE    VERSION   LABELS
k8s-master   Ready    control-plane   139d   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1    Ready    <none>          139d   v1.27.0   app=ingress-controller,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    Ready    <none>          139d   v1.27.0   app=ingress-controller,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
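
Both worker nodes now carry app=ingress-controller, which is exactly what the added nodeSelector requires. A quick filtered check (a minimal addition, not part of the original transcript):

# Only nodes that satisfy the controller's nodeSelector should be listed
kubectl get nodes -l app=ingress-controller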
[root@k8s-master ingress-controller]# cat tomcat-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat-pod
  template:
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-jre10-slim
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: dev
spec:
  selector:
    app: tomcat-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
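
Both Services above are headless (clusterIP: None), so the cluster DNS answers with the Pod IPs directly instead of a single virtual IP. A minimal sketch to confirm that once the Pods are up (the throwaway pod name and the busybox:1.28 image tag are assumptions, not from the original text):

# Resolve the headless service from a temporary pod; expect one A record per backing Pod
kubectl run dns-check -n dev --image=busybox:1.28 --rm -it --restart=Never \
  -- nslookup nginx-service.dev.svc.cluster.local

# Apply tomcat-nginx.yaml to create the Deployments and the headless Services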
[root@k8s-master ingress-controller]# kubectl apply -f tomcat-nginx.yaml 
deployment.apps/nginx-deployment created
deployment.apps/tomcat-deployment created
service/nginx-service created
service/tomcat-service created
[root@k8s-master ingress-controller]# kubectl get pods -n dev
NAME                                READY   STATUS    RESTARTS      AGE
centos2                             1/1     Running   8 (13m ago)   20h
nginx-deployment-6fdb77cb4-gqx4j    1/1     Running   0             26s
nginx-deployment-6fdb77cb4-v6jzp    1/1     Running   0             26s
nginx-deployment-6fdb77cb4-wbvd8    1/1     Running   0             26s
tomcat-deployment-96986c7bd-4hvgl   1/1     Running   0             26s
tomcat-deployment-96986c7bd-6x9d9   1/1     Running   0             26s
tomcat-deployment-96986c7bd-xpw4w   1/1     Running   0             26s

[root@k8s-master ingress-controller]# cat ingress-http.yaml
apiVersion: networking.k8s.io/v1 
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.wjmz.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.wjmz.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080

# Create the Ingress resource
[root@k8s-master ingress-controller]# kubectl apply -f ingress-http.yaml 
ingress.networking.k8s.io/ingress-http created
[root@k8s-master ingress-controller]# kubectl get -f ingress-http.yaml 
NAME           CLASS   HOSTS                            ADDRESS                         PORTS   AGE
ingress-http   nginx   nginx.wjmz.com,tomcat.wjmz.com   192.168.207.11,192.168.207.12   80      13s
[root@k8s-master ingress-controller]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.214.223    <none>        80:32269/TCP,443:32665/TCP   63m
ingress-nginx-controller-admission   ClusterIP   10.100.165.211   <none>        443/TCP                      63m
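
The simplest end-to-end test drives a request through the controller with the expected Host header. A hedged sketch, assuming the node IP 192.168.207.11 and the NodePort 32269 shown above (because the controller runs with hostNetwork: true, port 80 on the node also works):

# Hit the nginx backend through the Ingress by setting the Host header
curl -H "Host: nginx.wjmz.com" http://192.168.207.11:32269/
# Same idea for the tomcat backend
curl -H "Host: tomcat.wjmz.com" http://192.168.207.11:32269/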

Pinning the NodePorts
The ports allocated above (32269/32665) are random. To make them predictable, add fixed nodePort values to the ingress-nginx-controller Service in deploy.yaml, then delete and re-apply the manifest.

[root@k8s-master ingress-controller]# kubectl delete -f deploy.yaml
namespace "ingress-nginx" deleted
serviceaccount "ingress-nginx" deleted
serviceaccount "ingress-nginx-admission" deleted
role.rbac.authorization.k8s.io "ingress-nginx" deleted
role.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
clusterrole.rbac.authorization.k8s.io "ingress-nginx" deleted
clusterrole.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
rolebinding.rbac.authorization.k8s.io "ingress-nginx" deleted
rolebinding.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
clusterrolebinding.rbac.authorization.k8s.io "ingress-nginx" deleted
clusterrolebinding.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
configmap "ingress-nginx-controller" deleted
service "ingress-nginx-controller" deleted
service "ingress-nginx-controller-admission" deleted
deployment.apps "ingress-nginx-controller" deleted
job.batch "ingress-nginx-admission-create" deleted
job.batch "ingress-nginx-admission-patch" deleted
ingressclass.networking.k8s.io "nginx" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "ingress-nginx-admission" deleted
[root@k8s-master ingress-controller]# vi deploy.yaml
345 spec:
346   ipFamilies:
347   - IPv4
348   ipFamilyPolicy: SingleStack
349   ports:
350   - appProtocol: http
351     name: http
352     port: 80
353     nodePort: 30080    ### added
354     protocol: TCP
355     targetPort: http
356   - appProtocol: https
357     name: https
358     port: 443
359     nodePort: 30443    ### added
360     protocol: TCP
361     targetPort: https
362   selector:
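
This walkthrough deletes and re-applies the whole manifest (next command). As a general alternative, the same fixed ports could also be patched onto a live Service without recreating anything; a hedged sketch, assuming the http entry is listed first and the https entry second, as in the manifest above:

# Patch fixed nodePorts onto an existing Service (nodePort must stay within 30000-32767)
kubectl -n ingress-nginx patch svc ingress-nginx-controller --type=json \
  -p '[{"op":"replace","path":"/spec/ports/0/nodePort","value":30080},
       {"op":"replace","path":"/spec/ports/1/nodePort","value":30443}]'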
[root@k8s-master ingress-controller]# kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured

[root@k8s-master ingress-controller]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.99.212.97   <none>        80:30080/TCP,443:30443/TCP   11s
ingress-nginx-controller-admission   ClusterIP   10.102.91.41   <none>        443/TCP                      11s

# Inspect the controller Service
[root@k8s-master ingress-controller]# kubectl describe svc -n ingress-nginx ingress-nginx-controller
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.3.1
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.212.97
IPs:                      10.99.212.97
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30080/TCP
Endpoints:                192.168.207.11:80,192.168.207.12:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30443/TCP
Endpoints:                192.168.207.11:443,192.168.207.12:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
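
With the nodePorts pinned, access no longer depends on whatever port Kubernetes happened to allocate. A quick test from any machine that can reach the nodes (a hedged sketch; the hosts entry and the node IP are assumptions based on the environment above):

# Map the test domains to one of the ingress nodes (run on the client machine)
echo "192.168.207.11 nginx.wjmz.com tomcat.wjmz.com" >> /etc/hosts

# HTTP traffic now always enters on the fixed NodePort 30080
curl http://nginx.wjmz.com:30080/
curl http://tomcat.wjmz.com:30080/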


6.5.1.3 Https代理

Create a self-signed certificate

[root@k8s-master ~]# mkdir crt
[root@k8s-master ~]# cd crt/
[root@k8s-master crt]# ls
[root@k8s-master crt]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=HB/L=WH/O=nginx/CN=wjmz.com"
Generating a RSA private key
................+++++
.......+++++
writing new private key to 'tls.key'
-----
[root@k8s-master crt]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created
[root@k8s-master crt]# kubectl get secret
NAME         TYPE                DATA   AGE
tls-secret   kubernetes.io/tls   2      5s
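
Before wiring the secret into an Ingress, it can be worth sanity-checking what was generated; both commands below are standard tooling, shown only as a suggested check:

# Confirm the subject and validity window of the self-signed certificate
openssl x509 -in tls.crt -noout -subject -dates

# Confirm the secret carries both tls.crt and tls.key
kubectl describe secret tls-secret

Note that the Ingress created below lives in the dev namespace, and an Ingress can only reference a TLS secret in its own namespace; creating the secret with -n dev avoids the controller silently falling back to its built-in fake certificate.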
[root@k8s-master ingress-controller]# vi ingress-https.yaml 
[root@k8s-master ingress-controller]# cat ingress-https.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.wjmz.com
    - tomcat.wjmz.com
    secretName: tls-secret
  rules:
  - host: nginx.wjmz.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.wjmz.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080

# Delete the earlier HTTP Ingress, then create the HTTPS one
[root@k8s-master ingress-controller]# kubectl delete -f ingress-http.yaml 
ingress.networking.k8s.io "ingress-http" deleted
[root@k8s-master ingress-controller]# kubectl apply -f ingress-https.yaml 
ingress.networking.k8s.io/ingress-https created

# View the details
[root@k8s-master ingress-controller]# kubectl get ingress -n dev
NAME            CLASS   HOSTS                            ADDRESS                         PORTS     AGE
ingress-https   nginx   nginx.wjmz.com,tomcat.wjmz.com   192.168.207.11,192.168.207.12   80, 443   52s
[root@k8s-master ingress-controller]# kubectl describe ingress -n dev ingress-https
Name:             ingress-https
Labels:           <none>
Namespace:        dev
Address:          192.168.207.11,192.168.207.12
Ingress Class:    nginx
Default backend:  <default>
TLS:
  tls-secret terminates nginx.wjmz.com,tomcat.wjmz.com
Rules:
  Host             Path  Backends
  ----             ----  --------
  nginx.wjmz.com   /     nginx-service:80 (10.244.1.55:80,10.244.1.57:80,10.244.2.63:80)
  tomcat.wjmz.com  /     tomcat-service:8080 (10.244.1.54:8080,10.244.1.56:8080,10.244.2.62:8080)
Annotations:       <none>
Events:
  Type    Reason  Age               From                      Message
  ----    ------  ----              ----                      -------
  Normal  Sync    6s (x2 over 53s)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    6s (x2 over 53s)  nginx-ingress-controller  Scheduled for sync
[root@k8s-master ingress-controller]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.99.212.97   <none>        80:30080/TCP,443:30443/TCP   86m
ingress-nginx-controller-admission   ClusterIP   10.102.91.41   <none>        443/TCP                      86m
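
Since the certificate is self-signed, curl needs -k (or --insecure) to accept it. A minimal sketch of the HTTPS test, assuming the fixed NodePort 30443 and node IP 192.168.207.11 (--resolve avoids editing /etc/hosts):

# TLS-terminated request through the Ingress; -k skips verification of the self-signed certificate
curl -k --resolve nginx.wjmz.com:30443:192.168.207.11 https://nginx.wjmz.com:30443/
curl -k --resolve tomcat.wjmz.com:30443:192.168.207.11 https://tomcat.wjmz.com:30443/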

