
Setting Up a Kubernetes Cluster from Scratch

The previous article, "Setting Up a Kubernetes Cluster from Scratch (Part 4: Deploying the K8S Dashboard)", covered how to deploy the Dashboard. This article covers how to set up an Ingress so that Services in the K8S cluster can be accessed from outside.

What exactly is Ingress? There is plenty of material online, so feel free to dig in on your own. In short, it is a load-balancing component. Its main purpose is to solve the problem that, when a Service is exposed via NodePort, the Node IP can drift. Besides, if many services each expose a host port via NodePort, port management quickly becomes chaotic.

A better solution is to let external clients reach Services by domain name, without caring about Node IPs or ports. Then why not just use Nginx directly? Because in a K8S cluster, every time a service is added we would have to add another configuration block to Nginx by hand. That is repetitive manual work, and repetitive manual work is exactly what we should eliminate with tooling.

Ingress solves these problems. It consists of two components, the Ingress Controller and the Ingress resource:

  • The Ingress resource abstracts the Nginx configuration into an Ingress object; every time a new service is added, you only need to write a new Ingress yaml file.
  • The Ingress Controller translates newly added Ingress objects into Nginx configuration and makes it take effect.

Enough talk, let's get started.

Life is short, so don't reinvent the wheel: this walkthrough is based on the official deployment manifests; see the official documentation for details. The official docs ask you to run the following commands one by one:

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -

The yaml files above create the Namespace and ConfigMaps used by Ingress, as well as the default backend default-http-backend. Most importantly, since we built the cluster with kubeadm earlier, we must also run:

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -

This is because clusters created by kubeadm enable RBAC by default, so Ingress must also be granted the corresponding RBAC permissions.
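Whether RBAC is actually active on your cluster can be checked quickly before applying the RBAC manifests (a sketch; it must be run against a live cluster, and the exact output depends on your Kubernetes version):

```shell
# If RBAC is enabled, the rbac.authorization.k8s.io API group is served
# by the apiserver and shows up in the list of API versions.
kubectl api-versions | grep rbac.authorization.k8s.io
```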

However, if we follow the steps above verbatim, our Ingress will most likely not work. Instead, we need to wget all of the yaml files, make a few changes, and only then run kubectl apply -f to create the resources. Also note that some images referenced in these yaml files currently cannot be pulled from within mainland China, such as:

gcr.io/google_containers/defaultbackend:1.4
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0

I have downloaded them ahead of time; you can grab them here:

Link: https://pan.baidu.com/s/1N-bK9hI7JTZZB6AzmaT8PA  Password: 1a8a

After obtaining the image archives, run the following commands on every node to import them:

docker load < quay.io#kubernetes-ingress-controller#nginx-ingress-controller_0.14.0.tar
docker tag 452a96d81c30 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
docker load < gcr.io#google_containers#defaultbackend.tar
docker tag 452a96d81c30 gcr.io/google_containers/defaultbackend

As shown above, after importing an image, don't forget to tag it (using the image ID that docker images reports); otherwise its name shows up as <none>:

(screenshot: image.png)
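After loading and tagging, it is worth confirming on each node that both images are present under the names the yaml files expect (a sketch; run on a node with the images loaded):

```shell
# Both repositories should appear, with real names rather than <none>.
docker images | grep -E "nginx-ingress-controller|defaultbackend"
```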

Before going further, let's briefly walk through some of the key files.

The role of default-backend is this: if the requested domain does not match any rule, the request falls through to the default-http-backend Service, which simply returns 404:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

rbac.yaml handles Ingress's RBAC authorization. It creates the ServiceAccount, ClusterRole, Role, RoleBinding, and ClusterRoleBinding used by Ingress. These concepts were briefly introduced in the previous article, "Setting Up a Kubernetes Cluster from Scratch (Part 4: Deploying the K8S Dashboard)".

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

with-rbac.yaml is the heart of the Ingress deployment; it creates the ingress-controller. As mentioned earlier, the ingress-controller's job is to translate newly added Ingress objects into Nginx configuration.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            runAsNonRoot: false

如上,能够观察nginx-ingress-controller运维时传出了参数,分别为日前创设的default-backend-service以及configmap。

Note that the official with-rbac.yaml cannot be used as-is; we must make two changes.

Add the hostNetwork setting

As shown below, add hostNetwork: true above serviceAccountName:

spec:
  hostNetwork: true
  serviceAccountName: nginx-ingress-serviceaccount
  containers:
    - name: nginx-ingress-controller
      image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
      args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io

Setting hostNetwork: true is a direct way of putting the Pod on the host network. With it, the Ingress-controller shares the IP of its host k8s-node1 (192.168.56.101), and its port 80 is the host's port 80. We can then reach the Ingress-controller (which is really just nginx) directly at 192.168.56.101:80, and it forwards our requests to the appropriate backend.
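Once the controller Pod is running, the hostNetwork setup can be sanity-checked from any machine that can reach the node (the IP is from this article's environment; a sketch that needs the live cluster):

```shell
# nginx now listens on the node itself. A request to the node IP whose
# Host header matches no Ingress rule should fall through to
# default-http-backend and come back as a 404.
curl -i http://192.168.56.101/
```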

Add environment variables

Add the following environment variable to its env section:

env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: KUBERNETES_MASTER
    value: http://192.168.56.101:8080

Otherwise, the Pod fails after creation with an error like this:

[root@k8s-node1 ingress]# kubectl describe pod nginx-ingress-controller-9fbd7596d-rt9sf -n ingress-nginx
...(earlier output omitted)...
Events:
  Type     Reason                 Age                From                Message
  ----     ------                 ----               ----                -------
  Normal   Scheduled              30s                default-scheduler   Successfully assigned nginx-ingress-controller-9fbd7596d-rt9sf to k8s-node1
  Normal   SuccessfulMountVolume  30s                kubelet, k8s-node1  MountVolume.SetUp succeeded for volume "nginx-ingress-serviceaccount-token-lq2dt"
  Warning  BackOff                21s                kubelet, k8s-node1  Back-off restarting failed container
  Normal   Pulled                 11s (x3 over 29s)  kubelet, k8s-node1  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0" already present on machine
  Normal   Created                11s (x3 over 29s)  kubelet, k8s-node1  Created container
  Warning  Failed                 10s (x3 over 28s)  kubelet, k8s-node1  Error: failed to start container "nginx-ingress-controller": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/nginx-ingress-controller\": stat /nginx-ingress-controller: no such file or directory": unknown

After modifying with-rbac.yaml, run kubectl create -f on each of the yaml files to create the Ingress-controller:

(screenshot: image.png)

Once created successfully, it looks like this:

[root@k8s-node1 ingress]# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP              NODE
default-http-backend-5c6d95c48-pdjn9        1/1     Running   0          23s   192.168.36.81   k8s-node1
nginx-ingress-controller-547cd7d9cb-jmvpn   1/1     Running   0          8s    192.168.36.82   k8s-node1

With the ingress-controller in place, we can create our own Ingress objects. A Kibana service has been deployed in advance; let's create an Ingress for it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: default
spec:
  rules:
    - host: myk8s.com
      http:
        paths:
          - path: /
            backend:
              serviceName: kibana
              servicePort: 5601

Where:

  • host under rules must be a domain name, not an IP. It is the domain of the host where the Ingress-controller Pod runs, i.e. a domain that resolves to the Ingress-controller's IP.
  • path under paths is the mapped path. Mapping / means that a request to myk8s.com is forwarded to the kibana Service on port 5601.
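The host-based routing described above can be exercised without touching DNS at all, by setting the Host header explicitly (IP and domain are from this article's environment; a sketch that needs the live cluster):

```shell
# Matches the kibana-ingress rule, so nginx proxies to kibana:5601.
curl -i -H "Host: myk8s.com" http://192.168.56.101/

# An unknown host matches no rule and falls through to
# default-http-backend, returning 404.
curl -i -H "Host: nosuch.example.com" http://192.168.56.101/
```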

After creating it, check:

[root@k8s-node1 ingress]# kubectl get ingress -o wide
NAME             HOSTS       ADDRESS   PORTS   AGE
kibana-ingress   myk8s.com             80      6s

If we then run kubectl exec nginx-ingress-controller-5b79cbb5c6-2zr7f -it cat /etc/nginx/nginx.conf -n ingress-nginx, we can see the generated nginx configuration. It is fairly long, so skim as you see fit:

## start server myk8s.com
server {
    server_name myk8s.com;
    listen 80;
    listen [::]:80;
    set $proxy_upstream_name "-";

    location /kibana {
        log_by_lua_block {
        }
        port_in_redirect off;
        set $proxy_upstream_name "";
        set $namespace    "kube-system";
        set $ingress_name "dashboard-ingress";
        set $service_name "kibana";
        client_max_body_size "1m";
        proxy_set_header Host $best_http_host;
        # Pass the extracted client certificate to the backend
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $the_real_ip;
        proxy_set_header X-Forwarded-For $the_real_ip;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering "off";
        proxy_buffer_size "4k";
        proxy_buffers 4 "4k";
        proxy_request_buffering "on";
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
        proxy_next_upstream_tries 0;
        # No endpoints available for the request
        return 503;
    }

    location / {
        log_by_lua_block {
        }
        port_in_redirect off;
        set $proxy_upstream_name "";
        set $namespace    "default";
        set $ingress_name "kibana-ingress";
        set $service_name "kibana";
        client_max_body_size "1m";
        proxy_set_header Host $best_http_host;
        # Pass the extracted client certificate to the backend
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $the_real_ip;
        proxy_set_header X-Forwarded-For $the_real_ip;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering "off";
        proxy_buffer_size "4k";
        proxy_buffers 4 "4k";
        proxy_request_buffering "on";
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
        proxy_next_upstream_tries 0;
        # No endpoints available for the request
        return 503;
    }
}
## end server myk8s.com

First, on the host where the Ingress-controller Pod runs (here, k8s-node1), add the domain myk8s.com mentioned above to /etc/hosts:

192.168.56.101 myk8s.com

Besides that, if you want to access Kibana from a browser on your own Windows machine, you also need to add the same entry to C:\Windows\System32\drivers\etc\hosts. Once set, test on k8s-node1 and on the physical machine respectively:

(screenshots: image.png ×2)

On the Windows machine, visiting myk8s.com in Chrome is equivalent to visiting 192.168.56.101:80:

(screenshot: image.png)

Visiting a bogus path such as myk8s.com/abc returns the expected 404:

(screenshot: image.png)

At this point our Ingress setup is complete, and Services in the K8S cluster can be accessed from outside by domain name. If you are interested, try configuring TLS for the Ingress so that https services such as the Dashboard can be accessed too. The next article, "Setting Up a Kubernetes Cluster from Scratch (Part 5: Deploying a Redis Cluster on K8S)", is coming soon.
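As a starting point for the TLS experiment mentioned above, a self-signed certificate can be wired into the Kibana Ingress roughly like this. The secret name myk8s-tls is illustrative, and this is an untested sketch against the same environment, not part of the original walkthrough:

```shell
# Generate a self-signed certificate for myk8s.com and store it
# as a Kubernetes TLS secret in the default namespace.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=myk8s.com"
kubectl create secret tls myk8s-tls --key tls.key --cert tls.crt

# Reference the secret from the Ingress via spec.tls; the controller
# then serves myk8s.com over HTTPS on port 443 as well.
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: default
spec:
  tls:
    - hosts:
        - myk8s.com
      secretName: myk8s-tls
  rules:
    - host: myk8s.com
      http:
        paths:
          - path: /
            backend:
              serviceName: kibana
              servicePort: 5601
EOF
```

The browser will warn about the self-signed certificate; for a real setup you would use a certificate from a trusted CA instead.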

My knowledge is limited, so mistakes or omissions are inevitable; corrections are welcome in the comments.

Feel free to follow my WeChat public account:

(QR code image)
