I have been experimenting with minikube to build a demo application with three services. The idea is to have a web UI that communicates with the other services. Each service will be written in a different language: Node.js, Python, and Go.

I created three Docker images, one per application, and tested the code; they basically expose very simple REST endpoints. I then deployed them with minikube. Here is my current deployment YAML file:
---
apiVersion: v1
kind: Namespace
metadata:
  name: ngci
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-gateway
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-gateway
    spec:
      containers:
      - env:
        - name: VCSA_MANAGER
          value: http://vcsa-manager-service:7070
        name: web-gateway
        image: silvam11/web-gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /status
            port: 8080
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: web-gateway-service
  namespace: ngci
spec:
  selector:
    app: web-gateway
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8080
    # Port forward to inside the pod
    #targetPort did not work with nodePort, why?
    #targetPort: 9090
    # Port accessible outside cluster
    nodePort: 30001
    #name: grpc
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vcsa-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vcsa-manager
    spec:
      containers:
      - name: vcsa-manager
        image: silvam11/vcsa-manager
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: vcsa-manager-service
  namespace: ngci
spec:
  selector:
    app: vcsa-manager
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 7070
    # Port forward to inside the pod
    #targetPort did not work with nodePort, why?
    targetPort: 9090
    # Port accessible outside cluster
    #nodePort: 30001
    #name: grpc
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: repo-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: repo-manager
    spec:
      containers:
      - name: repo-manager
        image: silvam11/repo-manager
        imagePullPolicy: Never
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: repo-manager-service
  namespace: ngci
spec:
  selector:
    app: repo-manager
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 9090
    # Port forward to inside the pod
    #targetPort did not work with nodePort, why?
    #targetPort: 9090
    # Port accessible outside cluster
    #nodePort: 30001
    #name: grpc
As you can see, I created the services there, but only the web gateway is defined with type LoadBalancer. It provides two endpoints. One, /status, lets me test that the service is up, running, and reachable.

The second endpoint, /user, communicates with another k8s service. The code is simple:
// POST /user handler in the web-gateway service (Express + the `request` package)
app.post('/user', (req, res) => {
  console.log("/user called.");
  console.log("/user req.body : " + req.body);

  if (!req || !req.body) {
    var errorMsg = "Invalid argument sent";
    console.log(errorMsg);
    return res.status(500).send(errorMsg);
  }

  // The target URL comes from the deployment's environment
  console.log("calling " + process.env.VCSA_MANAGER);
  const options = {
    url: process.env.VCSA_MANAGER,
    method: 'GET',
    headers: { 'Accept': 'application/json' }
  };

  request(options, function(err, resDoc, body) {
    console.log("callback : " + body);
    if (err) {
      console.log("ERROR: " + err);
      return res.send(err);
    }
    console.log("statusCode : " + resDoc.statusCode);
    if (resDoc.statusCode != 200) {
      console.log("ERROR code: " + resDoc.statusCode);
      return res.status(500).send(resDoc.statusCode);
    }
    return res.send({ "ok": body });
  });
});
The main idea of this snippet is to use the environment variable process.env.VCSA_MANAGER to send a request to the other service. That variable is defined in my k8s deployment YAML file as http://vcsa-manager-service:7070.
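For reference, this is roughly how I trigger the handler from outside the cluster; the node IP comes from minikube, 30001 is the nodePort defined above, and the JSON body here is just a placeholder:

curl -X POST -H "Content-Type: application/json" \
  -d '{"name": "test"}' \
  http://$(minikube ip):30001/user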
The problem is that the request to vcsa-manager-service returns a connection error. At first I assumed it was a DNS issue, but it seems the web-gateway pod can resolve the name:
kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- ping vcsa-manager-service
PING vcsa-manager-service.ngci.svc.cluster.local (10.99.242.121): 56 data bytes
The ping command from the web-gateway pod resolves the DNS correctly. The IP is the right one, as shown here:
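Note that ping only really proves DNS resolution: a Service's ClusterIP is virtual, and kube-proxy forwards only the TCP/UDP ports declared on the Service, so ICMP does not exercise the actual data path. A TCP-level check from the pod would be something like this (assuming wget is available in the image):

kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- wget -qO- http://vcsa-manager-service:7070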
kubectl get svc -n ngci
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
repo-manager-service   ClusterIP      10.102.194.179   <none>        9090/TCP         35m
vcsa-manager-service   ClusterIP      10.99.242.121    <none>        7070/TCP         35m
web-gateway-service    LoadBalancer   10.98.128.210    <pending>     8080:30001/TCP   35m
Also, as suggested, here is the description of the pods:
kubectl describe pods -n ngci

Name:           repo-manager-6cf98f5b54-pd2ht
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:54 +0100
Labels:         app=repo-manager
                pod-template-hash=2795491610
Annotations:    <none>
Status:         Running
IP:             172.17.0.10
Controlled By:  ReplicaSet/repo-manager-6cf98f5b54
Containers:
  repo-manager:
    Container ID:   docker://d2d54e42604323c8a6552b3de6e173e5c71eeba80598bfc126fbc03cae93d261
    Image:          silvam11/repo-manager
    Image ID:       docker://sha256:dc6dcbb1562cdd5f434f86696ce09db46c7ff5907b991d23dae08b2d9ed53a8f
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:49 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 09 May 2018 17:53:56 +0100
      Finished:     Wed, 09 May 2018 18:31:24 +0100
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age  From               Message
  ----    ------                 ---  ----               -------
  Normal  Scheduled              16h  default-scheduler  Successfully assigned repo-manager-6cf98f5b54-pd2ht to minikube
  Normal  SuccessfulMountVolume  16h  kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  Pulled                 16h  kubelet, minikube  Container image "silvam11/repo-manager" already present on machine
  Normal  Created                16h  kubelet, minikube  Created container
  Normal  Started                16h  kubelet, minikube  Started container
  Normal  SuccessfulMountVolume  3m   kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  SandboxChanged         3m   kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled                 3m   kubelet, minikube  Container image "silvam11/repo-manager" already present on machine
  Normal  Created                3m   kubelet, minikube  Created container
  Normal  Started                3m   kubelet, minikube  Started container

Name:           vcsa-manager-8696b44dff-mzq5q
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:54 +0100
Labels:         app=vcsa-manager
                pod-template-hash=4252600899
Annotations:    <none>
Status:         Running
IP:             172.17.0.14
Controlled By:  ReplicaSet/vcsa-manager-8696b44dff
Containers:
  vcsa-manager:
    Container ID:   docker://3e19fd8ca21a678e18eda3cb246708d10e3f1929a31859f0bb347b3461761b53
    Image:          silvam11/vcsa-manager
    Image ID:       docker://sha256:1a9cd03166dafceaee22586385ecda1c6ad3ed095b498eeb96771500092b526e
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:54 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 09 May 2018 17:53:56 +0100
      Finished:     Wed, 09 May 2018 18:31:15 +0100
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age  From               Message
  ----    ------                 ---  ----               -------
  Normal  Scheduled              16h  default-scheduler  Successfully assigned vcsa-manager-8696b44dff-mzq5q to minikube
  Normal  SuccessfulMountVolume  16h  kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  Pulled                 16h  kubelet, minikube  Container image "silvam11/vcsa-manager" already present on machine
  Normal  Created                16h  kubelet, minikube  Created container
  Normal  Started                16h  kubelet, minikube  Started container
  Normal  SuccessfulMountVolume  3m   kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  SandboxChanged         3m   kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled                 3m   kubelet, minikube  Container image "silvam11/vcsa-manager" already present on machine
  Normal  Created                3m   kubelet, minikube  Created container
  Normal  Started                3m   kubelet, minikube  Started container

Name:           web-gateway-7b4689bff9-rvbbn
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:55 +0100
Labels:         app=web-gateway
                pod-template-hash=3602456995
Annotations:    <none>
Status:         Running
IP:             172.17.0.12
Controlled By:  ReplicaSet/web-gateway-7b4689bff9
Containers:
  web-gateway:
    Container ID:   docker://677fbcbc053c57e4aa24c66d7f27d3e9910bc3dbb5fda4c1cdf5f99a67dfbcc3
    Image:          silvam11/web-gateway
    Image ID:       docker://sha256:b80fb05c087934447c93c958ccef5edb08b7c046fea81430819823cc382337dd
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:54 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 09 May 2018 17:53:57 +0100
      Finished:     Wed, 09 May 2018 18:31:16 +0100
    Ready:          True
    Restart Count:  1
    Readiness:      http-get http://:8080/status delay=0s timeout=1s period=5s #success=1 #failure=3
    Environment:
      VCSA_MANAGER:  http://vcsa-manager-service:7070
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age              From               Message
  ----     ------                 ---              ----               -------
  Normal   Scheduled              16h              default-scheduler  Successfully assigned web-gateway-7b4689bff9-rvbbn to minikube
  Normal   SuccessfulMountVolume  16h              kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal   Pulled                 16h              kubelet, minikube  Container image "silvam11/web-gateway" already present on machine
  Normal   Created                16h              kubelet, minikube  Created container
  Normal   Started                16h              kubelet, minikube  Started container
  Warning  Unhealthy              16h              kubelet, minikube  Readiness probe failed: Get http://172.17.0.13:8080/status: dial tcp 172.17.0.13:8080: getsockopt: connection refused
  Normal   SuccessfulMountVolume  3m               kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal   SandboxChanged         3m               kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled                 3m               kubelet, minikube  Container image "silvam11/web-gateway" already present on machine
  Normal   Created                3m               kubelet, minikube  Created container
  Normal   Started                3m               kubelet, minikube  Started container
  Warning  Unhealthy              3m (x3 over 3m)  kubelet, minikube  Readiness probe failed: Get http://172.17.0.12:8080/status: dial tcp 172.17.0.12:8080: getsockopt: connection refused
Here are the pods in the ngci namespace:
silvam11@ubuntu:~$ kubectl get pods -n ngci
NAME                            READY     STATUS    RESTARTS   AGE
repo-manager-6cf98f5b54-pd2ht   1/1       Running   1          16h
vcsa-manager-8696b44dff-mzq5q   1/1       Running   1          16h
web-gateway-7b4689bff9-rvbbn    1/1       Running   1          16h
What am I missing here? Is it a firewall issue?

Mauro
You have not configured the port numbers correctly.

First, vcsa-manager is exposed on port 8080, yet you map vcsa-manager-service to targetPort 9090; the same kind of mismatch applies to repo-manager. In the other services you commented out targetPort, and when targetPort is omitted it defaults to the value of port, so repo-manager-service sends traffic to port 9090 even though its container listens on 8000. You should map each service to the port its container actually exposes.
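To recap the semantics with a minimal sketch (the service name here is hypothetical, not part of your config): port is what cluster-internal clients connect to via the service DNS name, targetPort is the container port the traffic is forwarded to, and nodePort is the port exposed on the node for NodePort/LoadBalancer services.

apiVersion: v1
kind: Service
metadata:
  name: example-service   # hypothetical, for illustration only
spec:
  selector:
    app: example
  ports:
  - protocol: "TCP"
    port: 7070        # reached as http://example-service:7070 inside the cluster
    targetPort: 8080  # must match the pod's containerPort; defaults to `port` when omitted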
The fixed configuration looks like this:
---
apiVersion: v1
kind: Namespace
metadata:
  name: ngci
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-gateway
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-gateway
    spec:
      containers:
      - env:
        - name: VCSA_MANAGER
          value: http://vcsa-manager-service:7070
        name: web-gateway
        image: silvam11/web-gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /status
            port: 8080
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: web-gateway-service
  namespace: ngci
spec:
  selector:
    app: web-gateway
  ports:
  - protocol: "TCP"
    port: 8080
    targetPort: 8080
    nodePort: 30001
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vcsa-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vcsa-manager
    spec:
      containers:
      - name: vcsa-manager
        image: silvam11/vcsa-manager
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: vcsa-manager-service
  namespace: ngci
spec:
  selector:
    app: vcsa-manager
  ports:
  - protocol: "TCP"
    port: 7070
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: repo-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: repo-manager
    spec:
      containers:
      - name: repo-manager
        image: silvam11/repo-manager
        imagePullPolicy: Never
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: repo-manager-service
  namespace: ngci
spec:
  selector:
    app: repo-manager
  ports:
  - protocol: "TCP"
    port: 9090
    targetPort: 8000
I just fixed all the ports in your configuration.
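After applying the corrected manifests (assuming they live in a single file called app.yaml; adjust the name to whatever you use), you can verify that each service now points at the right container port by checking its endpoints:

kubectl apply -f app.yaml

kubectl get endpoints -n ngci
# ENDPOINTS should now list <pod-ip>:8080 for web-gateway and vcsa-manager,
# and <pod-ip>:8000 for repo-manager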