
My Kubernetes pods keep crashing with "CrashLoopBackOff", but I can't find any logs


This is what I keep getting:

[root@centos-master ~]# kubectl get pods
NAME               READY     STATUS             RESTARTS   AGE
nfs-server-h6nw8   1/1       Running            0          1h
nfs-web-07rxz      0/1       CrashLoopBackOff   8          16m
nfs-web-fdr9h      0/1       CrashLoopBackOff   8          16m

Below is the output of `kubectl describe pods`:

Events:
  FirstSeen LastSeen    Count   From                SubobjectPath       Type        Reason      Message
  --------- --------    -----   ----                -------------       --------    ------      -------
  16m       16m     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned nfs-web-fdr9h to centos-minion-2
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Created     Created container with docker id 495fcbb06836
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Started     Started container with docker id 495fcbb06836
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Started     Started container with docker id d56f34ae4e8f
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Created     Created container with docker id d56f34ae4e8f
  16m       16m     2   {kubelet centos-minion-2}               Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "web" with CrashLoopBackOff: "Back-off 10s restarting failed container=web pod=nfs-web-fdr9h_default(461c937d-d870-11e6-98de-005056040cc2)"

I have two pods, nfs-web-07rxz and nfs-web-fdr9h, but if I run "kubectl logs nfs-web-07rxz", with or without the "-p" option, I don't see any logs for either pod.

[root@centos-master ~]# kubectl logs nfs-web-07rxz -p
[root@centos-master ~]# kubectl logs nfs-web-07rxz
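
When `kubectl logs` comes back empty, the termination state the kubelet records for the container is often the only remaining clue. A sketch of commands that may help here (pod names taken from the question; actual output depends on the cluster):

```
# Exit code and reason of the last terminated container appear under
# "Last State" in the describe output
kubectl describe pod nfs-web-07rxz

# The same information in machine-readable form, under
# status.containerStatuses[].lastState.terminated
kubectl get pod nfs-web-07rxz -o yaml
```

An exit code of 0 with no logs usually means the process simply finished, which fits the diagnosis in the answers below.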

This is my ReplicationController YAML file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
        ports:
          - name: web
            containerPort: 80
        securityContext:
          privileged: true

My Docker image was built from this simple Dockerfile:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y nfs-common
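
A likely explanation (an assumption, since the question does not show the image's full config): this Dockerfile defines no CMD or ENTRYPOINT, so the container falls back to the ubuntu base image's default command, `/bin/bash`, which exits immediately when no terminal is attached. This can be reproduced locally:

```
# Without a TTY the default /bin/bash exits at once -- the same thing
# Kubernetes sees, hence the crash loop
docker run ubuntu
echo $?

# An interactive run keeps the shell alive, which can make the image
# look fine locally while still crashing under Kubernetes
docker run -it ubuntu
```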

I am running my Kubernetes cluster on CentOS 1611, kube version:

[root@centos-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

If I run the Docker image via "docker run", it runs without any problem; it only crashes under Kubernetes.

Can someone help me debug this, given that I can't see any logs?



1> Steve Sloka..:

As @Sukumar commented, you need to have your Dockerfile run a command, or have your ReplicationController specify one.

The pod is crashing because it starts up and then immediately exits, so Kubernetes restarts it and the cycle continues.
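
One way to apply this to the ReplicationController above (a sketch, assuming nginx is the process the container is meant to run): give the container an explicit foreground command so PID 1 does not exit.

```yaml
      containers:
      - name: web
        image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
        # Keep nginx in the foreground so the container's main process
        # does not exit after startup
        command: ["nginx", "-g", "daemon off;"]
```

The equivalent fix in the Dockerfile itself would be adding `CMD ["nginx", "-g", "daemon off;"]` after the install steps.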



2> user128364..:
kubectl -n <namespace-name> describe pod <pod-name>

kubectl -n <namespace-name> logs -p <pod-name>


While these commands may (or may not) solve the problem, a good answer should always explain how the problem is solved.

3> hmacias..:

I needed to keep a pod running for subsequent kubectl exec calls, and as the comments above pointed out, my pod was being killed by my k8s cluster because it had finished running all of its tasks. I managed to keep my pod running by kicking it off with a command that would not stop on its own:

kubectl run YOUR_POD_NAME -n YOUR_NAMESPACE --image SOME_PUBLIC_IMAGE:latest --command tailf /dev/null


`tailf` did not work for me, but this did (on Alpine Linux): `--command /usr/bin/tail -- -f /dev/null`
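
Other long-running commands serve the same keep-alive purpose; which one is available depends on the image (`sleep infinity`, for example, needs GNU coreutils, while busybox's `sleep` only accepts a number). Hypothetical pod names, as a sketch:

```
# Keep a throwaway debug pod alive for an hour
kubectl run debug-pod --image busybox --restart=Never -- sleep 3600

# On images with GNU coreutils, sleep can run indefinitely
kubectl run debug-pod --image ubuntu --restart=Never -- sleep infinity
```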