
I want to create an Elasticsearch StatefulSet in Kubernetes on VirtualBox. Since I'm not using a cloud provider, I created two persistent volumes locally for the two replicas of my StatefulSet:

pv0:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-elk-0
  namespace: elk
  labels:
    type: local
spec:
  storageClassName: gp2
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/pv0"

pv1:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-elk-1
  namespace: elk
  labels:
    type: local
spec:
  storageClassName: gp2
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/pv1"

StatefulSet:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: elk
  labels:
    k8s-app: elasticsearch-logging
    version: v5.6.2
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v5.6.2
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v5.6.2
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: gcr.io/google-containers/elasticsearch:v5.6.2
        name: elasticsearch-logging
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        resources:
          limits:
            cpu: 0.1
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: gp2
      resources:
        requests:
          storage: 5Gi
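
One thing to double-check: serviceName: elasticsearch-logging assumes a headless Service with that name already exists in the elk namespace. The StatefulSet relies on it for stable per-pod DNS but does not create it. A minimal sketch of such a Service (it is not shown in the question, so treat it as an assumption):

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-logging
  namespace: elk
spec:
  clusterIP: None          # headless: gives each pod a stable DNS record
  selector:
    k8s-app: elasticsearch-logging
  ports:
  - port: 9200
    name: db
  - port: 9300
    name: transport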

It seems like the persistent volumes are correctly bound, but the pods are always in a crash loop and restart every time. Is it because of the use of the initContainer, or is something wrong with my YAML?
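
For what it's worth, the privileged init container running sysctl -w vm.max_map_count=262144 is the standard pattern for Elasticsearch on Kubernetes, so it is unlikely to be the cause by itself. To see which container is actually failing, the usual first steps look like this (a sketch; elasticsearch-logging-0 is the first StatefulSet ordinal):

kubectl -n elk get pods
kubectl -n elk describe pod elasticsearch-logging-0
# logs of the previous crashed run of the main container:
kubectl -n elk logs elasticsearch-logging-0 -c elasticsearch-logging --previous
# logs of the init container:
kubectl -n elk logs elasticsearch-logging-0 -c elasticsearch-logging-init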


  • Can you post the output of `kubectl get pv,pvc -n elk` as well? Did you manually define any PVC? Did you define a storage class? – Const Jun 26 '18 at 12:20
  • @Const I added a screenshot. – Yummel Jun 26 '18 at 12:56
  • Could you provide the kubelet logs? – Nick Rak Jun 26 '18 at 13:56
  • OK, that covers the PV and PVC, but what about the questions on manually defining the PVC and the storage class (gp2; I suspect there is no provisioner, since you said this is bare metal with no cloud)? Are those manifests a minimal complete example, or is there something else needed to replicate your setup? Also, what is your k8s version and what tool did you use to create the cluster? – Const Jun 27 '18 at 06:24
  • Sorry for the delay. I found something and it's working now: it seems like Elasticsearch couldn't communicate with fluentd, which caused the persistent volume failure. I just restarted fluentd first and then Elasticsearch. – Yummel Jun 27 '18 at 09:52
  • I use the latest version of K8s, 1.10, and I deployed it with kubeadm. Yes, I use Cloudwatt, but it doesn't provide PVs and I don't use another cloud provider for this. I only define static PVs locally. – Yummel Jun 27 '18 at 10:02
  • Everything seems OK now. It looks like it was a network communication error between my ELK pods. I scaled up the RAM of my k8s cluster and now it's fine. – Yummel Jun 28 '18 at 14:17

1 Answer


Add more RAM and scale up the cluster.
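
On the StatefulSet above, that could also mean giving the Elasticsearch container an explicit memory request/limit and a matching JVM heap, since the manifest only sets a CPU limit. A hedged sketch (the values are illustrative, and it assumes this image honors the standard ES_JAVA_OPTS variable):

        resources:
          requests:
            cpu: 100m
            memory: 1Gi
          limits:
            memory: 2Gi
        env:
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"   # keep the heap well under the container limit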
