
I have problems connecting a volume via iSCSI from Kubernetes. When I try with iscsiadm from a worker node, it works. This is what I get from kubectl describe pod:

Normal   Scheduled               <unknown>             default-scheduler        Successfully assigned default/iscsipd to k8s-worker-2
Normal   SuccessfulAttachVolume  4m2s                  attachdetach-controller  AttachVolume.Attach succeeded for volume "iscsipd-rw"
Warning  FailedMount             119s                  kubelet, k8s-worker-2    Unable to attach or mount volumes: unmounted volumes=[iscsipd-rw], unattached volumes=[iscsipd-rw default-token-d5glz]: timed out waiting for the condition
Warning  FailedMount             105s (x9 over 3m54s)  kubelet, k8s-worker-2    MountVolume.WaitForAttach failed for volume "iscsipd-rw" : failed to get any path for iscsi disk, last err seen:iscsi: failed to attach disk: Error: iscsiadm: No records found(exit status 21)

I'm just using the iscsi.yaml example from kubernetes.io!

---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-rw
  volumes:
  - name: iscsipd-rw
    iscsi:
      targetPortal: 192.168.34.32:3260
      iqn: iqn.2020-07.int.example:sql
      lun: 0
      fsType: ext4
      readOnly: true
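
From what I can tell, exit status 21 from iscsiadm means "no records found", i.e. the iscsiadm call made by the kubelet found no node record matching the volume. A way to check this on the worker (just a diagnostic sketch, using the portal and IQN from the spec above):

sudo iscsiadm -m node
sudo iscsiadm -m node -p 192.168.34.32:3260 -T iqn.2020-07.int.example:sql

The first command lists every node record open-iscsi knows about on that worker; the second prints the record for the exact portal/IQN pair the pod requests, if one exists.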

Open-iscsi is installed on all worker nodes (there are just two of them).

● iscsid.service - iSCSI initiator daemon (iscsid)
   Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-07-03 10:24:26 UTC; 4 days ago
     Docs: man:iscsid(8)
  Process: 20507 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
  Process: 20497 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
 Main PID: 20514 (iscsid)
    Tasks: 2 (limit: 4660)
   CGroup: /system.slice/iscsid.service
           ├─20509 /sbin/iscsid
           └─20514 /sbin/iscsid

The iSCSI target is created on an IBM Storwize V7000, without CHAP.

I tried to connect with iscsiadm from a worker node, and it works.

sudo iscsiadm -m discovery -t sendtargets -p 192.168.34.32
192.168.34.32:3260,1 iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1
192.168.34.34:3260,1 iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1

sudo iscsiadm -m node --login
Logging in to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.32,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.34,3260] (multiple)
Login to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.32,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.34,3260] successful.

Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disklabel type: dos
Disk identifier: 0x5b3d0a3a

Device     Boot Start       End   Sectors  Size Id Type
/dev/sdb1        2048 209715199 209713152  100G 83 Linux

Is anyone facing the same problem?

markoc
  • Welcome! Are these .yaml files the ones you're supposed to use for this task? If so, consider also filing a bug with Kubernetes. Also, if you can clarify what you tried some more, it can help us diagnose your problem. – Riley Jul 06 '20 at 10:22
  • Hi, and thanks! :) No, they are not. I tried to deploy a SQL container and got the same problem, "No records found (exit status 21)", so I just tried something simpler, and the problem was the same. When I use iscsiadm directly from the nodes, it works. Sorry for the format! :( "Jul 6 10:29:52 k8s-worker-2 kubelet[1059]: E0706 10:29:52.468632 1059 iscsi_util.go:420] iscsi: failed to get any path for iscsi disk, last err seen: Jul 6 10:29:52 k8s-worker-2 kubelet[1059]: iscsi: failed to attach disk: Error: iscsiadm:" – markoc Jul 06 '20 at 10:49

1 Answer


Remember not to use a hostname for the target; use the IP. For some reason, if the target is a hostname, it fails with an error about requesting a duplicate session, but if the target is an IP, it works fine. I now have multiple iSCSI targets mounted in various pods, and I am absolutely ecstatic.
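
In the pod spec, that means keeping targetPortal as a plain IP address, as in this fragment (values taken from the question):

  iscsi:
    targetPortal: 192.168.34.32:3260   # IP address, not a DNS name
    iqn: iqn.2020-07.int.example:sql
    lun: 0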

You may also have an authentication issue with your iSCSI target.

If you don't use CHAP authentication yet, you still have to explicitly disable authentication on the target. For example, if you use targetcli, you can run the commands below to disable it.

$ sudo targetcli
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute authentication=0 # disable CHAP authentication on the TPG
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute generate_node_acls=1 # auto-generate node ACLs so initiators use the TPG-wide settings
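
If, on the other hand, your target does require CHAP, the Kubernetes iSCSI volume can take the credentials from a Secret of type kubernetes.io/iscsi-chap. A minimal sketch (the secret name and credentials here are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: chap-secret                        # placeholder name
type: kubernetes.io/iscsi-chap
stringData:
  node.session.auth.username: myuser       # placeholder credentials
  node.session.auth.password: mypassword

Then reference it from the volume definition:

  iscsi:
    # ... targetPortal, iqn, lun as before ...
    chapAuthSession: true
    secretRef:
      name: chap-secret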

If this doesn't help, please share your iSCSI target configuration or the guide that you followed.

It is also important to check that all of your nodes have the open-iscsi package installed.
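
For example, on Ubuntu/Debian nodes you can verify it with something like this (adjust for your distribution):

dpkg -l | grep open-iscsi    # package is installed
systemctl is-active iscsid   # initiator daemon is running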

Take a look at: kubernetes-iSCSI, volume-failed-iscsi-disk, iscsi-into-container-fails.

Malgorzata
  • Hi, I just edited the question. Yes, I used the IP, and open-iscsi is installed. I'm using Ubuntu 18.04. I managed to connect the volume directly from the nodes without CHAP. If it helps, I installed the Kubernetes cluster with kubeadm! – markoc Jul 07 '20 at 12:27
  • Did you disable authentication as I mentioned in my answer? Can you share the guide that you followed? – Malgorzata Jul 08 '20 at 08:55
  • Hi, I'm not using targetcli. The iSCSI target is created on an IBM Storwize V7000, without CHAP. I managed to connect from a worker node (Ubuntu 18.04, open-iscsi) to the target without using CHAP (you can see the output in the question). I also tried from Windows, and it works. The question is why the pod doesn't get it... I can't see that the problem is authentication or the iSCSI target, at least for now :) It's more that the pod doesn't see the target nodes (/etc/iscsi/node is empty)? – markoc Jul 08 '20 at 11:17
  • Can you paste all this extra information into the post in a proper format, including the details from the comment directly below your post? – Malgorzata Aug 04 '20 at 14:18