
I would like to add an iSCSI volume to a pod as in this example. I have already prepared an iSCSI target on a Debian server and installed open-iscsi on all my worker nodes. I have also confirmed that I can mount the iSCSI target on a worker node with command-line tools (i.e. still outside Kubernetes). This works fine. For simplicity, there is no authentication (CHAP) in play yet, and there is already an ext4 file system present on the target.

I would now like Kubernetes 1.14 to mount the same iSCSI target into a pod with the following manifest:

---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-ro
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsivol
  volumes:
  - name: iscsivol
    iscsi:
      targetPortal: 1.2.3.4 # my target
      iqn: iqn.2019-04.my-domain.com:lun1
      lun: 0
      fsType: ext4
      readOnly: true

According to `kubectl describe pod` this works in the initial phase (`SuccessfulAttachVolume`), but then fails (`FailedMount`). The exact error message reads:

Warning  FailedMount ... Unable to mount volumes for pod "iscsipd_default(...)": timeout expired waiting for volumes to attach or mount for pod "default"/"iscsipd". list of unmounted volumes=[iscsivol]. list of unattached volumes=[iscsivol default-token-7bxnn]
Warning  FailedMount ... MountVolume.WaitForAttach failed for volume "iscsivol" : failed to get any path for iscsi disk, last err seen:
Could not attach disk: Timeout after 10s

How can I further diagnose and overcome this problem?

UPDATE In this related issue the solution consisted of using a numeric IP address for the target. However, this does not help in my case, since I am already using a targetPortal of the form 1.2.3.4 (have also tried both with and without port number 3260).

UPDATE Stopping iscsid.service and/or open-iscsi.service (as suggested here) did not make a difference either.

UPDATE The error apparently gets triggered in pkg/volume/iscsi/iscsi_util.go if waitForPathToExist(&devicePath, multipathDeviceTimeout, iscsiTransport) fails. However, what is strange is that when it is triggered the file at devicePath (/dev/disk/by-path/ip-...-iscsi-...-lun-...) does actually exist on the node.
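To compare what kubelet is waiting for against what is actually present on the node, the expected device path can be reconstructed by hand. This is only a sketch: it assumes the portal is 1.2.3.4:3260 and uses the IQN/LUN from the manifest above; the commented commands are the usual open-iscsi/systemd diagnostics.

```shell
# Sketch: reconstruct the device path that kubelet's waitForPathToExist polls for.
# Assumes portal 1.2.3.4:3260 and the IQN/LUN from the pod manifest above.
PORTAL="1.2.3.4:3260"
IQN="iqn.2019-04.my-domain.com:lun1"
LUN=0
DEV="/dev/disk/by-path/ip-${PORTAL}-iscsi-${IQN}-lun-${LUN}"
echo "$DEV"

# On the worker node, compare against reality and the kubelet's view:
# ls -l "$DEV"                          # does the node agree on this exact name?
# iscsiadm -m session -P 3              # active sessions and attached disks
# journalctl -u kubelet | grep -i iscsi # kubelet's side of the mount attempt
```

If the path printed here differs even slightly (e.g. with vs. without the `:3260` port suffix) from the symlink the kernel actually created, that mismatch alone can explain the timeout.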

UPDATE I have used this procedure to define a simple iSCSI target for these test purposes:

pvcreate /dev/sdb
vgcreate iscsi /dev/sdb
lvcreate -L 10G -n iscsi_1 iscsi
apt-get install tgt
cat >/etc/tgt/conf.d/iscsi_1.conf <<EOL
<target iqn.2019-04.my-domain.com:lun1>
  backing-store /dev/mapper/iscsi-iscsi_1
  initiator-address 5.6.7.8 # my cluster node #1
  ... # my cluster node #2, etc.
</target>
EOL
systemctl restart tgt
tgtadm --mode target --op show
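UPDATE For completeness, the `PersistentVolume`/`PersistentVolumeClaim` variant mentioned in the comments, which fails in exactly the same way, looked roughly like this (a minimal sketch; the object names and size are placeholders, the iSCSI parameters are the same as in the pod manifest above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsipd-pv        # hypothetical name
spec:
  capacity:
    storage: 10Gi         # matches the 10G logical volume above
  accessModes:
  - ReadOnlyMany
  iscsi:
    targetPortal: 1.2.3.4:3260
    iqn: iqn.2019-04.my-domain.com:lun1
    lun: 0
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsipd-pvc       # hypothetical name
spec:
  accessModes:
  - ReadOnlyMany
  storageClassName: ""    # bind to the pre-created PV, no dynamic provisioning
  resources:
    requests:
      storage: 10Gi
```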
rookie099
  • Have you checked your permissions / equivalent of security groups for the disk? – cookiedough Apr 30 '19 at 16:08
  • @cookiedough How would I do that? I currently can mount the target on the command line with `iscsiadm ... -login; mount /dev/sdc` without problems, only Kubernetes cannot mount it on the same node. – rookie099 May 02 '19 at 07:24
  • Hello @rookie099, could you share your PV and PVC manifests? Also please provide the output of `$ kubectl get pv`, `$ kubectl describe pv`, `$ kubectl get pvc` and `$ kubectl describe pvc`. The StorageClass might also be helpful: `$ kubectl get sc`. – PjoterS May 02 '19 at 13:08
  • @PjoterS Right now I do not use `PersistentVolume`/`PersistentVolumeClaim` (nor a `StorageClass`) but specify the volume directly inside the given pod manifest. I tried to start with the simplest-possible setup. – rookie099 May 02 '19 at 13:20
  • @PjoterS P.S. I've just tried an alternative version with `PersistentVolume`/`PersistentVolumeClaim`, but it fails in exactly the same way (as I had already suspected). – rookie099 May 02 '19 at 13:34
  • Now that you are using PV and PVC, can you share the manifest and the describe log? – cookiedough May 02 '19 at 14:46

2 Answers


This is probably due to an authentication issue with your iSCSI target.

Even if you don't use CHAP authentication yet, you still have to disable authentication explicitly. For example, if you use targetcli, you can run the commands below to disable it.

$ sudo targetcli
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute authentication=0 # will disable auth
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute generate_node_acls=1 # will force to use tpg1 auth mode by default

If this doesn't help, please share your iSCSI target configuration, or the guide that you followed.

clxoid
  • I find this hard to believe, because I can mount the target on the node's command line (outside Kubernetes) without problems. But I'll add my iSCSI target configuration to the question as you suggest. I could not try your specific recipe yet because I'm not using `targetcli` (to the best of my knowledge). – rookie099 May 07 '19 at 13:44
  • Yes, you can mount on the node's command line by adding `InitiatorName`, but this is also an auth method. When you try to mount from a pod, there is no parameter to specify `InitiatorName` in the pod configuration. That's why you need to disable the auth method, or use the CHAP method on the TPG. Did you at least try it? – clxoid May 07 '19 at 14:03
  • What exact path would I have to use for my example: e.g. `/iscsi/iqn.2019-04.my-domain.com/tpg1 set attribute authentication=0` produces "No such path". – rookie099 May 07 '19 at 14:26
  • If you mean targetcli: when you create a target under `/iscsi create`, it creates a default IQN target name, and that is the one path option you will have. You can follow this guide, disabling the auth method and so on: http://atodorov.org/blog/2015/04/07/how-to-configure-iscsi-target-on-red-hat-enterprise-linux-7/ – clxoid May 07 '19 at 15:55