I have 2 nodes spread across 2 zones, and I run my deployments twice as a failover mechanism. On these nodes I have 3 pods that need access to a disk. Everything works fine with a single node, but now both nodes must access the disk with read and write access, and I read that this is possible with a High Availability Hyperdisk. I have been trying to set this up by following this guide:
I changed the access mode from ReadWriteOnce to ReadWriteMany, since the guide says that is supported. However, when I try to mount the disk, I get this message in the events:
failed to provision volume with StorageClass “balanced-ha-storage”: rpc error: code = InvalidArgument desc = VolumeCapabilities is invalid: specified multi writer with mount access type
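For reference, my StorageClass is essentially the one from the guide (sketched here from memory, so the throughput/IOPS values are just the sample ones and may differ from my actual setup):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-ha-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced-high-availability
  provisioned-throughput-on-create: "250Mi"  # sample value from the guide
  provisioned-iops-on-create: "7000"         # sample value from the guide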
I can’t figure out how to get this done, any help would be much appreciated!
Hi djst3rios,
Based on the internal bug report, this is classified as a documentation issue. In the meantime, to successfully provision volumes in ReadWriteMany (RWX) mode, you must specify volumeMode: Block.
Here’s an example configuration:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Block # <<<<<<
  storageClassName: balanced-ha-storage
  resources:
    requests:
      storage: 20Gi
Additionally, since the Pods consume the volume in Block mode, you should use the volumeDevices configuration instead of volumeMounts. Here's a sample configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeDevices: # <<<<<<<<
        - devicePath: /dev/my-device
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: podpvc
          readOnly: false
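One thing to keep in mind: with volumeMode: Block the container is handed a raw block device at /dev/my-device rather than a mounted filesystem, so the application has to read and write the device directly, or you need to manage a filesystem on top of it yourself and coordinate access between the writers.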
Thank you for your response! I have one more question regarding this: does GKE actually support this kind of disk? It seems that multi-writer access requires a clustered file system such as OCFS2, VMFS, or GFS2, none of which seem to be supported by GKE right now.