I have an issue with the mysql/mysql-server:8.0 image (arm64) deployed on my home k3s node. When I don't use a PV, the instance works fine. However, when I use a persistent volume and claim, the liveness/readiness probes can no longer reach the instance, and I can't log in as root anymore.
Config map:
resource "kubernetes_manifest" "sql_config_map" {
  manifest = {
    "apiVersion" = "v1"
    "data" = {
      "my.cnf" = <<-EOT
        [mysqld]
        default-authentication-plugin=mysql_native_password
        skip-host-cache
        skip-name-resolve
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        secure-file-priv=/var/lib/mysql-files
        user=mysql
        pid-file=/var/run/mysqld/mysqld.pid
      EOT
    }
    "kind" = "ConfigMap"
    "metadata" = {
      "labels" = {
        "app" = local.db_name
      }
      "name" = "my-cnf"
      "namespace" = local.namespace_name
    }
  }
}
DB Deployment:
resource "kubernetes_manifest" "ghost_db" {
  depends_on = [kubernetes_persistent_volume_claim.ghost_db, kubernetes_secret.secrets]
  manifest = {
    "apiVersion" = "apps/v1"
    "kind" = "Deployment"
    "metadata" = {
      "labels" = {
        "app" = local.app_name
      }
      "name" = local.db_name
      "namespace" = local.namespace_name
    }
    "spec" = {
      "replicas" = 1
      "selector" = {
        "matchLabels" = {
          "app" = local.db_name
        }
      }
      "template" = {
        "metadata" = {
          "labels" = {
            "app" = local.db_name
          }
        }
        "spec" = {
          "containers" = [
            {
              "args" = [
                "--default-authentication-plugin=mysql_native_password",
              ]
              "env" = [
                {
                  "name" = "MYSQL_ROOT_PASSWORD"
                  "valueFrom" = {
                    "secretKeyRef" = {
                      "key" = "MYSQL_ROOT_PASSWORD"
                      "name" = "${local.db_name}-mysql"
                    }
                  }
                },
                {
                  "name" = "MYSQL_DATABASE"
                  "value" = local.db_name
                },
                {
                  "name" = "MYSQL_USER"
                  "value" = local.db_user
                },
                {
                  "name" = "MYSQL_PASSWORD"
                  "valueFrom" = {
                    "secretKeyRef" = {
                      "key" = "MYSQL_PASSWORD"
                      "name" = "${local.db_name}-mysql"
                    }
                  }
                },
              ]
              "image" = "mysql/mysql-server:8.0"
              "imagePullPolicy" = "IfNotPresent"
              "livenessProbe" = {
                "exec" = {
                  "command" = [
                    "sh",
                    "-c",
                    "mysqladmin status -uroot -p$MYSQL_ROOT_PASSWORD",
                  ]
                }
                "failureThreshold" = 3
                "initialDelaySeconds" = 30
                "periodSeconds" = 10
                "successThreshold" = 1
                "timeoutSeconds" = 1
              }
              "name" = local.db_name
              "ports" = [
                {
                  "containerPort" = 3306
                  "name" = local.db_name
                },
              ]
              "readinessProbe" = {
                "exec" = {
                  "command" = [
                    "sh",
                    "-c",
                    "mysqladmin status -uroot -p$MYSQL_ROOT_PASSWORD",
                  ]
                }
                "failureThreshold" = 3
                "initialDelaySeconds" = 30
                "periodSeconds" = 10
                "successThreshold" = 1
                "timeoutSeconds" = 1
              }
              "volumeMounts" = [
                {
                  "mountPath" = "/var/lib/mysql"
                  "name" = "${local.db_name}-persistent-storage"
                },
                {
                  "mountPath" = "/etc/my.cnf"
                  "name" = "my-cnf"
                  "subPath" = "my.cnf"
                },
              ]
            },
          ]
          # "initContainers" = [
          #   {
          #     "command" = [
          #       "sh",
          #       "-c",
          #       "chown -R 1001:1001 /var/lib/mysql"
          #     ]
          #     "image" = "busybox:latest"
          #     "imagePullPolicy" = "Always"
          #     "name" = "volume-permissions"
          #     "resources" = {}
          #     "securityContext" = {
          #       "runAsUser" = 0
          #     }
          #     "terminationMessagePath" = "/dev/termination-log"
          #     "terminationMessagePolicy" = "File"
          #     "volumeMounts" = [
          #       {
          #         "mountPath" = "/var/lib/mysql"
          #         "name" = "${local.db_name}-persistent-storage"
          #       },
          #     ]
          #   },
          # ]
          "hostname" = local.db_name
          "subdomain" = local.subdomain
          "nodeSelector" = {
            "kubernetes.io/hostname" = local.nodeSelector_hostname
          }
          "securityContext" = {
            "fsGroup" = 1001
            "runAsUser" = 1001
          }
          "serviceAccountName" = "default"
          "volumes" = [
            {
              "name" = "${local.db_name}-persistent-storage"
              "persistentVolumeClaim" = {
                "claimName" = local.db_name
              }
            },
            {
              "name" = "my-cnf"
              "configMap" = {
                "name" = "my-cnf"
              }
            },
          ]
        }
      }
    }
  }
}
I tried using an init container to change the permissions, as suggested in "Mysql container not starting up on Kubernetes" (see the commented-out initContainers block above), but I hit the same issue:
Error:
bash-4.4$ mysqladmin status -uroot -p$MYSQL_ROOT_PASSWORD
mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Access denied for user 'root'@'localhost' (using password: YES)'
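(As an aside, the "Using a password on the command line" warning from the probe can be avoided by passing the password through the client's MYSQL_PWD environment variable instead; a sketch of the same livenessProbe block rewritten that way, using `mysqladmin ping` as the lightweight health check:)

```hcl
"livenessProbe" = {
  "exec" = {
    "command" = [
      "sh",
      "-c",
      # MYSQL_PWD is read by the mysql client programs, keeping the
      # password out of the process argument list
      "MYSQL_PWD=$MYSQL_ROOT_PASSWORD mysqladmin ping",
    ]
  }
}
```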
MySQL Container Logs:
[Entrypoint] MySQL Docker Image 8.0.26-1.2.4-server
[Entrypoint] Starting MySQL 8.0.26-1.2.4-server
2021-09-27T09:41:12.025060Z 0 [Warning] [MY-010143] [Server] Ignoring user change to '1001' because the user was set to 'mysql' earlier on the command line
2021-09-27T09:41:12.026132Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 1
2021-09-27T09:41:12.028501Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root
2021-09-27T09:41:12.033575Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-09-27T09:41:12.194605Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-09-27T09:41:12.340795Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-09-27T09:41:12.340934Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-09-27T09:41:12.341892Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-09-27T09:41:12.342070Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-09-27T09:41:12.364915Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2021-09-27T09:41:12.366981Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server - GPL.
On the node, if I remove the data directory and re-deploy the deployment, it works again. It just doesn't seem to come back up against an already-initialized PV. The use case is moving to another node and re-attaching the existing data disk.
Any suggestions? I know people use the Bitnami Helm chart with "volumePermissions.enabled" set, but I can't use that chart because Bitnami doesn't publish arm64 images yet.
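For completeness, this is roughly what the init-container approach looks like when enabled (trimmed from the commented-out block above). Note that 1001 matches the pod securityContext here, but it's a Bitnami convention; the mysql user in the mysql/mysql-server image may well have a different UID (check with `id mysql` inside a running container), so treat the IDs as assumptions to verify rather than known-correct values:

```hcl
"initContainers" = [
  {
    "name"  = "volume-permissions"
    "image" = "busybox:latest"
    "command" = [
      "sh",
      "-c",
      # chown the PV contents to the UID:GID that mysqld actually runs as;
      # 1001:1001 is an assumption matching the pod securityContext above
      "chown -R 1001:1001 /var/lib/mysql",
    ]
    # must run as root to be allowed to chown files owned by other users
    "securityContext" = {
      "runAsUser" = 0
    }
    "volumeMounts" = [
      {
        "mountPath" = "/var/lib/mysql"
        "name"      = "${local.db_name}-persistent-storage"
      },
    ]
  },
]
```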