In my last post I was trying to restore a demo Wordpress deployment using Velero. The deployment itself was nothing fancy and was exposed via a NodePort Service instead of an Ingress. The restored app would start as expected, but it was not accessible at the same NodeIP:NodePort combination as the initial deployment.
Here’s my Wordpress manifest (lovingly taken from https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/):
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:8.0
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: funkycloudmedina
            - name: MYSQL_DATABASE
              value: wordpress
            - name: MYSQL_USER
              value: wordpress
            - name: MYSQL_PASSWORD
              value: funkycloudmedina
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              value: funkycloudmedina
            - name: WORDPRESS_DB_USER
              value: wordpress
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
Notice the Service named wordpress is of type NodePort, but no nodePort is explicitly set, meaning Kubernetes will allocate a random port from the cluster's node port range (30000–32767 by default) when the Service is created.
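If I had wanted a stable port from the start, I could have pinned it in the spec. Here's a minimal sketch of the Service's spec section, assuming 31115 is the desired port (an explicit nodePort must fall within the cluster's node port range):

spec:
  type: NodePort
  ports:
    - port: 80
      # Explicitly pin the node port so it survives recreation.
      nodePort: 31115
  selector:
    app: wordpress
    tier: frontend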
After I applied the manifest, I checked which port the Service was listening on (kubectl get service <service-name>) and accessed Wordpress using the node IP and that node port. For example's sake, let's say it's 10.1.1.20:31115.
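The output looks something like this (values here are illustrative); the number after 80: in the PORT(S) column is the allocated node port:

$ kubectl get service wordpress
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
wordpress   NodePort   10.96.114.202   <none>        80:31115/TCP   1m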
Navigating to 10.1.1.20:31115, I was greeted with the Wordpress install and continued on about my day. Afterwards, I backed up the namespace with Velero, deleted it, then restored it.
After the restore, whenever I navigated to 10.1.1.20:31115 my restored Wordpress would never load ("Page could not be found"). It took me a few minutes to figure out that the restored Service was listening on a new port, not the original, because the Service was recreated as part of the restore. Since the Service did not have a nodePort hardcoded, another randomly selected port was allocated. OK, easy enough: just navigate to the new port at 10.1.1.20:30412. Wrong.
Navigating to 10.1.1.20:30412 resulted in an immediate redirect to 10.1.1.20:31115. Nothing there; there's no Service listening on that port anymore!
I spent an embarrassing amount of time trying to figure out why this was happening. Nothing in my manifest was configured to do this redirect.
Then it dawned on me: I can modify the Service to listen on whatever port I want. So I edited the Service (kubectl edit service <service-name>) and specified 31115 as its nodePort.
Huzzah, success.
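For what it's worth, the same change can be scripted rather than made interactively; here's a sketch using a strategic merge patch, assuming the Service has a single port entry keyed on port 80:

kubectl patch service wordpress \
  --patch '{"spec": {"ports": [{"port": 80, "nodePort": 31115}]}}'

Strategic merge patches on Service ports merge by the port field, so this updates only the nodePort of the existing port 80 entry.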
Now to figure out where and how this redirect was happening. A quick look at my Wordpress instance settings revealed that the site URL stored the exact original address, 10.1.1.20:31115. Whenever traffic hit the new Service at 10.1.1.20:30412, Wordpress immediately redirected it to 10.1.1.20:31115, the address recorded during the original installation.
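To see this for yourself: Wordpress keeps those values in the wp_options table of its database. A quick check from the mysql pod, using the credentials from the manifest above and assuming the default wp_ table prefix, would look something like:

kubectl exec deploy/wordpress-mysql -- \
  mysql -u wordpress -pfunkycloudmedina wordpress \
  -e "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl', 'home');"

Both siteurl and home hold http://10.1.1.20:31115, the address used at install time.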
The issue is not specifically a Kubernetes or Velero issue, or even a Wordpress one. It's just how Kubernetes NodePort Services and Wordpress work.
As mentioned, to regain access to your Wordpress instance you'll need to edit the Service after your Velero restore so it listens on the same port Wordpress was initially installed on.
To fix this permanently, your manifest should use an Ingress with certificates, and the Wordpress instance should be installed (or reconfigured) to use the hostname your Ingress responds to.
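Here's a minimal sketch of such an Ingress, assuming an ingress controller is already running in the cluster, and with wordpress.example.com and the wordpress-tls Secret as placeholders for your own hostname and certificate:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
spec:
  tls:
    - hosts:
        - wordpress.example.com
      # TLS certificate stored in a Secret (placeholder name).
      secretName: wordpress-tls
  rules:
    - host: wordpress.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80

With a stable hostname stored as the site URL, recreating the Service during a restore no longer changes the address Wordpress redirects to.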