CPO enables the use of in-place upgrades, which make it possible to upgrade a cluster to a new PostgreSQL major version. For this purpose, `pg_upgrade` is used in the background.
Note that an in-place upgrade causes both a pod restart in the form of a rolling update and a service interruption of the cluster during the actual execution of the upgrade.
- Pod restart: replace all pods via a rolling update so that the new environment variable `PGVERSION` contains the version you want to upgrade to.
- Checks: the new `PGVERSION` must be higher than the previously used one, the maintenance mode of the cluster must be deactivated, and the replicas should not have a high replication lag.
- Use `initdb` to prepare a new data directory (`data_new`) based on the new `PGVERSION`.
- Check the upgrade possibility with `pg_upgrade --check`.
If any of these steps fails, a cleanup is performed.
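The compatibility check in the preparation phase corresponds to a plain `pg_upgrade --check` run. A rough sketch of such an invocation (the binary and data directory paths below are illustrative assumptions, not the operator's actual layout):

```shell
# Dry-run compatibility check only -- no data is modified.
# All paths are illustrative assumptions, not the operator's actual layout.
pg_upgrade --check \
  --old-bindir=/usr/lib/postgresql/16/bin \
  --new-bindir=/usr/lib/postgresql/17/bin \
  --old-datadir=/home/postgres/pgdata/data \
  --new-datadir=/home/postgres/pgdata/data_new
```

If the check reports incompatibilities, the new `data_new` directory can simply be discarded, since no data has been touched yet.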
- Remove dependencies that can cause problems, for example the extensions `pg_stat_statements` and `pgaudit`.
- Activate the maintenance mode of the cluster.
- Shut down PostgreSQL cleanly.
- Check `pg_controldata` for the checkpoint position and wait until all replicas have applied the latest checkpoint location.
- Start `rsyncd` on port `5432`.
- Call `pg_upgrade -k` to start the upgrade.

If this step fails, a rollback is performed; if it succeeds, the point of no return is reached.
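The wait-for-checkpoint step above compares write-ahead log positions (LSNs) as reported by `pg_controldata` (e.g. `Latest checkpoint location: 0/3000060`). As a minimal sketch, such an LSN string can be converted to a number so that two positions can be compared; the helper name `lsn_to_int` is made up for illustration:

```shell
# Convert an LSN such as "0/3000060" (as printed by pg_controldata)
# into a plain integer so two positions can be compared numerically.
# lsn_to_int is a hypothetical helper, not actual operator code.
lsn_to_int() {
  local hi=${1%%/*} lo=${1##*/}
  # The high part counts 4 GiB segments; both halves are hexadecimal.
  echo $(( 16#$hi * 4294967296 + 16#$lo ))
}

# A replica has caught up once its checkpoint location reaches the primary's:
if [ "$(lsn_to_int 0/3000060)" -ge "$(lsn_to_int 0/3000028)" ]; then
  echo "replica has applied the latest checkpoint"
fi
```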
- Rename the directories: `data` -> `data_old` and `data_new` -> `data`.
- Update the Patroni configuration (`postgres.yml`).
- Issue a checkpoint on every replica and trigger `rsync` on the replicas.
- Wait for the replicas to complete the `rsync` (timeout: 300 seconds).
- Stop `rsyncd` on the primary and remove the initialize key from the DCS, because it is based on the old system identifier.
- Start Patroni on the primary and start PostgreSQL locally.
- Reset custom statistics, warm up the memory, and run `ANALYZE` in stages in separate threads.
- Wait for every replica to become ready.
- Disable the maintenance mode of the cluster.
- Restore custom statistics, analyze those tables, and restore the objects dropped while preparing the upgrade.
- Drop the directory `data_old`.
- Trigger a new backup.

If the upgrade has to be rolled back:
- Stop `rsyncd` if it is still running.
- Disable the maintenance mode of the cluster.
- Drop the directory `data_new`.
```yaml
spec:
  postgresql:
    version: "17"
```
To trigger an in-place upgrade, you just have to increase the parameter `spec.postgresql.version`. If you choose a valid version number, the operator will start the procedure described above.
```shell
kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
  '{"spec":{"postgresql":{"version":"17"}}}'
```
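Whether the manifest change was accepted can then be verified by reading the field back, for example with `jsonpath`; the cluster name `cluster-1` is taken from the example above:

```shell
# Read back the requested major version from the cluster manifest.
kubectl get postgresqls.cpo.opensource.cybertec.at cluster-1 \
  -o jsonpath='{.spec.postgresql.version}'
```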
When cloning, the new cluster manifest must specify a higher version number than the source cluster; the new cluster is created from a base backup. Depending on the cluster size, the downtime can be considerable in this case, as write operations in the database should be stopped and all WAL files should be archived before cloning is started. Therefore, use cloning only to test major version upgrades and to check the compatibility of your app with a Postgres server of a higher version.
In this scenario, the major version upgrade can then be run manually by a user from within the primary pod. Exec into the container and run:
```shell
python3 /scripts/inplace_upgrade.py N
```
where `N` is the number of members of your cluster (see `numberOfInstances`). The upgrade is usually fast, well under one minute for most databases.
Note that the changes become irreversible once `pg_upgrade` is called.
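After the upgrade has finished, the running server version can be checked from within the primary pod; the pod name `cluster-1-0` and the user `postgres` are assumptions based on the example cluster:

```shell
# Confirm the new major version on the running primary.
kubectl exec -it cluster-1-0 -- psql -U postgres -c "SHOW server_version;"
```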