issue sync between Postgres-operator and patroni #2751

Open
fahedouch opened this issue Sep 6, 2024 · 0 comments

Comments

fahedouch (Contributor) commented Sep 6, 2024

Please answer some short questions which should help us to understand your problem / question better:

  • Which image of the operator are you using? e.g. ghcr.io/zalando/postgres-operator:v1.12.2
  • Where do you run it - cloud or metal? Kubernetes or OpenShift? OVH Cloud
  • Are you running Postgres Operator in production? yes
  • Type of issue? Bug report

I'm experiencing synchronization problems within my PostgreSQL cluster managed by the Zalando Postgres Operator. The cluster comprises two instances: pgsql-p3m01ob6-0 as the leader and pgsql-p3m01ob6-1 as the replica. When trying to increase the instances' persistentVolume size, a race occurs between the postgres-operator controller and Patroni, which causes the update to fail.

Steps to reproduce the issue:

1 - Trigger a volume size increase on the postgresqls.acid.zalan.do resource.
2 - The replica pgsql-p3m01ob6-1's persistentVolume is resized and a rolling update is performed to mount the newly resized volume.
3 - Once the rolling update of the replica is done, and before Patroni has rejoined it to the cluster, Patroni tries to switch the leader pgsql-p3m01ob6-0 over to the replica in order to perform its rolling update and mount its resized volume. Unfortunately, the switchover fails because the replica has not yet caught up with the primary node, with the message "no switchover candidate found".

The issue does not happen in a cluster of 3 instances, which is expected because there is always at least one replica available for switchover.

Solution:

Introduce an exponential backoff retry when getting the SwitchoverCandidate. I noticed there is a //TODO for this, so I will try to deal with this issue in the coming days.
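For illustration, here is a minimal sketch in Go of what such a retry could look like. The getSwitchoverCandidate function, the error value, the pod name, and the retry/delay parameters are all hypothetical placeholders used only to demonstrate the backoff pattern; they are not the operator's actual API.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errNoCandidate stands in for the "no switchover candidate found" condition
// reported when all replicas are still catching up. (Hypothetical placeholder.)
var errNoCandidate = errors.New("no switchover candidate found")

// getSwitchoverCandidate is a hypothetical stand-in for looking up a replica
// that is healthy enough to be promoted; here it simply fails the first few
// attempts to simulate a replica that has not caught up yet.
func getSwitchoverCandidate(attempt int) (string, error) {
	if attempt < 3 {
		return "", errNoCandidate
	}
	return "pgsql-p3m01ob6-1", nil
}

// getSwitchoverCandidateWithRetry retries the candidate lookup with exponential
// backoff instead of failing the rolling update on the first "no candidate" error.
func getSwitchoverCandidateWithRetry(maxRetries int, initialDelay time.Duration) (string, error) {
	delay := initialDelay
	for attempt := 0; attempt <= maxRetries; attempt++ {
		candidate, err := getSwitchoverCandidate(attempt)
		if err == nil {
			return candidate, nil
		}
		if !errors.Is(err, errNoCandidate) {
			return "", err // unrelated error: do not retry
		}
		if attempt == maxRetries {
			break
		}
		fmt.Printf("attempt %d: %v, retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // exponential backoff
	}
	return "", fmt.Errorf("no switchover candidate after %d retries", maxRetries)
}

func main() {
	candidate, err := getSwitchoverCandidateWithRetry(5, 2*time.Second)
	if err != nil {
		fmt.Println("switchover aborted:", err)
		return
	}
	fmt.Println("switching over to", candidate)
}
```

With a bounded retry like this, the operator would wait for the freshly restarted replica to catch up before giving up on the switchover, rather than failing the whole update on the first "no switchover candidate found".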
