
Fix the ceph verification steps after the node replacement process to get the correct node names from the ceph osd status #11248

Open
yitzhak12 opened this issue Jan 28, 2025 · 0 comments
See the failure here: https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/795/27665/1364638/1364664/log. In the output below, 'ceph osd status' returns an empty 'host name' for every OSD, so the verification step cannot match the new OSD node name and emits a warning:

2025-01-02 10:15:08 03:15:06 - MainThread - ocs_ci.ocs.node - INFO - Ceph osd status: {'OSDs': [{'host name': '', 'id': 0, 'kb available': 0, 'kb used': 0, 'read byte rate': 0, 'read ops rate': 0, 'state': ['exists', 'up'], 'write byte rate': 0, 'write ops rate': 0}, {'host name': '', 'id': 1, 'kb available': 0, 'kb used': 0, 'read byte rate': 0, 'read ops rate': 0, 'state': ['exists', 'up'], 'write byte rate': 0, 'write ops rate': 0}, {'host name': '', 'id': 2, 'kb available': 0, 'kb used': 0, 'read byte rate': 0, 'read ops rate': 0, 'state': ['exists', 'up'], 'write byte rate': 0, 'write ops rate': 0}]}
2025-01-02 10:15:08 03:15:06 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-storage get Pod -n openshift-storage --selector=app=rook-ceph-osd -o yaml
2025-01-02 10:15:08 03:15:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-storage get Pod rook-ceph-osd-0-7c497c5f4-8k5cn -n openshift-storage -o yaml
2025-01-02 10:15:08 03:15:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get node -o yaml
2025-01-02 10:15:08 03:15:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-storage get Pod rook-ceph-osd-1-558ff9bc84-fnmcl -n openshift-storage -o yaml
2025-01-02 10:15:08 03:15:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get node -o yaml
2025-01-02 10:15:08 03:15:08 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-storage get Pod rook-ceph-osd-2-86755c5d8d-456sl -n openshift-storage -o yaml
2025-01-02 10:15:08 03:15:08 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get node -o yaml
2025-01-02 10:15:08 03:15:08 - MainThread - ocs_ci.ocs.node - INFO - osd node names: ['compute-2', 'compute-0', 'compute-1']
2025-01-02 10:15:08 03:15:08 - MainThread - ocs_ci.ocs.node - INFO - New osd node name is: compute-2
2025-01-02 10:15:08 03:15:08 - MainThread - ocs_ci.ocs.node - INFO - Node names from ceph osd status: ['', '', '']
2025-01-02 10:15:08 03:15:08 - MainThread - ocs_ci.ocs.node - WARNING - new osd node name not found in 'ceph osd status' output
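Since 'ceph osd status' can report empty host names, the verification step could derive the OSD-to-node mapping from a source that is always populated instead. Below is a minimal sketch (not the existing ocs-ci helper) of two such options: reading spec.nodeName from the rook-ceph-osd pods, or reading the hostname field from 'ceph osd metadata' via the toolbox pod. The namespace, selector, and toolbox pod name are assumptions based on a default ODF deployment; KUBECONFIG is assumed to be set and 'oc' to be on PATH.

import json
import subprocess


def osd_node_names_from_pods(namespace="openshift-storage"):
    """Map each rook-ceph-osd pod to the node it is scheduled on (spec.nodeName)."""
    out = subprocess.check_output(
        [
            "oc", "-n", namespace, "get", "pod",
            "--selector=app=rook-ceph-osd",
            "-o",
            "jsonpath={range .items[*]}{.metadata.name} {.spec.nodeName}{'\\n'}{end}",
        ],
        text=True,
    )
    # Each line is "<pod-name> <node-name>"
    return dict(line.split() for line in out.splitlines() if line.strip())


def osd_node_names_from_metadata(toolbox_pod, namespace="openshift-storage"):
    """Read the hostname field from 'ceph osd metadata', which is populated
    even when 'ceph osd status' shows an empty 'host name' column."""
    out = subprocess.check_output(
        ["oc", "-n", namespace, "rsh", toolbox_pod,
         "ceph", "osd", "metadata", "--format", "json"],
        text=True,
    )
    return {f"osd.{m['id']}": m.get("hostname", "") for m in json.loads(out)}

With either mapping, the check "new osd node name found in the output" can be done against values that are guaranteed to be non-empty, rather than against the 'host name' column of 'ceph osd status'.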
