[root@an-a01n01 ~]# anvil-provision-server --ci-test --name an-test-deploy1 --os centos-stream9 --cpu 4 --ram 4G --storage-group "Storage group 1" --storage-size 30G --install-media CentOS-Stream-9-latest-x86_64-dvd1.iso --driver-disc deploy1.iso
Saving the job details to create this server. Please wait a few moments.
Job Data:
====
server_name=an-test-deploy1
os=centos-stream9
cpu_cores=4
ram=4G
storage_group_uuid=65c1b27f-583f-4915-a019-96fd282760f5
storage_size=30G
install_iso=88423910-0e37-4cfa-b895-0b64496ae8a5
driver_iso=4e22c0ed-a3e3-4a27-be7c-bb4ef489d7d5
====
The job to create the new server has been registered as job: [138ca0d7-5b54-47f3-a847-6da502e6c019].
It should be provisioned in the next minute or two.
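Provisioning itself runs asynchronously once the job is registered. A minimal way to confirm the server actually came up, assuming anvil-daemon runs as the systemd unit of the same name and that the server ends up defined through libvirt (both are how Anvil! normally operates, but treat this as a sketch):

# Follow the daemon as it works through the provision job:
journalctl -f -u anvil-daemon

# When the job finishes, the new server should be defined in libvirt:
virsh list --all | grep an-test-deploy1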
[root@an-a01n01 ~]# anvil-manage-server-storage -vvv --log-secure --add 20G --disk vdb --server an-test-deploy1 --storage-group "Storage group 1" --confirm
Working with the server: [an-test-deploy1], UUID: [a803707e-aae2-4d69-aa20-9bdefdb7bffc]
Testing access to peer(s). Please be patient.
- Testing access to: [an-a01n02].
[ OK ] - Successfully connected to: [an-a01n02] via the network: [bcn1] and IP: [10.201.10.2].
- New drive target: [vdb], size: [20.00GiB], bus: [virtio], cache: [writeback], IO policy: [threads]
- Preparing to add the drive: [an-test-deploy1/1] using the storage group: [Storage group 1]...
- Creating the new local LV: [/dev/anvil-test-vg/an-test-deploy1_1]...
Done!
- Creating the new LV on the peer: [an-a01n02:/dev/anvil-test-vg/an-test-deploy1_1], via: [10.201.10.2 (bcn1)]
Done!
- Testing the updated DRBD resource config file to ensure the new volumes are cromulent...
Success!
- Writing out the updated DRBD config file.
- Copying the new resource file to our peers.
- Copying: [/etc/drbd.d//an-test-deploy1.res] to: [an-a01n02:[email protected]:/etc/drbd.d/] via: [10.201.10.2].
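At this point both nodes have the new backing LV and a copy of the updated resource file under /etc/drbd.d/. If you want to double-check the copied config on the peer before metadata is created, drbdadm refuses to dump a resource it cannot parse, so a dump doubles as validation (a sketch, using the peer IP from the log above):

# Parsing the config is itself the validation; errors indicate a bad .res file:
drbdadm dump an-test-deploy1
ssh [email protected] 'drbdadm dump an-test-deploy1'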
- Creating the replicated storage metadata on the new backing devices now.
- Creating the meta-data on the new local volume: [1]...
Done!
- Creating the meta-data on the peer: [an-a01n02:an-test-deploy1/1], via: [10.201.10.2 (bcn1)]
Warning!
[ Warning ] - When trying to create the peer: [an-a01n02]'s meta-data on: [an-test-deploy1/1]
[ Warning ] - using the command: [/usr/sbin/drbdadm --force create-md --max-peers=3 an-test-deploy1/1]
[ Warning ] - The return code: [9999] was received, expected '0'. Output, if any:
==] STDOUT [========
==] STDERR [========
The remote shell call: [/usr/sbin/drbdadm --force create-md --max-peers=3 an-test-deploy1/1
/usr/bin/echo return_code:$?] to: [[email protected]:22] failed with the error:
====
ssh slave failed: timed out
====
====================
We will try to proceed anyway.
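The timeout above means the tool could not confirm that the peer's metadata was actually created before deciding to proceed. It is worth re-running the exact command from the warning by hand on the peer (a sketch built from the command and IP shown in the log; a return code of 0, or a complaint that metadata already exists, would both tell you where things stand):

# First confirm SSH to the peer responds at all:
ssh [email protected] 'echo ssh-ok'

# Then retry the metadata creation there and capture the return code:
ssh [email protected] '/usr/sbin/drbdadm --force create-md --max-peers=3 an-test-deploy1/1; echo rc:$?'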
- Registered a job with job UUID: [ac47deb7-89a0-4ba8-820a-df85eb79b039] to reload the resource config on the host: [an-a01n02].
- Adjusting the local resource: [an-test-deploy1] to pick up the new config.
[ NOTE ] - If this hangs, make sure 'anvil-daemon' is running on the peers.
Updating our view of DRBD resources via scan-drbd.
Updating the view of DRBD on our peer: [an-a01n02] via: [10.201.10.2 (bcn1)].
- Waiting for all peers to connect the new volume...
- Peers are connected! Checking if the new volume requires initial sync.
- Initial sync required!
- Forcing primary locally...
Success!
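With the local side forced primary, volume 1 should show UpToDate here and sync toward the peer. To watch the replication state directly (assuming DRBD 9, which the Anvil! platform uses):

# Connection and per-volume disk states for the resource:
drbdadm status an-test-deploy1

# Or keep it on screen while the initial sync runs:
watch drbdadm status an-test-deploy1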
- Ready to add the new disk. Checking if the server is running...
- The server is running on this host, we'll attach the disk here.
- Adding the drive to the server directly...
- Reading the updated server definition
- Validating the updated definition
- Updating the stored definition and undefining the server now...
- Pushing the new definition to the database and other hosts.
Done!
[root@an-a01n01 ~]#
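After the final Done!, the new disk should be visible both from libvirt and on the backing storage of both nodes. A quick check, with all names taken from the transcript above:

# The new 'vdb' target should be listed for the guest:
virsh domblklist an-test-deploy1

# The backing LV should exist in the VG on both nodes:
lvs anvil-test-vg
ssh [email protected] 'lvs anvil-test-vg'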