ptable template ID changed during upgrade from 1.8.2 -> 1.9, breaks unattended provisioning
I upgraded from version 1.8.2 to 1.9 earlier this morning. No problems were encountered during the upgrade or during the app restart afterwards.
Performing a test provision after the upgrade resulted in this error:
After some digging around, I found that my ptable template ID had somehow changed from 12 to 67. The OS had been created in Foreman approximately 3 weeks ago, and the ptable had been associated with it. I had provisioned several VMs under this OS -- most recently two days ago. No other changes were made to the KS template or the ptable template.
One odd thing: attempting to preview the KS template for a host which already existed (existingHost01) rendered the template without any problems. When I attempted to preview the KS template for the host I was trying to provision today (newTestProv01), the template threw an error.
I checked the OS and verified that the ptable was listed as associated, and the checkbox was checked.
Please let me know if I can provide further information.
#5 Updated by Dominic Cleal over 6 years ago
Thanks, seems that it ran but there's an error still.
A user reported on IRC that it was only two hosts in build mode that weren't updated correctly.
Editing and re-saving a host, ensuring the correct partition table is selected under the Operating System tab, should fix any affected hosts.
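To make the failure mode above concrete, here is a toy sketch (not actual Foreman code -- the host list, ID values, and `id_map` are hypothetical) of what goes wrong: the upgrade copies partition tables into a new table with new IDs, and a host left holding the old `ptable_id` no longer resolves until it is remapped, which is what re-saving the host does.

```ruby
# Toy model of the upgrade's ID remapping; all data below is hypothetical.
new_ptable_ids = [67]             # ptable IDs valid after the upgrade
id_map         = { 12 => 67 }     # old ID -> new ID, as in this report

hosts = [
  { name: "existingHost01", ptable_id: 67 },  # migrated correctly
  { name: "newTestProv01",  ptable_id: 12 },  # stale reference, breaks rendering
]

# Find hosts whose ptable_id no longer resolves, then remap them --
# the programmatic equivalent of editing and re-saving the host.
stale = hosts.reject { |h| new_ptable_ids.include?(h[:ptable_id]) }
stale.each { |h| h[:ptable_id] = id_map.fetch(h[:ptable_id]) }

puts stale.map { |h| h[:name] }.inspect   # => ["newTestProv01"]
```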
#6 Updated by Marek Hulán over 6 years ago
I'm unable to reproduce this. I migrated migration 20150514114044 down and then up again while watching a host in build mode; its ptable_id was updated correctly to point at the new object. The new object would have failed the migration if it had not been saved. It doesn't look like a migration issue. A log from the original migration run could help; otherwise I'm not sure we can find the cause and fix it.
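For anyone wanting to replay the same down/up cycle, these are the standard Rails migration tasks for a single version; the `foreman-rake` wrapper is what packaged installations ship, so adjust the invocation for a source install (this is a sketch of the repro, not an official procedure):

```shell
# Roll the ptable migration back, then re-run it, watching a host in
# build mode before and after. Assumes the foreman-rake wrapper from
# a packaged installation.
foreman-rake db:migrate:down VERSION=20150514114044
foreman-rake db:migrate:up   VERSION=20150514114044
```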
#7 Updated by Dominic Cleal over 6 years ago
- Status changed from New to Need more information
- Legacy Backlogs Release (now unused) deleted
Yeah, I've tried to reproduce it a few times too, including this morning with a host in build mode. My only guess is that perhaps validation is being performed on hosts during the migration and some aspect of orchestration is failing, but I have no evidence for that.
/var/log/foreman-install.log on Debian or /var/log/foreman/db_migrate.log on an RPM installation may help if it shows errors.