Bug #12487
Provisioning with Templates causes new VM to use Template Disk
Description
Foreman 1.10 is using the Template's disk as the newly provisioned VM's disk.
Environment
- Server OS: Ubuntu 14.04
- Foreman: 1.10.0-RC2
- Web Server: Apache + Passenger
- vSphere: 5.0.0
Workflow
- Kick off a provision from a template (the same happens whether thin provisioning is enabled or not)
- Foreman creates the VM and copies the template's disk to the correct location on vSphere
- The VM boots using the template's disk, NOT the new disk
- The template is locked because the new VM is using its disk
Output
This section contains supporting information.
Provision Log
I have attached foreman.log, which contains a provision log for the foreman-test-16 host.
Foreman Error
The following is the error from Foreman when attempting to provision a second VM.
Failed to create a compute vSphere-PN (VMware) instance foreman-test-11.config.landcareresearch.co.nz: FileLocked: Unable to access file [PN_IBM_SAN_VM02] Ubuntu 14.04.1 Template/Ubuntu 14.04.1 Template.vmdk since it is locked
vSphere
- tempaltes.jpg shows that the Ubuntu 14.04 Template is locked
- wrongdrive.jpg shows that vSphere is using the template disk and not the newly provisioned disk
Files
Updated by Dominic Cleal almost 9 years ago
I can't see anything wrong in the parameters being sent in; it looks like it ought to be cloned correctly by vSphere.
Please try commenting out the following line that modifies volumes: https://github.com/theforeman/foreman/blob/1.10.0-RC2/app/models/compute_resources/foreman/model/vmware.rb#L372, restarting, and then cloning again.
This should rule out the new volume cloning code in 1.10.0. If the issue remains, it's possibly the same behaviour as 1.9 and may be happening on the vSphere side.
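Roughly, this is the kind of edit I mean (the method and attribute names below are only placeholders for illustration; the real code is the linked line in vmware.rb):

    # Sketch only, NOT the actual Foreman source -- see the link above for the real line.
    # The idea of the test is to skip the step that rewrites the volume list
    # before the clone request is handed to vSphere.
    def clone_vm(args)
      # args[:volumes] = build_clone_volumes(args)   # <- the kind of line to comment out
      vm = new_vm(args)                              # pass the unmodified args through to vSphere
      vm.save
    end

After commenting the line out, restart the web server so the change is picked up, then try the clone again.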
Updated by Michael Speth almost 9 years ago
Commenting out line 372 fixed our issue.
Steps that we did:
- Upgraded to RC3
- Commented out line 372 in vmware.rb
- Restarted Apache/Passenger
- Provisioned a new VM
- The correct HDD was selected by vSphere!
Updated by Dominic Cleal almost 9 years ago
- Related to Bug #9705: Disk sizes specified not used in VMware image provisioning added
Updated by Dominic Cleal almost 9 years ago
- Release set to 63
Thanks for confirming.
Updated by Michael Speth almost 9 years ago
Is there a solution for this? Commenting out line 372 does enable us to provision new VMs; however, the disk size cannot be changed, nor can additional disks be added. Is this related or a different issue?
Updated by Dominic Cleal almost 9 years ago
The line you're commenting out is part of the resizing/additional disks logic, so it will stop that working. The ticket status will change to Ready for Testing if a patch is proposed, and Closed with a release if fixed.
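For illustration (the attribute names below are approximate placeholders, not necessarily the exact ones used in the code), the per-disk settings submitted for a host look roughly like this, and skipping that line means they are never applied to the clone:

    # Illustrative only: roughly the per-disk attributes a VMware host submits,
    # which the commented-out logic would normally apply to the cloned VM.
    volumes = [
      { :name => 'Hard disk 1', :size_gb => 20, :thin => true,  :datastore => 'PN_IBM_SAN_VM02' },
      { :name => 'Hard disk 2', :size_gb => 50, :thin => false, :datastore => 'PN_IBM_SAN_VM02' },
    ]
    # With line 372 commented out these values are ignored, so the disk size
    # cannot be changed and additional disks are not created.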
Updated by Michael Speth almost 9 years ago
Dominic Cleal wrote:
The line you're commenting out is part of the resizing/additional disks logic, so it will stop that working. The ticket status will change to Ready for Testing if a patch is proposed, and Closed with a release if fixed.
What should I do in the mean time?
Updated by Dominic Cleal almost 9 years ago
- Release changed from 63 to 104
Updated by Michael Speth over 8 years ago
I just want to confirm that the 1.10.0 release still has this problem :(
Updated by Dominic Cleal over 8 years ago
- Release changed from 104 to 123
Updated by Michael Speth over 8 years ago
Is there anything I can do to help debug this issue?
Updated by Timo Goebel over 8 years ago
Michael Speth wrote:
Is there anything I can do to help debug this issue?
I just tried to reproduce this and failed. It did not lock the template for me.
I tried with "Thin Provision" and without "Eager Zero" enabled.
Updated by Michael Speth over 8 years ago
Timo Goebel wrote:
Michael Speth wrote:
Is there anything I can do to help debug this issue?
I just tried to reproduce this and failed. It did not lock the template for me.
I tried with "Thin Provision" and without "Eager Zero" enabled.
You will be able to deploy one VM with this bug; it's when you try to deploy the second VM that it will fail.
Have you tried deploying two VMs back to back on the same storage location?
Updated by Dominic Cleal over 8 years ago
- Release changed from 123 to 145
Updated by Dominic Cleal over 8 years ago
- Release deleted (145)
Updated by Michael Speth over 8 years ago
What version of vSphere did Timo Goebel test with?
We are using v5.0.0-4695. Do you think this is related to a bug in that version?
I see this issue has been deleted and not scheduled for a release. Does that mean this is dead? Is there anything else I can do to help debug this?
Updated by Dominic Cleal over 8 years ago
Michael Speth wrote:
I see this issue has been deleted and not scheduled for a release. Does that mean this is dead? Is there anything else I can do to help debug this?
It's been unscheduled as it wasn't fixed by the end of the release series, sorry. Somebody may still fix it.