Bug #12487 (open)

Provisioning with Templates causes new VM to use Template Disk

Added by Michael Speth about 9 years ago. Updated over 7 years ago.

Status: New
Priority: Normal
Assignee: -
Category: Compute resources - VMware
Target version: -
Difficulty: -
Triaged: -
Fixed in Releases: -
Found in Releases: -

Description

Foreman 1.10 is using the Template's disk as the provisioned VM's disk.

Environment
  • Server OS: Ubuntu 14.04
  • Foreman: 1.10.0-RC2
  • Web Server: Apache + Passenger
  • vSphere: 5.0.0

Workflow
  • Kick off provisioning from a template (the same happens with and without thin provisioning)
  • Foreman creates the VM and copies the Template's disk to the correct location on vSphere
  • The VM boots using the Template's disk, NOT the new disk (see the sketch after this list)
  • The Template is locked because the new VM is using its disk
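
To verify the symptom independently of the screenshots, a disk-backing check along the following lines can be run against vCenter. This is a minimal sketch assuming the rbvmomi gem; the server, credentials, datacenter and VM names are placeholders, not values from this report.

  # Sketch: list which .vmdk files back a VM's virtual disks.
  # All connection details and names below are placeholders.
  require 'rbvmomi'

  vim = RbVmomi::VIM.connect(host:     'vcenter.example.com',
                             user:     'administrator',
                             password: 'secret',
                             insecure: true)
  dc = vim.serviceInstance.find_datacenter('DC1') or abort 'datacenter not found'
  vm = dc.find_vm('foreman-test-16') or abort 'VM not found'

  vm.config.hardware.device.grep(RbVmomi::VIM::VirtualDisk).each do |disk|
    # A correctly cloned VM points at its own copy of the disk,
    # not at the template's .vmdk.
    puts "#{disk.deviceInfo.label}: #{disk.backing.fileName}"
  end

If any of the printed paths point into the template's folder, the new VM is running off the template's disk, which is exactly the lock reported below.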

Output
This section contains supporting information.

Provision Log
I have attached foreman.log, which contains a provision log for the foreman-test-16 host.

Foreman Error
The following is the error from Foreman when attempting to provision a second VM.
Failed to create a compute vSphere-PN (VMware) instance foreman-test-11.config.landcareresearch.co.nz: FileLocked: Unable to access file [PN_IBM_SAN_VM02] Ubuntu 14.04.1 Template/Ubuntu 14.04.1 Template.vmdk since it is locked

vSphere
  • templates.jpg shows the Ubuntu 14.04 Template and that it is locked
  • wrongdrive.jpg shows that vSphere is using the template disk and not the newly provisioned disk


Files

templates.jpg 22.4 KB Michael Speth, 11/15/2015 07:35 PM
wrongdrive.jpg 48.1 KB Michael Speth, 11/15/2015 07:35 PM
foreman.log 151 KB (log file containing the provision) Michael Speth, 11/15/2015 07:53 PM

Related issues: 1 (0 open, 1 closed)

Related to Foreman - Bug #9705: Disk sizes specified not used in VMware image provisioning (Closed, Ivan Necas, 03/10/2015)
#1

Updated by Dominic Cleal about 9 years ago

I can't see anything wrong in the parameters being sent in; it looks like it ought to be cloned by vSphere.

Please try commenting out the following line that modifies volumes: https://github.com/theforeman/foreman/blob/1.10.0-RC2/app/models/compute_resources/foreman/model/vmware.rb#L372, restarting and then cloning again.

This should rule out the new volume-cloning code in 1.10.0. If the issue remains, it's possible it is the same as in 1.9 and is happening on the vSphere side.
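
Not part of the suggestion above, but for anyone wanting to rule the layers in or out more directly, here is a hedged sketch of a plain clone driven through fog's vSphere provider (the library Foreman uses underneath) with none of Foreman's volume handling involved. The server, credentials and names are placeholders, and the vm_clone option keys may differ slightly between fog versions.

  # Sketch: clone the template through fog's vSphere provider directly,
  # with no Foreman volume handling involved, to see whether vSphere
  # itself produces a proper full copy. Placeholders throughout.
  require 'fog'

  # An untrusted vCenter certificate may additionally require
  # :vsphere_expected_pubkey_hash.
  compute = Fog::Compute.new(
    provider:         'vsphere',
    vsphere_server:   'vcenter.example.com',
    vsphere_username: 'administrator',
    vsphere_password: 'secret'
  )

  # Option keys follow fog's vm_clone request; exact keys may vary by version.
  result = compute.vm_clone(
    'datacenter'    => 'DC1',
    'template_path' => 'Ubuntu 14.04.1 Template',
    'name'          => 'clone-test-01',
    'power_on'      => false,
    'wait'          => true
  )
  puts result.inspect

If a clone created this way boots from its own copy of the disk, the problem is more likely in how the volume attributes are modified before the clone call.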

#2

Updated by Michael Speth about 9 years ago

Commenting out line 372 fixed our issue.

Steps we took:
  • Upgraded to RC3
  • Commented out line 372 in vmware.rb
  • Restarted Apache/Passenger
  • Provisioned a new VM
  • vSphere selected the correct HDD!
#3

Updated by Dominic Cleal about 9 years ago

  • Related to Bug #9705: Disk sizes specified not used in VMware image provisioning added
#4

Updated by Dominic Cleal about 9 years ago

  • Release set to 63

Thanks for confirming.

#5

Updated by Michael Speth almost 9 years ago

So is there a solution for this? Commenting out line 372 does enable us to provision new VMs; however, the disk size cannot be changed, nor can additional disks be added. Is this related or a different issue?

#6

Updated by Dominic Cleal almost 9 years ago

The line you're commenting out is part of the resizing/additional disks logic, so it will stop that working. The ticket status will change to Ready for Testing if a patch is proposed, and Closed with a release if fixed.
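
For context on what that logic is responsible for, here is a minimal rbvmomi sketch of how a post-clone disk grow is commonly expressed against the vSphere API. It is an illustration under assumed names only, not Foreman's actual implementation.

  # Sketch: grow a cloned VM's first disk via the vSphere API (rbvmomi).
  # Illustrative only; not Foreman's code. 'vm' is an already-located
  # RbVmomi::VIM::VirtualMachine, new_size_gb the desired capacity.
  require 'rbvmomi'

  def grow_first_disk(vm, new_size_gb)
    disk = vm.config.hardware.device.grep(RbVmomi::VIM::VirtualDisk).first
    disk.capacityInKB = new_size_gb * 1024 * 1024

    spec = RbVmomi::VIM.VirtualMachineConfigSpec(
      deviceChange: [
        RbVmomi::VIM.VirtualDeviceConfigSpec(operation: :edit, device: disk)
      ]
    )
    vm.ReconfigVM_Task(spec: spec).wait_for_completion
  end

Skipping the volume-modification step therefore trades the lock problem for losing resize and extra-disk support, which matches what you are seeing.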

#7

Updated by Michael Speth almost 9 years ago

Dominic Cleal wrote:

The line you're commenting out is part of the resizing/additional disks logic, so it will stop that working. The ticket status will change to Ready for Testing if a patch is proposed, and Closed with a release if fixed.

What should I do in the meantime?

#8

Updated by Dominic Cleal almost 9 years ago

  • Release changed from 63 to 104
#9

Updated by Michael Speth almost 9 years ago

Just want to confirm that the 1.10.0 release still has this problem :(

#10

Updated by Dominic Cleal almost 9 years ago

  • Release changed from 104 to 123
#11

Updated by Michael Speth almost 9 years ago

Is there anything I can do to help debug this issue?

#12

Updated by Timo Goebel almost 9 years ago

Michael Speth wrote:

Is there anything I can do to help debug this issue?

I just tried to reproduce this and failed. It did not lock the template for me.
I tried with "Thin Provision" and without "Eager Zero" enabled.

#13

Updated by Michael Speth almost 9 years ago

Timo Goebel wrote:

Michael Speth wrote:

Is there anything I can do to help debug this issue?

I just tried to reproduce this and failed. It did not lock the template for me.
I tried with "Thin Provision" and without "Eager Zero" enabled.

With this bug you will still be able to deploy one VM; it's when you try to deploy the second VM that it will fail.

Have you tried deploying two VMs back to back on the same storage location?

#14

Updated by Dominic Cleal almost 9 years ago

  • Release changed from 123 to 145
#15

Updated by Dominic Cleal over 8 years ago

  • Release deleted (145)
#16

Updated by Michael Speth over 8 years ago

What version of vSphere did Timo Goebel test with?

We are using v5.0.0-4695. Do you think this is related to a bug in that version?

I see this issue has been deleted and not scheduled for a release. Does that mean this is dead? Is there anything else I can do to help debug this?

#17

Updated by Dominic Cleal over 8 years ago

Michael Speth wrote:

I see this issue has been deleted and not scheduled for a release. Does that mean this is dead? Is there anything else I can do to help debug this?

It's been unscheduled as it wasn't fixed by the end of the release series, sorry. Somebody may still fix it.
