Bug #23473 (Closed)

Update hammer to correctly provision VMware VMs

Added by Doug Forster over 6 years ago. Updated about 5 years ago.

Status: Closed
Priority: Urgent
Assignee: -
Category: Compute resources
Target version: -
Triaged: Yes

Description

Recently, VMware compute resources were updated to allow per-disk configuration of volumes:

Additional SCSI Controller with per-disk configuration
http://projects.theforeman.org/issues/4509

I believe this may also be the root cause of this issue:
https://projects.theforeman.org/issues/23466

When creating a host via the Foreman UI, the request ends up containing:

"compute_attributes": {
  "cpus": "1",
  "corespersocket": "1",
  "memory_mb": "4096",
  "firmware": "efi",
  "cluster": "VFP",
  "resource_pool": "Resources",
  "path": "/Datacenters/VFP/vm",
  "guest_id": "otherGuest64",
  "hardware_version": "Default",
  "memoryHotAddEnabled": "0",
  "cpuHotAddEnabled": "0",
  "add_cdrom": "0",
  "start": "1",
  "annotation": "",
  "scsi_controllers": "{\"scsiControllers\":[{\"type\":\"VirtualLsiLogicController\",\"key\":1000}],\"volumes\":[{\"thin\":\"true\",\"name\":\"Hard disk\",\"mode\":\"persistent\",\"controllerKey\":1000,\"datastore\":\"Storage_2\",\"size\":52428800,\"sizeGb\":50,\"eagerZero\":\"false\"}]}" 
},
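
For readability, the escaped scsi_controllers string above decodes to:

{
  "scsiControllers": [
    { "type": "VirtualLsiLogicController", "key": 1000 }
  ],
  "volumes": [
    {
      "thin": "true",
      "name": "Hard disk",
      "mode": "persistent",
      "controllerKey": 1000,
      "datastore": "Storage_2",
      "size": 52428800,
      "sizeGb": 50,
      "eagerZero": "false"
    }
  ]
}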

When using hammer, the request contains only:

"compute_attributes": {
  "start": "1",
  "volumes_attributes": {
    "0": {
      "size_gb": "50G",
      "datastore": "Storage_2",
      "thin": "true" 
    }
  }
},

I even tried without passing any volume attributes, and the build still fails, as if it doesn't honor what is in the compute profile.
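
A hammer invocation along these lines produces a request of that shape (a sketch; the host name is a placeholder, while the compute resource, profile, and volume values are taken from this report):

# Sketch: "testvm" is a placeholder; --volume carries the per-disk attributes shown above
hammer host create \
  --name "testvm" \
  --compute-resource "Valley Forge vCenter" \
  --compute-profile "2x4" \
  --compute-attributes "start=1" \
  --volume "size_gb=50G,datastore=Storage_2,thin=true"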


Files

hammer.log (102 KB), Doug Forster, 05/07/2018 03:32 PM

Related issues (1 open, 1 closed)

Related to Foreman - Bug #23466: Unable to provision host with hammer after upgrade to 1.16.1 (New)
Related to Hammer CLI - Tracker #26990: Tracker for VMware issues (Closed, assigned to Oleh Fedorenko)

Actions #1

Updated by Tomáš Strachota over 6 years ago

  • Status changed from New to Need more information

Thank you for reporting the issue.

Do you use compute profiles when creating the host? There's a difference in how hammer and the UI handle default values, and that could be the root of the issue. Would you mind sharing the full hammer -d log of your command? The relevant part of the server-side logs from the UI host creation would be helpful too.
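
For example, the full debug log can be captured with standard shell redirection:

# -d turns on hammer's debug output; tee saves it to a file while still printing it
hammer -d host create <options...> 2>&1 | tee hammer.log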

Actions #2

Updated by Doug Forster over 6 years ago

Tomáš Strachota wrote:

Do you use compute profiles when creating the host?

Yes.

On the server side, the only exception is the one described in this issue: https://projects.theforeman.org/issues/23466

Actions #3

Updated by Doug Forster over 6 years ago

  • Status changed from Feedback to New
Actions #4

Updated by Tomáš Strachota over 6 years ago

  • Related to Bug #23466: Unable to provision host with hammer after upgrade to 1.16.1 added
Actions #5

Updated by Jason Hane over 6 years ago

We're having this problem too. Any ideas on what we can do? Do we need to upgrade to Foreman 1.17 and the latest hammer?

Actions #6

Updated by Jason Hane over 6 years ago

  • Priority changed from Normal to Urgent

Any insight you can provide would be greatly appreciated. This is preventing us from creating VMs through our automated provisioning system.

Actions #7

Updated by Tomáš Strachota over 6 years ago

Unfortunately I didn't manage to debug this one further. The issue is most likely caused by the different approach the API uses for merging the attributes. I guess that some of them get merged incorrectly.

Could you share the attributes from the profile you're using? It's not possible to list them from hammer, but the API should give you the info: https://theforeman.org/api/1.16/apidoc/v2/compute_profiles/show.html
That could help us simulate the issue.
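
For example (a sketch; the hostname, credentials, and profile id are placeholders):

# Fetch a compute profile, including its per-compute-resource vm_attrs
curl -s -u admin:changeme -H "Accept: application/json" \
  "https://foreman.example.com/api/v2/compute_profiles/<profile_id>"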

Actions #8

Updated by Doug Forster over 6 years ago

Tomáš Strachota wrote:

Unfortunately I didn't manage to debug this one further. The issue is most likely caused by the different approach the API uses for merging the attributes. I guess that some of them get merged incorrectly.

Could you share the attributes from the profile you're using? It's not possible to list them from hammer, but the API should give you the info: https://theforeman.org/api/1.16/apidoc/v2/compute_profiles/show.html
That could help us simulate the issue.

Here is the output of the API. There is a good chance this is the issue:

{
  "created_at": "2017-11-15 12:15:01 -0500",
  "updated_at": "2017-11-16 14:53:21 -0500",
  "id": 19,
  "name": "2x4",
  "compute_attributes": [
    {
      "id": 29,
      "name": "2 CPUs and 4096 MB memory",
      "compute_resource_id": 5,
      "compute_resource_name": "Philadelphia vCenter",
      "provider_friendly_name": "VMware",
      "compute_profile_id": 19,
      "compute_profile_name": "2x4",
      "vm_attrs": {
        "cpus": "2",
        "corespersocket": "2",
        "memory_mb": "4096",
        "firmware": "efi",
        "cluster": "PHP2",
        "resource_pool": "Resources",
        "path": "/Datacenters/PHP/vm",
        "guest_id": "otherGuest64",
        "scsi_controller_type": "ParaVirtualSCSIController",
        "hardware_version": "Default",
        "memoryHotAddEnabled": "0",
        "cpuHotAddEnabled": "0",
        "add_cdrom": "0",
        "annotation": "",
        "interfaces_attributes": {
          "0": {
            "type": "VirtualVmxnet3",
            "network": "dvportgroup-586"
          }
        },
        "volumes_attributes": {
          "0": {
            "datastore": "php-prod-datastore-1",
            "name": "Hard disk",
            "size_gb": "50",
            "thin": "true",
            "eager_zero": "false",
            "mode": "persistent"
          }
        }
      }
    },
    {
      "id": 31,
      "name": "2 CPUs and 4096 MB memory",
      "compute_resource_id": 1,
      "compute_resource_name": "Valley Forge vCenter",
      "provider_friendly_name": "VMware",
      "compute_profile_id": 19,
      "compute_profile_name": "2x4",
      "vm_attrs": {
        "cpus": "2",
        "corespersocket": "2",
        "memory_mb": "4096",
        "firmware": "efi",
        "cluster": "VFP",
        "resource_pool": "Resources",
        "path": "/Datacenters/VFP/vm",
        "guest_id": "otherGuest64",
        "scsi_controller_type": "ParaVirtualSCSIController",
        "hardware_version": "Default",
        "memoryHotAddEnabled": "0",
        "cpuHotAddEnabled": "0",
        "add_cdrom": "0",
        "annotation": "",
        "image_id": "RHEL73",
        "interfaces_attributes": {
          "0": {
            "type": "VirtualVmxnet3",
            "network": "dvportgroup-440"
          }
        },
        "volumes_attributes": {
          "0": {
            "datastore": "VFP_Storage_1",
            "name": "Hard disk",
            "size_gb": "50",
            "thin": "true",
            "eager_zero": "false",
            "mode": "persistent"
          }
        }
      }
    },
    {
      "id": 33,
      "name": "2 CPUs and 4096 MB memory",
      "compute_resource_id": 3,
      "compute_resource_name": "Wood Dale vCenter",
      "provider_friendly_name": "VMware",
      "compute_profile_id": 19,
      "compute_profile_name": "2x4",
      "vm_attrs": {
        "cpus": "2",
        "corespersocket": "2",
        "memory_mb": "4096",
        "firmware": "efi",
        "cluster": "WDP",
        "resource_pool": "Resources",
        "path": "/Datacenters/WDP/vm",
        "guest_id": "otherGuest64",
        "scsi_controller_type": "ParaVirtualSCSIController",
        "hardware_version": "Default",
        "memoryHotAddEnabled": "0",
        "cpuHotAddEnabled": "0",
        "add_cdrom": "0",
        "annotation": "",
        "interfaces_attributes": {
          "0": {
            "type": "VirtualVmxnet3",
            "network": "dvportgroup-59"
          }
        },
        "volumes_attributes": {
          "0": {
            "datastore": "WDP_Storage_1",
            "name": "Hard disk",
            "size_gb": "50",
            "thin": "true",
            "eager_zero": "false",
            "mode": "persistent"
          }
        }
      }
    }
  ]
}

I am going to try to recreate this and see if hammer works again.

Actions #9

Updated by Doug Forster over 6 years ago

I tried creating a new compute profile and it still failed.

{
  "created_at": "2018-06-04 10:27:10 -0400",
  "updated_at": "2018-06-04 11:21:59 -0400",
  "id": 33,
  "name": "new_2x4",
  "compute_attributes": [
    {
      "id": 61,
      "name": "2 CPUs and 4096 MB memory",
      "compute_resource_id": 5,
      "compute_resource_name": "Philadelphia vCenter",
      "provider_friendly_name": "VMware",
      "compute_profile_id": 33,
      "compute_profile_name": "new_2x4",
      "vm_attrs": {
        "cpus": "2",
        "corespersocket": "2",
        "memory_mb": "4096",
        "firmware": "automatic",
        "cluster": "PHP",
        "resource_pool": "Resources",
        "path": "/Datacenters/PHP/vm",
        "guest_id": "rhel7_64Guest",
        "hardware_version": "Default",
        "memoryHotAddEnabled": "0",
        "cpuHotAddEnabled": "0",
        "add_cdrom": "0",
        "annotation": "",
        "scsi_controllers": [
          {
            "type": "ParaVirtualSCSIController",
            "key": 1000
          }
        ],
        "interfaces_attributes": {
          "0": {
            "type": "VirtualVmxnet3",
            "network": "dvportgroup-586"
          }
        },
        "volumes_attributes": {
          "0": {
            "size_gb": 100,
            "datastore": "php-prod-datastore-1",
            "storage_pod": "",
            "thin": true,
            "eager_zero": false,
            "name": "Hard disk",
            "mode": "persistent",
            "controller_key": 1000
          }
        }
      }
    },
    {
      "id": 63,
      "name": "2 CPUs and 4096 MB memory",
      "compute_resource_id": 1,
      "compute_resource_name": "Valley Forge vCenter",
      "provider_friendly_name": "VMware",
      "compute_profile_id": 33,
      "compute_profile_name": "new_2x4",
      "vm_attrs": {
        "cpus": "2",
        "corespersocket": "2",
        "memory_mb": "4096",
        "firmware": "automatic",
        "cluster": "VFP",
        "resource_pool": "Resources",
        "path": "/Datacenters/VFP/vm",
        "guest_id": "rhel7_64Guest",
        "hardware_version": "Default",
        "memoryHotAddEnabled": "0",
        "cpuHotAddEnabled": "0",
        "add_cdrom": "0",
        "annotation": "",
        "image_id": "RHEL73",
        "scsi_controllers": [
          {
            "type": "ParaVirtualSCSIController",
            "key": 1000
          }
        ],
        "interfaces_attributes": {
          "0": {
            "type": "VirtualVmxnet3",
            "network": "dvportgroup-440"
          }
        },
        "volumes_attributes": {
          "0": {
            "thin": true,
            "name": "Hard disk",
            "mode": "persistent",
            "controller_key": 1000,
            "size": 10485760,
            "size_gb": 100,
            "datastore": "VFP_Storage_1"
          }
        }
      }
    },
    {
      "id": 65,
      "name": "2 CPUs and 4096 MB memory",
      "compute_resource_id": 3,
      "compute_resource_name": "Wood Dale vCenter",
      "provider_friendly_name": "VMware",
      "compute_profile_id": 33,
      "compute_profile_name": "new_2x4",
      "vm_attrs": {
        "cpus": "2",
        "corespersocket": "2",
        "memory_mb": "4096",
        "firmware": "automatic",
        "cluster": "WDP",
        "resource_pool": "Resources",
        "path": "/Datacenters/WDP/vm",
        "guest_id": "rhel7_64Guest",
        "hardware_version": "Default",
        "memoryHotAddEnabled": "0",
        "cpuHotAddEnabled": "0",
        "add_cdrom": "0",
        "annotation": "",
        "scsi_controllers": [
          {
            "type": "ParaVirtualSCSIController",
            "key": 1000
          }
        ],
        "interfaces_attributes": {
          "0": {
            "type": "VirtualVmxnet3",
            "network": "dvportgroup-8737"
          }
        },
        "volumes_attributes": {
          "0": {
            "thin": true,
            "name": "Hard disk",
            "mode": "persistent",
            "controller_key": 1000,
            "size": 10485760,
            "size_gb": 100,
            "datastore": "WDP_Storage_1"
          }
        }
      }
    }
  ]
}

Actions #10

Updated by Oleh Fedorenko over 5 years ago

  • Related to Tracker #26990: Tracker for VMware issues added

Actions #11

Updated by Oleh Fedorenko about 5 years ago

  • Status changed from New to Closed
  • Triaged changed from No to Yes

This is a pretty old bug and I presume the issue is no longer reproducible, since there have been fixes related to this problem in both Foreman and hammer. Closing.

If you're still facing this issue, please reopen and provide the hammer -d output again, as well as some logs from the server. Thank you.
