Bug #25093

Provisioning VMs with Foreman API Not Working for Multiple SCSI Controllers and Volumes

Added by Eric Hansen about 1 year ago. Updated 3 months ago.

Status:
Closed
Priority:
Normal
Category:
Compute resources
Target version:
-
Triaged:
No

Description

I'm trying to generate a VM provisioning configuration that adds multiple VMDKs to the VM. The problem is that I can't find any good information or examples on how to structure the host-create compute_attributes for scsi_controllers and the volume attributes. The structure below, which is really my best guess after watching the HostsController#create statement in the production log while building VMs manually, doesn't work. If I strip out the volume and scsi_controller key structures, it does work, inheriting the underlying hostgroup values. Does anyone have any insight into what the structure needs to look like? I'm running Foreman 1.18.1.

params = {
  "host" => {
    "name" => "#{outhouse[:name]}",
    "operatingsystem_id" => "#{outhouse[:os]}",
    "managed" => "true",
    "location_id" => 5,
    "organization_id" => 3,
    "domain_id" => "#{outhouse[:domain]}",
    "environment_id" => "#{outhouse[:puppetenv]}",
    "puppetclass_ids" => "#{outhouse[:puppetids]}",
    "provision_method" => "#{outhouse[:provision]}",
    "root_password" => "changeme",
    "hostgroup_id" => "#{outhouse[:hostgroup]}",
    "compute_resource_id" => "#{outhouse[:compute]}",
    "build" => "true",
    "ip" => ip,
    "interfaces_attributes" => [
      {
        "type" => "Nic::Managed",
        "primary" => "true",
        "provision" => "true",
        "managed" => "#{outhouse[:ipmanaged]}",
        "compute_network" => "#{outhouse[:network]}"
      }
    ],
    "compute_attributes" => {
      "start" => "1",
      "cpus" => "2",
      "corespersocket" => "1",
      "memory_mb" => "3500",
      "firmware" => "bios",
      "resource_pool" => "foreman-temp",
      "hardware_version" => "Default",
      "add_cdrom" => "1",
      "scsi_controllers" => {
        "scsiControllers" => [
          {
            "type" => "VirtualLsiLogicController",
            "key" => "1000"
          }
        ],
        "volumes" => [
          {
            "thin" => 1,
            "name" => "Hard disk",
            "mode" => "persistent",
            "controllerKey" => 1000,
            "sizeGb" => 150,
            "datastore" => "esxa_lun0",
            "eagerZero" => 0
          },
          {
            "thin" => 1,
            "name" => "Hard disk 2",
            "mode" => "persistent",
            "controllerKey" => 1000,
            "sizeGb" => 6,
            "datastore" => "esxa_lun1",
            "eagerZero" => 0
          }
        ]
      }
    }
  }
}.to_json
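As an aside, when a host-create like this is rejected with a 422, the response body (not just the exception class) usually carries the validation messages. Below is a minimal stdlib sketch of posting such a payload and surfacing those messages; the URL and credentials are placeholders, and the `{"error" => {"full_messages" => [...]}}` body shape is my reading of Foreman's API error responses, so verify against your instance (the original script uses the rest-client gem, but the request shape is the same):

```ruby
require 'net/http'
require 'uri'
require 'json'

# Placeholder endpoint -- substitute your Foreman host.
FOREMAN_URL = URI('https://foreman.example.com/api/v2/hosts')

# Pull the human-readable messages out of a 422 body. Foreman typically
# wraps validation failures as {"error" => {"full_messages" => [...]}};
# fall back to the raw body if the shape differs.
def unprocessable_messages(body)
  parsed = JSON.parse(body)
  parsed.dig('error', 'full_messages') || [body]
end

def create_host(params_json)
  req = Net::HTTP::Post.new(FOREMAN_URL)
  req.basic_auth('admin', 'changeme') # placeholder credentials
  req['Content-Type'] = 'application/json'
  req['Accept'] = 'application/json'
  req.body = params_json

  res = Net::HTTP.start(FOREMAN_URL.host, FOREMAN_URL.port, use_ssl: true) do |http|
    http.request(req)
  end
  # On 422, print the expanded validation errors instead of failing blind.
  warn unprocessable_messages(res.body).join("\n") if res.code == '422'
  res
end
```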

Related issues

Related to Hammer CLI - Tracker #16829: Tracker for compute resource related issues (New, 2016-10-07)

Related to Hammer CLI - Tracker #26990: Tracker for VMware issues (New)

Associated revisions

Revision 7b59a148 (diff)
Added by Oleh Fedorenko 3 months ago

Fixes #25093, #26421 - Host creation with multi SCSI controllers

Revision c62f9732
Added by Shira Maximov 3 months ago

Merge pull request #424 from ofedoren/bug-25093-26421-multiSCSI

Fixes #25093, #26421 - Host creation with multi SCSI controllers

History

#1 Updated by Eric Hansen about 1 year ago

Here is the exact REST JSON structure that gets sent, and the response that comes back:

{"host":{"name":"eh-cent4","operatingsystem_id":"1","medium_id":"9","ptable_id":"123","managed":"true","location_id":5,"organization_id":3,"domain_id":"6","environment_id":"7","puppetclass_ids":"[]","provision_method":"bootdisk","root_password":"changeme","hostgroup_id":"1","compute_resource_id":"5","build":"true","ip":"","interfaces_attributes":[{"type":"Nic::Managed","primary":"true","provision":"true","managed":"true","compute_network":"172.20.x.x Network"}],"compute_attributes":{"start":"1","cpus":"2","corespersocket":"1","memory_mb":"3500","firmware":"bios","resource_pool":"foreman-temp","hardware_version":"Default","add_cdrom":"1","scsi_controllers":{"scsiControllers":[{"type":"VirtualLsiLogicController","key":"1000"}],"volumes":[{"thin":1,"name":"Hard disk","mode":"persistent","controllerKey":1000,"sizeGb":150,"datastore":"esxa_lun0"},{"thin":1,"name":"Hard disk 2","mode":"persistent","controllerKey":1000,"sizeGb":6,"datastore":"esxa_lun1"}]}}}}
-----------------------------
/usr/local/share/gems/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:223:in `exception_with_response': 422 Unprocessable Entity (RestClient::UnprocessableEntity)
from /usr/local/share/gems/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:103:in `return!'
from /usr/local/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:809:in `process_result'
from /usr/local/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:725:in `block in transmit'
from /usr/share/ruby/net/http.rb:852:in `start'
from /usr/local/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:715:in `transmit'
from /usr/local/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:145:in `execute'
from /usr/local/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:52:in `execute'
from ./eh-vmbuilder.rb:183:in `create_host'
from ./eh-vmbuilder.rb:795:in `<main>'

#2 Updated by Eric Hansen about 1 year ago

Finally, this is the production log HostsController#create statement:
2018-10-01T23:41:59 [I|app|4d8ee] Processing by Api::V2::HostsController#create as JSON
2018-10-01T23:41:59 [I|app|4d8ee] Parameters: {"host"=>{"name"=>"eh-cent4", "operatingsystem_id"=>"1", "medium_id"=>"9", "ptable_id"=>"123", "managed"=>"true", "location_id"=>5, "organization_id"=>3, "domain_id"=>"6", "environment_id"=>"7", "puppetclass_ids"=>"[]", "provision_method"=>"bootdisk", "root_password"=>"[FILTERED]", "hostgroup_id"=>"1", "compute_resource_id"=>"5", "build"=>"true", "ip"=>"", "interfaces_attributes"=>[{"type"=>"Nic::Managed", "primary"=>"true", "provision"=>"true", "managed"=>"true", "compute_network"=>"172.20.x.x Network"}], "compute_attributes"=>{"start"=>"1", "cpus"=>"2", "corespersocket"=>"1", "memory_mb"=>"3500", "firmware"=>"bios", "resource_pool"=>"foreman-temp", "hardware_version"=>"Default", "add_cdrom"=>"1", "scsi_controllers"=>{"scsiControllers"=>[{"type"=>"VirtualLsiLogicController", "key"=>"1000"}], "volumes"=>[{"thin"=>1, "name"=>"Hard disk", "mode"=>"persistent", "controllerKey"=>1000, "sizeGb"=>150, "datastore"=>"esxa_lun0"}, {"thin"=>1, "name"=>"Hard disk 2", "mode"=>"persistent", "controllerKey"=>1000, "sizeGb"=>6, "datastore"=>"esxa_lun1"}]}}}, "per_page"=>"1000", "apiv"=>"v2"}

And here is the error, which doesn't tell me much. Is there any way to expand it and find out what it is looking for?
2018-10-01T23:42:10 [W|app|4d8ee] Rolling back due to a problem: [#<Orchestration::Task:0x0000000009aa2d38 @name="Set up compute instance eh-cent4.qa.local", @id="Set up compute instance eh-cent4.qa.local", @status="failed", @priority=2, @action=[#<Host::Managed id: nil, name: "eh-cent4.qa.local", last_compile: nil, last_report: nil, updated_at: nil, created_at: nil, root_pass: "$5$Xbm8Y1Qz$g3sEPZYhtWSomzOHZ1qJSiePQAASTu4a/ud26h...", architecture_id: 1, operatingsystem_id: 1, environment_id: 7, ptable_id: 123, medium_id: 9, build: true, comment: nil, disk: nil, installed_at: nil, model_id: nil, hostgroup_id: 1, owner_id: 4, owner_type: "User", enabled: true, puppet_ca_proxy_id: 1, managed: true, use_image: nil, image_file: nil, uuid: nil, compute_resource_id: 5, puppet_proxy_id: 1, certname: nil, image_id: nil, organization_id: 3, location_id: 5, type: "Host::Managed", otp: nil, realm_id: nil, compute_profile_id: 2, provision_method: "bootdisk", grub_pass: "$5$Xbm8Y1Qz$g3sEPZYhtWSomzOHZ1qJSiePQAASTu4a/ud26h...", global_status: 0, lookup_value_matcher: "fqdn=eh-cent4.qa.local", pxe_loader: nil>, :setCompute], @created=1538451730.644516, @timestamp=2018-10-02 03:42:10 UTC>]
2018-10-01T23:42:10 [I|app|4d8ee] Processed 1 tasks from queue 'Host::Managed Main', completed 0/8
2018-10-01T23:42:10 [E|app|4d8ee] Task 'Set up compute instance eh-cent4.qa.local' failed
2018-10-01T23:42:10 [E|app|4d8ee] Task 'Query instance details for eh-cent4.qa.local' canceled
2018-10-01T23:42:10 [E|app|4d8ee] Task 'Generating ISO image for eh-cent4.qa.local' canceled
2018-10-01T23:42:10 [E|app|4d8ee] Task 'Upload ISO image to datastore for eh-cent4.qa.local' canceled
2018-10-01T23:42:10 [E|app|4d8ee] Task 'Create IPv4 DNS record for eh-cent4.qa.local' canceled
2018-10-01T23:42:10 [E|app|4d8ee] Task 'Create Reverse IPv4 DNS record for eh-cent4.qa.local' canceled
2018-10-01T23:42:10 [E|app|4d8ee] Task 'Power up compute instance eh-cent4.qa.local' canceled
2018-10-01T23:42:10 [E|app|4d8ee] Task 'Attach ISO image to CDROM drive for eh-cent4.qa.local' canceled
2018-10-01T23:42:10 [E|app|4d8ee] Unprocessable entity Host::Managed (id: new):
Failed to create a compute dev5vcenter-QA-Databases (VMware) instance eh-cent4.qa.local: undefined method `attributes' for #<Array:0x0000000009a114c8>

#3 Updated by Tomáš Strachota about 1 year ago

  • Description updated (diff)

#4 Updated by Tomáš Strachota about 1 year ago

  • Category set to Compute resources

#5 Updated by Tomáš Strachota about 1 year ago

  • Related to Tracker #16829: Tracker for compute resource related issues added

#6 Updated by Tomáš Strachota about 1 year ago

I'm afraid this scenario was never tested or documented for either hammer or the API. I doubt it works with hammer, but the API should be able to consume the same parameters as the UI. You can try to create a host in the web interface, observe the parameters in the logs, and use them for the API.
If you manage to create a host this way, it would be extremely useful if you could document the process here in the comments.

#7 Updated by TJ Guthrie about 1 year ago

Hi,

Here is example data with 1 SCSI controller and 2 volumes. This is being sent to Foreman 1.19.0.

{
  "host": {
    "compute_attributes": {
      "scsi_controller_type": "ParaVirtualSCSIController",
      "volumes_attributes": {
        "0": {
          "_delete": "",
          "datastore": "ops-vm-fs01-52",
          "name": "Hard disk 1",
          "size_gb": "50",
          "thin": "true",
          "eager_zero": "false"
        },
        "1": {
          "_delete": "",
          "datastore": "ops-vm-fs01-52",
          "name": "Hard disk 2",
          "size_gb": "200",
          "thin": "true",
          "eager_zero": "false"
        }
      },
      "cpus": 8,
      "corespersocket": 1,
      "memory_mb": 8192,
      "firmware": "bios",
      "cluster": "IT-DRS-96",
      "guest_id": "ubuntu64Guest",
      "path": "/Datacenters/NJ/vm/OPS",
      "hardware_version": "Default",
      "memoryHotAddEnabled": "1",
      "cpuHotAddEnabled": "1",
      "add_cdrom": "0",
      "start": "1"
    },
    "name": "ops-tj-test-cli05",
    "environment_name": "dc70",
    "ip": "10.89.1.10",
    "architecture_name": "x86_64",
    "domain_name": "dc70.lan",
    "realm_name": "DC70.LAN",
    "operatingsystem_name": "Ubuntu 18.04 LTS",
    "medium_name": "Ubuntu mirror",
    "ptable_name": "SDB - Preseed LVM",
    "subnet_id": "155",
    "compute_resource_id": 53,
    "hostgroup_name": "DC52/ops",
    "owner_id": 133,
    "owner_type": "User",
    "build": true,
    "enabled": true,
    "provision_method": "build",
    "managed": true,
    "interfaces_attributes": {
      "0": {
        "ip": "10.89.1.10",
        "type": "interface",
        "name": "ops-tj-test-cli05",
        "subnet_id": 155,
        "domain_name": "dc70.lan",
        "managed": true,
        "primary": true,
        "provision": true,
        "virtual": false,
        "compute_attributes": {
          "type": "VirtualE1000",
          "network": "DC70-Ops89"
        }
      }
    }
  }
}

- TJ
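A notable detail in TJ's working payload is that "volumes_attributes" is a hash of hashes keyed by stringified index, not an array. As a sketch, that shape can be built programmatically from a plain list of disk specs; the field names follow TJ's example above, while the datastore and sizes here are placeholders:

```ruby
# Build the "volumes_attributes" hash-of-hashes from an array of disk
# specs. Keys are stringified indexes ("0", "1", ...), matching the
# shape the Foreman UI sends.
def volumes_attributes(disks)
  disks.each_with_index.each_with_object({}) do |(disk, i), acc|
    acc[i.to_s] = {
      '_delete'    => '',
      'datastore'  => disk[:datastore],
      'name'       => "Hard disk #{i + 1}",
      'size_gb'    => disk[:size_gb].to_s,
      'thin'       => 'true',
      'eager_zero' => 'false'
    }
  end
end

# Example: two thin-provisioned disks on a placeholder datastore.
disks = [
  { datastore: 'ds1', size_gb: 50 },
  { datastore: 'ds1', size_gb: 200 }
]
volumes_attributes(disks)
# => { "0" => { ... "name" => "Hard disk 1" ... },
#      "1" => { ... "name" => "Hard disk 2" ... } }
```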

#8 Updated by Eric Hansen 10 months ago

It's been a while, but I resolved this problem. The root cause was poor design and/or documentation of the fields that need to be passed to the host-creation REST call. For example, the volume attributes are a hash of hashes while the SCSI controllers are an array of hashes. None of that is documented anywhere; you have to figure it out by code inspection and troubleshooting.
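To make those two shapes concrete, here is a sketch of a multi-controller compute_attributes structure as described above: an array of hashes for the controllers, a hash of hashes (keyed by stringified index) for the volumes. The field names are illustrative, drawn from the examples earlier in this thread; verify the exact keys against your Foreman version's logs.

```ruby
# Illustrative only -- key names ("scsi_controllers", "volumes_attributes",
# "controller_key", etc.) should be confirmed against your Foreman version.
compute_attributes = {
  # Array of hashes: one entry per SCSI controller.
  'scsi_controllers' => [
    { 'type' => 'VirtualLsiLogicController', 'key' => 1000 },
    { 'type' => 'ParaVirtualSCSIController', 'key' => 1001 }
  ],
  # Hash of hashes: volumes keyed by stringified index, each pointing
  # at its controller via the controller key.
  'volumes_attributes' => {
    '0' => { 'name' => 'Hard disk 1', 'size_gb' => 150,
             'datastore' => 'esxa_lun0', 'controller_key' => 1000 },
    '1' => { 'name' => 'Hard disk 2', 'size_gb' => 6,
             'datastore' => 'esxa_lun1', 'controller_key' => 1001 }
  }
}
```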

#9 Updated by Oleh Fedorenko 4 months ago

  • Assignee set to Oleh Fedorenko
  • Status changed from New to Assigned

#10 Updated by Oleh Fedorenko 4 months ago

#11 Updated by The Foreman Bot 4 months ago

  • Status changed from Assigned to Ready For Testing
  • Pull request https://github.com/theforeman/hammer-cli-foreman/pull/424 added

#12 Updated by Oleh Fedorenko 3 months ago

  • Status changed from Ready For Testing to Closed
