
Bug #5859

VM creation fails after IP conflict

Added by Matt Chesler almost 6 years ago. Updated over 1 year ago.

Status: Closed
Priority: Normal
Assignee: Ivan Necas
Category: Host creation
Target version:
Difficulty:
Triaged:
Bugzilla link: 1332186
Fixed in Releases:
Found in Releases:

Description

I recently upgraded to Foreman 1.5.0 and just discovered that I can't provision a new VM when the initial attempt results in any DNS issue. I get a 500 response stating that the MAC address is invalid.

I get the following stack trace in the GUI:

Net::Validations::Error
Invalid MAC
lib/net/validations.rb:19:in `validate_mac'
lib/net/dhcp/record.rb:7:in `initialize'
app/models/concerns/orchestration/dhcp.rb:16:in `new'
app/models/concerns/orchestration/dhcp.rb:16:in `dhcp_record'
app/models/concerns/orchestration/dhcp.rb:120:in `queue_remove_dhcp_conflicts'
app/models/concerns/orchestration/dhcp.rb:73:in `queue_dhcp'
app/models/concerns/orchestration.rb:47:in `valid?'
app/models/concerns/foreman/sti.rb:29:in `save_with_type'
app/controllers/hosts_controller.rb:94:in `create'
app/models/concerns/foreman/thread_session.rb:33:in `clear_thread'
lib/middleware/catch_json_parse_errors.rb:9:in `call'

From Foreman's production log:

Started POST "/hosts" for 10.103.100.234 at 2014-05-21 13:57:42 -0400
Processing by HostsController#create as */*
Parameters: {"utf8"=>"✓", "authenticity_token"=>"LFrxz49nB00DpkYELn2TOJiIv6Z9TSL8vye7YQXKrns=", "host"=>{"name"=>"rabbitmq-bulk-1", "hostgroup_id"=>"3", "compute_resource_id"=>"1", "compute_profile_id"=>"8", "environment_id"=>"6", "puppet_ca_proxy_id"=>"1", "puppet_proxy_id"=>"1", "puppetclass_ids"=>["", "29"], "managed"=>"true", "progress_report_id"=>"[FILTERED]", "type"=>"Host::Managed", "compute_attributes"=>{"cpus"=>"4", "corespersocket"=>"1", "memory_mb"=>"4096", "cluster"=>"Dell R620", "path"=>"/Datacenters/NJ3/vm", "guest_id"=>"centos64Guest", "interfaces_attributes"=>{"new_interfaces"=>{"type"=>"VirtualE1000", "network"=>"network-1024", "_delete"=>""}, "0"=>{"type"=>"VirtualE1000", "network"=>"network-1024", "_delete"=>""}}, "volumes_attributes"=>{"new_volumes"=>{"datastore"=>"depot03", "name"=>"Hard disk", "size_gb"=>"10", "thin"=>"true", "_delete"=>""}, "0"=>{"datastore"=>"depot03", "name"=>"Hard disk", "size_gb"=>"10", "thin"=>"true", "_delete"=>""}}, "scsi_controller_type"=>"VirtualLsiLogicController", "start"=>"1"}, "domain_id"=>"2", "realm_id"=>"", "mac"=>"", "subnet_id"=>"3", "ip"=>"192.168.0.75", "interfaces_attributes"=>{"new_interfaces"=>{"_destroy"=>"false", "type"=>"Nic::Managed", "mac"=>"", "name"=>"", "domain_id"=>"", "ip"=>"", "provider"=>"IPMI"}}, "architecture_id"=>"1", "operatingsystem_id"=>"4", "provision_method"=>"build", "build"=>"1", "medium_id"=>"1", "ptable_id"=>"18", "disk"=>"", "root_pass"=>"[FILTERED]", "is_owned_by"=>"2-Users", "enabled"=>"1", "comment"=>"", "overwrite"=>"true"}, "capabilities"=>"build image", "provider"=>"Vmware"}
Operation FAILED: Invalid MAC
Rendered common/500.html.erb (4.6ms)
Completed 500 Internal Server Error in 233ms (Views: 6.9ms | ActiveRecord: 4.1ms)

There is absolutely nothing in the foreman proxy log.

To clarify, this occurs when attempting to provision a new machine on VMware. The auto-suggested IP was previously in use, so a PTR record already exists. I acknowledge the warning, click "Overwrite", and then get the 500 error above.
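
For context on the trace above: the new-host form for a compute-resource VM submits "mac"=>"" (the NIC does not exist yet), and the DHCP conflict-override path builds a Net::DHCP::Record from that blank value, which Net::Validations.validate_mac rejects. A minimal, standalone sketch of that check follows; the regex and surrounding code are illustrative, not the shipped lib/net/validations.rb:

module Net
  module Validations
    class Error < StandardError; end

    # Illustrative pattern only; the real regex in lib/net/validations.rb may differ.
    MAC_REGEXP = /\A([0-9a-f]{2}[:-]){5}[0-9a-f]{2}\z/i

    # Raises Net::Validations::Error unless the value looks like a MAC address.
    # A blank string (which is what the form sends for a VM whose NIC does not
    # exist yet) always fails this check.
    def self.validate_mac(mac)
      raise Error, "Invalid MAC" unless mac =~ MAC_REGEXP
      mac
    end
  end
end

Net::Validations.validate_mac("")   # raises Net::Validations::Error: Invalid MAC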


Related issues

Related to Foreman - Bug #6380: VM not created when resubmitting new host form after orchestration failure (Assigned, 2014-06-25)
Related to Foreman - Bug #13422: creating a host on a libvirt compute resource throws an invalid MAC error (Duplicate, 2016-01-27)
Has duplicate Foreman - Bug #10938: Crash during the provisioning when trying to overwrite the interface IP (Duplicate, 2015-06-26)
Has duplicate Foreman - Bug #13573: Overwite with compute resource fails when provisioning via compute resource (Duplicate, 2016-02-05)

Associated revisions

Revision c2c01642 (diff)
Added by Ivan Necas almost 4 years ago

Fixes #5859 - don't rely on a mac address being present when overriding the conflicts

We tried to initiate the `dhcp_record` for checking if conflicts were
there. The problem was the mac address was not available at that stage
when using the compute resources. It also turns out there is no need
to initiate the dhcp_record at this stage, as we do that again while
actually removing the conflicts (and if no conflicts are there,
nothing happens anyway).
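
In other words, the fix drops the eager dhcp_record instantiation from the conflict-override step. Below is a standalone sketch of the before/after logic; the method and attribute names are hypothetical stand-ins that only approximate app/models/concerns/orchestration/dhcp.rb, not the literal diff:

require "ostruct"

InvalidMac = Class.new(StandardError)

# Stand-in for Net::DHCP::Record.new, which validates the MAC on initialize.
def build_dhcp_record(host)
  raise InvalidMac, "Invalid MAC" if host.mac.to_s.empty?
  OpenStruct.new(mac: host.mac, ip: host.ip)
end

# Before (sketch): building the record just to look for conflicts blows up
# when the compute resource has not assigned a MAC yet.
def queue_remove_dhcp_conflicts_before(host, queue)
  return unless host.overwrite
  record = build_dhcp_record(host)                 # raises for mac == ""
  queue << "remove DHCP conflicts for #{record.ip}"
end

# After (sketch): just enqueue the removal task; the record is built again
# later, when conflicts are actually removed and the MAC is known. If there
# are no conflicts by then, the removal is a no-op anyway.
def queue_remove_dhcp_conflicts_after(host, queue)
  return unless host.overwrite
  queue << "remove DHCP conflicts for #{host.ip}"
end

host  = OpenStruct.new(name: "rabbitmq-bulk-1", mac: "", ip: "192.168.0.75", overwrite: true)
queue = []
queue_remove_dhcp_conflicts_after(host, queue)     # succeeds despite the blank MAC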

Revision f71bd5c6 (diff)
Added by Ivan Necas almost 4 years ago

Fixes #5859 - don't rely on a mac address being present when overriding the conflicts

We tried to initiate the `dhcp_record` for checking if conflicts were
there. The problem was the mac address was not available at that stage
when using the compute resources. It also turns out there is no need
to initiate the dhcp_record at this stage, as we do that again while
actually removing the conflicts (and if no conflicts are there,
nothing happens anyway).

(cherry picked from commit c2c016425c4d27f560d5f9c18aec480666d51db3)

History

#1 Updated by Dominic Cleal over 5 years ago

  • Category set to Host creation

I think I've seen this before too; it's as if it skips the compute orchestration step entirely.

#2 Updated by Jorick Astrego over 5 years ago

I also have the same problem on libvirt with foreman-1.6.0-0.develop.201406111311git5694e68.el6.noarch:

Started POST "/hosts" for xx.xxx.xxx.x at 2014-06-12 09:51:12 +0200
Processing by HostsController#create as */*
Parameters: {"utf8"=>"✓", "authenticity_token"=>"xTYja9JfTaFkI1DUauZvfw6yZ7f7NmuCqKYoGwl4OG0=", "host"=>{"name"=>"my", "hostgroup_id"=>"6", "compute_resource_id"=>"2", "compute_profile_id"=>"5", "environment_id"=>"1", "puppet_ca_proxy_id"=>"1", "puppet_proxy_id"=>"1", "config_group_ids"=>[""], "puppetclass_ids"=>[""], "managed"=>"true", "progress_report_id"=>"[FILTERED]", "type"=>"Host::Managed", "compute_attributes"=>{"cpus"=>"1", "memory"=>"1073741824", "nics_attributes"=>{"new_nics"=>{"type"=>"bridge", "_delete"=>"", "bridge"=>"br0", "model"=>"virtio"}, "0"=>{"type"=>"bridge", "_delete"=>"", "bridge"=>"br0", "model"=>"virtio"}}, "volumes_attributes"=>{"new_volumes"=>{"pool_name"=>"default", "capacity"=>"10G", "allocation"=>"0G", "format_type"=>"raw", "_delete"=>""}, "0"=>{"pool_name"=>"VM", "capacity"=>"50G", "allocation"=>"0G", "format_type"=>"qcow2", "_delete"=>""}}, "start"=>"1"}, "domain_id"=>"3", "realm_id"=>"", "mac"=>"", "subnet_id"=>"3", "ip"=>"xx.xxx.xxx.xxx", "interfaces_attributes"=>{"new_interfaces"=>{"_destroy"=>"false", "type"=>"Nic::Managed", "mac"=>"", "name"=>"", "domain_id"=>"", "ip"=>"", "provider"=>"IPMI"}}, "architecture_id"=>"1", "operatingsystem_id"=>"1", "provision_method"=>"build", "build"=>"1", "medium_id"=>"1", "ptable_id"=>"7", "disk"=>"", "root_pass"=>"[FILTERED]", "is_owned_by"=>"1-Users", "enabled"=>"1", "comment"=>"", "overwrite"=>"true"}, "capabilities"=>"build image", "provider"=>"Libvirt"}
Operation FAILED: Invalid MAC
Rendered common/500.html.erb (5.4ms)
Completed 500 Internal Server Error in 198ms (Views: 12.9ms | ActiveRecord: 2.6ms)

#3 Updated by Dominic Cleal over 5 years ago

app/models/concerns/orchestration/compute.rb checks for any errors in the queue_compute method, so it won't create the VM if it has determined that errors exist on the form. That's fine, except in the conflict case: errors are present, but we continue anyway (override? is true).

Edit: looks like there's a similar usage in the TFTP orchestration, maybe others.
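
The interaction described in this note can be pictured with a small standalone sketch (the names below are stand-ins, not the real orchestration API): any recorded error stops queue_compute, even when the only errors are conflicts the user chose to overwrite, so the VM, and therefore its MAC, never comes into existence for the later DHCP step.

# Stand-ins for the orchestration state; not the real Foreman objects.
FakeHost = Struct.new(:errors, :overwrite, keyword_init: true)

# Sketch of the guard described above for orchestration/compute.rb:
# VM creation is queued only when no errors have been recorded at all.
def queue_compute?(host)
  host.errors.empty?
end

# A DHCP/PTR conflict is recorded as an error even though the user ticked
# "Overwrite", so the compute step is skipped while processing continues.
conflicted = FakeHost.new(errors: ["DHCP conflict on 192.168.0.75"], overwrite: true)
puts queue_compute?(conflicted)   # prints "false": the VM is never created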

#4 Updated by Dominic Cleal over 5 years ago

  • Related to Bug #6380: VM not created when resubmitting new host form after orchestration failure added

#5 Updated by Dominic Cleal over 4 years ago

  • Has duplicate Bug #10938: Crash during the provisioning when trying to overwrite the interface IP added

#6 Updated by Dominic Cleal about 4 years ago

  • Related to Bug #13422: creating a host on a libvirt compute resource throws an invalid MAC error added

#7 Updated by Dominic Cleal almost 4 years ago

  • Has duplicate Bug #13573: Overwite with compute resource fails when provisioning via compute resource added

#8 Updated by The Foreman Bot almost 4 years ago

  • Status changed from New to Ready For Testing
  • Assignee set to Ivan Necas
  • Pull request https://github.com/theforeman/foreman/pull/3263 added

#9 Updated by Ivan Necas almost 4 years ago

  • Status changed from Ready For Testing to Closed
  • % Done changed from 0 to 100

#10 Updated by Dominic Cleal almost 4 years ago

  • Legacy Backlogs Release (now unused) set to 141

#11 Updated by Tomáš Strachota almost 4 years ago

  • Bugzilla link set to 1332186
