Bug #4551

Google Compute Engine Compute Resource Fails to create new hosts

Added by Michael O'Brien about 9 years ago. Updated over 4 years ago.

Status:
Closed
Priority:
Normal
Category:
Compute resources - GCE

Description

Originally encountered in 1.4, but it continues in 1.4.1

Problem
Creating new hosts with either Debian or CentOS (supported images on GCE) fails almost immediately, with errors in production.log such as:
Rendered hosts/_progress.html.erb
Rendered puppetclasses/_selectedClasses.html.erb
Rendered puppetclasses/_classes.html.erb
Rendered puppetclasses/_class_selection.html.erb

Resolutions Tried
Changing OS: another forum user tried Debian and CentOS with a similar experience
Different GCE projects: another forum user used different project details, with a similar experience
SELinux on the Foreman host: disabled and set to permissive, but the problem continues
Different GCE zones: tried European and US zones
Different instance types: tried micro and n1-standard
GCE network to Foreman: existing hosts can register with the puppet master on the Foreman server over port 8140

Unknown
Whether additional logs are available that would help identify the problem and a solution

Additional Information
Adding GCE as a compute resource allows hosts already in the GCE project to be listed under foreman/compute_resources/gceresourcename, and images to be listed as well.
These hosts can be shut down and deleted, indicating that Foreman can read and edit the GCE project, so authorisation and communication must be working.
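
For anyone reproducing this, below is a minimal sketch of the same read/list check performed outside Foreman, using the Fog library that Foreman's GCE support is built on. The project ID, service account email and key path are placeholders, and option names can differ between Fog versions, so treat it as a starting point rather than a definitive check.

require 'fog'

# Placeholder credentials -- substitute the values configured on the GCE compute resource.
gce = Fog::Compute.new(
  :provider            => 'Google',
  :google_project      => 'my-gce-project',
  :google_client_email => 'service-account@developer.gserviceaccount.com',
  :google_key_location => '/path/to/private_key.p12'
)

# If these calls list the existing instances and images, authorisation and
# communication with the GCE project are working, as described above.
gce.servers.each { |server| puts server.name }
gce.images.each  { |image|  puts image.name }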


Related issues

Related to Foreman - Bug #8029: GCE - new host creates fails on Zone missing error (Closed)

Associated revisions

Revision 1fa8dcfb (diff)
Added by Daniel Lobato Garcia almost 8 years ago

Fixes #4551 - GCE provisioning support

Enable provisioning of VMs through Google Compute Engine. Volume-wise,
this is currently limited to creating a VM with an attached disk that
contains the image specified. Future enhancements should include
choosing any available disks to auto-attach the VM and not force the
user to create a new one through Foreman.
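
As a rough illustration of the flow this revision describes (create a boot disk from the chosen image, then create the VM with that disk attached), here is a hedged sketch against Fog's Google provider. The credentials, resource names, zone and machine type are placeholders, and attribute names may differ between Fog versions.

require 'fog'

gce = Fog::Compute.new(
  :provider            => 'Google',
  :google_project      => 'my-gce-project',
  :google_client_email => 'service-account@developer.gserviceaccount.com',
  :google_key_location => '/path/to/private_key.p12'
)

# 1. Create a persistent disk sourced from the image selected for the host.
disk = gce.disks.create(
  :name         => 'test-boot-disk',
  :size_gb      => 10,
  :zone_name    => 'us-central1-a',
  :source_image => 'debian-7-wheezy-v20131120'
)
disk.wait_for { ready? }

# 2. Create the VM with the disk attached, rather than with no disks at all,
#    which is roughly what the GCE API change made mandatory.
server = gce.servers.create(
  :name         => 'test',
  :machine_type => 'n1-standard-1',
  :zone_name    => 'us-central1-a',
  :disks        => [disk]
)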

History

#1 Updated by Michael O'Brien about 9 years ago

Submitted foreman-debug output via rsync

#2 Updated by laurent salut about 9 years ago

I am in the same situation, and we opened this thread without success: https://groups.google.com/forum/#!topic/foreman-users/scWe0wVx7fg

There is no error in foreman-proxy.log.
Here is my full production.log output for this problem:

Started POST "/hosts" for <my-ip> at 2014-02-28 10:36:51 +0000
Processing by HostsController#create as */*
  Parameters: {"utf8"=>"✓", "authenticity_token"=>"CPefdYW2RfzUewMuqh9SR7dQHqh+YnHu3ILugs5tbZQ=", "host"=>{"name"=>"test", "hostgroup_id"=>"", "compute_resource_id"=>"2", "compute_profile_id"=>"3", "environment_id"=>"1", "puppet_ca_proxy_id"=>"1", "puppet_proxy_id"=>"1", "puppetclass_ids"=>[""], "managed"=>"true", "progress_report_id"=>"[FILTERED]", "type"=>"Host::Managed", "compute_attributes"=>{"machine_type"=>"n1-standard-1-d", "network"=>"default", "external_ip"=>"0", "image_id"=>"debian-7-wheezy-v20131120"}, "mac"=>"", "domain_id"=>"1", "architecture_id"=>"1", "operatingsystem_id"=>"2", "provision_method"=>"image", "build"=>"1", "medium_id"=>"2", "ptable_id"=>"7", "disk"=>"", "root_pass"=>"[FILTERED]", "is_owned_by"=>"", "enabled"=>"1", "comment"=>"", "overwrite"=>"false"}, "capabilities"=>"image"}
Adding Compute instance for test.c.natural-expanse-490.internal
Rolling back due to a problem: [Set up compute instance test.c.natural-expanse-490.internal     2     failed     [#<Host::Managed id: nil, name: "test.c.natural-expanse-490.internal", ip: nil, last_compile: nil, last_freshcheck: nil, last_report: nil, updated_at: nil, source_file_id: nil, created_at: nil, mac: nil, root_pass: nil, serial: nil, puppet_status: 0, domain_id: 1, architecture_id: 1, operatingsystem_id: 2, environment_id: 1, subnet_id: nil, ptable_id: 7, medium_id: 2, build: true, comment: "", disk: "", installed_at: nil, model_id: nil, hostgroup_id: nil, owner_id: nil, owner_type: nil, enabled: true, puppet_ca_proxy_id: 1, managed: true, use_image: nil, image_file: nil, uuid: nil, compute_resource_id: 2, puppet_proxy_id: 1, certname: "1cbc63d8-c6e3-44c7-8743-7fab1834d57c", image_id: 5, organization_id: nil, location_id: nil, type: "Host::Managed", compute_profile_id: 3>, :setCompute]]
Failed to save: 
  Rendered hosts/_progress.html.erb (0.2ms)
  Rendered puppetclasses/_selectedClasses.html.erb (0.0ms)
  Rendered puppetclasses/_classes.html.erb (3.9ms)
  Rendered puppetclasses/_class_selection.html.erb (6.5ms)
Started GET "/tasks/43b297b1-d89c-40d0-990e-e54f587872de" for <my-ip> at 2014-02-28 10:36:52 +0000
Processing by TasksController#show as */*
  Parameters: {"id"=>"43b297b1-d89c-40d0-990e-e54f587872de"}
  Rendered tasks/_list.html.erb (0.4ms)
Completed 200 OK in 2.1ms (Views: 0.8ms | ActiveRecord: 0.2ms)
Started GET "/tasks/43b297b1-d89c-40d0-990e-e54f587872de" for <my-ip> at 2014-02-28 10:36:54 +0000
Processing by TasksController#show as */*
  Parameters: {"id"=>"43b297b1-d89c-40d0-990e-e54f587872de"}
  Rendered tasks/_list.html.erb (0.4ms)
Completed 200 OK in 2.4ms (Views: 1.0ms | ActiveRecord: 0.2ms)
  Rendered compute_resources_vms/form/_gce.html.erb (2209.0ms)
  Rendered hosts/_compute.html.erb (2608.6ms)
  Rendered common/os_selection/_architecture.html.erb (3.1ms)
  Rendered common/os_selection/_operatingsystem.html.erb (4.7ms)
  Rendered hosts/_operating_system.html.erb (11.9ms)
  Rendered hosts/_unattended.html.erb (2625.9ms)
  Rendered puppetclasses/_class_parameters.html.erb (0.0ms)
  Rendered puppetclasses/_classes_parameters.html.erb (2.5ms)
  Rendered common_parameters/_inherited_parameters.html.erb (0.1ms)
  Rendered common_parameters/_puppetclass_parameter.html.erb (1.8ms)
  Rendered common_parameters/_puppetclasses_parameters.html.erb (2.7ms)
  Rendered common_parameters/_parameter.html.erb (0.9ms)
  Rendered common_parameters/_parameters.html.erb (2.3ms)
  Rendered hosts/_form.html.erb (2801.2ms)
  Rendered hosts/new.html.erb within layouts/application (2801.6ms)
  Rendered home/_user_dropdown.html.erb (1.1ms)
Read fragment views/tabs_and_title_records-1 0.1ms
  Rendered home/_topbar.html.erb (1.9ms)
  Rendered layouts/base.html.erb (3.5ms)
Completed 200 OK in 3913.1ms (Views: 2799.1ms | ActiveRecord: 12.9ms)
Started GET "/tasks/43b297b1-d89c-40d0-990e-e54f587872de" for <my-ip> at 2014-02-28 10:36:56 +0000
Processing by TasksController#show as */*
  Parameters: {"id"=>"43b297b1-d89c-40d0-990e-e54f587872de"}
  Rendered tasks/_list.html.erb (0.4ms)
Completed 200 OK in 2.7ms (Views: 1.0ms | ActiveRecord: 0.2ms)

#3 Updated by Michael O'Brien almost 9 years ago

Upgrading to 1.4.2 does not resolve the issue. I'm unsure if it's down to misconfiguration on my part or a problem with Foreman or the GCE plugin.

#4 Updated by Dominic Cleal over 8 years ago

  • Category changed from Compute resources to Compute resources - GCE

#5 Updated by Daniel Lobato Garcia about 8 years ago

  • Assignee set to Daniel Lobato Garcia

It's currently broken due to changes in the GCE API that now require disks in order to work. Sorry for the breakage; I'm working on fixing this now.

#6 Updated by Daniel Lobato Garcia almost 8 years ago

  • Related to Bug #8029: GCE - new host creates fails on Zone missing error added

#7 Updated by Dominic Cleal almost 8 years ago

  • Tracker changed from Support to Bug
  • Status changed from New to Ready For Testing
  • Pull request https://github.com/theforeman/foreman/pull/2214 added

#8 Updated by Dominic Cleal almost 8 years ago

  • Legacy Backlogs Release (now unused) set to 35

Marking for 1.9 mostly because I haven't had the opportunity to test it properly on top of the older Fog version in 1.8.

#9 Updated by Daniel Lobato Garcia almost 8 years ago

  • Status changed from Ready For Testing to Closed
  • % Done changed from 0 to 100
