Bug #17887

closed

Hosts with report listed for search query "not has last_report"

Added by El Joppa almost 8 years ago. Updated over 7 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
Reporting
Target version:
-
Difficulty:
Triaged:
Fixed in Releases:
Found in Releases:

Description

1.14RC2 fresh install.
Ubuntu Xenial.

Lists some hosts as having no reports even though they have reports:

http://imgur.com/a/H1gyb

Actions #1

Updated by Marek Hulán almost 8 years ago

  • Subject changed from Hosts listed as with no last report to Hosts with report listed for search query "not has last_report"
  • Category set to Reporting

The following query is run for that search query:

SELECT  "hosts".* FROM "hosts" WHERE "hosts"."type" IN ('Host::Managed') AND ((NOT COALESCE("hosts"."last_report" IS NOT NULL, false)))  ORDER BY "hosts"."name" ASC LIMIT 20 OFFSET 0

It seems to work just fine in my setup. Could you please check what last_report is for this host? You can do it in the Rails console with the following commands:

foreman-rake console
p Host.find_by_name('ceph-008...').last_report
exit

Please upload the whole output here. Since you selected 1.14.0, does that mean you're hitting this issue with 1.14 RC2?
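As a side note, the WHERE clause in the query above reduces to a plain NULL check: `x IS NOT NULL` never evaluates to NULL itself, so the `COALESCE(..., false)` wrapper is only a guard. A minimal sketch of the predicate in plain Ruby (not the Rails console; the method name is made up for illustration):

```ruby
# Sketch of the predicate: NOT COALESCE(last_report IS NOT NULL, false).
# "last_report IS NOT NULL" is never NULL itself, so the COALESCE guard
# never fires and the whole clause reduces to "last_report IS NULL".
def matches_not_has_last_report?(last_report)
  has_report = !last_report.nil?  # last_report IS NOT NULL
  !has_report                     # NOT COALESCE(..., false)
end

matches_not_has_last_report?(nil)       # => true  (host is listed)
matches_not_has_last_report?(Time.now)  # => false (host is filtered out)
```

A host row therefore shows up in the results exactly when its own last_report column is NULL, regardless of any other row that may share its name.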

Actions #2

Updated by El Joppa almost 8 years ago

Yes, 1.14RC2 freshly installed on Ubuntu 16.04 with foreman-installer.

foreman console:

irb(main):001:0> p Host.find_by_name('ceph-008....').last_report
Mon, 02 Jan 2017 09:26:44 UTC +00:00
=> Mon, 02 Jan 2017 09:26:44 UTC +00:00
irb(main):002:0> 

sql:

-[ RECORD 1 ]--------+--------------------------------
id                   | 100
name                 | ceph-008.X
last_compile         | 
last_report          | 
updated_at           | 2016-12-30 09:57:28.692717
created_at           | 2016-12-30 09:57:18.963711
root_pass            | 
architecture_id      | 1
operatingsystem_id   | 1
environment_id       | 1
ptable_id            | 
medium_id            | 
build                | f
comment              | 
disk                 | 
installed_at         | 
model_id             | 4
hostgroup_id         | 
owner_id             | 
owner_type           | 
enabled              | t
puppet_ca_proxy_id   | 1
managed              | f
use_image            | 
image_file           | 
uuid                 | 
compute_resource_id  | 
puppet_proxy_id      | 1
certname             | ceph-008.X
image_id             | 
organization_id      | 
location_id          | 
type                 | Host::Managed
otp                  | 
realm_id             | 
compute_profile_id   | 
provision_method     | 
grub_pass            | 
global_status        | 0
lookup_value_matcher | fqdn=ceph-008.X
pxe_loader           | 

Actions #3

Updated by Dominic Cleal almost 8 years ago

Are these config reports from Puppet in a default Foreman installation, or another config management plugin/tool? Perhaps last_report isn't updated if a different importer's in use.

Actions #4

Updated by El Joppa almost 8 years ago

All reports for all servers in Foreman come from a separate Puppet master with the smart proxy and Puppet reporter installed. Only some nodes have this missing last_report.

Actions #5

Updated by Dominic Cleal almost 8 years ago

Could you tail -f /var/log/foreman/production.log while running a Puppet agent on one of the affected hosts, and attach the resulting log? Perhaps an error is occurring during the report upload.

Actions #6

Updated by El Joppa almost 8 years ago

2017-01-03T11:22:56 ed625561 [app] [I] processing report for ceph-011.
2017-01-03T11:22:56 ed625561 [app] [D] Report: {"host"=>"ceph-011.", "reported_at"=>"2017-01-03 11:22:36 UTC", "status"=>{"applied"=>0, "restarted"=>0, "failed"=>0, "failed_restarts"=>0, "skipped"=>0, "pending"=>0}, "metrics"=>{"resources"=>{"changed"=>0, "failed"=>0, "failed_to_restart"=>0, "out_of_sync"=>0, "restarted"=>0, "scheduled"=>0, "skipped"=>0, "total"=>208}, "time"=>{"anchor"=>0.001184178, "apt_key"=>0.0018399150000000001, "augeas"=>3.808377495, "config_retrieval"=>7.508179066, "cron"=>0.000541659, "exec"=>0.010552525000000002, "file"=>0.28592155200000013, "file_line"=>0.000638734, "filebucket"=>0.000158476, "group"=>0.003276188, "ini_setting"=>0.007384523999999999, "ini_subsetting"=>0.000703383, "mailalias"=>0.0023759690000000003, "package"=>0.09707290800000001, "schedule"=>0.0009502489999999999, "service"=>0.244850981, "ssh_authorized_key"=>0.0032570940000000003, "sysctl"=>0.0015721229999999997, "total"=>11.989095714000001, "user"=>0.010258695000000002}, "changes"=>{"total"=>0}, "events"=>{"failure"=>0, "success"=>0, "total"=>0}}, "logs"=>[{"log"=>{"sources"=>{"source"=>"Puppet"}, "messages"=>{"message"=>"Using configured environment 'production'"}, "level"=>"info"}}, {"log"=>{"sources"=>{"source"=>"Puppet"}, "messages"=>{"message"=>"Retrieving pluginfacts"}, "level"=>"info"}}, {"log"=>{"sources"=>{"source"=>"Puppet"}, "messages"=>{"message"=>"Retrieving plugin"}, "level"=>"info"}}, {"log"=>{"sources"=>{"source"=>"Puppet"}, "messages"=>{"message"=>"Loading facts"}, "level"=>"info"}}, {"log"=>{"sources"=>{"source"=>"Puppet"}, "messages"=>{"message"=>"Caching catalog for ceph-011."}, "level"=>"info"}}, {"log"=>{"sources"=>{"source"=>"Puppet"}, "messages"=>{"message"=>"Applying configuration version '1483442566'"}, "level"=>"info"}}, {"log"=>{"sources"=>{"source"=>"Puppet"}, "messages"=>{"message"=>"Applied catalog in 5.52 seconds"}, "level"=>"notice"}}]}
2017-01-03T11:22:56 ed625561 [app] [D] Cache read: root_pass
2017-01-03T11:22:56 ed625561 [app] [D] Cache write: root_pass
2017-01-03T11:22:56 ed625561 [app] [D] Cache read: root_pass
2017-01-03T11:22:56 ed625561 [app] [D] Cache write: root_pass
2017-01-03T11:22:56 ed625561 [app] [I] Imported report for ceph-011. in 0.25 seconds

Is this enough?

Actions #7

Updated by Dominic Cleal almost 8 years ago

Yeah, it appears to be successful: the update should have happened before the last log message (though without SQL logs I can't tell whether it did).

Regarding comment #2, is the record shown definitely the same host you looked up with the Rails console command? The timestamps look off: the updated_at is 2016-12-30, but the last_report is 2017-01-02. It appears to me that you have multiple, perhaps differently named, hosts.

Check with Host.find_by_name('ceph-008.X').id or similar to verify that the record IDs are the same.
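The situation suspected here (two records sharing a name, only one carrying a report) can be sketched in plain Ruby. The Struct and the sample IDs below are made up for illustration, not taken from the actual database:

```ruby
# Hypothetical sketch: a stale duplicate keeps a nil last_report while
# the live record, under a different id, carries the real timestamp.
HostRecord = Struct.new(:id, :name, :last_report)

hosts = [
  HostRecord.new(100, 'ceph-008.X', nil),                          # stale duplicate
  HostRecord.new(123, 'ceph-008.X', Time.utc(2017, 1, 2, 9, 26)),  # live record
]

# Group by name and keep only names backed by more than one record id.
duplicate_ids = hosts.group_by(&:name)
                     .select { |_, recs| recs.map(&:id).uniq.size > 1 }
                     .transform_values { |recs| recs.map(&:id) }
# duplicate_ids => {"ceph-008.X" => [100, 123]}
```

In the real console, comparing Host.find_by_name(...).id against the id column in the SQL output, as suggested above, achieves the same thing without any of these made-up structures.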

Actions #8

Updated by El Joppa almost 8 years ago

2016-12-30 is the day I installed Foreman and set up the Puppet report/fact push.

Actually, they have different IDs, so these are just duplicates. That's strange, but I guess something happened while setting up the Puppet integration.

Actions #9

Updated by Dominic Cleal almost 8 years ago

  • Status changed from New to Feedback

I'd suggest just deleting the older duplicate host showing up in the results, which should fix the issue.

I can't see why a duplicate would have been created, unless the FQDN and/or domain differs between the two (unclear due to the obfuscation).

Actions #10

Updated by El Joppa almost 8 years ago

Deleted the duplicated failed hosts. They haven't turned up again yet.

Guess it's just some glitch in the matrix while configuring Foreman/Puppet.

Actions #11

Updated by Anonymous over 7 years ago

  • Status changed from Feedback to Resolved
