Bug #20200


Refreshing discovered host facts will delete LLDP information

Added by Dominic Schlegel over 6 years ago. Updated over 4 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Difficulty:
Triaged:
No
Fixed in Releases:
Found in Releases:

Description

If I boot into the FDI and the host is reported to Foreman, I see LLDP facts in the network section of the discovered host.
Now if I refresh the facts, all lldp_* facts are gone. They should also be refreshed, as the native VLAN could have changed in the meantime.
This bug can be reproduced with Foreman 1.15.1 and the foreman_discovery plug-in 9.1.1.


Files

journald.log (183 KB) Dominic Schlegel, 07/10/2017 06:25 AM
Actions #1

Updated by Lukas Zapletal over 6 years ago

  • Status changed from New to Need more information

Hello, which image version? Can you test the latest one? This is supposed to work; the proxy reads /etc/default/discovery, where FACTERLIB is properly set.
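As a rough illustration (this is not the actual proxy or Facter code, and the directory paths are just examples), the FACTERLIB mechanism referenced here boils down to splitting a colon-separated list of directories and loading every *.rb custom fact found in them:

```ruby
# Sketch of how Facter resolves custom-fact directories from FACTERLIB.
# FACTERLIB is a colon-separated list of directories; every *.rb file
# in each existing directory is loaded as a custom fact.
def custom_fact_files(facterlib)
  facterlib.to_s.split(':').flat_map do |dir|
    Dir.exist?(dir) ? Dir.glob(File.join(dir, '*.rb')).sort : []
  end
end
```

If the proxy process is started without this environment variable (for example, because /etc/default/discovery was not read), the lldp_* facts would silently be absent, which is why checking the unit file and the defaults file matters later in this thread.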

Actions #2

Updated by Dominic Schlegel over 6 years ago

Current:

discovery_release     20170104.1
discovery_version     3.3.1
facterversion         2.4.1

The latest available image for download seems to be 3.4.0, from here: https://downloads.theforeman.org/discovery/releases/latest/
So I tried with that one:
discovery_release     20170331.1
discovery_version     3.4.0 
facterversion         2.4.1 

And again, with the new version above, all lldp_* facts are gone after a refresh. I see that there is an image version 3.4.1, but it is not available from the above download link. Can you clarify which version exactly I should use and where to get it from?

Actions #3

Updated by Lukas Zapletal over 6 years ago

Ok, thanks. 3.4.0 is the latest; I haven't pushed 3.4.1 yet, but there were no changes in this regard.

I am unable to reproduce this. Can you attach the full journald log of the whole boot with the 3.4.0 version?

When you refresh facts, do the facts named "nmprimary_*" disappear as well?

Actions #4

Updated by Dominic Schlegel over 6 years ago

I am unable to reproduce this. Can you attach the full journald log of the whole boot with the 3.4.0 version?

Sure, please see the attached journald.log from the last boot into the FDI with 3.4.0. I randomized the MAC addresses a bit and removed some other uninteresting parts.

When you refresh facts, do the facts named "nmprimary_*" disappear as well?

No. Before and after the refresh I do see some nmprimary_* facts under the Miscellaneous section.

Actions #5

Updated by Lukas Zapletal over 6 years ago

Hmm, nothing really useful in the logs. I need to reproduce this, but I don't have any LLDP-enabled switch, so I need to figure out how to test this in a libvirt environment; I don't see any LLDP facts reported in a libvirt VM.

Actions #6

Updated by Dominic Schlegel over 6 years ago

Maybe you are able to create an Open vSwitch and bridge it to your virtual environment. This is described in a bit more detail here: http://blog.scottlowe.org/2016/12/09/using-ovn-with-kvm-libvirt/

Actions #7

Updated by Lukas Zapletal over 6 years ago

Uh, that's a lot of work; let me try to debug this remotely with you first :-)

Can you sign into the FDI after it's booted, and once you have verified that Refresh Facts does not work correctly, run:

FACTERLIB=/usr/share/fdi/facts/ facter

Do you see the missing facts? I'd assume no (the proxy essentially does the same thing). Then run this:

facter interfaces
lldptool get-tlv -n -i NETWORK_INTERFACE_FROM_ABOVE -V 1
lldptool get-tlv -n -i NETWORK_INTERFACE_FROM_ABOVE -V 5
lldptool get-tlv -n -i NETWORK_INTERFACE_FROM_ABOVE -V 8

Here is the custom fact which generates these facts; feel free to edit it in case you find out why it does not work:

https://github.com/theforeman/foreman-discovery-image/blob/master/root/usr/share/fdi/facts/openlldp.rb
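For reference, the core of such a custom fact is shelling out to lldptool per interface and TLV type and flattening the result into a fact value. A minimal sketch of just the parsing step follows (fact contents mirror the sample session later in this thread, but this is not the actual openlldp.rb code):

```ruby
# Sketch: turn captured "lldptool get-tlv" output into a flat fact value.
# The real openlldp.rb shells out per interface and per TLV type; here we
# only show parsing already-captured output.
def parse_tlv(output)
  lines = output.lines.map(&:strip).reject(&:empty?)
  return nil if lines.size < 2
  # Drop the "... TLV" header line, strip any "Label: " prefix, and join
  # the value lines; multi-line TLVs (e.g. Management Address) collapse
  # into one comma-separated value.
  lines.drop(1).map { |l| l.sub(/\A[A-Za-z0-9 ]+:\s*/, '') }.join(', ')
end
```

A multi-line TLV such as the Management Address TLV is exactly the kind of output Dominic asks about in comment #8, so a parser like this must handle more than one value line per TLV.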

Actions #8

Updated by Dominic Schlegel over 6 years ago

After confirming that the host is missing the lldp facts after a facts refresh, I logged in to the FDI and ran the following command:

FACTERLIB=/usr/share/fdi/facts/ facter

In the output of this command I do see some lldp facts:
lldp_neighbor_chassisid_eno49 => ec:XX:91:XX:XX:XX
lldp_neighbor_chassisid_eno50 => XX:fe:XX:d2:XX:XX
lldp_neighbor_mngaddr_ipv4_eno49 => XX.XX.XX.130
lldp_neighbor_mngaddr_ipv4_eno50 => XX.XX.XX.131
lldp_neighbor_portid_eno49 => Gi0/15
lldp_neighbor_portid_eno50 => Gi0/15
lldp_neighbor_pvid_eno49 => 700
lldp_neighbor_pvid_eno50 => 700
lldp_neighbor_sysname_eno49 => some-systems-fqdn.internal
lldp_neighbor_sysname_eno50 => some-other-systems-fqdn.internal

Going further, I ran the command below:
facter interfaces

which gives me:
eno49,eno50,lo

Now running lldptool against eno49, for example:
[root@fdi ~]# lldptool get-tlv -n -i eno49 -V 1
Chassis ID TLV
    MAC: ec:XX:91:XX:XX:XX
[root@fdi ~]# lldptool get-tlv -n -i eno49 -V 5
System Name TLV
    some-systems-fqdn.internal
[root@fdi ~]# lldptool get-tlv -n -i eno49 -V 8
Management Address TLV
    IPv4: XX.XX.XX.130
    System port number: 0

Could the problem be that we have multi-line output in the last command?

Actions #9

Updated by Lukas Zapletal over 6 years ago

So basically you are saying that after a facts refresh you do not see the LLDP facts, but facter shows them?

Now let's find out what smart-proxy shows via its API (it just calls Facter API), run this:

curl http://192.168.122.xx:8448/facts | json_reformat

If you don't have json_reformat, just reformat the garbage with an online tool and find what you want to see. Two options:

1) It's missing - a bug in the FDI, where the proxy does not see those facts (or they might fail in the proxy context; I am not sure if we capture STDERR there)

2) It's there - then foreman_discovery must be filtering these out somehow, but I can't think of anything that would do it.

LZ
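The two cases above can be told apart mechanically from the /facts response. A hedged sketch (the function name is made up for illustration; it operates on an already-fetched JSON body rather than doing the HTTP call):

```ruby
require 'json'

# Sketch: given the JSON body of the proxy's /facts endpoint, decide
# which of the two cases above applies.
def lldp_case(facts_json)
  facts = JSON.parse(facts_json)
  if facts.keys.any? { |k| k.start_with?('lldp_neighbor_') }
    :present_in_proxy   # case 2: foreman_discovery would be filtering them out
  else
    :missing_in_proxy   # case 1: the proxy does not see the facts
  end
end
```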

Actions #10

Updated by Dominic Schlegel over 6 years ago

Yes, that's basically what I am saying.
I am having trouble connecting via curl or wget to the smart proxy; I always get the output below:

[root@fdi ~]# curl -k https://10.X.X.X:8443/facts
Requested url was not found

The Smart Proxy Logfile shows this:
I, [2017-07-28T10:21:25.280982 ]  INFO -- : 10.X.X.X - - [28/Jul/2017 10:21:25] "GET /facts HTTP/1.1" 404 27 0.0004

So it seems it is not finding /facts for whatever reason. Am I doing something wrong?

Actions #11

Updated by Lukas Zapletal over 6 years ago

So it seems it is not finding /facts for whatever reason. Am I doing something wrong?

Hmm, that could be the bug. In the journald log I can clearly see the facts component start up:

Jul 10 11:07:59 fdi foreman-proxy[1630]: 'facts' settings: 'enabled': true

Can you check that?

curl -k https://10.X.X.X:8443/features

It should list the "facts" module as loaded; then you really should be able to hit it via /facts.

Actions #12

Updated by Dominic Schlegel over 6 years ago

Looks like that feature is not visible/enabled on the foreman-proxy:

[root@fdi ~]# curl -k https://XXXXXXXXXX:8443/features
["dhcp","discovery","salt","templates","tftp"] 

Are there any specific settings I can check and verify?

Actions #13

Updated by Lukas Zapletal over 6 years ago

Looks like that feature is not visible/enabled on the foreman-proxy:

Wait, I want you to run this against the discovered node; there is also a foreman-proxy there, but it runs a different set of modules. It is used to expose a basic API for reboot/kexec and also facts.

Actions #14

Updated by Dominic Schlegel over 6 years ago

Oh, I misunderstood that. Running it against the discovered node shows that this feature is indeed available:

# curl -k https://XXX.XXX.XXX:XXX:8443/features 
["bmc","discovery_image","facts","logs"]

Getting /facts from the discovered node now gives me just one fact that has "lldp" in its name:
"nmprimary_connection_lldp": "-1 (default)", 

but unfortunately no fact like lldp_neighbor_*.

Actions #15

Updated by Lukas Zapletal over 6 years ago

Dominic Schlegel wrote:

Oh, I misunderstood that. Running it against the discovered node shows that this feature is indeed available:
[...]
getting /facts from the discovered node now just gives me 1 fact that has "lldp" in its name:
[...]
but unfortunately no fact like lldp_neighbor_*

Okay, we are getting there. To sum up: the first one shows the lldp facts while the other does not; please confirm.

Then can you check the foreman-proxy.service file for the environment variable for Puppet:

  1. grep EnvironmentFile /usr/lib/systemd/system/foreman-proxy.service
    EnvironmentFile=/etc/default/discovery

And then

  1. cat /etc/default/discovery
    FACTERLIB=/usr/share/fdi/facts:/opt/extension/facts

What is your Puppet version? I assume you use the official build with Puppet from EPEL?

Actions #16

Updated by Dominic Schlegel over 6 years ago

Correct, the first one does show some lldp facts:

[root@fdi ~]# FACTERLIB=/usr/share/fdi/facts/ facter | grep lldp
lldp_neighbor_chassisid_eno49 => ec:30:XX:XX:XX:XX
lldp_neighbor_chassisid_eno50 => 04:fe:XX:XX:XX:XX
lldp_neighbor_mngaddr_ipv4_eno49 => 10.XX.XX.XXX
lldp_neighbor_mngaddr_ipv4_eno50 => 10.XX.XX.XXX
lldp_neighbor_portid_eno49 => Gi0/22
lldp_neighbor_portid_eno50 => Gi0/22
lldp_neighbor_pvid_eno49 => 700
lldp_neighbor_pvid_eno50 => 700
lldp_neighbor_sysname_eno49 => my-switches-name
lldp_neighbor_sysname_eno50 => my-switches-name2
nmprimary_connection_lldp => -1 (default)

while the second one only shows:
curl --cert /tmp/cert.pem --key /tmp/key.pem -k https://xx.xx.xxx.x:8443/facts  | grep lldp
"nmprimary_connection_lldp":"-1 (default)",

Below are the EnvironmentFile variables:
[root@fdi ~]# grep EnvironmentFile /usr/lib/systemd/system/foreman-proxy.service
EnvironmentFile=-/etc/default/discovery
EnvironmentFile=-/etc/sysconfig/foreman-proxy

and the default file:
[root@fdi ~]# cat /etc/default/discovery
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/extension/bin
FACTERLIB=/usr/share/fdi/facts:/opt/extension/facts
LD_LIBRARY_PATH=/opt/extension/lib
RUBYLIB=/opt/extension/lib/ruby

Actually, we don't use Puppet; we only use Salt. So I am not sure whether your Puppet question is still relevant.

Actions #17

Updated by Dominic Schlegel over 4 years ago

In the meantime I am using FDI 3.5.7, and I am missing the lldp_* facts even from the first initial facts upload.

Actions #18

Updated by Dominic Schlegel over 4 years ago

I figured out the underlying problem:

Some Cisco switches send LLDP packets tagged with VLAN 1 when the port mode is set to trunk and the native VLAN is set to something else. The switch sends those LLDP packets, but the server's NIC drops them, as they are tagged with VLAN 1, which is of no interest to the server. There are two possible solutions for this:

1. Set the port mode to access with the corresponding correct VLAN ID. In this case the switch will not tag the LLDP packets with VLAN ID 1, and the server receives them properly.
2. Set all interfaces in the Foreman discovery image to promiscuous mode.
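The second workaround amounts to running `ip link set <iface> promisc on` for every non-loopback interface at image startup. A hedged sketch of that step (the function name and the /sys/class/net discovery are illustrative assumptions, not the actual FDI startup code):

```ruby
# Sketch: build the commands that enable promiscuous mode on every
# non-loopback interface, so LLDP frames tagged with VLAN 1 are not
# dropped by the NIC.
def promisc_commands(interfaces)
  interfaces.reject { |i| i == 'lo' }
            .map { |i| ['ip', 'link', 'set', i, 'promisc', 'on'] }
end

# On a live system one would discover interfaces and run each command:
#   ifaces = Dir.children('/sys/class/net')
#   promisc_commands(ifaces).each { |cmd| system(*cmd) }
```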

Actions #19

Updated by The Foreman Bot over 4 years ago

  • Status changed from Need more information to Ready For Testing
  • Pull request https://github.com/theforeman/foreman-discovery-image/pull/119 added
Actions #20

Updated by The Foreman Bot over 4 years ago

  • Fixed in Releases Discovery Image 3.5.1 added
Actions #21

Updated by Anonymous over 4 years ago

  • Status changed from Ready For Testing to Closed