Bug #8422 (Closed): IP gets set to 127.0.0.1
Added by Slava Bendersky about 10 years ago. Updated almost 10 years ago.
Description
In the web UI, after a new VM deployment finishes, the IP address of the primary interface gets set to 127.0.0.1. After that I can't delete or manage any VMs.
[root@ca01srv00 dhcpd]# cat /var/lib/puppet/yaml/facts/ca01stest02.net.org.yaml
--- !ruby/object:Puppet::Node::Facts
name: ca01stest02.net.org
expiration: 2014-11-17 13:36:59.066550 +00:00
values:
operatingsystem: OracleLinux
boardproductname: "440BX Desktop Reference Platform"
is_pe: "false"
serialnumber: "VMware-42 02 57 5f cc 69 1b b8-0e 6e 9f 01 46 cf 37 b0"
path: "/sbin:/usr/sbin:/bin:/usr/bin"
domain: net.org
hardwareisa: x86_64
rubysitedir: /usr/lib/ruby/site_ruby/1.8
sshrsakey: ""
selinux_current_mode: enforcing
memoryfree: "1.70 GB"
operatingsystemrelease: "6.5"
puppetversion: "2.7.25"
facterversion: "1.6.7"
concat_basedir: /var/lib/puppet/concat
kernelrelease: "3.10.59-1.el6.elrepo.x86_64"
macaddress_eth0: "00:50:56:82:0C:7D"
boardserialnumber: None
memorysize: "1.96 GB"
kernelmajversion: "3.10"
environment: staging
memorytotal: "1.96 GB"
uptime_days: "2"
is_virtual: "true"
boardmanufacturer: "Intel Corporation"
network_eth0: "172.16.104.0"
network_lo: "127.0.0.0"
kernel: Linux
netmask_eth0: "255.255.255.0"
macaddress: "00:50:56:82:0C:7D"
netmask_lo: "255.0.0.0"
uptime_seconds: "206753"
augeasversion: "1.0.0"
puppet_vardir: /var/lib/puppet
ipaddress_eth0: "172.16.104.251"
selinux: "true"
swapsize: "992.00 MB"
clientcert: ca01stest02.net.org
clientversion: "2.7.25"
swapfree: "992.00 MB"
hostname: ca01stest02
sshdsakey: "
type: Other
selinux_policyversion: "28"
virtual: vmware
ipaddress: "172.16.104.251"
physicalprocessorcount: "1"
!ruby/sym "_timestamp": 2014-11-17 07:51:28.508131 -05:00
productname: "VMware Virtual Platform"
netmask: "255.255.255.0"
processor0: "Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz"
selinux_enforced: "true"
manufacturer: "VMware, Inc."
selinux_config_mode: enforcing
fqdn: ca01stest02.net.org
rubyversion: "1.8.7"
osfamily: RedHat
kernelversion: "3.10.59"
ipaddress_lo: "127.0.0.1"
selinux_config_policy: targeted
uptime_hours: "57"
ps: "ps -ef"
hardwaremodel: x86_64
root_home: /root
uptime: "2 days"
selinux_mode: targeted
processorcount: "1"
uniqueid: "10acfb68"
interfaces: "eth0,lo"
architecture: x86_64
timezone: EST
Files
junocontroller03.production.log | 1.2 MB | foreman production.log | Ignacio Bravo, 11/18/2014 02:22 PM
8422_fact_upload | 78.8 KB | /api/hosts/facts request log | Dominic Cleal, 11/20/2014 05:38 AM
Updated by Ignacio Bravo about 10 years ago
I have confirmed the same issue for two types of installations:
- 1.6 to 1.7 Upgrade
- 1.7 Install from scratch
In addition to the new provisioned nodes, the foreman node also displays 127.0.0.1 in the GUI.
puppet agent --test does work on the nodes with no apparent errors.
Attempting to delete the nodes in Foreman fails because the address 127.0.0.1 can't be modified in the foreman-proxy. To delete the nodes, the workaround is to open the node in the GUI, select Unmanage, and then delete it.
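For reference, roughly the same unmanage-then-delete steps can be done from the CLI. This is only a sketch: the hostname is an example from my setup, and availability of the --managed flag depends on your hammer-cli-foreman version.

# Sketch of the unmanage-then-delete workaround via hammer; adjust credentials and hostname.
hammer -u admin -p changeme host update --name mac08002784df70.miwcasa --managed false
hammer -u admin -p changeme host delete --name mac08002784df70.miwcasa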
https://groups.google.com/forum/#!topic/foreman-users/e4IWyOpW8yc
/var/lib/puppet/yaml/facts/mac08002784df70.miwcasa.yaml
2.53GHz"
--- !ruby/object:Puppet::Node::Facts
expiration: 2014-11-17 08:42:08.497665 -05:00
values:
hostname: mac08002784df70
id: root
sshrsakey: "AAAAB3NzaC1yc2EAAAABIwAAAQEAvC2SbKBfRVxfj/11hUwWRXtinKcDWSQGS+hWQSypWl0WP4DwcuPhBkpAVSvdFw01Tkl3iuKQrk94w7VeZhXLuPW9UcYrvCrDoWDDg0nuvKVmvHiZ5r2VveU9Y/lX1gh4pyvMy3AEA2G3wG27H3oNh1CQdS79rPP0fCd6AjMT/OsfbjgNm1L8GSz5AcAD6X/9avyUbCCidxJIzM/1U0rZpaF3vs3RxCjhxiNo/otTmUjIT15QR4KezPErQ5C0dvlyfE8lMfCq32ALglTTE78V47z1rlQITEzEsJw4gNBXodXFUhc5s85iulQoaezEl/Qc3qRPA3r3BWGgtxexlgrzFQ=="
augeasversion: "1.0.0"
is_virtual: "true"
serialnumber: "0"
osfamily: RedHat
rubysitedir: /usr/lib/ruby/site_ruby/1.8
macaddress: "08:00:27:84:DF:70"
memorysize: "490.39 MB"
ipaddress: "10.10.10.102"
physicalprocessorcount: "1"
network_eth0: "10.10.10.0"
productname: VirtualBox
boardserialnumber: "0"
hardwareisa: x86_64
processorcount: "1"
virtual: virtualbox
netmask: "255.255.255.0"
selinux_current_mode: enforcing
clientversion: "2.7.25"
kernelversion: "2.6.32"
memorytotal: "490.39 MB"
operatingsystemrelease: "6.6"
boardproductname: VirtualBox
mtu_eth0: "1500"
uptime_hours: "14"
path: "/sbin:/usr/sbin:/bin:/usr/bin"
hardwaremodel: x86_64
memoryfree: "315.42 MB"
network_lo: "127.0.0.0"
boardmanufacturer: "Oracle Corporation"
macaddress_eth0: "08:00:27:84:DF:70"
sshdsakey: "AAAAB3NzaC1kc3MAAACBAIWw9TBePQuHe3FkX13hiHG2qOQEsPCYzcBxw2A03MvcalcsKj7VylCLQmqEfx1YG8uzrEzPwPNYq1BEp7ezn5xEY5AgVWLCcOkLxRRvuPZRqIyzIHZrbNwkdUIuyO3Vcd5lU1swHMCsdPfd6+/09EwTPgsAQd7x1WjHLrze7IVnAAAAFQD6qnq+5NMbE0tJwVZBnuoA6vQbDwAAAIBD27/+IudYkvbCzCpMV5GT1T7dEhBSmIWdXb/gcSJ8CL7Y6BRn28EApsXU0jdQjRq7hXbjGA3XIlIiD/jJW4EYbJ2ssc93AMD2apvJg4rxw0bIRem9hGa6NoEv1Rl3xAdnBFKqCKsUG0rl99RXxsyUsLh+tZcgjldfkWTi6UndSQAAAIAxRaHl3lQ8yzGO8lXPdA8OKD3ssCMfZPxLSPuVB6OEL1fotdgoW5j8Twj1W79wp06WQcdjoTb4C4GDW/KkvVFCTSrtGglUORtBTbenbd28KWzC5kocKh2hXZByKyctu/uD4AjIbOkYw+BaBZTkBhQ5xrfxTGkeNNNTO1AlrjjPqQ=="
uptime: "14:05 hours"
processor0: "Intel(R) Core(TM)2 Duo CPU T9400
uniqueid: "0a0a660a"
uptime_days: "0"
kernelrelease: "2.6.32-504.el6.x86_64"
architecture: x86_64
mtu_lo: "65536"
selinux_enforced: "true"
facterversion: "1.6.18"
operatingsystem: CentOS
interfaces: "eth0,lo"
ipaddress_lo: "127.0.0.1"
selinux_config_policy: targeted
type: Other
domain: miwcasa
puppetversion: "2.7.25"
swapsize: "816.00 MB"
selinux: "true"
"_timestamp": 2014-11-17 13:12:08.561509 -05:00
uptime_seconds: "50718"
manufacturer: "innotek GmbH"
ipaddress_eth0: "10.10.10.102"
rubyversion: "1.8.7"
environment: production
selinux_mode: targeted
timezone: UTC
kernel: Linux
netmask_lo: "255.0.0.0"
selinux_config_mode: enforcing
clientcert: mac08002784df70.miwcasa
kernelmajversion: "2.6"
swapfree: "816.00 MB"
ps: "ps -ef"
selinux_policyversion: "24"
fqdn: mac08002784df70.miwcasa
netmask_eth0: "255.255.255.0"
name: mac08002784df70.miwcasa
Output of facter on the node:
[root@mac08002784df70 ~]# facter
architecture => x86_64
augeasversion => 1.0.0
boardmanufacturer => Oracle Corporation
boardproductname => VirtualBox
boardserialnumber => 0
domain => miwcasa
facterversion => 1.6.18
fqdn => mac08002784df70.miwcasa
hardwareisa => x86_64
hardwaremodel => x86_64
hostname => mac08002784df70
id => root
interfaces => eth0,lo
ipaddress => 10.10.10.102
ipaddress_eth0 => 10.10.10.102
ipaddress_lo => 127.0.0.1
is_virtual => true
kernel => Linux
kernelmajversion => 2.6
kernelrelease => 2.6.32-504.el6.x86_64
kernelversion => 2.6.32
macaddress => 08:00:27:84:DF:70
macaddress_eth0 => 08:00:27:84:DF:70
manufacturer => innotek GmbH
memoryfree => 300.59 MB
memorysize => 490.39 MB
memorytotal => 490.39 MB
mtu_eth0 => 1500
mtu_lo => 65536
netmask => 255.255.255.0
netmask_eth0 => 255.255.255.0
netmask_lo => 255.0.0.0
network_eth0 => 10.10.10.0
network_lo => 127.0.0.0
operatingsystem => CentOS
operatingsystemrelease => 6.6
osfamily => RedHat
path => /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
physicalprocessorcount => 1
processor0 => Intel(R) Core(TM)2 Duo CPU T9400 @ 2.53GHz
processorcount => 1
productname => VirtualBox
ps => ps -ef
puppetversion => 2.7.25
rubysitedir => /usr/lib/ruby/site_ruby/1.8
rubyversion => 1.8.7
selinux => true
selinux_config_mode => enforcing
selinux_config_policy => targeted
selinux_current_mode => enforcing
selinux_enforced => true
selinux_mode => targeted
selinux_policyversion => 24
serialnumber => 0
sshdsakey => AAAAB3NzaC1kc3MAAACBAIWw9TBePQuHe3FkX13hiHG2qOQEsPCYzcBxw2A03MvcalcsKj7VylCLQmqEfx1YG8uzrEzPwPNYq1BEp7ezn5xEY5AgVWLCcOkLxRRvuPZRqIyzIHZrbNwkdUIuyO3Vcd5lU1swHMCsdPfd6+/09EwTPgsAQd7x1WjHLrze7IVnAAAAFQD6qnq+5NMbE0tJwVZBnuoA6vQbDwAAAIBD27/+IudYkvbCzCpMV5GT1T7dEhBSmIWdXb/gcSJ8CL7Y6BRn28EApsXU0jdQjRq7hXbjGA3XIlIiD/jJW4EYbJ2ssc93AMD2apvJg4rxw0bIRem9hGa6NoEv1Rl3xAdnBFKqCKsUG0rl99RXxsyUsLh+tZcgjldfkWTi6UndSQAAAIAxRaHl3lQ8yzGO8lXPdA8OKD3ssCMfZPxLSPuVB6OEL1fotdgoW5j8Twj1W79wp06WQcdjoTb4C4GDW/KkvVFCTSrtGglUORtBTbenbd28KWzC5kocKh2hXZByKyctu/uD4AjIbOkYw+BaBZTkBhQ5xrfxTGkeNNNTO1AlrjjPqQ==
sshrsakey => AAAAB3NzaC1yc2EAAAABIwAAAQEAvC2SbKBfRVxfj/11hUwWRXtinKcDWSQGS+hWQSypWl0WP4DwcuPhBkpAVSvdFw01Tkl3iuKQrk94w7VeZhXLuPW9UcYrvCrDoWDDg0nuvKVmvHiZ5r2VveU9Y/lX1gh4pyvMy3AEA2G3wG27H3oNh1CQdS79rPP0fCd6AjMT/OsfbjgNm1L8GSz5AcAD6X/9avyUbCCidxJIzM/1U0rZpaF3vs3RxCjhxiNo/otTmUjIT15QR4KezPErQ5C0dvlyfE8lMfCq32ALglTTE78V47z1rlQITEzEsJw4gNBXodXFUhc5s85iulQoaezEl/Qc3qRPA3r3BWGgtxexlgrzFQ==
swapfree => 816.00 MB
swapsize => 816.00 MB
timezone => UTC
type => Other
uniqueid => 0a0a660a
uptime => 14:19 hours
uptime_days => 0
uptime_hours => 14
uptime_seconds => 51541
virtual => virtualbox
Updated by Marek Hulán about 10 years ago
- Release deleted (21)
As a quick workaround you can set the "ignore_puppet_facts_for_provisioning" setting to true (Administer -> Settings -> Provisioning). We update the IP and MAC based on the ipaddress and macaddress facts. Facter can change the value from run to run, but usually it should be the first interface found (alphabetical order). Could you please include your facter version? Also, could you rerun "facter ipaddress" on that host several times to confirm that the values do not change randomly?
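If it helps, a quick way to run that check on the affected client could look like this (simple sketch):

# Re-run the facts Foreman keys the primary interface on and watch for changing values.
for i in $(seq 5); do
  facter ipaddress macaddress
done
facter --version   # include this in your reply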
Updated by Ignacio Bravo about 10 years ago
Marek,
I've updated the ignore_puppet_facts_for_provisioning, and that should take care of the future nodes. Is there any way I can update the old ones? Maybe editing the db directly?
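For example, something along these lines is what I have in mind (untested sketch; it assumes the hosts table's ip and name columns, and I'd back up the database first):

# Untested sketch: reset the cached IP of one already-affected host directly in the DB.
echo "update hosts set ip = '10.10.10.102' where name = 'mac08002784df70.miwcasa';" | sudo -u foreman psql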
As for the facter versions, they did not match between the server and the guests.
The server has 2.3:
facter.x86_64 1:2.3.0-1.el6 @puppetlabs-products
And the guests:
facter.x86_64 1.6.18-3.el6 @epel/6.6
After upgrading one of the guests, the error is still there.
The curious part is that the server itself is also being displayed as 127.0.0.1.
[root@puppet6 pxelinux.cfg]# hammer -u admin -p changeme host list
---|-------------------------|------------------|------------|-----------|------------------
ID | NAME                    | OPERATING SYSTEM | HOST GROUP | IP        | MAC
---|-------------------------|------------------|------------|-----------|------------------
6  | mac08002784df70.miwcasa | CentOS 6.6       | CentOS6    | 127.0.0.1 | 08:00:27:84:df:70
10 | mac080027fc0b06.miwcasa | CentOS 6.6       |            | 127.0.0.1 | 08:00:27:fc:0b:06
1  | puppet6.miwcasa         | CentOS 6.6       |            | 127.0.0.1 | 08:00:27:a2:05:ab
---|-------------------------|------------------|------------|-----------|------------------
Updated by Dominic Cleal about 10 years ago
It would be very helpful if you could try running the Puppet agent and letting it import facts (with that setting disabled again) while debugging is enabled:
http://projects.theforeman.org/projects/foreman/wiki/Troubleshooting#How-do-I-enable-debugging
Then provide production.log; it should show exactly which facts are being received and how Foreman is treating them. You'd need to do this on a host that isn't currently set to 127.0.0.1, though.
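A rough outline of the capture, assuming a stock 1.7-era RPM install (the Troubleshooting wiki page above has the authoritative steps):

# 1) enable debug logging in /etc/foreman/settings.yaml, roughly:
#      :logging:
#        :level: debug
# 2) restart Foreman (it runs under Passenger on RPM installs) and watch the log:
sudo service httpd restart
sudo tail -f /var/log/foreman/production.log
# 3) on the affected client, trigger a fact upload:
puppet agent --test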
Updated by Ignacio Bravo about 10 years ago
I created a new host with an IP of 192.168.6.170
After everything was settled and applied, the IP of the new host in the Foreman GUI is shown as 127.0.0.1, as you can see in the table below.
[root@puppet foreman]# hammer -u admin -p changeme host list
---|-------------------------|------------------|------------------|-----------|------------------
ID | NAME                    | OPERATING SYSTEM | HOST GROUP       | IP        | MAC
---|-------------------------|------------------|------------------|-----------|------------------
17 | compute01.hq.ltg        | CentOS 6.5       | LTG_Base/Compute | 127.0.0.1 | 56:e0:57:b1:de:47
52 | junocontroller03.hq.ltg | CentOS 7.0       | LTG_CentOS7      | 127.0.0.1 | 00:50:56:83:db:05
21 | labsrv04.hq.ltg         | CentOS 6.5       | LTG_Base/Compute | 127.0.0.1 | 5a:10:41:1f:81:41
29 | labsrv10.hq.ltg         | CentOS 7.0       | LTG_CentOS7      | 127.0.0.1 | 0e:85:00:4f:42:4b
1  | puppet.hq.ltg           | CentOS 6.5       |                  | 127.0.0.1 | 00:50:56:83:db:01
---|-------------------------|------------------|------------------|-----------|------------------
I've uploaded the production log (in debug mode) to see if it gives you additional hints as to what is happening.
Versions on host:
- Facter 2.2
- Puppet 3.6
Updated by Dominic Cleal about 10 years ago
- Release set to 21
That's excellent, thanks for the valuable data.
The request begins on line 3622; the smoking gun below is from line 4122.
We have following interfaces 'ens192, ens224' based on facts
Interface ens192 facts: {"netmask"=>"255.255.255.0", "ipaddress"=>"192.168.6.170", "network"=>"192.168.6.0", "macaddress"=>"00:50:56:83:db:05", "mtu"=>"1500"}
Interface ens224 facts: {"mtu"=>"1500", "macaddress"=>"00:50:56:83:db:06"}
Skipping ens192 since it is primary interface of host junocontroller03.hq.ltg
SQL (0.5ms)  INSERT INTO "audits" ("action", "associated_id", "associated_name", "associated_type", "auditable_id", "auditable_name", "auditable_type", "audited_changes", "comment", "created_at", "remote_address", "user_id", "user_type", "username", "version") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15) RETURNING "id" [["action", "update"], ["associated_id", nil], ["associated_name", nil], ["associated_type", nil], ["auditable_id", 52], ["auditable_name", "junocontroller03.hq.ltg"], ["auditable_type", "Host"], ["audited_changes", "---\nmodel_id:\n- \n- 1\nip:\n- 192.168.6.170\n- 127.0.0.1\nprimary_interface:\n- \n- ens192\n"], ["comment", nil], ["created_at", Tue, 18 Nov 2014 18:55:40 UTC +00:00], ["remote_address", "192.168.6.151"], ["user_id", nil], ["user_type", nil], ["username", "API Admin"], ["version", 4]]
 (0.7ms)  UPDATE "hosts" SET "model_id" = 1, "ip" = '127.0.0.1', "primary_interface" = 'ens192', "updated_at" = '2014-11-18 18:55:40.507772' WHERE "hosts"."type" IN ('Host::Managed') AND "hosts"."id" = 52
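For anyone retracing this, the audit row written by that INSERT can be pulled back out with something like the following (sketch only; table and column names as they appear in the SQL above):

# List recent audit entries for host id 52 so the ip / primary_interface diff can be inspected.
echo "select id, created_at, audited_changes from audits where auditable_type = 'Host' and auditable_id = 52 order by id desc limit 5;" | sudo -u foreman psql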
Updated by Dominic Cleal about 10 years ago
Could you please run this on your Foreman server?
echo "select name from fact_names where id=8;" | sudo -u foreman psql
It seems the logs don't show which fact_name IDs correspond to which facts.
Updated by Ignacio Bravo about 10 years ago
[root@puppet ~]# echo "select name from fact_names where id=8;" | sudo -u foreman psql could not change directory to "/root" name -------------- ipaddress_lo (1 row)
Updated by Ignacio Bravo about 10 years ago
And here are all the facts:
[root@puppet ~]# echo "select name from fact_names ;" | sudo -u foreman psql could not change directory to "/root" name ------------------------------- network_eth1 blockdevice_sda_vendor sshfp_rsa blockdevice_sr0_size ipaddress_eth1 physicalprocessorcount netmask_lo ipaddress_lo filesystems uniqueid blockdevice_sda_model manufacturer id clientnoop uuid interfaces mtu_eth0 kernel operatingsystemmajrelease selinux_policyversion swapfree_mb uptime_days is_virtual blockdevice_sr0_model uptime osfamily augeasversion selinux_config_mode netmask_eth1 processors facterversion memorysize_mb timezone swapsize_mb uptime_seconds system_uptime domain ipaddress_eth0 network_eth0 macaddress_eth1 operatingsystem gid kernelmajversion fqdn sshrsakey operatingsystemrelease blockdevice_sr0_vendor hostname productname sshfp_dsa selinux memoryfree_mb serialnumber netmask_eth0 partitions bios_release_date memorysize swapsize clientcert virtual macaddress puppetversion uptime_hours blockdevice_sda_size boardserialnumber kernelrelease bios_version selinux_current_mode macaddress_eth0 rubyversion boardproductname hardwaremodel mtu_eth1 boardmanufacturer selinux_config_policy mtu_lo processorcount kernelversion sshdsakey swapfree netmask type clientversion architecture selinux_enforced rubysitedir os network_lo ipaddress ps path processor0 bios_vendor _timestamp blockdevices memoryfree hardwareisa ipa_client_configured galera_bootstrap_ok gluster_fsm_debug ip6tables_version puppet_vardir gluster_vrrp gluster_uuid iptables_version hamysql_active_node kvm_capable gluster_property_groups_ready openstack_services_enabled concat_basedir gluster_vrrp_password foo_a gluster_host_ip puppet_vardirtmp staging_http_get is_pe foo foo_b root_home memorytotal discovery_version lsbdistcodename lsbrelease lsbmajdistrelease lib discovery_bootif lsbdistdescription lsbdistid lsbdistrelease processor5 processor2 processor7 processor4 processor1 processor6 processor3 blockdevice_cciss!c0d0_size blockdevice_cciss!c0d0_model blockdevice_cciss!c0d0_vendor ipaddress_virbr0 network_virbr0 mtu_virbr0_nic netmask_virbr0 macaddress_virbr0_nic mtu_virbr0 lsbminordistrelease macaddress_virbr0 macaddress_ovs_system mtu_br_tun macaddress_br_int mtu_ovs_system macaddress_br_tun mtu_br_int macaddress_tapb80dd34e_de mtu_qbrb80dd34e_de macaddress_qbrb80dd34e_de mtu_tapb80dd34e_de macaddress_qvbb80dd34e_de mtu_qvbb80dd34e_de mtu_qvob80dd34e_de macaddress_qvob80dd34e_de mtu_tapb45f85f6_be mtu_qbrb45f85f6_be mtu_qvob45f85f6_be mtu_qvbb45f85f6_be macaddress_qvob45f85f6_be macaddress_qvbb45f85f6_be macaddress_tapb45f85f6_be macaddress_qbrb45f85f6_be macaddress_qvo29cbcceb_cd macaddress_qvb29cbcceb_cd macaddress_tap29cbcceb_cd mtu_qbr29cbcceb_cd macaddress_qbr29cbcceb_cd mtu_qvo29cbcceb_cd mtu_qvb29cbcceb_cd mtu_tap29cbcceb_cd macaddress_qvo686141ff_e0 macaddress_qvb686141ff_e0 mtu_qvo965d5ef5_16 macaddress_tap686141ff_e0 mtu_qvb965d5ef5_16 macaddress_qbr965d5ef5_16 macaddress_qvb965d5ef5_16 macaddress_qvo965d5ef5_16 mtu_qvo686141ff_e0 macaddress_tap965d5ef5_16 mtu_tap965d5ef5_16 mtu_qbr965d5ef5_16 mtu_qvb686141ff_e0 mtu_qbr686141ff_e0 macaddress_qbr686141ff_e0 mtu_tap686141ff_e0 macaddress_enp5s0 netmask_enp3s0 sshecdsakey ipaddress_enp3s0 sshfp_ecdsa network_enp3s0 macaddress_enp3s0 rubyplatform netmask_enp5s0 ipaddress_enp5s0 network_enp5s0 rabbitmq_erlang_cookie macaddress_br_ex dhcp_servers netmask_ens192 blockdevice_fd0_size mtu_ens192 network_ens192 macaddress_ens192 macaddress_ens224 mtu_ens224 ipaddress_ens192 network_ens224 ipaddress_ens224 
netmask_ens224 mtu_enp3s0 mtu_enp5s0 mtu_br_ex blockdevice_vda_size blockdevice_vda_vendor macaddress_br0 mtu_br0 macaddress_ens3 mtu_ens3 ipaddress_ens3 macaddress_ens4 mtu_ens4 network_ens3 netmask_ens3 netmask_ens4 network_ens4 ipaddress_ens4 ipaddress_eth1_1 network_eth1_1 mtu_eth1_1 netmask_eth1_1 macaddress_eth1_1 discovery_release discovery_bootip speed_ens3 duplex_ens3 port_ens3 auto_negotitation_ens3 link_ens3 speed_ens4 duplex_ens4 port_ens4 auto_negotitation_ens4 link_ens4 link_lo (253 rows)
Updated by Dominic Cleal about 10 years ago
Thanks, so that confirms that ipaddress_lo is 127.0.0.1, ipaddress is 192.168.6.170, and that's somehow causing the primary IP to change.
interfaces=ens192,ens224,lo
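A quick cross-check of which stored facts carry the loopback address could look like this (sketch; it assumes the usual fact_values/fact_names layout and uses host id 52 from the log above):

# Show every stored fact for host 52 whose value is in 127.0.0.0/8.
echo "select fn.name, fv.value from fact_values fv join fact_names fn on fn.id = fv.fact_name_id where fv.host_id = 52 and fv.value like '127.0.0.%';" | sudo -u foreman psql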
Updated by Dominic Cleal about 10 years ago
- Status changed from New to Assigned
- Assignee set to Dominic Cleal
Updated by Dominic Cleal about 10 years ago
- File 8422_fact_upload 8422_fact_upload added
Updated by Marek Hulán about 10 years ago
- Assignee changed from Dominic Cleal to Marek Hulán
Updated by The Foreman Bot about 10 years ago
- Status changed from Assigned to Ready For Testing
- Pull request https://github.com/theforeman/foreman_discovery/pull/103 added
- Pull request deleted ()
Updated by Dominic Cleal about 10 years ago
- Release deleted (21)
Updated by Dominic Cleal about 10 years ago
- Project changed from Foreman to Discovery
- Category changed from Importers to Discovery plugin
It seems this is a bug in foreman_discovery. Great find, Marek.
Updated by Marek Hulán about 10 years ago
- Project changed from Discovery to Foreman
- Category changed from Discovery plugin to Importers
- Release set to 21
Could you please test the fix from the discovery PR? You'll have to apply it in your `gem which foreman_discovery` directory and then restart Foreman.
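Roughly like this, assuming the gem is installed system-wide and Foreman runs under Passenger on an RPM install (a sketch only; adjust paths to your setup):

# Apply the PR as a patch on top of the installed foreman_discovery gem, then restart Foreman.
cd "$(dirname "$(gem which foreman_discovery)")/.."   # gem root, one level above lib/
curl -L https://github.com/theforeman/foreman_discovery/pull/103.patch | sudo patch -p1
sudo service httpd restart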
Updated by Marek Hulán about 10 years ago
- Project changed from Foreman to Discovery
- Category changed from Importers to Discovery plugin
- Release deleted (21)
Sorry for the conflicting update; restored.
Updated by Ignacio Bravo about 10 years ago
After applying the patch, all the hosts managed with puppet are now showing the correct IP for the first interface.
Thanks.
IB
Updated by Marek Hulán almost 10 years ago
- Status changed from Ready For Testing to Closed
- % Done changed from 0 to 100
Applied in changeset 83ee2e24c067d060ca7229d9f0026b37044e9ea2.