Bug #17426

[UI] Skyring doesn't properly configure more networks

Added by Brad Buckingham over 5 years ago. Updated almost 4 years ago.

Status:
Rejected
Priority:
Normal
Assignee:
-
Category:
-
Target version:
Difficulty:
Triaged:
Bugzilla link:
Pull request:
Fixed in Releases:
Found in Releases:
Red Hat JIRA:

Description

NOTE: a pulp change will be needed for this.

Cloned from https://bugzilla.redhat.com/show_bug.cgi?id=1360643

Description of problem:
On a cluster with multiple networks (3 in my case), it is possible to choose various combinations of "Cluster Network" and "Access Network" from the available networks.
But the submitted request contains only one Cluster Network and one Access Network.

Version-Release number of selected component (if applicable):
rhscon-ceph-0.0.36-1.el7scon.x86_64
rhscon-core-0.0.36-1.el7scon.x86_64
rhscon-core-selinux-0.0.36-1.el7scon.noarch
rhscon-ui-0.0.50-1.el7scon.noarch

How reproducible:
100%

Steps to Reproduce:
1. Prepare a cluster with at least 3 separate networks available on each storage node.
2. Try to create a cluster and choose a combination of multiple networks for Cluster Network or Access Network.
3. Check the POST request when submitting the Create Cluster.
4. Check the Ceph cluster configuration when the cluster is created.

Actual results:
Submitted POST request (per the attached screenshot, Cluster Network should be 192.168.102.0/24 and 192.168.101.0/24, and Public Network 10.34.112.0/20):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{
  "name": "TestClusterA",
  "type": "ceph",
  "journalSize": "5GB",
  "nodes": [
    {"nodeid": "9fde6121-c5d4-43d6-b8c3-42c28cf1ee5b", "nodetype": ["MON"]},
    {"nodeid": "74d964db-ddcb-46dd-abf5-2b7c0a8d420b", "nodetype": ["MON"]},
    {"nodeid": "e122b7a9-4b35-4ec0-99e0-3968b3e4609d", "nodetype": ["OSD"],
     "disks": [{"name": "/dev/vdj", "fstype": "xfs"}, {"name": "/dev/vdi", "fstype": "xfs"},
               {"name": "/dev/vdh", "fstype": "xfs"}, {"name": "/dev/vdc", "fstype": "xfs"},
               {"name": "/dev/vdb", "fstype": "xfs"}, {"name": "/dev/vdg", "fstype": "xfs"},
               {"name": "/dev/vdf", "fstype": "xfs"}, {"name": "/dev/vde", "fstype": "xfs"},
               {"name": "/dev/vdd", "fstype": "xfs"}]},
    {"nodeid": "36c9cb2a-9ebb-4ce8-98d0-61986442229f", "nodetype": ["OSD"],
     "disks": [{"name": "/dev/vdj", "fstype": "xfs"}, {"name": "/dev/vdi", "fstype": "xfs"},
               {"name": "/dev/vdh", "fstype": "xfs"}, {"name": "/dev/vdc", "fstype": "xfs"},
               {"name": "/dev/vdb", "fstype": "xfs"}, {"name": "/dev/vdg", "fstype": "xfs"},
               {"name": "/dev/vdf", "fstype": "xfs"}, {"name": "/dev/vde", "fstype": "xfs"},
               {"name": "/dev/vdd", "fstype": "xfs"}]},
    {"nodeid": "93c0bc3d-baf7-4bb1-8b05-30424f4101a1", "nodetype": ["OSD"],
     "disks": [{"name": "/dev/vdj", "fstype": "xfs"}, {"name": "/dev/vdi", "fstype": "xfs"},
               {"name": "/dev/vdh", "fstype": "xfs"}, {"name": "/dev/vdc", "fstype": "xfs"},
               {"name": "/dev/vdb", "fstype": "xfs"}, {"name": "/dev/vdg", "fstype": "xfs"},
               {"name": "/dev/vdf", "fstype": "xfs"}, {"name": "/dev/vde", "fstype": "xfs"},
               {"name": "/dev/vdd", "fstype": "xfs"}]},
    {"nodeid": "48ead07e-7996-4dc6-b7b6-81dd00e59e04", "nodetype": ["OSD"],
     "disks": [{"name": "/dev/vdj", "fstype": "xfs"}, {"name": "/dev/vdi", "fstype": "xfs"},
               {"name": "/dev/vdh", "fstype": "xfs"}, {"name": "/dev/vdc", "fstype": "xfs"},
               {"name": "/dev/vdb", "fstype": "xfs"}, {"name": "/dev/vdg", "fstype": "xfs"},
               {"name": "/dev/vdf", "fstype": "xfs"}, {"name": "/dev/vde", "fstype": "xfs"},
               {"name": "/dev/vdd", "fstype": "xfs"}]},
    {"nodeid": "f9501e14-d3e6-4565-aad0-7eb81463bc17", "nodetype": ["MON"]}
  ],
  "networks": {
    "cluster": "192.168.102.0/24",
    "public": "10.34.112.0/20"
  }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
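The shape of the payload itself shows the bug: each entry under `networks` is a single CIDR string rather than a list, so at most one subnet per role can ever reach the backend. A minimal check (the excerpt below is the `networks` section of the request above):

```python
import json

# Excerpt of the submitted Create Cluster payload (networks section only).
payload = json.loads("""
{
  "networks": {
    "cluster": "192.168.102.0/24",
    "public": "10.34.112.0/20"
  }
}
""")

networks = payload["networks"]

# Each role maps to a single CIDR string, not a list, so the second
# Cluster Network selected in the wizard (192.168.101.0/24) is silently dropped.
for role, value in networks.items():
    print(role, type(value).__name__, value)
```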

Ceph cluster configuration (look for the public_network, and cluster_network keys):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# cat /etc/ceph/TestClusterA.conf
[global]
osd crush update on start = False
max open files = 131072
fsid = aabf0005-8723-4e33-a8a6-a1c7812b054f

[mon.dhcp-124-0]
host = dhcp-124-0
mon addr = 192.168.102.115
[mon.dhcp-124-2]
host = dhcp-124-2
mon addr = 192.168.102.191
[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor
[mon]
[mon.dhcp-126-56]
host = dhcp-126-56
mon addr = 192.168.102.178
[osd]
osd mount options xfs = noatime,largeio,inode64,swalloc
osd mkfs options xfs = -f -i size=2048
public_network = 10.34.112.0/20
cluster_network = 192.168.102.0/24
osd mkfs type = xfs
osd journal size = 0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Expected results:
Submitted POST request respects the desired network configuration from the Create Cluster wizard.

Additional info:
According to the documentation [1] (this is for Ceph 1.3, but I think it is similar in Ceph 2), it is possible to configure multiple networks as public_network or cluster_network.

[1] https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/ceph-configuration-guide/chapter-8-network-configuration-settings
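For reference, Ceph accepts comma-delimited subnet lists for these keys, so after a fix the relevant section of the generated configuration would be expected to look along these lines (a sketch using the subnets selected in this report, placed in the same section as the generated config above):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[osd]
public_network = 10.34.112.0/20
cluster_network = 192.168.102.0/24, 192.168.101.0/24
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~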

History

#1 Updated by Eric Helms over 5 years ago

  • Subject changed from [UI] Skyring doesn't properly configure more networks to [UI] Skyring doesn't properly configure more networks
  • Status changed from New to Rejected
  • Legacy Backlogs Release (now unused) set to 166
