
apex assigns IP addresses to wrong interfaces of overcloud

asked 2016-09-22 02:22:37 -0700

rvdp

We are trying an Apex deployment for the first time, with a bare metal undercloud and the network_settings below. The overcloud VMs PXE boot fine, with the 192.0.2.0/24 addresses assigned to the correct interface (enp2s0f1). But when they boot from disk, this address gets assigned to the wrong interface (enp1s0f0).

The /etc/sysconfig/network-scripts config files have:

  enp1s0f0: 192.0.2.12
  enp1s0f1: 192.0.2.12
  enp2s0f0: 11.0.0.25 (this should be the public network)
  enp2s0f1: OVS_BRIDGE=br-ex (this should be the admin network)

(The result is that enp1s0f0 gets the 192.0.2.12 address and enp1s0f1 has no address.)

enp1s0f1 is the wrong interface; it should be enp2s0f1.

This is /etc/os-net-config/config.json on the overcloud VM:

{
  "network_config": [
    {
      "type": "interface",
      "name": "nic1",
      "use_dhcp": false,
      "dns_servers": ["8.8.8.8", "8.8.4.4"],
      "addresses": [{"ip_netmask": "192.0.2.12/24"}],
      "routes": [{"ip_netmask": "169.254.169.254/32", "next_hop": "192.0.2.5"}]
    },
    {
      "type": "interface",
      "name": "nic2",
      "use_dhcp": false,
      "addresses": [{"ip_netmask": "11.0.0.25/24"}]
    },
    {
      "type": "ovs_bridge",
      "name": "br-ex",
      "use_dhcp": false,
      "members": [{"type": "interface", "name": "nic3", "primary": true}],
      "addresses": [{"ip_netmask": "XXX.YYY.ZZZ.10/24"}],
      "routes": [{"ip_netmask": "0.0.0.0/0", "next_hop": "XXX.YYY.ZZZ.1"}]
    },
    {
      "type": "interface",
      "name": "nic4",
      "use_dhcp": false,
      "addresses": [{"ip_netmask": "12.0.0.26/24"}]
    }
  ]
}

It seems like the logical nicN names get mapped to the wrong physical interfaces. Any suggestions on how to fix this?

enp2s0f0 and enp2s0f1 are two 1G onboard interfaces; enp1s0f0 and enp1s0f1 are two 10G Mellanox ConnectX-4 NICs.
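Given those four interfaces, a plain alphanumeric sort over the host's NIC names (which, as far as I can tell, is roughly how the automatic nicN mapping orders them — that is an assumption; the real tooling also checks link state and prefers embedded NICs) would put the 10G ports first:

```python
# Simplified sketch of a sort-based logical-to-physical NIC mapping.
# Assumption: no interface counts as "embedded" (em*/eno*), so the
# order is just an alphanumeric sort; real os-net-config behavior
# also skips interfaces without an active link.

def map_logical_nics(interfaces):
    """Map nic1..nicN to interface names in sorted order."""
    ordered = sorted(interfaces)
    return {"nic%d" % i: name for i, name in enumerate(ordered, start=1)}

# The four interfaces from this host: two 10G Mellanox ports (enp1s0f*)
# and two 1G onboard ports (enp2s0f*).
mapping = map_logical_nics(["enp2s0f0", "enp2s0f1", "enp1s0f0", "enp1s0f1"])
print(mapping)
# nic1 -> enp1s0f0: the 10G port sorts first, so it receives the
# admin-network address that was intended for enp2s0f1.
```

This would explain the symptom: the admin address follows nic1, and nic1 lands on enp1s0f0 rather than enp2s0f1.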

admin_network:
  enabled: true
  network_type: bridged
  bridged_interface: 'enp2s0f1'
  bond_interfaces: ''
  vlan: native
  usable_ip_range: 192.0.2.10,192.0.2.199
  gateway: 192.0.2.1
  provisioner_ip: 192.0.2.5
  cidr: 192.0.2.0/24
  dhcp_range: 192.0.2.10,192.0.2.100
  introspection_range: 192.0.2.200,192.0.2.254
private_network:
  enabled: true
  network_type: 'bridged'
  bridged_interface: ''
  cidr: 11.0.0.0/24
  dhcp_range: 11.0.0.100,11.0.0.200
public_network:
  enabled: true
  network_type: 'bridged'
  bridged_interface: 'enp2s0f0'
  cidr: XXX.YYY.ZZZ.0/24
  gateway: XXX.YYY.ZZZ.1
  floating_ip_range: XXX.YYY.ZZZ.100,XXX.YYY.ZZZ.254
  usable_ip_range: XXX.YYY.ZZZ.5,XXX.YYY.ZZZ.20
  provisioner_ip: XXX.YYY.ZZZ.3
storage_network:
  enabled: true
  network_type: 'bridged'
  bridged_interface: ''
  cidr: 12.0.0.0/24
  dhcp_range: 12.0.0.100,12.0.0.200

I tried this with Brahmaputra 3.0 and a daily Colorado build from 15 September.


1 answer


answered 2016-09-29 08:52:43 -0700

rvdp

I got this answer on the opnfv-users list:

Hi Ronald,

In the Brahmaputra release, the NIC ordering is static for the overcloud nodes, meaning the first interface must be the admin network. The first interface is determined by a search of devices done by os-net-config. You should see output in your /var/log/messages that shows the "nic mapping", like:
Sep 22 12:28:55 localhost os-collect-config: [2016/09/22 12:28:55 PM] [INFO] nic1 mapped to: enp6s0
Sep 22 12:28:55 localhost os-collect-config: [2016/09/22 12:28:55 PM] [INFO] nic2 mapped to: enp7s0
Sep 22 12:28:55 localhost os-collect-config: [2016/09/22 12:28:55 PM] [INFO] nic3 mapped to: enp8s0
Sep 22 12:28:55 localhost os-collect-config: [2016/09/22 12:28:55 PM] [INFO] nic4 mapped to: enp9s0
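Those "mapped to" lines can be pulled into a lookup table with a few lines of Python (a sketch; the sample lines are inlined here so it runs standalone, but you would read them from /var/log/messages):

```python
import re

# Extract the nicN -> physical-interface mapping from os-collect-config
# log lines like the ones above.
log_lines = [
    "Sep 22 12:28:55 localhost os-collect-config: [INFO] nic1 mapped to: enp6s0",
    "Sep 22 12:28:55 localhost os-collect-config: [INFO] nic2 mapped to: enp7s0",
    "Sep 22 12:28:55 localhost os-collect-config: [INFO] nic3 mapped to: enp8s0",
    "Sep 22 12:28:55 localhost os-collect-config: [INFO] nic4 mapped to: enp9s0",
]

mapping = dict(re.findall(r"(nic\d+) mapped to: (\S+)", "\n".join(log_lines)))
print(mapping)  # {'nic1': 'enp6s0', 'nic2': 'enp7s0', 'nic3': 'enp8s0', 'nic4': 'enp9s0'}
```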

The good news is that in the Colorado release this is no longer a restriction. You are free to declare the NIC to be used per role in the network settings file:

https://gerrit.opnfv.org/gerrit/gitweb?p=apex.git;a=blob;f=config/network/network_settings.yaml;h=f7680643b1cb688a6883195f3afa9c400ae11e22;hb=stable/colorado#l40

That example declares which "logical nic" to use per role/network. If you don't want to bother with the logical nic mapping and know the physical nic name of each host, you can just use that physical nic name instead.
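If you go the physical-name route, one way to rewrite an os-net-config style JSON is to substitute the nicN labels in place (a sketch; the nic-to-interface mapping below is the layout the question wants, labelled here as illustrative, not something any tool produces):

```python
import json

# Illustrative mapping only: admin on enp2s0f1, etc., per the question.
desired = {"nic1": "enp2s0f1", "nic2": "enp2s0f0",
           "nic3": "enp1s0f0", "nic4": "enp1s0f1"}

def use_physical_names(node, mapping):
    """Recursively rewrite 'name' fields that use logical nicN labels."""
    if isinstance(node, dict):
        if node.get("name") in mapping:
            node["name"] = mapping[node["name"]]
        for value in node.values():
            use_physical_names(value, mapping)
    elif isinstance(node, list):
        for item in node:
            use_physical_names(item, mapping)
    return node

# A trimmed-down config in the same shape as /etc/os-net-config/config.json.
config = json.loads("""{"network_config": [
    {"type": "interface", "name": "nic1",
     "addresses": [{"ip_netmask": "192.0.2.12/24"}]},
    {"type": "ovs_bridge", "name": "br-ex",
     "members": [{"type": "interface", "name": "nic3", "primary": true}]}
]}""")

use_physical_names(config, desired)
print(config["network_config"][0]["name"])                   # enp2s0f1
print(config["network_config"][1]["members"][0]["name"])     # enp1s0f0
```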

The latest ISO/RPMs that will be used for the Colorado release are:
http://artifacts.opnfv.org/apex/colorado/opnfv-2016-09-22.iso
http://artifacts.opnfv.org/apex/colorado/opnfv-apex-3.0-20160922.noarch.rpm
http://artifacts.opnfv.org/apex/colorado/opnfv-apex-common-3.0-20160922.noarch.rpm
http://artifacts.opnfv.org/apex/colorado/opnfv-apex-onos-3.0-20160922.noarch.rpm
http://artifacts.opnfv.org/apex/colorado/opnfv-apex-opendaylight-sfc-3.0-20160922.noarch.rpm
http://artifacts.opnfv.org/apex/colorado/opnfv-apex-undercloud-3.0-20160922.noarch.rpm

Thanks,

Tim Rozet
Red Hat SDN Team
