This is a private note on how to set up OpenStack Folsom on Fedora 18. Virtual networks are configured with the Quantum OVS plugin using VLANs. (Fedora 18's OVS doesn't support GRE tunnelling at the moment.)
Network Configuration
Physical network connections are as shown below. I used simple switching hubs for the Private NW and the Management NW.
 [Public NW] 10.64.201.0/24
     |
     |em1
  ------      p1p1   192.168.1.0/24
 |opst01|-------------------------------
  ------    192.168.1.101              |
     |em2                              |
     |                                 |
     |  [Private NW]                   |
     |                                 |
     |-----------------                |
     |em2             |em2             |  [Management NW]
  ------           ------              |
 |opst02|         |opst03|             |
  ------           ------              |
     |p1p1            |p1p1            |
     |192.168.1.102   |192.168.1.103   |
  -----------------------------------
The following components are placed on each server (opst01-opst03):
- opst01: keystone,nova,glance,cinder,quantum,horizon
- opst02: nova-compute, quantum-ovs-plugin
- opst03: nova-compute, quantum-ovs-plugin
They communicate through the Management NW. Administrative work is also done through the Management NW. (That is, you should log in to the servers via the Management NW; since the Public NW is part of the NW virtualization, its connection may become unstable while the virtual NWs are being configured.)
In this configuration, the nodes have no direct access to the Internet. You may be wondering how you can use yum repositories on the Internet... Yes, there is a trick. In my case, I connected a free port (em1) of opst02 to the Internet, and configured it as a masquerade router for the other nodes. That is:
On opst02,
# sysctl -w net.ipv4.ip_forward=1
# iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j MASQUERADE

On opst01 and opst03,
# route add default gw 192.168.1.102
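Note that these forwarding and NAT settings are not persistent across reboots. One way to persist them is sketched below; this is an assumption on my part (the file names follow Fedora conventions, and it presumes the classic iptables service scripts are installed alongside the disabled firewalld):

```shell
# On opst02: persist IP forwarding across reboots
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/90-forward.conf

# Save the current NAT rules so the iptables service restores them at boot
iptables-save > /etc/sysconfig/iptables
systemctl enable iptables
```

On opst01 and opst03, the default route can likewise be persisted with a GATEWAY=192.168.1.102 line in the ifcfg file for p1p1.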
opst01 works as a network gateway node. Virtual private networks are overlaid on the Private NW. The following is a sample logical network I will create in this document.
             [Public Network]
     |                        |      10.64.201.0/24
 ---------------------------------------------------
     |       [public01]       |
     |10.64.201.1             |10.64.201.3
 [router01]               [router02]
     |192.168.101.1           |192.168.101.1
     |                        |
     | 192.168.101.0/24       | 192.168.101.0/24
 -------------------------    -------------------------
     | [net01_subnet01] |         | [net02_subnet01] |
     |                  |         |                  |
     |192.168.101.2     |         |192.168.101.2     |
 [dnsmasq]              |     [dnsmasq]              |
                        |192.168.101.3               |192.168.101.3
                     [vm01]                       [vm02]
Common setup for opst01-opst03
Here's the common /etc/hosts entries.
/etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.101   opst01
192.168.1.102   opst02
192.168.1.103   opst03
These IPs are assigned to p1p1 of each server. em1 of opst01 and em2 of all three servers are just brought up without IP addresses. (Set BOOTPROTO="none" in ifcfg-emX.) By the way, "p1p1/emX" may correspond to "ethY" on your servers.
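For reference, a minimal ifcfg file for an interface that is brought up without an address could look like the following; this is a sketch following the usual ifcfg conventions, and the device name and any HWADDR line depend on your hardware. For example, /etc/sysconfig/network-scripts/ifcfg-em2:

```
DEVICE="em2"
ONBOOT="yes"
BOOTPROTO="none"
NM_CONTROLLED="no"
```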
SELinux is set to "Permissive" because the SELinux policy for OpenStack is still under development. The firewall (firewalld) is disabled for the sake of simplicity.
# sed -i '/^SELINUX=/s/^.*$/SELINUX=permissive/' /etc/selinux/config
# setenforce 0
# systemctl stop firewalld
# systemctl disable firewalld
Apply the latest updates. This is not optional: there are some critical fixes you need in order to run OpenStack.
# yum update
Don't forget to use NTP for time sync. By the way, ntpd has been replaced with chronyd in Fedora 18. Use chronyc to check whether chronyd is working.
# chronyc sources
210 Number of sources = 4
MS Name/IP address           Stratum Poll Reach LastRx Last sample
===============================================================================
^? 200.140.8.72.in-addr.arpa      0    10     0    10y    +0ns[   +0ns] +/-    0ns
^? tesla.selinc.com               0    10     0    10y    +0ns[   +0ns] +/-    0ns
^? paladin.latt.net               0    10     0    10y    +0ns[   +0ns] +/-    0ns
^? ntp1.ResComp.Berkeley.EDU      0    10     0    10y    +0ns[   +0ns] +/-    0ns
In addition to ntpd, some general network admin operations have changed in Fedora 18.
For example, setting permanent hostname is:
# hostnamectl set-hostname --static opst01

I'm not sure how NetworkManager can be used in a non-desktop environment, so I switched to the traditional network service.

# systemctl disable NetworkManager
# systemctl stop NetworkManager
# chkconfig network on
# service network start
Settings for opst01
General setup
Install common packages.
# yum install openstack-utils dnsmasq-utils
Keystone setup
Install packages and create database tables.
# yum install openstack-keystone
# openstack-db --init --service keystone
mysql-server is not installed.  Would you like to install it now? (y/n): y
...
Complete!
mysqld is not running.  Would you like to start it now? (y/n): y
Note: Forwarding request to 'systemctl enable mysqld.service'.
ln -s '/usr/lib/systemd/system/mysqld.service' '/etc/systemd/system/multi-user.target.wants/mysqld.service'
Since this is a fresh installation of MySQL, please set a password for the 'root' mysql user.
Enter new password for 'root' mysql user: pas4mysql
Enter new password again: pas4mysql
Verified connectivity to MySQL.
Creating 'keystone' database.
Asking openstack-keystone to sync the database.
Complete!
The MySQL root password "pas4mysql" is just an example; choose your own. The same goes for the other passwords in the rest of this document.
Create the admin user, role, and tenant. All of them are used for administrative operations.
# export SERVICE_TOKEN=$(openssl rand -hex 10)
# export SERVICE_ENDPOINT=http://opst01:35357/v2.0
# mkdir /root/work
# echo $SERVICE_TOKEN > /root/work/ks_admin_token
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
# systemctl enable openstack-keystone
# systemctl start openstack-keystone
# keystone user-create --name admin --pass pas4admin
# keystone role-create --name admin
# keystone tenant-create --name admin
# user=$(keystone user-list | awk '/admin/ {print $2}')
# role=$(keystone role-list | awk '/admin/ {print $2}')
# tenant=$(keystone tenant-list | awk '/admin/ {print $2}')
# keystone user-role-add --user-id $user --role-id $role --tenant-id $tenant
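A quick aside on the awk '/pattern/ {print $2}' idiom used here and throughout this note: the keystone CLI prints a bordered table, and since the border characters count as whitespace-separated fields, $2 of a matching row is the first real column (the ID). A minimal, self-contained illustration with a fabricated table (the ID is made up):

```shell
# A fabricated sample of the table layout the keystone CLI prints
table='+----------------------------------+-------+
|                id                | name  |
+----------------------------------+-------+
| 3a45e1c0ffee4b22aa11bb22cc33dd44 | admin |
+----------------------------------+-------+'

# $1 is the leading "|", so $2 is the id column of the matching row
echo "$table" | awk '/admin/ {print $2}'
```

Note that the pattern is a substring match: /user/ would also match a user named "superuser". It only works when the names are distinct enough, as they are here.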
Create keystonerc_admin and source it.
# cat <<'EOF' > /root/work/keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=pas4admin
export OS_AUTH_URL=http://opst01:35357/v2.0/
export PS1="[\u@\h \W(keystone_admin)]\$ "
EOF
# unset SERVICE_ENDPOINT
# unset SERVICE_TOKEN
# cd /root/work
# . keystonerc_admin
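With the rc file sourced, it's worth a quick sanity check that password authentication (rather than the service token) actually works before moving on. Any read-only keystone command will do; for example:

```shell
# Returns a token table if the credentials in keystonerc_admin are valid
keystone token-get

# Should list the admin user created above
keystone user-list
```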
Create a service entry for Keystone itself.
# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
# service=$(keystone service-list | awk '/keystone/ {print $2}')
# keystone endpoint-create --region regionOne \
    --service_id $service \
    --publicurl 'http://opst01:5000/v2.0' \
    --adminurl 'http://opst01:35357/v2.0' \
    --internalurl 'http://opst01:5000/v2.0'
Add demonstration tenants "redhat01" and "redhat02" and their user "enakai".
# keystone user-create --name enakai --pass xxxxxxxx
# keystone role-create --name user
# keystone tenant-create --name redhat01
# keystone tenant-create --name redhat02
# user=$(keystone user-list | awk '/enakai/ {print $2}')
# role=$(keystone role-list | awk '/user/ {print $2}')
# tenant=$(keystone tenant-list | awk '/redhat01/ {print $2}')
# keystone user-role-add --user-id $user --role-id $role --tenant-id $tenant
# tenant=$(keystone tenant-list | awk '/redhat02/ {print $2}')
# keystone user-role-add --user-id $user --role-id $role --tenant-id $tenant
Create rc files for demo tenants.
# cat <<'EOF' > /root/work/keystonerc_redhat01
export OS_USERNAME=enakai
export OS_TENANT_NAME=redhat01
export OS_PASSWORD=xxxxxxxx
export OS_AUTH_URL=http://opst01:5000/v2.0/
export PS1="[\u@\h \W(keystone_redhat01)]\$ "
EOF
# cat <<'EOF' > /root/work/keystonerc_redhat02
export OS_USERNAME=enakai
export OS_TENANT_NAME=redhat02
export OS_PASSWORD=xxxxxxxx
export OS_AUTH_URL=http://opst01:5000/v2.0/
export PS1="[\u@\h \W(keystone_redhat02)]\$ "
EOF
Note that the following setups should be done after keystonerc_admin is sourced.
# . keystonerc_admin
Glance setup
Install packages and create database tables.
# yum install openstack-glance
# openstack-db --init --service glance
Please enter the password for the 'root' MySQL user: pas4mysql
...
Complete!
Modify configs and start services.
# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host opst01
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name admin
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user admin
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password pas4admin
# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host opst01
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name admin
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user admin
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password pas4admin
# systemctl enable openstack-glance-registry
# systemctl enable openstack-glance-api
# systemctl start openstack-glance-registry
# systemctl start openstack-glance-api
Create a service entry in Keystone.
# keystone service-create --name=glance --type=image --description="Glance Image Service"
# service=$(keystone service-list | awk '/glance/ {print $2}')
# keystone endpoint-create --service_id $service \
    --publicurl http://opst01:9292/v1 \
    --adminurl http://opst01:9292/v1 \
    --internalurl http://opst01:9292/v1
Import a sample machine image of Fedora 17.
# glance add name=f17-jeos is_public=true disk_format=qcow2 container_format=ovf copy_from=http://berrange.fedorapeople.org/images/2012-11-15/f17-x86_64-openstack-sda.qcow2
# glance show $(glance index | awk '/f17-jeos/ {print $1}')
URI: http://opst01:9292/v1/images/14f3325d-c406-4fad-9464-34d12c9f5ea5
Id: 14f3325d-c406-4fad-9464-34d12c9f5ea5
Public: Yes
Protected: No
Name: f17-jeos
Status: saving
Size: 251985920
Disk format: qcow2
Container format: ovf
Minimum Ram Required (MB): 0
Minimum Disk Required (GB): 0
Owner: e7c2a756d749451fad888aeda30333bb
Created at: 2013-01-18T01:09:05
Updated at: 2013-01-18T01:09:05
The "Status: saving" line shows it's now being downloaded. It becomes "Status: active" once the download is finished.
Cinder setup
Install packages and create database table.
# yum install openstack-cinder
# openstack-db --init --service cinder
Please enter the password for the 'root' MySQL user: pas4mysql
Verified connectivity to MySQL.
Creating 'cinder' database.
Asking openstack-cinder to sync the database.
Complete!
Modify configs.
# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host opst01
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name admin
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user admin
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password pas4admin
Depending on the network configuration, you may need to explicitly set Cinder server's IP address.
# openstack-config --set /etc/cinder/cinder.conf DEFAULT iscsi_ip_address 192.168.1.101
Prepare Cinder's volume group. I'm using /dev/md125 (an mdraid partition) as the physical volume. Choose an appropriate one in your environment.
# pvcreate /dev/md125
# vgcreate cinder-volumes /dev/md125
# grep -q /etc/cinder/volumes /etc/tgt/targets.conf || sed -i '1iinclude /etc/cinder/volumes/*' /etc/tgt/targets.conf
Start services.
# systemctl enable tgtd
# systemctl start tgtd
# systemctl enable openstack-cinder-api
# systemctl start openstack-cinder-api
# systemctl enable openstack-cinder-scheduler
# systemctl start openstack-cinder-scheduler
# systemctl enable openstack-cinder-volume
# systemctl start openstack-cinder-volume
Create a service entry in Keystone.
# keystone service-create --name=cinder --type=volume --description="Cinder Volume Service"
# service=$(keystone service-list | awk '/cinder/ {print $2}')
# keystone endpoint-create --service_id $service \
    --publicurl "http://opst01:8776/v1/\$(tenant_id)s" \
    --adminurl "http://opst01:8776/v1/\$(tenant_id)s" \
    --internalurl "http://opst01:8776/v1/\$(tenant_id)s"
Nova setup
Install packages and modify basic config entries.
# yum install openstack-nova --enablerepo updates-testing
# openstack-db --init --service nova
Please enter the password for the 'root' MySQL user: pas4mysql
Verified connectivity to MySQL.
Creating 'nova' database.
Asking openstack-nova to sync the database.
Complete!
# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host opst01
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name admin
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user admin
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password pas4admin
Set up the QPID server.
# yum install qpid-cpp-server
# echo "auth=no" >> /etc/qpidd.conf
# systemctl enable qpidd
# systemctl start qpidd
Additional configs and start services.
# openstack-config --set /etc/nova/nova.conf DEFAULT volume_api_class nova.volume.cinder.API
# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,osapi_compute,metadata
# systemctl enable openstack-nova-api
# systemctl enable openstack-nova-cert
# systemctl enable openstack-nova-objectstore
# systemctl enable openstack-nova-scheduler
# systemctl start openstack-nova-api
# systemctl start openstack-nova-cert
# systemctl start openstack-nova-objectstore
# systemctl start openstack-nova-scheduler
Create a service entry in Keystone.
# keystone service-create --name=nova --type=compute --description="Nova Compute Service"
# service=$(keystone service-list | awk '/nova/ {print $2}')
# keystone endpoint-create --service_id $service \
    --publicurl "http://opst01:8774/v1.1/\$(tenant_id)s" \
    --adminurl "http://opst01:8774/v1.1/\$(tenant_id)s" \
    --internalurl "http://opst01:8774/v1.1/\$(tenant_id)s"
Quantum setup
Install packages and create database tables.
# yum install openstack-quantum openstack-quantum-openvswitch
# quantum-server-setup --plugin openvswitch
Quantum plugin: openvswitch
Plugin: openvswitch => Database: ovs_quantum
Redirecting to /bin/systemctl status mysqld.service
Please enter the password for the 'root' MySQL user:
Verified connectivity to MySQL.
Would you like to update the nova configuration files? (y/n): y
Configuration updates complete!
Modify configs and start Quantum's main service.
# openstack-config --set /etc/quantum/quantum.conf DEFAULT allow_overlapping_ips True
# systemctl enable quantum-server
# systemctl start quantum-server
Create bridges.
# systemctl enable openvswitch
# systemctl start openvswitch
# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex em1
# ovs-vsctl add-br br-em2
# ovs-vsctl add-port br-em2 em2
Create a service entry in Keystone.
# keystone service-create --name quantum --type network --description 'OpenStack Networking Service'
# service=$(keystone service-list | awk '/quantum/ {print $2}')
# keystone endpoint-create \
    --service-id $service \
    --publicurl "http://opst01:9696/" \
    --adminurl "http://opst01:9696/" \
    --internalurl "http://opst01:9696/"
# keystone user-create --name quantum --pass pas4quantum
# keystone tenant-create --name service
# user=$(keystone user-list | awk '/quantum/ {print $2}')
# role=$(keystone role-list | awk '/admin/ {print $2}')
# tenant=$(keystone tenant-list | awk '/service/ {print $2}')
# keystone user-role-add --user-id $user --role-id $role --tenant-id $tenant
Setup and start L2 agent.
# quantum-node-setup --plugin openvswitch
Quantum plugin: openvswitch
Please enter the Quantum hostname: opst01
Would you like to update the nova configuration files? (y/n): y
/usr/bin/openstack-config --set|--del config_file section [parameter] [value]
Configuration updates complete!
# openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
# openstack-config --set /etc/quantum/plugin.ini OVS tenant_network_type vlan
# openstack-config --set /etc/quantum/plugin.ini OVS network_vlan_ranges physnet1,physnet2:100:199
# openstack-config --set /etc/quantum/plugin.ini OVS bridge_mappings physnet1:br-ex,physnet2:br-em2
# systemctl enable quantum-openvswitch-agent
# systemctl start quantum-openvswitch-agent
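For reference, after these openstack-config calls the [OVS] section of /etc/quantum/plugin.ini should contain entries along these lines (a sketch; the actual file also carries other defaults):

```
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1,physnet2:100:199
bridge_mappings = physnet1:br-ex,physnet2:br-em2
```

This maps the logical network names "physnet1" (flat, for the external network on br-ex/em1) and "physnet2" (VLANs 100-199, for tenant networks on br-em2/em2) to their bridges.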
Setup and start DHCP agent.
# quantum-dhcp-setup --plugin openvswitch
Quantum plugin: openvswitch
Please enter the Quantum hostname: opst01
Configuration updates complete!
# systemctl enable quantum-dhcp-agent
# systemctl start quantum-dhcp-agent
Setup and start L3 agent.
# quantum-l3-setup --plugin openvswitch
Quantum plugin: openvswitch
Configuration updates complete!
# systemctl enable quantum-l3-agent
# systemctl start quantum-l3-agent
Restart Nova services.
# systemctl restart openstack-nova-api
# systemctl restart openstack-nova-cert
# systemctl restart openstack-nova-objectstore
# systemctl restart openstack-nova-scheduler
Horizon setup
Install and start service.
# yum install openstack-dashboard
# setsebool httpd_can_network_connect on
# systemctl enable httpd
# systemctl start httpd
I skipped the VNC gateway configuration.
Settings for opst02 and opst03
Nova setup
Copy and source keystonerc.
# mkdir /root/work
# cd /root/work
# scp opst01:/root/work/keystonerc_* ./
# . keystonerc_admin
Install packages.
# yum install openstack-nova python-cinderclient --enablerepo updates-testing
The libguestfs issue has been resolved by an official update. If you have updated all the packages (or, at least, the libguestfs package), you don't have to do the following step. I leave it here as a historical note.
Apply the latest build of libguestfs. This is to resolve the guestmount issue.
# yum update http://kojipkgs.fedoraproject.org//packages/libguestfs/1.20.1/3.fc18/x86_64/libguestfs-1.20.1-3.fc18.x86_64.rpm \
    http://kojipkgs.fedoraproject.org//packages/libguestfs/1.20.1/3.fc18/x86_64/libguestfs-tools-c-1.20.1-3.fc18.x86_64.rpm
Modify configs.
# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT volume_api_class nova.volume.cinder.API
# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,osapi_compute,metadata
# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname opst01
# openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_servers opst01:9292
# openstack-config --set /etc/nova/nova.conf DEFAULT glance_host opst01
# openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@opst01/nova
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host opst01
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name admin
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user admin
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password pas4admin
Start libvirtd service.
# systemctl enable libvirtd
# systemctl start libvirtd
# virsh net-destroy default
# virsh net-autostart default --disable
Quantum setup
Install packages and do basic setups.
# cd /root/work
# . keystonerc_admin
# yum install openstack-quantum openstack-quantum-openvswitch
# quantum-node-setup --plugin openvswitch
Quantum plugin: openvswitch
Please enter the Quantum hostname: opst01
Would you like to update the nova configuration files? (y/n): y
/usr/bin/openstack-config --set|--del config_file section [parameter] [value]
Configuration updates complete!
# openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
# ln -s /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini /etc/quantum/plugin.ini
# openstack-config --set /etc/quantum/plugin.ini OVS tenant_network_type vlan
# openstack-config --set /etc/quantum/plugin.ini OVS network_vlan_ranges physnet2:100:199
# openstack-config --set /etc/quantum/plugin.ini OVS bridge_mappings physnet2:br-em2
Create bridges.
# systemctl enable openvswitch
# systemctl start openvswitch
# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-em2
# ovs-vsctl add-port br-em2 em2
Start services.
# systemctl enable quantum-openvswitch-agent
# systemctl start quantum-openvswitch-agent
# systemctl enable openstack-nova-compute
# systemctl start openstack-nova-compute
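Once nova-compute is running, you can check from opst01 that the compute nodes have registered:

```shell
# On opst01: a ":-)" in the State column means the service is checking in;
# "XXX" means it has stopped reporting
nova-manage service list
```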
Create virtual networks
Define external network.
# tenant=$(keystone tenant-list | awk '/service/ {print $2}')
# quantum net-create --tenant-id $tenant public01 --provider:network_type flat --provider:physical_network physnet1 --router:external=True
# quantum subnet-create --tenant-id $tenant --name public01_subnet01 --gateway 10.64.201.254 public01 10.64.201.0/24 --enable_dhcp False
Create router and private network for redhat01.
# tenant=$(keystone tenant-list | awk '/redhat01/ {print $2}')
# quantum router-create --tenant-id $tenant router01
# quantum router-gateway-set router01 public01
# quantum net-create --tenant-id $tenant net01 --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 101
# quantum subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
# quantum router-interface-add router01 net01_subnet01
Create router and private network for redhat02.
# tenant=$(keystone tenant-list | awk '/redhat02/ {print $2}')
# quantum router-create --tenant-id $tenant router02
# quantum router-gateway-set router02 public01
# quantum net-create --tenant-id $tenant net02 --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 102
# quantum subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.101.0/24
# quantum router-interface-add router02 net02_subnet01
Now you can launch VMs from the Horizon dashboard. However, floating IP assignment is not supported by Horizon at the moment, so you have to do it with the quantum CLI.
For example, after launching a VM connected to net01 (Private NW) as user enakai in the tenant (project) "redhat01", the steps are as follows.
Create a new floating IP.
# cd /root/work
# . keystonerc_redhat01
# quantum floatingip-create public01
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.64.201.3                          |
| floating_network_id | dc6fbfb9-1589-42f3-abbc-930c5ef4edbf |
| id                  | 4583ae5b-1f16-4511-938f-4d3a727eaef7 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 0c1ddeee5d2d4e2a9176387412786b9d     |
+---------------------+--------------------------------------+
Check the VM's port ID.
# quantum port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 2da9ea02-b06a-44f9-9b31-3d73dda2560f |      | fa:16:3e:b8:d9:be | {"subnet_id": "f87d5156-eca2-427a-bed5-12fadbe4eea7", "ip_address": "192.168.101.2"} |
| 3e9ccde0-3398-41e1-8d46-3cb1010cce01 |      | fa:16:3e:1c:8b:58 | {"subnet_id": "f87d5156-eca2-427a-bed5-12fadbe4eea7", "ip_address": "192.168.101.3"} |
| 3f899464-1a8e-4e13-a2e2-aec16aeda95b |      | fa:16:3e:ca:d0:75 | {"subnet_id": "f87d5156-eca2-427a-bed5-12fadbe4eea7", "ip_address": "192.168.101.1"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
Associate the floating IP to the port.
# quantum floatingip-associate 4583ae5b-1f16-4511-938f-4d3a727eaef7 3e9ccde0-3398-41e1-8d46-3cb1010cce01
Associated floatingip 4583ae5b-1f16-4511-938f-4d3a727eaef7
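If everything is wired up correctly, the VM is now reachable from the Public NW through the floating IP (10.64.201.3 in this example), provided the tenant's security group rules allow the traffic:

```shell
# From a machine on the Public NW (10.64.201.0/24); if this is rejected,
# add ICMP (and TCP/22 for ssh) rules to the tenant's default security group
ping -c 3 10.64.201.3
```

SSH works the same way; the login account to use depends on the image.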