Changing the network of your OpenStack platform

One day I had to move an OpenStack platform from a network A to a network B. After hunting down the old IPs on this platform, I decided to write this little howto.

First of all, no VMs should be deployed!

Once your platform has been moved physically, these are the steps to get it up and working as before the move.

This OpenStack platform is composed of 5 hosts:

  • a controller node (keystone, nova and quantum server)
  • a network node for quantum/neutron
  • a storage node (glance and cinder)
  • two compute nodes (nova-compute and quantum agent)

Controller node

Keystone

The first thing to do is to check the keystone database for endpoint URLs. With this SQL query you can retrieve which endpoints need to be changed:

(none)  > use keystone
(keystone) > select endpoint.id,service.type,endpoint.url from endpoint,service where endpoint.service_id=service.id and url like "%10.2.1.83%" ;
+----------------------------------+----------+----------------------------------------+
| id                               | type     | url                                    |
+----------------------------------+----------+----------------------------------------+
| 2ab4f83325a24fb9ab00671fd9928a06 | identity | http://10.2.1.83:5000/v2.0             |
| 21945164dc654dd9a3d2e3776dae7830 | compute  | http://10.2.1.83:8774/v2/$(tenant_id)s |
| a1960ac743a944b3bf6588d30d4382cc | ec2      | http://10.2.1.83:8773/services/Cloud   |
+----------------------------------+----------+----------------------------------------+

(keystone) > update endpoint set url="http://$NEW_IP:5000/v2.0" where id="2ab4f83325a24fb9ab00671fd9928a06" ;
(keystone) > update endpoint set url="http://$NEW_IP:8774/v2/$(tenant_id)s" where id="21945164dc654dd9a3d2e3776dae7830" ;
(keystone) > update endpoint set url="http://$NEW_IP:8773/services/Cloud" where id="a1960ac743a944b3bf6588d30d4382cc" ;
(keystone) > exit
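Before leaving the database, it is worth re-running the first SELECT: since all three endpoint URLs have been updated, a query on the old IP should now come back empty.

```
(keystone) > select endpoint.id,service.type,endpoint.url from endpoint,service where endpoint.service_id=service.id and url like "%10.2.1.83%" ;
Empty set
```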

Now restart the keystone service to pick up the new endpoint IPs:

root@myhost$ service keystone restart

Nova

The nova configuration file also contains $OLD_IP in the novncproxy URL, so it must be changed like this:

root@myhost$ sed -i "s/$OLD_IP/$NEW_IP/g" /etc/nova/nova.conf

root@myhost$ cd /etc/init.d

root@myhost$ for i in nova-*; do service "$i" restart; done
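If you want to rehearse the substitution before touching the real file, you can run the same sed against a throwaway copy. The file path and option names below are illustrative, not your actual nova.conf:

```shell
# Build a sample file containing the old IP (option names are examples).
OLD_IP=10.2.1.83
NEW_IP=10.2.2.83
cat > /tmp/nova.conf.sample <<EOF
novncproxy_base_url=http://$OLD_IP:6080/vnc_auto.html
vncserver_proxyclient_address=$OLD_IP
EOF

# Same in-place substitution as on the real file.
sed -i "s/$OLD_IP/$NEW_IP/g" /tmp/nova.conf.sample

# Every occurrence should now carry the new IP.
grep -c "$NEW_IP" /tmp/nova.conf.sample   # → 2
```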

System

Also check whether /etc/rc.local contains custom rules such as masquerading, and update the IP there too:

sed -i "s/$OLD_IP/$NEW_IP/g" /etc/rc.local
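For instance, a NAT rule of this kind (addresses and interface name are purely illustrative) would have to follow the platform to the new range:

```shell
# Hypothetical /etc/rc.local content: masquerade a tenant range behind eth0.
# After the move, 10.2.1.0/24 would have to become the new range.
iptables -t nat -A POSTROUTING -s 10.2.1.0/24 -o eth0 -j MASQUERADE
```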

Network node

On the network node, the br-ex interface has an IP from the old network that must be changed to fit the new networking range:

root@myhost$ ifdown br-ex

# br-ex holds the default gateway, so the default route goes down with it

# Change the IP and netmask in the /etc/network/interfaces file, then bring br-ex back up

root@myhost$ ifup br-ex

# Verify the changes:

root@myhost$ ifconfig br-ex && route -n
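For reference, the br-ex stanza in /etc/network/interfaces ends up looking something like this (the addresses here are examples for the new range, not your actual values):

```
auto br-ex
iface br-ex inet static
    address 10.2.2.83
    netmask 255.255.255.0
    gateway 10.2.2.1
```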

As on the controller, there may be custom rules in /etc/rc.local, so check and change them here too.

Compute node

The nova configuration file should be the only change needed on the compute nodes.

Change the novnc proxy URL, as on the controller node:

root@myhost$ sed -i "s/$OLD_IP/$NEW_IP/g" /etc/nova/nova.conf

Once the change is done, restart the nova-compute service:

root@myhost$ service nova-compute restart
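To confirm that the compute node has re-registered against the controller's new address, you can check the service list from the controller (an era-appropriate command; the output should show nova-compute as enabled and alive):

```shell
root@myhost$ nova-manage service list
```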

Tests

Now test every service to validate that everything is OK:

  • tenant management (keystone)
  • user management (keystone)
  • volume management (cinder)
  • image management (glance)
  • VM management (nova)

If all the above services are OK, your work is done!
