CentOS Lab 5: Configure Compute Node – Kilo

POD parameters: OpenStack Group-1, user0, aio110 10.1.64.110, compute120 10.1.64.120, [email protected]

User: user0
vnc: lab.onecloudinc.com:5900

aio110
eth0: 10.1.64.110
eth1: 10.1.65.110
eth2: ext-net
Netmask: 255.255.255.0
Gateway: 10.1.64.1

compute120
eth0: 10.1.64.120
eth1: 10.1.65.120
eth2: ext-net
Netmask: 255.255.255.0

Network & Allocation Pool
Float Range: 10.1.65.0010.1.65.00
Network: 10.1.65.0/24
Gateway: 10.1.65.1
DNS: 10.1.1.92

Up to now you have been working on your AIO Node. You are now going to add compute120 as a compute node to your OpenStack installation. Take a moment to look at the lab topology diagram in the Lab Introduction module to see the network adapters that the compute120 node will have. (http://onecloudclass.com/lab-introduction/)

As you run through this lab, you may wish to compare the configuration files on this node to those of the aio node. Can you find any differences?

Basic Configuration

In this lab section you will need to open a new terminal session and log in to work on the compute120 node, not the aio110 node!

Step 1: If you have not already, you will need to SSH to the Compute node and log in as “centos”:

ssh centos@compute120

You should not need a password, but if one is requested, use centos as the password.

Then enter the following command, which allows you to become the root user (in root’s home directory, which is important for many commands to operate properly). If a password is requested, use centos as the sudo password.

sudo su -

Step 2: Edit the /etc/sysctl.conf file:

We’re going to update the “Reverse Path” limitations in the kernel for the Compute node even though we’re not going to be routing packets here. This just ensures that we don’t accidentally configure forwarding in the kernel that might leak packets between networks:

cat >> /etc/sysctl.conf <<EOF
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF

Run the following command to load the changes you just made. It will also echo the values you just saved, for confirmation:

sysctl -p

Example output:

net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
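
If you ever want to double-check a single setting after loading it, you can query it directly with sysctl (optional):

sysctl net.ipv4.conf.all.rp_filter

Example output:

net.ipv4.conf.all.rp_filter = 0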

Step 3: Install the NTP package on your Compute node.

yum install -y ntp

In the lab we have an NTP server running on “gw.onecloud”, which provides a reference clock for the nodes. It may take 15-20 seconds for the command below to execute successfully.

ntpdate gw.onecloud

Example output:

25 Aug 19:01:19 ntpdate[1837]: adjust time server 10.1.64.1 offset 0.026366 sec

Next we’ll replace the contents of the /etc/ntp.conf file so that we point to our local NTP server (the defaults work as well, but this is “better internet citizenship”). We’re going to do this with a “here document”, which copies the text below into the file without our having to open an editor. Very few lines are needed, so this is efficient for a simple configuration like this one.

cat > /etc/ntp.conf <<EOF
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server gw.onecloud
EOF

Next enable and restart the NTP service:

systemctl enable ntpd.service
systemctl start ntpd.service
systemctl status ntpd.service

Now let’s ensure we’re actually getting updates. We can use ntpq (the NTP query tool) to check against our upstream NTP server. Updates arrive every “poll” seconds; the “when” column tells you how long ago the peer was last heard from, while “delay” and “offset” tell you how far away the NTP server is and what the calculated offset to the local clock is. The fact that we’re receiving updates is adequate for our needs.

ntpq -p

Example output:

remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
gw.onecloud     91.189.94.4      3 u   55   64    1    1.159   14.705   0.000
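
For a quick yes/no view of clock synchronization you can also check timedatectl, which on CentOS 7 reports an “NTP synchronized” field (the exact field name can vary with the systemd version):

timedatectl | grep -i ntp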

Step 4: OpenStack repository installation

To enable the OpenStack repositories, install the epel-release and rdo-release-kilo packages. The EPEL repository provides a set of additional tools that are used as part of the OpenStack deployment but are not part of the baseline CentOS/RHEL repositories, while the RDO repository is the Red Hat community’s packaging of the OpenStack service components. We’ll install the EPEL repository first, and then the RDO project’s Kilo-specific repository:

yum install epel-release -y
yum install https://github.com/onecloud/osbootcamp_repo/raw/master/osbc/kilo/rdo-release-kilo-2.noarch.rpm -y
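
If you would like to confirm that the new repositories are active before continuing, you can list them; the repository IDs to look for are typically epel and an openstack-kilo entry from the RDO package (the exact IDs may differ slightly):

yum repolist | grep -iE 'epel|kilo'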

On RHEL and CentOS, SELinux is enabled by default, and there is now an OpenStack-specific set of SELinux profiles. We’ll install them, but to ensure that we don’t run into SELinux-specific installation or configuration issues, we’ll also disable SELinux enforcement for this class. It should be possible to install properly without “setenforce 0”, but to avoid any latest-code issues, we’ll still disable enforcement for now.

Install the SELinux OpenStack profiles:

yum install openstack-selinux -y

And then set SELinux to permissive mode:

setenforce 0
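
You can verify that the change took effect; getenforce should now report Permissive:

getenforce

Example output:

Permissive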

Next we’ll install the OpenStack utilities package. This includes a few tools that make our lives easier, specifically:

  • “openstack-config”: A tool for managing the ‘ini’ style configuration files used by most OpenStack services. We’ll use this tool in the steps below rather than editing the files by hand (it’s actually based on another tool called “crudini”).
  • “openstack-db”: A tool for configuring a local MySQL database on a per-OpenStack-service basis. We’ll be doing this manually in the lab in order to be clearer about what we’re doing.
  • “openstack-service”: A tool to determine the state of individual OpenStack services, especially given that their actual “Linux service” names may differ from what we’d otherwise expect.
  • “openstack-status”: A tool to get an overview of the current state of OpenStack services on a machine.

yum install openstack-utils -y
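
As a quick illustration of how openstack-config reads and writes ini values, you can set and then read back a value in a scratch file (the /tmp/example.ini path is just for demonstration; it is not used anywhere in the lab):

openstack-config --set /tmp/example.ini demo key value
openstack-config --get /tmp/example.ini demo key
rm -f /tmp/example.ini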

Step 5: Install the Nova hypervisor agent, the Neutron ML2 configurations, and the Neutron Open vSwitch agent on the compute120 node.

Note: This is a subset of the OpenStack components we installed on the AIO node, as the AIO node is also a compute node and so needs similar features.

These packages install the Neutron ML2 plugin configuration files, and the Neutron OVS agent along with its startup scripts:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

These two packages install the nova-compute agent and the virtual disk system tools needed to connect VMs to local and remote disks:

yum install openstack-nova-compute sysfsutils -y

Step 6: Install OpenStack client tools

While not strictly necessary, we will install the nova and neutron command line tools on the local machine. This will help if we need to debug connectivity or compute functionality, since we’ll have the tools on the same machine we’re likely to have a terminal session on (and will for this lab). Because the command line clients talk to the API endpoints that live on the AIO node, they can technically be run from _any_ node with connectivity. As it turns out, the package dependencies above already pull in the entire Python tool set for OpenStack, so this step isn’t strictly needed, but it doesn’t hurt, and it shows how you would install the clients on a machine that isn’t running any OpenStack components:

yum install python-neutronclient python-novaclient -y
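
You can confirm that the command line clients installed correctly by asking each one for its version:

nova --version
neutron --version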

Step 7: Create and configure nova-compute via openstack-config and the /etc/nova/nova.conf file.

As we did for the AIO node, we’ll configure the nova components. Specifically we’ll add the following configurations:

  • RabbitMQ connection parameters
  • VNC virtual machine KVM connection parameters
  • Network configuration (this is telling Nova how to connect to Neutron, and what services to manage directly)
  • Keystone configuration (for authentication)
  • Glance connection parameters
  • Neutron connection parameters

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host aio110
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_userid test
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password test
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.64.120
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.64.120
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://10.1.64.110:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://aio110:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://aio110:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password pass
openstack-config --set /etc/nova/nova.conf glance host aio110
openstack-config --set /etc/nova/nova.conf neutron url http://aio110:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password pass
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://aio110:35357/v2.0
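
You can spot-check that the values landed in the file by using openstack-config in read mode; for example, my_ip should come back as the compute node’s management address:

openstack-config --get /etc/nova/nova.conf DEFAULT my_ip

Example output:

10.1.64.120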

Step 8: Start the Compute service, including its dependencies

We’ll enable and then start both the libvirt daemon and the nova-compute service agent:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
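
If openstack-nova-compute fails to start or shows as failed in the status output, the compute log is the first place to look (this is the path the RDO packages log to):

tail -n 20 /var/log/nova/nova-compute.log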

Step 9: Configure Neutron by creating and modifying the /etc/neutron/neutron.conf file.

On the Compute node we don’t have nearly as much to configure, as we’re principally just setting up the Neutron OVS agent. But we still need to define:

  • RabbitMQ connection parameters
  • Keystone connection parameters
  • Core Neutron service model components (ml2, router, overlapping IPs)

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host aio110
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_userid test
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password test
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://aio110:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://aio110:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password pass
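
As before, you can read a value back to confirm the file was written as expected:

openstack-config --get /etc/neutron/neutron.conf DEFAULT rabbit_host

Example output:

aio110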

Step 10: While the Neutron server (on our AIO node) will actually be driving the configuration of the local OVS instance, we still have to tell the agent about the network types it should be able to create (the type_drivers), and the mechanism it should expect (it’s the OVS agent, so this should be obvious, but…). We’ll also pass in information about how security groups are handled so that the mechanism driver is able to manage them appropriately (in this case, the hybrid OVS driver actually defers to the nova-compute process). And we still have to tell the local node how to configure its local tunnel endpoint (the local_ip).

Note: local_ip under the [ovs] section has to be set to the eth1 address of your compute node (10.1.65.120), which is the network the GRE tunnels run over.

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.1.65.120
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini agent tunnel_types gre
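
Since local_ip is the value most often mis-set, it is worth reading it back before starting the agent:

openstack-config --get /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip

Example output:

10.1.65.120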

Step 11: Enable and start the Open vSwitch service (the Neutron OVS agent will create the bridges it needs for internal communication once it is started):

systemctl enable openvswitch.service
systemctl start openvswitch.service
systemctl status openvswitch.service
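
At this point you can already talk to Open vSwitch; until the Neutron agent has started and created its bridges, ovs-vsctl show will typically report little more than the database UUID and the OVS version:

ovs-vsctl show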

Step 12: Create a symbolic link so that the generic /etc/neutron/plugin.ini path points at the ML2 plugin configuration file:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
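
You can confirm that the link points where you expect:

ls -l /etc/neutron/plugin.ini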

The packaged systemd unit file for the Open vSwitch agent points at the legacy ovs_neutron_plugin.ini path, so back it up and rewrite it to reference plugin.ini instead:

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service
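
You can verify that the substitution worked, and, since a unit file was edited, ask systemd to reload its definitions (daemon-reload is harmless if it turns out not to be strictly needed):

grep plugin.ini /usr/lib/systemd/system/neutron-openvswitch-agent.service
systemctl daemon-reload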

Step 13: Restart the required services as shown below:

systemctl restart openstack-nova-compute.service
systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service

Step 14: Validate the OpenStack configuration from the Compute node.

First we create an openrc.sh script so that we can easily run OpenStack commands on the local node:

cat > ~/openrc.sh <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://aio110:35357/v2.0
EOF

And then we source it to load those environment variables into our local shell:

source ~/openrc.sh

Note: If you are working over VNC and your terminal session (not the VNC session itself) is closed for any reason, you need to re-run source ~/openrc.sh to reload the environment variables.
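
You can confirm that the variables are set in the current shell:

env | grep ^OS_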

Step 15: Type the following command and check that your new Compute node is listed, ensuring that its service state is “up”.

nova service-list

Example output:

    +----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
    | Id | Binary           | Host             | Zone     | Status  | State | Updated_at                 | Disabled Reason |
    +----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
    | 1  | nova-conductor   | aio110           | internal | enabled | up    | 2015-05-11T00:51:34.000000 | -               |
    | 2  | nova-cert        | aio110           | internal | enabled | up    | 2015-05-11T00:51:34.000000 | -               |
    | 3  | nova-scheduler   | aio110           | internal | enabled | up    | 2015-05-11T00:51:34.000000 | -               |
    | 4  | nova-consoleauth | aio110           | internal | enabled | up    | 2015-05-11T00:51:34.000000 | -               |
    | 5  | nova-compute     | aio110           | nova     | enabled | up    | 2015-05-11T00:51:35.000000 | -               |
    | 6  | nova-compute     | compute120       | nova     | enabled | up    | 2015-05-11T00:51:40.000000 | -               |
    +----+------------------+------------------+----------+---------+-------+----------------------------+-----------------+
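
You can also confirm that the new node has registered as a hypervisor; both aio110 and compute120 should appear in the listing:

nova hypervisor-list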

We can also ensure that the Neutron Open vSwitch agent is running on the compute node:

neutron agent-list

Example output:

    +--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+
    | id                                   | agent_type         | host             | alive | admin_state_up | binary                    |
    +--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+
    | 1b934e8a-031c-4243-997c-11e857623a02 | Open vSwitch agent | compute120       | :-)   | True           | neutron-openvswitch-agent |
    | 2cd3c13f-40a4-4aac-84b6-487fda052c01 | DHCP agent         | aio110           | :-)   | True           | neutron-dhcp-agent        |
    | 9ab173d2-7061-405a-9a94-b71d7ec540e9 | L3 agent           | aio110           | :-)   | True           | neutron-l3-agent          |
    | f6193494-c5c6-44dc-92eb-19caec50ab83 | Metadata agent     | aio110           | :-)   | True           | neutron-metadata-agent    |
    | ff8402e2-9892-4ca4-920f-f19bbeb75f77 | Open vSwitch agent | aio110           | :-)   | True           | neutron-openvswitch-agent |
    +--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+

With GRE tunnels specifically, the OVS controller (in our case, the Neutron server) will establish a connection between all available hosts as soon as they become available. So we can further validate that our system is running properly with the ovs-vsctl show command. We’re looking for the automatically created br-tun bridge, and should expect to see a GRE port carrying the “eth1”-associated IP addresses of the compute and control nodes (note that we moved this address to br-ex on the AIO node). These addresses come from the ML2 configuration that we did on both the AIO and Compute nodes, when we set the local_ip parameter above.

ovs-vsctl show

Example output showing the GRE port with local_ip and remote_ip (the addresses shown here are from a different environment; in this lab they will be on the 10.1.65.0/24 network):

        Bridge br-int
            fail_mode: secure
            Port br-int
                Interface br-int
                    type: internal
            Port patch-tun
                Interface patch-tun
                    type: patch
                    options: {peer=patch-int}
        Bridge br-tun
            Port br-tun
                Interface br-tun
                    type: internal
            Port "gre-0a000205"
                Interface "gre-0a000205"
                    type: gre
                    options: {in_key=flow, local_ip="10.0.2.4", out_key=flow, remote_ip="10.0.2.5"}
            Port patch-int
                Interface patch-int
                    type: patch
                    options: {peer=patch-tun}
        ovs_version: "2.0.1"
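
If you just want the port names on the tunnel bridge without the full dump, ovs-vsctl can list them directly; you should see the patch-int port plus the gre-* port from the output above:

ovs-vsctl list-ports br-tun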