CentOS Lab 3: Nova – Kilo

POD parameters: OpenStack Group-1 user0 aio110 compute120 [email protected]
User aioX computeY Network & Allocation Pool
VNC: lab.onecloudinc.com:5900

aio node:
  eth0    :
  eth1    :
  eth2    : ext-net
  Netmask :
  Gateway :

compute node:
  eth0    :
  eth1    :
  eth2    : ext-net
  Netmask :
  Gateway :

Float Range :
Network     :
Gateway     :
DNS         :

In this Lab we will deploy the OpenStack Compute Service, aka Nova.

Nova is a cloud compute controller, which is the core of any IaaS system. Nova interacts with Keystone for authentication, Glance for images, Neutron for network services (though it still has its own embedded option as well), and Horizon as a user and administrative graphical (web-based) interface. Nova can manage a number of different underlying compute, storage, and network services, and is in the process of adding the ability to manage physical, non-virtualized compute components as well!

In this lab, we’ll focus on deploying the compute control components (API servers, etc.) as well as a compute agent that will run on the same server (All-In-One mode). In a later lab, we will add a second separate compute instance to highlight how additional services are added, and capacity in the cloud can be scaled.

Compute Service Installation

Step 1: As with the previous labs, you will need to SSH into the aio node.

If you have logged out, SSH into your AIO node:

ssh centos@aio110

If asked, the user password (like the sudo password) is centos. Then become root:

sudo su -

Then we’ll source the OpenStack administrative user credentials. As you’ll recall from the previous lab, this sets a set of environment variables (OS_USERNAME, etc.) that are then picked up by the command-line tools (like the openstack and nova tools we’ll be using in this lab) so that we don’t have to pass the equivalent --os-username command-line options for each command we run:

source ~/openrc.sh
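If you want to confirm what sourcing the file did, you can inspect the environment. The variable values below are an assumed sketch of a typical openrc.sh; your pod's actual file may differ:

```shell
# Example contents of a typical openrc.sh (assumed values; check your own file):
export OS_USERNAME=admin
export OS_PASSWORD=pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://aio110:35357/v2.0

# After sourcing, confirm the variables are set in the environment:
env | grep '^OS_'
```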

Install Compute Controller Service packages

Step 2: You will now install a number of nova packages that will provide the Compute services on the aio node:

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y

You have just installed:

  • openstack-nova-api: Accepts and responds to end-user compute API calls.
  • openstack-nova-cert: Manages x509 certificates.
  • openstack-nova-conductor: Acts as an intermediary between compute nodes and the nova database.
  • openstack-nova-console: Authorizes tokens for users that console proxies provide.
  • openstack-nova-novncproxy: Provides a proxy for accessing running instances through a VNC connection in a web browser.
  • openstack-nova-scheduler: Determines how to dispatch compute and volume requests.
  • python-novaclient: Client library for the OpenStack Compute API.

Install Compute Node packages

Step 3: While the previous step installed the service components, we also want to configure a local compute agent to manage our local KVM hypervisor, and we’ll install the sysfsutils package to add the required local tools for managing virtual disk connectivity.

yum install openstack-nova-compute sysfsutils -y

As with our previous steps, we’ll create the database for nova to store its state in, and configure the nova user access credentials (again, super secret: pass):

Create Database for Compute Service

Step 4: Create the nova database for OpenStack Nova by logging into MariaDB (root password: pass):

mysql -uroot -ppass
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'pass';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'pass';

Step 5: Create a nova service user in Keystone

We need to create the user that Nova uses to authenticate with the Identity Service. As with Glance, we’ll add the nova user to the service tenant and give the user the admin role:

openstack user create nova --password pass --email [email protected]

Associate the user with the tenant and role:

openstack role add --project service --user nova admin

While we’re at it, we also need to configure the service and endpoint catalog entries in Keystone.

The service and endpoint entries are just like the ones we created for Glance, but now we’re using the well-known name nova and the type tag compute:

openstack service create --name nova --description "Compute service" compute
openstack endpoint create --publicurl http://aio110:8774/v2/%\(tenant_id\)s --internalurl http://aio110:8774/v2/%\(tenant_id\)s --adminurl http://aio110:8774/v2/%\(tenant_id\)s --region RegionOne compute

Example output:

| Field        | Value                               |
| adminurl     | http://aio110:8774/v2/%(tenant_id)s |
| id           | a741f82c58ac475d8519cf8e9431ec0c    |
| internalurl  | http://aio110:8774/v2/%(tenant_id)s |
| publicurl    | http://aio110:8774/v2/%(tenant_id)s |
| region       | RegionOne                           |
| service_id   | c6f1f6c038f648448e560b6cb5075556    |
| service_name | nova                                |
| service_type | compute                             |
Note: This endpoint is a little more complicated than the Glance endpoint, which was effectively just a hostname and a port. In this case we also require a tenant ID to be mapped into the path name, or the API will not function properly, so we’ve passed a substitution template that client applications (like the default Python CLI tools) can use to properly format their API requests.
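To make the substitution concrete, here is a sketch of how a client expands the endpoint template. The tenant ID below is a made-up example; real IDs come from Keystone:

```shell
# The template stored in the Keystone catalog:
TEMPLATE='http://aio110:8774/v2/%(tenant_id)s'

# A made-up tenant ID for illustration:
TENANT_ID='b5f5e72a30cd4aa8b5d3e1f9a1c2d3e4'

# Substitute the tenant ID into the path, as a client library would:
URL=$(printf '%s\n' "$TEMPLATE" | sed "s|%(tenant_id)s|$TENANT_ID|")
echo "$URL"   # -> http://aio110:8774/v2/b5f5e72a30cd4aa8b5d3e1f9a1c2d3e4
```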

Configure Compute Service

Step 6: Configure the common compute services’ connections to the internal components of Nova (RabbitMQ), the database, and Keystone, plus a few less common settings.

As with glance, we configure RabbitMQ connectivity to allow the nova processes to leverage the message queue for communication. We’ll also configure the database connection for those services that talk directly to the database (principally the API service, Scheduler, and the Compute Conductor). We’ll also establish a connection to Keystone so that Nova can authenticate itself for communications with other services (e.g. talking to Glance), or to accept and validate client communications (nova CLI authenticating with Nova via Keystone).

We’ll also need to configure the VNC server (keyboard/video/mouse via web browser for “console” access to our virtual machines) and a connection to Glance (we’ll need to be able to fetch the images that we’re going to store in Glance for our virtual machines).

First we’ll establish the required communications parameters for RabbitMQ. These parameters go in the [DEFAULT] section.

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host aio110
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_userid test
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password test

Next we’ll configure the database communications, again using the openstack-config tool.

openstack-config --set /etc/nova/nova.conf database connection 'mysql://nova:pass@aio110/nova'

As should be obvious, this is a much more efficient method than manually editing the files, and does reduce the likelihood of “placement” errors. It’s still important to get the actual parameters right as well!
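For reference, these commands simply write ini-style key/value pairs into the named sections. The resulting fragment of /etc/nova/nova.conf should look roughly like this (a sketch; ordering and the surrounding default options will differ):

```ini
[DEFAULT]
rpc_backend = rabbit
rabbit_host = aio110
rabbit_userid = test
rabbit_password = test

[database]
connection = mysql://nova:pass@aio110/nova
```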

And we’ll carry on with the Keystone config. Much like with Glance, we tell Nova “where” Keystone lives, but in this case we differentiate between the authorization and identity endpoints: one is used by Nova to request a token for itself (“I’d like a token for myself, please”), the other to verify client tokens (“is this client/token valid?”).

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri 'http://aio110:5000/v2.0'
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri 'http://aio110:35357'
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password pass
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
Note: That last parameter is actually in the [DEFAULT] section, but we’ve included it here as it’s part of enabling Keystone, telling the system to use Keystone (rather than a local file) for its authentication needs. This is one of the values of the openstack-config tool, as this is the sort of parameter that might easily get added to the wrong section of the nova.conf file!

Next we’ll provide the configuration for the VNC proxy process, which provides a web based ‘Keyboard Video Mouse’ interface for interacting with the console of our virtual compute devices.

Note: The my_ip parameter really does want an IP address, not a host name.
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip ''
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen ''
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address ''
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url ''
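For illustration only: assuming the aio node’s eth0 address were 10.1.64.110 (an assumed value; take the real address from your pod parameters table), the filled-in entries in the [DEFAULT] section would look something like:

```ini
# 10.1.64.110 is an assumed example IP; substitute your pod's actual address
vnc_enabled = True
my_ip = 10.1.64.110
vncserver_listen = 10.1.64.110
vncserver_proxyclient_address = 10.1.64.110
novncproxy_base_url = http://10.1.64.110:6080/vnc_auto.html
```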

Next we’ll add the pointer to Glance so that Nova can interoperate with the Image service.

openstack-config --set /etc/nova/nova.conf glance host 'aio110'

Step 7: Populate the database tables for the nova database.

We’ll use the same model we used with glance, and leverage the nova-manage tool to migrate the database from nothing to “current” state.

su -s /bin/sh -c "nova-manage db sync" nova

Then we’ll enable and start (or restart) the services that we’ve configured thus far.

systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

And we also need to start the Nova compute services so that we can eventually turn on a VM!

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service

Step 8: Verify Nova Operations:

Even though we have configured Nova properly, we can’t yet boot a VM. This is because we have no network service yet, and we’ve not enabled the Nova Network model services at this point. In the next lab we’ll enable Neutron so that we finally have network functionality and will be able to actually _use_ this OpenStack environment. Until then, we can at least ensure that the OpenStack Compute service is healthy and ready to start serving us as soon as the network comes online.

First, we can see whether the services that make up Nova (api, scheduler, conductor, consoleauth, cert, and at least our first compute node) have checked in with the API service. This will let us know whether our interprocess messaging (RabbitMQ), database (MariaDB), and Keystone connections are functional:

nova service-list

Example output:

| Id | Binary           | Host   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
| 1  | nova-conductor   | aio110 | internal | enabled | up    | 2015-04-27T12:54:25.000000 | -               |
| 2  | nova-consoleauth | aio110 | internal | enabled | up    | 2015-04-27T12:54:25.000000 | -               |
| 3  | nova-scheduler   | aio110 | internal | enabled | up    | 2015-04-27T12:54:25.000000 | -               |
| 4  | nova-cert        | aio110 | internal | enabled | up    | 2015-04-27T12:54:25.000000 | -               |
| 5  | nova-compute     | aio110 | nova     | enabled | up    | 2015-04-27T12:54:19.000000 | -               |

We had also previously configured a connection to Glance, and we should be able to ask Nova to ask Glance what images are available as in:

nova image-list

Example output:

| ID                                   | Name                | Status | Server |
| ff10d15d-d75d-4bda-b9bc-342213a95b03 | CirrOS 0.3.2        | ACTIVE |        |
| 8f90a562-e995-4f86-a7c1-b76a901f12b5 | cirros_0.3.2_direct | ACTIVE |        |