So we have installed OpenStack on our test environment, but we have not done anything with it yet.

In this article we will explore some basic OpenStack use cases.

General Information

This article is intended as a quick-start guide for OpenStack users. It shows how common tasks like

  • uploading images
  • creating and removing instances
  • creating networks

inside the environment can be accomplished from the command line.

For details on the web interface, please have a look at the SUSE Cloud User Guide.

The environment-specific information in this article refers to the SUSECloud test setup.

Connect to the SUSECloud Testbed

On the Host

The admin network (192.168.124.0/24) is reachable from the host system. The preconfigured public network is 192.168.126.0/24.

The deployment does not allow you to SSH directly to the simulated bare-metal servers; you have to go through the admin server. The default address of the admin server is 192.168.124.10 (SUSECLOUD_ADMIN).

Remember: This is our test cluster setup:

Connect to Dashboard

Dashboard URL: http://SUSECLOUD_CONTROL_ADMIN or http://SUSECLOUD_CONTROL_PUBLIC

OpenStack CLI tools

We need to install the Python client libraries. Have a look at Install OpenStack Clients. For SUSE, use:

sudo zypper in python-novaclient
sudo zypper in python-glanceclient
sudo zypper in python-neutronclient
sudo zypper in python-cinderclient

Setup CLI environment

Add these commands to your login profile dotfile, or create a file openrc that you can source each time you want to set up this environment.

OpenStack API setup:

cat > openrc <<'DELIM'
export SUSECLOUD_ADMIN=192.168.124.10
export SUSECLOUD_CONTROL_ADMIN=192.168.124.85
export SUSECLOUD_CONTROL_PUBLIC=192.168.126.2

export OS_USERNAME=admin
export OS_PASSWORD=crowbar
export OS_TENANT_NAME=openstack
export OS_AUTH_URL=http://${SUSECLOUD_CONTROL_PUBLIC}:5000/v2.0/
export OS_AUTH_STRATEGY=keystone
DELIM
source openrc

To test it:

nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
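If this fails with an authentication error, double-check the values in openrc. Depending on the client version, nova also ships a credentials subcommand that shows what keystone returned for your user:

nova credentials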

Glance image management

Upload Image

Download a demo image:

wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

Upload it to glance:

glance add is_public=true container_format=bare disk_format=qcow2 distro="Cirros" name="Cirros-0.3.1" < cirros-0.3.1-x86_64-disk.img
Added new image with ID: 276bf12d-5450-45a9-907d-eaa3375ed5e5
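To inspect the metadata of the uploaded image, the same legacy glance CLI offers a show command (newer python-glanceclient versions call this glance image-show):

glance show 276bf12d-5450-45a9-907d-eaa3375ed5e5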

You can find more information about images for OpenStack here: OpenStack Image Guide.

List images

List all images that are in glance:

glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
276bf12d-5450-45a9-907d-eaa3375ed5e5 Cirros-0.3.1                   qcow2                bare                       13147648

Delete Image

Use the “delete” command followed by the image ID (e.g. a9001c99-d6ab-4a35-91f2-dc23503ff548) to delete the image:

glance delete a9001c99-d6ab-4a35-91f2-dc23503ff548

If you try to delete an image of another tenant, you will get an error message: “No image with ID was found”.

Nova compute instances

Keypair Management

It is important to save the private key, as it will not be stored on the server and cannot be regenerated. If you accidentally forget to save it, just delete the keypair from the system and create a new one.

nova keypair-add susecloud-ehaselwanter > ~/.ssh/susecloud-ehaselwanter.pem
chmod 400 ~/.ssh/susecloud-ehaselwanter.pem

To list all available keypairs use:

nova keypair-list
+------------------------+-------------------------------------------------+
| Name                   | Fingerprint                                     |
+------------------------+-------------------------------------------------+
| susecloud-ehaselwanter | 32:f3:9a:4f:e7:2c:73:3a:11:f0:a3:02:d9:bb:10:ab |
+------------------------+-------------------------------------------------+

If you already have a key, you can upload its public part:

nova keypair-add --pub_key your_pubkey.pub mykeypair

To delete a keypair just issue the following command (where mykeypair is the name of the keypair you want to delete):

nova keypair-delete mykeypair

If you have running instances that use this keypair, they will not be affected; you just will not be able to assign the keypair to new instances anymore.

Create Instance

No matter which way of launching an instance you choose, the following information is needed:

  • The image that should be launched
  • The size of the instance
  • An SSH keypair to log in to the instance (if the instance uses keys)
  • The network the instance connects to, if you want a specific network (otherwise ALL tenant networks will be connected!)

Get the information needed to start an instance:

Get available flavors (sizes)

nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Create a flavor

The positional arguments are name, ID, RAM in MB, disk in GB, and number of vCPUs:

nova flavor-create m1.micro 6 256 0 1
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.micro | 256       | 0    | 0         |      | 1     | 1.0         | True      |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
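If you only created the flavor for testing, you can remove it again by ID:

nova flavor-delete 6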

Get available images

nova image-list
+--------------------------------------+--------------+--------+--------+
| ID                                   | Name         | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 276bf12d-5450-45a9-907d-eaa3375ed5e5 | Cirros-0.3.1 | ACTIVE |        |
+--------------------------------------+--------------+--------+--------+

Get available keypairs

nova keypair-list
+------------------------+-------------------------------------------------+
| Name                   | Fingerprint                                     |
+------------------------+-------------------------------------------------+
| susecloud-ehaselwanter | 32:f3:9a:4f:e7:2c:73:3a:11:f0:a3:02:d9:bb:10:ab |
+------------------------+-------------------------------------------------+

Choose image and flavor by ID and launch the instance:

nova boot --flavor 1 --image 276bf12d-5450-45a9-907d-eaa3375ed5e5  --key_name mykeypair mynewnovainstance

Example:

nova boot --flavor 6 --image 276bf12d-5450-45a9-907d-eaa3375ed5e5 --key_name susecloud-ehaselwanter test-instance
+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Cirros-0.3.1                         |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.micro                             |
| id                                   | 4f409c4d-7372-4b3d-be18-1ae8adf79c2f |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 35a0ab46761e4c67b41435a433c221e9     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-04-19T11:54:07Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | susecloud-ehaselwanter               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | test-instance                        |
| adminPass                            | 49FjLKzP9NHU                         |
| tenant_id                            | d16c48d47f0349e5ab07d0569d8010fa     |
| created                              | 2014-04-19T11:54:07Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+--------------------------------------+--------------------------------------+

Verify the status of the current instance:

nova list
+--------------------------------------+---------------+--------+------------+-------------+----------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks             |
+--------------------------------------+---------------+--------+------------+-------------+----------------------+
| 4f409c4d-7372-4b3d-be18-1ae8adf79c2f | test-instance | BUILD  | spawning   | NOSTATE     | fixed=192.168.123.51 |
+--------------------------------------+---------------+--------+------------+-------------+----------------------+

When the status of the instance is no longer “BUILD” but “ACTIVE”, you can access the instance via SSH on the default network. Whether this network is accessible from the outside depends on the deployment. Most deployments do NOT allow direct access to an instance without floating IPs.

We have set up our gateway as 192.168.123.1. At the moment we cannot connect to this instance, because we first have to add security group rules that allow ping and SSH.

Nova Secgroup Rules

Check that the instance is running and note its IP address (in our example: 192.168.123.51):

nova list

Set the default security group to allow ping and SSH access:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
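To verify that the rules are in place, list the rules of the default security group:

nova secgroup-list-rules default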

Now SSH to the instance. We are using a CirrOS instance, so we do not even need the key; the default login credentials are logged to the console:

nova console-log 4f409c4d-7372-4b3d-be18-1ae8adf79c2f|tail -10
=== cirros: current=0.3.1 uptime=164.57 ===
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/  /_/   \____/___/
   http://cirros-cloud.net


login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login:

SSH from the host to the instance:

$ ssh 192.168.123.51
ssh: connect to host 192.168.123.51 port 22: Connection timed out

Oh no, doesn’t work. How so? Have a look at the following diagram:

So SSH from the gateway to the instance. For outbound access (from the instance to our host network) we have to add a route:

gateway:~ # ssh cirros@192.168.123.51
cirros@192.168.123.51's password:
$ sudo -i
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.123.49  0.0.0.0         UG    0      0        0 eth0
192.168.123.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
# route add -net 10.11.12.0/24 gw 192.168.123.1
# ping 10.11.12.1
PING 10.11.12.1 (10.11.12.1): 56 data bytes
64 bytes from 10.11.12.1: seq=7 ttl=63 time=7.228 ms

The 192.168.123.0/24 network has a router at 192.168.123.49 which acts as the default gateway, so the instance does not know what to do with packets to/from the IP 10.11.12.1. We have to tell the instance. We could also define additional routes in the network itself (see the sketch below), but for now we just add the route manually.
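As a sketch of that alternative (the subnet name fixed is an assumption for the default fixed network; substitute the real name or ID from neutron subnet-list):

neutron subnet-update fixed --host_routes type=dict list=true destination=10.11.12.0/24,nexthop=192.168.123.1

Instances pick up such host routes via DHCP on lease renewal, so for an already-running instance the manual route command above is the quicker fix.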

Now we can access the instance from the outside as well:

$ ping -c1 192.168.123.51
PING 192.168.123.51 (192.168.123.51) 56(84) bytes of data.
64 bytes from 192.168.123.51: icmp_req=1 ttl=63 time=3.52 ms

--- 192.168.123.51 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.528/3.528/3.528/0.000 ms

Allocating a public IP address to an instance

We use the IP range 192.168.126.0/24 as floating IPs.

Nova API

nova floating-ip-create
+-----------------+-------------+----------+----------+
| Ip              | Instance Id | Fixed Ip | Pool     |
+-----------------+-------------+----------+----------+
| 192.168.126.130 | None        | None     | floating |
+-----------------+-------------+----------+----------+

Use the new IP and associate it with a running instance (the instance ID in the example is 4f409c4d-7372-4b3d-be18-1ae8adf79c2f):

nova add-floating-ip 4f409c4d-7372-4b3d-be18-1ae8adf79c2f 192.168.126.130

Check that assignment was successful:

nova floating-ip-list
+-----------------+--------------------------------------+----------------+----------+
| Ip              | Instance Id                          | Fixed Ip       | Pool     |
+-----------------+--------------------------------------+----------------+----------+
| 192.168.126.130 | 4f409c4d-7372-4b3d-be18-1ae8adf79c2f | 192.168.123.51 | floating |
+-----------------+--------------------------------------+----------------+----------+

And ssh to the floating IP:

ssh cirros@192.168.126.130
ssh: connect to host 192.168.126.130 port 22: Connection timed out

Does not work? Again, our routes get in the way: this time we MUST go through the router so that the NATing works as expected. So we remove the route we added earlier:

ssh cirros@192.168.126.130
cirros@192.168.126.130's password:
$ sudo -i
# route del -net 10.11.12.0/24

And try again:

ssh cirros@192.168.126.130
The authenticity of host '192.168.126.130 (192.168.126.130)' can't be established.
RSA key fingerprint is 26:72:fe:3a:31:7f:9c:a5:87:48:43:b1:d6:02:29:23 [MD5].
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.126.130' (RSA) to the list of known hosts.
cirros@192.168.126.130's password:
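When the floating IP is no longer needed on this instance, detach it again; it stays allocated to the tenant and shows up as free in nova floating-ip-list (nova floating-ip-delete would return it to the pool entirely):

nova remove-floating-ip 4f409c4d-7372-4b3d-be18-1ae8adf79c2f 192.168.126.130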

Destroy instance

Delete the running server:

nova delete 6e64790a-fd8a-4b6d-9de7-345c1155baa9

Networking with Neutron

Overview

Neutron is the new network-as-a-service component of OpenStack. It allows you to configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like OpenStack Compute to attach virtual devices to ports on these networks. In particular, OpenStack Networking supports each tenant having multiple private networks and allows tenants to choose their own IP addressing scheme, even if those IP addresses overlap with those used by other tenants. This enables very advanced cloud networking use cases, such as building multi-tiered web applications and allowing applications to be migrated to the cloud without changing IP addresses.

Configuration

By default, most OpenStack setups come with a default configuration (network, router, …), private to each tenant, that allows users to create their VMs easily.

We use the linuxbridge-with-VLANs configuration from SUSECloud/Crowbar. A somewhat older article has plenty of interesting information about this kind of deployment: Under The Hood Linuxbridge

If you want to create a more complex architecture, have a look at the following sections.
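Before building anything, it is worth listing what the default deployment already provides:

neutron net-list
neutron router-list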

From command line

Start by creating a network:

neutron net-create mynet
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 0ff4dbbd-0eae-4820-a287-6fa8189149f3 |
| name                      | mynet                                |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 501                                  |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | d16c48d47f0349e5ab07d0569d8010fa     |
+---------------------------+--------------------------------------+

Then you can create as many subnets as you want:

neutron subnet-create --name mysubnet mynet 11.0.0.0/24
Created a new subnet:
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "11.0.0.2", "end": "11.0.0.254"} |
| cidr             | 11.0.0.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | True                                       |
| gateway_ip       | 11.0.0.1                                   |
| host_routes      |                                            |
| id               | fd9dfcd0-5478-43be-9783-5c3ea8ba9753       |
| ip_version       | 4                                          |
| name             | mysubnet                                   |
| network_id       | 0ff4dbbd-0eae-4820-a287-6fa8189149f3       |
| tenant_id        | d16c48d47f0349e5ab07d0569d8010fa           |
+------------------+--------------------------------------------+

If the subnet will have a connection to an external network, it is recommended to set the DNS option for the subnet to an external DNS server (for example, the Google DNS servers):

neutron subnet-update mysubnet --dns_nameservers list=true 8.8.4.4 8.8.8.8
Updated subnet: mysubnet

Now you can link the subnet that you just created to your router like this:

neutron router-interface-add c238c6d1-0b6f-47e2-b310-f3bb01c5232d mysubnet
Added interface ad22a2f9-5e9c-4eff-8452-1a03a4168069 to router c238c6d1-0b6f-47e2-b310-f3bb01c5232d.

If you don't know which router to use, you can run the command below, which lists all available routers:

neutron router-list

+--------------------------------------+-----------------+-----------------------------------------------------------------------------+
| id                                   | name            | external_gateway_info                                                       |
+--------------------------------------+-----------------+-----------------------------------------------------------------------------+
| c238c6d1-0b6f-47e2-b310-f3bb01c5232d | router-floating | {"network_id": "afaf60f2-c99d-4e71-81c5-5df562dfe85d", "enable_snat": true} |
+--------------------------------------+-----------------+-----------------------------------------------------------------------------+
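To double-check that the interface from above was actually attached, list the ports of the router:

neutron router-port-list c238c6d1-0b6f-47e2-b310-f3bb01c5232d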

Booting an instance now needs more information:

nova boot --flavor 6 --image 276bf12d-5450-45a9-907d-eaa3375ed5e5  test-instance-2
ERROR: Multiple possible networks found, use a Network ID to be more specific. (HTTP 400) (Request-ID: req-6c42e963-0f8a-4831-ae81-3946147b8392)

Now, to boot a new VM instance with a specific NIC, you have to add the --nic argument like this:

nova boot --image <img> --flavor <flavor> --nic net-id=<net1-id> <vm-name>

As an example, with the net we just created:

nova boot --flavor 6 --image 276bf12d-5450-45a9-907d-eaa3375ed5e5 --nic net-id=0ff4dbbd-0eae-4820-a287-6fa8189149f3 test-instance-1
+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Cirros-0.3.1                         |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.micro                             |
| id                                   | 7f342303-9e21-4158-b72f-5e65b9ebc83e |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 35a0ab46761e4c67b41435a433c221e9     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-04-19T16:10:02Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | test-instance-1                      |
| adminPass                            | Dgsnq83vp7Bm                         |
| tenant_id                            | d16c48d47f0349e5ab07d0569d8010fa     |
| created                              | 2014-04-19T16:10:02Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+--------------------------------------+--------------------------------------+

If you want the VM to be part of more than one network, you can repeat the --nic argument:

nova boot --image <img> --flavor <flavor> --nic net-id=<net1-id> --nic net-id=<net2-id> <vm-name>

If you want to access the VM from the outside, you have to associate a floating IP with it. First check if there is a free floating IP that you can use:

neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 4f453aec-b74e-4623-a256-168c7eef4525 |                  | 192.168.126.130     |         |
+--------------------------------------+------------------+---------------------+---------+

The free IPs are the ones with an empty port_id column.

If there is no (free) floating IP, you can create one using:

neutron floatingip-create floating
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.126.131                      |
| floating_network_id | afaf60f2-c99d-4e71-81c5-5df562dfe85d |
| id                  | 2d7f4ad4-087b-4a99-9826-f86e69cf0df7 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | d16c48d47f0349e5ab07d0569d8010fa     |
+---------------------+--------------------------------------+

To associate the floating IP with your VM, first get the port ID used by the VM by checking which port uses the VM's fixed IP:

neutron port-list
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| baef8530-3b96-4b44-aa7a-13fa8a37772f |      | fa:16:3e:b2:4d:65 | {"subnet_id": "fd9dfcd0-5478-43be-9783-5c3ea8ba9753", "ip_address": "11.0.0.2"}        |
| 1415c70c-47a8-4e37-b471-cb219386ae89 |      | fa:16:3e:e0:49:cd | {"subnet_id": "9b82d31b-2112-4737-8193-d7c2802b5fc4", "ip_address": "192.168.126.131"} |
| 6bef6e89-425b-4b7e-a514-da4fc68d6589 |      | fa:16:3e:71:d3:31 | {"subnet_id": "7ac3c28f-2343-411b-a750-fa50c275cb83", "ip_address": "192.168.123.50"}  |
| 3fef4ab8-44da-435a-8ccc-70335f07656a |      | fa:16:3e:9a:50:36 | {"subnet_id": "9b82d31b-2112-4737-8193-d7c2802b5fc4", "ip_address": "192.168.126.130"} |
| ad22a2f9-5e9c-4eff-8452-1a03a4168069 |      | fa:16:3e:32:ba:7c | {"subnet_id": "fd9dfcd0-5478-43be-9783-5c3ea8ba9753", "ip_address": "11.0.0.1"}        |
| 35ccc38a-47fb-4d49-851d-7948e1396ff8 |      | fa:16:3e:bf:4f:14 | {"subnet_id": "9b82d31b-2112-4737-8193-d7c2802b5fc4", "ip_address": "192.168.126.129"} |
| 44f110a4-ce14-4d31-9647-9ea68a2ec3bf |      | fa:16:3e:b9:c6:5b | {"subnet_id": "fd9dfcd0-5478-43be-9783-5c3ea8ba9753", "ip_address": "11.0.0.3"}        |
| 375ed383-abf7-4dca-8892-0d82a6257b99 |      | fa:16:3e:44:be:d3 | {"subnet_id": "7ac3c28f-2343-411b-a750-fa50c275cb83", "ip_address": "192.168.123.49"}  |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

nova list
+--------------------------------------+-----------------+--------+------------+-------------+----------------+
| ID                                   | Name            | Status | Task State | Power State | Networks       |
+--------------------------------------+-----------------+--------+------------+-------------+----------------+
| 7f342303-9e21-4158-b72f-5e65b9ebc83e | test-instance-1 | ACTIVE | None       | Running     | mynet=11.0.0.2 |
+--------------------------------------+-----------------+--------+------------+-------------+----------------+

Then you can associate the previous floating IP with this port:

neutron floatingip-associate <floating-ip-id> <previous-port-id>
neutron floatingip-associate 2d7f4ad4-087b-4a99-9826-f86e69cf0df7 baef8530-3b96-4b44-aa7a-13fa8a37772f
Associated floatingip 2d7f4ad4-087b-4a99-9826-f86e69cf0df7

neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 4f453aec-b74e-4623-a256-168c7eef4525 |                  | 192.168.126.130     |                                      |
| 2d7f4ad4-087b-4a99-9826-f86e69cf0df7 | 11.0.0.2         | 192.168.126.131     | baef8530-3b96-4b44-aa7a-13fa8a37772f |
+--------------------------------------+------------------+---------------------+--------------------------------------+
ssh cirros@192.168.126.131
The authenticity of host '192.168.126.131 (192.168.126.131)' can't be established.
RSA key fingerprint is 14:27:d9:73:3d:17:4b:a4:c6:0d:b2:8b:5e:73:01:eb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.126.131' (RSA) to the list of known hosts.
cirros@192.168.126.131's password:
$
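To undo the association later, you can disassociate the floating IP from the port again (the floating IP itself stays allocated to the tenant):

neutron floatingip-disassociate 2d7f4ad4-087b-4a99-9826-f86e69cf0df7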

Cinder

Create Volume

To create a volume, use the following command:

cinder create --display-name vol1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-04-19T16:24:22.682633      |
| display_description |                 None                 |
|     display_name    |                 vol1                 |
|          id         | 8fcc3633-1bed-4a87-908f-7c1bfeda86e9 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
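The volume starts out in status “creating”; check it again until the status switches to “available”:

cinder show 8fcc3633-1bed-4a87-908f-7c1bfeda86e9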

Attach volume

To attach a volume, you need the volume ID, the ID of the instance it should be attached to, and the name of the device that the volume should appear as inside the VM (e.g. /dev/vdc):

cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 8fcc3633-1bed-4a87-908f-7c1bfeda86e9 | available |     vol1     |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
nova list
+--------------------------------------+-----------------+--------+------------+-------------+---------------------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks                        |
+--------------------------------------+-----------------+--------+------------+-------------+---------------------------------+
| 7f342303-9e21-4158-b72f-5e65b9ebc83e | test-instance-1 | ACTIVE | None       | Running     | mynet=11.0.0.2, 192.168.126.131 |
+--------------------------------------+-----------------+--------+------------+-------------+---------------------------------+
nova volume-attach INSTANCE_ID VOLUME_ID auto

According to the OpenStack documentation, the device name argument will not be honored by KVM:

“If you are using KVM as your hypervisor, then the actual device name in the guest will be different than the one specified in the euca-attach-volume command. You can specify a device name to the KVM hypervisor, but the actual means of attaching to the guest is over a virtual PCI bus. When the guest sees a new device on the PCI bus, it picks the next available name (which in most cases is /dev/vdc) and the disk shows up there on the guest.”

nova volume-attach 7f342303-9e21-4158-b72f-5e65b9ebc83e 8fcc3633-1bed-4a87-908f-7c1bfeda86e9 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | 7f342303-9e21-4158-b72f-5e65b9ebc83e |
| id       | 8fcc3633-1bed-4a87-908f-7c1bfeda86e9 |
| volumeId | 8fcc3633-1bed-4a87-908f-7c1bfeda86e9 |
+----------+--------------------------------------+

On the VM:

ssh cirros@192.168.126.131
cirros@192.168.126.131's password:
$ dmesg |tail -n1
[ 1887.524862]  vdb: unknown partition table
$ sudo -i
# mkfs.ext4 /dev/vdb
mke2fs 1.42.2 (27-Mar-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
  32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks):
done
Writing superblocks and filesystem accounting information: done

# mount /dev/vdb /mnt
# echo "this is persitent" > /mnt/text
# umount /mnt/

Detach volume

To detach the volume, use the volume-detach command:

nova volume-detach 7f342303-9e21-4158-b72f-5e65b9ebc83e 8fcc3633-1bed-4a87-908f-7c1bfeda86e9

Because detaching happens at the VM “hardware” level, make sure that the volume is no longer in use (unmounted) inside the VM.
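Once it is detached, a volume that is no longer needed can be deleted by ID:

cinder delete 8fcc3633-1bed-4a87-908f-7c1bfeda86e9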

Interested in Cloud or Chef Trainings? Have a look at our Commandemy Trainings page. Need help migrating to the cloud? Check out Infralovers.
