<h1>MTU Issue? Nope, It is LRO with Bridge and Bond</h1>
<p>This one bugged me for a while, because it was so misleading to debug. When faced with connection loss on larger packets, most of the time one immediately thinks: damn it, bitten again by PMTU, I have to fix the MTU all the way in and out. Recently I got the exact same behaviour on an OpenStack cluster with VLAN provider networking on 10GbE bonds (port channels/LACP/bond mode 4).</p>
<h2>Debugging the Symptom</h2>
<p>On a node that is reachable from the target, create two test files:</p>
<pre class="highlight shell">head -c 1600 /dev/urandom > 1600.txt
head -c 500 /dev/urandom > 500.txt
</pre>
<p>1600 bytes is big enough to trigger fragmentation (the default MTU is 1500 almost everywhere); increase the file size if you have jumbo frames enabled on your path.</p>
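<p>A quick way to probe the path MTU independently of scp is ICMP with the Don't Fragment bit set (Linux <code>ping</code>; payload size plus 28 bytes of headers gives the on-wire packet size). Note that in a case like the one described here, the probes can pass while large TCP transfers still stall, which already hints that the problem is not the MTU:</p>
<pre class="highlight shell">ping -M do -s 1472 -c 3 10.10.91.1   # 1500-byte packets, fits a standard MTU
ping -M do -s 1572 -c 3 10.10.91.1   # 1600-byte packets, fails if the path MTU is 1500
</pre>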
<p>Then copy the files, e.g. with scp.</p>
<p>You will see that copying the small file succeeds:</p>
<pre class="highlight shell">scp admin@10.10.91.1:500.txt .
admin@10.10.91.1<span class="s1">'s password:
500.txt 100% 500 0.5KB/s 00:00
</span></pre>
<p>but copying the large file stalls:</p>
<pre class="highlight plaintext">scp admin@10.10.91.1:1600.txt .
admin@10.10.91.1's password:
1600.txt 0% 0 0.0KB/s - stalled -^C
</pre>
<p>This is exactly how an MTU problem reveals itself.</p>
<p>I checked everything and played with various fixes that had helped in the past.</p>
<p>Things like:</p>
<ul>
<li><a href="http://lartc.org/howto/lartc.cookbook.mtu-mss.html">http://lartc.org/howto/lartc.cookbook.mtu-mss.html</a></li>
</ul>
<pre class="highlight shell">iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
</pre>
<p>did not help, and neither did the approaches from these articles:</p>
<ul>
<li><a href="http://openstack.sys-con.com/node/2938779">http://openstack.sys-con.com/node/2938779</a></li>
<li><a href="http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/13725-56.html">http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/13725-56.html</a></li>
<li><a href="http://packetlife.net/blog/2008/aug/18/path-mtu-discovery/">http://packetlife.net/blog/2008/aug/18/path-mtu-discovery/</a></li>
<li><a href="http://mccltd.net/blog/?p=1577">http://mccltd.net/blog/?p=1577</a></li>
<li><a href="http://roie9876.wordpress.com/2014/04/29/nsx-minimum-mtu/">http://roie9876.wordpress.com/2014/04/29/nsx-minimum-mtu/</a></li>
</ul>
<h2>A Hint from a Friend - LRO Findings</h2>
<p>Then I got a hint from a friend (<a href="https://www.linkedin.com/pub/thore-bahr/72/b99/7a7">Thore Bahr</a>) to have a look into <code>rx-vlan-offload</code>. I did so, but that did not help. </p>
<p><strong>But</strong> this triggered further investigation and deeper digging into the root cause. </p>
<p>Finally, the root cause revealed itself via some related reports:</p>
<ul>
<li><a href="https://bugzilla.redhat.com/show_bug.cgi?id=772317">https://bugzilla.redhat.com/show_bug.cgi?id=772317</a></li>
<li><a href="http://www.novell.com/support/kb/doc.php?id=7013089">http://www.novell.com/support/kb/doc.php?id=7013089</a></li>
</ul>
<p>=> So there is an issue with the LRO setting when bonding and bridging are combined with Intel ixgbe adapters. We have to turn off LRO.</p>
<p>From the <a href="http://downloadmirror.intel.com/22919/eng/README.txt">Base Driver for the Intel® Ethernet 10 Gigabit PCI Express Family of Adapters README</a>:</p>
<blockquote>
<h1>IMPORTANT NOTE</h1>
<p>WARNING: The ixgbe driver compiles by default with the LRO (Large Receive
Offload) feature enabled. This option offers the lowest CPU utilization for
receives, but is completely incompatible with <em>routing/ip forwarding</em> and
<em>bridging</em>. If enabling ip forwarding or bridging is a requirement, it is
necessary to disable LRO using compile time options as noted in the LRO
section later in this document. The result of not disabling LRO when combined
with ip forwarding or bridging can be low throughput or even a kernel panic.</p>
</blockquote>
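<p>You can check whether the driver currently has LRO enabled; the interface names here are the bond members from our setup:</p>
<pre class="highlight shell">ethtool -k p3p1 | grep large-receive-offload
ethtool -k p3p2 | grep large-receive-offload
</pre>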
<h2>Change Offload Settings with <code>ethtool</code>:</h2>
<p>First, try to open a connection to prove it is not working:</p>
<pre class="highlight shell"><span class="gp">$ </span>ip netns <span class="nb">exec </span>qdhcp-9d444bee-0395-47d9-ae7e-ae315c25e088 ssh 50.0.0.9
</pre>
<p>Change the settings with <code>ethtool -K <adapter> lro off</code></p>
<pre class="highlight plaintext">$ ethtool -K p3p1 lro off
$ ethtool -K p3p2 lro off
$ ethtool -k p3p1
Offload parameters for p3p1:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
</pre>
<p>This was NOT working before; with LRO disabled it works:</p>
<pre class="highlight shell"><span class="gp">$ </span>ip netns <span class="nb">exec </span>qdhcp-9d444bee-0395-47d9-ae7e-ae315c25e088 ssh 50.0.0.9 <span class="s1">'uptime'</span>
Warning: Permanently added <span class="s1">'50.0.0.9'</span> <span class="o">(</span>ECDSA<span class="o">)</span> to the list of known hosts.
17:34pm up 10 days 3:20, 0 users, load average: 0.00, 0.01, 0.05
</pre>
<h3>Make it Permanent!</h3>
<p><a href="http://www.novell.com/support/kb/doc.php?id=7013089">http://www.novell.com/support/kb/doc.php?id=7013089</a> suggests adding the following options to the network config: </p>
<pre class="highlight plaintext">ETHTOOL_OPTIONS='-K iface lro off'
</pre>
<p>We have to add this to the automation for all interfaces that are members of the bond.</p>
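<p>On SUSE-based systems this line goes into the respective interface configuration file, where the literal <code>iface</code> is replaced with the actual interface name at runtime. A minimal sketch for one of our bond members (file path assumed for this setup):</p>
<pre class="highlight plaintext"># /etc/sysconfig/network/ifcfg-p3p1
ETHTOOL_OPTIONS='-K iface lro off'
</pre>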
<p>Finally, since we use Chef, we can push this setting to all compute nodes:</p>
<pre class="highlight shell">knife ssh roles:<span class="k">*</span>comp<span class="k">*</span> -- ethtool -K p3p1 lro off
knife ssh roles:<span class="k">*</span>comp<span class="k">*</span> -- ethtool -k p3p1 |grep large
df0-xx-xx-xx-aa-aa.test-openstack.org large-receive-offload: off
df0-xx-xx-xx-aa-aa.test-openstack.org large-receive-offload: off
<span class="o">[</span> .. snipped .. <span class="o">]</span>
df0-xx-xx-xx-aa-aa.test-openstack.org large-receive-offload: off
</pre>
<h1>Deploying Openstack with Stackforge Chef-Zero Style</h1>
<h2>OpenStack Lifecycle Management Tools</h2>
<p>In a joint effort <a href="https://twitter.com/jannis_r">Jannis Rake-Revelant</a>, <a href="https://twitter.com/jaybrueder">Jürgen Brüder</a>, and I, <a href="https://twitter.com/ehaselwanter">Edmund Haselwanter</a>, had a look at several of what we call “OpenStack Lifecycle Management tools”.</p>
<p>This time <a href="https://twitter.com/jaybrueder">Jürgen Brüder</a> did most of the work, so thanks for sharing your findings :-)</p>
<h1>Deploying Openstack with Stackforge</h1>
<p>Stackforge provides an open-source repository to set up OpenStack with Chef.
You can deploy it with a Chef Server or simply by using Chef-Zero.</p>
<p>It currently includes all OpenStack core projects: Compute, Dashboard, Identity, Image, Network, Object Storage, Block Storage, Telemetry and Orchestration.</p>
<p>Stackforge also comes with a couple of Vagrantfiles that can be used to create a multi-node test deployment from scratch.
This way, you also won’t need a Chef Server for deployment.</p>
<h2>Documentation and Tutorials</h2>
<p>The following Git repository contains good documentation about using Stackforge:
<a href="https://github.com/stackforge/openstack-chef-repo/tree/stable/icehouse">https://github.com/stackforge/openstack-chef-repo/tree/stable/icehouse</a></p>
<p>It also contains example configuration files for environments and roles.</p>
<h2>Using Vagrant for Test setup</h2>
<p>Vagrant can be installed on nearly all operating systems.
We will be using Mac OS X for this example.</p>
<p>If you are on Mac OS X, you’ll need to install Xcode Command Line Tools from
<a href="https://developer.apple.com/downloads/">https://developer.apple.com/downloads/</a></p>
<h3>Installing ChefDK</h3>
<p>To ensure a properly working deployment, we recommend using the ChefDK for installing all needed Gem dependencies.
This will also install Berkshelf etc.</p>
<p>Just follow this link, download the version that fits your OS, and install it:
<a href="http://downloads.getchef.com/chef-dk/">http://downloads.getchef.com/chef-dk/</a></p>
<h3>Installing VirtualBox and Vagrant</h3>
<p>Install the latest VirtualBox for your operating system:
<a href="https://www.virtualbox.org/wiki/Downloads">https://www.virtualbox.org/wiki/Downloads</a></p>
<p>Then install the latest version of Vagrant:
<a href="https://www.vagrantup.com/downloads">https://www.vagrantup.com/downloads</a></p>
<p>To make the Vagrantfiles work, we need three additional Vagrant plugins.
Open a terminal window and install the Berkshelf, Chef-Zero, and Omnibus plugins in exactly this order:</p>
<pre class="highlight shell">vagrant plugin install vagrant-berkshelf
vagrant plugin install vagrant-chef-zero
vagrant plugin install vagrant-omnibus
</pre>
<p>Some plugins can have issues if installed in the wrong order, so please stick to the installation order as listed above.</p>
<p>Check that the three plugins are really installed:</p>
<pre class="highlight shell">vagrant plugin list
</pre>
<p>Here is what we got back:</p>
<pre class="highlight shell">vagrant-berkshelf <span class="o">(</span>3.0.1<span class="o">)</span>
vagrant-chef-zero <span class="o">(</span>0.7.1<span class="o">)</span>
vagrant-login <span class="o">(</span>1.0.1, system<span class="o">)</span>
vagrant-omnibus <span class="o">(</span>1.4.1<span class="o">)</span>
vagrant-share <span class="o">(</span>1.1.2, system<span class="o">)</span>
</pre>
<h2>Deploying the Stackforge Cookbook</h2>
<p>Clone the Stackforge openstack-chef-repo into your home directory:</p>
<pre class="highlight shell">git clone -b stable/icehouse https://github.com/stackforge/openstack-chef-repo
</pre>
<p>Navigate into the created directory and rename <strong>Vagrantfile-multi-neutron</strong> to <strong>Vagrantfile</strong>:</p>
<pre class="highlight shell">mv Vagrantfile-multi-neutron Vagrantfile
</pre>
<p>Now install all gems and cookbooks that are needed:</p>
<pre class="highlight shell">bundle install
berks install
</pre>
<p>Now you can simply run this command to let Vagrant provision two VMs for you:</p>
<pre class="highlight shell">vagrant up /ubuntu1204/
</pre>
<p>One will be an all-in-one node, the second one will be an additional compute node.</p>
<h3>Vagrant Troubleshooting</h3>
<p>If you run into any problems with Vagrant try setting:</p>
<pre class="highlight shell"><span class="nb">export </span><span class="nv">VAGRANT_LOG</span><span class="o">=</span>debug
</pre>
<p>Also make sure that the Vagrant plugins are compatible with each other. They are developed on independent release schedules, and a new version of one plugin might not work with an older version of the others.</p>
<h2>Testing the Openstack installation</h2>
<p>You can now log in at the URL of the first VM, using the username <strong>admin</strong> and the password <strong>admin</strong>.
If you navigate to <strong>Admin -> System Panel -> Host Aggregates</strong> you should see both nodes listed.</p>
<p><img class="center" data-thumbnails="original:article-images/host_aggregates.png|small:article-images/host_aggregates-small-200x.png|half:article-images/host_aggregates-half-390x.png|blog:article-images/host_aggregates-blog-780x.png" src="/images/article-images/host_aggregates-blog-780x.png" /> </p>
<h2>Analysing Vagrant Setup</h2>
<p>To be able to create a production-ready bare-metal deployment without Vagrant, we need to understand what Vagrant does to deploy OpenStack on VMs.</p>
<h3>Machine and Network setup</h3>
<p>Vagrant creates two VMs, each with 2 CPUs and 2048 MB of memory. It also adds two promiscuous-mode interfaces to each VM and allows all frames through; see the excerpt below.</p>
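<p>The provider settings that achieve this look like the following excerpt (in the style of the shipped Vagrantfiles; the same block appears in the hand-written Vagrantfile of the Chef-Server article):</p>
<pre class="highlight shell"># Vagrantfile excerpt
config.vm.provider "virtualbox" do |vb|
  vb.customize ["modifyvm", :id, "--cpus", 2]
  vb.customize ["modifyvm", :id, "--memory", 2048]
  vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
  vb.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
end
</pre>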
<p>Each machine needs access to two private networks, which Vagrant configures for each VM.
The additional compute node has only these two networks configured:</p>
<pre class="highlight shell"><span class="c"># Vagrantfile excerpt</span>
ubuntu1204comp1.vm.network <span class="s2">"private_network"</span>, ip: <span class="s2">"192.168.3.61"</span>
ubuntu1204comp1.vm.network <span class="s2">"private_network"</span>, ip: <span class="s2">"172.16.10.61"</span>
</pre>
<p>Additionally, the Controller/Compute node will have three ports forwarded. Here is the configuration of the Controller/Compute node:</p>
<pre class="highlight shell"><span class="c"># Vagrantfile excerpt</span>
ubuntu1204cont.vm.network <span class="s2">"forwarded_port"</span>, guest: 443, host: 8443 <span class="c"># dashboard-ssl</span>
ubuntu1204cont.vm.network <span class="s2">"forwarded_port"</span>, guest: 8773, host: 8773 <span class="c"># compute-ec2-api</span>
ubuntu1204cont.vm.network <span class="s2">"forwarded_port"</span>, guest: 8774, host: 8774 <span class="c"># compute-api</span>
ubuntu1204cont.vm.network <span class="s2">"private_network"</span>, ip: <span class="s2">"192.168.3.60"</span>
ubuntu1204cont.vm.network <span class="s2">"private_network"</span>, ip: <span class="s2">"172.16.10.60"</span>
</pre>
<h3>Chef-Zero and prerequisites</h3>
<p>Vagrant uses Chef-Zero to set up the deployment, which means that no actual Chef Server is necessary. This is a good approach for small deployments. It will also install all needed Chef dependencies (e.g. Berkshelf) on the VM with an omnibus installer.</p>
<p>During its run it uploads all needed cookbooks to Chef-Zero; every OpenStack project (Nova, Swift, etc.) is available as a Chef cookbook. On a bare machine you would run:</p>
<pre class="highlight shell"><span class="c"># Install Chef-Omnibus</span>
curl -L https://www.opscode.com/chef/install.sh | bash
<span class="c"># Checkout Stackforge repo</span>
git clone -b stable/icehouse https://github.com/stackforge/openstack-chef-repo
<span class="nb">cd </span>openstack-chef-repo
<span class="c"># Installing Berkshelf gem and cookbooks</span>
/opt/chef/embedded/bin/gem install berkshelf --no-ri --no-rdoc
/opt/chef/embedded/bin/berks vendor ./cookbooks
</pre>
<h3>Create a Chef environment</h3>
<p>The environment provides the overall configuration for our deployment. It can tell each VM where to look for specific services, which network interface to use for what, and so on. Stackforge comes with a number of predefined environments. For a production deployment we recommend writing your own.</p>
<p>We will modify one of the provided environments to better fit our scenario. Here is the environment that we will be using:</p>
<pre class="highlight shell"><span class="o">{</span>
<span class="s2">"name"</span>: <span class="s2">"vagrant-multi-neutron"</span>,
<span class="s2">"description"</span>: <span class="s2">"Environment used in testing the upstream cookbooks and reference Chef repository with vagrant. To be used with the Vagrantfile-multi-neutron vagrantfile. Defines the necessary attributes for a working mutltinode (1 controller/n computes) openstack deployment, using neutron (with gre tunnels between hosts) for the networking component."</span>,
<span class="s2">"cookbook_versions"</span>: <span class="o">{}</span>,
<span class="s2">"json_class"</span>: <span class="s2">"Chef::Environment"</span>,
<span class="s2">"chef_type"</span>: <span class="s2">"environment"</span>,
<span class="s2">"default_attributes"</span>: <span class="o">{}</span>,
<span class="s2">"override_attributes"</span>: <span class="o">{</span>
<span class="s2">"mysql"</span>: <span class="o">{</span>
<span class="s2">"allow_remote_root"</span>: <span class="nb">true</span>,
<span class="s2">"root_network_acl"</span>: <span class="o">[</span><span class="s2">"%"</span><span class="o">]</span>
<span class="o">}</span>,
<span class="s2">"openstack"</span>: <span class="o">{</span>
<span class="s2">"developer_mode"</span>: <span class="nb">true</span>,
<span class="s2">"identity"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"endpoints"</span>: <span class="o">{</span>
<span class="s2">"host"</span>: <span class="s2">"192.168.3.60"</span>,
<span class="s2">"mq"</span>: <span class="o">{</span>
<span class="s2">"host"</span>: <span class="s2">"192.168.3.60"</span>,
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"db"</span>: <span class="o">{</span>
<span class="s2">"host"</span>: <span class="s2">"192.168.3.60"</span>,
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"network"</span>: <span class="o">{</span>
<span class="s2">"debug"</span>: <span class="s2">"True"</span>,
<span class="s2">"dhcp"</span>: <span class="o">{</span>
<span class="s2">"enable_isolated_metadata"</span>: <span class="s2">"True"</span>
<span class="o">}</span>,
<span class="s2">"metadata"</span>: <span class="o">{</span>
<span class="s2">"nova_metadata_ip"</span>: <span class="s2">"192.168.3.60"</span>
<span class="o">}</span>,
<span class="s2">"openvswitch"</span>: <span class="o">{</span>
<span class="s2">"tunnel_id_ranges"</span>: <span class="s2">"1:1000"</span>,
<span class="s2">"enable_tunneling"</span>: <span class="s2">"True"</span>,
<span class="s2">"tenant_network_type"</span>: <span class="s2">"gre"</span>,
<span class="s2">"local_ip_interface"</span>: <span class="s2">"eth2"</span>
<span class="o">}</span>,
<span class="s2">"api"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>
<span class="o">}</span>,
<span class="s2">"image"</span>: <span class="o">{</span>
<span class="s2">"api"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"registry"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"image_upload"</span>: <span class="nb">true</span>,
<span class="s2">"upload_images"</span>: <span class="o">[</span>
<span class="s2">"cirros"</span>,
<span class="s2">"ubuntu"</span>
<span class="o">]</span>,
<span class="s2">"upload_image"</span>: <span class="o">{</span>
<span class="s2">"ubuntu"</span>: <span class="s2">"http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img"</span>,
<span class="s2">"cirros"</span>: <span class="s2">"https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"</span>
<span class="o">}</span>
<span class="o">}</span>,
<span class="s2">"compute"</span>: <span class="o">{</span>
<span class="s2">"xvpvnc_proxy"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"novnc_proxy"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"libvirt"</span>: <span class="o">{</span>
<span class="s2">"virt_type"</span>: <span class="s2">"qemu"</span>
<span class="o">}</span>,
<span class="s2">"network"</span>: <span class="o">{</span>
<span class="s2">"public_interface"</span>: <span class="s2">"eth1"</span>,
<span class="s2">"service_type"</span>: <span class="s2">"neutron"</span>
<span class="o">}</span>,
<span class="s2">"config"</span>: <span class="o">{</span>
<span class="s2">"ram_allocation_ratio"</span>: 5
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
</pre>
<p>Create the environment file under <strong>openstack-chef-repo/environments/</strong>. The filename needs to be <strong>vagrant-multi-neutron.json</strong>.</p>
<h3>Define a run_list</h3>
<p>Each machine gets its own <code>run_list</code>. The <code>run_list</code> defines which <strong>roles</strong> or <strong>recipes</strong> are applied on that machine.</p>
<p>Here is the <code>run_list</code> of each machine:</p>
<pre class="highlight shell"><span class="c"># Controller/Compute node</span>
role[os-compute-single-controller-no-network]
recipe[openstack-network::identity_registration]
role[os-network-openvswitch]
role[os-network-dhcp-agent]
role[os-network-metadata-agent]
role[os-network-server]
<span class="c"># Compute node</span>
role[os-compute-worker]
recipe[apt::cacher-client]
</pre>
<p>If you want to add a role to a machine manually, you can do so with this command:</p>
<pre class="highlight shell">knife node run_list add NODE_NAME -z <span class="s1">'role[NAME_OF_ROLE]'</span>
</pre>
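<p>To double-check, you can print the resulting run_list in the same Chef-Zero local mode (<code>NODE_NAME</code> is a placeholder, as above):</p>
<pre class="highlight shell">knife node show NODE_NAME -z -r
</pre>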
<h3>Run the Chef-Client</h3>
<p>After everything is configured, Vagrant runs the Chef-Client on each VM. This installs everything that is needed and creates a running deployment.
The equivalent command to run this manually would be:</p>
<pre class="highlight shell"><span class="c"># Assuming you have a my-deployment.json inside the environments directory</span>
chef-client -z -E my-deployment
</pre>
<h3>A quick rundown</h3>
<ol>
<li>Setup networking for your machines</li>
<li>Install the Chef-Omnibus-Installer on your machines</li>
<li>Install all needed cookbooks with Berkshelf</li>
<li>Create a Chef environment</li>
<li>Define the run_list for each machine</li>
<li>Run Chef-Client on each machine</li>
</ol>
<h1>Deploying Openstack with Stackforge Chef-Server Style</h1>
<h2>OpenStack Lifecycle Management Tools</h2>
<p>In a joint effort <a href="https://twitter.com/jannis_r">Jannis Rake-Revelant</a>, <a href="https://twitter.com/jaybrueder">Jürgen Brüder</a>, and I, <a href="https://twitter.com/ehaselwanter">Edmund Haselwanter</a>, had a look at several of what we call “OpenStack Lifecycle Management tools”.</p>
<p>This time <a href="https://twitter.com/jaybrueder">Jürgen Brüder</a> did most of the work, so thanks for sharing your findings :-)</p>
<h1>Deploying Openstack with Stackforge and Chef Server</h1>
<p>Stackforge provides an open-source repository to set up OpenStack with Chef.
You can deploy it with a Chef Server or simply by using Chef-Zero.</p>
<p>It currently includes all OpenStack core projects: Compute, Dashboard, Identity, Image, Network, Object Storage, Block Storage, Telemetry and Orchestration.</p>
<p>In this document we will use a Chef Server to set up our deployment, and Vagrant to create three virtual machines for us.</p>
<h2>Using Vagrant for Test setup</h2>
<p>Vagrant can be installed on nearly all operating systems.
We will be using Mac OS X for this example.</p>
<p>If you are on Mac OS X, you’ll need to install Xcode Command Line Tools from
<a href="https://developer.apple.com/downloads/">https://developer.apple.com/downloads/</a></p>
<h3>Installing ChefDK</h3>
<p>To ensure a properly working deployment, we recommend using the ChefDK for installing all needed Gem dependencies.
This will also install Berkshelf etc.</p>
<p>Just follow this link, download the version that fits your OS, and install it:
<a href="http://downloads.getchef.com/chef-dk/">http://downloads.getchef.com/chef-dk/</a></p>
<h3>Installing VirtualBox and Vagrant</h3>
<p>Install the latest VirtualBox for your operating system:
<a href="https://www.virtualbox.org/wiki/Downloads">https://www.virtualbox.org/wiki/Downloads</a></p>
<p>Then install the latest version of Vagrant:
<a href="https://www.vagrantup.com/downloads">https://www.vagrantup.com/downloads</a></p>
<h2>Create the VMs</h2>
<p>Create a directory called <strong>stackforge-chef-server</strong> in your home directory. Inside of it, create a file called <strong>Vagrantfile</strong>.</p>
<p>Add this content to the Vagrantfile:</p>
<pre class="highlight plaintext">Vagrant.require_version ">= 1.1"
Vagrant.configure("2") do |config|
# get local ip so that we can force chef zero onto a different port per
# machine, allowing for multiple simultaneous vagrant up runs
local_ip = Socket.ip_address_list.detect{|intf| intf.ipv4_private?}.ip_address
# virtualbox provider settings
config.vm.provider "virtualbox" do |vb|
vb.customize ["modifyvm", :id, "--cpus", 2]
vb.customize ["modifyvm", :id, "--memory", 2048]
vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
vb.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
end
#################################
# Ubuntu 12.04 controller #
#################################
config.vm.define :ubuntu1204cont do |ubuntu1204cont|
ubuntu1204cont.vm.hostname = "ubuntu1204cont"
ubuntu1204cont.vm.box = "opscode-ubuntu-12.04"
ubuntu1204cont.vm.box_url = "http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-12.04_chef-provisionerless.box"
ubuntu1204cont.vm.network "forwarded_port", guest: 443, host: 8443 # dashboard-ssl
ubuntu1204cont.vm.network "forwarded_port", guest: 8773, host: 8773 # compute-ec2-api
ubuntu1204cont.vm.network "forwarded_port", guest: 8774, host: 8774 # compute-api
ubuntu1204cont.vm.network "private_network", ip: "192.168.3.60"
ubuntu1204cont.vm.network "private_network", ip: "172.16.10.60"
end
#################################
# Ubuntu 12.04 compute1 #
#################################
config.vm.define :ubuntu1204comp1 do |ubuntu1204comp1|
ubuntu1204comp1.vm.hostname = "ubuntu1204comp1"
ubuntu1204comp1.vm.box = "opscode-ubuntu-12.04"
ubuntu1204comp1.vm.box_url = "http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-12.04_chef-provisionerless.box"
ubuntu1204comp1.vm.network "private_network", ip: "192.168.3.61"
ubuntu1204comp1.vm.network "private_network", ip: "172.16.10.61"
end
#################################
# Ubuntu 12.04 chefsererv #
#################################
config.vm.define :ubuntu1204chef do |ubuntu1204comp1|
ubuntu1204comp1.vm.hostname = "ubuntu1204chef"
ubuntu1204comp1.vm.box = "opscode-ubuntu-12.04"
ubuntu1204comp1.vm.box_url = "http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-12.04_chef-provisionerless.box"
ubuntu1204comp1.vm.network "private_network", ip: "192.168.3.62"
end
end
</pre>
<p>What this <strong>Vagrantfile</strong> does is set up three VMs for us. This way, we can give the VMs all necessary network configuration directly through the file.</p>
<p>Simply run this command in the directory with the <strong>Vagrantfile</strong>:</p>
<pre class="highlight shell">vagrant up
</pre>
<p>To SSH into the machines, simply run one of these commands from the <strong>stackforge-chef-server</strong> directory:</p>
<pre class="highlight shell">vagrant ssh ubuntu1204chef
vagrant ssh ubuntu1204cont
vagrant ssh ubuntu1204comp1
</pre>
<h2>Setup Chef Server</h2>
<p>On the <strong>ubuntu1204chef</strong> machine, run the following commands to setup a Chef Server:</p>
<pre class="highlight shell">wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chef-server_11.1.4-1_amd64.deb
sudo dpkg -i chef-server_11.1.4-1_amd64.deb
sudo chef-server-ctl reconfigure
</pre>
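<p>If you want to verify the server installation, <code>chef-server-ctl</code> ships a test command that runs the API smoke tests (this takes a while):</p>
<pre class="highlight shell">sudo chef-server-ctl test
</pre>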
<p>Point your browser to the Chef VM and log in using <strong>admin</strong> and <strong>p@ssw0rd1</strong>. Change the password after the login, regenerate the key, and copy the key somewhere safe.</p>
<h2>Setup Workstation</h2>
<p>Having installed the <strong>ChefDK</strong> earlier, your workstation already has everything needed to connect to your Chef Server.</p>
<p>Inside your terminal run:</p>
<pre class="highlight shell"><span class="nb">export </span><span class="nv">PATH</span><span class="o">=</span><span class="s2">"/opt/chefdk/embedded/bin:</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.chefdk/gem/ruby/2.1.0/bin:</span><span class="nv">$PATH</span><span class="s2">"</span>
chef
</pre>
<p>If both commands work, everything is set up just fine.</p>
<p>Clone the Chef-Repo inside your <strong>home</strong> directory like so:</p>
<pre class="highlight shell">git clone git://github.com/opscode/chef-repo.git
Cloning into <span class="s1">'chef-repo'</span>...
remote: Counting objects: 199, <span class="k">done</span>.
remote: Compressing objects: 100% <span class="o">(</span>119/119<span class="o">)</span>, <span class="k">done</span>.
remote: Total 199 <span class="o">(</span>delta 71<span class="o">)</span>, reused 160 <span class="o">(</span>delta 47<span class="o">)</span>
Receiving objects: 100% <span class="o">(</span>199/199<span class="o">)</span>, 30.45 KiB, <span class="k">done</span>.
Resolving deltas: 100% <span class="o">(</span>71/71<span class="o">)</span>, <span class="k">done</span>.
</pre>
<p>Inside of the <strong>chef-repo</strong> directory create a new directory called <strong>.chef</strong> (no <code>sudo</code> here, otherwise the directory ends up owned by root and knife cannot write to it):</p>
<pre class="highlight shell">mkdir -p .chef
</pre>
<p>Now we only need to configure <strong>knife</strong>:</p>
<pre class="highlight shell">knife configure --initial
</pre>
<p>Answer the setup questions. If you have configured everything correctly, you can run the following command to check that everything worked:</p>
<pre class="highlight shell">knife client list
</pre>
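<p>For reference, the resulting <strong>.chef/knife.rb</strong> looks roughly like the sketch below; the exact names, paths, and the server URL (here the IP of our <strong>ubuntu1204chef</strong> VM) depend on your answers and are assumptions for this setup:</p>
<pre class="highlight plaintext"># .chef/knife.rb (sketch, values are assumptions)
node_name                "admin"
client_key               "#{ENV['HOME']}/chef-repo/.chef/admin.pem"
validation_client_name   "chef-validator"
validation_key           "#{ENV['HOME']}/chef-repo/.chef/chef-validator.pem"
chef_server_url          "https://192.168.3.62"
cookbook_path            ["#{ENV['HOME']}/chef-repo/cookbooks"]
</pre>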
<h2>Bootstrap Nodes</h2>
<p>Navigate into your <strong>chef-repo</strong> and run:</p>
<pre class="highlight shell">knife bootstrap localhost --sudo -x vagrant -P vagrant --ssh-port 2222 -N ubuntu1204cont --bootstrap-version 11.14.6-1
knife bootstrap localhost --sudo -x vagrant -P vagrant --ssh-port 2200 -N ubuntu1204comp1 --bootstrap-version 11.14.6-1
</pre>
<p>Please make sure that the port numbers are correct. Vagrant showed you the correct SSH port numbers while running the <strong>vagrant up</strong> command earlier; see below for a way to look them up again.</p>
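<p>If you no longer have that output at hand, <code>vagrant ssh-config</code> prints the forwarded SSH port for each VM:</p>
<pre class="highlight shell">vagrant ssh-config ubuntu1204cont | grep Port
vagrant ssh-config ubuntu1204comp1 | grep Port
</pre>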
<h2>Checkout Repo and bundle upload</h2>
<p>Navigate into <strong>chef-repo/cookbooks</strong> and clone the Stackforge repository there:</p>
<pre class="highlight shell">git clone -b stable/icehouse https://github.com/stackforge/openstack-chef-repo.git
</pre>
<p>To upload all cookbooks to your Chef Server, run:</p>
<pre class="highlight shell"><span class="nb">cd </span>openstack-chef-repo
berks install
berks upload
</pre>
<p>If you encounter problems regarding SSL, you can create a <strong>config.json</strong> file in your <strong>~/.berkshelf</strong> directory:</p>
<pre class="highlight json"><span class="p">{</span><span class="w">
</span><span class="s2">"ssl"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="s2">"verify"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></pre>
<h2>Create environment</h2>
<p>Now we create the environment our deployment will be using:</p>
<pre class="highlight shell"><span class="nb">export </span><span class="nv">EDITOR</span><span class="o">=</span><span class="k">$(</span>which vi<span class="k">)</span>
knife environment create multi-node
<span class="o">{</span>
<span class="s2">"name"</span>: <span class="s2">"multi-node"</span>,
<span class="s2">"description"</span>: <span class="s2">"Environment used in testing the upstream cookbooks and reference Chef repository with vagrant. To be used with the Vagrantfile-multi-neutron vagrantfile. Defines the necessary attributes for a working mutltinode (1 controller/n computes) openstack deployment, using neutron (with gre tunnels between hosts) for the networking component."</span>,
<span class="s2">"cookbook_versions"</span>: <span class="o">{}</span>,
<span class="s2">"json_class"</span>: <span class="s2">"Chef::Environment"</span>,
<span class="s2">"chef_type"</span>: <span class="s2">"environment"</span>,
<span class="s2">"default_attributes"</span>: <span class="o">{}</span>,
<span class="s2">"override_attributes"</span>: <span class="o">{</span>
<span class="s2">"mysql"</span>: <span class="o">{</span>
<span class="s2">"allow_remote_root"</span>: <span class="nb">true</span>,
<span class="s2">"root_network_acl"</span>: <span class="o">[</span><span class="s2">"%"</span><span class="o">]</span>
<span class="o">}</span>,
<span class="s2">"openstack"</span>: <span class="o">{</span>
<span class="s2">"developer_mode"</span>: <span class="nb">true</span>,
<span class="s2">"identity"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"endpoints"</span>: <span class="o">{</span>
<span class="s2">"host"</span>: <span class="s2">"192.168.3.60"</span>,
<span class="s2">"mq"</span>: <span class="o">{</span>
<span class="s2">"host"</span>: <span class="s2">"192.168.3.60"</span>,
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"db"</span>: <span class="o">{</span>
<span class="s2">"host"</span>: <span class="s2">"192.168.3.60"</span>,
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"network"</span>: <span class="o">{</span>
<span class="s2">"debug"</span>: <span class="s2">"True"</span>,
<span class="s2">"dhcp"</span>: <span class="o">{</span>
<span class="s2">"enable_isolated_metadata"</span>: <span class="s2">"True"</span>
<span class="o">}</span>,
<span class="s2">"metadata"</span>: <span class="o">{</span>
<span class="s2">"nova_metadata_ip"</span>: <span class="s2">"192.168.3.60"</span>
<span class="o">}</span>,
<span class="s2">"openvswitch"</span>: <span class="o">{</span>
<span class="s2">"tunnel_id_ranges"</span>: <span class="s2">"1:1000"</span>,
<span class="s2">"enable_tunneling"</span>: <span class="s2">"True"</span>,
<span class="s2">"tenant_network_type"</span>: <span class="s2">"gre"</span>,
<span class="s2">"local_ip_interface"</span>: <span class="s2">"eth2"</span>
<span class="o">}</span>,
<span class="s2">"api"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>
<span class="o">}</span>,
<span class="s2">"image"</span>: <span class="o">{</span>
<span class="s2">"api"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"registry"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"image_upload"</span>: <span class="nb">true</span>,
<span class="s2">"upload_images"</span>: <span class="o">[</span>
<span class="s2">"cirros"</span>,
<span class="s2">"ubuntu"</span>
<span class="o">]</span>,
<span class="s2">"upload_image"</span>: <span class="o">{</span>
<span class="s2">"ubuntu"</span>: <span class="s2">"http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img"</span>,
<span class="s2">"cirros"</span>: <span class="s2">"https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"</span>
<span class="o">}</span>
<span class="o">}</span>,
<span class="s2">"compute"</span>: <span class="o">{</span>
<span class="s2">"xvpvnc_proxy"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"novnc_proxy"</span>: <span class="o">{</span>
<span class="s2">"bind_interface"</span>: <span class="s2">"eth1"</span>
<span class="o">}</span>,
<span class="s2">"libvirt"</span>: <span class="o">{</span>
<span class="s2">"virt_type"</span>: <span class="s2">"qemu"</span>
<span class="o">}</span>,
<span class="s2">"network"</span>: <span class="o">{</span>
<span class="s2">"public_interface"</span>: <span class="s2">"eth1"</span>,
<span class="s2">"service_type"</span>: <span class="s2">"neutron"</span>
<span class="o">}</span>,
<span class="s2">"config"</span>: <span class="o">{</span>
<span class="s2">"ram_allocation_ratio"</span>: 5
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
</pre>
<p>Now we add the <strong>multi-node</strong> environment to our nodes:</p>
<pre class="highlight shell">knife node environment_set ubuntu1204cont multi-node
knife node environment_set ubuntu1204comp1 multi-node
</pre>
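<p>To verify the assignment, you can query the environment attribute on a node (output formatting may vary slightly between Chef versions):</p>
<pre class="highlight shell">knife node show ubuntu1204cont -a chef_environment
</pre>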
<h2>Add Roles and recipes to nodes</h2>
<p>Next, we will be adding the necessary <strong>roles</strong> to our Chef Server.</p>
<p>Once again inside <strong>chef-repo/cookbooks/openstack-chef-repo</strong>, run:</p>
<pre class="highlight shell">knife role from file roles/<span class="k">*</span>.json
</pre>
<p>Now that the roles are on the Chef Server, we can add them to the nodes:</p>
<pre class="highlight shell">knife node run_list add ubuntu1204cont <span class="s2">"role[os-compute-single-controller-no-network],recipe[openstack-network::identity_registration]"</span>,<span class="s2">"role[os-network-openvswitch]"</span>,<span class="s2">"role[os-network-dhcp-agent]"</span>,<span class="s2">"role[os-network-metadata-agent]"</span>,<span class="s2">"role[os-network-server]"</span>
knife node run_list add ubuntu1204comp1 <span class="s2">"role[os-compute-worker]"</span>
</pre>
<p>As you can see, we configure one node to be the Controller/Compute node and one to be just an additional Compute node.</p>
<h2>Chef Client run</h2>
<p>SSH into your nodes (ubuntu1204cont, ubuntu1204comp1) and run:</p>
<pre class="highlight shell">sudo chef-client
</pre>
<h2>Test the deployment</h2>
<p>You can now log in at the URL of the first VM, using the username <strong>admin</strong> and the password <strong>admin</strong>.
If you navigate to <strong>Admin -> System Panel -> Host Aggregates</strong> you should see both nodes listed.</p>
<p><img class="center" data-thumbnails="original:article-images/host_aggregates.png|small:article-images/host_aggregates-small-200x.png|half:article-images/host_aggregates-half-390x.png|blog:article-images/host_aggregates-blog-780x.png" src="/images/article-images/host_aggregates-blog-780x.png" /> </p>
<h1>Deploying Openstack with RDO Packstack</h1>
<h2>OpenStack Lifecycle Management Tools</h2>
<p>In a joint effort <a href="https://twitter.com/jannis_r">Jannis Rake-Revelant</a>, <a href="https://twitter.com/jaybrueder">Jürgen Brüder</a>, and I, <a href="https://twitter.com/ehaselwanter">Edmund Haselwanter</a>, had a look at several of what we call “OpenStack Lifecycle Management tools”.</p>
<p>This time <a href="https://twitter.com/jannis_r">Jannis Rake-Revelant</a> did most of the work, so thanks for sharing your findings :-)</p>
<h2>RDO versus RedHat Enterprise Linux Openstack</h2>
<blockquote>
<p>[..] It’s a freely-available, community-supported distribution of OpenStack that runs on Red Hat Enterprise Linux, CentOS, Fedora, and their derivatives. – <a href="https://openstack.redhat.com/Frequently_Asked_Questions">https://openstack.redhat.com/Frequently_Asked_Questions</a></p>
</blockquote>
<p>The community effort RDO, however, is not officially supported by Red Hat. If you want to buy support from Red Hat for OpenStack, you need to use <a href="https://access.redhat.com/products/red-hat-enterprise-linux-openstack-platform/">RedHat Enterprise Linux Openstack</a>. Red Hat offers <a href="https://access.redhat.com/products/red-hat-enterprise-linux-openstack-platform/ova-r">virtual appliances</a> to test their platform.</p>
<p>For this tutorial, we will however be using <a href="https://openstack.redhat.com/Quickstart">RDO packstack</a> as an installer and follow the related quick start guide.</p>
<h2>Virtual machine preparation</h2>
<p>Since a RHEL VM would need licensing, we chose CentOS as the base OS. CentOS 6.5 was used during the first attempt; due to various errors encountered in the process, CentOS 7 is used as an alternative later on.</p>
<p>To simplify the setup process we use <a href="https://www.vagrantup.com">Vagrant</a> and VMware Fusion as a <a href="https://www.vagrantup.com/vmware">Vagrant Provider</a>, VirtualBox may also be used. To speed up the process even further boxes from the <a href="https://vagrantcloud.com">Vagrant Cloud</a> will be used, specifically for <a href="https://vagrantcloud.com/chef/boxes/centos-6.5">CentOS 6.5</a> and <a href="https://vagrantcloud.com/chef/boxes/centos-7.0">CentOS 7</a>. </p>
<p>This allows us to start the OS with just two commands:</p>
<pre class="highlight plaintext">vagrant init chef/centos-6.5
vagrant up
</pre>
<p>To access the instance via SSH we simply run:</p>
<pre class="highlight plaintext">vagrant ssh
</pre>
<h2>Using the quickstart guide</h2>
<p>Since the disclaimer tells us to use at least 2 GB of RAM and the hardware virtualization extensions, we need to edit our Vagrantfile to include:</p>
<pre class="highlight plaintext">config.vm.provider :vmware_fusion do |v|
v.vmx["memsize"] = "4096"
v.vmx["numvcpus"] = "2"
v.vmx["vhv.enable"] = "TRUE"
end
</pre>
<p>We then need to reload the VM:</p>
<pre class="highlight plaintext">vagrant reload
</pre>
<p>The <a href="https://openstack.redhat.com/Quickstart">quickstart setup</a> looks simple. First, we update our packages:</p>
<pre class="highlight plaintext">sudo yum update -y
</pre>
<p>Enable the RDO repositories:</p>
<pre class="highlight plaintext">sudo yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
</pre>
<p>Install the packstack installer:</p>
<pre class="highlight plaintext">sudo yum install -y openstack-packstack
</pre>
<p>and create an “all in one” setup:</p>
<pre class="highlight plaintext">packstack --allinone
</pre>
<p>However, eventually the following error is encountered:</p>
<pre class="highlight plaintext">192.168.3.153_amqp.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.3.153_amqp.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list erlang' returned 1: Error: No matching Packages to list
</pre>
<h2>Troubleshooting the packstack installation</h2>
<h3>EPEL and Erlang</h3>
<p>After some <a href="https://ask.openstack.org/en/question/51013/rdo-deploy-openstack-error-execution-of-usrbinyum-d-0-e-0-y-list-erlang-returned-1-error-no-matching-packages-to-list/">research</a> it seems clear that we need to <a href="http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/">install EPEL 6</a>:</p>
<pre class="highlight plaintext">wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo rpm -ivh epel-release-6-8.noarch.rpm
</pre>
<p>Check if the erlang package is available:</p>
<pre class="highlight plaintext">yum -d 0 -e 0 -y list erlang
Available Packages
erlang.x86_6
</pre>
<p>If you do not see this, check whether the EPEL repo is enabled by setting <code>enabled=1</code> in:</p>
<pre class="highlight plaintext">sudo vi /etc/yum.repos.d/epel.repo
</pre>
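<p>The relevant stanza should then look roughly like this (the mirror lines are left out here and stay unchanged):</p>
<pre class="highlight plaintext">[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
</pre>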
<p>However, after another <code>packstack --allinone</code> we encounter the same error. An alternative is to install the erlang package manually:</p>
<pre class="highlight plaintext">sudo yum install erlang
</pre>
<p>After yet another <code>packstack --allinone</code> we are presented with this error:</p>
<pre class="highlight plaintext">192.168.3.153_amqp.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.3.153_amqp.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list rabbitmq-server' returned 1: Error: No matching Packages to list
</pre>
<p>Side note: <code>epel.repo</code> seems to keep forgetting our change. The underlying fix is yet unknown; we have to re-enable it manually for now (see below for a shortcut).</p>
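<p>A quicker way to re-enable it each time is <code>yum-config-manager</code>, assuming the <code>yum-utils</code> package is installed:</p>
<pre class="highlight plaintext">sudo yum install -y yum-utils
sudo yum-config-manager --enable epel
</pre>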
<h3>RabbitMQ</h3>
<p>Learning from the Erlang problem, let us try to pre-install <a href="http://www.rabbitmq.com/install-rpm.html">RabbitMQ</a>:</p>
<pre class="highlight plaintext">wget http://www.rabbitmq.com/releases/rabbitmq-server/v3.3.5/rabbitmq-server-3.3.5-1.noarch.rpm
sudo yum install rabbitmq-server-3.3.5-1.noarch.rpm
</pre>
<p>And yet again we encounter another error:</p>
<pre class="highlight plaintext">192.168.3.153_keystone.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.3.153_keystone.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-keystone' returned 1: Error: Package: python-keystone-2014.1.2.1-1.el6.noarch (openstack-icehouse)
You will find full trace in log /var/tmp/packstack/20141015-204302-54nNUw/manifests/192.168.3.153_keystone.pp.log
</pre>
<p>After reading the full log</p>
<pre class="highlight plaintext">Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-keystone' returned 1: Error: Package: python-keystone-2014.1.2.1-1.el6.noarch (openstack-icehouse)
Requires: python-iso8601
</pre>
<p>we seem to be missing a python module.</p>
<p>After even more <a href="https://ask.openstack.org/en/question/46267/unable-to-install-openstack-keystone/">research</a> we install the python-keystoneclient:</p>
<pre class="highlight plaintext">yum install python-keystoneclient
</pre>
<p>and run into yet another error:</p>
<pre class="highlight plaintext">192.168.3.153_mariadb.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.3.153_mariadb.pp
Error: mysqladmin -u root password '667bccaf22db4ec5' returned 1 instead of one of [0]
</pre>
<h2>CentOS 7 as an alternative?</h2>
<p>Switching to the newer release solved the last issues for no apparent reason, and the installation finished with:</p>
<pre class="highlight plaintext">****** Installation completed successfully ******
</pre>
<p>However, neither the necessary rc files were created, nor were any services running.</p>
<h1>Deploying Openstack with Rackspace Private Cloud (RPC)</h1>
<h2>OpenStack Lifecycle Management Tools</h2>
<p>In a joint effort <a href="https://twitter.com/jannis_r">Jannis Rake-Revelant</a>, <a href="https://twitter.com/jaybrueder">Jürgen Brüder</a>, and I, <a href="https://twitter.com/ehaselwanter">Edmund Haselwanter</a>, had a look at several of what we call “OpenStack Lifecycle Management tools”.</p>
<p>This time <a href="https://twitter.com/jannis_r">Jannis Rake-Revelant</a> did most of the work, so thanks for sharing your findings :-)</p>
<h1>Deploying Openstack with Rackspace Private Cloud</h1>
<p><a href="http://www.rackspace.com/cloud/private/">Rackspace Private Cloud</a>(RPC) is available as an automated deployment system for multiple hosts and as a virtual appliance. The main difference is that the appliance is outdated (based on Openstack Havana, Rackspace Private Cloud 4.2) while the </p>
<h2>Rackspace Private Cloud Appliance</h2>
<p>To use the appliance a registration is required. After the registration, download links for VMware and VirtualBox are <a href="http://www.rackspace.com/cloud/private/script/">provided</a>.</p>
<p>For our setup we choose the OVA for VMware Fusion (can also be deployed in other VMware environments).</p>
<p><img class="center" data-thumbnails="original:article-images/rackspace-rpc-1.png|small:article-images/rackspace-rpc-1-small-200x.png|half:article-images/rackspace-rpc-1-half-390x.png|blog:article-images/rackspace-rpc-1-blog-780x.png" src="/images/article-images/rackspace-rpc-1-blog-780x.png" /> </p>
<p>After the download, the “import” function of VMware Fusion can be used to open the OVA:</p>
<p><img class="center" data-thumbnails="original:article-images/rackspace-rpc-2.png|small:article-images/rackspace-rpc-2-small-200x.png|half:article-images/rackspace-rpc-2-half-390x.png|blog:article-images/rackspace-rpc-2-blog-780x.png" src="/images/article-images/rackspace-rpc-2-blog-780x.png" /> </p>
<p>It is advisable to update the virtual machine before launch:</p>
<p><img class="center" data-thumbnails="original:article-images/rackspace-rpc-3.png|small:article-images/rackspace-rpc-3-small-200x.png|half:article-images/rackspace-rpc-3-half-390x.png|blog:article-images/rackspace-rpc-3-blog-780x.png" src="/images/article-images/rackspace-rpc-3-blog-780x.png" /> </p>
<p>After a lengthy startup process we are presented with an option to log into the VM with the credentials <code>root:Passw0rd</code>:</p>
<p><img class="center" data-thumbnails="original:article-images/rackspace-rpc-4.png|small:article-images/rackspace-rpc-4-small-200x.png|half:article-images/rackspace-rpc-4-half-390x.png|blog:article-images/rackspace-rpc-4-blog-780x.png" src="/images/article-images/rackspace-rpc-4-blog-780x.png" /> </p>
<pre class="highlight plaintext">Your Rackspace Private Cloud Sandbox is now ready to use.
# ============================================================================
Horizon URL is : https://192.168.3.154:443
Horizon User Name : admin
Horizon Password : Passw0rd
Sandbox User Name : root
Sandbox Password : Passw0rd
Chef Server URL is : https://192.168.3.154:4000
Chef User Name : admin
Chef Password : Passw0rd
# ============================================================================
root@RPC:~#
</pre>
<p>Using the Horizon URL we can easily login into the dashboard:</p>
<p><img class="center" data-thumbnails="original:article-images/rackspace-rpc-5.png|small:article-images/rackspace-rpc-5-small-200x.png|half:article-images/rackspace-rpc-5-half-390x.png|blog:article-images/rackspace-rpc-5-blog-780x.png" src="/images/article-images/rackspace-rpc-5-blog-780x.png" /> </p>
<p>After the creation of a first instance (a cirros image is preinstalled) we are presented with an error concerning the floating IP pool:</p>
<p><img class="center" data-thumbnails="original:article-images/rackspace-rpc-6.png|small:article-images/rackspace-rpc-6-small-200x.png|half:article-images/rackspace-rpc-6-half-390x.png|blog:article-images/rackspace-rpc-6-blog-780x.png" src="/images/article-images/rackspace-rpc-6-blog-780x.png" /> </p>
<p>The reason being that there is no floating IP pool yet. This can be remedied by logging into the VM (console or ssh from the host machine). A <code>nova list</code> shows us that the credentials are preloaded:</p>
<pre class="highlight plaintext">root@RPC:~# nova list
+--------------------------------------+--------------+--------+------------+-------------+-------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------+--------+------------+-------------+-------------------+
| f1da02eb-a9ba-4aa5-813f-ade77fb6a282 | initial-test | ACTIVE | None | Running | public=172.16.0.2 |
+--------------------------------------+--------------+--------+------------+-------------+-------------------+
</pre>
<p>We set up a small floating IP pool in the range of our VM:</p>
<pre class="highlight plaintext">root@RPC:~# nova-manage floating create 192.168.3.64/28
root@RPC:~# nova-manage floating list
None 192.168.3.65 None nova br0
None 192.168.3.66 None nova br0
None 192.168.3.67 None nova br0
None 192.168.3.68 None nova br0
None 192.168.3.69 None nova br0
None 192.168.3.70 None nova br0
None 192.168.3.71 None nova br0
None 192.168.3.72 None nova br0
None 192.168.3.73 None nova br0
None 192.168.3.74 None nova br0
None 192.168.3.75 None nova br0
None 192.168.3.76 None nova br0
None 192.168.3.77 None nova br0
None 192.168.3.78 None nova br0
</pre>
<p>Afterwards we can create and allocate a floating IP in the dashboard:</p>
<p><img class="center" data-thumbnails="original:article-images/rackspace-rpc-7.png|small:article-images/rackspace-rpc-7-small-200x.png|half:article-images/rackspace-rpc-7-half-390x.png|blog:article-images/rackspace-rpc-7-blog-780x.png" src="/images/article-images/rackspace-rpc-7-blog-780x.png" /> </p>
<p>and are able to reach the VM:</p>
<pre class="highlight plaintext">ping 192.168.3.66
PING 192.168.3.66 (192.168.3.66): 56 data bytes
64 bytes from 192.168.3.66: icmp_seq=0 ttl=63 time=2.393 ms
64 bytes from 192.168.3.66: icmp_seq=1 ttl=63 time=1.017 ms
</pre>
<p>An excellent article on the appliance can be found in the <a href="http://www.rackspace.com/blog/taking-openstack-for-a-spin-using-the-rackspace-private-cloud-sandbox/">Rackspace Blog</a>.</p>
<h2>Multi-Host Setup</h2>
<p>The current version that can be used for setting up a Rackspace Private Cloud is 9.0.0, with automation based on Ansible. Instructions can be found <a href="http://docs.rackspace.com/rpc/api/v9/bk-rpc-installation/content/rpc-common-front.html">here</a>.</p>
<h3>Preparation of the deployment VM</h3>
<p>The multi-host setup deployment machine can be kickstarted easily by using <a href="https://www.vagrantup.com">Vagrant</a> and the <a href="https://vagrantcloud.com">Vagrant Cloud</a>. The <a href="http://docs.rackspace.com/rpc/api/v9/bk-rpc-installation/content/sec-hosts-deployment-os.html">prerequisite</a> of Ubuntu 14.04 can be satisfied by:</p>
<pre class="highlight plaintext">vagrant init chef/ubuntu-14.04
vagrant up
vagrant ssh
</pre>
<h3>Deployment preparation</h3>
<p>Install additional packages:</p>
<pre class="highlight plaintext">apt-get update
apt-get install aptitude build-essential git ntp ntpdate openssh-server python-dev sudo curl
</pre>
<p>Install sources and dependencies:</p>
<pre class="highlight plaintext">cd /opt
git clone -b 9.0.0 https://github.com/rcbops/ansible-lxc-rpc.git
</pre>
<p>Install pip:</p>
<pre class="highlight plaintext">curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install -r /opt/ansible-lxc-rpc/requirements.txt
</pre>
<p>Generate an ssh keypair:</p>
<pre class="highlight plaintext">ssh-keygen -t rsa
</pre>
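<p>The public key has to end up in <code>authorized_keys</code> on every target host so that Ansible can log in; one way to get it there (the target address is a placeholder for your host) is:</p>
<pre class="highlight plaintext">ssh-copy-id root@TARGET_HOST_IP
</pre>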
<h3>Preparing the target host</h3>
<p>At this point in the <a href="http://docs.rackspace.com/rpc/api/v9/bk-rpc-installation/content/ch-hosts-target.html">documentation</a> it becomes clear that, since the change to Ansible, considerable effort is needed to deploy RPC.</p>
<h1>Deploying Openstack with Mirantis Fuel 5.1</h1>
<h2>OpenStack Lifecycle Management Tools</h2>
<p>In a joint effort <a href="https://twitter.com/jannis_r">Jannis Rake-Revelant</a>, <a href="https://twitter.com/jaybrueder">Jürgen Brüder</a>, and I, <a href="https://twitter.com/ehaselwanter">Edmund Haselwanter</a>, had a look at several of what we call “OpenStack Lifecycle Management tools”.</p>
<p>This time <a href="https://twitter.com/jannis_r">Jannis Rake-Revelant</a> did most of the work, so thanks for sharing your findings :-)</p>
<h2>Mirantis Fuel 5.1</h2>
<p>The Cloud Services company <a href="https://www.mirantis.com/">Mirantis</a> offers a so-called “Pure Play Openstack” distribution called “Mirantis Openstack”. It is deployed using an automation system called <a href="https://software.mirantis.com/key-related-openstack-projects/project-fuel/">“Mirantis Fuel”</a> and can be obtained at <a href="https://software.mirantis.com/">https://software.mirantis.com/</a>.</p>
<p>The version deployed in this document is 5.1, based on the Icehouse release cycle of OpenStack, with CentOS 6.5 as the host operating system (an option to deploy on Ubuntu is also offered but has not been tested yet).</p>
<h2>Existing Tutorials</h2>
<p>Existing documentation for (virtual) deployments seems to be sparse. The documentation at hand is based on the <a href="http://docs.mirantis.com/openstack/fuel/fuel-5.1/virtualbox.html#deploying-the-master-node-manually">Mirantis Documentation for a manual deployment on VirtualBox</a> and is adapted to VMware Fusion.</p>
<h2>Setting up the virtual infrastructure</h2>
<p>The minimum setup of the distribution consists of three VMs. For a more realistic deployment we will work with two compute nodes instead of only one:</p>
<ul>
<li>1 VM for the admin node (Fuel)</li>
<li>1 VM for the controller and simple glance image service</li>
<li>2 VMs for the compute nodes</li>
</ul>
<p>The following specifications are suggested to have a testable system:</p>
<pre class="highlight plaintext">| VM         | vCPU | RAM    | HDD      | NICs |
|------------|------|--------|----------|------|
| Admin      | 1    | 1 GB   | 50 GB    | 1    |
| Controller | 1    | 2 GB   | 30 GB    | 2    |
| Compute 1  | 1    | 2 GB   | 30 GB    | 2    |
| Compute 2  | 1    | 2 GB   | 30 GB    | 2    |
|            |      |        |          |      |
| _Total_    | _4_  | _7 GB_ | _140 GB_ | _7_  |
</pre>
<p>The total HDD space is theoretical, since we will be working with virtual disks that only expand as needed. Given the total RAM requirement, a host system with 12-16 GB of RAM is recommended.</p>
<p>Each OpenStack VM needs at least two virtual NICs, since Mirantis Openstack works with at least two untagged networks (admin and public). The network setup will be detailed later on.</p>
<p>It is also necessary to use a virtualization solution (in our case VMware Fusion 5.0.5) on a system that supports nested virtualization (Intel VT-x), and to enable this setting explicitly.</p>
<h2>Installing the admin node</h2>
<p>The admin node is needed to bootstrap the Openstack installation. An ISO (2.6 GB) for installation can be obtained <a href="https://software.mirantis.com/">here</a>:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-1.png|small:article-images/mirantis-1-small-200x.png|half:article-images/mirantis-1-half-390x.png|blog:article-images/mirantis-1-blog-780x.png" src="/images/article-images/mirantis-1-blog-780x.png" /> </p>
<p>Next we need to prepare the VM using the VMware Fusion assistant:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-2.png|small:article-images/mirantis-2-small-200x.png|half:article-images/mirantis-2-half-390x.png|blog:article-images/mirantis-2-blog-780x.png" src="/images/article-images/mirantis-2-blog-780x.png" /> </p>
<p>We provide the ISO for installation:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-3.png|small:article-images/mirantis-3-small-200x.png|half:article-images/mirantis-3-half-390x.png|blog:article-images/mirantis-3-blog-780x.png" src="/images/article-images/mirantis-3-blog-780x.png" /> </p>
<p>Choose CentOS Linux in the 64-bit option (important!):</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-4.png|small:article-images/mirantis-4-small-200x.png|half:article-images/mirantis-4-half-390x.png|blog:article-images/mirantis-4-blog-780x.png" src="/images/article-images/mirantis-4-blog-780x.png" /> </p>
<p>The default settings for the VM need to be adjusted:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-5.png|small:article-images/mirantis-5-small-200x.png|half:article-images/mirantis-5-half-390x.png|blog:article-images/mirantis-5-blog-780x.png" src="/images/article-images/mirantis-5-blog-780x.png" /> </p>
<p>In theory it should not matter, but since we do not know whether Fuel uses nested virtualization before bootstrapping, it is turned on:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-6.png|small:article-images/mirantis-6-small-200x.png|half:article-images/mirantis-6-half-390x.png|blog:article-images/mirantis-6-blog-780x.png" src="/images/article-images/mirantis-6-blog-780x.png" /> </p>
<p>IMPORTANT: The default (unattended) installation of Fuel from the ISO uses a <strong>static</strong> IP (10.20.0.2), so we need to create a custom network in the general (not per-VM) settings of VMware Fusion:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-7.png|small:article-images/mirantis-7-small-200x.png|half:article-images/mirantis-7-half-390x.png|blog:article-images/mirantis-7-blog-780x.png" src="/images/article-images/mirantis-7-blog-780x.png" /> </p>
<p>In our case it is <code>vmnet3</code> with DHCP disabled. Although no DHCP is needed, the IP range and mask need to be set correctly, in our case <code>10.20.0.0/24</code>, in order to have connectivity to the admin console of Fuel from our local browser.</p>
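<p>A quick way to verify the host side of the custom network, assuming VMware Fusion exposes it as a <code>vmnet3</code> interface on the host (the host typically takes the <code>.1</code> address):</p>
<pre class="highlight plaintext">ifconfig vmnet3
# expect an inet address within 10.20.0.0/24, e.g. 10.20.0.1
</pre>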
<p>Make sure the VM is connected to the correct virtual network:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-8.png|small:article-images/mirantis-8-small-200x.png|half:article-images/mirantis-8-half-390x.png|blog:article-images/mirantis-8-blog-780x.png" src="/images/article-images/mirantis-8-blog-780x.png" /> </p>
<p>We can now start the VM and the automatic installation of Fuel will do the rest:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-9.png|small:article-images/mirantis-9-small-200x.png|half:article-images/mirantis-9-half-390x.png|blog:article-images/mirantis-9-blog-780x.png" src="/images/article-images/mirantis-9-blog-780x.png" /> </p>
<p>After a successful install you should see the following screen:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-10.png|small:article-images/mirantis-10-small-200x.png|half:article-images/mirantis-10-half-390x.png|blog:article-images/mirantis-10-blog-780x.png" src="/images/article-images/mirantis-10-blog-780x.png" /> </p>
<p>The bootstrapping can be done using a web browser; no login on the VM console is needed.</p>
<p>Once the installation of the admin node has finished, the Fuel web interface is available (by default) at <a href="http://10.20.0.2:8000/#login">http://10.20.0.2:8000/#login</a> with the credentials <code>admin:admin</code>:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-11.png|small:article-images/mirantis-11-small-200x.png|half:article-images/mirantis-11-half-390x.png|blog:article-images/mirantis-11-blog-780x.png" src="/images/article-images/mirantis-11-blog-780x.png" /> </p>
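<p>Before opening a browser, reachability of the Fuel UI can be checked from the host; a small sketch, assuming the custom network is set up as described above:</p>
<pre class="highlight plaintext">ping -c 3 10.20.0.2
curl -I http://10.20.0.2:8000/
</pre>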
<h2>Bootstrapping the environment</h2>
<h3>Preparing the virtual machines</h3>
<p>Before the admin node can go to work it needs empty VMs to bootstrap via PXE boot. The setup of the VMs differs slightly from that of the admin node. </p>
<p>First, three custom VMs need to be defined without an existing disk or ISO, using the already mentioned specifications for the controller and compute nodes:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-12.png|small:article-images/mirantis-12-small-200x.png|half:article-images/mirantis-12-half-390x.png|blog:article-images/mirantis-12-blog-780x.png" src="/images/article-images/mirantis-12-blog-780x.png" /> </p>
<p>Just like for the admin node, CentOS Linux 64 bit is chosen as the profile. Also verify that the number of cores, RAM, and HDD settings are correct and that nested virtualization is active:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-13.png|small:article-images/mirantis-13-small-200x.png|half:article-images/mirantis-13-half-390x.png|blog:article-images/mirantis-13-blog-780x.png" src="/images/article-images/mirantis-13-blog-780x.png" /> </p>
<p>For the public network of Openstack an additional virtual network needs to be created, just like it was done for the initial admin network, in this case with the default range provided by Mirantis (<code>172.16.0.0/24</code> on <code>vmnet4</code>):</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-14.png|small:article-images/mirantis-14-small-200x.png|half:article-images/mirantis-14-half-390x.png|blog:article-images/mirantis-14-blog-780x.png" src="/images/article-images/mirantis-14-blog-780x.png" /> </p>
<p>In order to PXE boot, all VMs also need to be connected to the admin net <code>vmnet3</code>. In addition, each VM's boot device needs to be set to the NIC that is connected to said virtual network (see the <code>.vmx</code> sketch after the screenshot):</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-15.png|small:article-images/mirantis-15-small-200x.png|half:article-images/mirantis-15-half-390x.png|blog:article-images/mirantis-15-blog-780x.png" src="/images/article-images/mirantis-15-blog-780x.png" /> </p>
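<p>For reference, these GUI settings correspond to entries in each VM's <code>.vmx</code> file. The following is a sketch only; the <code>vnet</code> value and the <code>bios.bootOrder</code> behaviour are assumptions and should be double-checked against your Fusion version (editing via the GUI is the safe route):</p>
<pre class="highlight plaintext"># NIC attached to the custom admin network (values are assumptions)
ethernet0.present = "TRUE"
ethernet0.connectionType = "custom"
ethernet0.vnet = "vmnet3"
# attempt to boot from the network first (not verified on all Fusion versions)
bios.bootOrder = "ethernet,hdd"
</pre>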
<p>All VMs can now be started so that Fuel can discover them.</p>
<h2>Deploying Openstack</h2>
<p>After logging into the Fuel admin dashboard we are presented with the option to create a new environment:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-16.png|small:article-images/mirantis-16-small-200x.png|half:article-images/mirantis-16-half-390x.png|blog:article-images/mirantis-16-blog-780x.png" src="/images/article-images/mirantis-16-blog-780x.png" /> </p>
<p>We choose an Icehouse installation based on CentOS 6.5 and a multi-node setup (non-HA):</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-17.png|small:article-images/mirantis-17-small-200x.png|half:article-images/mirantis-17-half-390x.png|blog:article-images/mirantis-17-blog-780x.png" src="/images/article-images/mirantis-17-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-18.png|small:article-images/mirantis-18-small-200x.png|half:article-images/mirantis-18-half-390x.png|blog:article-images/mirantis-18-blog-780x.png" src="/images/article-images/mirantis-18-blog-780x.png" /> </p>
<p>We can base our compute nodes on KVM, since we have nested virtualization available. QEMU setups can also be deployed but offer less performance.</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-19.png|small:article-images/mirantis-19-small-200x.png|half:article-images/mirantis-19-half-390x.png|blog:article-images/mirantis-19-blog-780x.png" src="/images/article-images/mirantis-19-blog-780x.png" /> </p>
<p>The network will be a simple nova-network (not Neutron) setup:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-20.png|small:article-images/mirantis-20-small-200x.png|half:article-images/mirantis-20-half-390x.png|blog:article-images/mirantis-20-blog-780x.png" src="/images/article-images/mirantis-20-blog-780x.png" /> </p>
<p>As storage backends, the default LVM volumes for Cinder (block storage) and local storage for Glance (image storage) are selected:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-21.png|small:article-images/mirantis-21-small-200x.png|half:article-images/mirantis-21-half-390x.png|blog:article-images/mirantis-21-blog-780x.png" src="/images/article-images/mirantis-21-blog-780x.png" /> </p>
<p>We refrain from installing additional services and finish the creation of a new environment:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-22.png|small:article-images/mirantis-22-small-200x.png|half:article-images/mirantis-22-half-390x.png|blog:article-images/mirantis-22-blog-780x.png" src="/images/article-images/mirantis-22-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-23.png|small:article-images/mirantis-23-small-200x.png|half:article-images/mirantis-23-half-390x.png|blog:article-images/mirantis-23-blog-780x.png" src="/images/article-images/mirantis-23-blog-780x.png" /> </p>
<p>Inside the new environment new nodes can be added once they are discovered in the admin network using PXE boot:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-24.png|small:article-images/mirantis-24-small-200x.png|half:article-images/mirantis-24-half-390x.png|blog:article-images/mirantis-24-blog-780x.png" src="/images/article-images/mirantis-24-blog-780x.png" /> </p>
<p>Nodes should automatically boot into bootstrap mode:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-25.png|small:article-images/mirantis-25-small-200x.png|half:article-images/mirantis-25-half-390x.png|blog:article-images/mirantis-25-blog-780x.png" src="/images/article-images/mirantis-25-blog-780x.png" /> </p>
<p>As soon as the nodes are identified by Fuel they can be assigned their respective roles. The controller will have the controller services and Cinder deployed, the compute nodes simply the compute service:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-26.png|small:article-images/mirantis-26-small-200x.png|half:article-images/mirantis-26-half-390x.png|blog:article-images/mirantis-26-blog-780x.png" src="/images/article-images/mirantis-26-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-27.png|small:article-images/mirantis-27-small-200x.png|half:article-images/mirantis-27-half-390x.png|blog:article-images/mirantis-27-blog-780x.png" src="/images/article-images/mirantis-27-blog-780x.png" /> </p>
<p>You may also check whether the default networks are allocated correctly on the nodes:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-28.png|small:article-images/mirantis-28-small-200x.png|half:article-images/mirantis-28-half-390x.png|blog:article-images/mirantis-28-blog-780x.png" src="/images/article-images/mirantis-28-blog-780x.png" /> </p>
<p>In addition, networking for admin, storage and nova network (fixed and floating) can be configured:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-29.png|small:article-images/mirantis-29-small-200x.png|half:article-images/mirantis-29-half-390x.png|blog:article-images/mirantis-29-blog-780x.png" src="/images/article-images/mirantis-29-blog-780x.png" /> </p>
<p>Once all nodes have assigned roles, the deployment process can be started. It will initially install CentOS 6.5 on all nodes, then set up the controller node, and finally the two compute nodes:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-30.png|small:article-images/mirantis-30-small-200x.png|half:article-images/mirantis-30-half-390x.png|blog:article-images/mirantis-30-blog-780x.png" src="/images/article-images/mirantis-30-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-31.png|small:article-images/mirantis-31-small-200x.png|half:article-images/mirantis-31-half-390x.png|blog:article-images/mirantis-31-blog-780x.png" src="/images/article-images/mirantis-31-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-32.png|small:article-images/mirantis-32-small-200x.png|half:article-images/mirantis-32-half-390x.png|blog:article-images/mirantis-32-blog-780x.png" src="/images/article-images/mirantis-32-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-33.png|small:article-images/mirantis-33-small-200x.png|half:article-images/mirantis-33-half-390x.png|blog:article-images/mirantis-33-blog-780x.png" src="/images/article-images/mirantis-33-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-34.png|small:article-images/mirantis-34-small-200x.png|half:article-images/mirantis-34-half-390x.png|blog:article-images/mirantis-34-blog-780x.png" src="/images/article-images/mirantis-34-blog-780x.png" /> </p>
<p>You may run into a “Hardware not supported” error while installing CentOS, which you will only notice when looking at the virtual console. In this setup the warning can safely be ignored:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-35.png|small:article-images/mirantis-35-small-200x.png|half:article-images/mirantis-35-half-390x.png|blog:article-images/mirantis-35-blog-780x.png" src="/images/article-images/mirantis-35-blog-780x.png" /> </p>
<p>Once the deployment is finished, the Mirantis Openstack Dashboard is available at <a href="http://10.20.0.5/dashboard/">http://10.20.0.5/dashboard/</a>. Fuel will also show the configured address after deployment completion. The default login is <code>admin:admin</code>:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-36.png|small:article-images/mirantis-36-small-200x.png|half:article-images/mirantis-36-half-390x.png|blog:article-images/mirantis-36-blog-780x.png" src="/images/article-images/mirantis-36-blog-780x.png" /> </p>
<h2>Testing the Openstack installation</h2>
<p>After a successful login, we can create test instances. A test VM image based on CirrOS is provided with the distribution:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-37.png|small:article-images/mirantis-37-small-200x.png|half:article-images/mirantis-37-half-390x.png|blog:article-images/mirantis-37-blog-780x.png" src="/images/article-images/mirantis-37-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-38.png|small:article-images/mirantis-38-small-200x.png|half:article-images/mirantis-38-half-390x.png|blog:article-images/mirantis-38-blog-780x.png" src="/images/article-images/mirantis-38-blog-780x.png" /> </p>
<p>To reach the test VM, make sure to assign a floating IP and to allow (at least) ICMP and SSH in the security group settings:</p>
<p><img class="center" data-thumbnails="original:article-images/mirantis-39.png|small:article-images/mirantis-39-small-200x.png|half:article-images/mirantis-39-half-390x.png|blog:article-images/mirantis-39-blog-780x.png" src="/images/article-images/mirantis-39-blog-780x.png" />
<img class="center" data-thumbnails="original:article-images/mirantis-40.png|small:article-images/mirantis-40-small-200x.png|half:article-images/mirantis-40-half-390x.png|blog:article-images/mirantis-40-blog-780x.png" src="/images/article-images/mirantis-40-blog-780x.png" /> </p>
<p>Now you should be able to ping the floating IP and log into the test VM from your host machine with the credentials <code>cirros:cubswin:)</code>:</p>
<pre class="highlight plaintext">$ ping 172.16.0.128
PING 172.16.0.128 (172.16.0.128): 56 data bytes
64 bytes from 172.16.0.128: icmp_seq=0 ttl=63 time=1.118 ms
64 bytes from 172.16.0.128: icmp_seq=1 ttl=63 time=0.683 ms
64 bytes from 172.16.0.128: icmp_seq=2 ttl=63 time=0.763 ms
</pre>
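<p>Logging in over SSH works the same way, using the credentials from above:</p>
<pre class="highlight plaintext">ssh cirros@172.16.0.128
</pre>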
<p>You can also test the installation using the <a href="http://docs.openstack.org/user-guide/content/ch_cli.html">Openstack command line clients</a> with the credentials and endpoints found in the dashboard under “Access and Security”. A good test is to upload a new image (e.g. <a href="https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img">Ubuntu 14.04 LTS</a>) to the image service and afterwards launch said image:</p>
<pre class="highlight plaintext">$ glance image-create --name Ubuntu1404 --file Documents/Development/iso/trusty64.qcow2 --container-format bare --disk-format qcow2 --is-public true --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 040a20a402b8eff2afe4ce409c8688ac |
| container_format | bare |
| created_at | 2014-10-12T10:05:55 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | 0721d7d1-a21b-4851-a6ea-b0bb4505fa04 |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | Ubuntu1404 |
| owner | 4323809bb3154929827b11df0c891da9 |
| protected | False |
| size | 255066624 |
| status | active |
| updated_at | 2014-10-12T10:06:05 |
| virtual_size | None |
+------------------+--------------------------------------+
</pre>
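<p>Launching the uploaded image from the command line could then look like this; a sketch under the assumption that a flavor <code>m1.small</code> exists (the defaults usually include it) and that the auth endpoint matches what your dashboard shows under “Access and Security”:</p>
<pre class="highlight plaintext"># credentials and endpoint as shown under "Access and Security"
# OS_AUTH_URL below is an assumption - use the endpoint from your dashboard
export OS_AUTH_URL=http://172.16.0.2:5000/v2.0/
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin

# boot an instance from the freshly uploaded image and check its state
nova boot --image Ubuntu1404 --flavor m1.small ubuntu-test
nova list
</pre>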