Have a cluster installed like in: SUSECloud Part 1 - Install The Multi Node OpenStack/Ceph Environment
Turn on the one or more VMs that will act as hosts which SUSECloud can provision.
cd crowbar-virtualbox
./start_cluster.sh
admin:~ # for I in `crowbar machines list |grep dc0`; do crowbar machines allocate $I; done
Executed allocate for dc0-ff-ee-00-01-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-02-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-03-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-04-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-05-01.suse-testbed.de
Executed allocate for dc0-ff-ee-00-06-01.suse-testbed.de
The nodes then begin the allocation and installation process.
You can watch the process with an RDP client.
Wait for the VMs to reach the Ready state.
admin:~ # crowbar node_state status
dc0-ff-ee-00-04-01   Installing
dc0-ff-ee-00-05-01   Installing
dc0-ff-ee-00-01-01   Installing
admin                Ready
dc0-ff-ee-00-06-01   Installing
dc0-ff-ee-00-02-01   Installing
dc0-ff-ee-00-03-01   Installing
After the nodes become Ready, we edit them to hint at their intended role.
admin:~ # crowbar node_state status
dc0-ff-ee-00-06-01   Ready
dc0-ff-ee-00-03-01   Ready
dc0-ff-ee-00-04-01   Ready
admin                Ready
dc0-ff-ee-00-01-01   Ready
dc0-ff-ee-00-05-01   Ready
dc0-ff-ee-00-02-01   Ready
This is useful for the later steps, as it will auto-populate hostname values in the corresponding fields. We have Chef at our disposal, so let's use a Ruby script for that:
cat > set_intended_role_and_zone.rb << CODE
nodes.all do |node|
  puts "updating Node #{node.name}"
  puts "current intended_role: #{node["crowbar_wall"]["intended_role"]}"
  puts "current az #{node["crowbar_wall"]["openstack"]["availability_zone"]}" if node["crowbar_wall"]["openstack"]
  case node.name
  when /dc0-ff-ee-00-01-01/
    node["crowbar_wall"]["intended_role"] = "controller"
  when /dc0-ff-ee-00-02-01/
    node["crowbar_wall"]["intended_role"] = "compute"
    node["crowbar_wall"]["openstack"] = {"availability_zone" => "mz"}
  when /dc0-ff-ee-00-03-01/
    node["crowbar_wall"]["intended_role"] = "compute"
    node["crowbar_wall"]["openstack"] = {"availability_zone" => "sec"}
  when /dc0-ff-ee-00-0[4-6]-01/
    node["crowbar_wall"]["intended_role"] = "storage"
  end
  node.save
end
CODE
knife exec set_intended_role_and_zone.rb
This script also sets the OpenStack availability_zone for the compute nodes to different values.
Order matters when provisioning the OpenStack pieces on the various host nodes. The proper order is already predefined in the Barclamps -> OpenStack list, and we have to follow it from top to bottom.
Create a new proposal for the database barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar database proposal create default
Created default
admin:~ # crowbar database proposal show default > database.json
Within the JSON we can alter attribute values. At the moment we are interested in getting the service deployed on our control node; since we set the intended_role, the setting should already match our wishes.
admin:~ # grep -A4 elements database.json
    "elements": {
      "database-server": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ]
    },
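Instead of grepping, the role assignments can also be read programmatically. A small Ruby sketch, assuming the proposal dumps nest them under deployment.&lt;barclamp&gt;.elements (the same path the json-edit call in the nova section uses):

```ruby
require 'json'

# Print which nodes hold each role in a Crowbar proposal dump.
# Assumed layout: deployment.<barclamp>.elements, as seen in these
# JSON files -- verify against your own dump.
def show_elements(path, barclamp)
  proposal = JSON.parse(File.read(path))
  elements = proposal['deployment'][barclamp]['elements']
  elements.each { |role, nodes| puts "#{role} -> #{nodes.join(', ')}" }
  elements
end

# Example: show_elements('database.json', 'database')
```

The same helper works for every barclamp below by swapping the file name and barclamp name.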
We can then save our settings back to the server:
admin:~ # crowbar database proposal edit default --file database.json
Edited default
and commit the change:
admin:~ # crowbar database proposal commit default
Committed default
Create a new proposal for the keystone barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar keystone proposal create default
Created default
admin:~ # crowbar keystone proposal show default > keystone.json
Within the JSON we can alter attribute values. At the moment we are interested in getting the service deployed on our control node; since we set the intended_role, the setting should already match our wishes.
admin:~ # grep -A4 elements keystone.json
    "elements": {
      "keystone-server": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ]
    },
We can then save our settings back to the server:
admin:~ # crowbar keystone proposal edit default --file keystone.json
Edited default
and commit the change:
admin:~ # crowbar keystone proposal commit default
Committed default
You can always check the current state of the nodes (in another terminal) with:
admin:~ # crowbar node_state status --no-ready
dc0-ff-ee-00-01-01   Applying
Create a new proposal for the rabbitmq barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar rabbitmq proposal create default
Created default
admin:~ # crowbar rabbitmq proposal show default > rabbitmq.json
Within the JSON we can alter attribute values. At the moment we are interested in getting the service deployed on our control node; since we set the intended_role, the setting should already match our wishes.
admin:~ # grep -A4 elements rabbitmq.json
    "elements": {
      "rabbitmq-server": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ]
    },
We can then save our settings back to the server:
admin:~ # crowbar rabbitmq proposal edit default --file rabbitmq.json
Edited default
and commit the change:
admin:~ # crowbar rabbitmq proposal commit default
Committed default
Create a new proposal for the ceph barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar ceph proposal create default
Created default
admin:~ # crowbar ceph proposal show default > ceph.json
Within the JSON we can alter attribute values. At the moment we are interested in where the services get deployed; since we set the intended_role, the setting should already be close to our wishes.
admin:~ # grep -A11 elements ceph.json
    "elements": {
      "ceph-osd": [
        "dc0-ff-ee-00-06-01.suse-testbed.de",
        "dc0-ff-ee-00-04-01.suse-testbed.de",
        "dc0-ff-ee-00-05-01.suse-testbed.de"
      ],
      "ceph-mon": [
        "dc0-ff-ee-00-01-01.suse-testbed.de",
        "dc0-ff-ee-00-06-01.suse-testbed.de",
        "dc0-ff-ee-00-04-01.suse-testbed.de"
      ]
    },
Here we see that our control node ended up in the ceph-mon list. We want to keep the Ceph functionality on the dedicated Ceph nodes, so edit the file and change the ceph-mon list.
admin:~ # sed -i 's/dc0-ff-ee-00-01-01.suse-testbed.de/dc0-ff-ee-00-05-01.suse-testbed.de/' ceph.json
admin:~ # grep -A11 elements ceph.json
    "elements": {
      "ceph-osd": [
        "dc0-ff-ee-00-06-01.suse-testbed.de",
        "dc0-ff-ee-00-04-01.suse-testbed.de",
        "dc0-ff-ee-00-05-01.suse-testbed.de"
      ],
      "ceph-mon": [
        "dc0-ff-ee-00-05-01.suse-testbed.de",
        "dc0-ff-ee-00-06-01.suse-testbed.de",
        "dc0-ff-ee-00-04-01.suse-testbed.de"
      ]
    },
We can then save our settings back to the server:
admin:~ # crowbar ceph proposal edit default --file ceph.json
Edited default
and commit the change:
admin:~ # crowbar ceph proposal commit default
Committed default
We leave out Swift in this deployment.
Create a new proposal for the glance barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar glance proposal create default
Created default
admin:~ # crowbar glance proposal show default > glance.json
Within the JSON we can alter attribute values. At the moment we are interested in getting the service deployed on our control node; since we set the intended_role, the setting should already match our wishes.
admin:~ # grep -A4 elements glance.json
    "elements": {
      "glance-server": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ]
    },
We want to use the Ceph RBD backend, so we have to alter the JSON:
sed -i 's/"default_store": "file"/"default_store": "rbd"/' glance.json
We can then save our settings back to the server:
admin:~ # crowbar glance proposal edit default --file glance.json
Edited default
and commit the change:
admin:~ # crowbar glance proposal commit default
Committed default
Create a new proposal for the cinder barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar cinder proposal create default
Created default
admin:~ # crowbar cinder proposal show default > cinder.json
Within the JSON we can alter attribute values. At the moment we are interested in getting the services deployed on our control node; since we set the intended_role, the setting should already be close to our wishes.
admin:~ # grep -A7 elements cinder.json
    "elements": {
      "cinder-volume": [
        "dc0-ff-ee-00-06-01.suse-testbed.de"
      ],
      "cinder-controller": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ]
    },
We do not want the cinder-volume role on one of the Ceph nodes, so we move it to the control node:
admin:~ # sed -i 's/dc0-ff-ee-00-06-01.suse-testbed.de/dc0-ff-ee-00-01-01.suse-testbed.de/' cinder.json
admin:~ # grep -A7 elements cinder.json
    "elements": {
      "cinder-volume": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ],
      "cinder-controller": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ]
    },
We want to use the Ceph RADOS backend, so we have to alter the JSON:
admin:~ # sed -i 's/"volume_type": "raw"/"volume_type": "rbd"/' cinder.json
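The sed works because the line is unique, but flipping the value in the parsed JSON is less fragile. A Ruby sketch, assuming the attribute path attributes.cinder.volume.volume_type (which matches the cinder_server[:cinder][:volume][:volume_type] lookup the nova recipe performs); verify it against your own cinder.json dump:

```ruby
require 'json'

# Flip Cinder's volume_type in the proposal JSON without a textual sed.
# Assumed attribute path: attributes.cinder.volume.volume_type.
def set_volume_type(path, type = 'rbd')
  proposal = JSON.parse(File.read(path))
  proposal['attributes']['cinder']['volume']['volume_type'] = type
  File.write(path, JSON.pretty_generate(proposal))
end

# Example: set_volume_type('cinder.json')
```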
We can then save our settings back to the server:
admin:~ # crowbar cinder proposal edit default --file cinder.json
Edited default
and commit the change:
admin:~ # crowbar cinder proposal commit default
Committed default
Create a new proposal for the neutron barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar neutron proposal create default
Created default
admin:~ # crowbar neutron proposal show default > neutron.json
Within the JSON we can alter attribute values. At the moment we are interested in getting the service deployed on our control node; since we set the intended_role, the setting should already match our wishes.
admin:~ # grep -A4 elements neutron.json
    "elements": {
      "neutron-server": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ]
    },
Here we want to make several customisations. First of all we want linuxbridge and vlan mode:
admin:~ # sed -i 's/"networking_mode": "gre"/"networking_mode": "vlan"/' neutron.json
admin:~ # sed -i 's/"networking_plugin": "openvswitch"/"networking_plugin": "linuxbridge"/' neutron.json
And second, we want all our public/floating traffic on a separate physical interface. Unfortunately SUSECloud does not support this out of the box yet, so we have to alter the Chef recipes and templates. This is somewhat hackish at the moment, as it is really tied to this VirtualBox setup where we know it is going to be eth1 and VLAN 300 for the public interface.
admin:~ # sed -i 's/network_vlan_ranges = physnet1:/network_vlan_ranges = physnet2:300:300,physnet1:/' /opt/dell/chef/cookbooks/neutron/templates/default/ml2_conf.ini.erb
admin:~ # sed -i 's/network_vlan_ranges = physnet1:/network_vlan_ranges = physnet2:300:300,physnet1:/' /opt/dell/chef/cookbooks/neutron/templates/default/linuxbridge_conf.ini.erb
admin:~ # sed -i 's/physical_interface_mappings = physnet1:/physical_interface_mappings = physnet2:eth1,physnet1:/' /opt/dell/chef/cookbooks/neutron/templates/default/linuxbridge_conf.ini.erb
admin:~ # sed -i 's/public_net\["vlan"\]} --provider:physical_network physnet1/public_net["vlan"]} --provider:physical_network physnet2/' /opt/dell/chef/cookbooks/neutron/recipes/post_install_conf.rb
admin:~ # knife cookbook upload neutron -o /opt/dell/chef/cookbooks/
Uploading neutron [1.0.0]
Uploaded 1 cookbook.
We can then save our settings back to the server:
admin:~ # crowbar neutron proposal edit default --file neutron.json
Edited default
and commit the change:
admin:~ # crowbar neutron proposal commit default
Committed default
Create a new proposal for the nova barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar nova proposal create default
Created default
admin:~ # crowbar nova proposal show default > nova.json
Within the JSON we can alter attribute values. At the moment we are interested in getting the services deployed on our control node; since we set the intended_role, the setting should already be close to our wishes.
admin:~ # grep -A17 elements nova.json
    "elements": {
      "nova-multi-controller": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ],
      "nova-multi-compute-xen": [
        "dc0-ff-ee-00-03-01.suse-testbed.de",
        "dc0-ff-ee-00-02-01.suse-testbed.de"
      ],
      "nova-multi-compute-kvm": [
      ],
      "nova-multi-compute-qemu": [
      ],
      "nova-multi-compute-hyperv": [
      ]
    }
We are running Nova with QEMU, since VirtualBox does not allow us to use KVM inside KVM. So we have to move our compute nodes to nova-multi-compute-qemu:
DOMAIN=`hostname -d`
crowbar nova proposal show default > nova.json
cat > nova-elements.json <<JSON
{
  "nova-multi-compute-hyperv": [],
  "nova-multi-controller": [
    "dc0-ff-ee-00-01-01.$DOMAIN"
  ],
  "nova-multi-compute-qemu": [
    "dc0-ff-ee-00-02-01.$DOMAIN",
    "dc0-ff-ee-00-03-01.$DOMAIN"
  ],
  "nova-multi-compute-xen": [],
  "nova-multi-compute-kvm": []
}
JSON
json-edit -r -a deployment.nova.elements -v "`cat nova-elements.json`" nova.json
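If json-edit is not at hand on your admin node, the same wholesale replacement of deployment.nova.elements can be sketched in plain Ruby (node names and domain are the ones used in this VirtualBox setup):

```ruby
require 'json'

# Replace deployment.nova.elements wholesale, like the json-edit call above.
# Node names and the domain are specific to this VirtualBox testbed.
def set_nova_elements(path, domain)
  proposal = JSON.parse(File.read(path))
  proposal['deployment']['nova']['elements'] = {
    'nova-multi-controller'     => ["dc0-ff-ee-00-01-01.#{domain}"],
    'nova-multi-compute-qemu'   => ["dc0-ff-ee-00-02-01.#{domain}",
                                    "dc0-ff-ee-00-03-01.#{domain}"],
    'nova-multi-compute-xen'    => [],
    'nova-multi-compute-kvm'    => [],
    'nova-multi-compute-hyperv' => []
  }
  File.write(path, JSON.pretty_generate(proposal))
end

# Example: set_nova_elements('nova.json', 'suse-testbed.de')
```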
And check if it worked:
admin:~ # grep -A17 elements nova.json
    "elements": {
      "nova-multi-controller": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ],
      "nova-multi-compute-xen": [
      ],
      "nova-multi-compute-kvm": [
      ],
      "nova-multi-compute-hyperv": [
      ],
      "nova-multi-compute-qemu": [
        "dc0-ff-ee-00-02-01.suse-testbed.de",
        "dc0-ff-ee-00-03-01.suse-testbed.de"
      ]
    },
Next we have to set the libvirt_type:
admin:~ # sed -i 's/"libvirt_type": "kvm"/"libvirt_type": "qemu"/' nova.json
And finally we have to change another part of the scripting. QEMU is not supported in SUSECloud 3, so there is an issue to tackle: using Ceph/RBD with QEMU needs a code change to the recipes.
grep -ir -e "libvirt_type" /opt/dell/chef/cookbooks/* |grep rbd
/opt/dell/chef/cookbooks/nova/recipes/config.rb:    if cinder_server[:cinder][:volume][:volume_type] == "rbd" and node[:nova][:libvirt_type] == "kvm"
We have to change this line to:
if cinder_server[:cinder][:volume][:volume_type] == "rbd" and ["kvm","qemu"].include?(node[:nova][:libvirt_type])
change it with:
admin:~ # sed -i 's/and node\[:nova\]\[:libvirt_type\] == "kvm"/and ["kvm","qemu"].include?(node[:nova][:libvirt_type])/' /opt/dell/chef/cookbooks/nova/recipes/config.rb
and upload it to the chef server
admin:~ # knife cookbook upload nova -o /opt/dell/chef/cookbooks/
Uploading nova [0.3.0]
Uploaded 1 cookbook.
We can then save our settings back to the server:
admin:~ # crowbar nova proposal edit default --file nova.json
Edited default
and commit the change:
admin:~ # crowbar nova proposal commit default
Committed default
There is another command which helps us watch the progress:

watch "/opt/dell/bin/crowbar_node_state status --no-ready ; echo ' ' ; /opt/dell/bin/crowbar_node_status"

Every 2.0s: /opt/dell/bin/crowbar_node_state status --no-ready ; echo ' ' ; /opt/dell/bin/crowbar_node_status    Fri Apr 12 14:47:12 2014

dc0-ff-ee-00-01-01   Applying
dc0-ff-ee-00-02-01   Applying
dc0-ff-ee-00-03-01   Applying

Host   OK   WARN   CRIT   UNKNOWN   PENDING
Create a new proposal for the nova_dashboard barclamp.
ATTENTION: the proposal name MUST be ‘default’ in SUSECloud!
admin:~ # crowbar nova_dashboard proposal create default
Created default
admin:~ # crowbar nova_dashboard proposal show default > nova_dashboard.json
Within the JSON we can alter attribute values. At the moment we are interested in getting the service deployed on our control node; since we set the intended_role, the setting should already match our wishes.
admin:~ # grep -A4 elements nova_dashboard.json
    "elements": {
      "nova_dashboard-server": [
        "dc0-ff-ee-00-01-01.suse-testbed.de"
      ]
    },
We can then save our settings back to the server:
admin:~ # crowbar nova_dashboard proposal edit default --file nova_dashboard.json
Edited default
and commit the change:
admin:~ # crowbar nova_dashboard proposal commit default
Committed default
We leave out Ceilometer and Heat in this deployment. So if everything went well, we should now see a lot of green on the OpenStack barclamp list.
To find the OpenStack login we can have a look at the control node: its details page has two links built in, one for the admin-net dashboard and one for the public-net dashboard.

To access the dashboard, open the browser at http://192.168.124.81 and log in:

Username: crowbar
Password: crowbar
Interested in Cloud or Chef Trainings? Have a look at our Commandemy Trainings page. Need help migrating to the cloud? Check out Infralovers.