At the beginning of May 2014 I started contributing to a great new open-source project sponsored by Deutsche Telekom: the Hardening Framework. One of the challenges in this project is an old friend: keeping things DRY. But let us start from the beginning.

Chef And Puppet It Is

Large organizations very often allow, or must tolerate, that their teams choose different tools. So there is a need to support more than one toolchain to accomplish a certain goal. In this case the project goal is to provide reusable infrastructure code to harden several aspects of your deployment. At the moment teams should be free to choose either Chef or Puppet; maybe more in the future, so start contributing :-)

A Super Simple Example: NTP

The best way to explain this is by example. Let's say that, for connectivity reasons, we have to use specific NTP servers in our datacenter deployment. So we have to deploy ntp and configure it to use the list of NTP servers we specified. In short, we need to:

  • install ntp
  • configure ntp to use specific servers
  • start or restart ntp to use the new configuration

The focus of this example is NOT to show off the pros, cons, or best practices in either the Puppet or the Chef world. It is the acceptance/integration part: how can we be sure that we are done? Well, let's get started by going through the checklist we got.

Definition Of Done

So, let's assume we have implemented the requirements. We logged on to the server, checked whether the service is up, and double-checked that it is using our list of time servers.

Done! Right?

Well. Not so much. This is a very manual approach. And consider that we want to support different technology stacks on different platforms to converge our nodes, e.g. Puppet and Chef deployments on several operating systems.

We can do better! Let's see. We want to verify that:

  • ntp is running
  • the config of ntp points to the specified time servers.

And we want a report if the verification fails or succeeds.

Integration Testing

There are a million things we can do to check if a process is running or if a file contains some values. Basically we want to automate the manual approach we used to verify that the specification is met, and get a nice report indicating success or failure.
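
Just to make the contrast concrete: a hand-rolled version of such checks, run directly on the target host, might look like the sketch below. The commands, file locations, and the pass/fail reporting are my assumptions, and maintaining this kind of ad-hoc script per platform is exactly what we want to avoid.

#!/usr/bin/env ruby
# Naive hand-rolled verification (sketch only): assumes it runs locally on the
# target host and that pgrep and /etc/ntp.conf are available there.

checks = {
  'ntp daemon is running'       => -> { system('pgrep ntpd > /dev/null') || system('pgrep ntp > /dev/null') },
  'config lists 0.pool.ntp.org' => -> { File.exist?('/etc/ntp.conf') && File.read('/etc/ntp.conf').include?('0.pool.ntp.org') },
  'config lists 1.pool.ntp.org' => -> { File.exist?('/etc/ntp.conf') && File.read('/etc/ntp.conf').include?('1.pool.ntp.org') },
}

failed = checks.reject { |_name, check| check.call }
failed.each_key { |name| puts "FAILED: #{name}" }
puts failed.empty? ? 'all checks passed' : "#{failed.size} check(s) failed"
exit(failed.empty? ? 0 : 1)

It does the job, but it is tied to one platform, has no remote execution, and the report is poor.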

There are a lot of libraries out there that are more or less tailored to this domain. One very popular approach is RSpec:

RSpec is a testing tool for the Ruby programming language. Born under the banner of Behaviour-Driven Development, it is designed to make Test-Driven Development a productive and enjoyable experience.

RSpec is a domain-specific language for testing. And there is an even better-matching candidate: Serverspec:

With serverspec, you can write RSpec tests for checking your servers are configured correctly.

Use Serverspec To Test Our Requirements

Serverspec supports a lot of resource types out of the box; have a look at Resource Types. To verify our specification we can write:

describe package('ntp') do
  it { should be_installed }
end

describe service('ntp') do
  it { should be_running }
end

describe file('/etc/ntp.conf') do
  it { should contain '0.pool.ntp.org' }
  it { should contain '1.pool.ntp.org' }
end
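
If we wanted to tighten the specification further, other Serverspec resource types would fit in as well. A hedged example, assuming ntpd has its UDP socket on port 123 open and that the be_listening matcher covers UDP sockets on the serverspec version in use:

# optional extra check (sketch): ntpd should be listening on its well-known port
describe port(123) do
  it { should be_listening }
end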

And the best part: this is agnostic to how we provisioned our server! Manually, Chef, Puppet, Saltstack, Ansible, … you name it.

To make it really work we obviously need some boilerplate code. This goes into the spec_helper.rb file:

require 'serverspec'
require 'pathname'
require 'net/ssh'
require 'etc'
require 'highline/import'

include Serverspec::Helper::Ssh
include Serverspec::Helper::DetectOS

RSpec.configure do |c|

  # sudo password: prompt interactively or take it from the environment
  if ENV['ASK_SUDO_PASSWORD']
    c.sudo_password = ask('Enter sudo password: ') { |q| q.echo = false }
  else
    c.sudo_password = ENV['SUDO_PASSWORD']
  end

  options = {}

  # SSH login password: prompt interactively or take it from the environment
  if ENV['ASK_LOGIN_PASSWORD']
    options[:password] = ask("\nEnter login password: ") { |q| q.echo = false }
  else
    options[:password] = ENV['LOGIN_PASSWORD']
  end

  # SSH login user: prompt, use the environment, or fall back to the local user
  if ENV['ASK_LOGIN_USERNAME']
    user = ask("\nEnter login username: ") { |q| q.echo = false }
  else
    user = ENV['LOGIN_USERNAME'] || ENV['user'] || Etc.getlogin
  end

  if user.nil?
    puts 'specify login user env LOGIN_USERNAME= or user='
    exit 1
  end

  # connect to the target host and let serverspec detect its OS
  c.host  = ENV['TARGET_HOST']
  options.merge!(Net::SSH::Config.for(c.host))
  c.ssh   = Net::SSH.start(c.host, user, options)
  c.os    = backend.check_os

end

This allows for running the tests against any server where we have SSH access.

To be able to support multiple test suites, let's organize them in directories and use a Rakefile to choose which suite to run:

require 'rake'
require 'rspec/core/rake_task'

suites = Dir.glob('*').select{|entry| File.directory?(entry) }

class ServerspecTask < RSpec::Core::RakeTask

  attr_accessor :target

  def spec_command

    if target.nil? 
      puts "specify either env TARGET_HOST or target_host="
      exit 1
    end

    cmd = super
    "env TARGET_HOST=#{target} STANDALONE_SPEC=true #{cmd}  --format documentation --no-profile"
  end

end

namespace :serverspec do
  suites.each do |suite|
    desc "Run serverspec suite #{suite}"
    ServerspecTask.new(suite.to_sym) do |t|
      t.target = ENV['TARGET_HOST'] || ENV['target_host']
      t.ruby_opts = "-I #{suite}/serverspec"
      t.pattern = "#{suite}/serverspec/*_spec.rb"
    end
  end
end

What have we got now? Spoiler alert: this is exactly the layout test-kitchen expects for integration test suites!

± /usr/local/bin/tree .
.
├── Gemfile
├── Gemfile.lock
├── Rakefile
├── default
│   └── serverspec
│       ├── ntp_spec.rb
│       └── spec_helper.rb
└── failing
    └── serverspec
        ├── ntp_spec.rb
        └── spec_helper.rb
± rake -T
rake serverspec:default  # Run serverspec suite default
rake serverspec:failing  # Run serverspec suite failing
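
The tree above also shows a Gemfile. Its contents are not part of this post, but judging from what spec_helper.rb and the Rakefile require, a minimal sketch could look like this (the gem selection is an assumption and versions are left unpinned):

# Gemfile (sketch): the gems the helper and the Rakefile rely on
source 'https://rubygems.org'

gem 'serverspec'   # RSpec-based server testing DSL, pulls in rspec
gem 'rake'         # drives the per-suite serverspec tasks
gem 'net-ssh'      # SSH backend used in spec_helper.rb
gem 'highline'     # provides ask() for the interactive password prompts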

To test it, we run it against a server. This is the output from a server which does not implement our requirements:

± ASK_LOGIN_PASSWORD=true rake serverspec:default target_host=192.168.1.222 user=stack
env TARGET_HOST=192.168.1.222 STANDALONE_SPEC=true /Users/ehaselwanter/.rvm/rubies/ruby-2.1.1/bin/ruby -I default/serverspec -S rspec default/serverspec/ntp_spec.rb  --format documentation --no-profile
Enter sudo password:

Enter login password:

File "/etc/ntp.conf"
  should contain "0.pool.ntp.org" (FAILED - 1)
  should contain "1.pool.ntp.org" (FAILED - 2)

Service "ntp"
  should be running (FAILED - 3)

Package "ntp"
  should be installed (FAILED - 4)

Failures:

  1) File "/etc/ntp.conf" should contain "0.pool.ntp.org"
     Failure/Error: it { should contain '0.pool.ntp.org' }
       sudo grep -q -- 0.pool.ntp.org /etc/ntp.conf || sudo grep -qF -- 0.pool.ntp.org /etc/ntp.conf
       grep: /etc/ntp.conf: No such file or directory
grep: /etc/ntp.conf: No such file or directory

       expected File "/etc/ntp.conf" to contain "0.pool.ntp.org"
     # ./default/serverspec/ntp_spec.rb:12:in `block (2 levels) in <top (required)>'

  2) File "/etc/ntp.conf" should contain "1.pool.ntp.org"
     Failure/Error: it { should contain '1.pool.ntp.org' }
       sudo grep -q -- 1.pool.ntp.org /etc/ntp.conf || sudo grep -qF -- 1.pool.ntp.org /etc/ntp.conf
       grep: /etc/ntp.conf: No such file or directory
grep: /etc/ntp.conf: No such file or directory

       expected File "/etc/ntp.conf" to contain "1.pool.ntp.org"
     # ./default/serverspec/ntp_spec.rb:13:in `block (2 levels) in <top (required)>'

  3) Service "ntp" should be running
     Failure/Error: it { should be_running }
       sudo ps aux | grep -w -- ntp | grep -qv grep
       expected Service "ntp" to be running
     # ./default/serverspec/ntp_spec.rb:8:in `block (2 levels) in <top (required)>'

  4) Package "ntp" should be installed
     Failure/Error: it { should be_installed }
       sudo dpkg-query -f '${Status}' -W ntp | grep -E '^(install|hold) ok installed$'
       No packages found matching ntp.

       expected Package "ntp" to be installed
     # ./default/serverspec/ntp_spec.rb:4:in `block (2 levels) in <top (required)>'

Finished in 0.16283 seconds
4 examples, 4 failures

Failed examples:

rspec ./default/serverspec/ntp_spec.rb:12 # File "/etc/ntp.conf" should contain "0.pool.ntp.org"
rspec ./default/serverspec/ntp_spec.rb:13 # File "/etc/ntp.conf" should contain "1.pool.ntp.org"
rspec ./default/serverspec/ntp_spec.rb:8 # Service "ntp" should be running
rspec ./default/serverspec/ntp_spec.rb:4 # Package "ntp" should be installed

Randomized with seed 57616

env TARGET_HOST=192.168.1.222 STANDALONE_SPEC=true /Users/ehaselwanter/.rvm/rubies/ruby-2.1.1/bin/ruby -I default/serverspec -S rspec default/serverspec/ntp_spec.rb  --format documentation --no-profile failed

Great, right? A report about what failed and what worked as well as how it was tested.

And now the output from a server which got the expected config applied:

± rake serverspec:default target_host=192.168.1.50
env TARGET_HOST=192.168.1.50 STANDALONE_SPEC=true /Users/ehaselwanter/.rvm/rubies/ruby-2.1.1/bin/ruby -I default/serverspec -S rspec default/serverspec/ntp_spec.rb  --format documentation --no-profile
Enter sudo password:

Package "ntp"
  should be installed

Service "ntp"
  should be running

File "/etc/ntp.conf"
  should contain "0.pool.ntp.org"
  should contain "1.pool.ntp.org"

Finished in 0.22648 seconds
4 examples, 0 failures

Randomized with seed 55249

Again, great feedback and a nice report.

You can find this code in the tests-kitchen-example repo.

Converge The Nodes

Now it's time to provide some infrastructure-as-code so we can converge any node to our specification. Again, this is no contest of Puppet/Chef best practices; the focus is on the integration testing part.

The Puppet Implementation

Lucky for us, there is already a Puppet module for this: puppetlabs/ntp. So we can implement it by using this module with the NTP servers we want:

# example_ntp.pp

class { '::ntp':
  servers => [ '0.pool.ntp.org', '1.pool.ntp.org' ],
}

The Chef Implementation

With Chef, again the community has us covered. We can use the ntp cookbook. Implementation is as simple as creating a role that puts the recipe on the run_list and sets the ntp servers attribute:

{
    "name": "ntp",
    "default_attributes": {
      "ntp": {
        "servers": [
          "0.pool.ntp.org",
          "1.pool.ntp.org"
        ]
      }
    },
    "override_attributes": { },
    "json_class": "Chef::Role",
    "description": "NTP Role",
    "chef_type": "role",
    "run_list": [
        "recipe[ntp]"
    ]
}

Don’t Repeat Yourself In Integration Testing

Now we are able to converge our node with Chef or Puppet, but we still have to run every step manually. It’s time to bring everything together. Have Puppet as well as Chef converge our node and verify it automatically.

You know the answer to that: KitchenCI/Test-Kitchen. But we need a trick: Test-Kitchen must be made aware that we already have our tests somewhere else, and that we want to use them in our Puppet as well as our Chef integration test scenarios. This is not possible at the moment; see Test Artifact Fetch Feature.

To work around this issue I implemented the kitchen-sharedtests gem with some thor tasks that hook right into test-kitchen.

± thor -T
kitchen
-------
thor kitchen:all-sharedtests                                # Run all test instances
thor kitchen:diagnose-sharedtests-default-nocm-ubuntu-1204  # Diagnose default-nocm-ubuntu-1204 test instance
thor kitchen:diagnose-sharedtests-default-nocm-ubuntu-1310  # Diagnose default-nocm-ubuntu-1310 test instance
thor kitchen:fetch-remote-tests                             # Fetch remote tests from provider.test_repo_uri
thor kitchen:run-sharedtests-default-nocm-ubuntu-1204       # Run run-sharedtests-default-nocm-ubuntu-1204 test instance
thor kitchen:run-sharedtests-default-nocm-ubuntu-1310       # Run run-sharedtests-default-nocm-ubuntu-1310 test instance
thor kitchen:verify-sharedtests-default-nocm-ubuntu-1204    # Run default-nocm-ubuntu-1204 to verify instance

You can run kitchen commands with thor tasks and it will fetch the integration test repo specified in the .kitchen.yml file.
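
For completeness: to get these thor tasks into a project, the gem goes into the project's Gemfile next to test-kitchen. The following is only a sketch; whether kitchen-sharedtests is consumed from rubygems.org or straight from its git repository is an assumption, so adjust the source to your setup:

# Gemfile additions (sketch): test-kitchen plus the shared-tests thor tasks
gem 'test-kitchen'
gem 'kitchen-sharedtests'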

If you want to keep your workflow, run …

thor kitchen:fetch-remote-tests                             # Fetch remote tests from provider.test_repo_uri

… once to fetch the repo and place it in test/integration. This is obviously just a lousy hack, but it lowers the entry barrier for moving the integration tests out of the infrastructure-as-code repo.

Create A Puppet-Kitchen

We will refine the example kitchen from Using Test Kitchen With Puppet. The code for the shared-integration-test-repo approach lives in the distinct-test-repo branch.

The notable difference is: we remove the test/integration directory from the repo but hint in the .kitchen.yml where to get it from.

± grep repo .kitchen.yml
  test_repo_uri: "https://github.com/ehaselwanter/tests-kitchen-example.git"

This is just for convenience and allows for fetching the repo to the test/integration path:

± thor -T |grep fetch
thor kitchen:fetch-remote-tests                             # Fetch remote tests from provider.test_repo_uri

± thor kitchen:fetch-remote-tests
       -----> create or update https://github.com/ehaselwanter/tests-kitchen-example.git
       cloning https://github.com/ehaselwanter/tests-kitchen-example.git /Users/ehaselwanter/repositories/t-labs-hardening/puppet-kitchen-example/test/integration

Another change is an update to how kitchen-puppet finds its files. This is now more aligned with how kitchen does it when converging a node with Chef: we place the manifest to run in the test suite. The manifests/site.pp has to move because the configuration is tied to the specification, so we move it into a puppet folder. Once again this is not supported by test-kitchen at the time of writing this blog post, but it is made possible through a little hack described in a pull request (add puppet to the list of ignored names) as well as a fake gem (fake busser-puppet) to make it work without the pull request being merged.

The relevant part of the directory tree:

± /usr/local/bin/tree test
test
└── integration
    └── default
        ├── puppet
        │   └── manifests
        │       └── site.pp
        └── serverspec
            ├── ntp_spec.rb
            └── spec_helper.rb

Now we are ready to run.

± kitchen list
Instance                  Driver   Provisioner  Last Action
default-nocm-ubuntu-1204  Vagrant  PuppetApply  <Not Created>
default-nocm-ubuntu-1310  Vagrant  PuppetApply  <Not Created>

We can add more platforms by editing the .kitchen.yml file. To run all the suites on all the platforms, use:

kitchen test

Create A Chef-Kitchen

Once again, fetch the tests:

± thor kitchen:fetch-remote-tests
       -----> create or update https://github.com/ehaselwanter/tests-kitchen-example.git
       cloning https://github.com/ehaselwanter/tests-kitchen-example.git /Users/ehaselwanter/repositories/t-labs-hardening/chef-kitchen-example/test/integration

Next we need a Berksfile specifying our dependencies:

site :opscode

metadata

cookbook "ntp"

and a role in the test suite to run our ntp recipe with the servers we want to have (relevant part of the directory tree):

test
└── integration
    └── default
        ├── roles
        │   └── ntp.json
        └── serverspec
            ├── ntp_spec.rb
            └── spec_helper.rb

This, again, gives us:

± kitchen list
Instance                  Driver   Provisioner  Last Action
default-nocm-ubuntu-1204  Vagrant  ChefSolo     <Not Created>
default-nocm-ubuntu-1310  Vagrant  ChefSolo     <Not Created>

Again, we can add more platforms by editing the .kitchen.yml file. To run all the suites on all the platforms, use:

kitchen test

Conclusion

We dramatically increased the value of the integration tests in test-kitchen. As shown, we are now able to verify our specification against any node we have SSH access to. And we can use the same integration test suites in our Chef or Puppet development cycle.

It still feels not very natural to have to fetch the external repo first, but there are some pending feature requests against test-kitchen to get that sorted. The main problem here is the motivation for building first-class support for external integration tests, as very few projects need to support more than one configuration management tool.

As shown in this article, there is true value in doing so, because you can immediately benefit from the standalone usage scenario.

What do you think? Leave some comments and let us start a discussion :-)

Interested in Cloud or Chef Trainings? Have a look at our Commandemy Trainings page. Need help migrating to the cloud? Check out Infralovers.
