About Chef and OpenStack

Chef, the open source configuration management and automation framework used to deploy and manage many large public and private installations, supports a wide variety of OpenStack deployment scenarios. This section introduces Chef and OpenStack and points to supplemental documentation.

About Chef

Chef is a systems integration framework built to bring the benefits of configuration management to your entire infrastructure. This framework makes it easy to deploy servers and applications to any physical, virtual, or cloud location, no matter the size of the infrastructure.

Each organization is composed of one or more workstations, a single server, and every node that will be configured and maintained by the Chef client. Install the Chef client on every node and it will perform all the necessary configuration tasks. The Chef client relies on cookbooks and recipes to tell it how to configure each node in your organization. You can even manage multiple environments (groups of nodes and settings) with the same Chef server. Visit https://learnchef.opscode.com for more information.

About OpenStack

OpenStack is a free, open-source project that provides an infrastructure as a service (IaaS) for cloud computing. Backed by a vibrant community of both individuals and companies, its technology consists of a series of interrelated projects that manage pools of coordination, processing, storage, and networking throughout a data center.

OpenStack’s ability to empower deployers and administrators, manage resources through its web interface, and provide easy-to-use command-line tools has helped it gain a lot of traction in only a few short years.

Visit http://docs.openstack.org for more information.

Requirements

Review the requirements below in their entirety before beginning installation.

Node Requirements

Consistent with OpenStack’s reference architecture, we recommend a minimum of the following three nodes to operate an officially supported architecture. These are optimal for users seeking a private cloud offering on the SoftLayer platform.

  • Cloud Controller Node
  • Network Node
  • Compute Node
OpenStack can scale to hundreds or thousands of nodes. Running OpenStack on systems that do not meet the minimum requirements may negatively affect system or overall cluster performance.
Hardware                | Cloud Controller Node                                                | Network Node                | Compute Node
System                  | Dual Processor                                                       | Single Processor            | Dual Processor
RAM                     | 16 GB or greater                                                     | 8 GB or greater             | 16 GB or greater
Processor               | Quad-Core Xeon or greater                                            | Quad-Core Xeon or greater   | Quad-Core Xeon or greater
Disk (OS Drive)         | Two physical disks in RAID1                                          | Two physical disks in RAID1 | Two physical disks in RAID1
Disk (MySQL Database)   | Two physical disks in RAID0 or RAID1, or 4+ physical disks in RAID10 | N/A                         | N/A
Disk (Cinder Volumes)   | Two physical disks (1) in RAID1, or 4+ physical disks (1) in RAID10  | N/A                         | N/A
Disk (Instance Storage) | N/A                                                                  | N/A                         | Two physical disks in RAID1, or 4+ physical disks in RAID10
Disk Space              | 144 GB or greater                                                    | 144 GB or greater           | 144 GB or greater
Network                 | 1 Gbps Private and Public                                            | 1 Gbps Private and Public   | 1 Gbps Private and Public

(1) 10K RPM SAS/SCSI drives, or alternatively SSDs, may provide much better performance for Cinder when allocating and destroying volumes.

Software Requirements

Check the requirements below to ensure your environment is compatible.

Operating System

OpenStack supports Ubuntu Server Minimal 12.04 LTS.

The SoftLayer Chef recipes make use of the Ubuntu Cloud repository, which is where current OpenStack packages are maintained. The recipes automatically handle the configuration needed to use the Ubuntu Cloud repository.
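
The configuration the recipes perform is roughly equivalent to enabling the Ubuntu Cloud Archive by hand. A minimal sketch, assuming the Grizzly release on Ubuntu 12.04 (precise):

$ sudo apt-get install -y ubuntu-cloud-keyring
$ echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" | \
    sudo tee /etc/apt/sources.list.d/cloud-archive.list
$ sudo apt-get update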

Chef

Chef 11.4 or greater is required.

Cookbooks

Have the following cookbooks handy before beginning the install process: mysql, partial_search, ntp, build-essential, and openssl. You can download them from the Opscode Cookbooks GitHub account (https://github.com/opscode-cookbooks).

Networking Requirements

Within the SoftLayer environment and the Chef cookbooks provided, the reference architecture consists of three separate networks (two physical interfaces and one virtual interface). Refer to the following diagram to determine how the various components are connected and the networks they're connected to.

For OpenStack private cloud deployments, we recommend following best practices to secure your public network, exposing only the systems and services that you wish to make available over the Internet.

Each network is described below, along with some important notes on addressing.

Private
  Description: Existing SoftLayer back-end network where all your hardware exists, including VLANs attached to your account. Most SoftLayer servers connect to this network on the bond0 interface, but other servers may be connected on eth0. This provides access to other non-OpenStack servers and services on the SoftLayer private network.
  Addressing: You must order an IPv4 pool of Portable Private IPs to dole out across instances. See important info about IP addressing.

Public
  Description: Existing SoftLayer front-end network from which all incoming Internet traffic is received and all outbound Internet traffic is sent. Most SoftLayer servers connect to this network on the bond1 interface, but other servers may be connected on eth1.
  Addressing: You must order an IPv4 pool of Portable Public IPs to dole out across instances. See important info about IP addressing.

Data
  Description: Networks created within OpenStack Quantum are part of the data network. OpenStack instances communicate with each other over the data network, including when they issue DHCP requests during boots and reboots. This network is an IP-over-GRE network between Network and Compute Nodes. The virtual network appliance used is Open vSwitch combined with the `quantum-plugin-openvswitch-agent` in Quantum.
  Addressing: Default environment attributes in our cookbooks allow you to create overlapping IPv4 subnet ranges between different tenants and projects. See important info about IP addressing.

IP Addressing

A minimum, but not recommended, range of /30 (or four total IP addresses with one usable address) is required in order to use a SoftLayer Portable IP block within your OpenStack private cloud, whether on the public network or the private network. Whenever possible, we recommend using blocks larger than /30.

As with all networks, the first IP in a subnet is reserved for addressing the subnet itself, the second IP is used by the upstream SoftLayer router (as is common in many networks), and the final address is reserved for broadcast traffic. This leaves three reserved addresses in every portable subnet.

With these constraints in mind, a /29 portable subnet provides eight total addresses with five usable, a /28 provides 16 total with 13 usable, a /27 provides 32 total with 29 usable, and so forth.
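
As a quick sanity check, the usable count for any prefix length can be computed in bash, assuming the three reserved addresses described above:

$ echo $(( 2**(32-29) - 3 ))   # /29 -> 5 usable
$ echo $(( 2**(32-28) - 3 ))   # /28 -> 13 usable
$ echo $(( 2**(32-27) - 3 ))   # /27 -> 29 usable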

Remember that the DHCP server used in Quantum will need an IP address on each subnet where DHCP is enabled. Keep this in mind when planning how large of a SoftLayer Portable IP block to order.

OpenStack uses NAT (Network Address Translation) on externally connected Quantum virtual routers. Any network that attaches to such a router has external network access through NAT. The use of floating IPs can reduce the size of the portable block you have to purchase, since compute instances that do not run inbound public services can still reach the outside network through NAT without a dedicated public address.

IP Constraints on Data Networks

When creating new Data Networks to connect your instances, we recommend limiting Data Network subnets to the following IP ranges:

  • 172.16.0.0/12 (172.16.0.0 - 172.31.255.255)
  • 192.168.0.0/16 (192.168.0.0 - 192.168.255.255)
Any instance that is simultaneously connected to the SoftLayer Private Network and a Data Network subnet within 10.0.0.0/8 is very likely to conflict and cause unforeseen problems, since the SoftLayer Private Network also uses subnets within 10.0.0.0/8.

If you absolutely need to create and use Data Network subnets within 10.0.0.0/8, ensure that the instances assigned to them do not connect to the SoftLayer Private Network.
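
For illustration, a Data Network created inside the recommended ranges might look like the following sketch (run on the controller once the cloud is up; the network and subnet names are hypothetical):

# quantum net-create tenant-data
# quantum subnet-create tenant-data 192.168.10.0/24 --name tenant-data-subnet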

Installation

The instructions below will guide you through the process of installing Chef, followed by the steps for bootstrapping and configuring your nodes to achieve a functional private cloud installation.

We strongly recommend that you review OpenStack Requirements before starting. This is to ensure that your environment is ready for install before getting in too deep.

Install Chef

You must install Chef Server first, and the Chef Server should be accessible by what will become your OpenStack nodes on ports 443 and 80.

The Chef server acts as a hub for configuration data. It stores cookbooks, policies applied to each node, and metadata that describes each registered Chef node being managed by the chef-client command. Nodes use the chef-client command to ask the server for configuration details, such as recipes, templates, and file distributions. The chef-client command then performs much of the configuration work on the nodes themselves without much interaction with the server. This scalable approach provides consistent configuration management and quick deployments.

To install Chef Server, perform the following:

  1. Go to http://www.opscode.com/chef/install.
  2. Click the Chef Server tab.
  3. Select the operating system, version, and architecture that match the server from which you will run Chef Server.
  4. Select the version of Chef Server to download, and then click the link that appears to download the package.
  5. Install the downloaded package using the correct method for the operating system on which Chef Server will be installed. For instance, on Ubuntu and Debian, using sudo dpkg -i package.deb will perform the installation.
  6. Configure Chef Server by running the command below. This command will set up all required components, including Erchef, RabbitMQ, PostgreSQL, and the cookbooks that are needed by chef-solo to maintain Chef Server.
    $ sudo chef-server-ctl reconfigure

  7. Verify the hostname for the server by running the hostname command. The hostname for the server must be a fully qualified domain name (FQDN). We recommend as well that the proper A records for each of your nodes' FQDNs exist in DNS for easier accessibility.
  8. When you're finished, verify the installation of Chef Server by running the following command:
    $ sudo chef-server-ctl test

This will run the chef-pedant test suite against the installed Chef Server and will report that everything is installed correctly and running smoothly.
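
Once the server passes its tests, you can point knife on your workstation at it and confirm connectivity. A minimal sketch (knife configure prompts for the server URL and key locations):

$ knife configure      # point knife at the new Chef Server
$ knife client list    # a fresh Chef 11 server should list chef-validator and chef-webui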

Install OpenStack

The instructions below provide a general overview of steps you will perform to install an OpenStack environment with private cloud. It demonstrates a typical OpenStack installation, and includes additional information about customizing or modifying your installation. Generally, your installation will follow these steps, with more details outlined in the other sections below.

  1. Configure a bootstrap script with the hardware servers and/or cloud compute instances (CCIs) that you wish to bootstrap with Chef; this can be done in a simple shell script. Ensure you substitute the proper FQDN, the remote user name that has password-less sudo access, the local path to that user's private SSH key, and the name of the Chef environment in which the node resides. In the example below, these appear as FQDN, USER, ~/.ssh/id_rsa, and ENVIRONMENT.
    knife bootstrap FQDN -x USER --sudo -i ~/.ssh/id_rsa -E ENVIRONMENT
  2. Add the appropriate role(s) to each node's run list.
    knife node run_list add FQDN 'role[grizzly-controller]'
  3. Run the bootstrap script you've just created to prepare each server before running chef-client.
  4. Modify the required attributes through Chef environment overrides.
  5. Run the chef-client program on each server to start installation and configuration (a sketch follows this list). Be sure to run the installs in this order:
    • MySQL roles
    • RabbitMQ roles
    • Keystone role
    • Controller role
    • All remaining roles
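
As a rough sketch of that ordering using knife ssh (the role names follow the bootstrap example later in this guide; if you applied the individual grizzly-mysql-* roles instead, adjust the searches accordingly):

knife ssh 'role:grizzly-mysql-all'  'sudo chef-client'
knife ssh 'role:grizzly-rabbitmq'   'sudo chef-client'
knife ssh 'role:grizzly-keystone'   'sudo chef-client'
knife ssh 'role:grizzly-controller' 'sudo chef-client'
knife ssh 'role:grizzly-network OR role:grizzly-compute' 'sudo chef-client'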

Prepare Chef

Before OpenStack can be installed to any servers, the private cloud repository needs to be downloaded locally and then uploaded to your Chef Server. To do this, prepare a default Chef directory structure with this command:

$ git clone git://github.com/opscode/chef-repo.git

Change directory into ~/chef-repo/cookbooks and then download the private cloud repository:

$ cd chef-repo/cookbooks
$ git clone https://github.com/softlayer/chef-openstack

The private cloud repository also depends on several Opscode cookbooks. Download them into the ~/chef-repo/cookbooks directory:

$ git clone https://github.com/opscode-cookbooks/mysql
$ git clone https://github.com/opscode-cookbooks/partial_search
$ git clone https://github.com/opscode-cookbooks/ntp
$ git clone https://github.com/opscode-cookbooks/build-essential
$ git clone https://github.com/opscode-cookbooks/openssl

The needed OpenStack roles are packaged within the private cloud repository. Copy the roles from the chef-openstack/ directory to the ~/chef-repo/roles directory.

$ cp -r ~/chef-repo/cookbooks/chef-openstack/roles ~/chef-repo/roles

Finally, upload the cookbooks and roles to your Chef server for deployment to remote nodes:

$ knife cookbook upload --all
$ knife role from file ~/chef-repo/roles/*

If you get any errors during the upload, check that cookbook_path and role_path are set correctly in ~/.chef/knife.rb. You can optionally re-run knife configure.
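
If you need to set them by hand, a minimal sketch of adding the paths to knife.rb, assuming the repository layout used above:

$ cat >> ~/.chef/knife.rb <<'EOF'
cookbook_path [ File.expand_path('~/chef-repo/cookbooks') ]
role_path       File.expand_path('~/chef-repo/roles')
EOF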

Bootstrap Your Nodes

Bootstrapping is the Chef term for remotely deploying the chef-client to a server. It creates the node and client objects on the Chef Server and adds client keys to both the server and the client, allowing chef-client to communicate with the Chef Server.

Two choices are available for bootstrapping nodes: use the example scripts SoftLayer provides, editing them for your needs, or install chef-client on each node yourself by following a chef-client install guide. We recommend using the bootstrap scripts.

Edit the script for each hardware node you would like to include in the OpenStack installation. It is highly recommended that at least three nodes be used: one for a controller node, one for a network node, and one for a compute node. After the bootstrap process completes, the script will assign an OpenStack role to each hardware node. A node can have more than one role. A three-node bootstrap example script is shown below.

#!/bin/bash

## Bootstrap three nodes with chef-client, registering them with Chef Server
knife bootstrap control1.example.com -x USER --sudo -i ~/.ssh/id_rsa -E ENVIRONMENT
knife bootstrap compute2.example.com -x USER --sudo -i ~/.ssh/id_rsa -E ENVIRONMENT
knife bootstrap network3.example.com -x USER --sudo -i ~/.ssh/id_rsa -E ENVIRONMENT

## Now, add specific roles to each node's run list that will run once chef-client is run
## Controller node:
knife node run_list add control1.example.com 'role[grizzly-mysql-all]'
knife node run_list add control1.example.com 'role[grizzly-rabbitmq]'
knife node run_list add control1.example.com 'role[grizzly-cinder]'
knife node run_list add control1.example.com 'role[grizzly-keystone]'
knife node run_list add control1.example.com 'role[grizzly-glance]'
knife node run_list add control1.example.com 'role[grizzly-controller]'

## Compute node:
knife node run_list add compute2.example.com 'role[grizzly-compute]'

## Network node:
knife node run_list add network3.example.com 'role[grizzly-network]'

Configure Chef

You will need to override some attributes for your OpenStack Chef deployment. These can be overridden at the environment level or at the node level, but the environment level is strongly recommended.

First, create an environment. It will be used to house your nodes and configuration attributes. The attribute overrides tailor the OpenStack configuration for your deployment without the need to edit the recipes directly.

$ knife environment create NAME -d "Description for environment"

Edit your environment with the following command (you may also edit this from the Chef web-based UI). An editor will open where you may define your environment’s attributes in JSON format.

$ knife environment edit ENVIRONMENT

Take special care to ensure your final environment document is valid JSON, as knife may discard your attempted change if the JSON does not properly validate once you save and exit the editor.

The following example shows the recommended minimum set of attributes to override in the environment for a deployment:

"override_attributes": {
    "admin": {
      "password": "admin_pass"
    },
    "network": {
      "public_interface": "eth1",
      "private_interface": "eth0"
    },
    "quantum": {
      "db": {
        "password": "my_new_quantum_pass"
      },
      "softlayer_public_portable": "XX.XX.XX.XX/YY",
      "softlayer_private_portable": "AA.AA.AA.AA/BB"
    },
    "nova": {
      "db": {
        "password": "my_new_nova_pass"
      }
    },
    "glance": {
      "db": {
        "password": "my_new_glance_pass"
      }
    },
    "keystone": {
      "db": {
        "password": "my_new_keystone_pass"
      }
    },
    "cinder": {
      "db": {
        "password": "my_new_cinder_pass"
      }
    }
  }
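
After saving, you can confirm that the overrides were stored as expected, for example:

$ knife environment show ENVIRONMENT -F json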

Chef Your Nodes

The process below outlines how to sequentially chef the nodes. The order in which services come online is important. All OpenStack components depend on MySQL and RabbitMQ, therefore those roles must be completed before attempting to deploy OpenStack-specific components.

MySQL and RabbitMQ Nodes

If you have chosen to make the MySQL node separate from the controller, you must first complete a deployment of the MySQL role prior to chefing another node with any OpenStack services. You may easily deploy MySQL roles for each OpenStack component by adding your additional nodes to the sample script above and specifying which MySQL role(s) to apply to each. This is discussed in the Scaling & Branching Deployments section.

Similarly, if you intend to deploy RabbitMQ on a separate server, you may follow the same process, but deploying the RabbitMQ role must be performed prior to chefing the controller with any OpenStack services. It is independent of the MySQL roles.
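
As a sketch, bootstrapping and chefing standalone database and message-queue nodes ahead of the rest of the cluster might look like this (the hostnames are examples):

## Hypothetical standalone MySQL and RabbitMQ nodes
knife bootstrap mysql1.example.com -x USER --sudo -i ~/.ssh/id_rsa -E ENVIRONMENT
knife bootstrap rabbit1.example.com -x USER --sudo -i ~/.ssh/id_rsa -E ENVIRONMENT
knife node run_list add mysql1.example.com 'role[grizzly-mysql-all]'
knife node run_list add rabbit1.example.com 'role[grizzly-rabbitmq]'

## Chef these before any OpenStack-specific roles
knife ssh 'name:mysql1.example.com' 'sudo chef-client'
knife ssh 'name:rabbit1.example.com' 'sudo chef-client'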

If MySQL and RabbitMQ will run from your controller node, skip to the next step.

Controller Node

The controller node contains, at a minimum, the roles for the base Quantum and Nova services. If you are unfamiliar with OpenStack, it is recommended to do a standard installation as illustrated in the bootstrap example. Be sure that Chef shows that the node contains the MySQL backend, RabbitMQ, Keystone, Cinder, and Glance roles. You can verify this with a simple knife command:

knife node show FQDN

The output should look similar to this:

Node Name:   control1.example.com
Environment: Region
FQDN:        control1.example.com
IP:          XX.XX.XX.XX
Run List:    role[grizzly-mysql-cinder], role[grizzly-mysql-glance], role[grizzly-mysql-keystone], role[grizzly-mysql-nova], role[grizzly-mysql-quantum], role[grizzly-rabbitmq], role[grizzly-keystone], role[grizzly-controller], role[grizzly-cinder], role[grizzly-glance]
Roles:       grizzly-mysql-cinder, grizzly-mysql-glance, grizzly-mysql-keystone, grizzly-mysql-nova, grizzly-mysql-quantum, grizzly-rabbitmq, grizzly-keystone, grizzly-controller, grizzly-cinder, grizzly-glance
Recipes:     chef-openstack::set_attributes, chef-openstack::set_cloudnetwork, ntp, chef-openstack::mysql-cinder, chef-openstack::mysql-glance, chef-openstack::mysql-keystone, chef-openstack::mysql-nova, chef-openstack::mysql-quantum, chef-openstack::ip_forwarding, chef-openstack::repositories, chef-openstack::rabbitmq-server, chef-openstack::keystone, chef-openstack::quantum-controller, chef-openstack::nova, chef-openstack::dashboard, chef-openstack::cinder, chef-openstack::glance, chef-openstack::quantum-network
Platform:    ubuntu 12.04
Tags:

To chef the controller node you can either connect directly to the remote server and (with root privileges) run chef-client from the node itself or use knife ssh to run it from the Chef server:

knife ssh SEARCH_TERM 'sudo chef-client'

For example:

knife ssh 'role:grizzly-controller' 'sudo chef-client'

…or

knife ssh 'name:FQDN' 'sudo chef-client'

Other Network and Compute Nodes

After the MySQL, RabbitMQ, and controller roles have been chefed, the remaining roles can be run in any order on the other nodes, and can even be run in parallel to speed up your total deployment time. Compute and Network nodes can also be added any time after the initial deployment. Following the example above, these two commands would chef your compute and network nodes:

## Compute node:
knife ssh 'name:FQDN_2' 'sudo chef-client'
## Network node:
knife ssh 'name:FQDN_3' 'sudo chef-client'

Using OpenStack

Here’s how to start getting familiar with the OpenStack client utilities.

Using Your Private Cloud

You can access the OpenStack command-line tools by logging in to the Controller node via SSH as root, and running the following commands:

# source .openrc
# nova flavor-list

The output should look similar to this:

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID |    Name   | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 10   | 20        |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 10   | 40        |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 10   | 80        |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 10   | 160       |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

This is a list of “flavors” — different disk, memory, and CPU allocations that you can assign to instances. This is an example of the information that you can access through the python-novaclient command line client.
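
If the stock flavors don't fit your workloads, new ones can be defined with the same client. A hedged example (the name, ID, RAM, disk, and vCPU values below are arbitrary):

# nova flavor-create m1.custom 6 4096 40 2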

To see the list of available images to boot from, run this command.

# nova image-list

The output should look similar to this:

+--------------------------------------+-----------------------------------+--------+--------+
| ID                                   | Name                              | Status | Server |
+--------------------------------------+-----------------------------------+--------+--------+
| 3470a8b5-46a7-442b-ac75-7c8f663e271d | CirrOS 0.3.0 i386                 | ACTIVE |        |
| ace57487-30b6-41c9-9e7a-9d44f103437d | CirrOS 0.3.0 x86_64               | ACTIVE |        |
| 3722fed6-0065-4521-b480-8abd5f7abf2c | Fedora 18 (Cloud) i386            | ACTIVE |        |
| 21c3f3ae-f773-46f9-8fef-d3c0a712ef45 | Fedora 18 (Cloud) x86_64          | ACTIVE |        |
| dbda560c-8e09-4035-9332-03ef4470a934 | Fedora 19 (Cloud) i386            | ACTIVE |        |
| c2ac12e3-5f11-4679-8074-232c5040b901 | Fedora 19 (Cloud) x86_64          | ACTIVE |        |
| 2baacb65-fa9d-4707-9856-5b6d5803d63e | Ubuntu 12.04 Server (Cloud) amd64 | ACTIVE |        |
| 49a33caa-8e78-41c3-8af6-cc5ea25182f2 | Ubuntu 12.04 Server (Cloud) i386  | ACTIVE |        |
| 5b8998ce-40fa-43cb-9f79-11f4e9a32296 | Ubuntu 12.10 Server (Cloud) amd64 | ACTIVE |        |
| e8f69a8b-c1f5-4432-89bb-6bfecfea7cc3 | Ubuntu 12.10 Server (Cloud) i386  | ACTIVE |        |
| 668e92c6-b6c0-4f9e-bf0d-078ede9667e9 | Ubuntu 13.04 Server (Cloud) amd64 | ACTIVE |        |
| 169fc708-cdbf-4dd3-a106-ced48a88922f | Ubuntu 13.04 Server (Cloud) i386  | ACTIVE |        |
+--------------------------------------+-----------------------------------+--------+--------+

To launch an instance, find the image ID and flavor name you would like to use.

# nova boot --flavor=2 --image=2baacb65-fa9d-4707-9856-5b6d5803d63e ubuntu

The output should look similar to this:

+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| status                              | BUILD                                |
| updated                             | 2013-10-02T21:18:01Z                 |
| OS-EXT-STS:task_state               | None                                 |
| OS-EXT-SRV-ATTR:host                | None                                 |
| key_name                            | None                                 |
| image                               | Ubuntu 12.04 Server (Cloud) amd64    |
| hostId                              |                                      |
| OS-EXT-STS:vm_state                 | building                             |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| flavor                              | m1.small                             |
| id                                  | e12449f2-0dfb-4aee-b7cf-8c97850c2b30 |
| security_groups                     | [{u'name': u'default'}]              |
| user_id                             | 36fcdb3ca5d349ffb82731b91c522080     |
| name                                | ubuntu                               |
| adminPass                           | RKTRUH52zfGL                         |
| tenant_id                           | a06ad73e633b4a479986f8de4b613e51     |
| created                             | 2013-10-02T21:18:01Z                 |
| OS-DCF:diskConfig                   | MANUAL                               |
| metadata                            | {}                                   |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| progress                            | 0                                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-AZ:availability_zone         | nova                                 |
| config_drive                        |                                      |
+-------------------------------------+--------------------------------------+
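
While the instance builds, you can watch its status from the same client; for example, using the ID returned above:

# nova list
# nova show e12449f2-0dfb-4aee-b7cf-8c97850c2b30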

You can also view the status of the controller and compute nodes and the Nova components active on each while logged in as the root user.

# nova-manage service list

The output should look similar to this:

Binary           Host     Zone Status  State Updated_At
nova-conductor   control1 nova enabled :-)   2013-09-20 10:41:39
nova-cert        control1 nova enabled :-)   2013-09-20 10:41:36
nova-scheduler   control1 nova enabled :-)   2013-09-20 10:41:34
nova-consoleauth control1 nova enabled :-)   2013-09-20 10:41:41
nova-compute     compute2 nova enabled :-)   2013-09-20 10:41:35

You can also view logs with the tail command. For example, to view nova-compute.log on your compute node(s), execute the following command:

# tail /var/log/nova/nova-compute.log

All logs are available in the /var/log/ directory and its subdirectories. You may also view the status of Quantum agents that reside on your network and compute nodes.

# quantum agent-list

The output should look similar to this:

+--------------------------------------+--------------------+----------+-------+----------------+
| id                                   | agent_type         | host     | alive | admin_state_up |
+--------------------------------------+--------------------+----------+-------+----------------+
| 42b5f9cb-7244-499d-826a-2a056d987c44 | Open vSwitch agent | compute2 | :-)   | True           |
| c9846df5-5e13-4f7c-971e-c65dd660a2cb | Open vSwitch agent | network3 | :-)   | True           |
| 8eb94efa-8f41-44c8-8dc0-1387959de7be | DHCP agent         | network3 | :-)   | True           |
| d6fdc505-094b-4bcd-9cca-32410aa5e6e3 | L3 agent           | network3 | :-)   | True           |
+--------------------------------------+--------------------+----------+-------+----------------+

Accessing the Horizon Dashboard

In addition to the command line, you can use your web browser to access the controller host. Use the hostname or the IP address that you provided during installation, followed by “/horizon”. For instance, if your controller is “control1.example.com”, navigate to http://control1.example.com/horizon/. You should see the OpenStack dashboard login page; if not, the installation may not be complete.

Log in with the admin user name and the password you created during the OpenStack Chef deployment.

After logging in, you can configure additional users, create and manage OS images and volumes, create or customize flavors, and launch instances. You can also view and create networks, routers, and subnets for use in your OpenStack environment.

OpenStack Client Utilities

The OpenStack client utilities are a convenient way to interact with OpenStack using the command line from your own workstation without logging in to the controller node. The client utilities for Python are available via PyPi and can be installed on most Linux systems with these commands:

pip install python-keystoneclient
pip install python-novaclient
pip install python-quantumclient
pip install python-cinderclient
pip install python-glanceclient

Individual utilities are maintained by different communities. Refer to their help documentation for more information, or use the --help flag for a given utility.
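
Once installed, the clients read credentials from standard OS_* environment variables, mirroring what the controller's .openrc file provides. A minimal sketch (substitute your own controller address and the passwords you set in your Chef environment):

export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://control1.example.com:5000/v2.0
nova flavor-list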

Our DevOps Tools

We offer two new tools to help interact with the SoftLayer environment:

  1. sl, a command line tool to view and manage SoftLayer resources (using our Python client library)
  2. swftp-chef, a Chef cookbook for swftp, our SFTP/SCP-based interface to Swift Object Storage

SoftLayer Command Line Tool

When working with lots of servers, whether virtual or hardware, being able to automate tasks is a blessing. On the CLI, quick sorting and grepping is commonplace for those in DevOps roles, but if you have found yourself writing something like this, these tools are probably for you:

$ cat /proc/cpuinfo | grep "model name" | awk '{ print $NF }'

We have extended the SoftLayer Python bindings to also ship with a new command line tool: sl. Simply install it from PyPI, configure it with your user name and API key, and you are ready to go.

## This command requires the python-setuptools package to be installed:
$ sudo easy_install softlayer

## This alternative method requires the python-pip package to be installed:
$ sudo pip install softlayer

## Then, set up your config, which will require your user name and API key:
$ sl config setup

Voila! You are all set up and ready to rock. Give it a test run by trying out some of these commands:

$ sl --help
$ sl cci list
$ sl hardware list
$ sl dns list
$ sl dns list | grep 20.. # notice how we adjusted the output for you? Great for sed/awk use.
$ sl dns list --format=table > dns_zones.txt # redirects pretty tables output to a file.

Development for this tool is out in the open on GitHub at https://github.com/softlayer/softlayer-api-python-client. Documentation is also available at https://softlayer-api-python-client.readthedocs.org/en/latest. Note that this is not the full API documentation found on the SoftLayer Developer Network (SLDN) site; however, it is a great resource for SoftLayer Python API references and examples.

The swftp-chef Cookbook

To support our DevOps comrades even further, we have released swftp-chef. This simplifies the deployment of swftp—an SFTP-based interface to Swift—in your fleet even more. Installing is as simple as running one of these two commands:

## If you have the knife-github gem installed, obtain it with this command:
$ knife cookbook github install softlayer/chef-swftp

## Otherwise, obtain it with this command:
$ knife cookbook site install swftp

Afterwards, set the attributes on either the role or the environment and add the cookbook to your run list. You can find a full list of swftp attributes in its README file.
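
As a hypothetical example (the recipe name assumes the cookbook's default recipe; check its README for the exact run-list entry and attributes):

$ knife node run_list add FQDN 'recipe[swftp]'
$ knife ssh 'name:FQDN' 'sudo chef-client'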

Scaling and Branching Deployments

The private cloud recipes were designed with scaling in mind. OpenStack was built to be scaled, but most small deployments adhere to the three-node model of a Compute, Network, and Controller node. SoftLayer provides several options for deployers who need to scale out. You can add Compute and Network nodes where necessary to compensate for load, and you can branch components of the install to separate servers in any configuration you choose. Going to be a heavy user of block storage? Move the Cinder role to a separate server and deploy it as the sole component on that system.

The private cloud is made up of 12 components:

  • OpenStack MySQL Servers:
    • Quantum
    • Nova
    • Cinder
    • Glance
    • Keystone
  • RabbitMQ
  • OpenStack Controller
  • OpenStack Nova Compute
  • OpenStack Quantum
  • OpenStack Keystone Authentication
  • OpenStack Glance
  • OpenStack Cinder

The components (roles) can be branched into the traditional three-node OpenStack model.

Server               | Role(s)
OpenStack Controller | MySQL, RabbitMQ, Keystone, Controller, Glance, Cinder
OpenStack Compute    | Nova
OpenStack Network    | Quantum

At scale, you may wish to have some separation of these roles to handle the increased load on any single component. Database roles can actually be split and scaled out to suit environments with heavy database churn or a desire for stronger isolation, helping to more evenly distribute load:

Server                   | Role(s)
OpenStack Controller     | MySQL (Quantum, Nova), RabbitMQ, Controller
OpenStack Authentication | MySQL (Keystone), Keystone
OpenStack Block Storage  | MySQL (Cinder), Cinder
OpenStack Image Store    | MySQL (Glance), Glance
OpenStack Compute        | Nova
OpenStack Network        | Quantum

In such a scenario, high load on a single component is far less likely to adversely affect the performance of another component. This flexibility allows you to use the hardware you already have more effectively—before having to spend money on beefier hardware.
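
As a sketch, branching Cinder and its database onto a dedicated node could reuse the role names from the bootstrap example (the hostname below is hypothetical):

knife bootstrap cinder4.example.com -x USER --sudo -i ~/.ssh/id_rsa -E ENVIRONMENT
knife node run_list add cinder4.example.com 'role[grizzly-mysql-cinder]'
knife node run_list add cinder4.example.com 'role[grizzly-cinder]'
knife ssh 'name:cinder4.example.com' 'sudo chef-client'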

Testing OpenStack

Use the testing procedures below before pushing your server into a production environment. Additionally, in case you want to create your own testing environment (or sandbox), those instructions are also included below.

Test Connectivity

The hostname for each server must meet the following requirements:

  1. The hostname must be a fully qualified domain name (FQDN), which includes the domain suffix. Example: mychefserver.example.com (not simply mychefserver).
  2. The hostname must be resolvable. In most cases, such as for a server that will run in a production environment, add the hostname for the server to the DNS system. In some cases, such as when deploying a server into a testing environment, adding the hostname to the /etc/hosts file is enough to ensure that the hostname is resolvable (see the example entry below).
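
For a test environment, an /etc/hosts entry might look like this (the IP address and hostname are placeholders):

10.0.0.5    mychefserver.example.com    mychefserver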

Resolvable Hostname

To verify if a hostname is resolvable, run the following command:

$ hostname -f

If the hostname is resolvable, it will return something like:

mychefserver.example.com

Alternatively, you can ping the host:

$ ping mychefserver.example.com

PING mychefserver.example.com (127.0.0.1) 56(84) bytes of data.
64 bytes from mychefserver.example.com (127.0.0.1): icmp_req=1 ttl=64 time=0.034 ms
64 bytes from mychefserver.example.com (127.0.0.1): icmp_req=2 ttl=64 time=0.028 ms
64 bytes from mychefserver.example.com (127.0.0.1): icmp_req=3 ttl=64 time=0.014 ms
64 bytes from mychefserver.example.com (127.0.0.1): icmp_req=4 ttl=64 time=0.014 ms

Check Connectivity

A simple item to mark off your checklist is making sure that each server can communicate with the rest of the cluster. This is important when preparing for deployment and for testing afterward. To do this, use the ping command to ping each of the other servers' FQDNs.

$ ping control1.example.com
$ ping compute2.example.com
$ ping network3.example.com

Test Your Install

Run the following commands on the controller node to check if OpenStack has been deployed and is running correctly. Before running them, be sure to update your bash environment with the correct OpenStack variables to use them:

# source .openrc

Nova

root@control1:~# nova-manage service list 

Binary           Host      Zone             Status     State Updated_At                                                                         
nova-cert        control1  internal         enabled    :-)   2013-09-03 15:21:29
nova-scheduler   control1  internal         enabled    :-)   2013-09-03 15:21:29
nova-conductor   control1  internal         enabled    :-)   2013-09-03 15:21:29
nova-consoleauth control1  internal         enabled    :-)   2013-09-03 15:21:30
nova-compute     compute2  nova             enabled    :-)   2013-09-03 15:21:23

Each service or agent will display a :-) if it is running correctly, and XX if it is not.

Quantum/Neutron

root@control1:~# quantum agent-list

+--------------------------------------+--------------------+----------+-------+----------------+                                                
| id                                   | agent_type         | host     | alive | admin_state_up |
+--------------------------------------+--------------------+----------+-------+----------------+
| 438d2dd2-daf3-496c-99ad-179ed307b8d6 | Open vSwitch agent | compute2 | :-)   | True           |
| 4c684b66-16db-4853-8cb3-51098c3752b3 | L3 agent           | network3 | :-)   | True           |
| 98c3d818-0ebe-4cd6-89bf-0f107a7532ab | DHCP agent         | network3 | :-)   | True           |
| dd3b9134-ab76-440a-ba7a-70135202eb82 | Open vSwitch agent | network3 | :-)   | True           |
+--------------------------------------+--------------------+----------+-------+----------------+

Each service or agent will display a :-) if it is running correctly, and XX if it is not.

Glance

root@control1:~# glance image-list

+--------------------------------------+-----------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                              | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------------------------+-------------+------------------+-----------+--------+
| 3470a8b5-46a7-442b-ac75-7c8f663e271d | CirrOS 0.3.0 i386                 | qcow2       | bare             | 9159168   | active |
| ace57487-30b6-41c9-9e7a-9d44f103437d | CirrOS 0.3.0 x86_64               | qcow2       | bare             | 9761280   | active |
| 3722fed6-0065-4521-b480-8abd5f7abf2c | Fedora 18 (Cloud) i386            | qcow2       | bare             | 226492416 | active |
| 21c3f3ae-f773-46f9-8fef-d3c0a712ef45 | Fedora 18 (Cloud) x86_64          | qcow2       | bare             | 228196352 | active |
| dbda560c-8e09-4035-9332-03ef4470a934 | Fedora 19 (Cloud) i386            | qcow2       | bare             | 235536384 | active |
| c2ac12e3-5f11-4679-8074-232c5040b901 | Fedora 19 (Cloud) x86_64          | qcow2       | bare             | 237371392 | active |
| 2baacb65-fa9d-4707-9856-5b6d5803d63e | Ubuntu 12.04 Server (Cloud) amd64 | qcow2       | bare             | 251985920 | active |
| 49a33caa-8e78-41c3-8af6-cc5ea25182f2 | Ubuntu 12.04 Server (Cloud) i386  | qcow2       | bare             | 230621184 | active |
| 5b8998ce-40fa-43cb-9f79-11f4e9a32296 | Ubuntu 12.10 Server (Cloud) amd64 | qcow2       | bare             | 221642752 | active |
| e8f69a8b-c1f5-4432-89bb-6bfecfea7cc3 | Ubuntu 12.10 Server (Cloud) i386  | qcow2       | bare             | 219938816 | active |
| 668e92c6-b6c0-4f9e-bf0d-078ede9667e9 | Ubuntu 13.04 Server (Cloud) amd64 | qcow2       | bare             | 235143168 | active |
| 169fc708-cdbf-4dd3-a106-ced48a88922f | Ubuntu 13.04 Server (Cloud) i386  | qcow2       | bare             | 233308160 | active |
+--------------------------------------+-----------------------------------+-------------+------------------+-----------+--------+

Cinder

If you haven’t created any Cinder volumes yet, this command will return nothing. Otherwise, a list of created volumes will be returned.

root@control1:~# cinder list

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| e60def40-03a0-4e08-a9e5-3f60be89ad57 | available | test         | 1    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Keystone

Keystone authentication has several commands that are worth checking, especially if you are experiencing difficulty with user accounts, OpenStack component communication, or authentication.

For detailed information on troubleshooting and debugging Keystone, please refer to its documentation at http://docs.openstack.org.

Check Keystone Users

root@control1:~# keystone user-list

+----------------------------------+----------+---------+--------------------+
|                id                |   name   | enabled |       email        |
+----------------------------------+----------+---------+--------------------+
| 36fcdb3ca5d349ffb82731b91c522080 |  admin   |   True  |   root@localhost   |
| 01b0af86bb1e4062b04882a03bff9758 |  cinder  |   True  |  cinder@localhost  |
| 2e15846831884959a6fc0be96a9bceaf |   demo   |   True  |   demo@localhost   |
| ee1ffd860b9940c9bedcbd27e5439333 |  glance  |   True  |  glance@localhost  |
| 62a9c425721f48d8892a9132e23ede03 |   nova   |   True  |   nova@localhost   |
| b22df525c6714c259189fd2da82b72af | quantum  |   True  | quantum@localhost  |
+----------------------------------+----------+---------+--------------------+

Check Keystone Tenants

root@control1:~# keystone tenant-list

+----------------------------------+--------------------+---------+
|                id                |        name        | enabled |
+----------------------------------+--------------------+---------+
| a06ad73e633b4a479986f8de4b613e51 |       admin        |   True  |
| bb4f6b6e17314255b49a39fbe9fdff58 |        demo        |   True  |
| be05a0ac2e034f3aa43ac06a90943a14 | invisible_to_admin |   True  |
| cc0804251a5145c485ad87ab384350be |      service       |   True  |
+----------------------------------+--------------------+---------+

Check Keystone Services

root@control1:~# keystone service-list

+----------------------------------+----------+----------+------------------------------+
|                id                |   name   |   type   |         description          |
+----------------------------------+----------+----------+------------------------------+
| fd9b4f0ed7434504a8938d6e8b674455 |  cinder  |  volume  |   OpenStack Volume Service   |
| 05a89640b8d04a5c902b86d4bfbb9a8f |   ec2    |   ec2    |    OpenStack EC2 service     |
| 844e120b3b594f68838e6b0e5a227427 |  glance  |  image   |   OpenStack Image Service    |
| 1ef9095e84d44658a66a46aabe4ef551 | keystone | identity |      OpenStack Identity      |
| 8f50ec8454b9432b94cd7d68172dc98a |   nova   | compute  |  OpenStack Compute Service   |
| 3de9a5a7e57943739f59c014554ea602 | quantum  | network  | OpenStack Networking service |
+----------------------------------+----------+----------+------------------------------+

Keystone endpoints are extremely important and can cripple your deployment when not configured correctly. Chef wires up the necessary endpoint information during deployment, but checking the endpoint list is always a good idea when troubleshooting issues connecting or authenticating to Keystone. Each endpoint should correspond to the correct server in your cluster. Note that the ports in the example below are the defaults, and may vary if you’ve overridden the default attributes for them.

root@openstack1:~# keystone endpoint-list

+----------------------------------+--------+-------------------------------------------+-------------------------------------------+-------------------------------------------+----------------------------------+
|                id                | region |                 publicurl                 |                internalurl                |                  adminurl                 |            service_id            |
+----------------------------------+--------+-------------------------------------------+-------------------------------------------+-------------------------------------------+----------------------------------+
| 0a9d0209521e40c4820c224e3e1a015d | Region |        http://XX.XX.XX.XX:5000/v2.0       |        http://XX.XX.XX.XX:5000/v2.0       |       http://XX.XX.XX.XX:35357/v2.0       | 1ef9095e84d44658a66a46aabe4ef551 |
| 33766366fb4e4e919d441116f46a6900 | Region |  http://XX.XX.XX.XX:8776/v1/$(tenant_id)s |  http://XX.XX.XX.XX:8776/v1/$(tenant_id)s |  http://XX.XX.XX.XX:8776/v1/$(tenant_id)s | fd9b4f0ed7434504a8938d6e8b674455 |
| 4d6f3fb4758442068e73b808a0501864 | Region |   http://XX.XX.XX.XX:8773/services/Cloud  |   http://XX.XX.XX.XX:8773/services/Cloud  |   http://XX.XX.XX.XX:8773/services/Admin  | 05a89640b8d04a5c902b86d4bfbb9a8f |
| 87343c685ef5477b81075017159e1a39 | Region |          http://XX.XX.XX.XX:9696/         |          http://XX.XX.XX.XX:9696/         |          http://XX.XX.XX.XX:9696/         | 3de9a5a7e57943739f59c014554ea602 |
| e81953d02428499bb981e78be560bc88 | Region |         http://XX.XX.XX.XX:9292/v2        |         http://XX.XX.XX.XX:9292/v2        |         http://XX.XX.XX.XX:9292/v2        | 844e120b3b594f68838e6b0e5a227427 |
| f560ac6915f241019b95e103146326dd | Region |  http://XX.XX.XX.XX:8774/v2/$(tenant_id)s |  http://XX.XX.XX.XX:8774/v2/$(tenant_id)s |  http://XX.XX.XX.XX:8774/v2/$(tenant_id)s | 8f50ec8454b9432b94cd7d68172dc98a |
+----------------------------------+--------+-------------------------------------------+-------------------------------------------+-------------------------------------------+----------------------------------+

All-in-One Sandbox

This section is designed to illustrate a proof-of-concept installation for learning or development purposes, in which you may wish to run everything on a single machine.

Create Your Own Sandbox

The installation process includes:

  1. Installing VirtualBox on your laptop or desktop
  2. Installing Vagrant on your laptop or desktop
  3. Creating one VirtualBox VM for Chef Server
  4. Creating a second VirtualBox VM for OpenStack
  5. Deploying OpenStack in your second VM using the Chef Server in your first VM

Introduction to Vagrant

Vagrant provides a quick and configuration-free platform to test drive the SoftLayer private cloud recipes on your own. Vagrant uses VirtualBox to preconfigure virtual machines without manual intervention. In this section, Vagrant will be used to deploy Chef Server, install the SoftLayer private cloud cookbook, and then provision the OpenStack all-in-one node. This workflow is similar to what is seen in a production environment, and will work for Microsoft Windows and most Linux distributions.

Installation Process

Follow the instructions below to install VirtualBox and Vagrant.

Install VirtualBox

  1. Go to the VirtualBox download page.
  2. Download the AMD64 version for your operating system.
  3. Install VirtualBox using the downloaded package. For example, on Ubuntu and Debian Linux, run the following command:
     $ dpkg -i virtualbox-4.2_4.2.18-88780~Ubuntu~precise_amd64.deb
  4. On Windows, proceed through installation using the setup wizard.

Install Vagrant

  1. Go to the Vagrant download page.
  2. Click the latest version. (At the time this guide was written, the most recent version was 1.3.3.)
  3. Download the package for your operating system.
  4. Install Vagrant from the package.

Download the Vagrant File Scripts

  1. Make a temporary directory to place the Vagrant files.
     ## Linux command:
     $ mkdir ~/softlayer
     ## Windows command:
     > mkdir C:\softlayer
  2. Save the Vagrant files to the created location and open a terminal window or Windows command prompt.
  3. Change your directory (cd) to the vagrant directory you just created.
  4. Run vagrant up. The install will take approximately 15 minutes, and will provision two VirtualBox VMs, install Chef Server, and bootstrap the OpenStack installation.
  5. After completion, the Vagrant script will tell you how to access the new Chef Server, as well as Horizon (the OpenStack dashboard).

Vagrant uses VirtualBox to deploy Chef Server and OpenStack. Running OpenStack compute instances inside an already virtualized environment is very slow compared to the speed of a hardware deployment.

Chef Server

  1. From your computer’s browser, navigate to https://127.0.0.1/.

  2. The Chef server uses a self-signed certificate, so your browser will prompt you to accept it before proceeding.

  3. The Chef Server login prompt will be next. Enter in the following credentials:

    • username: admin
    • password: p@ssw0rd1

OpenStack

  1. From your computer’s browser, navigate to http://127.0.0.1:7081/horizon/.

  2. Log into OpenStack using the provided credentials:

    • username: admin
    • password: passwordsf

OpenStack Use Scenario

OpenStack is a cloud-computing project. It allows the implementer to create a private datacenter and provides everything needed for a private cloud experience:

  • Virtual Machines/Instances
  • Block Storage
  • Networking

While it’s very time-consuming to set up manually on your own, our Chef recipes make it simple to get up and running quickly. As an example, let’s look at a fictional company called SoftCube.

SoftCube

SoftCube is a new startup. They expect a large boom in new customers and need the ability to adjust quickly to a changing set of requirements. Using a private cloud will provide them with the flexibility and adaptability that they need. To make this happen, SoftCube needs to move three of their existing hardware servers to the cloud. Currently, they have two web servers and one database server.

These servers reside on-site, and none of them has redundant power or networking. Let’s get them moved to the SoftLayer Private Cloud that SoftCube just decided to purchase.

SoftCube will need OpenStack compute instances to replace their hardware servers. Because SoftCube is a security-conscious company, they will use key-based SSH authentication to access their compute instances—the default behavior for new compute instances in OpenStack. Before creating these instances, we’ll need to create the SSH key that will be used to access them. Then, each time SoftCube creates a new instance, OpenStack can inject the SSH key, and they’ll use the key each time they need to log in to one of their servers.

Create an SSH Key

To create an SSH key, you’ll follow these simple steps in the Horizon dashboard.

  1. Log in to the Horizon dashboard running on SoftCube's new SoftLayer Private Cloud.
  2. Click on the "Project" tab.
  3. Select your Current Project (admin in this case).
  4. Click "Access & Security".
  5. Click the "Keypairs" tab.
  6. Click "Create Keypair".
  7. Name the new keypair "SoftCube-Admin" and click the blue "Create Keypair" button.
  8. Now that the keypair is created, download it by clicking the provided link if it does not start automatically.
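
If SoftCube prefers the command line, the same keypair can be created with the nova client while logged in to the controller; for example:

# nova keypair-add SoftCube-Admin > SoftCube-Admin.pem
# chmod 600 SoftCube-Admin.pem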

Create Compute Instances

Now that SoftCube’s keypair is available, the compute instances can be created.

  1. Click on the "Project" tab.
  2. Select your Current Project (admin in this case).
  3. Click "Instances".
  4. Click the "Launch Instances" button.
  5. A new dialog window will appear with all the details needed for launching a new compute instance. In the launch instance window, they will need the following information handy to create and launch their instances:
    • Image type
    • Name
    • Flavor
    • Keypair
    • Network information
    Additional volume and post-creation options may be needed in SoftCube's future, but are not necessary to specify right now.
  6. On the "Details" tab, provide these options.
    • Instance Source: Image
    • Image: Ubuntu 12.04 Server (Cloud) amd64
    • Instance Name: web1.softcube.com
    • Flavor: m1.medium
  7. Click on the "Access & Security" tab.
    • "SoftCube-Admin" should be already selected as the SSH key, but if not, select it from the list.
    • Uncheck the "default" security group.
    • Check the "basic-ssh-icmp" security group.

Configure Network Access

SoftCube’s web servers will need both internet access and private network access with each other. They’ve decided to use floating IPs in OpenStack to handle inbound public access to their web servers, private network access for all three servers, and no public network access to their database server. This network setup requires each web server to have two network connections, as illustrated below.

  1. Click on the "Networking" tab.
  2. In the "Available Networks" list, click the "+" button next to "stack-network" and "softlayer-private".
  3. Click the "Launch" button. The first web server will be launched. Since SoftCube needs two web servers, follow the same steps to create a second web server with the name web2.softcube.com.
  4. Lastly, they'll need an instance for the database server. Since SoftCube wants to plan for growth as early as possible, they will make a few changes to the instance configuration.
  5. On the "Details" tab, provide these options:
    • Instance name: db.softcube.com
    • Flavor: m1.large (for a beefier amount of compute power)
  6. On the "Access & Security" tab, provide the same options as your web server.
    • On the "Networking" tab under "Available Networks", click the "+" button next to "softlayer-private".
    • Click the "Launch" button. Within seconds, the database instance will launch, and SoftCube will be on the private cloud!
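
For reference, an equivalent launch of a web server from the command line might look like the following sketch (the network IDs are placeholders you can look up with quantum net-list):

# quantum net-list
# nova boot --flavor m1.medium \
    --image 2baacb65-fa9d-4707-9856-5b6d5803d63e \
    --key-name SoftCube-Admin \
    --security-groups basic-ssh-icmp \
    --nic net-id=STACK_NETWORK_ID \
    --nic net-id=SOFTLAYER_PRIVATE_NET_ID \
    web1.softcube.com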

Allocate Public IP Addresses

We’re getting closer, but we aren’t quite finished yet. Each web server will need a public IP for inbound public access. Currently, the compute instances are only accessible from devices connected to the OpenStack Quantum (Grizzly)/Neutron (Havana) network, or from devices on the SoftLayer Private Network. SoftCube can allocate public IP addresses purchased from SoftLayer to any instance at any time by assigning Floating IPs, which is our next step.

  1. From the Instances list, click the "More" dropdown for web1.softcube.com.
  2. Click "Associate Floating IP".
  3. Currently, SoftCube has no allocated Floating IPs. One will need to be allocated before it can be assigned to a compute instance. Click the "+" button to associate a floating IP with the current project.
  4. We need floating IPs provided to us from the "softlayer-public" network, so ensure it is selected as the Pool, and click the "Allocate IP" button.
  5. The Manage Floating IP Associations dialog box will appear with a public IP address pre-populated. Ensure the web1.softcube.com server is selected as well, and click "Associate".
  6. Horizon will attach the new public IP address to the web1.softcube.com instance, and display it in the "IP Address" column in the instances list. Sometimes this may take a moment to update, but by this time the IP address has already been routed to the instance.
  7. Follow the same steps above to allocate another floating IP address to web2.softcube.com.
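
The same allocation and association can also be done from the command line; a sketch (the address shown is a placeholder for the IP returned by the first command):

# nova floating-ip-create softlayer-public
# nova add-floating-ip web1.softcube.com XX.XX.XX.XX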

Final Touches

SoftCube now has three OpenStack compute instances:

  • Two web servers with public inbound access
  • One database server that is accessible only on the backend

At this point, they can start transitioning their aging, hardware-based web and database servers to their new OpenStack instances.

Looks like SoftCube is well on its way to a successful future in the private cloud.

Finding Support for OpenStack

If OpenStack is new to you, becoming familiar with its components and terminology can help dial back the frustration that comes with being thrown into the fire.

Technical Resources

Check out the OpenStack-related articles on our SoftLayer Development Network (SLDN).

Definitions for Components

Instead of using the descriptive names of the components, specialists in the OpenStack community typically use the project (code) names. This table translates between the two.

Component                 | Name              | Description
Identity / Authentication | Keystone          | Provides authentication, token validation, and service URLs for each service to authorized users.
Compute                   | Nova              | Manages all hypervisor interaction, management, and instance state.
Networking                | Quantum (Neutron) | Manages state for all networks defined within a cluster, and provides network routing, IP management, DHCP, and load balancing.
Image Management          | Glance            | Maintains state and stores copies of OS images for use in new instance deployments.
Block Storage             | Cinder            | Allocates and manages block storage for instances, whether secondary, tertiary, etc. disks that are attached to instances. Also stores and manages snapshots created from instances.
Object Storage            | Swift             | Provides generic object storage services, usually as a more generic file store.
Management UI             | Horizon           | Allows administrators and end-users to manage each of the above.

Definitions for Terms

The list below contains only basic terms found throughout our documentation. For a beefier, more comprehensive list, go to http://docs.openstack.org/glossary/glossary.

Term | Definition
Action | Providers take idempotent actions to configure resources.
Attribute | Attributes are data about nodes.
Authentication | Clients authenticate to a server using pre-shared RSA keys and signed HTTP headers.
Auto-vivify | Internally in the library, attributes are automatically created as methods.
Bootstrap | In this context, bootstrap means to get Chef installed and ready to run on the target system.
Client | The client communicates with a server to download the cookbooks it needs to compile and run its configuration.
Configuration Management | Setting up all the various components and services on a server so it can fulfill a role is configuration management.
Convergence | The process by which systems are brought in line with the overall configuration management policy.
Cookbook | Chef cookbooks are packages of code used to configure some aspect of a system.
DSL | A programming or specification language dedicated to a specific problem domain. Chef makes use of meta-programming features in Ruby to create a simple DSL for writing recipe, role, and metadata files.
Data Bag | An arbitrary store of JSON data that is indexed for Search.
Definition | Definitions allow creation of new resource macros that string together other resources.
Environments | Provide a mechanism for managing different segmented spaces such as production, staging, development, and testing with one Chef setup (or one organization on Hosted Chef), allowing you to set policies that dictate which versions of a given cookbook may be used within an infrastructure segment.
File Specificity | File specificity is the lookup order in which Chef searches for host-, platform-version-, or platform-specific files to use when downloading/configuring file and template resources.
Git | Git is a distributed version control system.
Idempotent | A mathematical term that means multiple applications of the same action should not change the result.
Index | Most data (all but Cookbooks) stored on the Chef Server is indexed for Search.
Infrastructure | Applications run on infrastructure. Infrastructure in this context is not physical or virtualized things like servers or networking. Rather, infrastructure is the application itself, plus all the underlying software prerequisites, server settings, tweaks, and configuration files needed for it to function properly. Infrastructure typically spans nodes, and often networks.
JSON | JavaScript Object Notation is a lightweight data format that is easy to read and write. All the APIs used in Chef are driven by JSON data.
Knife | A command-line tool used to work with a Chef Server and local Chef Repository.
Library | In a Chef cookbook, a Library is arbitrary Ruby code that can be used to extend Chef’s language or roll out new features.
Merb | Merb is a lightweight MVC framework used by the Chef Server to provide the API that clients communicate with.
Metadata | Chef cookbooks use metadata to provide hints to the Chef Server about which cookbooks should be deployed to a node; metadata can also be used in user interfaces built on top of Chef.
Node | A node is a system that is configured in an environment.
Operating System | An operating system manages hardware and provides services for running application software.
Organization | In Hosted Chef, an organization represents a company, department, or other grouping of server infrastructure managed by Chef.
Platform | Chef detects the Operating System it is running on through Ohai, and uses that platform primarily to determine which provider to use for particular resources.
Provider | A provider is an abstraction on top of system commands or API calls that is used to configure a resource. Providers are often Operating System specific, such as the providers that install packages for Debian (APT), Red Hat (Yum), or ArchLinux (Pacman).
Provision | The act of installing an operating system on bare metal, virtual machines, or cloud computing instances is provisioning. Before Chef can configure and integrate systems, the system must first be provisioned, and then bootstrapped. This process varies by Operating System platform.
Queue | The Chef SOLR search index uses a queue for incoming data that needs to be indexed for search on the Chef Server.
Recipe | A recipe is a Ruby DSL configuration file that you write to encapsulate resources that should be configured by Chef.
Repository | A Chef Repository is a directory where you store all the various code used to configure your infrastructure with Chef.
Resource | A resource is an abstraction that represents a particular thing that needs to be configured, such as a package or a service.
REST | “REpresentational State Transfer (REST) is a style of software architecture for distributed hypermedia systems”, commonly associated with HTTP. APIs conforming to REST constraints are “RESTful”. Chef uses a RESTful API.
Role | A role describes a set of functionality for nodes through recipes and attributes.
Ruby | Ruby is an object-oriented programming language. Chef is written in Ruby and uses a number of Ruby DSLs for writing recipe, role, and metadata files as code.
Run List | A run list is an array of recipes and roles that should be applied in order on the node, or in another role.
SOLR | SOLR is a full text search engine platform written in Java.
Search | Data stored by the Chef Server is indexed for Search and can be queried with SOLR’s Lucene search syntax.
Shef | The Chef read-eval-print loop (REPL), Shef, is a way to run Chef in an IRB session. IRB is an interactive Ruby console.
Solo | Chef Solo is a standalone, non-client/server way to execute Chef recipes on nodes.
System Integration | The act of making disparate systems in an infrastructure work together to provide application services and business value is system integration. It is where all the systems that have been configured are brought together to do their job.
Tags | Tags are an array attribute of nodes.
Template | Chef uses ERB templates to create dynamically generated configuration files. The template files themselves are stored in cookbooks and generated using the Erubis library, as it is faster than the default implementation of ERB in the Ruby standard library.
User | In the Open Source Chef Server, users log into the Webui Management Console. In Hosted Chef, users are credentialed entities used by humans to connect to Hosted Chef to manage the Organization.
Webui | Hosted Chef and the Open Source Chef Server have a web user interface Management Console that can be used to view and modify various parts of the environment.