How to launch Pebbles server on cPouta

These are step-by-step instructions on how to launch a Pebbles server on the cPouta IaaS cloud (https://research.csc.fi/pouta-iaas-cloud). We assume that you are familiar with the cPouta service. The Horizon web interface will be used as an example.

First we create a security group, then launch a server and finally install the software over an interactive ssh session.

Part 1: Launch the server

Prerequisites

  • First log in to https://pouta.csc.fi
  • Check that the quota on the first page looks OK.
  • Upload a public key or create a key pair, if you have not done so already (Access and Security -> Key Pairs)
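If you prefer the command line over Horizon, the key pair can also be uploaded with the OpenStack client. This is only a hedged sketch: it assumes the client is installed and an RC file for your project has been sourced, and the key name and path below are placeholders.

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub <key name>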

Create a security group for the server

  • In case you are a member of multiple projects in cPouta, select the desired project in the project selection drop-down box at the top of the page
  • Create a security group called ‘pb_server’ for the server (Access and Security -> Create Security Group)
  • Add ssh and https access from your workstation IP/subnet (Manage rules -> Add rule)
      • ssh: port 22, CIDR: your IP/32 (you can check your IP with e.g. http://www.whatismyip.com/)
      • https: port 443, CIDR: as above
  • At this point keep the firewall as restricted as possible. Once the installation is complete, you can open the https-access to all your users.
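For reference, a command line equivalent of the steps above with the OpenStack client would look roughly like the sketch below (an assumption-laden illustration, not part of the Horizon flow; it presumes the client is installed and an RC file for the project has been sourced):

$ openstack security group create pb_server
$ openstack security group rule create --protocol tcp --dst-port 22 --remote-ip <your ip>/32 pb_server
$ openstack security group rule create --protocol tcp --dst-port 443 --remote-ip <your ip>/32 pb_server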

Boot the server

We’ll next create the server VM. It will be based on CentOS-7.

  • Go to (Instances -> Launch Instance)
  • Details tab:
      • Instance name: e.g. ‘pb_server’
      • Flavor: ‘standard.small’
      • Instance boot source: ‘boot from image’
      • Image: ‘CentOS-7’
  • Access and security tab:
      • Key Pair: your keypair
      • Security Groups: unselect ‘default’, select ‘pb_server’
  • Networks tab: default network is ok.
  • Post-Creation and Advanced tabs can be skipped
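The same launch can also be done with the OpenStack client. The sketch below is hedged: it assumes the client is set up as before, and the key pair name is a placeholder; the image, flavor and security group names match the choices listed above.

$ openstack server create --image CentOS-7 --flavor standard.small \
    --key-name <your keypair> --security-group pb_server pb_server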

Assign a public IP

  • Go to (Instances) and click More on the pb_server instance row. Select ‘Associate floating IP’
  • Select a free floating IP from the list and click ‘Associate’
  • If there are no items in the list, allocate an address with (+).
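A hedged command line equivalent of these steps (the external network name is a placeholder here; check the networks available to your project):

$ openstack floating ip create <external network>
$ openstack server add floating ip pb_server <floating ip>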

Download the machine-to-machine OpenStack RC file

  • Log out and log in to https://pouta.csc.fi again, this time using your machine-to-machine credentials
  • Go to (Access and Security -> API Access) and click ‘Download OpenStack RC File’. Save the file to a known location on your local computer

Part 2: Install software

Open an ssh connection to the server:

$ ssh cloud-user@<public ip of the server>

Update the server packages (we’ll reboot it later):

$ sudo yum update -y

Clone the repository from GitHub:

$ git clone https://github.com/CSCfi/pebbles.git

Run the install script once; you will be asked to log out and back in again to make the unix group changes effective. This is a good time to reboot the server after the updates:

$ ./pebbles/scripts/install_pb.bash
$ sudo reboot

Wait for the server to reboot, then copy the m2m OpenStack RC file to the server:

$ scp path/to/your/saved/rc-file.bash cloud-user@<public ip of the server>:

SSH in again, source your m2m OpenStack credentials (enter your m2m password when prompted) and continue the installation:

$ ssh cloud-user@<public ip of the server>
$ source your-openrc.bash
$ ./pebbles/scripts/install_pb.bash

Part 3: Quick start using the software

Here is a list of tasks for a quick start.

Set admin credentials

The installation script will print out an initialization URL at the end of the installation. Navigate to that URL, set the admin credentials and log in as an admin. If you are using the ansible script you won’t see the URL; it is of the format https://pebbles.example.org/#/initialize .

Configure essentials

You should change at least the following settings:

class pebbles.config.BaseConfig

Stores the default key/value pairs for the system configuration. The configuration is rendered with a decorator that considers environment variables first, then the system level config file and finally the default values, in that order of precedence.

  • BASE_URL
  • BRAND_IMAGE: an image URL for branding the installation
  • MAIL_SERVER
  • MAIL_SUPPRESS_SEND
  • SENDER_EMAIL

See pebbles.config.BaseConfig for all the options.
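As an illustration only, and following the precedence described above, these settings could be overridden via environment variables before the services are started. This is a hedged sketch: the values are made up, and the assumption that the variables are read verbatim from the environment should be checked against pebbles.config.BaseConfig for your version.

# hypothetical values for illustration only
$ export BASE_URL=https://pebbles.example.org
$ export BRAND_IMAGE=https://pebbles.example.org/img/logo.png
$ export MAIL_SERVER=smtp.example.org
$ export MAIL_SUPPRESS_SEND=False
$ export SENDER_EMAIL=pebbles-admin@example.org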

Enable a driver

By default, only a dummy test driver is enabled. To add more drivers for provisioning different resources, you need to edit the PLUGIN_WHITELIST variable. Change this variable to e.g.

PLUGIN_WHITELIST: OpenStackDriver DummyDriver

The plugin infrastructure running in ‘worker’ container will periodically check the plugin whitelist and upload driver configurations to the API server running in ‘api’ container. In the case of OpenStackDriver, the configuration will dynamically include VM images and flavors that are available for the cPouta project. Refresh the page after a minute or two, and the Plugins list on top of the page should include the driver of your choice.
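To verify that the worker has picked up the new whitelist, one option is to tail the worker logs over the ssh alias described in Part 5. This is a hedged example: the log directory below matches the layout mentioned there for the containers, but the exact file names may differ in your installation.

$ ssh worker
$ tail -f /webapps/pebbles/logs/*.log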

Create a test blueprint

Click ‘Create Blueprint’ next to OpenStackDriver in the plugin list and you are presented with a dialog for configuring the new blueprint. We’ll create a blueprint for an Ubuntu-14.04 based VM, using the standard.tiny flavor, running for 1h maximum. We’ll also test running a custom command as part of the boot process and allow the user to open ssh access to the instance from an arbitrary address.

  • Name: Ubuntu-14.04 test
  • Description: Test blueprint for launching a single core Ubuntu-14.04 VM in cPouta
  • Flavor: standard.tiny
  • Maximum lifetime: 1h
  • Maximum instances per user: 1
  • Pre-allocate credits for the instance from the user quota: unchecked
  • Cost multiplier: 0
  • Remove the example Frontend firewall rule
  • Allow user to request instance firewall to allow access to user’s IP address: check

Also add a Customization script, just for test purposes:

#!/bin/bash
touch /tmp/hello_from_blueprint_config

Save the new blueprint and enable it in the Blueprints list.

Launch a test instance

Go to ‘Dashboard’ tab. If you have not uploaded your ssh public key yet, you’ll see a notice with a link to do so in the Blueprint list. Click the link and upload or generate a public key.

Go back to ‘Dashboard’ and launch an instance. You’ll notice the new instance in the Instance list. Click on the instance name; that will take you to the detailed view, where you can see the provisioning logs and update access to your IP once the instance is up and running. Click ‘Query client IP’ to let the system make an educated guess of your IP, then click ‘Change client IP’. Now the instance firewall is open to that IP. Copy the ssh command from the Access field above and paste it into a terminal (or an ssh client).
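The exact command is shown in the Access field; as a rough, hedged illustration it will look something like the line below. The user name here is an assumption (a stock Ubuntu-14.04 cloud image typically uses ubuntu), so use whatever the Access field shows.

$ ssh ubuntu@<public ip of the instance>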

Check if our boot time customization script worked:

$ ls -l /tmp/hello_from_blueprint_config
-rw-r--r-- 1 root root 0 Nov 17 09:47 /tmp/hello_from_blueprint_config

Enable Docker Driver

Enabling DockerDriver requires a bit more preparation; see the Docker driver documentation.

Enable OpenShift Driver

An OpenShift driver exists, but its documentation is still incomplete. ToDo: a link to the documentation when it is done.

Part 4: Open access to users

Once you have set the admin credentials and checked that the system works, you can open the firewall to all the users.

  • Go to pouta.csc.fi -> Access and Security -> Security Groups and select Manage Rules on ‘pb_server’ group
  • Open https access either globally by selecting ‘Add rule’ -> port 443, CIDR 0.0.0.0/0, or, if the users of the system will always access it from a certain subnet, use that subnet instead of 0.0.0.0/0
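If you prefer the command line, the same rule can be added with the OpenStack client (a hedged sketch; it assumes the client is installed and an RC file for the project has been sourced):

$ openstack security group rule create --protocol tcp --dst-port 443 --remote-ip 0.0.0.0/0 pb_server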

Part 5: Administrative tasks and troubleshooting

Notes on container-based deployment

The default installation with the provided script produces a Docker container-based deployment. Since the system holds OpenStack credentials for the project it is serving and is also exposed to the internet, we want an extra layer of isolation between the http server and the provisioning processes that hold the credentials. The database (PostgreSQL) and the message queue backend (Redis) also run in their own containers, using official vanilla images.

The connections between containers are as follows:

digraph G {
        node [color=blue shape=box style=filled fillcolor=white];
        color=black;
        edge [arrowhead=none];
        subgraph cluster_frontend{
                bgcolor="gray";
                nginx [shape=diamond color=red];
                label = "Frontend";
                }
        subgraph cluster_API {
                bgcolor="gray";
                gunicorn->flask_backend;
                gunicorn [shape=diamond color=red];
                flask_backend [label="Flask Backend"];
                
                label = "API";
                }
        subgraph cluster_redis {
                bgcolor="gray";
                flask_backend -> redis; 
                redis [shape=diamond color=red];
                label = "Redis";
                }
        subgraph cluster_worker {
                bgcolor="gray";
                redis->worker_; 
                worker_ [label="Celery Worker"];
                label = "Worker";
                }
        subgraph cluster_sql {
                bgcolor="gray";
                // cylinder shape is in devel but not in mainstream
                PostgreSQL [shape=diamond color=red]; 
                label = "DB";
                }
        subgraph cluster_SSO {
                bgcolor="gray";
                shibboleth;
                label = "SSO\n(optional)";
        }

        flask_backend->PostgreSQL;
        nginx->gunicorn;
        edge [weight=0 color=darkgreen arrowhead=none];
        nginx->worker_ [label="proxy configuration directives"];
        worker_->flask_backend [label="provision logging"];
        nginx->redis [label="proxy config updates"];
        edge [weight=0 color=darkgreen arrowhead=none, style=dotted];
        nginx->shibboleth;
        shibboleth->gunicorn;
}
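If you want to render this diagram yourself, you can save the digraph above to a file and run Graphviz on it (a small sketch; the file name is arbitrary and Graphviz needs to be installed on your workstation):

$ dot -Tsvg pebbles_containers.dot -o pebbles_containers.svg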

The containers are: api, worker, frontend, db and redis (plus possibly sso, if you enable shibboleth authentication). You can list the status with:

$ docker ps
$ docker ps -a

Aliases are provided for easy ssh access:

$ ssh worker
$ ssh api
$ ssh frontend

The api, frontend and worker containers share the git repository that was checked out during installation through a read-only shared folder. For other directories shared from the host, see the Ansible play (https://github.com/CSCfi/pebbles/blob/master/ansible/roles/single_server_with_docker/tasks/main.yml) that sets up the container infrastructure.
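To see the actual mounts on a running system, you can also inspect a container directly on the host. This is a hedged example: the container name is taken from the list above, and python is used here only to pretty-print the JSON output.

$ docker inspect --format '{{ json .Mounts }}' api | python -m json.tool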

To see the server process logs, take a look at the /webapps/pebbles/logs directory in the container:

$ ssh api
$ ls /webapps/pebbles/logs

You can also launch a tmux-based status session, which will have a window open for the host and for each of the containers, with multiple panes showing status and logs in each window:

$ pebbles/scripts/tmux_status.bash

Tmux is a terminal multiplexer, similar to screen. Here is a quick survival guide:

Action                Command
navigate the views    CTRL-b n
change active pane    CTRL-b arrow keys
exit/detach           CTRL-b d
new window            CTRL-b c
attach                $ tmux attach (or att)
list sessions         $ tmux list-sessions
kill a session        $ tmux kill-session -t status