Friday, June 26, 2015

Nice bash prompt for git

While checking out jfrazelle's Dockerfiles (which are super fun), I went down the rabbit hole with her dotfiles.

I stole, and liked, her setup for a bash prompt when working with Git.

Chuck the following in your .bash_profile or fork a dotfiles repo.

prompt_git() {
 local s='';
 local branchName='';

 # Check if the current directory is in a Git repository.
 if [ $(git rev-parse --is-inside-work-tree &>/dev/null; echo "${?}") == '0' ]; then

  # check if the current directory is in .git before running git checks
  if [ "$(git rev-parse --is-inside-git-dir 2> /dev/null)" == 'false' ]; then

   # Ensure the index is up to date.
   git update-index --really-refresh -q &>/dev/null;

   # Check for uncommitted changes in the index.
   if ! git diff --quiet --ignore-submodules --cached; then
    s+='+';
   fi;

   # Check for unstaged changes.
   if ! git diff-files --quiet --ignore-submodules --; then
    s+='!';
   fi;

   # Check for untracked files.
   if [ -n "$(git ls-files --others --exclude-standard)" ]; then
    s+='?';
   fi;

   # Check for stashed files.
   if git rev-parse --verify refs/stash &>/dev/null; then
    s+='$';
   fi;

  fi;

  # Get the short symbolic ref.
  # If HEAD isn’t a symbolic ref, get the short SHA for the latest commit
  # Otherwise, just give up.
  branchName="$(git symbolic-ref --quiet --short HEAD 2> /dev/null || \
   git rev-parse --short HEAD 2> /dev/null || \
   echo '(unknown)')";

  [ -n "${s}" ] && s=" [${s}]";

  echo -e "${1}${branchName}${blue}${s}";
 else
  return;
 fi;
}

# Define the colors first; the user and host styles below reference them.
if tput setaf 1 &> /dev/null; then
 tput sgr0; # reset colors
 bold=$(tput bold);
 reset=$(tput sgr0);
 # Solarized colors, taken from http://git.io/solarized-colors.
 black=$(tput setaf 0);
 blue=$(tput setaf 33);
 cyan=$(tput setaf 37);
 green=$(tput setaf 64);
 orange=$(tput setaf 166);
 purple=$(tput setaf 125);
 red=$(tput setaf 124);
 violet=$(tput setaf 61);
 white=$(tput setaf 15);
 yellow=$(tput setaf 136);
else
 bold='';
 reset="\e[0m";
 black="\e[1;30m";
 lightblue="\e[94m";
 blue="\e[1;34m";
 cyan="\e[1;36m";
 green="\e[1;32m";
 orange="\e[1;33m";
 purple="\e[1;35m";
 red="\e[1;31m";
 violet="\e[1;35m";
 white="\e[1;37m";
 yellow="\e[1;33m";
 notdim="\e[22m";
 dim="\e[2m";
fi;

# Highlight the user name when logged in as root.
if [[ "${USER}" == "root" ]]; then
 userStyle="${red}";
else
 userStyle="${orange}";
fi;

# Highlight the hostname when connected via SSH.
if [[ "${SSH_TTY}" ]]; then
 hostStyle="${bold}${red}";
else
 hostStyle="${yellow}";
fi;

# Set the terminal title to the current working directory.
PS1="\[\033]0;\w\007\]";
PS1+="\[${bold}\]\n"; # newline
PS1+="\[${userStyle}\]\u"; # username
PS1+="\[${white}\] at ";
PS1+="\[${hostStyle}\]\h"; # host
PS1+="\[${white}\] in ";
PS1+="\[${green}\]\w"; # working directory
PS1+="\$(prompt_git \"${white} on ${violet}\")"; # Git repository details
PS1+="\n";
PS1+="\[${blue}\]\$ \[${reset}\]"; # `$` (and reset color)
export PS1;
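
The status flags are: + for staged changes, ! for unstaged changes, ? for untracked files, and $ for stashed files. Once sourced, the prompt renders along these lines (illustrative output; the user, host and path are made up):

rjminsha at mymac in ~/src/dotfiles on master [+!]
$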

Pretty useful.

Saturday, June 20, 2015

Forking pipelines

Yesterday, in preparation for Dockercon, I posted a blog summarizing some of the things we have been working on.

A piece buried within that post is the ability to fork a project and get its delivery pipeline along with the source. GitHub has made it trivial to share and fork source, but deploying another person's application has never been simple. Docker goes a long way toward making this better by providing a consistent packaging mechanism and environment in which to run an application. Now, via a pipeline.yml file in the project's source repository, we are able to share the full delivery pipeline along with the source of an application.



I hope that this gets traction. Every software project has to stand up a delivery pipeline. This always seems like more work than it should be, and often feels like reinventing the wheel. The ability to copy or clone existing pipelines is a step toward allowing teams to have robust continuous delivery pipelines at little cost, and toward letting organizations standardize on their delivery processes in a way that is executable rather than simply a list of requirements on some wiki. I can imagine communities emerging to share job types and pipeline definitions, but to start with let's take a closer look at this first step.

The application

Let's assume you have a project with a pipeline (if you don't, see the post above and go create one). For this exercise I took a BookClub application that Steve Atkin developed. This application demonstrates how a Java application can use the Globalization Pipeline service. It is a good application for my purposes because deployment involves binding the application to a number of services in IBM Bluemix, as well as passing credentials to the application for services not in Bluemix. In addition, it is a Cloud Foundry application, so it also gives me an excuse to Dockerize it and see how things work on the Container service.



Generating a pipeline.yml file

I cloned the project into a GitHub repository. This step was not necessary; I just wanted to organize a few examples in one place. The BookClub project already had a pipeline. If your application does not, go ahead and build one out manually with at least a Build and a Deploy stage.
Next, I generated a pipeline.yml file from the existing pipeline by adding /yaml to the end of the URL of my existing pipeline definition.
I checked this into my source repository as .bluemix/pipeline.yml.

The pipeline.yml file contained information specific to my organization and space. I edited the file and replaced

    target:
      url: https://api.ng.bluemix.net
      organization: rjminsha@us.ibm.com
      space: dev
with
    target:
      url: ${CF_TARGET_URL}
      organization: ${CF_ORGANIZATION}
      space: ${CF_SPACE}
Now when someone clones my pipeline, their information will be automatically substituted. I also replaced my application name with:

   application: ${CF_APP}
Now when the project is copied using the 'Deploy to Bluemix' button, a unique project name will be generated and passed in as ${CF_APP}. In addition, a unique route (URL) will be created for the new instance of the application and pipeline. These can be changed later if you desire. If you are working with a pipeline that uses containers, you will want to set IMAGE_NAME, CONTAINER_NAME, and ROUTE_HOSTNAME to ${CF_APP}.

Dealing with services

Next I had to think about services. The BookClub application leverages the Globalization, Watson Machine Translation, and IBM Insights for Twitter services in Bluemix. To deploy the application into a space, you need those services added to the space and bound to the application. Binding a service to an application is how the application gets credentials for the service and learns the correct endpoint.

Rather than asking the user to do this manually as a 'one time setup', I wanted to automate the creation of the services: if the services already exist, bind the existing instances to my application; if not, add service instances to the space. I checked a deployment utility into my repository to automate this.
... 
try:
    Logger = setupLogging()
    parser = argparse.ArgumentParser()
    parser.add_argument("--app")
    args=parser.parse_args()
    if args.app:
        appName=args.app
    else: 
        appName=DEFAULT_BRIDGEAPP_NAME

    createBoundAppForService(GLOBALIZATION_SERVICE, GLOBALIZATION_SERVICE_PLAN, GLOBALIZATION_SERVICE_NAME,appName)
    createBoundAppForService(MACHINE_TRANSLATION_SERVICE, MACHINE_TRANSLATION_PLAN, MACHINE_TRANSLATION_NAME,appName)
    createBoundAppForService(TWITTER_SERVICE, TWITTER_PLAN, TWITTER_NAME,appName)

    sys.exit(0)

except Exception, e:
    Logger.warning("Exception received", exc_info=e)
    sys.exit(1)
...
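Each createBoundAppForService call essentially automates the cf CLI sequence below (a sketch to show the idea; the service, plan and instance names are placeholders, and the real logic lives in the utility checked into the repository):

#!/bin/bash
# Create the service instance only if it does not already exist, then bind it.
if ! cf services | grep -q "globalization-instance"; then
    cf create-service "Globalization" "Experimental" "globalization-instance"
fi
cf bind-service "${CF_APP}" "globalization-instance"
cf restage "${CF_APP}"
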
To make this available to the deployment jobs, I also added it to my build stage in the pipeline.yml:
      #!/bin/bash
      # build the application 
      mvn -B package
      #copy deploy utility
      cp setup_services.py ${ARCHIVE_DIR}
The last step was to deal with the API keys needed to access non-Bluemix services. The application uses APIs from http://developer.nytimes.com, http://idreambooks.com/api and http://www.alchemyapi.com. I didn't want to include private information like this in the build script or pipeline.yml, so I moved these to secure properties on the deploy stage and once again updated the pipeline.yml.
The secure properties can be accessed as environment variables in the deploy script, but are hidden in any logged output.
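
In the deploy script those variables can then be handed to the application, for example with cf set-env (a sketch; the variable names here are hypothetical, not the exact ones I used):

# Push without starting, inject the keys, then start the application.
cf push "${CF_APP}" --no-start
cf set-env "${CF_APP}" NYTIMES_API_KEY "${NYTIMES_API_KEY}"
cf set-env "${CF_APP}" IDREAMBOOKS_API_KEY "${IDREAMBOOKS_API_KEY}"
cf set-env "${CF_APP}" ALCHEMY_API_KEY "${ALCHEMY_API_KEY}"
cf start "${CF_APP}"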


Adding Deploy to Bluemix

Finally, I added a 'Deploy to Bluemix' button to my project's README.md. For more information on 'Deploy to Bluemix', refer to Philippe Mulet's excellent blog. It boils down to adding the following markdown to your README.md:
[![Deploy to Bluemix](https://bluemix.net/deploy/button.png)](https://bluemix.net/deploy?repository=https://github.com/Puquios/bookclub-foundry.git)
After doing so, if someone clicks this button it will automatically clone the project, create a Bluemix DevOps Services project, create a pipeline, generate a unique application name and route, and kick off an initial deployment.


Give it a try

Head on over to the project and just deploy it, or check out some of the other examples in my Puquios project. Then share your own.

Friday, June 19, 2015

Recap of containers at Interconnect Feb 2015

I've been posting content in a few different locations, so I thought I'd keep a few references on this blog. Back in February IBM held the first Interconnect conference. Interconnect was huge ... three conferences rolled up into one. Hybrid cloud and the role containers play in portable workloads was a pretty consistent theme throughout the conference. This of course made me happy.

I was lucky enough to have the opportunity to present some sessions on containers and continuous delivery.   If you are interested you can get a copy of my slides on slideshare.  

Of the keynotes I thought that the talk Gennaro Cuomo and Heather Cox gave about Hybrid Cloud and Innovation was well done: https://www.youtube.com/watch?feature=player_detailpage&v=VjJ46gJYHoU#t=1893

Dan Berg and Jason McGee did a nice give and take on IBM DevOps Services for Containers:  https://www.youtube.com/watch?feature=player_detailpage&v=g7JZgAI3IDI#t=5438  

The keynote video does not do a great job of showing the scenario, so here is an introduction to IBM DevOps Services for Containers at the time of Interconnect:



All in all it was a great conference. We got really good feedback on the direction we are heading with the Delivery Pipeline and the tight integration with services on Bluemix. With Dockercon coming up, I'll take the opportunity to post more in the next few days on my thoughts and progress working with containers and building out delivery pipelines.

Wednesday, October 8, 2014

Delivery pipeline for Docker containers

In a previous post I showed the use of IBM UrbanCode Deploy (UCD) to create a delivery pipeline for services packaged as Docker containers. It was a positive experience: I liked Docker as a packaging mechanism; I liked the concept of linking a test container to an application container as both a functional regression and a health check; and UCD did a good job of making the process of publishing to Docker registries and hosts repeatable and less error-prone.

However, there were things that I did not like about the setup:
  • Triggering processes from one environment to another was a manual experience. I did not so much have a pipeline as a repeatable way to deploy to a set of environments.
  • It required users to be familiar with IBM UrbanCode Deploy (UCD). While I think UCD scores high on usability, it does take a few minutes to understand the concepts and relationships between Applications, Environments, Resources, Agents and Properties. I wanted something simpler that other team members would immediately understand.


Goals
When it comes down to it, all I wanted was a tool that would let me check in a code change and automatically trigger a pipeline. I wanted my pipeline to consist of the following stages, triggered by a check-in:
  1. Source repository. Check in code to Git.
  2. Private Docker Registry. Location where versioned containers can be found. Used as the source for deployments to staging, and as a way for team members to get the latest version of the application for testing and debugging.
  3. Automated regression. Deploy the application container, deploy a linked test container, and run tests. Discard any resources afterwards.
  4. [if automated regression passed] Staging. Deploy the application container to a known staging environment. This can then be used for additional manual testing, etc.

    Then there are additional deploy stages that would not be automatically triggered but could be requested:

  5. Production. An environment used for the hosting of the application.
  6. DockerHub. Public docker registry used to share the application with the world.


Demo


What was done
We set up a container service using OpenStack and some Docker drivers, then configured a private Docker registry behind nginx. With the help of the IBM DevOps Services team we made some modifications to support "Docker Builders" and "OpenStack Deployers", then scripted out each of the stages in our pipeline using shell.
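
To give a feel for it, the automated regression stage amounted to shell along these lines (a sketch only; the registry, image names and test-container convention are placeholders rather than our exact scripts):

#!/bin/bash
# Start the application container, run the linked test container, then clean up.
docker run -d --name app-under-test registry.example.com/myapp:${BUILD_NUMBER}
docker run --rm --link app-under-test:app registry.example.com/myapp-test:${BUILD_NUMBER}
rc=$?
docker rm -f app-under-test
exit ${rc}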

Obviously, a lot of activity is happening around Docker. I see simple multi-stage delivery pipelines, providing a framework in which we can continuously integrate, test, and deploy containers to Docker cloud environments, as a needed addition to the space.

Overall this was a lot of fun and I am encouraged by the simplicity of the JazzHub Pipeline. Check it out at http://hub.jazz.net/

Monday, September 22, 2014

Tip: Setting terminal tab name in Mac

Just a quick tip ...

My development environment tends to consist of a set of terminal windows and a text editor (currently Sublime Text). Rather than right-clicking on each terminal tab to give it a name, I wanted to set it from the command line.

That way I can open new tabs with Ctl-+ and then keep them organized. I placed the following script in /usr/local/bin:
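
The script embed has not survived in this archive, but it boils down to the standard xterm title escape, something like the following reconstruction (the settitle name is my own):

#!/bin/bash
# /usr/local/bin/settitle: set the terminal window/tab title to the arguments.
echo -ne "\033]0;${*}\007"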

This script can be run to set up the title. For example:
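
Assuming the settitle name from the sketch above:

settitle build-logs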

For some other useful tips on using Terminal, such as setting the theme from the command line, check out:

Monday, September 8, 2014

Short video on experiments with Rational and Docker

This is a short post, pretty much just sharing the video below. I have recently been experimenting with Docker and how it could be leveraged by Rational technology. The following video summarizes some of these experiments, most of which I have blogged about previously.

Rational has a strong background in agile development and is well positioned to provide DevOps tooling. Docker provides some really significant opportunities to bridge the gap between SaaS offerings and born-on-the-cloud style applications on the one hand, and traditional on-premise development on the other.

As Docker becomes more mainstream, the focus of tooling is going to need to shift. Today deployment automation is the hot item, and one that IBM UrbanCode Deploy does very well. Moving forward, though, this will move into the background, and tooling that manages the release and deploy processes for Dockerized applications will become more and more important. We will need tooling that makes it simple to take a change to an application, package that change, test it, tag the version according to quality, and then release it based on common release patterns (hotfix, canary, A/B, rolling upgrade).

These solutions will be needed both in the SaaS space and on premise, and they will need to support hybrid scenarios. Docker will be key to enabling 'borderless cloud' scenarios, both in terms of packaging applications so that they can be run and scaled on premise and in SaaS environments, and in terms of providing powerful but simple DevOps tooling that can be leveraged in a SaaS environment or deployed on premise. Leveraging Docker containers to provide on-demand isolated environments is a nice way to scale out continuous integration processes, or to dynamically create environments for pre-integration testing or diagnostics.

While the technologies to coordinate the deployment of an application as a set of distributed containers (Fig, OpenStack, Kubernetes, Mesosphere...) are still evolving, the space is promising enough that we should consider containers as a default packaging choice when distributing solutions moving forward.


 

Rational solutions as Docker containers

Rational products as containers 

Audience

The intended audience for this post is anyone who has experience with Docker and is interested in using Docker in combination with Rational solutions. The solutions described in this blog post are very much a simple experiment and should not be considered fully baked, production-ready solutions.

Motivation

I have drunk the Kool-Aid and will deliberately over-simplify the pretty complex challenges associated with delivering content to customers. Many of the costs associated with delivering on-premise solutions stem from there being too many deployment options. This shows up in build times where native components are needed, complexity in installation and prerequisite checking, the (huge) cost of cross-platform testing, and cumbersome documentation that walks the line between too specific and too general.

I'd like to live in a world where capabilities are packaged as Docker containers. There is a lot to be understood and evolved to make this a reality, and many emerging technologies will help describe topologies made up of containers, but the potential is certainly there today for this to radically simplify the production and consumption of packaged software.

For this post, I simply wanted to build out a few basic containers for the Rational products essential to our on-premise DevOps solutions.

Short demo



Scenario: Adding agents to IBM UrbanCode Deploy

IBM UrbanCode Deploy (UCD) is a great tool for developing a continuous delivery, or DevOps, pipeline. UCD takes an application-centric view of the world: for a given application you can create a set of environments and processes to deploy the application across them. An environment in UCD is made up of a set of resources, and each resource has a UCD agent running on it. Typically a resource is a running virtual machine with the agent installed. While there are integrations in UCD to provision resources 'from a cloud', I wanted to see whether we could leverage Docker to make it simpler to quickly create resources and environments in UCD.


To do this I simply created a Docker image with an IBM UrbanCode Deploy agent installed on it. The container takes in environment information which tells it which UCD server to connect to and, optionally, the name of the agent to start.

This way I can start an agent which will automatically connect to my UCD server and be named based on the container's hostname, like so:

docker run -e "UCD_Hostname=myhostname" ucd-agent 
Or to give the agent a desired name:
docker run -e "UCD_Agent_Name=myagentname" -e "UCD_Hostname=myhostname" ucd-agent

A simple script allows me to quickly start up any number of agents. This is really nice, as it allows me to quickly create agents, resources, and environments, which is really useful when creating and testing UCD automation. It also allows me to take machines in the lab and leverage Docker to provide a set of isolated environments for various applications or purposes.
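
The script isn't reproduced here, but starting a number of agents is essentially a loop over docker run, something like this sketch (it reuses the environment variables shown above; the naming is illustrative):

#!/bin/bash
# Start COUNT agents against UCD_HOST, naming each agent by its index.
UCD_HOST=${1:?usage: $0 <ucd-hostname> [count]}
COUNT=${2:-1}
for i in $(seq 1 "${COUNT}"); do
    docker run -d -e "UCD_Hostname=${UCD_HOST}" -e "UCD_Agent_Name=docker-agent-${i}" ucd-agent
done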

Note: if you are using this container as a target to deploy applications to, you will need to expose the appropriate ports on the container for your application.

Scenario: Standing up a DevOps Pipeline 

Dynamically creating UCD agents is great, but let's take this a step further and look at running a DevOps pipeline in Docker containers. For this I created a single image for IBM Collaborative Lifecycle Management (CLM), an image for the IBM UrbanCode Deploy server, and the previous image for the UCD agents. This is an interesting combination of products because typically we recommend Rational Team Concert (contained in CLM) for source control, planning and continuous integration, and then UCD for continuous deployment. Combined with some number of agents, this gives us enough infrastructure to build out a continuous delivery pipeline.


In this solution both the UCD agent and the CLM containers take in the location of the UCD server as part of their environment, which automatically configures them to connect to the UCD server. In addition to passing this information in with the -e flag, they can also be run as linked containers, which will automatically configure the connection information.

For example, to start these as linked containers exposing the default ports run:
echo "###################"
echo "starting ucd-server image"
echo "###################"
docker run -d -p 8080:8080 -p 8443:8443 --name=ucd-server ucd-server

echo "###################"
echo "Starting additional agent(s)"
echo "###################"
docker run -d --name=ucd-agent-1 -e "UCD_Agent_Name=docker-agent-remote-1" --link=ucd-server:ucdserver ucd-agent
docker run -d --name=ucd-agent-2 -e "UCD_Agent_Name=docker-agent-remote-2" --link=ucd-server:ucdserver ucd-agent

echo "###################"
echo "Starting CLM Server"
echo "###################"
docker run -d --link=ucd-server:ucdserver -p 9443:9443 -p 9080:9080 -e "UCD_Agent_NAME=docker-agent-clm-server" --name clm-server clm-simple

While the solution is very simple at this point and does not represent best-practice topologies for CLM and UCD, it shows quite a bit of promise. It has been very useful for quickly standing up isolated continuous delivery solutions, which helps when developing new applications and processes and avoids clutter on single large UCD server deployments.

Getting access to, and building the containers  

I have shared the source code for these images in a public IBM DevOps Services project called leanjazz-docker.

There are (unfortunately) a number of binary files that need to be downloaded, as described in the README.md document.

Building the images: 
cd bin 
./build-all.sh  

Running the images:
cd bin 
./start-all.sh 
./show-info.sh  

At this point you can access the web page for IBM UrbanCode Deploy and view the connected agents, and also access the Jazz Team Server setup page to complete the setup of IBM Collaborative Lifecycle Management and Rational Team Concert. If you used the scripts listed above, the default ports will be mapped to the Docker host ports. On my local machine I am using boot2docker, and my Docker host has the following IP address:
$ boot2docker ip
The VM's Host only interface IP address is: 192.168.59.103
As such I can access the CLM console at https://192.168.59.103:9443/jts/ and the UCD Server console at http://192.168.59.103:8080
   
Adding 3 additional agents:
cd bin 
./add-agent.sh -a mynewagents -n 3 
docker ps | grep ucd-agent-docker 

Remaining work and issues 

Things are certainly not perfect. I'll be tracking improvements to the containers on leanjazz-docker, but here is a summary of some of the remaining issues:
  • Images are quite large, due to the size of the products but also due to a lack of work optimizing the Dockerfiles and build process
  • Containers for UCD and CLM run on Tomcat and should be updated to use the WebSphere Liberty profile
  • Containers for CLM have all applications (CLM, QM, RM, JTS) installed on a single Tomcat instance rather than being distributed across multiple instances
  • Databases are running within the container rather than being isolated on a volume or in an external DBaaS solution
  • Databases are running Derby rather than IBM DB2, due to a requirement to start the DB2 container in privileged mode
  • Not currently published to DockerHub
  • No form of license acceptance built into the images