Friday, June 26, 2015

Nice bash prompt for git

While checking out jfrazelle's Dockerfiles (which are super fun), I went down the rabbit hole of her dotfiles.

I stole, and liked, the setup she uses for her bash prompt when working with Git.

Chuck the following in your .bash_profile or fork a dotfiles repo.

prompt_git() {
 local s='';
 local branchName='';

 # Check if the current directory is in a Git repository.
 if [ $(git rev-parse --is-inside-work-tree &>/dev/null; echo "${?}") == '0' ]; then

  # check if the current directory is in .git before running git checks
  if [ "$(git rev-parse --is-inside-git-dir 2> /dev/null)" == 'false' ]; then

   # Ensure the index is up to date.
   git update-index --really-refresh -q &>/dev/null;

   # Check for uncommitted changes in the index.
   if ! git diff --quiet --ignore-submodules --cached; then
    s+='+';
   fi;

   # Check for unstaged changes.
   if ! git diff-files --quiet --ignore-submodules --; then
    s+='!';
   fi;

   # Check for untracked files.
   if [ -n "$(git ls-files --others --exclude-standard)" ]; then
    s+='?';
   fi;

   # Check for stashed files.
   if git rev-parse --verify refs/stash &>/dev/null; then
    s+='$';
   fi;

  fi;

  # Get the short symbolic ref.
  # If HEAD isn’t a symbolic ref, get the short SHA for the latest commit
  # Otherwise, just give up.
  branchName="$(git symbolic-ref --quiet --short HEAD 2> /dev/null || \
   git rev-parse --short HEAD 2> /dev/null || \
   echo '(unknown)')";

  [ -n "${s}" ] && s=" [${s}]";

  echo -e "${1}${branchName}${blue}${s}";
 else
  return;
 fi;
}
# Define the colors first; the user and host styles below depend on them.
if tput setaf 1 &> /dev/null; then
 tput sgr0; # reset colors
 bold=$(tput bold);
 reset=$(tput sgr0);
 # Solarized colors, taken from http://git.io/solarized-colors.
 black=$(tput setaf 0);
 blue=$(tput setaf 33);
 cyan=$(tput setaf 37);
 green=$(tput setaf 64);
 orange=$(tput setaf 166);
 purple=$(tput setaf 125);
 red=$(tput setaf 124);
 violet=$(tput setaf 61);
 white=$(tput setaf 15);
 yellow=$(tput setaf 136);
else
 bold='';
 reset="\e[0m";
 black="\e[1;30m";
 lightblue="\e[94m";
 blue="\e[1;34m";
 cyan="\e[1;36m";
 green="\e[1;32m";
 orange="\e[1;33m";
 purple="\e[1;35m";
 red="\e[1;31m";
 violet="\e[1;35m";
 white="\e[1;37m";
 yellow="\e[1;33m";
 notdim="\e[22m";
 dim="\e[2m";
fi;

# Highlight the user name when logged in as root.
if [[ "${USER}" == "root" ]]; then
 userStyle="${red}";
else
 userStyle="${orange}";
fi;

# Highlight the hostname when connected via SSH.
if [[ "${SSH_TTY}" ]]; then
 hostStyle="${bold}${red}";
else
 hostStyle="${yellow}";
fi;


# Set the terminal title to the current working directory.
PS1="\[\033]0;\w\007\]";
PS1+="\[${bold}\]\n"; # newline
PS1+="\[${userStyle}\]\u"; # username
PS1+="\[${white}\] at ";
PS1+="\[${hostStyle}\]\h"; # host
PS1+="\[${white}\] in ";
PS1+="\[${green}\]\w"; # working directory
PS1+="\$(prompt_git \"${white} on ${violet}\")"; # Git repository details
PS1+="\n";
PS1+="\[${blue}\]\$ \[${reset}\]"; # `$` (and reset color)
export PS1;
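
Once this is sourced, a prompt inside a repository with staged, unstaged, and untracked changes renders something like the following (user, host, and path here are just an illustration):

jess at devbox in ~/src/dockerfiles on master [+!?]
$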

Pretty useful.

Saturday, June 20, 2015

Forking pipelines

Yesterday, in preparation for DockerCon, I posted a blog summary of some things we have been working on.

A piece buried within that post is the ability to fork a project and get its delivery pipeline along with the source. GitHub has made it trivial to share and fork source, but deploying another person's application has never been simple. Docker goes a long way to making this better by providing a consistent packaging mechanism and environment in which to run an application. Now, via a pipeline.yml file in the project's source repository, we are able to share the full delivery pipeline along with the source of an application.
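
To give a feel for the shape of such a file, here is a minimal sketch of one (an abridged illustration of the layout rather than the definitive schema; the concrete substitutions like ${CF_TARGET_URL} are covered below):

    ---
    stages:
    - name: Build Stage
      inputs:
      - type: git
        branch: master
      jobs:
      - name: Build
        type: builder
        script: |
          #!/bin/bash
          mvn -B package
    - name: Deploy Stage
      inputs:
      - type: job
        stage: Build Stage
        job: Build
      jobs:
      - name: Deploy
        type: deployer
        target:
          url: ${CF_TARGET_URL}
          organization: ${CF_ORGANIZATION}
          space: ${CF_SPACE}
          application: ${CF_APP}
        script: |
          #!/bin/bash
          cf push "${CF_APP}"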



I hope that this gets traction. Every software project has to stand up a delivery pipeline. This always seems like more work than it should be, and often feels like reinventing the wheel. The ability to copy or clone existing pipelines is a step towards letting teams have robust continuous delivery pipelines at little cost, and towards letting organizations standardize their delivery processes in a way that is executable, rather than simply a list of requirements on some wiki. I can imagine communities emerging to share job types and pipeline definitions, but to start with let's take a closer look at this first step.

The application

Let's assume you have a project with a pipeline (if you don't, see the post above and go create one). For this exercise I took a BookClub application that Steve Atkin developed. This application demonstrates how a Java application can use the Globalization Pipeline service. It is a good application for my purposes because deployment involves binding the application to a number of services in IBM Bluemix, as well as passing credentials to the application for services outside Bluemix. In addition, it is a Cloud Foundry application, so it also provides me with an excuse to Dockerize it and see how things work on the Container service.



Generating a pipeline.yml file

I cloned the project into a GitHub repository. This step was not strictly necessary; I just wanted to organize a few examples in one place. The BookClub project already had a pipeline. If your application does not have one, you should go ahead and build one out manually that includes at least a Build and a Deploy stage.
Next I generated a pipeline.yml file from the existing pipeline by adding /yaml to the end of my existing pipeline definition's URL.
I checked this into my source repository as .bluemix/pipeline.yml.
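
Checking it in is just the usual git dance, something like:

    mkdir -p .bluemix
    mv pipeline.yml .bluemix/pipeline.yml
    git add .bluemix/pipeline.yml
    git commit -m "share the delivery pipeline along with the source"
    git push origin master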

The pipeline.yml file contained information specific to my organization and space. I edited the file and replaced

    target:
      url: https://api.ng.bluemix.net
      organization: rjminsha@us.ibm.com
      space: dev
with
    target:
      url: ${CF_TARGET_URL}
      organization: ${CF_ORGANIZATION}
      space: ${CF_SPACE}
Now when someone clones my pipeline, their information will be automatically substituted. I also replaced my application name:

    application: ${CF_APP}
Now when the project is copied using the 'Deploy to Bluemix' button, a unique project name will be generated and passed in as ${CF_APP}. In addition, a unique route (URL) will be created for the new instance of the application and pipeline. These can be changed later if you desire. If you are working with a pipeline that uses containers, you will want to set IMAGE_NAME, CONTAINER_NAME, and ROUTE_HOSTNAME to ${CF_APP} as well, as in the sketch below.
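
For example, at the top of the container deploy script (a sketch; adjust to however your stage consumes these properties):

    #!/bin/bash
    # reuse the generated application name for the image, container, and route
    export IMAGE_NAME=${CF_APP}
    export CONTAINER_NAME=${CF_APP}
    export ROUTE_HOSTNAME=${CF_APP}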

Dealing with services

Next I had to think about services. The BookClub application leverages the Globalization, Watson Machine Translation, and IBM Insights for Twitter services in Bluemix. To deploy the application into a space, you need those services added to the space and bound to the application. Binding a service to an application is how the application gets credentials for the service and learns the correct endpoint.

Rather than ask the user to do this manually as a 'one time setup', I wanted to automate the creation of the services. If the services already exist, the existing instances should be bound to my application; if not, new service instances need to be added to the space. I checked a deployment utility into my repository to automate this.
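
For a single service, the utility boils down to roughly the following cf CLI calls (the service, plan, and instance names here are illustrative):

    # create the service instance only if it is not already in the space
    cf service "Globalization Pipeline" &> /dev/null || \
      cf create-service Globalization free "Globalization Pipeline"
    # bind it to the application and restage so the credentials are picked up
    cf bind-service "${CF_APP}" "Globalization Pipeline"
    cf restage "${CF_APP}"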
... 
try:
    Logger = setupLogging()
    # allow the application name to be passed in; fall back to the default
    parser = argparse.ArgumentParser()
    parser.add_argument("--app")
    args = parser.parse_args()
    if args.app:
        appName = args.app
    else:
        appName = DEFAULT_BRIDGEAPP_NAME

    # create (or reuse) each required service instance and bind it to the app
    createBoundAppForService(GLOBALIZATION_SERVICE, GLOBALIZATION_SERVICE_PLAN, GLOBALIZATION_SERVICE_NAME, appName)
    createBoundAppForService(MACHINE_TRANSLATION_SERVICE, MACHINE_TRANSLATION_PLAN, MACHINE_TRANSLATION_NAME, appName)
    createBoundAppForService(TWITTER_SERVICE, TWITTER_PLAN, TWITTER_NAME, appName)

    sys.exit(0)

except Exception as e:
    # log the failure and exit non-zero so the pipeline stage goes red
    Logger.warning("Exception received", exc_info=e)
    sys.exit(1)
...
To make this available to the deployment jobs, I also added it to my build stage in the pipeline.yml:
      #!/bin/bash
      # build the application
      mvn -B package
      # copy the deploy utility into the archive passed to later stages
      cp setup_services.py ${ARCHIVE_DIR}
The last step was to deal with the API keys needed to access non-Bluemix services. The application uses APIs from http://developer.nytimes.com, http://idreambooks.com/api and http://www.alchemyapi.com. I didn't want to include private information like this in the build script or pipeline.yml, so I moved these to secure properties on the deploy stage and once again updated the pipeline.yml.
The secure properties can be accessed as environment variables in the deploy script, but will be hidden in the job output.
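
In the deploy script that looks something like this (the property names are the ones I defined on the stage; yours will differ):

    #!/bin/bash
    # push the application without starting it so the environment can be set first
    cf push "${CF_APP}" --no-start
    # NYTIMES_API_KEY is a secure property on the deploy stage; it is available
    # here as an environment variable but masked in the job output
    cf set-env "${CF_APP}" NYTIMES_API_KEY "${NYTIMES_API_KEY}"
    # create and bind the required Bluemix services
    python setup_services.py --app "${CF_APP}"
    cf start "${CF_APP}"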


Adding Deploy to Bluemix

Finally, I added a 'Deploy to Bluemix' button to my project's README.md. For more information on 'Deploy to Bluemix', refer to Philippe Mulet's excellent blog. It boils down to adding the following markdown to your README.md:
[![Deploy to Bluemix](https://bluemix.net/deploy/button.png)](https://bluemix.net/deploy?repository=https://github.com/Puquios/bookclub-foundry.git)
Now when someone clicks this button, it will automatically clone the project, create a Bluemix DevOps Services project, create a pipeline, generate a unique application name and route, and kick off an initial deployment.


Give it a try

Head on over to the project and just deploy it, or check out some of my other examples in the Puquios project. Then share your own.

Friday, June 19, 2015

Recap of containers at Interconnect Feb 2015

I've been posting content in a few different locations, so I thought I'd keep a few references on this blog. Back in February, IBM held the first Interconnect conference. Interconnect was huge ... three conferences rolled up into one. Hybrid cloud, and the role containers play in portable workloads, was a pretty consistent theme throughout the conference. This of course made me happy.

I was lucky enough to have the opportunity to present some sessions on containers and continuous delivery. If you are interested, you can get a copy of my slides on SlideShare.

Of the keynotes, I thought the talk Gennaro Cuomo and Heather Cox gave about Hybrid Cloud and Innovation was well done: https://www.youtube.com/watch?feature=player_detailpage&v=VjJ46gJYHoU#t=1893

Dan Berg and Jason McGee did a nice give and take on IBM DevOps Services for Containers:  https://www.youtube.com/watch?feature=player_detailpage&v=g7JZgAI3IDI#t=5438  

The keynote video does not do a great job of showing the scenario, so here is an introduction to IBM DevOps Services for Containers at the time of Interconnect.



All in all it was a great conference. We got really good feedback on the direction we are heading with the Delivery Pipeline and the tight integration with services on Bluemix. With DockerCon coming up, I'll take the opportunity to post more in the next few days on my thoughts and progress working with containers and building out delivery pipelines.