In this last post of a three-part series on learning how to add services to a Cloud Foundry cloud, we’ll deploy the echo service into a BOSH-based deployment. In part II you’ll find a more detailed description of the parts of a system service implementation, as well as a description of (and link to) an updated version of the echo server itself (updated from here). If I’m doing my job right, this post should give you an “aha” moment or two – as I mentioned, I went through the exercise of learning about Cloud Foundry services in exactly the order mirrored by this series of blog posts, and a lot of things came together for me in this last step. So, let’s get started.
I’m going to roughly follow the instructions posted here – a BOSH release for the Echo Service. As I went through this exercise I was working off of an older version of this repository, with an older version of the documentation, where after cloning the repository you copy things from this directory into your cloud foundry release. The latest instructions point out that BOSH now supports having multiple releases for a single deployment, a way to modularize a deployment, so you no longer have to copy things into a single directory structure for the cloud foundry deployment. There is, however, something to be learned by copying things, so I’ve decided to keep this post in the older style to allow me to sprinkle the process with some explanation – I’ll refer to the steps as described in the older version of the docs.
Step 1: We already had a BOSH-based deployment of Cloud Foundry running in our lab. We started with the cf-release posted here and modified it to consume a few fewer resources (you would think that at EMC we would have all the vBlocks we need, but then you would be wrong); before adding the echo service we were running 34 VMs.
Step 2: Clone the repository (https://github.com/cloudfoundry/vcap-services-sample-release).
Step 3: Copy the job and package directories into your cloud foundry release.
cp -r vcap-services-sample-release/jobs/* cf-release/jobs/
cp -r vcap-services-sample-release/packages/* cf-release/packages/
If you haven’t already dug into the primary portions of a bosh release, here’s a brief explanation:
- Packages describe all of the bits that will make their way onto the VMs that will run the service. Every service I have looked at or built myself has had at least a spec file and a packaging file.
- The spec describes what is required for that service component – dependencies on other cloud foundry packages (like ruby or sqlite) or files that are a part of the cloud foundry release. This tells bosh during deployment to copy these artifacts onto the VM that will run this component.
- The packaging file is a script that runs after all of those bits have been delivered to the newly provisioned VM. It usually will involve things like untarring a file and moving the resultant bits into the appropriate location on the VM.
- Some packages will also have a prepackaging script that is run during the compiling of a package, before the VM is even provisioned.
- Jobs represent the things that will be run on a VM and the files are generally start scripts and configuration files. What is really interesting here is that those start scripts and config files are found in a subdirectory of the jobs directory called “templates.” The fact that these are templates allows you to instantiate them with values at run time, allowing you to do things like supply IP addresses of running machines at the point where that IP address is actually known.
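To make the packaging step above concrete, here is a minimal sketch of how a packaging script works – hypothetical, and the real echoserver packaging file in the sample release may differ. During bosh create release, BOSH stages the files named in the package spec under BOSH_COMPILE_TARGET and runs the packaging script; whatever lands in BOSH_INSTALL_TARGET is what ends up on the deployed VM. The temp-dir setup here just simulates BOSH so the flow is visible end to end:

```shell
# Simulate the staging BOSH does before running a packaging script:
BOSH_COMPILE_TARGET=$(mktemp -d)
BOSH_INSTALL_TARGET=$(mktemp -d)
mkdir -p "$BOSH_COMPILE_TARGET/echoserver"
touch "$BOSH_COMPILE_TARGET/echoserver/EchoServer-0.1.0.jar"   # stand-in for the real blob

# The packaging script itself is typically little more than this one copy:
cp "$BOSH_COMPILE_TARGET/echoserver/EchoServer-0.1.0.jar" "$BOSH_INSTALL_TARGET/"
ls "$BOSH_INSTALL_TARGET"
```

For a package like postgresql the copy would instead be an untar plus a configure/make, but the contract is the same: read from the compile target, write to the install target.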
There are two other major pieces of a BOSH release: 1) the blobs (which I’ll get to in a moment) and 2) the source tree containing code bits that make up the pieces of a package (mentioned in the package “spec” above). I won’t say much about the latter in this post except that for the echo service, and all the base cloud foundry services, those bits get into your cloud foundry release via some git magic – it’s all in the ./update command that you do after cloning the cf-release repository. This draws the pieces for those services, the node and gateway implementations, from the vcap-services repository.
Step 4: In this step you are asked to put metadata for the echo server blob into the …/cf-release/config/blobs.yml file. This step isn’t needed at the moment, and in fact, the latest version of the docs for this sample release does not include it.
Step 5: Add echo to the list of built-in services by modifying the cloud_controller.yml.erb file, adding ‘echo’ to the line that starts with “services =”.
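For illustration, the edit amounts to appending echo to that list. A hypothetical sketch – the exact set of services on the line depends on your cf-release version, and the point is simply that ‘echo’ joins the list the cloud controller treats as built in:

```ruby
# Hypothetical sketch of the "services =" line in cloud_controller.yml.erb;
# your cf-release version will have its own set of services here.
services = %w[mysql redis mongodb postgresql echo]
```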
At this point the instructions tell you that you can do a bosh create release and a bosh upload release, but there is one critical step missing – what about the actual EchoServer-0.1.0.jar? How do we get it running on one of the BOSH-managed VMs?
I mentioned above that in addition to the packages and jobs portions of a cloud foundry release, there are also blobs. For cloud foundry services these are generally the tar/zip files that contain the actual servers that will provide the service capabilities; the postgresql-9.0-x86_64.tar.gz file or the redis-2.2.15.tar.gz, for example. For our sample service this is the EchoServer-0.1.0.jar file. There are a number of ways that you can structure your cf-release leveraging git to make this perhaps a bit more elegant, but for now we’ll just take the brute-force approach:
- Create the echoserver directory in the …/cf-release/blobs directory.
- Drop the EchoServer-0.1.0.jar file from part II in this series into that new echoserver directory.
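Concretely, assuming the jar from Part II sits in your working directory next to the cf-release checkout (adjust the paths to your own layout), the two bullets above amount to:

```shell
# Assumes EchoServer-0.1.0.jar (built in Part II) is in the current directory
# and cf-release is checked out alongside it -- adjust paths to your layout.
[ -f EchoServer-0.1.0.jar ] || touch EchoServer-0.1.0.jar   # stand-in if you are just dry-running this
mkdir -p cf-release/blobs/echoserver
cp EchoServer-0.1.0.jar cf-release/blobs/echoserver/
```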
(Looking into several of the echo service files that were copied over into the cf-release you can find reference to that jar file in places like …/cf-release/packages/echoserver/spec, …/cf-release/packages/echoserver/packaging and …/cf-release/jobs/echo_node/templates/echoserver_ctl.)
During the bosh create release this jar will be included in the tarball that is subsequently uploaded to and deployed into the cloud.
Okay, so now for the good stuff. In part II of my series I promised you that some of the ugliness around coordinating the command line arguments for running the Echo Server with values in the echo_node.yml file would get better with BOSH. You see, BOSH is now responsible for running both the Echo Server (starting it with a java command) and the echo_node, so there must be a way that we can coordinate these two things. There is.
The single place that we will put values that will then be used by the Echo Server and the echo_node is in the deployment manifest. Under the properties: section you need to include the following:
echo_gateway:
  token: changeme
  ip_route: ***.***.***.***
echoserver:
  port: 5555
Then you have to see to it that the Echo Server and the echo_node pick up the port value appropriately.
In the …/cf-release/jobs/echo_node/templates/echoserver_ctl file you will find the java command that runs the echo server:
exec java \
    -jar EchoServer-0.1.0.jar \
    -port <%= properties.echoserver && properties.echoserver.port || 8080 %> \
    >>$LOG_DIR/echoserver.stdout.log \
    2>>$LOG_DIR/echoserver.stderr.log
Enclosed in the <%= %> is a template expression (using ruby’s erb feature) that pulls values from the deployment manifest. But our Echo Server also takes in an IP address so we need to update this execution to the following:
exec java \
    -jar EchoServer-0.1.0.jar \
    -ipaddress <%= spec.networks.default.ip %> \
    -port <%= properties.echoserver && properties.echoserver.port || 8080 %> \
    >>$LOG_DIR/echoserver.stdout.log \
    2>>$LOG_DIR/echoserver.stderr.log
In the …/cf-release/jobs/echo_node/templates/echo_node.yml.erb file you will find the port for the echo server specified; recall from Part II that the echo_node is oversimplified so as to just return the port number specified in its _node.yml file.
port: <%= properties.echoserver && properties.echoserver.port || 8080 %>
Of course, now you can see that the port in this config file is drawn from the same source as the port supplied to the Echo Server when it is started. Something you had to coordinate manually is now handled by BOSH. Coolness.
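If you want to see the templating mechanism in isolation, here is a standalone sketch of what BOSH does when it renders a job template. BOSH’s real binding objects are richer than this; OpenStruct merely stands in for the properties: section of the deployment manifest:

```ruby
require "erb"
require "ostruct"

# Stand-in for the properties: section of the deployment manifest.
properties = OpenStruct.new(
  echoserver: OpenStruct.new(port: 5555)
)

# The same expression used in echoserver_ctl and echo_node.yml.erb:
template = ERB.new("port: <%= properties.echoserver && properties.echoserver.port || 8080 %>")
puts template.result(binding)   # -> port: 5555

# With no echoserver property set, the template falls back to the default:
properties = OpenStruct.new
puts template.result(binding)   # -> port: 8080
```

Both the ctl script and the _node.yml.erb file evaluate the same expression against the same manifest, which is exactly why the two values can no longer drift apart.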
Step 6: NOW you can do the bosh create and upload. Because we have modified the release without committing the changes, bosh create release needs the --force flag:
bosh create release --force
bosh upload release
Step 7: And while we have already updated the deployment manifest with the properties for the node and gateway, you also have to update it to include the two jobs that will be part of our cloud foundry deployment. Note that each VM gets a single job, but that the echo_node job launches two processes, the echo_node implementation and the actual Echo Server. The following are roughly those parts taken from our deployment manifest; your mileage will vary depending on how you configured your cf-release deployment. Under the jobs: section:
- name: echo_node
  template: echo_node
  instances: 1
  resource_pool: infrastructure1
  persistent_disk: 128
  networks:
  - name: default
    static_ips:
    - ***.***.***.***
- name: echo_gateway
  template: echo_gateway
  instances: 1
  resource_pool: infrastructure1
  networks:
  - name: default
    static_ips:
    - ***.***.***.***
Oh, and we increased the size of our “infrastructure1” resource pool by 2. Of course, you’ll have to update the ***.***.***.*** IP addresses appropriately.
Step 8: You should now be able to push the same echo app as posted in Part II of the series.