The Anatomy of a Cloud Foundry System Service Implementation

In the first post of this three-part series, I went through the steps we took to deploy the sample Echo service to a non-BOSH-based, single-node cloud foundry instance, and in the last part we’ll do the deployment into a BOSH-based cloud foundry. In this post I want to spend just a bit of time explaining the parts of the Echo service itself.

A cloud foundry service implementation consists of three main components:

  • The service itself: This is the actual running service; for Postgres, for example, this is the set of processes that implement the running database.
  • The service node: This is the code that provisions and deprovisions (among a few other things) cloud foundry service instances. In the case of Postgres, when a service instance is created, the code in the service node will make calls to the Postgres server itself, creating a new database, user, and password. Attributes such as the server address and the new database name, user, and password will ultimately be passed back to an application that binds to the service.
  • The service gateway: This is part of the mechanism for that “ultimately” – this code presents a RESTful service for creating and deleting service instances and gets those requests to the service node for fulfillment. When the node responds with values, the gateway returns them to the original requestor.

While my intent with this post is to mainly explain the Echo Service components, not to provide a general tutorial on creating cloud foundry services, there are a few general tidbits I do want to share right away.

  • For the service itself, you are likely not writing any code here; rather, you are probably taking some piece of software and deploying it (how depends on whether you are going into a BOSH-based deployment (part III) or not (part I)). For example, we’re working on creating a Cassandra cloud foundry service, so the Cassandra implementation is just downloaded from here.
  • For the gateway, you have very, very little code to write. The code you do write is in Ruby and leverages a bunch of cloud foundry code. The gateway just extends the cloud foundry-provided classes, setting two things:
    • The service_name. This is actually set in a provisioner class that is then included in the gateway class; the same name is also set in the node implementation (via a common base class included in both) and is used for communication between the node and the gateway (messaging over NATS).
    • The name of the config file.
  • Most of your work will be in the node implementation.  This is written in Ruby and leverages a bunch of cloud foundry code. Your task here is to communicate with the server itself to do whatever needs to be done on provision and deprovision requests.
    • When you go into a multi-node cloud foundry deployment, your service node code will typically execute on the same VM as the actual service – if there are multiple service VMs, there will be multiple node processes running (one on each VM). You’ll probably have one or two gateways, running on VMs separate from the node/service VMs. Think of the service gateway as a router of sorts to the set of nodes actually providing a service. One thing to keep in mind: the gateway is only called into action on provisioning and deprovisioning requests, not when an application is communicating with the service.
    • The gateway and node communicate via messaging (NATS) embedded in cloud foundry.

There is a lot more to say about this, but that’s a topic for another post.  In the meantime, I encourage you to study not only the Echo Service but also a more real implementation such as Postgres.

So, back to Echo.  The original instructions are a bit confusing, as they first deal with deployment of the node and gateway, getting them running, even before there is a service for the node to speak to.  I’d probably address the actual service first before worrying about connecting to it with the gateway/node – just sayin’. In fact, let me start there.

The Echo Cloud Foundry Service

The Echo Server

The service itself is found as an attachment in the original article – the echo_service.jar attachment – and it is super simple.  It is a Java program – you can find the source in the echo_src.zip attachment of the original article – and it’s a single Java class.  When you run this service it simply listens on a particular port, and when something shows up on that port, it takes that string and sends it back over the same socket.  The code as you originally find it will listen at the IP address 127.0.0.1 on the port passed in when you run the jar.  The following line of code is the one that gets that localhost IP address, in the method call on InetAddress:

serverSocket = new ServerSocket(port, 0, InetAddress.getLocalHost());

This didn’t work for me. My client app (which I talk about in the last section of this post) was sending out messages to the actual IP address of the box, not 127.0.0.1, so the echo server never saw them.  To solve this, a colleague of mine and I modified the echo server to take in, on the command line, the server IP address as well as the port.  Here you’ll find both the new jar file and the source.  If you use this new Echo Server implementation instead of the original, make sure you include the IP address and port when you start up the server, as follows:

java -jar EchoServer-0.1.0.jar -ipaddress 192.168.1.111 -port 5002
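
For reference, here is a rough sketch of what the modified server does. To be clear, this is not the exact code from the attachment, just the shape of it, with the flag names taken from the command line above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Rough sketch of the modified echo server -- not the exact code from the attachment.
// It binds to the IP address and port given on the command line and echoes back
// each line it receives over the same socket.
public class EchoServer {
    public static void main(String[] args) throws Exception {
        String ipAddress = "127.0.0.1"; // overridden by -ipaddress
        int port = 5002;                // overridden by -port
        for (int i = 0; i < args.length - 1; i++) {
            if ("-ipaddress".equals(args[i])) ipAddress = args[i + 1];
            if ("-port".equals(args[i]))      port = Integer.parseInt(args[i + 1]);
        }

        // Bind explicitly to the supplied address rather than InetAddress.getLocalHost()
        ServerSocket serverSocket = new ServerSocket(port, 0, InetAddress.getByName(ipAddress));
        while (true) {
            try (Socket client = serverSocket.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println(line); // send the string right back
                }
            }
        }
    }
}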

We’ll come back to this in part III when the startup of the echo server is automated.

The echo_node implementation

The echo_node implementation has the job of servicing provisioning and deprovisioning requests.  In a “real” service this would likely involve communicating with the server to change its state (e.g., create a new database) and/or retrieve some values (username and password for the database), and then this information would be returned to the requesting party.  For Echo it’s much simpler.  There is nothing to create within the server, and the echo server doesn’t expose any type of interface for asking about its state – it might, for example, have offered an API that returns the IP address and port for the socket it is listening on, but it doesn’t.  So the echo_node implementation does not communicate with the echo server at all and instead simply returns values to the requesting party.  Where does it get these values?  Mostly from the echo_node configuration file.  In part I, roughly following these instructions, the echo_node.yml includes the following two lines:

port: 5002
host: 192.168.1.111

The host property is no longer required – cloud foundry automatically includes the host and name properties in the values returned from the service node.  That said, you need to make sure the IP address you provide when running the echo server is, in fact, the IP address of the machine the echo_node is running on, because that is the IP address that will eventually be passed to the client application.  And if this all seems a bit confusing, don’t worry, it will get far better when we deploy with BOSH – yep, part III.

The echo_gateway implementation

If I were to talk only about the echo gateway here, there wouldn’t be much to say.  Most service gateways are pretty much the same.  They present an HTTP (RESTful?) service for provisioning, deprovisioning, binding and a few other operations; they dispatch messages to NATS and wait for the service node to do its work and respond.  The response is then made available to the requestor.  As I said before, all that code is pretty much included in cloud foundry – you just have to do what I described earlier in this post. But let me give a very brief overview of what all of that lovely, provided-for-you code is doing.

  1. It presents an HTTP (RESTful?) service.
  2. It sets up a listener on NATS for responses it expects to get from the service node.
  3. It processes the request and dispatches the appropriate message to NATS (in step 2 we set up the mechanism for receiving a response).
  4. And when it gets that response, it in turn responds to the HTTP request that started the whole thing off.

There is a good bit of code in there that does things like taking multiple values returned for a particular key and converting them into an array, and other goodies like that.

So, who is the recipient of the gateway response?  Well, ultimately it is the client application, and that is what I will talk about next.

The Echo Client

The echo client is also quite simple.  It’s a plain-old Java web application – not Spring.  Cloud foundry knows how to reconfigure certain types of applications by looking at the artifacts for that application type, like Spring config files, finding common patterns, like the use of the javax.sql.DataSource interface for database connectivity, and setting values, like database server IP addresses, in there. It’s the cloud foundry stager that does this. But for a plain old Java web app, the set of things that cloud foundry knows how to configure is limited to some basic things in the web.xml; beyond that, it has no idea how the app is configured. Does the app look for a property file? Or does it look things up in environment variables? Dunno.  So in this case the DEA, which has the values that came back from the service gateway, just writes them into environment variables.  In fact, the DEA always writes these values into environment variables regardless of whether the stager does anything extra with any of those values.

And that is how the echo client consumes them.  If you have a look at the source for the echo client, you’ll see the following in the index.jsp:

String services_json = System.getenv("VCAP_SERVICES");

The services_json is then parsed to find the credentials object, which in turn contains the host and port.
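
For illustration, here is a minimal sketch of that kind of parsing (not the actual code from echo_src.zip), assuming the org.json library and the usual VCAP_SERVICES layout: a map from service type to an array of bound instances, each carrying a credentials object.

import org.json.JSONArray;
import org.json.JSONObject;

// Sketch only: pull the host and port out of the first bound service whose
// credentials contain them. The real echo client does its parsing in index.jsp.
public class EchoCredentials {
    public static String[] lookupHostAndPort() {
        JSONObject services = new JSONObject(System.getenv("VCAP_SERVICES"));
        for (String serviceType : JSONObject.getNames(services)) {
            JSONArray instances = services.getJSONArray(serviceType);
            for (int i = 0; i < instances.length(); i++) {
                JSONObject credentials = instances.getJSONObject(i).getJSONObject("credentials");
                if (credentials.has("host") && credentials.has("port")) {
                    return new String[] { credentials.getString("host"),
                                          String.valueOf(credentials.get("port")) };
                }
            }
        }
        return null; // no bound service exposing host/port
    }
}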

Summarizing: the echo_node obtained the host and port from its config file, which until part III you are responsible for keeping in sync with the arguments you provide when you run the echo server.  Through the echo_gateway, the DEA got those values and wrote them to environment variables; the client picked them up from there and configured itself to send messages over the socket at that IP address and port.

One point to emphasize: once the service instance is bound to the application, the node and gateway are out of the picture – at that point the app communicates directly with the echo server.
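
If it helps to see it, that direct communication is nothing more than a plain socket round trip. A sketch (again, not the actual echo client code), using the host and port pulled from VCAP_SERVICES:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Sketch of the app-to-echo-server conversation once the service is bound:
// open a socket to the host/port from the service credentials, send a string,
// and read the echoed string back. No node or gateway is involved at this point.
public class EchoRoundTrip {
    public static String echo(String host, int port, String message) throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            return in.readLine(); // the server sends the same string back
        }
    }
}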

Got it? 🙂

It takes a bit of study but in the end it all makes sense.

Have I mentioned that I love this stuff?
