Playing with Spark (the web framework)

When it comes to web development, Java isn't the first language that comes to mind for building small applications quickly. But recently I've discovered Spark, a micro framework for building web applications that takes advantage of Java 8 lambda expressions.

With a syntax inspired by Sinatra, the code looks very clean, as you can see from the example on the project's home page.

import static spark.Spark.*;

public class HelloWorld {  
    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}

I've set out to build a little web service to try out this framework. When trying out something web-related, I usually build a URL-shortening web service. The final service will have three endpoints:

  • POST /urls creates a new URL;
  • GET /urls/:id returns the URL object associated with a given id;
  • GET /u/:id redirects the user to the URL matched with that id.

For simplicity, all the data will be kept in memory only. I'll be using Gson to handle JSON, and Gradle for building the app and managing dependencies.

We start by setting up Gradle with some plugins and the dependencies we need. The build.gradle file looks like this:

plugins {  
    id 'java'
    id 'application'
    id 'com.github.johnrengelman.shadow' version '1.2.2'
}

sourceCompatibility = 1.8  
version = '1.0'

mainClassName = 'UrlShortner'

repositories {  
    mavenCentral()
}

dependencies {  
    compile 'com.sparkjava:spark-core:2.2'
    compile 'com.google.code.gson:gson:2.6.2'
}

If you've used Gradle before, nothing here should come as a surprise. If you haven't, I'd encourage you to have a look through their website. It's pretty neat, especially if you're coming from a tool like Maven, where all the configuration has to go in XML files.

As we're building an executable jar, we use the application plugin, which requires us to set mainClassName. This will be the entry point of the app; for this project, it's a class called UrlShortner.

The shadow plugin creates a "Fat Jar" with all the dependencies so it's easier to run and deploy.

With our build settings done, we can move on to the actual code. First, let's write the Url class.

import java.util.UUID;

public class Url {  
    public String id;
    public String originalUrl;

    public Url() {
    }

    public Url(String originalUrl) {
        this.originalUrl = originalUrl;
        // Keep the first dash-separated block of a random UUID (8 hex characters) as a short id.
        this.id = UUID.randomUUID().toString().split("-")[0];
    }
}

This class mainly stores the id and originalUrl attributes. The id is automatically generated from a random UUID when the object is instantiated.
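
To make the id generation concrete, here's a quick illustration (your output will differ, since the UUID is random):

// A random UUID string looks like "550e8400-e29b-41d4-a716-446655440000".
// Splitting on "-" and keeping the first block leaves 8 hex characters.
String id = UUID.randomUUID().toString().split("-")[0];
System.out.println(id); // e.g. "550e8400"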

Finally, we have all we need to go on and look at the most interesting class in this project, the UrlShortner class.

There isn't a lot of code in this class, but it's still best to go through it step by step.

Let's first get an overview of what's going on here.

import static spark.Spark.*;

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

import java.lang.reflect.Type;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import spark.Request;

public class UrlShortner {
    public static void main(String[] args) {
        Map<String, Url> urlsById  = new ConcurrentHashMap<>();
        Map<String, Url> urlsByUrl = new ConcurrentHashMap<>();

        post("/urls", (request, response) -> {
            // ...
        });

        get("/urls/:id", (request, response) -> {
            // ...
        });

        get("/u/:id", (request, response) -> {
            // ...
        });
    }

    private static Map<String, String> parseBody(Request request) {
        Gson gson = new Gson();
        Type type = new TypeToken<Map<String, String>>(){}.getType();

        return gson.fromJson(request.body(), type);
    }

    private static String toJson(Object o) {
        Gson gson = new Gson();
        return gson.toJson(o);
    }
}

At a glance, we can see that all the routing goes on inside the main method. Spark offers a utility method for each HTTP verb. Each of them receives a string that will be the path for that endpoint, and a lambda function that contains the business logic to run. Each lambda function receives the request and response objects and returns a string that will be the response body.

At the top of the main function I've initialised two maps. These will store the URLs, indexed both by id and by original URL. Note that I've chosen a ConcurrentHashMap instead of a traditional HashMap. Spark will run our code in a multi-threaded environment, so everything we do should be thread safe.

Scrolling to the bottom of the class, there are two utility methods. parseBody takes a request and returns a Map with the data passed in the request body. It assumes that the body contains a valid JSON object. In a real-world application, you'd probably want to be more careful with that assumption and handle invalid input gracefully.
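
As a sketch of what that could look like (my own hardening, not part of the original code), you could catch Gson's JsonSyntaxException and fall back to an empty map, which the validation in the POST handler below would then reject with a 400:

// Requires: import java.util.Collections; and import com.google.gson.JsonSyntaxException;
private static Map<String, String> parseBody(Request request) {
    Gson gson = new Gson();
    Type type = new TypeToken<Map<String, String>>(){}.getType();

    try {
        Map<String, String> body = gson.fromJson(request.body(), type);
        // fromJson returns null for an empty body.
        return body != null ? body : Collections.emptyMap();
    } catch (JsonSyntaxException e) {
        // Malformed JSON is treated as an empty body.
        return Collections.emptyMap();
    }
}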

The other utility method is toJson, which simply takes an object and serialises it to JSON.
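
For example (with a made-up id, since they're random), serialising a Url gives the JSON shape our endpoints will return:

Url url = new Url("https://example.com");
System.out.println(toJson(url));
// {"id":"550e8400","originalUrl":"https://example.com"}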

Let's now have a look at the implementation of each of the three endpoints, starting with URL creation.

post("/urls", (request, response) -> {  
    Map<String, String> body = parseBody(request);

    if (!body.containsKey("originalUrl")) {
        response.status(400);
        return "";
    }

    String originalUrl = body.get("originalUrl");
    Url newUrl = null;

    if (urlsByUrl.containsKey(originalUrl)) {
        newUrl = urlsByUrl.get(originalUrl);
    } else {
        newUrl = new Url(originalUrl);
        urlsById.put(newUrl.id, newUrl);
        urlsByUrl.put(newUrl.originalUrl, newUrl);
    }

    response.type("application/json");
    return toJson(newUrl);
});

First, we parse the input using the utility method we defined earlier and validate that it contains a key called originalUrl. If it doesn't, we set the response status to 400 and return an empty body immediately.

If the validation passes, we check whether we already have that URL stored. If not, we create it.

In the end, we set the content type to application/json and return the JSON representation of the url object.

Note that there is a race condition here if two users store the same URL at the same time. There are ways we could solve this but, as it's not related to Spark, I opted to let it slide for now.
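
For the curious, one way to close that gap would be ConcurrentHashMap's atomic computeIfAbsent, which runs the mapping function at most once per key (a sketch, not what this project does):

Url newUrl = urlsByUrl.computeIfAbsent(originalUrl, u -> {
    Url created = new Url(u);
    // Index by id as well, inside the atomic creation step.
    urlsById.put(created.id, created);
    return created;
});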

get("/urls/:id", (request, response) -> {  
    String id = request.params("id");

    if (urlsById.containsKey(id)) {
        Url url = urlsById.get(id);

        Gson gson = new Gson();
        String json = gson.toJson(url);

        response.type("application/json");
        return json;
    } else {
        response.status(404);
        return "";
    }
});

To retrieve a stored URL by id, we first take the id from the path params. The path ends with :id, which means there will be a String waiting for us in the request that we can fetch using the params method.

If the object is found, we return its JSON representation as we did before. If not, we return a 404 with an empty body.

get("/u/:id", ((request, response) -> {  
    String id = request.params("id");

    if (urlsById.containsKey(id)) {
        Url url = urlsById.get(id);

        response.redirect(url.originalUrl);
        return "";
    } else {
        response.status(404);
        return "";
    }
}));

The endpoint that redirects the user to the original URL is very similar to the previous one. The main difference is that here we use the redirect method on the response object. This takes care of the details of telling the client to go and request the original URL.
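
If I recall the API correctly, the response object also has an overload that takes an explicit status code, in case you'd rather send a permanent redirect than the default temporary one:

// Redirect with an explicit status code (301 = Moved Permanently).
response.redirect(url.originalUrl, 301);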

You can get the whole code from here.

Spark also makes it very easy to run in development mode, as it comes with Jetty embedded. Using Gradle, running the server on your local machine is as easy as running gradle run. This command will download your dependencies, compile your code and start the application.

It's also possible to package a Spark application as a war file so you can run it with a different application server (like Tomcat or JBoss), which you'd probably want for production environments. I haven't tried this yet, but I should write about it some time in the near future.

Spark is a very good tool to have up your sleeve, especially if you're going for a microservices-oriented architecture with a lot of small web services. In the past I've mostly used Sinatra for this type of problem, but several times I've been forced to run those services with JRuby, either because of scale or because I needed a library that was more stable in the Java world. Spark feels like a very nice alternative that doesn't carry a significant cost in development time compared to the solutions available in the Ruby or Python worlds.