Serverless Computing Tutorial: The Service (Part 2)

The following is a Coderland presentation about our newest attraction, the Compile Driver.

Hello! This is Doug Tidwell for Coderland. In this video we’ll take a look at a serverless function that manipulates image data captured by the webcam next to the Compile Driver. Given a picture of a happy guest, the function adds a message, a date stamp, and the Coderland logo. As you can see, the results are striking. To get started, clone or fork our repo; it’s available at the URL on your screen.

The code follows the common Function-as-a-Service convention of using JSON to move data. Because we’re working with binary data here, we have to Base64 encode and decode it as it moves from the Coderland Swag Shop to your serverless function and back. The code is a Spring Boot application, so we’ll use Java’s built-in libraries for the Base64 work, and we’ll use the Jackson JSON library that ships with Spring Boot so we don’t have to worry about JSON syntax. The majority of the code in the function does the image manipulation, as it should.

The JSON structure used by the function contains six fields. The most important is imageData, which contains the Base64-encoded pixels from the image. There’s also imageType, which indicates whether this is a JPEG or a PNG. The other field you might want to use is greeting, the text that’s written at the top of the image. Beyond that, there are a dateFormatString, a language, and a location. Basically, we took everything that was hardcoded in the function and made it a field in the JSON structure. The repo contains a file called sampleInput.txt that you can use for testing; feel free to change its values and see what happens.

To make life simpler, we created a Java Image class that uses the Jackson library we mentioned earlier. The members of the class are all defined with the JsonProperty annotation, so the JSON is automatically parsed and turned into a Java object.
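The Base64-and-JSON transport described above can be sketched with Java’s built-in encoder. This is a minimal illustration, not the actual repo code; the class and method names here are made up for the example.

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64Demo {
    // Encode raw image bytes into a Base64 string suitable for the
    // imageData field of a JSON payload.
    static String encode(byte[] pixels) {
        return Base64.getEncoder().encodeToString(pixels);
    }

    // Decode the imageData string back into raw bytes on the other side.
    static byte[] decode(String imageData) {
        return Base64.getDecoder().decode(imageData);
    }

    public static void main(String[] args) {
        byte[] fakePixels = {(byte) 0x89, 0x50, 0x4E, 0x47}; // PNG magic bytes
        String imageData = encode(fakePixels);
        byte[] roundTrip = decode(imageData);
        System.out.println(imageData);                          // prints "iVBORw=="
        System.out.println(Arrays.equals(fakePixels, roundTrip)); // prints "true"
    }
}
```

The round trip is lossless, which is the whole point: the binary pixels survive the trip through a text-only JSON field.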
When we define the serverless function, we say that it takes an Image object as its input and returns one as its output. Jackson handles all of the JSON work for us, so we simply use the objects.

Next, we need to set up the method that handles the POST request. The PostMapping annotation tells Spring Boot that this method handles POST requests for the overlayImage endpoint. We’re also saying that we use JSON in and JSON out, and there is a CrossOrigin annotation to handle any CORS issues that might occur.

From here, we’ll breeze through the actual image processing. We create a BufferedImage object from the decoded image data, then we create a canvas. We draw the decoded image onto the canvas, then we create an alpha channel. The Coderland logo is loaded as another BufferedImage object, and the logo is drawn onto the canvas, centered vertically along the left side. Finally, we need to draw the text of the greeting and the date stamp. We use Font and FontMetrics objects to center the text on the image.

One thing we don’t do is make sure the text actually fits on the image. I’d like to say I left that as an exercise for you, the home viewer, but to be completely honest, it was just laziness on my part. Another character-building opportunity: the code originally looked for Overpass, which is Red Hat’s official font, but when we deploy this code to Knative, it has to be packaged as a container image. The base OpenJDK image doesn’t have that font installed, so we just went with the generic Sans Serif font. If you figure out how to modify our Dockerfile to install Overpass, we’d love to see how you did it. Or even better, just send us a PR.

Once the greeting and the date stamp are drawn, we write the image data and encode it. The last step is to create a new Image object and return it to the caller. Again, the Jackson library handles all the JSON mangling required, so we don’t have to worry about it. That’s how we do the image processing itself.
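The drawing steps above can be sketched with the standard java.awt classes. This is a simplified stand-in for the real function, not the repo’s code: the method name, sizes, and colors are all assumptions made for the example, and it runs headless with no display.

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class OverlaySketch {
    // Draws the source photo onto a canvas with an alpha channel, adds a
    // logo centered vertically on the left, and centers a greeting near
    // the top using FontMetrics -- the same sequence described above.
    static BufferedImage overlay(BufferedImage source, BufferedImage logo, String greeting) {
        BufferedImage canvas = new BufferedImage(
                source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = canvas.createGraphics();
        g.drawImage(source, 0, 0, null);

        // Logo: centered vertically along the left side.
        int logoY = (canvas.getHeight() - logo.getHeight()) / 2;
        g.drawImage(logo, 0, logoY, null);

        // Greeting: centered horizontally near the top of the image.
        g.setFont(new Font(Font.SANS_SERIF, Font.BOLD, 24));
        g.setColor(Color.WHITE);
        FontMetrics fm = g.getFontMetrics();
        int textX = (canvas.getWidth() - fm.stringWidth(greeting)) / 2;
        g.drawString(greeting, textX, fm.getAscent() + 10);

        g.dispose();
        return canvas;
    }

    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");
        BufferedImage photo = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        BufferedImage logo = new BufferedImage(40, 40, BufferedImage.TYPE_INT_ARGB);
        BufferedImage result = overlay(photo, logo, "Welcome to Coderland!");
        System.out.println(result.getWidth() + "x" + result.getHeight()); // prints "320x240"
    }
}
```

Note that, like the real function, this sketch makes no attempt to check whether the greeting actually fits within the image width.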
If you’d like to try the function, switch to the directory containing the code, run mvn clean package, then run the JAR file. Assuming you have curl on your machine, run the test script for your platform (curltest.cmd on Windows). This sends the sampleInput.txt file to the function; redirect the output to a file to save the results. Unfortunately, once you have those results, you’ll need to extract the Base64 image data from the file and decode it manually to see the modified image.

That’s a little clumsy… but wait! It gets better! Thanks to the amazing Don Schenck, you can test your code using a React front end. Here’s what that front end looks like: you give the React application access to your webcam, click the button, and you should see the results instantly. Is it not awesome? You can get Don’s code at the URL on your screen. Once you’ve cloned his repo, switch to that directory and run npm install, then npm start. (This is a React application, so you’ll need to install Node if you haven’t already.) When you type npm start, the system opens a new browser tab and the app asks your permission to use the webcam.

The front end is at localhost:3000, and it assumes the image service is at localhost:8080. You can set the environment variable REACT_APP_OVERLAY_URL if you need to change the location of the image service, which we’ll do in our next video.

That’s as far as we’ll go for now. We’ve got a function that does the image processing we want, and we’ve got a lovely front end to test it. In the next video, we’ll look at how to deploy the image processing function to Knative. In a nutshell, we need to build a container image from this code and tell Knative to load that image and manage it. That part is simple, but getting Knative set up can be tricky.

Thanks so much for watching. For Coderland, this is Doug Tidwell, saying “May all your bugs be shallow.” Cheers.
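P.S. A quick sketch of the manual-decoding chore mentioned earlier, for anyone testing with curl rather than the React front end. This is not code from the repo; the string-based field extraction and the file names are assumptions for the example (in real code you’d let Jackson parse the response).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class DecodeResult {
    // Naively pulls the imageData value out of the service's JSON response
    // and decodes it. Assumes the field appears as "imageData":"...".
    static byte[] extractImage(String json) {
        String marker = "\"imageData\":\"";
        int start = json.indexOf(marker) + marker.length();
        int end = json.indexOf('"', start);
        return Base64.getDecoder().decode(json.substring(start, end));
    }

    public static void main(String[] args) throws IOException {
        // In practice, read the file you redirected the curl output into;
        // here we use an inline sample payload so the sketch runs on its own.
        String json = "{\"imageType\":\"png\",\"imageData\":\"aGVsbG8=\"}";
        byte[] bytes = extractImage(json);
        Files.write(Paths.get("decoded.bin"), bytes); // hypothetical output file
        System.out.println(new String(bytes)); // prints "hello"
    }
}
```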
