# Proto Cache: A Caching Story

## What is Proto-Cache?

I’ve been working internally at Google to open-source several libraries, including cl-protobufs and a series of utility libraries we call “ace”. I have written several blog posts about building an HTTP server that takes in either protocol buffers or JSON strings and responds in kind. I think I have worked enough on the Mortgage Server and wish to work on a different project.

Proto-cache will grow up to be a pub-sub system that takes in google.protobuf.any protos and sends them to users over HTTP. I’m developing it to showcase the ace.core library and the Any proto well-known type. In this post we create a cache that stores google.protobuf.any messages in a hash-table keyed by symbols.

## The current incarnation of Proto Cache:

The code can be found here: https://github.com/Slids/proto-cache

### Proto-cache.asd:

This is remarkable inasmuch as cl-protobufs isn’t required by the defsystem! We only require the cl-protobufs.google.protobuf:any protocol buffer message object. Right now we only add it to and get it from the cache. This lets us store a protocol buffer message object that any user system can parse by calling unpack-any; we never have to understand the message inside.

### Proto-cache.lisp:

The actual implementation. We give three different functions:

• get-from-cache
• set-in-cache
• remove-from-cache

We also have a:

• hash-table

Note: The ace.core library can be found at: https://github.com/cybersurf/ace.core

The first interesting thing to note is the fast-read mutex. It can be found in the ace.core.thread package included in the ace.core utility library, and it allows mutex-free reads of a protected region of code. One calls either:

• (act:with-frmutex-read (fr-mutex) body)
• (act:with-frmutex-write (fr-mutex) body)

If the body of with-frmutex-read finishes with nobody having called with-frmutex-write, the value is returned. If someone calls with-frmutex-write while another thread is in with-frmutex-read, the body of with-frmutex-read has to be re-run. One should be careful not to modify state in the with-frmutex-read body.
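Putting the two macros together, here is a minimal sketch of guarding a counter with a fast-read mutex. It assumes ace.core is loaded; the constructor name make-frmutex is my assumption, while the two with- macros are the ones described above.

```lisp
;; Sketch only: assumes the ace.core library is loaded, and that
;; ace.core.thread exports make-frmutex (an assumption on my part)
;; alongside the with-frmutex-read / with-frmutex-write macros.
(defpackage #:frmutex-sketch
  (:use #:cl)
  (:local-nicknames (#:act #:ace.core.thread)))
(in-package #:frmutex-sketch)

(defvar *mutex* (act:make-frmutex))
(defvar *counter* 0)

(defun read-counter ()
  ;; The body may be re-run if a writer sneaks in,
  ;; so it must be side-effect free.
  (act:with-frmutex-read (*mutex*)
    *counter*))

(defun bump-counter ()
  (act:with-frmutex-write (*mutex*)
    (incf *counter*)))
```

Readers spin and retry instead of locking, which is exactly the trade-off discussed below for read-heavy workloads.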

## Discussion About the Individual Functions

### get-from-cache:

(acd:defun* get-from-cache (key)
  "Get the any message from cache with KEY."
  (declare (acd:self (symbol) google:any))
  (act:with-frmutex-read (cache-mutex)
    (gethash key cache)))

This function uses the defun* form from ace.core.defun. It looks the same as a standard defun except that it accepts a new declare statement. The declare statement takes the form:

(declare (acd:self (lambda-list-type-declarations) output-declaration))

In this function we state that the input KEY must be a symbol and the return value is going to be a google:any protobuf message. The output declaration is optional. For all of the options please see the macro definition for ace.core.defun:defun*.

The with-frmutex-read macro is also being used.

Note that in the macro’s body we only do a simple accessor call into a hash-table. Safety is not guaranteed, only consistency.

### set-in-cache:

(acd:defun* set-in-cache (key any)
  "Set the ANY message in cache with KEY."
  (declare (acd:self (symbol google:any) google:any))
  (act:with-frmutex-write (cache-mutex)
    (setf (gethash key cache) any)))

We see that the new defun* call is used. In this case we have two inputs: KEY, which must be a symbol, and ANY, which must be a google:any proto message. We also see that we return a google:any proto message.

The with-frmutex-write macro is being used. The only thing done in the body is setting a cache value. If one thread tries to get a message from the cache while another sets a message into the cache, it is possible the reader will have to re-run its read several times. In systems where readers are much more common than writers, fr-mutexes with spinning readers are much faster than having readers lock a mutex for every read.

### remove-from-cache:

We omit this function in this write-up for brevity.

## Conclusion:

Fast-read mutexes like the one found in ace.core.thread are incredibly useful tools. Having to access a mutex can be slow even in cases where that mutex is never locked. I believe this is one of the more useful additions in the ace.core library.

The new defun* macro found in ace.core.defun for creating function definitions is more of a mixed bag. I find the mapping from the lambda list s-expression in the defun statement to the s-expression in the declaration unclear. Others may find the syntax nicer and the mapping obvious.

Future posts will show the use of the any protocol buffer message.

As usual Carl Gay gave copious edits and suggestions.

# 2021

Greetings everyone!

It’s been a weird year for everyone. I don’t understand the idea that the end of 2020 will make everything better, but it seems to be a popular one. Please remember, the virus has gotten worse (the UK variant). Please be careful, stay inside, isolate, and take the vaccine as soon as you can.

Okay, now onto some of my hopes and plans for 2021. First I have a reading list. Some of these have been started, but I hope to finish them in 2021:

1. The Common Lisp Condition System.
• Written by Michal Herda, it’s a new book on the Common Lisp condition system.
• I’ll have a review out when I’m finished.
2. Site Reliability Engineering: How Google Runs Production Systems and The Site Reliability Workbook: Practical Ways to Implement SRE.
• These books go together in a complementary way. I believe all SWEs should have knowledge and experience trying to keep their production systems working.
3. Completely Bounded Maps and Operator Algebras
• Book by Vern Paulsen, interestingly important in Quantum Information Theory.

Next, I’m buying a house! The house needs quite a bit of work, but it’s in a really nice neighborhood in Belmont, MA, a town with great schools, nice parks, and some really cool Googlers! Take a look here:

https://www.zillow.com/homedetails/410-Pleasant-St-Belmont-MA-02478/56411531_zpid/

Most importantly, I hope to have a great year with my family. Lyra is growing REALLY big. She runs around and plays, a perfect little one. Her sister Faye will be born at the end of January or beginning of February. Wenwen is tired.

There are some things I know I’ll miss in 2021. I won’t see my next intern, just like I never met my previous intern (Ben) in person. Due to the pandemic Google has interns working at home. Thankfully Ben got a job at Google so I may see him yet.

The European Lisp Symposium will be held online in 2021. I miss seeing Lispers from all kinds of backgrounds, working everywhere from academia, startups, and small businesses all the way to corporate monoliths. I miss the gathering; the online meeting isn’t the same.

Finally, I want to leave this year with something I learned.

In 2020, as software engineers, we learned how to work in isolation. I don’t think remote work will or should be the norm, though I know lots of engineers disagree. We learn, make connections, gain understanding, and advance by meeting, discussing with, and learning from people. This will never be as constructive online as it is in person.

I know from my wife’s work that students don’t learn better online. I hope 2020 will show us that online work and education can and will be part of the future, but they will not be the whole future of work and education.

I hope everyone has a fantastic New Year.

# Merry Christmas!

Greetings everyone!

This will not be a programming post, or really a post of any technical or mathematical interest. I’m not entirely sure what the next technical post I will make is, but I am thinking.

As it is Christmas, I wanted to say some thanks.

First, to Carl Gay, my coworker and mentor at Google. He’s been the person I’ve talked to the most from work over these past 9 months (and probably well before that as well). He’s much farther along in his career than I am, but he’s been an amazingly helpful and kind friend.

I have been blessed with many great co-workers at Google. Ron, Ted, Stephen, Rujith, etc. Thank you for making this strange work year as great as it was.

Also Google. They’ve given me months off to take care of my daughter and allowed my wife to continue working without strain on childcare. I know people have a lot of misgivings about Big Tech, but I truly believe Google always tries to do what’s right.

Next my parents. It was a tough year. We stayed at my mom’s for a bit in the summer, which allowed Lyra to play in giant fields and moo at giant cows. Sadly, we did not get to see my dad and Melinda. We miss them very much and look forward to seeing them in 2021.

Finally, to my wife Wenwen and daughter Lyra, for making this year. For making our condo a home.

Again, Merry Christmas and if I don’t post again this year have a Happy New Year!

# Mortgage Server on a Raspberry Pi

In the last post we discussed creating a server to calculate an amortization schedule that takes and returns both protocol buffer messages and JSON. In this post we will discuss hosting this server on a Raspberry Pi. There are some pitfalls, and the story isn’t complete, but it’s still fairly compelling.

# What We Will Use:

Hardware:

We will use a Raspberry Pi 3 Model B as our server, with the stock operating system, Raspbian. This SoC has a quad-core 64-bit processor with on-chip floating point. The operating system itself is 32-bit, which makes the processor run in 32-bit mode.

Software:

We will be using SBCL as our Common Lisp, CL-PROTOBUFS as our protocol buffer and JSON library, and Hunchentoot as our web server.

# Problems

### 1. SBCL on Raspbian

When trying to run the mortgage-info server on Raspbian, the first error I got was an inability to load the lisp file generated by protoc. When I contacted Doug Katzman, he noted I was running an old version of SBCL; the Raspbian apt-get repository carries an old one. If you desire to run SBCL on a Raspberry Pi, follow the binary installation instructions here: http://www.sbcl.org/getting.html.

### 2. CL-Protobufs on a 32-Bit OS

The cl-protobufs library has been optimized to run on 64-bit x86 platforms. The Raspberry Pi environment is 32-bit ARM which, as noted before, is supported by SBCL. I don’t think anyone had attempted to run cl-protobufs on 32-bit ARM under SBCL. After modifying cl-protobufs.asd so that float-bits.lisp is loaded on SBCL when not running in 64-bit mode, we could quickload mortgage-info into a REPL.

### 3. Bugs in the mortgage-info repo

There were several bugs I fixed in my very limited testing of the mortgage-info repo, as well as some bugs that still remain.

1. When trying to set numbers in the proto message structs I had to coerce them to double-float. I’m not sure why… This works on SBCL running on the x86-64 without the coercions.
2. A division by 0 bug if the entered interest rate is 0.
3. The possibility of having 0 as the number of repayment periods. I added an assertion so we will return a 500 stating the assertion was hit. We should have a more graceful error message than a stack trace, but this is currently only a proof of concept.
4. The mortgage.proto file had interest as an integer, but interest is usually a float divisible by .125.
5. We have rounding problems if the interest rate is too high (say 99%). We only ever pay interest and the amount never goes down, at least with a 300-payment period. This is most likely due to rounding, since we do not accept fractional pennies. This is okay; if the national interest rate went anywhere near 99%, we would have BIG problems.

# CL-protobufs on the Pi

I have cl-protobufs running on SBCL on the Raspberry Pi, but some of the tests don’t pass. I’m not sure whether it would work on a 64-bit OS on the Raspberry Pi; I don’t have the inclination to get a 64-bit OS for my Pi. If you do, please tell me what happens!

I wasn’t able to get CCL on ARM32 to load cl-protobufs. It gives an error saying it doesn’t have ASDF 3.1, and quickloading ASDF I get an undefined function version<=. If any CCL folks have an idea about what’s going on, please send me a message.

Trying to run ABCL led me to yet another bug: https://github.com/armedbear/abcl/issues/359

# Running Server

My Raspberry Pi is running at: http://65.96.161.53:4242/mortgage-info

Feel free to send either JSON or protobuf messages to the server.

Example JSON:

{
  "interest": 3,
  "loan_amount": 380000,
  "num_periods": 300
}

I don’t know how long I will keep it running. If it goes down and you are interested in sending it messages please send me an email.

Ron, Carl, and Ben edited this post (as usual). Doug provided a great deal of help with SBCL on ARM 32.

# Lisp Mortgage Calculator Proto with JSON

I’ve finally found a house! Like many Googlers from Cambridge I will be moving to Belmont, MA. That being said, I have to get a mortgage. My wife noticed we don’t know much about mortgages, so she decided to do some research. I, being a mathematician and a programmer, decided to make a basic mortgage calculator that will tell you how much you will pay on your mortgage per month and give you an approximate amortization schedule. Due to rounding it’s impossible to give an exact amortization schedule for every bank.

This post should explain three things:

1. How to calculate your monthly payment given a fixed rate loan.
2. How to create an amortization schedule.
3. How to create an easy acceptor in Hunchentoot that takes either application/json or application/octet-stream.

## Mathematical Finance

The actual formulas here come from the Pre Calculus for Economic Students course my wife teaches. The book is:

Applied Mathematics for the Managerial, Life, and Social Sciences, Soo T. Tan, Cengage Learning, 2015.

With that out of the way we come to the Periodic Payment formula. We will assume you pay monthly and the interest rate is quoted for the year but calculated monthly.

Example: with an interest rate of 3% and a loan amount of $100,000, the first month’s interest is $100,000 × (0.03/12) = $100,000 × 0.0025 = $250.

$MonthlyPayment = \frac{LoanAmount * \frac{InterestRate}{12}} {1 - (1 + \frac{InterestRate}{12})^{-NumberOfMonths}}$

I am not going to prove this, though the proof is not hard; I refer you to section 4.3 of the cited book.

With this we can compute the amortization schedule iteratively. The interest paid for the first month is:

$I_{1} = LoanAmount * \frac{InterestRate}{12}$

The payment toward principal for the first month is:

$PTP_{1} = MonthlyPayment - I_{1}$

The interest paid for month j is:

$I_{j} = \frac{InterestRate}{12}*(LoanAmount - \sum_{i=1}^{j-1}PTP_{i})$

The payment toward principal for month j is:

$PTP_{j} = MonthlyPayment - I_{j}$

Since $I_{j}$ relies only on the $PTP_{i}$ for $0 < i < j$, and $PTP_{1}$ is defined, we can compute these values for any month we wish!
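These formulas translate directly into Common Lisp. Here is a self-contained sketch (the function names are mine, not code from the mortgage-info repo); passing rationals like 3/100 keeps the arithmetic exact:

```lisp
(defun monthly-payment (loan-amount yearly-rate num-months)
  "Periodic payment formula: L * (r/12) / (1 - (1 + r/12)^-n)."
  (let ((i (/ yearly-rate 12)))
    (/ (* loan-amount i)
       (- 1 (expt (+ 1 i) (- num-months))))))

(defun amortization-schedule (loan-amount yearly-rate num-months)
  "Return a list of (interest . payment-toward-principal) conses,
one per month, computed iteratively as in the formulas above."
  (loop with payment = (monthly-payment loan-amount yearly-rate num-months)
        with i = (/ yearly-rate 12)
        with balance = loan-amount
        repeat num-months
        ;; Interest is charged on the remaining balance; the rest of
        ;; the payment goes toward principal.
        for interest = (* balance i)
        for toward-principal = (- payment interest)
        do (decf balance toward-principal)
        collect (cons interest toward-principal)))
```

For the $100,000 loan at 3% from the example above, the first month of the schedule carries exactly $250 of interest; over 300 months the monthly payment works out to roughly $474.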

## Creating the Mortgage Calculator

We will be creating a Hunchentoot server that will receive either JSON or octet-stream Protocol Buffer messages and return either JSON or octet-stream Protocol Buffer messages. My previous posts discussed creating Hunchentoot acceptors and integrating Protocol Buffer messages into a Lisp application. For a refresher please visit my Proto Over HTTPS post.

### mortgage.proto

When defining a system that sends and receives protocol buffers you must tell your consumers what those messages will be. We expect requests to be in the form of the mortgage_information_request message, and we will respond with a mortgage_information message.
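The post doesn’t inline the .proto file, so here is a sketch of what the request message might look like, inferred from the fields used below; the field numbers, message casing, and types are assumptions on my part:

```proto
syntax = "proto3";

// Sketch of the request message described above. Field names come
// from the JSON example below; numbers and types are illustrative.
message MortgageInformationRequest {
  double interest = 1;     // yearly rate as a percentage, e.g. 3
  double loan_amount = 2;  // e.g. 380000
  int32 num_periods = 3;   // number of monthly payments, e.g. 300
}
```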

Note: With the cl-protobufs.json package we can send JSON requests that look like the protocol buffer message. So sending in:

{
  "interest": "3",
  "loan_amount": "380000",
  "num_periods": "300"
}

We can parse a mortgage_information_request. We will show how to do this shortly.

### Server Code:

There are two main portions of this file, the server creation section and the mortgage calculator section. We will start by discussing the server creation section by looking at the define-easy-handler macro.

We get the post body by calling (raw-post-data). This can be in either JSON or serialized protocol buffer format, so we inspect the content-type HTTP header with:

(cdr (assoc :content-type (headers-in *request*)))

If this header is "application/json" we turn the body into a string and call cl-protobufs.json:parse-json:

(let ((string-request
        (flexi-streams:octets-to-string request)))
  (cl-protobufs.json:parse-json
   'mf:mortgage-information-request
   :stream (make-string-input-stream string-request)))

Otherwise we assume it’s a serialized protocol buffer message and we call cl-protobufs:deserialize-from-stream.

The application code is the same either way; we will briefly discuss this later.

Finally, if we received a JSON object we return a JSON object. This can be done by calling cl-protobufs.json:print-json on the response object:

(setf (hunchentoot:content-type*) "application/json")
(let ((out-stream (make-string-output-stream)))
  (cl-protobufs.json:print-json response :stream out-stream)
  (get-output-stream-string out-stream))

Otherwise we return the response serialized to an octet vector using cl-protobufs:serialize-to-bytes.

## Application Code:

For the most part, the application code is just the formulas described in the mathematical finance section but written in Lisp. The only problem is that representing currency as double-precision floating point is terrible. We make two simplifying assumptions:

1. The currency uses two digits after the decimal.
2. We floor to two digits after the decimal.

When we make our final amortization line we pay off the remaining principal. This means the final payment may not match the payment for every other month, but it removes rounding errors. We may want to make a currency message for users to send us which specifies its own rounding and decimal places, or we could use the Google one that is not a well-known type, found here. The ins and outs of currency programming weren’t part of this blog post, so please pardon the crudeness.
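The flooring described in the two assumptions above might look like the following sketch (the repo’s actual helper, if any, may differ):

```lisp
(defun floor-to-cents (amount)
  "Floor AMOUNT, a number of dollars, down to a whole number of
cents, per the two simplifying assumptions above."
  (/ (floor (* amount 100)) 100.0d0))
```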

We create the mortgage_info message with the call to populate-mortgage-info:

(let (...
      (response (populate-mortgage-info
                 (mf:loan-amount request)
                 (mf:interest request)
                 (mf:num-periods request)))) …)

We showed in the previous section how we convert JSON text or a serialized protocol buffer message into a protocol buffer message in Lisp memory. This message is stored in the request variable. We also showed in the last section how the response variable is returned to the caller as either a JSON string or a serialized protocol buffer message.

The author would like to thank Ron Gut, Carl Gay, and Ben Kuehnert.

# The Secretary Problem

I’ve been looking for houses lately. The general problem with house hunting is that there is a time limit which dictates how many houses you will see, and there will probably be a close-to-total order on your opinions of them. In layman’s terms: each house you look at will be better than some of the houses and worse than the rest. My wife and I have debated how long we should look for a house. Thankfully this is nicely solved in mathematics.

The Secretary Problem:

Suppose you are trying to hire a secretary. You know you will interview 10 candidates, and you will have a total order on how much you like them. You must decide whether or not to hire each one at the end of their interview. What is the likelihood of choosing the top-ranked secretary?

## Problem Description and algorithm description.

To further explain: each candidate you interview has a rank from 1 to 10. When you interview them, you will not know their absolute rank, but you will know how they compare to the candidates you have already interviewed. When you interview candidate 1 you have no information. When you interview candidate 2 you know whether they are better or worse than candidate 1. When you interview candidate 3 you know how they relate to candidates 1 and 2. More interviews give you more knowledge of the ranking, but fewer choices of whom to hire.

Obviously there are many algorithms you could use to choose a secretary. You could choose the first secretary who comes to interview; your chance of getting the optimal secretary is then 10%. You could choose the first secretary who is better than the first candidate; this means that with 90% probability you will avoid the worst secretary!

The optimal probability of selecting the best secretary tends to 1/e. I’m not going to go into the proof (it’s not easy), but if you’re interested please check out the Wikipedia page. The algorithm itself is quite simple. First we generalize to having n secretaries come to interview.

1. Check the first n/e applicants.
2. Choose the next applicant who is better than all of the first n/e applicants.

## Coding Experiment

### We will generalize the optimal algorithm thusly:

1. We will check the first k candidates of the n candidates.
2. We will choose the first applicant who is better than the first k applicants.

We will create a permutation of {1,…,n}, get the max of the first k candidates, then take the first later candidate who ranks higher than that max; if no such candidate exists we take the last candidate. We return a boolean indicating whether the chosen candidate is ranked n.
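The experiment can be sketched in a few self-contained Common Lisp functions. Note this sketch uses a hand-rolled Fisher-Yates shuffle rather than the cl-permutation library the actual code uses, so it has no dependencies:

```lisp
(defun shuffled-candidates (n)
  "Return a vector containing a uniform random permutation of the
ranks 1..N, via a Fisher-Yates shuffle."
  (let ((v (coerce (loop for i from 1 to n collect i) 'vector)))
    (loop for i from (1- n) downto 1
          do (rotatef (aref v i) (aref v (random (1+ i)))))
    v))

(defun trial (n k)
  "Observe the first K candidates (K >= 1), then hire the first later
candidate who beats them all, or the last candidate if none does.
Return T when the hire is the best candidate overall (rank N)."
  (let* ((v (shuffled-candidates n))
         (best-seen (reduce #'max v :end k)))
    (loop for i from k below n
          when (> (aref v i) best-seen)
            return (= (aref v i) n)
          finally (return (= (aref v (1- n)) n)))))

(defun success-rate (n k trials)
  "Fraction of TRIALS in which the K-observation strategy wins."
  (/ (loop repeat trials count (trial n k)) trials 1.0))
```

With n = 10, k = 3 (about n/e), and 100,000 trials, the measured success rate lands near 0.4, a little above the asymptotic 1/e.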

The code can be found on my GitHub account. We use Robert Smith’s cl-permutation library, available on Quicklisp.

We see for 10 candidates and 100000 trials we get:

For 100 candidates and 100000 trials we get:

It’s interesting to note that your chance of finding the optimal secretary increases quite quickly as the number of candidates you check first grows, and decreases far more slowly after passing the optimal stopping bound.

## Takeaways:

As mathematics only approximates life, this doesn’t perfectly fit into my house search problem. I don’t know how many houses I will see, and I don’t know if house prices will increase or decrease over time. Also, I often don’t have to make a split-second decision right after I see a house.

This does however give me a takeaway:

When searching for a house, do your due diligence and look at as many open houses as you can at first. Getting an idea of what you like and don’t like will help you find the house you want. Don’t wait too long though!

I would like to thank Ron, Carl, and Ben for the edits to this article.

# Sending Protocol Buffers as an Octet Vector

In our previous posts on using Hunchentoot to send protocol buffer messages we turned them into base64-encoded strings and sent them as parameters in an HTTP post call. This allows us to send multiple protocol buffer messages in a single post call using multiple post parameters. In this post we will show how we can send a single protocol buffer message in the body of a post call as binary data instead of base64 encoding.

Note: I am new to using Hunchentoot, and would have started by sending an octet vector in the body of a post call if I had known how. On reviewing the last blog post, Carl Gay asked why this method wasn’t used, and the answer was lack of knowledge. After learning that one can use hunchentoot:raw-post-data to access the post body, I was able to write this simpler method.

## Hello-world-client

The changes from our previous post where we turned our octet-vectors into base64 encoded strings to this post where we just send the octet vector can be found here.

### ASD file

Since we are sending an octet-vector we no longer need to worry about flexi-streams, cl-base64, and protobuf-utilities. We removed them from the asd file.

### Implementation

This change is a dramatic simplification of our post call. All we have to do is use drakma to call our web server, setting :content-type to application/octet-stream and :content to the serialized proto message. Since we assume the web server will also send us application/octet-stream data, we can deserialize the reply and be on our way.

(response
 (cl-protobufs:deserialize-from-bytes
  'hwp:response
  (drakma:http-request
   address
   :content-type "application/octet-stream"
   :content (cl-protobufs:serialize-to-bytes proto-to-send))))

## Hello-world-server

The changes from our previous post where we turned our base64 encoded strings into octet-vectors to this post where we just read the octet vector can be found here.

### ASD file

Since we are sending an octet-vector we no longer need to worry about protobuf-utilities. We removed this from the asd file.

### Implementation

This change is a dramatic simplification to our post handler. First we set hunchentoot:content-type* to application/octet-stream so it knows we will return an octet-vector. Then we call raw-post-data and deserialize the result. We do our application logic and create our response. Finally we serialize our reply proto and return the octet-vector.

The one gotcha in all of this is the inability to either send or receive an empty octet vector. Either drakma just sends nil, or hunchentoot receives the octet stream as nil. Care should be taken not to try to deserialize nil, as that’s a type error. We all know nil is not of type octet-vector!

(define-easy-handler (hello-world :uri "/hello") ()
  (setf (hunchentoot:content-type*)
        "application/octet-stream")
  (let* ((post-request (raw-post-data))
         (request
           (if post-request
               (cl-protobufs:deserialize-from-bytes
                'hwp:request post-request)
               (hwp:make-request)))
         (response
           (hwp:make-response
            :response
            (if (hwp:request.has-name request)
                (format nil "Hello ~a"
                        (hwp:request.name request))
                "Hello"))))
    (cl-protobufs:serialize-to-bytes response)))

## Final Thoughts

Sending and receiving protocol buffers as octet vectors is a simpler way of using cl-protobufs with hunchentoot than trying to use HTTP parameters. Anyone using protocol buffers will probably send and receive only one message at a time (or wrap multiple messages in one message), so it should be considered the canonical use case. This is how gRPC works.

I hope you enjoyed this series on cl-protobufs, and hope you enjoy adding it into your own toolbox of useful Lisp packages.

I would like to thank Carl Gay for taking the time to edit the post and provide information on Hunchentoot Web Server.

# Serializing and Deserializing Protobuf Messages for HTTP

So far, I’ve made two posts: creating an HTTP client which sends and receives protocol buffer messages, and an HTTP server that accepts and responds with protocol buffer messages. In both of these posts we had to do a lot of extra toil serializing protocol buffers into base64-encoded strings and deserializing protocol buffers from base64-encoded strings. In this post we create several helper functions and macros to serialize and deserialize protocol buffers in our HTTP server and client.

### Notes:

I will be discussing the Hello World Server and Hello World Client. If you missed those blog posts it may be useful to go and view them here and here. There has been code drift since those posts, mainly the changes we will discuss in this post. The source code for the utility functions can be found in my protobuf-utilities code repo on github.

## Code Discussion

This time we will omit the discussion of the asd files. We went through the asd files line-by-line in the two posts referenced in the notes so please look at those.

In addition to the main macros we discuss and show below, we use two helper functions deserialize-proto-from-base64-string and serialize-proto-to-base64-string which can be found in my protobuf-utilities repo.

### Server-Side

We noticed that a large part of the problem with using cl-protobufs objects in an HTTP request and response is the tedium of translating the base64-encoded string sent to the server into a protocol buffer message, and then reversing the process for the response object. We know which parameters to our HTTP handler will be either nil or a base64-encoded proto packed in a string, and we know their respective types. With this we can make a macro that translates the strings to their respective protos and makes them available in an enclosing lexical scope.

Why a macro? Many Lispers may not ask this question, but we should, since macros are harder to reason about than functions. We want the body of our macro to run in a scope where it has access to all of the deserialized protobuf messages, and we are creating a utility that will work for any list of proto messages so long as we know their types. We could, with some effort, make a function that accepts a function and funcalls it with the deserialized messages, but it would be ugly. With a macro we can create new syntax that simplifies the code, allowing us to simply list the protobuf messages we wish to deserialize and then use them.

Given that, what our macro should accept is obvious: a list of conses, each containing the symbol that holds an encoded proto and the type of the message to be decoded. We also take a body in which the supplied symbols will refer to deserialized protos.

(defmacro with-deserialized-protos
    (message-message-type-list &body body)
  "Take a list of (MESSAGE . PROTO-TYPE) conses,
MESSAGE-MESSAGE-TYPE-LIST, where each MESSAGE is a symbol
whose value is a base64-encoded serialized proto in a
string. Deserialize the protos and bind them lexically to
the MESSAGE symbols around BODY."
  `(let ,(loop for (message . message-type)
                 in message-message-type-list
               collect
               `(,message
                 (deserialize-proto-from-base64-string
                  ',message-type
                  (or ,message ""))))
     ,@body))

It is plausible that our HTTP server will respond with a base64-encoded protocol buffer object. We could first call with-deserialized-protos to do some processing, creating a new protocol buffer object, and then call a function like serialize-proto-to-base64-string. Instead I create a macro that automatically serializes and base64-encodes the result of a body.

(defmacro serialize-result (&body body)
  (let ((result-proto (gensym "RESULT-PROTO")))
    `(let ((,result-proto (progn ,@body)))
       (serialize-proto-to-base64-string ,result-proto))))

Since we’ve gone this far, we can string these two macros together:

(defmacro with-deserialized-protos-serializing-return
    (message-message-type-list &body body)
  `(serialize-result
     (with-deserialized-protos ,message-message-type-list
       ,@body)))

This vastly improves our handler:

(define-easy-handler (hello-world :uri "/hello")
    ((request :parameter-type 'string))
  (pu:with-deserialized-protos-serializing-return
      ((request . hwp:request))
    (hwp:make-response
     :response
     (if (hwp:request.has-name request)
         (format nil "Hello ~a" (hwp:request.name request))
         "Hello"))))

A final pro-macro argument: macros allow us to make syntax that describes what we want a region of code to accomplish. The macros I wrote aren’t strictly necessary; you could just call deserialize-proto-from-base64-string several times in a let binding. Since you probably have only one request proto, that would do fine. You could also serialize the return proto yourself. I find the macros make the code nicer to write; the downside is that people working on the code have to know what these macros do. Thankfully, we have M-x and docstrings for that.

### Client-Side

We have the reverse story on the client side. We start by serializing and base64-encoding our proto objects before sending them over the wire, and then deserialize the result. One might imagine writing the same kind of macro here as on the server side. The problem with that is there’s no real body to run with the protos we send over the wire, and we get only one proto back, so we can just deserialize the HTTP result and let-bind it. A plain function works for this.

(defun proto-call (uri return-type call-name-proto-list)
  "Serialize each proto in CALL-NAME-PROTO-LIST, POST the result to
URI, and deserialize the response as RETURN-TYPE."
  (let* ((call-name-serialized-proto-list
          (loop for (call-name . proto) in call-name-proto-list
                for ser-proto
                  = (pu:serialize-proto-to-base64-string proto)
                collect (cons call-name ser-proto)))
         (call-result
          (or (drakma:http-request
               uri
               :parameters call-name-serialized-proto-list)
              "")))
    (pu:deserialize-proto-from-base64-string return-type
                                             call-result)))
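A hypothetical call, assuming the hello-world service from the earlier posts and that proto-call takes the URI, the return type, and the parameter alist in that order:

```lisp
;; Send one request proto and deserialize the response.
(proto-call "http://localhost:4242/hello"
            'hwp:response
            `(("request" . ,(hwp:make-request :name "foo"))))
```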

## Final Remarks

In this blog post we implemented several helper macros and a function for working with protocol-buffer objects in an HTTP environment. I believe the macros in protobuf-utilities are the missing link that will make cl-protobufs a welcome addition to Common Lisp HTTP servers.

Pull requests are always welcome!

I would like to thank @rongut, @cgay, and @benkuehnert for their edits and comments.

# Banana Bike

I’ve done several programming posts back-to-back, and I think it’s time to take a fun break. Today we are going to talk about the Banana bike; you can find it on Amazon.

The Banana bike is a balance bike with an aluminum body and air-filled tires. It is advertised as suitable for toddlers between the ages of 2 and 4, but I think that range is probably too large, so let’s say 2–3. It has a movable seat. Its best attribute, especially at this price point, is the air-filled tires. More expensive options like the Strider have foam tires, something that doesn’t roll well with me.

I got this bike for my daughter when she was around 14 months old. She wasn’t really able to ride it until around 20 months, so you may want to wait until your toddler is tall enough. On the other hand, she continually wanted to try to ride it, so maybe having a challenge like this is a positive.

Pros:

• The wheels are air filled. This is something bikes at this price point rarely have.
• Very light. My daughter can easily move it around.
• Good size.
• Great price.

Cons:

• The handlebar swivels a full 360 degrees. A steering limiter would be appreciated.
• No brakes. At this age level that’s probably fine.

Final opinion:

This is a fantastic bike for your toddler. At first I was concerned about the difference in quality between a $60 bike and, say, the $200 Woom 1. I can tell you that your toddler won’t notice the difference. I might suggest the $30 upgrade to a Schwinn balance bike to get the steering limiter, but it’s definitely not required. If you have a toddler, you should definitely get out there and bike!

Note: A Woom 1, while expensive, may still be worth it. One can often resell them for more than the initial cost, while the Banana Bike will probably have no resale value. You should, however, be able to use either bike with multiple children.

# CL-Protobufs Hello-World-Client

Last post we created a server using the Lisp web server Hunchentoot and the cl-protobufs Lisp protocol buffer library. In this post we will discuss making a client to contact our web service, sending protocol buffer messages over HTTP. We will be using the HTTP client package Drakma. We will end the post by discussing improvements that should be made in future iterations.

The reader can find the code in my hello-world-client github repo. It contains three files:

• hello-world-client.asd
• hello-world.proto
• hello-world-client.lisp

If you haven’t read the previous post please take a look here as we will be connecting to the service discussed therein.

## Updates to the HTTP web server

In our last post the hello-world-server accepted a string and read it in directly as an octet buffer. We did this to ease testing: we could manually use a GUI web client such as Postman to send REST calls to our web server, pasting in the printed form of the octet buffer returned from serialize-proto. I would like to avoid any read calls in my Common Lisp code for security reasons. Instead we will accept a base64-encoded string containing the octets, decode the base64 encoding with cl-base64, and use the flexi-streams functions string-to-octets and octets-to-string to read and write the octet buffer as a string.
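The pipeline is symmetric, so decoding simply reverses the encoding steps. A small sketch of the round trip, using the cl-base64 and flexi-streams functions named above (proto-to-send stands for a request proto like the one built later in this post):

```lisp
;; Client side: octets -> string -> base64.
(let* ((octets (cl-protobufs:serialize-object-to-bytes proto-to-send))
       (encoded (cl-base64:string-to-base64-string
                 (flexi-streams:octets-to-string octets)))
       ;; Server side: base64 -> string -> octets.
       (decoded (flexi-streams:string-to-octets
                 (cl-base64:base64-string-to-string encoded))))
  (equalp octets decoded))
```

With flexi-streams’ default latin-1 external format the char/octet conversion is byte-exact, so the final comparison should return T.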

The updated code can be found in hello-world-server/hello-world-server.lisp starting at line 19.

## Code Discussion

I will omit the discussion of the hello-world.proto code and how it is compiled with the cl-protobufs asd additions. For a discussion on this please refer to my previous post here.

## Hello-world-client.asd:

The useful information in this file is:

• defsystem: The Lisp system is called hello-world-client.
• defsystem-depends-on: To load the system you will need to load cl-protobufs so we can generate Lisp code from the proto file.
• depends-on: We will use Drakma as our http client library.
• module: We have one module src.
• protobuf-source-file: This is an asdf directive given to us by cl-protobufs. It will look for a file hello-world.proto in our current directory and call protoc-gen-lisp on this file.
• file: A lisp file hello-world-client.lisp

## Hello-world.proto:

The request and response schema definitions for the hello-world-server. This is copied from hello-world.proto in hello-world-server.

## Hello-world-client.lisp:

This is where the real work is done. We start the hello-world server locally, defaulting to port 4242 with handler hello, so we set those as globals. We define a function call-hello-world, which does what the client is attempting to do. It takes a name as either nil or a string, and address, port, and handler as optional keyword arguments defaulting to the aforementioned globals.

We create the proto and then serialize the bytes using the cl-protobufs serializer:

(cl-protobufs:serialize-object-to-bytes proto-to-send)

Next we use flexi-streams to turn the octets into a string, cl-base64 to base64-encode that string, and drakma to send it to the server.

(drakma:http-request
 (concatenate 'string address ":" port "/" handler)
 :parameters
 `(("request" . ,(cl-base64:string-to-base64-string
                  (flexi-streams:octets-to-string
                   serialized-req)))))


The drakma library blocks until it receives a response, which will contain a base64-encoded, stringified proto message. We simply reverse the base64 encoding, then call string-to-octets to get our octet buffer. We deserialize the proto message with cl-protobufs and print the response to the REPL.

(print
 (hwp:response.response
  (cl-protobufs:deserialize-object-from-bytes
   'hwp:response
   (flexi-streams:string-to-octets
    (cl-base64:base64-string-to-string response)))))

We see

(call-hello-world "foo")
=> "Hello foo"

as all good hello-world calls should show.

## Final Remarks

The hello-world-server and hello-world-client code work as one would expect. There is, however, too much boilerplate around this code. Having to manually call octets-to-string and string-to-base64-string and their reverses is cumbersome. What one should really have is a client function that does this for you.

On the server side it is equally onerous to call base64-string-to-string and string-to-octets for every proto parameter, and then the reverse at the end. This should really be a macro that takes as an argument a list of (parameter-name . proto-type) pairs and does the deserialization for you. You could add an optional output proto-type/parameter-name to do the octets-to-string and base64 encoding at the end of the call. This would amount to one macro call per handler.

In the next cl-protobuf hello-world post we will try to add these!

I would like to thank @rongut, @cgay, and @benkuehnert for their edits and comments.