Sending Protocol Buffers as an Octet Vector

In our previous posts on using Hunchentoot to send protocol buffer messages, we turned them into base64-encoded strings and sent them as parameters in an HTTP POST call. That allows multiple protocol buffer messages to be sent in a single POST call using multiple POST parameters. In this post we will show how to send a single protocol buffer message in the body of a POST call as binary data, with no base64 encoding.

Note: I am new to using Hunchentoot, and would have started by sending an octet vector in the body of a POST call if I had known how. On reviewing the last blog post, Carl Gay asked why this method wasn't used, and the answer was simply lack of knowledge. After learning that one can use `hunchentoot:raw-post-data` to access the POST body, I was able to write this simpler method.

Hello-world-client

The changes from our previous post where we turned our octet-vectors into base64 encoded strings to this post where we just send the octet vector can be found here.

ASD file

Since we are sending an octet vector we no longer need to worry about flexi-streams, cl-base64, and protobuf-utilities, so we removed them from the asd file.

Implementation

This change is a dramatic simplification of our POST call. All we have to do is use Drakma to call our web server, setting :content-type to application/octet-stream and :content to the serialized proto message. Since we assume the web server will reply with application/octet-stream data, we can deserialize the response body and be on our way.

(response
 (cl-protobufs:deserialize-from-bytes
  'hwp:response
  (drakma:http-request
   address
   :content-type "application/octet-stream"
   :content (cl-protobufs:serialize-to-bytes proto-to-send))))
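Since Drakma may hand back nil for an empty reply body, a more defensive version of this call (a sketch only; the `hwp:` names are assumed from this series) guards before deserializing:

```lisp
;; Sketch: guard against an empty reply body before deserializing.
;; Drakma returns nil (not an empty octet vector) when the body is empty.
(let ((reply (drakma:http-request
              address
              :content-type "application/octet-stream"
              :content (cl-protobufs:serialize-to-bytes proto-to-send))))
  (if reply
      (cl-protobufs:deserialize-from-bytes 'hwp:response reply)
      (hwp:make-response)))
```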

Hello-world-server

The changes from our previous post where we turned our base64 encoded strings into octet-vectors to this post where we just read the octet vector can be found here.

ASD file

Since we are reading the octet vector directly we no longer need protobuf-utilities, so we removed it from the asd file.

Implementation

This change is a dramatic simplification of our POST handler. First we set (hunchentoot:content-type*) to application/octet-stream so the client knows we will return an octet vector. Then we call raw-post-data and deserialize the result. We run our application logic and create our response. Finally we serialize our reply proto and return the octet vector.
The one gotcha in all of this is that an empty octet vector can be neither sent nor received: Drakma just sends nil, and Hunchentoot receives the empty octet stream as nil. Care should be taken not to deserialize nil, as that's a type error. We all know nil is not of type octet-vector!

(define-easy-handler (hello-world :uri "/hello") ()
  (setf (hunchentoot:content-type*) "application/octet-stream")
  (let* ((post-request (raw-post-data))
         (request (if post-request
                      (cl-protobufs:deserialize-from-bytes
                       'hwp:request post-request)
                      (hwp:make-request)))
         (response (hwp:make-response
                    :response
                    (if (hwp:request.has-name request)
                        (format nil "Hello ~a"
                                (hwp:request.name request))
                        "Hello"))))
    (cl-protobufs:serialize-to-bytes response)))

Final Thoughts

Sending and receiving protocol buffers as octet vectors is a simpler way of using cl-protobufs with Hunchentoot than HTTP parameters. Anyone using protocol buffers will probably send and receive only one message at a time (or wrap multiple messages in one message), so it should be considered the canonical use case. This is how gRPC works.

I hope you enjoyed this series on cl-protobufs, and hope you enjoy adding it into your own toolbox of useful Lisp packages.


I would like to thank Carl Gay for taking the time to edit the post and provide information on Hunchentoot Web Server.

Serializing and Deserializing Protobuf Messages for HTTP

So far, I've made two posts: creating an HTTP client which sends and receives protocol buffer messages, and an HTTP server that accepts and responds with protocol buffer messages. In both of these posts we had to do a lot of extra toil serializing protocol buffers into base64-encoded strings and deserializing them back. In this post we create three macros and a function to help us serialize and deserialize protocol buffers in our HTTP server and client.

Notes:

I will be discussing the Hello World Server and Hello World Client. If you missed those blog posts it may be useful to go and view them here and here. There has been code drift since those posts, mainly the changes we will discuss in this post. The source code for the utility functions can be found in my protobuf-utilities code repo on github.

Code Discussion

This time we will omit the discussion of the asd files. We went through the asd files line-by-line in the two posts referenced in the notes so please look at those.

In addition to the main macros we discuss and show below, we use two helper functions, deserialize-proto-from-base64-string and serialize-proto-to-base64-string, which can be found in my protobuf-utilities repo.

Server-Side

We noticed that a large part of the problem with using cl-protobufs objects in an HTTP request and response is the tedium of translating the base64-encoded string sent to the server into a protocol buffer, and then reversing the process for the response object. We know which parameters to our HTTP handler will be either nil or a base64-encoded proto packed in a string, and we know their respective types. With this we can write a macro that translates the strings to their respective protos and makes them available in an enclosing lexical scope.

Why a macro? Many Lispers may not ask this question, but we should, as macros are harder to reason about than functions. We want the body of our macro to run in a scope where it has access to all of the deserialized protobuf messages, and we are creating a utility that will work for any list of proto messages so long as we know their types. We could, with some effort, write a function that accepts another function and funcalls it with the deserialized messages, but it would be ugly. With a macro we can create new syntax that simplifies the code, letting us simply list the protobuf messages we wish to deserialize and then use them.

Given that, what our macro should accept is obvious: a list of conses, each containing the variable that holds an encoded proto and the type of the message to be decoded. We also take a body in which the supplied symbols refer to the deserialized protos.

(defmacro with-deserialized-protos
    (message-message-type-list &body body)
  "Take a list of (MESSAGE . PROTO-TYPE) conses
MESSAGE-MESSAGE-TYPE-LIST where each MESSAGE is a symbol
holding a base64-encoded serialized proto in a string.
Deserialize the protos and bind them to the message symbols.
The bindings are lexical, so after this macro finishes the
symbols once again hold serialized base64-encoded strings."
  `(let ,(loop for (message . message-type)
                 in message-message-type-list
               collect
               `(,message
                 (deserialize-proto-from-base64-string
                  ',message-type
                  (or ,message ""))))
     ,@body))
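As a concrete illustration, a call with a single request parameter expands roughly as follows (a sketch; `hwp:request` is from the earlier posts and `process` is a hypothetical body function):

```lisp
;; (with-deserialized-protos ((request . hwp:request))
;;   (process request))
;; expands (roughly) to:
(let ((request (deserialize-proto-from-base64-string
                'hwp:request
                (or request ""))))
  (process request))
```

Note the (or ,message "") guard: a missing HTTP parameter arrives as nil, and the empty string deserializes to an empty proto instead of signaling a type error.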

It is plausible that our HTTP server will respond with a base64-encoded protocol buffer object. We could first call `with-deserialized-protos` to do some processing, creating a new protocol buffer object, and then call a function like `serialize-proto-to-base64-string` on it. Instead, I create a macro that automatically serializes and base64-encodes the result of a body.

(defmacro serialize-result (&body body)
  (let ((result-proto (gensym "RESULT-PROTO")))
    `(let ((,result-proto ,@body))
       (serialize-proto-to-base64-string ,result-proto))))

Since we’ve gone this far, we can string these two macros together:

(defmacro with-deserialized-protos-serializing-return 
  (message-message-type-list &body body)
  `(serialize-result (with-deserialized-protos 
                       ,message-message-type-list ,@body)))

This vastly improves our handler:

(define-easy-handler (hello-world :uri "/hello")
    ((request :parameter-type 'string))
  (pu:with-deserialized-protos-serializing-return 
     ((request . hwp:request))
    (hwp:make-response
     :response
     (if (hwp:request.has-name request)
         (format nil "Hello ~a" (hwp:request.name request))
         "Hello"))))

A final pro-macro argument: macros allow us to create syntax that describes what we want a region of code to accomplish. The macros I wrote aren't strictly necessary; you could just call `deserialize-proto-from-base64-string` several times in a let binding, and since you probably only have one request proto that would do fine. You could also serialize the return proto yourself. I find the macros make the code nicer to write; the downside is that people working on the code have to know what these macros do. Thankfully, we have M-x and docstrings for that.

Client-Side

We have the reverse story on the client side. We start by serializing and base64-encoding our proto objects before sending them over the wire, and then deserialize the result. One might imagine writing the same kind of macro here as we wrote on the server side. The problem is that there's no real body we want to run with the serialized protos we send over the wire, and we get one proto back, so we can just deserialize the HTTP result proto object and let-bind it. A plain function suffices.

(defun proto-call
    (call-name-proto-list return-type address)
  (let* ((call-name-serialized-proto-list
           (loop for (call-name . proto)
                   in call-name-proto-list
                 for ser-proto
                   = (pu:serialize-proto-to-base64-string proto)
                 collect (cons call-name ser-proto)))
         (call-result
           (or (drakma:http-request
                address
                :parameters call-name-serialized-proto-list)
               "")))
    (pu:deserialize-proto-from-base64-string
     return-type call-result)))
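A call to the hello-world server from the earlier posts might then look like this (a sketch; the "request" parameter name and the `hwp:` symbols are assumed from those posts):

```lisp
;; Send one request proto under the "request" POST parameter and
;; deserialize the reply as an hwp:response.
(proto-call
 `(("request" . ,(hwp:make-request :name "foo")))
 'hwp:response
 "localhost:4242/hello")
```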

Final Remarks

In this blog post we implemented several helper macros and a function for working with protocol-buffer objects in an HTTP environment. I believe the macros in protobuf-utilities are the missing link that will make cl-protobufs a welcome addition to Common Lisp HTTP servers.

Pull requests are always welcome.


I would like to thank @rongut, @cgay, and @benkuehnert for their edits and comments.

Banana Bike

I've done several programming posts back-to-back, and I think it's time for a fun break. Today we are going to talk about the Banana Bike, which you can find on Amazon.

The Banana Bike is a balance bike with an aluminum body and air-filled tires. It claims to be suitable for toddlers between the ages of 2 and 4, but I think that is probably too large a range, so let's say 2-3. It has a movable seat. I would say its best attribute, especially for the price range, is the air-filled tires. More expensive options like the Strider have foam tires, something that doesn't roll well with me.

I got this bike for my daughter at around 14 months. She wasn't really able to ride it until around 20 months, so you may want to wait for your toddler to be tall enough. On the other hand, she continually wanted to try to ride it, so maybe having a challenge like this is a positive.

Pros:

  • The wheels are air-filled. This is something bikes at this price point rarely have.
  • Very light. My daughter can easily move it around.
  • Good size.
  • Great price.

Cons:

  • The handlebar swivels a full 360 degrees. A steering limiter would be appreciated.
  • No brakes. At this age level that's probably fine.

Final opinion:

This is a fantastic bike for your toddler. At first I was concerned about the difference in quality between a $60 bike and, say, the $200 Woom 1. I can tell you that your toddler won't notice the difference. I might suggest the $30 upgrade to a Schwinn balance bike to get the steering limiter, but it's definitely not required. If you have a toddler, you should definitely get out there and bike!

Note: A Woom 1, while expensive, may still be worth it. One can often resell them for more than the initial cost; the Banana Bike will probably have no resale value. You should, however, be able to use either bike with multiple children.

CL-Protobufs Hello-World-Client

Last post we created a server using the Lisp web server Hunchentoot and the cl-protobufs Lisp protocol buffer library. In this post we will discuss making a client to contact our web service, sending protocol buffer messages over HTTP. We will be using the HTTP client package Drakma. We will end the post by discussing improvements to be made in future iterations.

The reader should find the code in my hello-world-client github repo. It contains three files:

  1. hello-world-client.lisp
  2. hello-world.proto
  3. hello-world-client.asd

If you haven’t read the previous post please take a look here as we will be connecting to the service discussed therein.

Updates to the HTTP web server

In our last post our hello-world-server accepted a string and directly read the string in as an octet vector. We did this to ease our testing: we could manually use a GUI web client such as Postman to send REST calls to our web server, inputting the octet vector we get from calling print on the octet vector returned from serialize-proto. I would like to avoid any read calls in my Common Lisp code for security. Instead we will take in a base64-encoded string containing the octets, decode the base64 encoding with cl-base64, and use flexi-streams' string-to-octets and octets-to-string to read and write the octet vector as a string.

The updated code can be found in hello-world-server/hello-world-server.lisp starting at line 19.

Code Discussion

I will omit the discussion of the hello-world.proto code and how it is compiled with the cl-protobuf asd additions. For a discussion on this please refer to my previous post here.

Hello-world-client.asd:

The useful information in this file is:

  • defsystem: The Lisp system is called hello-world-client. 
  • defsystem-depends-on: To load the system you will need to load cl-protobufs so we can generate Lisp code from the proto file.
  • depends-on: We will use Drakma as our http client library.
  • module: We have one module src.
  • protobuf-source-file: This is an asdf directive given to us by cl-protobufs. It will look for a file hello-world.proto in our current directory and call protoc-gen-lisp on this file.
  • file: A lisp file hello-world-client.lisp

Hello-world.proto:

The request and response schema definitions for the hello-world-server. This is copied from hello-world.proto in hello-world-server

Hello-world-client.lisp:

This is where the real work is done. We are starting the hello-world server locally, defaulting to port 4242 with handler hello, so we set those as globals. We define a function call-hello-world, which is what the client is attempting to do. It takes a name, as either nil or a string, and address, port, and handler as optional keyword arguments defaulting to the aforementioned globals.
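Putting those pieces together, the whole function might look like this sketch (the global names and exact signature are assumptions, not necessarily what is in the repo; the `hwp` package nickname is assumed from this series):

```lisp
;; Sketch only: defaults and names assumed to resemble the repo.
(defvar *address* "http://localhost")
(defvar *port* "4242")
(defvar *handler* "hello")

(defun call-hello-world (name &key (address *address*)
                                   (port *port*)
                                   (handler *handler*))
  "Send NAME (a string or nil) to the hello-world server at
ADDRESS:PORT/HANDLER and return the response string."
  (let* ((proto-to-send (if name
                            (hwp:make-request :name name)
                            (hwp:make-request)))
         ;; Serialize, stringify, and base64-encode the request.
         (serialized-req (cl-protobufs:serialize-object-to-bytes
                          proto-to-send))
         (response (drakma:http-request
                    (concatenate 'string address ":" port "/" handler)
                    :parameters
                    `(("request" . ,(cl-base64:string-to-base64-string
                                     (flexi-streams:octets-to-string
                                      serialized-req)))))))
    ;; Reverse the encoding on the reply and pull out the string.
    (hwp:response.response
     (cl-protobufs:deserialize-object-from-bytes
      'hwp:response
      (flexi-streams:string-to-octets
       (cl-base64:base64-string-to-string response))))))
```

The individual steps of this sketch are walked through below.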

We create the proto and then serialize the bytes using the cl-protobufs serializer:

(cl-protobufs:serialize-object-to-bytes proto-to-send)

Next we use flexi-streams to turn the bytes into a string, cl-base64 to base64-encode the string, and Drakma to send it to the server.

(drakma:http-request
 (concatenate 'string address ":" port "/" handler)
 :parameters
 `(("request" . ,(cl-base64:string-to-base64-string
                  (flexi-streams:octets-to-string
                   serialized-req)))))

The Drakma library blocks until it receives a response, which will contain a base64-encoded stringified proto message. We simply reverse the base64 encoding, then call string-to-octets to get our octet buffer. We deserialize the proto message with cl-protobufs and print the response to the REPL.

(print
 (hwp:response.response
  (cl-protobufs:deserialize-object-from-bytes
   'hwp:response
   (flexi-streams:string-to-octets
    (cl-base64:base64-string-to-string response)))))

We see

(call-hello-world "foo")
=> "Hello foo"

as all good hello-world calls should show.

Final Remarks

The hello-world-server and hello-world-client code works as one would expect. There is, however, too much boilerplate around this code. Having to manually call octets-to-string and string-to-base64-string, and the reverse, is cumbersome. What one should really do is have a client function that does this for you.

On the server side it is equally onerous to call base64-string-to-string and string-to-octets for every proto parameter, and then the reverse at the end. This should really be a macro that takes a list of (parameter-name . proto-type) pairs and does the deserialization for you. You could add an optional output proto-type/parameter-name to do the serialization and base64 encoding at the end of the call. This would amount to one macro call per handler.

In the next cl-protobuf hello-world post we will try to add these!


I would like to thank @rongut, @cgay, and @benkuehnert for their edits and comments.

Proto Over HTTPS

Introduction

Recently I announced the release of cl-protobufs, a full-featured protocol buffer library for Common Lisp. It does not include a gRPC package. In this post, I will make an example hello-world server that takes a protocol buffer message as input and responds with a protocol buffer as output. I will describe some of the limitations of the given code and some ways to improve it.

I will assume a basic understanding of the ASDF build system; if you don't have one, please see an example such as this.

The reader should find the code in my github repo. It contains three files:

  1. hello-world-server.lisp
  2. hello-world-server.proto
  3. hello-world-server.asd

The .asd file is for integration with the ASDF build system. The proto file is where we define the request and response. 

Code Discussion:

The actual implementation is in the hello-world-server.lisp file. Let’s briefly go over the files.

hello-world-server.asd:

The useful information in this file is:

  • defsystem: The Lisp system is called hello-world-server. 
  • defsystem-depends-on: To load the system you will need to load cl-protobufs first. This is so we can compile the proto file into a file that Lisp will understand. Call this the Lisp proto schema file.
  • depends-on: We will use hunchentoot as our web server.
  • module: We have one module src.
    • protobuf-source-file: This is an asdf directive given to us by cl-protobufs. It will look for a file hello-world.proto in our current directory and call lisp-protoc on this file.
      • Note: You can specify a different directory to look in with proto-pathname. 
      • Warning: You must have the lisp-protoc plugin compiled and in your $PATH to call this directive successfully. Please go to CL-Protobufs and follow the protoc installation directions.
    • file: A lisp file hello-world-server.lisp

If you have a background using asdf, that may have been obvious. The defsystem-depends-on call to cl-protobufs and the protobuf-source-file directive may be non-obvious to newer Lispers.

hello-world.proto

This is where we place the request and response definitions. It will be compiled by protoc into a lisp file that Lisp can interpret with cl-protobufs. Note the package name is hello_world. To read more about protocol buffers please visit the Google developer documentation, and to learn more about the API for cl-protobufs please visit the CL-Protobufs readme.

hello-world-server.lisp

This is where the application code lives. The defpackage tells us mostly what we already know, we will use Hunchentoot as our web server library and the cl-protobufs library which will allow us to serialize and deserialize our protocol buffer objects. We will also be using a package: cl-protobufs.hello-world. This package is defined as part of the code generated by the Lisp protoc plugin and contains everything we need to work with the native lisp protocol buffer objects:

  • cl-protobufs.hello-world:request
  • cl-protobufs.hello-world:response

A good tutorial on Hunchentoot can be found here. We define a handler for our server that will be at /hello and will be available on port 4242. We give a parameter request of type string. 

Inside the request body is where the real protocol buffer work is done. The handler will take in a request, which will be of type string. This string will actually hold a printed simple array, i.e. #(values), which is the serialized protocol buffer request message 'cl-protobufs.hello-world:request. As an example:

(cl-protobufs:serialize-object-to-bytes
  (cl-protobufs.hello-world:make-request :name "fish"))

will give us a serialized request object whose name entry is “fish”. When run in the REPL we get:

#(10 4 102 105 115 104) 

We call read-from-string to parse the string back into an array, then call make-array to get an octet array; cl-protobufs:deserialize-object-from-bytes expects an octet array, so this is required. We deserialize the message with deserialize-object-from-bytes. If we received a message with a set name we return a response with the string “Hello {name}”; otherwise we just return a response with the string “Hello”. Finally we serialize the created response and return it as a string.
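The conversion just described might look like this (a sketch; the variable name `request` is assumed from the handler, and note the post later argues against using read on untrusted input):

```lisp
;; REQUEST holds a printed array such as "#(10 4 102 105 115 104)".
;; Parse it back into a vector, then copy it into an octet array,
;; since the deserializer expects (unsigned-byte 8) elements.
(let* ((parsed (read-from-string request))
       (octets (make-array (length parsed)
                           :element-type '(unsigned-byte 8)
                           :initial-contents parsed)))
  (cl-protobufs:deserialize-object-from-bytes
   'cl-protobufs.hello-world:request octets))
```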

Calling with my “fish” request I get the response:

#(10 10 72 101 108 108 111 32 102 105 115 104)

Then calling 

(cl-protobufs:deserialize-object-from-bytes
  'cl-protobufs.hello-world:response
  (make-array 12 :element-type '(unsigned-byte 8)
                 :initial-contents
                 '(10 10 72 101 108 108 111
                   32 102 105 115 104)))

I get 

#S(CL-PROTOBUFS.HELLO-WORLD:RESPONSE :%RESPONSE "Hello fish" 
                                     :%BYTES NIL 
                                     :%%IS-SET #*1) 

As should be expected.  

Limitations and Extensions

The first limitation is the work we have to do to read the request. We shouldn't have to call read-from-string at all, and we shouldn't have to make a new octet array. What we should do is make our call with octets-to-string and then call string-to-octets on the received message; this can easily be done by importing trivial-utf-8. (I was using Postman to make calls while debugging this simple hello-world-server.) This should be easy to fix for the next post.

Eventually we shouldn't even have to do that. We should be able to set the message type in the handler, and the octets-to-string and deserialize calls should be handled for us; likewise the final serialize and utf-8-string-to-bytes calls. This is easily handled by around methods. Given the ingenuity of Lispers, there are probably even better answers.

Final Remarks

I hope you found this example interesting. This is only a simple server, and I haven't used Hunchentoot very much, so I probably wrote some terrible server code. I:

  1. Hope to improve my lisp server code.
  2. Hope to get a gRPC server out someday.

Thank you for reading. I hope to see some interesting lisp code using cl-protobufs!


I would like to thank @rongut, @cgay, and @benkuehnert for reviewing this document.

Compiling CL-Protobufs with ABCL

I’m one of the main maintainers for the Google Common Lisp protocol buffer library:

https://github.com/qitab/cl-protobufs

I've been meaning to do a write-up on it because it's a heavily used library for my team, developed at Google. Internally we only use the SBCL compiler (Steel Bank Common Lisp), though other Common Lisp compilers are available. In this post I will describe the benefits of getting Common Lisp libraries such as cl-protobufs compiling on multiple Common Lisp compilers.

Recently I’ve been working on getting cl-protobufs to compile with CCL and ABCL. Most of my coworkers have been wondering why I bother. It seems fairly unlikely people outside of Google will use cl-protobufs and we only use SBCL. I hope this is wrong; it would be great to see more adoption of protocol buffers in the Lisp community, and thanks to Ben Kuehnert we have the only fully proto2 and proto3 compatible protocol buffer library in Common Lisp. But I digress.

Why would I bother getting protobufs to compile in both CCL and ABCL?

The answer is fairly simple: we want cl-protobufs to be available to a wider array of Common Lisp users, and SBCL allows certain constructs that other compilers don't, so by having cl-protobufs run in CCL and ABCL we can find more bugs.

A clear example, and the majority of the bugs I found in cl-protobufs, is the use of make-instance to create structure-classes. SBCL will allow this: you can't set the slot values with make-instance, but you can create the instance. In CCL, make-instance expects all of the init-forms of the structure slots to be constant, i.e. not function calls, so it fails. For example, if you have the struct

(defstruct my-struct
  (foo (make-array 3 :initial-contents '(1 2 3))
       :type (simple-vector 3)))

you can call (make-instance 'my-struct) in SBCL but not in CCL or ABCL. Calling make-instance on a structure-class object is undefined behavior, so this is a bug!

CCL, like SBCL, compiles down to machine code, but ABCL compiles down to JVM bytecode. Why would I care that cl-protobufs can run on ABCL? We find lots of interesting bugs by using ABCL.

Here’s a simple example:

(defun positive-p (my-int)
    (declare (optimize (speed 3) (safety 0))
             (type fixnum my-int))
    (> my-int 0))

What should (positive-p nil) return? In SBCL we expect my-int to be a fixnum, and we tell the compiler to believe us: speed 3, safety 0 means just treat it as a fixnum. The output on my laptop is T. In ABCL nil is not a fixnum, and there's no way to cast it to a fixnum, so this is a type error.

Note: 

(defun safe-positive-p (my-int)
    (> my-int 0))

(safe-positive-p nil) throws an error in SBCL.

Why is this a problem? Why tell the compiler to compile at speed 3, safety 0? Because when code has to be fast, this makes it fast. I'll discuss using the debugger in a later blog post.

So why try to get cl-protobufs working on different Lisp variants? It finds lots of bugs and undefined behavior! 


Special thanks to Ron Gut and Carl Gay for looking over this post.

For Whom the Clock Ticks?

First, I hate the new WordPress wysiwyg editor, so apologies.

My wife is a college lecturer; she usually teaches 3-4 classes per semester. I am a software engineer; I work 40 hours a week, usually more. During this time of Covid it's difficult to find childcare, and do you even want someone else taking care of your child? With my wife teaching, this opens up a problem.

Previously I posted thanking Google for giving me time off to take care of my daughter. This is a great benefit, and I’m very thankful to Google for giving me this benefit. That being said, it has one really big downside. A large part of performance is based on impact, and it’s hard to have next level impact when you’re only working 60% of what everyone else is working.

I've been working at Google for almost 4 years. I got to L4 in about a year and a half to two years. It was a fairly easy promotion process: just be sure you're a relatively mature programmer who can do some smaller-scale system design. It's probably the level most engineers are at, and it's the level we hire PhD grads at. I was happy to get the promotion; it came with nice benefits and all is great.

Two years later I have a daughter and another child on the way. I want to get a nice house for my family, with a yard to play in and a garage to work on my motorcycle (and learn some woodworking). But in the area I live, it's hard to afford a decent house on the salary an L4 at Google makes. One of my friends was told when they were hired in Mountain View that it takes an L5 to buy a decent house within an hour's commute; now it's L6…

So I’m left with seemingly two possibilities. I can take the extra time off to spend with my daughter, or I can try to find the extra time and work for that promotion. It’s not a hard choice for me, my daughter will win out 100% of the time every time. That being said, it still makes me wonder. Would the extra work, the promotion, be worth it?

Thankfully the problem is simple, if you can spend more time with your family, you should do it.

Banning Immigrants

Quite a headline, I know. My wife is an immigrant, here on a green card. Many of my friends are also immigrants, here on H1 or F1 visas, either as students or as employees of companies you've heard of. Many of them I met in grad school, or at Google, or at summer schools. These immigrants are a massive boon to the US, and Trump's goal of removing them will dramatically hurt our country and diminish our influence in the world.

Two major pieces of news have popped up in relation to this. The first:


Trump Suspends Visas Allowing Hundreds of Thousands of Foreigners to Work in the U.S. https://www.nytimes.com/2020/06/22/us/politics/trump-h1b-work-visas.html

In the simplest terms, we are shutting down H-1B visas for the remainder of the year to combat job losses from the coronavirus. There are many sides to this issue.

First, there are large tech companies that only hire international employees to undercut the pay Americans would require for low- to medium-skill tech jobs. This should be illegal; these companies are obviously working in bad faith. By companies I also mean several large municipal governments…

But for other companies this is not the case. I work at Google: if you can pass the interview, no matter what country you live in, and abiding by certain laws, we will hire you. You get the same pay no matter where you're from; Google pays in the top 95% tech pay bracket by employed region. We can't find enough employees in the US, most people just aren't good enough, and Americans don't have the best tech skills.


ICE: Foreign Students Must Leave The U.S. If Their Colleges Go Online-Only This Fall https://www.npr.org/sections/coronavirus-live-updates/2020/07/06/888026874/ice-foreign-students-must-leave-the-u-s-if-their-colleges-go-online-only-this-fa

I think this one is quite a bit worse in some ways. One of the biggest bits of soft power the US has is our colleges. They're known around the world as the best. Harvard, MIT, Yale, even our flagship state universities get the best students other countries have to offer. They come here and learn about the US, meet US students, learn US culture, and often stay in the US to become our top researchers and professors at these colleges.

A common complaint is: why not take US students? Honestly, it's because they either lack the technical skills needed or could easily make much more money going into the business world. Why would I bother with a PhD when (if I'm good at my professed skills) I could just make a six-figure salary at Google, or Amazon, or Microsoft? For international students, this is often a door into the US.

The US imports much of our talent. We need these students. Sending them back does immeasurable harm to our future.


I'm going to close this post with a personal note. In 2016, right after I was hired at Google, I went to China to see my wife's family. She was denied a new F1 visa, and I had to spend the next year away from her.

By removing these people from the US, what kinds of families are you hurting? What are you doing to people’s lives? Please think about this.

Sincerely,

Jon

Thank You Google, and Working From Home

With Covid still looming large in the US, the major tech companies are continuing or expanding their work-from-home allowance. More than that, many of the larger tech companies are allowing workers to take 12–14 weeks of paid carer’s leave so they can take care of their children during the absence of childcare services.

As a Googler, I’m getting a possible maximum of 14 weeks of carer’s leave as paid time off. First, I’m extremely grateful to Google for giving me this extra time off. I’ve taken every Tuesday/Thursday off for the last 2 months, and will be taking every Tuesday/Wednesday off for the next 2 months. That is 40 days (8 weeks) of paid time off that I’ve been allowed to spend with my daughter.

This is not supposed to be vacation time. While I’m playing with Lyra, my wife is doing her job. She teaches online (through UMass Boston), holds office hours, and prepares for her classes. During the week there is very little free time, and a 20-month-old will take over any extra free time you may have.

On the days when I work, I shut the door to my room (office) and do all of my work at my desk. Again, I am very grateful to Google for giving us $1,000 to upgrade our home offices! It’s amazing: before the Covid outbreak Googlers were getting pessimistic about the company, but now there is a massive amount of goodwill thanks to how they are handling this.

The saddest thing about shutting the door is hearing Lyra cry for Dada. If I’m in a meeting and she hears my voice, then all I hear at the door is “Dada, dada”. I don’t know what she thinks of me not being able to go and play with her, but it’s the saddest part of any workday.

I’m very proud to work for a company that takes such good care of its employees. When other people were losing their jobs, Google expanded benefits. When other companies canceled internships, Google worked with our internal resources so our interns could keep working.

I hope this goodwill keeps on after we all return to the office. I believe Sundar tries to do the right thing. Thank you!


Over-Engineering FizzBuzz

The main language I use in my day-to-day programming life is Common Lisp. It’s a wonderful language with some very powerful tools that most other languages don’t have. How many other languages have the powerful macro system of Lisp? How about generic functions? Not many.


Side note: A generic function is a function you can have many different versions of, where a type system determines which version should be called. This isn’t completely true, but it’s good enough for what I’m writing here.
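
As a minimal sketch of that idea (the names here are illustrative, not from any library):

```lisp
;; One generic function, several methods; the type of the
;; argument decides which method runs.
(defgeneric describe-thing (x))

(defmethod describe-thing ((x integer))
  (format nil "the integer ~a" x))

(defmethod describe-thing ((x string))
  (format nil "the string ~s" x))

;; (describe-thing 42)   => "the integer 42"
;; (describe-thing "hi") => "the string \"hi\""
```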


With this much power we can write code more complex than it ever should be. Let’s use FizzBuzz as an example. The goal of FizzBuzz is to print the numbers from 1 to 100, except that if the number is divisible by 3 we print “Fizz”, if it’s divisible by 5 we print “Buzz”, and if it’s divisible by both 3 and 5 we print “FizzBuzz”. It’s a classic interview problem and by now an interview trope.

First, let’s do a simple macro example. For this example, I don’t want recursion, multiple function calls, or loop iteration in my code. So I can make a macro that will unroll into a sequence of print statements.

(defmacro stupid-fizz-buzz (c)
  (cond ((> c 100) ())
        ((zerop (mod c 15))
          `(progn
             (print "FizzBuzz")
             (stupid-fizz-buzz ,(1+ c))))
        ((zerop (mod c 3))
          `(progn
             (print "Fizz")
             (stupid-fizz-buzz ,(1+ c))))
        ((zerop (mod c 5))
          `(progn
             (print "Buzz")
             (stupid-fizz-buzz ,(1+ c))))
        (t
          `(progn
             (print ,c)
             (stupid-fizz-buzz ,(1+ c))))))

Changing 100 to 3 and calling macroexpand-all on (stupid-fizz-buzz 1) we get:

(PROGN (PRINT 1) (PROGN (PRINT 2)
  (PROGN (PRINT "Fizz") NIL)))

There are nicer ways to write stupid-fizz-buzz as a macro, but this is a dead simple way.

Also, calling (let ((n 1)) (stupid-fizz-buzz n)) won’t work, because n isn’t an integer at macro-expansion time: the macro only works when its input is a literal integer at the moment it expands. To satisfy the problem we can define the inlined function below, and after compilation we should see the unrolled code wherever fizz-buzz is called.

(declaim (inline fizz-buzz))
(defun fizz-buzz ()
  (stupid-fizz-buzz 1))

Perhaps you believe one function should print “Fizz” and another function should print “Buzz”. Also, you love generic functions.

(defparameter *fizz* 3)
(defparameter *buzz* 5)
(defparameter *up-to* 100)

(defgeneric %stupid-fizz-buzz (count))

(defmethod %stupid-fizz-buzz :before ((count integer))
  (when (zerop (mod count *fizz*))
    (format t "Fizz")))

(defmethod %stupid-fizz-buzz :before ((count rational))
  (when (zerop (mod count *buzz*))
    (format t "Buzz")))

(defmethod %stupid-fizz-buzz (count)
  (if (or (zerop (mod count *fizz*))
          (zerop (mod count *buzz*)))
      (format t "~%")
      (format t "~a~%" count))
  (when (< count *up-to*)
    (%stupid-fizz-buzz (1+ count))))

(defun stupid-fizz-buzz ()
  (%stupid-fizz-buzz 1))

 

Here, integer is more specific than rational, so the "Fizz" :before method runs before the "Buzz" one. Either way, entirely over-engineered… At least the macro version has the benefit of complete loop unrolling.


How should FizzBuzz actually be done?

(defun fizz-buzz ()
  (loop for i from 1 to 100 do
    (when (zerop (mod i 3))
      (format t "Fizz"))
    (when (zerop (mod i 5))
      (format t "Buzz"))
    (if (or (zerop (mod i 3))
            (zerop (mod i 5)))
        (format t "~%")
        (format t "~a~%" i))))

That is probably what it should be, though you can do a bit nicer if you understand the format directive better.
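
As a sketch of what a format-directive version might look like (the clause-selecting expression here is my own, not from the original post), we can push the whole Fizz/Buzz decision into format’s ~[ conditional directive:

```lisp
(defun fancy-fizz-buzz ()
  (loop for i from 1 to 100
        do (format t "~[FizzBuzz~;Fizz~;Buzz~:;~d~]~%"
                   ;; Pick clause 0, 1, or 2 depending on which divisor
                   ;; matches first; 3 falls through to the ~:; default.
                   (or (position 0 (list (mod i 15) (mod i 3) (mod i 5)))
                       3)
                   i)))
```

The extra argument i is only consumed by the ~d in the default clause; in the other clauses format simply ignores it.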


I hope you had fun in this silly post.

Shout out to @cgay for catching some spelling errors and for the note about the macro not working for (let ((n 1)) (stupid-fizz-buzz n)).

There are more examples here: https://www.reddit.com/r/lisp/comments/59ikqm/the_most_elegant_implementation_of_fizzbuzz/

Finally, Little one:

[photo]