Hello World gRPC Server

In our last post we discussed the basics of gRPC, gave an example flow, and discussed why you might choose it over other communication protocols. In this post we will create a Hello World server using gRPC. This will involve using both gRPC and cl-protobufs together. The code can be found in my Hello World gRPC Github repo, specifically in the first commit.

HelloWorld Service

Defining the Protocol

Before we start writing code we must define:

  1. The messages we wish to send from the client to server and back.
  2. The server and method name.

We will start off simple. The request will simply contain a string name and the response will contain a string message. Creating these messages has been discussed before and is not very interesting. The new portion is the server. In Proto parlance a server is a service and that service contains a set of callable methods called RPCs.
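For reference, the two messages might look something like the sketch below, written in proto3 style. The field numbers and labels are assumptions on my part; check the proto file in the repo for the real definitions.

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}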

Looking at the proto file in our Hello World repo we see:

service HelloWorld {
  rpc SayHello(HelloRequest) returns (HelloReply) {}
}

This says we are creating a server named HelloWorld. It will export one callable method called SayHello, which will accept one (serialized) HelloRequest proto message and respond with one HelloReply proto message.

The Protocol, Macroexpanded

We’ve created our proto file. Next we add it to a Lisp library with an ASD file, along with all of the defsystem requirements to process the proto file. We detailed an example in our post Proto Over HTTPS if you need a refresher. Now we would like to create a server that can be called. To understand how to do this we will go over the generated service code, so please load your ASD file (or just follow along).
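As a rough reminder, the system definition has the following shape. This is only a sketch from memory: the extension system name (cl-protobufs.asdf), the component name (:protobuf-source-file), and the file names are assumptions here, so defer to the Proto Over HTTPS post and the cl-protobufs README for the exact details.

(defsystem :hello-world-grpc
  ;; ASDF extension that knows how to compile .proto files (name assumed).
  :defsystem-depends-on (:cl-protobufs.asdf)
  :depends-on (:cl-protobufs :grpc)
  :components
  ;; Generates the CL-PROTOBUFS.HELLO and CL-PROTOBUFS.HELLO-RPC packages,
  ;; then loads our server implementation.
  ((:protobuf-source-file "hello")
   (:file "server")))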

Cl-protobufs expands the proto service into several callable functions in the CL-PROTOBUFS.{FILENAME}-RPC package; for us this is CL-PROTOBUFS.HELLO-RPC. For each RPC it creates two functions:

  1. CL-PROTOBUFS.HELLO-RPC:CALL-SAY-HELLO
    1. CHANNEL argument: a channel object created by the gRPC library.
    2. REQUEST argument: a CL-PROTOBUFS.HELLO:HELLO-REQUEST message.
  2. CL-PROTOBUFS.HELLO-RPC:SAY-HELLO
    1. REQUEST argument: a CL-PROTOBUFS.HELLO:HELLO-REQUEST message.
    2. CALL argument: a gRPC call object.

The CL-PROTOBUFS.HELLO-RPC:CALL-SAY-HELLO function will let clients call our service. The channel is created with the gRPC library and we will discuss this later. The request message is a HELLO-REQUEST object.

CL-PROTOBUFS.HELLO-RPC:SAY-HELLO is a generic function. The user must implement a method specializing this generic function. It takes a (deserialized) CL-PROTOBUFS.HELLO:HELLO-REQUEST message and a gRPC call object created by the gRPC library.

gRPC Objects

There are two internal book-keeping objects we need to talk about: the CHANNEL object and the CALL object. 

CHANNEL

The channel is an object created by gRPC over which a user can send messages. There are several options for channels – please see the gRPC documentation. We will see a brief example below when we call our complete server.

Call

The call object contains metadata about a call created by the gRPC server, such as whether the call has been canceled. It is currently unused, but will be more useful in the future. For now it will remain ignored in our server implementation.

Server Implementation

Now the fun part: cl-protobufs created the scaffolding for making a server and gRPC created the scaffolding for hosting a server and servicing calls, but we need to implement our server. All we have to do is implement our RPC stub (SAY-HELLO) and start the server!

Implementing our RPC

Since the cl-protobufs scaffolding creates a generic function we just make a method implementing that generic:

(defmethod hello-rpc:say-hello ((request hello:hello-request) call)
  ;; The call object contains useful data for more intricate requests.
  (declare (ignore call))
  (hello:make-hello-reply
   :message (concatenate 'string "Hello " (hello:hello-request.name request))))

Notice we don’t have any serialization calls; this is all done by the gRPC/cl-protobufs scaffolding. Instead, we make the protos and implement our logic.

Starting our Server

Starting our server requires:

  1. Calling (grpc:init-grpc)
    1. This is done once to initialize pieces of gRPC.
  2. Calling grpc::run-grpc-proto-server

The grpc::run-grpc-proto-server function needs, at a minimum, the host:port string and the service symbol, here cl-protobufs.hello:hello-world. It offers more functionality, allowing for SSL, a user-defined number of threads, etc. See the gRPC code for details.

Full Example

Now that we have created our server we will show an example of starting the server and calling it. First clone the Hello World Repo. To start the server just load the grpc-server package defined in that repo and call grpc-server:main. Your server has been started! You must specify the hostname and port; in our example we use 127.0.0.1 and 8080, defined in the constants +hostname+ and +port-number+ in server.lisp.

(defun main ()
  ;; Before we use gRPC we need to init-grpc, this sets up
  ;; low-level gRPC internals.
  (grpc:init-grpc)
  ;; This starts the server.
  (grpc::run-grpc-proto-server
   "127.0.0.1:8080"
   cl-protobufs.hello:hello-world))

Next we need to call the server. In a REPL, load the grpc-server example; this is just to get the #:grpc, #:cl-protobufs.hello-rpc, and #:cl-protobufs.hello packages. Next call:

(grpc:with-insecure-channel (channel "127.0.0.1:8080")
  (cl-protobufs.hello-rpc:call-say-hello
   channel
   (cl-protobufs.hello:make-hello-request :name "Bob")))

We will discuss the grpc:with-insecure-channel function in the next post. Just note that here we specify a binding argument – channel – and the host and port. Finally, we call our server using cl-protobufs.hello-rpc:call-say-hello over the channel with a protocol buffer message. This returns:

#S(CL-PROTOBUFS.HELLO:HELLO-REPLY
   :%%SKIPPED-BYTES NIL
   :%%BYTES NIL
   :%%IS-SET #*
   :%-MESSAGE #S(CL-PROTOBUFS.IMPLEMENTATION::ONEOF
                 :VALUE "Hello Bob"
                 :SET-FIELD 0))
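To pull the greeting string back out of the reply, use the generated accessor. Following the same naming pattern as hello-request.name above, it should look roughly like this (double check the accessor name against your generated code):

(let ((reply
        (grpc:with-insecure-channel (channel "127.0.0.1:8080")
          (cl-protobufs.hello-rpc:call-say-hello
           channel
           (cl-protobufs.hello:make-hello-request :name "Bob")))))
  ;; The generated accessor for the message field of HelloReply.
  (cl-protobufs.hello:hello-reply.message reply))
;; => "Hello Bob"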

Wrapping Up

Here we have seen the creation of a full gRPC server, and we called it with a gRPC client, watching it receive and respond with Protocol Buffer messages over the wire. This requires none of the serialization and deserialization scaffolding we created in our previous HTTP servers; we get it for free! In future posts we will discuss client calls as well as bidirectional streaming.


Thanks goes to Ron Gut and Carl Gay for making edits and comments.

gRPC Basics

Howdy Hackers, it’s finally time to talk to you about gRPC for Common Lisp. In this post we will discuss the basics of gRPC. We will go through an example request/response flow from the perspective of the client and server. In future posts we will make a gRPC server and call it from a client. 

Lyra and Faye Looking Forward to gRPC Discussion?

Background

gRPC is a general RPC framework developed by Google. It is often used to pass Protocol Buffer messages from one system to another, though an advanced user could use it to pass byte vectors. It sits over HTTP/2 and allows for bidirectional message streaming between a client and a server.

For these posts we assume some knowledge of Protocol Buffers.

Why would I use it?

gRPC allows for simple communication between clients and servers. It allows for language-agnostic message passing of complex structured objects.

First let’s look at a simple call flow for a client and server.

  1. Service implementor publishes a gRPC Service description and Request Message as well as a public URL.
  2. Client uses the URL and gRPC library to create a channel.
  3. Client instantiates a request object.
  4. Client uses Protocol Buffer generated code to call the server passing in the channel and request object.
  5. Server receives the request object, does required processing, and returns a response object.
  6. The client receives a response message based on the published service descriptor.

The client and server need language-specific Protocol Buffer and gRPC libraries. The languages of these libraries need not be identical for the client and server. In our examples we will use qitab/grpc and qitab/cl-protobufs, both written for Common Lisp.

The protocol buffer library takes care of many of the low-level details for you. Once you specify the request and response message fields protobufs provides convenient constructors in multiple languages and takes care of serialization and deserialization to the correct type for each message field.

The gRPC library is in charge of transmission of the underlying bytes from one client to server. It delegates to the Protocol Buffer library for serialization of the request and response messages.

Alternatives

HTTP(/2)

One option to consider is bare HTTP calls; in fact this is the underlying transport of gRPC! This still leaves a system designer with the need to choose what to send over the wire, often JSON or XML. Then one must determine how to share the API schema, devise authentication schemes, and do all the other work that creating a good API requires. gRPC gives you much of this for free.

Apache Thrift

gRPC has a larger market share. The ecosystem you want to work with will often determine your choice of Thrift vs gRPC.

Note:

There are many different RPC frameworks; these are just the most common. Your software environment will often determine your framework: if you work at Google you will probably use gRPC, whereas if you work at Facebook you'll probably use Thrift. Also, not every language is supported by every RPC framework.

Conclusion

We now understand gRPC and its use case. We discussed the different types of libraries we need and saw a simple call flow with these libraries. In our next post we will create a gRPC server using qitab/grpc and call it.


Thanks go to Carl Gay for edits!

Promos

Jonathan Godbout

11/20/2022

I recently went from Software Engineer 3 to Senior Software Engineer (L4 to L5) at Google. This promotion doesn't greatly change my day-to-day work; it just means I'll be held to a higher standard than I was a few months ago. With this change, I've been trying to figure out what this promotion actually means, and what the future holds.

My History At Google

I started at Google in 2016 as an L3, the position where most people start: a new grad hire. Not much is expected of new grad hires; you generally get your work assigned by someone more senior, it generally has most of the implementation questions filled in, and anything not filled in is expected to be fairly simple. People generally don't stay at this level very long.

In late 2017 I took up a larger project that was started by a Xoogler but still had priority and a lot of open questions. This was my introduction to really writing documentation, to researching the tools needed to support systems, and to meeting with people from other teams to determine how not to break them. After a year's worth of work, becoming the owner of a large swath of my team's codebase, and making a system that I will be asked about for years to come, I was promoted to L4.

L4 is the highest level you must reach at Google. You're expected to own some small portion of your team's codebase and to be able to take tasks with moderate open questions, design a solution, and implement it.

Getting to L5

As an L4, you're not expected to design and implement large systems at Google; most of the interesting design work is done at L5 and above. Thus the big difference from L4 to L5 is being able to design a large(ish) system given some degree of constraints. Say the L6 says we need X done; the L5 designs the system, and the L{3,4,5}s go forth and implement it.

Now, I work on a team whose system was put in maintenance mode years ago, so how does one go about finding such a large project to work on? The best answer I have is to find a piece of your system that is causing your team pain (better yet, causing customers pain) and be the one who designs a solution to fix it. Remember, this is not an iterative approach to fixing a problem; it has to be a large project that will give clear benefits to multiple teams and will require working across multiple teams.

Every time I saw an interesting project, I took it. 

  • My team had no way to use Protocol Buffers, so I rewrote CL-Protobufs, giving us access to Google-standard technology.
  • The dreaded NDC future of airfare ticketing is coming; try to be part of it.
  • Different release processes are painful; try to fix them.
    • This paired nicely with the first item.

Where I am now

Right now I’m an L5 engineer, and given the average promotion rate, I will probably be an L5 engineer for a few years. In order to reach the next level I need to expand my scope of work. For the next level there are two possibilities:

  • Manager
  • Staff Software Engineer

Both require expanding my range of influence, and growing beyond just being a QPX engineer.

Why Promo

A lot of engineers stay at L5. It is said (everywhere) that the role of L6 is a completely different job: it is leadership instead of engineering. Even as a staff software engineer you're still a leader; you own a significant portion of your codebase, and you set priorities on future endeavors for your team. A lot of engineers have no interest in this role.

Personally, I don’t like stagnation. I don’t like sitting still, and I wish to learn more. I can be a better engineer, I can learn more, be more attuned to performance, but at some point in order to enlarge your knowledge you must cast off into new unknowns, and gain responsibility beyond yourself. This is what this next step will be about.


Kids Agency

Agency, noun; action or intervention, especially such as to produce a particular effect.

I’m a big believer that kids should be given as much agency as possible, given their age and ability. There is a movement called Free Range Kids fighting back against the continual coddling and oversight of kids. We should be very supportive of this. In this post I’ll describe some of what I think should be allowed.

Bikes

Being a kid is hard, you are continually being told what you can and cannot do, you can’t transport yourself anywhere, and you live in an adult world. The first taste of freedom you are likely to get is on a bike. You suddenly go from a walking or running speed, to a much faster biking speed. Your parents can no longer feasibly keep up with you, and you have the ability to explore your world.

When I was a kid my bike was how I got around. Living in rural Vermont, the only way to get anywhere was on a bike (or a car if you’re old enough). Getting to friends was nearly impossible on foot. My bike let me get to my friends, the local park, my school, and the town’s convenience store. We’ll discuss more about letting kids run around alone later.

Now that I have kids, I want them to learn how to ride. Not only is it good exercise, and a lot of fun, but it will also serve as their first vehicle to get around town and see their friends. Right now I want them to be in my eyesight, but Lyra already loves riding her bike to the playground. Faye now has a balance bike but it will be a few months before she grasps riding.

Favorite kids bikes:
Woom 1: https://us.woombikes.com/products/1
Woom 2: https://us.woombikes.com/products/2

Playing Without Parents

When kids are old enough, they need their own space to be themselves. They need to be able to order their own lives, to run by themselves, and just be themselves. With parents constantly over their shoulder, they will never be able to learn about themselves.

When I was a kid, I lived in rural Vermont with many acres of land around me. My parents always said "There's a large forest out back, go play." Around age 7 my friends and I would ride around Huntington; there was a playground near the gravel pit, and a convenience store a few miles away. We would ride down and get root beer, candy, or some other snack. So long as I told my parents where I was going, they were fine.

Now I live in a suburb of Boston, MA, and everyone is scared of everyone else. People seem not to want kids playing by themselves. But I ask: "What's the point of having a backyard if I can't tell her to go out back and play?" She's old enough to know to stay out back. In a few years (2, 3?) she should have no problem with the 5 minute walk down the street to the park.

Note: The street I lived on in Huntington had cars; we were smart enough to get out of the way.

Being Alone

Sometimes Lyra decides she wants to play alone. Maybe Faye is getting in her space, maybe Mama and Dada are being too belligerent, but she needs her own time. She goes into her room, or the playroom, and cleans, or plays, or reads. This is important for development, allowing her to self calm, self direct, and just be herself. Parents need to give this to kids.

Outro

Kids are near infinitely capable. The bounds that they have are often the bounds we set upon them. Before thinking about what you're comfortable with, what fears you have, think about what your child needs, what they can handle, and what kind of agency you want them to have. All this said, you should do what you feel is right for your kids!

Evil Lies about Hash Tables

Greetings readers. There are lies being told in Computer Science, lies you probably believe. Today I want to discuss the lie that is the constant time lookup or insertion of our friend the hash table. Don’t get me wrong, I love hash tables, some of my best functions use them, but they are far from constant time.

So, the usual statement is: hash tables have constant time lookups.

This is a lie. They have average-case constant time lookups and worst-case linear time lookups, and big-O is about worst-case analysis.

Let’s dig into this statement.

Faye trying to understand HashTables.

How do hash tables work?

Hash tables store key-value pairs: you tell them to use some object type as keys (for example strings) and some other objects as values (for example a Person object).

This way we can have:

(let ((persons (make-hash-table :test #'equalp)))
  (setf (gethash "Lyra" persons) (list :name "Lyra" :height 36 :weight 34))
  (setf (gethash "Lrya" persons) (list :name "Lrya" :height 34 :weight 36))
  (setf (gethash "Faye" persons) (list :name "Faye" :height 24 :weight 20))
  (print (gethash "Lyra" persons))
  ...)

So `persons` is a hash table mapping each name to a list of person attributes.

Their power is in hashing: the use of a function that maps an object to an integer. Take strings as an example; a simple hash function could be

(defun hash-string (str)
  (reduce #'+ str :key #'char-int))

which adds up the integer values of each character, where "a" is 97, "b" is 98, and so on.

When we run

 (gethash "Lyra" persons)

First we hash the string "Lyra" to get 408. The constant time lookup structure we know of is the array, so as one would expect, hash tables could (and should) be backed by arrays.

This leads to two problems:

  1. The possible values of our hash function are bigints, and a hash table can't be backed by an array of arbitrary size.
  2. What if two keys have the same hash value?

The first is simple: we limit the hash table to a certain size, so it has m possible slots. Then we just take (mod (hash-string "foo") m) to get the array index.
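For example, with the hash-string function above and m = 16 slots:

;; "Lyra" hashes to 408, and 408 mod 16 is 8, so "Lyra" lands in slot 8.
(mod (hash-string "Lyra") 16)  ; => 8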

The second is also fairly simple: we store a list of elements at each index. So "Ly" and "yL" would be in the same array index, or in hash-table parlance, the same bucket. A value in this bucket will be the key-value pair, so in the bucket containing "Lyra" in persons we might see

(list (cons "Lyra" (list :name "Lyra" :height 36 :weight 34))
      (cons "Lrya" (list :name "Lrya" :height 34 :weight 36)))

Then we are left with testing the key portion of each entry in the bucket against the given key using our test function, and finally returning the value.
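Putting the pieces together, a lookup is just "hash, mod, scan the bucket". Here's a minimal sketch, assuming the backing store is a vector of alist buckets like the one above:

(defun toy-gethash (key buckets test)
  "Look KEY up in BUCKETS, a vector of alist buckets, comparing keys with TEST."
  (let ((bucket (aref buckets (mod (hash-string key) (length buckets)))))
    ;; Scan the bucket for a matching key and return its value.
    (cdr (assoc key bucket :test test))))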

Side note: If you have a way to order the elements in your bucket you can use a balanced binary tree instead of a list. This takes the worst-case lookup time from O(k), where k is the number of elements in your bucket, to O(log k). Java 8 did this for large bucket sizes in its HashMap (at least in some implementations).

Why is this not just O(n) or O(log n)?

Well, it is. If your hash function is constant (or just bad) then you will get lots of hash collisions and turn your nice shiny hash table into a balanced binary tree or a list. But we really care about average lookup times.

If we fix our number of buckets m, then hash tables are back to bad: as the table grows the buckets get longer and longer. The good news is we can resize the backing array. This requires us to rehash all of the elements and make new buckets. That could be expensive, but it occurs rarely, and proper values for the size of our hash table can greatly improve things. You can find the math in any CS data structures book, or on Wikipedia, but with proper rehashing and a good hash function you're back down to constant time lookups (on average)!
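For completeness, here is a rough sketch of what that resize looks like, again assuming alist buckets and the hash-string function from earlier. A real implementation would also track a load factor to decide when to grow.

(defun rehash (old-buckets new-size)
  "Move every entry of OLD-BUCKETS into a new, larger vector of buckets."
  (let ((new-buckets (make-array new-size :initial-element nil)))
    (loop for bucket across old-buckets
          do (loop for (key . value) in bucket
                   for index = (mod (hash-string key) new-size)
                   do (push (cons key value) (aref new-buckets index))))
    new-buckets))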

So please, don't tell me your maps are O(1) without at least alluding to the fact that this is not quite what big-O means…

Another lie:

Quicksort is not O(n log n).

It's actually O(n^2) in the worst case, but it almost always outperforms its O(n log n) rivals.

Out With 2021, In With 2022

Greetings readers,

I know I don’t post much, but I have many ideas. Maybe someday I’ll find the time to write them!

The year 2021 is practically over, and I feel like a year end blog post is in order. It was quite a year. I say that every year, so maybe every year is quite a year!

The most important part of the year was the birth of little Faye. It’s interesting having your second child, in some ways you’re much more prepared for the work, for the sleeplessness, and for taking care of a baby. In other ways, you’re much less prepared than you were for the first one. You know the best ways to get them to sleep, but then the older sister goes and wakes them up!

This year saw Lyra starting preschool, spending time out of the house without us. For the first time Grandpa Don took care of her alone. For the first time I saw her interact with a very close best friend. In some ways I know she’s still my little girl, only 3, but in so many other ways she seems to be getting too big already.

Last year ended with us buying a house; this year we didn't get to move in until nearly the end of the year. Even after that, the place was barely livable until the end of October. Still, it's great for Lyra to have a large yard to play in, and big bad wolves (not really) to hunt out back.

Wenwen keeps working at UMass Boston. The hardest thing is finding time to relax, someday we’ll find some.

As for 2022: I don't imagine the pandemic going away; it will linger on and we'll be forced to find better ways to live in a world with a pandemic, but we'll find them. We have our family. Despite all that's going on, I don't think there's been a better time for me.

Jon

Playgrounds

Sorry for the absolute lack of updates; between 2 kids, a home renovation, and maybe some math, there is no time to write technical posts. I am excited to say that a gRPC client for Common Lisp exists, but it currently lacks an ASDF file so it's not completely ready. This is not a Lisp post, however.

Today I brought Lyra and Faye to the park. Well, I do that every day, but I did it today as well. My older daughter (now 3) likes to play with slightly older girls, and she ran off to play. I sat by with Faye and watched her. She went up to one of the girls and tried to play, but then came back. She asked me if I could ask if she could play with them, so I told her to just ask "Can I play with you?" The next thing I saw was them pushing her away and saying "We don't want to play with a baby."

I'm not entirely sure what to think. On one side, she has to learn how to navigate a playground. I don't know what else I could do but be there for her when she came over.

On the plus side her nickname is now Owlet. I said “Goodnight Lyra” and she said “No dada, say goodnight Owlet!” So I said “Goodnight Owlet” and she said “Goodnight dada” with the saddest little Owlet voice.

Cl-Protobufs Enumerations

In the last few posts we discussed family life, and before that we created a toy application using cl-protobufs and the ACE lisp libraries. Today we will dive deeper into the cl-protobufs library by looking at Enumerations. We will first discuss enumerations in Protocol Buffers, then we will discuss Lisp Protocol Buffer enums.

Enums:

Most modern languages have a concept of enums. In C++ enumerations are compiled down to integers and you are free to use integer equality. For example

#include <iostream>

enum Fish {
  salmon,
  trout
};

int main() {
  std::cout << std::boolalpha << (salmon == 0) << std::endl;
}

Will print true. This is in many ways wonderful: enums compile down to integers and there’s no cost to using them. It is baked into the language! 

Protocol Buffers are available for many languages, not just C++. You can find the documentation for Protocol Buffer enums here: 

https://developers.google.com/protocol-buffers/docs/proto#enum

Each language has its own way to support enumeration types. Languages like C++ and Java, which have built-in support for enumeration types, can treat protobuf enums like any other enum. The above enum could be written (with some caveats) in Protocol Buffer as:

enum Fish {
  salmon = 0;
  trout = 1;
}

You should be careful though: protoc will give a compile warning that the enum value 0 should be a default value, so

enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
}

is preferred.

Let’s get into some detail for the two variants of Protocol Buffers in use.

// Example message to use below.
enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
}

message Meal {
  {optional} Fish fish = 1;
}

The `optional` label will only be written for proto 2.

Proto 2:

In proto 2 we can always tell whether `Meal.fish` was set. If the field has the `required` label then it must be set, by definition. (But the `required` label is considered harmful; don’t use it.) If the field has an `optional` label then we can check if it has been set or not, so again a default value isn’t necessary.

If the enum is updated to:

// Example message to use below.
enum Fish {
  default = 0;
  salmon = 1;
  trout = 2;
  tilapia = 3;
}

and someone sends fish = tilapia to a system where tilapia isn’t a valid entry, the library is allowed to do whatever it wants! In Java it sets it to the first entry, so Meal.fish would be default! 

Proto 3

In proto3, if the value of Meal.fish is not set, calling its accessor will return the default value, which is always the zero value. There is no way to check whether the field was explicitly set. A default value (i.e., a name that maps to the value zero) must always be given, or the user will get a compile error.

If the Fish enum was updated to contain tilapia as above, and someone sent a proto message containing tilapia to a system running an older program whose Fish enum doesn't contain tilapia, the deserializer should save the enum value. That is, the underlying data structure should know it received a "3" for the fish field in Meal. How the accessors return this value is language dependent. Re-serializing the message should preserve this "unrecognized" value.

A common example is: A gateway system wants to do something with the message and then forward it to another system. Even though the middle system has an older schema for the Fish message it needs to forward all the data to the downstream system.

Cl-protobufs:

Now that we understand the basics of enumerations, it is important to understand how cl-protobufs records enumeration values.

Lisp as a language does not have a concept of enumerations; what it does understand is keywords. Taking fish as above and running protoc we will get (see readme https://github.com/qitab/cl-protobufs/#enums):

(deftype fish () '(member :default :salmon :trout))

(defun fish-to-int (keyword) 
  (ecase keyword
    (:default 0)
    (:salmon 1)
    (:trout 2)))

(defun int-to-fish (int)
  (ecase int
    (0 :default)
    (1 :salmon)
    (2 :trout)))

Looking at the tilapia example, the enum deserializer preserves the unknown field in both proto2 and proto3. Calling an accessor on a field containing an unknown value will return :%undefined-n. So for tilapia we will see :%undefined-3.

Warning: To get this to work properly we have to remove type checks from protocol buffer enumerations. You can set the field value in a lisp protocol buffer message to any keyword you want, but you will get a serialization error when you try to serialize. This was a long discussion internally, but that design discussion could turn into a blog post of its own.
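The generated deftype makes the suggested check cheap. A small sketch, using the FISH type from above (exactly which package exports the type, and whether it also admits the undefined keywords, depends on your generated code, so adjust accordingly):

(typep :salmon 'fish)   ; => T
(typep :tilapia 'fish)  ; => NIL, so handle it before trying to serialize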

Conclusion:

The enumeration fields in cl-protobufs are fully proto2 and proto3 compliant. To do this we had to remove type checking. As a consumer, it is suggested that you always type check and handle undefined enumeration values in your usage of protocol buffer enums. We give you a deftype to easily check.

I hope you have enjoyed this deep dive into cl-protobuf enums. We strive to remove as many gotchas as possible.


Thanks to Ron and Carl for the continual copy edits and improvements!

Week n With Two Babies

The last few weeks have been about 2 little kids and a very tired Wenwen. In the last post I discussed how Lyra felt sad no longer being the only child. I thought I'd give an update on how life is going. Again, there is no programming in this post; I haven't done any programming in the last few weeks, and only a smidgeon of math.

Lyra is feeling much better about being an older sister. She watches for her sister to cry and will run up to me and say "sister's crying". She has even helped change a diaper on occasion. She will come up and hug Faye.

That being said, she does miss having more one-on-one time. When Wenwen finishes teaching, she has to feed Faye. Lyra will come up to me and say “Dada, hold sister”. When Wenwen brings her to bed she will say “Dada tired” so I will join them (despite having to hold Faye instead). She is an amazing sister!

Faye is still in the Eat, Sleep, Poop phase of baby-dom. She has gas, cries, and does all of the normal baby stuff. She still seems like a very quiet newborn, but as everyone tells me, she’s her own self. 

Two weekends ago my mom and aunt Berta came to help us move out of our old condo. We will (someday?) be moving into our new house, but the court is taking forever to okay the sale. Beware probate!

Finally, a giant thanks to my (and Wenwen’s) advisor: Don Hadwin. He came over and stood out the window to talk to Lyra and Faye. He is also letting us stay with him as we attempt to buy a new house. I’m continually indebted and thankful to you. I hope we can do some math soon!

Welcoming Faye, Helping Lyra

Greetings everyone.

This will be the first post since Faye's birth. I have one more Proto Cache post coming; in fact the code is already up on Github at head, but I don't have time to write it yet. Technical posts take quite a while to write, then get copy edits and code fixes. It will come, someday…

Faye is doing great. She’s much quieter than Lyra was at her age. As a parent, you tend to worry about everything, but that’s just life as a parent. You truly forget how small newborns are.

I get worried about Lyra. When Wenjing was at the hospital Lyra kept wondering when Mama was going to come home. She said she was excited to see Faye, but she had no idea the change that would happen when we got to the hospital. That first meeting was hard.

Due to Covid, I was able to enter the hospital once, and since I had Lyra at home I was only there for around 5 hours. When Lyra and I got to the hospital, I carried Lyra inside as she was spooked to be at the hospital. When we got to Faye, I held Faye, put her in the car seat, and sang a quick lullaby. Lyra's face started scrunching up, and tears quickly followed. It's tragic seeing a toddler about to cry, knowing what's coming, but having no way to stop the tears.

Lyra's gotten over the immediate shock. She helps us change diapers and talks to Faye. She seems happy to be an older sister. On the other hand, she gets jealous. Wenjing is allowed to be with Faye; Lyra is fine with that. But Lyra seems to think Dada is hers.

Lyra never asked for this. Her small world has changed immensely, and unlike adults she has no experience handling change. Toddlers have a hard time controlling their emotions, everything is shown on their sleeve. Even with this, she already loves her little sister.