
Netlify + Cloudflare = Crazy Delicious

January 29, 2020 » Geek

At this year’s NEJS Conf, Netlify’s Phil Hawksworth gave a highly entertaining talk about mostly-static content as a new normal path. His demo was a great little app that showed the possibilities of mixing static rendering with a dash of just-in-time functions as a service.

Previously I had not been a big fan of the JAMstack concept; it hadn’t clicked for me and seemed very single-page-app oriented. This demo piqued my interest, so I decided to move cultofthepartyparrot.com over to try out its automated deployment system.

It’s magic.

Seriously, it’s remarkable how well it worked. CotPP has a custom build script and a lot of weird dependencies, and it all built with minimal intervention.

I set the GitHub repo as its source, pointed the build command at my custom script, and let it rip. The first build failed because I messed up the build command name. The second build had a live version for me to view in 50 seconds. That’s pretty great for installing all the tools, generating the site, and deploying it. I was hooked.

I promptly pointed the domain over, got SSL issued, and considered it done. And did I mention it creates deploys for all PRs? I could finally preview new parrots in situ before a merge. Pretty amazing for a free product.

Uh-oh.

The Cult of the Party Parrot has been hosted on a shared Dreamhost server since its creation. I have Google Analytics on there, so I had some idea of the amount of traffic that it received, but never paid much attention. Turns out, it uses a lot of bandwidth.

Within five days I had used 50GB of traffic on Netlify. That means I’d be paying ~$60 a month for hosting, which isn’t viable for me, as CotPP doesn’t have ads or anything. I needed a way to either keep using Netlify, or recreate the Netlify experience (with the PR deploys) in some other toolkit.

My first instinct was to just go back to Dreamhost and figure out the automatic deploys using an existing tool like GCP CloudBuild. But then, the ever reliable and always clever Ben Stevinson suggested that I put Cloudflare in front of it, and speed it up in the process.

That sounded like a good idea to me. If I could get Cloudflare to catch the bulk of the bandwidth, then the 100GB cap of the Netlify free plan should be plenty to host the PR deploys, and I could have the best of both worlds with the least amount of effort.

Putting Cloudflare in front of Netlify works just fine. I transferred DNS to Cloudflare, and then had it CNAME flatten to the Netlify origin. TLS was easy, and setting up the Cloudflare origin certificate on Netlify was simple too. Finally, I added a page rule that tells the Cloudflare edge to cache everything for a month. Bandwidth problem solved.

But there was one last issue. Every time Netlify did an automatic deploy for me after I merged a pull request, I would have to manually go in and flush the cache on Cloudflare. That’s no good. The solution was to connect a Netlify deploy notification webhook to a GCP cloud function which clears the cache via the Cloudflare API.

Netlify deploy webhook configuration modal.

The documentation on the Netlify webhook is a little light, so I ran a few deploys and just printed out the contents of the webhook body to find the keys I needed.

All I really cared about there was branch, and the ID could be useful for tracing deploys to flushes if something went awry. So with that in hand I started putting together my function. The struct for unpacking the webhook is pretty small, and there’s nothing novel going on there.
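
The struct plus the opening of the function look roughly like this. It’s a sketch: the function name and the JSON field names here are my assumptions, and only branch and id matter for this.

    // (uses bytes, encoding/json, io and net/http from the standard library)

    // deployNotification holds the couple of fields we care about from the
    // Netlify deploy webhook body.
    type deployNotification struct {
        ID     string `json:"id"`
        Branch string `json:"branch"`
    }

    func PurgeCache(w http.ResponseWriter, r *http.Request) {
        // Tee the request body into a buffer while decoding it, so the raw
        // bytes are still around for the signature check later.
        var body bytes.Buffer

        var notification deployNotification
        if err := json.NewDecoder(io.TeeReader(r.Body, &body)).Decode(&notification); err != nil {
            http.Error(w, "could not decode body", http.StatusBadRequest)
            return
        }
        // ...
    }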

You may note that I used an io.TeeReader there to duplicate the request body reader into a buffer. That buffer gets used later, when validating the JWT that Netlify sends.

Once unpacked, we can check that this update is for the master branch. If we flushed on every PR deploy it would be a waste of effort, so we only proceed for a merge into master.
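
That check is just a couple of lines in the handler sketched above:

    // Only a production deploy (a merge into master) should purge the cache;
    // deploy previews for PRs can come and go without touching Cloudflare.
    if notification.Branch != "master" {
        w.WriteHeader(http.StatusOK)
        return
    }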

Now we want to verify that this request really did originate from Netlify. For this use case it probably doesn’t matter that much: who is going to take the time to figure out the details of my cloud function and launch a purging spree to run me out of Netlify bandwidth? But it’s easy to implement, and we can learn something along the way, so why not!

A Brief Aside About JWT

A JWT is a “JSON Web Token”. It’s a three-part string consisting of a header, a payload, and a signature. The header and payload are base64-encoded JSON, and the signature is an HMAC of the other two sections, again base64 encoded. The header tells you things about the token itself, such as the algorithm used to create the signature. The payload is arbitrary data, though there are standardized fields, or “claims” in JWT parlance. You can learn more about it at jwt.io, but that should be enough to get us through this post.

On to the validation!

Since JWT is a well known standard, we have several packages to pick from for validating them. I chose github.com/gbrlsnchs/jwt, which had the API I liked best.

First we need to define the payload we expect from Netlify. Their payload is very simple with just two claims, as their docs say:

We include the following fields in the signature’s data section:

  • iss: always sent with value netlify, identifying the source of the request
  • sha256: the hexadecimal representation of the generated payload’s SHA256
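
A sketch of that payload type, assuming the v3 API of the package (embedding jwt.Payload brings in the standard claims, including iss):

    // netlifyPayload is the claim set Netlify puts in its webhook JWT:
    // the standard registered claims plus a sha256 of the body.
    type netlifyPayload struct {
        jwt.Payload
        SHA256 string `json:"sha256"`
    }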

Next we send the body of the request, the JWT, and an empty struct to jwt.Verify.
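
Roughly like so; the X-Webhook-Signature header name and the jwt.Verify signature reflect my reading of the Netlify and gbrlsnchs/jwt v3 docs, so treat this as a sketch:

    token := []byte(r.Header.Get("X-Webhook-Signature"))

    var payload netlifyPayload
    if _, err := jwt.Verify(token, hs, &payload); err != nil || payload.Issuer != "netlify" {
        http.Error(w, "invalid signature", http.StatusForbidden)
        return
    }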

The variable hs here is an instance of an HMAC hashing function, specifically a jwt.HS256, since the Netlify hook always uses that algorithm to sign its JWTs. That is initialized elsewhere, using a secret pulled from the environment.

Once the JWT is validated and the payload extracted from it, we hash the contents of the request body with SHA256. Remember that io.TeeReader? This is what we stashed the body for. We compare the hash we derived to the one in the payload to ensure the body was not tampered with in-flight.
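
That comparison is only a few lines:

    // Hash the body we stashed with the io.TeeReader and compare it to the
    // sha256 claim, so we know the body wasn't swapped out on the way here.
    sum := sha256.Sum256(body.Bytes())
    if hex.EncodeToString(sum[:]) != payload.SHA256 {
        http.Error(w, "body hash mismatch", http.StatusForbidden)
        return
    }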

Once everything checks out, we make the request to Cloudflare to purge the whole zone. This is an API method available on all Cloudflare plans: Purge All Files.
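
That call is just a POST to the purge_cache endpoint for the zone. A minimal version, assuming the zone ID and an API token live in the environment:

    // purgeZone asks Cloudflare to purge every file in the zone.
    func purgeZone() error {
        url := fmt.Sprintf(
            "https://api.cloudflare.com/client/v4/zones/%s/purge_cache",
            os.Getenv("CLOUDFLARE_ZONE_ID"),
        )

        req, err := http.NewRequest(http.MethodPost, url, strings.NewReader(`{"purge_everything":true}`))
        if err != nil {
            return err
        }
        req.Header.Set("Authorization", "Bearer "+os.Getenv("CLOUDFLARE_API_TOKEN"))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("purge failed: %s", resp.Status)
        }
        return nil
    }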

Then we’re done! We just have to pass the status of that API call back as our own response status to bring it all together.

122.5GB cached bandwidth in a month, 2.37GB uncached.

Overall I’m quite happy with this solution. Perhaps it’s a bit over-engineered, but it’s saving a ton of money that I don’t have to burn on CotPP, and I don’t have to move it back to Dreamhost either.

You can get the full code for this in the CotPP repo on GitHub.

Party Gopher!

June 13, 2018 » Geek

The Go Slack has a cute little dancing Gopher that appears to have come from Egon Elbre. I love it!

Dancing Gopher

This little dancing Gopher made me think of Party Parrot, so I wanted to parrot-ize him. Normally I might just open up Gimp and start editing, but this is the Go Gopher; we can do better than that!

My plan was to use Go’s image packages to edit each frame, replacing the blue with the correct parrot color for that frame by walking over the pixels.

Once I got into the package docs, however, I realized that since GIFs are paletted, I could just tweak the palette on each frame and be done. Much simpler. Let’s get into it then, shall we?

Colors!

First things first, I needed to declare the party parrot frame colors, and the light and dark blue that the dancing gopher uses. I grabbed the blues with Sip, and I already had the parrot colors on hand. Sure, I could precompute these and declare them, but let’s keep it interesting.

Note that I have a DarkParrotColors slice as well; this is for the corresponding dark blue replacements. I generate these with darken, which I’ll show in a moment.

Also notable is hexToColor, which just unpacks an HTML hex RGB representation into a color.Color.
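
Roughly what those declarations look like; the hex values here are placeholders rather than the real palette, and darken shows up next:

    // hexToColor unpacks an HTML style hex string like "FF6B6B" into a color.Color.
    func hexToColor(s string) color.Color {
        v, _ := strconv.ParseUint(s, 16, 32)
        return color.RGBA{R: uint8(v >> 16), G: uint8(v >> 8), B: uint8(v), A: 0xFF}
    }

    var (
        // The light and dark blues the dancing gopher uses (placeholder values,
        // grab the real ones with a color picker).
        GopherBlue     = hexToColor("8BD0FF")
        DarkGopherBlue = hexToColor("5F97BC")

        // One color per frame of the party parrot (abridged, placeholder values).
        ParrotColors = []color.Color{
            hexToColor("FF6B6B"),
            hexToColor("FF6BB5"),
            hexToColor("B56BFF"),
            // ...
        }

        // The corresponding dark replacements, generated from ParrotColors.
        DarkParrotColors = make([]color.Color, len(ParrotColors))
    )

    func init() {
        for i, c := range ParrotColors {
            DarkParrotColors[i] = darken(c)
        }
    }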

Here is the darken function, pretty simple.
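
Something like this; the exact scaling factor is a guess:

    // darken scales each channel down to produce the dark counterpart
    // of a parrot color.
    func darken(c color.Color) color.Color {
        r, g, b, a := c.RGBA() // 16 bit channels
        const factor = 0.75    // assumption, tweak to taste
        return color.RGBA{
            R: uint8(float64(r>>8) * factor),
            G: uint8(float64(g>>8) * factor),
            B: uint8(float64(b>>8) * factor),
            A: uint8(a >> 8),
        }
    }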

Now I need to pull in the gif and decode it, all very boilerplate.
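
Something along these lines, with the file name assumed:

    f, err := os.Open("dancing-gopher.gif")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // DecodeAll gives us every frame plus its palette.
    g, err := gif.DecodeAll(f)
    if err != nil {
        log.Fatal(err)
    }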

After that, I iterate over the frames and edit the palettes.
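
The loop itself; sameColor is a little helper I’ve added here to compare palette entries, and the original may match them differently:

    // sameColor reports whether two colors have identical RGBA values.
    func sameColor(a, b color.Color) bool {
        ar, ag, ab, aa := a.RGBA()
        br, bg, bb, ba := b.RGBA()
        return ar == br && ag == bg && ab == bb && aa == ba
    }

    // Swap the gopher blues in each frame's palette for that frame's
    // parrot color, and the dark blue for the darkened version.
    for i, frame := range g.Image {
        parrot := ParrotColors[i%len(ParrotColors)]
        darkParrot := DarkParrotColors[i%len(DarkParrotColors)]

        for j, c := range frame.Palette {
            switch {
            case sameColor(c, GopherBlue):
                frame.Palette[j] = parrot
            case sameColor(c, DarkGopherBlue):
                frame.Palette[j] = darkParrot
            }
        }
    }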

Lastly, more boilerplate to write it out to disk.
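
And the write, mirroring the decode:

    out, err := os.Create("party-gopher.gif")
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    if err := gif.EncodeAll(out, g); err != nil {
        log.Fatal(err)
    }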

Party Gopher

You can grab the code on Github, and thanks again to Egon Elbre for the excellent original gif!

Chicken Cam: Incubator Edition

March 4, 2018 » Geek, Life

It’s been over a year since we’ve had chickens and we’ve missed them, so this Christmas we got Lizzy and Charlotte an incubator so that we could try hatching some this spring.

When we went to purchase eggs, we found that you could most easily get them 10 at a time from the hatchery we have used in the past, Murray McMurray. Since the incubator we got the girls could only hold seven, we would need something for the other three. Some searching found that you could use a styrofoam cooler and a lamp to create a makeshift incubator, so I planned on that.

Once I had a plan to create an incubator, I knew I would have to overcomplicate things. Four years ago I built a webcam for our chicks so I figured I would do that this time too. Also, just setting a lamp and thermometer in and hoping for the best seemed like a potential waste of good eggs, so I wanted to monitor the temperature and humidity, and regulate them.

My initial design was a Raspberry Pi connected to a cheap DHT11 temperature and humidity sensor, controlling a relay that could turn the light on and off. All of it would be hooked up through a PID controller to keep the temperatures right where we want them. Eventually, I added a thermocouple with a MAX6675 for more accurate temperature readings.

Raspberry Pi, Relay and a mess of wires.
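
For flavor, here is a minimal sketch of the PID idea; the gains are arbitrary placeholders, not the values I actually ran:

    // pid is a bare-bones PID controller.
    type pid struct {
        kp, ki, kd float64
        integral   float64
        lastErr    float64
    }

    // update returns the controller output for the current reading.
    func (p *pid) update(setpoint, measured, dt float64) float64 {
        err := setpoint - measured
        p.integral += err * dt
        derivative := (err - p.lastErr) / dt
        p.lastErr = err
        return p.kp*err + p.ki*p.integral + p.kd*derivative
    }

    // The lamp relay only knows on and off, so the output just gets
    // thresholded: a positive output means we want heat.
    func lampShouldBeOn(output float64) bool {
        return output > 0
    }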

The server side would be designed similarly to the previous chicken cam, except written in Go. The stats would be tracked in InfluxDB and Grafana would be used for viewing them.

After I got all the parts I did a little testing, then soldered things up and tested it to see how it ran.

Initially I wrote everything in Go, but the DHT11 reading was very spotty. Sometimes it would respond once every few seconds, and sometimes it would go a minute or more failing to read. I wired on a second DHT11 and tried reading from both, but I didn’t get that much better performance.

Eventually I tried them from the Adafruit Python library and had much better luck, so I decided to just read those from Python and send them to my main Go application for consumption. I still have trouble with the DHT11s, but I suspect it’s my fault more than the sensors’ fault.

My next issue was that the readings were extremely jittery, varying by degrees from one second to the next, so I collected readings in batches of 5 seconds and then averaged them. That smoothed it out enough that graphs looked reasonable.
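
A rough sketch of that smoothing, assuming the raw readings arrive on a channel:

    // average collects readings for a window and emits their mean,
    // smoothing out the jitter from the DHT11.
    func average(readings <-chan float64, window time.Duration, out chan<- float64) {
        ticker := time.NewTicker(window)
        defer ticker.Stop()

        var sum float64
        var count int
        for {
            select {
            case r := <-readings:
                sum += r
                count++
            case <-ticker.C:
                if count > 0 {
                    out <- sum / float64(count)
                    sum, count = 0, 0
                }
            }
        }
    }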

On. Off. On. Off. On. Off.

Temperature was now well regulated, but the air wasn’t humid enough. I switched to a sponge and found I could manage it much more easily that way. I briefly tried a 40W bulb, thinking I could spend more time with the lamp off, but temperatures still plunged at the same rate when the light was off, so I mostly just created quicker cycles.

After putting the 25W bulb back in, I still wanted a longer, smoother cycle, so I wrapped up a brick (for cleanliness) and stuck that in there. That got me longer cycles with better recovery at the bottom; it didn’t get too cold before the lamp came back on. Some slight improvements to the seal of my lid helped as well. I had trouble with condensation and too much humidity, but some vent holes and better water management took care of that.

Before the brick.

After the brick.

For the server side, I mostly duplicated the code from the previous Chicken cam, but in Go. Then I used the InfluxDB library to get the most recent temperature and humidity readings for display.
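
For the curious, a sketch of that query with the InfluxDB v1 Go client; the database and measurement names here are made up:

    // import client "github.com/influxdata/influxdb/client/v2"

    c, err := client.NewHTTPClient(client.HTTPConfig{Addr: "http://localhost:8086"})
    if err != nil {
        log.Fatal(err)
    }
    defer c.Close()

    q := client.NewQuery(`SELECT last("temperature"), last("humidity") FROM "incubator"`, "chickencam", "")
    resp, err := c.Query(q)
    if err != nil || resp.Error() != nil {
        log.Println("influx query failed")
        return
    }

    // The most recent values come back as the first row of the first series.
    if len(resp.Results) > 0 && len(resp.Results[0].Series) > 0 {
        latest := resp.Results[0].Series[0].Values[0]
        log.Printf("latest readings: %v", latest)
    }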

At this point, I felt ready for the eggs, which was good because they had arrived! We placed them in the incubator and we’re just waiting now. On day 8 we candled them with a homebuilt lamp, i.e. a cardboard box with a hole cut in it.

Candling

Things seem to be progressing well so far, so here’s hoping something hatches!

gpdmp-to-slack

September 14, 2017 » Geek

When Rdio shut down, I tried a few services before landing on Google Play. It’s not perfect, but it’s good enough and it’s better than Spotify. One thing that seemed lacking was a desktop application, but that need was neatly filled by the excellent GPDMP.

One lesser known feature of GPDMP is the JSON API, which manifests as a simple JSON file that the application updates with information about the playback. When Slack announced custom statuses, I thought back to the days of instant messaging and the integrations that set your status to the song you were playing.

Demo

Implementing the link from GPDMP to Slack was, in all, a fairly simple matter. First, I looked at the JSON file to get a feel for the structure.

Short and sweet! Now to represent that in Go for decoding.

I didn’t need to represent all the elements, but it’s a small structure so I went ahead with it. I didn’t embed Song because I wanted to write an equality test for that struct on its own. That will get used later on.
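
A sketch of those structs; the field names are from memory of GPDMP’s playback JSON, so double check them against your own file:

    // Song is its own type so it can be compared on its own.
    type Song struct {
        Title    string `json:"title"`
        Artist   string `json:"artist"`
        Album    string `json:"album"`
        AlbumArt string `json:"albumArt"`
    }

    // State mirrors the parts of the playback file we care about.
    type State struct {
        Playing bool `json:"playing"`
        Song    Song `json:"song"`
        Time    struct {
            Current int `json:"current"`
            Total   int `json:"total"`
        } `json:"time"`
    }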

Next, I needed a way to monitor that file for updates, which GPDMP does fairly often. fsnotify was the obvious choice, and an easy drop-in. I added a time-based debounce so that we don’t read the file on every update, which would be excessive. This will delay updates by up to whatever the debounce is set to, but I’m okay with that trade-off.

Inside that debounce we open the file, decode it into a new struct and, if it’s playing, pass it off to a channel.
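
Here’s a sketch of that watcher with the debounce folded in; the shape differs a bit from the real code in the repo:

    // watch pushes the current Song onto updates whenever GPDMP rewrites
    // the playback file, at most once per debounce window.
    func watch(path string, updates chan<- Song, debounce time.Duration) error {
        watcher, err := fsnotify.NewWatcher()
        if err != nil {
            return err
        }
        defer watcher.Close()

        if err := watcher.Add(path); err != nil {
            return err
        }

        var last time.Time
        for {
            select {
            case event := <-watcher.Events:
                if event.Op&fsnotify.Write == 0 || time.Since(last) < debounce {
                    continue
                }
                last = time.Now()

                // Open and decode the file, and if something is playing,
                // hand the song off for the Slack side to deal with.
                f, err := os.Open(path)
                if err != nil {
                    continue
                }
                var state State
                err = json.NewDecoder(f).Decode(&state)
                f.Close()
                if err == nil && state.Playing {
                    updates <- state.Song
                }
            case err := <-watcher.Errors:
                log.Println("watch error:", err)
            }
        }
    }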

So, that’s it for getting updates from GPDMP! Less than 100 lines, formatted. Now I needed to watch that update channel and post changes in status to Slack.

I found an excellent Slack API client on a different project, so I grabbed that. I started by building a little struct to hold my client and state.
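
Something along these lines, assuming the nlopes/slack client:

    // statusUpdater wraps the Slack client along with the custom status the
    // user had set before we started, so it can be restored later.
    type statusUpdater struct {
        client        *slack.Client
        originalText  string
        originalEmoji string
    }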

Then, during client initialization, we get the current custom status for the user and save it. This way, when you pause your music, it will revert to whatever you had set before.
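
A sketch of that initialization; GetUserProfile and its signature are my recollection of the client’s API at the time, so check it against the version you have:

    // newStatusUpdater saves the user's current custom status so that pausing
    // the music reverts to it instead of clearing everything.
    func newStatusUpdater(token, userID string) (*statusUpdater, error) {
        client := slack.New(token)

        profile, err := client.GetUserProfile(userID, false)
        if err != nil {
            return nil, err
        }

        return &statusUpdater{
            client:        client,
            originalText:  profile.StatusText,
            originalEmoji: profile.StatusEmoji,
        }, nil
    }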

Once it is initialized, we just need to range over our updates channel and post to Slack when the song changes. We set a timeout because the watcher won’t send updates when the song is paused, or if the app quits updating the file (i.e. you quit GPDMP). By putting the logic for the timeout on this side, we have less to pass over the channel, and we can revert properly if something goes awry in the API-reading goroutine.
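
The loop itself, as a sketch; SetUserCustomStatus is the call I’d expect to use here, and the select with time.After handles the revert:

    // run mirrors song updates into the Slack custom status, and restores
    // the original status if no update arrives within the timeout.
    func (s *statusUpdater) run(updates <-chan Song, timeout time.Duration) {
        var current Song
        for {
            select {
            case song := <-updates:
                // Song is all strings, so plain == works for change detection.
                if song == current {
                    continue
                }
                current = song
                status := fmt.Sprintf("%s - %s", song.Artist, song.Title)
                if err := s.client.SetUserCustomStatus(status, ":musical_note:"); err != nil {
                    log.Println("set status:", err)
                }
            case <-time.After(timeout):
                if current == (Song{}) {
                    continue
                }
                current = Song{}
                if err := s.client.SetUserCustomStatus(s.originalText, s.originalEmoji); err != nil {
                    log.Println("revert status:", err)
                }
            }
        }
    }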

A little bit of glue in main and it’s ready!

You can browse the source and grab your copy at github.com/jmhobbs/gpdmp-to-slack