
Netlify + Cloudflare = Crazy Delicious

January 29, 2020 » Geek

At this year's NEJS Conf, Netlify's Phil Hawksworth gave a highly entertaining talk about mostly-static content as a new normal path. His demo was a great little app that showed the possibilities of mixing static rendering with a dash of just-in-time functions as a service.

Previously I had not been a big fan of the JAMstack concept; it hadn't clicked for me and seemed very single-page-app oriented. This demo piqued my interest, so I decided to move cultofthepartyparrot.com over to try out its automated deployment system.

It’s magic.

Seriously, it’s remarkable how well it worked. CotPP has a custom build script and a lot of weird dependencies, and it all built with minimal intervention.

I set the GitHub repo as its source, pointed the build command at my custom script, and let it rip. The first build failed because I messed up the build command name. The second build had a live version for me to view in 50 seconds. That's pretty great for installing all the tools, generating the site, and deploying it. I was hooked.

I promptly pointed the domain over, got SSL issued, and considered it done. And did I mention it creates deploys for all PRs? I could finally preview new parrots in situ before a merge. Pretty amazing for a free product.


The Cult of the Party Parrot has been hosted on a shared Dreamhost server since its creation. I have Google Analytics on there, so I had some idea of the amount of traffic that it received, but never paid much attention. Turns out, it uses a lot of bandwidth.

Within five days I had used 50GB of traffic on Netlify, which extrapolates to roughly 300GB a month. That means I'd be paying ~$60 a month for hosting, which isn't viable for me, as CotPP doesn't have ads or anything. I needed a way to either keep using Netlify, or recreate the Netlify experience (with the PR deploys) in some other toolkit.

My first instinct was to just go back to Dreamhost and figure out automatic deploys using an existing tool like GCP Cloud Build. But then the ever-reliable and always-clever Ben Stevinson suggested that I put Cloudflare in front of it, and speed it up in the process.

That sounded like a good idea to me. If I could get Cloudflare to catch the bulk of the bandwidth, then the 100GB cap of the Netlify free plan should be plenty to host the PR deploys, and I could have the best of both worlds with the least amount of effort.

Putting Cloudflare in front of Netlify works just fine. I transferred DNS to Cloudflare, and then had it CNAME flatten to the Netlify origin. TLS was easy, and setting up the Cloudflare origin certificate on Netlify was simple too. Finally, I added a page rule that tells the Cloudflare edge to cache everything for a month. Bandwidth problem solved.

But, there was one last issue. Every time Netlify did an automatic deploy after I merged a pull request, I would have to manually go in and flush the cache on Cloudflare. That's no good. The solution was to connect a Netlify deploy notification webhook to a GCP cloud function which clears the cache via the Cloudflare API.

Netlify deploy webhook configuration modal.

The documentation on the Netlify webhook is a little light, so I ran a few deploys and just printed out the contents to find the keys I needed. Here's an abridged example of what I get in the webhook body.

    "admin_url": "https://app.netlify.com/sites/cultofthepartyparrot",
    "available_functions": [],
    "branch": "master",
    "build_id": "00000000e8ffde017674b0b2",
    "commit_ref": null,
    "commit_url": null,
    "committer": null,
    "context": "production",
    "created_at": "2019-08-26T21:04:48.294Z",
    "updated_at": "2019-08-26T21:05:31.385Z",
    "url": "https://cultofthepartyparrot.com",
    "user_id": "000000011111112222222333"

All I really cared about there was branch, though the ID could be useful for tracing deploys to flushes if something went awry. So with that in hand I started putting together my function. The struct for unpacking the webhook is pretty small, and there's nothing novel going on there.

type netlifyWebhook struct {
	ID     string `json:"id"`
	Branch string `json:"branch"`
}

func PurgeCloudFlare(w http.ResponseWriter, r *http.Request) {
	var bodyBuf bytes.Buffer
	tee := io.TeeReader(r.Body, &bodyBuf)
	defer r.Body.Close()

	dec := json.NewDecoder(tee)

	var wh netlifyWebhook
	err := dec.Decode(&wh)
	if err != nil {
		log.Println("error decoding webhook body:", err)
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
You may note that I used an io.TeeReader there to duplicate the request body into a buffer. That buffer is used later, when validating the JWT that Netlify sends; more on that below.

Once unpacked, we can check that this update is for the master branch before we proceed. Flushing on every PR deploy would be wasted effort, so we only proceed for a merge into master.

	if wh.Branch != "master" {
		w.Write([]byte("Ok. Thanks."))
		return
	}

Now we want to verify that this request really did originate from Netlify. For this use case it probably doesn't matter much: who is going to take the time to figure out the details of my cloud function and launch a purging spree to run me out of Netlify bandwidth? But it's easy to implement, and we can learn something along the way, so why not!

	jwt := r.Header.Get("X-Webhook-Signature")
	if jwt == "" {
		http.Error(w, "Forbidden", http.StatusForbidden)
		return
	}

	if !verifyRequest([]byte(jwt), bodyBuf.Bytes()) {
		http.Error(w, "Forbidden", http.StatusForbidden)
		return
	}

A Brief Aside About JWT

A JWT is a “JSON Web Token”. It's a three-part string consisting of a header, a payload, and a signature. The header and payload are base64-encoded JSON, and the signature is (in our case) an HMAC of the other two sections, again base64 encoded. The header tells you things about the token itself, such as the algorithm used to create the signature. The payload is arbitrary data, though there are standardized fields, or “claims” in JWT parlance. You can learn more about it at jwt.io, but that should be enough to get us through this post.

On to the validation!

Since JWT is a well known standard, we have several packages to pick from for validating them. I chose github.com/gbrlsnchs/jwt, which had the API I liked best.

First we need to define the payload we expect from Netlify. Their payload is very simple with just two claims, as their docs say:

We include the following fields in the signature’s data section:

  • iss: always sent with value netlify, identifying the source of the request
  • sha256: the hexadecimal representation of the generated payload’s SHA256

type netlifyPayload struct {
	ISS    string `json:"iss"`
	Sha256 string `json:"sha256"`
}

Next we hand the JWT and the buffered request body to a verifyRequest helper, which passes the token, our HMAC algorithm, and an empty payload struct to jwt.Verify.

func verifyRequest(token []byte, body []byte) bool {
	var pl netlifyPayload
	_, err := jwt.Verify(token, hs, &pl, jwt.ValidateHeader)
	if err != nil {
		return false
	}

The variable hs here is an instance of an HMAC hashing function, specifically a jwt.HS256, since the Netlify hook always uses that algorithm to sign its JWTs. It is initialized elsewhere, using a secret pulled from the environment.

func init() {
	hs = jwt.NewHS256([]byte(os.Getenv("JWT_SECRET")))
}

Once the JWT is validated and the payload extracted from it, we hash the contents of the request body with SHA256. Remember that io.TeeReader? This is what we stashed the body for. We compare the hash we derived to the one in the payload to ensure the body was not tampered with in flight.

	h := sha256.New()
	h.Write(body)

	return pl.Sha256 == fmt.Sprintf("%x", h.Sum(nil))
}

Once everything checks out, we make the request to Cloudflare to purge the whole zone. This is an API method available on all Cloudflare plans: Purge All Files.

	url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/purge_cache", cloudFlareZone)
	req, err := http.NewRequest("POST", url, bytes.NewBuffer([]byte(`{"purge_everything":true}`)))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", cloudFlareAPIToken))
	req.Header.Set("Content-Type", "application/json")

	resp, err := cloudflareHTTPClient.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

Then we're done! We just have to convey the status of the API call as our response status to bring it all together.


	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		body, _ := ioutil.ReadAll(resp.Body)
		log.Println("error purging cache:", string(body))
		http.Error(w, "error purging cache", http.StatusInternalServerError)
		return
	}

	w.Write([]byte("Ok. Thanks."))
}

122.5GB cached bandwidth in a month, 2.37GB uncached.

Overall I'm quite happy with this solution. Perhaps it's a bit over-engineered, but it's saving a ton of money I don't have to burn on CotPP, and I don't have to move it back to Dreamhost either.

You can get the full code for this in the CotPP repo on GitHub.

Party Gopher!

June 13, 2018 » Geek

The Go Slack has a cute little dancing Gopher that appears to have come from Egon Elbre. I love it!

Dancing Gopher

This little dancing Gopher made me think of Party Parrot, so I wanted to parrot-ize him. Normally I might just open up GIMP and start editing, but this is the Go Gopher; we can do better than that!

My plan was to use Go’s image packages to edit each frame and replace the blue with the correct parrot color for that frame by walking over the pixels in each frame.

Once I got into the package docs, however, I realized that since GIFs are paletted, I can just tweak the palette on each frame and be done. Much simpler. Let's get into it then, shall we?


First things first, I needed to declare the party parrot frame colors, and the light and dark blue that the dancing gopher uses. I grabbed the blues with Sip, and I already had the parrot colors on hand. Sure, I could precompute these and declare them statically, but let's keep it interesting.

Note that I have a DarkParrotColors slice as well, this is for the corresponding dark blue replacements. I generate these with darken which I’ll show in a moment.

var (
	ParrotColors     []color.Color
	DarkParrotColors []color.Color
	LightGopherBlue  color.Color
	DarkGopherBlue   color.Color
)

func init() {
	var err error

	for _, s := range []string{
		// (parrot frame hex colors elided)
	} {
		c, err := hexToColor(s)
		if err != nil {
			panic(err)
		}
		ParrotColors = append(ParrotColors, c)
		DarkParrotColors = append(DarkParrotColors, darken(c))
	}

	LightGopherBlue, err = hexToColor("8BD0FF")
	if err != nil {
		panic(err)
	}

	DarkGopherBlue, err = hexToColor("82C2EE")
	if err != nil {
		panic(err)
	}
}

Also notable is hexToColor, which just unpacks an HTML hex RGB representation into a color.Color.

func hexToColor(hex string) (color.Color, error) {
	c := color.RGBA{0, 0, 0, 255}

	r, err := strconv.ParseInt(hex[0:2], 16, 16)
	if err != nil {
		return c, err
	}

	g, err := strconv.ParseInt(hex[2:4], 16, 16)
	if err != nil {
		return c, err
	}

	b, err := strconv.ParseInt(hex[4:6], 16, 16)
	if err != nil {
		return c, err
	}

	c.R = uint8(r)
	c.G = uint8(g)
	c.B = uint8(b)

	return c, nil
}

Here is the darken function; pretty simple.

func darken(c color.Color) color.Color {
	// Note: RGBA() returns 16-bit channels; the uint8 conversions below
	// keep the low byte, which matches the 8-bit values from hexToColor.
	r, g, b, a := c.RGBA()
	r = r - 15
	g = g - 15
	b = b - 15
	return color.RGBA{uint8(r), uint8(g), uint8(b), uint8(a)}
}

Now I need to pull in the gif and decode it, all very boilerplate.

	// Open the dancing gopher gif
	f, err := os.Open("dancing-gopher.gif")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Decode the gif so we can edit it
	gopher, err := gif.DecodeAll(f)
	if err != nil {
		log.Fatal(err)
	}

After that, I iterate over the frames and edit the palettes.

	for i, frame := range gopher.Image {
		lbi := frame.Palette.Index(LightGopherBlue)
		dbi := frame.Palette.Index(DarkGopherBlue)

		frame.Palette[lbi] = ParrotColors[i%len(ParrotColors)]
		frame.Palette[dbi] = DarkParrotColors[i%len(DarkParrotColors)]
	}

Lastly, more boilerplate to write it out to disk.

	o, err := os.OpenFile("party-gopher.gif", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		log.Fatal(err)
	}
	defer o.Close()

	if err := gif.EncodeAll(o, gopher); err != nil {
		log.Fatal(err)
	}

Party Gopher

You can grab the code on GitHub, and thanks again to Egon Elbre for the excellent original gif!

Chicken Cam: Incubator Edition

March 4, 2018 » Geek, Life

It's been over a year since we've had chickens, and we've missed them, so this Christmas we got Lizzy and Charlotte an incubator so that we could try hatching some this spring.

When we went to purchase eggs, we found that you could most easily get them 10 at a time from the hatchery we have used in the past, Murray McMurray. Since the incubator we got the girls could only hold seven, we would need something for the other three. Some searching found that you could use a styrofoam cooler and a lamp to create a makeshift incubator, so I planned on that.

Once I had a plan to create an incubator, I knew I would have to overcomplicate things. Four years ago I built a webcam for our chicks so I figured I would do that this time too. Also, just setting a lamp and thermometer in and hoping for the best seemed like a potential waste of good eggs, so I wanted to monitor the temperature and humidity, and regulate them.

My initial design was a Raspberry Pi connected to a cheap DHT11 temperature and humidity sensor, controlling a relay that could turn the light on and off. All of it would be hooked up through a PID controller to keep the temperatures right where we want them. Eventually, I added a thermocouple with a MAX6675 for more accurate temperature readings.
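To illustrate the control loop (this is not my actual code, and the gains are invented for the example), a textbook PID controller compares the measured temperature to a setpoint and produces an output you can turn into a lamp duty cycle:

```go
package main

import "fmt"

// PID is a minimal proportional-integral-derivative controller.
type PID struct {
	Kp, Ki, Kd float64
	integral   float64
	lastErr    float64
}

// Update takes the current error (setpoint minus measurement) and the
// time step in seconds, and returns a control output, e.g. how long to
// keep the lamp on during the next cycle.
func (p *PID) Update(err, dt float64) float64 {
	p.integral += err * dt
	derivative := (err - p.lastErr) / dt
	p.lastErr = err
	return p.Kp*err + p.Ki*p.integral + p.Kd*derivative
}

func main() {
	pid := &PID{Kp: 2.0, Ki: 0.1, Kd: 0.5}
	setpoint := 99.5 // hypothetical target incubation temperature, °F

	temp := 95.0 // current reading
	out := pid.Update(setpoint-temp, 1.0)
	fmt.Printf("output: %.2f\n", out)
	// A positive output means "more lamp"; at or below zero, leave it off.
}
```

The integral term is what keeps the average temperature pinned to the setpoint even though the lamp is a blunt on/off instrument.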

Raspberry Pi, Relay and a mess of wires.

The server side would be designed similarly to the previous chicken cam, except written in Go. The stats would be tracked in InfluxDB and Grafana would be used for viewing them.

After I got all the parts I did a little testing, then soldered everything up and ran it to see how it behaved.

Initially I wrote everything in Go, but reading the DHT11 was very spotty. Sometimes it would respond once every few seconds, and sometimes it would fail to read for a minute or more. I wired on a second DHT11 and tried reading from both, but didn't get much better performance.

Eventually I tried them from the Adafruit Python library and had much better luck, so I decided to just read them from Python and send the values to my main Go application for consumption. I still have trouble with the DHT11s, but I suspect it's my fault more than the sensors'.

My next issue was jitter: the readings would vary by whole degrees from one second to the next, so I collected readings in batches of 5 seconds and then averaged them. That smoothed it out enough that the graphs looked reasonable.
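The smoothing itself is just a windowed mean: buffer whatever readings arrive during the window, then emit their average as a single data point. A minimal sketch of that idea, with made-up sample values:

```go
package main

import "fmt"

// average returns the mean of a batch of sensor readings.
func average(batch []float64) float64 {
	if len(batch) == 0 {
		return 0
	}
	var sum float64
	for _, v := range batch {
		sum += v
	}
	return sum / float64(len(batch))
}

func main() {
	// Jittery one-second readings collected over a five-second window.
	window := []float64{99.1, 101.3, 98.7, 100.2, 99.7}
	fmt.Printf("%.2f\n", average(window)) // prints 99.80
}
```

Each five-second window collapses to one point, so second-to-second noise never reaches the graph.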

On. Off. On. Off. On. Off.

Temperature was now well regulated, but the air wasn't humid enough. I switched to a sponge and found I could manage humidity much more easily that way. I briefly tried a 40W bulb, thinking I could spend more time with the lamp off, but temperatures still plunged at the same rate when the light was off, so it mostly just created quicker cycles.

After putting the 25W bulb back in, I still wanted a longer, smoother cycle, so I wrapped up a brick (for cleanliness) and stuck it in there. That got me longer cycles with better recovery at the bottom; it didn't get too cold before the lamp came back on. Some slight improvements to the seal of my lid helped as well. I had trouble with condensation and too much humidity, but some vent holes and better water management took care of that.

Before the brick.

After the brick.

For the server side, I mostly duplicated the code from the previous chicken cam, but in Go. Then I used the InfluxDB client library to fetch the most recent temperature and humidity readings for display.

At this point, I felt ready for the eggs, which was good because they had arrived! We placed them in the incubator and we're just waiting now. On day 8 we candled them with a home-built lamp, i.e. a cardboard box with a hole cut in it.


Things seem to be progressing well so far, so here’s hoping something hatches!


September 14, 2017 » Geek

When Rdio shut down, I tried a few services before landing on Google Play. It’s not perfect, but it’s good enough and it’s better than Spotify. One thing that seemed lacking was a desktop application, but that need was neatly filled by the excellent GPDMP.

One lesser-known feature of GPDMP is the JSON API, which manifests as a simple JSON file that the application updates with information about the playback. When Slack announced custom statuses, I thought back to the days of instant messaging and the integrations that set your status to the song you were playing.


Implementing the link from GPDMP to Slack was, all in all, a fairly simple matter. First, I looked at the JSON file to get a feel for the structure.

    "playing": true,
    "song": {
        "title": "Freeze Me",
        "artist": "Death From Above 1979",
        "album": "Outrage! Is Now",
        "albumArt": "https://lh3.go...-e100"
    "rating": {
        "liked": false,
        "disliked": false
    "time": {
        "current": 363509,
        "total": 198000
    "songLyrics": null,
    "shuffle": "NO_SHUFFLE",
    "repeat": "NO_REPEAT",
    "volume": 100

Short and sweet! Now to represent that in Go for decoding.

type Song struct {
	Title    string
	Artist   string
	Album    string
	AlbumArt string
}

type PlaybackJSON struct {
	Playing bool
	Song    Song
	Rating  struct {
		Liked    bool
		Disliked bool
	}
	Time struct {
		Current int
		Total   int
	}
	SongLyrics string
	Shuffle    string
	Repeat     string
	Volume     int
}

I didn't need to represent all the elements, but it's a small structure so I went ahead with it. I didn't embed Song because I wanted to write an equality test for that struct on its own. That will get used later on.

func (a Song) Equal(b Song) bool {
	return a.Title == b.Title && a.Artist == b.Artist && a.Album == b.Album
}

Next, I needed a way to monitor that file for updates, which GPDMP makes fairly often. fsnotify was the obvious choice, and an easy drop-in. I added a time-based debounce so that we don't read the file on every update, which would be excessive. This delays updates by up to the debounce duration, but I'm okay with that trade-off.

watcher, err := fsnotify.NewWatcher()
if err != nil {
	log.Fatal(err)
}
defer watcher.Close()

go func() {
	var lastRead time.Time

	for {
		select {
		case event := <-watcher.Events:
			if event.Op&fsnotify.Write == fsnotify.Write {
				if time.Now().After(lastRead.Add(debounce)) {
					lastRead = time.Now()
					// read and decode the file here
				}
			}
		case err := <-watcher.Errors:
			log.Println("error:", err)
		}
	}
}()

err = watcher.Add(gp.Path)
if err != nil {
	log.Fatal(err)
}

Inside that debounce we open the file, decode it into a new struct and, if something is playing, pass the song off to a channel.

f, err := os.Open(event.Name)
if err != nil {
	log.Println("error opening file:", err)
	continue
}

dec := json.NewDecoder(f)
pb := PlaybackJSON{}

err = dec.Decode(&pb)
f.Close()
if err != nil {
	log.Println("error decoding json:", err)
	continue
}

if pb.Playing {
	updates <- pb.Song
}

So, that's it for getting updates from GPDMP! Less than 100 lines, formatted. Now I needed to watch that update channel and post status changes to Slack.

I found an excellent Slack API client on a different project, so I grabbed that. I started by building a little struct to hold my client and state.

type Slack struct {
	Client       *slack.Client
	CurrentSong  Song
	Set          bool
	InitialText  string
	InitialEmoji string
}

Then, during client initialization, we get the current custom status for the user and save it. This way, when you pause your music, it will revert to whatever you had set before.

func (s *Slack) Init() {
	auth, err := s.Client.AuthTest()
	if err != nil {
		log.Fatal(err)
	}

	user, err := s.Client.GetUserInfo(auth.UserID)
	if err != nil {
		log.Fatal(err)
	}

	s.InitialText = user.Profile.StatusText
	s.InitialEmoji = user.Profile.StatusEmoji
	log.Printf("Initial status: %s %s", s.InitialEmoji, s.InitialText)
}

Once it is initialized, we just need to range over our updates channel and post to Slack when the song changes. We set a timeout because the GPDMP client won't send updates while the song is paused, or if the app stops updating the file (i.e. you quit GPDMP). By putting the timeout logic on this side, we have less to pass over the channel, and we can revert properly if something goes awry in the API-reading goroutine.

func (s *Slack) Sync(emoji string, updates chan Song, revert_after time.Duration) {
	for {
		select {
		case song := <-updates:
			if !s.CurrentSong.Equal(song) {
				log.Printf("Sync: %s by %s\n", song.Title, song.Artist)
				s.Client.SetUserCustomStatus(fmt.Sprintf("%s by %s", song.Title, song.Artist), emoji)
				s.CurrentSong = song
				s.Set = true
			}
		case <-time.After(revert_after):
			if s.Set {
				log.Printf("Reverting Status: %s %s\n", s.InitialEmoji, s.InitialText)
				s.Client.SetUserCustomStatus(s.InitialText, s.InitialEmoji)
				s.CurrentSong = Song{}
				s.Set = false
			}
		}
	}
}

A little bit of glue in main and it's ready!

func main() {
	api := NewSlack(os.Getenv("SLACK_TOKEN"))
	gpdmp := &GPDMPAPI{os.Getenv("GPDMPAPI_PATH")}

	updates := make(chan Song)
	done := make(chan bool)

	go gpdmp.Watch(updates, done, 5*time.Second)
	go api.Sync(config.Emoji, updates, 15*time.Second)

	<-done
}

You can browse the source and grab your copy at github.com/jmhobbs/gpdmp-to-slack.