Tag: Programming

gpdmp-to-slack

September 14, 2017 » Geek

When Rdio shut down, I tried a few services before landing on Google Play. It’s not perfect, but it’s good enough and it’s better than Spotify. One thing that seemed lacking was a desktop application, but that need was neatly filled by the excellent GPDMP.

One lesser-known feature of GPDMP is the JSON API, which manifests as a simple JSON file that the application updates with information about the playback. When Slack announced custom statuses, I thought back to the days of instant messaging and the integrations that set your status to the song you were playing.

Demo

Implementing the link from GPDMP to Slack was, all in all, fairly simple. First, I looked at the JSON file to get a feel for the structure.
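The file looks roughly like this; I've trimmed it to the interesting parts, and the exact fields vary by GPDMP version, so treat it as a sketch:

```json
{
  "playing": true,
  "song": {
    "title": "Get Lucky",
    "artist": "Daft Punk",
    "album": "Random Access Memories",
    "albumArt": "https://lh3.googleusercontent.com/..."
  },
  "rating": {
    "liked": false,
    "disliked": false
  },
  "time": {
    "current": 63404,
    "total": 369000
  }
}
```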

Short and sweet! Now to represent that in Go for decoding.
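A minimal sketch of the structs, mirroring the JSON above (my field layout, not necessarily the project's exact one):

```go
// Song is its own named type, rather than an embedded struct,
// so it can be compared on its own.
type Song struct {
	Title    string `json:"title"`
	Artist   string `json:"artist"`
	Album    string `json:"album"`
	AlbumArt string `json:"albumArt"`
}

// Playback mirrors GPDMP's playback.json file.
type Playback struct {
	Playing bool `json:"playing"`
	Song    Song `json:"song"`
	Rating  struct {
		Liked    bool `json:"liked"`
		Disliked bool `json:"disliked"`
	} `json:"rating"`
	Time struct {
		Current int `json:"current"`
		Total   int `json:"total"`
	} `json:"time"`
}
```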

I didn’t need to represent all the elements, but it’s a small structure so I went ahead with it. I didn’t embed Song because I wanted to write an equality test for that struct on its own. That will get used later on.

Next, I needed a way to monitor that file for updates, which GPDMP does fairly often. fsnotify was the obvious choice, and an easy drop in.
I added a time-based debounce so that we don’t read the file on every update, which would be excessive. This will delay updates by up to whatever the debounce is set to, but I’m okay with that trade-off.
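Put together, the watcher looks something like this. It's a sketch assuming github.com/fsnotify/fsnotify and the structs above; the real code differs in the details:

```go
import (
	"encoding/json"
	"os"
	"time"

	"github.com/fsnotify/fsnotify"
)

func watch(path string, updates chan<- Song, debounce time.Duration) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer watcher.Close()

	if err := watcher.Add(path); err != nil {
		return err
	}

	var lastRead time.Time
	for event := range watcher.Events {
		if event.Op&fsnotify.Write != fsnotify.Write {
			continue
		}
		// Debounce: GPDMP writes the file constantly while playing,
		// so skip events that arrive too soon after our last read.
		if time.Since(lastRead) < debounce {
			continue
		}
		lastRead = time.Now()

		f, err := os.Open(path)
		if err != nil {
			continue
		}
		var playback Playback
		err = json.NewDecoder(f).Decode(&playback)
		f.Close()
		if err != nil {
			continue
		}
		if playback.Playing {
			updates <- playback.Song
		}
	}
	return nil
}
```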

Inside that debounce we open the file, decode it into a new struct and, if the player is playing, pass the song off to a channel.

So, that’s it for getting updates from GPDMP! Less than 100 lines, formatted. Now I needed to watch that update channel and post changes in status to Slack.

I’d found an excellent Slack API client on a different project, so I grabbed that. I started by building a little struct to hold my client and state.
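Assuming github.com/nlopes/slack as the client (a guess on my part; any client with custom status support works), that struct is just the client plus the status we'll restore later:

```go
import "github.com/nlopes/slack"

// slackStatus holds the API client and the user's original custom
// status, so we can put it back when the music stops.
// Field names here are illustrative.
type slackStatus struct {
	client     *slack.Client
	savedText  string
	savedEmoji string
}
```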

Then, during client initialization, we get the current custom status for the user and save it. This way, when you pause your music, it will revert to whatever you had set before.
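Roughly like this, with error handling trimmed; AuthTest tells us who we are, and the user's profile carries the current status:

```go
func newSlackStatus(token string) (*slackStatus, error) {
	client := slack.New(token)

	// Figure out which user this token belongs to.
	identity, err := client.AuthTest()
	if err != nil {
		return nil, err
	}

	// Grab their current custom status so we can restore it later.
	user, err := client.GetUserInfo(identity.UserID)
	if err != nil {
		return nil, err
	}

	return &slackStatus{
		client:     client,
		savedText:  user.Profile.StatusText,
		savedEmoji: user.Profile.StatusEmoji,
	}, nil
}
```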

Once it is initialized, we just need to range over our updates channel and post status changes to Slack. We set a timeout because the GPDMP client won’t send updates when the song is paused, or if the app stops updating the file (i.e. you quit GPDMP). By putting the timeout logic on this side, we have less to pass over the channel, and we can revert properly if something goes awry in the API-reading goroutine.
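A sketch of that loop (imports of fmt and log trimmed); the status format and emoji are my choices, not necessarily the project's:

```go
func (s *slackStatus) run(updates <-chan Song, timeout time.Duration) {
	var current Song
	for {
		select {
		case song := <-updates:
			if song == current {
				continue // Same song, nothing to post.
			}
			current = song
			status := fmt.Sprintf("%s - %s", song.Artist, song.Title)
			if err := s.client.SetUserCustomStatus(status, ":musical_note:"); err != nil {
				log.Println("failed to set status:", err)
			}
		case <-time.After(timeout):
			// No updates for a while: music is paused or GPDMP quit.
			if current == (Song{}) {
				continue // Already reverted.
			}
			current = Song{}
			// Put the user's original status back.
			if err := s.client.SetUserCustomStatus(s.savedText, s.savedEmoji); err != nil {
				log.Println("failed to restore status:", err)
			}
		}
	}
}
```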

A little bit of glue in main and it’s ready!
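Something like this; the playback.json path is the macOS default for GPDMP and may differ on your machine:

```go
func main() {
	path := os.Getenv("HOME") +
		"/Library/Application Support/Google Play Music Desktop Player/json_store/playback.json"

	status, err := newSlackStatus(os.Getenv("SLACK_TOKEN"))
	if err != nil {
		log.Fatal(err)
	}

	updates := make(chan Song)
	go func() {
		log.Fatal(watch(path, updates, 5*time.Second))
	}()

	status.run(updates, 30*time.Second)
}
```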

You can browse the source and grab your copy at github.com/jmhobbs/gpdmp-to-slack

Manage Unblock-Us on OS X

September 2, 2014 » Consume, Geek

Sometimes you need to pretend to be in another country.

VPNs are great for this, but one novel approach is Unblock-Us, which changes the location of your DNS server instead. You keep your own IP, but you make DNS requests against in-country DNS servers, which direct you to the application servers supporting that country. There is no anonymity, but you don’t have to worry about bandwidth caps, and it’s worked for every service I’ve tried it on.

I use this when I need to access video that is region limited. However, changing your DNS servers through the Mac settings app is a pain when you have to do it over and over again. On Windows they have an app to download which can manage the change for you.

So what I did on my Mac was create a script to use the built in networksetup command to change my DNS as needed.
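Something like this; the addresses are the ones Unblock-Us published at the time, so check their setup page for the current set:

```bash
#!/bin/bash
# unblock-on.sh - point the Wi-Fi service's DNS at Unblock-Us.
networksetup -setdnsservers Wi-Fi 208.122.23.22 208.122.23.23
```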

And one to un-set it.
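Clearing it is a one-liner too; the literal word "Empty" tells networksetup to drop the manual servers and fall back to DHCP:

```bash
#!/bin/bash
# unblock-off.sh - clear the manual DNS entries, reverting to DHCP.
networksetup -setdnsservers Wi-Fi "Empty"
```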

To top it off, I built a simple Alfred workflow, making it even quicker and cleaner.

Unblock-Us Alfred Workflow

You can download that here: Unblock-Us Alfred Workflow.

Note that if you are using a wired network interface, you’ll need to change the service name from “Wi-Fi” to, well, whatever it is you are using.

Homoglyph Substitution for URLs

August 29, 2014 » Geek

At Pack we use ascii-based unique identifiers in URLs a lot. We call them slugs. Dogs have them, users have them, breeds have them, etc.

I made the decision early on to keep the slugs plain old ascii. No unicode. These are primarily for URLs, and I wanted them easy to type. Most slugs in the system are generated automatically, derived from names when a dog or user is created. This is a problem, because there are a lot of people in the world who use characters outside of the ascii set.

Usually, the solution is just to drop non-ascii characters. This is the simplest option, and it works. For example, Designer News uses this technique. In the case of John Henry Müller, they simply drop the ü because of the umlaut, giving him the user URL of https://news.layervault.com/u/11655/john-henry-mller/. Müller becomes mller. I find this less than optimal.

A second technique is to use homoglyph substitution. A homoglyph is a character which is visually similar to another, to the point that they are difficult to quickly distinguish with just a glance. I’m familiar with them from the world of phishing, where people register domains that look very similar to other domains by using homoglyphs.

Once you build a list of homoglyphs, it’s easy to create ascii-only slugs through substitution. We expanded the definition of homoglyph for our list to include anything you could squint at funny and call similar. The method is a bit brute force, but it only ever runs once per string, and I think the outcome is worth it.
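A minimal sketch of the idea in Python; the real table is far bigger, and the mappings below are just enough to cover the example that follows:

```python
# -*- coding: utf-8 -*-

# A tiny slice of the substitution table. Note the loose definition of
# "homoglyph": ð maps to o because, squinting, it's close enough.
HOMOGLYPHS = {
    u'á': 'a', u'é': 'e', u'í': 'i', u'ó': 'o', u'ú': 'u',
    u'ü': 'u', u'ö': 'o', u'ð': 'o', u'ñ': 'n', u'ç': 'c',
}

def slugify(name):
    out = []
    for char in name.lower():
        if char.isalnum() and ord(char) < 128:
            out.append(char)          # Plain ascii passes through.
        elif char in HOMOGLYPHS:
            out.append(HOMOGLYPHS[char])
        elif char in (u' ', u'-', u'_'):
            out.append('-')
        # Anything else is dropped; a real version would also collapse
        # repeated dashes.
    return ''.join(out)

print(slugify(u'Hólmfríður frá Ólafsfjordur'))  # holmfriour-fra-olafsfjordur
```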

This works well for us; we get reasonable URLs for dogs like “Hólmfríður frá Ólafsfjordur”. holmfriour-fra-olafsfjordur is not the same, but it’s close enough for a URL that you don’t mind, and it’s better than using hlmfrur-fr-lafsfjordur.

Hólmfríður frá Ólafsfjordur

Unfortunately, this doesn’t work well for un-romanized languages, notably Asian languages such as “クッキー”. In this case the system breaks down and we end up with no usable slug, so we build one from a default. I’m still seeking a solution for that. Maybe I should use automatic translation on it.

Delayed Queues for RQ

April 15, 2013 » Geek

I really like RQ. It’s got a sensible API and it’s built for Python, something I can’t say about Resque; pyres is fine, but it’s not the same.

My one beef with RQ is that I can’t delay a job for an arbitrary amount of time. You queue it up and it runs. To get around that, I built a simple delayed queue system for RQ, using the same redis backend.

My delayed queues leverage sorted sets to store and select jobs. We put the job in with its earliest run timestamp as the score, then a cron job or daemon pulls jobs out when they are ready and pushes them over to RQ. Simple enough!

Here’s the really relevant code; everything else is trimming.

Delaying Jobs
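A sketch of the delay side, assuming redis-py 3.x; the key name and payload format here are illustrative, and the gist's actual code differs:

```python
import json
import time

DELAYED_KEY = 'rq:delayed'  # illustrative key name

def delay(redis_conn, queue_name, func, args, seconds):
    """Schedule a job to be moved onto an RQ queue after a delay."""
    payload = json.dumps({
        'queue': queue_name,
        'func': func,  # dotted path, e.g. 'myapp.tasks.send_email'
        'args': args,
    })
    # The sorted set member is the job itself; the score is the
    # earliest timestamp at which it may run.
    redis_conn.zadd(DELAYED_KEY, {payload: time.time() + seconds})
```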

Waking Jobs
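And the daemon/cron side, continuing from the block above, which is where the atomicity matters:

```python
from rq import Queue

def wake(redis_conn):
    """Enqueue every delayed job whose run timestamp has passed."""
    for payload in redis_conn.zrangebyscore(DELAYED_KEY, 0, time.time()):
        # ZREM is atomic: if several daemons race on the same job, only
        # one gets a non-zero reply, and only that one enqueues it.
        if redis_conn.zrem(DELAYED_KEY, payload):
            job = json.loads(payload)
            Queue(job['queue'], connection=redis_conn).enqueue(
                job['func'], *job['args'])
```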

You can run this at a granularity as low as one second the way it is written, and I’m sure you could go tighter if you wanted. We run it at a minute granularity, since our jobs are not highly time sensitive.

And because redis commands are atomic, you can run the enqueue daemons/cron jobs on multiple machines and everything should work fine, which is great for availability.

Grab the code and examples here if you are interested: https://gist.github.com/jmhobbs/5358101.

Impromptu logging from a socket.io connection

October 27, 2012 » Geek

I recently participated in a live-streamed event that provided a “watching now” counter using socket.io. Basically, it was a super simple node.js script which incremented or decremented a variable as users joined and left the channel, and broadcast the count to its subscribers. What I didn’t realize until right before the event was that we might want a record of how many users were on the page at any given point in the broadcast. With so little time before the broadcast, I didn’t want to tinker with the server and break it, so I did the next best thing: I logged from the subscriber side.

I put up a quick PHP script on my laptop that allowed cross-domain access from the origin server and logged the incoming counter.
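Something along these lines; the origin and log path are placeholders:

```php
<?php
// Let the browser call us from the broadcast page's origin.
header('Access-Control-Allow-Origin: http://broadcast.example.com');

if (isset($_GET['count'])) {
    // Timestamp each reading so it can be lined up with the broadcast.
    $line = sprintf("%s,%d\n", date('c'), (int) $_GET['count']);
    file_put_contents('/tmp/watching-now.log', $line, FILE_APPEND);
}
```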

Then, in Chrome’s JavaScript console, I just hooked updates from socket.io into an XHR to provide the values to my PHP.
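Roughly this; the socket variable and event name depend on the page's own code, and my laptop's address is a placeholder too:

```javascript
// Piggyback on the page's existing socket.io connection and forward
// every count update to the PHP logger on my laptop.
socket.on('count', function (count) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'http://10.0.1.5/log.php?count=' + encodeURIComponent(count), true);
  xhr.send();
});
```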

It worked like a charm: I didn’t have to mess with the server at a crucial point, and we got the data we needed.