I love pwgen for passwords. The passwords it generates are simple and strong, but it can be a pain to kick over to the terminal whenever I need one.
So, I made a super simple Alfred Workflow for this.
Basically, you type “pw”, “pwgen”, or “password” and it will generate a 40-character password and copy it to your clipboard (or drop it straight into the open app).
You can use the “secure” option to generate stronger, less memorable passwords, and you can pass a length option as well.
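Under the hood there isn’t much to it. Here’s a sketch of the kind of script such a workflow might run; the argument handling and the pbcopy step are illustrative, not the exact contents of the download:

#!/usr/bin/env python
# Illustrative sketch: Alfred hands us the query (e.g. "20" or "secure 20")
# and we hand back a pwgen password.
import subprocess
import sys

args = sys.argv[1].split() if len(sys.argv) > 1 else []
flags = ['--secure'] if 'secure' in args else []
length = next((a for a in args if a.isdigit()), '40')

# -1 prints a single password; the length and count come last.
password = subprocess.check_output(['pwgen', '-1'] + flags + [length, '1']).strip()

# pbcopy puts it on the OS X clipboard; Alfred also picks up stdout for the open app.
subprocess.Popen(['pbcopy'], stdin=subprocess.PIPE).communicate(password)
print password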
Download it here: pwgen.alfredworkflow
Sometimes you need to pretend to be in another country.
VPNs are great for this, but one novel approach is Unblock-Us, which changes the location of your DNS server instead. You keep your own IP, but you make DNS requests against in-country DNS servers, thus directing you to the application servers supporting that country. There is no anonymity, but you don’t have to worry about bandwidth caps, and it’s worked for every service I’ve tried it on.
I use this when I need to access video that is region limited. However, changing your DNS servers through the Mac settings app is a pain when you have to do it over and over again. On Windows they have an app to download which can manage the change for you.
So what I did on my Mac was create a script that uses the built-in networksetup command to change my DNS as needed.
networksetup -setdnsservers "Wi-Fi" 220.127.116.11 18.104.22.168
And one to un-set it.
networksetup -setdnsservers "Wi-Fi" "Empty"
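If you’d rather skip Alfred entirely, the same idea works as a tiny standalone script. A sketch, with placeholder DNS addresses (use whichever ones Unblock-Us gives you):

#!/usr/bin/env python
# Toggle DNS servers on the "Wi-Fi" service via networksetup.
# The addresses are placeholders, not the real Unblock-Us servers.
import subprocess
import sys

SERVICE = 'Wi-Fi'
UNBLOCK_DNS = ['203.0.113.1', '203.0.113.2']  # placeholder addresses

if len(sys.argv) > 1 and sys.argv[1] == 'off':
    # "Empty" tells networksetup to fall back to the DHCP-provided DNS.
    subprocess.check_call(['networksetup', '-setdnsservers', SERVICE, 'Empty'])
else:
    subprocess.check_call(['networksetup', '-setdnsservers', SERVICE] + UNBLOCK_DNS)

print subprocess.check_output(['networksetup', '-getdnsservers', SERVICE])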
To top it off, I built a simple Alfred workflow, making it even quicker and cleaner.
You can download that here: Unblock-Us Alfred Workflow.
Note that if you are using a wired network interface, you’ll need to change the service name from “Wi-Fi” to, well, whatever it is you are using.
At Pack we use ASCII-based unique identifiers in URLs a lot. We call them slugs. Dogs have them, users have them, breeds have them, etc.
I made the decision early on to keep the slugs plain old ASCII. No Unicode. These are primarily for URLs, and I wanted them easy to type. Most slugs in the system are automatically generated, derived from names when a dog or user is created. This is a problem, because there are a lot of people in the world who use characters outside of the ASCII set.
Usually, the solution is just to drop non-ASCII characters. This is the simplest option, and it works. For example, Designer News uses this technique. In the case of John Henry Müller, they simply drop the ü because of the umlaut, giving him the user URL of https://news.layervault.com/u/11655/john-henry-mller/. Müller becomes mller. I find this less than optimal.
A second technique is to use homoglyph substitution. A homoglyph is a character that is visually similar to another, to the point that the two are hard to tell apart at a glance. I’m familiar with them from the world of phishing, where people register domains that look very similar to other domains by using homoglyphs.
Once you build a list of homoglyphs, it’s easy to create ASCII-only slugs through substitution. We expanded the definition of homoglyph for our list to include anything you could squint at funny and think looked similar. The method is a bit brute force, but it only ever runs once per string, and I think the outcome is worth it.
# -*- coding: utf-8 -*-

# Each set starts with the ASCII character, followed by the unicode homoglyphs
# we map onto it. Abbreviated here; the real table is much longer.
UNICODE_ASCII_HOMOGLYPHS = (
    (u'a', u'à', u'á', u'â', u'ä', u'å'),
    (u'o', u'ò', u'ó', u'ô', u'ö', u'ø'),
)

def replace_homoglyphs(string):
    '''If a string is unicode, replace all of the unicode homoglyphs with ASCII equivalents.'''
    if unicode == type(string):
        for homoglyph_set in UNICODE_ASCII_HOMOGLYPHS:
            for homoglyph in homoglyph_set[1:]:
                string = string.replace(homoglyph, homoglyph_set[0])
    return string
This works well for us; we get reasonable URLs for dogs like “Hólmfríður frá Ólafsfjordur”. holmfriour-fra-olafsfjordur is not the same, but it’s close enough for a URL that you don’t mind, and it’s better than using hlmfrur-fr-lafsfjordur.
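To show where the substitution fits, here’s a hypothetical slugify wrapper around replace_homoglyphs; the lowercasing and hyphen rules are just illustrative glue, not our exact code:

# -*- coding: utf-8 -*-
import re

def slugify(name):
    # Hypothetical glue: substitute homoglyphs, drop anything still non-ASCII,
    # lowercase, and collapse the rest into hyphens.
    name = replace_homoglyphs(name)
    name = name.encode('ascii', 'ignore').lower()
    return re.sub(r'[^a-z0-9]+', '-', name).strip('-')

# With the full homoglyph table this comes out as 'holmfriour-fra-olafsfjordur'.
print slugify(u'Hólmfríður frá Ólafsfjordur')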
Unfortunately, this doesn’t work well for un-romanized languages, notably Asian languages such as “クッキー”. In that case the system breaks down and we end up with no usable slug, so we build one from a default. I’m still looking for a solution there. Maybe I should run automatic translation on it.
Yesterday, Mailbox released their beta Mac app. One cute thing they did was that, instead of a beta link or code, they distributed cute little animated gif coins which you could then drop into a “tin can” in the app to gain access.
I was intrigued by the concept, so I got some used betacoins from my friends and did a little digging to figure out how they were doing it.
My plan was to diff the coins and see what was changed from coin to coin, but I didn’t even need to do that. A quick inspect with gifsicle revealed an obvious token in the gif comments extension block.
jmhobbs@Cordelia:~/Desktop/betacoins ✪ gifsicle -I coin113121.gif '#0'
* coin113121.gif 122 images
  logical screen 173x130
  global color table
  + image #0 173x130 transparent 45
    disposal asis delay 0.03s
From there I checked a couple other coins to see if they had differing comments, and sure enough they did.
So now the question became, could I add the comment from a valid betacoin to another gif and have it still work?
I grabbed a lovely gif of a barfing unicorn off the web, and set to work.
jmhobbs@Cordelia:~/Desktop/betacoins ✪ gifsicle unicorns_puke_rainbows_by_chronicle_vindictive-d56nvl0.gif --no-comments -c '"F1699622-5500-4F31-B643-798427D0DBFA"' '#0' '#1-' > unicoin.gif
jmhobbs@Cordelia:~/Desktop/betacoins ✪ gifsicle -I unicoin.gif '#0'
* unicoin.gif 9 images
  logical screen 660x850
  global color table
  + image #0 660x850
I then downloaded the beta, crossed my fingers, and dragged the unicoin into the tin can. I was rewarded with a tinkle of a coin dropping in, and access to the beta.
This is a valid betacoin.
Turns out, Mailbox couldn’t care less what else is in your gif. Just so long as you have a comment with a valid token, it’ll use that gif and animate it prettily.
As an aside, the coin gif has a staggering 122 frames. 122. Sparkles are expensive, yo.
I created a service for changing up your Mailbox betacoins, called Unicoin. You’re welcome.
Every year, What Cheer creates something fun for Big Omaha.
Previous years have been very interactive, requiring direct participation: a seek-and-find game, a conference-only chat tool, etc. Those were fun, but interaction with them was sporadic rather than ubiquitous. This year we decided to build something everyone would participate in simply by being in the audience. Alex had the excellent idea of tracking the loudness of the auditorium over time, and we decided to monitor Twitter as well.
To measure sound levels in the auditorium (hangar? main stage?) we would obviously need some hardware on site. We chose a Raspberry Pi for simplicity, and because we already understood it. I initially experimented with using an electret microphone and GPIO, but as time ran out I went simpler and ordered a USB audio interface to plug in.
Before the event, Paul and I went to KANEKO to set things up. The helpful guy from binary.net who was setting up the network gave us a hard line so we wouldn’t have to deal with wifi traffic; we ran the mic up the wall, plugged it in, and watched the data flow. Pretty smooth install.
Raspberry Pi taped to the floorboards.
Our little mic on the wall.
The architecture of Pandemonium is perhaps a bit overly complex, but I was having fun gluing things together, and who’s gonna stop me?
Audio starts at the input, which we read with PyAudio. We read 10ms of audio, then calculate the RMS Amplitude of that data to produce our “loudness” value.
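There’s nothing fancier to the loudness number than that. A minimal sketch of the sampling loop, assuming 16-bit mono input at 44.1kHz (the exact rate and buffer handling in the real code may differ):

import audioop
import pyaudio

RATE = 44100
CHUNK = RATE / 100  # 10ms of 16-bit mono samples

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

while True:
    data = stream.read(CHUNK)
    loudness = audioop.rms(data, 2)  # RMS amplitude of the chunk: our "loudness" value
    # ...this value then goes onto the queue shared with the UDP client.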
Each loudness value gets pushed, along with a timestamp, onto a queue shared with the UDP client process. That process picks the values up and collects 50 samples, tracking the peak. Once it has 50 packets (0.5 seconds’ worth), it takes the peak value, wraps it with a signature, and sends it off. The signature is an abbreviated HMAC used to verify the origin and integrity of the data. Originally we were sending 100% of the samples collected, 100 per second; we decided that was a bit extreme and added the summarization code to cut it to twice per second.
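The client side therefore boils down to a peak-of-50 plus a signed UDP datagram. A rough sketch; the field layout, hostname, and truncation length here are invented for illustration, not the exact wire format:

import hashlib
import hmac
import json
import socket
import time

SECRET = 'shared-secret'                    # placeholder; shared with the server
SERVER = ('pandemonium.example.com', 9999)  # placeholder host and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_peak(samples):
    # samples: the last 50 (timestamp, loudness) tuples pulled off the queue.
    peak = max(loudness for _, loudness in samples)
    payload = json.dumps({'ts': time.time(), 'peak': peak})
    # Abbreviated HMAC: the first 8 hex characters are enough to reject junk.
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:8]
    sock.sendto(sig + payload, SERVER)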
The UDP server receives the packet, unpacks it, and checks the signature. If it’s valid, it stores it in MySQL (async) and also pushes it to a Redis pubsub channel.
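The server side mirrors that: recompute the abbreviated HMAC, drop anything that doesn’t match, then publish. Again just a sketch, reusing the made-up packet layout and channel name from above:

import hashlib
import hmac
import socket

import redis

SECRET = 'shared-secret'  # must match the client
r = redis.StrictRedis()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 9999))

while True:
    packet, addr = sock.recvfrom(1024)
    sig, payload = packet[:8], packet[8:]
    if sig != hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:8]:
        continue  # bad signature, drop it
    # The MySQL insert happens asynchronously elsewhere; here we just fan out.
    r.publish('pandemonium:loudness', payload)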
From there a node.js server picks it off the Redis pubsub channel and sends it down through socket.io to waiting clients. Even with all these hops, the roundtrip is pretty snappy, and there is less than a second of obvious lag.
On the client side we had a digital VU-style meter which scaled the volume over its seven bars and lit up accordingly. We also pushed the data to a live graph powered by HighCharts.
Tweets were collected for the hashtag #bigomaha and stored directly into MySQL by a daemon using the Twython library.
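A minimal version of that collector might look like the sketch below; the table schema and credentials are placeholders, but TwythonStreamer is the real Twython streaming class:

import MySQLdb
from twython import TwythonStreamer

# Placeholder credentials: the usual four Twitter OAuth values.
APP_KEY, APP_SECRET = 'app-key', 'app-secret'
OAUTH_TOKEN, OAUTH_TOKEN_SECRET = 'token', 'token-secret'

db = MySQLdb.connect(db='pandemonium', charset='utf8')  # utf8! see "Mistakes Were Made"

class TweetSaver(TwythonStreamer):
    def on_success(self, data):
        if 'text' in data:
            cursor = db.cursor()
            cursor.execute("INSERT INTO tweets (created_at, text) VALUES (NOW(), %s)",
                           (data['text'],))
            db.commit()

stream = TweetSaver(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
stream.statuses.filter(track='#bigomaha')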
A second process would aggregate and average the tweets per second, then push that data to a Redis pubsub channel to be distributed by the node.js bridge.
Since there isn’t a natural comparative value for tweets, the aggregator keeps the peak value in memory and compares the current value against that for a percentage. Not perfect, but it works.
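The aggregator itself amounts to a count per second plus that running peak. A sketch, carrying over the hypothetical table and channel names from the examples above:

import json
import time

import MySQLdb
import redis

r = redis.StrictRedis()
db = MySQLdb.connect(db='pandemonium')  # hypothetical database name
peak = 1  # running peak; start at 1 so we never divide by zero

while True:
    time.sleep(1)
    cursor = db.cursor()
    cursor.execute("SELECT COUNT(*) FROM tweets "
                   "WHERE created_at >= NOW() - INTERVAL 1 SECOND")
    count = cursor.fetchone()[0]
    peak = max(peak, count)
    r.publish('pandemonium:tweets',
              json.dumps({'count': count, 'percent': 100.0 * count / peak}))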
Mistakes Were Made
Everything performed better than I expected, honestly. We didn’t have the opportunity to test the audio sampling at a large, loud venue, so I was worried about that. Paul and I installed it in the back of the auditorium, just past a speaker, and put the mic as high up the wall as we could, which seemed to isolate it pretty well.
However, there were some problems. Due to a fat finger, none of the audio data from day one was saved until about 3pm. So that was a bummer. A quick fix gave us good data for day two, though.
My second goof was that the MySQL library I used for storing tweets assumed the data was latin-1, even though I had created my tables as utf-8. So when people tweeted anything with odd characters, the database barfed and dropped the tweets. That also got fixed on the afternoon of day one.
I think it was a neat project, and I certainly had fun building it. It worked, which is always what we are aiming for, and it didn’t require any direct interaction from attendees to succeed; it survived on its own. I wish I hadn’t made mistakes, but they weren’t too damaging to the real-time experience at Big Omaha.
Day one data.