Sometimes you don’t have access to the S3 console, but you do have keys for a bucket. If you need to change CORS for that bucket, it turns out you can. Boto has API methods for this.
# -*- coding: utf-8 -*-
from boto.s3.connection import S3Connection
from boto.s3.bucket import Bucket

AWS_ACCESS_KEY = 'YOUR KEY HERE'
AWS_SECRET_KEY = 'YOUR KEY HERE'
S3_BUCKET = 'YOUR BUCKET HERE'

# Example CORS configuration; adjust origins, methods and max age
# to whatever your application actually needs.
cors_xml = """<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
</CORSConfiguration>"""

connection = S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
bucket = Bucket(connection, S3_BUCKET)

print "Current CORS:"
print bucket.get_cors_xml()

bucket.set_cors_xml(cors_xml)

print "New CORS:"
print bucket.get_cors_xml()
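If you have moved on to boto3, the same change is expressed as a dict of rules rather than raw XML. Here is a minimal sketch; the `build_cors_config` helper is my own name, and actually applying the config requires real credentials, so that part is shown as comments.

```python
# boto3 models CORS as a dict with a 'CORSRules' list rather than XML.
# build_cors_config is an illustrative helper, not part of boto3.
def build_cors_config(origins, methods, max_age=3000):
    """Build a CORSConfiguration dict for boto3's put_bucket_cors."""
    return {
        'CORSRules': [{
            'AllowedOrigins': list(origins),
            'AllowedMethods': list(methods),
            'MaxAgeSeconds': max_age,
        }]
    }

# Applying it needs credentials, so it is not run here:
# import boto3
# s3 = boto3.client('s3')
# s3.put_bucket_cors(Bucket='YOUR BUCKET HERE',
#                    CORSConfiguration=build_cors_config(['*'], ['GET']))
# print(s3.get_bucket_cors(Bucket='YOUR BUCKET HERE'))
```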
Something we try to do regularly at Pack is to check for slow queries.
We do this when introducing new features and schema changes, but we also try to do it occasionally to look for anything that may have slipped through, or become more of an issue as usage patterns change.
To make this a more regular occurrence, I decided to automate it.
The first thing that needed to be handled was enabling and disabling the slow query log. I don’t want it to run all the time, because eventually the log will eat up too much disk, and there is overhead to calculating and saving that data.
To turn it on and off, I created a limited privilege user on the server called “slow_log”. The commands needed to turn on the slow query log are SET GLOBAL and FLUSH SLOW LOGS. Looking at the MySQL documentation, the privileges needed for those commands are RELOAD and SUPER.
GRANT RELOAD, SUPER ON *.* TO 'slow_log'@'localhost' IDENTIFIED BY 'password';
Once that user was in place, I created two shell scripts. The first just logs into MySQL and turns on slow query logging.
#!/bin/sh
# Sketch of the script; the log path and credentials are examples.
SLOW_LOG=/var/lib/mysql/mysql-slow.log
rm -f $SLOW_LOG
mysql -u slow_log -ppassword -e "SET GLOBAL slow_query_log = 'ON'; FLUSH SLOW LOGS;"
The second script turns slow query logging off, then it processes the slow query log with request-log-analyzer and pt-query-digest. Lastly it emails the output of those tools to me.
#!/bin/sh
# Sketch of the script; paths, credentials and the address are examples.
SLOW_LOG=/var/lib/mysql/mysql-slow.log
mysql -u slow_log -ppassword -e "SET GLOBAL slow_query_log = 'OFF';"
request-log-analyzer "$SLOW_LOG" > /tmp/report.txt
pt-query-digest "$SLOW_LOG" >> /tmp/report.txt
# The mail body is generated in a subshell: the report, then the raw log
# uuencoded as an attachment.
(
    cat /tmp/report.txt
    uuencode "$SLOW_LOG" slow-query.log
) | mail -s "Monthly slow query report" admin@example.com
Finally, I added a cron job to run the first script at the beginning of the day once a month, and another to run the second at the end of the day once a month. That way, once a month, I get an email with slow query logs to look over and try to improve.
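The crontab entries look something like this; the script paths, times, and the choice of the first of the month are my own placeholders.

```
# m  h   dom mon dow  command
0    6   1   *   *    /usr/local/bin/slow-query-on.sh
0    22  1   *   *    /usr/local/bin/slow-query-off.sh
```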
As a note, using a subshell to generate the body of the email is something I hadn’t seen before; I came across it while looking for uuencode usage. It’s a nice trick.
So. What did I screw up horribly?
I love pwgen for passwords. The passwords it generates are simple and strong, but it can be a pain to kick over to the terminal whenever I need one.
So, I made a super simple Alfred Workflow for this.
Basically, you type “pw”, “pwgen” or “password” and it will generate a 40-character password and copy it to your clipboard / frontmost app.
You can use the “secure” option to generate stronger, less memorable passwords, and you can pass a length option as well.
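Under the hood this is just a pwgen call (`pwgen 40 1`, or `pwgen -s 40 1` for the secure mode). If you want the same effect without pwgen, here is a rough Python sketch of the secure mode only; the function name and character sets are my own, and pwgen's memorable default mode is not reproduced here.

```python
import secrets
import string

def generate_password(length=40, secure=False):
    """Rough stand-in for a pwgen call (the name is illustrative).

    Only mimics the spirit of pwgen's secure mode: uniformly random
    characters, with punctuation mixed in when secure=True.
    """
    alphabet = string.ascii_letters + string.digits
    if secure:
        alphabet += string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a 40-character password
```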
Download it here: pwgen.alfredworkflow
Sometimes you need to pretend to be in another country.
VPNs are great for this, but one novel approach is Unblock-Us, which changes your DNS servers instead. You keep your own IP address, but your DNS requests go to in-country DNS servers, which direct you to the application servers supporting that country. There is no anonymity, but you don’t have to worry about bandwidth caps, and it has worked for every service I’ve tried it on.
I use this when I need to access video that is region limited. However, changing your DNS servers through the Mac settings app is a pain when you have to do it over and over again. On Windows they have an app to download which can manage the change for you.
So what I did on my Mac was create a script to use the built in networksetup command to change my DNS as needed.
networksetup -setdnsservers "Wi-Fi" 18.104.22.168 22.214.171.124
And one to un-set it.
networksetup -setdnsservers "Wi-Fi" "Empty"
To top it off, I built a simple Alfred workflow, making it even quicker and cleaner.
You can download that here: Unblock-Us Alfred Workflow.
Note that if you are using a wired network interface, you’ll need to change the service name from “Wi-Fi” to, well, whatever it is you are using.
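If you would rather script the toggle yourself, here is a small Python sketch; the `dns_command` helper is my own name. It only builds the networksetup argument list, since actually running it requires a Mac; note that networksetup expects the literal string "Empty" to clear the override.

```python
def dns_command(service='Wi-Fi', servers=None):
    """Build the networksetup arguments to set (or clear) DNS servers.

    Passing no servers produces the "Empty" form, which returns the
    service to its DHCP-provided DNS.
    """
    args = list(servers) if servers else ['Empty']
    return ['networksetup', '-setdnsservers', service] + args

# To actually run it on a Mac (not executed here):
# import subprocess
# subprocess.check_call(dns_command('Wi-Fi', ['18.104.22.168', '22.214.171.124']))
# subprocess.check_call(dns_command('Wi-Fi'))  # clear the override
```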
At Pack we use ASCII-based unique identifiers in URLs a lot. We call them slugs. Dogs have them, users have them, breeds have them, etc.
I made the decision early on to keep the slugs plain old ASCII. No unicode. These are primarily for URLs, and I wanted them easy to type. Most slugs in the system are generated automatically, derived from names when a dog or user is created. This is a problem, because a lot of people in the world use characters outside the ASCII set.
Usually, the solution is just to drop non-ascii characters. This is the simplest option, and it works. For example, Designer News uses this technique. In the case of John Henry Müller, they simply drop the ü because of the umlaut, giving him the user URL of https://news.layervault.com/u/11655/john-henry-mller/. Müller becomes mller. I find this less than optimal.
A second technique is to use homoglyph substitution. A homoglyph is a character which is visually similar to another, to the point that they are difficult to quickly distinguish with just a glance. I’m familiar with them from the world of phishing, where people register domains that look very similar to other domains by using homoglyphs.
Once you build a list of homoglyphs, it’s easy to create ASCII-only slugs through substitution. We expanded the definition of homoglyph for our list to include anything you could squint at and call similar. The method is a bit brute force, but it only ever runs once per string, and I think the outcome is worth it.
# -*- coding: utf-8 -*-

# First element of each set is the ASCII target; the rest are its
# homoglyphs. The real table is much longer; these rows are examples.
UNICODE_ASCII_HOMOGLYPHS = (
    (u'a', u'á', u'à'),
    (u'o', u'ó', u'ð'),
    (u'u', u'ú', u'ü'),
)

def replace_homoglyphs(string):
    '''If a string is unicode, replace all of the unicode homoglyphs with ASCII equivalents.'''
    if unicode == type(string):
        for homoglyph_set in UNICODE_ASCII_HOMOGLYPHS:
            for homoglyph in homoglyph_set[1:]:
                string = string.replace(homoglyph, homoglyph_set[0])
    return string
This works well for us; we get reasonable URLs for dogs like “Hólmfríður frá Ólafsfjordur”. holmfriour-fra-olafsfjordur is not the same, but it’s close enough for a URL that you don’t mind, and it’s much better than hlmfrur-fr-lafsfjordur.
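Putting the whole slugging step together in modern Python 3, it can be sketched like this. The table is a tiny excerpt and the `slugify` name is illustrative; note the table maps ð to o, matching the holmfriour example above.

```python
import re

# Tiny excerpt of a homoglyph table: first element is the ASCII target,
# the rest are characters substituted for it.
HOMOGLYPHS = (
    ('a', 'á'),
    ('i', 'í'),
    ('o', 'ó', 'ð'),
    ('u', 'ú', 'ü'),
)

def slugify(name):
    """Lowercase, substitute homoglyphs, then collapse the rest to hyphens."""
    name = name.lower()
    for entry in HOMOGLYPHS:
        for homoglyph in entry[1:]:
            name = name.replace(homoglyph, entry[0])
    # Anything left outside a-z0-9 collapses into a single hyphen.
    return re.sub(r'[^a-z0-9]+', '-', name).strip('-')

print(slugify('Hólmfríður frá Ólafsfjordur'))  # holmfriour-fra-olafsfjordur
```

With ü in the table, Müller becomes muller rather than the dropped-character mller.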
Unfortunately, this doesn’t work for languages that aren’t romanized, notably Asian languages, such as “クッキー”. In that case the system breaks down and we end up with no usable slug, so we build one from a default. I’m still looking for a solution there; maybe automatic translation would help.