Need to get the exact time that you visited a page in Firefox? I couldn’t find an easy way to look this up in the History interface, or anywhere else for that matter. I did however know that Firefox stores this kind of thing in sqlite3 databases. Here’s how I got what I needed.
First you have to find the SQLite databases. I’m on Linux, so they live under my home directory, and the one you want is places.sqlite. Crack that open in sqlite3. Your path will differ because it includes your profile name; mine is “gmail”, so I ended up with g69ap5lc.gmail.
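If you don’t know the profile directory name off-hand, a quick glob will turn it up. This is just a sketch and assumes the default ~/.mozilla/firefox location on Linux:

# list every profile's places.sqlite (default Linux location assumed)
$ ls -d ~/.mozilla/firefox/*/places.sqlite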
$ sqlite3 ~/.mozilla/firefox/g69ap5lc.gmail/places.sqlite
Be aware you have to shut down the Firefox instance first, because it locks the file. Make sure your privacy settings won’t erase it all when you shut it down! I had to change mine to “Remember history” first.
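If you would rather leave Firefox running, one workaround is to query a copy of the database instead of the live file. A rough sketch, reusing the profile path from above:

# copy the locked database somewhere else and open the copy
$ cp ~/.mozilla/firefox/g69ap5lc.gmail/places.sqlite /tmp/places-copy.sqlite
$ sqlite3 /tmp/places-copy.sqlite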
Next you need to find and grab the timestamp. This can be a chore if you don’t have the full URL. I was looking for the one from spiffie.org below.
sqlite> .headers on
sqlite> select * from moz_places;
id|url|title|rev_host|visit_count|hidden|typed|favicon_id|frecency|last_visit_date
1|http://www.mozilla.com/en-US/firefox/central/|/en-US/firefox/central/|moc.allizom.www.|0|0|0||140|
...
1366|http://spiffie.org/kits/usb7/driver_linux.shtml|Linux USB7 Driver|gro.eiffips.|1|0|0||100|1261169238197827
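If you don’t feel like scanning the whole dump, a LIKE filter on the url column narrows it down. This is just a sketch, with whatever URL fragment you remember as the pattern:

sqlite> select id, url, last_visit_date from moz_places where url like '%spiffie.org%';
id|url|last_visit_date
1366|http://spiffie.org/kits/usb7/driver_linux.shtml|1261169238197827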
The column we are interested in is last_visit_date, which is 1261169238197827 in our case. You can also list all the recorded visits for that page from the moz_historyvisits table by matching its place_id column against the id you found above.
sqlite> select * from moz_historyvisits where place_id = '1366';
id|from_visit|place_id|visit_date|visit_type|session
200|199|1366|1261169238197827|6|42
Now we need to convert that timestamp into something we can read (unless you are a super UNIX geek and can read raw timestamps). The value is in microseconds, which is too precise for the date command, so keep just the first 10 digits and drop the rest; in the example that leaves us with 1261169238.
$ date -d @1261169238
Fri Dec 18 14:47:18 CST 2009
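If you’d rather not trim digits by hand, the same conversion can be done with a little shell arithmetic, or pushed back into SQLite entirely. A sketch of both:

# integer-divide the microseconds down to seconds before handing it to date
$ date -d @$(( 1261169238197827 / 1000000 ))
Fri Dec 18 14:47:18 CST 2009

# or let SQLite format it for you
sqlite> select datetime(last_visit_date/1000000, 'unixepoch', 'localtime') from moz_places where id = 1366;
2009-12-18 14:47:18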
Not short and sweet, but it works.
“There is a war going on for your mind.
If you are thinking, you are winning.”
– Flobots, We Are Winning on Fight With Tools
I created a team for Little Filament on Folding@home. Our team number is 172406 (in case you want to join), but I wanted to add our latest stats to the Little Filament site. As far as I can tell there is no API for the stats, so I worked up a scraper in bash.
Basically all it does is fetch the page, then grep and sed its way to the variables, finally dumping them into a JSON file (for easy JavaScript consumption).
The kicker is that the stats server is overloaded or down a lot, so we can’t rely on it, and we don’t want to stress it out further. My decision was to poll it at a large interval, 12-24 hours. I don’t have enough clients on the team to effect significant change over 6-12 hours, but I don’t want to fall too far out of date either. So if the server is overloaded and drops a request once or twice, it’s not a big deal.
Without further ado, here is the script.
#!/bin/bash

NOW=$(date +%s)
# fah_check.lock holds the epoch time of the last successful update
THEN=$(cat fah_check.lock | tr -d '\n')

# only bother the stats server if at least 24 hours have passed
if [ $NOW -gt $(($THEN + 86400)) ]; then
  wget "http://fah-web.stanford.edu/cgi-bin/main.py?qtype=teampage&teamnum=172406" -O fah_check.html
  if [ "$?" == "0" ]; then
    # sanity check that we actually got a stats page back
    grep "Grand Score" fah_check.html > /dev/null 2>&1
    if [ "$?" == "0" ]; then
      # grep out each value and strip everything that isn't part of the number
      SCORE=$(grep -C 2 "Grand Score" fah_check.html | sed 's/[^0-9]//gm' | tr -d '\n')
      WU=$(grep -C 2 "Work Unit Count" fah_check.html | sed 's/[^0-9]//gm' | tr -d '\n')
      RANK=$(grep -C 1 "Team Ranking" fah_check.html | sed 's/[^0-9of]//gm' | tr -d '\n' | sed 's/f\([0-9]*\)of\([0-9]*\)/\1 of \2/')
      echo "{\"score\": \"$SCORE\", \"work_units\": \"$WU\", \"rank\": \"$RANK\" }" > fah_check.json
      echo "[$NOW] - Success!" >> fah_check.log
      echo $NOW > fah_check.lock
    else
      echo "[$NOW] - Filter Failed" >> fah_check.log
    fi
  else
    echo "[$NOW] - Download Failed" >> fah_check.log
  fi
else
  echo "[$NOW] - Skip Update" >> fah_check.log
fi
That cranks out fah_check.json, which looks like this:
{"score": "4355", "work_units": "20", "rank": "39881 of 169721" }
To see it in action, check out the Little Filament Folding page.
I just pushed out my first Ruby on Rails application a few days ago, and the shakedown cruise is going well.
You can check it out at ThirtyDayList.com if you want. There are still some features I want to add and it needs to be DRY’d out, but I’m happy with it.
I used the book Rails for PHP Developers, which was pretty good and brought me up to speed without much fuss.
I have to say, I’m really digging Ruby and I’m really digging Rails. It’s nice to not have to worry about all your database stuff; ActiveRecord takes care of the hard parts.
“…for those of you who like your Internet reading where God intended it to be, find in pine, this post basically says we…”
– Josh Jones (Dreamhost Newsletter v8.12)