Tag: Folding@home

Folding@home Init Script Additions: throttle & unthrottle

January 13, 2010 » Geek

As I’ve posted before, I’ve started running Folding@home on my machines. One issue I’ve found is that on a dual-core machine I will sometimes bog down as Folding@home consumes a whole core. Add a lot of busy Firefox tabs, and my box starts to crawl.

To fix that, I added a few pieces to my Folding@home init script, which was originally scavenged from this site, though on Googling there is a much nicer one on the Folding@home wiki. You might just want to apply my changes to that one.

In any case, I just added two commands to throttle and unthrottle the Folding@home application using cpulimit. This way I can add a cron job to manage it, or just throttle it when it starts to bug me.
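For reference, the cron side might look like this — a sketch only, and the init script path (/etc/init.d/folding) is an assumption; adjust it to wherever yours lives:

```shell
# /etc/crontab entries (illustrative; the script path is hypothetical)
# Throttle Folding@home to 50% during working hours...
0 9  * * *  root  /etc/init.d/folding throttle
# ...and give it the whole core back overnight.
0 23 * * *  root  /etc/init.d/folding unthrottle
```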

Here it is if you want it!


export DIRECTORY=/var/cache/fah
export OUTPUT=/dev/null

test -f $DIRECTORY/fah6 || exit 0

title() {
  echo $1
}

case "$1" in
  start)
    title "Starting Folding@home"
    # Double quotes so $DIRECTORY and $OUTPUT expand before su runs the command
    su $USER -c "nohup $DIRECTORY/fah6 >$OUTPUT 2>&1 &"
    ;;
  stop)
    title "Stopping Folding@home"
    killall -15 fah6 || error=$?  # killall takes a process name, not a path
    ;;
  restart)
    $0 stop; $0 start
    ;;
  unthrottle)
    FHPID=$(ps aux | grep FahCore | grep '[TR]N' | grep -v grep | awk '{print $2}')
    CLPID=$(ps aux | grep "cpulimit -p $FHPID -l" | grep -v grep | awk '{print $2}')
    if [ "$CLPID" != "" ]; then
      echo "Killing existing cpulimit, $CLPID"
      kill -9 $CLPID
    fi
    kill -18 $FHPID # It may be in SIGSTOP, so send it a SIGCONT
    ;;
  throttle)
    $0 unthrottle
    FHPID=$(ps aux | grep FahCore | grep '[TR]N' | grep -v grep | awk '{print $2}')
    if [ "$FHPID" != "" ]; then
      echo "Found process $FHPID, throttle to 50%"
      nohup cpulimit -p $FHPID -l 50 >$OUTPUT 2>&1 &
    else
      echo "Could not find fah process!"
    fi
    ;;
  *)
    echo "Usage: $0 { start | stop | restart | throttle | unthrottle }"
    exit 1
    ;;
esac

exit 0

Folding@home Team Statistics Scraper

December 11, 2009 » Geek

I created a team for Little Filament on Folding@home. Our team number is 172406 (in case you want to join), but I wanted to add our latest stats to the Little Filament site. As far as I can tell there is no API for the stats, so I worked up a scraper in bash.

Basically all it does is fetch the page, then grep and sed its way to the variables, finally dumping them into a JSON file (for easy JavaScript consumption).
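The extraction step is just text munging. As a toy example of the pattern (the HTML fragment here is made up, not the stats server’s actual markup):

```shell
# Hypothetical markup standing in for the real stats page
html='<tr><td>Grand Score</td><td>4355</td></tr>'

# Grab the label plus the cell after it, then strip everything but digits
score=$(echo "$html" | grep -o 'Grand Score</td><td>[0-9]*' | sed 's/[^0-9]//g')
echo "$score"   # prints 4355
```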

The kicker is that the stats server is overloaded or down a lot, so we can’t rely on it and we don’t want to stress it out further. My decision was to poll it at a large interval, 12–24 hours. I don’t have enough clients on the team to effect significant change over 6–12 hours, but I don’t want to fall too far out of date either. So if the server is overloaded and drops it once or twice, not a big deal.

Without further ado, here is the script.


NOW=$(date +%s)
THEN=$(cat fah_check.lock | tr -d '\n')

if [ $NOW -gt $(($THEN + 86400)) ]; then
	wget "http://fah-web.stanford.edu/cgi-bin/main.py?qtype=teampage&teamnum=172406" -O fah_check.html
	if [ "$?" == "0" ]; then
		grep "Grand Score" fah_check.html > /dev/null 2>&1
		if [ "$?" == "0" ]; then
			SCORE=$(grep -C 2 "Grand Score" fah_check.html | sed 's/[^0-9]//g' | tr -d '\n')
			WU=$(grep -C 2 "Work Unit Count" fah_check.html | sed 's/[^0-9]//g' | tr -d '\n')
			RANK=$(grep -C 1 "Team Ranking" fah_check.html | sed 's/[^0-9of]//g' | tr -d '\n' | sed 's/f\([0-9]*\)of\([0-9]*\)/\1 of \2/')
			echo "{\"score\": \"$SCORE\", \"work_units\": \"$WU\", \"rank\": \"$RANK\" }" > fah_check.json
			echo "[$NOW] - Success!" >> fah_check.log
			echo $NOW > fah_check.lock
		else
			echo "[$NOW] - Filter Failed" >> fah_check.log
		fi
	else
		echo "[$NOW] - Download Failed" >> fah_check.log
	fi
else
	echo "[$NOW] - Skip Update" >> fah_check.log
fi

That cranks out fah_check.json, which looks like this:

{"score": "4355", "work_units": "20", "rank": "39881 of 169721" }
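If you ever want the values back out in a shell script rather than JavaScript, the JSON is flat enough that a sed hack suffices (for anything fancier, a real JSON parser is a better idea):

```shell
json='{"score": "4355", "work_units": "20", "rank": "39881 of 169721" }'

# Pull a single field out of the flat JSON
score=$(echo "$json" | sed 's/.*"score": "\([^"]*\)".*/\1/')
echo "$score"   # prints 4355
```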

To see it in action, check out the Little Filament Folding page.