Friday, October 31, 2008

Sendmail

Setting up (send-)mail nowadays is just a pain in the b#tt.

After having used Gmail for too long, one tends to forget. All I need is to be able to send mail from scripts, so that the cron jobs managing the backups can e-mail me the results, and remind me when it's time to copy the backups to DVD.

Anyway, after a day's frustration with postfix & mailto for smtp.gmail.com I gave up and installed nullmailer instead (http://untroubled.org/nullmailer/). I'm now relaying through my ISP (Glocalnet): God damn their bones. Nullmailer is a sweet little utility, even though it doesn't do much more than can be done with a simple telnet session to the smtp server.
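From memory, the nullmailer side of it boils down to something like this on a Debian-ish system; the relay host below is just a placeholder, put your own ISP's smarthost there and check the nullmailer docs for the exact file format:

# tell nullmailer which smarthost to relay through (placeholder host name)
echo 'smtp.example-isp.net smtp' > /etc/nullmailer/remotes

# quick test through the sendmail wrapper that nullmailer installs
printf 'Subject: test\n\nHello from the backup box.\n' | /usr/sbin/sendmail you@example.com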

It doesn't work with gmail though; apparently the new authorisation mechanism is too much for it to handle. At least it's honest about it. Postfix just doesn't give you many hints about what it can and can't do (and what's screwed up). Here are a few links to some seriously misleading articles:

http://www.dslreports.com/faq/6456
http://www.howtoforge.com/postfix_relaying_through_another_mailserver
http://www.marksanborn.net/linux/send-mail-postfix-through-gmails-smtp-on-a-ubuntu-lts-server/

http://www.linuxquestions.org/questions/linux-networking-3/postfix-relay-thru-gmail-316352/

IT guys are just perverts...

Wednesday, October 29, 2008

TinKer project restored

Finally my TinKer project has been moved from SourceForge and fully restored from backups.

TinKer is a real-time kernel with ambitions of becoming a real-time embedded OS. It has some features that not many other kernels of its size have, among others an almost complete POSIX 1003.1c implementation.

The project's URL is now http://kato.homelinux.org/~tinker/cgi-bin/wiki.pl/ but to be on the safe side, better use the DynDNS address http://tinker.webhop.net/

Saturday, October 25, 2008

Safety concerns - Backup already...

After several recently lost hard disks I've come to the realization that the quality of hard disks is not what it once was. Therefore I'll post a few articles on how to conquer that beast once and for all.

The problem is that a fool-proof backup system suitable for a hobbyist is difficult to build. How do you know it works until you really need it, and how do you remember everything you need to know when that time comes? The better the system, the less likely you are to need a restore, and the more time will pass before you have to recover both your data and your memory of how you stored it.

Furthermore, I don't trust all the backup systems out there. What happens if they fail? Do you really know how to run them? If they break, can you afford a consultant to fix things for you? As a hobbyist, one or more of those questions is likely answered to your disadvantage.

What you *really* need instead is the following:

  • A very simple system, something that you can understand and trust.
  • A well planned schedule
  • 2 backup destinations, at least one not part of the system you're backing up
  • An automated e-mail reminder
  • A bunch of DVD's and a burner
  • A safe

This article describes the first point. Consider the following script (~/bin/packfs.sh):

#!/bin/bash
# packfs.sh -- pack a directory tree into a time-stamped tar.gz and prune old backups.
# Usage: packfs.sh <base-name> <nr-of-backups-to-keep> <source-path>

TS=$(date +%y%m%d-%H%M%S)

if [ -z "$1" ]; then
    echo "Error $0: Arg #1 needs to be the base name of the package." 1>&2
    exit 1
fi
if [ -z "$2" ]; then
    echo "Error $0: Arg #2 needs to be the number of backups to keep." 1>&2
    exit 1
fi
if [ -z "$3" ]; then
    echo "Error $0: Arg #3 needs to be the source path." 1>&2
    exit 1
fi

FN_BASE=$1
N_BACK=$2
SOURCE=$3

# pack <dest-dir> <source-path>
# Creates <dest-dir>/<base-name>-<timestamp>.tar.gz from <source-path>.
function pack {
    STARTDIR=$(pwd)
    FN=$FN_BASE-$TS

    if [ -z "$1" ]; then
        echo "Error $0: Can't determine where to put the result file." 1>&2
        exit 1
    fi
    if [ -z "$2" ]; then
        echo "Error $0: Can't determine the source path." 1>&2
        exit 1
    fi

    cd "$2" || exit 1
    # Name of the directory itself (empty means we're at /)
    DIRNAME=$(pwd | sed -e 's/.*\///')
    if [ -z "$DIRNAME" ]; then
        DIRNAME=.
    fi

    cd ..
    echo "$0: Packing [$DIRNAME] as [$1/$FN.tar.gz]"
    tar --one-file-system -czf "$1/$FN.tar.gz" "$DIRNAME"

    cd "$STARTDIR"
}

# keeplast <backup-dir> <nr-to-keep>
# Deletes all but the <nr-to-keep> newest backups matching the base name.
function keeplast {
    STARTDIR=$(pwd)

    if [ -z "$1" ]; then
        echo "Error $0: Can't determine where the backups are." 1>&2
        exit 1
    fi
    if [ -z "$2" ]; then
        echo "Error $0: Don't know how many backups to keep." 1>&2
        exit 1
    fi

    cd "$1" || exit 1
    N_FILES=$(ls | grep -c "$FN_BASE")

    N_DEL=$((N_FILES - $2))
    if [ "$N_DEL" -gt 0 ]; then
        # The timestamp in the name sorts chronologically, so the tail is the oldest.
        DFILES=$(ls | grep "$FN_BASE" | sort -r | tail -n "$N_DEL")
        echo "$0: Deleting obsolete backup(s): "
        echo "$DFILES"
        echo "================================================================================"
        # The generated names contain no spaces, so plain word splitting is fine here.
        rm $DFILES
    fi

    cd "$STARTDIR"
}

if [ -z "$BACKUP1_PATH" ]; then
    echo "Error $0: BACKUP1_PATH needs to be set." 1>&2
    exit 1
fi

pack "$BACKUP1_PATH" "$SOURCE"
keeplast "$BACKUP1_PATH" "$N_BACK"


Define the environment variable $BACKUP1_PATH and you're ready to back up. $BACKUP1_PATH should be on a physically different disk (or network mount) than the one you're backing up.
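For example (the mount point below is just a placeholder for wherever your second disk or network share is mounted):

# e.g. in ~/.bashrc, or set it at the top of the cron job
export BACKUP1_PATH=/mnt/backup2
mkdir -p "$BACKUP1_PATH"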

Run the script from the command line first to test that it works:

packfs.sh dayly-auser 7 /home/auser

A backup will be created in $BACKUP1_PATH at each run of the script, with a filename containing a timestamp. The script also tidies away earlier backups, but keeps the 7 last ones.
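To give an idea (the dates are made up), a week of runs of the example above leaves files like these in $BACKUP1_PATH:

dayly-auser-081025-043001.tar.gz
dayly-auser-081026-043001.tar.gz
...
dayly-auser-081031-043001.tar.gz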

The example above shows backing up a certain user, but the script can be used for any part of a system, including the complete system itself:

packfs.sh server 3 /

If you run the script as a cron job as root, you're a good bit of the way toward a safe, simple and fairly fool-proof automated backup system.
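For example, a root crontab (crontab -e as root) along these lines runs a nightly full-system backup; the path, mount point and address are placeholders, adjust to your own setup:

MAILTO=you@example.com
BACKUP1_PATH=/mnt/backup2
30 2 * * * /root/bin/packfs.sh server 3 /

Cron mails whatever the script prints, including the list of deleted old backups, straight to MAILTO, which is exactly where the (null)mailer business at the top of the page comes in.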

Come to think of it, I really have to teach my kids not to save all the garbage they find on the internet in their accounts. Maybe I should learn how to use that quota thingy as well...

Another thing that would be good to learn is how to get the server to scale down its performance when it's not needed (i.e. how to get the hard disks to enter sleep mode and the CPU frequency to adjust dynamically). I've lost three disks in a year, after not losing a single one in the ten years before that, so I'm seriously concerned about heat and component lifespan.