Wednesday, December 31, 2008

Find files offending quota

When using quota I've noticed that the machine becomes sensitive to certain types of usage. In my case the file server is also used as a powerful extra machine for multimedia work, which is normal enough for any Ubuntu installation.

According to the "old books" one should divide the root fs into several partitions, which is totally meaningless on most normal Ubuntu machines, but which makes sense when using quota.

One of the common ways quota gets violated is that some application does not clean up after itself in the /tmp directory. A recommendation is therefore to partition the system so that one of the following layouts is used (each named fs on its own partition):

  • "/home" + "/"
  • "/" + "/tmp"
  • "/" + "/home" + "/tmp"
  • "/" + "/home" + "/tmp" +"/usr"
The last two are really only for historical reference. In fact I would not recommend either of them unless each fs is on a drive of its own (since each partition wastes some extra disk space and makes the complete system more complicated to handle).

Either of the first two is however a very good idea. The main point is to separate /home from the rest: either put it on a fs of its own, or keep it together with the root fs and separate /tmp instead (since users usually don't have write access anywhere else). The second layout actually makes more sense and is the one I prefer. The reason is that /bin and /usr aren't considered big these days, relatively speaking, and it allows the most important of your drives (the one containing /home) to be easily moved and booted on other hardware.

The second case also happens to be the easiest way to fix a monolithic system without tinkering with the main drive (re-partitioning is always an operation that would make me shit in my pants ;) ). Just find one of your old discontinued drives, add it to the system, mount it as /tmp and you're done.
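
If the new drive shows up as, say, /dev/sdb1 (the device name and fs type here are just examples), an /etc/fstab line like the following would mount it as /tmp at boot - just remember that /tmp needs the sticky bit (chmod 1777 /tmp) after the first mount:

# hypothetical /etc/fstab entry - adjust device and fs type to your setup
/dev/sdb1   /tmp   ext3   defaults   0   2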

In case you're out of luck and you don't wish to reinstall the whole server (at least not now), the following command might come in handy:

sudo find / -user $AUSER -size +${SIZEINMEGS}M -exec ls -aldh '{}' ';'
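
For example, to list everything larger than 100 MB owned by a (hypothetical) user auser:

sudo find / -user auser -size +100M -exec ls -aldh '{}' ';'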

Saturday, November 15, 2008

How to access a Samba server as a different user

This apparently is a weakness in Windows according to this guy.

As described here, you will at least be able to change the default user for the next session and onward. In short, this is what makes it happen:

"You will need to end all connections to that computer, for example with the command "
net use \\[server] /d", before you can create a connection with the other set of credentials."

"With the command "net use \\[server] /user:[username] [password]" you can specify a specific set of credentials to use for the connection."

BTW: Make a big fat note about fekking Windows drive letters. If you have any assigned to the same server as the one you wish to run the commands above against, the commands will seem to work (especially under command.com), but they won't. A special pitfall is if you've assigned a drive letter to a server identified by an IP number rather than a name - the old credentials will be used no matter what you try until that "drive" is disconnected (grr.. may all Windows lovers burn in h3ll...). IMHO, better to avoid drive letters altogether. They were a bad idea from birth and a nuisance.

Anywho...:
To handle changing credentials smoothly, you can create a script for Cygwin like this:


#!/bin/bash
# Drop any existing connection to the server first
net use '\\aserver' '/d'

if [ $# -lt 2 ]; then
    echo "Connecting to default user"
    net use '\\aserver' '/user:adomain\auser' 'apassword'
else
    echo "Connecting to specific user $1:$2"
    net use '\\aserver' '/user:adomain\'$1 $2
fi
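
Assuming the script is saved as, say, ~/bin/smbconnect.sh (the name is my own, nothing official), switching credentials then looks like this:

# reconnect with the default credentials
./smbconnect.sh

# reconnect as another user (username and password are placeholders)
./smbconnect.sh anotheruser anotherpassword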

Sunday, November 2, 2008

Install Pine for Ubuntu

I thought it would be nice to have a native mail client on the system, but setting up a mumbo-jumbo mega client for a mail system I rarely use just seems stupid. I figured I'd go for the command line... But the only command-line e-mail client I know of worth knowing is Pine (I'm not an Emacs guy), and a licence issue prevents Pine from being distributed in binary form.

So I compiled a micro how-to based on my own findings:

Get the source: pine.tar.gz

Additionally install the following:
apt-get install libpam0g-dev libldap2-dev libncurses5-dev

Unpack pine sources and build:
cd /usr/local/src/
tar -xvzf somewhere/pine.tar.gz
cd pine4.64
./build ldb
su
cd ../../bin/
ln -s /usr/local/src/pine4.64/bin/pine


Done!

Saturday, November 1, 2008

Postfix MTA - new try

All right, all right - I have to admit, Postfix has its advantages...

It turns out that with quota and warnquota configured, it's impossible to get warnquota warnings delivered to an external recipient. Setting the MAILTO environment variable does not make any difference (believe me, I've tried..). It always sends to localuser@localdomain, where localuser is the user who has violated the quota. I.e. with nullmailer, all warnings go out to the outside world addressed as localuser@mydomain ;(

However, it turns out that Postfix handles this better. First of all, it recognizes local mail recipients and doesn't use the relay host for those. Secondly, local users can be aliased to external e-mail addresses (!). Since I'm new to all this I didn't bother exploring it in detail, but add a line like this to /etc/aliases:

localuser: localuser, remoteuser@remotedomain

then run the postfix command:

newaliases

Now I get mails delivered both to the local /var/mail/localuser and to remoteuser@remotedomain.
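
A quick way to check that the alias fans out to both destinations, assuming a command-line mail client such as bsd-mailx is installed:

echo "alias test" | mail -s "alias test" localuser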

Naaajs....!

The sasl stuff turned out to be the same for my ISP Glocalnet as for Google. For one reason or another, the second attempt with postfix worked out (I didn't try relaying through smtp.gmail.com though). Don't ask me why it works now - I still claim IT guys are pervs.. :)


When installing Postfix you get a few options to choose from. Do yourself a big favour by choosing the right one:

Internet Site--This would be your normal configuration for most purposes. Even if you're not sure of what you want, you can choose this option and edit the configuration files later.

Internet Site Using Smarthost--Use this option to make your internal mail server relay its mail to and from your ISP's mail server. You would use this when you don't have your own registered domain name on the Internet. This option can even be used with dial-up Internet access. When the system dials up the ISP, it will upload any outgoing mail to the ISP's server, and download any incoming mail from the server.

Satellite system--Use this option for setting up a relay that routes mail to other MTAs over the network.

Local system--Use this option for when you're just running an isolated computer. With this option, all email would be destined for user accounts that reside on this stand-alone client.



For me the right choice was Internet Site Using Smarthost (make sure 'inet_interfaces = all' is set in main.cf for receiving to work). Together with this hint, I can now both send and receive e-mail for accounts on my system (wow!). Guess who's going to remote control.stuff@home ;)
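
For reference, a minimal sketch of what the smarthost-related settings in /etc/postfix/main.cf could look like (the relay host name is a placeholder for your ISP's SMTP server):

# excerpt from /etc/postfix/main.cf - host name is a placeholder
relayhost = smtp.example-isp.net
inet_interfaces = all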

BTW - I had never configured an MTA, cron or quota before. Given that, I think having done this in two or three days is not too bad.

Friday, October 31, 2008

Sendmail

Setting up (send-)mail nowadays is just a pain in the b#tt.

After having used Gmail for too long, one tends to forget. All I need is to be able to send mail from scripts, so that the cron jobs managing the backups can e-mail me the results, plus reminders when copy-to-DVD backups are needed.

Anyway, after a day's frustration with postfix & mailto for smtp.gmail.com I gave up and installed nullmailer instead (http://untroubled.org/nullmailer/). I'm now relaying through my ISP (Glocalnet): God damn their bones. Nullmailer is a sweet little utility, even though it doesn't do much more than can be done with a simple telnet session to the smtp server.

It doesn't work with gmail though; apparently the new authorisation mechanism is too much for it to handle. At least it's honest about it - postfix just doesn't give you many hints about what it can and can't do (and what's screwed up). Here are a few links to some seriously misleading articles:

http://www.dslreports.com/faq/6456
http://www.howtoforge.com/postfix_relaying_through_another_mailserver
http://www.marksanborn.net/linux/send-mail-postfix-through-gmails-smtp-on-a-ubuntu-lts-server/

http://www.linuxquestions.org/questions/linux-networking-3/postfix-relay-thru-gmail-316352/

IT guys are just perverts...

Wednesday, October 29, 2008

TinKer project restored

Finally my TinKer project has been moved from SourceForge and fully restored from backups.

TinKer is a real-time kernel with ambitions of becoming a real-time embedded OS. It has some features that not many other kernels of its size have, among others an almost complete POSIX 1003.1c implementation.

The project's URL is now http://kato.homelinux.org/~tinker/cgi-bin/wiki.pl/ but to be on the safe side, better use the DYNDNS address http://tinker.webhop.net/

Saturday, October 25, 2008

Safety concerns - Backup already...

After losing several hard disks recently, I've come to the realization that the quality of hard disks is not what it once was. Therefore I'll post a few articles on how to conquer that beast once and for all.

The problem is that a fool-proof backup system suitable for a hobbyist is difficult to build. How do you know that it works until you really need it, and how do you actually remember all you need to know when the time comes? The better the system, the less likely it is that you'll ever need to restore, and the more time will pass before you need to recapitulate both your data and your memory of how you stored it.

Furthermore, I don't trust all the backup systems out there. What happens if they fail? Do you really know how to run them? What if they break - can you afford a consultant to fix things for you? As a hobbyist, one or more of the previous questions is likely answered to your disadvantage.

I.e. what you *really* need instead is the following:

  • A very simple system, something that you can understand and trust.
  • A well planned schedule
  • 2 backup destinations, at least one not part of the system you're backing up
  • An automated e-mail reminder
  • A bunch of DVDs and a burner
  • A safe

This article describes the first point. Consider the following script (~/bin/packfs.sh):

#!/bin/bash

# Time-stamp used in the backup file name
export TS=`date +%y%m%d-%H%M%S`

if [ -z $1 ]; then
    echo "Error $0: Arg #1 needs to be the base name of the package." 1>&2
    exit -1
fi
if [ -z $2 ]; then
    echo "Error $0: Arg #2 needs to be the number of backups to keep." 1>&2
    exit -1
fi
if [ -z $3 ]; then
    echo "Error $0: Arg #3 needs to be the source path." 1>&2
    exit -1
fi

export FN_BASE=$1
export N_BACK=$2
export SOURCE=$3

# Pack $2 (source path) into a time-stamped tar.gz under $1 (destination path)
function pack {
    # Remember where we started (don't use OLDPWD - every cd overwrites it)
    PREV_DIR=`pwd`
    export FN=$FN_BASE-$TS

    if [ -z $1 ]; then
        echo "Error $0: Can't determine where to put the result file" 1>&2
        exit -1
    fi
    if [ -z $2 ]; then
        echo "Error $0: Can't determine the source path" 1>&2
        exit -1
    fi

    cd $2
    # Last component of the source path, or "." when packing /
    DIRNAME=`pwd | sed -e 's/.*\///'`
    if [ -z $DIRNAME ]; then
        DIRNAME=.
    fi

    cd ..
    echo "$0: Packing [$DIRNAME] as [$1/$FN.tar.gz]"
    tar --one-file-system -czf $1/$FN.tar.gz $DIRNAME

    cd $PREV_DIR
}

# Keep only the $2 newest backups matching $FN_BASE in directory $1
function keeplast {
    PREV_DIR=`pwd`
    if [ -z $1 ]; then
        echo "Error $0: Can't determine where the backups are" 1>&2
        exit -1
    fi
    if [ -z $2 ]; then
        echo "Error $0: Don't know how many backups to keep" 1>&2
        exit -1
    fi
    cd $1
    N_FILES=`ls | grep $FN_BASE | awk 'END{print NR}'`

    let "N_DEL = $N_FILES - $2"
    if [ $N_DEL -gt 0 ]; then
        # Sorted newest first, so tail gives the oldest ones to delete
        DFILES=`ls | grep $FN_BASE | sort -r | tail -$N_DEL`
        echo "$0: Deleting obsolete backup(s): "
        echo "$DFILES"
        echo "================================================================================"
        rm $DFILES
    fi

    cd $PREV_DIR
}

if [ -z $BACKUP1_PATH ]; then
    echo "Error $0: BACKUP1_PATH needs to be set." 1>&2
    exit -1
fi

pack $BACKUP1_PATH $SOURCE
keeplast $BACKUP1_PATH $N_BACK


Define the environment variable $BACKUP1_PATH and you're ready to back up. $BACKUP1_PATH should be on a physically different disk (or network mount) than the one you're backing up.

Run your script from the command line first to test that it works:

packfs.sh dayly-auser 7 /home/auser

A backup will be created in $BACKUP1_PATH at each run of the script with a filename containing a timestamp. The script will also tidy earlier backups, but keep the 7 last ones.

The example above shows backing up a certain user, but the script can be used for any part of a system, including the complete system itself:

packfs.sh server 3 /

If you run the script as a cron job as root, you're well on your way to creating a safe, simple and fairly fool-proof automated backup system.
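
A root crontab entry for a nightly run could look something like this (the backup path and time are just examples):

# run the backup of /home/auser every night at 03:15
15 3 * * * BACKUP1_PATH=/mnt/backup /root/bin/packfs.sh dayly-auser 7 /home/auser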

Come to think of it, I really have to teach my kids not to save all the garbage they find on the internet in their accounts. Maybe I should learn how to use that quota thingy as well...

Another thing that would be good to learn is how to get the server to scale down its performance when it's not needed (i.e. how to get the hard disks to enter sleep mode and the CPU frequency to adjust dynamically). I've lost three disks in a year, after not losing one in the ten years before, so I'm seriously concerned about heat and component lifespan.

Wednesday, July 23, 2008

I couldn't help myself but to post this :)



This is my room in the new Google Lively. Click on the image above to enter the virtual 3D room.

You'll need a plugin for your browser from here to enter.

More about Lively at http://www.lively.com/html/landing.html

Sunday, May 11, 2008

libuuid & NIS mess

After updating to 8.04 LTS you'd better check your /etc/passwd

If you have new 'daemon users' there with a UID larger than any of your NIS-exported users, you're likely to get into trouble. In mine, I got a new "user" called libuuid with a UID of 517.

On each machine in your network (including the server), do the following (a command sketch follows after the list):
  • As root modify the UID for libuuid from 517 to 499 ("First NIS UID" -1)
  • chown -R libuuid:libuuid /var/lib/libuuid
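A minimal sketch of those two steps, assuming the UID really is 517 on your systems and that 499 lies below your NIS range:

# run as root on each machine
usermod -u 499 libuuid
chown -R libuuid:libuuid /var/lib/libuuid
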
This is tedious work if you have more than a few machines. I'm considering re-ID'ing my users instead, since it seems that starting UIDs at 1000 instead of 500 (as it used to be ages ago) has become widely adopted.

Re-ID'ing a daemon is bound to lead to issues - if not sooner, then at least at the next update.

This issue possibly affects GDM and the list of users visible in the face-browser too.

References:
http://linux.about.com/library/cmd/blcmdl3_libuuid.htm

Upgrading from Ubuntu 6.06 LTS (Dapper Drake) to Ubuntu 8.04 LTS

Back up all your data before proceeding.

# Prepare your system before the actual upgrade

lsb_release -a
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

# If running remotely, don't use ssh. See the previous tips for adding telnet to your machine.
# Then continue with the following lines (if logged in physically, skip the following two commands):
export DISPLAY=www.xxx.yyy.zzz:0.0

# On the remote machine (the one whose display you are using)
xhost +



# The following command opens a front-end. First press the button "Check", then wait a few seconds;
# a button "Upgrade to 8.04 LTS" should appear. Press it and follow the instructions.

sudo update-manager -d



# In case the process breaks, you can try:
sudo dpkg --configure -a

# Re-boot, check version again
lsb_release -a


References:
http://www.ubuntu.com/getubuntu/upgrading
http://www.ubuntu.com/getubuntu/upgrading#head-db224ea9add28760e373240f8239afb9b817f197

Adding telnetd to xinetd

Install the package telnetd


sudo apt-get install telnetd


Copy the following to the file /etc/xinetd.d/telnet


# description: An xinetd internal telnet service
service telnet
{
# port = 23
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
server = /usr/sbin/in.telnetd
server_args = -h
}
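
For xinetd to pick up the new service it typically needs to be restarted, e.g.:

sudo /etc/init.d/xinetd restart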

Saturday, March 1, 2008

DVD read trouble

This is in many cases due to missing decryption support. As root, do:

apt-get install libdvdcss2
/usr/share/doc/libdvdread3/install-css.sh

Starting hidden Cygwin apps

The X server, for example, is nice to have started without an extra annoying console window.

Before you do anything else, make sure you have your Cygwin bin directory in your Windows system PATH. Cygwin programs expect certain .dll files to be found, and they are stored in there.

Download the great little utility hstart

Copy your C:\cygwin\Cygwin.bat to another bat file (C:\cygwin\Cygwin_hidden.bat). Alter it as follows.

@echo off

C:
chdir C:\cygwin\bin

C:\your_personal_bin_path\hstart.exe /NOWINDOW "bash --login -i %1"

..or since the hstart utility is small and self-contained, just slip it into the C:\WINDOWS\system32 directory and replace the last line with this:

hstart.exe /NOWINDOW "bash --login -i %1"

Please note the quotes. Now change the Cygwin link you want to start hidden (X-Cygwin for example) to point to the new bat file instead.

If you want to start X-Cygwin completely hidden (i.e. even without the xterm), edit the file /usr/X11R6/lib/X11/xinit/xinitrc and replace the xterm exec line as follows:

#exec xterm -e /usr/bin/bash -l
exec xclock -geometry 100x130+1700+0

(You seem to need at least one X application to prevent the X server from terminating).

Tune your start app with standard X command line options.

If you drag the X-Cygwin link to your autostart folder, you'll end up having X running in the background, and combined with the hints in the previous post you have something almost Linux-like ;)

Starting Linux apps on your Windows host.

  • First of all, install Cygwin (and make sure that you've included X).
After installation, add the Cygwin bin directory to your system path. This is needed for Windows to find Cygwin's .dll files. Right-click on "My Computer" on the desktop, select the Advanced tab and click on "Environment Variables". In the "System" section, find the Path variable and add the new path last (C:\cygwin\bin;).
  • Create a directory ~/bin and a file ~/.bash_profile and in the latter add (at least) the following lines:

export PATH=~/bin:$PATH
export DISPLAY=:0.0


  • Create an ssh key binding to the machine you want to use (see previous post)

  • Make an association in Windows from .sh files to bash (right-click on any .sh file and select "Open with").
To associate bash scripts with Cygwin you'll need to alter a key in regedit. Find the key:

HKEY_CLASSES_ROOT\sh_auto_file\shell\open\command

  • Change its default value from:

"C:\cygwin\bin\bash.exe" "%1"

to:

"C:\cygwin\bin\bash.exe" "-l" "%1"

  • Now create some start scripts in ~/bin. For each command/application you want to run on the remote machine, create a .sh script with the same name in the form:

~/bin/acommand.sh
===============
#!/bin/bash

ssh -X server acommand

(replace "acommand" with your application/command name and "server" with the name of your server)

  • Drag a shortcut to your desktop and click it - voilà! (make sure you've started X first though, in case you want to run graphical apps).

Tuesday, February 5, 2008

How to set-up SSH to not require a password every time you log into a remote machine.

Have a look at this nice how-to:
http://www.astro.caltech.edu/~mbonati/WIRC/manual/DATARED/setting_up_no-password_ssh.html

I.e. in short:

On local side:
ssh-keygen -t dsa -f .ssh/id_dsa
cd .ssh
scp id_dsa.pub user@remote:~/.ssh/id_dsa.pub
ssh user@remote

On remote side:
cd .ssh
cat id_dsa.pub >> authorized_keys2
chmod 640 authorized_keys2
rm id_dsa.pub
exit

On a new account on the server, you might also first want to run:
remote> ssh-keygen -t dsa

This will help create the .ssh directory on the server that might otherwise be missing, and set its attributes correctly (having the directory attributes wrong is otherwise a source of error, chaos & confusion).
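
In case the directory was created by hand, a sketch of the permissions that usually keep sshd (with StrictModes) happy:

chmod 700 ~/.ssh
chmod 640 ~/.ssh/authorized_keys2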