Monday, December 26, 2016

How to use Google services the smart way

As you surely know, there is more than one way to use Google services (mail, chat, blog etc.).

Most people nowadays create a Google account when they buy a new phone, mainly to have Google as their e-mail provider.

Even if the integration between the Google account and services like Gmail, Chrome etc. is almost seamless, there are cases where you actually want them separated.

One example is when you already have an e-mail address (and provider) but still want to use the rest of Google's services, especially with that e-mail address as your account name. This is a very common case with your work e-mail (often hosted by the company itself) versus your private one.

One way of doing this is by association, i.e. you bind your work e-mail address to your private Google account.

Don't do that!

There are many drawbacks with this. The one that bothers me the most is that in Chrome you have to log out from one to use the other; they easily get mixed up and you can't separate notifications between the two (i.e. your private life will suffer). Another is that you can't have family members reaching you on Hangouts on one "account" (remember, it's not an account, it's just an associated e-mail address) while still having your work chat active, and vice versa. And a third one: what happens when you leave your employer? Purging an account is much easier than purging contents in a shared associative one.

Google tries hard to associate addresses instead of permitting multiple real accounts, for reasons not clear to me (well, except maybe the shady ones).

But you can instead create an unassociated account that uses the work e-mail address as its account name. This is one way to do it:


  1. Use a normal computer and run the Chrome web browser.
  2. Log out from Chrome with the current user. This should take you to a page where you can sign in again AND where there is a link underneath saying Create New User.
  3. You now have a login screen up front instead. Choose "Create new user". Depending on Chrome version/distribution you may not get to the correct login screen; you can then try following this link instead (note: while still logged out): Create New User
  4. Create the user with the correct name@domain (the same as you use for your daily work e-mail).
  5. Carefully avoid entering any other e-mail addresses when asked.
  6. Done
Now participating in shared documents should work straight away. But Hangouts may not work until you also do a little trick (Hangouts isn't working if, where the chat-list history should be, you see a little circle that never stops changing colours no matter how long you wait, with the text "Things are taking longer than expected. (Errors: 212, 213, 214)" underneath it):

While logged into Chrome with your new user account, go to Google Calendar: https://calendar.google.com
There you'll be asked to create a Google Calendar account. After that, Hangouts "should" (TM) work.

Hint: You can use this on your phone too. Add the account under Settings and just un-check mail sync if it isn't already, as you don't have Google-hosted e-mail under that name (the account name just happens to be an e-mail address, but not one hosted by Google).

Saturday, December 24, 2016

Experiences from a Debian 6 server meltdown due to a physical disk crash: 1. Getting the services back online (ejabberd)

After a long silence, here's finally a new tip. This time it's about ejabberd.

My server running an old Debian 6 burned up. Of course all of /home was backed up, but who backs up the root-fs? When you're not working with UNIX server administration for a living, that partition represents a valuable investment.

I've had the habit of using git for all config changes and was relying on that as "backup". Well, it has some drawbacks... I could salvage most of the content of the root-fs, but then again Debian 6 is by now a very old distro, suffering from the Heartbleed security bug among others.

So I figured I'd install Debian 8.6 instead. Surely most of the services should be portable?

Nope, not at all!

Apache2 is version-incompatible, as is ejabberd. All the hours (weeks) spent setting these up blew up in an instant. All vhosts, all HTTPS configs, portals, PEM configs of various kinds, user databases etc. POFF!

If you have spent hours and hours setting services up and don't do it every day (i.e. you're not an admin by profession), PLEASE back up your rootfs regularly too. Make the FS small; 16G should be more than plenty for a server, yet small enough to back up entirely to the NAS which nowadays is in every man's home (or somewhere else, just do it!).
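A single tar line run from cron is enough for this purpose (a sketch; the NAS mount point is just an example):

tar --one-file-system -czf /mnt/nas/rootfs-$(date +%y%m%d).tar.gz /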

You may not want to restore to the exact same version, as in my case, but at least you can restore temporarily and use the distribution's upgrade mechanisms to change both HW and distro version. If all works as it should, it will save you lots of time, weeks in my case. Broken rootfs:es usually have a tendency not to boot at all. If not only the fs is broken but the whole disk is (even if just partly), then you might as well forget about any repair that will either preserve data or enable you to quick-upgrade.

So back to the painstaking reconfiguration!

Ejabberd turned out to be fairly salvageable though.  From the partly recovered rootfs:

find . -type d -iname "ejabbe*"

Stop the destination's ejabberd server:

ejabberdctl stop

Then copy the relevant directories onto the new host. Make sure to check permissions (at least user:group) on the destination before copying, so you can correct them on the destination afterwards.

scp -r ./etc/ejabberd/ root@FQDN:/etc/
scp -r ./var/lib/ejabberd root@FQDN:/var/lib/
scp -r ./var/log/ejabberd/ root@FQDN:/var/log/

scp -r ./usr/share/ejabberd root@FQDN:/usr/share
scp -r ./usr/share/doc/ejabberd root@FQDN:/usr/share/doc/

The last two you can/should probably omit. The first one obviously not; the second is less obvious but just as important: /var/lib/ejabberd is where ejabberd keeps its database.
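If the ownership came out wrong after the copy, check what a fresh install expects and correct the copied trees accordingly (a sketch; the exact owners may differ between Debian versions):

ls -ld /etc/ejabberd /var/lib/ejabberd /var/log/ejabberd
chown -R ejabberd:ejabberd /var/lib/ejabberd /var/log/ejabberd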

That actually worked almost painlessly. It turned out that the ejabberd config format had changed and is now also in another file (/etc/ejabberd/ejabberd.cfg -> /etc/ejabberd/ejabberd.yml).

Open both files, go through them carefully setting by setting, and copy-paste & reformat each setting.
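Just to show the shape of it, a typical setting goes from the old Erlang-term style to YAML roughly like this (hosts/loglevel used purely as examples):

Old ejabberd.cfg:

{hosts, ["example.com"]}.
{loglevel, 4}.

New ejabberd.yml:

hosts:
  - "example.com"
loglevel: 4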

When done, give it a try while keeping an eye on the log:

tail -f /var/log/ejabberd/ejabberd.log
ejabberdctl start

And voilà: the server is up. Users are there, PEM seems to work, and perhaps most importantly: all the friend connections!

Lesson learned though: Back up your rootfs properly too! This time (and with this service) it was just dumb-luck.

(If you want to chat with me over XMPP/jabber, you can hence now do so again: michael@ambrus.se)

Monday, November 18, 2013

Multiple NIC:s server behind NAT-router - part II

This part is actually harder to understand technically speaking, so for now I'm just going to leave you with a script that does the job. Invoke the script somewhere from init.rc; the order relative to the dyn-DNS script doesn't matter, and it's perfectly alright to invoke this one before the dyn-DNS script.

Note however that the script can fail if the NIC isn't ready when it's run. It will also stop working if a removable NIC (USB WLAN for example) gets unplugged, in which case it has to be rerun, as the routing tables will have been flushed and the internal IP numbers will probably differ anyway thanks to DHCP. The script is robust to being rerun, however, so you could add it to crontab as well with a fairly slow update rate, say once an hour. Or better yet, have a daemon detect when a link is broken and re-established and run the script then.

Also note that even though one NIC will have a proper back-route in the default table, it doesn't hurt to add one more table/route/rule set per NIC, to cover for not knowing which NIC will come up first and which ones will be secondary.

Here's the script. Invoke it with one argument, the NIC-name (you can get the NIC-names from the command ifconfig):
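A minimal sketch of such a script (assuming the iproute2 tools, /24 subnets and gateways on the .1 address, as in the example networks of the previous post):

#!/bin/bash
# Per-NIC back-routing: whatever comes in on an interface goes back out the same one.
# Usage: nic_backroute.sh <interface>        e.g.: nic_backroute.sh wlan0

set -eu
IF=$1

# This NIC's address, subnet and gateway
IP=$(ip -4 addr show dev "$IF" | awk '/inet /{sub("/.*","",$2); print $2; exit}')
NET=$(ip -4 route list dev "$IF" scope link | awk '{print $1; exit}')
GW=${NET%.0/24}.1

# One routing table per interface (the numbers are arbitrary but must be unique)
case "$IF" in
  eth0)  TABLE=100 ;;
  wlan0) TABLE=101 ;;
  wlan1) TABLE=102 ;;
  *)     TABLE=110 ;;
esac

# Rebuild the per-interface table: its own link route and its own default route
ip route flush table $TABLE 2>/dev/null || true
ip route add "$NET" dev "$IF" src "$IP" table $TABLE
ip route add default via "$GW" dev "$IF" table $TABLE

# The rule: replies originating from this NIC's address are looked up in that table
ip rule del from "$IP" 2>/dev/null || true
ip rule add from "$IP" table $TABLE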



Basically, what the script does is create a new table for each interface it's ever seen (which shouldn't be too many), and give that table its own default route (via the "router" that's on the same subnet as the NIC).

To that table there are also rules saying "whatever comes in must go back the same way".

Sunday, November 17, 2013

Multiple NIC:s server behind NAT-router

Say that you'd like to create an El-Cheapo server accessible from the Internet, or just want to access your home PC from work AND that you have some reason to have two "routers" (i.e. NAT-firewall gizmos, commonly but erroneously named "routers").

Me for example, I don't have any physical line, as the wireless broadband (3G/4G) in the neighborhood is very good, though not as reliable as a physical line would be. I'm also very much into the idea of personal freedom, I'm into coastal sailing, and I like the idea of being able to bring my stuff with me.

(I also don't want to pay the network provider extra for housing servers, which luckily I don't have to from a legal standpoint because my provider is very liberal. But some other fascist bastards in this country do have the stomach to charge for inbound traffic, and they have even set up limitations for it already. Officially the reason is that they want to get paid for their services, especially for voice traffic. WTF - who cares about voice nowadays!? There's no doubt in my mind the real reason is to limit file-sharing using torrents, like this is going to be such a big hindrance. Bah, twisted trolls. May they rot in Hell... Anyway, back to the story...)

The plans for my accounts are very different. One is very fast and quite cheap downstream and also has unlimited quota (to my kids' enormous joy and pleasure). The other costs virtually nothing but is heavily throttled downstream, though not upstream. This is probably a mistake by the operator, but who am I to correct their mistakes; they're not correcting mine :-P. Furthermore, I'd like my "server" to be accessible by name(s), not by number, and the IP numbers change from time to time, each time there's been a glitch and a "router" restarts.

Add to this the fact that radio-based networks of various flavors are by nature not as stable as their wired cousins, and what do you do? Hmm... Interesting. Challenge accepted :-)

What you could do is use the old spare router and account you have left over and attach it to an extra NIC, and then use that extra router for inbound traffic (i.e. traffic initiated from the outside in) and the other one for outbound traffic (i.e. the other way around).

But here's the problem. Since these "routers" are not real routers, they change the packets' IP-header address fields and piggy-back onto another field (the port number) to translate back and forth. This is called NAT and is the trick behind why IPv4 has survived so many years longer than anticipated. In practice, each household has become a network of its own where all inside nodes share the same public IP number.

This works extremely well, to the point that what was once believed to be an IP-number shortage and the end of IPv4, and the big motivation behind the gigantic infrastructure undertaking of IPv6 a decade ago, now mostly seems to have become just a fart upwind.
 
It was originally thought that Net traffic would be distributed fairly equally, which in a sense is somewhat true. But it seems no one in the late 90's or early 00's thought about there being that many more consumers (clients) than data providers (servers), and that initiation of traffic would be mostly unidirectional. Big data providers hosting your e-mail and so forth have certainly helped in that direction, even if their motivation IMO is quite questionable. NAT solved this problem overnight, and what's even better, the infrastructure cost was completely shifted away from the backbones and subnets, down to the end users (IPv6 would have meant a global paradigm shift, affecting each router and each computer in the world. Some people still believe IPv6 is coming; personally I consider it stillborn). Not realizing this in time, before shouting "wolf!" and scaring up the whole world, now that's what I'd call a "once in a lifetime blunder"... (only superseded by the Y2K ditto). The only outcome of such mistakes is a total loss of trustworthiness, and even if there's a real need behind it, any solution will be greatly delayed.

However, there are a few real technical downsides to keeping IPv4 via NAT:s instead of IPv6, and this writing is about one of them:

For multi-homed hosts and concerning in-bound traffic: What comes in on one NIC, must go back via the same NIC (!).

This won't happen auto-magically unless you do some wizardry. The reason is that the default route in your server will always choose one NIC over the other, and if that's not the NIC the traffic came in on, the reply will leave through the second "router", which will NAT it, and the originating outside client socket will not be able to pair the returning packet with the one it sent. Normally it would be perfectly alright for packets to take a different route one way than the other, but if there's a NAT in the way this just won't work and all returning packets will be lost in the void.

Here's how you solve it:

Create (a) specific route(s) and rules.

This needs to be done either for the inferior router only, or it can be replicated for all your routers. The "inferior" router in this context is the one that your server's default route does not point at (i.e. it's not first in the server's routing table). Which one ends up first may differ; it's up to the OS to decide, but on computers the primary NIC is often chosen by which one got ready first. Note however that this may vary a lot. Smartphones for example tend to prefer WLAN over WAN on the (sometimes incorrect) assumption that the user would prefer WLAN or that the WLAN is always faster (which in my case, for example, is almost never true).

Before we continue, let's assume the following for your multi-homed host. Each NIC belongs to a separate test network. Networks in 192.168/16 are used for this purpose, so:

NIC1: 192.168.0.X  / IF1: eth0  / GW1: 192.168.0.1
NIC2: 192.168.1.Y  / IF2: wlan0 / GW2: 192.168.1.1

Let's furthermore assume that each "router" runs a DHCP service and that X and Y are any numbers in the allowed subnets' DHCP ranges (192.168.0. and 192.168.1. respectively). I've found it best to either allocate a DHCP reservation in each "router" or to fix the address close to either end of the subnet's range. For one thing, your server will normally be headless, and besides, it saves you the trouble of physically changing workplace just to look up the IP when something isn't working as it should. Except for choosing different network numbers for each subnet and making sure they don't come into contact with each other (your DHCP servers will fail if they do), the actual numbers are not that important, as you will soon see. The above is just some friendly advice to help debugging should you need it. Personally I prefer reserving a number in the DHCP, but some really old routers won't allow you to do that, and your only choice will then be the second option: narrowing the DHCP range down and allocating a fixed address outside of it, but still within the subnet.



Besides routing, there's still the problem of the dynamic-DNS thingy and the wish to reach the server's NIC:s by name, not number. Most "routers" nowadays have a dynamic-DNS updater built in. But what if your favorite dyn-DNS provider isn't supported by your router? Then you will have to run the updating from a host on the inside. We're going to do that in a cron script on the host that's supposed to be always up: the Seeer-Veeer. Note that many of the once free-as-in-free-beer dyn-DNS service providers (like one of the true originals, "DynDNS", to many people's dismay) have started to charge for their service, way too much considering the actual "service" if you ask me. Remember that our "server" belongs to the fine category of El-Cheapos...





Set up a free dyn-DNS account at any of the free providers, for example http://freedns.afraid.org/, and create two host entries (under the same account if you wish). This provider has an elegant solution for updating DNS entries: the only thing needed is to access a randomly generated, lightly obfuscated URL known only to you, once in a while or whenever your IP:s change. But you want to do that via the right NIC.

So we start by creating a specific route via the right IF, i.e. the one connected to the subnet where the intended router is. Here's a couple of scripts doing just that for you. Note that only the first one contains the logic; the two following are just helper scripts that can be reused once we get as far as the routing. Put them somewhere root can use them (/root/bin perhaps?).
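A minimal sketch of the main script, with the helpers' route handling folded in (assumptions: iproute2 and curl are available, the subnets are /24 with the gateway on the .1 address as in the example above, and freedns' update URL is of the update.php?<token> form):

#!/bin/bash
# dyndns_update.sh <interface> <freedns-update-token (the SHA1 string)>
# Routes the freedns update server via the given NIC's own "router", then hits
# the secret update URL so freedns records the public IP of that router.

set -eu
IF=$1
TOKEN=$2
UPDATE_HOST=freedns.afraid.org

# Gateway of this interface (the "router" on its subnet)
NET=$(ip -4 route list dev "$IF" scope link | awk '{print $1; exit}')
GW=${NET%.0/24}.1

# Make sure traffic to the update server leaves via this interface's router
UPDATE_IP=$(getent hosts "$UPDATE_HOST" | awk '{print $1; exit}')
ip route replace "$UPDATE_IP/32" via "$GW" dev "$IF"

# Hit the update URL; stay silent unless the reply mentions an actual update
RESPONSE=$(curl -s "http://${UPDATE_HOST}/dynamic/update.php?${TOKEN}" || true)
case "$RESPONSE" in
  *Updated*) echo "dyndns_update.sh $IF: $RESPONSE" ;;
esac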

Export the SHA1:s you've got from freedns to make life easier. For the sake of it, I'm using my own case as an example. SHA1:s obfuscated, naturally:

 export PI="aSecretString1="
 export KATO="aSecretString2="

Now you can test that it works:

 dyndns_update.sh eth0 $KATO
 dyndns_update.sh wlan1 $PI

The scripts should be fairly silent unless something goes wrong or an IP address has changed. You can try repeating the above but with the interfaces swapped. Try pinging the names from an externally connected computer (use a shell on your smartphone for example); ICMP will probably not be answered until the routing in the next part is in place, but the IP addresses should be updated.

Add the corresponding information to a crontab running every 5 minutes. Remember that cron lines must be unbroken and that crontab only has a very limited set of variables, i.e. you can't use shell expansion and hence you need to put the complete SHA1 strings in there:

 3,8,13,18,23,28,33,38,43,48,53,58 * * * * dyndns_update.sh eth0 aSecretString1=
 4,9,14,19,24,29,34,39,44,49,54,59 * * * * dyndns_update.sh wlan1 aSecretString2=
 
Note that freedns has more than your heart's desire when it comes to registered, suitable domain names. There is no actual need to consider any other provider, even if you already have one; it's hard to compete with free. Besides, freedns is also very good: any needed updates usually propagate world-wide within seconds.

Next update will be about the actual routing...


Saturday, March 2, 2013

Linkers & loaders - .init/.fini vs. .ctors/.dtors explained

 .init/.fini isn't deprecated. It's still part of the ELF standard and I'd dare say it will be forever. Code in .init/.fini is run by the loader/runtime-linker when code is loaded/unloaded. I.e. on each ELF load (for example of a shared library) the code in .init will be run. It's still possible to use that mechanism to achieve about the same thing as with __attribute__((constructor))/((destructor)). It's old-school but it has some benefits.

The .ctors/.dtors mechanism for example requires support from the system-rtl/loader/linker-script. This is far from certain to be available on all systems, for example deeply embedded systems where code executes on bare metal. I.e. even if __attribute__((constructor))/((destructor)) is supported by gcc, it's not certain it will run, as it's up to the linker to organize it and to the loader (or in some cases, boot code) to run it. To use .init/.fini instead, the easiest way is to use the linker flags -init & -fini (i.e. from the gcc command line, the syntax would be -Wl,-init,my_init,-fini,my_fini).
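As a concrete example of that command line (my_init/my_fini being whatever your own functions are called; the rest is just a typical shared-library build):

gcc -shared -fPIC -o libfoo.so foo.c -Wl,-init,my_init -Wl,-fini,my_fini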

On systems supporting both methods, one possible benefit is that code in .init is run before .ctors and code in .fini after .dtors. If order is relevant, that's at least one crude but easy way to distinguish between init/exit functions.

A major drawback is that you can't easily have more than one "_init" and one "_fini" function per loadable module, and you would probably have to fragment code into more .so files than is motivated. Another is that when using the linker method described above, you replace the original default _init and _fini functions (provided by crti.o). That is where all sorts of initialization usually occur (on Linux, this is where global variable initialization happens). A way around that is described here:
http://www.flipcode.com/archives/Calling_A_Function_At_ELF_Shared_Library_Load_Time.shtml

Notice that cascading to the original _init() is not needed, as it's still in place. But the "call" in the inline assembly is x86 and would look completely different on, for example, ARM. I.e. the code is not transparent/portable.

The .init/.fini and .ctors/.dtors mechanisms are similar, but not quite the same. Code in .init/.fini runs "as is", i.e. you can have several functions in .init/.fini, but AFAIK it is syntactically difficult to put them there fully transparently in pure C without breaking the code up into many small .so files.

.ctors/.dtors is organized differently than .init/.fini. The .ctors/.dtors sections are both just tables of pointers to functions, and the "caller" is a loop that calls each function indirectly. I.e. the loop caller can be architecture-specific, but as it's part of the system (if it exists at all, that is) it doesn't matter.

The following snippet adds new function pointers to the .ctors function array, in principle the same way __attribute__((constructor)) does (the method can coexist with __attribute__((constructor))).

#include <stdio.h>

#define SECTION( S ) __attribute__ ((section ( S )))

void test(void) {
    printf("Hello\n");
}

/* Each pointer is placed directly in the .ctors/.dtors tables and gets
   called by the system's loop over the respective table. */
void (*funcptr)(void) SECTION(".ctors") = test;
void (*funcptr2)(void) SECTION(".ctors") = test;
void (*funcptr3)(void) SECTION(".dtors") = test;



One can also add the function pointers to a completely different, self-invented section. A modified linker script is needed, but with it one can achieve better control over execution order, add in-argument and return-code handling etc. Or in a C++ project, one might need something running before or after the global constructors.
I'd prefer __attribute__((constructor))/((destructor)) where possible; it's a simple and elegant solution even if it feels like cheating. For bare-metal coders like myself, this is just not always an option.

Some good references can be found in the book "Linkers & Loaders": http://www.becbapatla.ac.in/cse/naveenv/docs/LL1.pdf

Saturday, May 19, 2012

Forced to upgrade Brunbuntu, and it's the same old story again: a new release fixes a little and breaks some more. I swear this is on purpose by Canonical, to sell support (or to get testers for free), neither of which I'm a big supporter of as it deliberately avoids focusing on quality. Next time, I promise, I'll go back to old school and build my own distribution completely from source.

What's the point of messing around with fundamentals in user-land, like init, network configuration and boot (grub 1, 2, x...), when they have been working fine in the past? Even Android is better in comparison; at least you know it's different, so you don't expect things to be as they were... for like the past 2 decades! It's bad enough that the Linux kernel isn't following the industry-standard POSIX 1003.1c & 2b API:s. (Yes, it could - but it doesn't, for God only knows what reason.)

I feel like I'm getting old and gray with all this rambling...

Anyhow, back to the story: For my VGN-Z21WN-B laptop, I had difficulties getting X to use accelerated 3D NVIDIA drivers. With 10.04 one could just force X to use the correct driver by changing 3 files (see script below). After tearing my hair for 1/2 day, this bloke led me to what I was missing:

http://www.adhocism.net/2011/05/installing-ubuntu-11-04-on-sony-vaio-vpc-z13m9eb/

Turns out none of that is needed any more, except for one detail:

Edit /etc/default/grub and change GRUB_CMDLINE_LINUX_DEFAULT to "quiet splash acpi_osi=". Then regenerate the Grub configuration by typing "sudo update-grub". This enables static switching of the GPUs when restarting from Ubuntu.
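In other words, the relevant line in /etc/default/grub ends up looking something like this (the empty acpi_osi= at the end is the important part):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi="

sudo update-grub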

Voilà!

No special kernel module that needs recompiling for each new kernel release, no script tweaking the file tree, no patching of /etc/init/gdm.conf and so on...

Here's the old tweak-script for historical reference:


#!/bin/bash
# Set correct X drivers for Sony Vaio Z series
# (Tested on a VGN-Z21WN running Ubuntu 10.04)
#
#Usage: set-xdriver.sh [INTEL|NVIDIA]
# Parameter is optional.
# If no argument is supplied, script will auto-detect GFX
#
# This script is inspired by the script found at:
# https://wiki.ubuntu.com/sonyvaioz
#
# Script is suitable to be called with no argument from
# /etc/init/gdm.conf
#
#pre-start script
# /etc/X11/set-xdriver.sh
#end script

set -e
set -u

function detect_HW() {

   if ( lspci | grep "00:02.1" >/dev/null) ; then
     echo "INTEL"

   else
     echo "NVIDIA"
   fi
}

function tweak_files() {
   if [ "x$1" == "xINTEL" ]; then
      echo "Setting X for [INTEL]"
      ln -sf /etc/X11/xorg.INTEL /etc/X11/xorg.conf
      ln -sf /usr/lib/mesa/libGL.so /usr/lib/libGL.so
      ln -sf /usr/lib/xorg/modules/extensions/libglx.so.INTEL /usr/lib/xorg/modules/extensions/libglx.so
   else
      echo "Setting X for [NVIDIA]"
      ln -sf /etc/X11/xorg.NVIDIA /etc/X11/xorg.conf
      ln -sf /usr/lib/nvidia-current/libGL.so /usr/lib/libGL.so
      ln -sf /usr/lib/nvidia-current/xorg/libglx.so /usr/lib/xorg/modules/extensions/libglx.so
   fi
}


function sanity_checks() {
   if [ $(whoami) != "root" ]; then
      echo "Error: This script must be run as root"
      exit -1
   fi

   if [ ! -f /etc/X11/xorg.INTEL ] ; then
      echo "Error: /etc/X11/ lacks separate xorg.conf file for INTEL"
      exit -1
   fi

   if [ ! -f /etc/X11/xorg.NVIDIA ] ; then
      echo "Error: /etc/X11/ lacks separate xorg.conf files for NVIDIA"
      exit -1
   fi

   if [ ! -f /usr/lib/xorg/modules/extensions/libglx.so.INTEL ] ; then
      echo "Error: Original libglx.so for INTEL missing. Did you make a copy?"
      echo "Get a new original from the package xserver-xorg-core if needed"
      exit -1
   fi

   if [ ! -f /usr/lib/nvidia-current/xorg/libglx.so ] ; then
      echo "Error: Package nvidia-current is not installed. Please Install it."
      exit -1
   fi

}

sanity_checks

if [ $# -eq 1 ] ; then
   GFX=$1
else
   GFX=$(detect_HW)
fi

echo "Setting X for [$GFX]"



Saturday, October 10, 2009

Time to try Karmic Koala

Yay, time to try Karmic Koala (Beta).

List of release names:
http://en.wikipedia.org/wiki/List_of_Ubuntu_releases

Trigger the upgrade (from the official page):


Upgrading from Ubuntu 9.04

To upgrade from Ubuntu 9.04 on a desktop system, press Alt+F2 and type in "update-manager -d" (without the quotes) into the command box. Update Manager should open up and tell you: New distribution release '9.10' is available. Click Upgrade and follow the on-screen instructions.

Tuesday, July 21, 2009

Ubunty "registry"

Let me be clear about one thing: Among the various Linux distributions I've tried, Ubuntu is the one I like best (at the moment). I used to be a great fan of RedHat as a simple, clean, nice and well-featured distro. At least the "simple" part went down the drain after 7.2. (One of the crappiest was SuSE, which started out quite well. I liked the idea of commercial forces supporting a free project, but unfortunately the distribution became more and more awkward and the free/open support community more and more egg-headed.)

A few years ago I was thinking: if we're going for the "all GUI" approach anyway, why not Brunbuntu :) I've been mostly a happy puppy with Brunbuntu since then, but only until something needs tweaking. Then it's not so damn funny anymore. One of the things I've been tearing my hair out about lately is the network manager. If you think I'm just whining, try upgrading from Edgy to Jaunty Jackalope. Someone should be shot for messing up such a vital thing as network management, methinks. However, once you've got it right it's actually not that bad at all (I particularly like the concept of being able to assign different settings to the same wlan interface depending on which network you're logged into - really great!).

If you're actually stuck with this, look out for the article about the Network Manager I'm about to write.

Some things in Debian/Ubuntu repel me on a deeper level, however. One of them is the idea of mimicking the Windowze registry. How this common-point-of-failure strategy found its way into the Linux community is beyond my understanding.

Anywho, here come a few hints if you're unlucky enough to have to alter some Gnome application setting.

Instead of the usual Unix/Linux approach, Gnome-aware programs store all their settings in a subdirectory called ~/.gconf

Another place to look is in /etc/xdg/, at least for the "autostart" part for some of the applets.

The "keys" are fortunately files which makes this at least somewhat bearable, if not understandable. The content of these "keys" are however in XML, which again is quite repulsive IMHO.

The program for managing the "keys" in this "registry" thingy is the Configuration Editor (gconf-editor), which is part of your distribution but not enabled in the menu. Run the Menu Editor (alacarte) from a shell and enable it there to make life a little bit easier.
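If you prefer the command line, the same "registry" can be poked at with gconftool-2, for example like this (the key used here is just an arbitrary existing one; browse with -R to find yours):

gconftool-2 -R /apps/metacity | less
gconftool-2 --get /apps/metacity/general/num_workspaces
gconftool-2 --type int --set /apps/metacity/general/num_workspaces 4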

Netbook mode - not playing nicely with Classic Desktop mode

I've been playing around with the netbook-remix package on various machines, aiming to have both netbook desktop management and normal desktop management on the same machine and to be able to choose between them on demand. The netbook stuff is actually not bad if you hook up your laptop (or netbook for that matter) to your TV as a media-player/web-browser thingy. Especially if you remote control your mouse with a Bluetooth application from your phone, it's great to be able to see what application you're starting from a distance.

In the case where a machine is installed from the Ubuntu Netbook distribution and the switch-desktop package is installed on top, this works reasonably well. But if one starts the other way around, with a normal Ubuntu Desktop distro, and then installs netbook-remix and switch-desktop, one ends up in a situation where all windows always start maximized no matter which mode you switch to or even which WM you run.

I.e. no matter which desktop mode you select with the Switch Desktop Mode utility, or if you add your own ~/.xinitrc, the issue with the maximized windows remains (!).

Usually one can google around and find at least a couple of people who've stumbled across the same problem as oneself, but in this case - nothing.

Anyway, to keep a long story short, here's what I did to resolve this particular issue.

In the folder /etc/xdg/autostart/ there used to be two files:

maximus-autostart.desktop
netbook-launcher.desktop

I removed the first one, and after restarting gdm everything works as usual. This leads me to believe that maximus is some sort of ugly patch, because the compiz Maximumize plugin in CCSM was disabled and it makes no difference whether it's set or not.
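In other words, something like this (paths as they were on my install):

sudo rm /etc/xdg/autostart/maximus-autostart.desktop
sudo /etc/init.d/gdm restart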

I'm not an expert on X, WM's, DM's or Gnome, and I dare not say what the xdg structure is or what it's good for.


Sunday, July 12, 2009

Default keyring nightmare

If forgotten or corrupt:

rm ~/.gnome2/keyrings/default.keyring


Monitoring network activity

For instant monitoring the following tools are good:
  • wireshark (ethereal)
  • etherape
  • tshark
For server monitoring:
  • darkstat
  • ntop
For either (CLI apps under screen):
  • iptraf

Remember that promiscuous-mode monitoring requires packets to actually pass your interface for the host to be able to pick them up. I.e. wired traffic can be difficult to pick up if a network switch is used at the center of a star network topology. Either replace it with a simple hub, or put the machine used for monitoring in the path between the router and the rest of the network (i.e. it has to be multi-homed, running ipchains or similar).

Note that darkstat has a config bug. For the -l option the format is:
-l aaa.bbb.ccc.ddd/nnn.nnn.nnn.nnn

and not:
-l aaa.bbb.ccc.ddd/N
(where N is the number of bits from the left. I.e. 1-32)
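So a working invocation looks something like this (interface and network are of course examples):

darkstat -i eth0 -l 192.168.0.0/255.255.255.0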

Thursday, June 4, 2009

Installing VMWare tools under VMWare player

Normally, if you have a VM that was originally created with VMWare Workstation, you can run it with VMWare player just as well. VMWare player is free as in free beer, and the idea is that you should be able to run VMs, just not create them.

Fair enough, but with sites like EasyVMX you don't need a licensed VMWare Workstation. However, there's a catch:

VMWare tools are a set of utilities, including drivers, that will boost the performance of your VM enormously. They're supposed to be installed AFTER you've created your VM and installed your OS on it, and they do not come with VMWare player but only with VMWare Workstation.

So what to do? Of course you should purchase a VMWare Workstation license, but say you're only in it to create one dang VM, that's all you'll ever need, and you'll be fine forever with VMWare player?

Hmm, here's a way to do it (note, I have no idea if this is legal so before proceeding you should really check).

  • Get a trial licence of VMWare Workstation. It's free of charge, but you have to register.
  • Get it installed somewhere, preferably on another computer.
  • Search for all *.iso files in the directory of the above installation and copy them into a directory of their own. Those are your VMWare tools for the various guest OS:es.
  • Copy that directory to your original machine (or to each machine you have VMWare player installed on), preferably where you store your VM's, as a subdirectory called vmware_tools.
  • Then make the following change in your .vmx file:
#######################################################
## Settings for physical CDROM drive
#ide1:0.present = "TRUE"
#ide1:0.deviceType = "cdrom-raw"
#ide1:0.startConnected = "TRUE"
#ide1:0.fileName = "auto detect"
#ide1:0.autodetect = "TRUE"
#######################################################
# Settings for VMWare tools
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.startConnected = "TRUE"
ide1:0.fileName = "..\vmware_tools\windows.iso"
ide1:0.autodetect = "TRUE"
#######################################################

Note that which device (ide1:0) is mapped to your CD drive might differ. Adapt the above to fit your own setup.

Now start up your VM and you will notice that instead of the CD drive, you have the windows.iso mounted. Click on it and you should get a wizard letting you install the VMWare tools.

When you're done, shut down your VM and restore the .vmx file (or swap which of the two sections in the snippet above is commented out). Start it up again and you'll have VMWare player running your VM at full speed.

Get VMWare player to support USB2.0

If you run, say, Windows XP as guest OS under the free (as in free beer) VMWare player and you find yourself getting BSODs or notifications saying "this device can operate faster" when you attach a USB device to a USB port, then there's a good chance whoever created the VM didn't know how to configure it for VMWare player to use USB 2.0. Note that it doesn't matter that your real HW supports USB 2.0; VMWare historically didn't.

As of today's writing EasyVMX will not create VMs that will work with USB 2.0 properly.

In such case, make sure the following lines are in your .vmx file (you can edit it while the VM is not running):

virtualHW.version = "7"
usb.generic.autoconnect = "TRUE"
ehci.present = "TRUE"

Start up your VM again, wait for a minute or so until you get a message from Windows saying something about "drivers have been updated", and voilà - you have USB 2.0 support.

Wednesday, December 31, 2008

Find files offending quota

When using quota I've noticed that the machine becomes sensitive to certain types of usage. In my case the file server is also used as a powerful extra machine for multimedia work, which is considered normal usage for any Ubuntu installation.

By the "old books" one should divide the root fs in several parts, which is totally meaningless on most normal Ubuntu machines, but which makes sense when using quota.

One of the common ways quota gets offended is that some application does not clean up after itself in the /tmp directory. A recommendation is therefore to partition the system so that one of the following is true (each entry on a partition of its own):

  • "/home" + "/"
  • "/" + "/tmp"
  • "/" + "/home" + "/tmp"
  • "/" + "/home" + "/tmp" +"/usr"
The last two are really only here for historical reference. In fact I would not recommend either of them unless each fs is on a drive of its own (since each partition wastes some extra disk space and makes the complete system more complicated to handle).

Either of the first two is however a very good idea. The main point is to separate /home from the rest and have it either on an fs of its own, or together with the root-fs but with /tmp separated instead (since users usually don't have write access anywhere else). The second one actually makes more sense and is the one I prefer. The reason is that /bin and /usr aren't considered big today, relatively speaking, and it allows the most important of your drives (the one containing /home) to be easily movable and bootable on other HW.

The second case also happens to be the easiest way to fix a monolithic system without tinkering with the main drive (re-partitioning is always an operation that makes me shit my pants ;) ). Just find one of your old discontinued drives, add it to the system, mount it as /tmp and you're done.

In case you're out of luck and you don't wish to reinstall the whole server (at least not now), the following command might come in handy:

sudo find / -user $AUSER -size +${SIZEINMEGS}M -exec ls -aldh '{}' ';'
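For example, to list everything over 100 MB owned by a (hypothetical) user kalle:

sudo find / -user kalle -size +100M -exec ls -aldh '{}' ';'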

Saturday, November 15, 2008

How to access a Samba server as a different user

This apparently is a weakness in Windows according to this guy.

As described here, you will at least be able to change the default uid for the next session and onward. In short, this is what makes it happen:

"You will need to end all connections to that computer, for example with the command "
net use \\[server] /d", before you can create a connection with the other set of credentials."

"With the command "net use \\[server] /user:[username] [password]" you can specify a specific set of credentials to use for the connection."

BTW: Make a big fat note about fekking Windows drive letters. If you have any assigned to the same server you wish to invoke the commands above against, the commands will seem to work (especially under command.com), but they won't. A special pitfall is if you've assigned a drive letter to a server identified by an IP number and not a name - the old credentials will be used no matter what you try until that "drive" is disconnected (grr... may all Windows lovers burn in h3ll...). IMHO, better to avoid drive letters altogether. They were a bad idea from birth and a nuisance.

Anywho...:
To handle changing credentials smoothly, you can create a script for Cygwin like this:


#!/bin/bash
net use '\\aserver' '/d'

if [ $# -lt 2 ]; then
echo "Connecting to default user"
net use '\\aserver' '/user:adomain\auser' 'apassword'
else
echo "Connecting to specific user $1:$2"
net use '\\aserver' '/user:adomain\'$1 $2
fi;

Sunday, November 2, 2008

Install Pine for Ubuntu

I thought it would be nice to have a native mail client on the system, but setting up a mumbo-jumbo mega client for a mail system that's rarely used just seems stupid. I figured I'm going command-line... But the only command-line e-mail client I know of that's worth knowing is Pine (I'm not an Emacs guy), and a licence issue hinders Pine from being distributed in binary form.

So I compiled a micro how-to based on my own findings:

Get the source: pine.tar.gz

Additionally install the following:
apt-get install libpam0g-dev libldap2-dev libncurses5-dev

Unpack pine sources and build:
cd /usr/local/src/
tar -xvzf somewhere/pine.tar.gz
cd pine4.64
./build ldb
su
cd ../../bin/
ln -s /usr/local/src/pine4.64/bin/pine


Done!

Saturday, November 1, 2008

Postfix MTA - new try

All right, all right - I have to admit, Postfix has its advantages...

Turns out that with quota and warnquota configured, it's impossible to get warnquota warnings delivered to an external recipient. Setting the MAILTO environment variable does not make any difference (believe me, I've tried...). It always sends to localuser@localdomain, where localuser is the user who has violated the quota. I.e. with nullmailer, all warnings are sent as localuser@mydomain to the outside world ;(

However, it turned out that postfix handles this better. First of all it recognizes local mail recipients and doesn't use the relay host for those. Secondly, local users can be aliased to external e-mail addresses (!). Since I'm new to all this I didn't bother exploring it in detail, but add a line like this in /etc/aliases:

localuser: localuser, remoteuser@remotedomain

then run the postfix command:

newaliases

Now I get mails delivered both to the local /var/mail/localuser and to the remoteuser@remotedomain.

Naaajs....!

The sasl stuff turned out the same for my ISP Glocalnet as for Google. For one reason or another, the second attempt with postfix worked out (I didn't try relaying through smtp.gmail.com though). Don't ask me why it works now - I still claim IT guys are pervs... :)


When installing Postfix you get a few options to choose from. Do yourself a big favour by choosing the right one:

Internet Site--This would be your normal configuration for most purposes. Even if you're not sure of what you want, you can choose this option and edit the configuration files later.

Internet Site Using Smarthost--Use this option to make your internal mail server relay its mail to and from your ISP's mail server. You would use this when you don't have your own registered domain name on the Internet. This option can even be used with dial-up Internet access. When the system dials up the ISP, it will upload any outgoing mail to the ISP's server, and download any incoming mail from the server.

Satellite system--Use this option for setting up a relay that would route mail to other MTA's over the network.

Local system--Use this option for when you're just running an isolated computer. With this option, all email would be destined for user accounts that reside on this stand-alone client.



For me the right choice was Internet Site Using Smarthost (make sure 'inet_interfaces = all' is set in main.cf for receiving to work). Together with this hint, I can now both send and receive e-mail for accounts on my system (wow!). Guess who's going to remote control.stuff@home ;)
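For reference, the two main.cf lines that mattered in my kind of setup look something like this (the relay host name is a placeholder for your ISP's SMTP server):

relayhost = [smtp.example-isp.net]
inet_interfaces = all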

BTW - I had never configured an MTA, cron or quota before. Given that, I think having done all this in two or three days is not too bad.

Friday, October 31, 2008

Sendmail

Setting up (send-)mail nowadays is just a pain in the b#tt.

After having used Gmail for too long, one tends to forget. All I need is to be able to send mail from scripts, so that the cron jobs managing the backups can e-mail me the results, and reminders when copying to DVD backups is needed.

Anyway, after a day's frustration with postfix & mailto for smtp.gmail.com I gave up and installed nullmailer instead (http://untroubled.org/nullmailer/). I'm now relaying through my ISP (Glocalnet): God damn their bones. Nullmailer is a sweet little utility, even though it doesn't do much more than can be done with a simple telnet session to the smtp server.

It doesn't work with Gmail though; apparently the new authorisation mechanism is too much for it to handle. At least it's honest about it - postfix just doesn't give you many hints about what it can and can't do (and what's screwed up). Here are a few links to some seriously misleading articles:

http://www.dslreports.com/faq/6456
http://www.howtoforge.com/postfix_relaying_through_another_mailserver
http://www.marksanborn.net/linux/send-mail-postfix-through-gmails-smtp-on-a-ubuntu-lts-server/

http://www.linuxquestions.org/questions/linux-networking-3/postfix-relay-thru-gmail-316352/

IT guys are just perverts...

Wednesday, October 29, 2008

TinKer project restored

Finally my TinKer project has been moved from SourceForge and fully restored from backups.

TinKer is a real-time kernel with ambitions of becoming a real-time embedded OS. It has some features that not many other kernels of its size have, among others an almost complete POSIX 1003.1c implementation.

The project's URL is now http://kato.homelinux.org/~tinker/cgi-bin/wiki.pl/ but just to be safe, better use the DynDNS address http://tinker.webhop.net/

Saturday, October 25, 2008

Safety concerns - Backup already...

After several recently lost hard disks I've come to the realization that the quality of hard disks is not what it once was. Therefore I'll post a few articles on how to conquer that beast once and for all.

The problem is that a fool-proof backup system suitable for a hobbyist is difficult to build. How do you know it works until you really need it, and how do you actually remember all you need to know when the time comes? The better the system, the less often you need to restore, and the more time will pass before you have to recapitulate both your data and your memory of how you stored it.

Furthermore, I don't trust all the backup systems. What happens if they fail? Do you really know how to run them? What if they break, can you afford a consultant to fix things for you? As a hobbyist, one or more of the previous questions are likely answered to your disadvantage.

I.e. what you *really* need instead are the following:

  • A very simple system, something that you can understand and trust.
  • A well planned schedule
  • 2 backup destinations, at least one not part of the system you're backing up
  • A automated e-mail reminder
  • A bunch of DVD's and a burner
  • A safe

This article describes the first point. Consider the following script (~/bin/packfs.sh):

#!/bin/bash

export TS=`date +%y%m%d-%H%M%S`
if [ -z $1 ]; then
echo "Error $0: Arg #1 needs to be the base name of the package." 1>&2
exit -1
fi
if [ -z $2 ]; then
echo "Error $0: Arg #2 needs to be the number of backups to keep" 1>&2
exit -1
fi
if [ -z $3 ]; then
echo "Error $0: Arg #3 needs to be the source path." 1>&2
exit -1
fi

export FN_BASE=$1
export N_BACK=$2
export SOURCE=$3

function pack {
export OLDPWD=`pwd`
export FN=$FN_BASE-$TS

if [ -z $1 ]; then
echo "Error $0: Cant't determine where to put result file" 1>&2
exit -1
fi
if [ -z $2 ]; then
echo "Error $0: Cant't determine the source path" 1>&2
exit -1
fi

cd $2
DIRNAME=`pwd | sed -e 's/.*\///'`
if [ -z $DIRNAME ]; then
DIRNAME=.
fi

cd ..
echo "$0: Packing [$DIRNAME] as [$1/$FN.tar.gz]"
tar --one-file-system -czf $1/$FN.tar.gz $DIRNAME

cd $OLDPWD
}

function keeplast {
export OLDPWD=`pwd`
if [ -z $1 ]; then
echo "Error $0: Cant't determine where backus are" 1>&2
exit -1
fi
if [ -z $2 ]; then
echo "Error $0: Don't know how many backups to keep" 1>&2
exit -1
fi
cd $1
N_FILES=`ls | grep $FN_BASE | awk 'END{print NR}'`


let "N_DEL = $N_FILES - $2"
if [ $N_DEL -gt 0 ]; then
DFILES=`ls | grep $FN_BASE | sort -r | tail -$N_DEL`
echo "$0: Deleting obsolete backup(s): "
echo "$DFILES"
echo "================================================================================"
rm $DFILES
fi

cd $OLDPWD

}
if [ -z $BACKUP1_PATH ]; then
echo "Error $0: BACKUP1_PATH needs to be set." 1>&2
exit -1
fi

pack $BACKUP1_PATH $SOURCE
keeplast $BACKUP1_PATH $N_BACK


Define the environment variable $BACKUP1_PATH and you're ready to back up. $BACKUP1_PATH should be on a physically different disk (or network mount) than the one you're backing up.

Run the script from the command line first to test that it works:

packfs.sh dayly-auser 7 /home/auser

A backup will be created in $BACKUP1_PATH at each run of the script with a filename containing a timestamp. The script will also tidy earlier backups, but keep the 7 last ones.

The example above shows backing up a certain user, but the script can be used for any part of the system, including the complete system itself:

packfs.sh server 3 /

If you run the script as a cron job as root, you're well on your way to a safe, simple and fairly fool-proof automated backup system.
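A root crontab entry for this could look something like the following (time and paths are just examples; remember that cron wants the variable spelled out on the line):

30 3 * * * BACKUP1_PATH=/mnt/backupdisk /root/bin/packfs.sh dayly-auser 7 /home/auser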

Come to think of it, I really have to teach my kids not to save all the garbage they find on the internet in their accounts. Maybe I should learn how to use that quota thingy as well...

Another thing that would be good to learn is how to get the server to scale down its performance when it's not needed (i.e. how to get hard disks to enter sleep mode and the CPU frequency to adjust dynamically). Having lost three disks in a year, after not losing one in the ten years before that, I'm seriously concerned about heat and component lifespan.

Wednesday, July 23, 2008

I couldn't help myself but to post this :)



This is my room in the new Google lively. Click on the image above to enter the virtual 3D room.

You'll need a plugin for your browser from here to enter.

More about Lively at http://www.lively.com/html/landing.html

Sunday, May 11, 2008

libuuid & NIS mess

After updating to 8.04 LTS you'd better check your /etc/passwd.

If you have new 'daemon users' there with a UID larger than any of your NIS-exported users, you're likely to get into trouble. In mine, I got a new "user" called libuuid with UID 517.

On each machine in your network (including the server):
  • As root, modify the UID for libuuid from 517 to 499 ("First NIS UID" - 1); see the sketch after this list
  • chown -R libuuid:libuuid /var/lib/libuuid
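A minimal sketch of the two bullets above (usermod is one way to change the UID; any files owned by the old UID outside /var/lib/libuuid would need a corresponding chown too):

usermod -u 499 libuuid
chown -R libuuid:libuuid /var/lib/libuuid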
This is tedious work if you have more than a few machines. I'm considering re-ID'ing my users instead, since it seems that UIDs starting at 1000 instead of 500 (as it was ages ago) have become the widely adopted convention.

Re-ID'ing a daemon is bound to lead to issues - if not sooner, then at least at the next update.

This issue possibly affects GDM and the list of users visible in the face-browser too.

References:
http://linux.about.com/library/cmd/blcmdl3_libuuid.htm

Upgrading from Ubuntu 6.06 LTS (Dapper Drake) to Ubuntu 8.04 LTS

Back up all your data before proceeding.

# Prepare your system before the actual upgrade

lsb_release -a
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

# If running remotely, don't use ssh. See the previous tip for adding telnet to your machine.
# Then continue with the following lines (if logged in physically, skip the following two commands):
export DISPLAY=www.xxx.yyy.zzz:0.0

# On the remote machine
xhost +



# The following command opens a front-end. First press the button "Check", then wait a few seconds; a button
# "Upgrade to 8.04 LTS" should appear. Press it and follow the instructions.

sudo update-manager -d



# In case process breaks, you can try:
dpkg --configure -a

# Re-boot, check version again
lsb_release -a


References:
http://www.ubuntu.com/getubuntu/upgrading
http://www.ubuntu.com/getubuntu/upgrading#head-db224ea9add28760e373240f8239afb9b817f197

Adding telnetd to xinetd

Install the package telnetd


sudo apt-get install telnetd


Copy the following into the file /etc/xinetd.d/telnet


# description: An xinetd internal telnet service
service telnet
{
# port = 23
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
server = /usr/sbin/in.telnetd
server_args = -h
}
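Then restart xinetd so it picks up the new service (assuming xinetd itself is installed and is what's managing inetd services on your box):

sudo /etc/init.d/xinetd restart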

Saturday, March 1, 2008

DVD read trouble

Is in many cases due to missing decryption support. As root, do:

apt-get install libdvdcss2
/usr/share/doc/libdvdread3/install-css.sh

Starting hidden Cygwin apps

The X server, for example, is nice to have started without an extra annoying console window.

Before you do anything else, make sure you have your Cygwin bin directory in your Windows system PATH. Cygwin programs expect certain .dll files to be found, and they are stored in there.

Download the great little utility hstart

Copy your C:\cygwin\Cygwin.bat to another bat file (C:\cygwin\Cygwin_hidden.bat). Alter it as follows.

@echo off

C:
chdir C:\cygwin\bin

C:\your_personal_bin_path\hstart.exe /NOWINDOW "bash --login -i %1"

..or since the hstart utility is small and self-contained, just slip it into the C:\WINDOWS\system32 directory and replace the last line with this:

hstart.exe /NOWINDOW "bash --login -i %1"

Please note the quotes. Now change the Cygwin link you want to start hidden (X-Cygwin for example) to point to the new bat file instead.

If you want to start X-Cygwin completely hidden (i.e. even without the xterm), edit the file /usr/X11R6/lib/X11/xinit/xinitrc and replace the line

#exec xterm -e /usr/bin/bash -l
exec xclock -geometry 100x130+1700+0

(You seem to need at least one X application to prevent the X server from terminating).

Tune your start app with standard X command line options.

If you drag the X-Cygwin link to your autostart, you'll end up having X running in the background, and combined with the hints in the previous post you have something almost Linux-like ;)

Starting Linux apps on your Windows host


  • First of all, install Cygwin (and make sure that you've included X).
After installation, alter your system path to include the Cygwin bin directory. This is needed for Windows to find Cygwin's .dll files. Right-click on "your computer" on the desktop, select the Advanced tab and click on "Environment variables". In the "System" section, find the Path variable and add the new path last (C:\cygwin\bin;).
  • Create a directory ~/bin and a file ~/.bash_profile and in the latter add (at least) the following lines:

export PATH=~/bin:$PATH
export DISPLAY=:0.0


  • Create a ssh key binding to the machine you want to use (see previous post)

  • Make an association in Windows from .sh files to bash (right-click on any .sh file and select "Open with")
To associate bash scripts with Cygwin you'll need to alter the key in regedit. Find the key:

HKEY_CLASSES_ROOT\sh_auto_file\shell\open\command

  • Change its default value from:

"C:\cygwin\bin\bash.exe" "%1"

to:

"C:\cygwin\bin\bash.exe" "-l" "%1"

  • Now create some start scripts in ~/bin. For each command/application you want to run on the remote machine, create a .sh script with the same name in the form:

~/bin/acommand.sh
===============
#!/bin/bash

ssh -X server acommand

(replace "acommand" with your application/command name and "server" with the name of your server)

  • Drag a shortcut to your desktop and click it - voilà! (Make sure you've started X first though, in case you want to run graphical apps.)

Tuesday, February 5, 2008

How to set-up SSH to not require a password every time you log into a remote machine.

Have a look at this nice how-to:
http://www.astro.caltech.edu/~mbonati/WIRC/manual/DATARED/setting_up_no-password_ssh.html

I.e. in short:

On local side:
ssh-keygen -t dsa -f .ssh/id_dsa
cd .ssh
scp id_dsa.pub user@remote:~/.ssh/id_dsa.pub
ssh user@remote

On remote side:
cd .ssh
cat id_dsa.pub >> authorized_keys2
chmod 640 authorized_keys2
rm id_dsa.pub
exit

On a new account on the server, you might also first want to run:
remote> ssh-keygen -t dsa

This will create the .ssh directory on the server, which might otherwise be missing, and set its attributes correctly (wrong directory attributes are otherwise a cause of errors, chaos & confusion).

Thursday, August 30, 2007

Various startup files

Wonder where to put your startup settings?

It depends on what you need and how it's supposed to work, but here are a few:

Per user
=====
~/.bash_profile (recommended)
~/.bashrc
~/.xprofile

System wide
========
/etc/bash.bashrc


(this is not complete, more hints will follow)

Wednesday, August 22, 2007

How to remove Kwallet

Finding Kwallet annoying?

Having problems removing applications from using it no matter what you do?

I tried following the thread below, only to have the wallet database completely screwed up (KDE/Gnome compatibility issue?):

http://www.mail-archive.com/debian-kde@lists.debian.org/msg26772.html

I.e. do not install kwalletmanager if you're running Gnome (i.e. Ubuntu). Instead do the following (replace kopete with whatever app you need kwallet removed from).

First of all make sure the app in question is not running and restart the X session just to make sure no processes are still alive that will rewrite/corrupt the files you will remove below. Now:

cd ~/.kde
find . -name "*kwallet*" -exec rm -rf '{}' ';'
find . -name "*kopete*" -exec rm -rf '{}' ';'

After that start the app. If/when the kwallet wizard starts again it's important that you run it, but select that you don't want to use kwallet for that app. The dialogs should look like this:





Note that the check-box should not be enabled above.

Monday, June 11, 2007

0D0A - Or how lines are ended

This is not a strict Ubuntu issue but a general issue concerning operating systems and protocols.

Unix uses newline (or linefeed, '\n' = 012 = 0x0A) to terminate lines in text files;
DOS uses carriage return + linefeed ("\r\n" = 015 + 012 = 0x0D + 0x0A), and (AFAIK)
MacOS uses only carriage return ('\r' = 015 = 0x0D).

Or...

Unix = 0x0A
DOS = 0x0D 0x0A
TCP = 0x0D 0x0A
Mac = 0x0D

Read more at: http://en.wikipedia.org/wiki/Line_feed#Newline_in_programming_languages

Note that what '\n' translates to in the C language depends on the underlying OS and the mode (text or binary) the file was opened in.
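A couple of handy one-liners for converting between the two conventions (GNU tools assumed):

tr -d '\r' < dosfile.txt > unixfile.txt
sed -i 's/\r$//' dosfile.txt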

Monday, May 14, 2007

Screenshots

Sometimes a picture says more than a thousand words. Taking a screen-shot of your desktop might make it easier to communicate a problem.


Ubuntu runs the Gnome desktop (managed by gdm) and the ability is already built in.

(The program is called gnome-screenshot and is part of the package gnome-utils in case it's not pre-installed with your distribution and you have to install it.)

To take a screen-shot, just hit:

* Print Screen - Takes a screen-shot of the entire screen.
* Alt+Print Screen - Takes a screen-shot of the window to which the mouse points.

Friday, January 19, 2007

Apache proxy issues

(from http://httpd.apache.org/docs/trunk/mod/mod_proxy.html#access)
"Strictly limiting access is essential if you are using a forward proxy (using the
ProxyRequests directive). Otherwise, your server can be used by any client to access arbitrary hosts while hiding his or her true identity. This is dangerous both for your network and for the Internet at large. When using a reverse proxy (using the ProxyPass directive with ProxyRequests Off), access control is less critical because clients can only contact the hosts that you have specifically configured."

I.e. This should be OK

proxy.conf:


<Proxy *>
    Order deny,allow
    Deny from all
    #Allow from .your_domain.com
</Proxy>



ProxyPass /viewcvs http://localhost:8080/viewcvs/
ProxyPassReverse /viewcvs http://localhost:8080/viewcvs/

More CVS tricks

To start a new project

1) Copy a premade empty repository directory and point your CVSROOT to it.

2) cvs co .

3) cvs add

No need to fuss with CVS import & init and stuff, which would actually make the next tip impossible (or very hard at best).

Back up your server's settings (DO THIS AT YOUR OWN RISK)

su root
cd /
cvs co -p .
cvs_addall etc
cvs_addall root
cvs add usr
cd usr
cvs add lib
cd lib
cvs add yp
cd yp
cvs add *
cd /var/yp
cvs add Makefile
cd ..
cvs add geoipDB.txt #In case you have this file i.e.
cvs add log
cd log
cvs add apache2
cd apache2
cvs add access.log
cd ..
cvs add auth.log
cd /
cvs commit -m "System initial mirror"




To prune CVS out from an existing directory:

cd
find . -type d -name CVS -exec rm -rf '{}' ';'

BIG FAT NOTE
If you put the whole of /etc/ in the repo, some services might not start because they object to finding a CVS directory inside some of their directories. You must then use the above command line to remove those directories.

Since you're only going to go one way (i.e. to the repo) and never from the repo (except when diffing), you can just check out the offending module/subdir again. Any changes made locally "should" be merged with the ones in the repo.

Alternatively (and safer), you can rename the CVS dirs to .CVS. Prefixing with a dot is a convention for "hiding" stuff, and most services should not be offended by "hidden" directories.

Services known to be offended by CVS directories:
  • apache2 - The server will not start
  • modprobe.d - This will create a bunch of error entries in system log but is otherwise harmless.

Therefore before you reboot your machine, repeat the following on each directory above:

cd /etc
mdrename.sh CVS .CVS

Tuesday, January 16, 2007

NIS and NFS services

(Please read this post first: http://michael-ambrus-tipps.blogspot.com/2006/11/nis.html)

This contains the minimum information needed to set up NIS and NFS services.

NFS
1)
make sure you have the nfs-kernel-server package installed and running

2)
Add this entry in the file /etc/exports

/home 192.168.0.0/255.255.255.0(rw)
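Then reload the export table so the new entry takes effect (alternatively, restart nfs-kernel-server):

exportfs -ra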

NIS
1) Modify the file /var/yp/Makefile as follows (lines marked "<" are the original values, ">" the new ones):
*)
< MINUID=1000
< MINGID=1000
---
> MINUID=4
> MINGID=500

*)
< MERGE_PASSWD=false
---
> MERGE_PASSWD=true

*)
< MERGE_GROUP=false
---
> MERGE_GROUP=true

*)
< ALIASES = /etc/aliases
---
> ALIASES = /etc/aliases.yp

*)
< GROUP = $(YPPWDDIR)/group
< PASSWD = $(YPPWDDIR)/passwd
---
> GROUP = $(YPPWDDIR)/group.yp
> PASSWD = $(YPPWDDIR)/passwd.yp

3) The NIS domain
(This differs from other distributions)

Set the NIS domain in the file /etc/defaultdomain
ypdomain.logiccroft.de

4) Configure NIS service to be a server
Edit the file /etc/default/nis

NISSERVER=master

You might consider setting
NISCLIENT=false

..but you can leave it until we're done testing.

If you do want to test the domain server locally, you'll have to add the following line to /etc/yp.conf:

ypserver 192.168.0.2

(Please use the IP number and not the hostname, for security reasons and for ease of setup and use in case name service is broken.)

5) build the service database
cd /etc
cp aliases aliases.yp

cp group group.yp
cp passwd passwd.yp
(edit each destination file above and remove unwanted entries)
cd /usr/lib/yp
./ypinit -m

6) Test the service locally - Optional
/etc/init.d/nis stop
/etc/init.d/nis start
ypcat passwd

7) Test the service on a client
Log in as root on the client and:

/etc/nfs stop
/etc/nfs start
/etc/nis stop
/etc/nis start

Repeat the process as in 6)

Setting up a new server (2) - Basic network setup

To minimize the effort on each client, the new server is going to take over the services exactly as the old one provided them. That includes having the same IP and the same name on the network.


1) Open the Network Settings wizard (Administration->Networking)




2) Click the button properties and fill in as below



On our network we have a DSL modem that normally provides clients with IP addresses, but in our case we want services to be accessible from the outside and we need a fixed address.

Please note that the "Gateway address" needs to be filled in (this has to do with the fact that gateways today don't normally follow the standard of placing themselves on the network's last address, which in our case would be 192.168.0.254).


3) Change the name to the old server's name




4) DNS setting



Our router provides a DNS proxy. Enter the address of the router as the DNS server, and we won't need to update this setting each time the router reboots (or the ISP changes theirs).