Category Archive: Ubuntu

ubuntu specifics

Using Ubuntu fonts on Debian Testing

I admit it, I kind of like the Ubuntu fonts within my default KDE installs. Since Ubuntu itself recently dropped KDE support, one can simply reuse the available tools within Debian Testing.

The install is trivial – download and extract to ~/.fonts

$ wget http://font.ubuntu.com/download/ubuntu-font-family-0.80.zip
$ unzip ubuntu-font-family-0.80.zip
$ mkdir -p ~/.fonts
$ mv ubuntu-font-family-0.80/*.ttf ~/.fonts/

You may now select them as your preferred font within your system settings :)
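If the new fonts do not show up right away, refreshing the fontconfig cache usually helps – a quick sketch, assuming fontconfig is installed:

```shell
# rebuild the per-user font cache so the Ubuntu family is picked up
fc-cache -f -v ~/.fonts

# verify fontconfig now knows the family
fc-list | grep -i "Ubuntu"
```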

install opera on debian

Opera is not provided within Debian itself, as it's unsupported non-free software. But it's a rather common browser among those using Icinga, so we need to keep testing with it as well (especially for JavaScript DOM errors). Read more here.

# vim /etc/apt/sources.list.d/opera.list

deb http://deb.opera.com/opera/ stable non-free #Opera Browser (final releases)
deb http://deb.opera.com/opera-beta/ stable non-free #Opera Browser (beta releases)

Add the keys.

# wget -O - http://deb.opera.com/archive.key | apt-key add -
# apt-get update

Install opera.

# apt-get install opera

using x2go as remote desktop alternative

x2go is a remote desktop server and client package, partly based on the NX technology, but not compatible with it. Package repositories exist for all major distributions.

Following this guide, add their Debian repository.

# apt-key adv --recv-keys --keyserver keys.gnupg.net E1F958385BFE2B6E
# vim /etc/apt/sources.list.d/x2go.list

# X2Go Repository
deb http://packages.x2go.org/debian squeeze main
# X2Go Repository (sources)
deb-src http://packages.x2go.org/debian squeeze main
# apt-get update
# apt-get install x2go-keyring
# apt-get update

Install the server, plus the xsession package if you already have a desktop environment running (like me with KDE).

# apt-get install x2goserver x2goserver-xsession

x2go uses the nxclient libs, which require a sort of local SSH proxy listening on port 30001 and up. These ports must be allowed in your firewall, otherwise you will get syslog messages like this:

sshd connect_to localhost port 30001: failed

Create an entry in your iptables filter and reload it.

# x2go
-A INPUT -p tcp -m tcp --dport 30001 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 30002 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 30003 -j ACCEPT
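To activate the rules, reload your filter – a sketch assuming an iptables-persistent style setup with the rules stored in /etc/iptables/rules.v4:

```shell
# reload the full ruleset (path assumes iptables-persistent)
iptables-restore < /etc/iptables/rules.v4

# confirm the x2go proxy ports are now accepted
iptables -L INPUT -n | grep -E '3000[123]'
```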

Using SSH keys for authentication is rather tricky and produced a lot of session errors for me. So if you run into that, leave it – bug hell.

samba 3.6 changes security defaults with client ntlmv2 auth, incompatible with 3.4

Samba 3.6.x changed its security defaults, which affects smbclient as well. The effect is that a current Ubuntu 3.6.3 build cannot log in to a 3.4.x-based Samba server.

It always fails with an error like this (pretty misleading, as the user exists and the password was entered correctly):

$ smbclient //share.host/sharename -U youruser
Enter youruser's password:
session setup failed: NT_STATUS_LOGON_FAILURE

$ smbclient -V
Version 3.6.3

The reason for this can be found in the Samba changelog:

Changed security defaults

Samba 3.6 has adopted a number of improved security defaults that will
impact on existing users of Samba.

 client ntlmv2 auth = yes
 client use spnego principal = no
 send spnego principal = no

The impact of 'client ntlmv2 auth = yes' is that by default we will not
use NTLM authentication as a client.  This applies to the Samba client
tools such as smbclient and winbind, but does not change the separately
released in-kernel CIFS client.  To re-enable the poorer NTLM encryption
set '--option=clientusentlmv2auth=no' on your smbclient command line, or
set 'client ntlmv2 auth = no' in your smb.conf

Fixing this for smbclient is not done via the command-line option (it did not work for me), but generally within smb.conf – not inside a section like [global], but truly global, at the very top of the file!

$ sudo vim /etc/samba/smb.conf

client ntlmv2 auth = no

Then we are lucky again.

$ smbclient //share.host/sharename -U youruser
Enter youruser's password:
Domain=[SHARE] OS=[Unix] Server=[Samba 3.4.x]
smb: >

Update 2012-08-06: You can also pass the complete string "domain\username" as the user name in order to cope with the changed auth.
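Such a domain-qualified login could look like this (server, share, and user names are placeholders); note the single quotes, which keep the shell from swallowing the backslash:

```shell
# authenticate with an explicit DOMAIN\user instead of the bare username
smbclient //share.host/sharename -U 'SHARE\youruser'
```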

Restore RAID1 data on broken linux os

Boot into the system using a live CD like Knoppix or Kubuntu live.

Choose your input method; I prefer the German keyboard layout.

# dpkg-reconfigure keyboard-configuration

Since live CDs normally do not ship with mdadm, install it into the live system (remember to repeat this every time you boot).

# apt-get install mdadm

Get an idea of your disk layout.

# ls -la /dev/sd*
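
Before assembling anything, it can help to check which array each partition belongs to – a sketch using mdadm's examine mode:

```shell
# show the RAID superblock of a single member partition
mdadm --examine /dev/sda1

# or print a one-line summary per detected array
mdadm --examine --scan
```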

Assemble the drives as RAID1 arrays.

# mkdir /mnt/md0 /mnt/md1 /mnt/md2 /mnt/md3
# mdadm -A /dev/md0 /dev/sda1 /dev/sdb1
# mdadm -A /dev/md1 /dev/sda2 /dev/sdb2
# mdadm -A /dev/md2 /dev/sda3 /dev/sdb3
# mdadm -A /dev/md3 /dev/sda4 /dev/sdb4

If you get a “cannot open device …: Device or resource busy. … has no superblock – assembly aborted” error, this is most likely one of the swap partitions you have set up (which I normally do not put on RAID1).

Now mount all the assembled RAID arrays.

# mount /dev/md0 /mnt/md0
# mount /dev/md1 /mnt/md1
# mount /dev/md2 /mnt/md2
# mount /dev/md3 /mnt/md3

Check the mounts with

# mount

and then change into the mounted partitions

# cd /mnt/

and copy your data to your preferred backup medium (USB disk, etc.).
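The copy itself can be done with rsync – a minimal sketch, where /media/backup is a placeholder for your mounted backup disk:

```shell
# -aHAX preserves permissions, hardlinks, ACLs and extended attributes
rsync -aHAX --progress /mnt/md2/ /media/backup/md2/
```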

hpacucli does not work on kernel 3.0 – wrapper workaround uname26

The current HP tools, available in Debian through an external repository maintained by HP itself, do not understand newer kernels starting with the transition to 3.0 – so they won't find any RAID array controllers or devices.

I'm keen on using hpacucli to monitor hardware RAID controllers and their devices – e.g. via check_cciss.

To work around this “situation”, someone wrote a wrapper that fakes a 2.6 environment and then runs the command.

Download and “install” it like this …

# cd /usr/lib/nagios/plugins
# mkdir uname26 ; cd uname26
# wget http://mirror.linux.org.au/linux/kernel/people/ak/uname26/Makefile
# wget http://mirror.linux.org.au/linux/kernel/people/ak/uname26/uname26.c
# make
# cp uname26 /usr/sbin

and call the CLI tool like this (edit your scripts to add the uname26 call as well):

# /usr/sbin/uname26 hpacucli ctrl slot=11 pd all show status
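To avoid editing every script, a tiny wrapper can prepend the uname26 call transparently – a sketch, assuming hpacucli lives in /usr/sbin (the hpacucli26 name is made up here):

```shell
# create a wrapper that always runs hpacucli under the 2.6 personality
cat > /usr/local/bin/hpacucli26 <<'EOF'
#!/bin/sh
# fake a 2.6 kernel via uname26, then hand all arguments to hpacucli
exec /usr/sbin/uname26 /usr/sbin/hpacucli "$@"
EOF
chmod +x /usr/local/bin/hpacucli26
```

Monitoring plugins can then call `/usr/local/bin/hpacucli26 ctrl slot=11 pd all show status` without knowing about the wrapper trick.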

This will do until new binaries supporting the 3.0 Linux kernel become available.

iwlwifi driver problems with wireless n connection on kubuntu

Currently the firmware has bugs that break access to 802.11n-preferred wireless connections: the connection times out and asks for authentication afterwards (which is totally misleading, btw!).

To regain access via the basic channels without 802.11n, disable the n support and reload the driver.

sudo rmmod iwlwifi
sudo modprobe iwlwifi 11n_disable=1
echo "options iwlwifi 11n_disable=1" | sudo tee -a /etc/modprobe.d/disable-n.conf
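Whether the option really took effect can be checked in sysfs:

```shell
# should print 1 once the module was reloaded with 11n_disable=1
cat /sys/module/iwlwifi/parameters/11n_disable
```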

fun with grub2 rescue and disk uuid

Once in a while you have to upgrade your system, which triggers a new run of update-grub plus a fresh install in case of a dpkg-reconfigure call. Once in a while you will notice that this leads to interesting fuckups. In my case, I accidentally tried to install grub2 onto a mapped RAID1 volume next to the two physical devices. I would have expected an error, but no, nothing happened. Instead, on reboot, grub-rescue told me that the UUID provided is not valid. Guess what, that error turns up a lot of Google hits. Most of them say: boot your live CD and fix grub. But fix what?

First off, the “set root=” is not only used for the linux entry, but also for the search command. Setting GRUB_DISABLE_LINUX_UUID=true in /etc/default/grub won't help here, because grub-mkconfig ignores that entry while generating grub.cfg. Editing those entries by hand and exporting them to a custom config loaded via 40_* would also be possible. Most likely there's more bug hell in there – partman does not like mds on current 3.2 kernels and hangs on partitioning during install – but the best way out is still: purge and reinstall.

Luckily this is rather simple once you have a live CD to boot. After startup, open a terminal and become root. Mount the volume into /mnt/temp.

$ sudo su -
# mkdir /mnt/temp
# mount /dev/mapper/ /mnt/temp

Mount all needed stuff

# for i in /dev /dev/pts /proc /sys; do mount -B $i /mnt/temp$i; done

Get into the chroot.

# chroot /mnt/temp

Now verify that the internet connection is working and that all mounts are set up (in case you have /var etc. on separate partitions).

# mkdir /run/resolvconf/
# vim /run/resolvconf/

# mount -a

Then try an apt-get update.

# apt-get update

Now completely wipe grub2 from your chroot; when asked whether to delete the configs, do so as well.

# apt-get purge grub grub-pc grub-common

Reinstall the grub packages. When asked, only install to physical devices, not to volumes or partitions!

# apt-get install grub-pc grub-common

Update the grub files – this should have happened automatically, but anyway.

# update-grub
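To make sure the regenerated config points at real devices, the UUIDs in grub.cfg can be cross-checked against blkid – a quick sketch:

```shell
# list every UUID grub will search for ...
grep -o 'root=UUID=[0-9a-f-]*' /boot/grub/grub.cfg | sort -u

# ... and compare with the UUIDs the kernel actually sees
blkid -s UUID
```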

Exit the chroot and unmount stuff.

# exit

# for i in /dev/pts /dev /sys /proc; do umount /mnt/temp$i ; done

Reboot your system and verify everything loaded ok.

An alternative to purging and reinstalling is calling dpkg-reconfigure grub-pc, which allows you to (re)set the grub config.