Monitoring vhosts with Icinga 2 and Icinga Web 2

Since Icinga 2 runs stably and is considered mature (1.x still works, but is a pain to configure), I’ve picked 2015 to start using it on my production systems. The server hosting this blog also serves several other vhosts; it all lives in the NETWAYS cloud (thanks!) – one of those situations where you love your managed service guys & OpenNebula, Foreman, Puppet.

After all, I wanted to monitor at least some web and DNS services for all vhosts (more to come though). The web service is just a simple HTTP reachability check, while the DNS check verifies that the A record of the given vhost resolves to the server’s IP address. That is mandatory for seeing whether the DNS records are still intact, or whether some misconfiguration crept in.

Therefore I’ve created a file ‘hosts.conf’ containing one host so far, introducing the ‘vhosts’ dictionary.

# cat hosts.conf

object Host "srv-mfriedrich" {
  check_command = "hostalive"

  address = ""

  vars.vhosts[""] = {
  }
  vars.vhosts[""] = {
  }
  vars.vhosts[""] = {
  }
}
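Each dictionary value carries the custom attributes that get merged into the generated services via `vars += config` in services.conf. The real vhost names and attributes didn’t survive here, so the following entry is purely hypothetical – the domain and IP are placeholders, and `dns_expected_answer` is the ITL custom attribute I’d assume for the A record check:

```
vars.vhosts["example.com"] = {
  dns_expected_answer = "192.0.2.10"   // the server's ip address
}
```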

The services.conf file is rather generic – it takes all hosts with the custom attribute ‘vhosts’ and loops over that dictionary, creating a new service object per entry. Each service name is prefixed with either “http-” or “dns-”, depending on the generated check. Read more about apply-for rules.

I’m using the shipped Icinga 2 plugin check commands “http” and “dns” and only set the expected custom attributes.

 # cat services.conf

apply Service "http-" for (http_vhost => config in host.vars.vhosts) {
  import "generic-service"

  check_command = "http"

  vars += config
  vars.http_vhost = http_vhost

  notes = "HTTP checks for " + http_vhost

  assign where host.vars.vhosts
}

apply Service "dns-" for (dns_lookup => config in host.vars.vhosts) {
  import "generic-service"

  check_command = "dns"

  vars += config
  vars.dns_lookup = dns_lookup

  notes = "DNS checks for " + dns_lookup

  assign where host.vars.vhosts
}

Note: Apply For was introduced in Icinga 2 v2.2.0 and is the preferred way of configuring the magic stuff.
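With apply-for, the dictionary key becomes part of the object name. Assuming a hypothetical vhost entry for example.com, the rules above effectively generate two services per host:

```
http-example.com   // generated by the "http-" apply rule
dns-example.com    // generated by the "dns-" apply rule
```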


I added some notifications and users – in this example it’s simply everything for vhosts, sent to myself. I don’t like getting notified when someone forgets to configure the ‘address’ attribute required for the checks, so notifications are not generated for those objects.

The ‘mail-{host,service}-notification’ templates are shipped with Icinga 2 in conf.d/templates.conf, similar to the ‘generic-{host,service,user}’ templates. The notification templates reference the mail notification command, but I don’t really care about its details. The only important thing is to set the User object’s ’email’ attribute.

# cat users.conf 
object User "michi" {
  import "generic-user"

  email = ""
}

# cat notifications.conf 

apply Notification "vhost-mail-host" to Host {
  import "mail-host-notification"

  users = [ "michi" ]

  assign where host.vars.vhosts
  ignore where !host.address // prevent wrongly configured objects from being notified
}

apply Notification "vhost-mail-service" to Service {
  import "mail-service-notification"

  users = [ "michi" ]

  assign where host.vars.vhosts
  ignore where !host.address // prevent wrongly configured objects from being notified
}

Using the new dynamic Icinga 2 language can become rather complex. But even for simple vhost monitoring it saves you a lot of typing. And keep in mind – the notification rules are applied based on patterns. I don’t have to worry about contact assignments to hosts/services, which I always struggled with in Icinga 1.x and Nagios.

In the end, I’m also using Icinga Web 2‘s git master. It’s still beta, but works far better than Classic UI or Web 1.x. So you see – it’s time for Icinga 2 and a bright future. Next up: Graphite, Graylog2 and automated Puppet deployments of remote checker clients/satellites.

WordPress backups into Google Drive

Creating WordPress backups is necessary and easy – a simple bash script creating zipped tarballs of htdocs plus a mysqldump is sufficient. The more interesting question is: where to store those backups? Since these blogs don’t contain any sensitive private information, and are all served on public web domains anyway, they may as well go into the cloud.

Since I have lots of Google Drive storage available, it’s reasonable to upload the most recent backups there. While looking for possible CLI options, GDrive got my attention. The client is written in Go without any further dependencies – download the binary for Linux amd64, make it executable, and run it once.

$ wget -O drive-linux-amd64

It will generate an authorization URL for the Google API; you’ll have to authorize the application and copy-paste the verification code into the CLI. Done. Now you’re able to upload/download files using the binary.

$ chmod +x drive-linux-amd64
$ ./drive-linux-amd64
Go to the following link in your browser:

Enter verification code:


GDrive stores the token and config below the ~/.gdrive directory.

While GDrive can upload folders, it will always generate a new id with the same folder name below its target. That’s not what I want, so each file is uploaded on its own, preserving the global parent folder id (backup/wordpress in my Google Drive tree).

You can easily extract the parent folder id by looking at the Google Drive URL: <firstlevel(backup)>/<secondlevel(wordpress)> – which means only the second-level id is needed for the backup script.
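As a tiny illustration (the URL and folder ids here are made up), the second-level id is just the trailing path component:

```shell
# hypothetical Drive folder URL; the last path component is the folder id
url="https://drive.google.com/drive/folders/0Bbackup/0Bwordpress"
parent_id="${url##*/}"   # strip everything up to the last slash
echo "$parent_id"
```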

#!/bin/bash
# requires sudo for mysqldump

# paths and GDrive parent folder id – adjust to your setup
BACKUP_PATH="/home/michi/backup"
GDRIVE_BIN="$BACKUP_PATH/drive-linux-amd64"
GDRIVE_BACKUP_PARENT_ID=""

declare -A WEBS
# vhostname = dbname
WEBS["example.com"]="example_db"   # placeholder – add one entry per vhost

# clear backups older than 10 days
find "$BACKUP_PATH" -type f -mtime +10 | xargs -r rm -f

# start backup and upload to google drive
for web in "${!WEBS[@]}"; do
  timestamp=$(date +%Y-%m-%d-%H%M%S)
  db_name="${WEBS[$web]}"
  web_path="/var/www/$web"   # assumed htdocs location
  web_tar_gz="$BACKUP_PATH/$web-$timestamp.tar.gz"
  db_sql_gz="$BACKUP_PATH/$db_name-$timestamp.sql.gz"

  echo "Creating $web_tar_gz backup for $web..."
  tar czf "$web_tar_gz" "$web_path"

  echo "Uploading $web_tar_gz backup to GDrive..."
  "$GDRIVE_BIN" upload -f "$web_tar_gz" -p "$GDRIVE_BACKUP_PARENT_ID"

  echo "Creating $db_sql_gz backup for $web..."
  sudo mysqldump --databases "$db_name" | gzip > "$db_sql_gz"

  echo "Uploading $db_sql_gz backup to GDrive..."
  "$GDRIVE_BIN" upload -f "$db_sql_gz" -p "$GDRIVE_BACKUP_PARENT_ID"
done
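The `find -mtime +10` cleanup at the top of the script only matches files modified more than 10 full days ago; a quick sanity check in a temporary directory (file names are made up):

```shell
# create one stale and one fresh file, then apply the same cleanup filter
tmp=$(mktemp -d)
touch -d '15 days ago' "$tmp/old.tar.gz"
touch "$tmp/fresh.tar.gz"
stale=$(find "$tmp" -type f -mtime +10)   # lists only the stale file
echo "$stale"
rm -rf "$tmp"
```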
Add a cron job running every day.

0 0 * * * /home/michi/backup/ > /dev/null 2>&1

Keep in mind to secure access to the GDrive binary and its configuration – it can not only list but also manipulate your Google Drive data. If it gets exploited, backups in Google Drive won’t help you anymore 😉
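A minimal way to do that, assuming the default ~/.gdrive location, is to strip group/other permissions from the config directory:

```shell
# gdrive creates ~/.gdrive on first run; keep token and config private
mkdir -p ~/.gdrive
chmod -R go-rwx ~/.gdrive
```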

Fedora 21 Workstation with Docker

Now that Fedora 21 goes the feature-release way, I’ll stick with Workstation. Since I only recently installed Fedora 20, I hadn’t yet had the pleasure of doing a dist upgrade (and they do tell you it might not work – hehe, as always with RHEL).

Make sure that fedup is the latest version.

$ sudo yum update fedup fedora-release

Now for the network update magic. At the time of writing the site was under heavy load, generating lots of 503 errors.

$ sudo fedup --network 21 --product=workstation
setting up repos...
default-installrepo/metalink                                |  16 kB  00:00     
default-installrepo                                         | 3.7 kB  00:00     
default-installrepo/group_gz                                | 113 kB  00:00     
default-installrepo/primary_db                              | 1.4 MB  00:00     
getting boot images...
.treeinfo.signed                                            | 2.1 kB  00:00     
vmlinuz-fedup                                               | 5.5 MB  00:01     
initramfs-fedup.img                                         |  40 MB  00:07     
setting up update...

For me it’s 2268 packages being updated, so it takes a while. Once it tells you to reboot, select ‘fedup’ in the boot menu. You can safely ignore icedtea* and kmod* kernel modules with unsatisfied dependencies, if asked.

The upgrade takes a while, but will then boot from the basic system into your finally upgraded system. akmod will run and build new kernel modules for my nvidia and virtualbox setup (kmod breaks too often anyway).
After a successful login, re-run the update to fetch the latest packages, including the debug symbols I keep around for Icinga development (boost, to be exact).

$ sudo yum update

Some pitfalls: Gnome 3.14 ignores the button-layout in my overrides settings for certain windows, and the default terminal window coloring is a somewhat ugly dark green.

Now for Docker: docker-io is gone, the new package name is simply ‘docker’. A simple test, trying grafana and graphite:

$ sudo docker run -d -p 80:80 -p 8125:8125/udp -p 8126:8126 --name kamon-grafana-dashboard kamon/grafana_graphite

LEGO – More than just bricks

LEGO is something I’ve been building since my childhood. Be it the first LEGO Pirates island with 2 pirates, or LEGO Technic. Growing up, it’s still building stuff with LEGO – but more with landscapes, or physics and computer stuff like Mindstorms.

It’s still that moment – you get the package (especially at Christmas, where you shake the package to hear that there are LEGO bricks inside!), and that moment when you open it, looking at the bricks in smaller amounts, and the instructions. Hey, let’s look directly into how to build it … hm, no. First off, open everything and sort the bricks. You cannot beat it without sorting – it will get annoying, especially if you’re building models with 1000+ bricks.

Once you have everything on your couch (the floor is better, but the couch is much more comfortable), start with the instructions. Visualize your model, compare the steps, focus on which parts are built where. Get an idea of how the LEGO designer tried to make building as enjoyable as it is challenging. Go further, see progress. Oh, that was just the ground floor (of the house). Take a break, get something to eat and drink. No, dear friends, I cannot join you now; no Twitter, shut up – you’re not important now. Now it’s just me, relaxing and fighting my own challenge, building the greatest LEGO model there ever was. Using my imagination, and making it reality.

I also enjoy playing all the available LEGO games on my PS3 for that exact same reason – it’s fun, it’s humor, and you get into your very own small world of bricks 🙂 Or visit LEGOLAND, getting to know that building bigger models is an even bigger challenge than ever expected. But – when I am old enough, I’ll look for a job at LEGO, or something similar. Building bricks and getting into the imagination flow keeps you forever young!


Putting bricks together might sound boring. But selecting different colors and methods to put them together, and sometimes even seeing – oh, wow, that little stair, that detail, oh my gosh. There’s a rope pulling down some stuff – oh, so children can play too. Not everything is round and smooth; it’s bricks, and they fit together as one. And if you break the model apart again, you can build your own imagined model. Toy with your fantasies and show the result to others. Be proud of fighting the challenge. Life has more than enough challenges. They may all be satisfying, but at a certain point you choose what’s most important.

Calming down from work, getting some time off, not running straight into a burn-out. The last months were exhausting, overwhelming (Icinga Camp SFO, a road trip through California, Nevada, Utah, …), and coming back knowing that the almighty OSMC hosted by my employer NETWAYS starts soon, and Icinga 2 must get ready and released as 2.2. Still, that’s a lot of stuff going on. But once you figure out that you’re working too much (and thanks Bernd & Martin, I appreciate your feedback (“Geh ham!” style always works ;))), it’s time to go for LEGO.

Thanks to Markus & Nicole, I recently visited the LEGO Store in Nuremberg. I had never really walked in there before – it looked like a large room with LEGO boxes, nothing special from the outside. The inside is just like: looking at all the built models, getting an idea of their size and level of detail. Oh, I want that one. Oh look, I know that brick – they used it 15 years ago for something completely different. Oh my, there’s Pick-A-Brick – I recently created a present for a friend based on Legoaizer, turning an image into a mosaic.

It was a success – I got totally into building the LEGO Creator series. The Expert series it is – the smaller (3in1) ones are cute, but I need the bigger challenges. I already own the Red Five X-Wing Starfighter, but totally missed the older UCS (Ultimate Collector’s Series) models.

The Palace Cinema is truly a magnificent model; it took me ~6 hours to build, in 3 parts. And since the Expert models for the town can be put together as a city street – yeah well, need I say more?

So I made a list of Creator Expert models I would like to have. Problem: older models, in their exclusiveness, don’t last that long in the LEGO assortment. I had a nice conversation with their support team about that, and maybe the LEGO designers will put older models into new shape. But that’s not the point either – I want to collect the Expert series, as much as possible. There are certainly models like the Café Corner, Market Street or Green Grocer – all sold out, and they cost too much money.

So, the affordable ones boil down to

  • Fire Brigade 10197 (only on eBay, Amazon)
  • Grand Emporium 10211 (sold out in Nov 2014, removed from the LEGO assortment end of 2014, as heard from a store employee)
  • Pet Shop 10218 (available in the LEGO Store Nuremberg)
  • Town Hall 10224 (sold out in Nov 2014, removed from the LEGO assortment end of 2014, as heard from a store employee)
  • Palace Cinema 10232 (built already)
  • Parisian Restaurant 10243 (new model in 2014, should last a bit longer)

The Tower Bridge (10214), the Sydney Opera House (10234), the VW T1 Camper Van (10220) and the Maersk Container Ship (10241) will be fading out soon, either by the end of 2014 or in 2015. Yet I was told by the LEGO Store staff that there might be new exclusive Creator models anytime soon. So let’s see about that 🙂

There are rumors about a new 2015 Creator Expert model, but no announcement yet. There’s a new UCS model coming too – Star Wars Boba Fett’s Slave I (75060). Definitely something I am waiting for 🙂

I will get them all, sooner or later – 3 of them are waiting to be built this weekend. Life is hard, but sometimes you just go offline and build LEGO.


Stuff that matters most

It’s been a while since I decided to go on my journey to Nuremberg, working for NETWAYS, doing more stuff with Icinga and getting to know what I like most – working in a team with spirit and dedication, learning new things along the way, and getting the chance to join conferences & meetups. That was 2012, and it’s been nearly 2 years already. 2 years in which we finally released Icinga 2 after 20 months of development and sleepless nights – and we’re not stopping there.

What really matters to me is not the code, or the things we do. It’s the way we do it – professional, but with a love of fun, spontaneity and teamwork. Getting a drink together on Friday at 4pm, just because we like it. Going skiing together (where I come from), BBQs, the Xmas party, … there are countless ways you can join this lovely little family (did I mention we loved the spirit of #atemlos and used it for our #b2run slogan?). It doesn’t matter which dialect you’re using – even my Austrian slang sounds familiar these days.

My work and dedication are now being honored by an invitation to San Francisco, joining my fellow colleagues and Icinga team members at the upcoming Icinga Camp. I am grateful for that – it’s something I never expected to happen after the huge success we already gained through our activities online & at various conferences. After my very first Icinga 2 training last week, with kind & positive feedback, it literally pushes my motivation to the next level. I have been an Icinga team member for 5+ years now, and it’s even more fun when you know that your work as a team (Icinga and NETWAYS, that is) is still going strong.

It feels like a dream you keep dreaming as a child – I have never been to the USA – and soon, in a few hours, it will come true. I cannot wait to visit “The City” and meet new Icinga users. It’s also a chance to meet those for whom travelling isn’t always possible – I’m glad to finally meet Matthew and Sam, both part of the Icinga team 🙂 We (Tom, Markus & me) are leaving SFO after Icinga Camp for a road trip going south (or even east) – Monterey, Sequoia, Death Valley, Las Vegas and finally Los Angeles. Stuff you only do once 🙂

And finally, the most important impressions you’ll never forget – things you cannot plan, surprises which will come, and the “friends & family” feeling 🙂

See you in San Francisco!


Installing C&C RA2 Yuri’s Revenge on Fedora 20

Playing 3D games also requires the 32-bit OpenGL drivers to be installed, apart from Wine itself.

yum -y install xorg-x11-drv-nvidia-libs.i686
yum -y install wine

Then grab your First Decade installation DVD and install at least Command & Conquer: Red Alert 2 and Yuri’s Revenge by calling setup.exe with Wine. After the serial number orgy, copy over a no-CD crack (do that at your own risk, or keep the DVD in your drive all the time).

Yuri’s revenge requires a virtual desktop in Wine, so adjust that one inside ‘winecfg’:



Then make sure to select Pulseaudio as audio source.


Start Red Alert 2 – Yuri’s Revenge either directly from its install directory using ‘wine YURI.exe’ or from the Wine menu.



Hello, Fedora 20

Since I recently switched my work notebook to Fedora 20, I figured that working with Fedora and Gnome 3 goes rather smoothly these days.

You get a fairly stable distribution, community-provided repositories (for the nvidia driver), and you’re able to play with bleeding-edge software without entirely breaking your production system. I didn’t like (K)Ubuntu that much (dist-upgrade was a huge fail every time), and several alternatives such as Linux Mint and other forks just don’t have a large community base (google the error, find a solution within minutes). Lately, Debian Testing wasn’t much of a pleasure either, and Debian stable is something I will only use for servers, not for notebooks or private workstations.

Other than that, I always play around with RPMs in Icinga Vagrant demo boxes, or keep updating them for Icinga and NETWAYS all the time. Which leaves Fedora the perfect choice for the next years (let’s see about that though).


Find a list of installed tools below – for whoever that may be useful; I’m certain it will be for me when re-installing. I am still too lazy for a Puppet module 😉

The install itself happens from a netinstall ISO, choosing the default encrypted LVM with a Gnome desktop installation. Once done, I’ll go get Steam and Wine for some Command & Conquer Red Alert 2 mod action.

yum -y install vim

# NVIDIA driver
yum localinstall --nogpgcheck$(rpm -E %fedora).noarch.rpm
yum localinstall --nogpgcheck$(rpm -E %fedora).noarch.rpm

mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
dracut /boot/initramfs-$(uname -r).img $(uname -r)
yum install -y vdpauinfo libva-vdpau-driver libva-utils

# Backup - 1. ssh key
scp -r sys@ .
# Backup - 2. rest
scp -r sys@{.Xauthority,.Xresources,.bash*,.dosbox,.gitconfig,.openvpn,.rtorrent.rc,.vimrc,docs,download,private,tools,coding} .

Chrome fc20 RPM: 
Vagrant fc20 RPM:

# passwordless sudo
sudo visudo

# Allow members of group sudo to execute any command


curl > ~/.bash_git

sudo -i

yum install -y gnome-tweak-tool dconf-editor gparted
yum install -y google-chrome-stable rdesktop shutter pidgin pidgin-otr gimp gimp-paint-studio
yum install -y hplip hplip-gui
yum install -y thunderbird thunderbird-lightning thunderbird-lightning-gdata
yum install -y httpd php php-pear php-mysql php-pgsql php-gd php-soap php-ldap
yum install -y screen lynx tcpdump wireshark htop rrdtool
yum install -y cairo-dock cairo-dock-plug-ins
yum install -y NetworkManager-openvpn NetworkManager-openvpn-gnome openvpn
yum install -y git git-cvs git-svn git-email
yum install -y autoconf automake libtool zlib-devel strace gdb valgrind clang ccache cmake gcc-c++
yum install -y rpmlint @development-tools fedora-packager
yum install -y mysql mysql-server mariadb-devel postgresql postgresql-server postgresql-devel
yum install -y nagios-plugins-all
# icinga 1.x
yum install -y httpd gcc glibc glibc-common gd gd-devel libjpeg libjpeg-devel libpng libpng-devel net-snmp net-snmp-devel net-snmp-utils docbook-simple
yum install -y libdbi libdbi-devel libdbi-drivers libdbi-dbd-mysql libdbi-dbd-pgsql
# icinga 2.x
yum install -y cmake bison flex openssl openssl-devel
yum install -y boost-devel boost-regex boost-signals boost-system boost-test boost-thread boost
# icinga-vagrant
yum install -y VirtualBox
yum install -y libvirt-devel libxslt-devel libxml2-devel ruby
# icingaweb2
yum install -y php-ZendFramework php-ZendFramework-Db-Adapter-Pdo-Mysql php-ZendFramework-Db-Adapter-Pdo-Pgsql php-devel

chmod a+r /etc/fuse.conf
vim /etc/fuse.conf
# Allow non-root users to specify the allow_other or allow_root mount options.

vim /etc/vimrc

" custom
set background=dark
set showcmd
set showmatch
" Show (partial) command in status line.
" Show matching brackets.
highlight ExtraWhitespace ctermbg=red guibg=red
match ExtraWhitespace /\s\+$/
autocmd BufWinEnter * match ExtraWhitespace /\s\+$/
autocmd InsertEnter * match ExtraWhitespace /\s\+\%#\@<!$/
autocmd InsertLeave * match ExtraWhitespace /\s\+$/
autocmd BufWinLeave * call clearmatches()
" disable shell beep
set vb



# Gnome Config

# 'obere leiste' - date, calendar week
# start apps: terminal, chrome, thunderbird, cairo-dock, pidgin
# options - session restore
# org -> gnome -> shell -> overrides -> button-layout :minimize,maximize,close

# Add __git_ps1 sourcing
vim ~/.bashrc

source ~/.bash_git
function myPrompt() {
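The function body got lost here; a minimal sketch of such a prompt function (the PS1 format is my assumption, not the original):

```shell
# hypothetical prompt: [user@host dir (git branch)]$
function myPrompt() {
  PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '
}
myPrompt
```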

# Default dirs
vim ~/.config/user-dirs.dirs


# Thunderbird
thunderbird + CTRL+C
cd .thunderbird
scp -r sys@ .
vim profiles.ini



mv .openvpn .cert
restorecon -R -v ~/.cert

# printer
sudo hp-setup


# RPMBuilds

/usr/sbin/useradd makerpm
usermod -a -G mock makerpm
chmod -R o+rx /home/makerpm/
passwd makerpm
su - makerpm

[makerpm@imagine ~]$ rpmdev-setuptree
[makerpm@imagine ~]$ tree
└── rpmbuild
    ├── BUILD
    ├── RPMS
    ├── SOURCES
    ├── SPECS
    └── SRPMS

6 directories, 0 files

# Databases
systemctl start mariadb

vim /root/.my.cnf
password = XXX

postgresql-setup initdb
systemctl start postgresql
# Users
useradd icinga
groupadd icingacmd
usermod -a -G icingacmd icinga
usermod -a -G icingacmd apache

Time for changes – introducing Fedora

Every time the xserver-xorg-* packages include ABI changes, they’re migrated to Debian Testing after a while. Using the non-free nvidia drivers – just because nouveau does not work on a Dell Latitude E6540 with an nvidia Optimus chipset – always breaks. Even if it may be my own fault, I can’t work like that. A similar example: Vagrant & Virtualbox. Time for something old & new – after FC5, now Fedora 20 again. A while back I already dropped my KDE desktop in favor of Gnome 3. Considering that I am a long-time Icinga Core & Web RPM package hacker, it’s an interesting move in the right direction.

Virtualbox missing vboxdrv dkms modules

During one of the last dist-upgrades, some trigger accidentally removed the linux-headers meta package, which is necessary for building dynamic kernel modules for nvidia and virtualbox.

Start-Date: 2014-06-07  21:17:14
Commandline: apt-get dist-upgrade
Install: python3-markupsafe:amd64 (0.23-1, automatic), gir1.2-secret-1:amd64 (0.18-1, automatic), python3-mako:amd64 (0.9.1-1, automatic), libgcrypt20:
amd64 (1.6.1-2, automatic)
Upgrade: libtotem-plparser18:amd64 (3.10.2-1, 3.10.2-3), rhythmbox-plugins:amd64 (3.0.1-1+b2, 3.0.3-1+b1), kde-runtime:amd64 (4.12.4-1, 4.13.1-1), rhyt
hmbox-data:amd64 (3.0.1-1, 3.0.3-1), rhythmbox-plugin-cdrecorder:amd64 (3.0.1-1+b2, 3.0.3-1+b1), libssh2-1:amd64 (1.4.3-2, 1.4.3-3), rhythmbox:amd64 (3
.0.1-1+b2, 3.0.3-1+b1), plasma-scriptengine-javascript:amd64 (4.12.4-1, 4.13.1-1), gir1.2-rb-3.0:amd64 (3.0.1-1+b2, 3.0.3-1+b1), librhythmbox-core8:amd
64 (3.0.1-1+b2, 3.0.3-1+b1)
Remove: linux-headers-3.13-1-amd64:amd64 (3.13.10-1), linux-headers-3.14-1-amd64:amd64 (3.14.4-1), linux-compiler-gcc-4.8-x86:amd64 (3.14.4-1), linux-headers-amd64:amd64 (3.14+57)
End-Date: 2014-06-07  21:17:26

Virtualbox does start, but fails with a fancy, non-telling error.


At first sight I was confused by ‘/etc/init.d/vboxdrv setup’, which does not exist with systemd anymore.
Reinstalling the Virtualbox DKMS package just told me – ah, there’s something missing.

# apt-get install --reinstall virtualbox-dkms
Reading package lists... Done
Building dependency tree.
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 1 not upgraded.
Need to get 0 B of 559 kB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database ... 297911 files and directories currently installed.)
Preparing to unpack .../virtualbox-dkms_4.3.12-dfsg-1_all.deb ...

Deleting module version: 4.3.12
completely from the DKMS tree.
Unpacking virtualbox-dkms (4.3.12-dfsg-1) over (4.3.12-dfsg-1) ...
Setting up virtualbox-dkms (4.3.12-dfsg-1) ...
Loading new virtualbox-4.3.12 DKMS files...
Building only for 3.14-1-amd64
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.

Fix it by installing the kernel headers and reinstalling the virtualbox dkms package.

# apt-get install linux-headers-amd64
# apt-get install --reinstall virtualbox-dkms

Then the vboxdrv kernel module has to be loaded.

# modprobe vboxdrv

Try to boot the VM – it will fail again, because the network interfaces require the additional ‘vboxnetflt’ kernel module.

# modprobe vboxnetflt

From sysvinit to systemd in Debian Jessie

sysvinit-core gets removed on dist-upgrade, and systemd-sysv is installed instead. The init dependency stays fulfilled, ensuring a smooth transition to systemd in current Debian Jessie.

nbmif ~ # apt-get dist-upgrade
Reading package lists... Done
Building dependency tree.
Reading state information... Done
Calculating upgrade... The following packages were automatically installed and are no longer required:
  libqmi-glib0 xulrunner-29
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
The following NEW packages will be installed:
  libqmi-glib1 libqmi-proxy systemd-sysv
The following packages will be upgraded:
  libmm-glib0 libpam-systemd libsystemd-daemon0 libsystemd-journal0 libsystemd-login0 modemmanager systemd
7 upgraded, 3 newly installed, 1 to remove and 0 not upgraded.
Need to get 2,294 kB of archives.
After this operation, 1,825 kB of additional disk space will be used.
Do you want to continue? [Y/n]

The previous init binary is now a symlink to systemd.

nbmif ~ # ls -la /sbin/init
lrwxrwxrwx 1 root root 20 Jun 28 13:28 /sbin/init -> /lib/systemd/systemd*

Reboot – thanks to parallel startup it’s a matter of seconds on an i5 with 8 GB RAM and a Samsung Evo SSD.

Remove trailing whitespaces on save in Komodo Edit 8

Komodo Edit 8 is a free editor (not the commercial IDE), but it is sometimes hard to configure. It certainly plays well while hacking on Icinga 2 in C++ or Icinga Web 2 in PHP.

On a fresh install, trailing whitespace is not removed when saving a document (and it really should be – it’s annoying when you open the file with vim or git diff with whitespace highlighting enabled).

Navigate to Edit – Preferences – Editor – Save Options and tick Clean trailing whitespace and EOL markers and Only clean changed lines. The last option saves us from cleaning the entire document and generating horrible git diffs that solve whitespace issues caused by others (I hate git diffs that fix everything but hide the real code change).


Debian Jessie, Chromium 35, NPAPI, Aura and the flash plugin

Chromium 35 bit me with flashplugin-nonfree no longer being supported – first on Saturday with a fresh install of Debian Testing, and later after a dist-upgrade on my workstation fetching the latest version.

# less /var/log/apt/history.log

Start-Date: 2014-06-07  21:07:20
Commandline: apt-get upgrade
chromium:amd64 (34.0.1847.116-1~deb7u1, 35.0.1916.114-2),
michi@imagine ~ $ dpkg -l *chromium* | grep ^ii
ii  chromium                     35.0.1916.114-2 amd64        Chromium web browser
ii  chromium-inspector           35.0.1916.114-2 all          page inspector for the Chromium browser

michi@imagine ~ $ dpkg -l *flashplugin* | grep ^ii
ii  flashplugin-nonfree 1:3.4        amd64        Adobe Flash Player - browser plugin

which results in a fancy browser warning:




The reason is simple – the Chrome developers decided to drop support for NPAPI plugins in Chrome 35 when switching to Aura on Linux, rendering the flash plugin incompatible.

Luckily there’s already an alternative plugin around, explained on the Debian wiki: the Pepper Flash Player.

# apt-get install pepperflashplugin-nonfree

This wrapper package downloads the current Chrome Debian package from Google’s servers and unpacks the Pepper flash plugin – necessary due to license issues with redistributing the plugin (hooray, yet again).

Chromium will detect that flash plugin first, and it starts working again (after closing and restarting the browser). I’m aware there’s gnash and other alternatives, but they either did not work or caused too many (compatibility) troubles.

Looks far better now 🙂


Gnome 3: Change default user dirs

It’s presumably hidden somewhere between gnome-session-properties, gnome-tweak-tool and dconf-editor, but this is the easiest way:

$ vim ~/.config/user-dirs.dirs


Mainly for the reason that upgrades might re-create the nasty default dirs.

Debian Jessie pixbuf errors reloaded

They still happen occasionally on dist-upgrade, and recently I was running into them every week – still, after one year.

Fix (other than noted on the shell):

# /usr/lib/x86_64-linux-gnu/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders > /usr/lib/x86_64-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache

Debian Testing: Kernel 3.12 & nvidia dkms 319.82 breaks xorg nvidia 319.76 xserver driver

A few days ago kernel 3.12+55 hit Debian Testing (Jessie), so linux-image-amd64 pulls the new kernel, and therefore also new nvidia-kernel-amd64 packages shipping 319.82.

Problem – the xorg nvidia driver is still at 319.76, which breaks the current xserver with kernel 3.12 and the nvidia drivers in current Debian Jessie.

# less /var/log/gdm3/:0.log

(**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
(==) NVIDIA(0): RGB weight 888
(==) NVIDIA(0): Default visual is TrueColor
(==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
(**) NVIDIA(0): Stereo disabled by request
(**) NVIDIA(0): Enabling 2D acceleration
NVIDIA: API mismatch: the NVIDIA kernel module has version 319.82,
but this NVIDIA driver component has version 319.76.  Please make
sure that the kernel module and all NVIDIA driver components
have the same version.
(EE) NVIDIA(0): Failed to initialize the NVIDIA kernel module. Please see the
(EE) NVIDIA(0):     system's kernel log for additional error messages and
(EE) NVIDIA(0):     consult the NVIDIA README for details.
(EE) NVIDIA(0):  *** Aborting ***
(EE) NVIDIA(0): Failing initialization of X screen 0
(EE) Screen(s) found, but none have a usable configuration.
Fatal server error:
(EE) no screens found(EE)
Please consult the The X.Org Foundation support at
 for help.
(EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
(EE) Server terminated with error (1). Closing log file.
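The decisive lines are the “API mismatch” ones. As a quick sketch (the grep pattern is my own), the two conflicting version numbers can be pulled out of the excerpt above – against a live system you would feed it /var/log/gdm3/:0.log instead:

```shell
#!/bin/sh
# extract all major.minor version numbers from the mismatch message
log='NVIDIA: API mismatch: the NVIDIA kernel module has version 319.82,
but this NVIDIA driver component has version 319.76.'
printf '%s\n' "$log" | grep -o '[0-9][0-9]*\.[0-9][0-9]*' | sort -u
# prints 319.76 and 319.82 – two different versions, hence the mismatch
```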

Temporary solution – remove the current kernel and step down to 3.11 and nvidia 319.76. But beware – that will remove the linux-image-amd64 and nvidia-kernel-amd64 meta packages pulling the latest kernel/nvidia packages too!

# apt-get remove linux-image-3.12-1-amd64
# tail /var/log/apt/history.log
Start-Date: 2014-01-30  19:13:06
Commandline: apt-get remove linux-image-3.12-1-amd64
Remove: linux-image-3.12-1-amd64:amd64 (3.12.6-2), nvidia-kernel-amd64:amd64 (319.82+3.12+1), linux-image-amd64:amd64 (3.12+55), nvidia-kernel-3.12-1-amd64:amd64 (319.82+1+1+3.12.8-1)
End-Date: 2014-01-30  19:13:23

I’m not going the ‘pin the package and get it from debian sid‘ here, since there are too many dependencies pulled.

Debian Testing: Virtualbox 4.3 breaks Vagrant 1.2

Debian Testing pulled a new Virtualbox version (4.3.2) into my system, which is apparently incompatible with Vagrant 1.2.2.

$ vagrant up

Vagrant has detected that you have a version of VirtualBox installed that is not supported. Please install one of the supported versions listed below to use Vagrant:

4.0, 4.1, 4.2

A debian bug is already open, but the package update is only available in sid, not testing.

Thanks lazyfrosch for the hint about pinning unstable and pulling only newer vagrant updates.

# cat < /etc/apt/sources.list.d/unstable.list
deb [arch=amd64,i386] sid main non-free contrib

# cat < /etc/apt/preferences.d/pinning
Package: *
Pin: release a=sid
Pin-Priority: -100

Package: *
Pin: release a=unstable
Pin-Priority: -100

Package: vagrant
Pin: release a=unstable
Pin-Priority: 991

# apt-get update
# apt-get install vagrant

Now it works like a charm again 🙂


Samsung Galaxy S3 Android 4.3 Update: NFC enabled by default

The update went smoothly yesterday, but then I had an unknown icon on the small bar on top. I noticed that they’ve renamed “4G” to “LTE” as well, but I couldn’t find out which new application was running.


Apparently they’ve changed the menu handling too: in the upper right corner you must tap to fully expand the menu. Then disable NFC in order to save battery power.


Playing with Icinga 2 and graphite

If you’ve attended the OSMC 2013 and the Icinga presentation you might have seen it already, but for all new readers – Icinga 2 got native support for writing metrics to graphite carbon-cache. There’s not much more to do than

  • have Icinga 2 installed & some checks configured
  • have graphite up & running
  • enable the GraphiteWriter feature

I’m using a Vagrant box for graphite where I am running a puppet module to install graphite from sources, but patching it for realtime performance – so you might want to assign it a little more disk space.

The Icinga 2 Vagrant box will install the latest and greatest snapshot rpms built from git next, so we are bleeding edge here – if you encounter any bugs, please report them to

The graphite vagrant box will listen on the forwarded port 20003 on localhost’s ip address. Feel free to modify the virtualbox portforwarding though – it’s just a different port so it doesn’t clash with any local installs.

Now get into the Icinga 2 Vagrant box and enable the GraphiteWriter feature.

$ vagrant ssh
$ sudo -i
# icinga2 feature enable graphite

Now uncomment host and port, and modify it to your carbon cache listener. Restart Icinga 2 to apply changes.

# vim /etc/icinga2/features-available/graphite.conf

/**
 * The GraphiteWriter type writes check result metrics and
 * performance data to a graphite tcp socket.
 */

library "perfdata"

object GraphiteWriter "graphite" {
  host = "",
  port = 20003
}

# service icinga2 restart

The Vagrant graphite box is accessible at http://localhost:8081.
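The GraphiteWriter feature speaks carbon’s plaintext protocol – one “metric value timestamp” line per metric. For a manual sanity check of the listener you can hand-craft such a line (the metric name here is made up):

```shell
#!/bin/sh
# build a metric line in carbon's plaintext format: "<path> <value> <timestamp>"
ts=$(date +%s)
line="icinga2.test.manual 42 $ts"
echo "$line"
# once the graphite box is up, pipe it into the forwarded carbon port:
# echo "$line" | nc -w 5 127.0.0.1 20003
```

If the value then shows up under icinga2.test.manual in the graphite web UI, the carbon listener and the port forwarding are fine.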

Home exercise: Set “check_interval = 1s” in your services, and watch graphite in realtime (patched auto-refresh). If you need some detailed insights on graphite itself, you may check out my employer’s trainings.

Nagios – More drama for the mama

It all starts with a bang

Well, obviously long time before I actually knew about anything like Linux or Nagios. Shortcut – first appearance 1999, trademark issues with the name ‘netsaint’ in 2002, renamed to Nagios which is some acronym for whatever (google it). Versions 1.x and 2.x offered simplicity, with a plugin api usable by everyone, and gained traction with many users. We can read that on Google, history, done.

Update 2016-01-15: It is still going on. Now with the Nagios::Plugin Perl CPAN module and legal threats. More to read over here.

Update 2014-01-24 09:29 CET: It seems that the drama continues before it even ended, so if you came here reading about “Michael Friedrich”, “Netways” and trademarks, as Ethan G. outlined in his comment on a rhel bug, here’s some additional vita (the rest can be found on XING and LinkedIn). 30, Austrian, 2009-2012 Admin/Developer at the University of Vienna, 2012-2013 Senior Consultant at Netways, 2013-present Application Developer at Netways. Lead Icinga Core Developer since May 2009.
If you’re wondering why last names are abbreviated – I prefer not to expose the possibly offended characters too much. It’s become a psycho drama.

Once you’ve read all through, you can happily start at the very beginning here to read again. Look for the 5th ace and let me know. I’ll happily grant you a beer or two, if you find it.

It’s a drama loop now.

Chapter I: How to (not) deal with problems & fork you

I figure that at some point, when companies had put Nagios into their monitoring stack, selling consulting and workshops whilst the original author just wrote code, it got problematic. Especially when people are demanding too much, and provide nothing in return. I do have that feeling sometimes with Icinga users as well, but being human means telling everyone about it, figuring out the input both sides may provide, and rejecting a feature request if it does not work out. Still, to remark – listen to the community, and answer.

Starting your own business, company, etc. is completely legit. The arising problem – a community requires love & feedback. If there is none, you really should consider giving up your one-man-show and open up for more developers. While Ethan G. claims that Icinga is “my fork”, it apparently is not. We generally do not own projects. We instead appreciate being part of them. I joined the team one month after they’d forked from Nagios, looking for a way to integrate Oracle into the *DOUtils backend. The dead nagios-devel mailinglists had made it clear already – a fresh fork with motivated people would allow me to participate and bring my patches upstream.

Basically that fork just happened when everyone’s local fork (Nagios with patches) was growing too big and unmaintainable. It’s still reasonable to do so, also because Ethan G. himself suggested forking to users annoyed by the not-very-responsive leadership ten years ago.

Chapter II: How to (not) deal with problems & be a drama queen

In Europe, especially Germany, it’s quite common that trademark trolls register trademarks for existing open source projects, forcing them to rename, and to lose their corporate identity and name within the community but also in the search engines’ indexes. I do believe it when Julian says (and told me in person) that he decided to register (and buy) the Nagios trademark in Germany to prevent that from happening. And it all worked out quite well.

  • NagVis was born in 2004, designed by German developers.
  • In 2006, PNP4Nagios was created. The most famous Perfdata Graphing Addon based on RRDTool. Hosted in Germany.
  • The Monitoring Portal was created in late 2003, operated by independent Germans. This is where the community meets. Statistics from 23.1.2014: “12,226 Members – 29,644 Threads – 197,306 Posts (average 51.04 Posts per day)”
  • Monitoringexchange (former Nagiosexchange) operated by Netways offers 2000+ plugins and addons

Trademarks are not free. They need to be paid. So Ethan should’ve been grateful not to have to pay those fees in Germany after all, getting supported by companies caring about the community, letting them work on their spare time projects, and not meaning any harm.

Instead, the bitching about trademarks started. First off, it was about (now, then related to the famous Nagios conference (now Open Source Monitoring Conference) and other brands which were demanded to be handed over to Ethan himself. It all ended with misleading information and a smear campaign on twitter.

Announcements by Nagios Enterprises: I II

Statement by Julian

At that time Icinga was already forked, and gained traction within the existing Nagios community. Furthermore, Icinga is supported by Netways with server hosting and development manpower. Which obviously played a role in this fight, but it’s not that clear.

The outcome for the Nagios community in the end was

  • People started to know Icinga
  • Nagios Business Process addon lost its domain, and it took a while to let search engines adopt
  • the Nagios Conference was renamed to the Open Source Monitoring Conference and attracts more participants looking at Nagios & forks, OpenNSM, Cacti, Zabbix, etc
  • Confusion between and
  • Bookmark update fun

In the end, we also had the Nagios drama mailinglists announced.

Chapter III: How to (not) deal with problems & act like a child

At some point, Jean Gabes proposed his rewrite of Nagios in Python to the Nagios Developer’s list. While it was to be expected that Nagios developers wouldn’t just switch from C to Python, the reactions after making Shinken a “Nagios compatible rewrite” were a bit strange and childish. Especially in terms of approaching other people. Or attacking even.

Only quoting a passage here:

blah, blah, blah,
You heard about the drama on the mailing list but yet you are still busy creating more. If you truly believed that flaming is nonsense then why do you initiate it? It is very humble of you to request the flaming to stop after you have had your inaccurate dramatic statements posted. I suppose you thought your statements would not be challenged because we usually do not respond to ridiculous rantings of a seemingly angry person who jumps to inaccurate conclusions and makes rash assumptions.
Your reason for forking lacks substance. Nagios is an engine. A very powerful engine that has gained in popularity because Ethan Galstad has been a very successful programmer and gatekeeper of the project. If you are not happy with the progress of the project, so what! You are just one person . There are millions of Nagios users that are very happy with the Nagios engine. We do not feel the need to cater to one or a few angry people that are jealous of Ethans success. Your lack of respect for Ethan is appalling, knowing that Ethan spent 10 years of his life developing Nagios and cultivating the community.
You “forkers” try to justify your actions by saying it is for the good of the community. The rest of us know the real reason is to satisfy your own egos (really twisted). You must feel unethical about what you are doing or you would not have tried to blame us for your actions. IMO, you forked because you and Gerhard lack the skills and creativity to start your own project. You would rather “steal” someone else’s work. Further evidence of your lack of creativity is posted on your own website where even your tag line is stolen from Nagios.
It is time to stop the negative, inaccurate, flaming drama directed at Ethan and Nagios. Do your own hard work and stop attacking Ethan and the work that he has done for the last 10 years. Please share my response with your partner in crime, Gerhard Loser.

That doesn’t sound very constructive, and contains plenty of personal attacks. Gerhard’s last name actually is spelled “Laußer”.

Chapter IV: How to (not) deal with problems & compare yourself

At some point, your customers ask you about your competitors. Or community members even. How do you compare to XY? We had that with Icinga a lot, and therefore created our version of it. While it may make Icinga look good, we’ve learnt one thing: The previous version compared Icinga with Nagios Core and Nagios XI (the commercial entity). While Icinga is free in every single line of code, Nagios XI isn’t. We decided to skip that part, and removed that column from the Icinga comparison chart.

After a while, we came across this interesting OpenNMS article talking about Nagios FUD. And we figured that Nagios Enterprises had created a comparison chart for Nagios vs Icinga themselves. But actually that one lacks any substance, comparing Icinga Core with Nagios XI (the commercial one), but only flagging it Nagios. At that time it was unclear that “Nagios” means “Nagios Commercial Version”, while “Nagios Core” follows the free open-core model.

Either way, the pdf also contains false information about IP violations done by Icinga and Netways. Neither has ever violated any intellectual property, but probably that’s the only argument left to tell managers and buyers that Icinga is a risk, while apparently it’s just pure hate against the fork, spreading FUD themselves all over the internet. The quote below is from the How_Nagios_Compares_To_Icinga section of that pdf; copyright of both the pdf and the quote is owned by NE.

About Icinga
Icinga is a young project designed as incompatible Nagios fork by German company Netways GmbH. Icinga suffers from high developer turnover and poses legal risks to organizations that deploy it.

Use Caution
Both Icinga and Netways GmbH have a history of intentionally violating international intellectual property laws. You may be putting yourself and your organization at risk if you deploy or implement Icinga in your organization.
Knowingly using or working with products or companies that violate intellectual property laws and treaties can have significant legal and financial repercussions.

After reading this bullshit, and not liking the Matrix either, the developer in me decided to invest several days to create a comparison from a developer’s point of view. Since having many developers doesn’t necessarily mean bad project quality. And quite frankly, being the release manager since 0.8.3, 4.5 years ago, it’s still annoying to see that they advertise their lies publicly.

Btw – there are other comparisons with Centreon, Groundwork, Opsview and Zabbix.

Chapter V: How to (not) deal with problems & censor the internet

Wikipedia is an open-minded community, enforcing free speech in their articles. The English article about Nagios contains references to its forks. Some people obviously influenced by or working for Nagios Enterprises (whois said so) had a different opinion on that, and tried to censor that Nagios article by removing all unwanted strings and urls. Or they added subjective statements, like “Nagios XI is the best monitoring tool in the world”. That did not end well for them after several fraudulent edits to that Wikipedia article. It got even worse – Wikipedia banned Nagios Enterprises for sockpuppetry.

And – trademarks join the drama queens for the first time. The French Nagios community was formed by Olivier J. at former also talking about Icinga being a Nagios fork. It seems that Ethan G. and Mary S. (wife) didn’t like that much, and attacked Olivier in a way, well, no words for that. But go figure, transferring the domain and changing the community portal to was the only wise choice. And it’s still a good community portal for everyone, not only Nagios. And you may not censor their articles, but read them – Chrome offers auto-translate anyways.

Chapter VI: How to (not) deal with problems & treat community members like shit

In early 2012 there was an interview done by with Ethan G. and Michael L. – both sides added their arguments, comments, etc. While the article was great, I felt that it required some more additional content (some of it is found in this article as well now).

A short time after that article, Michael L., the former owner of got a letter from ICANN that Nagios Enterprises had claimed the trademark on the domain, and that the “Uniform Domain Name Dispute Resolution Policy” now applies. .org TLD disputes are directly handled by ICANN, while ccTLD disputes, for example for .de, are handled by the local domain registry “DENIC” (and not even that, but this is handled under civil law in Germany). That ICANN procedure is pretty annoying, and for non-native English speakers a mess without lawyers.

At that time, it was already known thanks to the “Julian incident” that trademarks are now handled differently by Nagios. So the domain “” had been prepared long time ago, the primary record was changed, and Google did get a hit. The final announcement left hope that the community would go on like before. And we weren’t mistaken – it’s even more international after all those years.

Sending a patch to Nagios to in the name of the Icinga Development Team resulted in “Changelog clarifications” by Ethan G., and, after claiming the copyright on that patch, in a ban on their tracker. Meanwhile on the nagios-* mailinglists the string “*” was banned (my previous employer) – I know that because I couldn’t write to nagios-plugdevel either. Holger unblocked me when we were debugging the root cause.

Chapter VII: How to (not) deal with problems & work with a zombie community

Have a Nagios core developer maintain the patch queue & redesign Nagios Core 4 – awesome. Kicking that developer out of the dev team just because he didn’t test the cgis properly – priceless. So yeah, 2013 was when it all went to shit with Nagios. Fun fact: Andreas chose the Open Source Monitoring Conference (the one which had to be renamed due to trademark claims by NE) to literally hijack the “Nagios future” presentation, telling the audience that he got removed, and forked Nagios 4 into Naemon. Well, he actually wrote 99% of it, so no big deal.

So it really feels like that the (former) Nagios community now meets at conferences, presenting new features and having lots of beers with community members. And it doesn’t matter if it’s called Nagios, Icinga, Shinken, Naemon or anything else. At OSMC 2013, we had a great time with Zabbix & OpenNMS developers too, it felt like one big happy family. Not celebrating that Nagios got forked again – only having a great time together, and leaving each other with visions & ideas.

So it’s not about Nagios anymore, and the community changes as well. is bloated with do-it-yourself, student, pro and commercial editions so people won’t easily find the download urls anymore. Everything is about business and selling a product. And the difference between Nagios and Nagios XI is not clear – rather close the webpage and ask for alternatives these days.

Content in that picture is copyright by NE.

The final chapter

Many people still use Nagios. They are annoyed by bugs, or broken releases. Missing features like real distributed monitoring, or stable apis are bad, but there are other tools on the market solving those. You may still make Nagios write to graphite somehow. But at some point, #monitoringsucks will apply.

In any way, installing a Nagios/Icinga/Shinken/Naemon/Centreon/Opsview Core will require plugins. Small nifty executables, be it compiled source binaries, or just perl/python/etc scripts returning some output and an exit code. If not, your first look onto the webinterface won’t be green. Green means “Everything is OK, don’t panic”. As a matter of fact, every single admin gets nervous if there’s something not green. Especially after the first install.

That being said – without the enormous effort of the Monitoring Plugins Development Team (former Nagios Plugins Development Team) all those core check engines would be nothing. Not even a single customer would buy your support without plugins making this work out of the box. We (in terms of the Icinga team) include the Fedora EPEL repositories in our Vagrant demo boxes for Icinga 2 pulling the nagios-plugins package.

Last week, Wednesday 15.1.2014 at 22:38 to be exact, the following dialog happened on IRC (#nagios-devel):

22:24:40 < dnsmichi> is that the iframe forwarder thing?
22:27:37 < emias> dnsmichi:
22:28:11 < emias> dnsmichi: And if you query that web server with "Host:", you'll get a modified version of our home page.
22:28:25 < dnsmichi> modified?
22:28:34 < emias> dnsmichi: I'm pretty sure yhey're taking over the domain.
22:28:42 < emias> "they're"
22:29:11 < dnsmichi> i guess. they are trying everything to hide projects talking about nagios forks.
22:29:57 < emias> dnsmichi: Yes. Ethan had sent an email a while back, seems ge wasn't happy with my response.
22:30:00 < emias> s/ge/he/
22:31:59 < emias> Soooo!  Are we totally sure?  Then I guess we should put up and post to the list as soon as possible.
22:32:35 < dnsmichi> No. But 2014 is a good year to change things
22:32:41 < emias>
22:32:54  * emias wanted to go to bed early today ...
22:33:02 < dnsmichi> Like leaving sourceforge
22:33:47 < emias> Well that's done.
22:34:25 < emias> As we moved everything to our own server (+ GitHub), switching to would be very easy now technically.
22:35:46 < emias> http://xxxxxx/archive/np-new.html
22:36:00 < emias> That's the content I currently get from their server.
22:36:33 < emias> I think I'm sure enough.
22:38:10 < dnsmichi> ehm
22:38:24 < dnsmichi> i just figured
22:38:35 < dnsmichi> that webserver actually serves
22:38:49 < dnsmichi> i miss icinga, neamon, shinken on the main website
22:39:11 < emias> dnsmichi: That's what I'm talking about.
22:39:40 < dnsmichi> yeah, but not only via telnet Host trickery, but live in my browser
22:40:10 < dnsmichi> can you edit the main page and look if your changes are applied?
22:40:37 < emias> heh
22:41:36 < dnsmichi> i think you have been hijacked.

So basically, at that time, the websites were identical, after the Nagios Plugins Development Team was required to rename their project. I may be biased and have reacted emotionally in that situation, but my German-speaking brain told me that copying a website and changing the dns records would mean: put your own (censored) content into the user’s sight. He won’t notice. That’s just like getting a mail asking for your bank account details, showing you a website which _looks_ identical, but isn’t technically. So if would allow you to change your passwords, they would simply have fetched and stored your passwords, being able to use them against you.

On Sunday, 19.1.2014 it looked like this


In that specific regard, the problem is even worse: is offering a software download. A software which is installed on thousands of servers. By serving an identical software package, everyone will trust it and just upgrade/install it, as the documentation of many projects says. The user won’t recognize any changes. What if they decide to add a phone-home functionality to their software? You’ll never know, because you are trusting the name. You are trusting a vendor which had been replaced. That obviously means “compromising” or “hijacking” a website.

The legal part of copying copyrighted content only came up later in the drama (it being totally illegal to act like that). But still, NE at least recognized that, even if they didn’t put it into a public statement. now looks a bit different than the original content now served by. Even so, their actions are still illegal. But if no-one stops them, they will continue like before.

Either way, there was an announcement the day it happened on the website, and later to the mailinglists, where a discussion started. Also some insights on the previously happened Naemon fork (look through the archive, if interested).

On Thursday (16.1.2014) the discussion on IRC was still ongoing. At that time, I took the liberty to step forward and inform the package maintainers of ‘nagios-plugins’ while the now kicked and renamed team continued their renaming process. Jan W. is also the Debian packager, so I just informed RHEL, SUSE, FreeBSD, OpenBSD, Gentoo and ArchLinux. These distributions are well aware of Icinga, and I do have some contacts over there. Or the other way around – me being a developer changing stuff wants to inform the packagers, making their lives easier. I’ve been maintaining the Icinga RPMs for 4 years now, so I know what I’m talking about. Upstream fucks up, you’re screwed with mass reworking your packages.

In any attempt made, it was about users and their changed upstream source infecting their chain of trust.

Somewhere between Thursday and Friday (17.1.2014) the story hit reddit (it wasn’t me). There were plenty of comments in 2 threads (linux and sysadmin). While ‘scumbag’ isn’t a nice description, reddit still allows you to name it like so. And apparently, given the fact that NE did not tell anyone about their website defacement, ‘scumbag’ is the very least one could say (not that I would, just saying, eh?).

Interestingly enough, that story was to be read on Slashdot too. And then a bit more widespread on twitter as well.

In parallel to all the news, and the overwhelming feedback & appreciation of the monitoring plugins (and also Icinga, which normally benefits from any Nagios drama), the discussion was ongoing on the bugs I’ve opened for the packagers. While I was and am not essentially interested in helping NE to gain their ‘nagios-core-plugins’ package, it’s still vital to know that the Redhat bug unleashed a competitive discussion on how to proceed here. While NE employee Andy B. always insisted that the Monitoring Plugins project is a fork of the original (and fixed the git repo with a single commit after my remark on trust – that same passage is also quoted in that interesting blog article referencing my community concerns), the general outcome was that no-one is able to use/update the ‘nagios-plugins’ package in EPEL. Which is a good deal imho so far.

The thing with a newly formed Nagios Plugins team is – they don’t know the code very well, even if claimed otherwise. Not even Eric S. who tends to be the only part-time core/ndo/nrpe/nsca developer of NE, may handle all the stuff. From my personal observation following their git history closely he broke more stuff than Andreas ever did (getting paid for breaking Nagios? Haha, Just kidding.).

On Monday, 20.1.2014 Ethan G. joins the drama for even more drama. NE has announced the new Nagios Plugins team, while attacking Holger W. using his full name in their company blog. What a shame – no more words to add here.

Content copyright by NE ofc.

In any attempt, now looks different (Wikipedia tells about the controversy too). They’ve also changed their opinion on mailinglists (recently shut down to prevent Andreas E. from announcing his Nagios fork, as rumors do tell) and have now auto-subscribed all users already subscribed to the monitoring-plugins list to the nagios-plugins list. Copying the subscribers database and using that personal data against the unwanted project? Well, nice try. Users are confused anyways, and will happily unsubscribe from both mailinglists, looking for alternatives to follow the news or look for support.

Anyways. Too much drama here involved. FreeBSD, OpenBSD, SUSE, Debian have reacted. Some of them will just move from ‘nagios-plugins’ to ‘monitoring-plugins’ and orphan the old package. For RHEL, Sam K. is working on a new package (I’ve only done some additional package foo in order to help resolve all the remaining showstoppers), becoming the new ‘monitoring-plugins’ maintainer in EPEL 7. For EPEL5/6 there may not be an obvious solution, but we’ll make sure to get something working for our communities.

Even if a Nagios user installs packages from being happy about fixed problems. Is that a crime? No it isn’t. But NE thinks it is. But it doesn’t matter in the end. We (the forks) are the bad guys, while NE and Ethan G. are the ones who never started anything, and are being harmed by us. Tbh I consider this a personal attack, but hey – that’s a RHEL bug to fix problems. Not to generate more drama for the mama, eh?

And now for the closing end, some more dramatic drama. Well, first off an interesting blog entry by Matt Simmons, which showed that my English is actually bad in terms of vocabulary. Using the term “compromised” actually had put the RHEL bug up to hacker & security news feeds. Well, public attention for what price. Still NE is probably using that argument against me, but who cares anyways. That really was a language accident or incident, call it whatever you like 😉 Oh, and I’ve made sure that the internet doesn’t forget even if this story goes offline.

There are probably some more addons or plugins around which are compatible with Icinga, or mention that on their websites. They’ll probably get an email soon-ish and being ordered for censorship serving the greater power of the NE dictatorship. And then they’ll lose their motivation. Or step down, because renaming all the stuff causes that much work, moving the domain, re-establishing a trust platform. Or they are forked by NE into their enterprise stack, like happened with Nagiosql (now Core Config Manager), NagTrap (now NSTI) or Teeny Nagios (now Nagios Mobile). But you as addon developer get all the support questions, that’s for sure.

Nagios Enterprises has shown in many ways how to kill their own community grown around Nagios. Conferences have started popping up where people meet not to only talk about Nagios (the Nagios World Conference Italy renamed itself to Open Source Systems Management Conference while the Open Source Monitoring Conference attracts >250 open source enthusiasts every year here in Nuremberg). Community platforms which have resisted against their censorship and renamed themselves (the real community lives at Addons which are looking for alternative hosting platforms. Developers who just do not like them for their contribution agreements (and other licenses which explicitly forbid you to fork). Everyone chooses their platforms where they’re free. Like we are on #icinga on freenode. Or social media where censorship just doesn’t work. Not even fork developers will send them patches anymore, being either banned or ignored.

In any way, the ordinary user will not create any plugin or addon in the future. There are too many ways to fail, and if you’re open-minded you’ll get a letter from a lawyer forbidding you to talk about competitors with your Nagios-exclusive tool. A zombie community – and then it will be interesting to see how the open core business model of Nagios will be able to survive, if the community stops contributing. Or if the community starts over in Icinga/Shinken/Naemon, or any other cool devop tool such as Sensu.

But, that’s another drama for the future. In order to be prepared, get this t-shirt. Then meet with all Nagios forkers, create cool code & laugh with community members, and enjoy your beers.

And leave the drama to those having abandoned their old friends.

Galaxy S3 Automatic Backup failed

My Galaxy S3 says “automatic backup failed” without any mention of its origin, nor is a popup opened when touched. While looking for a possible solution on the net, I’ve stumbled over Google+ trying to sync all photos and media automatically using wifi access. I remember that I had that disabled in Picasa, and told Google not to sync photos from my mobile to my Google+ account, even with wifi access.

Well, Google+ uses a new setting, having that enabled by default – WTF? Kill it with fire.

Screenshot_2014-01-13-20-20-02   Screenshot_2014-01-13-20-20-41

Change content width in WordPress Twenty Fourteen Theme

Well, obviously WordPress Twenty Fourteen has an issue: it was made for tablets and its primary width is 1260px – which makes the content width 654px. That is a mess when you want to present log output and code to the reader.

I generally dislike hacking stylesheets, even if the default style.css is very well documented. The problem is that you ain’t gonna get it for free with a single entry, but must adapt plenty of them.

After installing this nifty plugin in order to overwrite various styles (the child theme idea just sucks), my attention was caught by this blog post.

I’ve modified it a bit, and set the following entries

  • site and site-header max-width are set to 1420px
  • all content containers are set to 1024px (present everything to the reader)
  • the content area (hentry) is set to 768px


.site {
	max-width: 1420px;
}

.site-header {
	max-width: 1420px;
}

.hentry {
	max-width: 768px;
}

.site-content .entry-header,
.site-content .entry-content,
.site-content .entry-summary,
.site-content .entry-meta,
.page-content {
	max-width: 1024px;
}

.image-navigation {
	max-width: 1024px;
}

.page-header {
	max-width: 1024px;
}

.contributor-info {
	max-width: 1024px;
}

.comments-area {
	max-width: 1024px;
}

.site-main .mu_register,
.widecolumn > h2,
.widecolumn > form {
	max-width: 1024px;
}

get_secrets_cb(): Failed to request VPN secrets using gnome-shell, Network Manager & openvpn

Using the openvpn network manager applet, this strange error occurred when trying to connect to my VPN. I also noticed that editing the configuration failed with a strange error about not being able to load the template.

Jan  4 16:57:03 nbmif NetworkManager[2872]:  Starting VPN service 'openvpn'...
Jan  4 16:57:03 nbmif NetworkManager[2872]:  VPN service 'openvpn' started (org.freedesktop.NetworkManager.openvpn), PID 6230
Jan  4 16:57:03 nbmif NetworkManager[2872]:  VPN service 'openvpn' appeared; activating connections
Jan  4 16:57:03 nbmif NetworkManager[2872]:  [1388851023.483964] [nm-vpn-connection.c:1374] get_secrets_cb(): Failed to request VPN secrets #3: (6) No agents were available for this request.
Jan  4 16:57:08 nbmif NetworkManager[2872]:  VPN service 'openvpn' disappeared

The gdm session log reveals that something is missing…

$ tail -f .cache/gdm/session.log

      JS LOG: Error 'VPN plugin at /usr/lib/NetworkManager/nm-openvpn-auth-dialog is not executable' while processing VPN keyfile '/etc/NetworkManager/VPN/'

Looking at the openvpn service configuration itself, it is configured correctly:

# vim /etc/NetworkManager/VPN/


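For reference, such a VPN service keyfile usually looks roughly like this – the filename (e.g. nm-openvpn.name) and paths are assumptions and differ per distribution:

```ini
# hypothetical /etc/NetworkManager/VPN/nm-openvpn.name
[VPN Connection]
name=openvpn
service=org.freedesktop.NetworkManager.openvpn
program=/usr/lib/NetworkManager/nm-openvpn-service

[GNOME]
auth-dialog=/usr/lib/NetworkManager/nm-openvpn-auth-dialog
properties=/usr/lib/NetworkManager/libnm-openvpn-properties
```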
But the file itself is missing…

# ls -la /usr/lib/NetworkManager/nm-openvpn-auth-dialog
ls: cannot access /usr/lib/NetworkManager/nm-openvpn-auth-dialog: No such file or directory

Re-install the involved packages.

# apt-get install --reinstall network-manager-openvpn network-manager-openvpn-gnome

# ls -la /usr/lib/NetworkManager/nm-openvpn-auth-dialog
-rwxr-xr-x 1 root root 31K Sep 13 19:54 /usr/lib/NetworkManager/nm-openvpn-auth-dialog*

Now check the broken symlink for the gnome shell and manually fix it.

# ls -la /usr/lib/gnome-shell/nm-openvpn*
# ln -s /usr/lib/NetworkManager/nm-openvpn-auth-dialog /usr/lib/gnome-shell/
# ls -la /usr/lib/gnome-shell/nm-openvpn*
lrwxrwxrwx 1 root root 46 Jan  4 17:10 /usr/lib/gnome-shell/nm-openvpn-auth-dialog -> /usr/lib/NetworkManager/nm-openvpn-auth-dialog*

But still, it remains broken.

$ tail -f .cache/gdm/session.log

      JS LOG: Invalid VPN service type (cannot find authentication binary)

It’s necessary to not only restart the network manager, but also dbus in order to apply the changes. And while at it, save your current work, and reboot. Killing dbus while running a window manager isn’t much fun 😉

# service network-manager restart
# service dbus restart
# reboot

Kudos to this forum post, found among many, many bug reports and hints. It seems that a package upgrade changed the location of the auth dialog file, which then broke everything, as usual.
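For the next time this breaks, the manual repair above can be wrapped into a small sketch script – the paths are the ones from my Debian install and are assumptions for other distributions:

```shell
#!/bin/sh
# Sketch: recreate the gnome-shell symlink to the NetworkManager
# openvpn auth dialog if it is missing or dangling.
fix_auth_dialog() {
    src="$1"   # real binary, e.g. /usr/lib/NetworkManager/nm-openvpn-auth-dialog
    dst="$2"   # symlink, e.g. /usr/lib/gnome-shell/nm-openvpn-auth-dialog
    if [ ! -x "$src" ]; then
        echo "missing: $src (reinstall network-manager-openvpn-gnome)"
        return 1
    fi
    # '-e' follows symlinks, so a dangling link fails this check too
    if [ ! -e "$dst" ]; then
        rm -f "$dst"            # drop a possibly dangling link
        ln -s "$src" "$dst"
        echo "linked: $dst -> $src"
    else
        echo "ok: $dst"
    fi
}
```

Run it as root with the binary and symlink paths, e.g. `fix_auth_dialog /usr/lib/NetworkManager/nm-openvpn-auth-dialog /usr/lib/gnome-shell/nm-openvpn-auth-dialog`, then restart network-manager and dbus as above.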

gnome shell flash fullscreen freeze workaround

There’s a bug in the current gnome-shell 3.8.x+ with fullscreen Flash videos in Chromium and Iceweasel. While YouTube seems to have a workaround for the missing window focus (but only in Chromium), other Flash players just open the video in an invisible layer: the video freezes in the browser and you’ll only hear the sound.

Windowed playback works – fullscreen does not. The bug is described here, but the report seems a bit dead.

Debian Testing, d-u 2013-12-29
xorg 1.14.5-1
gnome-shell 3.8.4
chrome flash player 11.2 r202

Workaround – use a tool named devilspie and set the focus for such fullscreen events automatically as described here.

# apt-get install devilspie
$ mkdir ~/.devilspie
$ cat <<EOF > ~/.devilspie/flash_fullscreen.ds
(if
	(or
		(is (application_name) "plugin-container")
		(contains (application_name) "chromium-browser")
		(contains (application_name) "flash-plugin")
	)
	(begin
		(focus)
	)
)
EOF

Now make sure that devilspie is started automatically in GNOME 3 – either press ‘ALT + F2’, run ‘gnome-session-properties’ and add a new startup entry there,


or add a new entry manually:

$ cat <<EOF > ~/.config/autostart/devilspie.desktop
[Desktop Entry]
Type=Application
Name=devilspie
Exec=/usr/bin/devilspie
Comment[de_DE]=flash fullscreen fix
Comment=flash fullscreen fix
EOF

Log out and in again, or run devilspie in the background using ‘/usr/bin/devilspie &’ – works again 🙂