Network Manager not working: Disabled by Hardware Key

If Network Manager claims that WiFi is disabled by a hardware key, take these steps:
  1. Check whether a hardware key actually disables the WiFi (duh!) - this may include key combinations such as Fn-F2.
  2. Check with rfkill list whether some devices are blocked, and unblock them with sudo rfkill unblock all.
  3. If one device remains blocked, it may block all other devices due to a bug in Network Manager. Hence removing the modules of all other devices may be necessary to enable a device which is actually already unblocked (e.g. sudo rmmod rt2800pci and sudo rmmod ideapad_laptop).
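
Putting it together, the whole sequence might look like this (the module names are examples from my machine, yours will likely differ):

rfkill list                  # see which devices are soft or hard blocked
sudo rfkill unblock all      # lift all soft blocks
sudo rmmod rt2800pci         # remove the modules of the other devices ...
sudo rmmod ideapad_laptop    # ... to work around the Network Manager bug
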
Worked for me - let me know how this works for you.

Successfully Connecting your Bluetooth Headset with PulseAudio and A2DP to Linux

After many attempts to connect a Bluetooth headset to Linux via Bluetooth and PulseAudio, the following procedure has proved pretty successful for me most of the time:

  1. Pair Headset normally via blueman.
  2. Connect to the headset (aka telephony) service (not A2DP)
  3. killall -9 pulseaudio (it often hangs for me)
  4. start playing something
  5. open pavucontrol
  6. switch the playing output to the bluetooth device
  7. you now hear crappy sound
  8. Now, while the sound is playing, connect the A2DP service via blueman.
  9. PulseAudio should automatically switch to high quality playback now. Otherwise: go to the last tab in pavucontrol and set the profile to A2DP.
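
If you prefer the command line over pavucontrol for the last step, pactl can switch the profile as well. A minimal sketch - the card name is a placeholder (take the real one from the list command), and depending on your PulseAudio version the profile may be called a2dp or a2dp_sink:

pactl list short cards
pactl set-card-profile bluez_card.00_11_22_33_44_55 a2dp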

Enjoy! And let us know if this worked for you! ;)

Kernel Compile Service for Testing Patches and Bisecting

You know the situation: you report a bug in the kernel and then, ideally, you're asked to test a patch? This means you have to compile your own kernel, and that costs a lot of time: setting up a build environment, compiling, testing, compiling again, ... Depending on your skills and your machine this can take a lot of effort.

It seems this could be so much easier and faster. The quick&dirty fix: the Ubuntu kernel PPA, for example, could provide a package with the already compiled sources so that you only have to compile the part which changed - if that would even work. The better fix: you could request a compiled image with a certain patch applied. That would not take too many resources - after all, the compiled tree would already be there. And it would make debugging enormously easier. It would not need to be available to the general public; an "access key" could be distributed to someone who reported a bug in the bugzilla once a patch is ready for testing.

And what would help even more would be if a git bisect could be done in the same way: a powerful compilation server lets you bisect via a web interface - you download a kernel image, test it, and then tell the interface whether the error occurred or not. Of course that would take much more computing power. But that's exactly why it would be great to have.
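
For comparison, this is the local workflow such a service would automate - with a full kernel build at every step:

git bisect start
git bisect bad                  # the currently running kernel is broken
git bisect good v3.13           # example: last version known to work
# now build and boot the kernel git checked out, then report the result:
git bisect good                 # or: git bisect bad
# repeat until git prints the first bad commit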

Overall, all this would improve kernel debugging a lot and reduce the debugging effort for developers and users alike.

Running New Software on an Old System

I have a netbook which still runs a very old system. It's ancient and no longer supported (just like Windows XP). But it's the last release which supports the proprietary Intel graphics drivers (poulsbo) with video acceleration in Linux. The problem is that using it in any networked environment would be extremely dangerous, because it has accumulated years of serious security issues. And the software really is ancient.

Luckily, there is help, called schroot. Using a simple script, you can replace any command with the corresponding command from inside the chroot. With this simple script, the application behaves just as if it were installed in your current environment - except for access to directories.

#!/bin/bash
# run the chroot's browser as if it were installed on the host
schroot -c precise -- /usr/bin/google-chrome-stable "$@"

Of course you can use the same method to simply make your system a bit more secure for a few programs: use schroot and debootstrap to set up a sandboxed environment.
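
A rough sketch of such a setup on a Debian/Ubuntu host (release name and paths are just examples):

# create a minimal Ubuntu precise system in a directory
# (for Ubuntu suites you may need to pass a mirror as a third argument)
sudo debootstrap precise /srv/chroot/precise
# then register it in /etc/schroot/schroot.conf:
# [precise]
# description=Ubuntu 12.04 chroot
# directory=/srv/chroot/precise
# type=directory
# users=youruser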

New Favorite Root Server Host: On-demand virtual machines for $5/month or 0.7¢/hour

I've been thinking about root server hosting recently and a friend pointed me towards Digital Ocean. Besides the good pricing and the choice of OSes, I really like that you can pay by the hour. That means you can use a server for a few hours, shut it down, create a snapshot, and "destroy" it. You keep the state of your machine and can return to it easily at any time.

A server with 512 MB RAM, 20 GB SSD and 1 TB transfer costs you $5 per month. But you can also use a server for just an hour for under a cent - precisely $0.007. This is very cheap yet comfortable for remotely testing things. And you get a fully set up Ubuntu or other server in just under a minute. Just remember to shut down, snapshot and destroy the server when you're finished if you want to pause the billing. And they have servers in Amsterdam / EU.

Of course, you could also use this to create your own private cloud service. Try it out!

Note: I get a commission for referrals if you click the link. But I wrote about this because I like it, not for the money. I may have written more than I would have otherwise, though. ;)

Two Factor Authentication for SSH with Ubuntu

Here is a great, very quick Ubuntu two factor authentication setup guide, which I've tested to work on Ubuntu 12.04 and 14.04. It sets up two factor authentication for SSH login. You can still log in via public key instead, which may be even safer. But it's a good way to prevent bad effects from someone simply stealing your password.
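
For reference, the gist of the guide is the libpam-google-authenticator package; a rough sketch of the usual steps (see the guide for details and pitfalls):

sudo apt-get install libpam-google-authenticator
# add this line to /etc/pam.d/sshd:
#   auth required pam_google_authenticator.so
# and make sure /etc/ssh/sshd_config contains:
#   ChallengeResponseAuthentication yes
sudo service ssh restart
# finally, generate your per-user secret and scan the QR code with the app:
google-authenticator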

Here's a link to the companion authenticator apps for different mobile platforms (Android, iPhone, ...).

Make sure you stay logged in with one session while setting this up on remote machines! If you make a mistake you won't be able to log in at all until you correct it.

Freetz and Knockd: Most Ports don't work

One important thing to note when using knockd with Freetz: only forwarded ports can be knocked, because only those packets arrive at the dsl interface. But of course you can add forwarding rules for the knock ports. For DSL users, "dsl" is the correct interface to listen on.

Very Short Knock Client Bash Script

All the clients I've seen were a bit too elaborate. So I wrote this one, which only needs four lines and could do with less:

#!/bin/bash
# usage: knock.sh <host> <port> [<port> ...]
target=$1; shift
echo "knocking $target on ports $*"
for port in "$@"; do
  # send one empty UDP packet per port, in order
  echo > "/dev/udp/$target/$port"
done
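
Usage is simply the target host followed by the port sequence, e.g. (assuming you saved the script as knock.sh):

./knock.sh myserver.example.com 7000 8000 9000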

Fixing "Valid eCryptfs headers not found in file header region or xattr region, inode"

This is a manual, one-step process: you need to remove the affected file, which can then be recreated.

e.g. Valid eCryptfs headers not found in file header region or xattr region, inode 123344

You need the inode number at the end of the message (123344 in this example). Then use this command to find out which file it corresponds to:

find ~/ -inum 123344

I'm assuming here that the ecryptfs is mounted in your home directory, as e.g. in Ubuntu.

Then this gives me some filename, e.g.
/home/user/.kde/share/config/session/konsole_10101dd02011

To check that this file really is the problem, try to read it:
cat /home/user/.kde/share/config/session/konsole_10101dd02011

You should get an input/output error now. If so, feel free to remove the file:

rm /home/user/.kde/share/config/session/konsole_10101dd02011
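
If you run into this more often, you can also combine the lookup and the removal into one command (rm -i asks for confirmation before deleting):

find ~/ -inum 123344 -exec rm -i {} \;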

Done. HTH!

Self Compiled Kernel Stops Booting in Ubuntu 14.04 [Fixed]

After switching from Ubuntu 12.04 to 14.04 I noticed that my self-compiled kernels no longer booted properly. There was always an error that a device (root / and swap) could not be mounted. I also could no longer mount removable devices such as USB sticks.

This is due to changes in the boot process in the new version of Ubuntu. The issue disappears if root is mounted read-write by default. Hence you should change root=/dev/... ro to root=/dev/... rw on the kernel command line. Actually, just booting once like this seems to have fixed my problem permanently, i.e. now I can boot with ro again without any issue.
Another possible reason could be a problem during fsck (fix).
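
For illustration, the relevant part of the kernel command line in GRUB (device and kernel names are just examples) changes like this:

before: linux /vmlinuz-3.13.0 root=/dev/sda1 ro quiet splash
after:  linux /vmlinuz-3.13.0 root=/dev/sda1 rw quiet splash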

Ubuntu now also requires kernel automounter support version 4; the config entry is CONFIG_AUTOFS4_FS=m. With that enabled, everything worked again.

Fix for blank screen or black display after resume with open source radeon drivers in Xorg

The bug was reported in detail here. The fix has several parts:

1. For a plain echo mem > /sys/power/state to work, use the kernel boot parameters nomodeset acpi_sleep=s3_bios,s3_mode
-> You have to test which of the s3_bios and s3_mode settings works for you. You can also write the values 1, 2 or 3 to /proc/sys/kernel/acpi_video_flags instead, to change the setting at run time.
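
For example, to select the s3_mode quirk (value 2; 1 is s3_bios, 3 combines both) at run time and then suspend:

echo 2 | sudo tee /proc/sys/kernel/acpi_video_flags
echo mem | sudo tee /sys/power/state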

2. For pm-suspend to work (e.g. *Ubuntu) you need to create a file /etc/pm/config.d/radeon which contains:

QUIRK_S3_BIOS="true"
QUIRK_S3_MODE="true"

3. To get working mode setting, load the radeon module with modeset=1 by creating a file /etc/modprobe.d/radeon.conf containing:

options radeon modeset=1

Converting OpenCard Files to Anki [manual approach]

It's a bit troublesome; a better solution would probably be to create an XSLT file as an export filter from Impress to CSV. (A good starting point for this would be the <draw:text-box> tag.) But with a bit of effort this works, too.

1. Open opencards file in impress.
2. Copy all cards into clipboard
3. Paste cards into Openoffice
4. Export file as Text file
5. Open Text file in Text editor
6. Find and remove all existing ";" characters (they would interfere with the field separator).
7. Find all question and answer fields and separate them by ";".
8. Import into Anki.

There also seems to be a way to do this by converting the slides to PowerPoint and then using this macro. I'm giving up for now. Any more hints would be appreciated, because I still have tons of cards! :)

Fix for "GLX error: Can not get required symbols" in Xorg log file / No 3D graphics with radeon Open source driver

If you get the error "(EE) GLX error: Can not get required symbols" when trying to use the open source "radeon" driver and you have AMD's fglrx installed, here is a very simple and fast fix. Most people will recommend completely uninstalling the fglrx driver, but there is a much simpler and faster way.

First, check if this is really the issue:
sudo /usr/lib/fglrx/switchlibglx query

If the command tells you that the AMD driver is currently in use, you have to switch to the "intel" driver in order to use the open source radeon driver. This is also very simple:

sudo /usr/lib/fglrx/switchlibglx intel

Now restart your X Server, or your computer if you wish, and you should be all set! And if you want to use fglrx again, remember to switch back with:
sudo /usr/lib/fglrx/switchlibglx amd

Enjoy! And let me know if this worked for you!

How to configure Darktable with OpenCL for your AMD APU

I was really curious to find a piece of working OpenCL software. Then I discovered that a program I love actually supports OpenCL! The only issue: it did not work by default on my system with an AMD APU, because the graphics memory assigned by my BIOS did not suffice. I can only dedicate a maximum of 512 MB to the GPU part of the APU. But this is probably also the minimum for a good setup.

Setting up the AMD OpenCL SDK is a requirement for this to work, and you have to enable OpenCL support in the Darktable configuration. Also be aware that this may crash your system, as the fglrx driver is not very stable with OpenCL yet. In my experience it especially likes to crash after a suspend to RAM.

Anyway, after all this, what you have to do is edit your darktable config file manually. Make sure to make a backup, then add these lines to .config/darktable/darktablerc:
opencl_memory_requirement=250
opencl_memory_headroom=150
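
If you want to script the backup and the change, a small sketch (assuming the two keys are not present in the file yet, and darktable is not running - it may rewrite the file on exit):

cp ~/.config/darktable/darktablerc ~/.config/darktable/darktablerc.bak
cat >> ~/.config/darktable/darktablerc <<'EOF'
opencl_memory_requirement=250
opencl_memory_headroom=150
EOF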

Issue a "sync" command before starting darktable to be sure you don't lose data with a possible crash! ;)

These numbers work pretty well for me on a system with 512 MB dedicated memory, and they should work more or less on any AMD APU. I have had bad experiences with both higher and lower values. Also, I think vaapi/xvba did not work at the same time. You can still enable and disable OpenCL on demand in the GUI settings menu.

Let there be OpenCL! Enjoy!

Darktable: Twice as fast with OpenCL!

In my setup (AMD A4), using OpenCL with darktable is about twice as fast as using the dual-core CPU.

It doesn't work out of the box, though - not even with AMD's OpenCL SDK installed - because my BIOS only allows me to assign up to 512 MB of VRAM to the GPU part of the APU. But then I thought, let's just try what happens, and set the required memory in darktable down to 250 MB. And it worked. And it worked great - I get a large speed increase:
OpenCL (zoom to 200%): [dev_process_image] pixel pipeline processing took 3,574 secs (0,460 CPU)
CPU (zoom to 200%): [dev_process_image] pixel pipeline processing took 6,442 secs (11,640 CPU)

And as you can see, the CPU is hardly used anymore, so it could do something else at the same time!

My setup:
Ubuntu 12.04
Darktable 1.4~rc1
A4-4355M

Conclusion: It appears you can set the requirements lower. A 2x speed increase for standard activities is really worth it! See the next post for how to set up Darktable for AMD APUs.

Using the Kernel NTFS driver with Ubuntu

To make life easier for the average user, Ubuntu installs a symlink at /sbin/mount.ntfs pointing to ntfs-3g, thus automatically mounting all NTFS drives with the ntfs-3g driver. Unfortunately this makes it impossible to use the classic in-kernel driver. But you can fix this by renaming the link:

sudo mv /sbin/mount.ntfs{,-backup}
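
With the link renamed, mounting an NTFS volume uses the classic in-kernel driver, which has only very limited write support, e.g. (device and mount point are examples):

sudo mount -t ntfs /dev/sdb1 /mnt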

To revert to the default setup, use this command:

sudo mv /sbin/mount.ntfs{-backup,}

Speeding up NTFS writes in Linux

Sometimes you want to do some serious operations on NTFS volumes in Linux - even if it's only virus scanning. This can be slow. It will probably still be slow with this trick, but hopefully a bit less:

sudo mount -o remount,big_writes,noatime /media/somedir
PS: You might also be able to speed up reads by using the kernel driver.

Ignore Package Dependency in dpkg

If you want to install a package which has a dependency you want to ignore - e.g. because it's wrong, outdated or unnecessary - you can use this command:

sudo dpkg -i --ignore-depends="nvidia-experimental-310:i386" Downloads/beat-hazard-ultra_1.66_20130308_i386.deb 

This example fixes installation for the Beat Hazard Ultra Debian package, which for some odd reason requires a special version of a certain nvidia driver.

Fixing a Broken Cursor with the AMD/ATI fglrx Driver: Steam, Braid, Wine: Option UseFastTLS 2

If you notice a broken cursor, which may mean the cursor
  • doesn't show up at all
  • is a black square
  • is a square mixing black with parts of another screen area or the screen background (Steam)
or you have display issues when running Wine, try this option in your xorg.conf:

Option "UseFastTLS" "2"

If this works for you in a situation not described above, please post a comment. Otherwise, e.g. if you don't know what xorg.conf is and where to find it, please use Google for further information.

Installing Windows 8 on USB - The easy way

There are several guides for installing Windows 8 on a USB disk which unpack the installation file install.wim - but that file is no longer available with Windows 8.1 and may be difficult to use. So I want to mention two rather easy ways I've found to work for me:

1. Install with a VM

You can set up VirtualBox to use a raw hard drive as one of its drives. If you attach that drive and a Windows 8 installation ISO, you will be able to install Windows very quickly. Usually a simple reboot is then enough to be able to use Windows.
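
To give VirtualBox raw access to the USB disk, create a raw-disk vmdk and attach it to the VM (replace /dev/sdX with your USB disk - double-check the device name, the VM gets full access to it):

VBoxManage internalcommands createrawvmdk -filename win8usb.vmdk -rawdisk /dev/sdX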

2. Simply copy the files of an existing installation. (caveat: Apps no longer worked for me, and it was very slow)

3. If booting does not work, manually set up the boot disk:
- Copy bcdboot from X:\windows\system32\bcdboot to your Windows drive, e.g. C:.
- Use it to make drive X bootable: C:\bcdboot X:\windows /s b:

Of course the target drive X should already be a primary partition formatted with NTFS, and the partition should be active.

Have fun!

Removing Crossover Deletes All Your Bottles

You have to be extremely careful when uninstalling Crossover, because it will generally remove all bottles of all your applications. This means installations such as Office or Picasa will silently and completely disappear.

There are only limited options:
a) Install/uninstall via your package manager, e.g. the Debian .deb package.
b) Save the installed applications by archiving or moving the .cxoffice directory from your home directory to another location or name.
c) Remove Crossover manually with: rm -rf /opt/cxoffice

Do NOT simply use /opt/cxoffice/uninstall, as it will remove all your custom installations without warning.

There really are no other options. I've contacted support and all they did was point me to this link.