Problems with running gpg-agent as root

February 14, 2011 in bug, fedora, howto, problem, security

This is gonna be a short post for people experiencing various issues with pinentry and gpg-agent. These mostly happen on systems with only gpgv2.

I have been asked to look at bug 676034 in Red Hat Enterprise Linux. There were actually two issues there:

  • Running pinentry with the DISPLAY variable set but no GUI pinentry helpers available
  • Using gpg on the console after doing “su -”

The first problem was relatively easy to figure out. Pinentry sees the DISPLAY variable and looks for the pinentry-gtk, pinentry-qt or pinentry-qt4 helpers to ask for the passphrase. Unfortunately, if none of these GUI helpers can be found, pinentry doesn’t try their console counterpart. The workaround is simple: unset the DISPLAY variable if you are working over an ssh connection (or don’t use X forwarding when you don’t need it). More recent pinentry features a proper failover to pinentry-curses.
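The selection logic that bites here can be sketched roughly like this. This is a simplified illustration of mine, not pinentry’s actual code; the helper names are the real binaries, but the function itself is hypothetical:

```python
import os

def pick_pinentry(available, env=os.environ):
    """Pick a pinentry helper with a sane fallback: prefer GUI
    helpers when DISPLAY is set, but fall back to the curses
    helper instead of giving up (old pinentry skipped this
    fallback, which is the bug described above)."""
    gui = ["pinentry-gtk", "pinentry-qt", "pinentry-qt4"]
    if env.get("DISPLAY"):
        for helper in gui:
            if helper in available:
                return helper
    # proper failover: no GUI helper found (or no DISPLAY) -> curses
    if "pinentry-curses" in available:
        return "pinentry-curses"
    return None
```

With only pinentry-curses installed and DISPLAY set, this sketch still returns the curses helper, while the old behaviour returned nothing at all.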

The second problem was a bit trickier to figure out, although in the end it was a facepalm situation. When trying to use GnuPG as root on the console, hoping for pinentry-curses to ask for the passphrase, users were instead greeted with this message: ERR 83886179 Operation cancelled. To make things more confusing, everything seemed to work when logging in as root directly over ssh.

At first I thought this must be caused by environment variables, but that turned out to be an incorrect assumption. The real reason was that the current tty was still owned by its original owner and not by root. This seemed to cause problems for gpg-agent and/or the ncurses pinentry. I will investigate which one was the real culprit, but the bug seems to be fixed at least in recent Fedoras.

So what should you do if you have weird problems with gpg and pinentry as root? Here’s what:


$ su -
[enter password]
# chown root `tty`
[use gpg, pinentry as you want]
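The underlying condition is also easy to check from a script. Here is a hedged sketch (my own helper, not anything gpg-agent ships): it compares the owner of a file, such as the controlling tty, with the effective UID, which is exactly the mismatch that appears after “su -”:

```python
import os

def owned_by_me(path):
    """True if the file at `path` (e.g. the controlling tty,
    os.ttyname(0)) is owned by the effective user. After `su -`
    the tty typically still belongs to the original login user,
    which is what confused gpg-agent/pinentry here."""
    return os.stat(path).st_uid == os.geteuid()
```

On a console you would call it as `owned_by_me(os.ttyname(0))`; a False result means the `chown root `tty`` step above is needed.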

Easy, right? As a final note… I’ve been to FOSDEM and I plan to blog about it, but I guess I am waiting for the videos to show up online. It’s quite possible I’ll blog about it before that, however, since it’s taking a while.

Local DNS caching – just do it already!

November 11, 2010 in en, network, open source, problem, software

I recently encountered weird problems with my network connection at home. Everything worked, but was unbelievably slooooow. Ping showed times of ~30 ms, but I could easily see it took more time for those packets to get there and back.

It took me some time to figure out what was happening. Looking back, checking the DNS servers should have been one of the first things to do. It turned out the first DNS server provided by my ISP was down. That meant every DNS query first timed out and only then went to the second DNS server, which got me my response. For some reason ping did a DNS query before every new packet being sent, which explains its weird behaviour.
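You can separate resolver latency from network latency with a few lines of Python. This is a diagnostic sketch of mine, not anything ping provides:

```python
import socket
import time

def resolve_time(host):
    """Time a single DNS lookup so the resolver delay can be
    seen separately from the ping round-trip time. Returns
    (seconds, list of resolved addresses)."""
    start = time.monotonic()
    info = socket.getaddrinfo(host, None)
    return time.monotonic() - start, [a[4][0] for a in info]
```

If the first nameserver is dead, this call takes a full resolver timeout (5 seconds by default) even though ping to a known IP reports a healthy ~30 ms RTT, which is exactly the mismatch described above.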

This problem got me to finally install a local caching DNS server. I had been thinking about doing it before, but I never got around to it until now. I always thought it was gonna be a few-hour nightmare. Now I blame my previous experience with bind 😀 For simple local caching bind would be overkill, so I chose dnsmasq. Using it was as simple as installing it, running dnsmasq and executing

$ echo 'nameserver 127.0.0.1' > /etc/resolv.conf.head

From that point on, every resolv.conf file generated by dhcpcd will have my local DNS as the first server to try. For the time being you can add the entry to /etc/resolv.conf manually. Then you can verify your setup works by running the following command twice in a row:

$ dig randomserver.com

The first execution should show Query time: XX msec, with XX being a few tens of milliseconds. The query time for the second run should be zero or very close to it.

Congratulations. You have your very own caching server. Who knows…maybe you’ll even notice some improvements in your network connection :-)

And he’s back! (from hibernating)

March 27, 2010 in bug, en, kernel, linux, open source, problem, software

What better way to celebrate summer solstice, than by making my computer able to hibernate? Since my last post a lot has happened with me. I got a new phone (HTC Hero FTW!), I finished university, went traveling a bit and I also got a new notebook (because the old one died on me). R.I.P. Thinkpad R51, welcome Thinkpad T500. There are several things I could start writing about now, starting with how great the Hero and Android are, all the way to today’s blog post: how to make my computer hibernate?

Linux has had support for hibernation for quite a few years now, and although it’s not perfect, it usually works out of the box. What it needs, however, is a swap device big enough to store an image of memory for hibernating. Here I hit a problem. When I got my new Thinkpad I thought “Hey, I have 4GB of RAM… why would I need swap?”. And even if I REALLY needed more than 4GB of RAM, I could still create temporary swap using a swapfile. Unfortunately I couldn’t make a swapfile on LVM work with TuxOnIce. TuxOnIce also has another alternative to swap or a swapfile for hibernating: the filewriter. Using the filewriter, which is quite similar to swapfile support, I managed to get it to work (after some work, kernel debugging and one small patch to TuxOnIce).

I set FilewriterLocation in hibernate.conf to point to the place where I wanted to store the hibernation file and set its size to 4GB. As instructed in the TuxOnIce HOWTO, I then ran

hibernate --no-suspend

to create this image. It created the file as expected, but when it was supposed to tell me the settings for the bootloader (the resume argument), it silently failed. When I tried again, the whole computer froze. I was puzzled. How could this happen? I am using Linux, so things like this don’t happen! But hey, I should be able to figure out what’s wrong, right? I built my kernel with netconsole and ran hibernate again. This time I caught where the bug happened. The output was something like this:

TuxOnIce: No image found.
BUG: unable to handle kernel paging request at 6539207a
IP: [] toi_attr_store+0x186/0x2a0
*pdpt = 0000000032732001 *pde = 0000000000000000
Oops: 0000 [#1] PREEMPT SMP
last sysfs file: /sys/power/tuxonice/file/target
Modules linked in: netconsole aes_i586 aes_generic radeon ttm drm_kms_helper drm
i2c_algo_bit sco bnep ipt_MASQUERADE iptable_nat nf_nat ipt_LOG nf_conntrack_ip
v4 nf_defrag_ipv4 xt_state nf_conntrack ipt_REJECT xt_tcpudp iptable_filter ip_t
ables x_tables rfcomm l2cap vboxnetadp vboxnetflt vboxdrv arc4 iwlagn iwlcore ma
c80211 sdhci_pci snd_hda_codec_conexant sdhci pcmcia e1000e uvcvideo mmc_core cf
g80211 snd_hda_intel yenta_socket btusb rsrc_nonstatic tpm_tis pcspkr pcmcia_cor
e videodev v4l1_compat intel_agp wmi agpgart tpm snd_hda_codec tpm_bios video fu
se xfs raid10 raid1 raid0 md_mod scsi_wait_scan sbp2 ohci1394 ieee1394 usbhid uh
ci_hcd usb_storage ehci_hcd usbcore sr_mod sg uvesafb cfbfillrect cfbimgblt cn c
fbcopyarea [last unloaded: microcode]

Pid: 12870, comm: hibernate Not tainted 2.6.33.1-w0rm #16 2082BRG/2082BRG
EIP: 0060:[] EFLAGS: 00010202 CPU: 0
EIP is at toi_attr_store+0x186/0x2a0
EAX: 00000000 EBX: 36203430 ECX: 00000000 EDX: f231f200
ESI: 65392066 EDI: 00f60062 EBP: f6006331 ESP: f62a7f14
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process hibernate (pid: 12870, ti=f62a6000 task=f20a0270 task.ti=f62a6000)
Stack:
00000000 fffffff4 00000001 c1790ca0 00000000 f6e8ab64 c16c75a4 f6d1c380
<0> f62a7f64 c114298d 00000015 00000015 b7709000 f21385c0 f6d1c394 c16c75a4
<0> f6ec7ac0 f21385c0 b7709000 00000015 f62a7f8c c10f207c f62a7f98 00000000
Call Trace:
[] ? sysfs_write_file+0x9d/0x100
[] ? vfs_write+0x9c/0x180
[] ? sysfs_write_file+0x0/0x100
[] ? sys_write+0x3d/0x70
[] ? sysenter_do_call+0x12/0x22
Code: c7 45 e0 00 00 00 00 3b 5d 08 0f 85 e9 fe ff ff 8b 46 20 85 c0 0f 84 de fe
ff ff ff d0 8b 7d e0 85 ff 8d 76 00 0f 84 d9 fe ff ff <8b> 46 14 31 d2 e8 60 03
05 00 8b 46 10 c7 46 14 00 00 00 00 a8
EIP: [] toi_attr_store+0x186/0x2a0 SS:ESP 0068:f62a7f14
CR2: 000000006539207a
---[ end trace 124a5ee29ef71277 ]---

So what can we deduce from this bug output? Let’s go from the top. The bug name (unable to handle kernel paging request) means it is likely a memory corruption issue: someone accessed memory they were not supposed to. IP tells us the function where the error occurred was toi_attr_store, in an unknown file at an unknown line (I don’t have debug information included in the kernel). There is other information we could get from that output, but I didn’t really need it. A quick search through the kernel sources told me that toi_attr_store is a function inside kernel/power/tuxonice_sysfs.c. I scanned the code, learning approximately what it did. Then I placed printk statements throughout the function so that I could approximate where inside it the code fails. After some time I narrowed it down to the following snippet:


	if (!result)
		result = count;

	/* Side effect routine? */
	if (result == count && sysfs_data->write_side_effect)
		sysfs_data->write_side_effect();

	/* Free temporary buffers */
	if (assigned_temp_buffer) {
		toi_free_page(31,
			(unsigned long) sysfs_data->data.string.variable);
		sysfs_data->data.string.variable = NULL;
	}

The kernel crashed when it tried to call toi_free_page. A few reboots and printks later, I found out that this was just a coincidence: the sysfs_data variable itself became corrupt even before the call to toi_free_page. A good candidate? Of course: write_side_effect. But what exactly was write_side_effect? The function was passed in as an argument, so I couldn’t easily see what code was really executed at this point. Time to find out! From my previous debugging attempts I knew the code failed while it tried to write the location of my resume file into /sys/power/tuxonice/file/target. TuxOnIce defined the handling of string sysfs arguments like this:


#define SYSFS_STRING(_name, _mode, _string, _max_len, _flags, _wse) { \
	.attr = {.name = _name , .mode = _mode }, \
	.type = TOI_SYSFS_DATA_STRING, \
	.flags = _flags, \
	.data = { .string = { .variable = _string, .max_length = _max_len } }, \
	.write_side_effect = _wse }

I found this macro used inside tuxonice_file.c source code like this:

 
SYSFS_STRING("target", SYSFS_RW, toi_file_target, 256,
	SYSFS_NEEDS_SM_FOR_WRITE, test_toi_file_target)

So we found our write_side_effect code: the test_toi_file_target function. In one part, this function calls hex_dump_to_buffer to convert the device UUID into a hexadecimal string. The call looked like this:

 
hex_dump_to_buffer(fs_info->uuid, 16, 32, 1, buf, 50, 0);

This should convert the input (fs_info->uuid) into a hexadecimal string and store it in buf. The author of the original code correctly anticipated that the function adds spaces between bytes and therefore needs extra space in the buffer (the argument 50 tells hex_dump_to_buffer how big the output buffer is). Unfortunately, that same author declared buf as a 33-char array. hex_dump_to_buffer therefore stepped outside the buffer and corrupted memory, causing all the problems. I fixed this bug and sent a patch to the tuxonice-devel mailing list. As of now, it is already in the git repository, ready to be released with the next bugfix release of TuxOnIce.
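The required size is easy to work out. This is my own arithmetic, mirroring the spacing hex_dump_to_buffer produces for a groupsize of 1: two hex digits per byte, a space between consecutive bytes, plus a terminating NUL:

```python
def hexdump_buflen(nbytes):
    """Buffer size needed for 'xx xx ... xx' output plus NUL:
    2 hex chars per byte, (nbytes - 1) separating spaces, 1 NUL."""
    return 2 * nbytes + (nbytes - 1) + 1

print(hexdump_buflen(16))   # 48 -> a 33-char buf overflows by 15 bytes
```

A 16-byte UUID therefore needs 48 characters of buffer, so the declared 33-char array was guaranteed to be overrun.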

That is everything for today, but as I already noted, I am using LVM on my system (except for the root partition) and also fbsplash for nice animations while rebooting. I am using an initrd for this, and I will have another post on that topic.

Mobile (not so) open standards

August 25, 2009 in en, linux, lock-in, mobile, problem, projects, rant

Yesterday I promised I’d talk about why I hate mobile phones. Of course I didn’t mean all of them, just the ones I have to deal with. Why? Well, my mobile phone kind of died a few days ago. I have a Nokia N73 and it’s really quite a good phone, even if it’s a bit old by today’s standards. You control the phone using a “joystick” kind of thing in the upper part of the keyboard. I decided to include an image so you don’t have to look for it :-)

So, this joystick stopped working (even the slightest touch registered as a push, making it unusable). I didn’t have my backup phone with me, but a friend gave me her battered Siemens S55. So what was the problem? Well, I have had the same SIM card for almost 10 years now. Back then, only 100 contacts would fit on it, and I have almost 300 contacts in my N73. So how do I get all the contacts from one phone to the other? Normally I would just send them over bluetooth, but since I couldn’t really control my N73, that was out of the question. I was barely able to turn bluetooth on. I thought I’d use the SyncML interface to get vCards from the N73 to my computer and then sync them back to the S55. In the end I kind of did, but boy, was that an unpleasant experience!

So what exactly happened? I installed the OpenSync libraries and tools and, using multisyncgui, created a sync group with one side being the file-sync plugin and the other the syncml-obex-client plugin. Configuration of the file-sync plugin was mostly just changing the path to the directory I wanted to sync into. The final version looked like this:





<config>
  <path>/tmp/sync</path>
  <objtype>contact</objtype>
  <objformat>vcard30</objformat>
</config>


Configuration of syncml-obex-client turned out to be much more challenging. It appears that the Nokia N73 has two quirks:

  • It only talks to a SyncML client that says its name is “PC Suite”
  • It contains a bug that causes it to freeze after a certain amount of data if the configuration is not right

The first of these quirks is mentioned in almost every tutorial on data synchronization in Linux. The second one, however, cost me quite some time. My Nokia N73 would freeze after synchronizing approximately 220-240 contacts, and to continue working I had to restart the whole phone. In the end I found out that I needed to set the recvLimit parameter to 10000 in order to synchronize everything. The final settings for syncml-obex-client look like this:




<config>
  <type>2</type>
  <bluetooth_address>00:1B:33:3A:D1:37</bluetooth_address>
  <bluetooth_channel>13</bluetooth_channel>
  <interface>0</interface>
  <identifier>PC Suite</identifier>
  <version>1</version>
  <wbxml>1</wbxml>
  <username></username>
  <password></password>
  <usestringtable>1</usestringtable>
  <onlyreplace>0</onlyreplace>
  <maxObjSize>0</maxObjSize>
  <recvLimit>10000</recvLimit>
  <maxGetSize>0</maxGetSize>
  <database>
    <name>Contacts</name>
    <objtype>contact</objtype>
    <objformat>vcard21</objformat>
  </database>
</config>


So after all that, I was able to get the vCards from my N73 to my notebook; OpenSync created a file in /tmp/sync for every vCard. Now came the interesting part: how to get these vCards onto the Siemens S55?

A simple Google search for Siemens S55 synchronization on Linux suggested that the tool best suited for the job was scmxx, a little app that specializes in certain Siemens phones. According to some manuals it was supposed to be able to upload the vCards themselves, but I couldn’t get that to work; scmxx kept complaining about invalid command line arguments. After some testing I found out that it could access and change the SIM card phone numbers.

Unfortunately for me, my SIM card has a limit of 100 phone numbers, each with a 14-character identifier (name). This meant I needed to convert the vCards from the N73 into the special format scmxx uses. That format looks something like this:


1,"09116532168","Jones Rob"
2,"09223344567","Moore John"
...

The first column is the number of the slot that will be overwritten with the new information, the second column is the phone number, and the third the contact name (at most 14 characters).

So I fired up vim and started coding a conversion script. It didn’t take long and I had my contacts in the old-new phone. There are a lot of hard-coded things in the script, since I don’t plan to ever use it again, but you can download it from my dropbox. Consider it public domain, and if anyone asks, I didn’t have anything to do with it :-)


import os
import re

MAX_CONTACTS = 100

class PbEntry(object):
    """One phonebook entry extracted from a vCard."""

    def __init__(self, name, tel, year, month, day):
        self.name = name
        self.tel = tel
        self.year = year
        self.month = month
        self.day = day

def cmp_pb(e1, e2):
    # Sort by REV date (newest first) so the most recently
    # modified contacts survive the 100-slot cut-off.
    if e1.year > e2.year:
        return -1
    elif e1.year < e2.year:
        return 1
    else:
        if e1.month > e2.month:
            return -1
        elif e1.month < e2.month:
            return 1
        return 0


telRe = re.compile(r'TEL(;TYPE=\w+)*:([*#+0-9]+)', re.M)
revRe = re.compile(r'REV:(\d{4})(\d{2})(\d{2}).*', re.M)
nameRe = re.compile(r'^N:(.*);(.*);;;', re.M)

def get_entry_from_text(text):
    ret = nameRe.search(text)
    surname = None
    name = None
    tel = None
    if ret:
        surname = ret.group(1)
        name = ret.group(2)

    ret = telRe.search(text)
    if ret:
        tel = ret.group(len(ret.groups()))

    if surname and name:
        fn = "%s %s" % (surname, name)
    elif surname:
        fn = surname
    else:
        fn = name

    if fn:
        # SIM identifiers may be at most 14 characters long
        ret = re.search('(.{0,14}).*', fn)
        fn = ret.group(1)

    ret = revRe.search(text)
    year = ret.group(1)
    month = ret.group(2)
    day = ret.group(3)

    return PbEntry(fn, tel, year, month, day)


entries = []

files = os.listdir('/tmp/sync')
for file in files:
    fh = open('/tmp/sync/%s' % file, 'r')
    content = fh.read()
    entry = get_entry_from_text(content)
    entries.append(entry)

entries = sorted(entries, cmp=cmp_pb)

i = 1
for entry in entries:
    print '%d,"%s","%s"' % (i, entry.tel, entry.name)
    i = i + 1
    if i > MAX_CONTACTS:
        break

I have had my share of incompatibilities between mobile phones, computers and other devices. Fortunately, most devices sold today use open communication protocols for sharing data (and other stuff). Too bad people had to put so much energy into reverse engineering proprietary solutions in the past. A rant about this kind of vendor lock-in could fill quite a few pages. Imagine having 300+ contacts and calendar information in your phone of brand X. When buying a new phone, you would be able to synchronize your data only if the new phone was also from brand X. Would that affect your decision? It sure would affect mine.

Now I have a choice. After fixing my old N73 I will start looking into new phone. So far HTC Hero looks pretty cool and reviews are not half bad.

Mount me, but be careful please!

June 30, 2009 in en, gsoc, howto, linux, open source, problem, projects, security, software

First, a bold note: I already have a repository on Gentoo infrastructure for working on my GSoC project. Check it out if you want.

Last time I mentioned I won’t go into technical details of my GSoC project on this blog any more. For that you can keep an eye on my project on gentooexperimental and/or the Gentoo mailing lists, namely gentoo-qa and gentoo-soc. But there is one interesting thing I found out while working on Collagen.

One part of my project was automating the creation of a chroot environment for compiling packages. For this I created a simple shell script that you can see in my repository. I will pick one line out of a previous version of this script:

mount -o bind,ro "$DIR1" "$DIR2"

What does this line do? Or more specifically, what should it do? It should create a virtual copy of the contents of directory DIR1 inside directory DIR2. The copy in DIR2 should be read-only; that means no creating new files, no changing of files and so on. The command succeeds, so as far as we know everything should work, right? Wrong!

The command above actually fails silently. There is a bug in current Linux kernels (2.6.30 as of this day): when you execute mount with “-o bind,ro”, the “ro” part is silently ignored. Unfortunately it is added to /etc/mtab even though it was ignored, so you would not see that DIR2 is writable unless you tried writing to it yourself. The current proper way to create a read-only bind mount is therefore this:

mount -o bind "$DIR1" "$DIR2"
mount -o remount,ro "$DIR2"

There is an issue of race conditions with this approach, but in most situations that should not be a problem. You can find more information about read-only bind mounts in the LWN article on the topic.
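Since /etc/mtab can lie here, the trustworthy place to check is /proc/mounts, which reflects what the kernel actually did. A small sketch of mine for inspecting the options field:

```python
def mount_is_ro(procmounts_text, mountpoint):
    """Scan /proc/mounts-style text (device mountpoint fstype options ...)
    and report whether the given mountpoint carries the 'ro' option.
    /proc/mounts shows the kernel's real flags, unlike /etc/mtab,
    which records what mount *asked* for."""
    for line in procmounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mountpoint:
            return "ro" in fields[3].split(",")
    return None   # mountpoint not found
```

In practice you would call `mount_is_ro(open('/proc/mounts').read(), '/your/bind/mount')` after the remount step to confirm the read-only flag really took.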

HDD failure imminent

March 2, 2009 in en, linux, problem, windows

I suppose people who have worked with computers for a few years have seen a similar message at least once. Unfortunately it’s quite common for hard drives to fail. There is an early warning system that can predict a lot of these misfortunes. It’s called S.M.A.R.T. and it is in fact quite smart :-) A lot of HDDs come with this monitoring disabled for reasons unknown to me. Maybe it’s performance, maybe manufacturers don’t want users to know their HDDs fail. Aaah… conspiracy theories :)

Enough of being smart though (pun intended). Recently my parents’ over-5-year-old computer refused to boot when one of its HDDs (a 320GB WD Caviar) was connected. No matter what I did, Windows wouldn’t boot with that HDD attached. The HDD was (still is, actually) under warranty, but I really wanted to save the data. The most important files were backed up elsewhere, but my music collection and some movies waiting to be seen were not. I’ll skip the boring stuff. Since the computer had other problems, my parents decided to buy a new one. With the 320GB WD Caviar connected, even Vista would not boot (the old computer ran XP).

I made one final attempt to save the data: I booted an Ubuntu live CD. To my big surprise, Ubuntu did not just “see” the hard drive, it mounted it without problems. It didn’t even complain. I backed up the hard drive, did a low-level format (e.g. dd if=/dev/zero of=/dev/sda bs=1M) and suddenly Windows was able to boot without problems. I remembered one other problematic 80GB Seagate HDD, and the outcome was the same: Windows was able to see it after a low-level format. These were not system HDDs, so even if the MBR was corrupted it shouldn’t have mattered. I couldn’t find anything conclusive on the Internet about this type of HDD “failure”, so any info is welcome. S.M.A.R.T. is not complaining, so it seems I now have 2 good HDDs on my hands. Linux saves the day! :-)

2B Free || ! 2B Free

January 16, 2009 in en, linux, problem

Recently the ext4 filesystem was marked stable with the release of Linux 2.6.28. Since I like bleeding edge from time to time and back up my files regularly anyway, I decided to give it a spin. As far as performance is concerned I have nothing to report yet, since I haven’t been using it that long. But as usual, I found a certain annoyance :-)

I was going through my filesystems and converting them one by one (after doing one more backup). When it came to /var, I hit a wall though. df showed there was free space (more than 400MB), but tar told me there was not enough space on the filesystem to create a directory (ENOSPC). So what was it? After looking around I finally found the problem. Since /var is only 1GB on my computer, mkfs.ext4 decided I would never use more than ~65000 inodes. The problem is that I have a lot of small files on that filesystem: ebuilds, git and svn repositories and the standard /var stuff. Together this meant I hit the 65000 mark quite easily without filling up the filesystem.

The solution was obvious from this point on: recreate the /var filesystem while manually overriding mkfs.ext4’s choice of maximum inode count. Voila, ext4 has been working well ever since.
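You can watch for this condition before it bites with `df -i`, or from Python via os.statvfs. A small helper of my own:

```python
import os

def inode_usage(path):
    """Return (used, total) inode counts for the filesystem
    holding `path`. Running out of inodes yields ENOSPC even
    when df shows free blocks, exactly the /var situation above."""
    st = os.statvfs(path)
    return st.f_files - st.f_ffree, st.f_files
```

When used approaches total while blocks are still free, you have hit the inode wall; recreating the filesystem with a larger inode count (mkfs.ext4’s -N option) is the fix described above.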

Kflickr hidden bugs and developer unfriendliness

January 13, 2009 in en, linux, open source, problem, projects, software

First of all… all hail our new overlord. And by overlord I mean the year 2009. I hope you all have a great time. I know I will :-). I didn’t write for some time because I was travelling, and then I was celebrating the holidays with my family and friends. All in all I didn’t have much time to keep my information up to date, not to mention doing anything resembling work. That’s changing NOW!

I recently bought a new camera (a lovely Nikon D90) and also decided I need to back up my photos to more than 2 places. I realized you can never have enough backups after a few failed HDDs. So what were the options I was considering?

  • Google’s Picasa: $20/year for 10GB of storage space
  • Flickr: $25/year for unlimited storage and better sharing/privacy settings, presentation options etc.

I didn’t consider other services because…well because I didn’t.

Now the issue was: how to upload all of my photos (several gigabytes)? Flickr has a client for Windows/MacOS, but not for Linux (the original client appears to work through wine though). Kflickr to the rescue! I started uploading photos in no time. But I wouldn’t be writing this blog entry if everything had gone according to plan, now would I?

Everything seemed to work; the photos were on the web. I could see them, organize them, tag them… you name it. Then I wanted to download the original file of a certain photo (for a reason I don’t remember). How great was my surprise when the file was <1MB in size, while the originals I had were ~3MB. Something rotten in here. The files were obviously recompressed with lower JPEG quality settings before being uploaded. Not all of them though; it seemed to have something to do with the license I used for the files. The power is in the source, Luke, so there I was. I wanted to investigate the problem and maybe fix it. Unfortunately, opening the Kflickr project files in KDevelop and trying to debug didn’t work. For some reason gdb was ignoring my breakpoints as if the application had been compiled without debugging information, even though it was compiled with -g3 (all debugging info). So far I have been unable to properly diagnose the original bug, but I wrote to the author of Kflickr asking for information. Now let’s wait.

Xorg evdev madness

November 14, 2008 in en, linux, open source, problem, software

It is really astonishing how easy it is to find topics for blogging when one looks around :)

I recently upgraded my Xorg installation to the latest ~x86 version. For Gentoo virgins, this means an unstable version, although it is usually considered stable upstream; just the integration with other apps can sometimes be problematic. The stable version was really old and had problems with recent kernel versions. I was very happy with the upgrade, which made my 5-year-old Thinkpad more alive than ever. I decided to recreate my xorg.conf, because most of the stuff in it was not needed anyway now that XRandR 1.2 is used.

So what is my problem then? Well, after the upgrade some features of my touchpad stopped working (most notably circular scrolling) and I could not switch between different keyboard layouts. The first thing I did was of course look at Xorg.0.log. The important part follows:


(II) XINPUT: Adding extended input device "AT Translated Set 2 keyboard" (type: KEYBOARD)
(**) Option "xkb_rules" "base"
(**) AT Translated Set 2 keyboard: xkb_rules: "base"
(**) Option "xkb_model" "evdev"
(**) AT Translated Set 2 keyboard: xkb_model: "evdev"
(**) Option "xkb_layout" "us"
(**) AT Translated Set 2 keyboard: xkb_layout: "us"
(II) config/hal: Adding input device ThinkPad Extra Buttons
(**) ThinkPad Extra Buttons: always reports core events
(**) ThinkPad Extra Buttons: Device: "/dev/input/event3"
(II) ThinkPad Extra Buttons: Found keys
(II) ThinkPad Extra Buttons: Configuring as keyboard
(II) XINPUT: Adding extended input device "ThinkPad Extra Buttons" (type: KEYBOARD)
(**) Option "xkb_rules" "base"
(**) ThinkPad Extra Buttons: xkb_rules: "base"
(**) Option "xkb_model" "evdev"
(**) ThinkPad Extra Buttons: xkb_model: "evdev"
(**) Option "xkb_layout" "us"
(**) ThinkPad Extra Buttons: xkb_layout: "us"

As it happens, evdev found additional “keyboards” and IGNORED my layout settings for the real keyboard. I found a few forum posts dealing with the same problem on Gentoo and Arch Linux. I will not go into details; if you really want to know all the crazy solutions people found, read the forums. But the easiest solution? Uninstall the evdev driver for now if you don’t need it (you probably don’t). A similar effect could probably be achieved by adding Option "AutoAddDevices" "boolean" to the ServerFlags section of xorg.conf; however, I didn’t try this approach.