How to create a private channel on Freenode

March 25, 2014 in configuration, fedora, howto, IRC

As it happens, I sometimes need to create a private IRC channel for a prolonged time where nobody but selected people should be able to join. This is a step-by-step guide to doing it on Freenode. First, a couple of prerequisites:


  • All participants must have registered usernames
  • You must be a channel operator (OP) and the channel must NOT be registered (otherwise contact one of the channel founders or pick a different channel)

Step by step process

  1. /msg chanserv register <channel>
  2. /msg chanserv set <channel> guard on (Chanserv will join your channel)
  3. /mode <channel> +i (set channel to invite only)
  4. /msg chanserv access <channel> add <nick> (for each person that should be able to join)
  5. /mode <channel> +I <nick> (for each person that should be able to join)
  6. Prosper!
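
Put together, the whole sequence for a hypothetical channel #myteam with participants alice and bob (names are made up purely for illustration) would look like this:

/msg chanserv register #myteam
/msg chanserv set #myteam guard on
/mode #myteam +i
/msg chanserv access #myteam add alice
/mode #myteam +I alice
/msg chanserv access #myteam add bob
/mode #myteam +I bob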

Modifying other people’s access

Add a person as co-founder (full permissions):

  • /msg chanserv access <channel> add <nick> FOUNDER

Automatically give OP status to a person after joining:

  • /msg chanserv flags <channel> <nick> +O

Allow people to invite others:

  • /msg chanserv flags <channel> <nick> +i

For full information try /msg chanserv help, /msg chanserv help flags and /help mode.

fedwatch – running scripts based on fedmsg messages

March 21, 2014 in automation, fedmsg, fedora, projects

The Fedora fedmsg infrastructure can give you a lot of interesting information nearly in real time. For example:

  • Koji build starts/finishes/fails
  • FAS account creation and membership changes
  • Changes in Fedora packages git repositories
  • Changes of package ownership in pkgdb
  • Changes to bodhi updates

The full list of topics is of course online, including the data contained in those messages.

It doesn’t take a genius to realize some of these messages can be used to do interesting stuff. For example, the Java SIG uses buildsys.repo.done to generate the minimal installation size of several packages each time the rawhide buildroot is regenerated. That way we immediately see if some new dependency creeps in.

To make all of this simpler I created a separate library and utility that monitors specified topics on the fedmsg bus and then runs scripts in a configured directory with data from the fedmsg message passed as arguments. Introducing: fedwatch.


I am planning to package fedwatch for Fedora and EPEL. In the meantime feel free to use my Copr repo.

If you have an up-to-date dnf-plugins-core you can just run dnf copr enable sochotni/fedwatch fedora-<ver>-<arch>

If you are not using dnf you can download the appropriate repository for your distribution manually (F20, rawhide and EPEL 6 are supported) and then just run yum install fedwatch.


# fedwatch --help
usage: fedwatch [-h] [--config-file CONFIG_FILE] [--script-dir SCRIPT_DIR]
                [--debug]

Run tasks on fedmsg changes

optional arguments:
  -h, --help            show this help message and exit
  --config-file CONFIG_FILE
                        Configuration file for topic selection and data
                        mapping (default: /etc/fedwatch.conf)
  --script-dir SCRIPT_DIR
                        Directory with scripts to run for fedmsg messages
                        (default: /etc/fedwatch.d)
  --debug               Run with debug output (default: False)

The configuration file format is that of a standard ini file (ConfigParser format for Pythonistas).
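
As an example, a configuration for the git.receive topic might look something like this (the section name and keys here are illustrative; the values are the data paths passed to the scripts, in order):

[org.fedoraproject.prod.git.receive]
arg1=msg/commit/username
arg2=msg/commit/repo
arg3=msg/commit/branch
arg4=msg/commit/rev
arg5=msg/commit/summary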


When fedwatch receives a git.receive message it will convert the fedmsg message data into arguments for running scripts based on the above configuration. Keys in the configuration file are ignored and values are evaluated as XPath-style expressions inside the message data. So msg/commit/username means message['msg']['commit']['username'].
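
The lookup itself is just a walk over nested dictionaries; a tiny illustration in Python (not fedwatch’s actual code):

def resolve(message, path):
    # follow a slash-separated path through the parsed fedmsg message
    value = message
    for key in path.split('/'):
        value = value[key]
    return value

# resolve(message, 'msg/commit/username') returns 'mjw'
# for the example commit message shown below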

Taking an example fedmsg commit message from fedmsg documentation:

{ 'i': 1,
  'msg': { 'commit': { 'branch': 'master',
                       'email': '',
                       'message': 'Clear CFLAGS CXXFLAGS LDFLAGS.\n\n-This is a bit of a hammer.',
                       'name': 'Mark Wielaard',
                       'repo': 'valgrind',
                       'rev': '7a98f80d9b61ce167e4ef8129c81ed9284ecf4e1',
                       'stats': { 'files': { 'valgrind.spec': { 'deletions': 2,
                                                                'insertions': 1,
                                                                'lines': 3}},
                                  'total': { 'deletions': 2,
                                             'files': 1,
                                             'insertions': 1,
                                             'lines': 3}},
                       'summary': 'Clear CFLAGS CXXFLAGS LDFLAGS.',
                       'username': 'mjw'}},
  'timestamp': 1344350850.886738,
  'topic': ''}

For the above configuration file and this specific message, fedwatch would execute scripts in /etc/fedwatch.d/ with the following arguments:

<script> mjw valgrind master \
       7a98f80d9b61ce167e4ef8129c81ed9284ecf4e1 'Clear CFLAGS CXXFLAGS LDFLAGS.'

An example script to handle the above message could be /etc/fedwatch.d/


if [[ "$1" != "" ]];then
    exit 0


# do stuff

I hope the configuration file is easy to understand. The package contains systemd and SysV init integration so you can run it as a system-wide service. In that case I suggest you change the default (empty) configuration in /etc/fedwatch.conf and add some handling scripts into /etc/fedwatch.d/.
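
On a systemd machine that boils down to something like the following (assuming the packaged unit is named fedwatch.service):

# systemctl enable fedwatch.service
# systemctl start fedwatch.service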

And of course if you find some interesting use cases for fedwatch, let me know!

Belated FOSDEM 2013 trip report

February 12, 2013 in conference, fedora, fosdem, packaging, report

I’ve been to FOSDEM this year (again) and as always it’s been an interesting experience. You meet all kinds of people, you can create new contacts and realize different projects can share some code or at least approaches. This is just going to be a rundown of talks that I’ve seen (or most of them anyway).

(R)evolution of Java Packaging in GNU/Linux

I’ll start with the talk Mikolaj Izdebski and I gave in the Java devroom, titled (R)evolution of Java Packaging in GNU/Linux, since it resulted in some potentially interesting cooperation. The talk went reasonably well without any hiccups. We could have used more time for discussion and a generic talk about Java build systems, but oh well.

After the talk Charles Oliver Nutter (headius @ JRuby/JVM projects) mentioned that Sonatype is already running a conversion of all jar files to rubygems on their infrastructure. His idea (which seems pretty neat) was to do the same for RPMs. He got us in touch with Jason van Zyl and it seems that if we create a conversion tool we might get it to run on their infrastructure. This actually ties in nicely with some work we are doing to improve life for Java developers on Fedora, because they’d be able to install an unlimited number of different Maven artifacts that tie in directly with Yum/RPM, and companies could limit access to the Maven Central repository and just allow these RPMs.

The state of OpenJDK & OpenJDK7u & OpenJDK Q&A

These were talks by Oracle guys giving updates on where the JVM is going and how the schedule looks. Nothing too interesting or worrying here, but it seems that Oracle is really trying to put more people to work on the JVM, plus there’s community participation as well. Mark Reinhold’s prediction was that the OpenJDK board will stay the same after the re-elections in the next few months.

Interestingly Q&A at the end of Saturday was waaay more calm than previous years. Maybe that has to do something with community around OpenJDK maturing? :-)

Porting OpenJDK to AArch64

One of the more technical talks, about the intricacies of simulating hardware that (almost) doesn’t exist. Our Andrews (Andrew Haley and Andrew Dinn) had some hard drive issues but they handled them pretty gracefully. The trick with jumping between AArch64 and x86 code outside of the simulator was quite interesting. Also interesting were the notes on register usage by the JVM and how the number of registers helps or hinders implementations.

Community Management in Meat Space

A talk by our very own Leslie Hawthorn and Lydia Pintscher. Basically it was a quick tutorial on dealing with decision making and problems within communities. A sort of small prelude to Leslie’s keynote. It just had more duct tape :-0

Gentoo Hardened

This talk was given by one of the more controversial people (one of the co-developers of eudev, which apparently got a lot of flaming). Francisco Riera used a volunteer to show various features of hardened kernels (not necessarily SELinux-style) in Gentoo, and how and why they operate in certain ways. Each change was accompanied by quite nice examples of how certain things could be misused on non-hardened kernels. I missed the last few minutes due to another talk.


Samba4

Jeremy Allison talked about the history and splitting off of Samba4 from the Samba3 codebase. Apparently there was almost a fork due to the “old” network filesystem guys and the new “let’s create an AD” guys. The code still has samba3 & samba4 source directories but they are merging/cleaning them up. They have a lot of code bundled/developed in Samba4 to ease integration and configuration. Kerberos, LDAP… the list goes on. Makes me wonder if all of this was *really* necessary. I guess that’s why Jeremy added this bit to one of the slides:

We stopped checking for monsters under our beds when we realized they were inside us.

systemd, two years later

Surprisingly, no flamewars. During the talk I (perhaps) understood why Lennart is facing so many of them though. I believe he doesn’t clearly state that “Yes, we reimplemented part of X in systemd so that it’s more reliable. BUT! You can still install that old thing, we won’t break it and you can still configure it”. The prime example for me being acpid, which Lennart replaced just to get the power button working. That’s probably fine for 99% of use cases, but on a lot of notebooks acpid is the primary way to handle interesting buttons that X knows nothing about (and perhaps we need them to work outside of X).

FreedomBox 1.0

FreedomBox is a project by Eben Moglen and Bdale Garbee, now mostly a software solution running on top of Debian systems to enable private and secure communication without dependence on governments. And one important feature: it has to be usable by ordinary people. There were already 2 keynotes about it in previous years. Now it’s slowly coming to 1.0. There was an interesting point about replacing the CA infrastructure in webservers with p2p/gnupg trust principles and the development of an apache module that would be able to authenticate these.

LibreOffice: cleaning and refactoring a giant code-base

I really like where LO is going. They’ve been mostly doing huge cleanups, getting rid of old cruft, German comments etc. The Java dependency is going away. They have a healthy community and good code review/unit test processes. All the right pieces. I can’t wait for when they really start adding new things.

Has GNOME community turned crazy?

A talk about various controversial features developed by Gnome in the past few years. Mostly Gnome Shell. Vincent Untz made a good point: Gnome 2.0 was nothing like Gnome 2.3x. However, when he asked the audience if they thought Gnome 3.0 was ready for release, even people who like Gnome Shell said no. There were a lot of Gnome people in the audience who were co-answering questions.

The Keeper of Secrets

Our Leslie Hawthorn had the closing keynote, which in my mind was a continuation of her previous talk. It dealt with handling community participants when they have problems they would like to keep secret while still allowing the community to handle their absence. The most valuable part for me was probably the resources & references to other literature. Because let’s face it… you can’t really give a silver-bullet generic answer, because each situation is different. I guess it could have been articulated more clearly, because some people in the audience never realized this, in my opinion.

Now I can start looking forward to DevConf already.

Moving My Blog to WordPress on OpenShift

February 6, 2013 in dns, fedora, google apps, howto, openshift, wordpress

As you might have noticed I moved my blog from Blogger to a WordPress instance on OpenShift, and then set up a new DNS record for my domain to point to this OpenShift WordPress. OpenShift has pretty nifty documentation on how to do it. What might not be obvious is that if you are using Google Apps for Domains, you can easily have a subdomain point to the OpenShift WordPress application (or any other OpenShift app for that matter).

To set this up you can follow these steps:

  1. Go to your Google Apps dashboard 
  2. Go to Domain Settings -> Domain Names
  3. Click “Advanced DNS settings”
  4. Note the login details for GoDaddy (mostly Sign-in name and password)
  5. Click “Sign in to DNS console” and login with previously received information
  6. You should be in main DNS panel, click on <yourdomain>
  7. Look for “DNS Manager” and click “Launch” below a list of your current records
  8. There are 3 parts: A records, CNAME (alias) records and MX records
  9. You need to add CNAME so click “Quick add” in CNAME section
  10. The first field is the subdomain name you want to use. For me that was “blog”
  11. The second field is your application’s domain name (i.e. <yourapp>-<yournamespace>.rhcloud.com)
  12. Click “Save zone file” in upper right corner
  13. Now you need to teach rhcloud about this new alias as well so run
    rhc alias add -a <yourapp> --alias blog.<yourdomain>.com
  14. You should be able to see WordPress instance at blog.<yourdomain>.com soon enough
  15. Links within WordPress will be pointing to rhcloud still so let’s fix that…
  16. Log in to WordPress (if you do it through your domain you will get SSL certificate errors; this is expected)
  17. Go to Settings -> General
  18. Set “WordPress Address (URL)” and “Site Address (URL)” to http://blog.<yourdomain>.com
  19. You should be done

Gentoo Boot Optimization

September 13, 2012 in boot, gentoo, optimization, speed


There is something Zen about boot time optimization. Let’s face it, most of us don’t reboot our Linux machines all that often. Yet shaving off a second or two from the boot process gives me a certain type of satisfaction.

A recent post on Google+ by Lukáš Zapletal made me try out bootchart for the first time. The original post was about e4rat – a tool for defragmenting ext4 partitions to optimize them for boot speed when using traditional rotational media. I decided to have a look at my bootcharts and see what could be done on my SSD-based Gentoo system.

Bootchart Installation

I am not going to go into details. Just install it from your distribution’s repositories. Gentoo contains app-benchmarks/bootchart2 in the base portage tree, and the Gentoo wiki has all the instructions you’ll need.

First run

After I installed bootchart, my initial result was 11 seconds from init to the X server running and showing a password prompt. Let’s analyze the bootchart (200kB png) a little bit. It looks like slim was waiting for something to finish. Not visible in the image, but that is actually the net.eth0 script. In other words: network configuration, DHCP. Since I have a stable IP address in my local network, I decided to stop using DHCP. For a simple Gentoo system this can be achieved by editing /etc/conf.d/net (configuring IPv4 and IPv6 statically):

$ cat /etc/conf.d/net
config_eth0="A.B.C.D/24 2001:470:413b:0:2e2:d4ff:ff8d:ccd1/64"
routes_eth0="default via A.B.C.1"

So how are we faring after making our IP static? 5 seconds! At this point I’d be willing to say mission accomplished, but something told me there’s more to do…

Let the fun begin

Looking at the second bootchart tells us one thing: xdm/slim is waiting for my ntfs partition to get mounted. We should probably avoid that!

What I decided to try was installing autofs and just mounting my /shared-data partition when it’s actually accessed. To my dismay, the resulting bootchart showed that the boot got even slower! (5.6 seconds). Time for the big guns baby!


The problem with XDM seems to be that it is waiting for something. Let’s have a look at the relevant /etc/init.d/xdm snippet:

depend() {
        need localmount xdm-setup

        # this should start as early as possible
        # we can't do 'before *' as that breaks it
        # (#139824) Start after ypbind and autofs for network authentication
        # (#145219 #180163) Could use lirc mouse as input device
        # (#70689 comment #92) Start after consolefont to avoid display corruption
        # (#291269) Start after quota, since some dm need readable home
        # (#390609) gdm-3 will fail when dbus is not running
        # (#366753) starting keymaps after X causes problems
        after bootmisc consolefont modules netmount
        after readahead-list ypbind autofs openvpn gpm lircmd
        after quota keymaps
        before alsasound

        # Start before X
        use consolekit dbus xfs
}

XDM seems to have a lot of dependencies. That is understandable, because distributions will always prefer correctness over speed (hopefully). We are running a simple desktop: no network authentication, no heavyweight display manager or desktop environment like KDE or GNOME. So what happens if we remove netmount and autofs from the requirements of xdm? After all, we don’t need them to start slim (see the edited snippet below). The final bootchart is much more interesting. Roughly 3.5-4 seconds from init to X!
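
For reference, the change boils down to dropping those two services from the after lines in /etc/init.d/xdm; a sketch of the edited lines (the rest of depend() stays untouched):

        # trimmed dependencies for a simple single-user desktop
        after bootmisc consolefont modules
        after readahead-list ypbind openvpn gpm lircmd
        after quota keymaps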

Quo Vadis

I finished with that 3.5-4 second boot. But as the final bootchart shows, there’s still room for improvement. A list of things that could probably be looked into:

  • blkid takes too long. Perhaps we could avoid it completely?
  • e1000 (network card) and i915 (graphics card) take quite a while to initialize. Perhaps having e1000e as module and loading it later during boot would be faster
  • Not using LVM would speed things up, but I like its advantages
  • Avoiding udev could be useful as well for static system where no USB devices are to be attached dynamically
  • While we are at it, disabling USB completely would save around 250ms as well

Installing BackTrack on USB: mounting dev/loop0 failed

February 21, 2012 in backtrack, bug, fedora, howto, kvm, linux, usb

Recently I wanted to make use of my 16GB USB drive in a sensible way, and I didn’t really need another classic pendrive for moving data. In the end I decided to install BackTrack on it. BackTrack is a general forensic analysis/penetration testing distribution based on Debian. And it works fairly nicely as a rescue distribution too.
I could have installed it with UNetbootin, which has direct support for BackTrack, but I wanted something a little more fancy: full disk encryption and persistence of data.
There is a very nice how-to linked from the main BackTrack website for doing exactly this sort of thing. But I didn’t want to burn the image first or even reboot. We have virtualization for that today! Right? Right! Or not…
So I downloaded the BackTrack KDE/64bit variant iso, checked that the md5sum was correct, and started the installation. Silly me thought that running a KVM VM like this would make it possible to install BackTrack on the USB drive:

$ virt-install -n test -r 1024 --cdrom BT5R1-KDE-64.iso \
--boot cdrom --nonetworks --graphics spice \
--disk path=/dev/sdg

Where BT5R1-KDE-64.iso would be my BackTrack iso image and /dev/sdg would be my USB drive. Sadly this failed with an ugly error message after BackTrack started booting:

# (initramfs) mount: mounting dev/loop0 on //filesystem.squashfs failed

After some investigation I found out that BackTrack booted fine if it was the only drive in the system, but failed with the above message when I tried to attach my USB drive. I never found the reason, but the solution was to make the USB drive use the virtio bus like this:

$ virt-install -n test -r 1024 --cdrom BT5R1-KDE-64.iso \
--boot cdrom --nonetworks --graphics spice \
--disk path=/dev/sdg,bus=virtio

After that I just continued according to the how-to with a few differences (such as the USB key being seen as /dev/vda). Welcome, our encrypted overlords.

DevConf 2012 – "How I Have Seen the Future"

February 21, 2012 in conference, devconf, fedora, Red Hat

We had the Developer Conference (DevConf) in Brno last weekend and there were numerous interesting talks and hackfests. You can see the full programme on the Fedora wiki.
This year we had the pleasure of welcoming a lot of our colleagues from other Red Hat offices around the globe. And they in turn had some of the most interesting talks. I spent most of my time at talks dealing with filesystems, storage and other core components, but there were a few not-so-technical talks that sparked my interest.
Bryn Reeves had two talks, one titled “Supporting the Open Source enterprise” and the other “How to lose data and implicate people”. Sadly I had my own lab about fedora-review at the time of the second presentation, but if the first one was any indication the second one must have been great. The talk I saw dealt mostly with the processes and tools our support uses to help customers deal with problems. And examples. Lots of interesting, fun examples of the ingenuity of our engineers when dealing with bugs. Yeah, try replicating a customer’s setup of a few-thousand-machine grid where the problem occurs. Apparently “git, git, git, git, git, git” is the tool that is saving their lives every day (not surprising).
Another talk that sparked my interest was Lukáš Czerner’s “Btrfs – Design, Implementation and the Current Status”. While Lukáš is not a Btrfs developer, he is a kernel developer familiar with its internals and the talk contained a lot of technical information I hadn’t known before. “The root of the root of the roots” tree must be the motto of Btrfs. It looks to me like Btrfs has a very powerful abstraction where everything is either a tree or a node in a tree, but I guess only time will tell if this abstraction is going to be strong enough for the years to come. I am definitely looking forward to trying Btrfs in a controlled environment (for now).
Another feature that I was drooling over a bit was thin provisioning in LVM, discussed in a talk given by Edward “Joe” Thornber & Zdeněk Kabeláč. It is a fairly recent feature (the first upstream release with support for it was done in January) that allows one to thinly provision LVM volumes. What does this mean, you ask? Well, it means you can create a 20 GiB volume “pool” that can contain three 10 GiB thinly provisioned volumes, i.e. these volumes will start small and grow as needed. They will eventually also shrink when space is freed by the underlying filesystem. This of course requires the filesystem to support discard/TRIM commands, but that is not a problem for modern Linux filesystems. As I see it, thin provisioning teamed with snapshotting will change the way I manage my virtual machines for sure and I can’t wait to try it out.
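
For a rough idea of what that looks like in practice, here is a sketch using recent lvm2 commands (the volume group name and sizes are purely illustrative):

# create a 20 GiB thin pool in volume group vg0
lvcreate --size 20G --thinpool pool0 vg0
# carve three 10 GiB thinly provisioned volumes out of that pool
lvcreate --virtualsize 10G --thin --name thin1 vg0/pool0
lvcreate --virtualsize 10G --thin --name thin2 vg0/pool0
lvcreate --virtualsize 10G --thin --name thin3 vg0/pool0
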
Many people believe Btrfs will take over the role of LVM in the following years, but the way I see it, Btrfs will simplify use cases that LVM is too complex for, while LVM will keep on improving support for more demanding scenarios. Because let’s be honest, LVM on the desktop is just too darn complicated for an ordinary user/administrator.
I’ve been to FOSDEM a few weeks back, and I have to say that DevConf was smaller, but no less interesting. There were projects and new features that I hadn’t heard about before, and they caused some “WOW” moments for me. Definitely looking forward to next year! (and you should try to come too)

Auto-tag Fedora package git repo from koji builds

January 26, 2012 in automation, fedora, git, packaging, tool

I often browse through various Fedora packages and miss having git tags in package repositories corresponding to builds done in koji. Therefore I created the following simple bash script that will create git tags in the current git repo (if it’s a Fedora package).


#!/bin/bash
giturl=`fedpkg giturl`
if [ $? -ne 0 ];then
    echo "This doesn't look like a fedora package directory"
    exit 1
fi

# extract package name from the giturl (git://<host>/<pkgname>?#<hash>)
pkgname=`echo "${giturl}" |\
    sed -e 's|git://.*/\(.*\)?.*|\1|'`
# make sure we are up-to-date
git fetch

# go through last 3 releases (incl. rawhide)
for dist in f15 f16 f17;do
    builds=`koji list-tagged "${dist}" "${pkgname}" | \
        grep "${pkgname}" | awk '{print $1}'`
    for build in $builds;do
        # task urls sometimes have ".git" suffix
        git_sha=`koji buildinfo "${build}" | grep '^Task:' | \
            sed -e "s|.*${pkgname}\(\.git\)*:\(.*\))|\2|"`
        version=`echo $build | sed -e "s:${pkgname}-::"`
        echo BUILD: $pkgname\($version\) = $git_sha
        git tag "${version}" "${git_sha}"
    done
done


fedora-review – Package reviews made easier

November 11, 2011 in fedora, json, packaging, projects, python, review, tool

For a package to be included in the default Fedora repositories, it has to go through a process called package review. If you’ve done a few package reviews you know big chunks of this process are repeated ad nauseam in every review.
There have been quite a few tools aimed at automating and simplifying this process. However, they all had one major flaw: they were designed for reviewing a specific class of packages, be it Perl, Python or generic C/C++ packages. A few of us decided to change this.
We used Tim Lauridsen’s FedoraReview package as a base for our work and started adding new features and tweaks. The current work has a website on fedorahosted where you’ll find all the important information. The full feature list would be quite long, but I’ll list a few major things:
  • Bugzilla integration
  • Mock integration
  • JSON api for external plugins (more info further down)
  • Several automated tests
The tool runs all checks/tests on the spec file and rpms and writes the output into a text file. A snippet of the output looks like this:

Package Review

- = N/A
x = Check
! = Problem
? = Not evaluated

==== Generic ====
[ ]: MUST License field in the package spec file matches the actual license.
[ ]: MUST License file installed when any subpackage combination is installed.
[!]: MUST Package consistently uses macros (instead of hard-coded directory names).
Using both %{buildroot} and $RPM_BUILD_ROOT
[x]: SHOULD Spec use %global instead of %define.

==== Java ====
[!]: MUST If package uses "-Dmaven.local.depmap" explain why it was needed in a comment
[!]: MUST If package uses "-Dmaven.test.skip=true" explain why it was needed in a comment

We display only relevant results. In other words, if there are no post/postun scriptlets there is no reason to include the scriptlet sanity check output in the template. This will make more and more sense as we add more checks.


So how will different people be able to write additional plugins for this review tool? We provide a relatively simple JSON API over stdin/stdout.
To create a new check plugin you write a script or program in your language of choice. There is only one requirement:
  • The programming language has to have JSON format support

When the review tool runs your plugin it will send the following message on its stdin:

"supported_api": 1,
"pkgname": "package name",
"version": "package version",
"release": "package release",
"spec":{ path: "path/to/spec",
"text": "spec text with expanded macros"},
"rpms":[ "path/to/rpm", ...],
"rpmlint": "rpmlint output",
"build_dir": "/path/to/src/directory/after/build"

When your plugin is done with its checks it returns the following message by writing it to stdout:

"command": "results",
"supported_api": 1,
"name": "CheckName",
"url": "URL to guidelines usually",
"group": "Group for this test.(Java, Perl, etc.)",
"text": "Check description that shows on review template",
"deprecates":["DeprecatedTest", ...]
"type": "MUST"|"SHOULD",
"result": "pass"|"fail"|"inconclusive",
"extra_output": "text",

If the plugin closes stdout without writing anything there, it means there were no relevant automated tests to run and no non-automated tests to include in the template for manual evaluation. This is useful so we don’t include, for example, Perl-related test output for Java packages and vice versa.
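
To make the flow concrete, here is a minimal plugin skeleton in Python; the check itself is a made-up placeholder, only the stdin/stdout JSON handling follows the API described above:

#!/usr/bin/python
import json
import sys

# read the request message sent by the review tool on stdin
data = json.load(sys.stdin)
spec_text = data["spec"]["text"]

# made-up example check: flag specs still using %makeinstall
result = "fail" if "%makeinstall" in spec_text else "pass"

# write the results message back on stdout
json.dump({
    "command": "results",
    "supported_api": 1,
    "name": "CheckNoMakeinstall",
    "url": "https://fedoraproject.org/wiki/Packaging:Guidelines",
    "group": "Generic",
    "text": "Package does not use the deprecated %makeinstall macro",
    "deprecates": [],
    "type": "MUST",
    "result": result,
    "extra_output": "",
}, sys.stdout)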


While the tool is already usable and soon to be packaged in Fedora, there are still quite a few things we want to improve:

  • Add more functions to API (currently there is just get_section)
  • Automate all automatable tests currently available
  • Get rid of redundant tests (don’t duplicate rpmlint)
  • Add more tests of course!
  • Maybe add templating support?

We currently have 3-4 active committers and checks for C/C++, generic, Java and R packages. There is already an example external plugin written in Perl. If you have any improvement ideas, bug reports, or just want to tell us we suck because we should have done X… get in touch!

My job: I am a software chef

November 3, 2011 in fedora, packaging, personal, rant

How often are you asked what your job is? Most non-IT people will not be able to understand packaging, dependencies, rpms and whatnot. Hell, I even had trouble explaining what I do to my ex-schoolmates from university working in traditional corporate environments. And they are software developers.
Was that just my problem? I don’t think so. I had an epiphany while on vacation a few months back. I am almost sure the idea was not mine and it was just my subconscious that stole it from someone else. So what is my revelation? As you might have guessed from the title:

I am a software chef. I create recipes and prepare them.

I work in a restaurant that we call a Linux distribution. There are many restaurants, each having their own recipes, rules and so on. Some restaurants form “chains” where they share most of their recipes. In these cases there is usually one restaurant that creates most of the recipes (Debian is such a restaurant in its Linux ecosystem).
Each restaurant usually has hundreds of chefs; some of them specialize in a few recipes (build scripts), some are more flexible. In my case I specialize in a type of recipe dealing with coffee (i.e. Java).
Every recipe starts with a customer (user) ordering some meal they have heard about. I look up the ingredients (upstream projects) the food is made of and start recreating the recipe for our restaurant. Quite often the food is made of more recipes (dependencies) and I have to create those first. Sometimes these recipes are already being prepared by other chefs, so I just use their work for my final meal. However, our ingredients can be a slight bit different from the originals. For example, we have cow milk, but not the goat milk that was in the original recipe. So I have to find a way to fix the recipe using spices (patches).
Creating recipes is only part of my job though. I also work with our suppliers of ingredients (upstream developers). Sometimes the ingredients are bad, or I have found a way to improve an ingredient, so I contact the suppliers and we work together.
The third part of my job is improving the cooking process (simplifying packaging). So sometimes I move some furniture around so that other chefs don’t have to walk so much between the fridge and other places. Or I create a new mixer (tools) that speeds up the mixing of ingredients.
The final part of my job is working in the VIP part of the restaurant (RHEL). Only some customers can go there; most meals are usually very similar to the normal restaurant, but each meal is tasted (tested) before we give it to customers, and if they don’t like it they can complain and we bring them an improved recipe.
I find this metaphor kind of works for most things to a surprising degree. For the record:
  • Package maintainers – chefs
  • QE/QA – tasters
  • Security – bouncers
  • Release engineering – waiters (sorry guys)

Do you have an idea where this came from? Or can you think of a better metaphor for packaging? I’ll probably keep updating and expanding this post as I go so I can point people to it when they want to know what I do…