How to create a private channel on Freenode

March 25, 2014 in configuration, fedora, howto, IRC

As it happens, I sometimes need to create a private IRC channel for a prolonged time, where nobody but selected people should be able to join. This is a step-by-step guide to doing it on Freenode:

Prerequisites:

  • All participants have registered usernames
  • You must be a channel operator (OP) and the channel must NOT be registered (otherwise contact one of the channel founders or pick a different channel)

Step-by-step process

  1. /msg chanserv register <channel>
  2. /msg chanserv set <channel> guard on (ChanServ will join your channel)
  3. /mode <channel> +i (set channel to invite only)
  4. /msg chanserv access <channel> add <nick> (for each person that should be able to join)
  5. /mode <channel> +I <nick> (for each person that should be able to join)
  6. Prosper! (A complete example session follows below.)
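Putting it all together, a session for a hypothetical channel #secretlab with participants alice and bob could look like this (all names are made up):

/join #secretlab
/msg chanserv register #secretlab
/msg chanserv set #secretlab guard on
/mode #secretlab +i
/msg chanserv access #secretlab add alice
/mode #secretlab +I alice
/msg chanserv access #secretlab add bob
/mode #secretlab +I bob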

Managing other people's access

Add a person as co-founder (full permissions):

  • /msg chanserv access <channel> add <nick> FOUNDER

Automatically give a person OP status after joining:

  • /msg chanserv flags <channel> <nick> +O

Allow people to invite others:

  • /msg chanserv flags <channel> <nick> +i

For full information try /msg chanserv help, /msg chanserv help flags and /help mode.

Moving My Blog to WordPress on OpenShift

February 6, 2013 in dns, fedora, google apps, howto, openshift, wordpress

As you might have noticed, I moved my blog from Blogger to a WordPress instance on OpenShift and then set up a new DNS record to point my blog.ochotnicky.com domain at it. OpenShift has pretty nifty documentation on how to do this. What might not be obvious is that if you are using Google Apps for Domains, you can easily have blog.yourdomain.com point to an OpenShift WordPress application (or any other OpenShift app, for that matter).

To set this up you can follow these steps:

  1. Go to your Google Apps dashboard
  2. Go to Domain Settings -> Domain Names
  3. Click “Advanced DNS settings”
  4. Note the login details for GoDaddy (mainly the sign-in name and password)
  5. Click “Sign in to DNS console” and log in with the previously received information
  6. You should be in the main DNS panel; click on <yourdomain>
  7. Look for “DNS Manager” and click “Launch” below the list of your current records
  8. There are 3 sections: A records, CNAME (alias) records and MX records
  9. You need to add a CNAME, so click “Quick add” in the CNAME section
  10. The first field is the subdomain name you want to use. For me that was “blog”
  11. The second field is your rhcloud.com domain name (i.e. <yourapp>-<yournamespace>.rhcloud.com)
  12. Click “Save zone file” in the upper right corner
  13. Now you need to teach rhcloud about this new alias as well, so run
    rhc alias add -a <yourapp> --alias blog.<yourdomain>.com
  14. You should be able to see your WordPress instance at blog.<yourdomain>.com soon enough
  15. Links within WordPress will still point to rhcloud, so let’s fix that…
  16. Log in to WordPress (if you do it through your domain you will get SSL certificate errors; this is expected)
  17. Go to Settings -> General
  18. Set “WordPress Address (URL)” and “Site Address (URL)” to http://blog.<yourdomain>.com
  19. You should be done (a quick verification sketch follows below)

Installing BackTrack on USB: mounting dev/loop0 failed

February 21, 2012 in backtrack, bug, fedora, howto, kvm, linux, usb

Recently I wanted to make use of my 16GB USB drive in a sensible way, and I didn’t really need another classic pendrive for moving data. In the end I decided to install BackTrack on it. BackTrack is a general forensic analysis/penetration testing distribution based on Ubuntu, and it works fairly well as a rescue distribution too.
I could have installed it with UNetbootin, which has direct support for BackTrack, but I wanted something a little fancier: full disk encryption and persistence of data.
There is a very nice how-to linked from the main BackTrack website for doing exactly this sort of thing. But I didn’t want to burn the image first or even reboot. We have virtualization for that today! Right? Right! Or not…
So I downloaded the BackTrack KDE/64-bit variant ISO, checked that the md5sum was correct, and started the installation. Silly me thought that running a KVM VM like this would make it possible to install BackTrack on the USB drive:

$ virt-install -n test -r 1024 --cdrom BT5R1-KDE-64.iso \
--boot cdrom --nonetworks --graphics spice \
--disk path=/dev/sdg

Where BT5R1-KDE-64.iso is my BackTrack ISO image and /dev/sdg is my USB drive. Sadly this failed with an ugly error message after BackTrack started booting:


# (initramfs) mount: mounting dev/loop0 on //filesystem.squashfs failed

After some investigation I found out that BackTrack booted fine if it was the only drive in the system, but failed with the above message when I tried to attach my USB drive. I never found the reason, but the solution was to make the USB drive use the virtio bus like this:

$ virt-install -n test -r 1024 --cdrom BT5R1-KDE-64.iso \
--boot cdrom --nonetworks --graphics spice \
--disk path=/dev/sdg,bus=virtio

After that I just continued according to the how-to with a few differences (such as the USB key being seen as /dev/vda). Welcome our encrypted overlords.

Getting your Java Application in Linux: Guide for Developers (Part 2)

April 20, 2011 in fedora, howto, java, packaging

Ant and Maven

Last time I wrote about general rules of engagement for Java developers who want to make the lives of packagers easier. Today I’ll focus on the specifics of the two main build systems in use today, Ant and Maven, but more so on Maven for reasons I’ll state in a while.

Ant

Ant is (or at least used to be) the most widely deployed build system in the Java ecosystem. There are probably multiple reasons for this, but generally it’s because Ant is relatively simple. In the *NIX world, Ant is the equivalent of pure make (and build.xml of a Makefile). build.xml is just that: an XML file, with additional extensions to simplify common tasks (calling javac, javadoc, etc.). So the question is:

I am starting a new Java project. How can I use Ant properly to make life easier for you?

The simplest answer? DON’T! It might seem harsh and ignorant of the bigger picture, and it probably is. But I believe it’s also true that Ant is generally harder to package than Maven. Ant build.xml files are almost always unique pieces of art in themselves and as such can be a pain to package. I am always reminded of the following quote when I have to dig through some smart build.xml system:

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

  –Brian Kernighan

And I have a feeling some people try to be really clever when writing their build.xml files. That said, I understand there are times when using Ant is just too tempting so I’ll include a few tips for it anyway.

Use the Apache Ivy extension for dependencies

One of the main problems with Ant is the handling of various dependencies. Usually they live in some subdirectory of the main tree, some jars versioned, some not, some patched without any note about it… in other words, a nightmare in itself. The Apache Ivy extension helps here because it works with dependency metadata that packagers can use to figure out the real build dependencies, including versions. We can also be sure that no dependencies are patched one way or the other.

Ivy is nice for developers as well. It will make your source tarballs much smaller (you do have source tarballs, right?!) and your build.xml nicer. I won’t include any examples here because I believe the Ivy documentation is indeed very good.

One lib/ to rule them all

In case you really don’t want to use Ivy, make sure you place all your dependencies in one directory at the top level of your project (don’t scatter your dependencies, even if you are using multiple sub-projects). This directory should ideally be called lib/. It should contain your dependencies named as ${name}-${version}.jar. Most of the time you should include license files for every dependency you bundle, because you are becoming a distributor, and for most licenses this means you have to provide the full text of the license. For the licenses, use names identical to the jar filenames, but with a “.license” suffix. All in all, make it easy to figure out your build dependencies and play with them. A sketch of such a layout follows below.
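For illustration, a hypothetical project tree following these rules (names and versions are made up, and the “.license” naming is one plausible reading):

myproject/
  build.xml
  src/
  lib/
    commons-lang-2.6.jar
    commons-lang-2.6.license
    junit-4.8.2.jar
    junit-4.8.2.license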

Don’t be too clever

I can’t stress this enough. Try to keep your build.xml files to the bare minimum. Understanding ten 30 KiB build.xml files with a multiple-phase build and tests spread through 10 directories is no fun. Please think of the poor packager when you write your build.xml files. I don’t mind getting grey hair that much, but I’d rather it came later than sooner.

Maven

And now we are coming to my favourite part. Maven is a build and project management tool with extensive plugin support, able to do almost anything a developer might ask for. And all that while providing a formal project structure, so that once you learn how Maven works in one project you can re-use your knowledge in other projects.

Maven goodies

Maven provides several good things for packagers, such as clear dependencies and preventing simple patched dependencies from sneaking in. The most important advantage for packagers is that with Maven the problems are the same in all projects. Once you understand how a certain Maven plugin works, you will know what to expect and what to look for. But Maven is nice not just for packagers, but also for developers.

Declarative instead of imperative

You don’t tell Maven:

Add jar A and jar B to the classpath, then use this properties file to set up test resources. Then compile the tests (have you compiled the sources yet?) and then … and run them with X

Instead you place test files and resources into the appropriate directories and Maven takes care of everything. You just need to specify your test dependencies in a nice and tidy pom.xml, as in the sketch below.
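For example, declaring a test-only dependency takes just a few lines in pom.xml (the version shown is illustrative):

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.8.2</version>
  <scope>test</scope>
</dependency>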

Project metadata in one place

With Maven you have all project information in one place:

  • Developer contact information
  • Homepage
  • SCM URLs
  • Mailinglists
  • Issue tracker URL
  • Project reports/site generation
  • Dependencies
  • Ability to modify behaviour according to architecture, OS or other properties

Need I say more? Fill it out, keep it up-to-date and we will all be happy. A sketch of the relevant pom.xml sections follows.
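For reference, a minimal sketch of these metadata sections in pom.xml (all URLs, names and addresses are placeholders):

<project>
  ...
  <url>http://myproject.example.org</url>
  <scm>
    <connection>scm:git:git://example.org/myproject.git</connection>
    <url>http://example.org/gitweb?p=myproject.git</url>
  </scm>
  <issueManagement>
    <system>Bugzilla</system>
    <url>http://example.org/bugzilla</url>
  </issueManagement>
  <mailingLists>
    <mailingList>
      <name>Development list</name>
      <post>dev@myproject.example.org</post>
    </mailingList>
  </mailingLists>
  <developers>
    <developer>
      <name>Jane Developer</name>
      <email>jane@example.org</email>
    </developer>
  </developers>
  ...
</project>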

Great integration with other tools

The ecosystem around Maven has been growing in the past years and now you will find good support for handling your pom.xml files in any major Java IDE. But that is just the tip of the iceberg. There are Maven plugins adding all kinds of additional tool support: running checkstyle on your code, helping with licensing, integration with gpg, ssh and jflex, making releases. There are plugins for that and more.

Support for Ant

If you are in the process of migrating your build system from Ant to Maven, you can do it in phases. For parts of your build you can easily run Ant via the maven-antrun-plugin. A good example of such a migration in progress is checkstyle. In version 5.2 they introduced a Maven build while preserving their old layout and running Ant for tests. A sketch of such a setup follows.
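A minimal sketch of wiring an existing Ant target into the Maven test phase with maven-antrun-plugin (the antfile and target names are made up):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>test</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <!-- delegate to the legacy Ant build for tests -->
          <ant antfile="build.xml" target="run-tests"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>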

Maven's messier side

a.k.a. what you need to be aware of. It’s generally quite hard to do something bad in Maven, because it won’t let you do it easily. That said, there are plugins that can make it hard for us to package your software.

maven-dependency-plugin:copy-dependencies

This specific goal can potentially cause problems because it allows copying classes from dependencies into the resulting jar files. As I wrote last time, this is unacceptable because it creates possible licensing, security and maintenance nightmares. If you need even just one class from another project, rather than copying it, add the project as a dependency in your pom.xml.

maven-shade-plugin

The shade plugin is a very shady plugin (pun intended). It can be used to weave dependencies inside your jars while changing their package names and doing all kinds of modifications in the process. I’ll give you a small test now :-) Let’s say you have a jar file with the following contents:


META-INF/
META-INF/MANIFEST.MF
META-INF/maven/
META-INF/maven/org.packager/
META-INF/maven/org.packager/Pack/
META-INF/maven/org.packager/Pack/pom.properties
META-INF/maven/org.packager/Pack/pom.xml
org/
org/packager/
org/packager/signature/
org/packager/signature/SignatureReader.class
org/packager/signature/SignatureVisitor.class
org/packager/signature/SignatureWriter.class
org/packager/Pack.class

Can you tell, just from looking at the jar contents, where the org.packager.signature subpackage is coming from? Take your time, think about it. Nothing? Well, here’s a hint:



<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>org.objectweb.asm</pattern>
        <shadedPattern>org.packager</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
I believe this demonstrates why usage of the shade plugin is evil (in 99% of cases at least). It is especially problematic if the shaded packages are part of the public API of your project, because then we won’t be able to simply fix this in one package; it will cascade up the dependency chain.

maven-bundle-plugin

The bundle plugin is one of the more controversial plugins, because it can be used both for good and for bad :-) One of the most important good use cases for the bundle plugin is generating OSGi bundles. Every project can easily make their jar files OSGi-compatible by doing something like this:


<project>
  ...
  <packaging>bundle</packaging>
  ...
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
  ...
</project>

Easy, right? Now to the darker side of the bundle plugin. I have another example to test your skills. This one should be easier than the shade plugin example:


META-INF/MANIFEST.MF
META-INF/
META-INF/maven/
META-INF/maven/org.packager/
META-INF/maven/org.packager/Pack/
META-INF/maven/org.packager/Pack/pom.properties
META-INF/maven/org.packager/Pack/pom.xml
org/
org/objectweb/
org/objectweb/asm/
org/objectweb/asm/signature/
org/objectweb/asm/signature/SignatureReader.class
org/objectweb/asm/signature/SignatureVisitor.class
org/objectweb/asm/signature/SignatureWriter.class
org/packager/
org/packager/Pack.class

The problem is the same as with the shade plugin (bundling of dependencies), but at least here it’s more visible in the contents of the jar and it will not poison the jar’s API. Just for the record, this is how it was created:



<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- the exact instruction element is a reconstruction; the point is
           that a dependency's package gets pulled into the bundle -->
      <Private-Package>org.objectweb.asm.signature</Private-Package>
    </instructions>
  </configuration>
</plugin>



Summary

Today I wrote about:

  • Ant and why you shouldn’t use it (that much)
  • Ant and how to use it if you have to
  • Maven and why it rocks for packagers and developers
  • Maven and its plugins and why they suck for packagers sometimes

There are a lot more things that can cause problems, but these are the most obvious and most easily fixed. I’ll try to gather more information about things we (packagers) can do to help you (developers) a bit more and perhaps include one final part of this guide.

Getting your Java Application in Linux: Guide for Developers (Part 1)

April 8, 2011 in fedora, howto, java, packaging

Introduction to packaging Java

Packaging Java libraries and applications for Fedora has been my daily bread for almost a year now. I realized now is the time to share some of my thoughts on the matter and perhaps a few ideas that upstream developers might find useful when dealing with Linux distributions.

This endeavour is going to be split into several posts, because there are several sub-topics I want to write about. Most of this is going to be based on the talk I gave at FOSDEM 2011. Originally I was hoping to just post the video, but that seems to be taking more time than I expected :-)

If you are not entirely familiar with the status of Java on Linux systems, it would be a good idea to first read a great article by Thierry Carrez called The real problem with Java in Linux distros. A short quote from that blog:

The problem is that Java open source upstream projects do not really release code. Their main artifact is a complete binary distribution, a bundle including their compiled code and a set of third-party libraries they rely on.

There is no simple solution, and my suggestions are only mid-term workarounds and ways to make each other’s (upstream ↔ downstream) lives easier. Sometimes I am quite terse in my suggestions, but if need be I’ll expand on them later.

Part 1: General rules of engagement

Today I am going to focus on general rules that apply to all Java projects wishing to be packaged in Linux distributions:

  • Making source releases
  • Handling Dependencies
  • Bugfix releases

For full understanding, a short summary of the general requirements for packages to be added to most Linux distributions:

  • All packages have to be built from source
  • No bundled dependencies used for building/running
  • Have a single version of each library that all packages use

There are a lot of reasons for these rules and they have been flogged to death multiple times in various places. It mostly boils down to severe maintenance and security problems when these rules are not followed.

Making source releases

As I mentioned previously, most Linux distributions rebuild packages from source even when there is an upstream release that is binary compatible. To do this we obviously need sources :-) Unfortunately quite a few (mostly Maven) projects don’t do source release tarballs. Some projects provide source releases without build scripts (build.xml or pom.xml files). The most notable examples are the Apache Maven plugins. For each and every update of one of these plugins we have to check out the source from the upstream repository and generate the tarball ourselves.
All projects using the Maven build system can simply make packagers’ lives easier by having the following snippet in their pom.xml files:




<build>
  <plugins>
    ...
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptorRefs>
          <descriptorRef>project</descriptorRef>
        </descriptorRefs>
      </configuration>
      <executions>
        <execution>
          <id>make-assembly</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    ...
  </plugins>
</build>


This will create -project.zip/tar.gz files containing all the files needed to rebuild the package from source. I have no real advice for projects using Ant for now; I’ll cover Ant next time.

Handling dependencies

I have a feeling that most Java projects don’t spend much time thinking about dependencies. This should change, so here are a few things to think about when adding new dependencies to your project.

Verify that the dependency isn't already provided by the JVM

Often packages contain unnecessary dependencies that are provided by all recent JVMs. Think twice whether you really need another XML parser; see the sketch below.
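For instance, the JDK has shipped a built-in XML parser (JAXP) for a long time, so plain XML parsing needs no extra dependency. A minimal sketch:

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ParseExample {
    public static void main(String[] args) throws Exception {
        // JAXP ships with the JDK: no external parser dependency needed
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new File(args[0]));
        System.out.println("Root element: "
            + doc.getDocumentElement().getTagName());
    }
}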

Try to pick dependencies from major projects

Major projects (apache-commons libraries, Eclipse, etc.) are much more likely to be packaged and supported properly in Linux distributions. If you use some unknown small library, packagers will have to package that first, and this can sometimes lead to such frustrating dependency chains that they will give up before packaging your software.

Do NOT patch your dependencies

Sometimes a project A does almost exactly what you want, but not quite… so you patch it and ship it with your project B as a dependency. This will cause problems for Linux distributions because you have basically forked the original project A. What you should do instead is work with the developers of project A to add the features you need or fix those pesky bugs.

Bugfix releases

Every software project has bugs, so sooner or later you will have to do a bugfix release. As always there are certain rules you should try to uphold when doing bugfix releases.

Use correct version numbers

This depends on your versioning scheme; I’ll assume you are using standard X.Y.Z versions for your releases. Changes in Z are the smallest released changes of your project: they should mostly contain only bugfixes, plus unobtrusive and simple feature additions if necessary. If you want to add bigger features, you should change the Y part of the version.

Backward compatible

Bugfix releases have to be backwards compatible at all times. No API changes are allowed.

No changes in dependencies

You should not change dependencies or add new ones in bugfix releases. Even updating a dependency to a new version can cause a massive recursive need for updates or new dependencies. The only time it’s acceptable to change or add a dependency in a bugfix release is when the new dependency is required to fix the bug.

An excellent example of how NOT to do things was the Apache Maven update from 3.0 to 3.0.1. This update changed the requirement from Aether 1.7 to Aether 1.8. Aether 1.8 had a new dependency on async-http-client, which in turn depends on netty, jetty 7.x and more libraries. So what should have been a simple bugfix update turned into a major update of one package and two new package additions. If this update had contained security fixes, it would have caused serious problems to resolve in a timely manner.

Summary

  • Create source releases containing build scripts
  • Think about your dependencies carefully
  • Handle micro releases gracefully

Next time I’ll look into some Ant and Maven specifics that are causing problems for packagers and how to resolve them in your projects.

Problems with running gpg-agent as root

February 14, 2011 in bug, fedora, howto, problem, security

This is going to be a short post for people experiencing various issues with pinentry and gpg-agent. This mostly happens on systems with only gpgv2.

I have been asked to look at bug 676034 in Red Hat Enterprise Linux. There were actually two issues there:

  • Running pinentry with the DISPLAY variable set but no GUI pinentry helpers available
  • Using gpg on the console after doing “su -”

The first problem was relatively easy to figure out. Pinentry finds the DISPLAY variable and looks for the pinentry-gtk, pinentry-qt or pinentry-qt4 helpers to ask for the passphrase. Unfortunately, if none of these GUI helpers can be found, pinentry doesn’t try their console counterpart. The workaround is simple: unset the DISPLAY variable if you are working over an ssh connection (or don’t use X forwarding when you don’t need it), as shown below. More recent pinentry versions feature proper failover to pinentry-curses.
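The workaround in an ssh session is then just this (a sketch, assuming the gpg2 binary and a made-up file name):

$ unset DISPLAY
$ gpg2 --decrypt secret.txt.gpg
[pinentry-curses now asks for the passphrase on the terminal]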

The second problem was a bit more tricky to figure out, although in the end it was a facepalm situation. When trying to use GnuPG as root on the console, hoping for pinentry-curses to ask for the passphrase, users were instead greeted with this message: ERR 83886179 Operation cancelled. To make things more confusing, everything seemed to work when logging in as root directly over ssh.

At first I thought this must be caused by environment variables, but that turned out to be an incorrect assumption. Instead, the reason was that the current tty was owned by the original owner and not root. This seemed to cause a problem with gpg-agent and/or the ncurses pinentry. I will investigate who the real culprit was here, but this bug seems to be fixed at least in recent Fedoras.

So what should you do if you have weird problems with gpg and pinentry as root? Here’s what:


$ su -
[enter password]
# chown root `tty`
[use gpg, pinentry as you want]

Easy, right? As a final note… I’ve been to FOSDEM and I plan to blog about it, but I guess I am waiting for the videos to show up online. It’s quite possible I’ll blog about it before that, however, since it’s taking a while.

Automatic squashing of last git commits

December 10, 2010 in git, howto, open source, software

I have written before about my workflow in Fedora. This workflow includes a relatively high amount of rebasing where I squash the last two commits into one. I use it to quickly refine and test patches. My history then usually looks something like this:

$ git log --format=oneline --decorate=no | head -3
2db8eacd7f7c20be88824caae5f5af16b9520d34 temp
443bb0f019c87f0090bb9da295a019c0eee23729 Add conditional BRs to enable ff merge between f14
b4c602f9f044598544cff3d68710f68b9447ea0f Fix installation of pom files for artifact jars

Where 443bb0f is my first attempt at the fix and 2db8eac is a fixed fix :-). Before pushing this to the upstream repository I usually squash the last two commits to look like this:


$ git log --format=oneline --decorate=no | head -2
f852c3e260d21bbc642f861e6fa6ea62caa7b69b Add conditional BRs to enable ff merge between f14
b4c602f9f044598544cff3d68710f68b9447ea0f Fix installation of pom files for artifact jars

I used to do this manually, but I do it often enough that it made sense to automate. So without further ado:


#!/bin/sh

# EDITOR is invoked twice by interactive rebase: first on the todo list,
# then on the combined commit message. One sed handles both files:
#   - '2s/pick/squash/' turns the second todo line into a squash
#   - the range delete drops everything from the 2nd commit message onward
export EDITOR="sed -i '2s/pick/squash/;/# This is the 2nd commit message:/,$ {d}'"

git rebase -i HEAD~2

Save this somewhere inside your $PATH and chmod +x it (or set up a shell alias). Then just running it will automatically squash the last two commits, using the HEAD~1 commit message and discarding the last commit message. No warranties though :-)

Edit: Benjamin suggested using “fixup” instead of “squash”. This is a new feature in git 1.7+. For more information see this blogpost. A sketch of the fixup variant follows.
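With fixup, git discards the second commit message automatically and doesn't open an editor for it, so the script shrinks to this (a sketch, requires git 1.7+):

#!/bin/sh

# fixup melds the second commit into the first and discards its message,
# so EDITOR only needs to rewrite the todo list
export EDITOR="sed -i '2s/pick/fixup/'"

git rebase -i HEAD~2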

Python3, PyQt4 and missing QString

October 20, 2010 in howto, open source, programming, pyqt, python, qt

As I was recently adding Python 3 support to the little trailer downloader application I mentioned before (PyQTrailer), I encountered a strange problem with PyQt4 that only occurred in Python 3.

Let’s take this simple Python example:


$ python
>>> from PyQt4.QtCore import QString
>>>

The same code snippet doesn’t work in the python3 interpreter though:


$ python3
>>> from PyQt4.QtCore import QString
Traceback (most recent call last):
File "", line 1, in
ImportError: cannot import name QString
>>>

My first instinct was: Bug! The Gentoo PyQt4 ebuild was doing something terrible and somehow made PyQt4 unusable in the python3 interpreter. Turns out my gut instinct was wrong (once again :-) ).

PyQt4 changed the API of QString and QVariant for Python 3.x starting with version 4.6. For QString this is due to the fact that from Python 3.0 on, string literals are unicode objects (no need for the u’unicode’ magic anymore). This means that you can use ordinary Python strings in place of QString. But I wanted my QString for something like this:


...
downloadClicked = pyqtSignal((QString, ))
...

This snippet creates a Qt signal that you can then emit. The question is… how can we update this for Python 3.x? We could probably just replace QString with type(""), but for a change that wouldn’t work with Python 2.x. So? Python’s dynamic nature to the rescue!
Edit: simplified QString definition (thanks Arfrever)


try:
    from PyQt4.QtCore import QString
except ImportError:
    # we are using Python3 so QString is not defined
    QString = str

If we put the previous code sample at the beginning of our Python file, we can use QString in our code and it will keep working in both Python 3.x and Python 2.x. Case closed, dear Watson.

Packaging workflow, patch management and git magic in Fedoraland

October 15, 2010 in en, fedora, git, howto, linux, packaging

A big part of my job is packaging for Fedora Linux (I am pretty sure I haven’t mentioned this before :-) ). I have spent the last 6 months working on various Java packages, adding new packages to Fedora, updating dependencies, etc. I have developed a certain workflow which I believe might be of interest to other packagers. So here goes. Most of these hints are about managing patches for your packages. I’ll also work on a concrete package so it won’t be completely theoretical.

Let’s assume your project already has some history and patches. As an example, let’s fix velocity bug 640660. I’ll start with the steps I took and what they meant, and I’ll summarize at the end with the rationale for what I have gained by using my workflow (and what could be improved).

After modifying BuildRequires and Requires to the tomcat6 servlet API I tried to build velocity:

$ fedpkg mock

This is what I got:

---snip----
compile-test:
[javac] Compiling 125 source files to /builddir/build/BUILD/velocity-1.6.3/bin/test-classes
[javac] /builddir/build/BUILD/velocity-1.6.3/bin/test-src/org/apache/velocity/test/VelocityServletTestCase.java:135: org.apache.velocity.test.VelocityServletTestCase.MockServletContext is not abstract and does not override abstract method getContextPath() in javax.servlet.ServletContext
[javac] static class MockServletContext implements ServletContext
[javac] ^
[javac] Note: /builddir/build/BUILD/velocity-1.6.3/bin/test-src/org/apache/velocity/test/VelocityServletTestCase.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error
BUILD FAILED
/builddir/build/BUILD/velocity-1.6.3/build/build.xml:251: Compile failed; see the compiler error output for details.
Total time: 47 seconds
---snip---

The issue seems simple to fix, just a missing stub function in a test case, right? So what now?

$ fedpkg prep
$ mv velocity-1.6.3 velocity-1.6.3.git
$ cd velocity-1.6.3.git
$ git init && git add . && git commit -m 'init'

This effectively created my small git repository for the sources and populated it with all the files. The fedpkg prep step extracted the tarball and applied the already existing patches to the unpacked sources. I suggest you create a shell alias for the last three commands, as you’ll be using them a lot (a sketch follows below). We moved the directory to velocity-1.6.3.git so that the next (accidental?) fedpkg prep won’t erase our complicated changes (yes, it happened to me once. I’ve had better days). Note that velocity-1.6.3.git is not a temporary directory. I will keep it around after fixing this bug so that I can use git history, diffs and other features in the future. It is especially nice when you have packages with a lot of patches on top.
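A minimal sketch of such a helper as a shell function (the name prepgit is made up):

# put the unpacked, patched sources of a package under git
# usage: prepgit <name-version>  (run from the package directory)
prepgit() {
    fedpkg prep &&
    mv "$1" "$1.git" &&
    cd "$1.git" &&
    git init && git add . && git commit -m 'init'
}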

Now we can easily work in our new git repository, edit the source file in question and do:

$ git add src/test/org/apache/velocity/test/VelocityServletTestCase.java
$ git commit -m 'Fix test for servlet api 2.5'
$ git format-patch HEAD~1

This created a commit with a descriptive message and generated a patch file 0001-Fix-test-for-servlet-api-2.5.patch in the current directory. This is how the patch looks:

From 8758e3c83411ffadc084d241217fc25f1fd31f42 Mon Sep 17 00:00:00 2001
From: Stanislav Ochotnicky
Date: Thu, 14 Oct 2010 10:20:52 +0200
Subject: [PATCH] Fix test for servlet api 2.5

---
.../velocity/test/VelocityServletTestCase.java | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/src/test/org/apache/velocity/test/VelocityServletTestCase.java b/src/test/org/apache/velocity/test/VelocityServletTestCase.java
index 824583e..ac0ab5c 100644
--- a/src/test/org/apache/velocity/test/VelocityServletTestCase.java
+++ b/src/test/org/apache/velocity/test/VelocityServletTestCase.java
@@ -16,7 +16,7 @@ package org.apache.velocity.test;
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
- * under the License.
+ * under the License.
*/

import java.io.IOException;
@@ -149,6 +149,11 @@ public class VelocityServletTestCase extends TestCase
return this;
}

+ public String getContextPath()
+ {
+ return "";
+ }
+
public String getServletContextName()
{
return "VelocityTestContext";
--
1.7.2.3

Now that we have the patch prepared for velocity, we need to use it in the spec file and we’re done.

Let’s say our first attempted patch didn’t work as expected and the build (or test) still failed. We modify the sources again and do another commit. What we have now is:

$ git log --format=oneline
c15f7e02eaae93b755cc0bfde6def3d6e67d2b0f (HEAD, master) Fix previous commit
3e3d654c142c7028c9c7895579fba204c4c4bf08 Fix test for servlet api 2.5
2f32554ddf892f4cca3f78b1f82a7c3ab169c357 init

We don’t want two patches in the spec file for one fix, so: time for git magic. You’ve probably heard of git rebase if you’ve been using git for a while. What we want to do now is merge the last two commits into one, or in git-speak, “squash” them. To do this you have to run:

$ git rebase -i HEAD~2

Now your editor should pop up with this text:

pick 3e3d654 Fix test for servlet api 2.5
pick c15f7e0 Fix previous commit

# Rebase 2f32554..c15f7e0 onto 2f32554
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#

So we just need to change “pick c15f7e0 Fix previous commit” into “squash c15f7e0 Fix previous commit” (you can also use just ‘s’ instead of ‘squash’). Save. Close. Another editor window will open with something like this:

# This is a combination of 2 commits.
# The first commit's message is:

Fix test for servlet api 2.5

# This is the 2nd commit message:

Fix previous commit

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.

In this case we will delete the second message because we just want to pretend our first attempt was perfect :-). Save. Close. Now we have:

$ git log --format=oneline
cbabb6ac43f7bdb8e52ccd09c25cfd0a032b553c (HEAD, master) Fix test for servlet api 2.5
2f32554ddf892f4cca3f78b1f82a7c3ab169c357 init

Repeat as many times as you want. You can also re-order commits and change commit messages with rebase (note that if you just want to change the last commit message you can do "git commit --amend"). I generally don’t create commits until I have a working patch though.

So why do I think all this mumbo-jumbo improves my workflow? Let’s see:

  • I can have long comments for every patch I create (instead of a line or two in the spec file)
  • I can use the same patches to send directly to upstream
  • I don’t have to juggle around with diff and remember what files I changed where
  • Probably several other things I haven’t even realized

I have a few things that bother me, of course. git format-patch generates filenames that differ from the standard practice of %{name}-%{version}-message.patch. This is not a git problem. For packages where only my patches exist I stick with the git naming, but when there are other patches I stick with the naming they started with. Another thing that bothers me is that creating the initial repository with “fedpkg prep” hides the patches that were applied to the sources. That’s why I am thinking about re-working my packages so that all patches will be in my git repositories as commits with descriptive messages. No need for comments in the spec file anymore. Perhaps someone can suggest other improvements to my approach.

Mount me, but be careful please!

June 30, 2009 in en, gsoc, howto, linux, open source, problem, projects, security, software

First a bold note: I now have a repository on Gentoo infrastructure for working on my GSoC project. Check it out if you want.

Last time I mentioned I won’t go into technical details of my GSoC project on this blog any more. For that you can keep an eye on my project on gentooexperimental and/or the Gentoo mailing lists, namely gentoo-qa and gentoo-soc. But there is one interesting thing I found out while working on Collagen.

One part of my project was automating the creation of a chroot environment for compiling packages. For this I created a simple shell script that you can see in my repository. I will pick one line out of a previous version of this script:

mount -o bind,ro "$DIR1" "$DIR2"

What does this line do? Or more specifically, what should it do? It should create a virtual copy of the contents of directory DIR1 inside directory DIR2. The copy in DIR2 should be read-only; that means no creating new files, no changing of files and so on. The command succeeds and, as far as we know, everything should work OK, right? Wrong!

The command mentioned above actually fails silently. There is a bug in current Linux kernels (2.6.30 as of this day): when you execute mount with “-o bind,ro” as arguments, the “ro” part is silently ignored. Unfortunately it is added to /etc/mtab even though it was ignored, so you would not see that DIR2 is writable unless you tried writing to it yourself. The current proper way to create read-only bind mounts is therefore this:

mount -o bind "$DIR1" "$DIR2"
mount -o remount,ro "$DIR2"
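
To check that the read-only flag actually took effect, try writing to the mount; with the buggy single-command variant this test would silently succeed (output sketched):

$ touch "$DIR2/write-test"
touch: cannot touch '…/write-test': Read-only file system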

There is issue of race conditions with this approach, but in most situations that should not be a problem. You can find more information about read-only bind mounts in LWN article about the topic.