Understood and agreed with

October 16, 2011 in en, lyrics, personal, reply, song

Dear OSS/Fedora/whatever reader. Stop right here.

Oh nothing’s going to change my love for you
I wanna spend my life with you
So we make love on the grass under the moon
No one can tell, damned if I do
Forever journey on golden avenues
I drift in your eyes since I love you
I got that beat in my veins for only rule
Love is to share, mine is for you

Making packaging Maven projects easier

September 12, 2011 in en, fedora, packaging

There are two recent changes to our Java guidelines in Fedora and to the use of Maven in packaging that I’d like to mention today.

Maven dependency mapping macros

Something I haven’t blogged about yet, but it’s pretty important: we have new macros for Maven depmaps in Fedora. In the past, when you wanted to map a certain groupId:artifactId to a file in %{_javadir}, you had to include a snippet like this in your spec:


%add_to_maven_depmap com.google.guava guava 05 JPP guava
%add_to_maven_depmap com.google.collections google-collections 05 JPP guava

This tells our Maven that com.google.guava:guava and com.google.collections:google-collections can be found in one of the repositories as JPP/guava.jar. It meant you had to know the groupId:artifactId and other information, and it was extremely easy to make a mistake here, causing all sorts of trouble. The current code doing the same thing:


%add_maven_depmap JPP-guava.pom guava.jar -a "com.google.collections:google-collections"

We parse the pom file and get groupId:artifactId from it, plus we do additional sanity checks such as:

  • the names of the pom and jar files have to be consistent
  • the jar file has to exist if the packaging type is not pom

If you need additional mappings you can easily add them. There are a few other options for this new macro that are useful in certain situations.
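One of those options, if memory serves, is -f, which writes the depmap fragment into a separately named file so that a subpackage can own it; a sketch (guava-testlib is just an example artifact):

%add_maven_depmap JPP.guava-testlib.pom guava-testlib.jar -f testlib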

Maven test deps skipping

Long story short: When you use -Dmaven.test.skip=true in Fedora packages you no longer need to patch those test dependencies out of pom.xml.

We’ve had Apache Maven in Fedora for quite some time, and packaging with Maven has been getting easier thanks to small tweaks to our packaging macros and changes to the guidelines. However, one problem has been bugging all Java packagers, and it was especially confusing for those starting to package software built with Maven: Maven creates a tree of dependencies before it starts building the project, and it includes test dependencies even when the tests are being skipped.

Skipping tests is sometimes necessary due to problems with koji or with dependencies, and up until now we had to either patch those test dependencies out of pom.xml or use custom dependency mappings (an ugly concept in itself).
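The end result: passing the flag is now all it takes. A sketch of a %build section (illustrative only; use whatever Maven invocation your spec already has):

%build
# tests are skipped; their dependencies in pom.xml no longer need patching out
mvn-rpmbuild install -Dmaven.test.skip=true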

Last week I decided it’s about time someone did something about this, so I dug into the Maven code and created a solution (more of a hack, really) that is already included in Fedora. If you want the gory details, you can read the patch itself (I advise against it). I’ll try to make the patch work properly so that it can be included in mainstream code.

I can only hope that packagers will find these changes helpful; the general feedback so far has been positive.

Local DNS caching – just do it already!

November 11, 2010 in en, network, open source, problem, software

I recently encountered weird problems with my network connection at home. Everything worked, but was unbelievably slooooow. Ping showed times of ~30 ms, but I could easily see that it took longer for those packets to make the round trip.

It took me some time to figure out what was happening. Looking back, checking the DNS servers should have been one of the first things to do. It seems the first DNS server supplied by my provider was down. That meant every DNS query first timed out and then went to the second DNS server, which got me my response. For some reason ping did a DNS query before sending every new packet, which explains its weird behaviour.

This problem got me to finally install a local caching DNS server. I had been thinking about doing it before, but I never got around to it until now. I always thought it was going to be a few-hour nightmare; I blame my previous experience with bind 😀 For simple local caching bind would be overkill, so I chose dnsmasq. Using it was as simple as installing and running dnsmasq and executing

$ echo 'nameserver 127.0.0.1' > /etc/resolv.conf.head

From that point on, every resolv.conf file generated by dhcpcd will list my local DNS as the first server to try. For the current session you can add it to /etc/resolv.conf manually. Then you can verify your setup works by running the following command twice in a row:

$ dig randomserver.com

The first execution should show Query time: XX msec, with XX being a few tens of milliseconds. The query time for the second run should be zero or very close to it.
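The lines to look for in dig’s output are these (the timing shown is illustrative):

;; Query time: 34 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)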

Congratulations. You have your very own caching server. Who knows…maybe you’ll even notice some improvements in your network connection :-)
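If you ever want to tune the cache, a couple of lines in /etc/dnsmasq.conf go a long way; a minimal sketch (both are standard dnsmasq options):

# /etc/dnsmasq.conf
listen-address=127.0.0.1   # answer only queries from this machine
cache-size=1000            # the default cache holds 150 entries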

PyQTrailer revisited

November 10, 2010 in en, fedora, open source, projects, pyqt, python, software

Some time ago, I wrote about my little project: an apple.com trailer downloader. Apple is still not very open-source friendly as far as its trailer website is concerned, so all the points I made in my original post still stand. To my surprise this little project is still alive and kicking, with new ideas for improvements coming in all the time :-). Even more important: it seems that so far no breakage has happened due to Apple changing something on the web.

Since the first version I released almost 6 months ago, several new features have appeared. Some of them include:

  • Parallel downloading of trailers
  • Ability to run a movie player (mplayer, vlc, etc.) without downloading the file to disk
  • Lots of customisation/performance options added
  • Working support for trailer search
  • Localisation support
  • Python 3 support

The latest version (0.5.2) is already available in the Gentoo repositories, and should hit Fedora updates in the next day or so (though it will be delayed by the new package acceptance criteria). Enjoy.

Packaging workflow, patch management and git magic in Fedoraland

October 15, 2010 in en, fedora, git, howto, linux, packaging

A big part of my job is packaging for Fedora Linux (I am pretty sure I haven’t mentioned this before :-) ). I have spent the last 6 months working on various Java packages, adding new packages to Fedora, updating dependencies, etc. I have developed a certain workflow which I believe might be of interest to other packagers. So here goes. Most of these hints are about managing patches for your packages. I’ll also work on a concrete package so it won’t be completely theoretical.

Let’s assume your project already has some history and patches; as an example, let’s fix velocity bug 640660. I’ll start with the steps I took and what they meant, and I’ll end with a summary of what I gained by using this workflow (and what could be improved).

After modifying BuildRequires and Requires to the tomcat6 servlet API, I tried to build velocity:

$ fedpkg mock

This is what I got:

---snip----
compile-test:
[javac] Compiling 125 source files to /builddir/build/BUILD/velocity-1.6.3/bin/test-classes
[javac] /builddir/build/BUILD/velocity-1.6.3/bin/test-src/org/apache/velocity/test/VelocityServletTestCase.java:135: org.apache.velocity.test.VelocityServletTestCase.MockServletContext is not abstract and does not override abstract method getContextPath() in javax.servlet.ServletContext
[javac] static class MockServletContext implements ServletContext
[javac] ^
[javac] Note: /builddir/build/BUILD/velocity-1.6.3/bin/test-src/org/apache/velocity/test/VelocityServletTestCase.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error
BUILD FAILED
/builddir/build/BUILD/velocity-1.6.3/build/build.xml:251: Compile failed; see the compiler error output for details.
Total time: 47 seconds
---snip---

The issue seems simple to fix, just a missing stub method in a test case, right? So what now?

$ fedpkg prep
$ mv velocity-1.6.3 velocity-1.6.3.git
$ cd velocity-1.6.3.git
$ git init && git add . && git commit -m 'init'

This effectively created a small git repository for the sources and populated it with all the files. The fedpkg prep step extracted the tarball and applied the already existing patches to the unpacked sources. I suggest you create a shell alias for the last three commands, as you’ll be using them a lot (see the sketch below). We renamed the directory to velocity-1.6.3.git so that the next (accidental?) fedpkg prep won’t erase our complicated changes (yes, it happened to me once; I’ve had better days). Note that velocity-1.6.3.git is not a temporary directory. I will keep it around after fixing this bug so that I can use git history, diffs and other features in the future. It is especially nice for packages with a lot of patches on top.
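That alias could take the shape of a small shell function; a sketch (fedgit is a made-up name, adjust to taste):

# in ~/.bashrc: prep the package, rename the tree and turn it into a git repo
fedgit() {
    fedpkg prep &&
    mv "$1" "$1.git" &&
    cd "$1.git" &&
    git init && git add . && git commit -m 'init'
}

After that, fedgit velocity-1.6.3 does the whole dance in one step.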

Now we can easily work in our new git repository, edit source file in question and do:

$ git add src/test/org/apache/velocity/test/VelocityServletTestCase.java
$ git commit -m 'Fix test for servlet api 2.5'
$ git format-patch HEAD~1

This created a commit with a descriptive message and generated a patch file, 0001-Fix-test-for-servlet-api-2.5.patch, in our current directory. This is what the patch looks like:

From 8758e3c83411ffadc084d241217fc25f1fd31f42 Mon Sep 17 00:00:00 2001
From: Stanislav Ochotnicky
Date: Thu, 14 Oct 2010 10:20:52 +0200
Subject: [PATCH] Fix test for servlet api 2.5

---
.../velocity/test/VelocityServletTestCase.java | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/src/test/org/apache/velocity/test/VelocityServletTestCase.java b/src/test/org/apache/velocity/test/VelocityServletTestCase.java
index 824583e..ac0ab5c 100644
--- a/src/test/org/apache/velocity/test/VelocityServletTestCase.java
+++ b/src/test/org/apache/velocity/test/VelocityServletTestCase.java
@@ -16,7 +16,7 @@ package org.apache.velocity.test;
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
- * under the License.
+ * under the License.
*/

import java.io.IOException;
@@ -149,6 +149,11 @@ public class VelocityServletTestCase extends TestCase
return this;
}

+ public String getContextPath()
+ {
+ return "";
+ }
+
public String getServletContextName()
{
return "VelocityTestContext";
--
1.7.2.3

Now that we have the patch prepared for velocity, we just need to use it in the spec file and we’re done.

Let’s say our first attempt at the patch didn’t work as expected and the build (or test) still failed. We modify the sources again and do another commit. What we have now is:

$ git log --format=oneline
c15f7e02eaae93b755cc0bfde6def3d6e67d2b0f (HEAD, master) Fix previous commit
3e3d654c142c7028c9c7895579fba204c4c4bf08 Fix test for servlet api 2.5
2f32554ddf892f4cca3f78b1f82a7c3ab169c357 init

We don’t want two patches in the spec file for one fix, so: time for git magic. You’ve probably heard of git rebase if you’ve been using git for a while. What we want to do now is merge the last two commits into one, or in git-speak “squash” them. To do this:

$ git rebase -i HEAD~2

Now your editor should pop-up with this text:

pick 3e3d654 Fix test for servlet api 2.5
pick c15f7e0 Fix previous commit

# Rebase 2f32554..c15f7e0 onto 2f32554
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#

So we just need to change “pick c15f7e0 Fix previous commit” into “squash c15f7e0 Fix previous commit” (you can also use just ‘s’ instead of ‘squash’). Save. Close. Another editor window will open with something like this:

# This is a combination of 2 commits.
# The first commit's message is:

Fix test for servlet api 2.5

# This is the 2nd commit message:

Fix previous commit

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.

In this case we will delete the second message, because we just want to pretend our first attempt was perfect :-). Save. Close. Now we have:

$ git log --format=oneline
cbabb6ac43f7bdb8e52ccd09c25cfd0a032b553c (HEAD, master) Fix test for servlet api 2.5
2f32554ddf892f4cca3f78b1f82a7c3ab169c357 init

Repeat as many times as you want. You can also re-order commits and change commit messages with rebase (note that if you just want to change the last commit message you can do "git commit --amend"). I generally don’t create commits until I have a working patch, though.

So why do I think all this mumbo-jumbo improves my workflow? Let’s see:

  • I can have long comments for every patch I create (instead of a line or two in the spec file)
  • I can use the same patches to send directly to upstream
  • I don’t have to juggle around with diff and remember what files I changed where
  • Probably several other things I haven’t even realized

A few things bother me, of course. git format-patch generates filenames that differ from the standard practice of %{name}-%{version}-message.patch. This is not a git problem. For packages where only my patches exist I stick with git naming, but where different patches already exist I stick with the naming they started with. Another thing that bothers me is that creating the initial repository with “fedpkg prep” hides the patches that were already applied to the sources. That’s why I am thinking about re-working my packages so that all patches live in my git repositories as commits with descriptive messages, as sketched below. No need for comments in the spec file anymore. Perhaps someone can suggest other improvements to my approach.
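One possible way to do that rework, sketched here with made-up patch names: import the pristine sources as the first commit, then apply each patch from the spec as its own commit. git am works directly on git format-patch output; plain diffs can go through git apply --index followed by git commit.

$ git init && git add . && git commit -m 'import velocity-1.6.3 sources'
$ git am ../0001-Fix-test-for-servlet-api-2.5.patch
$ git am ../0002-Another-descriptive-message.patch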

Uploading original photos from Digikam to Flickr

August 28, 2010 in en, kde, open source, photography, programming, projects, qt, software

I have been using Digikam for managing my photos for some time. It’s pretty neat software and development is progressing quite fast. You can think of it as Lightroom, just without the neat non-destructive editing. But that is coming too, thanks to Google and another round of Summer of Code participants. But I wouldn’t be writing this blog post if Digikam were flawless, would I? :-)

First I’ll describe my work-flow in a few short bullet-points :-)

  • Shoot a LOT of photos
  • Delete almost as many photos
  • Geotag, tag/keyword and rate remaining photos
  • If it was a party or something similar, upload to Facebook right away
  • Pick a few photos and improve them a bit with Gimp (nothing fancy, just crop/levels)
  • Upload all photos to Flickr as a backup/sharing place

Digikam enables me to work like this, except for the last point. Why? Because apparently the Digikam developers don’t think anyone would upload their photos to Flickr without first resizing/recompressing them. This is what the Flickr export dialog looks like:

See the problem? Even if I don’t choose “Resize photos before uploading”, Digikam will still re-compress them, which is a Really-Bad-Thing(tm) to do to jpeg files. I had some previous experience with Qt3 and even Qt4, so I thought it might be a good idea to look into fixing this small annoyance. I will not bore you with the details of how I checked out the svn repository with git and the rest of the stuff. Here is the result:

If you un-check “Send original file (no resizing)”, the original settings will appear and you can resize/recompress as much as you want (Blasphemy! Madness!). The patch is not flawless, because it won’t prevent you from trying to upload RAW files to Flickr, but it’s good enough for me :-) It’s not even that big; the stats look like this:

 flickrexport/flickrtalker.cpp |   18 ++++++++++++------
 flickrexport/flickrtalker.h   |    2 +-
 flickrexport/flickrwidget.cpp |   34 ++++++++++++++++++++++++++--------
 flickrexport/flickrwidget.h   |    5 +++++
 flickrexport/flickrwindow.cpp |    4 ++++
 flickrexport/flickrwindow.h   |    1 +
 6 files changed, 49 insertions(+), 15 deletions(-)

You can download the patch from my Dropbox for now, until the bug report I created some time ago gets sorted out (don’t hold your breath, though). The patch applies cleanly across all versions of kipi-plugins I tried (from 0.8.something to 1.4.0). Happy uploading.

Downloading trailers on Linux – final solution

May 10, 2010 in en, linux, open source, projects, python, software

I love films. All of them, to be exact. I believe you just have to be in the right mood to enjoy even a few of the worst movies ever made. Even though I am a proponent of the open source philosophy, we as a society are obviously not ready to embrace it in the entertainment industry just yet.

This is where www.apple.com/trailers comes into play. Apple made great deals with movie studios, and you can watch/download the newest movie trailers. Well…sort of. Apple employs a variety of restrictions which make this site next to useless on a Linux desktop. It hides the links to the trailers themselves behind reference files, so that when you download with your favorite browser you only get a small reference file, not the trailer itself. And that is after you circumvent the user-agent protection, because Apple believes nothing but iTunes/iPad/iOtherAppleStuff should access these trailers. There are scripts around that can make downloading possible for Linux users. I had been using the Apple Trailer Download script for Greasemonkey for quite some time, but it always stopped working after a while.

Another opportunity for me, I guess. I have been trying to improve my Python-fu for some time, so what better way than a small project like this? I started last weekend after I found out Apple actually publishes JSON data for the trailers on its site. This made access from Python quite easy and quite resilient to changes to the website itself (as long as Apple doesn’t pull the whole JSON thingy…but they are actively using it too). Long story short…there are two outputs from this endeavor:

  • pytrailer – python module to simplify access to movies on apple.com/trailers
  • pyqtrailer – Qt4 interface that displays poster, movie information and enables downloading of trailers

You can report bugs on the respective websites (there are quite a few open now, but basic downloading of HD trailers works). If you want to try it out, just running:

# easy_install pyqtrailer
should work as long as you have PyQt4 installed. You can just run pyqtrailer now and you should see something like this:

That’s it. I will improve/fix it a bit but don’t expect too much :-)

And he’s back! (from hibernating)

March 27, 2010 in bug, en, kernel, linux, open source, problem, software

What better way to celebrate the summer solstice than by making my computer able to hibernate? Since my last post a lot has happened. I got a new phone (HTC Hero FTW!), I finished university, went traveling a bit, and I also got a new notebook (because the old one died on me). R.I.P. Thinkpad R51, welcome Thinkpad T500. There are several things I could start writing about, from how great the Hero and Android are to use, all the way to today’s blog post: how to make my computer hibernate.

Linux has had support for hibernating for quite a few years now, and although it’s not perfect, it usually works out of the box. What it needs, however, is a swap device big enough to store an image of memory. Here I hit a problem. When I got my new Thinkpad I thought “Hey, I have 4GB of RAM…why would I need swap?”. And even if I REALLY needed more than 4GB of RAM, I could still create temporary swap using a swapfile. Unfortunately I couldn’t make a swapfile on LVM work with TuxOnIce. TuxOnIce does have another alternative to swap or a swapfile for hibernating: the filewriter, which is quite similar to swapfile support. I managed to get it to work (after some effort, kernel debugging and one small patch to TuxOnIce).

I set FilewriterLocation in hibernate.conf to point to the place where I wanted to store the hibernation file and set the size to 4GB. As instructed in the TuxOnIce HOWTO, I then ran

hibernate --no-suspend

to create the image. It created the file as expected, but when it was supposed to tell me the settings for the bootloader (the resume argument) it silently failed. When I tried again, the whole computer froze. I was puzzled. How could this happen? I am using Linux, so things like this don’t happen! But hey, I should be able to figure out what’s wrong, right? I set up my kernel to include netconsole and ran hibernate again. This time I caught where the bug happened. The output was something like this:

TuxOnIce: No image found.
BUG: unable to handle kernel paging request at 6539207a
IP: [] toi_attr_store+0x186/0x2a0
*pdpt = 0000000032732001 *pde = 0000000000000000
Oops: 0000 [#1] PREEMPT SMP
last sysfs file: /sys/power/tuxonice/file/target
Modules linked in: netconsole aes_i586 aes_generic radeon ttm drm_kms_helper drm i2c_algo_bit sco bnep ipt_MASQUERADE iptable_nat nf_nat ipt_LOG nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack ipt_REJECT xt_tcpudp iptable_filter ip_tables x_tables rfcomm l2cap vboxnetadp vboxnetflt vboxdrv arc4 iwlagn iwlcore mac80211 sdhci_pci snd_hda_codec_conexant sdhci pcmcia e1000e uvcvideo mmc_core cfg80211 snd_hda_intel yenta_socket btusb rsrc_nonstatic tpm_tis pcspkr pcmcia_core videodev v4l1_compat intel_agp wmi agpgart tpm snd_hda_codec tpm_bios video fuse xfs raid10 raid1 raid0 md_mod scsi_wait_scan sbp2 ohci1394 ieee1394 usbhid uhci_hcd usb_storage ehci_hcd usbcore sr_mod sg uvesafb cfbfillrect cfbimgblt cn cfbcopyarea [last unloaded: microcode]

Pid: 12870, comm: hibernate Not tainted 2.6.33.1-w0rm #16 2082BRG/2082BRG
EIP: 0060:[] EFLAGS: 00010202 CPU: 0
EIP is at toi_attr_store+0x186/0x2a0
EAX: 00000000 EBX: 36203430 ECX: 00000000 EDX: f231f200
ESI: 65392066 EDI: 00f60062 EBP: f6006331 ESP: f62a7f14
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process hibernate (pid: 12870, ti=f62a6000 task=f20a0270 task.ti=f62a6000)
Stack:
00000000 fffffff4 00000001 c1790ca0 00000000 f6e8ab64 c16c75a4 f6d1c380
<0> f62a7f64 c114298d 00000015 00000015 b7709000 f21385c0 f6d1c394 c16c75a4
<0> f6ec7ac0 f21385c0 b7709000 00000015 f62a7f8c c10f207c f62a7f98 00000000
Call Trace:
[] ? sysfs_write_file+0x9d/0x100
[] ? vfs_write+0x9c/0x180
[] ? sysfs_write_file+0x0/0x100
[] ? sys_write+0x3d/0x70
[] ? sysenter_do_call+0x12/0x22
Code: c7 45 e0 00 00 00 00 3b 5d 08 0f 85 e9 fe ff ff 8b 46 20 85 c0 0f 84 de fe
ff ff ff d0 8b 7d e0 85 ff 8d 76 00 0f 84 d9 fe ff ff <8b> 46 14 31 d2 e8 60 03
05 00 8b 46 10 c7 46 14 00 00 00 00 a8
EIP: [] toi_attr_store+0x186/0x2a0 SS:ESP 0068:f62a7f14
CR2: 000000006539207a
---[ end trace 124a5ee29ef71277 ]---

So what can we deduce from this bug output? Let’s go from the top. The bug name (unable to handle kernel paging request) means that it is likely a memory corruption issue: someone accessed memory they were not supposed to. IP tells us that the function where the error occurred was toi_attr_store, in an unknown file at an unknown line (I don’t have debug information included in the kernel). There is other information we could get from that output, but I didn’t really need it. A quick search through the kernel sources told me that toi_attr_store is a function inside kernel/power/tuxonice_sysfs.c. I scanned the code, learning approximately what it did. Then I placed printk statements throughout the function so that I could approximate where inside the function the code failed. After some time I narrowed it down to the following snippet:


if (!result)
        result = count;

/* Side effect routine? */
if (result == count && sysfs_data->write_side_effect)
        sysfs_data->write_side_effect();

/* Free temporary buffers */
if (assigned_temp_buffer) {
        toi_free_page(31,
                (unsigned long) sysfs_data->data.string.variable);
        sysfs_data->data.string.variable = NULL;
}

The kernel crashed when it tried to call toi_free_page. A few reboots and printks later I found out that this was just a coincidence: the sysfs_data variable itself became corrupt even before the call to toi_free_page. A good candidate? Of course: write_side_effect. But what exactly was write_side_effect? The function was passed in as an argument, so I wasn’t able to easily find out what real code was executed at this point. Time to find out! From my previous debugging attempts I knew the code failed while it tried to write the location of my resume file into /sys/power/tuxonice/file/target. The TuxOnIce code defined the handling of string sysfs arguments as such:


#define SYSFS_STRING(_name, _mode, _string, _max_len, _flags, _wse) { \
        .attr = {.name = _name , .mode = _mode }, \
        .type = TOI_SYSFS_DATA_STRING, \
        .flags = _flags, \
        .data = { .string = { .variable = _string, .max_length = _max_len } }, \
        .write_side_effect = _wse }

I found this macro used inside tuxonice_file.c source code like this:

 
SYSFS_STRING("target", SYSFS_RW, toi_file_target, 256,
SYSFS_NEEDS_SM_FOR_WRITE, test_toi_file_target)

So we found our write_side_effect code: the test_toi_file_target function. In one part, this function called hex_dump_to_buffer to convert the device UUID into a hexadecimal string. The call looked like this:

 
hex_dump_to_buffer(fs_info->uuid, 16, 32, 1, buf, 50, 0);

This should convert the input (fs_info->uuid) into a hexadecimal string and store it inside buf. The author of the original code correctly anticipated that the function adds spaces between bytes and therefore needs extra space in the buffer (the argument 50 tells hex_dump_to_buffer how big the output buffer is). Unfortunately, that same author declared buf as a 33-char array. hex_dump_to_buffer therefore stepped outside the buffer and corrupted memory, causing all the problems. I fixed this bug and sent a patch to the tuxonice-devel mailing list. As of now, it is already in the git repository, ready to be released with the next bugfix release of TuxOnIce.
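Condensed into a sketch (not the verbatim TuxOnIce code, just the shape of the bug and of the fix; the hex_dump_to_buffer signature is the real kernel one):

/* 16 bytes dumped one byte per group need 16*3 - 1 = 47 characters
 * plus a terminating NUL, so a 33-byte buffer is too small... */
char buf[33];
hex_dump_to_buffer(fs_info->uuid, 16, 32, 1, buf, 50, 0); /* claims 50 bytes */

/* ...while the fix declares as much space as the call advertises: */
char buf[50];
hex_dump_to_buffer(fs_info->uuid, 16, 32, 1, buf, sizeof(buf), 0);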

That is everything for today, but as I already noted, I am using LVM on my system (except for the root partition) and also fbsplash for nice animations while booting. I am using an initrd for this, and I will have another post on that topic.

Mobile (not so) open standards

August 25, 2009 in en, linux, lock-in, mobile, problem, projects, rant

Yesterday I promised I’d talk about why I hate mobile phones. Of course I didn’t mean all of them, just the ones I have to deal with. Why? Well, my mobile phone kind of died a few days ago. I have a Nokia N73, and it’s really quite a good phone even if it’s a bit old by today’s standards. You control the phone using a “joystick” kind of thing in the upper part of the keyboard. I decided to include an image so you don’t have to look for it :-)

So this joystick stopped working (even the slightest touch would be evaluated as pushing it, so it was unusable). I didn’t have my backup phone with me, but a friend gave me her battered Siemens S55. So what was the problem? Well, I have had the same sim card for almost 10 years now. Back then, only 100 contacts would fit on it, and I have almost 300 contacts in my N73. So how do I get all the contacts from one phone to the other? Normally I could just send them over bluetooth, but since I couldn’t really control my N73 this was out of the question; I was barely able to turn bluetooth on. I thought I’d use the SyncML interface to get the vCards from the N73 to my computer and then sync them back to the S55. In the end I kind of did, but boy was that an unpleasant experience!

So what exactly happened? I installed the OpenSync libraries and tools, and using multisyncgui I created a sync group with one side being the file-sync plugin and the other the syncml-obex-client plugin. Configuration of the file-sync plugin was mostly just changing the path to the directory I wanted to sync to. The final version looked like this:





<config>
   <path>/tmp/sync</path>
   <objtype>contact</objtype>
   <objformat>vcard30</objformat>
</config>


Configuration of syncml-obex-client proved much more challenging. It appears that the Nokia N73 has two quirks:

  • It only talks to a SyncML client if the client says its name is “PC Suite”
  • It contains a bug that causes it to freeze after a certain amount of data if the configuration is not correct

The first of these quirks is mentioned in almost every tutorial on data synchronization in Linux. The second one, however, cost me quite some time. My Nokia N73 would freeze after synchronizing approximately 220-240 contacts, and to get it working again I had to restart the whole phone. In the end I found out that I needed to set the recvLimit parameter to 10000 in order to synchronize everything. The final settings for syncml-obex-client look like this:




<config>
   <type>2</type>
   <bluetooth_address>00:1B:33:3A:D1:37</bluetooth_address>
   <bluetooth_channel>13</bluetooth_channel>
   <interface>0</interface>
   <identifier>PC Suite</identifier>
   <version>1</version>
   <wbxml>1</wbxml>
   <username></username>
   <password></password>
   <usestringtable>1</usestringtable>
   <onlyreplace>0</onlyreplace>
   <onlylocaltime>0</onlylocaltime>
   <recvLimit>10000</recvLimit>
   <maxObjSize>0</maxObjSize>
   <contact_db>Contacts</contact_db>
   <objtype>contact</objtype>
   <objformat>vcard21</objformat>
</config>


So after all that, I was able to get the vCards from my N73 to my notebook; for every vCard, OpenSync created a file in the /tmp/sync directory. Now came the interesting part: how to get these vCards onto the Siemens S55?

A simple Google search for Siemens S55 synchronization on Linux suggested that the tool best suited to the job was scmxx, a little app specialized in certain Siemens phones. According to some manuals it was supposed to be able to upload the vCards themselves, but I couldn’t get that to work; scmxx kept complaining about invalid command line arguments. After some testing I found out that it could at least access and change the sim card phone numbers.

Unfortunately for me, my sim card has a limit of 100 phone numbers, each with a 14-character identifier (name). This meant I needed to convert the vCards from the N73 to the special format scmxx used. That format looks something like this:


1,"09116532168","Jones Rob"
2,"09223344567","Moore John"
...

The first column is the number of the slot that will be overwritten with the new information, the second is the phone number, and the third is the contact’s name (fewer than 15 characters).

So I fired up vim and started coding a conversion script. It didn’t take long, and I had my contacts in the old-new phone. There are a lot of hard-coded things in the script since I don’t plan to ever use it again, but you can download it from my Dropbox. Consider it public domain, and if anyone asks, I didn’t have anything to do with it :-)


import os
import re

MAX_CONTACTS = 100

class PbEntry(object):
    """One phonebook entry extracted from a vCard."""
    def __init__(self, name, tel, year, month, day):
        self.name = name
        self.tel = tel
        self.year = year
        self.month = month
        self.day = day

def cmp_pb(e1, e2):
    # Sort by REV date, newest entries first
    if e1.year > e2.year:
        return -1
    elif e1.year < e2.year:
        return 1
    else:
        if e1.month > e2.month:
            return -1
        elif e1.month < e2.month:
            return 1
        return 0

telRe = re.compile('TEL(;TYPE=\w+)*:([*#+0-9]+)', re.M)
revRe = re.compile('REV:(\d{4})(\d{2})(\d{2}).*', re.M)
nameRe = re.compile('^N:(.*);(.*);;;', re.M)

def get_entry_from_text(text):
    # Pull the name, phone number and revision date out of one vCard
    ret = nameRe.search(text)
    surname = None
    name = None
    tel = None
    if ret:
        surname = ret.group(1)
        name = ret.group(2)

    ret = telRe.search(text)
    if ret:
        tel = ret.group(len(ret.groups()))

    if surname and name:
        fn = "%s %s" % (surname, name)
    elif surname:
        fn = surname
    else:
        fn = name

    # The sim card only stores 14-character names
    if fn:
        ret = re.search('(.{0,14}).*', fn)
        fn = ret.group(1)

    ret = revRe.search(text)
    year = ret.group(1)
    month = ret.group(2)
    day = ret.group(3)

    return PbEntry(fn, tel, year, month, day)

entries = []

files = os.listdir('/tmp/sync')
for file in files:
    fh = open('/tmp/sync/%s' % file, 'r')
    content = fh.read()
    entry = get_entry_from_text(content)
    entries.append(entry)

entries = sorted(entries, cmp=cmp_pb)

# Emit at most MAX_CONTACTS entries in scmxx's slot,"number","name" format
i = 1
for entry in entries:
    print '%d,"%s","%s"' % (i, entry.tel, entry.name)
    i = i + 1
    if i > MAX_CONTACTS:
        break
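If you save the script as, say, vcard2scmxx.py (a name I just made up), you can capture its output like this:

$ python vcard2scmxx.py > phonebook.txt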

I have had my share of incompatibilities between mobile phones, computers and other devices. Fortunately, most devices sold today use open communication protocols for sharing data (and other stuff). Too bad people had to put so much energy into reverse engineering proprietary solutions in the past. Just ranting about this vendor lock-in could fill quite a few pages. Imagine having 300+ contacts and calendar information in your phone of brand X. When buying your next phone, you would be able to synchronize your data only if the new phone was also from brand X. Would that affect your decision? It sure would affect mine.

Now I have a choice. After fixing my old N73 I will start looking into a new phone. So far the HTC Hero looks pretty cool, and the reviews are not half bad.

Final thoughts on GSoC

August 24, 2009 in en, google, gsoc, linux, open source, projects, software engineering

So this year’s Google Summer of Code is officially over. Today at 19:00 UTC was the deadline for sending in evaluations for both mentors and students, so I think some kind of summary of what was happening and what I was doing is in order.

I was working on implementing a neat idea that would allow previously impossible things for Gentoo users. The original name for the idea was “Tree-wide collision checking and provided files database”; you can still find it on the Gentoo wiki. I later named the project collagen (as in collision generator). Of course, the implemented system is quite a bit different from the original wiki idea. Some things were added, some were removed. If you want to relive how I worked on my project, you can read my weekly reports on the gentoo-soc mailing list (I will not repeat them here). Some information was also aggregated on soc.gentooexperimental.org. As the final “pencils down” date approached I created final bug reports for features not present in the delivered release (and for bugs that were present, for that matter). Neither the missing features nor the present bugs are real show-stoppers; they mostly affect performance. And more importantly, I plan to continue my work on this project and perhaps on other Gentoo projects. I guess some research into what those projects are is in order :-)

Before GSoC I kind of had an idea of how open-source projects work, since I’ve participated in some to a degree. However, I underestimated a lot of things, and now I would do them differently. But that’s a good thing. I like the idea that no project is a failed one as long as you learn something from it. It reminds me of a recent post by Jeff Atwood about Microsoft Bob and other disasters of software engineering. To quote him:

The only truly failed project is the one where you didn’t learn anything along the way.

I believe I have learned a lot. I believe that if I started collagen now, it would be much better in the end. And the best thing is that I can still do that. I get to continue my project and learn some more. If I learned anything during my work on collagen it’s this:

If you develop something in a language without strong type checking, CREATE THE DAMN UNIT TESTS! It will make your life much easier later on.

In the next episode: why I think Gmail is corrupting people’s minds, and why I hate mobile phones.