Problems with running gpg-agent as root

February 14, 2011 in bug, fedora, howto, problem, security

This is going to be a short post for people experiencing various issues with pinentry and gpg-agent. This mostly happens on systems that only have GnuPG 2 (gpg2).

I have been asked to look at bug 676034 in Red Hat Enterprise Linux. There were actually two issues there:

  • Running pinentry with the DISPLAY variable set but no GUI pinentry helpers available
  • Using gpg on the console after doing “su -”

The first problem was relatively easy to figure out. Pinentry sees the DISPLAY variable and looks for the pinentry-gtk, pinentry-qt or pinentry-qt4 helpers to ask for the passphrase. Unfortunately, if none of these GUI helpers can be found, pinentry doesn’t fall back to their console counterpart. The workaround is simple: unset the DISPLAY variable if you are working over an ssh connection (or don’t use X forwarding when you don’t need it). More recent pinentry releases fail over properly to pinentry-curses.
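To make the workaround concrete, here is what a typical ssh session would look like (the file name in the comment is just an example):

```shell
# Over ssh with X forwarding but no GUI pinentry helper installed,
# drop the DISPLAY variable so pinentry prompts on the terminal instead:
unset DISPLAY
# After this, e.g. "gpg --decrypt file.gpg" will ask for the
# passphrase via pinentry-curses on the current tty.
```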

The second problem was a bit trickier to figure out, although in the end it was a facepalm situation. When trying to use GnuPG as root on the console, hoping for pinentry-curses to ask for the passphrase, users were instead greeted with this message: ERR 83886179 Operation cancelled. To make things more confusing, everything seemed to work when logging in as root directly over ssh.

At first I thought this must be caused by environment variables, but that turned out to be an incorrect assumption. The real reason was that the current tty was still owned by the original user, not root. This seems to confuse gpg-agent and/or the ncurses pinentry. I still have to investigate which one is the real culprit, but the bug appears to be fixed at least in recent Fedoras.

So what should you do if you have weird problems with gpg and pinentry as root? Here’s what:


$ su -
[enter password]
# chown root `tty`
[use gpg, pinentry as you want]

Easy, right? As a final note… I’ve been to FOSDEM and I plan to blog about it, but I guess I am waiting for the videos to show up online. It’s quite possible I’ll blog about it before that, though, since it’s taking a while.

Mount me, but be careful please!

June 30, 2009 in en, gsoc, howto, linux, open source, problem, projects, security, software

First a bold note: I already have a repository on Gentoo infrastructure for working on my GSoC project. Check it out if you want.

Last time I mentioned that I won’t go into technical details of my GSoC project on this blog any more. For those you can keep an eye on my project on gentooexperimental and/or the Gentoo mailing lists, namely gentoo-qa and gentoo-soc. But there is one interesting thing I found out while working on Collagen.

One part of my project was automating the creation of a chroot environment for compiling packages. For this I created a simple shell script that you can see in my repository. Let me pick one line out of a previous version of this script:

mount -o bind,ro "$DIR1" "$DIR2"

What does this line do? Or more specifically, what should it do? It should create a virtual copy of the contents of directory DIR1 inside directory DIR2. The copy in DIR2 should be read-only, meaning no creating new files, no changing existing ones and so on. The command succeeds, so as far as we know everything should work, right? Wrong!

The command above actually fails silently. There is a bug in current Linux kernels (2.6.30 as of this day): when you execute mount with “-o bind,ro” as arguments, the “ro” part is silently ignored. Unfortunately it is still added to /etc/mtab, so you would not see that DIR2 is writable unless you tried writing to it yourself. The current proper way to create a read-only bind mount is therefore this:

mount -o bind "$DIR1" "$DIR2"
mount -o remount,ro "$DIR2"

There is an issue of race conditions with this approach, but in most situations that should not be a problem. You can find more information about read-only bind mounts in the LWN article on the topic.
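Because of the silent failure described above, a script should probe the mount point after the remount instead of trusting /etc/mtab. A small sketch (the paths in the usage comment are examples, and the mount commands need root; the probe itself is harmless):

```shell
# Two-step read-only bind mount, plus a probe that verifies the "ro"
# flag actually took effect (run the mounts as root):
bind_ro() {  # $1 = source dir, $2 = mount point
    mount -o bind "$1" "$2" &&
    mount -o remount,ro "$2"
}

# Don't trust /etc/mtab: try to actually create a file.
is_writable() {  # $1 = directory
    if touch "$1/.probe" 2>/dev/null; then
        rm -f "$1/.probe"
        return 0
    fi
    return 1
}

# Usage (as root):
#   bind_ro /srv/data /mnt/ro
#   is_writable /mnt/ro && echo "WARNING: ro was silently ignored"
```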

Once it’s out, it’s out

November 15, 2008 in en, privacy, security

Have you ever said anything you wanted to take back right after you finished the sentence? Well, maybe you got lucky and there were only a few people around. But once you put something on the web, it’s there forever. The Internet doesn’t have a delete button.

There are always the omnipresent caches and archives, so even deleting content from your site doesn’t help. This happened recently when Apple pulled the biography of their new executive Mark Papermaster from their website, after a court barred him from reporting to work at Apple until his lawsuit with IBM is resolved. I will not go into details (you can read the Ars Technica coverage of the issue) because my point lies elsewhere. Say what you want: if it is connected to the Internet, it is public. Full stop.

The Internet is full of stories where people tried to hide their humiliations and errors from the public through injunctions, lawsuits and whatnot. The end result is almost always the Streisand effect. If you read the wiki, there are some nice examples of why you should keep your private things private. Once it’s out, trying to censor it will only make things worse (and the more famous/sexy you are, the worse it gets). It might be a good time to read some guides to privacy right now. I know you are not going to do that anyway, but it is still my dream that one day a new generation will be able to protect their privacy online. Unfortunately, anecdotal evidence suggests otherwise.

By the way, does anyone know a simple list of things to improve your privacy online?

Earn money sending spam!

November 14, 2008 in en, privacy, security

Seriously. According to a joint study by security researchers, the Storm botnet can generate as much as $3.5M of revenue per year. It is definitely one of the most ingenious research and analytical papers I have read so far.

In order to measure the effectiveness of spam campaigns, the researchers joined the Storm botnet with bots that were used to conduct a MITM attack on Storm itself. These bots changed spam campaigns slightly and redirected the targets of the campaigns (users) to servers controlled by the researchers. These servers mimicked the websites of the spammers and counted the number of visitors and the number of actual victims who fell for the scams and provided their information (credit card number, social security number, etc.). If the results are correct, spam campaigns convert in less than 0.00001% of cases. This number is indeed extremely low, but if you consider the size of Storm and the number of emails it sends every day, you get to more interesting numbers, ranging from $7,000 to $9,500 of revenue per DAY.
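The arithmetic is easy to sanity-check. The mail volume and average sale price below are illustrative assumptions of mine, not figures from the paper, but they show how a microscopic conversion rate still adds up:

```shell
emails_per_day=1000000000   # assume ~1 billion spam mails per day (illustrative)
conversion=0.0000001        # "less than 0.00001%" of mails lead to a sale
price=100                   # assumed average purchase in dollars (illustrative)

daily=$(awk "BEGIN { print $emails_per_day * $conversion * $price }")
yearly=$(awk "BEGIN { print $daily * 365 }")
echo "revenue: \$$daily per day, \$$yearly per year"
```

With these made-up inputs the numbers come out in the same ballpark as the article’s $7,000 to $9,500 per day and ~$3.5M per year.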

I left out a few interesting details, so if you have some time, consider reading the whole paper (12 pages).

We need CAPTHHA

October 11, 2008 in en, privacy, rant, security, software engineering

I am pretty sure everyone has seen a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) before. Maybe you didn’t know the (full) name, but you have encountered them when registering accounts, posting comments or accessing some parts of the web. You know, those annoying things that exercise your ability to recognize distorted words on weird backgrounds.

CAPTCHAs are used to protect against automated attacks. For example, automatic registration of new users on Gmail would create great opportunities for spammers. CAPTCHAs mostly work, even if they get “hacked” from time to time. The biggest problem? They are reaching levels where even humans have trouble reading the letters. I still have nightmares when I remember the CAPTCHAs used on RapidShare; somehow, telling cats from dogs was not that easy for me. I am not sure about the “hackability” of reCAPTCHA, but as far as usability goes, it’s one of the best for me. Too bad only a few sites are using it.

The main problem of CAPTCHAs is not complexity but relay attacks and human solvers from third-world countries paid to solve thousands of CAPTCHAs a day. What we really need is a CAPTHHA (Completely Automated Public Test to tell Humans and Humans Apart). Computer science is far from being able to tell humans with “clean” intentions from those being paid to get past the defences. One solution would be to issue certificates of “humanity” signed by a central authority; you could then ban users who misuse their certificates. There are of course privacy and security problems with this approach, not to mention financial “issues”, so I guess this is not how it’s going to work. Other approaches have been tried too, but they usually pose problems for disabled people. I am certainly interested in how computer science solves this one.

Google Chrome mass betatesting

September 16, 2008 in en, google, rant, security, software, software engineering

Google released its own web browser called Chrome a few weeks ago, and the whole web has been buzzing with excitement since. They did it Google style: everything is neat, clean and simple, and quite a few features are unique. Google engineers obviously put a lot of thought into scratching their itches with web applications. The Javascript engine is fast, and the whole browser is built around the idea that the web is just a place for applications. One of the most touted things about Chrome was its security features. You can read a whole account of basic Chrome features on its project page.

In Chrome, each tab runs as a separate process communicating with the main window through standard IPC. This means that if there is a fatal error in handling some page (malicious or otherwise), other tabs should be unaffected, and your half-written witty response to that jerk on the forum will not be lost. Chrome also has other security enhancements that should make it more secure. I said should. Within a few days of the Chrome release, several security vulnerabilities surfaced, ranging from merely annoying DoS to plain dangerous remote code execution.

What caught my attention was a bug that enabled downloading files to the user’s desktop without user confirmation. It was caused by Googlers using an older version of the WebKit open source rendering engine in Chrome. Integrating “foreign” software with your application can be tricky, especially if you have to ensure that everything keeps working smoothly after an upgrade. In that respect, it is sometimes OK to use older versions of libraries, as long as you fix at least the security bugs. People write buggy software, Google engineers included. I am just surprised that they don’t have a process that would prevent distributing software with known security vulnerabilities to the public.

And that is the main problem. Chrome is beta software, so bugs are to be expected. But Google went public with Chrome in the worst possible way: they put a link to the Chrome download page on their home page, making hundreds of thousands of people their beta testers. People who have no idea what “beta testing” actually means. They just know that Google has some cool new stuff, so let’s try it, right? Wrong. Most of us expect our browser to be safe for e-banking, porn and kids (not necessarily in that order). Unfortunately, Chrome is not that kind of browser. Yet. I am pretty sure it is going to be a great browser in the future, though. But right now Google should put a big red sign saying “DANGEROUS” in the middle of the Chrome download page.

Until Chrome becomes polished enough for Google to stop calling it “beta”, it has no place on the desktops of ordinary computer users. Even the oh-so-evil Microsoft doesn’t show a download link for the IE8 beta on their main page to promote it. The mentioned issues aside, Chrome really does sport a few good ideas that other browsers could use as well. Try it out, and you will like it. Then go back to your old browser for the time being.

Stumbleupon password policy

September 10, 2008 in en, rant, security

I already wrote one post about passwords a few weeks ago. As much as we would like them to, passwords are not going away in the foreseeable future. And it seems I found something worth mentioning again :)

Recently I started using StumbleUpon. For those who don’t know the site, here is a short description from their main page:

StumbleUpon discovers web sites based on your interests. Whether it’s a web page, photo or video, our personalized recommendation engine learns what you like, and brings you more.

It’s basically a social networking site for link rating and exchange, and a nice way to discover yet unknown gems of the Interweb. Just stumble around :)

Here’s what sparked my interest. After registering with the site I received the following email:

StumbleUpon

Discover new web sites

Hi xxx,
Thanks for joining StumbleUpon! Please click here
to verify your email address:

http://www.stumbleupon.com/verifyuser.php?email=3Dxxx%4=0gmail.com&verification=3Dd6z505kjmtjox3

Here are your login details, save this information and
store it securely:

Email: xxx@gmail.com

Password: MY PASSWORD IN CLEARTEXT

...
...

What the hell are they thinking? Sending cleartext passwords through email has not been acceptable for quite a few years now, especially for large public websites. There are other options when users forget their passwords, for example:

  • resetting the password to a random one that is usable only once,
  • using control questions, e.g. “What was the name of your first pet?” (not very secure, but still better than cleartext passwords),
  • lots of other options (googling them is left as training for the reader :) )
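The first option is simple enough to sketch in a few lines. This is a hypothetical illustration, not StumbleUpon’s code: the flat file and the e-mail address are placeholders, and a real site would store tokens in its user database.

```shell
# Mail the user a random single-use token, never the password itself.
issue_token() {  # $1 = user's e-mail address
    token=$(openssl rand -hex 32)            # unguessable 256-bit token
    echo "$token $1" >> reset_tokens.txt
    echo "$token"     # this goes into the reset link in the e-mail
}

redeem_token() {  # $1 = token; prints the e-mail it belongs to
    line=$(grep "^$1 " reset_tokens.txt) || return 1
    sed -i "/^$1 /d" reset_tokens.txt        # single use: invalidate immediately
    echo "$line" | cut -d' ' -f2
}
```

Even if the reset e-mail is intercepted, the attacker gets a token that expires after one use, not a password the victim reuses on ten other sites.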

Maybe they count on StumbleUpon being a low-risk site, where losing an account is not dangerous to your online identity. But they obviously forgot that most users use the same password over and over again. So their password for StumbleUpon will be the same as for their Gmail account, which will be the same as xy other passwords. I am only fortunate that I stopped recycling passwords a long time ago. Shame on you, StumbleUpon!

Lack of security is not a problem

August 28, 2008 in en, rant, security, software

A false sense of security is. As Dan Kaminsky pointed out recently, there have been numerous BIG security problems with fundamental Internet services. All of them undermine the basic principles the Internet is built on: routing and DNS.

Can we trust the other side? How do we know that we are “talking” to the same computer as a few days ago? This question is usually answered by encrypting the communication and authenticating through SSL (https). Most websites use self-signed certificates, but these provide only encryption, not authentication. There are quite a few good examples of the security pitfalls of self-signed certificates.

Recently I also stumbled on a nice Firefox extension called Perspectives. Usually only your browser checks the security certificate of the https server you are connecting to. If an attacker takes over the path between you and the destination server and tries to execute a MITM attack, Perspectives will detect this and warn you. It will even warn you if the certificate changed recently. This makes even self-signed certificates somewhat more secure. Without Perspectives you could easily be lured into a den of wolves. For a more in-depth explanation of how Perspectives works, see the original publication.
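The core of the idea can be sketched with plain openssl: remember the SHA-256 fingerprint of the certificate you saw first, and complain when it changes. (Perspectives proper consults several independent network notaries; this single-observer sketch only illustrates the principle, and the file names are mine.)

```shell
cert_fp() {  # $1 = certificate file in PEM format
    openssl x509 -in "$1" -noout -fingerprint -sha256
}

check_cert() {  # $1 = hostname, $2 = PEM file of the cert it presented
    known="known_fp_$1"
    if [ ! -f "$known" ]; then
        cert_fp "$2" > "$known"          # first contact: trust on first use
    elif [ "$(cert_fp "$2")" != "$(cat "$known")" ]; then
        echo "WARNING: certificate for $1 changed - possible MITM"
        return 1
    fi
}
```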

The basic principle still stands: you are most vulnerable when you don’t expect an attack. In other words:

A little paranoia never hurts

So next time you see a warning about an invalid/outdated/self-signed certificate, don’t accept it without thinking about the consequences.

Strong passwords suck, but they don’t have to

August 26, 2008 in en, rant, security

Amrit Williams wrote a nice piece on why passwords suck. But as Martin McKeay pointed out, Amrit didn’t provide any real solutions except maybe using passphrases. Passwords are the gate to the online existence of most people. Most people know that there are certain rules for creating strong passwords (at least I hope so). But only a handful of people use really secure passwords. Moreover, you should have a different password for every program/email account/social networking site/etc. Why? So that when one account is compromised (by whatever means), the others stay safe.

You can find a lot of rules for choosing good passwords all around the Internet. There is only one problem with them: if we really followed all the rules, most of us would end up with 20+ passwords, each longer than 8 characters and most of them without any meaning. Good luck remembering those. But hey! We are in the computer age; we don’t have to remember stuff anymore, right? Why not use a decent password manager? Then you have to remember only one password (but it had better be REALLY secure).
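If the manager remembers the passwords anyway, there is no reason to make them memorable at all. A throwaway generator sketch using /dev/urandom (the character set and default length are just my choices):

```shell
gen_password() {  # $1 = length, defaults to 16
    tr -dc 'A-Za-z0-9!@#%^&*' < /dev/urandom | head -c "${1:-16}"
    echo
}

gen_password 20   # one fresh, meaningless password per site
```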

This approach creates one more problem for us, though: the mobility of our passwords. You want to access website x.y.com? I hope you have your password manager and its database at hand; otherwise you’re screwed. I see two solutions:

  • If you use some kind of UNIX-like system and have a public IP address, you can use a command-line password manager to access your passwords from anywhere.
  • Carry your password manager and its database around with you.

I like the second method more, because you don’t have to worry about firewalls, proxies and similar stuff.

Recently I found out about PortableApps. It’s a set of open source applications designed to run from a USB thumb drive without leaving anything behind after you close them: no registry changes, no temporary files, etc. One of the applications offered is KeePass Password Safe, which uses AES to securely encrypt a database of passwords. This Windows-only set of applications gives you the means to have strong, unique passwords that you can carry around with you. So what are you waiting for? Make them unique!

Note: I tend to use the gpass password manager (Unix-only, but I usually have access to my machine) and remember the most important passwords by heart. I’ll probably migrate to some other multiplatform solution soon (maybe PasswordSafe?).

Note 2: Apparently there is similar (or even better) software for Mac OS X (1Password), though I haven’t tried it.