In a previous post, I gave some examples of FLOSS programs that perform essential computing tasks (namely operating systems, email and cloud storage) which you could use instead of proprietary alternatives. I claimed that the FLOSS versions had the potential to improve the security of your data (in case you're concerned about unauthorised third parties accessing it), without going into detail about how.
This post will explain why entrusting your data to open source software can protect it from snoopers. Please be aware that there are many ways to spy electronically; I'll only be addressing some that the use of FLOSS can guard against. Also be aware that computer security is not my speciality; I'm just trying to present the issues as best I understand them, from several perspectives.
The "Many Eyeballs" Perspective
When I discussed operating systems in my previous post on this subject, I pointed out how hard it is to know exactly what a closed source program does. This has a lot to do with how software is built, which I've explained before: programmers write software in human-readable source code and then run that code through a compiler to produce the binary program that's readable only by a computer. While closed source (a.k.a. proprietary) software is released in binary form only, software released under a FLOSS licence comes with the source code as well, a bit like getting a Haynes manual (those trusty books that detail every single component of an automobile) supplied with your new car. With a proprietary program, by contrast, you get the equivalent of no manual and a car with a sealed bonnet.
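You can see the source-versus-binary gap in miniature using Python's own bytecode. (This is only an analogy: Python bytecode is far easier to read back than a compiled native binary, but the contrast between the two forms is the same in spirit.)

```python
import dis

# Human-readable source code: easy for a person to audit.
source = "total = price * quantity"

# "Compile" it: the result is bytecode meant for the interpreter, not people.
code = compile(source, "<example>", "exec")

# The raw bytes on their own reveal very little to a human reader.
print(code.co_code)

# A disassembler translates the bytecode back into something readable;
# doing the equivalent for a native binary is far harder still.
dis.dis(code)
```

This is roughly the position a security reviewer is in: with the first form you can read the author's intent directly, with the second you have to reverse-engineer it.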
Without the source code, you're forced to treat the program as a black box: all you can see is its external behaviour, which makes it extremely difficult to determine its internal workings. You may wonder whether there's any code in the program that makes it "phone home", reporting information about you to a remote server. You may ask yourself if there are vulnerabilities in the program which would allow someone to log into your system without your knowledge. For questions like these, there's no easy way to find the answers without the source code.
With the source code, you have the opportunity to search for such vulnerabilities. When many people are able to look through the code, all these eyeballs act as a strong defence against the insertion of secret spying routines and hard-coded back doors. This is not just theoretical speculation. In the 1990s, for example, Borland released a database server called InterBase that had a back door intentionally engineered into it (see David Wheeler's article mentioning this), which allowed anyone who knew about it to break in. As long as the program remained proprietary, this vulnerability stayed secret (for at least six years). After InterBase was eventually open sourced, the back door was discovered within months.
Besides revealing a program's inner workings, the availability of source code helps you verify what happens to the data the program sends out. Whenever you want to transmit data securely from your computer, you need to encrypt it using an encryption algorithm, which turns your readable data into "meaningless" cipher text. There are many such algorithms available, some strong, some weak. With access to the source code, you can check for yourself whether the program uses poor-quality encryption routines that would make it a simple matter for third parties to decode your intercepted messages. It's partly for this reason that famed IT security expert Bruce Schneier recommends that security-conscious engineers use FLOSS.
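To see what a "poor-quality encryption routine" might look like, here's a deliberately weak toy cipher of the kind a code review would immediately flag (it's an illustration I've made up, not taken from any real program): a single-byte XOR cipher, whose entire key space is just 256 values.

```python
# A deliberately weak "encryption" routine: XOR every byte with one key byte.
# Anyone reading this source can see the key space is only 256 values.
def xor_encrypt(plaintext: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in plaintext)

ciphertext = xor_encrypt(b"meet me at noon", key=42)

# An attacker -- or an auditor -- can brute-force all 256 keys instantly.
for candidate in range(256):
    guess = xor_encrypt(ciphertext, candidate)
    if b"noon" in guess:
        print(candidate, guess)  # prints: 42 b'meet me at noon'
        break
```

With the source in hand, the weakness is obvious at a glance; as a binary-only black box, you'd have no easy way of knowing your "encrypted" messages were this trivial to crack.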
Keep in mind that the benefits of access to the source code are not automatic. Just because it's possible to review the code for vulnerabilities, that doesn't necessarily mean it gets done. The community that develops the software needs to be sufficiently large, active and knowledgeable about IT security.
The "Keys and Padlocks" Perspective
What I termed the "Many Eyeballs" Perspective is worth knowing, but from what we know about the current spying scandal it isn't actually the most relevant one. The stories that have dominated the front pages for most of this summer are not about secret source code, but rather about who holds the keys to your information.
One of the most important concepts in contemporary computer security is public key encryption. It's a bit complicated, but it's basically a system that simulates keys and padlocks in software. Each user has a public key and a private key, which are both actually just very long numbers in a file and are mathematically linked with each other. The public key can be passed around freely to anyone you like, whereas the private key must be kept secret. Although it's called a public key, by analogy it's more like a padlock. I'll explain...
The classic application of this idea is message encryption. If your friend wants to send a message intended only for you, they have to encrypt it so anyone who might intercept it is unable to read it. Encrypting a message is easy: your friend just uses your freely available public key to turn the plain text into cipher text. However, decrypting the message can only be done with the corresponding private key (which is why you must keep this one secret). By analogy, it's as though your friend wants to send you a box and be certain that only you can open it. You send your open padlock (public key) to your friend, who then uses it to seal the box. Only you have the (private) key for this padlock, so your friend can send the box knowing no-one other than you can unlock the padlock if it's intercepted. If you want to keep your emails safe from prying eyes, encrypting them prior to dispatch with a tool such as PGP is always a good idea. But encryption isn't the only way to apply this idea; there's also authentication.
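The padlock analogy can be made concrete with a toy version of RSA, one well-known public key algorithm. The numbers below are tiny so you can follow along; real keys are thousands of bits long and use additional padding schemes, so treat this strictly as an illustration.

```python
# Toy RSA with tiny numbers -- purely to illustrate the padlock analogy.
p, q = 61, 53             # two secret primes
n = p * q                 # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent: the "open padlock" you hand out
d = pow(e, -1, phi)       # private exponent: the key you keep secret (2753)

message = 65                       # a number standing in for your text
ciphertext = pow(message, e, n)    # your friend locks the box with your padlock
recovered = pow(ciphertext, d, n)  # only your private key opens it again

print(ciphertext, recovered)  # recovered is 65 -- the original message
```

Notice that locking (encrypting) needs only the public pair `(e, n)`, while unlocking needs `d`, which never has to leave your machine.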
Authentication deals with who is allowed access to a computer system. The most common way of logging into a computer (i.e. being authenticated) is with a username and password. However, passwords are sometimes unreliable, particularly when the password owner is sloppy. Choosing the name of your pet cat as a password is pretty easy for a nefarious hacker to guess, and a brute force attack can try every word in the dictionary in a very short time. (Hence the reason why your system administrator forces you to use passwords with a minimum of 32 characters and utilising letters, numbers, mathematical symbols and Egyptian hieroglyphs.) An alternative is to use public and private keys. Instead of asking you for a password, you give your public key to the server administrator, who puts it on the computer, while your private key remains on your own computer. Thereafter, when you attempt to log in to that server, it issues a challenge that can only be answered correctly with the matching private key. If the answer checks out, you are granted access to the system -- and all without a single bloody password!
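A sketch of that challenge-and-answer exchange, reusing the toy RSA numbers from before (again, illustrative only; real systems such as SSH use much larger keys and more careful protocols):

```python
import secrets

# Toy key pair (not secure!): the server stores only the public pair (e, n);
# the private exponent d never leaves the user's own machine.
n, e, d = 3233, 17, 2753

# Server side: issue a fresh random challenge for this login attempt.
challenge = secrets.randbelow(n)

# Client side: answer the challenge using the private key.
signature = pow(challenge, d, n)

# Server side: check the answer with the public key. No password is ever
# typed, stored, or sent over the network.
assert pow(signature, e, n) == challenge
print("login accepted")
```

A fresh random challenge each time also means an eavesdropper who records one exchange can't simply replay it later.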
Your public key might be the only one on the server or it could be one of many, it makes no difference. When your public key is added to a server, it's as though a new door into the system is built and your padlock is used to lock that door. This gives you your very own exclusive entry point. There could be dozens of other doors, but you can only open your own; anyone without a door has no way to get into the system.
Now, here's the rub. The person in charge of the system gets to set up the doors. If you are the system administrator, then you know who has access to your system. However, if responsibility for the system is delegated to an external party (as is the case with most popular web services we all use), it can be really hard to know for sure who has access. They own and maintain the servers, not you. Of course, there's a secured door that only you can pass through, but how do you know whether or not other secret entry points exist, the so-called "back doors" that let someone else access your data?
This is pretty much the situation with many web-based services like GMail and Dropbox. They stand accused of setting up back doors on their services which allow intelligence agencies to step in and poke around at will. Sadly, I'm afraid there's not much you can do about this. They certainly won't provide you with a copy of their software which you can use to set up your own email servers or cloud-storage services.
But you do have a choice with FLOSS programs, which is why (in the previous post) I recommended Kolab and ownCloud as open source alternatives to their proprietary counterparts. By deploying these programs on your own server, you get to exercise control yourself and prevent the installation of secret back doors.
Bonus Perspective: "The Wiretap"
(This last perspective follows on from the previous one, although it is not exclusively related to FLOSS.)
Another aspect of the spying scandal is the alleged listening in on communications as they whizz through the Internet, essentially wiretapping the world wide web. If you're concerned about the security of your messages, you should encrypt your outgoing traffic with tools like PGP. With FLOSS programs this is usually quite simple to set up and, as a bonus, you have the chance to control the encryption keys.
However, proprietary programs don't always make it easy to encrypt traffic, or they might deny you the option altogether. (Just try to encrypt your GMail traffic and see how difficult things get.) What's more, where traffic is encrypted, the service providers may be in control of the encryption keys, and we're back to the problem in the "Keys and Padlocks" perspective: how far do you trust them not to reveal the keys?
As I've shown over these last two posts, FLOSS-based programs have the potential to improve the security of your computer-based data and keep prying eyes away. An operating system like Linux, email services with Kolab or cloud-sharing via ownCloud are all safe bets.
But I must stress the benefits are not automatic. There are prices to pay.
For one thing, more proactivity is required. Of course, software developers need to be well clued-up on writing secure software, but privacy-conscious users also have to be vigilant. They have to take on more responsibility for their data and ask the right questions, like: Is this software FLOSS? Who is responsible for controlling access to my computer and its programs? Is proper encryption being used? And so on.
Also, there may be a price to pay in a more literal sense. A famous comic depicts two pigs chatting with each other, remarking how lucky they are to be living on a farm that feeds them and houses them -- all for free. Too good to be true? As Georg Greve points out, when you're the product being sold, being put under surveillance is part of your payment to the service provider. I will venture to say that when you are instead the paying customer of a service provider, particularly one that uses FLOSS, surveillance is probably a much lesser risk... but still don't forget to read those terms and conditions.
In short, paying those prices gives you the chance to be an empowered customer rather than a product sold at the market.