Brown Dogs and Barbers: Exclusive samples hot off the press!

  • PDF (this is identical in layout to the paperback version)
  • eBook
  • Mobi

We live in exciting times. My book, Brown Dogs and Barbers (which explains computer science to just about anyone who can read), is very close to publication.

The funding drive over the last few months raised enough to produce a professionally designed paperback and ebook, complete with crisp design, beautiful diagrams and an insanely cute front cover. It will be available to buy in places like Amazon and iTunes later this month.

Until then, you can follow the links above to get hold of a sample of the final book…

… and behold the front cover!

[Image: the front cover of Brown Dogs and Barbers]

Donating to Brown Dogs and Barbers: Small update

Some readers have recently reported to me that the PayPal donate button (which in theory should allow you to donate to the publication of my book Brown Dogs and Barbers) isn’t working.

I’m not certain yet, but I think PayPal have recently made changes that have stopped the button from working… I’m currently looking into getting it working again.

In the meantime, if you’re interested in using PayPal to donate to the production of my book, you can send contributions to my email address: karl.beecher@outlook.com.

Thank you for your continued support.

Brown Dogs and Barbers: Donations have topped €1000!

Recently I launched a donation service whereby people who wanted to see my book, Brown Dogs and Barbers, in print, could donate in order to make it happen. The levels of funding and what each level brings are visible on the donations page.

I’m pleased to report that with your help the total recently went over €1000 – in fact, it currently stands at €1085. This not only means I can now hire a proofreader and an artist, but I am almost at the point where I can commission the production of a paperback version that can then be put on sale in places like Amazon and Lulu.

I’d like to thank everyone who has donated so far. There’s still more I’d like to do, so please either donate to the project or pass this information on.

Chapter 1: Inputs, Processes and Outputs

The definition of computer science

[Note: This is a sample of my upcoming book Brown Dogs and Barbers. Please be aware that this text is subject to change and that diagrams are only placeholders.

If you'd like to see this book become a fully illustrated and professional book, why not consider donating?]

Update: This sample is now slightly out of date. For a sample of the published book, please see the About Page.

Computer scientists study the science of computation. Yes, I admit, it seems embarrassingly obvious to say that; after all, it’s right there in the name. Nonetheless, I promise you I’m not being flippant; it’s a useful thing to say, but it needs some explanation. Ask yourself: what does it mean to compute? In particular, what possible meaning of compute could apply to all the diverse fields of computer science?

Figure 1: What it means to compute

In its most general form, computation is as simple a concept as that in Figure 1. It involves taking some input data, processing it in some way, and giving the output. Simple as that. It’s like a conveyor belt that carries the raw materials into a machine, whereupon the machine thrashes around doing its magic and eventually pushes the finished product out the other end. As a model of computation it’s widely applicable. From the smallest operation to the biggest computer task imaginable, computing always involves taking some input, doing some work with it and returning some output.

It describes all sorts of things you do when you use your computer, even the simplest thing like moving the mouse pointer across the screen. During this action, your hand movement is fed via the mouse into your computer as input. The computer must then process it, before outputting the corresponding movement of the mouse pointer on screen. It looks simple and you do it all the time without giving it a thought. But for such a simple action, the computer actually has to do an awful lot of stuff to animate that pointer.

Figure 2: The movement of a mouse pointer

First, let’s talk about the input. When you move the mouse, the distance it has moved is fed into the computer. In this case, there’s actually more than one piece of information involved in the input. Because the computer records the mouse pointer’s position as a pair of coordinates on the screen, the distance is broken down into its horizontal and vertical components. Modern mice sense movement optically, but back in the days when mice had balls — if you’ll pardon the expression — the ball would turn two internal wheels when the mouse was moved: one wheel measured horizontal movement and the other vertical movement. There are therefore two pieces of input to this computation, or — to give them their posh names — two input parameters: distance moved along the horizontal axis and distance moved along the vertical axis.

Next comes the process. In this case, the mouse alerts the computer to a change in its position and passes the parameters along.

“Hey!” says the mouse. “This guy just moved me five millimetres to the right and two millimetres up.”

“OK,” the computer acknowledges, “I’ll get right on it.”

The computer then has to take those physical movements and turn them into on-screen movements via some quick computations. The current position of the mouse pointer on the screen is kept by the computer and continuously updated. Now let’s say that each millimetre of movement corresponds to two pixels distance on screen. In this case, the computer would change the value of the mouse pointer’s screen position, increasing it by ten pixels to the right and four pixels towards the top. Sounds simple enough, but there are a few hidden subtleties in any computer process. If, for example, the user moves the mouse left but the mouse pointer is already at the extreme left of the screen, the computer must not move the pointer any further left. Why, in this case, would the computer essentially ignore the user? Because if the computer didn’t make this check, the x-coordinate would keep decreasing past 0 into negative numbers and cause the mouse pointer to disappear off the left-hand side of the screen! Computations are almost always riddled with hidden traps like these, which can cause errors. Sometimes they’re little ones which cause weird side effects; sometimes they’re whoppers which crash a whole system. (Note: Computer bugs are examined in more detail in Part IV: Mastering the Machine.)
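To make that concrete, here is a minimal sketch of the process in Python. The function name, the screen size, and the two-pixels-per-millimetre scale are my own illustrative assumptions rather than anything prescribed in the text, and the clamping step is the "don't let the pointer escape the screen" check just described. (Following the example in the text, moving the mouse up increases the y-coordinate.)

```python
# A minimal sketch of the pointer computation described above.
# The names, screen size and scale are illustrative assumptions.

SCREEN_WIDTH, SCREEN_HEIGHT = 1024, 768  # screen size in pixels
PIXELS_PER_MM = 2  # each millimetre of mouse travel moves the pointer two pixels

def move_pointer(x, y, dx_mm, dy_mm):
    """Input: the pointer's current position plus the two input parameters
    (horizontal and vertical distance moved, in millimetres).
    Output: the pointer's new on-screen position."""
    # Process: convert physical movement into on-screen movement.
    x += dx_mm * PIXELS_PER_MM
    y += dy_mm * PIXELS_PER_MM
    # The hidden trap: without these checks the coordinates could slide
    # past the screen edges and the pointer would disappear from view.
    x = max(0, min(x, SCREEN_WIDTH - 1))
    y = max(0, min(y, SCREEN_HEIGHT - 1))
    return x, y

# "This guy just moved me five millimetres to the right and two millimetres up."
print(move_pointer(200, 100, 5, 2))  # (210, 104)
```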

After the process has finished comes the output. The updated coordinates are passed to the computer screen, which redraws the whole image showing the new position of the mouse pointer (along with any other parts of the screen which may have changed too). In order to maintain a smooth user experience, the computer will repeat this whole computation about fifty or sixty times every second. The example in Figure 2 shows a mouse pointer on a screen 1024 pixels wide and 768 pixels high. It has moved from coordinates 200 by 100 along the dotted line to 800 by 400. It is thus 600 pixels further to the right and 300 pixels higher than where it began, but the rapid repetition in between presents an apparently smooth motion to the user. During all this, your computer is also working on dozens of other computations simultaneously, most of which are much more complicated than processing your mouse movements. It’s just as well that today’s computers are extremely fast.

This input-process-output model describes how computers execute programs, but it’s just as applicable when people write them too. When coming up with a new program, a computer scientist frames it as a series of instructions which accept input, carry out some processing and return output. This model of computation occurs all over computer science. Every computer scientist is involved in an effort to process information according to this basic form. They are each thinking hard, trying to come up with a series of steps which start with one state and end with another. Each person may be trying to achieve different things, but they all share the same goal of taking input, processing it and giving output.

In doing this, a computer scientist is basically trying to work out how to solve a problem. Her ultimate goal is to enable a computer to perform the work rather than a human, which means recasting the eventual solution as a computer program. The study of how best to achieve this is what computer science is all about. This work may involve using a lot of mathematics, but computer science diverges from its mathematical parent in the following way. Mathematicians seek to understand fundamental things like quantities, structures and change, with their goal being to create new proofs and theories about them. Computer scientists take established mathematical ideas and work out how they can best be used to solve problems automatically.

A trivial example might involve calculating square roots. Just in case you’ve forgotten, squaring a number means multiplying it by itself, hence three squared (3²) is nine. Reversing this process is called taking the square root, meaning the square root of nine (√9) is three. In this example, our input is nine, the process is the square root operation, and the output is three. Figure 3 illustrates it. This computation takes in just one input parameter and calculates the square root of it, which it spits out the other end.

Figure 3: Input, processing and output of taking a square root.
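To see the whole pipeline in code, here is a tiny version in Python; the library call math.sqrt stands in for the process box in Figure 3.

```python
import math

value = 9                   # input: one parameter enters the conveyor belt
result = math.sqrt(value)   # process: the square root operation
print(result)               # output: 3.0 is spat out the other end
```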

A computer scientist’s interest in square roots would lead her to develop a program for computing the square root of any arbitrary number. She would know that mathematics already provides a wonderful range of methods for humans to perform this particular calculation. Her job would be to prepare one of them for automatic execution by a computer. This gives her all sorts of new worries. Working out a square root is a laborious process that can potentially take a long time — that’s why this computer scientist chose to automate it, I suppose. The usual method requires the repetition of the same series of steps, iteratively building up the result until finally the full number is found. But, just like in the mouse example when the possibility of a disappearing mouse pointer cropped up, our computer scientist has to worry about things going wrong when a computer tries to follow her instructions.

Computers — and I want you to remember this — are dumb. They are exceedingly literal-minded things that will do exactly as you tell them, even if what you told them to do was stupid. For example, if we humans begin to work out the square root of two, we will notice after a while, as we construct the result (1.4142135623…), that the number never seems to end. That’s because the result is an irrational number and literally does go on forever. Eventually a human would get bored of all this and stop, but computers never tire. If the computer scientist failed to take this eventuality into account, she would end up developing a program that causes a computer to repeat the same steps endlessly when given 2 as a parameter. It would continue until the power were cut off, its circuits rotted away or the universe ended, whichever came first.

To prevent irrational numbers from playing such havoc, our imaginary computer scientist faces a choice. How should the possibility of a never-ending program be dealt with? Should she just impose a maximum size on results, like ten decimal places, and so force the computer to stop calculating upon reaching this limit? This wouldn’t give a strictly accurate answer, and the question still remains of how many decimal places are enough. Or should she instead analyse the parameter first to see if it would yield an irrational answer and deal with it differently than usual? Is that preferable? Is it even possible? She faces plenty of other choices too, such as how to deal with bad input. What should happen if the parameters are negative numbers? What if they’re not numbers at all?
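To make her options concrete, here is a hedged sketch in Python. The iterative method is Heron's (one classic way of building up a square root step by step; the text doesn't commit to a particular one), and the tolerance, step limit and input checks are my own illustrative choices, not the book's. The step limit is what saves the computer from grinding away at √2 forever.

```python
# A sketch of one way the choices above might be settled.
# Method: Heron's iteration; the guards are illustrative assumptions.

def square_root(x, tolerance=1e-10, max_steps=100):
    # Deal with bad input up front: reject non-numbers and negatives.
    if not isinstance(x, (int, float)):
        raise TypeError("input parameter must be a number")
    if x < 0:
        raise ValueError("negative numbers have no real square root")
    if x == 0:
        return 0.0
    guess = x / 2.0  # any positive starting guess will do
    for _ in range(max_steps):  # the cap stops a never-ending calculation
        next_guess = (guess + x / guess) / 2.0  # repeat the same steps
        if abs(next_guess - guess) < tolerance:  # close enough: stop here
            return next_guess
        guess = next_guess
    return guess  # hit the limit: an approximate, not exact, answer

print(square_root(9))  # ≈ 3.0
print(square_root(2))  # ≈ 1.4142135624, cut off rather than computed forever
```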

Questions like these, particularly whether a program will actually finish or not, are fundamental concerns of computer science. Those raised here are just a tiny selection of the issues that computer science deals with at its foundation. Many of these issues are actually now well-developed and understood, so that other fields in computer science are able to build on them routinely. But there was a time when there was no foundational knowledge; a time before computer science, when no-one could even conceive of computers, let alone deal with the issues they raise.

The next chapter will take you back to such a place.

Part I. Fundamental Questions

“When all is said and done, the only thing computers can do for us is to manipulate symbols and produce results of such manipulations.”
–Edsger Dijkstra (1930–2002)

What is computer science? What does a computer scientist actually do? These are difficult questions to answer, but if we hope to learn anything about the subject then I suppose we’d better deal with them.

Looking for the definition of computer science in a dictionary won’t be much help because there are as many different definitions as there are dictionaries. In fact, even computer scientists don’t tend to agree on the definition of their subject, so what chance have the dictionary writers? What’s more, the subject has developed a huge array of sub-fields over the years and at first glance they seem absurdly diverse. For instance, computer vision specialists look at how computers deal with images; network experts concern themselves with how to get computers talking with each other; and information theorists don’t even deal with computers at all, instead spending their time worrying about how to process and quantify information. Given all this, how could I possibly discuss computer science in a way that covers the whole discipline?

But physics also covers a lot of diverse ground, and physicists can collectively claim that they are studying the fundamental nature of matter and how the universe behaves, whether it’s sub-atomic particles or whole families of galaxies. Surely, then, we can also sum up computer science in such a nice, tidy phrase. That’s one thing I’ll do in this first part. I’ll show that there is a way to address computer science collectively, and in so doing I’ll show that all its practitioners share a stock in trade, which is studying how to compute.

Furthermore, no subject is born in a vacuum. Every science we developed branched off from some predecessor, taking a handful of ideas with it along the way and using them to form the core of a new discipline. To demonstrate the kind of concepts essential to the subject, this first part will also explain a few ideas that pre-date computer science but nevertheless lie at its heart.