In recent years – but at least as far back as when I was a pupil in the 1990s – education in computing and computer science within British schools had a rather narrow focus. Children learned mainly about operating computers: using word processors to write documents, whipping up spreadsheets, (maybe) building simple databases and becoming proficient with an operating system (any operating system, so long as it’s Microsoft Windows).
There’s nothing wrong with this. It’s a fine goal to teach someone how to make good use of everyday applications. However, it must be admitted that this narrow focus merely teaches children how to be passive users of a computer. It gives them no grounding in the fundamentals of computing; they learn nothing about how a computer actually works.
But the upcoming overhaul of computing education will change that. Computing education will in future focus on things like what an algorithm actually is; how to program a computer; how a program relates to an algorithm; how to detect errors in programs and reason about source code; and much more. It will be like going from lessons on how to drive a car to physics lessons on the principles of the internal combustion engine.
And these changes won’t affect only secondary schools and colleges. They will begin from the first year of primary school.
Parents naturally want to support their children’s learning at home. With many subjects, you can do this. Many of today’s subjects are the same as when you were at school (Maths, Science, English, History etc.), so discussing their contents and helping with homework are doable. But chances are you were taught nothing about computer science at school, so how could you support your child in this subject?
One way to get a feel for it is to look at the proposed syllabus. Schools in England and Wales divide all schooling into several blocks called key stages. Each key stage covers several years of a child’s education.
Key stages 1 – 3 cover all of primary and most of secondary education. Children educated within these stages are aged between 5 and 14 years. Here’s a link to the UK Government’s breakdown of plans for teaching computing in England, but I’ve picked out some of the key parts here:
At this stage, some things your child will learn include:
Some may look at that list and find that most of the items mean nothing to them. That might be discouraging if you’re a parent with a child in school. Nevertheless, it might prompt you to learn about the subject for yourself so you can share in what your son or daughter is picking up in computing lessons, but you may be unsure where to begin.
That’s one of the reasons I wrote my recently released book about computer science, Brown Dogs and Barbers. It has several intended audiences, but one of the primary ones is people with no background in computing whatsoever who would like to learn about its fundamentals. That’s why it’s an easy-to-read book with a fun, casual style and a touch of humour mixed in.
As an indicator of how helpful Brown Dogs and Barbers should be, compare the list of topics covered in the book (below) with the school syllabus. Topics that appear in both the book and the syllabus are emphasised:
I think that my book is ideal if you have school-age children and want to brush up on computer science so that you can prepare yourself to help them get to grips with this sometimes challenging but nevertheless rewarding and important subject.
It’s available to order at Smashwords or Amazon, where there are also samples to try before you buy.
Right now you can get it from several distribution channels, including Amazon (find it at your nearest Amazon outlet, like the US, Canada, UK or Germany) and Smashwords. Other retailers, like iTunes, are also currently preparing it for sale in their webstores. More news on those as I receive word.
We live in exciting times. My book, Brown Dogs and Barbers (which explains computer science to just about anyone who can read), is very close to publication.
The funding drive over the last few months raised enough to produce a professionally designed paperback and ebook, complete with crisp design, beautiful diagrams and an insanely cute front cover. It will be available to buy in places like Amazon and iTunes later this month.
Until then, you can follow the links above to get hold of a sample of the final book…
… and behold the front cover!
[Note: This is a sample of my upcoming book Brown Dogs and Barbers. Please be aware that this text is subject to change and that diagrams are only placeholders.
If you’d like to see this book become a fully illustrated and professional book, why not consider donating?]
Update: This sample is now slightly out of date. For a sample of the published book, please see the About Page.
Computer scientists study the science of computation. Yes, I admit, it seems embarrassingly obvious to say that; after all, it’s right there in the name. Nonetheless, I promise you I’m not being flippant; it’s a useful thing to say, but it needs some explanation. Ask yourself: what does it mean to compute? In particular, what possible meaning of compute could apply to all the diverse fields of computer science?

In its most general form, computation is as simple a concept as that in Figure 1. It involves taking some input data, processing it in some way, and giving the output. Simple as that. It’s like a conveyor belt that carries the raw materials into a machine, whereupon the machine thrashes around doing its magic and eventually pushes the finished product out the other end. As a model of computation it’s widely applicable. From the smallest operation to the biggest computer task imaginable, computing always involves taking some input, doing some work with it and returning some output.
It describes all sorts of things you do when you use your computer, even the simplest thing like moving the mouse pointer across the screen. During this action, your hand movement is fed via the mouse into your computer as input. The computer must then process it, before outputting the corresponding movement of the mouse pointer on screen. It looks simple and you do it all the time without giving it a thought. But for such a simple action, the computer actually has to do an awful lot of stuff to animate that pointer.

First, let’s talk about the input. When you move the mouse, the distance it has moved is fed into the computer. In this case, there’s actually more than one piece of information involved in the input. Because the computer records the mouse pointer’s position as a pair of coordinates on the screen, the distance is broken down into its horizontal and vertical components. Modern mice sense movement optically, but back in the days when mice had balls — if you’ll pardon the expression — that ball would turn two internal wheels when the mouse was moved: one wheel measured horizontal movement and the other vertical movement. There are therefore two pieces of input to this computation, or — to give them their posh names — two input parameters: distance moved along the horizontal axis and distance moved along the vertical axis.
Next comes the process. In this case, the mouse alerts the computer to a change in its position and passes the parameters along.
“Hey!” says the mouse. “This guy just moved me five millimetres to the right and two millimetres up.”
“OK,” the computer acknowledges, “I’ll get right on it.”
The computer then has to take those physical movements and turn them into on-screen movements via some quick computations. The current position of the mouse pointer on the screen is kept by the computer and continuously updated. Now let’s say that each millimetre of movement corresponds to two pixels distance on screen. In this case, the computer would change the value of the mouse pointer’s screen position, increasing it ten pixels further to the right and four pixels further to the top. Sounds simple enough, but there are a few hidden subtleties in any computer process. If, for example, the user moves the mouse left but the mouse pointer is already at the extreme left of the screen, the computer must not move the pointer any further left. Why, in this case, would the computer essentially ignore the user? Because if the computer didn’t make this check, the x-coordinate would keep decreasing past 0 into negative numbers and cause the mouse pointer to disappear off the left-hand side of the screen! Computations are almost always riddled with hidden traps like these which can cause errors. Sometimes they’re little ones which cause weird side effects, sometimes they’re whoppers which crash a whole system. (Note: Computer bugs are examined in more detail in Part IV: Mastering the Machine.)
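To make that hidden subtlety concrete, here is a toy sketch in Python. It is my own illustration, not how an operating system really handles a pointer: the screen size and the two-pixels-per-millimetre scale are invented for the example.

```python
# A toy sketch of the pointer update described above. The screen size
# and movement scale are made up for illustration only.

SCREEN_WIDTH = 1024    # screen size in pixels
SCREEN_HEIGHT = 768
PIXELS_PER_MM = 2      # each millimetre of mouse movement = 2 pixels

def move_pointer(x, y, dx_mm, dy_mm):
    """Return the new (x, y) pointer position after the mouse moves
    dx_mm millimetres to the right and dy_mm millimetres up."""
    new_x = x + dx_mm * PIXELS_PER_MM
    new_y = y + dy_mm * PIXELS_PER_MM
    # The hidden subtlety: without these checks the coordinates could
    # drift negative (or past the far edge) and the pointer would vanish.
    new_x = max(0, min(SCREEN_WIDTH - 1, new_x))
    new_y = max(0, min(SCREEN_HEIGHT - 1, new_y))
    return new_x, new_y

print(move_pointer(100, 50, 5, 2))   # a normal move: (110, 54)
print(move_pointer(3, 50, -5, 0))    # clamped at the left edge: (0, 50)
```

The two `max`/`min` lines are the whole trick: they pin the coordinates inside the screen, so a big leftward move from near the edge simply stops at 0 instead of going negative.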
After the process has finished comes the output. The updated coordinates are passed to the computer screen, which redraws the whole image showing the new position of the mouse pointer (along with any other parts of the screen which may have changed too). In order to maintain a smooth user experience, the computer will repeat this whole computation about fifty or sixty times every second. The example in Figure 2 shows a mouse pointer on a screen 1024 pixels wide and 768 pixels high. It has moved from coordinates 200 by 100 along the dotted line to 800 by 400. It is thus 600 pixels further to the right and 300 pixels higher than where it began, but the rapid repetition in between presents the user with apparently smooth motion. During all this, your computer is also working on dozens of other computations simultaneously, most of which are much more complicated than processing your mouse movements. It’s just as well that today’s computers are extremely fast.
This input-process-output model describes how computers execute programs, but it’s just as applicable when people write them too. When coming up with a new program, a computer scientist frames it as a series of instructions which accept input, carry out some processing and return output. This model of computation occurs all over computer science. Every computer scientist is involved in an effort to process information according to this basic form. They are each thinking hard, trying to come up with a series of steps which start with one state and end with another. Each person may be trying to achieve different things, but they all share the same goal of taking input, processing it and giving output.
In doing this, a computer scientist is basically trying to work out how to solve a problem. Her ultimate goal is to enable a computer to actually perform the work rather than a human, which means reforming the eventual solution into a computer program. The study of how best to achieve this is what computer science is all about. This work may involve using a lot of mathematics, but computer science diverges from its mathematical parent in the following way. Mathematicians seek to understand fundamental things like quantities, structures and change, with their goal being to create new proofs and theories about them. Computer scientists take established mathematical ideas and understand how they can best be used to solve problems automatically.
A trivial example might involve calculating square roots. Just in case you’ve forgotten, squaring a number means multiplying it by itself, hence three squared (3²) is nine. Reversing this process is called taking the square root, meaning the square root of nine (√9) is three. In this example, our input is nine, the process is the square root operation, and the output is three. Figure 3 illustrates it. This computation takes in just one input parameter and calculates the square root of it, which it spits out the other end.

A computer scientist’s interest in square roots would lead her to develop a program for computing the square root of any arbitrary number. She would know that mathematics already provides a wonderful range of methods for humans to perform this particular calculation. Her job would be to prepare one of them for automatic execution by a computer. This gives her all sorts of new worries. Working out a square root is a laborious process that can potentially take a long time — that’s why this computer scientist chose to automate it, I suppose. The usual method requires the repetition of the same series of steps, iteratively building up the result until finally the full number is found. But, just like in the mouse example when the possibility of a disappearing mouse pointer cropped up, our computer scientist has to worry about things going wrong when a computer tries to follow her instructions.
Computers — and I want you to remember this — are dumb. They are exceedingly literal-minded things who will do exactly as you tell them, even if what you told them to do was stupid. For example, if we humans begin to work out the square root of two, we will notice after a while as we construct the result (1.4142135623…) that the number never seems to end. That’s because the result is an irrational number and literally does go on forever. Eventually a human would get bored of all this and stop, but computers never tire. If the computer scientist failed to take this eventuality into account, she would end up developing a program that causes a computer to repeat the same steps endlessly when given 2 as a parameter. It would continue until the power were cut off, its circuits rotted away or the universe ended; whichever came first.
To prevent irrational numbers from playing such havoc, our imaginary computer scientist faces a choice. How should the possibility of a never-ending program be dealt with? Should she just impose a maximum size on results, like ten decimal places, and so force the computer to stop calculating upon reaching this limit? This wouldn’t give a strictly accurate answer, and the question still remains how many decimal places is enough. Or should she instead analyse the parameter first to see if it would yield an irrational answer and deal with it differently than usual? Is that preferable? Is it even possible? She also faces a lot of other choices too, such as how to deal with bad input. What should happen if the parameters are negative numbers? What if they’re not numbers at all?
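To make those choices concrete, here is a sketch in Python (my own illustration; the book itself contains no code). It uses Heron’s method, one classic way of iteratively improving a guess at a square root, and it builds in the decisions discussed above: stop once the answer is close enough rather than chasing an irrational number forever, and reject bad input outright.

```python
def square_root(n, places=10):
    """Approximate the square root of n with Heron's method, stopping
    at a fixed precision instead of chasing the digits forever."""
    # Guard against bad input, as discussed above.
    if not isinstance(n, (int, float)):
        raise TypeError("input must be a number")
    if n < 0:
        raise ValueError("input must not be negative")
    if n == 0:
        return 0.0
    tolerance = 10 ** -places        # 'close enough' for our purposes
    guess = max(n / 2, 1.0)          # any positive starting guess will do
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2   # each repetition improves the guess
    return guess

print(round(square_root(9), 6))    # → 3.0
print(round(square_root(2), 5))    # → 1.41421 (cut short: the true digits never end)
```

The `tolerance` test in the `while` loop is exactly the “maximum size on results” decision: without it, a call with 2 as the parameter would loop forever, just as the text warns.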
Questions like these, particularly whether a program will actually finish or not, are fundamental concerns of computer science. Those raised here are just a tiny selection of the issues that computer science deals with at its foundation. Many of these issues are actually now well-developed and understood, so that other fields in computer science are able to build on them routinely. But there was a time when there was no foundational knowledge; a time before computer science, when no-one could even conceive of computers, let alone deal with the issues they raise.
The next chapter will take you back to such a place.
[Note: This is a sample of my upcoming book Brown Dogs and Barbers. Please be aware that this text is subject to change and that diagrams are only placeholders. If you’d like to see this book become a fully illustrated and professional book, why not consider donating?]
Update: This sample is now slightly out of date. For a sample of the published book, please see the About Page.
I’d like to begin this book about computer science by asking you about your toaster. If I asked you to tell me how your toaster worked, I bet you’d have no trouble coming up with a decent explanation. Initially, you might claim you have no idea, but I’m sure a moment’s thought would yield a good description. Even in the worst case, you could actually look inside a toaster and deduce what was happening. Perhaps then you’d be able to tell me all about how electricity causes the filaments to heat up and that heat radiates onto the bread or the crumpet or whatever, causing it to cook.
If I were to ask about how a car worked, that might be more challenging. Again, you might instinctively feel that a car’s workings are a mystery to you. But even then, if you stop and think about it, you might recall a few vague terms that help out. Perhaps you could tell me about how petroleum is stored in the car’s tank and when you press the footpedal, the fuel is drawn into the engine where it’s ignited. Then you’d go on and tell me that this action drives the pistons… or something… and they turn the… I think it’s called the crankshaft… which is connected to the wheels and makes them turn. That’s what I would say anyway, and I know virtually nothing about how cars actually work.
I’m guessing all this without even knowing you, your occupation or your interests. True, you might be an engineer or a physicist for all I know, and able to give better explanations, but the chances are that you’re not. My point is, even if you have only the merest passing interest in science and technology, I’m confident that you comprehend things like toasters and cars enough to give half-decent explanations of them. Understanding things like these comes partly from school learning where, even if you sat spaced out during physics lessons, you still picked up some of that stuff about electricity and internal combustion engines. And let’s not underestimate how ingrained in our popular consciousness these concepts are. The people around us talk about the workings of everyday technical items all the time, so some of it is bound to stick with us whether we realise it or not.
But computers are different. Many of us haven’t got the first clue how computers work. Think about it. Could you tell me how the individual components in your computer work together? Could you even name any of the components? I’m certain some of you could, but I’m just as sure that a lot more people couldn’t even begin to explain a computer. To some, it’s a kind of magic box that sits under the desk and somehow draws letters and images on the monitor screen at breathtaking speed.
Let’s get one thing straight: I wouldn’t blame you for being unable to offer an explanation, because there are several reasons why you shouldn’t be expected to know about computers. One very important reason, again, is schooling. In many countries, computer science is not taught as part of general education. In my own country of birth (the United Kingdom), computing education has for many years meant nothing more than learning how to use word processors and spreadsheets; important skills to be sure, but this is definitely not computer science, a topic that studies at a fundamental level how to use mathematical principles in the solving of problems. The great majority of children leave school having learned, at most, to be passive users of computers, and many people are currently asking why such an important area of knowledge is absent from the curriculum.
The mystery surrounding computers is a problem that’s only becoming worse over time. When computers first arrived they were monstrous things bigger than a family-sized fridge and kept in huge, environmentally-controlled rooms. Their job was usually to carry out boring tasks like process tax returns and payrolls; tasks that anyone could do by hand, albeit a lot slower. They had banks of flickering lights that lit up when the machines were “thinking”; spools of tape mounted on the front spun around, indicating that the computer was looking in its databank; some were even partly mechanical, clicking and tapping uproariously when the numbers were being crunched. Yes, they were still mysterious — but today it’s even worse.
Computers are no longer just mysterious — they’re magical.
Today’s computers are a million light years ahead of their early ancestors. Nowadays they’re small, sometimes able to fit into the palm of your hand. How can something so tiny do such impressive things? They’re also ubiquitous, having gone far beyond their original, humble number-crunching duties until they organise every aspect of our lives. As a result they’ve become utterly unknowable. Today’s computer is an impersonal black box that gives no hint as to its workings. Of course, there’s a user interface that allows us mere humans to operate the computer, but one main purpose of a modern user interface is actually to hide the internal workings of the machine as much as possible. There are few external indicators about what’s really happening inside. Without moving parts (apart from the cooling fan, which I assure you performs no calculations) and with internal components that give no visible clue as to what they’re doing, it’s become impossible to try and deduce how a computer works by examining it. So advanced and unknowable have computers become, they may as well operate on principles of magic.
But there are genuinely knowable principles upon which computers operate. We find things that pump, rotate or burn easier to understand, because physical principles are more intuitive to us. In contrast, the driving principles behind computers are mathematical, and thinking in these terms comes harder to humans. There are some physical principles involved, of course. Your computer contains various things — circuit-boards, wires and chips — which all function according to good old-fashioned physics. But (and I don’t mean this to sound dismissive), those are merely the computer’s hardware. In computer science, there is a sharp and critical distinction between the physical machinery that performs the work (the hardware) and the mathematical principles which allow it to do anything meaningful. These principles make up the field of computer science. In theory, you can build computers out of all sorts of weird and wonderful parts, be they mechanical, electronic, or even water-powered. Yet, however a computer is implemented, it must work according to the principles of computer science, in the same way that all internal combustion engines, varied as their designs are, work according to the relevant laws of physics.
Hardware gets mixed up with the field of computer science. I’m pretty laid back about that, but some purists like to emphasise the strict division between the machinery and the principles. Roughly speaking, this corresponds to a separation between hardware and software. Software, a word I’m sure you’ve heard before, is the collection of programs which computers run and the concept of a program goes to the heart of computer science. Unfortunately, programs are a little hard to define, but rest assured that you’ll come to understand what a program is over the course of this book. What makes them tough to penetrate is that they’re nebulous, abstract things rooted in mathematics, a subject that’s a sort of parent to computer science. Programs have numerous legacies through this inheritance. Like mathematics, programs don’t really exist in a physical sense. They’re conceptual things, ideas that exist in programmers’ minds which are only given substance after they’re written down.
This inheritance from mathematics explains many things. It explains why programs look like jumbles of mathematical formulae. It explains why computer science attracts so many nerdy folks who are good with numbers. And it explains why programmers count up from 0 instead of 1 like the rest of the human race. Maybe you’ve noticed that? You might look through some of the programs on your computer and find a new one labelled version 1.0. Why 1.0?
OK, you might say, after a program is updated the author increments the version number to make it clear. After the initial version is updated several times we progress through versions like 1.4 to 1.5 to 1.6 and so on. I get that. But why start at 1.0? Why not 1.1? And why, when I upgrade to the second version, is that called version 1.1?
You’d also find this peculiarity were you to read through the contents of a computer program. If you watch a race on TV, then at the end you’d say that the winner came in position 1, the runner-up in position 2 and so on. If you ask a programmer to write a program for processing the race, the results would begin with the winner assigned position 0 instead and the runner-up in position 1. To a programmer, the hero is a zero.
Counting up from zero, which instinctively seems unnatural, actually simplifies matters when you deal with lists of things. In these cases, counting up from 1 can cause confusion. For instance, have you ever stopped to think why the years of the twentieth century all began with 19 and not 20? It’s something that often trips up little kids (and occasionally big ones too). Why was the year 1066 part of the eleventh century and not the tenth?

To explain, let’s look at an example of counting up from 0, because we all do that occasionally whether we realise it or not. In some parts of the world, the bottom floor of a building is called the ground floor and the next one up is the first floor. In this case, the ground floor could just as easily be called the zeroth floor. Similarly, when programmers refer to specific items in a list (which they do a heck of a lot), they often need to calculate the position of an item in that list by offsetting it from a base position. This base item is labelled number 0. Working out a position when a list is arranged like the floors in a building makes things a little simpler. Floor 3 (or the third item) is three above the ground floor (or zeroth item). If the ground floor were floor 1, then the third floor would be two above the ground floor. This is visualised in Figure 1. We count centuries similarly to the left-hand building. Because we count centuries up from the one (the years 1 to 100 were the first century, not the zeroth century), we then have to remember that centuries don’t match with the years within them. It’s only a small confusion, but working out positions in a list is done so often that little hiccups like this can actually cause more problems than you think.
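You can see the zero-based convention at work in almost any programming language. Here is a Python snippet (my own illustration) treating a list like the left-hand building, with the ground floor as item 0:

```python
# The left-hand building as a Python list. Python, like most
# programming languages, labels the first item in a list as item 0.

floors = ["ground", "first", "second", "third"]

print(floors[0])   # → ground (the 'zeroth' floor)
print(floors[3])   # → third  (exactly 3 above the ground floor)

# Position of an item = base position + offset, with no awkward '- 1':
base = 0
offset = 3
print(floors[base + offset])   # → third
```

If the list started at 1 instead, every offset calculation would need a correction term, which is exactly the centuries-versus-years confusion described above.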
With this explanation, you’ve hopefully just learned something new about computer science. I know it’s only trivial, but nevertheless it shows you something about the subject and explains why that something is the way it is. This example is just the tip of the iceberg, so there’s much more complex and interesting stuff still to come. Computers are complex things, more so than any other machine we’re likely to use on a daily basis. Unfortunately, they remain mysterious to many people. For many of us, our relationship with computers is one of bemusement, frustration, and fascination, all experienced at arm’s length. We sometimes even find ourselves as the servile member in the relationship, desperately reacting to the unfathomable whims of our computer trying to make it happy. This is not the best state of affairs to be in if we’re going to be so reliant on them in our everyday lives. It doesn’t have to be this way. If our relationship with computers is sullied by their mysteriousness, the answer is simple: learn more about them. And I don’t mean learn how to make spreadsheets.
To understand what’s going on in that magic box beneath your desk, we’ll look in this book at the science behind it.
This book presents you with the core ideas of computer science. By reading it you will learn about the subject’s history, its fundamentals and a few of its most pertinent protagonists. Understanding these ideas will help to demystify the machine. Each chapter can be read as a self-contained unit, but the chapters have also been written so that reading from start to finish is like a story: they loosely follow a chronology, each building gently on the ones before. It’s your choice.
However you choose to read it, this book will take you from the earliest beginnings of mechanical computation and show you how we arrived at today’s world of the magical and ubiquitous electronic computer. You will learn of the monumental problems that faced computer scientists at every stage. You will see how they developed ingenious solutions which allowed the field to progress. And you will observe how progress leads to both new opportunities and new problems.
Although it’s proven informative even to IT veterans (read what Professor Cornelia Boldyreff kindly wrote about it), it’s particularly aimed at beginners. I’ve shared drafts with readers who are firmly outside the computing sphere and the response has been very encouraging. Despite the topic being new to them, my test audience got the hang of the concepts I discuss, concepts that go to the heart of computer science.
So, if you already work in IT, there’s every reason to be interested in it. Furthermore, if you have friends or relatives who puzzle over what exactly you do for a living and pester you to explain what you do all day long, you might consider Brown Dogs and Barbers an ideal gift. After they’ve read it, the recipient will have gained an understanding of the fundamentals of your subject and won’t harass you any longer… either that, or they’ll have a hundred more questions for you, their appetite suitably whetted.
In fact, my crowd funding campaign has the ideal perk for you. If you contribute €60 (that’s about $80 US or £50 UK), you’ll get two signed advance copies of the book, one of them already gift-wrapped ready for you to give as a present. The book is scheduled to be ready in June, so it would arrive just in time to supply some summer holiday reading.
Go over to the crowd funding page and contribute today.
“For many of us, our relationship with computers is one of bemusement, frustration, and fascination, all experienced at arm’s length. We sometimes even find ourselves as the servile member in the relationship, desperately reacting to the unfathomable whims of our computer trying to make it happy. This is not the best state of affairs to be in if we’re going to be so reliant on them in our everyday lives. It doesn’t have to be this way. If our relationship with computers is sullied by their mysteriousness, the answer is simple: learn more about them… To understand what’s going on in that magic box beneath your desk, we’ll look in this book at the science behind it.”
I believe that by learning about the scientific principles behind computers, we put ourselves in a much stronger position: informed, confident, and empowered.
While perusing one of my favourite authors, Ben Goldacre, I found we share similar sentiments in this regard. In his excellent book Bad Science Ben explains how an ignorance of science can have negative impacts.
“Fifty years ago you could sketch out a full explanation of how an AM radio worked on the back of a napkin, using basic school-level knowledge of science… When your parents were young they could fix their own car, and understand the science behind most of the everyday technology they encountered, but this is no longer the case. Even a geek today would struggle to give an explanation of how his mobile phone works because technology has become more difficult to understand and explain, and everyday gadgets have taken on a ‘black box’ complexity that can feel sinister, as well as intellectually undermining.”
Today’s mobile phones are not phones – they’re computers with an antenna attached to them. And it’s not just phones; computers have crept into most modern technology, rendering them much harder to understand. This is not going to go away. If anything, it’s going to intensify with some truly staggering applications of computers on the horizon (self-driving cars, anyone?).
By making sure people have a basic understanding of computing principles, we can dispel the ignorance, the suspicion and the frustration.
I offer my book as one place to start. Please help me crowdfund the publication process so I can make it available to everyone.
I’m going to self-publish it and for that I need several things to make it a professional piece of work. These all need paying for, so I’ve launched a crowdfunding project at Indiegogo to cover the costs. Time for the hard sell…
Computers are a huge part of our lives. They are everywhere, powering so much of what we do.
And yet, how well do we understand them or how they became so ubiquitous? We take computers for granted but many of us don’t appreciate the fascinating ideas behind them. If you look closely, there is a rich trail of puzzles that had to be solved to make them what they are now.
I’ve written a book, Brown Dogs and Barbers, which explains how the ideas of computer science developed throughout history.
When you read this book, you will join me on a journey through the story of computing, discovering the basic principles of what makes the machines tick and learning why computers work the way they do.
I would like to make computer science accessible to all. Brown Dogs and Barbers is a work of popular science aimed at beginners and experts alike: no expertise is required, and it contains as little in the way of formulas and code as possible.
If you are a beginner, you will get an introduction to the fascinating world of computer science. If you are experienced, you can enjoy reading about your field from a different perspective and perhaps learn a new thing or two. It would also make a great gift for the friends and family of an IT worker who haven’t got a clue what it is they do all day.
In any case, you will develop an understanding of the puzzles and theories behind computers, and meet some of the characters who have steered computing over the centuries.
I’m a big fan of reading about science. Whenever I go into a bookshop, I’m dismayed to see that the popular science section hardly ever seems to carry titles explaining my subject – computer science – to the masses.
I’m trying to fill this gap with my book. Brown Dogs and Barbers examines some of the foundational concepts of computing. I can still remember the stumbling blocks I encountered when I first learned about these fascinating ideas, so my book strives to light the path and help you avoid them. I’m also a PhD-level computer scientist, an experienced teacher and a published writer on IT and computing topics.
All text is written and a collection of placeholder diagrams and illustrations are in place. It now needs some polish, formatting and professionally designed images to make it a kick-ass publication.
The book has 38 chapters. That might sound like a lot, but each chapter deals primarily with one idea and in the final product I estimate chapters will be around 5-6 pages long on average. That’s about 220-230 pages.
To polish the book, I need three things:
I already have estimates for each of these services.
Go here.
You might also be interested to know I’ve contributed several articles in the past to Linux User and Developer magazine. Some of them are available online (e.g. “Wikimedia: Wikipedia’s Game Changer” and “Kolab: David and Goliath” ).
Don’t forget, you can contribute in ways other than donating funds. Tell your friends, share this page and tweet about it to the world. Help me get the word out!
Please visit the project’s Indiegogo page to find out more and, more importantly, to contribute!
His achievements were many and went beyond computer science, but just within my own field he is probably best remembered for developing the EDSAC computer (the first computer with an internally stored program) and for co-founding the British Computer Society (as well as being its first president).
He was described by Prof. Simon Lavington as the father of British computer science. “Godfather” was already taken by Alan Turing, of course.

These days, I am on the other side; I am an educator. It is part of my job to teach people, and my subject matter involves heavy technical detail. Because of this, I try desperately to cling onto my memories of what it was like to sit in the audience of the lecture hall, rather than stand at the podium. I want to remember what I thought as the lecturer droned on at me, and what I wanted them to actually say in order to make sense. I still have these memories, but some day they will slip away, and I’ll be doomed to delivering my obtuse explanations, all the while berating the students, asking myself: “What’s wrong with them? Why don’t they get what I’m saying?”
But, until that day comes, I am determined to remain empathetic and keep in mind how I favoured being introduced to complex new ideas. My favoured way was by examples, and especially by visualisations, although these were exceedingly rare. I do not think it is an accident that one of the degree courses I can remember most clearly, Program Slicing with Professor Keith Gallagher, began by the professor refusing to give any definitions until he had thrown wads of illustrative examples at us. As another example, over the pond, Professor Walter Lewin won great acclaim for his very physical demonstrations of physical principles at MIT.
Now, why should I bias my approach towards the way I favoured learning? A fair point, but I think I can justify it by arguing that my way generalises to the great majority of humans. (And remember that I’m only talking about introducing someone to a topic — once someone is knowledgeable, they should be expected to deal in the equations, models or code.) Humans cope best with visual data — a huge proportion of the brain is devoted to processing images — and humans are also intuitive creatures, whose grasp of something (be it correct or not) is improved by subjectively getting a “feel” for it.
Let’s take an example from my own domain, computer science, a particularly challenging subject in which to visualise concepts, so maddeningly abstract are they. If you enter computing, you have to know about algorithms, which are simply lists of instructions used for solving a problem. You have to know things like: what they do, how they work, how efficient they are. One particular algorithm, called merge sort, I could introduce with a definition (from the U.S. National Institute of Standards and Technology):
A sort algorithm that splits the items to be sorted into two groups, recursively sorts each group, and merges them into a final, sorted sequence.
Or I could show you the code (taken from Wikipedia):
function merge_sort(m)
    if length(m) ≤ 1
        return m
    var list left, right, result
    var integer middle = length(m) / 2
    for each x in m up to middle
        add x to left
    for each x in m after middle
        add x to right
    left = merge_sort(left)
    right = merge_sort(right)
    result = merge(left, right)
    return result

function merge(left, right)
    var list result
    while length(left) > 0 or length(right) > 0
        if length(left) > 0 and length(right) > 0
            if first(left) ≤ first(right)
                append first(left) to result
                left = rest(left)
            else
                append first(right) to result
                right = rest(right)
        else if length(left) > 0
            append first(left) to result
            left = rest(left)
        else if length(right) > 0
            append first(right) to result
            right = rest(right)
    end while
    return result
And that’s everything you need to know to learn merge sort. Does that help? Do you get a feel for it? Can you run through an example in your head? Eventually, you could gain an understanding of the way it works, but if you’re being introduced to sorting algorithms, what use is it really? What’s more, you would need to be able to compare merge sort to other sorting algorithms. Would you have an intuitive understanding of that?
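To be fair to the pseudocode, it does translate almost line for line into a working program. Here is one way it might be rendered in Python (the names mirror the pseudocode; the list slicing is simply a Python convenience for the “up to middle” and “after middle” steps). Running it at least lets you watch the algorithm work, even if it still doesn’t give you a feel for it:

```python
def merge_sort(m):
    # A list of zero or one element is already sorted.
    if len(m) <= 1:
        return m
    # Split the items into two groups...
    middle = len(m) // 2
    left = merge_sort(m[:middle])    # ...recursively sort each group...
    right = merge_sort(m[middle:])
    # ...and merge them into a final, sorted sequence.
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    # Repeatedly take the smaller of the two front elements.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One side is exhausted; whatever remains of the other is
    # already sorted, so append it wholesale.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

But my point stands: the code tells you *what* happens, not why splitting and merging is any better than, say, repeatedly scanning for the smallest item.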
I use the example of sorting algorithms, not only because it is relevant to computer scientists and time-consuming to gain a feel for, but also because I came across this attempt to give sounds to sorting algorithms. This, I think, is a great way to provide intuitive understanding to something as abstract and difficult as an algorithm in process.
So much of our time learning is spent providing ourselves with an intuitive grasp of things, the ideal being that we can “see” them in action within our minds. Those with extraordinary talents, from mathematicians to musicians, often tell us mere mortals that they can actually see the numbers or the notes in front of them. The memory masters, reciting π to a million decimal places, only enable their great talent by using images to power their feats of recollection.
We can’t all be extraordinary, but we can help others to achieve a correct, intuitive and “visual” understanding of things, cementing them more firmly in their minds.