How to Code It: An Overview

Introduction

Problem solving is a hard, strenuous activity for which, most of the time, there are no fixed rules. Thus, when a book comes out claiming to provide some sort of script for problem solving, it inevitably draws a lot of attention. So it was with Polya’s “How to Solve It”, published in 1945. But the book became a huge success – more than one million copies sold worldwide – not only because of the topic it addresses, but especially because of its simple, clear rules for problem solving.

The problem solving task is divided into four phases. First, you have to understand the problem in order to see clearly what it is that you have to find and what you have been given to work with. Second, you have to devise a plan: understand how the various items are connected and what computations should be carried out to lead you to the unknown quantity – the answer to the problem. Third, you have to execute the plan you devised with great care, paying attention to each step of the execution. Fourth, you have to look back and analyze your solution, to see what worked and what didn’t, and what could be generalized or reused to solve other problems.

When you look at the four phases above, it seems that they can be used to solve any type of problem, not only the mathematical ones for which they were originally intended. But to what domains could they be extended? Given the close relationship between problem solving in Mathematics and Computer Science, how can we adapt Polya’s rules to solve algorithmic problems?

Let’s review each of the rules and see how they can be applied.


Rule #1: Understanding the Problem

Polya’s first commandment is: “You have to understand the problem.” The first thing you have to do is ask yourself three basic questions: “What is the unknown? What is the data? What are the conditions?”

A thorough understanding of the problem is necessary in any domain. It’s painfully hard to try to solve a problem that you don’t fully understand. Yet, we do it all the time.

For a programmer, the urge to jump to the keyboard and start typing a solution to a problem he’s yet to fully grasp is too tempting. “I’ll just explore the problem a bit” is what we usually think in these moments. And there’s nothing wrong with a little exploration. But more often than not, our little explorations turn into flimsy solutions to problems we haven’t yet completely understood.

Pen and paper are still the best friends of a computer programmer, despite the amount of time we spend at our keyboards. Again, writing code to explore a possible solution is great. But there should be many more back-and-forth trips between the keyboard and a sheet of paper than there usually are.

A lack of understanding is harmful in other scenarios as well. Donald Knuth once said that “premature optimization is the root of all evil”, in the sense that programmers spend far too much time thinking (or worrying) about the speed of noncritical parts of their programs. If we programmers better understood our programs (and the problems they solve), we would be better equipped to debug, maintain, and even optimize them. A thorough understanding of a problem is necessary not only to write the program that solves it, but also in every other part of the program’s lifecycle.


Rule #2: Devising a Plan

Polya’s second commandment is: “Find the connection between the data and the unknown. You may be obliged to consider auxiliary problems if an immediate connection cannot be found. You should obtain eventually a plan for the solution.”

So the goal in this phase is to devise a plan for the solution, to think about the steps that will lead you to find the unknown. If we think of it in terms of algorithmic problems, this is the phase in which we actually design the algorithm to solve the problem.

According to Donald Knuth, “[…] an algorithm is a procedure, a recipe, a process, a routine, a method, a set of rules or directions for getting a specific output from a specific input. The distinguishing feature of an algorithm is that all vagueness must be eliminated; the rules must describe operations that are so simple and so well defined that they can be executed by a machine. Furthermore, an algorithm must always terminate after a finite number of steps.”

Books upon books have been written on the design of algorithms. And there are several strategies (or paradigms) for the design of algorithms. Algorithms in the same paradigm have the same overall shape. For instance, all divide-and-conquer algorithms divide the original problem into smaller ones, solve these smaller problems, and then combine their solutions to obtain the solution for the original problem. Suppose, for example, that you have an array of numbers that you want to sort. A divide-and-conquer algorithm would split it into sub-arrays, sort them, and then combine the smaller sorted arrays to sort the original one. At first, it might seem counterintuitive that this strategy has any advantages, but the fastest sorting algorithms today are divide-and-conquer algorithms.
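
To make the shape of the paradigm concrete, here is a minimal merge sort sketch in Python (the code and names are mine, for illustration; they don’t come from Polya or Knuth):

    def merge_sort(numbers):
        """Sort a list using divide-and-conquer (merge sort)."""
        if len(numbers) <= 1:                # base case: already sorted
            return numbers
        mid = len(numbers) // 2
        left = merge_sort(numbers[:mid])     # divide: solve each half...
        right = merge_sort(numbers[mid:])
        return merge(left, right)            # ...then combine the sorted halves

    def merge(left, right):
        """Combine two sorted lists into one sorted list."""
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])              # at most one of these is non-empty
        merged.extend(right[j:])
        return merged

Note how the three moves of the paradigm – divide, solve the parts, combine – appear explicitly in the code.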

Sometimes in the design of an algorithm, once you “find the connection between the data and the unknown”, you also discover what type of algorithm you should use to solve the problem. Then, the steps to the problem’s solution become easy because your algorithm will employ the same overall strategy as any other algorithm of its kind.

But it doesn’t stop there. Designing an algorithm is much more than simply saying it will be a divide-and-conquer, or greedy, or dynamic programming algorithm. There are all sorts of details one needs to be aware of and all sorts of pitfalls one should avoid while crafting an algorithm. For instance, your “plan for the solution” will involve a number of steps. It is your responsibility to make sure that these steps will lead to the solution of the original problem, just as it is your responsibility to make sure that the steps you’ve come up with obey the constraints of the problem.

At this point, you’ve come a long way toward obtaining a plan for the solution. But it still doesn’t stop there. After designing an algorithm, you should have a general idea of how long it will take to execute as a function of the size of its input; that is, you have to conduct an asymptotic analysis of your algorithm.

As with the design of algorithms, there are a myriad of books on the analysis of algorithms. But in short, what you have to figure out at this stage is whether the running time of your algorithm will grow linearly, quadratically, exponentially, and so on, as a function of the size of the input. This will allow you to determine, roughly, how much time it will take for you to get the answer to the problem.
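
As a concrete illustration (this example is mine, not from the original discussion), consider two ways of deciding whether a list contains a duplicate:

    def has_duplicate_quadratic(items):
        """Compare every pair: the number of steps grows quadratically."""
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicate_linear(items):
        """Remember values already seen: the steps grow roughly linearly."""
        seen = set()
        for item in items:
            if item in seen:    # set lookup takes about constant time on average
                return True
            seen.add(item)
        return False

For a list of a million elements, the first version may perform on the order of half a trillion comparisons, while the second does about a million lookups – exactly the kind of rough estimate this stage is meant to give you.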

Sometimes, you’ll also want to have a certain amount of confidence that your algorithm is correct. To mathematically prove the correctness of an algorithm is very hard. But there are techniques to help convince you that your algorithm actually computes the thing you want it to compute and that it eventually terminates.

Among the most commonly used techniques to help you get a sense of the correctness of your algorithm are preconditions, postconditions, and invariants. Preconditions are things your algorithm assumes will be satisfied for it to function well. For instance, a precondition to a binary search is that the input array be sorted. Postconditions are conditions that have to be met when a given portion of your algorithm finishes. In the case of a sorting algorithm, a postcondition might be that the input array is sorted at the end of a given function. An invariant is a condition that must remain true during a certain portion of the algorithm. For example, an invariant in a sorting algorithm might be that, at each step of the algorithm, elements up to a certain index in the array are sorted. A nice way to check preconditions and postconditions, and to ensure invariants, is through the use of assertions, which are, according to Jon Bentley, a way to state that we believe a logical expression to be true. For instance, the statement assert(n >= 0) will do nothing if n is zero or positive, but will raise an error if n is negative.
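
Putting these ideas together, here is a sketch (mine, not Bentley’s) of a binary search in Python annotated with its precondition, invariant, and postcondition; in real code you might drop the sortedness assertion, since checking it costs more than the search itself:

    def binary_search(sorted_items, target):
        """Return an index of target in sorted_items, or -1 if absent."""
        # Precondition: the input array must already be sorted.
        assert all(sorted_items[i] <= sorted_items[i + 1]
                   for i in range(len(sorted_items) - 1)), "input must be sorted"
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            # Invariant: if target is present at all, its index lies in [lo, hi].
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        # Postcondition: we only reach this point when target is absent.
        return -1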


Rule #3: Carrying out the Plan

After all the work you’ve put into designing an algorithm to solve the problem, now comes a relatively easier part. As Polya puts it: “To devise a plan, to conceive the idea of the solution is not easy. It takes so much to succeed: formerly acquired knowledge, good mental habits, concentration upon the purpose, and one more thing: good luck. To carry out the plan is much easier; what we need is mainly patience.”

In Computer Science, besides a lot of patience, to carry out the plan one needs to know how to actually implement the proposed solution in a given programming language. In other words, this stage is about turning an algorithm into a program. According to Donald Knuth, “the word algorithm denotes an abstract method for computing some output from some input, while a program is an embodiment of a computation method in some language.”

The question then becomes: given a specific algorithm, what are the skills one needs to have and the steps one needs to follow to implement the algorithm in a programming language of choice?

You need, of course, to know a programming language. It might sound obvious, but knowing a programming language – and knowing it well – is an invaluable skill. Get to know the syntax of the language; get to know the libraries available and how their functions are called – what they take as input and what they return – so that you can write programs without looking up online what parameters you need to pass to the library functions you are using. By mastering a programming language you will be able to turn algorithms into programs more easily, quickly, and elegantly. Besides, you will be able to prevent and identify errors in your programs more effectively.

Programming is an error-prone activity, and one needs to be prepared to identify and correct bugs throughout the whole process. Your algorithm might be flawless, but bugs can creep in when you try to implement an algorithm in a programming language.

Despite sounding old-fashioned, the best way to identify bugs in your code is by testing it extensively. Test each function you write, test how the functions interact with one another, test how the whole program behaves when you weave your functions together. Test not only the normal, expected inputs, but also the corner cases as well as how your program behaves when fed unexpected inputs.
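
As a small illustration (the test cases are mine, and the mysort module name is hypothetical, standing for wherever you saved the earlier merge_sort sketch), here is how such tests might look with Python’s unittest module:

    import unittest

    from mysort import merge_sort   # hypothetical module holding the earlier sketch

    class TestMergeSort(unittest.TestCase):
        def test_typical_input(self):
            self.assertEqual(merge_sort([3, 1, 2]), [1, 2, 3])

        def test_corner_cases(self):
            self.assertEqual(merge_sort([]), [])                # empty input
            self.assertEqual(merge_sort([42]), [42])            # single element
            self.assertEqual(merge_sort([2, 2, 1]), [1, 2, 2])  # duplicates

        def test_unexpected_input(self):
            # Incomparable elements are an unexpected input: decide
            # what should happen, then encode that decision in a test.
            with self.assertRaises(TypeError):
                merge_sort([1, "two", 3])

    if __name__ == "__main__":
        unittest.main()

The last case assumes merge_sort simply propagates Python’s TypeError when elements can’t be compared; if you want friendlier behavior, the test should capture that choice instead.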


Rule #4: Looking Back

Now you’ve finally made it. You’ve got an answer to your problem.

You began by gaining a thorough understanding of the problem, then designed an algorithm to solve it, and finally implemented your algorithm.

Now, you have to look back and examine your solution, look at what you have done, think about what worked and what didn’t. You can also evaluate possible ways to make your solution more general and analyze whether it can be applied to other problems. You can even think of ways to make your program more elegant, simpler, faster.

Maybe there is some little constraint you can leverage to speed up your program, or some little detail – that you now see at a glance – which will help you make your solution simpler. Take a moment, look back, and improve your solution.
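
For example (an illustration of my own, not from the original text): if, looking back, you notice that the numbers you are sorting are small nonnegative integers in a known range, you can leverage that constraint and replace a general comparison sort with a counting sort, which runs in linear time:

    def counting_sort(numbers, max_value):
        """Sort nonnegative integers no larger than max_value in linear time."""
        counts = [0] * (max_value + 1)
        for n in numbers:                    # count how often each value occurs
            counts[n] += 1
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)   # emit each value, count times
        return result

One small observation made while looking back, and the solution becomes both simpler and faster for that special case.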