As I mentioned before, the state of the art in logics of programs isn't yet capable of verifying large-scale programs, even with the best theorem provers currently available. As a result, the vast majority of programmers don't use any formal methods during software development, and the term "verification and validation" refers mainly to testing. What many people don't realize is that there is a middle way in using formal methods; it is possible to improve software quality by using limited but effective parts of the theory.

Design by Contract is a method of using assertions in object-oriented programs to specify their behavior. The importance of this specification is first and foremost methodological. The mere fact of writing these specifications and having them available in libraries focuses programmers' attention on the correct use of program interfaces. Many errors stem from misunderstandings; a simple but common example is when the writer of a method assumes that a certain parameter may not be null, and therefore omits to check or handle the case of a null pointer, while the client of the method assumes that the method does something reasonable when given a null value. The contract defines precisely the responsibilities of the client and supplier; in particular, if the method doesn't handle null pointers, it must declare that fact in its contract. This contract is a formal part of the interface, and (unlike the source code) must be available to all programmers who use the method. They, in turn, must look at the contract and behave according to their responsibilities under it.
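To make the division of responsibilities concrete, here is a minimal sketch in Python (the function and its contract are invented for illustration): the supplier declares in its contract that it does not accept a null (None) argument, and checks that fact at the boundary, so a violation surfaces at the call site rather than deep inside the implementation.

```python
def format_name(name):
    """Supplier contract (illustrative):
    precondition:  name is not None
    postcondition: returns a title-cased string with surrounding spaces removed
    """
    # The supplier does not handle None; the contract says so explicitly,
    # so the violation fails fast at the boundary instead of much later.
    if name is None:
        raise ValueError("precondition violated: name must not be None")
    return name.strip().title()

# The client has read the contract and honors its side of the bargain:
# it guarantees a non-None argument.
print(format_name("  ada lovelace "))  # Ada Lovelace
```

The point is not the check itself but the fact that the responsibility is stated: the client knows it must never pass None, and the supplier knows it need not handle it.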

This simple process of writing the contract for suppliers and consulting it when writing clients can prevent many errors before they are even made. For this to work, the contract doesn't even need to be completely formal, as long as it is precise. It also has to be simple enough to be easily understood; otherwise it is worthless and may even cause damage. However, if developers take the trouble to write formal contracts, they can be rewarded with more than methodological advantages. Indeed, if contracts are written using a formal notation, it is possible to create tools that give additional benefits.

First, it is possible (though by no means trivial) to write tools that check the contract at runtime. Many errors are time bombs waiting to happen. For example, storing a null value in a field of an object may by itself cause no harm. Only later, when the field is accessed under the assumption that it can't contain null, does the software fail with an error (in Java, it would be a NullPointerException, nicknamed NPE). The process of finding where in the program the original null value was assigned can be difficult and time consuming. A debugger is a useful tool, but it takes skill and time to use it to track such errors, because in many cases the method call that assigned the null value has already completed its execution and is no longer available on the stack to be examined by the debugger. A tool that checks contracts at runtime would alert the developer to the problem much earlier, thus speeding up both the discovery and the fix of the problem.
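Such a runtime checker can be sketched in miniature. This toy Python decorator (illustrative only, not any real tool) validates a precondition on the arguments and a postcondition on the result at every call, so a null argument fails immediately at the offending call rather than as a delayed NPE far from its cause:

```python
import functools

def contract(pre=None, post=None):
    """Toy runtime contract checker (a sketch, not a production tool):
    'pre' validates the arguments, 'post' validates the result."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise AssertionError(f"precondition of {fn.__name__} violated")
            result = fn(*args, **kwargs)
            if post is not None and not post(result):
                raise AssertionError(f"postcondition of {fn.__name__} violated")
            return result
        return wrapper
    return decorate

@contract(pre=lambda x: x is not None, post=lambda r: r >= 0)
def length_of(x):
    return len(x)

print(length_of("hello"))  # 5
# length_of(None) would fail here, at the call, pinpointing the culprit.
```

Real contract tools do far more (inheritance of contracts, old-value expressions, invariants), but even this sketch shows how a violation is reported at the moment it occurs.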

Contracts also enable some level of verification. Instead of having to prove that the program as a whole has some desired properties, the method and class contracts provide "stepping stones" to a (partial) proof of correctness. If we can prove that all calls in the program satisfy the contract of the called method, we will have increased our confidence in the correctness of the program. Because proving such properties is much easier than proving correctness of a whole program, tools such as theorem provers can be more effective there. While there aren't yet any commercial tools that do that, there is a lot of research on this topic, and I expect the benefits of automated theorem provers to become practical first in this limited arena.

One of the important advantages of design by contract is that it provides incremental benefits. Programmers can get immediate benefit from simple contracts, such as those having to do with null pointers. By adding more details to the contract, they can get more benefit. It isn't necessary (or even useful) to attempt a full specification in order to use this methodology.

Purists may scoff and say that this is not what logicians mean by verification, and is no better than testing. I would agree with the first part of this statement, but not with the second. Contracts are complementary to testing, and both are necessary. (I will have more to say about this in the future.) If you insist on full verification, meaning proofs of complete correctness, you will not bring practical value to software development for a very long time. Design by contract is a practical and useful methodology that should be employed by every software developer. I will discuss some of the reasons why this is not yet the case in a future post.

**Thoughts about software development and related issues.**

The views expressed here are my own and do not necessarily reflect those of my employer, IBM.


## Saturday, February 26, 2011

## Thursday, February 24, 2011

### Logics of Programs

From the early days of computers, it was obvious that programming is difficult and error-prone. The mathematical nature of computer programs leads to the obvious thought that the same methods used to make sure that mathematical theorems are correct can help ensure that programs are correct. This has led to a lot of research on logics for computer programs. There are many abstractions of programs, and correspondingly many logics that deal with them.

At the simplest level, it is possible to think of a program as being executed sequentially, instruction by instruction. This is almost never true; even the simplest processors typically interrupt the sequential flow to handle external events, such as pressing a key on a keyboard or control panel connected to the computer. However, if we trust the operating system to make a clean separation between such interrupt handlers and the running application, then it is possible to think of some applications, at a higher level of abstraction, as running sequentially. But many applications are concurrent by nature, and these are the most difficult to get right. There are many examples of small programs with two parallel components that have subtle and hard-to-find bugs in them, and parallel or concurrent systems are notoriously hard to program.

Even the simple sequential model requires special mathematical treatment, because it has to represent the changing state of the computer. Every operation in a program changes the state of the machine in some way; otherwise, there would be no point in executing it. There are many types of changes: first, the location of the next instruction (the "program counter") changes, either to the next instruction in the sequence, or to a different place for instructions that change the flow of control. Some instructions change the contents of memory, registers, or various control bits (for example, one that records whether the previous comparison resulted in a zero value). High-level languages hide some of these details, but most (with the notable exception of functional and logic languages) still have the notion of changing state.
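The state the post describes can be modeled quite directly. In this illustrative Python sketch (all names invented), a machine state is an explicit mapping from variable names to values, and an instruction is simply a function from state to state:

```python
# A machine state as an explicit mapping from variable names to values.
# An instruction is then just a function from one state to the next.
def assign(var, expr):
    """Return an instruction that evaluates expr in the old state
    and binds the result to var in the new state (illustrative)."""
    def step(state):
        new_state = dict(state)          # the old state is left intact
        new_state[var] = expr(state)
        return new_state
    return step

s0 = {"x": 1, "y": 2}
s1 = assign("x", lambda s: s["y"] + 1)(s0)

# The same formula can be true in one state and false in another:
print(s0["x"] == 3)  # False
print(s1["x"] == 3)  # True
```

This is exactly why a formula like x=3 is meaningless on its own: its truth depends on which state you evaluate it in.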


A logical model of programs with changing state must represent this state explicitly. If x is a variable in a program, the formula x=3 is meaningless with respect to a program, because it may be true or false in different states of the computation of the same program. The logical model must have an explicit representation of states, and some syntactic way of referring to different states. One of the best-known formalisms is called *Hoare logic*, after C. A. R. Hoare, Turing Award winner and famous, among other things, as the inventor of the quicksort algorithm.

Every formula in this logic is a *Hoare triple* of the form {P} A {Q}, where A is a (possibly partial) program and P and Q are logical statements about the variables of the program. This formula asserts that if P holds in the state just prior to the computation of A, and if this computation terminates, then Q will be true in the resulting state. For example, a (somewhat simplified) way of describing the effect of an assignment x := e in a program is the formula {e = c} x := e {x = c}, where c is a constant. (Note the crucial difference between logical equality and assignment, so unfortunately blurred by languages from Fortran to Java that use "=" for assignment.) This formula asserts that if the value of the expression e in the state preceding the assignment is c, then the value of x after the assignment is also c.

As another example, consider how to prove that a certain formula holds for a conditional: {P} if C then A else B {Q}. If the condition C is true in the state before the computation, then the conditional will choose the subprogram A. In order for the desired formula to hold, it must be the case that if P and C are true before the computation, Q must be true afterwards (provided, as always, that the computation terminates). Formally, {P **and** C} A {Q} must be true. Similarly, {P **and not** C} B {Q} must hold to guarantee the desired result in the case where the condition C is initially false. And indeed, the desired formula follows from the latter two in Hoare logic.

As is the case for most logics of programs, the difficulty lies in reasoning about loops (and recursion). In order to prove a property of a loop using these logics, it is necessary to come up with a *loop invariant*. This is a property that always holds at the beginning of each iteration. In order to prove a property P following the loop, you prove that the invariant holds on entry to the loop, that it sustains itself across one iteration, and that if you exit the loop with the invariant holding, P must be true at that point.

This is very similar to a proof by induction; the first step is the base of the induction, and the second is the inductive step, in which you prove that if the invariant holds at the beginning of one iteration, it also holds at the beginning of the next one. As is the case with inductive proofs, coming up with a suitable invariant is difficult. Much of the work of automatic theorem provers (for mathematics as well as for logics of programs) is discovering the induction hypotheses or, equivalently, the loop invariants. (In fact, in one of my favorite theorem provers, ACL2, these are exactly the same.)
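The invariant-based argument can be animated at runtime. This Python sketch (an illustration, not a proof) sums a list and asserts the invariant "the accumulator equals the sum of the elements processed so far" on entry to each iteration and after each step; on exit, the invariant plus the loop's exit condition yield the desired property:

```python
def total(values):
    """Sum a list while checking a loop invariant at runtime (illustrative):
    invariant: acc == sum(values[:i]) at the start of each iteration."""
    acc, i = 0, 0
    while i < len(values):
        assert acc == sum(values[:i])   # the invariant holds on entry...
        acc += values[i]
        i += 1
        assert acc == sum(values[:i])   # ...and is re-established afterwards
    # On exit, i == len(values), so the invariant gives acc == sum(values).
    return acc

print(total([3, 1, 4, 1, 5]))  # 14
```

Asserting an invariant on particular runs is of course weaker than proving it for all runs, but it makes the structure of the inductive argument tangible.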

This kind of reasoning is very low-level, and it is impossible to use it to prove the correctness of a large program manually. Automated theorem provers are a great help, but the process is still difficult and time-consuming, and requires high skills in logic and the specific theorem prover. It is therefore limited to relatively small but safety-critical programs. Wide application of automatic theorem proving to large-scale programs seems to be far in the future. As a result, there is a chasm between the people developing logics of programs and associated theorem provers, and the general population of software developers, most of whom are not even aware of the existence of these methods and tools. But there is a middle way; I will discuss it in the next post.

## Tuesday, February 15, 2011

### Thinking Formally

If there is no alternative to clear thinking about our programs, we should investigate what we can learn from the science of organized thought. Logic has a long history, with equal roots in philosophy and mathematics. In some sense, computer programs breathe life into abstract logic and give it the ability to do real things. And, just like Pinocchio, they often misbehave!

One of the fundamental concerns of logic is the study of proofs: how can we convince ourselves, in a precise and mechanical way, of the truth of some claim? The desire to do so resulted from the need to distinguish between valid and invalid mathematical proofs, expressed using prose. Sometimes, hidden assumptions were not recognized, leading to faulty proofs. For example, consider the first axiomatic proof system, Euclidean geometry, studied in middle school. It has five axioms, including the famous "parallel postulate". It is an amazing achievement, which is still an excellent example of how to reason formally, and was the source of many important advances in mathematics. But Euclidean geometry has many hidden assumptions, which are glossed over by the use of informal drawings. About 2200 years after Euclid, several mathematicians tried to completely formalize geometrical proofs. David Hilbert's axiomatization of geometry includes twenty axioms instead of Euclid's five. One of these states that between every two different points on a straight line there is a third point on the same line. This was so obvious to Euclid that he didn't even think to mention it in his list of axioms. But axioms are exactly those self-evident propositions on which the theory is based, and the proof isn't logically sound unless all these are stated explicitly.

Logic is closely tied with computation, although the connection was made explicit only towards the 20th century. A formal proof is a series of deductions, each of which is based on axioms or previously-proven propositions, using a set of deduction rules. It is supposed to be so simple that it can easily be checked for correctness, without any understanding of its subject matter, whether it is geometry, number theory, or abstract algebra. In other words, it should be easy to write a program that checks a formal proof; all you need to do is verify that each step follows from the axioms and from previous steps according to the rules.
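To see how mechanical proof checking can be, here is a toy checker in Python (the proof system is deliberately minimal and invented for illustration): formulas are plain strings, and a proof step is justified only if it is an axiom or follows from two earlier steps by modus ponens (from "A" and "A -> B", conclude "B"):

```python
# A toy proof checker (illustrative): a proof is a list of formulas, and
# each step must be an axiom or follow from two earlier steps by
# modus ponens. The checker needs no understanding of what the
# formulas mean -- it only matches symbols.
def check_proof(axioms, steps):
    proved = []
    for formula in steps:
        ok = formula in axioms or any(
            f"{a} -> {formula}" in proved for a in proved)
        if not ok:
            return False                 # this step is not justified
        proved.append(formula)
    return True

axioms = {"p", "p -> q", "q -> r"}
print(check_proof(axioms, ["p", "p -> q", "q", "q -> r", "r"]))  # True
print(check_proof(axioms, ["p", "r"]))                           # False
```

Note that the checker is purely syntactic, exactly as the text describes: it verifies that each step follows by the rules, with no understanding of the subject matter.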

But wait! Is there a limit to the complexity of the axioms or of the rules used in the proof? Without such limits, the whole exercise falls apart. For example, imagine a system of axioms that contains all true statements in number theory. This is, of course, an infinite set. How difficult is it to write a program to check whether a given statement is an axiom? Well, it turns out to be impossible! (This is a consequence of Gödel's first incompleteness theorem.) So, if we are to be able to write a program to check formal proofs, we need to restrict the possible sets of axioms. For example, in Gödel's theorems, the set of axioms is required to be recursively enumerable; this means that there is a (finite) program that can generate all axioms and never generates anything that is not an axiom. Of course, this program will never terminate if the set is infinite. That's acceptable, as long as the program is guaranteed to produce every axiom if you wait long enough.
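Recursive enumerability is easy to illustrate with an axiom *schema*. This Python generator (an invented miniature, not Gödel's actual system) enumerates the infinite set of axioms "n + 0 = n", one for every numeral n: it never terminates, but any particular axiom appears after a finite wait.

```python
from itertools import count, islice

def axioms():
    """Enumerate an infinite axiom schema (illustrative): n + 0 = n
    for every numeral n. The generator never ends, but every axiom
    is produced eventually -- recursive enumerability in miniature."""
    for n in count():
        yield f"{n} + 0 = {n}"

# Any particular axiom shows up if you wait long enough:
print(list(islice(axioms(), 3)))  # ['0 + 0 = 0', '1 + 0 = 1', '2 + 0 = 2']
```

Crucially, the generator also never produces anything that is *not* an axiom, which is the other half of the requirement.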


So now, a concept about computability has entered the study of logic itself. It is only fair, then, to apply logic to computation; and, indeed, the study of logics of programs is an important field in computer science. There are even computer programs called *theorem provers*, which have been used to prove difficult mathematical theorems as well as to verify the correctness of other programs. However, this has still not affected the day-to-day work of most software developers. In the next post, I will discuss why this is the case and what formal tools can still be used for software development today.

## Tuesday, February 1, 2011

### Responsible Adults

If some programmers are not behaving maturely (see Programmers as Children), what can we do as responsible adults to make things better? There are many answers, and they depend on who is meant by "we".


First, however, I must mention what is perhaps Fred Brooks's best-known paper, "No Silver Bullet: Essence and Accidents of Software Engineering" (*IEEE Computer*, April 1987). In it, Brooks claims that software engineering is hard *in essence*, and therefore nothing can make it easy. While a lot of progress has been made on what Brooks calls *accidental difficulties*, which have to do with the way we represent the abstract concepts of software engineering, these advances only expose the essential difficulty of the conceptual structures underlying software development. The silver bullet is the only effective way to kill werewolves; Brooks claims that no such magical cure can be found for the ills of software development.

This is, of course, a provocative statement, and it sparked lively discussion. But in spite of various opinions to the contrary, no silver bullet has yet been found. This is not to say that progress has not been made. Even if the essential problem is difficult, programmers are still being asked to develop larger and larger systems. They should use whatever cures are available, even if they cure only the symptoms and not the disease.

So what can we do? The most important thing is to be aware of the difficulties and look for ways to avoid them. There is no substitute for careful thinking, and no tool will do that for us until true artificial intelligence is available. (While great progress is being made – for example, see the Watson Jeopardy challenge – don't hold your breath waiting for "automatic programming" in the sense of the computer writing its own programs.) More concretely, there are many practices we can follow in order to make software development easier, more productive, more reliable, and also more fun. These fall into several categories, including methodology and process, languages and tools, and education. I will have something to say about each of these, but only a book the size of an encyclopedia can do justice to them all.

There are many software-development methodologies, and none is perfect. In fact, Daniel Berry of the University of Waterloo has a paper titled "The Inevitable Pain of Software Development: Why There Is No Silver Bullet". Another provocative paper, and one of the responses to the one by Brooks, this one focuses on why no methodology can ever be a silver bullet. Berry claims that every methodology has some inherent pain associated with it. It may be all pain, as in the maligned waterfall method, or the pain could be just a small part of it. In any case, it is unavoidable, and developers therefore evade it by not following the methodology rigorously, thus defeating its purpose. But this is like parenting: although no one knows of a sure method to raise perfect children, that doesn't keep us from trying. In the same way, the fact that there is no perfect development methodology doesn't mean that we should not attempt to find the best process for our situation and follow it to the best of our ability.

As I mentioned before, I'm a great believer in tools, and spend a lot of time developing new tools. I have also been involved with computer-science education, at both the university and high-school levels, and I have a lot to say about these topics. Stay tuned!
