Correctness and Completeness (Progress in Theoretical Computer Science)

Semantics of Type Theory: Correctness, Completeness and Independence Results. Part of the Progress in Theoretical Computer Science series.

These mathematical systems form a rich class of programming logics that have proven useful in computer science. Conversely, advances in computer science have created a new field of mathematics called Formalized Mathematics.

This course will demonstrate these connections between computer science and logic. Offered at Cornell University as CS Applied Logic, it is aimed at third- or fourth-year undergraduates and first-year graduate students interested in logic. Topics include:

  • From Analytic Tableaux to Gentzen Systems
  • From Gentzen Systems to Refinement Logic
  • Correctness and Completeness of Refinement Logic
  • First-Order Logic: Syntax and Semantics
  • First-Order Tableaux Proof System
  • First-Order Tableaux Completeness and Compactness


The idea of the finite-state machine began as a theoretical construct but is now fully naturalized throughout computer science as an organizing principle and specification tool, independent of any analytical considerations.

Introductory texts describe certain programming patterns as state-driven (Garland) or state-based (Clancy and Linn). An archetypal state-based program is a menu-driven telephone-inquiry system. Based on their familiarity with the paradigm, software engineers instinctively know how to build such programs.



The ubiquity of the paradigm has led to the development of special tools for describing and building state-based systems, just as for parsers. Work continues to devise machine models to describe different types of systems.
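
As a minimal sketch of the state-based style, the transition table and driver loop below model a hypothetical menu-driven telephone-inquiry system; the states, key presses, and prompts are invented for illustration and do not come from any particular tool or report.

```python
# A minimal sketch of a state-based (finite-state machine) program,
# modeled loosely on a menu-driven telephone-inquiry system.
# The states, inputs, and transitions below are hypothetical examples.

TRANSITIONS = {
    ("main_menu", "1"): "account_balance",
    ("main_menu", "2"): "recent_calls",
    ("main_menu", "0"): "goodbye",
    ("account_balance", "9"): "main_menu",
    ("recent_calls", "9"): "main_menu",
}

PROMPTS = {
    "main_menu": "Press 1 for balance, 2 for recent calls, 0 to hang up.",
    "account_balance": "Your balance is $42.00. Press 9 for the main menu.",
    "recent_calls": "You have 3 recent calls. Press 9 for the main menu.",
    "goodbye": "Goodbye.",
}

def run(inputs):
    """Drive the machine over a sequence of key presses."""
    state = "main_menu"                      # initial state
    for key in inputs:
        print(PROMPTS[state])
        # Unrecognized keys leave the state unchanged.
        state = TRANSITIONS.get((state, key), state)
        if state == "goodbye":
            break
    print(PROMPTS[state])

if __name__ == "__main__":
    run(["1", "9", "2", "9", "0"])
```

The appeal of the pattern is visible even at this scale: all of the behavior lives in one table, which can be inspected, drawn as a diagram, or generated by a tool, while the driver loop never changes.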

CS Applied Logic

The theory of computability preceded the advent of general-purpose computers and can be traced to work by Alan Turing, Kurt Gödel, Alonzo Church, and others (Davis). Computability theory concentrated on a single question: do effective procedures exist for deciding mathematical questions? The requirements of computing have raised more detailed questions about the intrinsic complexity of digital calculation, and these questions have raised new issues in mathematics. Algorithms devised for manual computing were often characterized by operation counts.

For example, various schemes were proposed for carrying out Gaussian elimination or finite Fourier transforms using such counts. This approach became more common with the advent of computers, particularly in connection with algorithms for sorting (Friend). However, the inherent difficulty of computing problems did not become a distinct research topic until the 1960s.

In time, the analysis of algorithms became an established aspect of computer science, and Knuth published the first volume of a treatise on the subject that remains an indispensable reference today. Work on complexity theory has evolved alongside practical considerations: Hao Wang noted distinctions of form that rendered some problems in mathematical logic decidable, whereas logical problems as a class are undecidable.

There also emerged a robust classification of problems based on the machine capabilities required to attack them. Working at General Electric, Juris Hartmanis and Richard Stearns developed a "speed-up" theorem, which said essentially that the complexity hierarchy is unaffected by the underlying speed of computing; Hartmanis then left GE to found the computer science department at Cornell University.

What distinguishes levels of the hierarchy is the way that solution time varies with problem size, not the scale at which time is measured. Thus, it is useful to talk of complexity in terms of order of growth. To that end, the "big-oh" notation, of the form O(n), was imported from algorithm analysis into computing, most notably by Knuth, where it has taken on a life of its own.

The notation describes the rate at which the time needed to generate a solution varies with the size of the problem. Problems in which there is a linear relationship between problem size and time to solution are O(n); those in which the time to solution varies as the square of the problem size are O(n²). The quantitative approach to complexity pioneered by Hartmanis and Stearns spread rapidly in the academic community. Applying this sharpened viewpoint to decision problems in logic, Stephen Cook at the University of Toronto proposed the most celebrated theoretical notion in computing: NP-completeness.
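
For reference, the order-of-growth idea has a standard formal reading (a textbook definition, not a quotation from the sources above):

```latex
% Big-oh notation: f grows no faster than g, up to a constant factor.
f(n) = O\bigl(g(n)\bigr)
  \iff
  \exists\, c > 0,\ \exists\, n_0,\ \forall n \ge n_0:\quad
  |f(n)| \le c \cdot |g(n)|.
% Example: 3n^2 + 10n = O(n^2), since 3n^2 + 10n \le 4n^2 for all n \ge 10.
```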

His "P versus NP" conjecture is now counted among the important open problems of mathematics. It states that there is a sharp distinction between problems that can be computed deterministically or nondeterministically in a tractable amount of time. The practical importance of Cook's work was vivified by Richard Karp, at the University of California at Berkeley UC-Berkeley , who demonstrated that a collection of nondeterministically tractable problems, including the famous traveling-salesman problem, 6 are interchangeable "NP complete" in the sense that, if any one of them is deterministically tractable, then all of them are.

Cook's conjecture, if true, implies that there is no hope of solving any of these problems exactly on a real computer without incurring an exponential time penalty. As a result, software designers who know that a particular application harbors an NP-complete problem do not search for an efficient exact algorithm and settle instead for approximate solutions. This leads to another question: how good a solution can be found in a reasonable amount of time? A more refined theory about approximate solutions to difficult problems has been developed (Hochbaum), but, given that such approximations are not widely used by computer scientists, this theory is not addressed in detail here.


Fortunately, good approximation methods do exist for some NP-complete problems. For example, huge "traveling salesman" routes are routinely used to minimize the travel of an automated drill over a circuit board in which thousands of holes must be bored. These approximation methods are good enough to guarantee that certain easily computed solutions will come very close to the optimum.
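
One simple heuristic in this spirit is the nearest-neighbor rule: always move the drill to the closest unvisited hole. The sketch below uses made-up coordinates and is only illustrative; the methods with provable near-optimality guarantees alluded to above are considerably more sophisticated.

```python
# A sketch of a simple approximation heuristic for a traveling-salesman-style
# drilling route: repeatedly move to the nearest unvisited hole.
# Coordinates are made-up example data; real tools apply stronger heuristics
# (e.g., local-improvement moves) on top of a starting route like this one.

import math

HOLES = [(0, 0), (1, 5), (2, 2), (6, 1), (5, 4), (8, 3)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_route(points):
    """Greedy route: start at the first point, always visit the closest remaining one."""
    route = [points[0]]
    remaining = set(points[1:])
    while remaining:
        nxt = min(remaining, key=lambda p: dist(route[-1], p))
        route.append(nxt)
        remaining.remove(nxt)
    return route

if __name__ == "__main__":
    route = nearest_neighbor_route(HOLES)
    total = sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))
    print(route, round(total, 2))
```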

    Although the earliest computer algorithms were written largely to solve mathematical problems, only a tenuous and informal connection existed between computer programs and the mathematical ideas they were intended to implement. The gap between programs and mathematics widened with the rise of system programming, which concentrated on the mechanics of interacting with a computer's environment rather than on mathematics. The possibility of treating the behavior of programs as the subject of a mathematical argument was advanced in a compelling way by Robert Floyd at UC-Berkeley and later amplified by Anthony Hoare at The Queen's University of Belfast.

    The academic movement toward program verification was paralleled by a movement toward structured programming, christened by Edsger Dijkstra at Technische Universiteit Eindhoven and vigorously promoted by Harlan Mills at IBM and many others. A basic tenet of the latter movement was that good program structure fosters the ability to reason about programs and thereby assure their correctness.

    Structured programming became an obligatory slogan in programming texts and a mandated practice in many major software firms. In the full verification approach, a program's specifications are described mathematically, and a formal proof that the program realizes the specifications is carried through. To assure the validity of the exhaustingly long proof, it would be carried out or checked mechanically. To date, this approach has been too onerous to contemplate for routine programming.

Nevertheless, advocates of structured programming promoted some of the verification approach's key ideas, namely precondition, postcondition, and invariant.


These terms have found their way into every computer science curriculum, even at the high school level. In formal verification, computer programs become objects of mathematical study. A program is seen as affecting the state of the data with which it interacts. The purpose of the program is to transform a state with known properties (the precondition) into a state with initially unknown but desired properties (the postcondition). A program is composed of elementary operations, such as adding or comparing quantities.

    The transforming effect of each elementary operation is known. Verification consists of proving, by logical deduction, that the sequence of program steps starting from the precondition must inexorably lead to the desired postcondition. When programs involve many repetitions of the same elementary steps, applied to many different data elements or many transformational stages starting from some initial data, verification involves showing once and for all that, no matter what the data are or how many steps it takes, a program eventually will achieve the postcondition.

    Such an argument takes the form of a mathematical induction, which asserts that the state after each repetition is a suitable starting state for the next repetition. The assertion that the state remains suitable from repetition to repetition is called an "invariant" assertion. An invariant assertion is not enough, by itself, to assure a solution. To rule out the possibility of a program running forever without giving an answer, one must also show that the postcondition will eventually be reached. This can be done by showing that each repetition makes a definite increment of progress toward the postcondition, and that only a finite number of such increments are possible.
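
A textbook-style illustration (not drawn from the report) makes these notions concrete: in the summation loop below, runtime assertions play the role of the precondition, the invariant, and the postcondition, and the shrinking count of unprocessed elements supplies the termination argument.

```python
# A textbook-style illustration of precondition, postcondition, invariant,
# and a termination argument, using runtime assertions to mirror the proof
# obligations that a formal verifier would discharge.

def sum_list(xs):
    # Precondition: xs is a finite list of numbers.
    assert all(isinstance(x, (int, float)) for x in xs)

    total, i = 0, 0
    while i < len(xs):
        # Invariant: total equals the sum of xs[0..i-1] at the top of every iteration.
        assert total == sum(xs[:i])
        # Termination measure: len(xs) - i is a non-negative integer that
        # strictly decreases on each iteration, so the loop cannot run forever.
        total += xs[i]
        i += 1

    # Postcondition: total is the sum of the entire list.
    assert total == sum(xs)
    return total

if __name__ == "__main__":
    print(sum_list([3, 1, 4, 1, 5]))   # 14
```

A verifier proves these assertions once and for all, by induction over the iterations, rather than checking them at run time as this sketch does.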

Although notationally straightforward, the formal verification of everyday programs poses a daunting challenge.


Familiar programs repeat thousands of elementary steps millions of times. Moreover, it is a forbidding task to define precise preconditions and postconditions for a realistic program. To carry mathematical arguments through on this scale requires automation in the form of verification tools. To date, such tools can handle only problems with short descriptions, a few dozen pages at most.

      Nevertheless, it is possible for these few pages to describe complex or subtle behavior. In these cases, verification tools come in handy. The structured programming perspective led to a more advanced discipline, promulgated by David Gries at Cornell University and Edsger Dijkstra at Eindhoven, which is beginning to enter curricula.

In this approach, programs are derived from specifications by algebraic calculation. In its most advanced manifestation, formulated by Eric Hehner, programming is identified with mathematical logic.
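
A tiny, hypothetical example conveys the calculational flavor: to meet the postcondition r == a*b, replace part of it by a variable to obtain an invariant, and then read the loop body off from what is needed to preserve that invariant.

```python
# A sketch of deriving a loop from its specification, in the calculational style:
# to achieve the postcondition r == a * b, weaken it to the invariant
# r + i * b == a * b (obtained by introducing the variable i), then let the
# loop drive i down to 0 while preserving the invariant.

def multiply(a, b):
    # Precondition: a is a non-negative integer.
    assert isinstance(a, int) and a >= 0

    r, i = 0, a                       # establishes the invariant: 0 + a*b == a*b
    while i != 0:
        assert r + i * b == a * b     # invariant holds at the top of each iteration
        r, i = r + b, i - 1           # chosen precisely to preserve the invariant
    # i == 0 together with the invariant gives the postcondition.
    assert r == a * b
    return r

if __name__ == "__main__":
    print(multiply(6, 7))   # 42
```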

Although it remains to be seen whether this degree of mathematicization will eventually become routine, in one area, the design of distributed systems, it is spreading in the field perhaps faster than in the classroom. The initial impetus was West's validation of a proposed international standard protocol. The subject quickly matured, both in practice (Holzmann) and in theory (Vardi and Wolper); by now, engineers have harnessed a plethora of formal algebras to the task. It is particularly difficult to foresee the effects of abnormal events on the behavior of communications applications. Loss or garbling of messages between computers, or conflicts between concurrent events, such as two travel agents booking the same airline seat, can cause inconvenience or even catastrophe, as noted by Neumann. These real-life difficulties have encouraged research in protocol analysis, which makes it possible to predict behavior under a full range of conditions and events, not just a few simple scenarios.
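
The flavor of such analysis can be suggested with a toy script (purely illustrative; real protocol analyzers are far more capable): it enumerates every interleaving of two agents that each read a seat's status and then book it, and counts the interleavings in which both agents end up holding the same seat.

```python
# A toy exhaustive-interleaving check in the spirit of protocol analysis:
# two agents each perform "read seat status" then "book if seen free", and we
# enumerate every interleaving of those steps to expose the double-booking race.
# This is an illustrative sketch, not a real model checker.

from itertools import permutations

def run(schedule):
    """Execute one interleaving; each agent has steps ('read', 'write')."""
    seat_free = True
    saw_free = {"A": False, "B": False}
    booked = {"A": False, "B": False}
    for agent, step in schedule:
        if step == "read":
            saw_free[agent] = seat_free
        elif step == "write" and saw_free[agent]:
            booked[agent] = True
            seat_free = False
    return booked

def interleavings():
    """All orderings of the four steps that keep each agent's read before its write."""
    steps = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
    for order in permutations(steps):
        if order.index(("A", "read")) < order.index(("A", "write")) and \
           order.index(("B", "read")) < order.index(("B", "write")):
            yield order

if __name__ == "__main__":
    bad = [s for s in interleavings() if all(run(s).values())]
    print(f"{len(bad)} interleavings double-book the seat")   # 4 of the 6 possible
```

Exploring all interleavings by brute force works only for toy models; the research described above is largely about making such exploration feasible for realistic protocols.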

A body of theory and practice has emerged in the past decade to make automatic analysis of protocols a practical reality.

Cryptography is now more important than ever. Although the military has a long history of supporting research on encryption techniques to maintain the security of data transmissions, it is only recently that cryptography has come into widespread use in business and personal applications. It is an increasingly important component of systems that secure online business transactions or maintain the privacy of personal communications.

      The field has also been controversial, in that federal agencies have sometimes opposed, and at other times supported, publicly accessible research. Here again, the NSF supported work for which no funding could be obtained from other agencies. The scientific study of cryptography matured in conjunction with information theory, in which coding and decoding are central concerns, albeit typically in connection with compression and robust transmission of data as opposed to security or privacy concerns.

Although Claude Shannon's seminal treatment of cryptography (Shannon) followed his founding paper on information theory, it was actually written earlier under conditions of wartime security. Undoubtedly, Shannon's involvement with cryptography on government projects helped shape his thinking about information theory.


Although impressive accomplishments, such as Great Britain's Ultra code-breaking enterprise in World War II, were known by reputation, the methods were largely kept secret. The National Security Agency (NSA) was for many years the leader in cryptographic work, but few of the results were published or found their way into the civilian community. However, an independent movement of cryptographic discovery developed, driven by the availability and needs of computing. Ready access to computing power made cryptographic experimentation feasible, just as opportunities for remote intrusion made it necessary and the mystery surrounding the field made it intriguing.

The mechanism of DES, the Data Encryption Standard, was disclosed, although a pivotal aspect of its scientific justification remained classified. Speculation about the strength of the system spurred research just as effectively as if a formal request for proposals had been issued. Martin Hellman had been interested in cryptography since the early 1970s and eventually convinced the NSF to support it; with Whitfield Diffie, he proposed public-key cryptography (Diffie and Hellman). Their method won instant acclaim and catapulted number theory into the realm of applied mathematics. Each of the cited works has become bedrock for the practice and study of computer security.

      The NSF support was critical, as it allowed the ideas to be developed and published in the open, despite pressure from the NSA to keep them secret.


The potential entanglement with International Traffic in Arms Regulations is always apparent in the cryptography arena (Computer Science and Telecommunications Board). Official and semiofficial attempts to suppress publication have often drawn extra notice to the field (Diffie). This unsolicited attention has evoked a notable level of independence among investigators. Most, however, have achieved a satisfactory modus vivendi with the concerned agencies, as evidenced by the seminal papers cited in this chapter that report on important cryptographic research performed under unclassified grants.

      Before public-key cryptography was invented, cipher systems required two communicating parties to agree in advance on a secret key to be used in encrypting and decrypting messages between them. To assure privacy for every communication, a separate arrangement had to be made between each pair who might one day wish to communicate.

      Parties who did not know each other in advance of the need to communicate were out of luck. By contrast, public-key cryptography requires merely that an individual announce a single public encryption key that can be used by everyone who wishes to send that individual a message. To decode any of the messages, this individual uses a different but mathematically related key, which is private. The security of the system depends on its being prohibitively difficult for anyone to discover the private key if only the public key is known.

The practicality of the system depends on there being a feasible way to produce pairs of public and private keys. The first proposals for public-key cryptography appealed to complexity theory for problems that are difficult to solve. The practical method proposed by Rivest, Shamir, and Adleman (RSA) depends on a problem from number theory believed to be of this type: factoring. The recipient chooses two huge prime numbers and announces only their product.

      The product is used in the encryption process, whereas decryption requires knowledge of the primes. To break the code, one must factor the product, a task that can be made arbitrarily hard by picking large enough numbers; hundred-digit primes are enough to seriously challenge a stable of supercomputers. The RSA method nicely illustrates how theory and practice evolved together.
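
The mechanics can be shown at toy scale. The primes below are absurdly small and the scheme is unpadded, so this is a sketch of the arithmetic only, not usable cryptography; real deployments use primes hundreds of digits long and standardized padding.

```python
# A toy illustration of the RSA mechanics described above.
# The primes are tiny and the scheme is unpadded; this sketches the
# mathematics only and is not usable cryptography.

p, q = 61, 53                      # the recipient's secret primes
n = p * q                          # 3233: published as part of the public key
phi = (p - 1) * (q - 1)            # 3120: computable only with p and q in hand
e = 17                             # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent: e*d == 1 (mod phi)

def encrypt(m):                    # anyone can do this with the public key (n, e)
    return pow(m, e, n)

def decrypt(c):                    # only the holder of d (i.e., of p and q) can
    return pow(c, d, n)

if __name__ == "__main__":
    message = 65
    c = encrypt(message)
    print(c, decrypt(c))           # prints the ciphertext and then 65 again
```

Recovering d from the public pair (n, e) alone amounts to factoring n, which is exactly the task the paragraph above describes as prohibitively hard for well-chosen primes.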

      Complexity theory was motivated by computation and the desire to understand whether the difficulty of some problems was inherent or only a symptom of inadequate understanding. When it became clear that inherently difficult problems exist, the stage was set for public-key cryptography. This was not sufficient to advance the state of practice, however. Theory also came to the fore in suggesting problems with structures that could be adapted to cryptography. It took the combination of computers, complexity theory, and number theory to make public-key cryptography a reality, or even conceivable.