Drinking & Drowning in TDD

Test-Driven Development: A testing methodology associated with Agile programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code. (Agile Testing: A testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.)


The first step in making the jump to TDD properly is to drop the code-gen tools for a while. The whole premise of TDD is to design your code using tests as the design artifact, helping drive out the solution incrementally. Most code-gen tools take a big-bang approach to generation, which you as the developer then need to go in and tweak to make work for your scenario. Utilizing code gen will not help you get into the habit of the following cycle (a minimal code sketch follows the list):
  • Writing a failing test for a requirement.
  • Getting the test to compile.
  • Coding up the necessary behavior to make the test pass.
  • Cleaning up.
  • Continuing.
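
By way of illustration only, here is roughly what one pass through that loop can look like in C# with NUnit. The ShoppingCart class and its members are hypothetical names made up for this sketch, not something from the original project.

    using NUnit.Framework;

    [TestFixture]
    public class ShoppingCartTests
    {
        // Step 1: a failing test for a requirement, written before the behavior exists.
        [Test]
        public void Total_OfEmptyCart_IsZero()
        {
            var cart = new ShoppingCart();
            Assert.AreEqual(0m, cart.Total());
        }
    }

    // Step 2: just enough production code to get the test to compile. The test
    // still fails (red) until Total() is really implemented; once it passes
    // (green), the code is cleaned up and the cycle continues.
    public class ShoppingCart
    {
        public decimal Total()
        {
            throw new System.NotImplementedException();
        }
    }
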
One of the main reasons people have a hard time getting into TDD is that it is a radical departure from the way most people normally develop. In the beginning you will walk slowly, stumbling and falling often. The tests that you initially write may not be very good; you are still building your skills in a new craft. Over time, what initially seemed alien and uncomfortable will come to feel normal, rapid, and welcoming.

“Wouldn’t it take a very long time to finish the project as you are hand coding each class one by one?”

One of the things that is hard to appreciate from the short video I showed on DNRTV is the set of tools and techniques that come into play once you become a proficient test-driven developer. In the class I just finished teaching in Richmond, VA, we built a full portion of an enterprise e-commerce application, with a rich domain model, O/R mapping layer, and so on, all over the course of a week using Test-Driven Development, design patterns, etc. One of the comments someone made was that not once during the week did I use Visual Studio to either compile or run the project!
 
Again, that might seem like something out of the blue, but it is the little things that make you much more proficient as you get more accustomed to TDD. Once you are into the swing of it, you can bring your code-gen tools back into the mix where they make sense. More often than not, people who get swallowed up by TDD start to seriously question the value of code-gen tools. That is not to say they don't have their place; their use just becomes considerably diminished compared to how you may be using them right now.
Test-driven development's main goal is not testing software, but aiding the programmer and customer during the development process with unambiguous requirements. The requirements are expressed in the form of tests, which are a support mechanism (scaffolding, you might say) that stands firmly under the participants as they undertake the hazards of software development.

Abracadabra... RED, GREEN, REFACTOR... presto!! ...and again
The following sequence is based on the book Test-Driven Development: By Example by Kent Beck, which many consider to be the original source text on the concept in its modern form.

1. Add a test

In test-driven development, each new feature begins with writing a test. This test must fail, since it is written before the feature is implemented. In order to write a test, the developer must clearly understand the specification and requirements of the feature, which may be accomplished through use cases and user stories covering the requirements and exception conditions. The new test may also be a variant of, or a modification to, an existing test.
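
To make this concrete, and continuing the hypothetical ShoppingCart sketch from earlier (the discount requirement here is invented purely for illustration), a new feature would start its life as a test such as:

    using NUnit.Framework;

    [TestFixture]
    public class DiscountTests
    {
        // AddItem and ApplyDiscount do not exist yet, so this will not even
        // compile, let alone pass -- which is exactly why it is written first.
        [Test]
        public void Total_WithTenPercentDiscount_IsReducedByTenPercent()
        {
            var cart = new ShoppingCart();
            cart.AddItem(100m);
            cart.ApplyDiscount(0.10m);
            Assert.AreEqual(90m, cart.Total());
        }
    }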

2. Run all tests and see the new one fail

This validates that the test harness is working correctly and that the new test does not mistakenly pass without requiring any new code.

The new test should also fail for the expected reason. This step tests the test itself, in the negative.

3. Write some code

The next step is to write some code that will cause the test to pass. The new code written at this stage will not be perfect and may, for example, pass the test in an inelegant way. That is acceptable as later steps will improve and hone it. It is important that the code written is only designed to pass the test; no further (and therefore untested) functionality should be predicted and 'allowed for' at any stage.
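
Under the same assumptions as the earlier sketches, the simplest code that gets the discount test passing might look like the following; it is deliberately no smarter than the current tests require:

    using System.Collections.Generic;
    using System.Linq;

    public class ShoppingCart
    {
        private readonly List<decimal> _prices = new List<decimal>();
        private decimal _discount;   // e.g. 0.10m means "10% off"

        public void AddItem(decimal price)      { _prices.Add(price); }
        public void ApplyDiscount(decimal rate) { _discount = rate; }

        // Only what the tests demand so far: sum the prices, apply the discount.
        public decimal Total()
        {
            return _prices.Sum() * (1 - _discount);
        }
    }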

4. Run the automated tests and see them succeed

If all test cases now pass, the programmer can be confident that the code meets all the tested requirements. This is a good point from which to begin the final step of the cycle.

5. Refactor code

Now the code can be cleaned up as necessary. By re-running the test cases the developer can be confident that refactoring is not damaging any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to removing any duplication between the test code and the production code—for example magic numbers or strings that were repeated in both, in order to make the test pass in step 3.
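
Sticking with the hypothetical ShoppingCart, a clean-up pass might pull the discount calculation out into a well-named private method; the behavior is unchanged, and re-running the tests confirms that nothing broke:

    using System.Collections.Generic;
    using System.Linq;

    public class ShoppingCart
    {
        private readonly List<decimal> _prices = new List<decimal>();
        private decimal _discountRate;   // e.g. 0.10m for 10% off

        public void AddItem(decimal price)      { _prices.Add(price); }
        public void ApplyDiscount(decimal rate) { _discountRate = rate; }

        public decimal Total()
        {
            decimal subtotal = _prices.Sum();
            return subtotal - Discount(subtotal);
        }

        // Extracted during refactoring so the pricing rule has a name of its own.
        private decimal Discount(decimal subtotal)
        {
            return subtotal * _discountRate;
        }
    }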

The cycle is then repeated, starting with another new test to push the functionality forward. The size of the steps can be as small as the developer likes, or grow larger as confidence increases. If the code written to satisfy a test does not do so fairly quickly, the step size may have been too big, and the increment should perhaps be split into smaller testable steps. When using external libraries (such as Microsoft ADO.NET), it is important not to make increments so small that they are effectively testing the library itself.

Development style
There are various aspects to using test-driven development, for example the principles of "Keep It Simple, Stupid" (KISS) and "You Ain't Gonna Need It" (YAGNI). By focusing on writing only the code necessary to pass tests, designs can be cleaner and clearer than is often achieved by other methods.

To achieve some advanced design concept (such as a Design Pattern), tests are written that will generate that design. The code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first but it allows the developer to focus only on what is important.

Test-driven development requires the programmer to first fail the test cases. The idea is to ensure that the test really works and can catch an error. Once this is shown, the normal cycle commences. This has been coined the test-driven development mantra, known as red/green/refactor, where red means fail and green means pass.

Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring. Receiving the expected test results at each stage reinforces the programmer's mental model of the code, boosts confidence and increases productivity.

GLOSSARY
Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

Automated Testing: Testing that employs software tools to execute tests without manual intervention; applicable to GUI, performance, API, and other kinds of testing. More broadly, the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set: The set of tests derived using basis path testing.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.

Beta Testing: Testing of a prerelease software product conducted by customers.

Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph: A graphical representation of inputs (causes) and their associated outputs (effects), which can be used to design test cases.

Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Component: A minimal software item for which a separate specification is available.

Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.

Debugging: The process of finding and removing the causes of software failures.

Defect: Nonconformance to requirements or to the functional/program specification.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Functional Testing: Testing the features and operational behavior of a product to ensure they correspond to its specifications.
Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

High Order Tests: Black-box tests conducted once the software has been integrated.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Localization Testing: Testing that verifies software has been properly adapted for a specific locale, such as its language and regional conventions.

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".

Positive Testing: Testing aimed at showing software works. Also known as "test to pass".

Load Testing/Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often performed using an automated test tool to simulate a large number of users.

Race Condition: A cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism to moderate simultaneous access.

Ramp Testing: Continuously raising an input signal until the system breaks down.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Sanity Testing: A brief test of the major functional elements of a piece of software to determine whether it is basically operational.

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Static Analysis: Analysis of a program carried out without executing the program.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case: A commonly used term for a specific test, usually the smallest unit of testing. A test case consists of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc. More formally: a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results, and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Code Refactoring is any change to a computer program that improves its readability or simplifies its structure without changing its results. (When refactoring, best practice is to have test fixtures in place: they can validate that your refactorings don't change the behavior of your software.)
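
As a small, hypothetical example (the names here are invented, not from the post), a refactoring that only improves readability leaves an existing NUnit fixture green:

    using System.Globalization;
    using NUnit.Framework;

    public static class PriceFormatter
    {
        // Before the refactoring this method might have been a cryptic F(decimal p);
        // renaming it changes nothing about the result, only how readable the code is.
        public static string FormatPrice(decimal price)
        {
            return "$" + price.ToString("0.00", CultureInfo.InvariantCulture);
        }
    }

    [TestFixture]
    public class PriceFormatterTests
    {
        // Re-running this pre-existing test after the refactoring validates that
        // the observable behavior has not changed.
        [Test]
        public void FormatPrice_UsesTwoDecimalsAndADollarSign()
        {
            Assert.AreEqual("$3.50", PriceFormatter.FormatPrice(3.5m));
        }
    }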

Test-Driven Development (TDD) is a software development technique that involves repeatedly writing a test case first and then implementing only the code necessary to pass the test. Test-driven development gives rapid feedback. The technique began to receive publicity in the early 2000s as an aspect of Extreme Programming and agile development, but has more recently attracted broader interest in its own right.

References
http://en.wikipedia.org/wiki/Test-driven_development
http://www.agileprogrammer.com/dotnetguy/archive/2006/08/01/17795.aspx

posted @ Monday, April 16, 2007 7:12 PM