Papers

Douglas Hoffman, President of Software Quality Methods, LLC, has written papers and presented talks on a variety of subjects relating to software quality assurance. (PDF List of Publications) The abstracts, papers, and slide sets provide varying degrees of detail on frequently overlapping subjects. Not all of the material has been linked to the web site yet, but labels are included for all of the material being assembled.

These works are licensed under a Creative Commons Attribution-Noncommercial 3.0 Unported License. Written permission from Douglas Hoffman is required for exceptions. (Contact Doug)

The materials available below are in PDF format. You need Adobe™ Reader 5.0 or later to view them. (Download)

Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

Alphabetical Order

Title

Date

Purpose

Advanced Automation Architectures (Tutorial)

07/2007

Conference for the Association for Software Testing (CAST) 2007

Advanced Equivalence Class Analysis

03/2014

Presentation at Belgium Testing Days 2014

Analysis of The Taxonomy of Test Oracles

10/1998

Fifth Los Altos Workshop on Software Testing

Architecture and Design of Automated Software Tests

05/2000

PNSQC Spring Workshop 2000

Automated Results Comparison Isn't Always Easy

02/2009

Recife Summer School

Automated Testing of Embedded Software

03/2003

Spring 2003 Software Test Automation Conference

Avoiding the "Test and Test Again" Syndrome

07/2007

Conference for the Association for Software Testing (CAST) 2007

Bugs For Sale

04/2010

IV EBTS

CAST 2009 Interview with Doug Hoffman

06/2009

Michael Kelly Interview about Why Tests Don't Pass

Cost Benefits for Test Automation

10/1999

STAR West 1999

The Darker Side of Metrics

10/2000

PNSQC 2000

Design of Oracle Based Automated Tests

05/2009

SQC Dusseldorf

Divide and Conquer

01/2005

Better Software, "Front Line" January 2005

Early Testing Without the Test and Test Again Syndrome

11/2006

SSQA

Evolving Leadership within AST

03/2012

AST Blog

Exhausting Your Test Options

07/2003

Software Testing and Quality Engineering

Exploratory Automated Testing

03/2014

Tutorial at Belgium Testing Days 2014

Exploratory Test Automation

10/2011
03/2012

STPCon Fall 2011 (session)
STPCon Spring 2012 (workshop)

Exploratory Test Automation

08/2010

Conference of the Association for Software Testing (CAST) 2010 (with Cem Kaner)

Failure Mode and Effects Analysis

05/2000

ASQ Section 0613

Five Automation Fallacies

06/2009

Better Software Conference

Five Questions With Douglas Hoffman

10/2007

Interview by Michael Hunter in Dr. Dobb's Journal

Foundations of Software Quality

1994

ASQ Section 0613 Class

Fundamentals of Software Testing

1995

ASQ Section 0613 Class

Fundamentals of Software Quality Assurance

1992-1995

ASQ Section 0613 Class

A Graphical Display of Testing Status for Complex Configurations

10/2007

PNSQC

Heuristic Test Oracles

04/1999

Software Testing and Quality Engineering

Improved Testing Using a Test Execution Model

10/2013

STPCon Fall 2013

Lessons for Testing From Financial Accounting

07/2008

2008 Conference of the Association for Software Testing

Leverage Test Automation ROI

10/2012

STAR West 2012 (Keynote)

Measuring the Quality of Software Consulting

10/1994

Fourth International Conference on Software Quality

A Method for Measuring Quality of Software Consulting

1994

ASQ Section 0613

Metrics for Metrics: Cost Analysis and Justification

05/1998

Developing Strategic I/T Metrics Conference 1998

Misleading Metrics

10/2002

Dr. Dobb's Journal, October 1, 2002

Mutating Automated Tests

05/2000

SSQA

Mutating Automated Tests

04/2000

Software Testing Analysis & Review (STAR) East 2000

Nine Types of Oracles

02/2013

Belgium Testing Days 2013

Non-regression Test Automation

10/2012

STPCon Fall 2012

Non-Regression Test Automation

10/2008

PNSQC 2008

Overview of ASQ's Certified Software Quality Engineer (CSQE)

09/2002

Quality Week 2002

A Process for Measuring the Quality of Software Consulting

05/1994

PNSQC

Requirements for Test Automation

12/2000

SSQA

Requirements for Test Automation

10/1999

PNSQC 1999

The Software Quality Group's Relationship to Development

05/1993

Quality Week 1993

Self-Verifying Data

10/2012

PNSQC 2012

Some Measures of Quality of Software Consulting 1994 (ASQ)

1994

SSQA

SWEBOK, Feedback to IEEE

06/2003

Review feedback to IEEE

Taxonomy of Test Oracles, Analysis of The

05/1998

Quality Week 1998

The "Test and Test Again" Syndrome

10/2004

2004 Better Software Conference

Test Automation Architectures: Planning for Test Automation

05/1999

Quality Week Conference 1999

Test Oracles

04/2010

IV EBTS

Test Oracles; Planning Ahead for Test Automation

03/1998

East Bay SSQA (EBSSQA)

Test Automation Exploratory

02/2013

Tutorial at Belgium Testing Days 2013

Testing Automation Beyond Regression Testing

04/2008

Spring STPCon

21 CFR Part 11: Electronic Signatures, Electronic Records

05/2000

ASQ Section 0613

Using Test Oracles in Automation

04/2006

Google Tech Talk April 25, 2006

Using Test Oracles in Automation

03/2003

Spring 2003 Software Test Automation Conference

Using Test Oracles in Automation

10/2001

2001 Pacific Northwest Software Quality Conference (PNSQC 2001)

Using Test Oracles in Automation

05/2000

Quality Week 2000 Tutorial

What's Different About Software

11/2001

ASQ Golden Gate Section

What's Different About Software

04/2001

ASQ Section 0613

Why Tests Don't Pass

07/2009

CAST 2009

Why Tests Do Not Pass (or Fail)

10/2009

PNSQC 2009

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

Management Topics

Title

Date

Purpose

21 CFR Part 11: Electronic Signatures, Electronic Records

05/2000

ASQ Section 0613

Lessons for Testing From Financial Accounting

07/2008

2008 Conference of the Association for Software Testing

A Process for Measuring the Quality of Software Consulting

05/1994

PNSQC

 Avoiding the "Test and Test Again" Syndrome

07/2007

Conference for the Association for Software Testing (CAST) 2007

Bugs For Sale

04/2010

IV EBTS

Cost Benefits for Test Automation

10/1999

STAR West 1999

Divide and Conquer

01/2005

Better Software, "Front Line" January 2005

Evolving Leadership within AST

03/2012

AST Blog

Failure Mode and Effects Analysis

05/2000

ASQ Section 0613

Five Automation Fallacies

06/2009

Better Software Conference

Fundamentals of Software Quality Assurance

1992-1995

ASQ Section 0613 Tutorial

Improved Testing Using a Test Execution Model

10/2013

STPCon Fall 2013

Leverage Test Automation ROI

10/2012

STAR West 2012

Measuring the Quality of Software Consulting

10/1994

Fourth International Conference on Software Quality

Metrics for Metrics: Cost Analysis and Justification

05/1998

Developing Strategic I/T Metrics Conference 1998

Misleading Metrics

10/2002

Dr. Dobb's Journal, October 1, 2002

Non-Regression Test Automation

10/2008

PNSQC

Overview of ASQ's Certified Software Quality Engineer (CSQE)

09/2002

Quality Week 2002

Requirements for Test Automation

10/1999

PNSQC 1999

The Darker Side of Metrics

10/2000

PNSQC 2000

The Software Quality Group's Relationship to Development

05/1993

Quality Week 1993

The "Test and Test Again" Syndrome

10/2004

2004 Better Software Conference

Early Testing Without the Test and Test Again Syndrome

07/2007

Conference for the Association for Software Testing (CAST) 2007

What's Different About Software

04/2001

ASQ Section 0613

Why Tests Don't Pass

07/2009

CAST 2009

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

Technical Topics

Title

Date

Purpose

Advanced Automation Architectures (Tutorial)

07/2007

Conference for the Association for Software Testing (CAST) 2007

Advanced Equivalence Class Analysis

03/2014

Presentation at Belgium Testing Days 2014

Analysis of The Taxonomy of Test Oracles

05/1998

Quality Week 1998

Architecture and Design of Automated Software Tests

05/2000

PNSQC Spring Workshop 2000

Automated Results Comparison Isn't Always Easy

02/2009

Recife Summer School

Automated Testing of Embedded Software

03/2003

Spring 2003 Software Test Automation Conference

Design of Oracle Based Automated Tests

05/2009

SQC Dusseldorf

Exhausting Your Test Options

07/2003

Software Testing and Quality Engineering

Exploratory Automated Testing

03/2014

Tutorial at Belgium Testing Days 2014

Exploratory Test Automation

10/2011
03/2012

STPCon Fall 2011 (session)
STPCon Spring 2012 (workshop)

Exploratory Test Automation

08/2010

Conference of the Association for Software Testing (CAST) 2010 (with Cem Kaner)

Failure Mode and Effects Analysis

05/2000

ASQ Section 0613

Fundamentals of Software Quality Assurance

1992-1995

ASQ Section 0613 Tutorial

A Graphical Display of Testing Status for Complex Configurations

10/2007

PNSQC

Heuristic Test Oracles

04/1999

Software Testing and Quality Engineering

Mutating Automated Tests

04/2000

Software Testing Analysis & Review (STAR) East 2000

Nine Types of Oracles

02/2013

Belgium Testing Days 2013

Non-Regression Test Automation

10/2012

STPCon Fall 2012

Overview of ASQ's Certified Software Quality Engineer (CSQE)

09/2002

Quality Week 2002

Requirements for Test Automation

10/1999

PNSQC 1999

Feedback to IEEE on SWEBOK

06/2003

Review feedback to IEEE

Self-Verifying Data

10/2012

PNSQC 2012

Test Automation Architectures: Planning for Test Automation

05/1999

Quality Week Conference 1999

Test Automation Exploratory

02/2013

Tutorial at Belgium Testing Days 2013

Test Oracles

04/2010

IV EBTS

Test Oracles; Planning Ahead for Test Automation

03/1998

East Bay SSQA (EBSSQA)

Testing Automation Beyond Regression Testing

04/2008

Spring STPCon

Using Test Oracles in Automation

04/2006

Google Tech Talk April 25, 2006

Using Test Oracles in Automation

05/2000

Quality Week 2000 Tutorial

Using Test Oracles in Automation

10/2001

2001 Pacific Northwest Software  Quality Conference (PNSQC 2001)

Using Test Oracles in Automation

03/2003

Spring 2003 Software Test Automation Conference

Why Tests Don't Pass

07/2009

CAST 2009

Why Tests Do Not Pass (or Fail)

10/2009

PNSQC 2009

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

Reverse Chronological Order

Title

Date

Purpose

Exploratory Automated Testing

03/2014

Tutorial at Belgium Testing Days 2014

Advanced Equivalence Class Analysis

03/2014

Presentation at Belgium Testing Days 2014

Improved Testing Using a Test Execution Model

10/2013

STPCon Fall 2013

Test Automation Exploratory

02/2013

Tutorial at Belgium Testing Days 2013

Nine Types of Oracles

02/2013

Belgium Testing Days 2013

Non-regression Test Automation

10/2012

STPCon Fall 2012

Self-Verifying Data

10/2012

PNSQC 2012

Leverage Test Automation ROI

10/2012

STAR West 2012 (Keynote)

Exploratory Test Automation

03/2012

STPCon Spring 2012 (Workshop)

Why Tests Don't Pass (or Fail)

03/2012

STPCon Spring 2012

Evolving Leadership within AST

03/2012

AST Blog

Exploratory Test Automation

10/2011

STPCon Fall 2011

Exploratory Test Automation

08/2010

Conference of the Association for Software Testing (CAST) 2010 (with Cem Kaner)

Bugs For Sale

04/2010

IV EBTS

Test Oracles

04/2010

IV EBTS

Why Tests Do Not Pass (or Fail)

10/2009

PNSQC 2009

Why Tests Don't Pass

07/2009

CAST 2009

CAST 2009 Interview with Doug Hoffman

06/2009

Michael Kelly Interview about Why Tests Don't Pass

Five Automation Fallacies

06/2009

Better Software Conference

Design of Oracle Based Automated Tests

05/2009

SQC Dusseldorf

Automated Results Comparison Isn't Always Easy

02/2009

Recife Summer School

Non-Regression Test Automation

10/2008

PNSQC

Lessons for Testing From Financial Accounting

07/2008

2008 Conference of the Association for Software Testing

Testing Automation Beyond Regression Testing

04/2008

Spring STPCon

Five Questions With Douglas Hoffman

10/2007

Interview by Michael Hunter in Dr. Dobb's Journal

A Graphical Display of Testing Status for Complex Configurations

10/2007

PNSQC

Advanced Automation Architectures (Tutorial)

07/2007

Conference for the Association for Software Testing (CAST) 2007

Avoiding the "Test and Test Again" Syndrome

07/2007

Conference for the Association for Software Testing (CAST) 2007

Early Testing Without the Test and Test Again Syndrome

11/2006

SSQA

Divide and Conquer

01/2005

Better Software, "Front Line"

The "Test and Test Again" Syndrome

10/2004

2004 Better Software Conference

Using Test Oracles in Automation

04/2006

Google Tech Talk April 25, 2006

Exhausting Your Test Options

07/2003

Software Testing and Quality Engineering, "Bug Report"

Feedback to IEEE on SWEBOK

06/2003

Review feedback to IEEE

Using Test Oracles in Automation

03/2003

Spring 2003 Software Test Automation Conference

Automated Testing of Embedded Software

03/2003

Spring 2003 Software Test Automation Conference

Misleading Metrics

10/2002

Dr. Dobb's Journal, October 1, 2002

Overview of ASQ's Certified Software Quality Engineer (CSQE)

09/2002

Quality Week 2002

What's Different About Software

11/2001

ASQ Golden Gate Section

Using Test Oracles in Automation

10/2001

2001 Pacific Northwest Software  Quality Conference (PNSQC 2001)

What's Different About Software

04/2001

ASQ Section 0613

Requirements for Test Automation

12/2000

SSQA

The Darker Side of Metrics

10/2000

PNSQC 2000

Mutating Automated Tests

05/2000

SSQA

Using Test Oracles in Automation

05/2000

Quality Week 2000 Tutorial

Architecture and Design of Automated Software Tests

05/2000

PNSQC Spring Workshop 2000

21 CFR Part 11: Electronic Signatures, Electronic Records

05/2000

ASQ Section 0613

Failure Mode and Effects Analysis

05/2000

ASQ Section 0613

Mutating Automated Tests

04/2000

Software Testing Analysis & Review (STAR) East 2000

Cost Benefits for Test Automation

10/1999

STAR West 1999

Requirements for Test Automation

10/1999

PNSQC 1999

Test Automation Architectures: Planning for Test Automation

05/1999

Quality Week Conference 1999

Heuristic Test Oracles

04/1999

Software Testing and Quality Engineering

Analysis of The Taxonomy of Test Oracles

10/1998

Fifth Los Altos Workshop on Software Testing

Metrics for Metrics: Cost Analysis and Justification

05/1998

Developing Strategic I/T Metrics Conference 1998

Analysis of The Taxonomy of Test Oracles

05/1998

Quality Week 1998

Test Oracles; Planning Ahead for Test Automation

03/1998

East Bay SSQA (EBSSQA)

Fundamentals of Software Testing

1995

ASQ Section 0613 Tutorial

Foundations of Software Quality

1994

ASQ Section 0613 Tutorial

Measuring the Quality of Software Consulting

10/1994

Fourth International Conference on Software Quality

A Process for Measuring the Quality of Software Consulting

05/1994

PNSQC

Some Measures of Quality of Software Consulting 1994 (ASQ)

1994

SSQA

A Method for Measuring Quality of Software Consulting

1994

ASQ Section 0613

The Software Quality Group's Relationship to Development

05/1993

Quality Week 1993

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 

Abstracts

 ***********************************************************

A Taxonomy of Test Oracles

 1998 Quality Week

Extended Abstract:
Automation of testing is often a difficult and complex process. The most familiar aspects of test automation are organizing and running of test cases and capturing and verifying test results. Generating the expected results is often done using a mechanism called a test oracle. This paper describes several classes of oracles created to provide various types of verification and validation. Several relevant characteristics of oracles are described, along with the advantages and disadvantages of each type of oracle.

Background:
In software testing, the mechanism used to generate the expected results is called an oracle. (In this paper, the first letter will be capitalized when referring to the Oracle for a specific test.) Many different approaches can be used to create, capture, and compare test results. The author, for example, has used the following methods for generating expected results:

  • Manual verification of results (human oracle)
  • Separate program implementing the same algorithm
  • Simulator of the software system to produce parallel results
  • Debugged hardware simulator to emulate hardware and software operations
  • Earlier version of the software
  • Same version of software on a different hardware platform
  • Check of specific values for known responses
  • Verification of consistency of generated values and end points
  • Sampling of values against independently generated expected results

Software tests themselves can be classified in many different ways. Automated tests that include evaluation of results need some kind of oracle regardless of the type or purpose of the tests. Yet, the mechanism for evaluation of results ranges from none (the program or system didn't crash) to exact (all values, displays, files, etc., are verified). Various levels of effort and exactness are appropriate under different circumstances. The nature and complexity of an oracle is also dependent upon those circumstances.
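
As an illustrative sketch (not taken from the paper), the Python fragment below contrasts two of the oracle styles listed above: a separate implementation of the same algorithm, and a consistency check on the generated values. The sut_sqrt function is a hypothetical stand-in for the software under test.

import math

def sut_sqrt(x: float) -> float:
    # Hypothetical stand-in for the software under test.
    return x ** 0.5

def true_oracle(x: float, actual: float) -> bool:
    # "Separate program implementing the same algorithm": an independent
    # implementation generates the expected result for comparison.
    return math.isclose(actual, math.sqrt(x), rel_tol=1e-12)

def consistency_oracle(x: float, actual: float) -> bool:
    # Consistency check: no expected value is stored; squaring the answer
    # should recover the input within a tolerance.
    return math.isclose(actual * actual, x, rel_tol=1e-9)

for value in (0.0, 2.0, 1e6):
    result = sut_sqrt(value)
    assert true_oracle(value, result)
    assert consistency_oracle(value, result)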

Presented at 1998 Quality Week (QW), and October, 1998 Los Altos Workshop on Software Testing (LAWST 5)

Taxonomy Slides | Taxonomy Paper

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Architecture and Design of Automated Software Tests

 Spring 2000 PNSQC Tutorial (1/4 Day Tutorial)

Automated regression testing is the most popular approach to software test automation, but too frequently is neither effective nor cost-effective. It is just one strategy among many. Likewise, regression test tools are one set of test tools among many. The workshop looks at several  approaches to test automation, provides some cost/benefit/risk/prerequisite ideas about them, and provides some architectural and design suggestions for developing automated test suites. This tutorial is intended to help you do better requirements analysis and develop a sensible architecture for automated testing efforts.

 Topics emphasized:

  • Why Automate Software Tests?
  • Automated Test Design
  • Test Automation Strategies
  • Automated Test Oracles
  • Automation Architectures

NOTE: This tutorial was not about how to use any particular test tool or about code in any particular test tool's programming language.

Presented at Spring 2000 Pacific Northwest Software Quality Conference Tutorials

Workshop Abstract | Automation Architecture Tutorial Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Design of Oracle Based Automated Tests

 SQC Germany 2009 (1/2 Day Class)

When automating tests we must have some oracle (a way of telling the verdict of each test).  This tutorial describes how to design powerful automated tests based on characteristics of the  software under test (SUT) and available oracles. It discusses execution models for the SUT,  many sources for oracles, and how to use the oracles.

Presented at Software and Systems Quality Conference, Dusseldorf, Germany 2009

Class Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Automating Results Comparison Isn't Always Easy

 2009 Recife Summer School (Brazil)  (1/2 Day Class)

Automating tests means both exercising the software and determining whether the resultant  behavior is expected. Capturing and comparing results can be tricky and difficult.  The talk presents some of the issues and ways to deal with them.

Five factors about automating results comparison need to be considered in any test automation effort. The talk describes the five factors (listed below), explains why each is an issue, and goes into some of the implications and possible actions to deal with them. A brief sketch illustrating the "fuzzy" comparison factor follows the list.

The five factors are:

  • Which [potential] results to compare
  • What results to expect
  • Not all expected differences indicate errors
  • "Fuzzy" comparisons
  • When to generate and compare results
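
As a small illustration of the "fuzzy" comparison factor (a sketch assuming hypothetical field names and tolerances, not code from the class), the comparison below treats two result records as matching when numeric fields agree within a tolerance and volatile fields such as timestamps are skipped rather than compared exactly.

import math

IGNORED_FIELDS = {"timestamp", "session_id"}   # assumed volatile fields
NUMERIC_TOLERANCE = 1e-6                       # assumed relative tolerance

def fuzzy_equal(expected: dict, actual: dict) -> bool:
    for key, exp_value in expected.items():
        if key in IGNORED_FIELDS:
            continue                           # expected differences, not errors
        act_value = actual.get(key)
        if isinstance(exp_value, float):
            if not math.isclose(exp_value, act_value, rel_tol=NUMERIC_TOLERANCE):
                return False
        elif exp_value != act_value:
            return False
    return True

assert fuzzy_equal(
    {"total": 10.000000049, "status": "OK", "timestamp": "10:01:02"},
    {"total": 10.0,         "status": "OK", "timestamp": "10:05:17"},
)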

Presented at Recife Summer School 2009 (Brazil)

Class Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Lessons for Testing From Financial Accounting:
Consistency in a self-regulated profession

 2008 Conference of the Association for Software Testing (CAST)

As different as the fields of accounting and software testing seem, there are valuable lessons to be learned about software testing from financial accounting. This paper posits several processes, rules, and measures from financial accounting and casts them in relation to testing. The lessons help us understand the value of processes (sometimes), keeping measurements simple, test strategies, documentation, and more. Some examples of financial rules that relate to testing are:

  • Accounting Tenets are based on Assumptions, Principles, and Guidelines
  • All accounting is based on monetary units (dollars)
  • Accounting is done based on Generally Accepted Accounting Principles (GAAP)
  • The profession is self-regulated by the Financial Accounting Standards Board (FASB)
  • GAAP may be modified to fit different industries

Presented at 2008 CAST

Lessons from Finance paper | Lessons from Finance Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Leverage Your Test Automation ROI with Creative Solutions

 STAR West 2012 (Keynote)

Typical automated tests perform repetitive tasks quickly and accurately to lighten the burden of manual testing. These tests mimic typical interactions with the system, checking for predetermined outcomes. However, with some creativity and a sound strategy, you can leverage automation to dramatically increase its return on investment and long-term value. The talk demonstrates how to employ test automation for more interesting testing activities - ones that are impossible with manual testing - using examples of automated tests that have been used to magnify the power of exploratory test techniques. These exploratory tests discover new defects - ones that most test designers would never have considered.

Presented at STAR West 2012

Leverage Automation ROI Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Improved Testing Using a Test Execution Model

 Fall 2013 STPCon

A software test includes more than providing inputs and checking results. This session presents a descriptive model of the influences and outcomes when a test runs, which provides a foundation for understanding how to design and run better tests. It also provides ideas and guidance for test oracles to detect unusual or erroneous behavior and find more bugs. Session Takeaways:

  • Elements influencing the behavior of the software under test.
  • Outcomes that could be impacted by the running of a test.
  • Some of the reasons for non-repeatable test results.
  • Design considerations for better checking of test outcomes

Presented at Fall 2013 Software Testing and Performance Conference

Beyond Regression Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Testing Automation Beyond Regression Testing

 Fall 2008 STPCon

When they picture test automation, testers often think of GUI-based scripted regression testing, which amounts to automating manual tests. This is a very limited view of the potentially vast possibilities open to us when automating tests. When we think of test automation we should first think about doing things that we can't do manually. This talk is about getting past the limitations of the automated regression suite approach and generating much more valuable kinds of test automation.

Presented at Fall 2008 Software Testing and Performance Conference

Beyond Regression Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 **********************************************************

Self-Verifying Data

 PNSQC 2012

Some tests require large data sets. The data may be database records, financial information, communications data packets, or a host of others. The data may be used directly as input for a test or it may be pre-populated data as background records. Self-verifying data (SVD) is a powerful approach to generating large volumes of information in a way that can be checked for integrity. This paper describes three methods for generating SVD, two of which can be easily used for large data sets.

For example, a test may have a prerequisite that the database contains 10,000,000 customers and 100,000,000 sales orders. The test might check adding new customers and orders into the existing data, but not directly reference the preset data at all. How can we generate that kind of volume of data and still be able to check whether adding customers and orders might erroneously modify existing records? SVD is a powerful, proven approach to facilitate such checks.
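
The fragment below is a minimal sketch of one possible self-verifying-data scheme (an assumed illustration; the three methods in the paper may differ): each generated record carries a checksum of its own fields, so any record can later be checked for corruption without keeping a separate copy of the expected data.

import hashlib

def _digest(fields: dict) -> str:
    return hashlib.sha256(repr(sorted(fields.items())).encode()).hexdigest()

def make_record(customer_id: int) -> dict:
    # Generate a record whose "check" field is a checksum of the other fields.
    fields = {"customer_id": customer_id, "name": f"Customer-{customer_id}"}
    return {**fields, "check": _digest(fields)}

def is_intact(record: dict) -> bool:
    # A record verifies itself: recompute the checksum and compare.
    fields = {k: v for k, v in record.items() if k != "check"}
    return record["check"] == _digest(fields)

records = [make_record(i) for i in range(10_000)]   # scale up as needed
assert all(is_intact(r) for r in records)

records[42]["name"] = "Tampered"                    # simulated corruption
assert not is_intact(records[42])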

The paper and talk describe the concepts, applications, and methods for generating such data and checking for data corruption. They cover:

  • What self-verifying data is
  • Why and how self-verifying data can be used
  • Applications where such data is useful
  • Three ways to apply self-verifying data
  • How to check the data records generated this way

Presented at 2012 Pacific Northwest Software Quality Conference

SVD paper | SVD Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

***********************************************************

Non-Regression Test Automation

 Fall 2012 STPCon

The principal advantages of automated tests are repeatability and speed. The principal disadvantages are that they are relatively more expensive to create, require more maintenance, and are more limited in the specificity of things they can compare relative to manual tests. Part 1 of the presentation describes another way to approach test automation: to test things that cannot be tested manually. These tests enable us to focus on learning about the software, can go behind the UI to extend our reach, are not limited to doing the same thing each time, and can perform huge numbers of iterations and combinations that would be unthinkable using manual testing or automated regression tests. This approach also encourages checking broader classes of test outcomes, thus improving the types of errors that can be discovered.

 

Part 2 of this presentation describes oracle mechanisms that enable testers to take advantage of non-regression automation. The oracles determine whether the software's behavior appears to be normal or erroneous. The oracles allow non-regression tests to vary their behavior and still have predictable, checkable outcomes. This part presents over a dozen different types of oracle mechanisms.

Presented at Fall 2012 Software Testing Professionals Conference

Non-Regression Automation Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Exploratory Test Automation

 Fall 2011 STPCon (double session presentation)
Spring 2012 STPCon (1 day workshop)
Belgium Testing Days 2013 (1 day workshop)
PNSQC 2013 (1 day workshop)
Belgium Testing Days 2014 (1 day workshop)

Automated software testing has historically meant having the computer run individually crafted test cases. The vast majority of automated test cases are regression tests that perform the same exercises as manual tests, only run by a machine. The principal advantages of these automated tests are repeatability, speed, and volume of checking. The principal disadvantages are that they are relatively more expensive to create than manual tests, require more maintenance than manual tests, and are more limited in the specificity of things they can compare relative to manual tests.

Regression testing has its place but higher power automation is  possible. Exploratory tests are more capable of finding new bugs in  products. Automating this exploration can provide higher test automation ROI in terms of the number and complexity of bugs found. This is not  automating of exploratory sessions; it is creating automated tests that  are capable of uncovering bugs we never conceived of.

These presentations describe another way to approach test automation: to test things that cannot be tested manually. Extending the scope of testing in this way allows checking for errors that might not be found otherwise or even conceived of. These tests enable us to focus on learning about the software, can go behind the UI to extend our reach, are not limited to doing the same thing each time (although even random sequences can be repeated), and can perform huge numbers of iterations and combinations that would be unthinkable using manual testing or automated regression tests. These tests are quick-hit or abstracted one or two levels from the user interface, which substantially reduces maintenance costs. This approach also encourages checking broader classes of test outcomes, thus improving the types of errors that can be discovered.
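
As a rough sketch of the idea (not an example from the slides; the account API and invariant are assumed for illustration), the test below drives a toy system through a long, seeded random sequence of operations and checks a consistency oracle after every step. Logging the seed keeps even a random run repeatable.

import random

class Account:
    # Hypothetical stand-in for the software under test.
    def __init__(self):
        self.balance = 0
        self.history = []
    def deposit(self, amount):
        self.balance += amount
        self.history.append(amount)
    def withdraw(self, amount):
        if amount <= self.balance:
            self.balance -= amount
            self.history.append(-amount)

def explore(seed: int, steps: int = 10_000) -> None:
    rng = random.Random(seed)          # seeded, so the sequence can be replayed
    account = Account()
    for _ in range(steps):
        if rng.random() < 0.6:
            account.deposit(rng.randint(1, 100))
        else:
            account.withdraw(rng.randint(1, 100))
        # Consistency oracle: the balance always equals the sum of the history
        # and never goes negative; no predefined expected values are needed.
        assert account.balance == sum(account.history) >= 0, f"failed with seed={seed}"

explore(seed=20140321)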

These are an evolving set of conference track presentations and one-day tutorials. Although the titles are the same and most of the slide sets have heavily overlapping content, I customized each and provided some unique content.

Presented at Fall 2011 Software Testing Professionals Conference

Presented at Spring 2012 Software Testing Professionals Conference

Presented at Belgium Testing Days Conference 2013

Presented at Pacific Northwest Software Quality Conference 2013

Presented at Belgium Testing Days Conference 2014

STPCon Fall 2011 Exploratory Automation Session Slides
STPCon Spring 2012 Exploratory Automation Workshop Slides

BTD 2013 Exploratory Automation Workshop Slides

PNSQC 2013 Exploratory Automation Session Slides
PNSQC 2013 Exploratory Automation Workshop Slides

BTD 2014 Exploratory Automation Workshop Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

  ***********************************************************

 Belgium Testing Days 2014 (double session presentation)

Advanced Equivalence Class Analysis

Equivalence class analysis (EC, also known as domain analysis) is the most widely known and taught software test analysis technique. It has been presented using simple, clear examples such as identifying types of triangles given the lengths of the sides or adding two integers that range from -99 to 99. Test values are chosen through boundary value analysis. Although these examples may convey the very basic idea of EC, they do not teach about real-world analysis problems, and knowledge transference is rare. (The student is unable to use the technique except on the problem used in the example.)
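
For reference, the short sketch below (not from the presentation) shows the conventional boundary-value selection for the "two integers from -99 to 99" example mentioned above; the real-world method in the workbook goes well beyond this.

LOW, HIGH = -99, 99      # the valid range from the textbook example

def boundary_values(low, high):
    # Values at and just beyond each boundary, plus a typical interior value.
    return [low - 1, low, low + 1, 0, high - 1, high, high + 1]

def is_valid(x):
    return LOW <= x <= HIGH

for a in boundary_values(LOW, HIGH):
    status = "valid" if is_valid(a) else "invalid (expect the input to be rejected)"
    print(a, status)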

 This presentation describes tasks that can be applied to any data type (e.g., numbers, strings, lists, dates, devices such as printers, constrained values, etc.). The content is derived from The Domain Testing Workbook. Co-authored by Doug with Cem Kaner and Sowmya Padmanabhan, it contains nearly 500 pages describing equivalence class analysis. It details a systematic method for domain analysis and test design with 30 fully worked out examples of real-world testing problems.

Presented at Belgium Testing Days Conference 2014

Advanced Equivalence Class Analysis Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

  ***********************************************************

The Myths Behind Software Metrics

 PNSQC 2013 (session presentation)

Measurement, metrics, and statistics can be powerful tools for understanding the world we live in. Measures and metrics abound in software engineering, QA, and especially testing. Coming up with numbers is the easiest part. Making some computations using the numbers is straightforward. Getting meaningful, useful metrics is a whole different story. Computing metrics is easy to do poorly. It is much more difficult to take measurements and generate truly meaningful and useful metrics or statistics.

 

Presented at Pacific Northwest Software Quality Conference 2013

Myths Behind Metrics Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

  ***********************************************************

Nine Types of Test Oracles

 Belgium Testing Days 2013 (track presentation)

Software tests are valuable because of their ability to identify suspect behavior in the software. For decades there have been myths that good tests require predefined results and that those results are the only oracle mechanism. Testers today have recognized that there are different ways to detect bugs and that predefined results within each test exercise aren't enough. They won't find memory leaks, for example.

This talk describes nine other test oracle approaches and mechanisms that have been applied to discover software bugs.

Topics include:

  • What a test oracle is
  • A model for understanding the test execution environment
  • Nine atypical test oracle mechanisms
  • Outcome comparison mechanisms
  • Designing tests based on available test oracles
  • How the test oracles enable exploratory automated testing

 

Presented at Belgium Testing Days Conference 2013

Nine Types of Oracles Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

  ***********************************************************

A Graphical Display of Testing Status for Complex Configurations

 Pacific Northwest Software Quality Conference 2007

Representing the status of software under test is complex and difficult. It becomes more difficult when testers are trying new ways to uncover defects and when testing priorities shift as new information is discovered about the software under test. This is sometimes compounded when there are many interacting subsystems and combinations that must be tracked. This paper describes a spreadsheet method developed to provide a single-page representation of the test space for large and complex sets of product components and configurations. The spreadsheet can be used as the focal point for project testing.

Presented at 2007 Pacific Northwest Software Quality Conference

Talk presented at ASQ Silicon Valley Section Meeting, September 2007

Graphical Display Slides | Graphical Display Zip File

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Why Tests Don't Pass

 CAST July, 2009; PNSQC October, 2009

Most testers think of tests passing or failing. Either they found a bug or they did not. Unfortunately, experience shows us repeatedly that passing a test does not really mean there is no bug. It is quite possible for a test to surface an error that goes undetected at the time. It is also possible for bugs to exist in the feature being tested in spite of the test of that capability. Passing really only means that we did not notice anything interesting.

Likewise, failing a test is no guarantee that a bug is present. There could be a bug in the test itself, a configuration problem, corrupted data, or a host of other explainable reasons that do not mean that there is anything wrong with the software being tested. Failing really only means that something that was noticed warrants further investigation.

The talk explains the ideas further, explores some of the implications, and suggests some ways to benefit from this new way of thinking about test outcomes. The talk concludes with examination of how to use this viewpoint to better prepare tests and report results.

Presented at Toronto Association of Systems and Software Quality (TASSQ) March 31, 2009

Presented at Kitchener Waterloo Software Quality Association (KWSQA) April 1, 2009

Paper and presentation at the Conference for the Association for Software Testing (CAST) July 14, 2009

TASSQ Slides | CAST Slides | Why Not Pass Paper (CAST)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Non-Regression Test Automation

 Pacific Northwest Software Quality Conference 2008

In my experience, most automated tests perform the same exercise each time the test is run. They are typically collected and used as regression tests, and are unlikely to uncover bugs other than very gross errors (e.g., missing modules) and the ones they were specifically designed to find. Testers often think of test automation as GUI based scripted regression testing, using scripts to mimic user behavior. Tool vendors actively sell the automating of manual tests. These are very narrow views of the potentially vast possibilities for automating tests because they are limited to doing things a human tester could do. When we think of test automation we should first think about extending our reach by doing things that we can't do manually. This topic describes getting past the limitations of automated regression suites and generating more valuable kinds of test automation.

Presented at 2008 Pacific Northwest Software Quality Conference

Non-Regression Slides | Non-Regression Paper

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Mutating Automated Tests

2000 STAR East Conference; SSQA

Keywords: Automated Testing, Non-deterministic Tests, Mutating Tests, Test Oracles, Pseudo Random Numbers

Key points attendees take away:

  • Benefits and shortcomings of automated tests
  • Types of automated tests that are easy and hard to vary
  • Some methods to improve the value of some automated tests
  • Examples of non-deterministic automated tests
  • Design approaches for creating more powerful automated tests

Summary:
Most automated tests are used as regression tests - doing the same exercises each time the test is run. The paper and talk describe a powerful type of automated test - one that does something different each time it runs. The technique does not apply to all situations of automated tests, but the author presents the pros and cons for mutating automated tests based on his experience with them. The paper also provides several examples.
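
As a hedged illustration of a mutating test (an assumed example, not one from the paper), the fragment below perturbs a known-good baseline input with a fresh random seed on each run, prints the seed so any failing run can be reproduced, and uses a property of the input rather than a predefined string as its oracle.

import random, time

BASELINE = "The quick brown fox jumps over the lazy dog"

def sut_word_count(text: str) -> int:
    # Hypothetical stand-in for the software under test.
    return len(text.split())

def mutate(text: str, rng: random.Random) -> str:
    words = text.split()
    rng.shuffle(words)                                    # reorder the words
    words.insert(rng.randrange(len(words) + 1), "extra")  # add one word
    return " ".join(words)

seed = int(time.time())          # different every run, but logged for replay
print("mutation seed =", seed)
rng = random.Random(seed)
mutated = mutate(BASELINE, rng)

# Oracle: no predefined expected string, but the word count is predictable.
assert sut_word_count(mutated) == sut_word_count(BASELINE) + 1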

Presented at 2000 Software Testing, Analysis, and Review (STAR) East Conference,  and May, 2000 meeting of Silicon Valley Software Quality Association (SSQA)

Mutating Tests Slides | SSQA May 2000 Slides | Mutating Tests Paper

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Overview of ASQ's Certification in Software Quality Engineering

2002 Quality Week Conference

Abstract: For 2002, the American Society for Quality (ASQ), a not-for-profit professional society, has restructured and updated the Body of Knowledge (BOK) used for their Certification in Software Quality Engineering (CSQE). The Tutorial describes the content of the updated BOK, highlights the changes, and covers the following topics:

  • Certification Requirements
  • Levels Of Cognition (from Bloom's Taxonomy, 1956)

The Subject Areas of the CSQE 2002 Body of Knowledge are:

  • General Knowledge, Conduct, and Ethics
  • Software Quality Management
  • Software Engineering Processes
  • Program and Project Management
  • Software Metrics, Measurement, and Analytical Methods
  • Software Verification and Validation (V&V)
  • Software Configuration Management
  • Examples Of Performance Skill Levels
  • Mapping Of Performance Levels To Job Requirements
  • Describing Individual Performance Levels

The emphasis of the tutorial is on mapping knowledge and skill areas into performance measures. By identifying relevant skill areas and performance measures, the quality engineering team can understand what levels of performance are expected and what levels of performance are being shown. Only a brief BOK topical overview is presented, since defining all the topics in the CSQE BOK in detail would take more time than is available in a one-day tutorial.

Presented at 2002 Software Quality Week (QW)

2002 CSQE BOK Slides (6 up) | 2002 CSQE BOK Slides (2 up) | 2002 CSQE BOK Paper

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Divide and Conquer

January 2005 Better Software Magazine

This article describes a way to deal with overwhelming tasks, such as those we have when starting a new job. It describes one way I've quickly prioritized tasks and dealt with long to-do lists.

Published as a "Front Line" article in Better Software Magazine January 2005

Talk presented at ASQ Silicon Valley Section Meeting, November 2005

Divide Slides | Divide and Conquer Paper

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Exhausting Your Testing Options

July/August 2003 STQE Magazine

This article describes my experience at a startup, working with a massively parallel system. Because of its size and speed, I found that going through all the input values was practical for some of the function testing. Some of the surprises from this testing:

  • Failures can lurk in incredibly obscure places
  • You could need 4 billion computations to find 2 errors caused by a bug
  • Even covering all of the input and result values isn't enough to be certain

Published as a "Bug Report" within Software Testing and Quality Engineering (STQE) Magazine July/August 2003

Exhausting Options Paper

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Exploratory Test Automation

CAST 2010

Joint presentation with Cem Kaner

There are many different test automation techniques that we can call exploratory. This talk supports the keynote and presents a conceptual framework for exploratory automation techniques that Cem and Doug have been organizing over the past 12 years. It will provide several examples that illustrate that framework. The paper will collect ideas that Doug or Cem have published in several slide sets but not yet in any citable paper.

Presented jointly with Cem Kaner at the Conference of the Association for Software Testing (CAST) 2010

Exploratory Test Automation Slides | Exploratory Test Automation (Paper)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Five Test Automation Fallacies that Will Make You Sick

Better Software Conference, 2009

Five common misunderstandings about test automation lead to trouble. If unchecked, any of these problems can cause failure of an automation effort. When these common fallacies are recognized, we can minimize or avoid the problems.

The presentation covers the fallacies, how to find more bugs with automated tests, what makes automated tests different from manual tests, typical errors in test automation, the difficulties with most automated results comparisons, where automated tests are valuable, and actions that can be taken to avoid trouble over these problems.

  • Automated tests find many bugs.
  • Manual tests make good automated tests.
  • What to check is clear and simple.
  • We know what to expect.
  • More automated regression tests are always better.

Presented at Better Software Conference, June, 2009

Five Fallacies Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Early Testing Without the "Test and Test Again" Syndrome

2004 Better Software Conference

SSQA 11/2006

This paper introduces what I call the "Test and Test Again" Syndrome. This happens when a test group begins testing early in the development cycle and finds itself repeatedly testing and retesting to the exclusion of all other activities. The presentation describes the syndrome, its likely causes, and things to do to avoid and break out of the cycle.

Presented at 2004 Better Software Conference

Early Testing Without the Test and Test Again Syndrome (Abstract) | Early Testing (Slides)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Avoiding the "Test and Test Again" Syndrome

2007 Conference for AST

CAST 07/2007

I've heard that a frog won't jump out of boiling water if the water is slowly heated from room temperature to boiling and the frog was placed in it before heat is applied (which I tend to believe without needing to test it for myself). It seems that a frog does not react to slow changes in temperature, even when its life is threatened. Some test projects I've worked with seemed to have gone through a similar process; the test team came in one day and realized that it was time for the final testing push, but they had been so busy running tests that they hadn't had time to prepare properly. The test team had become embroiled in what I call the "Test and Test Again Syndrome." What are some of the forces behind this? What does it cost us? How can testers successfully deal with the Syndrome? How might it be avoided? What are some of the approaches that have failed to deal with it? The session delves into some of the lessons learned through the school of hard knocks.

Presented at 2007 CAST

Avoiding the Test and Test Again Syndrome (Abstract) |
Avoiding the Test and Test Again Syndrome (Slides) |
Avoiding the Test and Test Again Syndrome (Paper)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Automation Architecture Approaches: Beyond Regression Testing

2007 Conference for AST; 2006 SSQA

Most testers think of GUI based, scripted regression testing when they picture test automation. These scripts are rerun as a regression test qualification for the software. This is especially true for management's dream test set, where large numbers of regression tests are created to fully and automatically qualify a product. This type of automation amounts to automating existing manual tests, and is less effective than just running the tests manually. It is also an expensive undertaking, and more frequently than not it is unsuccessful for a variety of economic and technical reasons.

There are vast possibilities beyond that open to us when automating tests. When we think of test automation we should first think about doing things that we cannot do manually. Based on experience creating non-regression automated tests, these presentations address what and how we can create more powerful automated tests.

The SSQA talk is a one-hour presentation about the limitations and how other kinds of test automation may be much more valuable.

The CAST tutorial is a one-day presentation of advanced automated test architectures.

The Tutorial covers:

  • The relative strengths and weaknesses of manual and automated testing
  • The trouble with automated regression tests
  • Architectures for automated oracles to establish pass/fail verdicts
  • 8 frameworks for non-regression automated tests
  • 12 types of errors discoverable only with automated tests

Automation Architecture Approaches: Beyond Regression (Tutorial Abstract) |
Automation Architecture (CAST 2007 Slides) |
SSQA Beyond Regression Slides (SSQA)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Heuristic Test Oracles

April 1999 STQE Magazine

Capture and comparison of results is one key to successful software testing. For manual tests this often consists of viewing results to determine if they are anything like what we might expect. It is more complicated with automated tests, as each automated test case provides a set of inputs to the software under test (SUT) and compares the returned results against what is expected. Expected results are generated using a mechanism called a test oracle.

It is often impractical to exactly reproduce or compare accurate results, but it is not necessary for an oracle to be perfect to be useful.  In this article, I describe some ideas associated with what I call heuristic oracles. A heuristic oracle uses simpler consistency checks (heuristics) for the results of a test.
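
A minimal sketch of the idea (an assumed example, not the one used in the article): rather than computing exact expected values for an age-from-birthdate function, a heuristic oracle checks cheaper plausibility properties that any correct answer must satisfy, accepting that such checks can miss subtle bugs.

from datetime import date

def sut_age(birth: date, today: date) -> int:
    # Hypothetical stand-in for the software under test.
    years = today.year - birth.year
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

def heuristic_oracle(birth: date, today: date, actual: int) -> bool:
    # Plausibility checks only: non-negative, bounded, and within a year of a
    # rough day-count estimate. Cheap, but may not catch off-by-one errors.
    approx = (today - birth).days / 365.25
    return 0 <= actual <= 150 and abs(actual - approx) <= 1

birth, today = date(1980, 6, 15), date(2014, 3, 1)
assert heuristic_oracle(birth, today, sut_age(birth, today))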

Published as an article "Heuristic Test Oracles" in Software Testing and Quality Engineering (STQE) Magazine April 1999

Heuristic Test Oracles Paper

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Test Automation Architectures: Planning for Test Automation

1999 International Quality Week Conference; East Bay SSQA (EBSSQA)

Designing a practical test automation architecture provides a solid foundation for a successful automation effort.  This talk describes elements of automated testing that need to be considered, models for testing that can be used  for designing a test automation architecture, and considerations for successfully combining the elements to form an  automated test environment. The paper covers:

  • Important differences between manual and automated testing that must be factored into a test automation  architecture
  • The role test results capture and comparison plays in test automation
  • A model for software testing useful for designing test automation architectures
  • The use of oracles for generation of expected results
  • An approach to designing a test automation architecture based on the factors.

Presented at EBSSQA, March 1998, and the 1999 International Quality Week Conference

Planning Ahead Slides (EBSSQA) | Automation Architectures Slides | Automation Architectures

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Automated Testing of Embedded Software

2003 Spring/Fall Software Test Automation Conference

More and more software is being embedded in everyday devices ranging from computer peripherals to toys. Development and testing of embedded software offers new challenges because of the combination of the hardware devices and environments. This presentation describes some of the common issues and some ways they have been addressed.

Presented at Spring 2003 Software Test Automation Conference

Auto Embedded Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

The Darker Side of Metrics

2000 PNSQC

CAST 2006

There sometimes is a decidedly dark side to software metrics that many of us have observed, but few have openly discussed. It is clear to me that we often get what we ask for with software metrics, and we sometimes get side effects from the metrics that overshadow any value we might derive from the metrics information. Whether or not our models are correct, and regardless of how well or poorly we collect and compute software metrics, people's behaviors change in predictable ways to provide the answers management asks for when metrics are applied. Don't take me wrong; I believe most people in this field are hard working and well intentioned, and although some of the behaviors caused by metrics may seem funny, quaint, or even silly, they are serious responses created in organizations because of the use of metrics. Some of these responses seriously hamper productivity and can actually reduce quality.

The presentation focuses on a metric that I've seen used in many organizations (readiness for release) and some of the disruptive results in those organizations. I've focused on three examples of different metrics that have been used and a few examples of the behaviors elicited by using the metrics. For obvious reasons, the examples have been subtly altered to protect the innocent (or guilty). The three metrics are:

1. Defect find/fix rate
2. Percent of tests running/Percent of tests passing
3. Complex model based metrics (e.g., COCOMO)

Some of the observed behaviors include:

  • testers withholding defect reports
  • punishment of test groups (and individual testers) for not finding defects sooner
  • use of "Pocket lists" of defects by developers and testers
  • blocks of unrelated defects being marked as duplicates of one new consolidated defect (to reduce the defect count)
  • artificial shifting of defects to other projects or temporary "unassigning" of defects to reduce the defect count
  • changing definitions of what a test or test case is to change the count of tests
  • shipping of products with known missing features because 100% testing was achieved
  • routine changing of expected results to known incorrect results so the test would pass
  • lowered ranking of testers because they weren't finding defects as quickly as the model showed they should
  • holding back on defect reporting and testing because the model showed they shouldn't be found yet

Presented at 2000 Pacific Northwest Software Quality Conference

Presented at CAST 2006

Dark Side Abstract | PNSQC Dark Side Slides | PNSQC Dark Side Paper
CAST Dark Side (Slides)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Cost Benefits for Test Automation

1999 STAR

There are many factors to consider when planning for test automation. Automation changes the complexion of testing from design through implementation and test execution. It is important to understand the potential costs and benefits before undertaking the kind of change automation implies. Automated tests can be incredibly effective, giving more coverage and new visibility into the software under test. However, it often provides us with opportunities for testing in ways impractical or impossible for manual testing; yet conventional metrics may not show any improvements. This presentation describes financial, organizational, and test effectiveness impacts observed when software test automation is installed. Equations, suggestions and examples are provided to help decide when automation is beneficial.

Cost benefits from automation are viewed as trade-offs in comparison to manual testing (or the current situation). Financial impacts are computed in comparison to two alternatives: manually testing the same thing or not testing (accepting the risk of not knowing). Organizational impacts, such as the skills needed to design and implement automated tests and to develop and maintain automation tools and environments, are also discussed.
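
As a simplified illustration of this kind of trade-off (an assumed simplification, not necessarily the equations used in the paper), a break-even estimate can compare the one-time cost of automating a test against the per-run saving over manual execution.

def breakeven_runs(automation_cost, manual_cost_per_run, automated_cost_per_run):
    # Number of executions at which automation starts paying for itself.
    saving_per_run = manual_cost_per_run - automated_cost_per_run
    if saving_per_run <= 0:
        return float("inf")   # automation never breaks even on cost alone
    return automation_cost / saving_per_run

# Example: $2,000 to automate a test that costs $50 per manual run, $2 automated.
print(breakeven_runs(2000, 50, 2))   # about 42 runs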

Test automation is not always necessary or appropriate. Automating existing manual tests is a path frequently chosen by default, but usually is not cost beneficial and sometimes results in decreased test effectiveness. The costs and benefits of test automation can be identified and estimated, and good management decisions made about using automation to improve testing.

Paper presented at 1999 Software Testing, Analysis, and Review (STAR) Conference.

Cost Benefits Abstract | Cost Benefits Paper | Cost Benefits Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Using Test Oracles in Automation

Quality Week 2000

PNSQC 2001

Spring 2003 Software Test Automation Conference

These presentations and slides show a progression in understanding and applying automated software test oracles.

Software test automation is often a difficult and complex process. The most familiar aspects of test automation are organizing and running test cases and capturing and verifying test results. A set of expected results is needed for each test case in order to check the test results. Verification of these expected results is often done using a mechanism called a test oracle. The paper and talks describe the use of oracles in automated software verification and validation. Several relevant characteristics of oracles are included, with the advantages, disadvantages, and implications for test automation.

Real world oracles vary widely in their characteristics. Although the mechanics of various oracles may be vastly different, a few classes can be identified that correspond with automated test approaches. These types of oracles are categorized based upon the strategy for verification using the oracle. Thus, an oracle strategy that uses a lookup table to generate expected results can be treated the same as one that uses an alternate algorithm implementation to compute them. Four types of oracle strategies (and not using any oracle) are identified and defined. The strategies are labeled True, Heuristic, Consistency, and Self Referential.
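
As a minimal sketch of the alternate-algorithm style of oracle mentioned above (not code from the paper; the names are hypothetical), the expected result is computed by an independent route and compared with the output of the implementation under test.

    # Minimal alternate-algorithm oracle sketch; names are hypothetical.
    def sort_under_test(items):
        return sorted(items)            # stand-in for the product's own sort

    def oracle_expected(items):
        result = list(items)            # independent, deliberately simple
        for i in range(len(result)):    # selection sort as the alternate algorithm
            j = min(range(i, len(result)), key=result.__getitem__)
            result[i], result[j] = result[j], result[i]
        return result

    def check(items):
        return sort_under_test(items) == oracle_expected(items)

    print(check([3, 1, 2]), check([]), check([5, 5, 1]))   # True True True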

Slides presented at the Spring 2003 Software Test Automation Conference, March 2003

Oracles In Automation Slides

Paper presented at PNSQC 2001

Oracles In Automation Paper | Oracles In Automation Slides

Tutorial Slides presented at Quality Week 2000

Oracles In Automation Tutorial Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Measuring the Quality of Consulting

Quality Week 1994

PNSQC 1994

4ICSQ

These papers describe an innovative process for measuring and improving the quality of software quality assurance professionals contracted through an agency. The process was implemented over a three-year period and used by the clients and software quality professionals of Systems Partners (a consulting agency). The papers provide brief histories of the program, its main elements and mechanisms, and some of the results obtained.

Paper and presentation made at Quality Week 1994, PNSQC 1994, and the 4th International Conference on Software Quality (4ICSQ)

PNSQC Measuring Consulting Paper | 4ICSQ Measuring the Quality of Software Consulting (paper)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Requirements for Test Automation

PNSQC 2000

SSQA December 2000

Automating testing is like any other software development project, and as such we need to articulate the requirements. Test automation should not be done piecemeal or in an ad hoc fashion, because the resulting work products become more and more expensive to use, eventually becoming obsolete or unsupportable. The presentation describes many special considerations for automation requirements and a method for understanding and organizing them.

Presentation at PNSQC 2000 and SSQA 12/2000

Requirements for Test Automation

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 **********************************************************

Misleading Metrics

Dr. Dobb's Journal
October 01, 2002

This short article provides some examples of observed side effects in some organizations because of the measures and metrics they were using. The examples are based on bug find and fix rates, tests run/passing, and measuring quality and defect density.

Published in Dr. Dobb's Journal, October 01, 2002

Misleading Metrics

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

The Software Quality Group's Relationship to Development

Quality Week 1993

This paper presents the roles of the Software Quality Organization in software development as observed in dozens of commercial organizations. It describes the development process as the way quality is built into software. It also looks at the different ways the quality group's purpose and charters were viewed. The potential benefits and drawbacks for various charters are described, along with the organizational structure and typical activities for each. The idea that the charter for the quality group changes over time is also presented, along with observed progressions in organizations. The various possible organizations, charters, and roles are described and related briefly to quality systems described in both the SEI Maturity Model and ISO 9000 Standards (ISO 9001 and ISO 9000-3). In summary, it describes the impact on product quality of the different types of development process and possible roles for the software quality group.

Paper and presentation made at Quality Week, October 1993

Relationship to Development Paper | Relationship to Development Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Review Comments on the Proposed IEEE Software Engineering Body of Knowledge V1.00 (2003)

Personal Email to IEEE SWEBOK Committee, June 2003

The IEEE Software Engineering Coordinating Committee put out the SWEBOK (Trial Version 1.00 - May 2001) for review and feedback. I felt compelled to provide 12 pages of comments because of my extremely strong [negative] reaction.

I spent several hours going through the first two chapters before skipping to the chapters on Testing and then Software Quality. I was encouraged by the explicit recognition in both chapters that different organizations, users, and products require different techniques. But I was discouraged by the many deficiencies in the Testing chapter and the gaping holes in the Software Quality chapter. I was shocked that no reference is made to ASQ's CSQE, even if only to criticize it. Either the drafters of the SWEBOK were really ignorant of the existence of a sister society's related Software Quality Engineering Body of Knowledge, or they chose to ignore it because it was inconvenient or at odds with their SWEBOK. In my opinion, if the first case is true, they were incompetent, and if the second is true, they committed professional malpractice.

In any case, they chose not to respond to or even acknowledge any of my input in the 2004 publication of the "SWEBOK_Guide_2004". The 2004 SWEBOK still does not acknowledge the ASQ or the CSQE BOK. I also find it curious that the Preface states that the ACM was active in creating the joint committee, that it approved the code of ethics in 1998, and that it was working on an alternate educational curriculum, but the Preface fails to mention that the ACM rejected the SWEBOK itself.

"SWEBOK Observations" for IEEE-SWEBOK V1.0 1 Page Summary of My "SWEBOK Observations"

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Metrics for Metrics: Cost Analysis and Justifications

Developing Strategic I/T Metrics, May 1988

This talk was targeted at IT managers and CFOs who were developing metrics programs. The talk discusses some of the key elements in determining where to direct and evaluate a software metrics program:

  • Determining metrics expenditures' place within your I/T budget
  • Analyzing metrics results
  • Refining your metrics program
  • How do you measure the ROI of your metrics program? (a brief sketch follows this list)
  • Short term testing for metrics justification
  • Ensuring yours is a flexible metrics program for optimal long-term performance
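
As a hedged illustration of the ROI bullet above (not the talk's own method, which is in the slides and notes), the standard return-on-investment ratio can be applied to a metrics program; the figures are invented.

    # Standard ROI ratio applied to a metrics program; numbers are invented.
    def metrics_roi(benefit, cost):
        """Return on investment expressed as a fraction of cost."""
        return (benefit - cost) / cost

    # E.g., an estimated $30,000 saved (earlier defect detection, avoided
    # rework) against a $20,000 program cost:
    print(metrics_roi(benefit=30_000, cost=20_000))   # 0.5, i.e. 50%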

The speaker notes are available with the slides.

Metrics for Metrics (Slides) | Metrics for Metrics (Notes)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

What Makes Software Quality Different?

ASQ SV Section 2003

Quality models for software are often based on hardware parallels or assembly line concepts. Although these models may be convenient for people coming from other industry segments, they ignore significant differences that can lead to counterproductive processes, metrics, and controls. The talk discusses some of the reasons software quality assurance is different, for example:

  • Software is developed, not manufactured
  • Software development processes vary tremendously
  • Software is easily modified
  • Side effects of software changes are not well understood
  • Change control and version management frequently aren't rigorous
  • Quality records are easily modified and rarely secure
  • Standard MTBF and MTTR types of measures don't mean the same as with hardware

"What Makes Software Quality Different (Slides)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

The Role of the Quality Group in Software Development

PNSQC 2003

This paper describes the role of the quality organization in software development as observed in dozens of commercial organizations. It looks at the different charters and purposes quality groups have. The potential benefits and drawbacks for various charters are presented, along with the organizational structure and typical activities in each. The charter for the quality group changes over time, and observations of progressions in organizations are made. The various possible organizations, charters, and roles are described and related briefly to quality systems described in both the SEI Capability Maturity Model and ISO 9000 Standards (ISO 9001 and ISO 9000-3). The impact of the different types of development process on product quality, and possible roles for the quality group, are also covered.

Paper and presentation made at PNSQC 2003

Organization Roles Paper | Organization Roles Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Failure Modes and Effects Analysis

May, 2000 ASQ Presentation

Failure Modes and Effects Analysis (FMEA) is a risk analysis and prioritization method developed for the aerospace industry in the 1960s. This talk describes how the FMEA techniques can be applied to software.
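
As a minimal, hedged sketch of the usual FMEA arithmetic applied to software (the failure modes and ratings below are invented, not from the talk): each failure mode is rated for severity, likelihood of occurrence, and likelihood of escaping detection, and the product of the three ratings, the risk priority number (RPN), ranks where to focus attention.

    # Classic FMEA risk priority number: severity x occurrence x detection,
    # each rated 1-10. Failure modes and ratings are invented examples.
    failure_modes = [
        ("data loss on crash",       9, 3, 4),
        ("slow report generation",   4, 6, 2),
        ("garbled error message",    2, 5, 7),
    ]

    ranked = sorted(failure_modes,
                    key=lambda m: m[1] * m[2] * m[3],   # RPN
                    reverse=True)

    for name, sev, occ, det in ranked:
        print(f"RPN {sev * occ * det:3d}  {name}")
    # RPN 108  data loss on crash
    # RPN  70  garbled error message
    # RPN  48  slow report generation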

Talk presented at May 19, 2000 ASQ Silicon Valley Section Dinner Meeting.

FMEA Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

21CFR Part 11: Electronic Signatures, Electronic Records

May, 2000 ASQ Presentation

The US Food and Drug Administration (FDA) has issued regulations that apply to the security of Electronic Signatures and Electronic Records. Title 21 of the Code of Federal Regulations, Part 11 (21 CFR Part 11) is the model for information security being considered for all government agencies.

This talk describes the major characteristics of the regulations and implications for  software testing.

Talk presented at May 19, 2000 ASQ Silicon Valley Section Dinner Meeting.

21CFR11 Slides

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Why Tests Do Not Pass (or Fail)

October, 2009 PNSQC

Most testers think of tests as passing or failing: either they found a bug or they did not. Unfortunately, experience shows us repeatedly that passing a test does not really mean there is no bug. It is quite possible for a test to surface an error that goes undetected at the time. It is also possible for bugs to exist in the feature being tested in spite of the test of that capability. Passing really only means that we did not notice anything interesting.

Likewise, failing a test is no guarantee that a bug is present. There could be a bug in the test itself, a configuration problem, corrupted data, or a host of other explainable reasons that do not mean that there is anything wrong with the software being tested. Failing really only means that something that was noticed warrants further investigation.

The paper explains the ideas further, explores some of the implications, and suggests some ways to benefit from this new way of thinking about test outcomes. It concludes with an examination of how to use this viewpoint to better prepare tests and report results.
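
As one hedged illustration of reporting in this spirit (the outcome names below are mine, not the paper's terminology), a result record can carry more than a boolean, distinguishing "nothing noticed" from "something to investigate":

    # Illustrative only; the outcome names are not the paper's terminology.
    from dataclasses import dataclass
    from enum import Enum

    class Outcome(Enum):
        NOTHING_NOTICED = "no anomaly observed (not proof of correctness)"
        INVESTIGATE = "anomaly observed; could be product, test, data, or setup"

    @dataclass
    class TestResult:
        test_id: str
        outcome: Outcome
        notes: str = ""

    r = TestResult("login-smoke-01", Outcome.INVESTIGATE,
                   "timeout; may be the test environment rather than the product")
    print(r.outcome.value)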

Talk presented at October, 2009 Pacific Northwest Software Quality Conference (PNSQC).

Why Tests Do Not Pass (Slides) | Why Tests Do Not Pass (Paper)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Bugs For Sale

April, 2010 IV EBTS

Our goal in testing is not just to find and report bugs; it includes getting appropriate action taken based on the information we report. By taking the view that we are selling a bug when we report it, we can be more successful at getting the right attention. At the same time, we will increase the number of bugs we get fixed by improving the communication in our bug reports.

The talk explains how to sell bugs and how the selling of bugs helps get more bugs fixed and better decisions made.  It describes the kind of information it takes to sell a bug.

The main points made include:

    1. Motivating a person to buy
    2. Overcoming objections
    3. Identifying the audience
    4. Capturing the important information
    5. Keeping it simple

Talk presented at IV Encontro Brasileiro de Testes de Software (IV EBTS).

Bugs For Sale (Slides)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

Test Oracles

April, 2010 IV EBTS

Designing tests includes both exercising the software and determining whether the resultant behavior is expected. Capturing and comparing results can be tricky and difficult. An oracle is the principle or mechanism we use to tell whether or not the software behaves as expected.

The talk describes the nine types (listed below), explains what each is, and goes into some of their applications and possible mechanisms to use them.

The nine types are:

    1. Complete
    2. Heuristic
    3. Statistical
    4. Consistency
    5. Self-verifying
    6. Model-based
    7. Inverse function
    8. Hand crafted
    9. None
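
As a minimal, hedged sketch of one of these, the inverse function oracle (number 7 above) applies the inverse operation to the observed output and checks that it reproduces the input; the example is illustrative and not drawn from the talk.

    # Inverse-function oracle sketch: square the computed square root and
    # compare with the original input.
    import math

    def sqrt_under_test(x):
        return math.sqrt(x)             # stand-in for the function being tested

    def inverse_oracle_ok(x, tolerance=1e-9):
        y = sqrt_under_test(x)
        return abs(y * y - x) <= tolerance * max(1.0, abs(x))

    print(all(inverse_oracle_ok(v) for v in (0.0, 2.0, 144.0, 1e6)))   # True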

Talk presented at IV Encontro Brasileiro de Testes de Software (IV EBTS).

Test Oracles (Slides)

Return to Top | Alphabetical Order | Management Topics | Technical Topics | Reverse Chronological Order

 ***********************************************************

 

 

 

Updated March 22, 2014

Copyright © 1995-2013 Software Quality Methods, LLC. All Rights Reserved.
