
50 Software Testing Terms Defined

December 21, 2021 Chris Kenst

The software engineering field is filled with specialized terminology that you need to know in order to better communicate with colleagues.

One of the best ways to reduce misunderstandings and help people work together better is to build a glossary. Explaining important terms can create a shared understanding of our work, encourage discussion, and create better collaboration.

Here are fifty software testing terms you need to know as you progress in this industry.

A

A/B testing – A test technique used to show different versions of the same feature to end-users to determine whether one yields better results.
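
As a sketch of the mechanics: one common approach is to assign each user to a variant deterministically, for example by hashing a user ID, so the same user always sees the same version. The function and experiment names here are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "checkout-button") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing (experiment + user_id) keeps a user's variant stable
    across sessions while splitting traffic roughly 50/50.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("user-42"))  # the same user always sees the same variant
```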

Acceptance testing – A test written by domain experts or customers to determine if software meets specifications. Previously, the term was used to describe the part of the contractual process where software is accepted after being built by a third party.

Accessibility testing – Determines whether a product is usable by people with disabilities. Often involves determining to what degree assistive technology works with a given product.

B

Behavior-driven development (BDD) – An extension of test-driven development in which collaboration is used to specify requirements as executable tests. It describes the user’s preferred outcome in the form given-when-then.
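
A minimal plain-Python sketch of the given-when-then structure (the `ShoppingCart` class is hypothetical); BDD frameworks such as Cucumber, behave, or pytest-bdd bind Gherkin text to executable step functions instead.

```python
class ShoppingCart:  # hypothetical system under test
    def __init__(self):
        self.items = []

    def add(self, item: str):
        self.items.append(item)

def test_adding_an_item_to_an_empty_cart():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When the user adds a book
    cart.add("book")
    # Then the cart contains exactly that book
    assert cart.items == ["book"]
```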

Black-box testing – Testing and test design without using knowledge of the code. Black-box testers focus on the relationships between the product and stakeholders.

Bug – A software error or defect that produces incorrect results. It may be a coding, design, or functional error.

C

Code coverage – Code coverage tools measure how much of your source code has been executed during testing. Coverage is usually reported as a percentage for a given coverage type. For example, line coverage measures the percentage of lines of code that have been executed.
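
A small illustration of the idea, using a hypothetical `shipping_cost` function: a single test executes only some of the lines, and a tool such as coverage.py would report the rest as uncovered.

```python
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")  # never run by the test below
    if weight_kg < 5:
        return 5.00
    return 5.00 + (weight_kg - 5) * 1.25             # never run either

def test_light_package():
    assert shipping_cost(2) == 5.00

# This single test executes only the "< 5" branch, so line coverage is
# below 100%. A tool like coverage.py (`coverage run -m pytest`, then
# `coverage report`) would flag the unexecuted lines.
```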

Continuous testing – Per the book Accelerate, automated unit and acceptance tests are run against every commit to version control to give developers fast feedback on changes. Developers should be able to run all automated tests on their machines in order to triage and fix problems.

Continuous integration – Every time someone commits a change, an application is built and a comprehensive set of automated tests are run against it. The goal is to have the application in a working state at all times.

Continuous delivery – The goal of continuous delivery is to get changes into the hands of users (or just into production) safely, quickly, and sustainably. From a high level, this includes using continuous integration to keep the application working and tested and using automated processes for ease of deployments.

Component tests – Component tests exercise a well-defined part of the system. Unlike unit tests, which are typically defined as being a developer activity, component tests can be done either at a code level or by a test team focusing on the behavior of the application.

D

Domain testing – The most widely taught type of software testing, it combines equivalence class partitioning and boundary analysis. Per The Domain Testing Workbook, you divide all possible values into a subset of similar values (equivalence classes) and use one or two values from each subset (boundaries) to maximize the likelihood of exposing a bug.
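
A sketch of the technique in pytest, using a hypothetical age rule: one or two values per equivalence class, concentrated at the boundaries where off-by-one bugs are most likely to hide.

```python
import pytest

def can_rent_car(age: int) -> bool:
    """Hypothetical rule: renters must be 21 through 75, inclusive."""
    return 21 <= age <= 75

@pytest.mark.parametrize("age,expected", [
    (20, False),  # just below the lower boundary
    (21, True),   # lower boundary
    (75, True),   # upper boundary
    (76, False),  # just above the upper boundary
    (40, True),   # representative value from the valid class
])
def test_rental_age_boundaries(age, expected):
    assert can_rent_car(age) == expected
```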

E

End-to-end testing – Commonly confused with system testing. End-to-end tests drive the system through its external interfaces, such as a user interface or API, to verify complete workflows against a normal running system.

Exploratory testing – This testing style relies on the working style of the individual tester by treating test-related learning, design, execution, and interpretation of results as mutually supportive activities that run in parallel throughout the project.

F

Functional testing – Testing and test design that focuses on individual features (sometimes called functions) of an application. This can be done from a black-box or white-box approach.

H

High volume automated testing – HiVAT relies on automated generation, execution, and evaluation of multiple tests. The individual tests are often weak, but together they can expose problems that individually crafted tests will miss.
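
A minimal sketch of the idea, assuming a hypothetical `saturating_add` function: generate a large volume of random inputs and apply weak checks to each one.

```python
import random

def saturating_add(a: int, b: int, cap: int = 100) -> int:
    """Hypothetical function under test: add, but never exceed cap."""
    return min(a + b, cap)

def test_high_volume_random_inputs():
    rng = random.Random(1234)  # fixed seed keeps failures reproducible
    for _ in range(100_000):
        a, b = rng.randint(-1000, 1000), rng.randint(-1000, 1000)
        result = saturating_add(a, b)
        # Each check is weak on its own, but across 100,000 inputs the
        # run can expose problems hand-picked cases would miss.
        assert result <= 100
        if a + b <= 100:
            assert result == a + b
```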

I

Integration testing – Focusing on two or more units of a product. Low-level integration testing might focus on just two units working together, whereas high-level integration testing might expand to a full working system.

K

Keyword-driven testing – In this technique, keywords are used to describe each executable function. The test design is separated from the programming work, so that you can build out the tests simultaneously with the application.
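
A toy sketch of the pattern: test steps are data (a keyword plus arguments), and a small interpreter maps each keyword to an executable function. All keywords and steps here are hypothetical.

```python
def open_page(url): print(f"opening {url}")
def type_text(field, value): print(f"typing {value!r} into {field}")
def click(button): print(f"clicking {button}")

KEYWORDS = {"open": open_page, "type": type_text, "click": click}

# Testers compose new tests as data, without touching the code above.
login_test = [
    ("open", "https://example.com/login"),
    ("type", "username", "chris"),
    ("type", "password", "s3cret"),
    ("click", "Sign in"),
]

for keyword, *args in login_test:
    KEYWORDS[keyword](*args)
```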

L

Load testing – This assesses whether a user or group of users could overload a system’s resources. A simple load test might check the number of connections a website could handle, while a more robust test might combine increased connections with functional tasks.
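
A bare-bones sketch of a connection-count check using only the Python standard library; dedicated load tools (for example JMeter, k6, or Locust) add ramp-up profiles, metrics, and reporting. The target URL is a placeholder.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # hypothetical target; point at a test system

def hit(_):
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

# Fire 200 requests across 50 concurrent connections and report the outcome.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(hit, range(200)))
print(f"{sum(results)}/200 succeeded in {time.monotonic() - start:.1f}s")
```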

M

Mean time to restore – MTTR is the average time it takes to recover from any failure. It’s used to measure software delivery performance.

Monitoring – Monitoring is a way to increase the understanding of a system. Typical application monitoring consists of collecting and analyzing data (like system logs, exceptions, or errors) over time to gather metrics on performance and problems, in order to better understand how bugs happen.

Model-based testing – Uses mathematical and visual models to build representations of what a system might do in order to determine if the test is passing or failing.
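
A simplified sketch of the idea: a deliberately obvious model predicts what a hypothetical `Counter` should do, and random action sequences check that the real implementation agrees with the model at every step.

```python
import random

class CounterModel:
    """The model: expected behavior, kept deliberately simple."""
    def __init__(self): self.value = 0
    def increment(self): self.value += 1
    def reset(self): self.value = 0

class Counter:
    """Hypothetical system under test."""
    def __init__(self): self._n = 0
    def increment(self): self._n += 1
    def reset(self): self._n = 0
    def read(self): return self._n

def test_counter_against_model():
    model, sut = CounterModel(), Counter()
    rng = random.Random(7)
    for _ in range(1_000):
        action = rng.choice(["increment", "reset"])
        getattr(model, action)()
        getattr(sut, action)()
        # After every step, the real system must agree with the model.
        assert sut.read() == model.value
```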

O

Observability – Observability is about being able to tell what’s going on within a complex system based on the data output. The easier a system is to observe, the better you understand what it is doing. The insights you gain from observability help you measure the success or failure of a system.

Oracle – A mechanism or principle for gauging whether a program passed or failed a test and whether a problem might exist. Oracles can be anything from reference programs to an individual’s specific knowledge to something embedded in data.
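
A sketch of a reference-implementation oracle: a hypothetical insertion sort is judged against Python’s built-in `sorted`, which serves as the trusted answer.

```python
import random

def insertion_sort(xs):
    """Hypothetical implementation under test."""
    result = []
    for x in xs:
        i = len(result)
        while i > 0 and result[i - 1] > x:
            i -= 1
        result.insert(i, x)
    return result

def test_against_reference_oracle():
    rng = random.Random(99)
    for _ in range(1_000):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        # The oracle: an independent, obviously-correct reference answer.
        assert insertion_sort(xs) == sorted(xs)
```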

P

Pairwise testing – Pairwise, also called all-pairs testing, combines variable values so that every pair of values is exercised by at least one test, which sharply reduces the number of test cases compared to testing every combination. This is a popular subset of combinatorial testing.
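
A small illustration of the covering property, with hypothetical variables: three variables with two values each would need 2 × 2 × 2 = 8 exhaustive combinations, but four rows cover every pair.

```python
from itertools import combinations, product

# Four test cases that cover every PAIR of values across three variables.
pairwise_suite = [
    ("chrome",  "windows", "logged-in"),
    ("chrome",  "macos",   "guest"),
    ("firefox", "windows", "guest"),
    ("firefox", "macos",   "logged-in"),
]

# Verify the covering property: every value pair, for every pair of
# columns, appears in at least one test case.
columns = list(zip(*pairwise_suite))
for (i, col_a), (j, col_b) in combinations(enumerate(columns), 2):
    needed = set(product(set(col_a), set(col_b)))
    covered = {(row[i], row[j]) for row in pairwise_suite}
    assert needed <= covered
print(f"{len(pairwise_suite)} tests cover all pairs (exhaustive would need 8)")
```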

Performance testing – Performance testing measures how well the program runs in various situations. The results might expose errors in the application, the environment, or even the configuration.

Procedure – A series of actions to be performed in a certain way, usually given as instructions to a computer. Sometimes people are given high-level procedures as part of a testing process, although this isn’t recommended.

Q

Quality – Quality describes the value of a product, although that value is subjective and depends on the stakeholders and the product.

Quality assurance – This describes the people who test software, as well as the act of testing a product to determine quality.

Quicktests – Quicktests are low-cost tests used to search for common bugs early in the development process.

R

Regression testing – Reusing tests after an update to confirm the system still works as expected or to see whether the change was effective.

Risk-based testing – Tests prioritized and designed around the product’s biggest risks, to expose the problems most likely to cause the software to fail.

Robustness – The ability of a system to gracefully handle bugs, bad or unexpected inputs, or poor environmental conditions while continuing to run.

S

Scalability – The measure of how well a system handles increases (and in some cases decreases) in work demand based on performance and cost. As software systems increase in work demand, it is common to increase the amount of resources given to the system (such as CPU, memory, and database size) before any architectural changes might be necessary.

Smoke test – Also called build verification tests, smoke tests are a small number of tests run after a build to check whether the system starts and its most basic functions work before deeper testing begins.

System testing – Demonstrates how well the system works when you test against a normal running system. If you think about a normal running system as a set of components, system testing focuses on all those components running together with real data.

Scripted testing – This technique involves working from a script or a procedure. Automated testing is the most common form of scripted testing.

T

Test – A single conceptual unit used to discover information. Sometimes tests are written as automated procedures, sometimes in more exploratory or expressive forms.

Testing – Testing describes the process of determining the quality of a product. This can mean searching for bugs, measuring the product’s speed, or evaluating whether it meets specifications, among other activities.

Test strategy – Given a testing mission, what set of ideas guides your test design? This can include your resources, your knowledge, what risks apply, and what is feasible. You choose the best combination of resources and techniques to meet your mission.

Test plan – This combines your test strategy, your logistics, and your project risk management. Some plans will be documented, while others won’t.

Test-driven development (TDD) – A style of software development that focuses on “clean code.” For every test that fails, you add the minimum amount of code to make it pass, then refactor as needed. This has given rise to the mantra “red/green/refactor.”
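
A minimal red/green sketch around a hypothetical `slugify` function: the failing test comes first, then just enough code to make it pass, then refactoring under the test’s protection.

```python
# Red: write a failing test first, for behavior that doesn't exist yet.
def test_slug_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

# Green: add the minimum code that makes the test pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Refactor: with the test passing, restructure freely; the test guards
# against regressions while the design improves.
```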

Test design – The goal of test design is to create effective tests by selecting the right tests for the mission and making sure they reveal relevant information.

Testability – This development requirement focuses on being able to effectively test a software product, with a focus on the ability to write automated tests. Development practices like TDD can help improve testability by writing automated tests upfront.

U

Unit testing – Focuses on testing individual units of a product. This could mean everything from a single module or function to a complete program.
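
A minimal example in pytest style: a single pure function (hypothetical here) exercised in isolation with a handful of assertions.

```python
def word_count(text: str) -> int:  # hypothetical unit under test
    return len(text.split())

def test_word_count():
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("testing terms defined") == 3
```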

Usability testing – Evaluates whether a product is easy to understand and use, typically through its graphical interface. This is often run with end users to evaluate their experience with the product.

V

Validation – Validation asks whether you’re building the right software to solve the customer’s problem or meet their expectations.

Verification – Verification asks whether you’re building the software properly and implementing it correctly according to specifications.

W

White-box testing – Testing and test design using knowledge of the code. White-box testers focus on the implementation of the product under test. Sometimes called glass-box testing, since you can’t see into something that is white.

X

xUnit – A family of unit testing frameworks. xUnit frameworks helped popularize TDD and are often the baseline when selecting or comparing unit testing frameworks.
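
Python’s built-in unittest module is an xUnit-family framework, so it illustrates the shared pattern: a TestCase class, setUp fixtures, assert methods, and a test runner.

```python
import unittest

class StackTests(unittest.TestCase):
    def setUp(self):
        self.stack = []  # fresh fixture before every test method

    def test_push_then_pop_returns_last_item(self):
        self.stack.append("a")
        self.stack.append("b")
        self.assertEqual(self.stack.pop(), "b")

    def test_new_stack_is_empty(self):
        self.assertEqual(len(self.stack), 0)

if __name__ == "__main__":
    unittest.main()
```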

Conclusion

Glossaries like this are a great tool for building understanding between test and development teams. Better communication leads to better collaboration, which can improve work performance and results across the industry.
