49 Agile Terms You Need to Know

Reference this glossary of the most common Agile terms.

For marketers without a technical background, the Agile methodologies and vocabulary that developers use can be quite a challenge to follow.

To help non-technical marketers get familiar with the Agile vocabulary, we’ve assembled a list of the 49 most common Agile terms that marketers need to know. If you’d like to see these terms in use, take a look at our annual Research Guide publications on the state of Agile: the 2016 Guide to Software Development Lifecycle: QA and Testing and the 2015 Guide to Code Quality and Software Agility.


Acceptance criteria: Pass/fail conditions that a piece of software or a sprint must meet in order to be considered finished.  

Agile: A set of software development principles focused on adapting to changing requirements and continuously improving products and processes.  

Agile coach: A long-term consultant focused on transitioning an organization to practice agile development.  

Automated testing: A form of verification in which a program exercises the software under test and checks for a predicted outcome, reporting a pass or fail based on whether the actual outcome matches the prediction.
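In practice, an automated test is just code that exercises other code and asserts on the result. A minimal Python sketch, using a hypothetical add() function as the software under test:

```python
# Hypothetical function under test.
def add(a, b):
    return a + b

def test_add():
    # The test passes only if the actual outcome matches the predicted one.
    assert add(2, 3) == 5

test_add()  # raises AssertionError on failure; silence means the test passed
```

Real projects run thousands of tests like this automatically on every change, rather than invoking them by hand.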


Backlog: A prioritized collection of feature requests and user stories that have not been completed and have not yet been scheduled into an upcoming iteration.

Behavior-driven development (BDD): A development practice, born out of test-driven development, in which tests for any unit of software are specified in terms of the desired behavior for that unit’s business case.


Code coverage: A measure of the percentage of total lines or blocks of code executed by your automated tests.
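As a back-of-the-envelope illustration of the metric (the line counts here are invented; real numbers come from a coverage tool):

```python
# Hypothetical counts reported by a coverage tool.
executed_lines = 180   # lines run by the automated tests
total_lines = 240      # total executable lines in the codebase

coverage = executed_lines / total_lines * 100
print(f"{coverage:.1f}% line coverage")  # → 75.0% line coverage
```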

Code review: A systematic examination of source code performed by a developer who did not write the code. The resulting discussion is intended to identify mistakes overlooked in the initial development phase and to help improve the original developer’s skills.

Context-driven testing: A testing philosophy that asserts that there are no best practices for testing in every context and that testing methods must be flexible enough to evolve with projects that often change in unpredictable ways. 

Continuous delivery: The automation and optimization of deploying software to production as soon as changes have been made. 

Continuous integration: A development practice that requires developers to submit code to a shared repository to be tested before being deployed to production. 


Dependency: A piece of software that is relied on by another piece of software in order to function as intended. 


Emergent design: The practice of letting an application’s design emerge piece by piece as the software is built, rather than designing the whole application at the start of the software development lifecycle (SDLC). 

Exploratory testing: A form of test design, test execution, and constant learning in which a skilled tester flexibly applies experience and creativity, with no pre-determined methodology, to predict issues and experiment in an effort to test the software more effectively. 

Extreme programming (XP):  An agile development methodology created by Kent Beck that includes frequent releases, unit testing all code, extensive code review, and pair programming. All of these practices have heavily influenced the software industry. 


Flow: Proactive action to remove barriers for team members so work is completed in a timely manner with as little difficulty as possible. 

Functional testing: Comparing the observed behavior of a software component to the intended behavior based on documentation and user stories.  


Integration testing: A testing stage that occurs after unit testing, where software modules are tested as a group to ensure that they work together to complete more complex tasks. 

Issue tracking system:  A tool that stores, organizes, and presents visualizations of recorded feature requests and software bugs with various contextual information to help those who are tasked with fixing the bugs.


Kanban: A work management technique in which the development process is visualized as individual task cards on a board for the whole team to see. Each task is pulled from a queue by the team members who are responsible for it and is tracked from definition to completion. 


Lean approach: The philosophy of creating a product with as little waste as possible, inspired by Toyota’s manufacturing process.  


Manual testing: Any test where a person attempts to complete a task with the software from an end user’s perspective, sometimes with additional tools or monitoring, and decides whether the test passes or fails by seeing if the actual outcome matches the desired outcome. 


Negative testing: A test strategy that explores how unexpected inputs will affect a system.
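A small Python sketch of the idea, contrasting it with positive testing; parse_age() is a hypothetical helper invented for illustration:

```python
# Hypothetical helper under test.
def parse_age(text):
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

# Positive test: a valid input yields the expected result.
assert parse_age("42") == 42

# Negative test: an unexpected input should be rejected, not accepted silently.
try:
    parse_age("-7")
    raise AssertionError("expected ValueError for a negative age")
except ValueError:
    pass
```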

Open source: A classification for software whose source code is available to modify or use at no cost.


Pair programming: A development strategy that involves two people coding together at a single computer, each giving frequent feedback and working together as equals, even if skill levels differ significantly. Some definitions include scenarios where one person writes code while the other watches and gives feedback.

Positive testing: A test strategy that checks to see if specific inputs yield expected results. 

Product owner: A team member who is the key stakeholder of a project, responsible for prioritizing user stories and backlogged issues. 


Quality assurance (QA) or software quality assurance (SQA): A process, often owned by a separate department, that examines an organization’s software engineering practices to ensure that products are meeting specified requirements. The department often includes all software testers.


Refactoring: The process of changing the structure of an application without changing the external behavior. 
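A toy Python example of the idea; the pricing function and tax rate are invented for illustration:

```python
# Before: the logic works but is harder to read and change.
def total_price(prices):
    total = 0
    for p in prices:
        total = total + p
    total = total * 1.08  # hypothetical 8% tax baked in as a magic number
    return total

# After refactoring: clearer structure, identical external behavior.
TAX_RATE = 1.08

def total_price_refactored(prices):
    return sum(prices) * TAX_RATE

# External behavior is unchanged, which the existing tests would confirm.
assert total_price([10, 20]) == total_price_refactored([10, 20])
```

A good safety net of automated tests is what makes refactoring safe: the tests prove the behavior stayed the same while the structure improved.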

Regression testing: Ensuring that software that was previously developed and passed tests still functions as intended after changes have been made to other parts of the application. 

Release candidate: Any version of a piece of software that could be released as a final product. 

Replication: Storing the same data on several different devices to improve fault tolerance and stability and create backups. 

Requirements: What a piece of software needs to do, as defined by the business and by user stories. 

Responsive user experience: A web design principle that lets web apps adapt their layout to the screen size of the device used to view them. 

Retrospectives: Meetings at the end of sprints or iterations, where the team discusses the iteration and how to improve processes going forward. 


Sanity testing: A simple, ad-hoc type of test that is often manual and used to check that certain software functionality works roughly as expected.

Scrum: An Agile methodology focused on several small teams independently working in short sprints or iterations. Scrum also involves daily planning meetings and regular retrospectives on how to improve sprints in the future. 

Scrum master: A team member who facilitates the Scrum process, managing communication within and between teams practicing Scrum and organizing regular planning meetings and retrospectives. 

Source control: A form of revision control (also called version control) that manages changes to a software project, allowing multiple programmers to work on the same source code by creating timestamped revisions that can be rolled back, compared, or merged into the mainline source code.

Sprint: A regular, repeatable cycle of time to work on particular pieces of software, usually one to two weeks. Also called iterations.

Static code analysis: A type of software analysis that measures code without running it. A variety of complexity, security, or business metrics can be gathered depending on the tool used.
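Python’s standard library can demonstrate the concept: the snippet below inspects source code without ever executing it, using the ast module to count function definitions (a simple stand-in for the metrics a real static analysis tool would gather).

```python
import ast

# Source code to analyze; note that it is never run.
source = """
def f():
    pass

def g():
    pass
"""

tree = ast.parse(source)
functions = [node for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
print(len(functions))  # → 2
```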


Technical debt: The future development work that will have to be done when a piece of code, usually a “quick fix,” is implemented without proper checks or testing, or without being communicated to the team. 

Test-driven development: A development strategy in which tests are written before the code they describe; the new tests fail at first, and code is added to the project only to make those tests pass.
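A minimal sketch of the cycle in Python; slugify() is a hypothetical function invented for illustration:

```python
# Step 1 ("red"): write a test for behavior that does not exist yet.
# Running test_slugify() at this point would fail with a NameError.
def test_slugify():
    assert slugify("Agile Terms") == "agile-terms"

# Step 2 ("green"): add just enough code to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify()  # now passes
```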


Unit testing: The practice of testing the functionality of the smallest usable parts of an application, such as a class in object-oriented programming. 
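A sketch using Python’s built-in unittest module; the Counter class is a hypothetical “smallest usable part” of an application:

```python
import unittest

# Hypothetical unit under test: a tiny class.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

class TestCounter(unittest.TestCase):
    def test_increment(self):
        counter = Counter()
        counter.increment()
        self.assertEqual(counter.value, 1)

# Run the test case directly (a test runner would normally do this).
unittest.main(argv=["counter_test"], exit=False)
```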

Usability testing: A testing method that gathers feedback from real-world users who try to execute a given set of tasks using the software product. Its purpose is not just to find bugs but also to ensure that the user experience is as streamlined as possible.

User acceptance testing (UAT): A testing method that verifies that the application satisfies the user stories outlined in the initial business requirements.

User stories: Short descriptions of how an end user uses a piece of software to accomplish a specific task. 


Waterfall: A development process that defines all possible requirements, design, architecture, and deadlines for a piece of software before development starts. 

Yagni: Abbreviation for “You Aren’t Gonna Need It,” the principle that you should not write code that isn’t needed to pass tests or meet current requirements. 

You’re now an expert in the most common Agile terms. For more information and popular content on Agile, be sure to visit our Agile Zone.