


What if we found ourselves building something that nobody wanted? In that case what did it matter if we did it on time and on budget? —Eric Ries

Portfolio epics are typically cross-cutting, spanning multiple value streams and Program Increments (PIs). SAFe recommends applying the Lean Startup build-measure-learn cycle for epics to accelerate the learning and development process, and to reduce risk.

This article primarily describes the definition, approval, and implementation of portfolio epics. Program and Large Solution epics, which follow a similar pattern, are described briefly at the end of this article.

There are two types of epics, each of which may occur at different levels of the Framework. Business epics directly deliver business value, while enabler epics are used to advance the Architectural Runway to support upcoming business or technical needs.

It’s important to note that epics are not merely a synonym for projects; they operate quite differently, as Figure 1 highlights. SAFe generally discourages using the project funding model (refer to the Lean Portfolio Management article). Instead, the funding to implement epics is allocated directly to the value streams within a portfolio. Moreover, Agile Release Trains (ARTs) develop and deliver epics following the Lean Startup cycle (Figure 6).


Defining Epics

Since epics are some of the most significant enterprise investments, stakeholders need to agree on their intent and definition. Figure 2 provides an epic hypothesis statement template that can be used to capture, organize, and communicate critical information about an epic.


Portfolio epics are made visible, developed, and managed through the Portfolio Kanban system, where they proceed through various states of maturity until they’re approved or rejected. Before being committed to implementation, epics require analysis. Epic Owners take responsibility for the critical collaborations required for this task, while Enterprise Architects typically shepherd the enabler epics that support the technical considerations for business epics.

Defining the Epic MVP

Analysis of an epic includes the definition of a Minimum Viable Product (MVP) for the epic. In the context of SAFe, an MVP is an early and minimal version of a new product or business Solution that is used to prove or disprove the epic hypothesis. As opposed to storyboards, prototypes, mockups, wireframes, and other exploratory techniques, the MVP is an actual product that can be used by real customers to generate validated learning.

Creating the Lean Business Case

The result of the epic analysis is a Lean business case (Figure 3).


The LPM reviews the Lean business case to make a go/no-go decision for the epic. Once approved, portfolio epics stay in the portfolio backlog until implementation capacity and budget become available from one or more ARTs. The Epic Owner is responsible for working with Product and Solution Management and System Architect/Engineering to split the epic into Features or Capabilities during backlog refinement. Epic Owners help prioritize these items in their respective backlogs and have some ongoing responsibilities for stewardship and follow-up.

Estimating Epic Costs

As Epics progress through the Portfolio Kanban, the LPM team will eventually need to understand the potential investment required to realize the hypothesized value. This requires a meaningful estimate of the cost of the MVP and the forecasted cost of the full implementation should the epic hypothesis be proven true.

  • The MVP cost ensures the portfolio is budgeting enough money to prove or disprove the epic hypothesis and helps ensure that LPM is making investments in innovation in accordance with Lean budget guardrails.
  • The forecasted implementation cost factors into ROI analysis, helps determine whether the business case is sound, and helps the LPM team prepare for potential adjustments to value stream budgets.

The MVP cost estimate is created by the Epic Owner in collaboration with other key stakeholders. It should include an amount sufficient to prove or disprove the MVP hypothesis. Once approved, the MVP cost is considered a hard limit, and the value stream will not spend more than this amount building and evaluating the MVP. If the value stream has evidence that this cost will be exceeded during epic implementation, further work on the epic should be stopped.

Estimating Implementation Cost

The MVP and/or the full implementation cost comprises the costs associated with the internal value streams plus any costs associated with external suppliers. It is initially estimated using T-shirt sizing (Figure 4) and refined over time as the MVP is implemented.

Estimating epics in the early stages can be difficult, since there is limited data and learning at that point. T-shirt sizing is a cost estimation technique that LPM, Epic Owners, architects and engineers, and other stakeholders can use to collaborate on the placement of epics into groups (or cost bands) of a similar size. A cost range is established for each T-shirt size using historical data; each portfolio determines the relevant cost range for each size. The gaps in the cost ranges reflect the uncertainty of estimates and avoid too much discussion around edge cases. The full implementation cost can be refined over time as the MVP is built and learning occurs.

Figure 4. Estimating Epics using T-shirt sizes
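As a rough illustration, T-shirt cost bands and the deliberate gaps between them can be sketched as a simple lookup. The band boundaries below are invented for illustration; a real portfolio derives its own ranges from historical data.

```python
# Hypothetical cost bands (in dollars) for each T-shirt size; each portfolio
# derives these ranges from its own historical data. The deliberate gaps
# between bands reflect estimate uncertainty and discourage edge-case debates.
COST_BANDS = {
    "S": (25_000, 75_000),
    "M": (100_000, 250_000),
    "L": (300_000, 600_000),
    "XL": (750_000, 1_500_000),
}

def size_for_estimate(cost):
    """Return the T-shirt size whose band contains the estimate, or None
    if the estimate falls in a gap and needs further discussion."""
    for size, (low, high) in COST_BANDS.items():
        if low <= cost <= high:
            return size
    return None
```

An estimate that lands in a gap (for example, 80,000 here) signals that stakeholders should discuss and re-estimate rather than argue over a band boundary.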

Supplier Costs

An epic investment often includes contributions and costs from suppliers, whether internal or external. Ideally, enterprises engage external suppliers via Agile contracts, which support estimating the cost of a supplier’s contribution to a specific epic. For more on this topic, see the Agile Contracts advanced topic article.

Forecasting an Epic’s Duration

While it can be challenging to forecast the duration of an epic implemented by a mix of internal ARTs and external suppliers, an understanding of the forecasted duration is critical to the proper functioning of the portfolio. As with cost, an epic’s duration can be forecast from the internal duration, the supplier duration, and the necessary collaborations and interactions between the internal and external teams. Practically, unless the epic is completely outsourced, LPM can focus on forecasts for the internal ARTs affected by the epic, as internal ARTs are expected to coordinate work with external suppliers.

Forecasting an epic’s duration requires an understanding of three data points:

  • An epic’s estimated size in story points for each affected ART, which can be estimated using the T-shirt estimation technique for costs by replacing the cost range with a range of points
  • The historical velocity of the affected ARTs
  • The percent (%) capacity allocation that can be dedicated to working on the epic as negotiated between Product and Solution Management, epic owners, and LPM

In the example shown in Figure 5, a portfolio has a substantial enabler epic that affects three ARTs, and LPM seeks an estimate of the forecasted number of PIs. ART 1 has estimated the epic’s size at 2,000 – 2,500 points. Product Management determines that ART 1 can allocate 40% of its total capacity toward implementing its part of the epic. With a historical velocity of 1,000 story points per PI, ART 1 forecasts between five and seven PIs for the epic.
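The arithmetic behind this forecast can be sketched as follows, using the ART 1 numbers from Figure 5 and rounding partial PIs up:

```python
import math

def forecast_pis(size_low, size_high, velocity_per_pi, capacity_share):
    """Forecast the range of PIs an ART needs for its part of an epic.

    size_low, size_high -- estimated epic size range in story points
    velocity_per_pi     -- the ART's historical velocity (points per PI)
    capacity_share      -- fraction of capacity allocated to the epic
    """
    effective = velocity_per_pi * capacity_share  # points per PI on the epic
    return math.ceil(size_low / effective), math.ceil(size_high / effective)

# ART 1 from the example: 2,000-2,500 points, 40% of a 1,000-point velocity
low, high = forecast_pis(2000, 2500, 1000, 0.40)  # -> (5, 7)
```

At 400 effective points per PI, the low estimate needs exactly 5 PIs and the high estimate 6.25, which rounds up to 7, matching the five-to-seven-PI forecast above.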


After repeating these calculations for each ART, the epic owner can see that while some ARTs will likely be ready to release on demand earlier than others, the forecasted duration to deliver the entire epic across all of the ARTs will likely be between six and eight PIs. If this forecast does not align with business requirements, further negotiations will ensue, such as adjusting capacity allocations or allocating more budget to work delivered by suppliers. Once the epic is initiated, the epic owner will continually update the forecasted completion.

Implementing Epics

The Lean Startup strategy recommends a highly iterative build-measure-learn cycle for product innovation and strategic investments. This strategy for implementing epics provides the economic and strategic advantages of a Lean startup by managing investment and risk incrementally while leveraging the flow and visibility benefits of SAFe (Figure 6). Gathering the data necessary to prove or disprove the epic hypothesis is a highly iterative process that continues until a data-driven result is obtained or the team consumes the MVP budget. In general, the result of a proven hypothesis is an MVP suitable for continued investment by the value stream. Continued investment in an epic with a disproven hypothesis requires the creation of a new epic and approval from the LPM function.

Figure 6. SAFe Lean Startup Cycle

After it’s approved for implementation, the Epic Owner works with the Agile Teams to begin the development activities needed to realize the epic’s business outcomes hypothesis:

  • If the hypothesis is proven true, the epic enters the persevere state, which will drive more work by implementing additional features and capabilities. ARTs manage any further investment in the epic via ongoing WSJF feature prioritization of the Program Backlog. Local features identified by the ART, and those from the epic, compete during routine WSJF reprioritization.
  • However, if the hypothesis is proven false, Epic Owners can decide to pivot by creating a new epic for LPM review, or to drop the initiative altogether and switch to other work in the backlog.

After its hypothesis has been evaluated, an epic may or may not remain a portfolio concern. Either way, the Epic Owner may have some ongoing responsibilities for stewardship and follow-up.

The empowerment and decentralized decision-making of Lean budgets depend on Guardrails for specific checks and balances. Value stream KPIs and other metrics also support guardrails to keep the LPM informed of the epic’s progress toward meeting its business outcomes hypothesis.

Program and Solution Epics

Epics may also originate from local ARTs or Solution Trains, often starting as initiatives that warrant LPM attention because of their significant business impact or because they exceed the epic threshold. These epics warrant a Lean business case and review and approval through the Portfolio Kanban system. The Program and Solution Kanban article describes methods for managing the flow of these epics.

Last update: 20 October 2022


The SAFe® Epic – an example

We often get questions about what a “good,” sufficiently developed SAFe epic looks like. In this example, which we use with clients during the Lean Portfolio Management learning journey, we dive into the real-world details behind an epic hypothesis statement.

For now, we have not provided a fully developed SAFe Lean Business Case as an example because business cases are typically highly contextual to the business or mission. That being said, please remember that the core of the business case is established and driven by the Epic Hypothesis Statement and carried over into the documented business case.


Agile Lifecycle Management Solution Enabler Epic

Epic Owner: John Q. Smith

Epic Description

The big awesome product company requires a common platform for collaboration on work of all types across the portfolio for all teams, managers, architects, directors, and executives including customer collaboration and feedback loops. This solution will solve the problems in our system where we have poor quality or non-existent measurement and multiple disparate systems to manage and report on work.

FOR the portfolio teams

WHO need to manage their work, flow, and feedback,

THE single-source system of record for all work

IS a web-based software tool suite that provides customizable workflows that support the enterprise strategic themes related to creating better business and IT outcomes, using guidance from the Scaled Agile Framework for Lean Enterprises (SAFe), Technology Business Management (TBM), and Value Stream Management (VSM) via the Flow Framework,

THAT will provide a common place for all customers and portfolio stakeholders to have a transparent view into all of the work occurring in the system/portfolio, provide a mechanism to manage capacity at scale, and enable easier concurrent roadmapping.

UNLIKE the current array of disparate, ad hoc tools and platforms,

OUR SOLUTION will organize all work in a holistic, transparent, visible manner using a common enterprise backlog model combined with an additive enterprise agile scaling framework as guidance, including DevOps, Lean + Systems Thinking, and Agile.

Business Outcomes:

  • Validate that the solution provides easy access to data and/or analytics, and charts for the six flow metrics: distribution, time, velocity, load, efficiency, and predictability for product/solution features (our work). (binary)
  • The solution also provides flow metrics for Lean-Agile Teams stories and backlog items. (binary)
  • 90% of teams are using the solution to manage 100% of their work and effort within the first year post implementation
  • All features and their lead, cycle, and process times (for the continuous delivery pipeline) are transparent. Feature lead and cycle times for all value streams using the system are visible. (binary)
  • Lean flow measurements — Lead and cycle times, six SAFe flow metrics, and DevOps metrics enabled in the continuous delivery pipeline integrated across the entire solution platform (binary)
  • Activity ratios for workflow, processes, and steps are automatically calculated (binary)
  • Percent complete and accurate (%C&A) measures for value streams automatically calculated or easily accessible data (binary)
  • Number of documented improvements implemented in the system by teams using data/information sourced from the ALM solution > 25 in the first six months post implementation
  • Number of documented improvements implemented in the system by teams using data/information sourced from the ALM solution > 100 in the first year post implementation
  • Flow time metrics improve from baseline by 10% in the first year post implementation (lead time for features)
  • Portfolio, Solution/Capability, and Program Roadmaps can be generated by Lean Portfolio Management (LPM), Solution Management, and Product Management at will from real-time data in the ALM (binary)
  • Roadmaps will be available online for general stakeholder consumption (transparency)
  • Increase customer NPS for forecasting and communication of solution progress and transparency of execution by 20% in the first year post implementation (survey + baseline)
  • Build a taxonomy for all work including a service catalog (binary)
  • Run the system and the system produces the data to produce the capacity metrics for all value streams to enable the LPM guardrail (binary)
  • Stops work from being obfuscated and hidden in the noise of a one-size-fits-all backlog model (everything is a CRQ/ticket) and allows for more accurate and representative prioritization, including the application of an economic decision-making framework using a taxonomy for work (binary)
  • Enables full truth in reporting and transparency of actual flow to everyone, real-time – including customers (100% of work is recorded in the system of record)
  • Enables live telemetry of progress towards objectives sourced through all backlogs, roadmaps, and flow data and information (dependent)
  • 90% of teams are using the solution to manage 100% of their capacity within the first year post implementation

Leading Indicators:

  • Total value stream team member utilization > 95% daily vs. weekly vs. per PI
  • Low daily utilization < 75% indicates there is a problem with the solution, training, or something else to explore
  • % of teams using the ALM solution to manage 100% of their work and effort
  • Number of changes in the [old solutions] data from the implementation start date
  • Usage metrics for the [old solutions]
  • We can see kanban systems and working agreements for workflow state entry and exit criteria in use in the system records
  • Teams have a velocity metric that they use solely for planning an iteration’s available capacity and not for measuring team performance (velocity is only useful for planning efficiency)
  • Teams use velocity and flow metrics to make improvements to their system and flow (# of improvements acted from solution usage)
  • Teams are able to measure the flow of items per cycle (sprint/iteration) and per effort/value (story points; additive)
  • Program(s)[ARTs] are able to measure the flow of features per cycle (PI) and per effort/value (story points; additive from child elements)
  • Portfolio(s) are able to measure the flow of epics per cycle (PI) and per effort/value (story points; additive from child elements)
  • % of total work activity and effort in the portfolio visible in the solution
  • Show the six flow metrics for:
      • Features (Program): current PI and two PIs into the future
      • Epics and Capabilities: current PI up to two+ years into the future
  • Are the things we said we were going to work on and the things we actually worked on, in relation to objectives and priorities (not just raw outputs of flow), the same?
  • The portfolio has a reasonable and rationalized, quality understanding of how much capacity exists across current and future cycles (PI) in alignment with the roadmap
  • Identification and reporting of capacity across the portfolio is accurate and predictable
  • Identification of the operational/maintenance-to-enhancement work ratio, work activity ratios, and percent complete and accurate (%C&A) data readily available in the system, including operations and maintenance (O&M) work and enhancements, and highlighting categories/types of work
  • Work activity ratios are in alignment with process strategy and forecasts, process intent, and incentivizing business outcomes; this allows leadership to address systemic issues, and data is not just reported but means something and is acted upon through decision-making and/or improvements
  • # of epics created over time
  • # of epics accepted over time
  • # of MVPs tested and successful
  • Parameters configured in the tool to highlight and constrain anti-patterns
  • Stimulates feedback loop to assist in making decisions on whether to refine/improve/refactor and in that case, what to refine/improve/refactor
  • Strategic themes, objectives, key results, and the work in the portfolio – Epics, Capabilities, Features, Stories traceability conveyed from Enterprise to ART/Team level

Non-Functional Requirements

  • On-Premise components of the ALM solution shall support 2-Factor Authentication
  • SaaS components of the ALM solution shall support 2-Factor Authentication and SAML 2.0
  • The system must be 508 compliant
  • The system must be scalable to support up to 2000 users simultaneously with no performance degradation or reliability issues
  • Must be the single-source system for all work performed in the portfolio and value streams.
  • ALM is the single-source system of record for viewing and reporting roadmap status/progress toward objectives

Building your Experiment – The Minimum Viable Product (MVP)

Once you have constructed a quality hypothesis statement, the product management team should begin work on building the Lean business case and MVP concept. How are you going to test the hypothesis? How can we test the hypothesis adequately while also being economical with respect to time, cost, and quality? What are the key features that will demonstrably support the hypothesis?


Daniel Croft

Daniel Croft is an experienced continuous improvement manager with a Lean Six Sigma Black Belt and a Bachelor's degree in Business Management. With more than ten years of experience applying his skills across various industries, Daniel specializes in optimizing processes and improving efficiency. His approach combines practical experience with a deep understanding of business fundamentals to drive meaningful change.

  • Last Updated: September 8, 2023
  • Learn Lean Sigma

In the world of data-driven decision-making, Hypothesis Testing stands as a cornerstone methodology. It serves as the statistical backbone for a multitude of sectors, from manufacturing and logistics to healthcare and finance. But what exactly is Hypothesis Testing, and why is it so indispensable? Simply put, it’s a technique that allows you to validate or invalidate claims about a population based on sample data. Whether you’re looking to streamline a manufacturing process, optimize logistics, or improve customer satisfaction, Hypothesis Testing offers a structured approach to reach conclusive, data-supported decisions.

The graphical example above provides a simplified snapshot of a hypothesis test. The bell curve represents a normal distribution, the green area is where you’d fail to reject the null hypothesis (H₀), and the red area is the “rejection zone,” where you’d favor the alternative hypothesis (Hₐ). The vertical blue line represents the threshold or “critical value,” beyond which you’d reject H₀.

In this graph:

  • The curve represents a standard normal distribution, often encountered in hypothesis tests.
  • The green-shaded area signifies the “acceptance region,” where you would fail to reject the null hypothesis (H₀).
  • The red-shaded areas are the “rejection regions,” where you would reject H₀ in favor of the alternative hypothesis (Hₐ).
  • The blue dashed lines indicate the “critical values” (±1.96), the thresholds for rejecting H₀.

This graphical representation serves as a conceptual foundation for understanding the mechanics of hypothesis testing. It visually illustrates what it means to accept or reject a hypothesis based on a predefined level of significance.
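The ±1.96 thresholds are not arbitrary: they are the standard normal quantiles that leave 2.5% of probability in each tail. As a small sketch, Python’s standard library can reproduce them:

```python
from statistics import NormalDist

alpha = 0.05
# Two-tailed test: split alpha evenly between the tails, so the upper
# critical value is the (1 - alpha/2) quantile of the standard normal.
critical = NormalDist().inv_cdf(1 - alpha / 2)
print(round(critical, 2))  # 1.96
```

Any other significance level works the same way; for example, a stricter α of 0.01 pushes the critical values out to roughly ±2.58.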


What is Hypothesis Testing?

Hypothesis testing is a structured procedure in statistics used for drawing conclusions about a larger population based on a subset of that population, known as a sample. The method is widely used across different industries and sectors for a variety of purposes. Below, we’ll dissect the key components of hypothesis testing to provide a more in-depth understanding.

The Hypotheses: H₀ and Hₐ

In every hypothesis test, there are two competing statements:

  • Null Hypothesis (H₀): This is the “status quo” hypothesis that you are trying to test against. It is a statement that asserts that there is no effect or difference. For example, in a manufacturing setting, the null hypothesis might state that a new production process does not improve the average output quality.
  • Alternative Hypothesis (Hₐ or H₁): This is what you aim to prove by conducting the hypothesis test. It is the statement that there is an effect or difference. Using the same manufacturing example, the alternative hypothesis might state that the new process does improve the average output quality.

Significance Level (α)

Before conducting the test, you decide on a significance level (α), typically set at 0.05 or 5%. This level represents the probability of rejecting the null hypothesis when it is actually true. Lower α values make the test more stringent, reducing the chances of a ‘false positive’.

Data Collection

You then proceed to gather data, which is usually a sample from a larger population. The quality of your test heavily relies on how well this sample represents the population. The data can be collected through various means such as surveys, observations, or experiments.

Statistical Test

Depending on the nature of the data and what you’re trying to prove, different statistical tests can be applied (e.g., t-test, chi-square test, ANOVA, etc.). These tests compute a test statistic (e.g., t, χ², F, etc.) based on your sample data.

Here are graphical examples of the distributions commonly used in three different types of statistical tests: t-test, Chi-square test, and ANOVA (Analysis of Variance), displayed side by side for comparison.

  • Graph 1 (Leftmost): This graph represents a t-distribution, often used in t-tests. The t-distribution is similar to the normal distribution but tends to have heavier tails. It is commonly used when the sample size is small or the population variance is unknown.

Chi-square Test

  • Graph 2 (Middle): The Chi-square distribution is used in Chi-square tests, often for testing independence or goodness-of-fit. Unlike the t-distribution, the Chi-square distribution is not symmetrical and only takes on positive values.

ANOVA (F-distribution)

  • Graph 3 (Rightmost): The F-distribution is used in Analysis of Variance (ANOVA), a statistical test used to analyze the differences between group means. Like the Chi-square distribution, the F-distribution is also not symmetrical and takes only positive values.

These visual representations provide an intuitive understanding of the different statistical tests and their underlying distributions. Knowing which test to use and when is crucial for conducting accurate and meaningful hypothesis tests.

Decision Making

From the test statistic, a p-value is computed using the sampling distribution implied by the null hypothesis. If the p-value is less than α, you reject the null hypothesis in favor of the alternative hypothesis; otherwise, you fail to reject the null hypothesis. Equivalently, you reject H₀ when the test statistic falls beyond the critical value determined by the significance level (α) and the sample size.
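As a minimal sketch of this decision rule, the following one-sample z-test (which assumes the population standard deviation is known, and uses invented sample numbers) computes a test statistic, derives its p-value, and compares it to α:

```python
from statistics import NormalDist

def one_sample_z_test(sample_mean, mu0, sigma, n, alpha=0.05):
    """Two-tailed one-sample z-test; returns (z, p_value, reject_h0).

    Assumes the population standard deviation sigma is known.
    """
    z = (sample_mean - mu0) / (sigma / n ** 0.5)   # standardized test statistic
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed p-value
    return z, p_value, p_value < alpha

# Invented data: 50 observations averaging 10.5 against a claimed mean of
# 10.0, with a known population standard deviation of 1.5
z, p, reject = one_sample_z_test(10.5, 10.0, 1.5, 50)
```

With these numbers, z is about 2.36 and the p-value is about 0.018, which is below 0.05, so the null hypothesis is rejected.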


Interpreting the Results

Finally, you interpret the results in the context of what you were investigating. Rejecting the null hypothesis might mean implementing a new process or strategy, while failing to reject it might lead to a continuation of current practices.

To sum it up, hypothesis testing is not just a set of formulas but a methodical approach to problem-solving and decision-making based on data. It’s a crucial tool for anyone interested in deriving meaningful insights from data to make informed decisions.

Why is Hypothesis Testing Important?

Hypothesis testing is a cornerstone of statistical and empirical research, serving multiple functions in various fields. Let’s delve into each of the key areas where hypothesis testing holds significant importance:

Data-Driven Decisions

In today’s complex business environment, making decisions based on gut feeling or intuition is not enough; you need data to back up your choices. Hypothesis testing serves as a rigorous methodology for making decisions based on data. By setting up a null hypothesis and an alternative hypothesis, you can use statistical methods to determine which is more likely to be true given a data sample. This structured approach eliminates guesswork and adds empirical weight to your decisions, thereby increasing their credibility and effectiveness.

Risk Management

Hypothesis testing allows you to attach a ‘p-value’ to your findings, which is the probability of observing data at least as extreme as your sample if the null hypothesis is true. This p-value can be used directly to quantify risk. For instance, a significance level of 0.05 means accepting a 5% risk of rejecting the null hypothesis when it is actually true. This is invaluable in scenarios like product launches or changes in operational processes, where understanding the risk involved can be as crucial as the decision itself.

Here’s an example to help you understand the concept better.

The graph above serves as a graphical representation to help explain the concept of a ‘p-value’ and its role in quantifying risk in hypothesis testing. Here’s how to interpret the graph:

Elements of the Graph

  • The curve represents a Standard Normal Distribution, which is often used to represent z-scores in hypothesis testing.
  • The red-shaded area on the right represents the Rejection Region. It corresponds to a 5% risk (α = 0.05) of rejecting the null hypothesis when it is actually true. If your test statistic falls in this area, you would reject the null hypothesis.
  • The green-shaded area represents the Acceptance Region, with a 95% level of confidence. If your test statistic falls in this region, you would fail to reject the null hypothesis.
  • The blue dashed line is the Critical Value (approximately 1.645 in this example). If your standardized test statistic (z-value) exceeds this point, you enter the rejection region, your p-value becomes less than 0.05, and you reject the null hypothesis.

Relating to Risk Management

The p-value can be directly related to risk management. For example, if you’re considering implementing a new manufacturing process, the p-value quantifies the risk of that decision. A low p-value (less than α) means the risk of rejecting the null hypothesis when it is actually true (i.e., going ahead with the new process when it offers no real improvement) is low, thus indicating a lower risk in implementing the change.

Quality Control

In sectors like manufacturing, automotive, and logistics, maintaining a high level of quality is not just an option but a necessity. Hypothesis testing is often employed in quality assurance and control processes to test whether a certain process or product conforms to standards. For example, if a car manufacturing line claims its error rate is below 5%, hypothesis testing can confirm or disprove this claim based on a sample of products. This ensures that quality is not compromised and that stakeholders can trust the end product.
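To make the error-rate example concrete, a one-proportion z-test can check the manufacturer’s claim against a sample. The sample numbers below are invented for illustration:

```python
from statistics import NormalDist

def defect_rate_below(defects, n, claimed_rate, alpha=0.05):
    """Lower-tailed one-proportion z-test.

    H0: the true defect rate equals claimed_rate
    Ha: the true defect rate is below claimed_rate
    Returns (z, p_value, claim_supported).
    """
    p_hat = defects / n
    # Standard error of the sample proportion under the null hypothesis
    se = (claimed_rate * (1 - claimed_rate) / n) ** 0.5
    z = (p_hat - claimed_rate) / se
    p_value = NormalDist().cdf(z)  # lower-tail probability
    return z, p_value, p_value < alpha

# Invented sample: 15 defective units out of 500 against a claimed 5% rate
z, p, supported = defect_rate_below(15, 500, 0.05)
```

With these numbers the observed 3% defect rate yields a p-value below 0.05, so the sample supports the claim that the true rate is under 5%.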

Resource Optimization

Resource allocation is a significant challenge for any organization. Hypothesis testing can be a valuable tool in determining where resources will be most effectively utilized. For instance, in a manufacturing setting, you might want to test whether a new piece of machinery significantly increases production speed. A hypothesis test could provide the statistical evidence needed to decide whether investing in more of such machinery would be a wise use of resources.

Research and Development

In the realm of research and development, hypothesis testing can be a game-changer. When developing a new product or process, you’ll likely have various theories or hypotheses. Hypothesis testing allows you to systematically test these, filtering out the less likely options and focusing on the most promising ones. This not only speeds up the innovation process but also makes it more cost-effective by reducing the likelihood of investing in ideas that are statistically unlikely to be successful.

In summary, hypothesis testing is a versatile tool that adds rigor, reduces risk, and enhances the decision-making and innovation processes across various sectors and functions.


Step-by-Step Guide to Hypothesis Testing

To make this guide practical, especially if you are new to the concept, each step of the process is explained and then illustrated with a worked example: a manufacturing line where you want to test whether a new process reduces the average time it takes to assemble a product.

Step 1: State the Hypotheses

The first and foremost step in hypothesis testing is to clearly define your hypotheses. This sets the stage for your entire test and guides the subsequent steps, from data collection to decision-making. At this stage, you formulate two competing hypotheses:

Null Hypothesis ( H 0)

The null hypothesis is a statement that there is no effect or no difference, and it serves as the hypothesis that you are trying to test against. It’s the default assumption that any kind of effect or difference you suspect is not real, and is due to chance. Formulating a clear null hypothesis is crucial, as your statistical tests will be aimed at challenging this hypothesis.

In a manufacturing context, if you’re testing whether a new assembly line process has reduced the time it takes to produce an item, your null hypothesis (H0) could be:

H0: “The new process does not reduce the average assembly time.”

Alternative Hypothesis ( Ha or H 1)

The alternative hypothesis is what you want to prove. It is a statement that there is an effect or difference. This hypothesis is considered only after you find enough evidence against the null hypothesis.

Continuing with the manufacturing example, the alternative hypothesis (Ha) could be:

Ha: “The new process reduces the average assembly time.”

Types of Alternative Hypothesis

Depending on what exactly you are trying to prove, the alternative hypothesis can be:

  • Two-Sided : You’re interested in deviations in either direction (greater or smaller).
  • One-Sided : You’re interested in deviations only in one direction (either greater or smaller).

Scenario: Reducing Assembly Time in a Car Manufacturing Plant

You are a continuous improvement manager at a car manufacturing plant. One of the assembly lines has been struggling with longer assembly times, affecting the overall production schedule. A new assembly process has been proposed, promising to reduce the assembly time per car. Before rolling it out on the entire line, you decide to conduct a hypothesis test to see if the new process actually makes a difference.

Null Hypothesis (H0)

In this context, the null hypothesis is the status quo, asserting that the new assembly process does not reduce the assembly time per car. Mathematically, you could state it as:

H0: The average assembly time per car with the new process ≥ the average assembly time per car with the old process.

Or simply: “The new process does not reduce the average assembly time per car.”

Alternative Hypothesis (Ha or H1)

The alternative hypothesis is what you aim to prove: that the new process is more efficient. Mathematically, it could be stated as:

Ha: The average assembly time per car with the new process < the average assembly time per car with the old process.

Or simply: “The new process reduces the average assembly time per car.”

Type of Alternative Hypothesis

In this example, you’re only interested in knowing whether the new process reduces the time, making it a One-Sided Alternative Hypothesis.

Step 2: Determine the Significance Level ( α )

Once you’ve clearly stated your null and alternative hypotheses, the next step is to decide on the significance level, often denoted by α . The significance level is a threshold below which the null hypothesis will be rejected. It quantifies the level of risk you’re willing to accept when making a decision based on the hypothesis test.

What is a Significance Level?

The significance level, usually expressed as a percentage, represents the probability of rejecting the null hypothesis when it is actually true. Common choices for α are 0.05, 0.01, and 0.10, representing 5%, 1%, and 10% levels of significance, respectively.

  • 5% Significance Level ( α =0.05) : This is the most commonly used level and implies that you are willing to accept a 5% chance of rejecting the null hypothesis when it is true.
  • 1% Significance Level ( α =0.01) : This is a more stringent level, used when you want to be more sure of your decision. The risk of falsely rejecting the null hypothesis is reduced to 1%.
  • 10% Significance Level ( α =0.10) : This is a more lenient level, used when you are willing to take a higher risk. Here, the chance of falsely rejecting the null hypothesis is 10%.

Continuing with the manufacturing example, let’s say you decide to set α =0.05, meaning you’re willing to take a 5% risk of concluding that the new process is effective when it might not be.

How to Choose the Right Significance Level?

Choosing the right significance level depends on the context and the consequences of making a wrong decision. Here are some factors to consider:

  • Criticality of Decision : For highly critical decisions with severe consequences if wrong, a lower α like 0.01 may be appropriate.
  • Resource Constraints : If the cost of collecting more data is high, you may choose a higher α to make a decision based on a smaller sample size.
  • Industry Standards : Sometimes, the choice of α may be dictated by industry norms or regulatory guidelines.

By the end of Step 2, you should have a well-defined significance level that will guide the rest of your hypothesis testing process. This level serves as the cut-off for determining whether the observed effect or difference in your sample is statistically significant or not.

Continuing the Scenario: Reducing Assembly Time in a Car Manufacturing Plant

After formulating the hypotheses, the next step is to set the significance level (α) that will be used to interpret the results of the hypothesis test. This is a critical decision, as it quantifies the level of risk you’re willing to accept when drawing a conclusion from the test.

Setting the Significance Level

Given that assembly time is a critical factor affecting the production schedule, and ultimately the company’s bottom line, you decide to be fairly stringent in your test and opt for a commonly used significance level:

α = 0.05

This means you are willing to accept a 5% chance of rejecting the null hypothesis when it is actually true. In practical terms, if the p-value of the test is less than 0.05, you will conclude that the new process significantly reduces assembly time and consider implementing it across the entire line.

Why α = 0.05?

  • Industry Standard: A 5% significance level is widely accepted in many industries, including manufacturing, for hypothesis testing.
  • Risk Management: By setting α = 0.05, you’re limiting the risk of concluding that the new process is effective when it may not be to just 5%.
  • Balanced Approach: This level offers a balance between being too lenient (e.g., α = 0.10) and too stringent (e.g., α = 0.01), making it a reasonable choice for this scenario.

Step 3: Collect and Prepare the Data

After stating your hypotheses and setting the significance level, the next vital step is data collection. The data you collect serves as the basis for your hypothesis test, so it’s essential to gather accurate and relevant data.

Types of Data

Depending on your hypothesis, you’ll need to collect either:

  • Quantitative Data : Numerical data that can be measured. Examples include height, weight, and temperature.
  • Qualitative Data : Categorical data that represent characteristics. Examples include colors, gender, and material types.

Data Collection Methods

Various methods can be used to collect data, such as:

  • Surveys and Questionnaires : Useful for collecting qualitative data and opinions.
  • Observation : Collecting data through direct or participant observation.
  • Experiments : Especially useful in scientific research where control over variables is possible.
  • Existing Data : Utilizing databases, records, or any other data previously collected.

Sample Size

The sample size ( n ) is another crucial factor. A larger sample size generally gives more accurate results, but it’s often constrained by resources like time and money. The choice of sample size might also depend on the statistical test you plan to use.

Continuing with the manufacturing example, suppose you decide to collect data on the assembly time of 30 randomly chosen products, 15 made using the old process and 15 made using the new process. Here, your sample size n =30.

Data Preparation

Once data is collected, it often needs to be cleaned and prepared for analysis. This could involve:

  • Removing Outliers : Outliers can skew the results and provide an inaccurate picture.
  • Data Transformation : Converting data into a format suitable for statistical analysis.
  • Data Coding : Categorizing or labeling data, necessary for qualitative data.
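Outlier screening, mentioned above, can be sketched with the common 1.5×IQR rule (one widely used convention, not the only choice). The assembly-time numbers here are made up for illustration, with 95.0 standing in for a recording error:

```python
import numpy as np

# Hypothetical assembly times in minutes; 95.0 simulates a recording error
# or machine breakdown that should be investigated before analysis.
times = np.array([38.5, 35.8, 36.9, 39.4, 38.7, 33.0, 36.9, 34.7, 95.0])

# Interquartile range: the spread of the middle 50% of the data.
q1, q3 = np.percentile(times, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Values outside the fences are flagged for investigation, not blindly deleted.
outliers = times[(times < lower) | (times > upper)]
cleaned = times[(times >= lower) & (times <= upper)]

print("Flagged outliers:", outliers)
print("Cleaned sample size:", cleaned.size)
```

Note that a flagged point should be checked against its cause (machine breakdown, data-entry slip) before removal; dropping genuine extreme observations biases the test.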

By the end of Step 3, you should have a dataset that is ready for statistical analysis. This dataset should be representative of the population you’re interested in and prepared in a way that makes it suitable for hypothesis testing.

With the hypotheses stated and the significance level set, you’re now ready to collect the data that will serve as the foundation for your hypothesis test. Given that you’re testing a change in a manufacturing process, the data will be quantitative, representing the assembly time of cars produced on the line.

Data Collection Plan

You decide to use a Random Sampling Method. For two weeks, assembly times for randomly selected cars are recorded: one week using the old process and one week using the new process. Your aim is to collect data for 40 cars from each process, giving you a sample size (n) of 80 cars in total.

Types of Data

  • Quantitative Data: In this case, you’re collecting numerical data representing the assembly time in minutes for each car.

Data Preparation

  • Data Cleaning: Once the data is collected, inspect it for anomalies or outliers that could skew your results. For example, if a significant machine breakdown happened during one of the weeks, you may need to adjust your data or collect more.
  • Data Transformation: Given that you’re dealing with time, you may not need to transform your data, but it’s worth considering, depending on the statistical test you plan to use.
  • Data Coding: Since the data is quantitative, coding is likely unnecessary unless you plan to categorize assembly times into bins (e.g., ‘fast’, ‘medium’, ‘slow’).

Example Data Points:

Car_ID  Process_Type  Assembly_Time_Minutes
1       Old           38.53
2       Old           35.80
3       Old           36.96
4       Old           39.48
5       Old           38.74
6       Old           33.05
7       Old           36.90
8       Old           34.70
9       Old           34.79
…       …             …

The complete dataset would contain 80 rows: 40 for the old process and 40 for the new process.

Step 4: Conduct the Statistical Test

After you have your hypotheses, significance level, and collected data, the next step is to actually perform the statistical test. This step involves calculations that will lead to a test statistic, which you’ll then use to make your decision regarding the null hypothesis.

Choose the Right Test

The first task is to decide which statistical test to use. The choice depends on several factors:

  • Type of Data : Quantitative or Qualitative
  • Sample Size : Large or Small
  • Number of Groups or Categories : One-sample, Two-sample, or Multiple groups

For instance, you might choose a t-test for comparing means of two groups when you have a small sample size. Chi-square tests are often used for categorical data, and ANOVA is used for comparing means across more than two groups.

Calculation of Test Statistic

Once you’ve chosen the appropriate statistical test, the next step is to calculate the test statistic. This involves using the sample data in a specific formula for the chosen test.

Obtain the p-value

After calculating the test statistic, the next step is to find the p-value associated with it. The p-value represents the probability of observing the given test statistic if the null hypothesis is true.

  • A small p-value (< α ) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
  • A large p-value (> α ) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.

Make the Decision

You now compare the p-value to the predetermined significance level ( α ):

  • If p < α , you reject the null hypothesis in favor of the alternative hypothesis.
  • If p > α , you fail to reject the null hypothesis.

In the manufacturing case, if your calculated p-value is 0.03 and your α is 0.05, you would reject the null hypothesis, concluding that the new process effectively reduces the average assembly time.

By the end of Step 4, you will have either rejected or failed to reject the null hypothesis, providing a statistical basis for your decision-making process.

Now that you have collected and prepared your data, the next step is to conduct the actual statistical test to evaluate the null and alternative hypotheses. In this case, you’ll be comparing the mean assembly times between cars produced using the old and new processes to determine whether the new process is statistically significantly faster.

Choosing the Right Test

Given that you have two sets of independent samples (old process and new process), a Two-Sample t-Test for Equality of Means is appropriate for comparing the average assembly times.

Preparing Data for Minitab

First, prepare your data in an Excel sheet or CSV file with one column for the assembly times using the old process and another column for the assembly times using the new process, then import this file into Minitab.

Steps to Perform the Two-Sample t-Test in Minitab

1. Open Minitab: Launch the Minitab software on your computer.
2. Import Data: Navigate to File > Open and import your data file.
3. Navigate to the t-test Menu: Go to Stat > Basic Statistics > 2-Sample t...
4. Select Columns: In the dialog box, specify the columns corresponding to the old and new process assembly times under “Sample 1” and “Sample 2.”
5. Options: Click Options and set the confidence level to 95% (which corresponds to α = 0.05).
6. Run the Test: Click OK to run the test.

In this example output, the p-value is 0.0012, which is less than the significance level α = 0.05, so you reject the null hypothesis. The t-statistic is -3.45, indicating that the mean assembly time of the new process is statistically significantly less than that of the old process, which aligns with your alternative hypothesis. Displaying the data as a box plot makes it easy to see that the new process is statistically significantly better.
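The same two-sample comparison can be run outside Minitab. A sketch using `scipy.stats.ttest_ind` (the `alternative` parameter requires SciPy ≥ 1.6) on simulated data; the means, spread, and random seed are illustrative, so the exact statistics will differ from the Minitab output quoted in the scenario:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Simulated assembly times in minutes: 40 cars per process.
old_process = rng.normal(loc=38.0, scale=2.0, size=40)
new_process = rng.normal(loc=35.0, scale=2.0, size=40)

# One-sided test: Ha says the new-process mean is LESS than the old one.
t_stat, p_value = ttest_ind(new_process, old_process, alternative='less')

print(f"t-statistic: {t_stat:.2f}")
print(f"p-value: {p_value:.4f}")

alpha = 0.05
if p_value < alpha:
    print("Reject H0: the new process reduces mean assembly time.")
else:
    print("Fail to reject H0.")
```

Passing the new-process sample first with `alternative='less'` encodes the one-sided hypothesis from Step 1; with a two-sided question you would omit the argument and use the default.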

Why Do a Hypothesis Test?

You might ask: after all this, why do a hypothesis test rather than just compare the averages? While looking at average times might give you a general idea of which process is faster, hypothesis testing provides several advantages that a simple comparison of averages doesn’t offer:

Statistical Significance

Account for Random Variability : Hypothesis testing considers not just the averages, but also the variability within each group. This allows you to make more robust conclusions that account for random chance.

Quantify the Evidence : With hypothesis testing, you obtain a p-value that quantifies the strength of the evidence against the null hypothesis. A simple comparison of averages doesn’t provide this level of detail.

Control Type I Error : Hypothesis testing allows you to control the probability of making a Type I error (i.e., rejecting a true null hypothesis). This is particularly useful in settings where the consequences of such an error could be costly or risky.

Quantify Risk : Hypothesis testing provides a structured way to make decisions based on a predefined level of risk (the significance level, α ).

Decision-making Confidence

Objective Decision Making : The formal structure of hypothesis testing provides an objective framework for decision-making. This is especially useful in a business setting where decisions often have to be justified to stakeholders.

Replicability : The statistical rigor ensures that the results are replicable. Another team could perform the same test and expect to get similar results, which is not necessarily the case when comparing only averages.

Additional Insights

Understanding of Variability : Hypothesis testing often involves looking at measures of spread and distribution, not just the mean. This can offer additional insights into the processes you’re comparing.

Basis for Further Analysis : Once you’ve performed a hypothesis test, you can often follow it up with other analyses (like confidence intervals for the difference in means, or effect size calculations) that offer more detailed information.

In summary, while comparing averages is quicker and simpler, hypothesis testing provides a more reliable, nuanced, and objective basis for making data-driven decisions.

Step 5: Interpret the Results and Make Conclusions

Having conducted the statistical test and obtained the p-value, you’re now at a stage where you can interpret these results in the context of the problem you’re investigating. This step is crucial for transforming the statistical findings into actionable insights.

Interpret the p-value

The p-value you obtained tells you the significance of your results:

  • Low p-value ( p < α ) : Indicates that the results are statistically significant, and it’s unlikely that the observed effects are due to random chance. In this case, you generally reject the null hypothesis.
  • High p-value ( p > α ) : Indicates that the results are not statistically significant, and the observed effects could well be due to random chance. Here, you generally fail to reject the null hypothesis.

Relate to Real-world Context

You should then relate these statistical conclusions to the real-world context of your problem. This is where your expertise in your specific field comes into play.

In our manufacturing example, if you’ve found a statistically significant reduction in assembly time with a p-value of 0.03 (which is less than the α level of 0.05), you can confidently conclude that the new manufacturing process is more efficient. You might then consider implementing this new process across the entire assembly line.

Make Recommendations

Based on your conclusions, you can make recommendations for action or further study. For example:

  • Implement Changes : If the test results are significant, consider making the changes on a larger scale.
  • Further Research : If the test results are not clear or not significant, you may recommend further studies or data collection.
  • Review Methodology : If you find that the results are not as expected, it might be useful to review the methodology and see if the test was conducted under the right conditions and with the right test parameters.

Document the Findings

Lastly, it’s essential to document all the steps taken, the methodology used, the data collected, and the conclusions drawn. This documentation is not only useful for any further studies but also for auditing purposes or for stakeholders who may need to understand the process and the findings.

By the end of Step 5, you’ll have turned the raw statistical findings into meaningful conclusions and actionable insights. This is the final step in the hypothesis testing process, making it a complete, robust method for informed decision-making.

You’ve successfully conducted the hypothesis test and found strong evidence to reject the null hypothesis in favor of the alternative: the new assembly process is statistically significantly faster than the old one. It’s now time to interpret these results in the context of your business operations and make actionable recommendations.

Interpretation of Results

  • Statistical Significance: The p-value of 0.0012 is well below the significance level of α = 0.05, indicating that the results are statistically significant.
  • Practical Significance: The box plot and t-statistic (-3.45) suggest not just statistical but also practical significance. The new process appears to be both consistently and substantially faster.
  • Risk Assessment: The low p-value allows you to reject the null hypothesis with a high degree of confidence, meaning the risk of making a Type I error is minimal.

Business Implications

  • Increased Productivity: Implementing the new process could increase the number of cars produced, thereby enhancing productivity.
  • Cost Savings: Faster assembly time likely translates to lower labor costs.
  • Quality Control: Monitor the quality of cars produced under the new process closely to ensure that the speedier assembly does not compromise quality.

Recommendations

  • Implement New Process: Given the statistical and practical significance of the findings, recommend implementing the new process across the entire assembly line.
  • Monitor and Adjust: Implement a control phase where the new process is monitored for both speed and quality. This could involve additional hypothesis tests or control charts.
  • Communicate Findings: Share the results and recommendations with stakeholders through a formal presentation or report, emphasizing both the statistical rigor and the potential business benefits.
  • Review Resource Allocation: Given the likely increase in productivity, assess whether resources like labor and parts need to be reallocated to optimize the workflow further.

By following this step-by-step guide, you’ve journeyed through the rigorous yet enlightening process of hypothesis testing. From stating clear hypotheses to interpreting the results, each step has paved the way for making informed, data-driven decisions that can significantly impact your projects, business, or research.

Hypothesis testing is more than just a set of formulas or calculations; it’s a holistic approach to problem-solving that incorporates context, statistics, and strategic decision-making. While the process may seem daunting at first, each step serves a crucial role in ensuring that your conclusions are both statistically sound and practically relevant.


Q: What is hypothesis testing in the context of Lean Six Sigma?

A: Hypothesis testing is a statistical method used in Lean Six Sigma to determine whether there is enough evidence in a sample of data to infer that a certain condition holds true for the entire population. In the Lean Six Sigma process, it’s commonly used to validate the effectiveness of process improvements by comparing performance metrics before and after changes are implemented. A null hypothesis (H0) usually represents no change or effect, while the alternative hypothesis (H1) indicates a significant change or effect.

Q: How do I determine which statistical test to use for my hypothesis?

A: The choice of statistical test for hypothesis testing depends on several factors, including the type of data (nominal, ordinal, interval, or ratio), the sample size, the number of samples (one sample, two samples, paired), and whether the data distribution is normal. For example, a t-test is used for comparing the means of two groups when the data is normally distributed, while a Chi-square test is suitable for categorical data to test the relationship between two variables. It’s important to choose the right test to ensure the validity of your hypothesis testing results.

Q: What is a p-value, and how does it relate to hypothesis testing?

A: A p-value is a probability value that helps you determine the significance of your results in hypothesis testing. It represents the likelihood of obtaining a result at least as extreme as the one observed during the test, assuming that the null hypothesis is true. In hypothesis testing, if the p-value is lower than the predetermined significance level (commonly α = 0.05 ), you reject the null hypothesis, suggesting that the observed effect is statistically significant. If the p-value is higher, you fail to reject the null hypothesis, indicating that there is not enough evidence to support the alternative hypothesis.

Q: Can you explain Type I and Type II errors in hypothesis testing?

A: Type I and Type II errors are potential errors that can occur in hypothesis testing. A Type I error, also known as a “false positive,” occurs when the null hypothesis is true but is incorrectly rejected. It is equivalent to a false alarm. On the other hand, a Type II error, or “false negative,” happens when the null hypothesis is false but you erroneously fail to reject it, meaning a real effect or difference was missed. The risk of a Type I error is represented by the significance level (α), while the risk of a Type II error is denoted by β. Minimizing these errors is crucial for the reliability of hypothesis tests in continuous improvement projects.
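The trade-off between α and β can be made concrete for a one-sided z-test. A sketch under assumed numbers (the true effect size, σ, and n are invented for illustration):

```python
from math import sqrt
from scipy.stats import norm

alpha = 0.05     # Type I error rate for a one-sided test
effect = 1.0     # assumed true reduction in mean assembly time, minutes
sigma = 2.0      # assumed process standard deviation, minutes
n = 40           # sample size

# Critical value on the standardized scale.
z_alpha = norm.ppf(1 - alpha)

# Standardized effect: how far the true mean sits from H0 in standard errors.
shift = effect / (sigma / sqrt(n))

# Type II error: probability the test statistic fails to clear the critical
# value even though the effect is real. Power is its complement.
beta = norm.cdf(z_alpha - shift)
power = 1 - beta

print(f"beta (Type II error): {beta:.3f}")
print(f"power: {power:.3f}")
```

Re-running with a smaller α or a smaller sample shows β rising, which is exactly the trade-off the answer above describes.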

Daniel Croft is a seasoned continuous improvement manager with a Black Belt in Lean Six Sigma. With over 10 years of real-world application experience across diverse sectors, Daniel has a passion for optimizing processes and fostering a culture of efficiency. He's not just a practitioner but also an avid learner, constantly seeking to expand his knowledge. Outside of his professional life, Daniel has a keen interest in investing, statistics, and knowledge-sharing, which led him to create the website learnleansigma.com, a platform dedicated to Lean Six Sigma and process improvement insights.



Drive Agile Value with SAFe Lean Business Case

  • On May 18, 2023
  • By David Usifo (PSM, MBCS, PMP®)

SAFe Lean Business Case

The Scaled Agile Framework (SAFe) is a proven and widely-adopted methodology that helps organizations scale Agile practices across all levels of the enterprise.

One of the key aspects of implementing SAFe successfully is the Lean Business Case. This article aims to provide a comprehensive understanding of the Lean Business Case in SAFe, its importance, components, and best practices for creating, reviewing, and updating it.


Basics of the Lean Business Case

A Business Case is a formal document that captures the rationale behind a proposed initiative, project, or investment.

It outlines the expected benefits, costs, risks, and other relevant factors, providing a basis for informed decision-making.

Lean thinking  is an approach focused on maximizing customer value while minimizing waste. In the context of a Business Case, this means focusing on the most critical information and minimizing unnecessary complexity.

A Lean Business Case is a streamlined, hypothesis-driven version of a traditional business case, emphasizing agility, learning, and adaptability.

Key elements of a Lean Business Case include:

  • A clear hypothesis statement
  • Assumptions and dependencies
  • Financial analysis
  • Risks and mitigations
  • An implementation plan

The Role of the Lean Business Case in SAFe

The Lean Business Case plays a crucial role in the SAFe framework, supporting several core principles:

  • Aligning strategy with execution : Lean Business Cases help ensure that initiatives align with an organization’s strategic objectives and focus on delivering value.
  • Decentralizing decision-making : By providing clear and concise information, Lean Business Cases empower teams and stakeholders to make informed decisions at all levels of the organization.
  • Embracing a culture of continuous learning : Lean Business Cases encourage organizations to test hypotheses, learn from feedback, and adapt their plans accordingly.

In the SAFe implementation roadmap, the Lean Business Case is integrated with the Portfolio, Large Solution, and Program levels, fostering collaboration and decision-making among stakeholders .

Components of a SAFe Lean Business Case

The SAFe Lean Business Case is made up of the following components:

1. Hypothesis Statement

The hypothesis statement is the foundation of the Lean Business Case. It succinctly captures the problem or opportunity being addressed, the proposed solution, the target market and customers, and the success criteria for the initiative.

2. Assumptions and Dependencies

Documenting assumptions and dependencies helps to identify areas of uncertainty that may impact the success of the initiative. This includes business, technical, and organizational assumptions, as well as dependencies on other projects or initiatives.

3. Financial Analysis

The financial analysis provides an estimate of the costs, projected revenues, and financial returns associated with the proposed initiative. 

Key metrics include Return on Investment (ROI), Net Present Value (NPV), and Internal Rate of Return (IRR). A sensitivity analysis can also be included to assess the impact of changes in key variables.
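The metrics named above can be computed in a few lines. A sketch of NPV and a simple undiscounted ROI; the cash flows and discount rate are invented for illustration, and IRR would typically come from a dedicated financial library rather than hand-rolled code:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net Present Value: discount each period's cash flow back to period 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative initiative: 1,000 invested now, 500 returned each year for 3 years.
cash_flows = [-1000.0, 500.0, 500.0, 500.0]
discount_rate = 0.10

value = npv(discount_rate, cash_flows)

# Simple ROI: net gain over the initial investment, ignoring discounting.
roi = (sum(cash_flows[1:]) + cash_flows[0]) / -cash_flows[0]

print(f"NPV at {discount_rate:.0%}: {value:.2f}")  # positive NPV -> viable
print(f"Simple (undiscounted) ROI: {roi:.0%}")
```

A positive NPV at the chosen discount rate is the usual viability threshold; the sensitivity analysis mentioned above amounts to re-running `npv` across a range of rates or cash-flow scenarios.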

4. Risks and Mitigations

Identifying, prioritizing, and mitigating risks is essential to ensure the success of the initiative.  Risk assessment  involves examining potential threats, their likelihood, and their potential impact, as well as defining appropriate mitigation strategies.

5. Implementation Plan

The implementation plan provides a high-level timeline, resource requirements, key milestones, and governance structure for the proposed initiative. This helps stakeholders understand the scope, complexity, and dependencies of the project.

Creating a Lean Business Case in SAFe

1. Steps to Create a Lean Business Case

  • Gather input and data from stakeholders : Engage with key stakeholders to gather insights, data, and perspectives that will inform the Lean Business Case.
  • Develop hypothesis statement : Define the problem or opportunity, proposed solution, target market, and success criteria.
  • Identify assumptions and dependencies : Document the assumptions and dependencies that underpin the hypothesis statement and financial analysis.
  • Conduct financial analysis : Estimate costs, projected revenues, and financial returns, and perform sensitivity analysis.
  • Assess risks and define mitigations : Identify and prioritize risks, and develop mitigation strategies.
  • Create implementation plan : Outline the timeline, resource requirements, key milestones, and governance structure.
  • Review and refine the business case : Engage with stakeholders to review, refine, and validate the Lean Business Case.

2. Tips for Creating an Effective Lean Business Case

  • Focus on the most critical information: Identify the key elements stakeholders need in order to understand the proposal and make decisions.
  • Be concise and clear: Use clear, concise language and avoid unnecessary complexity or jargon.
  • Use visuals to convey information: Leverage diagrams, charts, and other visuals to present information in an easily digestible format.
  • Iterate and update as needed: Continuously refine the Lean Business Case as new information becomes available, and learn from feedback.

Reviewing and Approving a SAFe Lean Business Case

1. Roles Involved in the Review and Approval Process

  • Portfolio Steering Committee: Ensures alignment with strategic objectives and oversees portfolio-level decision-making.
  • Lean Portfolio Management: Provides guidance and support for Lean Business Case development and evaluation.
  • Enterprise Architects: Assess technical feasibility and alignment with enterprise architecture standards and practices.
  • Other relevant stakeholders: Contribute insights and expertise to inform the decision-making process.

2. Criteria for Evaluating a Lean Business Case

  • Alignment with strategic objectives: The proposed initiative should support the organization’s strategic goals and priorities.
  • Financial viability: The financial analysis should demonstrate a positive return on investment and acceptable levels of risk.
  • Feasibility and risk: The initiative should be technically and organizationally feasible, with manageable risks and appropriate mitigations in place.
  • Capacity and resource availability: The organization must have the necessary resources and capacity to execute the initiative successfully.

3. Decision-Making Process in SAFe

SAFe emphasizes collaborative decision-making, continuous exploration and learning, and adaptation based on feedback.

In the context of Lean Business Cases, this means that stakeholders should work together to evaluate proposals, identify opportunities for improvement, and make informed decisions about whether to proceed, pivot, or cancel initiatives.

Monitoring and Updating the Lean Business Case

Regular monitoring and updating of the Lean Business Case are essential to ensure that it remains accurate and relevant as the initiative progresses and new information becomes available.

This includes tracking key performance indicators (KPIs) to measure progress against success criteria, conducting periodic reviews and updates, and incorporating lessons learned and feedback from stakeholders.

1. Importance of Monitoring and Updating the Lean Business Case

  • Ensures alignment with evolving strategic objectives and priorities
  • Provides an opportunity to learn from feedback and adapt plans and assumptions as needed
  • Maintains transparency and accountability across the organization

2. Key Performance Indicators (KPIs) to Track Progress

  • Financial metrics (e.g., ROI, NPV, IRR)
  • Operational metrics (e.g., delivery milestones, resource utilization)
  • Customer value metrics (e.g., customer satisfaction, market share)
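A periodic review can compare each tracked KPI against the target recorded in the business case. The KPI names and numbers below are illustrative assumptions, not prescribed metrics:

```python
# Sketch of a pivot/persevere check: compare actuals against the targets
# recorded in the Lean Business Case. All names and figures are illustrative.

kpis = {
    # KPI name: (target, actual)
    "ROI": (0.20, 0.12),
    "milestones delivered on time": (0.90, 0.95),
    "customer satisfaction (1-5)": (4.0, 4.2),
}

off_track = [name for name, (target, actual) in kpis.items() if actual < target]

for name, (target, actual) in kpis.items():
    status = "off track" if name in off_track else "on track"
    print(f"{name}: target {target}, actual {actual} -> {status}")
```

Any off-track KPI would then feed the proceed, pivot, or cancel discussion rather than triggering an automatic decision.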

3. Periodic Reviews and Updates

  • Conduct regular progress reviews with stakeholders
  • Update the Lean Business Case to reflect changes in assumptions, risks, or other factors
  • Revise the implementation plan, financial analysis, and risk mitigation strategies as needed

4. Incorporating Lessons Learned and Feedback

  • Gather insights and feedback from stakeholders throughout the initiative lifecycle
  • Use this information to refine the Lean Business Case, improve decision-making, and enhance the overall effectiveness of the SAFe implementation

The SAFe Lean Business Case is a vital tool for aligning strategy with execution, fostering collaboration and informed decision-making, and promoting a culture of continuous learning.

By following the best practices outlined in this article, organizations can create, review, and update their Lean Business Cases effectively, ensuring that they deliver maximum value and minimize waste.

Embrace the Lean Business Case to drive success in your SAFe implementation and achieve your strategic objectives.

David Usifo (PSM, MBCS, PMP®)

David Usifo is a certified project manager professional, professional Scrum Master, and a BCS certified Business Analyst with a background in product development and database management.

He enjoys sharing the core concept of value creation through adaptive solutions with aspiring and experienced project managers and product developers.



Use Lean Hypotheses to Define a Minimum Viable Product

Many people new to building apps fall in love the moment they learn about the idea of a Minimum Viable Product. “It’s minimal! So there’s less risk. And it’s viable! So it’ll prove something!”. Unfortunately, it’s easy for the line of “minimum” or “viable” to slip. How can a team stay focused?

Lean Hypotheses are an effective way to help the team connect the problem they’re trying to solve to the product they’re building. They take the form of a falsifiable statement: we believe that [building this thing] will result in [this outcome], and we’ll know we’re right when we see [this measurable signal].

Let’s say we’ve got a problem on Hamazon, our e-commerce platform. Users are clicking around and spending lots of time on site, but they’re not putting items in their cart and converting. We might start with a hypothesis like: we believe that building a recommendation engine will help shoppers find relevant products, and we’ll know we’re right when they add items to their cart more quickly and conversion rises.

Once we have a hypothesis, an MVP can be defined as the least amount of work we can do to in/validate the hypothesis. We started from the assumption that we need a recommendation engine, but rather than building it out (an expensive proposition), we’ve homed in on a more specific problem: shoppers need guidance. For much less effort, we could test this hypothesis by curating and featuring a small selection of recommended products—this “Recommended Products” section is our new MVP! It might fail to impel shoppers to add items to their cart more quickly; if so, we’ll try a different hypothesis. (Notice we didn’t incur the cost of building a recommendation engine!) But if it succeeds, we’ll be able to iterate. Maybe we find that the Recommended Products section converts very well for users from the east coast, but not so well for users from the west coast.

So now our MVP would be a feature that lets our in-house curation team target separate Recommendation sets based on geography. We’d continue to iterate, and it’s possible our recommendation targeting would get so specific that we’d end up building a recommendation engine, but we’d only do so if the business needs led us there, rather than our intuition. In that way, we’d iterate towards a truly minimal, truly viable product.
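The in/validation step can be made concrete with a simple two-proportion z-test comparing add-to-cart conversion with and without the curated section. Hamazon is the article’s fictional platform, and all visitor and conversion counts below are invented for illustration:

```python
from math import sqrt

# Sketch: testing the "shoppers need guidance" hypothesis.
# Control saw the site as-is; the variant saw the curated
# "Recommended Products" section. All counts are invented.

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=120, n_a=4000,   # control: 3.0% conversion
                     conv_b=168, n_b=4000)   # variant: 4.2% conversion
print(f"z = {z:.2f}")  # z = 2.88; |z| > 1.96 is significant at the 5% level
```

A clearly significant lift would validate the hypothesis and justify the next iteration; a null result would send us back to form a different hypothesis, still without having paid for a recommendation engine.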

