Agile 2016: Agile Program Management: Measurements to See Value and Delivery

Datetime: 2016-08-23 03:31:40          Topic: Agile Development

Johanna Rothman gave an inspiring talk at Agile2016 about measurements for agile program management. She started off by recognizing the current issues that program teams are facing in measuring results:

  1. What to measure?
  2. What can we measure?
  3. What’s useful?

Johanna explained that management wants the big picture, but teams see their own picture; the program needs to see the whole. She pointed out that agile teams use metrics that help the team self-organize and self-manage; however, those data points are not meaningful to management.

Johanna also explained that depending on what the program looks like, you may have to change the recommended metrics.

She noted that today's measurement dynamics stem from a history of teams not reliably delivering a continuous stream of value. Because of this, leaders often ask for predictive measurements rather than empirical ones.

Johanna went on to describe what leaders really care about: customer acquisition, customer retention, revenue growth, customer experience, and touch points. For SaaS companies, customer experience is critical, since the customer has other options if the experience does not deliver.

Here are some of the metric movement pattern examples she gave showing Predictable vs. Empirical measurements:

| Move From This Measurement… (Predictable) | Move To This Measurement… (Empirical) |
| --- | --- |
| When will you be done? How much will it cost? | How much are you willing to invest? |
| Do you have a target date? Are you on track? What's the Earned Value? | Let us show you working product. |
| When will we see revenue? | We can show you value now. We can release now. |
| What do the customers think? | We can show progress against release criteria. We have customer satisfaction data. |

Johanna also described in more detail what is important to measure.

Measure learning: Can we measure learning? Get feedback all the time on small pieces, and ask, “Do we need to learn more?” We learn in order to build momentum.

Measure trends: Are we creating more defects than we fix? Are there more defects than before? Trend data over time is what is key; snapshots do not provide sufficient information.
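
The defect-trend question above can be sketched in a few lines. This is a hypothetical illustration, not from the talk; the per-iteration counts are made up.

```python
# Per-iteration defect counts (hypothetical illustration).
opened_per_iteration = [5, 7, 6, 9, 8]
fixed_per_iteration = [4, 5, 6, 6, 7]

# Running total of open defects after each iteration: a rising
# trend means we are creating more defects than we fix.
open_defects = []
total = 0
for opened, fixed in zip(opened_per_iteration, fixed_per_iteration):
    total += opened - fixed
    open_defects.append(total)

print(open_defects)  # [1, 3, 3, 6, 7] — trending up, a warning sign
```

A single snapshot (say, "7 open defects") tells you little; the series over time is what reveals whether quality is improving or eroding.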

Measure completed features: Programs start with epics and themes that sound like features but are vague, with no certainty about what will be delivered. Don’t count these as features. Measure more detailed features from the perspective of value to the customer.

Measure a Product Backlog Burn-up Chart: This is a partial answer to “Where are we?”; it shows when features grow in scope and shows overall progress on features. Count the stories for each feature set, and use that as the data.
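
A burn-up chart's underlying data can be as simple as two numbers per iteration: total stories in scope and stories done. The figures below are hypothetical, just to show how the chart captures scope growth alongside progress.

```python
# Burn-up data per iteration (hypothetical story counts).
# "scope" is total stories across feature sets; it can grow.
iterations = [
    {"scope": 40, "done": 5},
    {"scope": 44, "done": 12},  # scope grew by 4 stories
    {"scope": 44, "done": 20},
    {"scope": 50, "done": 29},  # scope grew again
]

remaining = [point["scope"] - point["done"] for point in iterations]
for i, (point, left) in enumerate(zip(iterations, remaining), start=1):
    print(f"Iteration {i}: {point['done']}/{point['scope']} done, {left} remaining")
```

Plotting "scope" and "done" as two rising lines makes scope creep visible: the gap between the lines is the remaining work, and a rising scope line explains why "done" alone never answers "where are we?"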

Measure what you want more of and less of: Less WIP and less multi-tasking; more working software that users care about, and more users happy with the release.

Measure Cost: Run Rate ($) — how much does the team cost per month, and are we willing to pay that amount for the results we are getting each month? (Results meaning released, valuable software from a customer perspective.)
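
The run-rate arithmetic is simple enough to sketch. The team size and loaded cost below are invented for illustration only.

```python
# Hypothetical run-rate calculation.
team_size = 6
loaded_cost_per_person_per_month = 12_000  # assumed fully loaded cost, $

run_rate = team_size * loaded_cost_per_person_per_month
print(f"Monthly run rate: ${run_rate:,}")  # Monthly run rate: $72,000

# The question the run rate supports: are the month's released,
# valuable results worth this amount?
features_released_this_month = 3
cost_per_released_feature = run_rate / features_released_this_month
print(f"Cost per released feature: ${cost_per_released_feature:,.0f}")
```

The point is not precision; it is giving leaders a monthly "are we willing to pay this for what we got?" conversation instead of an up-front cost estimate.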

Measure Releases: Release frequency — how long does it take from build to release? Reduce build time down to a few hours. Is our build time increasing or decreasing? It should not be increasing. Cycle time — how long does it take to get a feature from ready to released?
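
Cycle time as described above is just elapsed time from a feature becoming "ready" to its release. A minimal sketch, with hypothetical feature names and dates:

```python
from datetime import date

# Hypothetical features with "ready" and "released" dates.
features = [
    {"name": "login", "ready": date(2016, 7, 1), "released": date(2016, 7, 9)},
    {"name": "export", "ready": date(2016, 7, 5), "released": date(2016, 7, 19)},
    {"name": "search", "ready": date(2016, 7, 12), "released": date(2016, 7, 18)},
]

# Elapsed days from ready to released, per feature.
cycle_times = [(f["released"] - f["ready"]).days for f in features]
average = sum(cycle_times) / len(cycle_times)
print(cycle_times)  # [8, 14, 6]
```

As with defect counts, a single cycle time is a snapshot; tracking the series release after release shows whether the path from ready to released is getting shorter or longer.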

Measure the Product: Create scenarios, including performance scenarios — people have to log in, transfer a file, pay, and so on. What are the performance attributes of these tasks (performance, reliability, and the like)? Product performance measurements can be a function of release criteria: have we improved since the last release?

Measure Qualitative Data: Customer feedback and happiness. How often do you get feedback from customers? Keep an obstacle report — if you have people, you will have obstacles. How long does it take to make decisions? Are things outside of the program affecting our ability to get work done?

She concluded with common measurement traps she sees:

  • Never measure an individual or team’s productivity
  • Never compare people against each other
  • Never compare teams against each other

Guest editor Angela Wick is an Agile Coach and Trainer, and the Founder & CEO of BA-Squared, LLC, a training and consulting company that helps organizations modernize their requirements practices. She helps traditional, agile, and hybrid teams develop the skills they need to build the right solutions that deliver the intended value to the organization.
