AGILE
PLAYBOOK
TABLE OF CONTENTS
INTRODUCTION ..........................................................................................................4
Who should use this playbook? ................................................................................6
How should you use this playbook? .........................................................................6
Agile Playbook v2.1—What’s new? ...........................................................................6
How and where can you contribute to this playbook?.............................................7
MEET YOUR GUIDES ...................................................................................................8
AN AGILE DELIVERY MODEL ....................................................................................10
GETTING STARTED.....................................................................................................12
THE PLAYS ...................................................................................................................14
Delivery ......................................................................................................................15
Play: Start with Scrum ...........................................................................................15
Play: Seeing success but need more flexibility? Move on to Scrumban ............17
Play: If you are ready to kick off the training wheels, try Kanban .......................18
Value ......................................................................................................................19
Play: Share a vision inside and outside your team ..............................................19
Play: Backlogs—Turn vision into action .............................................................. 20
Play: Build for and get feedback from real users ................................................ 22
Teams ..................................................................................................................... 25
Play: Organize as Scrum teams ........................................................................... 25
Play: Expand to a value team when one Product Owner isn’t enough .............. 30
Play: Scale to multiple delivery teams and value teams when needed ............. 30
Play: Invite security into the team ........................................................................31
Play: Build a cohesive team ..................................................................................31
Craftsmanship ...........................................................................................................35
Play: Build in quality ..............................................................................................35
Play: Build in quality and check again ................................................................. 40
Play: Automate as much as you can ....................................................................43
Measurement ............................................................................................................47
Play: Make educated guesses about the size of your work .................................47
Play: Use data to drive decisions and make improvements .............................. 49
Play: Radiate valuable data to the greatest extent possible ............................... 50
Play: Working software as the primary measure of progress .............................53
Management .............................................................................................................55
Play: Manager as facilitator ...................................................................................55
Play: Manager as servant leader .......................................................................... 56
Play: Manager as coach .........................................................................................57
Adaptation ................................................................................................................ 58
Play: Reect on how the team works together ................................................... 58
Play: Take an empirical view................................................................................. 58
Meetings ....................................................................................................................59
Play: Have valuable meetings ...............................................................................59
Agile at scale ............................................................................................................. 60
Play: Train management to be agile .....................................................................61
Play: Decentralize decision-making ......................................................................61
Play: Make work and plans visible ....................................................................... 62
Play: Plan for uncertainty in a large organization ............................................... 62
Play: Where appropriate, use a known framework for agile at scale................. 62
STORIES FROM THE GROUND ............................................................................... 63
U.S. Army Training Program: An Agile Success Story .........................................67
PARTING THOUGHTS ............................................................................................... 68
ABOUT BOOZ ALLEN ................................................................................................ 69
About Booz Allen Digital Solutions ........................................................................ 69
About Booz Allen's agile practice and experience ................................................. 69
REFERENCES AND RECOMMENDED READING LIST ...........................................70
Lead Engineer Thuy Hoang sits
working in our Charleston Digital Hub
INTRODUCTION
Agile is the de facto way of delivering software today.
Compared to waterfall development,
agile projects are far more likely to deliver
on time, on budget, and having met the
customer's need. Despite this broad
adoption, industry standards remain
elusive due to the nature of agility:
there is no single set of best practices.
The purpose of this playbook is to
educate new adopters of the agile
mindset by curating many of the good
practices that we’ve found work for teams
at Booz Allen. As we offer our perspective
on implementing agile in your context, we
present many “plays”—use cases of agile
practices that may work for you, and
which together can help weave an overall
approach for tighter delivery and more
satised customers.
Core to our perspective are the following
themes, which reverberate throughout
this playbook.
We’ve come to these themes as software
practitioners living in the trenches and
delivering software on teams using
increasingly modern methods, and in
support of dozens of customers across
the U.S. Government and the
international commercial market.
+ Agile is a mindset. We view agile as a
mindset—defined by values, guided
by principles, and manifested through
emergent practices—and actively
encourage industry to embrace this
definition. Indeed, agile should not
simply equate to delivering software
in sprints or a handful of best practices
you can read in a book. Rather, agile
represents a way of thinking that
embraces change, regular feedback,
value-driven delivery, full-team
collaboration, learning through
discovery, and continuous improvement.
Agile techniques cannot magically
eliminate the challenges intrinsic to
high-discovery software development.
But, by focusing on continuous delivery
of incremental value and shorter
feedback cycles, they do expose these
challenges as early as possible, while
there is still time to correct for them.
As agile practitioners, we embrace
the innate mutability of software and
harness that flexibility for the benefit of
our customers and users. As you start
a new project, or have an opportunity
to retool an existing one, we urge you
to lean toward agile for its reduced risk
and higher customer satisfaction.
+ Flexibility as the standard, with
discipline and intention. Booz Allen
Digital Solutions uses a number of
frameworks across projects, depending
on client preferences and what fits
best. We use Scrum, Kanban, waterfall,
spiral, and the Scaled Agile Framework
(SAFe), as well as hybrid approaches.
But we embrace agile as our default
approach, and Scrum specically as
our foundational method, if it fits the
scope and nature of the work.
+ One team, multiple focuses.
Throughout this playbook, we explicitly
acknowledge the symbiotic relationship
between delivery (responsible for the
“how”) and value (responsible for the
“what”), and we use terms like “delivery
team” and “value team” to help us
understand what each team member's
focus may be. However, it's crucial to
consider that, together, we are still one
team, with one goal, and we seek a
common path to reach that goal.
Senior Consultant Anastasia Bono, Lead
Associate Joe Out, and Senior Lead Engineer
Kelly Vannoy solve problems at our office in
San Antonio, TX.
+ Work is done by teams. Teams are
made of humans. A team is the core
of any agile organization. In a project
of 2 people or 200, the work happens
in teams. And at the core of teams
are humans. Just as we seek to build
products that delight the humans
who use them, we seek to be happier,
more connected, more productive
humans at work.
+ As we move faster, we cannot sacrifice
security. According to the U.S. Digital
Service, nearly 25% of visits to
government websites are for nefarious
purposes. As we lean toward rapid
delivery and modern practice, we must
stay security-minded. Security cannot
be a phase-gate or an afterthought;
we must bring that perspective into our
whole team, our technology choices,
and our engineering approach.
Consultant Leila Aliev, Associate
Joanne Hayashi, Engineer Tim Byers,
and Lee Stewart work together at our
oce in Honolulu, HI.
WHO SHOULD USE THIS PLAYBOOK?
This playbook was written primarily
for new adopters of agile practices, and
it is intended to speak to managers,
practitioners, and teams.
While initially written as a guide solely
for Booz Allen digital professionals, it
is our hope that the community will
also nd value in our experience. We
have deliberately minimized Booz Allen
specic “inside baseball” language
wherever possible.
A Booz Allen internal addendum is also
available for Booz Allen staff, with links
and information only relevant for them.
This is not because it is full of “secret
sauce” proprietary information; rather,
it is to keep the community version
accessible and broadly valuable.
HOW SHOULD YOU USE THIS
PLAYBOOK?
This playbook is not intended to be read
as a narrative document. It is organized,
at a high level, as follows:
1. Agile Playbook context. This is what
you are reading now. We introduce
the playbook and provide a high-level
agile delivery model.
2. The plays. The plays are the meat of
the playbook and are intended to be
used as references. Plays describe
valuable patterns that we believe agile
teams should broadly consider—they
are “the what.” Within many plays, we
describe techniques for putting them
into practice. Plays are grouped into
nine categories: Delivery, Value, Teams,
Craftsmanship, Measurement,
Management, Adaptation, Meetings,
and Agile at scale.
AGILE PLAYBOOK V2.1—WHAT'S NEW?
This is version 2.1 of our Agile Playbook.
The rst edition was published in 2013
and aided many practitioners in adopting
and maturing their agile practice across
our client deliveries and internal eorts.
In June 2016, we created version 2.0,
expanding that content, especially
around agile at scale and DevOps,
and transforming what was an internal
playbook into an external publication—
open source and publicly available.
This version is primarily a visual refresh
with minor content adjustments.
OUR DIGITAL HUBS
Over the last 4 years, our digital business
and footprint has grown by leaps and
bounds, in part through our acquisitions
of the software services unit SPARC in
Charleston, South Carolina and the
digital services rm Aquilent in Laurel,
Maryland. These digital hubs are part of
our Digital Solutions Network, where
integrated teams of digital professionals
collaborate to solve our clients’ toughest
problems. Within this virtual network,
we’re able to help our clients in more
places, and with more expertise. Our
hubs create a tight-knit community for
our technologists and innovators to
exchange ideas and combine their
expertise in cloud, mobile, advanced
analytics, social, and IoT with modern
techniques, including user-centered
design, agile, DevOps, and open source.
REMINDER: BEST
LEARNING
PRACTICES
The rst playbook referred to itself as a
collection of best practices, but we want
to clarify that. In most cases these are
best learning practices. If you need a
place to start or you want to understand
something in context, we hope this
playbook serves as a valuable guide
and helps you keep walking toward high
performance as a team or program. But,
just as guitar virtuosos or Olympic skiers
have moved past the form and rules they
learned in their rst few weeks of practice,
we expect our delivery teams to learn
the values here, start with the learning
practices, and eventually innovate their
way toward even higher performance
methods that are unique to them.
HOW AND WHERE CAN YOU
CONTRIBUTE TO THIS PLAYBOOK?
We want to build this playbook as
a community.
If you have ideas or experiences (or find a
typo or broken link) you wish to share, we
would love to hear about it. Contributions
can be sent in one of two ways:
+ Pull requests—This playbook is
maintained and kept up to date
continuously through GitHub at
https://github.com/booz-allen-
hamilton/agileplaybook. We
welcome your contributions
through pull requests.
+ Email us at [email protected]
We welcome feedback on the entire
playbook, but we are looking for
contributions in a few specific areas:
+ New play or practice descriptions
+ Reports or articles from the ground,
completely attributed to you
+ Favorite tools and software for
inclusion in our tools compendium
+ Additional references or further reading
Please be sure to take a look at our style
guide before submitting feedback.
Agile organizations view change
as an opportunity, not a threat.
-JIM HIGHSMITH
Luke Lackrone
@lackrone
Lauren McLean
@SPARCedge
Marianne Rogers
@SPARCedge
Timothy Meyers
@timothymeyers
Doug James
@SPARCedge
Stephanie Sharpe
@Sharpneverdull
Hallie Krauer
@SPARCedge
Noah McDaniel
@SPARCedge
Claire Atwell
@twellLady
Kim Cumbie
@kimnc328
MEET YOUR GUIDES
The bulk of this content was developed by the following
practitioners, coaches, and software developers from
Booz Allen Hamilton.
Lead Associate Stan Hawk, Lead Associate
Henry Lee, Alumna Ashley Fagan, Consultant
Maggie Joyce, Lead Associate Darren Withers,
and Senior Consultant Hallie Miller talk
during a meeting in Chantilly, VA.
We would like to also thank all of our reviewers,
editors, contributors, and supporters:
Gary Labovich, Jeff Fossum, Dan Tucker,
Kileen Harrison, Keith Tayloe, Elizabeth
Buske, Joe Dodson, Emily Leung, Jennifer
Attanasi, James Cyphers, Merland Halisky,
Bob Williams, Gina Fisher, Aaron Bagby,
and Elaine (Laney) Hass.
And, we would like to acknowledge the
champions, contributors, and reviewers
of the rst version of this Agile Playbook:
Philipp Albrecht, Tony Alletag, Maxim
Aronin, Benjamin Bjorge, Wyatt Chaffee,
Patrice Clark, Bill Faucette, Shawn Faunce,
Allan Hering, Amit Kohli, Raisa Koshkin,
Paul Margolin, Debbie McCoy, Erica
McDowell, Johnny Mohseni, Robert
Newcomb, Jimmy Pham, Rose Popovich,
Melissa Reilly, Haluk Saker, Li Lian Smith,
Alexander Stein, Tim Taylor, Loree
Thompson, Elizabeth Wakefield, Gary
Kent, Amy Dagliano, Alex Lyman, Alicia
White, Kevin Schaa, and Joshua Sullivan
AN AGILE DELIVERY MODEL
Here we introduce the agile delivery model that we use to drive
delivery across our business.
Like most models, it is not perfect, but
we believe it is useful. It illustrates the
focus areas that a team may have over
time—often simultaneously—and
provides context for the plays and
practices described in the rest of this
playbook. Our intention is to show that
delivery is the majority of our work and
that successful delivery is built on a
foundation of alignment and preparation.
We'll describe these focus areas in broad
strokes. To truly put this model into action,
refer to the details captured in the plays
and practices.
Figure 1: Our agile delivery model. ALIGN (determine vision and goal; form and charter the team; create working agreements; set technical expectations) feeds PREPARE (product plan and backlog; estimate and prioritize work; create initial infrastructure; plan initial sprint), which feeds DELIVER: repeated Plan-Do-Check-Act cycles that build working software (design, code, test, build) using Scrum, Scrumban, or Kanban, each cycle producing a short-term plan, a review/demo, a retrospective (start, stop, more, less), and deployed software.
ALIGN
We must have a clear understanding of who
we are and what we’re trying to build.
With this focus, we cultivate relationships
with our stakeholders and users. Together,
we build a shared understanding of
the project vision, goals, values, and
expectations. We often employ
chartering sessions to build product
roadmaps and user personas, and to
capture strategic themes.
We also create our team working
agreements—how we will work together.
These identify how we’ll communicate,
resolve conict, and have fun, among
other things, and we’ll establish technical
expectations, such as coding standards
and our denitions of done.
This is a dominant focus area during a
projects rst several days (but no more!
We want to align quickly and get to
delivering as quickly as possible). While
our teams begin with this focus, we
realize that this is not only important
during project startup. Teams may
need to realign throughout the life of a
project when signicant change occurs.
PREPARE
Once we are clear on who we are and what
we’re here to do, the team needs to come
together to get a look ahead, and prepare
enough to get started.
With this focus, we build, estimate, and
groom our product backlog. We put some
thought into architecture. We sketch out a
few sprints’ worth of work—3 months or
so. Here, we're trying to understand the
things we will do soon. We accept that we
are fuzzy on things we’ll be doing a few
months from now, and that time spent
planning these things now is possibly
waste. Because software development is
primarily highly creative knowledge work,
we have to do it to understand it. We
continually discover. It is difficult to pull
together a 12-month master schedule;
if we did, it would probably be wrong in a
matter of days. We embrace this truth
instead of ghting it.
In addition to doing just-enough planning,
we generally invest some time in our
infrastructure and tooling. We want to
make sure we can get from a commit
to a build quickly. Let’s smooth our
deployment process so it’s not a pain
when time is tight.
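One lightweight way to keep the commit-to-build path fast is to script the build and test steps so they can run on every push, locally or in a CI job. The sketch below is illustrative only; the make and pytest commands are placeholders for whatever build and test tooling your project actually uses.

```python
#!/usr/bin/env python3
"""Illustrative commit check: build and test before a change is shared.

The commands are placeholders, not a prescribed toolchain; substitute
your project's own build and test steps.
"""
import subprocess
import sys
import time

STEPS = [
    ["make", "build"],   # hypothetical build step
    ["pytest", "-q"],    # hypothetical automated test step
]

def main() -> int:
    start = time.time()
    for cmd in STEPS:
        print(f"Running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Step failed: {' '.join(cmd)}")
            return result.returncode
    print(f"Build and tests passed in {time.time() - start:.0f}s")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```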
DELIVER
This is where the rubber meets the road,
so to speak.
With this focus, we transform the needs
of users into valuable, tested, potentially
shippable software. Generally, we follow
small Plan-Do-Check-Act cycles. In the
Delivery section of this playbook, we
expand on several popular agile delivery
frameworks, and when we use them.
When we are delivering, we build; we test;
we keep designing; we keep talking to
users. We inspect and adapt. We do this
as much as we need to, until we are done.
Any team members who are not actively
delivering something for the current
sprint are helping the team get ready
for the next sprint.
SPRINT 0
This period does not have to be strictly
timeboxed; you want to get things in place
so that you’re ready to begin delivery as an
agile team. Don't linger though—we want
to be delivering!
LIKELY ACTIVITIES
+ Identify your Scrum Master and
Product Owner
+ Identify the users and stakeholders
+ Have a team chartering session
+ Identify, dene, and commit to an
initial set of coding standards
+ Identify initial architecture approach
+ Identify some likely technologies
+ Prepare the physical team space with
information radiators
+ Set up your development environment
+ Set up your build infrastructure
+ Set up a basic automated test
environment
+ Get a few user stories into the backlog
+ Communicate the output of Sprint 0 to
the teams and stakeholders
SPRINT 1
LIKELY ACTIVITIES
+ Hold daily standups
+ Prioritize product backlog
+ Estimate top user stories
+ Establish a sprint backlog
+ Write code; test; get user stories “done”
+ Hold a demo with whatever you have
+ Hold a retrospective and make changes
to improve
SPRINT 2
LIKELY ACTIVITIES
+ Hold daily standups
+ Reprioritize the backlog with new items
+ Hold sprint planning, establish the
sprint backlog
+ Write code; test; get user stories “done”
+ Hold a demo; gather user feedback
+ Measure your velocity (a small worked example follows this list)
+ Reect and identify improvements
SPRINT 3+
LIKELY ACTIVITIES
+ Keep going—inspect and adapt!
“In my experience, there’s no such
thing as luck.”
- OBI-WAN KENOBI
Consultant Candice Moses and Staff Engineer
Perry Spyropoulos work together in our
Charleston Digital Hub.
THE PLAYS
Plays describe valuable patterns that we believe agile teams
should broadly consider.
These plays and practices are intended to be used as references. Within many plays, we
describe practices that teams can do to turn the plays into action.
“If, on your team, everyone’s input
is not encouraged, valued, and
welcome, why call it a team?”
- WOODY WILLIAMS
DELIVERY
This section describes how agile teams
work together to produce value that
satises their stakeholders.
VALUE
This section describes how agile teams can
understand the value of the work they do.
TEAMS
Agile teams are where the work gets done.
Team members care about each other,
their work, and their stakeholders. And
agile teams are constantly stretching,
reaching for high performance. This
section describes plays for team
formation, organization, and cohesion.
CRAFTSMANSHIP
This section walks through practical
ways to inject technical health into
your solutions.
MEASUREMENT
Measurement aects the entire team.
It is an essential aspect of planning,
committing, communicating, improving,
and, most importantly, delivering.
MANAGEMENT
Where is the manager on an agile team?
This section explores how the manager
leads in an agile organization.
ADAPTATION
This section looks at ways to regularly
examine and nd ways to improve the
team and product.
MEETINGS
This section describes common meetings
for agile teams, and how to effectively use
your time together.
AGILE AT SCALE
This section describes some of our initial
thoughts on scaling agility.
Antipatterns
Antipatterns are consistently observed behaviors that can impede a team’s
agility. Throughout this guide, examples of antipatterns are given in boxes
like this one.
DELIVERY
Agile teams are biased to action and are
constantly seeking ways to deliver more
product and more often.
PLAY: START WITH SCRUM
Start with Scrum for agile delivery, but with
an eye for “agility.”
Scrum is the most popular delivery
framework for agile teams to use, by
far; so much so that it’s often confused
for “agile” itself. Scrum is a powerful,
lightweight product delivery framework
that has existed for 25 years. The definitive
description of the framework is maintained
by its creators in the Scrum Guides
[Sutherland and Schwaber 2013].
Because the Scrum Guides are such
well-maintained and well-used resources,
we won’t try to explain everything about
Scrum here.
Scrum was developed to rapidly deliver
value while accommodating the changes
that are inevitable in product delivery.
It's also meant to create a predictable
pace for the team.
Traditionally, a product is designed, then
developed, then demonstrated or released
to the customer. This occurs mostly as a
sequence and over long stretches of time.
Often, the customer is unhappy with the
result and wants changes. Since we are so
late in development, changes found after
release are typically very costly. Scrum,
however, incorporates frequent demos and
feedback to mitigate surprise requirement
changes. All the project work gets cut
into short development iterations known
as sprints. Scrum emphasizes that work
planned in sprints must be small, well
understood, and prioritized.
Each sprint is typically 1–4 weeks long (and
stays consistent for a given team). During
a sprint, the delivery team chooses the
high-value work it can complete; the team
focuses on just that work for the sprint’s
duration. At the end of each sprint, the
team demonstrates the working software
it produced. During the demo, the team
gathers feedback that helps shape the
direction of the product going forward.
Practically, this means that design
decisions are not made way ahead of time,
but rather right before or even during
active development. Instead of having
heavy, top-down design, design emerges
and evolves over several iterations:
develop, demo, gather feedback,
incorporate feedback, develop, demo…
THIS SECTION DESCRIBES HOW AGILE
TEAMS WORK TOGETHER TO PRODUCE VALUE
THAT SATISFIES THEIR STAKEHOLDERS:
“Deliver working software
frequently, from a couple of
weeks to a couple of months,
with a preference to the
shorter timescale.”
Agile processes promote
sustainable development.
The sponsors, developers,
and users should be able to
maintain a constant pace
indenitely.
Simplicity—the art of
maximizing the amount
of work not done—is essential.
Figure 3: The Scrum framework. Work items move from the product backlog into a sprint backlog; during a 2-4 week sprint the team designs, codes, tests, and builds, synchronizing every 24 hours at the daily standup meeting; each sprint ends with a potentially shippable product.
Over the course of several sprints, a
picture of progress and direction emerges.
Customers and management are kept
informed through progress charts (see
Measurement section) and end-of-sprint
demos. It becomes easy to keep everyone
informed while avoiding many pitfalls of
micromanagement.
If Scrum is followed, many of the traditional
problems associated with complex projects
are avoided. The frequent feedback prevents
projects from spending too much time
going in the wrong direction. Furthermore,
because of Scrum’s iterative nature,
projects can be terminated early (by choice
or circumstance) and deliver value to the
end user.
Iteration—trying something and looking
at it—is core to how Scrum operates.
A variety of development teams use Scrum,
from those working on highly complex
systems with an unknown end, through
operations and maintenance patching.
The key to using this method is ensuring
the maintenance of a groomed backlog
and allowing for the flexibility needed in
this short learning cycle.
SCRUM IS BUILT ON THREE PILLARS: TRANSPARENCY, INSPECTION, AND ADAPTATION
TRANSPARENCY
Many of the challenges teams face boil down to communication issues. Scrum values keeping communication
and progress out in the open; doing things as a team; being transparent. The fact that Scrum’s ceremonies
(Sprint Planning, Daily Standup, Sprint Review, and Retrospective) are intended for the whole team speaks to the
importance of transparency.
INSPECTION
Inspecting things is how we know if they're working. Everything on a Scrum team is open to inspection, from the
product to our process.
ADAPTATION
As we inspect things, if we think we would benefit from doing it differently, let's try it! We can always adjust again
later. Notably, Scrum’s Sprint Review ceremony gives us an explicit, regular opportunity to adapt based on how
the product is coming along; the Retrospective ceremony does the same for how our team is working.
PLAY: SEEING SUCCESS BUT NEED MORE FLEXIBILITY? MOVE ON TO SCRUMBAN
If Scrum is too restrictive or there are too
many changing priorities within a sprint,
consider Scrumban to provide the structure
of ceremonies with the flexibility of delivery.
Scrumban is derived from Scrum and
Kanban (described below, in the next play)
as the name would suggest. It keeps the
underlying Scrum ceremonies while
introducing the ow theory of Kanban.
Scrumban was developed in 2010 by Corey
Ladas to move teams from Scrum, which
is a good starting point in agile, to
Kanban, which enables flow for delivery
on demand [Ladas 2010]. Kanban focuses
on flow, but it does not have prescribed
meetings and roles—so we borrow those
from Scrum in this Scrumban model.
The primary difference versus Scrum is
that the sprint timebox no longer applies
to delivery in Scrumban.
Instead, the team is constantly prioritizing
and nishing things as soon as possible.
In Scrumban, we keep Scrum’s cadence
just to have our ceremonies so we don’t
miss out on planning together, showing
our work, and having a time for reflection.
A strict work-in-progress limit (WIP limit)
is set to enable team members to pull
work on demand, but not so many things
that they have trouble finishing the work
at hand. Scrumban has evolved some of
its own practices.
Examples of unique practices to Scrumban
include the following:
+ Bucket-size planning was developed to
enable long-term planning where a
work item goes from the idea bucket, to
a goal bucket, then to the story bucket.
The story bucket holds items ready to
be considered during an on-demand
planning session.
+ On-demand planning moves away from
planning on a regular cadence, instead
only holding planning sessions when
more work is needed. Items to be
pulled into the Kanban board are
prioritized, nalized, and added to
the Kanban board.
Scrumban is useful for teams who are very
familiar with their technical domain and
may have constantly changing priorities
(e.g., a team working on the same product
for an extended amount of time). The
flexibility of Scrumban allows the
backlog to be re-prioritized quickly and
the product to be released on demand.
< No process >
In environments where we are used to doing work in our own swim lane,
or a single person possesses most of the knowledge, it's easy to skip defining
processes. The team needs to find ways to work together, which often take
the form of a process. Repeatable, well-understood processes for regularly
occurring tasks help the team move faster, reduce stress, and integrate
new team members. A process needs to be understood by the entire team
and perhaps documented for it to work. The repeatable tasks are often
defined during team chartering and revisited at retrospectives. Some
common processes to reflect on: Who checks our work? When and how
do we deploy our software? How do we avoid becoming single threaded
on a capability? How do we track our work? Do we need a tool? What is
our defect tracking process?
PLAY: IF YOU ARE READY TO KICK OFF THE TRAINING WHEELS, TRY KANBAN
Try Kanban on only the most disciplined
teams and when throughput is paramount.
Kanban is a framework adopted from
industrial engineering. It was developed to
be mindful of organizational change
management, which is apparent in the four
original principles:
+ Start with existing process.
+ Agree to pursue incremental,
evolutionary change.
+ Respect the current process, roles,
responsibilities, and titles.
+ Leadership at all levels.
So, in Kanban, you will not (inherently)
be receiving a bunch of new titles, or
using much new vocabulary.
In 2010, David Anderson elaborated with
five "Open Kanban" practices tailored
for software delivery:
+ Visualize the workflow.
+ Limit WIP.
+ Manage flow.
+ Make process policies explicit.
+ Improve collaboratively (using models
and the scientific method).
In Kanban, you fundamentally want to
make all of your work visible, continuously
prioritize it, and always flow things to
“done.” This is great for a software team
that issues several new releases per week,
or per day. A pitfall, however, is that if
priorities are allowed to change too often,
no work will ever get done. So, be mindful
about finishing things and not starting
too many things.
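To make the WIP-limit idea concrete, here is a hedged sketch of a board column that refuses to pull new work once its limit is reached; the column name, limit, and card titles are invented for illustration, not prescribed by Kanban.

```python
from dataclasses import dataclass, field

@dataclass
class Column:
    """One column on a Kanban board with a work-in-progress (WIP) limit."""
    name: str
    wip_limit: int
    cards: list[str] = field(default_factory=list)

    def pull(self, card: str) -> bool:
        """Pull a card into the column only if the WIP limit allows it."""
        if len(self.cards) >= self.wip_limit:
            print(f"WIP limit reached in '{self.name}'; finish something first.")
            return False
        self.cards.append(card)
        return True

# Illustrative usage: the fourth pull is refused until work in progress finishes.
in_progress = Column("In Progress", wip_limit=3)
for story in ["login page", "audit report", "search fix", "export button"]:
    in_progress.pull(story)
print(in_progress.cards)
```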
Kanban is appropriate for teams ready
to self-regulate, rather than rely on
timeboxes. The practices require
discipline to enable flow. An operations
and maintenance team with a small
backlog could benefit from Kanban, as it
would enable delivery of small items as
needed and ensure all issues are getting
to a done state. In addition, mature agile
teams with a highly automated pipeline
could use Kanban as a way to enable
quick flow of value to production.
PRACTICE: PRODUCT BOX
The product box is another way to try to
crack into the product's vision. You might
try this exercise as an alternative to working
with the Vision Board. The product box is
a great way to engage a whole team in the
conversation around the vision and value
of the project at hand, and to have some
fun together while doing so. While there
are many versions of this idea, Innovation
Games is a well-known one. As described
there, “[Ask your stakeholders] to imagine
that they’re selling your product at a
tradeshow, retail outlet, or public market.
Give them a few cardboard boxes and ask
them to literally design a product box
that they would buy. The box should have
the key marketing slogans that they find
interesting” [Innovation Games 2015]. We
have also seen this work nicely by imagining
that your product is appearing in the App
Store; what would the description, icon,
screenshots, and reviews look like?
Figure 4: Product Vision Board by Roman
Pichler [Pichler Consulting 2016]
VALUE
Most software has more features
than necessary.
Agile teams emphasize prioritizing features
by the value they bring to real users and
stakeholders. Considering the value of
things is just as important as delivering
working software, since time spent on
non-valuable features is wasted time.
PLAY: SHARE A VISION INSIDE AND OUTSIDE YOUR TEAM
The vision is the foundation upon which
product decisions are made. When at
critical junctures, turn to the vision to help
determine which direction will help the
vision become a material reality. At the
team and individual levels, the vision
provides a common mission to rally
around and helps us understand the
long-term goals, as well as incremental goals.
PRACTICE: PRODUCT VISION
STATEMENT
The vision for the project should be
encapsulated in a product vision statement
created by the Product Owner. Akin to
an “elevator pitch” or quick summary, the
goal of the product vision statement is to
communicate the value that the software
will add to the organization. It should be
clear and direct enough to be understood
by every level of the effort, including
project stakeholders. The Vision Board
by Roman Pichler is a nice, simple
template for forming this statement
[Pichler Consulting 2016].
Once you have your vision board, work with
the Product Owner and key stakeholders to
test the vision with the target group to see
how well it resonates with eventual users.
The Vision Board (Figure 4) prompts (www.romanpichler.com):
Vision: What is your vision, your overarching goal for creating the product?
Target Group: Which market segment does the product address? Who are the target users and customers?
Needs: How does the product create value for its users? What problem does it solve? Which benefit does it provide?
Product: What product is it? What makes it desirable and special? Is it feasible to develop the product?
Business Goals: How is the product going to benefit the company? What are the business goals? Which one is most important?
Competitors: Who are the product's main competitors? What are their strengths and weaknesses?
Revenue Sources: How can you monetise your product and generate revenue? What does it take to open up the revenue sources?
Cost Factors: What are the main cost factors to develop, market, sell, and service the product? What resources and activities incur the highest cost?
Channels: How will you market and sell the product to the customers? Do the channels exist today?
“Business people and developers
must work together daily
throughout the project.”
Staff Engineer Peter Bingel works with Lead Technologists Lakeshia Winchester, Sebastian Steadman,
and Amy Longworth to solve problems in Charleston, SC.
Once you have your product box, use focus
groups or hallway testing with your target
group to discover how well the vision
resonates with target users and to
understand their expectations for what's inside.
PRACTICE: PRODUCT ROADMAP
Most teams need a product roadmap to
understand high-level objectives and
direction for the project. We think of this
as the project’s North Star. If we’ve
deviated distinctly from it, we should have
a good reason, and we probably need to
update the roadmap. Be sure to include
the ultimate project goals in the roadmap,
keeping in mind their value added and the
desired outcomes from the customer’s
point of view. The roadmap should loosely
encapsulate the overall vision and give a
sense for when capabilities will be
delivered or intersecting milestones are
going to occur. The roadmap should
probably cover the next 6–12 months, and
only in broad strokes. On the roadmap,
releases contain less detail the farther they
are into the future. Your team will have to
talk through rough sizing of work and
prioritization during the creation process.
You’re not building a schedule; you’re
trying to paint a plausible picture. Be sure
to build this together as a team, or at least
review and revise it together. Too many
roadmaps are built by leadership and
never have buy-in from the team.
PRACTICE: RELEASE PLAN
Once the roadmap is complete, a release
plan may be created for the first release.
Each release should begin with the
creation of a release plan specic to the
goals and priorities for that release. This
plan ensures that the value being added
to the project is consistently reviewed and,
if necessary, realigned to maximize the
overall value and efficiency of development.
Like the product roadmap, the release plan
should include a high-level timeline of the
progression of development, specic to
the priorities for that release only. The
highest value items should be released
first, allowing the stakeholders visibility
into the progression of the work.
User engagement is often overlooked when
developing release plans. We find that the
best roadmaps and release plans include user
communication and engagement strategies,
such as training and rollout.
PLAY: BACKLOGS—TURN VISION
INTO ACTION
Once we have and share our vision, we
understand the big stuff, but we have to
turn this to action quickly. A core practice
for agile teams is to have a backlog of work.
In this context, a backlog is a prioritized
set of all the desired work (that we know
about) we want to do on the project.
BACKLOG BASICS
Treat your backlog as a “catch all”; any
item that moves the team ahead to a final
product or project goal can be added to
your backlog. New features, defects,
abandoned refactoring, meetings, and
other work are all game to be placed
there. Additionally, keep in mind that your
backlog will evolve in detail and priority
through engagement with the end user.
Your backlog should be a living, breathing
testament to your product, as you will
iteratively refine your backlog as you build
your product. Product backlog items are
added and refined until all valuable
features of that product are delivered,
which may occur through multiple releases
during the project lifecycle.
BACKLOG CREATION
A backlog is made of epics and user
stories. User stories are simply something
a user wants, and they’re sized such that
we understand them well; epics are bigger
than that and we need to break them down
further so the team can actually execute.
Generally, anyone can add something to
the backlog, but the Product Owner
“owns” that backlog overall—setting the
priority of things and deciding what we
really should be working on next.
We dive further into both epics and user
stories in the following play.
PRACTICE: PRODUCT AND
SPRINT BACKLOGS
A product backlog is a prioritized list of
product requirements (probably called
user stories), regularly maintained,
prioritized, and estimated using a scale
based on story points. It represents all
of the work we may want to do for the
project, and it changes often. We have
not committed to deliver the full scope
captured in the product backlog.
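As a sketch of what a prioritized, estimated backlog can look like when captured in a simple structure (the field names and the value-per-point ordering below are our own illustration, not a standard):

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    business_value: int   # relative value, set by the Product Owner
    story_points: int     # relative effort, estimated by the delivery team

backlog = [
    BacklogItem("Password reset", business_value=8, story_points=3),
    BacklogItem("Export to CSV", business_value=3, story_points=5),
    BacklogItem("Single sign-on", business_value=13, story_points=8),
]

# One simple (not the only) way to order the backlog: highest value per point first.
backlog.sort(key=lambda item: item.business_value / item.story_points, reverse=True)
for item in backlog:
    print(f"{item.title}: value {item.business_value}, points {item.story_points}")
```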
GOOD BACKLOGS FOLLOW THE ACRONYM DEEP
DETAILED
The backlog should be detailed enough so that everyone
understands the need (not just the person who wrote it).
ESTIMATED
The user story should be sufficient for the delivery team to
provide an estimated effort for implementing it. (Stories near
the top of the product backlog can be estimated more accurately
than those near the bottom.)
EMERGENT
The product backlog should contain those stories that are
considered emergent—reflecting current, pressing, or
realistic needs.
PRIORITIZED
The product backlog should be prioritized so everyone
understands which stories are most important now and
require implementation soon.
Welcome changing requirements, even late in development. Agile
processes harness change for the customer’s competitive advantage.
Figure 5: A persona template
PLAY: BUILD FOR AND GET FEEDBACK FROM REAL USERS
PRACTICE: DEFINE PERSONAS
Personas are fictitious people who
represent the needs of your users, and
they help us understand if our work is
going to be valuable for the people
we’re trying to reach. They can be very
useful at the start of the requirements
gathering process, but typically remain
important throughout.
Each persona should capture the user
and their individual needs. Create a
template with an area to draw a picture
of the user and separate spaces to
describe the user personally, but also
describe their desired goals, use cases,
and desires for the software.
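If the team keeps personas alongside its other artifacts, a lightweight structure with the same fields as the template can work; the persona below and its details are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """Fields mirror the persona template in Figure 5; illustrative only."""
    name: str
    age: int
    gender: str
    occupation: str
    tech_usage: str
    goals_and_needs: list[str] = field(default_factory=list)

maria = Persona(
    name="Maria",
    age=34,
    gender="female",
    occupation="case worker",
    tech_usage="web savvy, laptop, smart phone",
    goals_and_needs=["find a case file in under a minute", "work offline in the field"],
)
print(maria)
```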
PRACTICE: TALK WITH USERS ABOUT
NEEDS, NOT SOLUTIONS
Consider the difference between “I want
the software to have one button for each
department in my business” and “I need
to be able to access each department in
my business from the software.”
Users often think they know exactly what
they want to see, but we find it can be
much more eective if we understand
the needs and then exercise creativity in
how to provide a delightful experience
that satises that need.
A sprint backlog is a detailed list of all the
work we’ve committed to doing in the
current sprint—just a few weeks of work.
Once we set it up in sprint planning, it
remains locked for the duration of the
sprint. No new (surprise!) work should be
added. The whole team should keep it up
to date (at least daily), and items should
be marked complete based on the team’s
agreed denition of done.
The persona template (Figure 5) captures the customer's name, a picture (yes, draw it!), a description (age, gender, occupation, and tech usage: web savvy, desktop, laptop, tablet, smart phone, favorite sites/apps), and the persona's goals and needs.
“Our highest priority is to
satisfy the customer through
early and continuous delivery
of valuable software.”
As much as you can, guide your users into
conversations of value and need, and
let the delivery team work through
the solution.
PRACTICE: EPICS AND USER STORIES
User stories
User stories are the agile response to
requirements, and you can call them
requirements if you like.
They often look like this:
+ As a solo traveler
+ I want to safely discover other travelers
who are traveling alone
- So I can meet possible companions
on my next trip.
There are a few things that are
dierent about user stories versus
typical requirements.
+ They are communicated in terms of
value, from the user perspective.
+ They might look a little lightweight at
first. We agree we need to tell stories;
we know there are details that can only
be sussed out through collaborating
with our users and stakeholders. We
understand that what is written down
is imperfect. What we want to do is
capture enough to get started!
A TEMPLATE FOR USER STORIES AND EPICS
Epics and user stories are described below, but both share a typical template:
+ Title (short)
+ Value statement
- Outlines and communicates the work to be completed and what value
delivering this epic or story will bring to a specific persona/user
- Format: As a (user role/persona) I need (some capability), so that
(some value)
+ Acceptance criteria
- An outlined list of granular criteria that must be met in order for the story
or epic to be fully delivered and adequately tested and verified. This helps
inform development and understanding of when the story is ready to
demonstrate or test further
THE THREE Cs
CARD
This is the description of the user story itself, written on a card or in a tracking tool.
The card should give us enough detail to get started and know whom to talk to.
CONVERSATION
We acknowledge that anyone implementing needs to—and should—speak
with some of the players involved in the value that story will deliver. So they
need to go have a conversation and record anything that needs to be preserved
from that conversation.
CONFIRMATION
Once implemented, every user story needs to be verified. We call this the
Confirmation. And we should record in the user story how we intend to verify
it. This could be the list of acceptance criteria, test plans, and so on.
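One way (among many) to record the Confirmation is to express each acceptance criterion as an automated check. Below is a hedged pytest-style sketch for the solo-traveler story shown earlier; the find_companions function and its behavior are hypothetical stand-ins for whatever the team actually builds.

```python
# Hypothetical acceptance checks for the "solo traveler" user story.
def find_companions(traveler, travelers):
    """Stub so the sketch runs; the real implementation would differ."""
    return [t for t in travelers if t != traveler and t.get("solo")]

def test_solo_travelers_are_discoverable():
    travelers = [{"name": "Ana", "solo": True}, {"name": "Raj", "solo": False}]
    matches = find_companions({"name": "Kim", "solo": True}, travelers)
    assert [m["name"] for m in matches] == ["Ana"]

def test_traveler_never_matches_themselves():
    me = {"name": "Kim", "solo": True}
    assert me not in find_companions(me, [me])
```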
The Product Owner collaborates with
the delivery team to develop user stories
that will mold the product's functionality.
User stories are identied and prepared
throughout the project’s lifecycle.
There are two devices we typically use to
describe good user stories: the three Cs
and INVEST.
Epics
Epics are broad functionalities we want
our product to deliver. They are larger
than user stories. Epics would typically
take multiple sprints to deliver, and
would ultimately be broken down into
many user stories. You can still write
epics like user stories, and the guidelines
above typically apply, with the exception
of Small and Estimable.
INVEST
A way to see if your user stories are pretty good is to consider the INVEST acronym.
INDEPENDENT
Stories should be as independent as possible, so they can be implemented out
of order.
NEGOTIABLE
We should be able to discuss the details of the user story, find the optimal
solution, and not treat the initial writing as gospel.
VALUABLE
A story must deliver value to the user or customer when complete.
ESTIMABLE
Stories should be such that we can estimate their eort.
SMALL
User stories should be small enough to prioritize, work on, and test. For teams
using sprints, user stories should be able to be completed inside one sprint.
TESTABLE
We should know how to verify and test the story.
STRONG VS. WEAK USER STORIES
Strong user stories are:
+ Developed and prioritized by the Product Owner
+ Written from the user's perspective
+ Simple and concise, with clear alignment to business value
+ Entry points to a conversation on how the implementation activities can be decomposed
+ Written with easy-to-understand acceptance criteria
Weak user stories are:
+ Created with limited Product Owner involvement
+ Developed without designating the specific user or user group that will receive value from the story
+ Missing a description of the business value
+ Technical specifications that don't link to the user's point of view
+ Open ended with no means to validate acceptance
Table 1: Strong vs. weak user stories
An agile team (see Figure 6) includes three roles:
Scrum Master: removes impediments and keeps the team on track; facilitates ceremonies and helps the team reflect and stretch themselves.
Product Owner: helps the team know what to build; owns the backlog and works directly with the delivery team.
Team Members: do whatever it takes to create value and meet the sprint commitments; figure out how to build it; develop, design, test, ask questions, and hold each other accountable.
TEAMS
Agile teams are where the work gets done.
Team members care about each other,
their work, and their stakeholders. And
agile teams are constantly stretching,
reaching for high performance.
Build projects around motivated
individuals. Give them the environment
and support they need, and trust them
to get the job done.
The most ecient and eective
method of conveying information to
and within a development team is
face-to-face conversation.
PLAY: ORGANIZE AS SCRUM TEAMS
Agile teams are cross-functional in nature
and work together to analyze, design,
and build the solution their customers
need. Agile team members, together, can
understand the business or mission needs
and create an eective solution that meets
those needs.
We use one particular set of vocabulary
here for this playbook, which is reflective
of some of the most common terms found
across government and industry; but,
individual client environments may dictate
dierent names for things. We stress that
the jobs and structures mentioned here are
helpful in any context, and we urge teams
to strive for the greatest agility possible,
then seek to continuously improve through
small changes.
Figure 6: An agile team
PRACTICE: BUILD THE SCRUM TEAM
Scrum gives us a simple model for a team,
and we believe this is a valuable frame for
most agile teams. Scrum suggests just
three roles:
+ Scrum Master
+ Product Owner
+ Team Member
Although Scrum is a specic agile
framework, (explained more in the
Delivery section), the notion of a delivery
team facilitator in an agile team is so
common and so effective, we feel it's
simplest to refer to this person as a
Scrum Master—even for a non-Scrum
team—given it is the most common
term for that role.
Growing Scrum Masters
We routinely encounter clients and
teams who place more stock in entry-
level certifications, like Certified Scrum
Master, than is truly warranted. While this
certification and others like it provide a
great overview of the Scrum framework,
they do not magically make someone an
effective Scrum Master. Because of this,
we recommend the following learning
plan for our Scrum Masters:
The Scrum Master
The Scrum Master is an experienced
agilist, responsible for upholding agile
values and principles for the delivery
eort. The Scrum Master helps the team
execute its agreed-upon delivery process
and typically facilitates common team
ceremonies, like daily standup meetings,
planning meetings, demos, retrospectives,
and so on. The Scrum Master biases
the team toward action and delivery
and stretches the team to continuously
improve, hold each other accountable,
and take ownership of the way the
process works.
Table 2: Scrum Master learning plan
CORE OR ELECTIVE CERTIFICATION DESCRIPTION
Core ICP ICAgile Certified Professional
Core PSM I Professional Scrum Master I
Core ICP-ATF ICAgile Certified Professional in Agile Team Facilitation
Elective PMI-ACP PMI-Agile Certified Practitioner
Elective ICP-ACC ICAgile Certified Professional in Agile Coaching
Elective SA SAFe Agilist
The Product Owner
The Product Owner is the person who
most considers the mission or business
value of the solution being developed. They
are responsible for maximizing the return
on investment for the development effort,
and they speak for the interests of the users.
They could be from the client organization;
if they are from the Booz Allen team, they
represent the client's perspective. They
interact regularly with the delivery team,
clarifying needs and providing feedback
on designs, prototypes, and iterations
of the solution.
Growing Product Owners
We use the following learning plan
to grow Product Owners and agile
business analysts:
PRODUCT OWNERS TYPICALLY:
1. Create and maintain the product backlog
2. Prioritize and sequence the backlog according to business value
3. Assist with elaboration of epics into user stories that are granular enough
to be completed soon (like a single sprint)
4. Convey the vision and goals for the project, for every release, and for
every sprint
5. Represent and engage the customer
6. Participate in regular team ceremonies (like standups, planning, reviews,
retrospectives)
7. Inspect progress every sprint, accept or reject work, and explain why
8. Steer the team’s direction at sprint boundaries
9. Communicate status externally
10. Terminate a sprint when drastic change is required
Table 3: Product Owner and agile business
analyst learning plan
CORE OR ELECTIVE CERTIFICATION DESCRIPTION
Core ICP ICAgile Certified Professional
Core PSPO I Professional Scrum Product Owner I
Core ICP-BVA ICAgile Certified Professional in Business Value Analysis
Elective PMI-ACP PMI-Agile Certified Practitioner
Elective ICP-ATF ICAgile Certified Professional in Agile Team Facilitation
Elective SPM/PO SAFe Product Manager/Product Owner
The team member
Team members are, of course, the other
members of the team, such as testers,
developers, and designers. They are the
people who, collectively, work together to
deliver value on the project. They carry a
diverse set of skills and expertise, but they
are happy to help out in areas that are not
their specialty. Team members, together,
take collective responsibility for the total
solution, rather than having a “just my job”
outlook. Team members do whatever they
can to help the team meet its sprint goal
and build a successful product.
On functional roles and T-shaped people
On agile teams, there should be less
emphasis on being a “tester” or a
“business analyst” or a “developer”; we
should be working together, sharing the
load, collectively getting to the goal. That
said, we still value the specialties and
disciplines our team members bring to
the project. Agile teams are ideally made
of generalizing specialists, sometimes
referred to as T-shaped people, who
collaborate across skill sets but bring
valuable depth in a useful specialty to
the project. This means that while a team
member may have deep knowledge in a
particular area, known as a specialist,
they also need to build knowledge broadly,
known as a generalist.
The “T” in T-shaped is made up of a
vertical line representing the deep
knowledge of a specialist and a horizontal
line representing the broad knowledge of
a generalist. Specialists who are deep
but not broad can only accomplish work
in a particular area and can become a
bottleneck when they are the only team
member who can accomplish the work.
Conversely, generalists who have broad
but not deep knowledge can become a
bottleneck when the work requires a deeper
understanding or greater skill set. Teams
of generalizing specialists build trust in one
another by collectively committing to goals,
sharing knowledge, actively mentoring,
and delivering solutions.
On bigger teams, it's reasonable that
people may live in their specialty more than
on a smaller team. Team composition is
heavily influenced by financial feasibility.
Teams working for a client under contract
often have constraints related to a labor
category and hourly rate. This is the perfect
opportunity to bring junior members
along and encourage learning skills outside
the specialty as needed. For example, a
junior developer may benefit from learning
test automation, which is adjacent to
development. Constraints on skill sets
or level are the perfect opportunity to
emphasize the need for teams with
T-shaped members. Ultimately this is a
balance that the team, with all its local
context, should talk about and adapt
through inspection and conversation.
< Assuming a class is enough >
While agile education is a core piece of the adoption puzzle, sending a
few people (or worse, a single would-be Scrum Master) to a 2- or 3-day
certification course is not a recipe for success. In many ways, agile
represents a paradigm shift for individuals, teams, and leadership. These
entry-level agile certifications simply introduce this new way of thinking
and some of the popular frameworks or practices. They do not equip an
individual with the essential change management skills required to achieve
sustainable agility. We employ a team of certified and experienced agile
coaches to partner with teams on their journeys towards agility and
high performance. Read more about creating a coaching capability from
Lyssa Adkins [Adkins 2015].
On team size
Our experience shows that a cross-
functional team of fewer than 10 is the
preferred team size. This is supported by
the Project Management Institute and
generally follows the Scrum Guides.
This size would include a dedicated
Scrum Master and a Product Owner.
As the project size scales beyond 10,
the eectiveness of the team per person
declines, and signicantly more time is
spent coordinating work. If the scale of the
work requires a large team, we need to
think about how that can be divided into
multiple cross-functional teams that are
tightly coordinated but loosely coupled.
More thoughts on scale can be found in
the Agile at Scale section.
Team structure patterns
In its simplest frame, agile teams have
people who focus on the value of things
(what needs to be built) and other folks
focused on the delivery of things (how we
will build it, what's possible). Though they
operate as a single team with one goal and
one purpose, you can also consider that
there are two virtual, smaller teams here:
a value team and a delivery team. There
are a few patterns we see work well, so
we explain those in the next three plays.
< Assuming delivery or maintenance are
someone else's problem >
So we're back to throwing things over the fence in a “that's not my job”
mentality (see generalizing specialist). Not only does this cause tension
between teams, but it also leads to poor quality and just plain “bad stuff.”
When we don't take responsibility for delivering valuable, quality solutions
as a team in the larger sense of the word, we shortchange our customer.
< Scrum Master is also the Product Owner >
The Product Owner naturally wants as much value completed in the
shortest time to market. The Scrum Master is there to facilitate the
team, help unblock impediments, and stay attuned to the team's needs.
The natural tension that exists between the Scrum Master's and Product
Owner's goals helps the team find balance. If they are combined into
one person, you lose the benefits of each role. Either the Product Owner
no longer pushes the team to get the most, based on their Scrum Master
persona; or the Scrum Master no longer cares about the team's pulse
and pushes as the Product Owner. The team needs both roles.
Figure 7: An agile project team
PLAY: EXPAND TO A VALUE TEAM WHEN ONE PRODUCT OWNER ISN'T ENOUGH
If the mission needs are suciently
complex or there are complicated
relationships with multiple client
stakeholders, it might be impossible to
have just one Product Owner. In that case,
we recommend thinking of a larger value
team. This team might grow to around
10 people and would typically include
representatives like business analysts,
compliance interests, end users, and so on.
In this case, there can still be a Product
Owner, but that person is now the value
team’s facilitator, bringing together all
those perspectives and creating a common
voice, and typically would not be able to
trump the other stakeholders. In this
setting, the jobs of the Scrum Master and
Product Owner become more dicult—
trying to coordinate all the interests at
play—but they can still be very successful.
PLAY: SCALE TO MULTIPLE DELIVERY TEAMS AND VALUE TEAMS WHEN NEEDED
If the overall solution scope is very large,
the expertise required is sufficiently diverse,
or the timeline is constrained such that a
single team is insufficient to produce the
solution, then you'll have to scale up to
multiple delivery teams, each with its own
Product Owner/value team. There are a
few frameworks that help in scaling agile,
which we discuss later in this playbook.
Additional roles are likely required,
like architects to keep the technology
sufficiently robust and coordinated.
Figure 7 depicts an agile project team as two overlapping groups:
+ Value Team: communicates and represents client needs by defining priorities and acknowledging acceptance
+ Delivery Team: cross-functional group that does whatever it takes to produce a valuable, working product
Roles shown include the Product Owner, Scrum Master, developers, testers, architect/tech lead, business analysts, program managers, and compliance.
< Scrum Master is also the Project Manager >
The Scrum Master is there to support the team and uphold their process,
whereas the Project Manager is there to interface with the client and
keep the project on the rails. Much like the Product Owner, the Project
Manager should be in a natural tension with the Scrum Master. Managing
the work often means tracking resources, budget, risk, tasking, and cross-
team dependencies, and ensuring delivery on the work committed.
The Scrum Master focuses on the team's needs. By combining these roles,
you make both less effective. The Project Manager taking the Scrum
Master stance will put the team first, being less likely to push the team
on delivery. The Scrum Master taking the Project Manager stance would
be less likely to protect the team from outside pressures, imposing a
less favorable team atmosphere.
PRACTICE: CHARTERING
At the beginning of a project, or after
significant change on a project (in scope
or in team makeup), we recommend team
chartering. This is a meeting, ideally held
in person with everyone together, even
for a team that's otherwise distributed.
And the real focus of this meeting is:
Who are we, and what are we doing?
We recommend working through activities
to get to know each other and find out
what each member's skills and passions
are. How do you like to have fun? How do
you like to communicate? What makes you
happy? What makes you frustrated? You'll
likely want a good facilitator to pull this
chartering meeting together. Teams find
it has lasting effects on the sense of
community, empathy for each other,
and overall effectiveness.
PLAY: INVITE SECURITY INTO THE TEAM
The security perspective needs to be ever
present in the process, not just an
afterthought. From establishing requirements,
through designing the system and
implementing features, to operations
and sustainment, security needs to be
considered and baked in. In a modern
team, it is everyone's responsibility to
think about, address, and implement
secure practices. Security should be
embedded in the culture; it isn't just a
step at the end, or “that other team
down the hall.” It will typically make sense
for security-focused professionals to find
their home in the value team, when we
think about how the software may need
to attain a certain accreditation or
certification. But security-mindedness
is essential for the delivery team as well,
because we want those practices and
habits that build secure software to be
part of the routine work.
PLAY: BUILD A COHESIVE TEAM
We strongly believe in keeping the team
together. And the simplest team is under
10 people, has a dedicated single Product
Owner, and has a dedicated single Scrum
Master. This team is committed to just
one project at a time and can plan and
estimate together. The longer a team is
together, the more predictable it tends to
be, the greater its potential for high
performance, and the more it enjoys
working together.
< People are rewarded for non-agile,
non-collaborative behavior >
The hero approach is often applauded by organizations because it shows
immediate results and it’s easy to identify the reason for success. In the
long run, it diminishes the importance of tackling issues as a team.
When each team member strives to take on everything alone, they risk
burnout, becoming a bottleneck for the team, and stunting growth of
team members. When team members are rewarded for collaborative
behavior, they build trust, grow skill sets, exchange knowledge, mentor
one another, and share responsibility in success and failure.
< Moving people to work, rather than work to teams >
Management shuffles people like resources on an organization chart, so
the work is dispersed and assigned to the new project. Managers should
instead consider how to foster high-performing teams. Teams that work
together for a stretch of time begin to find their stride and form an identity.
When a team member is moved to “put out fires” on another project,
the team members left behind must re-form. They must re-establish velocity,
revisit working agreements, and begin to normalize around their new
composition. This disruption ends up being costly, since the receiving
team must do the same with the addition of the new team member.
When the project needs to increase capacity, the additional stories should
be prioritized into the team's backlog instead of moving individuals.
TEAM CHARTER
TEAM MEMBERS
+ Who are the team members? Names and preferred
contact info (email, call, text, etc.)
+ Who will be acting as the Product Owner (or
representative) and Scrum Master?
COLLABORATION LOCATIONS
+ Team area location
+ Conference call/video chat info
+ Agile board name & URL
+ Wiki URL
WORKING TOGETHER
+ What does our team value? What do we stand for?
+ Working agreement: How will we get work done and
stay happy along the way? (e.g., How will absences be
communicated? How will we hold ourselves accountable
for tasks/action items?)
+ How will our team handle conict?
+ Core working hours: Do you have core hours when team
members will be available/reachable
PRODUCT OWNER
+ Who is the Product Owner?
+ Who are our stakeholders and how do they coordinate
with the Product Owner?
+ Product Owner availability: When is the Product Owner
available/unavailable? Does the Product Owner sit with
the team? Which ceremonies does the Product Owner
lead/attend?
+ What do we do when the Product Owner is unavailable
for agile ceremonies?
SPRINT CADENCE
+ What day does our sprint begin and end? (recommend
not Monday or Friday)
+ Sprint planning: time/day/location
+ Daily standup: time/location
+ Sprint retrospective: time/day/location
+ Backlog renement/grooming: time/day/location
+ Who is invited to each ceremony? (list attendees)
+ Important known milestones: Are there any dates or
deliveries that we know of already? Begin to build out our
team roadmap (in Conuence or some other accessible
place) using those known milestones
COORDINATING WITH OTHER TEAMS/VENDORS
+ Do we ever need to coordinate across teams?
+ How will our team handle this situation?
+ How will we proactively manage dependencies/
blockers/etc.?
+ How will you collaborate? Who will facilitate the
coordination?
DEFINITIONS
+ Denition of done: How will we dene done on our team?
Getting on the same page about this and having the
discussion up front and being able to refer back to it will
save the team later. Revisit what done means to you once
in a while to make sure you update it as things change
(e.g., passed unit tests, documentation done, peer
reviewed, code checked in…)
+ Denition of ready: How can we make sure something
is ready to be worked or ready to be planned? (e.g., meets
INVEST criteria; independent, negotiable, valuable,
estimable, small, testable)
+ Product vision: It’s important to know where you are
going as a team. As a pod as part of the larger team, what
is the vision for the larger team? What is the vision for
your particular project/group?
+ Estimation: What estimation scale/points are we using?
Will we estimate using planning poker?
PRACTICE: COLOCATE
Colocation is still best
We still recommend colocating agile teams
as much as possible. Being together in one
room, or in close proximity, allows a lot of
things that are much tougher otherwise:
You can pull together an impromptu
meeting or demo more easily. You can put
an idea on a whiteboard, or tape up some
designs on the wall and get feedback
quickly. And there’s no substitute for
hearing your teammates’ sighs, squeals,
and applause, signaling you to what’s
happening on the project right this
moment. Our brains have evolved to
communicate with much more than words,
and seeing each other’s faces is just as
important as the language being used.
< Matrixed resources >
Matrixing resources is common in organizations but can be detrimental to
the individual and to team cohesion. One key to happy employees is the
understanding of responsibilities, priorities, and where employees fit in the
grand scheme of the organization. Individuals on teams need to be able to
commit their time and be fully engaged in the work. Additionally, one team
means one set of priorities. The cost of context switching between teams or
responsibilities means the person will likely be less effective at either one.
A GREAT TEAM ROOM CONTAINS A FEW THINGS
+ Comfortable workspaces for
each team member
+ Enough space to move around
and pair with another team
member at their desk
+ Information radiators on the
walls that reect project status
and monitoring
+ A comfortable space to lounge
and be social or to use for
meetings
+ A private space for team
members to conduct sensitive
conversations or high-
concentration work
Figure 8: Booz Allen’s Charleston, SC,
agile delivery hub
PRACTICE: DISTRIBUTE WITH INTENT
Distribution can work
Colocation isn't always possible. For
lots of different reasons, we may lean toward
distributed teams: availability of talent,
cost, client geographies, and so on. We
might be fully distributed, or just have a
few team members in satellite locations.
When we are distributed, we have to put
in extra effort to stay communicative
and engaged with each other, and every
common team meeting requires more
work from the facilitators. Planning
meetings, standups, retrospectives—
distribution complicates how we
collaborate for all of them. Clearly,
technology plays a role here to keep us
together—tools like chat, video, electronic
whiteboards. If your team is distributed,
you need to approach this intentionally;
fewer good things will happen by accident
than what we see in colocated teams.
WE HAVE A FEW TIPS THAT CAN HELP DISTRIBUTED TEAMS
+ Use video. Make sure each team member, regardless of location, has a
camera, and create a virtual team room by having everyone on camera all
the time. This way, when speaking on the phone or chatting via IM, we still
get to see facial expressions.
+ Get o email. Consider banning email for communication inside your team.
Emails are easy to lose, and waiting for email responses slows teams down.
Use the phone more, and get a persistent chat room tool for your team.
+ Facilitate from the lowest-bandwidth perspective. If you need to lead a meeting,
and four people will be in a conference room together and two will be attending
via Skype, then you should get on Skype and conduct the meeting from another
room, treating each person as remote. This ensures no one is overlooked.
+ Talk about it. There’s no substitute for regularly asking each other how things
are going, and if we can improve. As time goes on, you’ll need dierent
rhythms, dierent tools, or just to express what’s frustrating you about the
way we work. Create space for those conversations. Distribution can be tough,
but there are many teams that do it well, and it unlocks a certain freedom
that those team members really enjoy.
CRAFTSMANSHIP
Agile software engineering is still
software engineering. Ron Jeffries tells
us, “The software we build has to be
robust enough to support the business
need. It needs to be sufficiently free
of defects to be usable and desirable.
It needs to be well structured enough
to allow us to sustain its development
as long as required.”
Robust, reliable, well designed. These are
not things that we just automatically get
by having a Product Owner and a Scrum
Master. Much less are these things we
get by having really good Portfolio Vision
and an Agile Release Train.
Delivering great systems requires a
dedication to the craft and an eye
toward excellence. Technical agility
and strength are necessary for teams
to be truly agile. This section walks
through practical ways to inject
technical health into your solutions.
PLAY: BUILD IN QUALITY
We will delve into some ways to ensure
quality is always considered in every
aspect of software delivery. Security and
quality cannot be thought of as bolt-on
or follow-on functions after development
is done. Building security and technical
AN EXAMPLE OF “DONE” MIGHT LOOK LIKE THIS:
1. Code complies with the standards agreed upon through manual or automated static
code inspection
2. Unit test has been written and passed
3. Code has been reviewed by a peer
4. Code has been checked in to a repo
5. Code has passed automated or manual security and/or compliance inspection
6. Code has been successfully integrated into a build
7. User story has been successfully deployed to a test environment
8. User story has passed functional testing
excellence into the solution as we go is a
shared responsibility, and we need the
team to continue to stretch itself to
achieve better results over time. The team
will do this by using agreed-upon practices,
talking together, regularly inspecting, and
automating their work.
PRACTICE: TECHNICAL DEFINITION
OF DONE
In addition to the Product Owner
accepting user stories as complete, the
team must also determine what practices
they will follow to determine when the user
story is of sufficient quality to review with
the Product Owner. This definition of done
must be explicitly discussed and followed
by all team members. The definition often
includes the practices within this play.
“If we do not build it well, all
our teamwork, communication,
retrospectives, business focus and
WIP limitation are for nothing.”
–RON JEFFRIES
“Continuous attention to technical
excellence and good design
enhances agility.”
PRACTICE: ADOPT A CODING
STANDARD
Forming a coding standard is an essential
task for any agile team. Creating agreed-
upon documentation that denes the
style in which code will be written and
any practices to be followed or deemed
unacceptable prevents the occurrence of
potential problems. The team’s coding
standards also likely will include topics
such as how errors are handled and how
code is structured (directory convention,
etc.). The intent is to ensure the delivery
team develops code uniformly, which
aids future code updates and developer
comprehension and eases code review.
Coding best practices fall into two groups:
independent best practices (e.g., variable
naming conventions) and dependent best
practices (e.g., how to use aspect-oriented
programming principles).
The team and its senior developers should
dene the initial coding best practices the
team will use during software development
activities. These practices should be
presented to the team and discussed in
detail to ensure complete understanding
of how to execute acceptable coding
practices, as well as the implications of
not doing so. During retrospectives, these
practices should be considered and
modied as needed to reect possible
improvements for the team.
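To make this concrete, here is a minimal, hypothetical excerpt of the kind of agreement a coding standard might capture, sketched in Python; the language, module, function name, and the specific error-handling rules below are assumptions made for illustration only, not a prescribed standard:

    # Hypothetical excerpt from a team coding standard: handle errors explicitly,
    # never swallow exceptions silently, and log enough context to diagnose failures.
    import json
    import logging

    logger = logging.getLogger(__name__)

    def load_user_profile(user_id: str) -> dict:
        """Return the stored profile for user_id, or an empty default profile."""
        try:
            with open(f"profiles/{user_id}.json", encoding="utf-8") as handle:
                return json.load(handle)
        except FileNotFoundError:
            # Agreed convention: a missing profile is expected; log at INFO and
            # return a safe default rather than raising.
            logger.info("No profile found for user %s; using defaults", user_id)
            return {}
        except OSError as err:
            # Agreed convention: unexpected I/O errors are logged with context
            # and re-raised so the caller can decide how to recover.
            logger.error("Could not read profile for user %s: %s", user_id, err)
            raise

A standard written down this way gives reviewers something objective to point to during code reviews and retrospectives.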
PRACTICE: PAIRED WORK
Pairing with a colleague benets the team
through early identication of potential
issues, shortened review cycles, and
enhanced commitment to the quality of the
product, in addition to being a learning
opportunity for both parties. It should be
employed when creating any artifact for the
project, from code through documentation.
PRACTICE: CODE REVIEWS
There are two primary benets of code
reviews. First, any bug found during
review is cheaper to x then (by orders of
magnitude) than if it is found later in the
process. Second, a team that is practiced
in inspecting the code tends to be able to
embrace the agile principle of collective
code ownership. All of the code (and any
bugs identied) is the responsibility of the
team, rather than a specic individual.
This mindset results in a tighter, higher
quality product.
Agile teams use code reviews to identify
and fix bugs and weaknesses that may
have been overlooked. Code reviews
also enable senior developers to mentor
junior developers on how to write better
quality code.
Code reviews are performed for all code
changes and should be included in the
team’s software development workow.
Generally, code is not included in the
next build or made available to the test
team unless the team has held a review
on that code. Typically, the following
elements will be included in the team’s
code review practice:
< Poor technical practice >
Agile does not dictate technical practices, but often it highlights issues
that exist. Increasing the pace of deployments or automation shows
weaknesses in the technical practices through a faster learning cycle.
The goal is to decrease time dedicated to defect correction or manual
processes that can slow down a team.
+ Review changed code
+ Verify functionality
+ Review results of security and code
quality scans.
+ Review test case results, including any
automated unit or regression tests.
Teams should create a brief training for
new developers to introduce them to how
reviews work on their team.
Collaborative code reviewing tools, such as
Atlassian's Crucible, provide the ability to
add inline comments on code with
informative feedback. Additionally, look for
ways to automate basic code quality review
using tools such as Atlassian Clover or
Code Climate.
< No documentation >
Agile is not an excuse to toss out all of your documentation! The Manifesto
simply states, “Working software over comprehensive documentation.”
But, we do emphasize looking at your documentation and understanding
its value proposition, just as we do for features we build. Who is asking for
it? Why are they asking for it? What will it be used for? If you are on a team
that is not documenting anything, it is worth investigating.
FOR CODE REVIEWS
DO…
+ Look for code style issues
+ Look for obvious coding mistakes
+ Ask questions/pose alternatives if code appears complex
+ Look for code areas that seem fragile
+ Provide constructive feedback, contributing to a solution
if possible
+ Look for concise and clear comments to explain unusual
cases, techniques, and anything that feels non-obvious
+ Be respectful in your comments; picking apart someone
else’s code can make the author feel vulnerable
+ Ensure someone has the job to shepherd the review
through timely completion
+ Ensure team members have had a chance to review code
before considering the review done
+ Link the code review artifacts with how you track the
original work request (ticket, requirement, etc.)
DON’T
+ Close out a review someone is actively working on
+ Try to obtain a 100% complete understanding of all
the code
+ Examine every possible input case; this also takes way
too much time
+ Expect overly verbose comments
PRACTICE: SOURCE CODE ANALYSIS
Source code analysis is empirically proven
to be one of the most effective pre-test
defect-prevention techniques, increasing
quality and reducing downstream rework.
We require more advanced methods
to address defects, vulnerabilities, and
sub-standard coding practices to ensure
the highest levels of structural quality,
maintainability, and security before
application deployment. Project teams
need an automated and repeatable way
to measure and improve the application
software quality of multi-platform,
multi-language, and multi-sourced
applications. Implementing source
code analysis includes:
+ Planning for and incorporating
code scanning as part of the
continuous integration activities to
have a real-time characterization of
application health at the code level
with tools such as SonarQube
+ Performing structural quality scans
on the code with tools such as CAST;
using scan results as a way to direct
development efforts due to specific
vulnerabilities or business priorities
PRACTICE: SECURE CODING
Secure coding is a practice to prevent
known vulnerabilities and keep the code
secure by refactoring as new vulnerabilities
are identified, or as the environment
(operating systems, browsers, web
specifications, etc.) changes.
Implementing this practice includes:
+ Defining secure coding practices at
the start of the project
+ Reviewing practices during the code
review for the checked-in file
+ Using free open-source software that
checks against defined best practices
(general coding practices and custom
practices that the team defines in the
tool); see the Tools section of this
document for more information on
code review and scanning tools
+ Integrating automated code reviews
into code check-in and continuous
integration to gain the best results
with minimal manual effort
+ Scanning applications for
vulnerabilities on the server. Three
types of scans would benefit the team:
+ Scan the server for container-based
vulnerabilities. The Retina network
security scanner is one of the best
tools to use for this purpose; another
tool for the same purpose is the
Nessus Vulnerability scanner
+ Perform static analysis on the code
with tools such as IBM Appscan or
HP Fortify
+ Perform automated penetration testing
using tools such as Hailstorm
+ Documenting the actions to be taken
for each type of scan
< No definition of done >
When a team has not established and agreed to a definition of done, there
can be confusion. Each team member has a differing opinion of what
“done” means. Whether it's applied to stories, features, epics, sprints,
releases, readiness for production, etc., the definition of done should be
established by the team prior to the work being accomplished. This is
usually done during team chartering but can be established at any time
prior to planning the work. This level setting across the team means
members can hold one another accountable when it comes time to deliver.
It also prevents the lingering story that never seems to be done.
Secure coding best practices incorporate
the Open Web Application Security Project
(OWASP) Enterprise Security Application
Programming Interface (ESAPI) to simplify
and standardize the implementation of
security functions in the environment,
unless environmental factors prohibit
deployment. OWASP ESAPI toolkits help
software developers guard against security-
related design and implementation aws
by ensuring simple, strong security controls
are available to every developer. An integrity
check of software products is included to
facilitate organizational verication of
software integrity after delivery.
PRACTICE: ADDRESS SECURITY
THROUGH THE ENTIRE STACK
From physical location to client-facing
application, teams must be versed in the
skill sets needed to ensure applications
are secure. All talent from development
to security professionals should be
security-minded: trained in software
development practices that are secure
throughout. This includes awareness of
threats and attack vectors not only in the
layer of application being built but also in
the surrounding layers; layers belonging
to partners, shared libraries, and so on.
Further, as security concerns span the
entire application, an approach that
only addresses one layer in the stack is
especially at risk for breaches in any of
the other layers. Defense-in-depth is an
architectural security pattern well suited
for modern applications, which employs
a multi-layered approach to security.
As a team, practice continual learning
and diligence in understanding your
technology stack and its security posture.
< Not enough attention to quality >
Agile is sometimes seen only as the ability to deliver faster, and that
becomes the only focus. By building in multiple layers of checks and
testing, less time is needed to correct defects in the long run. Build quality
check practices into your processes, and look for opportunities to
automate them. This will ensure fewer defects and less time fixing those
defects. A defect found early in the lifecycle is significantly cheaper to fix
than one found in production.
AS A STARTING POINT, REGULARLY CONSIDER THESE ELEMENTS:
1. Are security tools used to check software vulnerabilities, and can we decide on
an action for each vulnerability?
2. Are security scans included as a part of each automated build, and is that
security posture radiated to the team and stakeholders?
3. Have compliance requirements, such as NIST, RMF, 800-53, been addressed
as technical stories?
4. Does your security strategy also address containers, network, rewalls, and
operating system for vulnerabilities? As an example, Netixs Security Monkey
is a tool that checks the security conguration of your cloud implementation
on AWS.
5. Have functional security test scripts been developed and executed to verify
security features, such as authorization, authentication, eld level validation,
and PII compliance?
6. Does the conguration of security components, such as the perimeter rewall
and Intrusion Detection/Prevention System (IDS/IPS), follow a similar model
in terms of provisioning and conguration as application servers? Use
Infrastructure as Code artifacts, to describe these congurations and to ensure
the ability to consistently and repeatedly congure components, prevent
system administration drift, and support audits and traceability of changes.
7. Is advanced network monitoring in place to actively nd vulnerabilities or
active attacks?
8. Is security talent embedded within teams, and is each team member from
developer to security professional security-minded? Remember, security is
a shared concern.
9. Is the process of dening, implementing, and monitoring security, from
beginning to end, an iterative cycle throughout the life of the software? This is
a proven strategy as illustrated by the U.S. Postal Services Secure Digital
Solutions Electronic Postmark Identity Proong project.
10. Are software security fundamentals implemented, such as OWASP’s Top 10,
as well as project-specic security concerns, such as HIPAA or PII compliance?
PRACTICE: REFACTOR YOUR
SOLUTIONS
Code refactoring is a technique for
restructuring an existing body of code,
altering its internal structure without
changing its external behavior. We refactor
to improve the nonfunctional attributes of
the software. Advantages include improved
code readability and reduced complexity
to support maintenance. There are two
other benefits to refactoring:
+ Maintainability. It is easier to fix bugs/
defects when the source code has been
written so that it is easy to understand
and grasp. This might be achieved by
reducing large routines into a set of
individually concise, well-named,
single-purpose methods. It might also
be achieved by moving a method to a
more appropriate class or by removing
misleading comments.
+ Extensibility. It is easier to extend the
capabilities of an application if it uses
recognizable design patterns and
provides some flexibility where it
may not have existed previously.
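As a small, hedged illustration of the Maintainability point above (the order-processing domain and every name in it are invented for the example), the following Python sketch shows one long routine refactored into concise, well-named, single-purpose methods without changing its external behavior:

    # Before: one long routine mixing validation, pricing, and formatting.
    def process_order(order):
        if not order.get("items"):
            raise ValueError("order has no items")
        total = 0.0
        for item in order["items"]:
            total += item["unit_price"] * item["quantity"]
        if order.get("coupon") == "SAVE10":
            total *= 0.90
        return f"Order {order['id']}: ${total:.2f}"

    # After: the same external behavior, split into single-purpose helpers
    # that are easier to read, test, and extend.
    def validate(order):
        if not order.get("items"):
            raise ValueError("order has no items")

    def subtotal(items):
        return sum(item["unit_price"] * item["quantity"] for item in items)

    def apply_discounts(total, coupon):
        return total * 0.90 if coupon == "SAVE10" else total

    def process_order_refactored(order):
        validate(order)
        total = apply_discounts(subtotal(order["items"]), order.get("coupon"))
        return f"Order {order['id']}: ${total:.2f}"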
PRACTICE: INVEST IN UNIT TESTING
Unit tests are a foundational software
engineering practice for development
teams. This practice refers to developers
writing small tests that test individual
components in the system, typically with
code that's automatically run during builds.
Designing unit testing is a challenging task
for the development team. The unit test
infrastructure and architecture need to be
designed during Sprint 0 of the project.
The unit tests could cover all layers of the
application or target only certain layers.
Unit tests should be included in the
denition of done for any work item, and
developers should discuss plausible test
strategies when the work is planned. When
a developer makes the code ready for the
sprint’s releasable software, this indicates
that the unit tests are also ready.
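As a minimal sketch of what such a test might look like, here is a unit test written with Python's standard unittest module; the function under test is invented for the illustration and would normally live in the production code, not in the test file:

    import unittest

    def subtotal(items):
        """Sum unit price times quantity across line items."""
        return sum(item["unit_price"] * item["quantity"] for item in items)

    class SubtotalTest(unittest.TestCase):
        def test_sums_price_times_quantity(self):
            items = [
                {"unit_price": 2.50, "quantity": 4},  # 10.00
                {"unit_price": 1.00, "quantity": 3},  #  3.00
            ]
            self.assertAlmostEqual(subtotal(items), 13.00)

        def test_empty_order_totals_zero(self):
            self.assertEqual(subtotal([]), 0)

    if __name__ == "__main__":
        unittest.main()

Small, fast tests like this are exactly what an automated build can run on every change.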
PLAY: BUILD IN QUALITY
… AND CHECK AGAIN
Testing incremental functionalities of
the product developed by the agile team
involves reviewing the user stories to
ensure they meet the denition of done,
are considered complete, and have
passed the acceptance testing criteria.
Preparing for testing activities includes
any artifacts required to successfully
execute testing, such as the scripts,
code, and data. The objective of any
testing activity is to determine whether
the incremental product developed
satises the intended requirements and
also proves to be a testable component.
Along with functional and regression
testing activities, automated testing and
acceptance testing are also performed.
< No testing >
If you have a project that seems straightforward or you are very familiar
with the technology, it may seem like testing is just going to slow you
down. A key tenet of the Manifesto is “working software.” Whatever we
deliver has to work, and we have to know that it works. Testing may just
conrm what you already know, but it is part of technical excellence and
is a foundation of quality.
PRACTICE: USE TEST-DRIVEN
DEVELOPMENT (TDD)
TDD is a software development practice
that increases code quality by ensuring
high unit test coverage. Unit test coverage
has been proven to increase overall code
quality by providing the rst level of test
on which later testing continues to check
additional quality factors. When using
TDD, developers write a unit test rst, then
produce only enough code to pass that test,
then refactor the code to elegantly integrate
into the existing codebase. When diligently
followed, this practice builds the foundation
for a robust, tested body of code.
TDD is performed whenever code is
written. It is not a testing methodology;
it is a software development technique.
The objective is 100% coverage for all
software with automation.
The system's correct behavior is so
well defined in the body of tests that
developers can confidently make changes
and deploy. These tests augment any
code documentation by demonstrating
the functionality, enabling developers
unfamiliar with the code to quickly
become comfortable with the intended
code behavior.
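The write-the-test-first rhythm described above might look like the following Python sketch; the requirement (validating a sprint name) and the function name are hypothetical, chosen only to show the order in which the test and the code are written:

    import unittest

    # Step 1 (red): write the test first, before any implementation exists;
    # running it at this point fails because is_valid_sprint_name is undefined.
    class SprintNameTest(unittest.TestCase):
        def test_accepts_simple_names(self):
            self.assertTrue(is_valid_sprint_name("Sprint 12"))

        def test_rejects_blank_names(self):
            self.assertFalse(is_valid_sprint_name("   "))

    # Step 2 (green): write only enough code to make the tests pass.
    def is_valid_sprint_name(name: str) -> bool:
        return bool(name.strip())

    # Step 3 (refactor): with the tests passing, restructure the code as needed,
    # rerunning the tests after each change to confirm behavior is unchanged.

    if __name__ == "__main__":
        unittest.main()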
PRACTICE: FUNCTIONAL TESTING
The team performs functional testing to
ensure the functional behavior of the
product (or system) corresponds to its
specifications; this involves testing one
component at a time. Unit testing focuses
on testing the smallest individual units
or components of the build, application
modification, or system to verify that each
component is built to design specifications.
Functional testing is performed after the
component has been unit tested to verify
functionality against the requirements
and specifications and before system
and integration testing.
PRACTICE: REGRESSION TESTING
The team performs regression testing to
isolate and resolve any errors or defects
introduced during modications based on
change requests. This testing veries that
the component still meets its specied
requirements and no adverse eects have
been identied as a result of implemented
changes. Regression testing focuses on
executing test scripts and/or scenarios in
functional and nonfunctional areas of a
system after changes have been made,
ensuring the product or system still meets
its specied requirements and will not
fail when deployed to the operational
environment or adversely aect code
currently in production. The agile team
should conduct a degree of regression
testing at the end of each level of testing to
ensure any defects corrected during each
testing segment have not caused additional
defects. Over time, we encourage all of our
teams to strive for automated regression
testing to the greatest degree possible.
< Security and QA happens at the end >
The security and QA groups aren't on the team, so you exclude them
from the sprint plans. Changing your approach to committing work can
sometimes mean bringing in groups who were previously in a different
department. It's easy to forget about the QA or security people if they
are not actively part of your teams. Leaving security and QA until the end
equates to phases in traditional waterfall. If something is discovered in
security or QA, the story must go back to the team for rework. At this
point, the team has already moved on to new stories and must stop to
accomplish the rework. The discovery may affect more than one story,
which results in extra effort to fix the problems. If security and QA can be
included throughout the lifecycle, problems can be uncovered early before
requiring much rework. This also promotes collaboration and a shared
commitment to security and quality as one team delivering solutions.
PRACTICE: AUTOMATED TESTING
Automated testing must be considered part
of the software development cycle. The
tests themselves are an integral part of
software development because they help
minimize time spent debugging and help
the team identify and resolve issues before
users do. Prior to committing code, the
code should be thoroughly subjected to
automated tests, and those tests should be
committed with the code. This helps team
members ensure their work is compatible
with the code being committed. The entire
automated test suite should be run
against all code changes before that code
is committed to ensure there are no
conicts with other areas of the project.
When a bug is found in the system, the
developer committed to xing the bug
should perform the following steps:
1. Write a unit test that expects the
specic failing behavior not to occur.
2. Run the test, which should fail because
the bug will still be in the code.
3. Fix the bug.
4. Run the unit test to ensure the test
now passes.
This practice ensures the bug can never be
reintroduced into the system without
being caught by the automated test suite.
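A hedged sketch of that workflow in Python: the new test reproduces the reported bug, fails against the buggy implementation, and passes once the fix is applied; the rounding bug and both function names are invented for the illustration:

    import unittest

    # The buggy version (steps 1-2): truncation silently drops fractional cents,
    # so the new regression test below fails against it.
    def total_in_cents_buggy(amount_dollars: float) -> int:
        return int(amount_dollars * 100)    # 19.99 -> 1998 (wrong)

    # Step 3: fix the bug by rounding instead of truncating.
    def total_in_cents(amount_dollars: float) -> int:
        return round(amount_dollars * 100)  # 19.99 -> 1999 (correct)

    class TotalInCentsRegressionTest(unittest.TestCase):
        def test_fractional_cents_are_not_dropped(self):
            # Step 4: the test now passes, and the bug cannot quietly return
            # without the automated suite catching it.
            self.assertEqual(total_in_cents(19.99), 1999)

    if __name__ == "__main__":
        unittest.main()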
PRACTICE: USER FEEDBACK AND
ACCEPTANCE TESTING
Successful agile projects regularly put
new software in front of the users for
immediate feedback. This does not
require a deployment to a production
server but does require a demonstration
or test server to which users have access
and can try out new features still in
development. Making this environment
available increases communication
between the users and the agile team.
If features are found to be o track, the
developers can start over, with only the
loss of a sprint’s worth of work versus
6 months or more if discovered later. User
testing has the added benet of showing
users that while you may not have had a
deployment recently, you are making
progress on the highest priority request.
Testing Visibility & Updates
Continuous integration tools can be
set up to notify the development team
of any failed build. These enable the
team to resolve small integration issues
early, before they become large or
multiple issues just prior to a release
or deployment.
When combined with automated testing,
code quality inspection tools, and
vulnerability scanning tools, continuous
integration becomes an even more
powerful tool for the agile team.
Automated unit tests identify problems
early and prevent integration defects
from piling up—reducing risk to release
candidates and product deliveries.
Automated code quality inspection tools
and vulnerability scanning tools can
identify poor coding practices, code
violations, and security violations without
the need for additional developer code
review. Continuous integration increases
code quality and reduces bugs in the
deployed software.
Teams that practice continuous integration
tend to be able to complete a build easily
and have a lower cost of change than
teams that have to manually exercise their
build process.
PRACTICE: USE CONTINUOUS
INTEGRATION
Continuous integration is a foundational
agile technical practice. It requires each
team member to integrate their latest
work with the trunk frequently, at least
daily, and to have each integration
verified by an automated build (with
automated testing included). Continuous
integration increases the quality of the
software by reducing the defect escape
rate and decreases maintenance and
sustainment costs. Developers working
from a local copy for days at a time is a
bad practice and contributes to risky,
complicated merges; continuous
integration mitigates this risk.
Continuous integration means that unit
tests are executed every time automated
test builds are generated from the code
repository. If one of the unit tests should
fail during execution, the continuous
integration mechanism noties the project
team every time it tries to execute the
test, until the problem is resolved.
GUIDELINES FOR SUCCESSFUL CONTINUOUS INTEGRATION INCLUDE:
+ Providing the ability for a “one-click build”
+ Executing automated builds at every commit, or at least once daily
+ Achieving 60% unit test code coverage as a target, with anything above 70%
considered exceptional
+ Providing automated notication to the entire development team when a build
or unit test fails
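As one rough illustration, and not a prescribed toolchain, a small script like the Python sketch below could act as the gate a continuous integration server runs on every commit: a one-click build that runs the automated test suite, enforces the 60% unit test coverage target, and exits non-zero so the CI tool can notify the whole team on failure. It assumes a Python project using the pytest and coverage packages, which is only one of many possible stacks:

    # ci_gate.py: a minimal, hypothetical CI gate script.
    import subprocess
    import sys

    COVERAGE_TARGET = 60  # percent, per the guideline above

    def run(cmd):
        """Run a command and return its exit code."""
        print(">>", " ".join(cmd))
        return subprocess.call(cmd)

    def main() -> int:
        # Run the automated test suite under coverage measurement.
        if run(["coverage", "run", "-m", "pytest"]) != 0:
            print("Build failed: one or more tests failed.")
            return 1
        # Enforce the coverage target; --fail-under makes the report exit
        # non-zero when the threshold is not met.
        if run(["coverage", "report", f"--fail-under={COVERAGE_TARGET}"]) != 0:
            print(f"Build failed: coverage below {COVERAGE_TARGET}%.")
            return 1
        print("Build succeeded.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())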
PRACTICE: CONTINUOUS DELIVERY
AND CONTINUOUS DEPLOYMENT
When a team has continuous integration
working well, it can consider the more
mature practices of continuous delivery
and continuous deployment.
Continuous integration gets us as far as
having a server somewhere that builds
the software and runs the automated tests,
but it doesn’t get it o that build server.
Continuous delivery is that next step:
ensuring each change to the software is
packaged up and releasable, and we can
push it to production with click. This means
we must have automated and smoothed
all the mechanical steps of the release and
deployment process, which often slows
down teams. In a continuous delivery
environment, releases become boring.
Continuous deployment goes one more
level. Each change is pushed to production
automatically. You must be practicing
continuous delivery before you can
consider continuous deployment. To
make this leap, all the qualitative and
judgment aspects of releases that might
have been performed by humans before
are now automated. Do we really trust
the automated QA? Have we covered the
edge cases? Does this release degrade
performance anywhere? Is the timing right
for the business? When the software can
automatically stop releases that should
not go out but lets the others pass, we’ve
reached continuous deployment.
WHEN GATHERING INFORMATION, IT IS IMPORTANT TO UNDERSTAND:
+ How and why is early detection of defects important to your project?
+ What is the system availability requirement and what is the plan to achieve it?
+ What is the labor cost of your operations?
+ How many systems engineers, systems administrators, and database
administrators do you have monitoring the production environments, and how
do they react when there is an issue? (DevOps Playbook publishing soon)
PRACTICE: CONTINUOUS
MONITORING
To have success with continuous delivery
and continuous deployment, you must
have the backbone of continuous
monitoring in place. Without the ability to
know how the application is performing,
the processes in place no longer stand.
Continuous monitoring is constant
validation that the application is functioning
and performing as expected. Through the
creation of monitoring tools, teams are able
to remain confident that the applications
are performing and functioning at their
optimal levels. This means that we must
be able to anticipate a failure before one
occurs and build feedback mechanisms to
gather information from the system.
When keeping the questions above in mind
with continuous monitoring, teams are able
to find issues and defects early and
automatically, which in turn increases
overall confidence in, and the reliability of,
the application.
PRACTICE: SEEK AN AGILE
ARCHITECTURE
An architecture that supports agile
projects must support the current effort
and any future complexities. Loosely
coupled service-oriented architectures
are often used to leave room for future
changes. Advancements in technology,
such as microservices and containerization,
affect the nature of the architecture and
shift the need toward a virtualized
infrastructure. Many projects do not
have the luxury of starting fresh and need
to integrate with legacy infrastructure and
constructs. When these complexities
arise, tailored solutions that take into
account latency or potential performance
issues must be developed with the
end-user experience in mind. Architecture,
just like any other part of the development,
is work that must be planned, executed,
and accounted for.
The more complex the solution, the more
planning is needed for infrastructure, as
well as application architecture. Using an
architectural roadmap to plot out future
needs will provide a type of scaolding
that will then be completed with a
detailed design that builds out just
enough infrastructure or detail needed
to complete features. Depending on the
size of architecture being built, one or
several architectural epics are executed
through technical spikes within the
sprints. The level of documentation
and diagrams needed will be informed
by the agile principles in mind during
development and implementation.
The aims of a DevOps implementation
are to:
1. Reduce risk
2. Increase velocity
3. Improve quality
Although it addresses a fundamental
need, DevOps is not a simple solution
to master. By integrating software
developers, quality control, security
engineers, and IT operations, DevOps
can provide a platform for new software
or program fixes to be deployed into
production as quickly as they are coded
and tested. That's the idea, anyway,
but it is easier said than done.
Specifics on implementing DevOps can be
found in the Booz Allen DevOps Playbook.
TO EXCEL AT DEVOPS, TEAMS AND ORGANIZATIONS MUST DO THE FOLLOWING:
+ Transform their cultures
+ Automate legacy processes
+ Design contracts to enable integration of operations and development
+ Collaborate and then collaborate some more
+ Honestly assess performance
+ Reinvent software delivery strategies based on lessons learned and project
requirements
MEASUREMENT
Measurement aects the entire team. It is
an essential aspect of planning, committing,
communicating, improving, and, most
importantly, delivering.
PLAY: MAKE EDUCATED GUESSES
ABOUT THE SIZE OF YOUR WORK
It is important to understand what you
are committing to and acknowledge when
you do not. Agile teams seek to discover
this understanding through the process
of estimation.
One of the historical challenges of
software estimation is the perception
of precision. When a feature is estimated
as “273 FTE hours,” it unintentionally
gives a very false sense of precision and
confidence in that estimate. In reality, a
manager asked five different people to
provide estimates across 150 requirements
and summed them up.
Agile teams apply several simple
practices to drive conversation, quickly
expose what they collectively understand
or don't understand, and get some
numbers that are approximately right
instead of precisely wrong.
PRACTICE: USE RELATIVE
ESTIMATION
Relative estimation is a concept that
those new to the agile world often struggle
with, but it simply means “compare two
things to each other.” If you’ve ever said
something like, “Hey, this tree is twice as
tall as that tree,” you already know how
to do it.
When we think about estimating software,
we recommend agile teams do so in terms
of size. Software size is intended to
capture, all at once, the:
1. Amount of work
2. Difficulty of the work
3. Risk inherent in the work
When paired with relative estimation, it
results in questions like, “Is this feature as
complicated as the other feature we built
last week?” Or, “If we take on this feature,
is it more risky than this other feature?”
When work items are similar enough
across these questions, we give them the
same number on a scale (see table below)
that the team has previously selected.
As work items differ, the team discusses
those differences to understand just how
different the work is, relatively, and gives
a corresponding number from that
same scale.
NAME EXAMPLES
Modied Fibonacci 1, 2, 3, 5, 8, 13, 20, 40, 100
Small, not too big 1, 2, 3, 100
Same-sized work 1, too big, no clue
Powers of 2 1, 2, 4, 8, 16, 32
T-shirt sizes XS, S, M, L, XL
Table 4: Example relative estimation scales
These are just a few examples of scales
we’ve seen teams use at Booz Allen. The
most popular is the Modied Fibonacci
scale because it was documented in a
fantastic book (Agile Estimating and
Planning by Mike Cohn) and is sold on
estimation card decks [Cohn 2005]. But,
its popularity is for good reason: The
nonlinear sequence works well because
the gaps reflect the greater levels of
uncertainty associated with estimates for
bigger, more unknown pieces of work.
More mature and disciplined teams often
move toward simpler scales, such as small,
not too big (SNTB) and same-sized work
(SSW); in these cases, they essentially seek
to know if work is small and understood,
or if more discussion is required before
commitment. Don’t try to look up SNTB
or SSW; we made those names up.
What do we call these numbers?
Generally, agile teams call these unit-less
measures of software size “story points,”
or simply “points.” We recommend that
teams avoid the temptation to continue to
estimate in hours, even when deliberately
using relative scales, because the
perception to stakeholders is often a
higher degree of certainty than there
truly is. At one point, “ideal hours” were
offered as an alternative to story points.
An ideal hour is a mythical hour where
you can work, uninterrupted, with all the
knowledge and resources at your fingertips
to get the job done. In practice, we find
this causes too much confusion among
the team and stakeholders—someone
inevitably confuses an ideal hour estimate
with an actual hour from reality.
PRACTICE: ESTIMATE AS A TEAM
A generally accepted practice among
the agile community is to have the team
make estimates together. This practice
allows the team to discover how well work
is understood, encourages knowledge
sharing, builds team cohesiveness, and
fosters buy-in.
One of the most popular techniques for
having these team conversations to obtain
estimates is Planning Poker [Cohn 2016].
The gist of it is that the team talks about
its work in a structured (but fun) way. It is
based on a popular expert-based method
called “wide-band delphi,” but we
encourage you to provide chips and salsa.
There are other methods (e.g., Affinity
Estimation, Magic Estimation), and
more are created all the time. What do
they all have in common? They get the
team to make the estimates together,
they drive conversation, and they expose
uncertainty. And numbers pop out that
are just good enough to move forward.
< Collecting data without purpose >
Any data collected should be intentional and with purpose to improve the
current process. Collecting the same data because you always have is
waste. Review what you do collect and how it improves the system.
PLAY: USE DATA TO DRIVE DECISIONS AND MAKE IMPROVEMENTS
Across many of our highest performing
teams and programs, we’ve found a
common love of data. The best software
practitioners really are geeks about it.
And why not?! Data is objective. Data
tells a story. Data is awesome.
Agile teams should use data to drive
important decisions, commitments,
schedules, and improvements.
PRACTICE: USE THE PAST TO
PREDICT THE FUTURE
An important input to any planning
process is the team’s capacity for doing
work. In traditional capacity planning, this
may look a lot like the sum of your team’s
planned work hours during a given period.
The trouble with this is you are making
decisions solely on estimates and focusing
entirely on the individual people on the
team rather than the team itself.
In contrast, agile teams predict how much
work they can collectively accomplish by
continuously looking back at how much
work they have accomplished. Yesterday’s
weather is the best predictor of today’s
weather (unless you live on the East Coast
of the United States).
Velocity (and throughput)
Most agile teams practicing some
variation of Scrum should measure
their velocity. Velocity is the amount of
work the team has historically been able
to deliver in a single sprint. We usually
recommend using three to seven sprints’
worth of data to produce a rolling average
of your velocity each time you walk into a
planning meeting.
For agile teams that are moving toward a
more Kanban-style delivery, the analog to
velocity is throughput. Throughput is the
amount of work the team is able to deliver
over some period of time. A Kanban team
may measure throughput in terms of hours
or days.
Some teams may get value in tracking
their velocity in a Velocity Chart; see
example below.
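A small worked example of the rolling-average idea, sketched in Python; the sprint history is made up, and the five-sprint window simply falls inside the three-to-seven-sprint range recommended above:

    # Rolling-average velocity: average the most recent sprints' delivered points.
    def rolling_velocity(delivered_points, window=5):
        recent = delivered_points[-window:]  # use fewer sprints if the team is new
        return sum(recent) / len(recent)

    # Hypothetical points delivered per sprint, oldest first.
    history = [22, 30, 25, 28, 31, 27]
    print(rolling_velocity(history))  # (30 + 25 + 28 + 31 + 27) / 5 = 28.2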
< Taking partial credit >
Have you ever felt like you’re super busy, but you’re not getting many
things nished? This happens to agile teams, too. It’s tempting to quickly
start all the current tasks on deck. But, code isn’t really valuable until it’s
running and the user can try it. Since delivering value should be our
primary measure of progress, our time is better spent taking things to
done rather than starting more things. Taking partial credit for work does
not help the team improve and deliver value. Inspect why the team is doing
this; if there are blockers to completion, those blockers should be
addressed rather than starting new work.
Figure 10: An example Velocity Chart plotting story points delivered per sprint (values shown: 110, 101, 115, 95, and 78 across Sprints 1 through 5).
Kanban metrics borrowed from lean
manufacturing (cycle time, lead time,
response time)
Kanban teams utilize several additional
metrics from the manufacturing world
because Kanban emphasizes the flow
of work through the system to produce
valuable outcomes. These metrics help
the team understand how well the system
is working.
Team predictability: Commitment variance
Commitment variance is a predictability
measure of your team’s ability to deliver on
its commitments.
In essence, it is the percentage ratio of the
team’s delivered work to the team’s
committed work, averaged over time. For
example, if a team commits to delivering
10 points in a sprint, but it actually delivers
12, the commitment variance for that
sprint is 12/10*100 or 120%. The team has
delivered 120% of its commitment.
If your team’s commitment variance tends
to be less than 100%, you may consider
requiring the team to commit to no more
than its previously delivered velocity
instead of its rolling average velocity, for
several sprints. This practice is literally
called “Yesterday’s Weather.” Once the
team becomes more predictable, you may
consider returning to a rolling average
velocity for planning.
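The same arithmetic as the example above, written out as a short Python sketch with invented per-sprint numbers:

    # Commitment variance: delivered work as a percentage of committed work,
    # averaged across recent sprints.
    def commitment_variance(committed, delivered):
        per_sprint = [d / c * 100 for c, d in zip(committed, delivered)]
        return sum(per_sprint) / len(per_sprint)

    committed = [10, 12, 10]  # points the team committed to in each sprint
    delivered = [12, 9, 10]   # points the team actually delivered
    print(commitment_variance(committed, delivered))  # (120 + 75 + 100) / 3 = 98.3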
PLAY: RADIATE VALUABLE DATA TO
THE GREATEST EXTENT POSSIBLE
Agile teams are known for their information
radiators. These are Big Visible Charts. Ron
Jeffries said it best: “Display important
project information not in some formal way,
not on the web, not in PowerPoint, but in
charts on the wall that no one can miss”
[Jeffries 2004].
Certainly, some team environments pose
a challenge in radiating information in this
way. The trend toward globally distributed
teams alone is an impediment. In these
cases, the teams will have to explore other
methods for radiating information and
discover practices that work for them.
< Using velocity to compare across teams >
Throughput and velocity are unique to every team. Said dierently, you
cannot compare teams based on their velocity measures. Relative
estimates are completely dependent on the team that is doing the
estimating; one team’s 1 might be another team’s 5. In practice, we’ve
found that efforts to standardize estimation scales across multiple teams
tend to diverge fairly quickly. Teams should be compared based on the
value they are delivering per sprint and the predictability of their delivery.
METRICS DEFINITION
Cycle Time Average time per work item (the inverse of throughput—think about it!)
Lead Time Average time it takes to deliver an item from the moment work starts
Response Time Average time it takes to begin work on a single item from the moment it is
added to the backlog
Note: These are “time per item” metrics. The average is calculated across a sample of your work items.
Table 5: Kanban metrics and definitions
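To make the definitions in Table 5 concrete, here is a hedged Python sketch that computes them from per-item timestamps; the field names (added, started, delivered), the sample data, and the treatment of cycle time as observation days divided by items delivered are assumptions made for the illustration, with all times in days:

    # Average response time, lead time, and cycle time per Table 5, in days.
    def kanban_metrics(items, observation_days):
        n = len(items)
        response_time = sum(i["started"] - i["added"] for i in items) / n
        lead_time = sum(i["delivered"] - i["started"] for i in items) / n
        cycle_time = observation_days / n  # inverse of throughput: days per item
        return {"response_time": response_time,
                "lead_time": lead_time,
                "cycle_time": cycle_time}

    # Hypothetical sample: the day each item was added, started, and delivered.
    items = [
        {"added": 0, "started": 2, "delivered": 5},
        {"added": 1, "started": 4, "delivered": 7},
        {"added": 3, "started": 5, "delivered": 11},
    ]
    print(kanban_metrics(items, observation_days=12))
    # response ~2.3 days, lead ~4.0 days, cycle = 12 / 3 = 4.0 days per item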
PRACTICE: BURNDOWN CHART
Burndown charts are the go-to chart for
agile teams. Simply stated, they track the
amount of work the team has “left” on
any given day, and, hopefully, the
team's workload goes down most days.
We have found burndown charts to be
most effective for tracking progress
“within a sprint.”
Coming out of a planning meeting (e.g.,
sprint planning), the team can sum up the
estimates for all of the work in the sprint
backlog. When stories are complete,
according to the team's definition of done,
points are burned down to show progress.
In some cases, once a team gets started
on the work, it learns a particular item is
bigger than originally thought. That, too,
is captured in the burndown chart as the
points going up.
Note that for teams practicing Kanban, a
burndown chart may not be the most
useful way to observe progress. For that,
we recommend a cumulative flow diagram
(described below).
Burndown patterns
Keep an eye on the shape of your team’s
burndown charts. We’re fans of this article
from RallyDev, which speaks to this idea
[CA Technologies 2016].
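A tiny sketch of the bookkeeping behind a burndown chart, in Python; the sprint backlog and the daily changes are invented, and the day where remaining work goes up mirrors the bigger-than-we-thought case described above:

    # Track points remaining on each day of a sprint for a burndown chart.
    remaining = 40               # sum of the estimates at sprint planning
    burndown = [remaining]       # day 0

    daily_changes = [-5, -8, 0, +3, -6, -8]  # completed stories subtract points;
                                             # re-estimated work can add points
    for change in daily_changes:
        remaining += change
        burndown.append(remaining)

    print(burndown)  # [40, 35, 27, 27, 30, 24, 16], plotted against the target line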
< Extending the sprint >
Often and inevitably, a challenge will arise where extending a single sprint
will look like a plausible solution. And why not? For decades, we've grown
accustomed to sliding schedules “to the right.” Maybe the team did not
complete all of the work they signed up to do, or maybe priorities
drastically changed in the middle of the sprint. Or, even more simply,
maybe there is a holiday in the sprint. The answer to this question should
almost always be no. The team's sprint cadence is an essential piece of its
rhythm and predictability. If you're not sure why the sprint is coming up
short, that's a good thing to talk about at the retrospective.
Figure 11: A burndown chart excels at tracking progress within a sprint. It plots work remaining (in points) against a target line over time; when the remaining-work line sits above the target the team is behind plan, and when it sits below the target the team is ahead of plan.
PRACTICE: BURNUP CHART
Burnup charts are quite similar to burndown charts; they just go in the opposite direction. Whereas the burndown chart shows the team “burning down” their work to 0, the burnup chart shows work being completed and moving up. Likely the largest difference between the two is the addition of a target line in the burnup chart. As the team makes progress, burning points up, the total amount of work it expects to get done is also graphed. This chart is an excellent way to display changes in scope or understanding over time.
It is for this reason that we recommend teams use burnup charts for tracking bigger efforts across multiple sprints, such as for a large epic or feature delivery, or a multi-sprint plan time horizon (sometimes called a “release” or a “product increment”).
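The sketch below, in Python with illustrative numbers only, shows the two series a burnup chart plots: cumulative points completed and the total-scope target line, which moves whenever scope is added or removed.

# A hedged sketch of burnup data: track cumulative points completed per sprint
# alongside the total scope line, which shifts when scope is added or removed.
completed_per_sprint = [8, 12, 10, 9, 14]        # points accepted each sprint (illustrative)
scope_changes = {3: +13, 5: -5}                  # sprint -> points added or removed from the plan

total_scope = 60
burnup, target = [], []
done_so_far = 0
for sprint, completed in enumerate(completed_per_sprint, start=1):
    total_scope += scope_changes.get(sprint, 0)  # the target line moves with scope changes
    done_so_far += completed                     # work burned "up" toward the target
    burnup.append(done_so_far)
    target.append(total_scope)

print(list(zip(burnup, target)))  # [(8, 60), (20, 60), (30, 73), (39, 73), (53, 68)]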
PRACTICE: VISUALIZE WORK IN
PROCESS USING A CUMULATIVE
FLOW DIAGRAM
Cumulative ow diagrams show where
work is in your process. Over time, you can
see how much work you have in any given
stage of your product ow. This diagram is
an excellent tool for teams to visualize
their WIP. It is easy to observe bottlenecks,
understand your team’s level of focus, and
get a sense of some of the Kanban metrics
described previously (cycle time, response
time, lead time).
For teams of all shapes and sizes, WIP
should be managed and kept to a
minimum to ensure continuous ow of
value and reduce wasted eort.
Figure 12: A burnup chart is great for tracking progress across multiple sprints (points completed over time against a project-total target line, with increases and decreases in scope visible)
PLAY: WORKING SOFTWARE AS THE PRIMARY MEASURE OF PROGRESS
Yes! In any discussion of measurement,
it is essential that we not lose sight
of an agile team’s true measure: the
regular and continuous delivery of
working, high-quality software that is
potentially shippable.
PRACTICE: TECHNICAL DEBT
Technical debt is a useful metaphor for “stuff that will come back to bite you in the long term.” Technical debt is the cost of fixing the structural quality violations that, if left unfixed, put the business at serious risk. The data on technical debt provides an objective frame of reference for the development team. It also provides a way for the development and management teams to have a trade-offs discussion.
Technical debt could be defects left unresolved, code that is not reviewed or unit tested, or shortcut architectural decisions. What technical debt truly means to your team is a worthy discussion to have often.
Whatever you determine to be your technical debt, you should track it. You are going to have to pay it off sometime. And, like interest, the cost of change increases over time as your technical debt ages and grows.
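One lightweight way to track it is a simple debt register. The Python sketch below is illustrative only, with made-up items and effort estimates; the essential idea is that every known debt item is recorded, sized, and revisited so trade-off conversations rest on real numbers rather than impressions.

from dataclasses import dataclass
from datetime import date

# A hedged sketch of a simple technical debt register; fields and figures are assumptions.
@dataclass
class DebtItem:
    description: str
    raised: date
    remediation_days: float   # rough estimate of the effort to pay it off
    resolved: bool = False

register = [
    DebtItem("No unit tests around billing module", date(2017, 1, 9), 4.0),
    DebtItem("Hard-coded credentials in deploy script", date(2017, 2, 20), 0.5, resolved=True),
    DebtItem("Shortcut: orders table duplicated for reporting", date(2017, 3, 6), 6.0),
]

open_debt = sum(item.remediation_days for item in register if not item.resolved)
print(f"Open technical debt: {open_debt} days across "
      f"{sum(not i.resolved for i in register)} items")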
Figure 13: A cumulative flow diagram shows To Do, In Progress, and Done stories over time, with lead time, response time, cycle time, and periods of good focus and lack of focus visible at a glance
HERE IS A SHORT LIST OF CODE QUALITY INDICATORS THAT ARE WORTH TRACKING:
+ Unresolved defects over
time, sliced by severity
+ Unit, integration, and
functional test code coverage
+ Frequency and duration
of builds
+ Code review coverage
+ Code coupling and
cohesion metrics
“Working software is the primary measure of progress.”
PRACTICE: VALUE PREDICTABILITY
We’ve previously discussed velocity as a measure of the team’s capacity to deliver work. Because of its name, we sometimes forget that it is slightly different from the velocity of the physical sciences (in physics, velocity is a measure of speed and direction). In the agile world, velocity has no sense of the team’s direction. Velocity is valuable input for planning, but it is not exactly an indicator of progress.
To help get a sense of the team’s direction and the value of its output, we turn to the value predictability measure. This measure is a comparison of the actual value delivered and the planned value delivered. Coming out of a planning session, the team’s Product Owner assigns a value estimate to each work item (usually on a simple relative scale, like 1 to 10). Then, coming out of a review, the team’s Product Owner identifies how much value the team delivered for each item on the same scale. Both of these sets of values are summed and tracked over time.
Note, this is a measure that we’ve seen used by teams at Booz Allen, but it is also quite similar to a measure popularized by SAFe: the program predictability measure. SAFe suggests that this type of metric could be useful at higher levels of an organization and recommends a predictability measure between 80% and 100% as being “predictability sufficient to run a business” [Scaled Agile, Inc. 2016a]. We see no reason to disagree.
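For the arithmetic, here is a minimal sketch in Python of the value predictability calculation described above. The items and value scores are invented; the measure is simply the sum of delivered value over the sum of planned value.

# A minimal sketch of value predictability; planned and delivered scores are illustrative.
sprint_items = [
    {"item": "story-1", "planned_value": 8, "delivered_value": 8},
    {"item": "story-2", "planned_value": 5, "delivered_value": 3},   # partially realized
    {"item": "story-3", "planned_value": 6, "delivered_value": 0},   # not accepted
    {"item": "story-4", "planned_value": 4, "delivered_value": 4},
]

planned = sum(i["planned_value"] for i in sprint_items)
delivered = sum(i["delivered_value"] for i in sprint_items)
predictability = 100 * delivered / planned

print(f"Planned {planned}, delivered {delivered}: {predictability:.0f}% value predictability")
# The SAFe guidance cited above treats roughly 80-100% as predictable enough to run a business.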
PRACTICE: CUSTOMER
SATISFACTION
Customer satisfaction is perhaps the most difficult part of any development effort to define and measure. It weaves together multiple factors, including user experience, value delivered, performance, timeliness, and more. Because of these complexities, it’s important to engage your target groups regularly for direct feedback and collaboration.
The simplest approach we’ve seen teams successfully use to track customer satisfaction is to ask for a letter grade (e.g., A, B, C, D, E) coming out of all demonstrations or deliveries. Tracking these letter grades over time can give you a really solid sense of how successful the program is at delivering the value it says it will deliver.
One caveat: In cases where customers may churn a lot on direction, there is a slight risk that the customer satisfaction metric could be low in spite of the team’s best efforts.
While direct stakeholder engagement is often more time consuming, it’s necessary to develop a complete and accurate picture of your customer satisfaction. Some common strategies include regular engagement with a stakeholder committee or champion group, 1:1 and small group interviews, and user surveys. To avoid overtaxing your stakeholders, develop a sampling strategy that engages different users based on your release plan over time. The combination of qualitative and quantitative data will facilitate alignment of your product with evolving user needs and business priorities.
MANAGEMENT
In a system with self-organizing teams, where is the room for a manager? What does agile management versus leadership look like, and how does this affect everyone on the team? We take a look at how the manager leads in an agile organization.
As managers with agile teams, we may find that our role has evolved, but it is still valuable. With self-organized teams, there is always this question: What do project or program managers do now? Not only do the managers themselves ask the question, but also those who report to them. The traditional hierarchy is blown up and things may get uncomfortable at first. Here we present a few shifts to consider as you engage with agile teams.
The following plays represent a progression of skill sets a manager may go through. However, each play can also be considered a stance the manager may take in order to meet the need of the team(s) regardless of progression. As a manager, ask yourself, What type of manager do I need to be in order to best serve my team(s)?
PLAY: MANAGER AS FACILITATOR
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
Some managers will have great success looking at themselves as a delivery facilitator. How can you make delivery easier? What roadblocks can you remove? Who can you connect to quicken the pace of the whole team?
The facilitator role can be difficult for those who are more comfortable with assigning work (or having it assigned) and receiving status until it is complete. Agile tools and reports are designed to make status seamless and available to anyone who wants it, without having to ask each other. We are hoping the team members will keep each other accountable for fulfilling their own commitments, but the facilitator still is watchful and plays a guiding hand in these interactions. He or she will still make sure the right people are talking and encourage the team members to stretch themselves.
Facilitators help everyone hold their time as valuable. They take an active role in making sure meetings are valuable, are not too long, and have the right audience in the room. So, while it’s really the broad team that develops the sprint or release plan, facilitators make sure the meeting is set up for success, with the right information and the right players.
Facilitators also help the daily standup
move briskly and make sure any
monitoring and reporting connections are
being made to sponsors and executives.
Facilitators may nd that they get more
opportunity to focus on the people they
work withand encourage the team
members to think in that frame as well.
Formerly repetitive processes associated
with status are now valued less than the
cohesiveness of the team and building
something valuable together. Facilitators
ensure time is spent reecting and
discussing how the team works together
(most focused during retrospectives but
welcome at any time).
PLAY: MANAGER AS SERVANT LEADER
Through facilitative leadership, where you
no longer need to direct the team but
rather enable its own direction, you are
ready to embrace servant leadership. In
facilitation, you often learn you are not
the person with the answer; rather, you
look to the team for answers as it helps
the individuals and team more than it
helps you. And, teams are very wise. This
can be a very humbling experience. The
humility you learn as a facilitator guides
your growth as a servant leader. You
celebrate the team’s success of producing
a valuable quality product. You view
yourself as the person always thinking of
ways to improve team members’ work-life
to create their best work, by clearing
impediments and not interfering. This is
strange and dierent for those used to
being the go-to person, or being a hub.
For teams used to having a clear leader to
make all decisions, servant leadership can
be very unnerving. Sometimes there are
those who need permission to proceed.
Moving to this leadership style can initially cause apprehension and delays with people who usually complete tasks very efficiently. Once again, managers
facilitate an individual’s reliance on the
rest of the team. Agile practice provides us
tools to help individuals answer their own questions. If you are ready for more work, turn to the Kanban board for the next prioritized item on the backlog. If you are unsure about roles, refer back to the team charter. If there is anything not covered in the agile tools, talk about it and create something for yourselves (e.g., knowledge base, wiki pages).
PLAY: MANAGER AS COACH
So you have been practicing agile for a while, and you find yourself telling stories about how your past programs handled situations. Your agile books are dog-eared and you’ve seen a lot of success and failure. Although you may hold the same role as a “Project Manager,” you have more depth to draw from and are taking more of a coaching stance with your team. You’ve found that you ask a lot more questions than give advice, and you’ve taken a deep interest in developing others to unlock their personal potential.
Just as a team starts with a more prescriptive framework, a facilitator starts with training and later is able to apply lessons informed by personal experience. Experience with people, process, and technology coalesces for someone ready to coach. A coach knows when to apply skills from mentoring, teaching, and facilitating; coaches expect to adapt rapidly and help teams do the same.
The roles may have changed, but ultimately striving for a long-term, sustainably happy work environment means living through the changes. Throughout the process, acknowledge with each other when things are uncomfortable, and examine if it’s the change or if something is truly not working. Issues may emerge that have always existed in the organization, so be ready to address them through organizational, process, or change reengineering.
Senior Lead Engineer Kelly Vannoy, Associates Jason Rauck, Associate Warren Pennington, Senior Consultant Brittney Pauls, Senior Consultant Manoj Ram, Lead Associate Christina Welborn, and Lead Associate Courtney Anderson.
ADAPTATION
Adaptation is key to improving team outcomes and adjusting as the environment and team change. In this section, we look at a couple of ways to regularly examine and find ways to improve the team and product. Routinely adapting, and expecting adaptation, are vital to a healthy agile project and a healthy agile team.
PLAY: REFLECT ON HOW THE TEAM WORKS TOGETHER
If a team must pick only one agile practice
to adopt, it should be retrospectives! As a
way to inspect and adapt the way the team
works together, they become a vital part of
a healthy team. They take the form of a regular meeting for a team to be able to pause and reflect on how things are going, adjust process, and reflect on areas of improvement with each other. The simplest form of a retrospective would be to come together and ask a team questions like, What are we doing well? What aren’t we doing well? What would we change for the next few weeks? Retrospectives should be common enough that the team is comfortable experimenting with things. It is an opportunity to tune processes and behaviors. In addition to improving the work, it builds trust and cohesiveness in the team. Many teams fall into a trap of skipping retrospectives when pressure is on. Don’t do it! Treat retrospectives with importance, and reach out to another facilitator or a coach if you’re having trouble finding value in the meeting. Holding a retrospective about once per month is a healthy practice for most teams.
We always recommend Agile Retrospectives
[Derby and Larsen, 2006] to increase your
skills in conducting retrospectives. But
there is no better learning than by doing.
PLAY: TAKE AN EMPIRICAL VIEW
Be ready for experiments! We are engaged
in creative knowledge work that did not
exist yesterday. We can’t expect to have all
the answers. But we can expect to design
smart experiments, try things, and learn.
Apply this principle in all you do.
Whether you’re deciding what technology
framework to use, or you’re just trying to
spend less time on help desk tickets,
consider an empirical view. Reach back to
elementary school and borrow from the
scientic method basics:
+ Dene the problem.
+ Form a hypothesis.
+ Dene the resources you’ll need from
your team.
+ Conduct an experiment—and
experiment with only one variable
at a time.
+ Gather datacompile test data
once completed.
+ Consider what you learned. Take
action on ndings.
Keep an open mind about experiments to
conduct; they can include technical,
interface, process, or people.
< Under pressure, so skip the retrospective >
When a team feels it is behind on committed work, or unexpected issues arise that take time, it’s easy to think the retrospective is too long a time commitment. Sometimes you have to slow down to go faster. The retrospective is an investment in the team. Without the opportunity to reflect in an allotted time, the team will continue to push aside issues that are affecting velocity. The mindset that it is an investment will help the team ultimately become more effective.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
MEETINGS
The most ecient and eective method
of conveying information to and within
a development team is face-to-face
conversation.
PLAY: HAVE VALUABLE MEETINGS
Eective meetings bring value to a team
in the form of common understanding.
The richest form of communication is
face-to-face communications while
drawing on a whiteboard to express ideas
beyond words. Codied meetings are a
central element in implementing agile,
and they serve several purposes:
+ The team gets to know each other and builds respect.
+ Ideas can be expressed, built upon, and finalized, facilitating a shorter learning cycle.
+ Meetings reinforce face-to-face communication, or at the very least talking.
+ They give regular check-in points and a rhythm to work.
+ They reduce the need for documentation to be passed around as a primary form of communication.
The table below provides an overview of the types of meetings the agile team can employ during its project’s lifecycle.
Table 6: Useful, possibly valuable meetings
MEETING | OUTPUTS | FREQUENCY
Release Planning | Scope out several sprints’ worth of work, consider risks, and high-level design | Once per Release (may occur quarterly)
Sprint Planning | Commit to the next few weeks of work, with whole-team buy-in | Once per Sprint
Backlog Grooming | Be sure the backlog is emergent and well-prioritized; refine detail and cancel dead stories | Once per Sprint
Sprint Review | Show the work that’s completed and receive regular feedback on the product | Once per Sprint
Retrospective | Take time to reflect as a team on how we’re working, and how we can improve ourselves or our process | At the End of Each Sprint
Daily Standup | Coordinate the day’s work, find impediments, and hold each other accountable | Daily
EFFECTIVE FACILITATION OF MEETINGS
+ Plan as far in advance as possible. Regular meetings should be on everyone’s radar far in advance.
+ Emphasize the importance of the meeting as part of work.
+ Ensure everyone feels comfortable contributing by asking opinions and giving everyone a chance to talk.
+ Timebox and keep to the timebox. Adding time to the planned meeting just contributes to the feeling that meetings are not useful.
+ Stick to the agenda. Use tools like a parking lot to determine follow-on meetings.
+ Provide notes for the meeting in an easily accessible place so decisions are not lost.
+ Meet in the same team space as much as possible. Keep things like your sprint Kanban board there and refer to it during meetings.
+ Emphasize respect for each other. Only one person should speak at a time.
+ Build working agreements. Ensure a working agreement for meetings is discussed for each type of meeting.
+ Review actions from previous meetings to ensure team members are keeping commitments and get the assistance they need.
“Business people and developers must work together daily throughout the project.”
AGILE AT SCALE
We know that agile teams can better
handle changing priorities while being
more productive and more predictable
[VersionOne 2016]. But, the same things
that help us create that environment on a
team of 10 may break down when we have
50, 80, or 200 people on the team. Agile at
scale is more than simply applying agile
practices on multiple teams. Agile at scale
requires multiple levels of coordination to
ensure all the teams in the enterprise are
moving in the same direction. Other
aspects such as culture and funding
must be considered as well. Implementing
agile in an organization cannot be top
down or grassroots alone. A successful
adoption includes energy coming from
both directions.
Keep in mind that, as in teams, agile is not sufficient as its own goal. Look to agility to find happier customers, higher team morale, and more space for innovation, but not just because it’s agile.
As you get to it, let’s just acknowledge
this is hard. There are no silver bullets, no
singular unifying theory or framework that
will guarantee stress-free success with
agile at scale. It will take trying, learning,
and re-learning. For an organization not
accustomed to agile methods, it will take
signicant energy to transition. But, when
in doubt, remember to use agile to scale
agile. Build in constant experimentation
and feedback to how you work. Build a
foundation of great agile teams; develop
empathic lean-agile leaders. Simple
structures and simple changes are almost
always better than complex ones. So think
simply, even for a complex organization.
Much more so than in individual teams,
consistent training is essential. Small
organizations can accidentally run into
great discoveries and organic ways of
working, but this is much less likely at
scale. Whether using a packaged delivery
framework or your own creation, training
and alignment are essential. It’s important that everyone within an organization uses the same words and understands the same principles. But don’t overemphasize the process! Process isn’t everything. Process is important, but it’s insufficient for success. Culture is as or more important
than process. As your organization
transitions toward agile, be mindful that
culture change is happening. Guide it, and
treasure great culture.
PLAY: TRAIN MANAGEMENT TO BE AGILE
To migrate an entire organization to a
new way of thinking and acting, all
members of the organization need
two-way communication to help
everyone understand what it means
to the organization and for their particular
role. This is most crucial for management
at all levels. By educating the management
team in the changing mindset and overall
vision for the organization, the team can
work through the change management
together with a unied approach. This
communication assists with organizational
change, as well as ensuring the teams have
the resources and space they need to norm.
PLAY: DECENTRALIZE DECISION-MAKING
The key to remaining agile with more than
one team is allowing decisions that are
made frequently or that lack widespread
impact to be made at the team level.
Requiring an organizational body to meet
and make all decisions for all parts of a
large organization often causes delay. The
decision delay can be as long as it would
take to execute the decision, causing waste
in the form of time for an organization.
Localized decision-making also contributes
to team ownership, which is essential to
embodying the agile principles. Centralized
decision-making still has a place in large
organizations, however. When deciding
to centralize a decision, examine whether
the decision is infrequent, impacts the
entire organization, and has a time
constraint. This will help guide whether
it would benet from decentralization.
< Lack of an executive sponsor >
The team is enthusiastic about being more collaborative and delivering
more value to the end user, but does not see how “management” can
help. Or, the management thinks agile is something only the software
developers do. Supported by the annual ndings in VersionOne’s State
of Agile survey [VersionOne 2016], most barriers to the successful
adoption of sustainable agility derive from culture change and support
(not just buy-in) from executive-level leadership. It is essential to have
a leader who understands agile and has the authority to set the vision
for its adoption across the organization, program, or team.
PLAY: MAKE WORK AND PLANS VISIBLE
Scaling agile development teams to
program or portfolio levels means
managing competing needs through
alignment of vision and synchronization
of sprints and delivery with dependence
upon each other. Just as a team has a
backlog that is regularly prioritized and
elaborated, a program or portfolio must
also have a backlog that is groomed to
allow for prioritization of work. The backlog at this level needs to have flexibility to align with near-term organizational priorities and enough elaboration to assign the appropriate level of resources. A planning roadmap is a good tool to plan for the near term, defined as the current fiscal quarter. Less detail in the roadmap
is needed as it progresses to the future,
as the longer term needs will continue to
be prioritized and elaborated upon on a
regular basis.
PLAY: PLAN FOR UNCERTAINTY IN A LARGE ORGANIZATION
How do you plan for something
unplanned? As discussed earlier,
decentralizing decision authority is one
technique to enable ow and eliminate
waiting for a centralized authority. Another way is to plan only as long term as required. Put energy into only the immediate or funded activities. Planning beyond that should consume less effort so the team can pivot if organizational context changes. Planning horizons vary among organizations, but often fiscal schedules or contracts guide the overall
roadmap. Agile is a mindset; even those
at the highest level of the organization
need to remember that.
PLAY: WHERE APPROPRIATE, USE A KNOWN FRAMEWORK FOR AGILE AT SCALE
Signicant thought has been put into
large-scale agile delivery frameworks.
We acknowledge this remains an area of
development and change. There’s no need
to start from scratch. In particular, we’d
advise you to look at SAFe, LeSS, and
Nexus. Consider these as a starting point.
You will nd that each is well documented
by its founders, but all share a notion of
being context-aware for your situation,
and customizing to what makes sense for
you, so long as it remains built on a solid
foundation of agile principles and values.
< Assuming resources, time, and scope can all
remain fixed >
We’re all familiar with the iron triangle of time, resources, and scope. Agile turns it upside down and adjusts scope, assuming time and resources are fixed. Time is fixed by using a cadence, and resources are usually associated with money, which is also fixed during a period of time. By fixing scope as well, you’re assuming everything is known in advance and the delivery and solution are predictable. This is often not reality. Much of the solution must be discovered through creativity and experimentation. Fixing all three parameters results in date slip, cost overrun, and/or insufficient delivery. There is no room for injecting new, valuable items based on learning and discovery. By allowing the scope to flex and adjust to accommodate changes in the ecosystem or lessons learned through experimentation, we shift from being plan focused to being value focused.
SOME KNOWN FRAMEWORKS TO EXPLORE:
+ SAFe: http://www.scaledagileframework.com/
+ LeSS: http://less.works
+ Nexus: http://scrum.org
STORIES FROM THE GROUND
U.S. Army Training Program: An Agile Success Story
By the mid-2000s, the program that the
U.S. Army used for training was coming
apart at the seams. More than a decade
old at the time, the Automated Systems
Approach to Training (ASAT) relied on old
technology, and, despite a slew of add-ons,
patches, and workarounds over the years,
the program couldn’t keep up with training
needs, delivered inconsistent instruction,
contained redundancies, and was
expensive to maintain.
To replace ASAT, the Army decided to
develop a new system, the Training and
Doctrine Development Capability (TDDC),
which would ostensibly be state of the art.
This plan didn’t quite work out as hoped.
While the TDDC was designed to take
advantage of the Web and of gains in
hardware capabilities, the program’s
builders weren’t as forward thinking in
their methodologies. Structured primarily
around traditional waterfall development
techniques, the project continued for
several years and chewed through nearly
$100 million. However, the resulting
program never worked to anyone’s
satisfaction. It lacked the basic
functionality users wanted and it couldn’t
handle even a minimum number of
concurrent training professionals.
Implementation delays would have meant
users had to endure ASAT for a while longer,
but the Army chose to scrap the TDDC
entirely and replace it. This time, Army
technologists were determined to try a
dierent approach. A requirement of the
newly drawn-up Request for Proposals for
the new Training Development Capability
(TDC) was that the contractor use agile
methodology, with collaborative teams,
frequent iterations, constant load testing,
and deep engagement by the user
community. A fully working product was
completed by 2008, less than 2 years after the project’s start—and there have been no hiccups. The system has been successfully rolled out to all of the Army training schools as a replacement for the ASAT system.
“Because of the first fiasco with TDDC, I came to the initial 30-day evaluation of TDC ready to fail it quickly and take an early flight home,” says Henry Koelzer, a retired artillery NCO and early evaluator of the project. But after just a few hours, he decided, “This system, and the agile programming methodology, was going to work.”
The primary failing of ASAT was its
dependence on 1990s two-tiered, fat client
architecture, which resulted in a wholly
decentralized program. “Every school was
a system in itself,” says Dennis Baston,
who is retired now but was a Supervisory
Systems Analyst at the U.S. Army Training
Support Center.
For example, the training software used at
Fort Knox’s armor school, Fort Benning’s infantry school, and Fort Sill’s field artillery
school had to be loaded manually on
servers at each of these locations. And
because the applications were stove piped,
the installations at the separate schools
could not practically communicate with
each other. The sheer redundancy of the
courseware and the need to dedicate as
many as 50 dierent servers exclusively to
ASAT was a huge a drain on technology
and nancial resources.
What’s more, once a course was placed
on the server, individual trainers at each
school could tweak it to their perceived
needs. As a result, there were multiple
versions of each set of training materials floating around, and no way of knowing which was the most current. In fact, sometimes a course got so lost in the system that it could only be found with an extensive search, and lots of manpower earmarked for it. At Fort Knox’s armor school, for instance, after a search for the most current version of the weapons maintenance course, Army training professionals finally found it in the music
school. “Who would have guessed those
people were so hard core,” Baston says.
Baston adds that when congressional
investigators and U.S. prosecutors asked
to see the training content related to
interrogation methods used at Iraq’s Abu
Ghraib prison after military personnel
were found to have abused inmates
there in 2003, it was impossible to
denitively decipher which version
each soldier actually received.
Consequently, the Army’s goal in
developing TDDC (and later TDC) was
to provide an integrated and centralized
repository of training products that
were approved, under development,
or being considered for general use.
In addition, secondary benets sought
included eliminating the duplicate
content and reducing the time to
develop training products.
The contract to build the TDDC was
awarded to what Koelzer calls “a major
company; one you would immediately
recognize.” Despite the vendor’s
reputation and resources, the waterfall
approach doomed the project from the
start. Following the typical waterfall
techniques, program requirements were
set in stone during the planning phase
even before one line of code was written.
No Army users—trainers or trainees—
saw the interfaces and tested the
functionality until TDDC was completed
and delivered. “We gave them the use
case, function points, and other major
specications, and when they were all
done, they gave us the software, which was
going to be a surprise, either good or bad,”
recalls Baston.
The contractor tried to minimize the risk
of the waterfall method by pairing it with
spiral development techniques, which
involve more testing and even agile-like
iterations during the project, but the
spiral model shares a fatal aw with
the waterfall model: the program’s
requirements cannot change during
development. So government evaluators
were uncertain what they would see when
the product was nally delivered. Combine
that with the spiral approach of working
on overlapping aspects of the project
at the same time, with separate mini-
development teams basing their activities
on user requirements, functions, and
features that were frozen in time during
upfront planning. “So what you end up
with is organized chaos,” Baston says.
But even given that the waterfall method
doesn’t allow for modications in
project design, Baston says, “What was
delivered didn’t meet the requirements
that were specied in the rst place.
He attributes this result to the fact that
he and other evaluators could not see,
or make corrections to, what was being
produced until the very end.
For example, the system was supposed to
support 6,000 training developers. But the
software couldn’t handle a load anywhere
close to that, perhaps fewer than 100. Baston pins the blame on the contractor’s testing process. Rather than assessing the system with real developers and realistic numbers of concurrent users, the contractor used a few of its own coders and not in sufficient numbers to push the software to the breaking point.
The outcome couldn’t have been more of a disaster. After carefully evaluating the TDDC, Baston determined with 98% certainty that it could not be fixed and should be shelved. However, the project lead, a two-star general who was the deputy chief of staff for operations, was not willing to trash such an expensive effort even though Baston had given it a 2-percent chance of being fixable. Says Baston, “He wanted more certainty in our findings. So we had to go back and do
more testing, more in-depth analysis, and
we ended up with a 100% certainty that it
was a complete, unrecoverable failure.”
The project was then rebid, this time as an
agile development eort. Phase 1 of the
TDC, which began November 1, 2006, was
a 30-day demonstration phase, at the end
of which the prospective contractors had
to demonstrate a prototype to a packed
house of about 30 government evaluators.
On the basis of this session, the contract
was awarded to the contractor team of
Unitech, Booz Allen Hamilton, and MPRI.
One immediate advantage of agile
methodology over the waterfall approach
was its continuous performance testing
regime even during the development of the
software. For example, load measurements
were conducted each month with an
application that estimated total system
capability based on the behavior of the
program when accessed by a large number
of concurrent users—as many as 30,000
by the time the software was ready to
launch. In addition to merely issuing a “yea” or “nay,” evaluators could request changes; when a change was small, the contractor team would often agree to do it at no cost. However, when it went beyond a small adjustment, the Army and the contractors negotiated ways to put more resources into that area of the system while streamlining other sections.
For example, the military had failed to
include a critical security function in
its system requirements. When that
substantial shortcoming became
evident, the development team and
the Government hammered out ways
to make up for it, eventually agreeing
to reuse some of the existing system
accreditation documentation from
the earlier programs. This freed up
resources to tackle the security gap.
In the end the project came in on budget
and on time. “What I saw happening was
that there was an acceptance of the system
from the user base as opposed to the
contractor having to try to force its nished
results on people,” says Baston.
The nal phase of the project, including
deployment, maintenance, and data
conversion, lasted from July 1, 2008,
through September 30, 2008, when the
training system went live.
The project was successfully deployed, and
now training professionals can access
courseware via a web browser and use the
portions of it they need without corrupting
the original program. As new content is
added to the courseware templates, the
system keeps track of which version is the
most recent and who is responsible for it.
When an appropriate supervisor signs off
on a new version, the updated training
materials are marked as complete and are
made available to anyone with TDC access.
TDC has already generated numerous
critical improvements with tangible gains.
Perhaps the single largest benefit is TDC’s impact on the preparation of course description documentation known as Course Administrative Data (CAD) and Program of Instruction (POI), which ultimately determines funding for training efforts. Accuracy is essential, so each CAD or POI undergoes a lengthy review by financial, training,
and training development experts at
Training Operations Management
Authority (TOMA) before submission to
the Department of the Army. With ASAT,
schools submitted these documents by
exporting their databases to hard drives,
which were then mailed to TOMA. In turn,
TOMA personnel would import the data
onto their servers, indicate necessary
changes, and then send the edited
documents back to the schools. The
schools would make the required
corrections, and the process would begin
all over again. This system was so
cumbersome that TOMA could barely
meet the Army’s minimum requirements
for training assessment.
In sharp contrast, under TDC, CAD and
POI are sent to TOMA through the
workow architecture within the system.
TOMA receives notication electronically,
and its experts then make comments
directly into the les and route them back
to the school for changes. As a result,
submission times to the Army for CAD
and POI have been reduced from about
1 month under ASAT to 1 day under TDC.
In addition, TDC’s security architecture
permits compartmentalization of
information not possible under ASAT.
With ASAT, restricting which information each user had access to was a complicated process. As a result, sometimes unauthorized users would inadvertently edit or change a file that didn’t belong to them. By providing five separate domains, TDC allows supervisors to limit user access to only those programs they’re authorized to work on. TDC also allows for consolidation of equipment, which reduces hardware, support and security costs, and complexity. ASAT ran on 78 different servers, each of which had to
be housed in a restricted physical location.
TDC runs on just a handful of web servers
and a single database server.
Currently, TDC is used by almost 3,000
people on a daily basis and intermittently
by an additional 3,000 users.
FOR MORE INFORMATION
Shawn M. Faunce, faunce_shaw[email protected]om, Booz Allen Hamilton
Dan Tucker, [email protected], Booz Allen Hamilton
Haluk Saker, [email protected], Booz Allen Hamilton
Wyatt Chaee, chaee_wyatt@bah.com, Booz Allen Hamilton
PARTING THOUGHTS
For teams new to the agile mindset, this Agile Playbook offers recommendations for achieving sustainable agility and success.
Use this advice along with your own conversations to instill team collaboration and improve delivery efficiency as you address and execute a vision.
We hope this gives you a good place to start. Keep in mind that each team can and should mold its approach, gain momentum, improve its skill, and become more mature.
For more information about Booz Allen, our Digital Solutions business, or our agile practice, please visit BoozAllen.com/expertise/digital-solutions.html or reach out to agil[email protected].
ABOUT BOOZ ALLEN
Booz Allen Hamilton has been at the forefront of strategy and technology for more than 100 years.
For more than 100 years, business, government, and military leaders have turned to Booz Allen Hamilton to solve their most complex problems. They trust us to bring together the right minds: those who devote themselves to the challenge at hand, who speak with relentless candor, and who act with courage and character. They expect original solutions where there are no roadmaps. They rely on us because they know that—together—we will find
the answers and change the world.
We solve the most dicult management
and technology problems through a
combination of consulting, analytics,
digital solutions, engineering, and cyber
expertise. With global headquarters in
McLean, Virginia, our rm employs more
than 23,300 people and had revenue of
$5.80 billion for the 12 months ended
March 31, 2017. To learn more, visit
BoozAllen.com. (NYSE: BAH)
ABOUT BOOZ ALLEN
DIGITAL SOLUTIONS
Booz Allen provides a full-stack enterprise digital solutions capability and high-performing, cross-functional agile teams.
At Booz Allen, we share a passion for
using the power of digital to change
the world. Through our many decades
supporting the Federal Government, we
blend in-depth mission understanding
and digital technical expertise with a
consultative approach.
We work side-by-side with you to transform
your organization and create new and
innovative digital services by combining
social, mobile, advanced analytics, cloud,
and IoT with modern techniques including
user-centered design, Agile, and DevOps.
Our digital strategists and technologists
are changing the way our clients think
about, assemble, ship, and run digital
services. From cloud platform experts
and data scientists, to Ruby and Hadoop
developers, security engineers, and user
experience designers, it’s a community
with license to open new perspectives and
with freedom to explore new partnerships
and the reuse of code and ideas.
Je Patton. 2014. User story mapping:
Discover the whole story, build the right
product, United States: O’Reilly Media.
Pichler Consulting. 2016. The product vision
board. (May 2016). Retrieved April 23, 2016
from http://www.romanpichler.com/tools/
vision-board/
Rally Software Development Corporation.
2014. Impact of agile quantied: Swapping
intuition for insight. (2014). Retrieved May
6, 2016 from https://www.rallydev.com/
nally-get-real-data-about-benets-
adopting-agile
Scaled Agile, Inc. 2016a. Metrics—SAFe.
(2016). Retrieved April 23, 2016 from
http://www.scaledagileframework.com/
metrics/#P2
Scaled Agile, Inc. 2016b. Scaled Agile
Framework—SAFe for lean software and
system engineering. (2016). Retrieved
April 21, 2016 from http://www.
scaledagileframework.com
Greg Smith and Ahmed Sidky. 2009.
Becoming agilein an imperfect
world, Greenwich, CT: Manning
Publications Company.
Je Sutherland. 2014. Scrum: The art
of doing twice the work in half the time,
United States: Crown Business.
Je Sutherland and Ken Schwaber. 2013.
ScrumGuides.org. (July 2013). Retrieved
April 23, 2016 from http://scrumguides.org
VersionOne. 2016. State of agile report.
(March 2016). Retrieved April 23, 2016
from http://stateofagile.versionone.com
Lyssa Adkins. 2010. Coaching agile
teams: A companion for Scrum Masters,
agile coaches, and project managers in
transition, United States: Addison-Wesley
Educational Publishers.
Lyssa Adkins. 2015. Developing an internal
agile coaching capability: A cornerstone
for sustainable organizational agility.
(November 2015). Retrieved April 29, 2016
from http://www.agilecoachinginstitute.
com/wp-content/uploads/2015/11/
Developing-an-Internal-Agile-Coaching-
Capability.pdf
David J. Anderson. 2010. Kanban:
Successful evolutionary change in your
software business, United States: Blue
Hole Press.
Kent Beck et al. 2001. Manifesto for
agile software development. (February
2001). Retrieved April 27, 2016 from
http://agilemanifesto.org
Booz Allen Hamilton. 2015a. Booz Allen
becomes Scaled Agile Framework® (SAFe)
Gold Partner. (October 2015). Retrieved
May 9, 2016 from http://www.boozallen.
com/media-center/press-releases/2015/10/
booz-allen-becomes-scaled-agile-
framework-gold-partner
Booz Allen Hamilton. 2015b. Booz
Allen Hamilton acquires software services
business of SPARC, LLC. (November
2015). Retrieved May 9, 2016 from
http://www.boozallen.com/media-
center/press-releases/2015/11/
booz-allen-hamilton-acquires-software-
services-business-of-sparc
CA Technologies. 2016. Reading a
burndown chart. (2016). Retrieved April 23,
2016 from https://help.rallydev.com/
reading-burndown-chart
Mike Cohn. 2005. Agile estimating and
planning (Robert C. Martin Series) 5th ed.,
Upper Saddle River, NJ: Prentice Hall
Professional Technical Reference.
Mike Cohn. 2016. Planning poker: An agile
estimating and planning technique. (2016).
Retrieved April 21, 2016 from https://www.
mountaingoatsoftware.com/agile/
planning-poker
Rachel Davies and Liz Sedley. 2009.
Agile coaching, United States: The
Pragmatic Programmers.
Esther Derby and Diana Larsen.
2006. Agile retrospectives: Making
good teams great, United States:
The Pragmatic Programmers.
ICAgile. 2010. International Consortium for
Agile (ICAgile). (2010). Retrieved April 22,
2016 from http://www.icagile.com
Innovation Games. 2015. Product box.
(2015). Retrieved April 23, 2016 from
http://www.innovationgames.com/
product-box/
Ron Jeries. 2004. Big visible charts.
(October 2004). Retrieved April 22, 2016
from http://ronjeries.com/xprog/articles/
bigvisiblecharts/
Ron Jeries. 2015. Um, agile software
development requires software development.
(July 2015). Retrieved May 13, 2016 from
http://ronjeries.com/articles/015-jul/
um-software-development/
Corey Ladas. 2010. Scrum-ban. (2010).
Retrieved April 29, 2016 from http://
leansoftwareengineering.com/ksse/
scrum-ban/
Frederic Laloux. 2014. Reinventing
organizations: A guide to creating
organizations inspired by the next
stage of human consciousness, France:
Laoux (Frederic).
REFERENCES AND RECOMMENDED
READING LIST
About Booz Allen
For more than 100 years, business,
government, and military leaders have
turned to Booz Allen Hamilton to solve
their most complex problems. They
trust us to bring together the right
minds: those who devote themselves
to the challenge at hand, who speak
with relentless candor, and who act
with courage and character. They expect
original solutions where there are no
roadmaps. They rely on us because they
know thattogetherwe will nd the
answers and change the world. To learn
more, visit BoozAllen.com.
BOOZALLEN.COM
© 2017 Booz Allen Hamilton Inc. | VCS.C.08.031.17