Find the value in planning

I decided to post this today after a conversation with a colleague who was struggling with the concept of velocity: what it means and how it is applied in sprint planning. He was struggling with a product owner using velocity to predict what could fit into a sprint and with the team using it as the definitive line for what they could work on. To his surprise, I said that this is one way velocity can be used, although it should be seen as a radiator of information, not a definitive prediction of future work.

He was also very surprised when I told him that whether velocity is used at all depends on how planning is approached; it might not factor in at all. (Insert head scratching here.)

I explained that, based on what I learned in a planning and estimation course (thank you, Mike Cohn), there are two approaches I am aware of for a team taking on sprint planning:

  1. Velocity Driven Planning
  2. Commitment Driven Planning

The former, as the name implies, makes velocity the primary measurement for determining what a team can take on, while the latter uses the available capacity of each team member as the measurement (which is why I also refer to it below as capacity driven planning).

Velocity Driven Planning

This is the type of planning that, in my experience, is most commonly used and most commonly taught in the scrum classes I have attended. In this process, a team determines its likely velocity based on past delivery and uses it to set the level of commitment they feel they can make in the current sprint. It is a view based on a historical perspective. That said, velocity is most helpful over the longer term, when consistency reveals the actual velocity of a team, rather than in the short term, and any disruption to the team means a potential change in its velocity. It is a radiator of possibility, not an absolute.

Yesterday's Weather

This is the process where the team looks back at a sprint or two (successes and failures both count) and together determines the level of commitment, in points, that it can make based on that recent history.

Average Velocity

A team may look across several sprints, sum the total points delivered, and divide by the number of sprints to arrive at an average velocity. That average then sets the maximum velocity for the team's commitment.
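
For illustration, here is a minimal sketch of both calculations, using made-up sprint numbers:

```python
# Hypothetical points delivered in each of the last five sprints.
recent_sprints = [23, 31, 27, 25, 30]

# Average velocity: total points divided by number of sprints.
average_velocity = sum(recent_sprints) / len(recent_sprints)  # 27.2

# Yesterday's weather: let the most recent sprint be the guide.
yesterdays_weather = recent_sprints[-1]  # 30

print(f"Average velocity ceiling: {average_velocity:.1f} points")
print(f"Yesterday's-weather ceiling: {yesterdays_weather} points")
```

Either number is a guide for the commitment conversation, not a quota.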

Capacity Driven Planning

This approach is much more focused on the capacity available to each team member. The team applies that capacity to the items as they are decomposed, determining when they feel "full" as a team relative to the overall hours they can utilize.

The flow looks like this:

  1. The team members determine how much capacity each of them has for the sprint. Start from the working time available in the sprint, then subtract organizational overhead (meetings, personal time off, etc.) and a projected allowance for unplanned work (support responsibilities, emergent tasks, and other items that come up), since we typically never want to plan for maximum capacity.
  2. The team selects the highest priority story from the backlog, decomposes the work into the basic tasks needed to deliver the story, and attaches an estimate, in hours, of the time projected to complete each task.
  3. The team members take the decomposed tasks and apply them against their projected capacity (for development work, the developers may need to discuss how the time spreads across multiple people with the same domain specialty). Once deducted, they have their remaining capacity, and they repeat steps 2-3 until the team is as full as it can responsibly be for the sprint; this will not always be true for every individual member. (A minimal sketch of this arithmetic follows below.)
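
As a rough, hypothetical sketch of that arithmetic (all numbers, names, and stories are made up):

```python
# A minimal sketch of commitment/capacity-driven planning arithmetic.
SPRINT_DAYS = 10
HOURS_PER_DAY = 8

def member_capacity(days_off: float, overhead_hours: float,
                    unplanned_buffer: float = 0.15) -> float:
    """Hours a member can commit: sprint time minus leave and overhead,
    discounted by a buffer for unplanned work (never plan for max capacity)."""
    gross = (SPRINT_DAYS - days_off) * HOURS_PER_DAY
    return (gross - overhead_hours) * (1 - unplanned_buffer)

team = {"Ana": member_capacity(days_off=1, overhead_hours=10),
        "Ben": member_capacity(days_off=0, overhead_hours=12)}
remaining = sum(team.values())

# Highest-priority stories, decomposed into task-hour estimates.
backlog = [("Story A", [6, 4, 8]), ("Story B", [10, 5]), ("Story C", [12, 8, 6])]

commitment = []
for story, task_hours in backlog:
    if sum(task_hours) <= remaining:   # does the story fit in what's left?
        remaining -= sum(task_hours)
        commitment.append(story)
    else:
        break                          # the team is "full"

print(commitment, f"{remaining:.1f} hours of slack")
```

In practice the deduction happens per person, not against a single pooled number, but the shape of the loop (decompose, deduct, repeat until full) is the same.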

This approach is focused more on the short term than velocity is, as it uses a projection of time over the next sprint and the team's capacity for that sprint's work. Carrying one sprint's numbers forward as a baseline is not a good idea; the first step should always be recalculating available capacity before each commitment.

This approach can be much more time consuming, as the team decomposes the work as it goes, measuring each item against remaining capacity before committing to the story. Be vigilant that the team does not get so caught up in perfecting task decomposition that the meeting becomes long, painful, and less productive.

Which Should You Use?

Get ready for the answer you will love to hate.  It depends.

If your organization has team members who are not really dedicated to a scrum focus, or who carry other responsibilities, then capacity planning can be really helpful (and a savior for those who think the developer who can focus one hour a day somehow has eight hours to work on a product).

If your team's velocity is all over the place, or you sense the team has fallen into a comfortable groove and never pushes itself because it's "easier," capacity planning may be something to try.

If you want to find that sustainable pace that allows your team to use past information to give them insight into a future prediction, velocity can certainly help you do this.

Do you have a product owner who thinks in terms of features and not in terms of sprints? Either of these methods can help you build that bridge of trust between them and the team, so they understand what goes into what they are asking the team to accomplish.

A general rule of thumb: if you look at a team's velocity you will see variance over time (driven by team member availability or unplanned work), so it is really just a guide. With capacity you are applying a more concrete measurement to the work at hand (though there is room for error here too, as things take longer, unplanned work eats time, etc.), but it is based on what each person expects to have available to commit in terms of the value they provide to the team.

What Have I Typically Used?

Most teams I have worked with have used velocity, as it feels easier to them than determining their capacity, and fortunately I have been able to minimize the distractions pulling them from product focus. But I would be open to change, or to experiment, if needed. At my organization we do often record capacity changes for team members (leave and the like) with them, simply to make the drop in involvement visible so the team can consider how it impacts their commitment.

My Observations of Team Planning Meetings

What I will say is that, from what I see, teams often do not use planning sessions to their utmost effectiveness. They tend to rush to commit and rush to decompose so that they can start building. It often feels as if they are just trying to put something down and move on. It is our responsibility to help them understand that this is the time to refine understanding before the energy of work begins.

Some teams do not use these meetings to discuss architecture, design, or approaches much. They merely try to identify the tasks, drop in some numbers, and move into the work. It is a failure on our part as servant leaders if they do not realize that this is a horizon from which they can better understand the work based on what is known today. Using the general scrum rule of thumb of one hour of planning and one hour of tasking for every week of the sprint, these can be long meetings for some teams.

A 30-day sprint alone could mean a full-day meeting. Why not use it in the best way possible? I get it; I have been a technical team member ready to build. But the greatest power I found in a team was that we saved time when we came together, discussed, and whiteboarded ideas BEFORE we ever wrote code. We maximized the time to gain a shared understanding of the work, which we could reflect on and adjust, as well as of the items we would be building. Approaching our sprints in this manner allowed us to create a shared and unified approach, even if the path to get there evolved throughout the sprint.

One of the best comments I ever heard about sprint planning is that it is far less important to be precise in identifying each and every individual task than it is to ensure you understand the stories to be delivered that make up the team's commitment.

Stay Agile!


Agile Tips from a Barista

I have always known that coffee shops typically utilize a "pull system," so I watched in quiet observation as my coffee was made this morning at a local chain. It was indeed a Kanban-style approach (or at least a gated system for controlling WIP, as I cannot be sure the drive to eliminate muda through continuous improvement exists).

But basically here was the “flow”:

  1. Counter person takes my order (“the what is needed”)
  2. Writes the order on the cup and places it into the queue of work
  3. Counter person takes my payment
  4. When there is available bandwidth, the barista pulls my cup (i.e., "the work itself")
  5. Barista sets the environment based on the nature of my order (double shot of espresso, etc.). This is where the adaptability comes in: a decomposition of the things to be done to achieve the success of the order
  6. Barista focuses on the completion of the order (“the done”) by only pulling in the work that they can manage
  7. Barista calls my name and delivers the product
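
For the software-minded, here is a minimal, hypothetical sketch of that gated pull system, with a WIP limit standing in for the barista's focus:

```python
from collections import deque

WIP_LIMIT = 1                      # the barista works one order at a time

order_queue = deque(["latte", "double espresso", "drip"])  # cups in line
in_progress = []

while order_queue or in_progress:
    # Pull new work only when there is available bandwidth (step 4).
    while order_queue and len(in_progress) < WIP_LIMIT:
        in_progress.append(order_queue.popleft())
    # Focus on completing the pulled order, not the queue (step 6).
    done = in_progress.pop(0)
    print(f"Delivered: {done}")    # step 7: call the name, deliver
```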

This is a very simple process that they repeat, but if you focus on their intention of flow it becomes very apparent that there is control of the flow of work and an applied focus to deliver the completed product. This combination allows consistency and execution in a manner that lets the people doing the work control the queue of work to be done.

I just found this interesting as a simple example of a workflow designed to produce a consistent product, one that remains gated to ensure the person making the coffee focuses on the order, not the queue, until completion.

I feel somewhat confident that this repetitive action has been observed and the overall average "flow time" from order to delivery has been calculated, as it seems that, within a small deviation, my coffee arrives after the same interval each time.

As much as I love a good cup of coffee, I enjoy being able to observe a system at work that is so simple but highly efficient to deliver a consistent product.


The Code you Love to Hate

“No matter how hard the past, you can always begin again” – Buddha

In my work environment, we have a portfolio consisting of a large number of older code bases that team members are responsible for supporting for the business. I have personally experienced the difficulties of supporting legacy applications (I'll use that term here, though I actually believe any product whose lifecycle reaches stabilization becomes legacy) and see how they can be a struggle for the team members I lead today.

Older technologies, poor implementations, no real guidance or design patterns, etc. Sometimes it is a real mess. I recall one of my development mentors telling me early in my career that fixing a problem in legacy code is often like donning a pith helmet and traipsing through a dense jungle. At the time, some of the things I encountered made me angry: why would a developer do some of the things he did in that implementation? It often required massive (and careful) refactoring to make the code more maintainable. But as I matured as a technical worker, I began to realize a few basic things:

  1. The developer may have been unequal to the task put before him. He may have been asked to do something his skill level left him unequipped to actually do, and he built it in the best and most logical way he saw fit to accomplish his job. Maybe that popular "off the shelf" program did not exist when it was built.
  2. I believe that no developer sets out with the approach "I am going to write the worst possible code I can create."
  3. The tools and technologies at his disposal (or his depth of knowledge of them) may not have allowed him to understand some of the side effects of what was done.
  4. Maybe they did not have any real code standards or design patterns under which they were expected to write code.
  5. Refactoring is a given. Any code that exists and is extended is likely to need refactoring when changed. Nothing is perfect.
  6. Bugs exist in the world. There are no perfect systems, and any complex system will likely have issues that are unaccounted for.
  7. If deep dependencies are used (third-party or other systems), the developer likely thought in terms of the environment of the day, not anticipating that the code might outlive those dependencies or that they might change before this code base did.
  8. Even if it is "legacy code," as the current developer or team doing that job today, you assume ownership. You may not like the way it is written or may find it difficult, but the base fact is that you have taken on this code base as part of your job.
  9. You are in control of refactoring and can, whenever possible, make things better for those who come after you.
  10. Instead of focusing on the negative aspects of the code base, why not (as a developer reminded me recently) follow the Boy Scout motto and "leave it better than you found it"?

Actual Problems with older code bases

I know that as technical professionals we all hate inheriting a messy or poorly designed code base. It really sucks to have to work on it, fix it, and sometimes just keep it running as the technical world changes around it. I often describe some as neighborhoods in which I try to perform the least amount of interaction possible and then get out.

I also know that many organizations do not fully grasp the need for the "care and feeding" of code bases, or the deeper understanding that software should have a lifecycle: to die and be reborn or transformed. Many organizations I have been a part of would rather hear the "thump, thump, thump" of the old reliable software and not think about what happens if it reaches a critical state.

"If it ain't broke, don't fix it" is a very common idea. I actually knew a major software company that ran its accounting and payroll on a mainframe COBOL code base long past its prime, and even hired high-dollar consultants long term to keep it running each time it sputtered and dropped a hexadecimal dump.

But continuing to gamble on these large, complex systems carries risk, and if the organization understood this, it would see the importance of mitigating that risk as it would any other risk to the business.

It's like driving a 1965 Chevy Bel Air. As time goes by, maintenance becomes more pervasive and parts become more difficult to find; eventually you may not be able to get them at all, or you may need them custom machined to keep the car running. That may be fine for a car you keep covered in the garage and drive only on Sundays when the weather is nice. But if, like a core business system, this classic car were your daily commute, the expense of upkeep and the risk of failure grow greater and greater as time wears on.

So what do I do? I am a little scared now …

You basically have three options:

  1. Bite the bullet and replace the system in total, or find a way to "cage and lifecycle" it.
  2. Use a legacy transition pattern to migrate the system to a more maintainable code base in the future.
  3. Cross your fingers and hope for the best.

Option #1

Replacing a large, complex, core organizational code base is expensive. It is a large-scale investment by a company and may require significant technical and personnel investment. This is a fact.

But, just like buying a home, it is also an investment that hopefully returns greater end results, because you take the time to ensure the system still actually supports the business and does not have "dead zones" of features that people cared about 20 years ago but no longer need. It gives you an opportunity to think about how the system models the current business. And even while recreating the system, you must still maintain the current one, and managing changes to both can be a challenge.

You could also encounter the dreaded problem of "thinking in the old system" as opposed to thinking in terms of how the business works. This is especially likely when a system has been a core system: people stop thinking about what the business needs and relate it instead to what they currently maintain in the system. This can be especially risky in disaster recovery scenarios, in which the business must know how to function without the system in place. Many businesses never consider this issue (given the world of distributed servers and cold-site switchover).

In short, if you take this route, the organization has to be focused, and it may be painful, but the goal is to create something (hopefully better designed) that can be cared for and changed over time. If you take this route, defining a lifecycle for the new system is probably important as well.

Option #2

One pattern I have become very interested in comes from Martin Fowler: the "Strangler Pattern." I find this approach very interesting because it balances the replacement of the former system with the new system, utilizing a common set of services to supply both until the legacy system is "strangled out." The idea comes from the strangler figs Fowler and his wife learned about on vacation: the vines sprout in the upper portions of a host tree, grow, and eventually take over and replace the host.

I am not deeply versed in replacement patterns, but my assumption is that this is only one of many. If you are fortunate enough to have an application built on an MVC pattern, you might be able to replace pieces in place and minimize disruption.

Another approach might be to leave the data structure as is (although legacy systems are often rife with business logic in the data tier), build the new system on an MVC or MVVM pattern with a services layer, then refactor the data tier over time, adjusting the models to utilize the new data source while moving the business logic into a more desirable tier of the pattern.
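
To make the strangler idea concrete, here is a minimal, hypothetical sketch of a routing facade: migrated routes go to the new system, and everything else falls through to the legacy one until it is strangled out (all names and routes are made up):

```python
LEGACY_BASE = "http://legacy.internal"
NEW_BASE = "http://new.internal"

# Grows over time as features are rebuilt; empty at the start of migration.
MIGRATED_ROUTES = {"/invoices", "/customers"}

def route(path: str) -> str:
    """Return the base URL that should serve this request."""
    prefix = "/" + path.strip("/").split("/")[0]
    return NEW_BASE if prefix in MIGRATED_ROUTES else LEGACY_BASE

assert route("/invoices/42") == NEW_BASE         # a rebuilt feature
assert route("/reports/monthly") == LEGACY_BASE  # not yet migrated
```

The facade is what lets the vine grow: each release moves another route into the migrated set until the legacy base serves nothing.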

Option #3

As surprising as it sounds, this is actually a path some folks take. Please don't find yourself in this situation, because all the hope and good karma in the world is not going to pull your backside out of the fire when (and I say "when," not "if") things go south.

Final Thoughts

I started this post because I think it is important for team members who support legacy applications to understand that just as we collectively own the new code we create, we own the code that came before us as part of what we do. I think it is important for organizations to understand that these code bases can equate to risk, which should be mitigated like any other potential impact to the business, and that there are many ways we as technical and creative people can help move the needle forward.

And most of all, remember the statement given to me by that developer: "leave it better than you found it" (the Boy Scout motto).

I think that is sound advice no matter what you do in your organization, and it should drive us all …


Undone Work

One of the principles underlying the agile manifesto states: "Simplicity — the art of maximizing the amount of work not done — is essential."

This seems pretty straightforward to wrap one's head around, as so many of us in software are familiar with the general KISS principle.

However, with many teams I have worked with I see a very familiar pattern emerge at times …

Once the team starts the sprint, the developers spread out across tasks on multiple stories (based on skill level, dependencies, or any number of reasons). They work away on small pieces across multiple features and feel confident in their commitment. Then the clock ticks by …

Then somewhere during the sprint (let's hope they are coached in the "first responsible moment" approach) they realize that something was much harder than they imagined, or someone was sick, or something generally derailed their optimistic view of sprint completion. So they take stock of what they have and determine they are going to complete three of six features, but indicate "the others are about 40% complete though, so it will take us less time if they are moved forward." Or, in a worst-case scenario, they have six partial features, maybe not tested or unit tested or whatever …

Sound familiar to anyone? The principle I referenced above applies here. This is where the "art of maximizing the amount of work not done" comes into play: all of that partial work (even though it probably took great effort) created absolutely NO value for the end stakeholder.

Let’s use a real world example …

You need to get a few items completed for yourself and your family for an upcoming event. The items are:

  1. Mail 100 letters
  2. Wash all of the windows at your house
  3. Cut the grass
  4. Feed the dog

So you take these jobs and decompose them into tasks:

Mail Letters:

  • Get envelopes
  • Get stamps
  • Fold letters
  • Put letters into envelopes
  • Take to post office and place into mail

Wash Windows:

  • Get cleaning supplies
  • Treat each window
  • Wipe each window

Cut Grass:

  • Get gas for lawnmower
  • Start lawnmower and cut grass
  • Do edging
  • Blow grass clippings off pavement

Feed Dog:

  • Get dog food
  • Put food in bowl
  • Ensure water is in water dish
  • Place on floor for dog to eat and drink

In the course of working on your commitment, you determine that you will spread yourself across all tasks so you do the following:

  1. Fold all the letters
  2. Go to the store and get cleaning supplies
  3. Pick up some dog food
  4. Fill your gas can with gas
  5. Get your stamps
  6. Fill the lawnmower
  7. Stuff the envelopes

Unfortunately, you run out of time to get things done (based on unforeseen situations), which can often happen.

So what have you actually accomplished in terms of end value of your commitment?

Let’s look at it from that perspective. You have:

  1. A full lawnmower but the grass is still not mowed
  2. Letters stuffed in envelopes but still not mailed
  3. Plenty of cleaning supplies but dirty windows
  4. A dog with a bag full of food but very hungry

So what if you had:

  1. Gotten the stamps and envelopes and ensured the mail got sent
  2. Gone to the store, bought dog food, and fed your dog
  3. Gotten cleaning supplies and cleaned the windows

If you still ran out of time, what value have you received? You completed three of the four jobs, leaving only the lawn undone. You maximized delivered value by moving each item to complete before moving on; you minimized undone work through focus.
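
As a tiny, hypothetical illustration of the same point, here are the same 12 hours of effort spent two ways (all numbers made up):

```python
jobs = {"mail": 4, "windows": 4, "dog": 4, "lawn": 4}  # hours to finish each
BUDGET = 12  # hours actually available before time runs out

# Spreading: equal hours to every job means nothing actually finishes.
per_job = BUDGET / len(jobs)                            # 3 hours each
spread_done = sum(per_job >= h for h in jobs.values())  # 0 jobs done

# Focusing: finish jobs one at a time until time runs out.
remaining, focus_done = BUDGET, 0
for hours in jobs.values():
    if remaining >= hours:
        remaining -= hours
        focus_done += 1                                 # 3 of 4 jobs done

print(spread_done, focus_done)  # 0 3
```

Same effort, radically different delivered value.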

How does this apply to teams?

In a sprint, team members often spread out across stories to individual tasks, completing all they can on a given story until:

  1. Their skill level does not allow them to pull more advanced tasks
  2. All other tasks are being worked by other team members

Most then move on to pull additional tasks in another story that they can work on, as this seems the most logical way to continue providing value. But what if I said that we should not do this? What if I said that we should focus on applying our energy to getting items "in flight" to a state of done? Would this seem counter-intuitive? Or, looking at it through the lens of completed value, would it make sense?

Many people think team members should move on to complete as much work as possible and never become "idle," but I firmly believe that, given a focus on completing stories, anyone not "doing" can still find a way to be valuable. What if, instead of moving on, they did the following:

  1. Decomposed the next story or stories into tasks (as we could focus on the things we are working on and not try and decompose everything at once)
  2. Assisted in testing, maybe writing scripts or unit tests for the code under development or helping to write test cases
  3. Performed the “just needed” level of documentation for the project
  4. Found other supporting work. I firmly believe the old adage "if you buy a bigger house, you will find things to fill it" applies to teams: if members are without direct domain work, there are always items they can apply themselves to in order to realize the end value.

Why I keep getting told this does not work …

I am often told by teams that this approach does not work because stories have dependencies, because it slows them down overall when everyone is not "busy," or because it creates more refactoring (which is a given in iterative development anyway).

However, in each conversation I have about this, no one is actually willing to try it and see whether it works. Many seem convinced it will not.

I speculate that decomposing the work "just in time," coupled with focus, can result in a much better end-value proposition from a commitment. If a backlog is truly a prioritization of value from top to bottom, I would much rather be handed the completed value I can receive than a collection of untested (and therefore undone) features that are "80% complete."

I challenge people to try this. I may truly be wrong here, but I suspect that focusing on the immediate and driving to completion will see things completed and delivered more quickly.

Just my thoughts, right or wrong …  A game is always won a play at a time, not the whole game at once. And I think this is the same with our commitments to stories to be delivered.

Until next time … Stay Agile!

Scrum master tips – The burndown

I have been working with several entry-level scrum masters over the past few years and have discovered that although they may fully understand the general rituals and artifacts of the scrum framework, they often do not understand the underlying agile values those things support. Gaining insight into how to utilize them with continued effectiveness with their teams is not something many explore initially.

So I thought it might be helpful to convey how I view these items and how they can create real impact for you as a scrum master, upping your effectiveness with your teams.

My personal philosophy when I became a scrum master was that my role was a linchpin in the overall process, and that by increasing my understanding and effective use of the roles, rituals, and artifacts, I could become a better servant-leader to the team.

This is merely how I view and utilize these things and hope that they help you as well.

What is the burndown chart?

The burndown chart falls, for me, into the category of an "information radiator." It presents information in a passive way that allows the scrum master and the team to view the current state of product work within the sprint. It radiates information that can be consumed and acted upon. It is a great artifact for assessment and course correction, and it allows scrum masters to begin to see things occurring within a team that may not be readily apparent, even to the team itself.

It is a chart built by taking the total effort to be performed (hours of work) and plotting it against the working calendar days of the sprint. It is often coupled with an "ideal line" reflecting the optimum burn of effort distributed equally across the days. As task hours are burned down, the chart shows where the team stands in hours remaining against the time remaining in the sprint.

A typical burndown might look something like this:

[Image: burndown chart]

In this scenario, the team has 400 hours of delivery tasks over a 10-day iteration. This reflects work that may be done by the entire team (coding, testing, design, etc.) across the two-week iteration. So the ideal line tells the team that an average burn rate of 40 combined hours of effort per day is projected to complete the committed work within the 10-day sprint period.
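
As a quick sketch, the ideal line from this example is just a straight-line interpolation (using the hypothetical numbers above):

```python
# The ideal line: 400 task hours burned evenly over a 10-day sprint.
TOTAL_HOURS = 400
SPRINT_DAYS = 10
DAILY_BURN = TOTAL_HOURS / SPRINT_DAYS   # 40 combined hours per day

ideal = [TOTAL_HOURS - DAILY_BURN * day for day in range(SPRINT_DAYS + 1)]
print(ideal)  # [400.0, 360.0, 320.0, ..., 40.0, 0.0]

# The team's actual line is whatever hours remain after each daily update;
# the gap between the two lines is what the chart radiates.
```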

Typically the burndown charts I have used reflect only the days of dedicated effort, not the periods of planning, review, and so on, as those are bookend ceremonies providing sprint lift-off and sprint landing. The burndown is focused on the time when team effort is directed toward the end commitment.

The basic idea behind this chart is that, from an agile perspective, the highest value lies in being aware of the work remaining, not in dwelling on the work already completed. So the product team's responsibility is to update the remaining hours daily to represent the actual burn of working each story.

That is a very brief and straightforward explanation of what a burndown chart is, so let's delve into how it is helpful, how we might utilize the information, and some ways to interpret patterns we may see inside a burndown to better frame questions and inquiries that help the team.

What does a burndown chart do for a team?

As I mentioned before, it is an information radiator for the team. It gives them a snapshot of the recent period of work and tasks completed, and a reflection point for the work remaining within the sprint. It allows them to convey the current team state both internally and externally without a "direct status report" to those outside the team. Anyone looking at the chart can get a general idea of where the team might be, even without fully understanding the chart itself. This, however, can also lead to bad perceptions rather than informational learning, as we will discuss later.

It is incumbent on a scrum master to fully understand the purpose of this artifact, the reasons behind how it works, and how to begin to see patterns of work or potential team dysfunction within a burndown chart.

This allows them not only to accurately convey how to use this information and its purpose, but also to frame questions aimed at keeping the team productive, or at helping the team be actively aware of the "first responsible moment" when a product commitment might come into jeopardy.

Patterns of burndown charts

Visual representation of work gives the scrum master and the team an excellent opportunity to reflect on familiar patterns that can serve as a visual cue to certain information or team dysfunction.

The Ideal Line is merely a Projection

As a scrum master, if you believe that being below the ideal line means "on track" and being above it means "off track," you are taking far too simple a view of the information the team provides.

The burndown should be seen as a reflection of the current state of a product development sprint, used to prompt inquiry; it is not predictive like a project schedule. The ideal line is merely a projection assuming everything works with no issues impacting the team, and it is there for comparison. Establishing any other view as a scrum master, or reinforcing to the team that it means being ahead of or behind the curve, sets a bad precedent and can lead to future problems.

Effort Swelling

For instance, in the image above, the team starts on day one burning down tasks and within the next 24 hours begins drifting above the ideal. A simple view might create panic: "HEY EVERYONE! WE'RE BEHIND ON DAY TWO. WE GOTTA DO SOMETHING!!!!" But the circumstances behind this picture could have many potential causes, and they should be information we use as scrum masters to reflect and frame questions such as:

  • Did the committed work have greater unknowns than the team imagined, with tasks now emerging? Normal and very possible. Probably just something to be aware of; watch how the trend moves over the next 24 hours or so, and use it as a basis for the team to reflect on the data, maybe as a visual cue in your next stand-up.
  • Is the team struggling? Is there a cause? Has the team experienced a drop in overall capacity since the commitment due to sickness, an unaccounted-for team member vacation, etc.? Many modern electronic tools, such as Microsoft Team Foundation Server, will let you adjust each team member's lost capacity, and the chart will update to reflect the recalculated effort across the iteration.
  • Are they just stuck? Are they mentioning impediments in their daily stand-up? Is there something you can do to assist or coach them to self-organize around solving the problem through inquiry?
  • A common symptom of potential team dysfunction is that members are not actually "burning down" their hours. They pull a task into a working state and work it until done, so the chart reflects the full hours even when less actually remains, and then everything moves to done at once. Work can appear not to move, and the remaining-work line can begin a swelling pattern. One way to gain insight into this is to watch for large drops of work following a swell.

Effort Drops (“work falling off the cliff”)

This pattern may look something like this:

[Image: burndown chart 2]

Some questions that might arise when seeing such a drastic drop:

  • Was the committed work less complex than the team initially thought, such that they just hit a "rapid burn"? (Go team!)
  • If it follows a swell, is the team actually burning down their hours regularly? Are they perhaps holding on to tasks, not updating remaining hours, and just pushing them to done when completed? This might prompt a scrum master to examine the burndown in combination with the sprint task board and use it during a retrospective to explore "what does this mean to the team?" (A rough sketch of spotting this pattern follows below.)
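
As a rough sketch of spotting that "hold, then dump" pattern in the numbers (the daily remaining-hours values here are made up):

```python
# Daily remaining hours from a hypothetical sprint: swells, then cliffs.
remaining = [400, 400, 410, 415, 300, 290, 280, 150, 40, 0]

daily_change = [b - a for a, b in zip(remaining, remaining[1:])]

for day, delta in enumerate(daily_change, start=1):
    if delta >= 0:
        print(f"Day {day}: no burn (or a swell) of {delta:+} hours")
    elif delta <= -100:
        print(f"Day {day}: a cliff of {delta} hours - held tasks dumped?")
```

The thresholds are arbitrary; the point is that swells followed by cliffs are a prompt for a conversation, not a verdict.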

Again, the key takeaway is that these patterns provide an opportunity to learn, observe, and inquire, helping you as the scrum master keep the team productive toward meeting their commitment. You can teach the team to use this information to gain insight into what is going on, either during the sprint or in exploration within the retrospective.

These are merely some simple examples of how this artifact of the scrum framework can be used. It is best to keep in mind how it reflects information back to the team. Many scrum pioneers have discussed "BVCs" (big visible charts), of which this can be one, to help teams reflect on their current state of work and adapt based on insight. I hope this post gives new scrum masters a new way to use this artifact and helps them give their teams better insight …

I often consider this quote when thinking of a burndown chart in relation to teams …

“Information is only useful when it can be understood” – Muriel Cooper


Thinking in Moments

“Capturing the moments of today will wow your hearts tomorrow” – Anon

This will be a relatively brief post, but I wanted to share something that struck me in my early-morning reading (a habit of mine to feed my mind a little to start the day).

I am currently reading the new Chip and Dan Heath book, "The Power of Moments," in which they explore what defining moments are for people and how people can shift to thinking in moments so they can capitalize on the impact.

Doug Dietz, an engineer at General Electric, spent roughly two years designing a new MRI machine. The book describes how excited he was to see the first patient, at a children's hospital, use his creation, and how dismayed he was when the experience was met with fear. He said this was the point at which he "saw the room through the child's eyes": cold and sterile, his machine sitting inside like a "brick with a hole in it." As a result, many children had to be sedated just to stay still and overcome the fear of the experience.

This heart-wrenching experience fueled him to take this "pit" (a low point or negative experience) and strive to make it a "peak." He worked with a vast cross-functional team, driven to redesign the user experience for children. The end result: he and his team created MRI rooms in children's hospitals resembling a pirate ship, a spaceship, or an Amazon adventure (the last one encouraging the child to stay still so as not to tip the machine, painted as a canoe). He carefully observed the difference in the experience and was delighted when one small child, tugging at her mother's leg, asked, "Can we come back tomorrow?" He had taken a "pit" moment for these children and transformed it into a "peak."

I read this and thought to myself, "This is so applicable to organizations." How many "pits" are we aware of as an organization (poor performance, bad service, bad product interactions) for which we have no plan, letting them sit without putting passion behind turning them into peaks?

Conversely, how many "peaks" (an employee's first day, retirement, significant life events, transitions in career or life) do we put minimal effort into, missing the power of creating a moment of connection between people and our organization?

This brief story really impacted me, and I began to think about what I could do to "think in moments" (I suspect I will start cataloging the known moments I am missing for my organization).

How about you? Are you thinking in moments? Are you turning peaks into pits, or pits into peaks? I have not finished the book, so I cannot give you a full review just yet, but I can say that this small concept resonated with me, and I am sharing it in hopes that it might resonate with you as well.

Stay Agile!


When Fear of Failure Strikes

“Failure seldom stops you. What stops you is the fear of failure” – Jack Lemmon

I had an interesting coaching moment with a scrum master today. He came to me with a dilemma about his present team: a new team, with a new senior developer on it, in the process of storming. Given this stage of team development, it is only natural that they are in a growth period.

In their prior sprint, a plague of sickness spread across the team, and they missed a significant amount of their sprint commitment due to the unexpected loss in capacity. This concerned them greatly because, when they reviewed their product at sprint's end, the stakeholders who attended did not seem to understand what "undone work" was or how the team typically handled it.

The product owner managed the stakeholder relationship well and explained how the undone work would roll forward to the next sprint as the highest priority (given that was still the case) and be addressed by the team. But the team became very worried about the stakeholders' perceptions. Fear of failure had reared its head within the team.

The scrum master, being conscientious, engaged the team when severe inclement weather forced them to see the possibility of a failed sprint looming again. The team had asked: maybe we should change the process to demo only the features that actually work to the stakeholders. Being a relatively new scrum master, he wanted to explore this with me to help guide them. He met with me, told me what the team had suggested, and admitted he felt uneasy about the change but wanted additional guidance. Instead of "giving" him an answer, we explored the situation and circumstances surrounding the request once he related the undone work of the previous sprint and the team's feeling of dread about the current one.

I asked a few questions of him:

"So the team feels confident that they will not complete their commitment?"

"Do you feel this is a reaction to a fear of failure?"

"Do you think the compromise of hiding this undone work undermines the agile value of transparency?"

"Is this the first responsible moment to speak with the product owner about the work that will be undone, so expectations are set rather than sprung as a surprise?"

These four questions helped him find his own answer. He realized he did have time within the sprint to engage the product owner and set expectations, and in doing so release the pressure on the team without pushing transparency to the users into the shadows. He determined he could help the new team more by helping them understand that failure is opportunity, not a point of suspicion, ridicule, and derision. He found the solution within his own problem by thinking it through, so he could help his team do the same.

As a result, he engaged the product owner about the work that might go undone so they could discuss it with the team, ensure that the items being worked were the highest valued among the commitment (which they were), and keep the fear of failure a point of opportunity rather than a crushing blow of defeat. He helped the team find relief and refocus on the things in progress without continuing to worry about the items they knew they might not achieve.

In the end, they completed their commitment; it was the fear of appearing to fail in front of stakeholders that concerned them, not the reality of doing so. They were so caught up in not appearing to fail that they saw the loss of capacity as a reason to react. He guided them to understand this, as well as the principle of the "first responsible moment," so they could create transparency and strengthen the openness of their relationship with their product owner. He turned a situation born of fear into one of working more closely together to succeed.

And he did so just by stopping to consider a few questions and allowing himself to explore the possibilities. Fear of failure is a real thing for teams, especially teams that are allowed to select work within their capacity and feel they are driving the work. But failure is an opportunity to learn (although I know many executives who may not agree).

But through inspection of the underlying reasons behind this fear, we can often find ways to gain clarity, shape it into a learning point, and create stronger relationships and stronger teams.

We all fear failing. But failure does happen, often as a result of things far beyond our control, and sometimes when we are doing everything right. When it happens, it's easy to just react; the more challenging thing is to examine the fear and determine whether our failings can make us better moving forward.