Thursday, December 31, 2009

Running Agile With Manual Testing Part 2: Feeding the Queue

A few weeks ago, I told you how to practice Agile with an all-manual test harness. Now I'd like to offer some additional principles to keep in mind that will help in this challenging scenario (and that come in handy as good practices even with test automation).

The key notion in this situation is keeping work flowing at a steady pace and getting stories through to QA testing early in the process.

(Note: I'm adapting some of these principles from the research Mary and Tom Poppendieck did on the construction of the Empire State Building for their new book, Leading Lean Software Development: Results Are Not the Point.)

1.   Get ONE to Done:
Focus the team on one user story at a time, moving it to QA testing as soon as possible. So if you have four developers on your team, instead of having them work on four separate stories and move all four toward QA testing at a relatively equal pace (which could get you four stories ready for QA at the same time and create a logjam), have them pair up, split the tasks, and get two stories, or even better just one, moving to QA. Think of an automotive assembly line, but instead of each developer building their own car, the team works on one car at a time, together, completing it before moving on to the next.

2.   Feed the Queue Logically:
Be sure to consult the QA testers about the logical testing order for the stories in the sprint. In Sprint Planning and in the daily Standup, be sure to ask the question: "Which of these stories make sense to test together, or in a certain order?" It sounds like an obvious point, but if you don't ask the question, it can be missed. Then factor the answers into how you prioritize the stories in-sprint.

3.   Prioritize by Size:
We always want to work first on the stories with the highest priority per the Product Backlog. But within a sprint, IF the relative priorities of two stories are close, there's little danger of being stopped mid-sprint, and all else is equal, focus on the largest story first. There's generally (though not always) a direct relationship between the amount of development work a story requires and the amount of testing it needs. There are also usually more unknowns and issues that can crop up on a larger story than on a smaller one. By putting these stories into your testers' hands sooner, you give them more time to test and uncover those issues, and therefore give your developers more time to address them, during the sprint.

Just to illustrate the point, let's say you have five user stories in a sprint, A-E (by priority), as illustrated in the first graph below. I've applied a rule of thumb of 25% of development time required for QA. IF all else is equal, and you have the leeway to do so, prioritizing the stories by size (as illustrated in the second graph) results in a steadily decreasing amount of testing time required for each story the developers flow to the testers:

(Graphs: QA time per story when ordered by backlog priority vs. ordered by size; click to enlarge)
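To make the arithmetic concrete, here's a minimal sketch (with hypothetical story sizes standing in for the graphs) of the QA load under each ordering:

```typescript
// Hypothetical story sizes in development days, keyed by backlog
// priority: A is the highest-priority story, E the lowest.
const devDays: Record<string, number> = { A: 3, B: 8, C: 2, D: 5, E: 1 };
const QA_RATIO = 0.25; // rule of thumb: QA time is ~25% of dev time

// Print the QA load generated as each story is handed to testers.
function showQaFlow(label: string, order: string[]): void {
  console.log(label);
  for (const story of order) {
    const qa = devDays[story] * QA_RATIO;
    console.log(`  ${story}: ${devDays[story]} dev days -> ${qa} QA days`);
  }
}

// By backlog priority, the QA load arrives in no particular pattern.
showQaFlow("By priority:", ["A", "B", "C", "D", "E"]);

// Largest first, the QA time per story shrinks steadily, leaving the
// lightest testing for the crowded end of the sprint.
const bySize = Object.keys(devDays).sort((a, b) => devDays[b] - devDays[a]);
showQaFlow("By size:", bySize);
```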

By leaving the smaller stories (the ones requiring less testing, per the rule of thumb) until the end, you accomplish several things. First, you lessen the lag time between when your developers complete their work and the end of the sprint, which keeps the developers from potentially "sitting around" at the end of a sprint. Second, you make it easier for QA testers to turn around testing successfully in the limited time at the end of a sprint, because the testing required on those final stories should be lighter.

Keeping these simple guidelines in mind should help keep your sprint queue humming along like an assembly line, and make it that much easier to practice Agile with manual testing.

As always, I welcome your comments.

Tuesday, December 15, 2009

Common Sense Guidelines to Creating Speedy Mobile Web Applications

In the current story line on the HBO series "Curb Your Enthusiasm," "Seinfeld" creator Larry David is producing a "Seinfeld" reunion show, and in it George Costanza has a brilliant idea for the "iToilet": an iPhone app that directs you to the closest acceptable public toilet anywhere in the world. In preparing to write this post, I could think of no better example of a mobile application in which speed would be critical.

OK, so it's a fictional, humorous example, but the fact of the matter is that mobile web content has grown far beyond simple text messaging. Looking around the train on my daily commute, I find it odd to see people using laptops for web surfing instead of mobile devices, and to think of them as "old school." Mobile devices have made the leap to primary information channel for millions of people worldwide. With that level of reliance on all that rich content, it's critically important that your mobile applications focus on speed. And to do that, it helps to think lean.

Here are some basic, common sense guidelines to consider in designing mobile applications for speed:

  1. Limit white space: It sounds simple enough, and it can be. Smaller content files transmit faster, and stripping extra white space (minifying your markup and scripts) is a quick way to slim down the content.
  2. Watch out for the Cookie Monster: Cookies are a standard method for storing identifying data on the client to personalize the user experience on a site. They present a dual problem in mobile web applications, however. For one, every server request carries the cookie data along with it, adding overhead to the traffic and reducing performance. Also, many mobile devices don't accept cookies, so you can't rely on support for them across devices. One answer for this is HTML5 and its support for DOM storage. (NOTE: despite the name, this has little to do with the Document Object Model.) DOM storage offers some key advantages over cookies for mobile web applications; chief among them, in terms of speed, is that DOM data is not transmitted to the web server with every request, which eliminates that extra traffic. In addition, you can store much more data in DOM storage (5MB-10MB, depending on the browser) than in a cookie. And unlike cookies, servers cannot access DOM storage at will; traffic flow and access in both directions are controlled by the client. (See the DOM storage sketch after this list.)
  3. Minimize the detours: Redirects (sending a user to another server to pull in additional information) can be flat-out brutal in mobile web applications. A redirect standing between the client and the HTML file delays downloading the page and rendering its elements until the HTML document finally arrives. Some redirects may be necessary, for example in account authentication, but others can happen unintentionally, and you should watch vigilantly for these cases. Some examples to avoid:
    1. Missing Trailing Slash – Omitting the trailing slash from a URL that ends in a directory name causes an unintentional redirect to the slashed version. For instance, http://mywebsite.com/somedir will cause a redirect to http://mywebsite.com/somedir/. (A sketch of heading this off follows the list.)
    2. Conditional Redirects – This is redirecting a user based on certain conditions (browser used, etc.) to a different section of a website, or a different site altogether. It's easier to develop in this manner, but again, it adds overhead and time for the user to wait. 
  4. Pack before you move: This is a general guideline to minimize network traffic by reducing round trips, and includes practices such as:
    1. Batching multiple data requests into a single round trip (see the batching sketch after this list)
    2. Combining static images into a single image using CSS sprites – useful for images that don't change often.
    3. Cache Ajax data – caching this data wherever possible spares the user the wait for an asynchronous request to make a round trip over the mobile network.
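To ground the DOM storage point in item 2, here's a minimal sketch contrasting the two mechanisms for one small piece of per-user data (the "units" key and its value are hypothetical):

```typescript
// A cookie rides along in the headers of every subsequent request
// to this domain, adding overhead on a slow mobile link.
document.cookie = "units=metric; path=/";

// DOM storage stays on the client: nothing is added to request
// headers, and far more data fits (roughly 5MB-10MB per origin,
// depending on the browser).
window.localStorage.setItem("units", "metric");
const units = window.localStorage.getItem("units"); // "metric"
```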
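For the trailing slash trap, here's a minimal sketch of normalizing directory URLs at link-generation time so the browser never pays for the redirect; the helper and its "no file extension means directory" heuristic are purely illustrative:

```typescript
// Append a trailing slash to directory-style URLs before emitting
// links, so the server never has to issue the slash redirect.
function withTrailingSlash(url: string): string {
  const u = new URL(url);
  // Heuristic: treat a last segment with no file extension as a directory.
  if (!u.pathname.endsWith("/") && !/\.[a-z0-9]+$/i.test(u.pathname)) {
    u.pathname += "/";
  }
  return u.toString();
}

console.log(withTrailingSlash("http://mywebsite.com/somedir"));
// -> "http://mywebsite.com/somedir/"
```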
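And for batching, a minimal sketch of collapsing N per-item requests into one round trip; the endpoint and payload shape here are hypothetical:

```typescript
// One POST carrying all the ids instead of one request per id:
// N items, a single round trip over the mobile network.
async function fetchItemsBatched(ids: number[]): Promise<unknown[]> {
  const response = await fetch("/api/items/batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ids }),
  });
  return response.json();
}
```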
The above are just a few examples of common sense approaches to optimizing your mobile web applications for speed. Just keep focusing on the notion that leaner is faster.


As always, I welcome your comments.

Monday, December 7, 2009

How to Run Agile with Manual Testing

Contrary to popular belief, you can run Agile with an all-manual testing harness. I’m doing it now, and I’ll tell you how it works.


In Scrum (my particular flavor of Agile), we promise to deliver releasable product at the end of every Sprint (iteration). The software is considered potentially releasable based on the Team and Product Owner’s definition of “Done.” Definitions of “Done” vary greatly from team to team and project to project. At a high level, however, “Done” typically includes similar basic elements. 


Here is a typical basic definition of “Done”:
1. Code completed
2. Peer code review completed
3. Unit tests completed
4. System tests completed
5. Requirements documentation updated
6. Integration tested
7. Regression tested


Agile devotees like me will tell you that automated testing at all stages is a true key to high throughput, or velocity, on a team. When a team develops in a four-week Sprint using unit tests written into the code, scriptable QA tools that automate regression testing, and a Continuous Integration server that runs a build (and thus the unit tests) on each check-in, the amount of releasable product that can be built in a Sprint can be really impressive. Some people will even advise that some or most of these elements are required for an Agile adoption. But when your team has few or none of these automated test harness components, and procurement realities put acquiring the necessary software and hardware beyond the foreseeable horizon, why let that stop you from adopting Agile?


The key time crunch in the cycle, as is always the case with testing, comes at the end. You want developers to be fully engaged and working, but if they were to develop through the entire four-week Sprint, there would be no time left for QA to do manual Integration and Regression testing, which has to wait until all the code is complete. If you want to give the testers a full week, say, to complete these tests, you might have your developers stop developing three weeks into the Sprint to free up that fourth week for QA. But having idle developers will rightfully cause issues with management in short order.


In Scrum, you have additional set meetings every Sprint. The Sprint Review (the final demo of product developed during the Sprint) can run several hours. The same is true of the Sprint Retrospective (a process review of what went well, what didn't, and what we want to change for the next Sprint). Sprint Planning for the next Sprint (choosing items from the Product Backlog and breaking them into development tasks) can take up an entire day. Throw in a Product Backlog Grooming meeting (fleshing out requirements further down the Product Backlog, slated for future Sprints, and putting preliminary estimates on them), and you have most of a week taken up in necessary meetings for the development team. Try to run a Sprint every four weeks, with the last day of one Sprint butted right up against the first day of the next, and you'll find yourself either cutting significantly into development time to hold these meetings or shortchanging the meetings themselves. Either way, you're headed for disaster.


One way that I use to overcome these constraints is running a five week QA Sprint overlapping a four week development Sprint plus a planning week. That planning week between development Sprints is the time for QA to complete Integration and Regression testing while the development team participates in the planning meetings and conducts analysis to flesh out the Product Backlog. The developers are available to address issues encountered in QA’s testing if need be, without taking away from their development for the next Sprint. A typical schedule with duties might look like this:






Week 1 (Sprint 1): Developers begin coding and unit testing; QA begins developing system test scripts
Week 2 (Sprint 1): Developers release code to system testing and continue coding/unit testing; QA runs system tests and refines scripts
Week 3 (Sprint 1): Developers release code to system testing and continue coding/unit testing; QA runs system tests and refines scripts
Week 4 (Sprint 1): Developers release code to system testing and continue coding/unit testing; QA runs system tests and completes Integration/Regression scripts
Week 5: PLANNING WEEK. Sprint Review/Retrospective meetings, Product Backlog Grooming meeting, Sprint 2 Planning; QA runs Regression/Integration testing
Week 6 (Sprint 2): Developers begin coding and unit testing; QA begins developing system test scripts
Week 7 (Sprint 2): Developers release code to system testing and continue coding/unit testing; QA runs system tests and refines scripts
Week 8 (Sprint 2): Developers release code to system testing and continue coding/unit testing; QA runs system tests and refines scripts
Week 9 (Sprint 2): Developers release code to system testing and continue coding/unit testing; QA runs system tests and completes Integration/Regression scripts
Week 10: PLANNING WEEK. Sprint Review/Retrospective meetings, Product Backlog Grooming meeting, Sprint 3 Planning; QA runs Regression/Integration testing


Combine this strategy with a "Release Sprint," where you'll conduct the activities appropriate to a production release, such as User Acceptance Testing, a full end-to-end regression test, and any other reviews or activities that belong to a release cycle rather than a development Sprint. Overlay the release iteration on top of another development Sprint, if appropriate, to keep your team developing new features, but plan a lighter workload for that Sprint so that you have bandwidth available to address issues found in, say, UAT as a top priority. If release testing goes well and the team finds itself with spare bandwidth toward the end of that Sprint, pull additional items off the Product Backlog to fill it.

As I say, this is one way to implement Agile in an all-manual testing universe. I’d be interested to hear your experiences with similar challenges. As always, I welcome your comments.

Thursday, November 12, 2009

The Agile Voyageur – What a wilderness canoe trip taught me about software development

Those who know me well know two things about me:

1. I LOVE to tell a good story (and I might tell you the same one 5 or 6 times, so watch out);
2. I believe in the analogy, the allegory, and the axiom: basically, that we can find parallels, symbolism, and truths about one area of our lives in an entirely different area.

So with that said, let me tell you a good story that I promise to conclude with an analogy, an allegory, and an axiom!

As a high school sophomore, I went on a school-sponsored canoe expedition (just like the old French voyageurs) in Quetico Provincial Park in Ontario, Canada. This was no ordinary camping trip. It was 6 days, 12 people (3 per canoe), with no powered vehicles (air, land, or water) allowed, no buildings, no power lines, no roads…no sign of civilization. Each person had a 75 lb. backpack with personal items, tent, cooking equipment, and food…everything we needed (but NOT, for sure, everything we wanted, or thought we needed). We had to lay out our personal items before we left and the guide went through and eliminated what he felt we wouldn’t need. And everything we packed in, we had to pack out. No garbage cans.

A great mentor of mine from my high school was supposed to go up and chaperone. He had been a guide up there on several expeditions. He ended up being unable to go, but he gave me this great waterproof guide's map of Quetico…with all the campsites he'd used in the past marked on it. I was also the most experienced outdoorsman among the students, so I was set!

On the first day of the trip, our guide Steve decided to keep things light for us, let us get our feet wet (literally!). That first day was a series of small lakes and short (100-200 yard) portages. A portage, if you're not familiar, involves beaching the canoe, unloading everything, and carrying all the gear – three packs, paddles, and the canoe itself – over land to the next lake. Our group had never done anything together before…we were just kids from the same school. But we quickly had to learn how to make it work. We had no big guidebook, no real prior experience, and only Steve to answer our questions. That first day seemed rough, but we made it through, and looking back, it was REALLY the easiest.

I started off in Steve's canoe, and by the 2nd or 3rd day, it was clear to him that I was more advanced than the others. He took me aside and asked me to command one of the canoes. That sounded great! I'm in! Then the bad news: I would get the canoe with the substitute chaperone. UGH! See, the substitute was the least outdoors-y teacher in the school. He was leading a canoe, and doing it poorly. He didn't know what he was doing and, to make matters worse, he'd developed a bad attitude about the trip. He was miserable and was making others miserable. I'd be replacing him as canoe leader, and he'd be in my canoe! Uncomfortable! I decided to say yes anyway, and off we went. My "favorite" comment of his came one day while we were canoeing across a beautiful lake: "I HATE this G** D*** G**-FORSAKEN PLACE!" My answer was a wry "Really? I think it's pretty amazing." I had to keep things light. Of course, with my trusty guide's map, I knew exactly where we'd camp every night. But I was wrong…my map was a little old, and Steve had to keep correcting me: this site had a rockslide and was no good, or that site was too bear-infested.

The days went on, the lakes Steve took us across got larger, the portages longer…a half mile, then the granddaddy: a full one-mile portage. I'm proud to say I carried TWO packs over that portage – one strapped on back and one in front – and I only stopped to rest once (a feat that I promise I can't recreate today). But the teams got tighter, and as we became more experienced, we were able to go farther and produce more.

There were dangers along the way, to be sure. One team member got a leech on his leg. We paddled through one narrow, faster-water passage between lakes that most people were portaging around. On Elizabeth Lake, there were reports of bears in the area, so we had to paddle our food packs out to a tiny island in the middle of the lake and leave them there overnight so the bears wouldn't be drawn into our site.

We pulled into base camp on day six, 10 grungy-but-seasoned kids, one great guide, and one still-cranky chaperone. That night, we had a little bonfire to relax and review with Steve. “Do you realize what you boys accomplished?” Steve said. “You just traveled 110 miles in six days, with nothing but yourselves and each other to rely on. You overcame challenges, you adapted to adversity, and you’re changed forever for the better because of it.” And he was right, on all counts.

So, do you see the analogy, the allegory, and the axiom? Let me spell it out:
Analogy: Agile software development is a wilderness canoe trip. We start out small to learn, and increment up to bigger and bigger challenges. We adapt to changing circumstances and use the varied talents of our cross-functional team to reach the goal.
Allegory: The expedition is the project, the canoe is a Scrum Team, the whole group is a Scrum of Scrums, the bears and leeches are impediments. Steve was a Product Owner. The chaperone was a discordant team member. The old map is a stale project plan. Sorting through our things before beginning is removing waste and technical debt from our process. The bonfire was a retrospective.
Axiom: Agile, folks, is what we already do in everyday life, certainly on a wilderness canoe trip. So if Agile principles made a potentially life-and-death trip something fantastic to enjoy and remember forever, what do you think the chances are that they can improve our work developing software?

Note: Every word of the story is true. And I bet that chaperone is still cranky about the outdoors today.

As always, I welcome your comments.

Tuesday, August 25, 2009

Shatter the Glass Silo


So, you’re practicing Scrum, are you? You’re having your Sprint Planning meetings, your Retrospectives? If nothing else, you’re having your Daily Standups, right? But what do those Standups look like?

I heard an interesting anecdote from a colleague the other day about a Scrum Master at his workplace whom the team had dubbed "The Abusive Scrum Master." The Abusive Scrum Master was zealous about the Scrum framework. He clearly knew Scrum by the book, including what the book says you should cover at the Daily Standup – the three questions:
  • What did you do yesterday?
  • What are you doing today?
  • What impediments are you encountering?

He also knew that the book said the Standup should be time-boxed to 15 minutes maximum. Each morning, the team would gather, and the Abusive Scrum Master would poll the team members, asking them to answer the three questions. In turn, each team member would answer in a monotone while the Abusive Scrum Master ticked off the burn-down hours on the Sprint Backlog. But if a team member started to talk about anything beyond the three questions, or another team member started to ask a question, the Abusive Scrum Master would cut them off. As a result, the team members "checked out" during the standup. Each would sit, waiting their turn to be called on, staring at their papers. Yes, they had taken to just writing down their answers and reading them to the Abusive Scrum Master!

Eventually, the Abusive Scrum Master moved on and a new Scrum Master came in. At the first standup, it was clear one of the team members wasn’t listening to the updates. When the new Scrum Master mentioned it, the team member said “right, well, it’s not my turn.” This team had moved into the “Glass Silos.”

Have you seen it at your office? The “team” reports at their standup each day, you can see them, they’re standing right there, but there is no communication between team members. The only information flow is up and out of their silo to the Scrum Master. When that happens, it’s no longer a team…it’s a collection of individuals.

The Daily Standup is a meeting for collaborative information exchange. It's not strictly for the team to report their status to the Scrum Master. It's for the team members to share with each other what they're working on, what they've done, and what issues they're encountering. The Scrum Master should be listening, gently nudging the conversation back on track if it gets too far into the weeds, but this is the time for the team to get up to speed with each other on what's going on in the project today. When that dynamic gains steam, positive changes will follow. The team members will share insights with each other on how to solve issues. Sidebars for further collaboration will develop outside the standup (and this is good!). Your team will become more versatile because they can cover each other's work. And all of this drives the team's velocity dramatically upward.

So take stock at your next Standup. Foster the collaboration, keep the team together, and shatter those Glass Silos!