Thursday, December 31, 2009

Running Agile With Manual Testing Part 2: Feeding the Queue

A few weeks ago, I told you about how to practice Agile with an all-manual test harness. Now I'd like to give you some additional principles to keep in mind that will help you in this challenging scenario (and come in handy as good practices even with test automation).

The key in this situation is to keep work flowing at a steady pace and to get stories through to QA testing early in the process.

(Note: I'm adapting some of these principles from Mary and Tom Poppendieck's research on the construction of the Empire State Building for their new book, Leading Lean Software Development: Results Are Not the Point.)

1.   Get ONE to Done:
Focus the team on one user story at a time and on moving it to QA testing as soon as possible. If you have four developers on your team, instead of having them work four separate stories and move all four toward QA testing at roughly the same pace (which could deliver four stories to QA at once and create a logjam), have them pair up and split the tasks so that two stories, or better yet just one, move to QA first. Think of an automotive assembly line, but instead of each developer building their own car, the team works on one car at a time, together, completing it before moving on to the next.

2.   Feed the Queue Logically:
Consult the QA testers about a sensible testing order for the stories in the sprint. In Sprint Planning and in the Daily Standup, ask the question: "Which of these stories make sense to test together, or in a certain order?" It sounds like an obvious point, but if you don't ask the question, it can be missed. Then factor the answer into how you prioritize the stories in-sprint.

3.   Prioritize by Size:
We always want to work first on the stories with the highest priority per the Product Backlog. But within a sprint, IF the relative priorities of two stories are close, there's little danger of being stopped mid-sprint, and all else is equal, start with the largest story. There's generally (though not always) a direct relationship between the amount of development work a story needs and the amount of testing it requires. There are also usually more unknowns and issues that can crop up on a larger story than on a smaller one. By putting these stories into your testers' hands sooner, you give them more time to test and uncover those issues, and you give your developers more time to address them during the sprint.

Just to illustrate the point, let's say you have five user stories in a sprint, A - E (by priority), as illustrated in the first graph below. I've applied a rule of thumb that QA needs 25% of the development time for each story. IF all else is equal, and you have the leeway to do so, prioritizing the stories by size (as illustrated in the second graph) results in a steadily decreasing amount of testing time required for each story the developers hand off to the testers:

[Graphs: QA testing time per story when stories are worked in backlog-priority order (A - E) vs. largest-first]

By leaving the smaller stories to the end (the ones that also require less testing, per the rule of thumb), you accomplish several things. First, you shrink the lag between when your developers complete their work and the end of the sprint, which keeps them from "sitting around" as the sprint winds down. Second, you make it easier for the QA testers to turn the testing around successfully in the limited time left at the end of the sprint, because the testing required on those final stories should be smaller.
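
To make the arithmetic concrete, here is a minimal sketch (in TypeScript) of the same comparison. The story sizes are made up for illustration, and the 25% figure is just the rule of thumb above, not a law:

    // Hypothetical story sizes in developer-days; A-E are listed in backlog-priority order.
    const storiesByPriority = [
      { name: "A", devDays: 3 },
      { name: "B", devDays: 8 },
      { name: "C", devDays: 5 },
      { name: "D", devDays: 2 },
      { name: "E", devDays: 6 },
    ];

    const QA_RATIO = 0.25; // rule of thumb: QA effort is roughly 25% of development effort

    // With the team swarming one story at a time ("Get ONE to Done"), a story
    // reaches QA once its own dev-days, plus everything worked before it, are done.
    function handoffSchedule(stories: { name: string; devDays: number }[]) {
      let elapsed = 0;
      return stories.map((s) => {
        elapsed += s.devDays;
        return { story: s.name, reachesQaOnDay: elapsed, qaDaysNeeded: s.devDays * QA_RATIO };
      });
    }

    const largestFirst = [...storiesByPriority].sort((a, b) => b.devDays - a.devDays);

    console.log("Backlog order:", handoffSchedule(storiesByPriority));
    console.log("Largest first:", handoffSchedule(largestFirst));
    // Largest-first hands the biggest QA jobs over earliest, so the QA work still
    // outstanding shrinks steadily as the sprint end approaches.

Either ordering adds up to the same total work; the difference is when the biggest testing jobs land in the testers' hands.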

Keeping these simple guidelines in mind should help keep your sprint queue humming along like an assembly line, and make it that much easier to practice Agile with manual testing.

As always, I welcome your comments.

Tuesday, December 15, 2009

Common Sense Guidelines to Creating Speedy Mobile Web Applications

In the current story line on the HBO series "Curb Your Enthusiasm", "Seinfeld" creator Larry David is producing a "Seinfeld" reunion show. In the story line, George Costanza has a brilliant idea for the "iToilet," an iPhone app that directs you to the closest acceptable public toilet anywhere in the world. In preparing to write this blog, I could think of no better example of a mobile application for which speed would be critical.

OK, so it's a fictional, humorous example, but the fact of the matter is that mobile web content has grown far beyond simple text messaging. Looking around the train on my daily commute, it's the people surfing the web on laptops instead of mobile devices who now look "old school." Mobile devices have made the leap to primary information channel for millions of people worldwide. With that level of reliance on all this rich content, it's critically important that your mobile applications focus on speed. And to do that, it helps to think lean.

Here are some basic, common sense guidelines to consider in designing mobile applications for speed:

  1. Limit white space: It sounds simple enough, and it can be. Smaller content files transmit faster, and stripping extra white space is a quick way to slim a file down.
  2. Watch out for the Cookie Monster: Cookies are a standard method for storing identifying data on the client to personalize the user experience on a site. They present a dual problem in mobile web applications, however. For one, every server request carries the cookie data with it, adding overhead to the traffic and reducing performance. Also, many mobile devices don't accept cookies, so you can't rely on support for them across devices. One answer is HTML5 and its support for DOM storage (also known as Web Storage). DOM storage offers some key advantages over cookies for mobile web applications; chief among them, in terms of speed, is that DOM storage data is not transmitted to the web server with every request, which cuts that extra traffic and increases speed. You can also store much more data in DOM storage (5MB-10MB depending on the browser) than in a cookie. And unlike cookies, the server cannot read DOM storage at will; the client controls what, if anything, gets sent. A minimal sketch of using it, with a cookie fallback, appears after this list.
  3. Minimize the detours: Redirects (sending a user to another server to pull in additional information) can be flat-out brutal in mobile web applications. When rendering a page, a redirect between the client and the HTML file delays the download, and nothing can render until the HTML document finally arrives. Some redirects may be necessary, for example in account authentication, but others can happen unintentionally, and you should watch vigilantly for these cases (a quick redirect-check sketch follows the list). Some examples to avoid:
    1. Missing Trailing Slash – Failing to have a trailing slash in a URL will cause an unintentional redirect to the directory, assuming the URL ends with a directory name. For instance, http://mywebsite.com/somedir will cause a redirect to http://mywebsite.com/somedir/.
    2. Conditional Redirects – This is redirecting a user, based on certain conditions (browser used, etc.), to a different section of a website, or to a different site altogether. It's easier to develop this way, but again, it adds overhead and leaves the user waiting.
  4. Pack before you move: This is a general guideline to minimize network traffic by reducing round trips, and includes practices such as:
    1. Batching data requests so they travel in a single round trip
    2. Combining static images into a single image using CSS sprites, which works well for images that don't change often
    3. Caching Ajax data – caching these responses wherever possible spares the user from waiting for another asynchronous request to round-trip over the mobile network (see the caching sketch after this list)
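To illustrate the storage point in item 2, here is a minimal browser-side sketch in TypeScript. The function names are my own; it prefers DOM storage when the browser exposes it and falls back to a cookie otherwise. Treat it as a sketch under those assumptions, not a drop-in library:

    // Prefer HTML5 DOM storage: data saved here stays on the client and is NOT
    // sent to the server with every request the way a cookie is.
    function savePreference(key: string, value: string): void {
      try {
        if (typeof window !== "undefined" && window.localStorage) {
          window.localStorage.setItem(key, value);
          return;
        }
      } catch (e) {
        // storage may be disabled or full; fall through to the cookie fallback
      }
      // Fallback for devices without DOM storage: a cookie, which will ride along
      // on every subsequent request to this domain.
      document.cookie =
        encodeURIComponent(key) + "=" + encodeURIComponent(value) + "; path=/; max-age=31536000";
    }

    function readPreference(key: string): string | null {
      try {
        if (typeof window !== "undefined" && window.localStorage) {
          return window.localStorage.getItem(key);
        }
      } catch (e) {
        // fall back to reading the cookie below
      }
      const prefix = encodeURIComponent(key) + "=";
      for (const part of document.cookie.split("; ")) {
        if (part.indexOf(prefix) === 0) {
          return decodeURIComponent(part.slice(prefix.length));
        }
      }
      return null;
    }

    // Usage: savePreference("units", "metric"); readPreference("units");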
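For the redirect point in item 3, here is a small, hypothetical check you could run against your own URLs to spot unintended redirects such as the missing-trailing-slash case. It assumes a recent Node.js (18+) runtime, where fetch is built in and exposes the 3xx status when redirects are not followed:

    // Request a URL without following redirects and flag any 3xx response,
    // so unintentional hops (like the missing trailing slash) show up early.
    async function flagRedirect(url: string): Promise<void> {
      const res = await fetch(url, { method: "HEAD", redirect: "manual" });
      if (res.status >= 300 && res.status < 400) {
        console.warn(`${url} redirects (${res.status}) to ${res.headers.get("location")}`);
      }
    }

    // Hypothetical example: likely reports a 301 to http://mywebsite.com/somedir/
    // flagRedirect("http://mywebsite.com/somedir");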
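And for item 4, a minimal sketch of caching Ajax responses on the client so that repeated identical requests don't make another round trip over the mobile network. The wrapper shape and the time-to-live are arbitrary choices for illustration:

    // Wrap any text-fetching function with an in-memory cache: identical requests
    // within the time-to-live are answered locally, with no network traffic at all.
    type Fetcher = (url: string) => Promise<string>;

    function makeCachedFetcher(fetchText: Fetcher, ttlMs: number): Fetcher {
      const cache = new Map<string, { body: string; fetchedAt: number }>();
      return async (url: string) => {
        const hit = cache.get(url);
        if (hit && Date.now() - hit.fetchedAt < ttlMs) {
          return hit.body; // served from memory, no round trip
        }
        const body = await fetchText(url);
        cache.set(url, { body, fetchedAt: Date.now() });
        return body;
      };
    }

    // Usage (assuming a browser or runtime with fetch available):
    // const cachedGet = makeCachedFetcher((u) => fetch(u).then((r) => r.text()), 60000);
    // cachedGet("/api/nearby-restrooms").then((data) => { /* render the list */ });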
The above are just a few examples of common sense approaches to optimizing your mobile web applications for speed. Just keep focusing on the notion that leaner is faster.


As always, I welcome your comments.

Monday, December 7, 2009

How to Run Agile with Manual Testing

Contrary to popular belief, you can run Agile with an all-manual testing harness. I’m doing it now, and I’ll tell you how it works.


In Scrum (my particular flavor of Agile), we promise to deliver releasable product at the end of every Sprint (iteration). The software is considered potentially releasable based on the Team and Product Owner’s definition of “Done.” Definitions of “Done” vary greatly from team to team and project to project. At a high level, however, “Done” typically includes similar basic elements. 


Here is a typical basic definition of “Done”:
1. Code completed
2. Peer code review completed
3. Unit tests completed
4. System tests completed
5. Requirements documentation updated
6. Integration tested
7. Regression tested


Agile devotees like me will tell you that automated testing at all stages is a true key to high throughput, or velocity, on a team. When a team develops in a four-week Sprint with unit tests written into the code, scriptable QA tools that automate regression testing, and a Continuous Integration server that runs a build (and thus the unit tests) on each check-in, the amount of releasable product built in a Sprint can be really impressive. Some people will tell you that some or most of these elements are required for an Agile adoption. But when your team has few or even none of these automated test harness components, and procurement realities put acquiring the necessary software and hardware beyond the foreseeable horizon, why let that stop you from adopting Agile?


The key time crunch in the cycle, as is always the case with testing, comes at the end. You want developers to be fully engaged and working, but if they were to develop through the entire four week Sprint, there would be no time left for QA to do manual Integration and Regression testing, which has to wait until all the code is completed. If you want to give the testers a full week, let’s say, to complete these tests, you might have your developers stop developing three weeks into the Sprint to give QA that fourth week. But having idle developers will rightfully cause issues with management in short order.


In Scrum, you also have set meetings every Sprint. The Sprint Review (the final demo of the product developed during the Sprint) can run several hours. The same is true of the Sprint Retrospective (a process retrospective on what went well, what didn't, and what we want to change for the next Sprint). Sprint Planning for the next Sprint (choosing items from the Product Backlog and breaking them into development tasks) can take up an entire day. Throw in a Product Backlog Grooming meeting (fleshing out requirements further down the Product Backlog, slated for a future Sprint, and putting preliminary estimates on them), and you have most of a week taken up in necessary meetings for the development team. Try to run a Sprint every four weeks, with the last day of one Sprint butted right up against the first day of the next, and you'll find yourself either cutting significantly into development time to hold these meetings or shortchanging the meetings themselves. Either way, you're headed for disaster.


One way I overcome these constraints is to run a five-week QA Sprint that overlaps a four-week development Sprint plus a planning week. That planning week between development Sprints is when QA completes Integration and Regression testing, while the development team participates in the planning meetings and conducts analysis to flesh out the Product Backlog. The developers are available to address issues QA finds, if need be, without taking away from their development time for the next Sprint. A typical schedule with duties might look like this:


Week 1 (Sprint 1): Developers begin coding and unit testing; QA begins developing system test scripts
Week 2 (Sprint 1): Developers release code to system testing and continue coding/unit testing; QA runs system tests and refines scripts
Week 3 (Sprint 1): Developers release code to system testing and continue coding/unit testing; QA runs system tests and refines scripts
Week 4 (Sprint 1): Developers release code to system testing and continue coding/unit testing; QA runs system tests and completes Integration/Regression scripts
Week 5: PLANNING WEEK. Sprint Review and Retrospective meetings, Product Backlog Grooming, Sprint 2 Planning; QA runs Regression/Integration testing
Week 6 (Sprint 2): Developers begin coding and unit testing; QA begins developing system test scripts
Week 7 (Sprint 2): Developers release code to system testing and continue coding/unit testing; QA runs system tests and refines scripts
Week 8 (Sprint 2): Developers release code to system testing and continue coding/unit testing; QA runs system tests and refines scripts
Week 9 (Sprint 2): Developers release code to system testing and continue coding/unit testing; QA runs system tests and completes Integration/Regression scripts
Week 10: PLANNING WEEK. Sprint Review and Retrospective meetings, Product Backlog Grooming, Sprint 3 Planning; QA runs Regression/Integration testing


Combine this strategy with a "Release Sprint," where you conduct the activities appropriate to a production release: User Acceptance Testing, a full end-to-end regression test, and any other reviews or activities that belong to a release cycle rather than a development Sprint. Overlay the release iteration on top of another development Sprint, if appropriate, to keep your team delivering new features. Plan a lighter workload for that Sprint so you have bandwidth available to address issues found in, say, UAT as a top priority. If release testing goes well and the team finds itself with available bandwidth toward the end of that Sprint, pull additional items off the Product Backlog to fill the Sprint.

As I say, this is one way to implement Agile in an all-manual testing universe. I’d be interested to hear your experiences with similar challenges. As always, I welcome your comments.