Through experience and a lot of trial and error, testers and musicians have developed skills they can’t easily explain. Unfortunately, with software testing, there aren’t as many obvious avenues for skill development as there are for musicians. Many software testers don’t realize that there are learnable exploratory testing skills they can develop to help them become even more valuable to software development teams.

When I work in a new organization, it doesn’t take long for me to get introduced to the testers who are highly valued. Often, when I ask testers about the times they did their best work, they apologize for breaking the rules: "I didn’t follow any pre-scripted test cases, and I didn’t really follow the regular testing process." As they describe how they came through for the team on a crucial bug discovery, they outline activities that I identify with skilled exploratory testing. Little do they know that this is often precisely why they are so effective as testers. They have developed analysis and investigative skills that give them the confidence to use the most powerful testing tool we have at our disposal: the human mind.

Exploratory testing is something that testers naturally engage in, but because it is the opposite of scripted testing, it is misunderstood and sometimes discouraged. In an industry that encourages pre-scripted testing processes, many testers aren’t aware that there are ways of testing other than writing and following test scripts. Yet both software testing and music can be interpreted and performed in a variety of ways.
What does skilled exploratory testing look like? Here is an example of scripted testing and exploratory testing in action. In one test effort, I came across a manual test script and its automated counterpart that had been written several releases earlier. They were for an application I was unfamiliar with, using technology I was barely acquainted with. I had never run these tests before, so I ran the automated test first to try to learn more about what was being tested. It passed, but the test execution and results logging didn’t provide much information other than "test passed." To me, this is the equivalent of the emails I get that say: "Congratulations! You may already be a winner!" Statements like that on their own, without some sort of corroboration, mean very little.
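As an aside, it takes very little code to make a check more corroborating than a bare "test passed." Here is a minimal sketch of the idea in Python; the logger name, status strings, and the check itself are hypothetical stand-ins, since the original suite isn’t described here:

    import logging

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
    log = logging.getLogger("legacy-suite")  # hypothetical suite name

    def check_transaction_status(observed: str, expected: str) -> bool:
        """Log what was actually observed alongside the verdict, so the
        result can be corroborated later instead of taken on faith."""
        verdict = observed == expected
        log.info("observed=%r expected=%r verdict=%s",
                 observed, expected, "PASS" if verdict else "FAIL")
        return verdict

    # Placeholder status strings; the real application's are unknown here.
    assert check_transaction_status("complete", "complete")

A log line that records the observed and expected values gives a reviewer something to verify, rather than a verdict to believe.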
I didn’t learn much from my initial effort: running the automated test didn’t reveal more information about the application or the technology. Since learning is an important part of testing work, I delved more deeply. I moved on to the manual test script and followed each step. When I got to the end, I checked for the expected results, and sure enough, the actual result I observed matched what was predicted in the script. Time to pass the test and move on, right? I still didn’t understand exactly what was going on with the test, and I couldn’t take responsibility for those test results completely on blind faith. That violates my purpose as a tester; if I believed everything worked as advertised, why test at all? Furthermore, experience has taught me that tests can be wrong, particularly as they get out of date. Re-running the scripted tests provided no new information, so it was time to leave the scripted tests behind.
One potential landmine in providing quality-related software information is tunnel vision. Scripted tests have a side effect of creating blinders: they narrow your observation space. To widen my observation possibilities, I began to transition from scripted testing to exploratory testing. I created new tests by adding variability to the existing manual test, and I was able to get a better idea of what worked and what caused failures. I didn’t want to write these tests down, because I wanted to adjust them on the fly so I could quickly learn more. Writing them down would interrupt the flow of discovery, and I didn’t yet know which tests I would want to repeat later.
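One cheap way to add that kind of variability is sketched below in Python. Everything here is a hypothetical stand-in for the real application step; the one detail worth copying is printing the random seed, so an interesting variation can still be replayed later without pre-scripting it:

    import random

    SCRIPTED_AMOUNT = 100  # hypothetical stand-in for the script's fixed input

    def run_transaction(amount: int) -> bool:
        """Placeholder for the application step the manual script exercised."""
        # Drive the application with `amount` here.
        return True  # pretend the UI reported success

    # Perturb the scripted input instead of repeating it verbatim. Printing
    # the seed means any interesting variation can be reproduced on demand.
    seed = random.randrange(1_000_000)
    rng = random.Random(seed)
    print(f"session seed: {seed}")

    for _ in range(10):
        amount = SCRIPTED_AMOUNT + rng.choice([-1, 0, 1, 99, -100, 10_000])
        print(f"amount={amount} ui_reported_ok={run_transaction(amount)}")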
I ran another test, and without the scripted tests to limit my observation, noticed something that raised my suspicions: the application became sluggish. Knowing that letting time pass in a system can cause some problems to intensify, I decided to try a different kind of test. I would follow the last part of the manual test script, wait a few minutes, and then thoroughly inspect the system. I ran this new test, and the system felt even more sluggish than before. The application messaging showed me the system was working properly, but the sluggish behavior was a symptom of a larger problem not exposed by the original tests I had performed.
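The "let time pass, then inspect" idea can also be expressed as a quick harness. This is a sketch under assumptions; the step being exercised and the pause length stand in for the application and timings described above:

    import time

    def final_script_step() -> None:
        """Placeholder for the last step of the manual test script."""
        # Drive the application here.

    def timed(step) -> float:
        """Run a step and report how long it took; response times that drift
        upward across identical runs are a symptom worth investigating."""
        start = time.perf_counter()
        step()
        return time.perf_counter() - start

    # Repeat the same step with a pause between runs and compare timings.
    for run in range(3):
        print(f"run {run}: {timed(final_script_step):.2f}s")
        time.sleep(120)  # let time pass before inspecting the system again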
Investigating behind the user interface, I found that the application was silently failing; while it was recording database transactions as completing successfully, it was actually deleting the data. We had been losing data ever since I ran the first tests. Even though the tests appeared to pass, the application was failing in a serious manner. If I had relied only on the scripted manual and automated tests, this would have gone undetected, resulting in a catastrophic failure in production. Furthermore, if I had taken the time to write down the tests first and then execute them, I would most likely have missed the window of opportunity that allowed me to find the source of the problem. Merely running the scripted tests felt like repeating a resolution without any tension, and it didn’t lead to any interesting discoveries. On the other hand, the interplay between the tension and resolution of exploratory testing ideas quickly led to a very important discovery. Due to results like this, I don’t tend to use many procedural, pre-scripted manual test cases in my own work.
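Investigating behind the user interface can be as simple as querying the data store directly after the UI reports success. A minimal sketch, assuming a SQLite database and an invented transactions table; a real system’s data store and schema would differ:

    import sqlite3

    def record_survives(db_path: str, txn_id: int) -> bool:
        """Ask the data store directly whether the transaction's row still
        exists, instead of trusting the UI's success message."""
        with sqlite3.connect(db_path) as conn:
            (count,) = conn.execute(
                "SELECT COUNT(*) FROM transactions WHERE txn_id = ?",
                (txn_id,),
            ).fetchone()
        return count == 1

    # After the UI reports success, corroborate it independently, e.g.:
    #   assert record_survives("app.db", txn_id=42), "silent data loss?"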
So how did I find a problem that was waiting like a time bomb for a customer to stumble upon? I treated the test scripts for what they were: imperfect sources of information that could severely limit my ability to observe useful information about the application. Before, during, and after test execution, I designed and re-designed tests based on my observations. I also had a bad feeling when I ran the tests. I’ve learned to investigate those unsettled feelings rather than suppress them, because feelings of tension don’t fit into a script or process; often, this rapid investigation leads to important discoveries. I didn’t let the scripts dictate what to test or what success meant. I had confidence that skilled exploratory testing would confirm or deny the answers supplied by the scripted tests.
Testers who have learned to use their creativity and intelligence when testing come up with ways to manage their testing thought processes. Skilled exploratory testers use mental tricks to help keep their thinking sharp and consistent. Two tricks testers use to kick-start their brains are heuristics (problem-solving approaches) and mnemonics (memory aids).

I’m sometimes called in as an outside tester to test an application that is nearing completion. If the software product is new, a technique I might use is the "First Time User" heuristic. With very little application information, and using only the information available to a first-time user, I begin testing. It’s important for me to know as little as possible about the application at this time, because once I know too much, I can’t really test the way a first-time user would.
As I work through the application, I may find mismatches between the application, the market it is intended for, and the information supplied to users. If I have trouble figuring out the software’s purpose and how to use it, so will end users. This is important usability information that I write down in my notes; if the software isn’t usable, it isn’t going to sell. I invariably find bugs in this first testing session. I explore them, take notes so I can report them later, and design new tests around those bugs. After the session is over, I will have bugs to report and usability questions to ask. I will also have a model developed in my mind for testing this software. My brain will work on this model constantly, even when I’m not testing, and other testing activities will help build this and other models of the software.
Using heuristics and mnemonics helps me be consistent when testing, but I don’t let them rule my testing actions. If I observe something suspicious, I explore it. If something feels wrong, I investigate, and confirm or deny that feeling with defensible facts. It’s common to switch from heuristics and mnemonics to pure free-form improvisation and back again, or to improvise around highly structured tests. Exploratory testing, like improvising, helps me adapt my thinking and my actions based on what the software is telling me. This is a powerful concept. You can seize upon opportunities as soon as you observe them. Furthermore, you can adapt quickly to project risks, and discover and explore new ones. By developing skills to manage your thinking about testing, you no longer have to wait for spontaneous discoveries to appear out of thin air, unable to explain why you found a particular problem or how to repeat it.
Developing exploratory testing skill puts you in charge of your testing approach. Skilled software testing, like skilled musicianship, is often referred to as "magic," simply because it is misunderstood. Music follows a set of patterns, heuristics, and techniques; if you know a handful, you can make music quite readily. Getting there by trial and error takes a lot longer, and you’ll have a hard time explaining how you got there once you arrive. Testing through pure observation and trial and error can be effective, but it becomes effective much more quickly when there is a system to frame it.
Skilled exploratory testing can be a powerful way of thinking about testing. However, it is often misunderstood, feared, and discouraged. When we dictate that all tests must be scripted, we discourage the wonderful tension and resolution in testing that is driven by a curious thinker. We limit the possibility of our software testing leading to new, important discoveries. We also hamper our ability to identify and adapt to emerging project risks. In environments that are dominated by technology, it shouldn’t be surprising that we constantly look to tools and processes for solutions. But tools and processes on their own are stupid things; they still require human intelligence behind them. In the right hands, software tools and processes, much like musical instruments, enable us to realize the results we seek. There are many ways to perform music, and there are many ways to test software. Skilled exploratory testing is another effective thinking tool to add to the testing repertoire.
Get More on Exploratory Testing:
Bach, James. (2003) Exploratory Testing
http://www.satisfice.com/articles/et-article.pdf
Kaner, Cem. (2004) The Ongoing Revolution in Software Testing, Presented at the Software Test & Performance Conference, December 2004, Baltimore, MD
http://www.kaner.com/pdfs/TheOngoingRevolution.pdf
Bach, James. (1999, November) Heuristic Risk-Based Testing, Software Testing and Quality Engineering Magazine