01 About the question itself
I think this question is very misleading; it is a typical logical pitfall. Is it true that automated testing is expensive and ineffective? Of course not. I've always believed that the best way to answer a question is to get to the bottom of the question itself, that is, to question the question. The asker could clarify the following points, both to understand his or her own question better and to make it answerable:
Define the scope of "automated testing". In simple terms, automated testing includes writing test cases, implementing code, setting up environments, executing cases, generating reports, analyzing results, reporting defects, and so on. The degree of automation varies from project to project, and testers' understandings of automation differ widely, so the actual scope of automation varies greatly.
Define what "cost" means.
Define what "high" means. High is relative; the comparison may be to another project or team, or to someone's expectations.
Define what "effective" means.
Define what "poor" means. Poor is also relative, either to manual testing or to the boss's expectations.
If the asker thinks carefully about and answers the questions above, I'm 70% sure that he or she will either no longer want to ask this question or will want to rephrase it.
Another way to ask
Well, to avoid any suspicion that I'm just quibbling, let me try to read the asker's intention with the greatest possible goodwill. The question becomes:
If there are projects where automated tests cost more than we expected and the results don't meet expectations, what might be the problem? How can we tell if automated tests are effective?
02 Now for the main text
On false expectations
I'm not at all surprised when someone tells me:
I don't even know what I or my boss expects from automated testing, no one's told me.
Or:
Doesn't automation mean you don't have to test by hand? The test cases run all by themselves once the code is written, the testers can go do something else, and we can hire fewer of those button-pushing testers who don't produce value. That's how the boss planned it.
These are two very typical problems with expectations for automated testing:
Everyone understands automated testing differently, and every project team does automation differently. Let me tell you a story, a real example from an automation project in India that I took over. More than 95% of the test scenarios in this project were complex UI tests (Web + Windows application). Their automation was done like this:
What do you think of this automation? I nearly coughed up blood when I saw it, because this was the project I was about to take over. And more blood was still to come: the VP of QA of that department was absolutely dissatisfied with the results of the automated testing, and he had a grand vision for what it should achieve.
This was a typical case of a team that didn't understand automation plus a boss whose expectations were out of touch with reality.
On what automation is
James Bach once mentioned in a blog post that the name "automated testing" is very misleading: it makes people think testing can be completely automated, like an automatic coffee machine where all you do is put a cup in and press a button. James says it is more accurately called "tool-assisted testing". Of course, there is another layer to this: good test cases cannot be 100% automated, because a tester's experience, logical judgment, and exploratory testing methods cannot be effectively automated.
I couldn't agree more. To add to and expand on this assertion: automation should be about looking at every aspect of software development activity, identifying repetitive activities that can be tooled and automated, and then implementing that automation. Automation in the broad sense covers far more segments of the development lifecycle than UI test execution alone.
An oversimplified formula could be written as follows:
Benefit of automation = number of iterations × cost of full manual execution − cost of initial automation − number of maintenance rounds × cost of maintenance
Or, if one assumes that the number of iterations and the number of maintenance rounds are roughly equal, which can hold in some cases, such as a relatively new product:
Benefit of automation = number of iterations × (cost of full manual execution − cost of maintenance) − cost of initial automation
Interpretation:
The benefit of automation is proportional to the number of iterations.
The benefit of automation can be negative, namely when the automation and maintenance costs exceed the cost of manual execution.
In many cases the initial automation cost is not higher than the manual cost; it is the maintenance cost that is high.
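To make the formula above concrete, here is a minimal sketch in Python. All the numbers are made-up illustrations, not benchmarks; costs can be read as person-days.

```python
# Minimal sketch of the simplified formula above:
# benefit = iterations * (manual execution cost - maintenance cost)
#           - initial automation cost
# All numbers are invented for illustration; costs are in person-days.

def automation_benefit(iterations: int,
                       manual_cost: float,
                       initial_cost: float,
                       maintenance_cost: float) -> float:
    return iterations * (manual_cost - maintenance_cost) - initial_cost

# Long-lived product: many iterations, cheap maintenance -> positive benefit
print(automation_benefit(iterations=30, manual_cost=5,
                         initial_cost=40, maintenance_cost=1))    # 80.0

# Fast-changing UI: few iterations, costly maintenance -> negative benefit
print(automation_benefit(iterations=5, manual_cost=5,
                         initial_cost=40, maintenance_cost=6))    # -45.0

# Break-even: the benefit turns positive once
# iterations > initial_cost / (manual_cost - maintenance_cost)
```

The break-even line in the last comment also previews Corollary 1 below: the more iterations a product will go through, the easier it is to clear that threshold.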
I stress that this is oversimplified because the benefit here only counts savings in time and resource costs. Good automation shortens iteration cycles, which can shorten the whole project lifecycle, and at some point it turns "can't do" into "can do"; the resulting opportunity gains are huge and hard to quantify. This requires decision makers to have sound intuition about and understanding of software engineering and automation. Chasing only the resource savings of automation, or demanding a precise quantification of its benefits, is not a good idea.
Corollary 1: What projects are suitable for automation
From the simplified formula for ROI, we can see that automation is more suitable for the following situations:
Projects built on long-term support and regression testing, i.e., those that require support and maintenance for years. Products of this kind (enterprise software, operating systems, etc.) often need to be supported for many years after a version ships, with ongoing bug fixes and patches. Each small release adds to the number of iterations, while each change to the product is very limited, so maintenance costs stay low and the benefit of automation is excellent. This is why many enterprise software or hardware products have dedicated automation teams: regression testing for product support and maintenance relies almost entirely on automation.
Products with relatively stable interfaces, for the same reason as above.
Projects where manual testing is particularly time-consuming and laborious, or simply cannot achieve the purpose of the test. For example, stress testing and testing with big data or large volumes of repetitive data must be supported by automation tools; see the sketch below.
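As an illustration of that last point, here is a minimal stress-testing sketch. call_service() is a hypothetical stand-in for a real client call (an HTTP request, an RPC, and so on); no manual tester could fire hundreds of concurrent requests and collect latency percentiles by hand.

```python
# Minimal sketch of tool-assisted stress testing: fire many concurrent
# requests at a service and report latency percentiles.
# call_service() is a hypothetical stand-in for a real client call.
import random
import time
from concurrent.futures import ThreadPoolExecutor

def call_service() -> float:
    """Stand-in for a real service call; returns latency in seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # simulate network/processing time
    return time.perf_counter() - start

def stress(workers: int, requests: int) -> None:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(lambda _: call_service(), range(requests)))
    print(f"{requests} requests with {workers} workers: "
          f"p50={latencies[len(latencies) // 2] * 1000:.1f} ms, "
          f"p95={latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")

if __name__ == "__main__":
    stress(workers=20, requests=500)
```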
Corollary 2: The point of intervention for automation
Again extrapolating from the simplified ROI formula, the early stage of a project may not be well suited to automation: the user interface and interfaces have not yet stabilized, so automation code is forced to change frequently, maintenance is very expensive, and the benefit is poor. Manual testing, by contrast, can quickly find problems and give developers feedback at this stage. When the project reaches its late stage and maintenance period, bringing in automation to prepare for regression testing maximizes the benefit.
Corollary 3: Degree of automation and rate of automation
The degree of automation here refers to how far automation has been introduced across the whole software development activity. Corollary 2 says some projects may not be well suited to a high degree of automation early on, but certain aspects can still be selected for early automation: for example, stable common interfaces, compiling and deploying the software, setting up environments, and other parts that are stable from the start.
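For instance, the build/deploy/environment steps just mentioned can be automated even while the UI is still in flux. Here is a minimal sketch; every command in it is a hypothetical placeholder for your project's real tooling.

```python
# Minimal sketch of automating the stable, non-UI parts of the pipeline
# (build, environment setup, deployment). Every command below is a
# hypothetical placeholder; substitute your project's real tooling.
import subprocess
import sys

STEPS = [
    ["make", "build"],                 # compile the software (placeholder)
    ["./scripts/setup_env.sh"],        # set up the test environment (placeholder)
    ["./scripts/deploy.sh", "test"],   # deploy to the test environment (placeholder)
]

def run_pipeline() -> bool:
    for cmd in STEPS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("step failed, stopping pipeline", file=sys.stderr)
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```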
The automation rate likewise depends on product and project characteristics: keep it lower for UI parts that change frequently, and raise it for service components with more stable interfaces.
Team, tools, and infrastructure
This factor, in fact, has to be considered in everything you do. Automated testing is itself software development: a good automation framework and architectural design matter, because they determine the development and maintenance costs of the automation, and they require strong development skills. If your team has only very limited development skills, how should it approach automation; should it stick to the most primitive record-and-playback, or go data-driven? Complex automation also needs good infrastructure support. For example, if you already have a good DevOps VM-management system, you don't have to build one yourself, and the resources and manpower saved are significant.
Tooling is another piece. Commercial test and test-management software can reduce the programming skill required, if the company can afford it (which of course brings other issues). If commercial tools are out of reach, the only options are open-source tools plus in-house development, which demands strong automation-development capability. In short, you must choose an automation strategy that matches your team, skills, infrastructure, and tools.
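On the record-and-playback versus data-driven question above: data-driven testing separates test logic from test data, so extending coverage means adding data rows rather than code. A minimal sketch with pytest follows; discount() is a hypothetical function under test, invented for this example.

```python
# Minimal data-driven test sketch with pytest: one test function runs
# over a table of inputs and expected outputs. discount() is a
# hypothetical function under test, invented for this example.
import pytest

def discount(price: float, vip: bool) -> float:
    """Hypothetical function under test: VIP customers get 10% off."""
    return round(price * 0.9, 2) if vip else price

@pytest.mark.parametrize("price, vip, expected", [
    (100.00, False, 100.00),  # non-VIP pays full price
    (100.00, True, 90.00),    # VIP gets 10% off
    (19.99, True, 17.99),     # rounding to cents
    (0.00, True, 0.00),       # edge case: free item
])
def test_discount(price, vip, expected):
    assert discount(price, vip) == expected
```

Run it with pytest and each data row shows up as its own test case in the report, which keeps failures easy to localize.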
Management understanding and support
I won't expand on this. I've seen very bad situations: a VP with hundreds of people under him, juggling product and technology, skipping three or four levels of management to hand technical requirements and suggestions directly to the test team. Should the team do it or not, and how? There was another team whose automation testers had never written programs in Java or any other OO language, yet were asked to design an automation framework from scratch; it was a disaster. Then there was the team where management asked several times to switch automation tools, which amounted to a wholesale rewrite of the automation scripts.
Summary
The above is only a very cursory answer. Automated testing is a very specialized field, and it demands considerable technical breadth and depth from engineers. For team managers and decision makers: please don't oversimplify automated testing or treat it in isolation. Most important of all, listen to the judgment of the technical people who really understand the product, the team, and automated testing.
END
This article is copyrighted. Please contact the author for authorization before republishing in any form, and credit the source.