The prophet's dilemma: One of the most daunting questions in testing is how, when running a test case, you can know that the software under test actually did what it was supposed to do. Did it produce the right results? Did it introduce side effects along the way? How can you be sure? Given a user environment, a specific data configuration, and an input sequence, is there a prophet who can assert that the software did, and only did, what it was supposed to do? In reality, the design specification is often incomplete or not available at all, so testers cannot make this assertion either. Automation is indeed important, but it is not enough, and over-reliance on automated testing puts the ultimate success of the program at risk. If testers cannot rely solely on developers' defect-prevention tools and automation, what else can they turn to? The only answer is manual testing.
Automated testing is the process of turning human-driven testing behavior into machine execution. Typically, after a test case has been designed and reviewed, a tester executes it step by step according to the procedure described in the test case and compares the actual results with the expected results. Automated testing hands this repetition over to a machine in order to save labor, time, and hardware resources and to improve testing efficiency.
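As a minimal illustration of this idea (the function and test below are hypothetical and not tied to any tool discussed in this article), a test case becomes a small program in which the comparison of actual and expected results is performed by an assertion rather than by a person:

```python
# Minimal illustration of a test case turned into machine execution:
# a hypothetical parse_price() function is exercised, and the actual
# result is compared against the expected result automatically.

def parse_price(text: str) -> float:
    """Toy function under test: convert a price string like '$1,299.00' to a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price():
    expected = 1299.0
    actual = parse_price("$1,299.00")
    assert actual == expected, f"expected {expected}, got {actual}"
```

Run under a test runner such as pytest, a check like this executes in milliseconds and can be repeated on every build without human effort.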
Although the current trend in testing is to automate, automation has its limitations and usually requires the following conditions to be met:
Infrequent changes in software requirements
Sufficiently long project lifecycles
Reusability of automated test scripts
Additionally, automated testing is worth considering when manual testing is not feasible or would require a significant amount of time and manpower; performance testing, configuration testing, and large-data-input testing are typical examples.
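As a hedged sketch of the last category, large-data-input testing: the function and generated data below are invented purely for illustration, but they show how a machine can sweep thousands of inputs that nobody would type by hand.

```python
# Sketch of large-data-input testing; normalize_username and the
# generated cases are hypothetical, used only to illustrate the idea.
import pytest

def normalize_username(name: str) -> str:
    """Toy function under test: trim whitespace and lowercase a username."""
    return name.strip().lower()

# Generate 10,000 input/expected pairs programmatically.
CASES = [(f"  User{i}  ", f"user{i}") for i in range(10_000)]

@pytest.mark.parametrize("raw, expected", CASES)
def test_normalize_username_bulk(raw, expected):
    assert normalize_username(raw) == expected
```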
After all, a machine is not a person; it can only follow fixed steps to perform its calculations and judgments. If anything unstable happens mid-run, such as an operating system upgrading and restarting, the machine losing its network connection, the browser crashing and restarting, a slow-loading page element failing to appear in time, or HTTP packet loss, the automated process can easily collapse and end up waiting for a person to intervene. So over-reliance on automation is unwise, and manual testing will always have a role to play.
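Robust suites therefore wrap unstable steps in waits and retries. The sketch below shows the pattern (the wrapped action is hypothetical); note that it only postpones the moment when a person has to step in, it does not remove it.

```python
# A common mitigation for flaky steps (a generic sketch, not how any
# specific tool handles it): retry an unstable action a few times
# before giving up and escalating to a human.
import time

def retry(action, attempts: int = 3, delay: float = 2.0):
    """Run `action` up to `attempts` times, sleeping `delay` seconds between tries."""
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as err:    # e.g. timeout, element not rendered yet
            last_error = err
            time.sleep(delay)
    raise last_error                # still failing: a person has to look

# Usage sketch (click_checkout_button is hypothetical):
# retry(lambda: click_checkout_button())
```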
Moth (FEIE.WORK) is an online collaboration tool for test teams and a tester's guide to agile testing practices. We built this product from the ground up around two ideas: getting work organized and eliminating duplicated effort, so that testers can focus on improving the quality of the software they deliver.
Test case management and reuse
Testers can easily manage hundreds of use cases through the Use Case Manager, and if you have already built up a library of use cases in TestLink or Excel, you can import them into Moth with a single click. Moth supports two common kinds of use cases: text use cases and step use cases.
Text use cases are for simple test scenarios with no clear steps. For example, something like "Entering a non-existent product address in the address bar should prompt a message that the product does not exist".
Step use cases, on the other hand, apply when there are explicit test steps, each with its own expected result, and you need to check every step. For example, suppose you need to verify that "deleting products is not allowed when logged in as a non-administrator". A step use case is the natural choice (a sketch of automating this check follows the steps):
Step 1: Log in with a non-administrator account. Expected result: login succeeds;
Step 2: Attempt to delete a product. Expected result: deletion fails with the prompt "Do not have permission to delete";
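If this step use case were later automated, it might look roughly like the sketch below. The FakeShop class is an in-memory stand-in invented for illustration; it is not part of Moth or any tool mentioned in this article.

```python
# Hedged sketch of the step use case above as an automated check.
class FakeShop:
    """Minimal in-memory stand-in for the system under test (hypothetical)."""
    def __init__(self):
        self.role = None

    def login(self, user: str, password: str) -> bool:
        self.role = "admin" if user == "admin" else "user"
        return True                              # Step 1 expectation: login succeeds

    def delete_product(self, product_id: int):
        if self.role != "admin":                 # Step 2 expectation: deletion fails
            return {"ok": False, "message": "Do not have permission to delete"}
        return {"ok": True, "message": "deleted"}

def test_non_admin_cannot_delete_product():
    shop = FakeShop()
    # Step 1: log in with a non-administrator account -> login succeeds
    assert shop.login("regular_user", "password123") is True
    # Step 2: attempt to delete a product -> deletion fails with the permission prompt
    result = shop.delete_product(product_id=42)
    assert result["ok"] is False
    assert "Do not have permission to delete" in result["message"]
```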
Thanks to its powerful tree structure, in the highly acclaimed "Use Case Management" view users can quickly create, multi-select, drag, and remove use cases with an operating-system-level experience.
Real-time test collaboration
At the core of a tester's daily work is the cycle of executing thousands of test tasks. Moth has a built-in, textbook Agile testing process: create a test plan, assign and execute test tasks, record test results, and quickly submit defects. Even with a test team of dozens working at the same time, test status stays synchronized in real time, which puts an end to merging iteration use cases and communicating work status through Excel.
In most cases, when testing is going smoothly, testers simply click "pass and go next" and Moth automatically moves them to the next task waiting to be tested. For recording abnormal results, Moth provides convenient result-logging pages for both text use cases and step use cases.
Integration with defect management tools
Moth provides integration support for major defect management tools, including JIRA, Redmine, and Trello; see Integration Configuration for details. We are committed to helping more teams adopt an orderly way of collaborating on testing. Users can visit feie.work on a PC to create a team for free.
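To make the idea of defect-tracker integration concrete, here is a hedged sketch of filing a bug directly through JIRA's REST API. This is only an illustration of what such integration involves, not a description of Moth's own integration mechanism; the URL, project key, and credentials are placeholders.

```python
# Hedged sketch: creating a defect in JIRA via its REST API using the
# requests library. All server details below are placeholders.
import requests

JIRA_URL = "https://your-jira.example.com"          # placeholder server
AUTH = ("api_user", "api_token")                    # placeholder credentials

def create_defect(summary: str, description: str, project_key: str = "TEST") -> str:
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]                       # e.g. "TEST-123"

# Usage sketch:
# key = create_defect("Non-admin can delete products",
#                     "Step 2 of the use case failed: deletion succeeded without permission.")
```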