
Quality Assurance: 5 Common Bad Practices to Avoid


Project management effort matters most in the development and delivery phases. By the time the project reaches testing, the excitement of creating something new has worn off, and it can feel as though you just need to push through the process to get to the end.

Testing is usually not the most enjoyable part of the project, and as a result you probably put less effort into it than into other areas. It's true that you can't concentrate equally on every project phase. However, if you underestimate testing, you may face consequences: put less focus on it than you should, and you will probably pay for the mistakes later. In this article we look at some aspects of testing that are often neglected:

1. Too little time allocated for testing. Testing is one of the project phases that is most often cut short. This is dangerous, because cutting it relies on a level of optimism that is unlikely to hold up over the project life cycle. "We are building the product so well that we can test it in a short period of time" is a phrase that should never be heard in your project meetings.

Testing takes time because it's important to do it correctly. It is not something you do only once (unless you are very lucky). You run the tests, find the errors, fix them, test again, and maybe on the third cycle you actually pass everything and can sign it off.

Don't plan on testing once and being done with it: testing is a cyclical effort that continues until the tests pass. This also means you should have testers available for the whole period, so they can respond quickly when fixes come back to be retested.

2. Scope of testing too narrow. Testing only the documented use cases is not enough. Anyone can walk through the prescribed process and get a positive outcome: if the developers built the product to your spec, it will probably pass, because you assume they've done a good job.

Try different testing approaches in your projects, and encourage users to find "creative" ways to process data and get around the prescribed workflows, so that you can see whether your app really is bug-free.

You can't expect your end users (whether they are colleagues in the business or members of the public) to understand how your product works from day one. They may do things "wrong" simply because they don't know any better, so make sure your testing covers all aspects and events, not just the expected ones.
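As a minimal sketch of what such negative testing can look like (the parse_order_quantity function and the use of pytest are illustrative assumptions, not something from this article), the happy path passes easily, while the interesting cases are the inputs nobody planned for:

```python
# Hypothetical negative tests: the happy path is covered elsewhere; these
# check "creative" inputs that real users will eventually try.
import pytest


def parse_order_quantity(raw: str) -> int:
    """Example function under test: accepts a positive integer quantity."""
    value = int(raw.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value


@pytest.mark.parametrize("raw", ["0", "-5", "abc", "", "   ", "1.5"])
def test_rejects_unexpected_input(raw):
    # Users who "don't know any better" will type all of these eventually.
    with pytest.raises(ValueError):
        parse_order_quantity(raw)


def test_accepts_untidy_but_valid_input():
    # Extra whitespace is outside the spec's happy path but should still work.
    assert parse_order_quantity(" 3 ") == 3
```

The point of the parametrized test is that each "creative" input a user might try becomes a recorded, repeatable case rather than a one-off click-through.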

3. Formal procedure not established. Each department or organizational sector should sign off the testing, to confirm they are satisfied it has been completed adequately. This is important for your audit process, and it also gives you some cover later on.

No matter the quality of your testing, someone at some point will ask who created the testing strategy, and you should be safe in the knowledge that you have the answers. Otherwise, you may look naive.

4. Inexperienced testers. Companies that have a permanent testing division, used to testing all kinds of developments and products, are in a better position. Many other companies hire testers with a particular background or skill, especially when new systems are being implemented and the in-house knowledge to test them doesn't exist.

It is a good habit to involve end-user representatives throughout the project; ideally, these people will understand the product well enough to test it thoroughly. Otherwise, it may be the first time the client sees the end result, and they won't know how to use it, let alone how to break it for the purposes of testing. You will get better results if your testers feel supported and have the knowledge they need to test properly.

5. Lousy documentation. It's not enough just to run your tests and declare them complete. If you simply click around the system checking a few scenarios, that's playing with the product, not testing it. Testing cannot be taken seriously unless there is something in writing that records what was done and what the outcome was. As a minimum, you should record:

  • Who performed the test;
  • Date of the test;
  • Environment the testing was carried out in (development, QA, etc.);
  • Main data points used in the testing;
  • Test result (i.e., pass or fail).

If a test has failed, record what failed and why: for example, is the form layout on the screen not per the specification? This is the kind of detail developers need in order to make the appropriate changes and get the problem solved.
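As a rough sketch of such a record (the field names and example values below are assumptions for illustration, not a prescribed format), even a small data structure is enough to capture the minimum listed above:

```python
# Minimal sketch of a test log entry covering the fields listed above.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class TestLogEntry:
    tester: str                          # who performed the test
    test_date: date                      # date of the test
    environment: str                     # e.g. "Development", "QA"
    data_points: List[str]               # main data points used in the testing
    passed: bool                         # test result: pass or fail
    failure_notes: Optional[str] = None  # if failed, what failed and why


# Example entry for a failed test, including the detail developers will need.
entry = TestLogEntry(
    tester="J. Smith",
    test_date=date(2024, 3, 12),
    environment="QA",
    data_points=["order #1001", "customer ABC"],
    passed=False,
    failure_notes="Form layout on the order screen is not per the specification.",
)
```

Whether you keep such entries in a spreadsheet, a test-management tool, or a simple script like this, the point is that every test run leaves a written trace someone can review later.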
