The weird and wonderful bugs that get thrown up when real users first start using your code never cease to amaze. There’s always some odd edge case that was overlooked, despite your having thought about little else for several weeks. We’ve been through this many times and concluded that beta testing is the solution to our problems.
Here are 7 things you can do to get the most out of your beta tests:
Ask for a commitment to provide feedback:
Response rates will be higher if you ask your beta testers upfront to commit to providing feedback. This doesn’t have to be formal; it could just be part of an application form. But having agreed to it, people are more likely to follow through.
Do not release with known bugs:
Most beta testers will only provide feedback once, so you don’t want to burn a tester just to hear about issues you already know exist.
Allow enough time:
Use the following as a rough guide: for a major development effort, say about a year’s work, you’d want to set aside 10-12 weeks for beta testing. Scale down as necessary – if it took a month to develop, around a week will suffice.
Be feature complete:
Only beta test when you’re feature complete. Adding things in as you go either sets the test back to the start or means the new code and its impact on existing functionality isn’t as well tested as the rest – something you’ll regret later.
Make it easy to get in touch:
You want to make it as easy as possible for your beta testers to provide feedback. Give them a direct email address and offer to jump on a Hangout or Skype call if they’d prefer.
Follow up but don’t annoy:
While your product might be front and center for you, it won’t be for your beta testers, so you’ll want to remind them along the way. Don’t overdo it though; they’re helping you out, and you don’t want to annoy them with too many emails.
Don’t forget to provide feedback:
Make sure to send them updates during and after the tests about how you are putting their feedback to use. People like to know that their time wasn’t wasted. And don’t be tight with the swag – a free t-shirt can do wonders!
Let a tool handle style and formatting
You don’t need to argue over code style and formatting issues; there are plenty of tools that can flag those matters consistently. What’s important is ensuring that the code is correct, understandable and maintainable. Sure, style and formatting form part of that, but let the tool be the one to point them out.
Everyone should code review
Some people are better at it than others. The more experienced may well spot more bugs, and that’s important. But what’s more crucial is maintaining a positive attitude to code review in general, and that means avoiding any ‘Us vs. Them’ mentality and not making code review burdensome for anyone.
Review all code
No code is too short or too simple. If you review everything, nothing gets missed. What’s more, reviewing everything makes it part of the process – a habit, not an afterthought.
Adopt a positive attitude
This is just as important for reviewers as for submitters. Code reviews are not the time to get all alpha and exert your coding prowess. Nor do you need to get defensive. Go in with an attitude of constructive criticism and you can build trust around the process.
Code review often and for short sessions
The effectiveness of a review drops after about an hour, so putting reviews off and doing them in one almighty session doesn’t help anybody. Set aside time throughout the day, with breaks, so you don’t disrupt your own flow and the habit can form. Your colleagues will thank you for it: waiting is frustrating, and they can resolve issues quicker while the code is still fresh in their heads.
It’s OK to say “It’s all good”
Don’t get picky, you don’t have to find an issue in every review.
Use a checklist
Code review checklists ensure consistency – they make sure everyone covers what’s important and help avoid common mistakes.
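A starter checklist might look something like this – the items are illustrative, so tailor them to your team:
- Does the change do what the related ticket or spec says?
- Are names clear, and is the intent easy to follow?
- Are failure paths handled, not just the happy path?
- Is the new behaviour covered by tests?
- Any security, performance or concurrency concerns?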
Keep the code short
Beyond 200 lines, the effectiveness of a review drops significantly. By the time you’re at more than 400, they become almost pointless.
Provide context
Link to any related tickets or the spec – there are code review tools that can help with that. Provide short but useful commit messages and plenty of comments throughout your code. It’ll help the reviewer, and you’ll get fewer issues coming back.
Integration tests check whether your application works and presents properly to a customer. They seek to verify your performance, reliability and, of course, functional requirements. You should be able to run integration tests against any of your development, staging and production environments at any time.
Writing good tests that prove your solution works can be challenging. Ensuring these tests perform the intended actions and exhibit the expected outcomes requires careful thinking. You should consider what you are testing and how to prove it works – both now and in the future. To help you create tests that work and are maintainable, here are 9 Do’s and 9 Don’ts to contemplate:
When Creating Integration Tests Do…
1. Consider the cost vs. benefit of each test
Should this be a unit test? How much time will writing this test save over running it manually? Will it be run often? If a test takes 30 seconds to run manually every few weeks, taking 12 hours to automate it may not be the best use of resources.
2. Use intention revealing test names
You should be able to figure out or at least get an idea of what a test is doing from the name.
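For instance, compare a vague name with one that states the scenario and expected outcome – a minimal pytest-style sketch, where calculate_discount is a hypothetical function rather than anything from this post:

```python
from store.pricing import calculate_discount  # hypothetical module under test

# Vague: when "test_discount_1" fails, the name tells the reader nothing.
def test_discount_1():
    assert calculate_discount(total=9.99) == 0

# Intention revealing: the failure alone tells you which behaviour broke.
def test_no_discount_applied_below_minimum_spend():
    assert calculate_discount(total=9.99) == 0
```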
3. Use your public API as much as possible
Otherwise, it’s just more endpoints and calls to maintain when application changes are made.
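For example, a test that drives the same public endpoint customers call, rather than an internal shortcut, might look like this sketch (it assumes the requests library; the URL and payload are illustrative):

```python
import requests

BASE_URL = "https://staging.example.com/api/v1"  # illustrative endpoint

def test_order_can_be_created_via_public_api():
    # Call the public API exactly as a customer integration would,
    # not a private test-only endpoint that then needs maintaining.
    response = requests.post(f"{BASE_URL}/orders", json={"item": "widget", "qty": 1})
    assert response.status_code == 201
```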
5. Test through the UI your customers will see
Otherwise you might miss visual issues that your customers certainly won’t.
6. Use command line parameters for values that will change when tests are re-run
Examples include the site name, username, password and so on.
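With pytest, for example, you might expose these as command line options in conftest.py – a minimal sketch, with illustrative option names and defaults:

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--site-url", default="https://staging.example.com")
    parser.addoption("--username", default="beta-tester")

@pytest.fixture
def site_url(request):
    # Tests read the value from this fixture, so re-running against another
    # environment needs no code change: pytest --site-url=https://prod.example.com
    return request.config.getoption("--site-url")
```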
7. Test using all the same steps your customers will perform
The closer your tests are to the real thing, the more valuable they’ll become.
8. Switch your system under test back to the original state
Or at least as close to it as you can. If you create a lot of things, try to delete them all.
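In pytest, one way to guarantee this is a yield fixture that pairs every setup with a teardown; here api_client and its project calls are assumptions for illustration:

```python
import pytest

@pytest.fixture
def temp_project(api_client):  # api_client is a hypothetical fixture
    # Setup: create the object the test needs...
    project = api_client.create_project(name="integration-test-project")
    yield project  # ...the test runs here...
    # ...teardown runs even if the test failed, returning the system
    # as close as possible to its original state.
    api_client.delete_project(project.id)
```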
9. Listen to your customers and support team
They will find ways to use your systems that you will never expect. Use this to your advantage in creating real-world beta tests.
When Creating Integration Tests Don’t…
1. Write an integration test when a unit test suffices
It’ll be extra effort for no benefit.
2. Use anything that a customer cannot use
Databases, web servers, system configurations are all off limits. If your customer can’t touch it, your tests have no business touching it either.
3. Access any part of the system directly
Shortcuts just reduce the quality of your tests.
4. Use constants in the body of your tests
If you must use constants, put them in a block at the top of your test file or a configuration file. There is nothing worse than having to search through all your source files because you changed a price from $199.95 to $199.99.
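For example, with the values gathered in one obvious block, the $199.95 to $199.99 change becomes a one-line edit (the names, values and client fixture here are illustrative):

```python
# Values that may change live in one block at the top of the file
# (or in a configuration file), never scattered through test bodies.
STANDARD_PLAN_PRICE = "199.99"
PRICING_PAGE = "/pricing"

def test_pricing_page_shows_standard_plan_price(client):
    page = client.get(PRICING_PAGE)
    assert STANDARD_PLAN_PRICE in page.text
```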
5. Create an internal-only API
Unless necessary for security or administration.
6. Create an internal only UI
You’re supposed to be testing what the customer will see, after all.
7. Make your test too complex
No matter how brilliant your test is, keep it simple. Complexity just breaks later. If you are finding it hard to write, it will be hard to maintain too.
8. Test more than one thing
Stick to what you need to test. If you try to do too much in one test, it will just get more complex and more fragile.
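As a sketch, two focused tests beat one that tries to verify sign-up and checkout in a single run (the fixtures and endpoints are illustrative):

```python
# One behaviour per test: when a test fails, you know exactly what broke.
def test_new_user_can_sign_up(client):
    response = client.post("/signup", data={"email": "tester@example.com"})
    assert response.status_code == 200

def test_signed_in_user_can_check_out(client, signed_in_user):
    response = client.post("/checkout", data={"item": "widget"})
    assert response.status_code == 200
```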
9. Leave the test system in a bad/unknown state
This means a broken or unusable site, database or UI.
It can be difficult to know just how much detail you should include in your test documentation and particularly in test cases.
Each case has a different set of requirements in terms of purpose, usage, frequency and administrative needs.
If it’s written at too high a level, you’re leaving it open to too much interpretation and risking the accuracy of the testing. If it’s at too low a level, you’re wasting your own time: maintenance becomes more difficult, and there’s an opportunity cost to other projects with demands on your time.
In this post, we break down some of the factors you should consider to help you find the right level.
Understand the Wider Context
Each of your project’s stakeholders will have concerns that impact the amount of detail you need to provide, from your organization’s internal politics and appetite for risk to the extent to which the product is relied upon. This provides a wider context for your test cases and starts to sharpen your thinking. The documentation expectations at a lean startup may differ greatly from those at a financial institution.
Test Requirements and Resources
You need to provide enough information to describe the intent of the test case. This should make clear all the elements that need to be tested. Give special consideration to any specific input values or any particular sequence of actions.
The amount of time you can invest in the test case, and the human or IT resources available to run the tests, is obviously another key factor.
Know Your Audience
Also, consider the audience for each case. How technical are they? How much product knowledge do they have, and how experienced at testing are they? Experienced testers who are familiar with the product will need fewer details, but is the team likely to change in the foreseeable future? If so, you might want to head off re-writes later by providing extra detail now for those with less experience.
Some organizations have specific requirements to provide evidence of test coverage, usually to demonstrate compliance with a standard or certification, or for other legal reasons.
Test and Product Considerations
Each test is different, from how important it is to how long it will be in use. If it’s likely to be converted to an automated test script in the future, including more detail now might make that conversion easier. There are similar considerations for the product you’re testing. Will the application be used long-term? And where is it in its lifecycle? The amount of change you can expect for a recently built, agile application is far greater than for some old system you’re maintaining. Unless it’s a wild, testless code beast, that is.
There’s a Balance to be Found
These factors don’t necessarily mean you should include more detail, though crucial and long-lasting tests can justify the time. However, there’s a balance to be sought. If you create highly specific tests, then even minor design changes or functionality alterations may force you to re-write the cases. Overly specific tests also lead testers to raise bugs that turn out to be problems with the test documentation rather than anything impacting customers. And they have a knock-on effect: they encourage the tester to consider only the specific paths through the application detailed in the case, rather than the functionality from a broader perspective.
There’s no silver bullet for coming to a conclusion, each organization’s requirements differ. And these requirements change depending on the project, product and individual tests. However, considering the factors above, you can find a level that works for you and your team.
Whether you’re working on an existing or new application, you’ll often find yourself playing catch up when it comes to tests. Soon deploying code changes feels like poking at some ugly, sleeping code monster — you aren’t sure what’s going to happen, but you know it won’t be good.
Here are the 4 things you should do first to tame the beast and improve test coverage:
1. Add the Right Tests
Start by adding tests in the areas where it is easiest. It’s important to consider the order in which you do this to make sure you get the most out of your scarce resources. You want to start adding tests in the following order:
i. Create tests as you fix bugs
Add tests to prove that your specific fix works, and keep them running to show that the bug does not return. This process is naturally targeted – you are creating tests in your weakest areas first. The weaker an area is (i.e. the more bugs it has), the faster you will build up tests there. A sketch of such a regression test follows this list.
ii. Create tests with all new features
All new features should include tests created to prove that the feature works as expected. If you’re covering the new aspects of your application, then at least things aren’t getting worse.
iii. Create tests as you add to old features
When updating old features, add tests as you go to show that the older functionality is not breaking in unexpected ways.
iv. Create tests in identified risk areas
Talk to the developers and testers on your team and ask them to point out any weak spots or issues they have experienced. Also talk to your support team – they are an excellent resource with a direct line to the customer, and they’ll know the features of your product that frequently cause issues.
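To illustrate point i, a regression test pins down the exact conditions of a fixed bug so it stays fixed; the bug, ticket reference and module here are entirely hypothetical:

```python
# Regression test for a (hypothetical) bug, e.g. ticket SHOP-142: totals were
# rounded incorrectly for three-for-two offers. Keep it running forever.
from decimal import Decimal

from store.pricing import apply_discount  # hypothetical module under test

def test_three_for_two_offer_rounds_total_to_cents():
    total = apply_discount(unit_price=Decimal("1.99"), quantity=3, offer="3for2")
    assert total == Decimal("3.98")
```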
2. Turn on Code Coverage
Code coverage is a tool included in most continuous integration systems (or one that can be added with a plugin). It instruments and monitors your code as the tests run to determine how much of your code is exercised by them. For this to be useful, follow these steps (a sketch of a coverage gate appears after the list):
Start running code coverage against all your code
Get a baseline
Find out what the tool can see and where you currently stand.
Determine areas that you want to exclude
There are likely areas of your code that you don’t want to cover — third-party tools, ancient code untouched for years etc.
Determine coverage goals
Sit down with your team and discuss what your current coverage is and what your ideal can realistically be (usually 90% or above).
Work-out steps to improve your coverage
You aren’t going to fix this problem overnight. Put in place some specific tasks which are going to help you achieve your goals over time.
Determine your pass/fail criteria
Is staying the same OK, or should it always go up? Do you define any drop as a fail?
Run Code Coverage constantly
Use automation to run your coverage checks, apply the pass/fail criteria you agreed as a team, and do this constantly. It is a lot easier to add tests while the code is still front and center in your mind than later on.
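As a sketch of such a gate, the coverage.py API can run your suite and fail the build when the total drops below the goal your team agreed on (the threshold and omit pattern are illustrative; it assumes pytest and coverage are installed):

```python
# coverage_gate.py – run the tests under coverage and enforce the team goal.
import sys

import coverage
import pytest

THRESHOLD = 90.0  # the goal your team agreed on; adjust to your baseline

cov = coverage.Coverage(omit=["*/third_party/*"])  # exclude code you don't own
cov.start()
exit_code = pytest.main(["tests/"])
cov.stop()
cov.save()

total = cov.report()  # prints the report and returns total coverage as a float
if exit_code != 0 or total < THRESHOLD:
    sys.exit(1)  # fail the build: tests failed or coverage fell below the goal
```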
3. Run your Tests on a Scheduled Basis
You should run your tests regularly, on several schedules:
Run them on every check-in
Use CI tools like Jenkins to run (at least) your unit tests on every check-in. Run them in parallel if they take too long at this frequency.
Run them on every build (package)
Depending on how your systems work, your CI infrastructure can help you with this. It could be on every check-in if you practice Continuous Deployment, or on whatever daily, weekly or monthly cadence you use. This should be a clean install on a test environment and a full run of all your automated tests.
Run them on every deploy
You should run all your automated tests against your environments immediately after a deploy.
Run them every X days/hours/minutes
Run your automation suite against your long-lived environments (Production, Staging etc.) as often as you can. This should happen at least once a day, during ‘off-peak’ times when it won’t interrupt others too much. You can increase the frequency further if your tests are short; just be mindful not to overload the system.
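In practice a CI scheduler (Jenkins cron triggers, for instance) handles this, but as a toy sketch of the idea (the hour, paths and URL are illustrative):

```python
# A toy scheduler: run the suite against a long-lived environment once a day,
# off-peak. A CI scheduler is the real-world answer; this just shows the idea.
import subprocess
import time
from datetime import datetime

while True:
    if datetime.now().hour == 3:  # 3 a.m. – assumed off-peak for this team
        subprocess.run(["pytest", "tests/", "--site-url=https://staging.example.com"])
        time.sleep(60 * 60)  # skip past the trigger hour so we run once per day
    time.sleep(60)
```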
4. Provide a Button to Run the Tests
Again, use a tool like Jenkins to turn test runs into a self-service operation. A developer should not be delayed by having to ask a QA person to run a test for them. Get a system in place where the tests run at the press of a button, and give everyone that button. Remove as many barriers to running the tests as possible.
If you follow these steps, over time you’ll see that you can turn an unwieldy application into something more manageable. First by adding tests to the key areas, then by making things as easy as possible, you can build confidence around your code changes and deploys.