9 Integration Testing Do’s and Don’ts

Integration tests check whether your application works and presents properly to a customer. They seek to verify your performance, reliability and, of course, functional requirements. Integration tests should be able to run against any of your environments (development, staging or production) at any time.

Writing good tests that prove your solution works can be challenging. Ensuring that these tests perform the intended actions and exhibit the expected outcomes requires careful thought. You should consider what you are testing and how to prove it works – both now and in the future. To help you create tests that work and are maintainable, here are 9 Do’s and 9 Don’ts to contemplate:

When Creating Integration Tests Do…

1. Consider the cost vs. benefit of each test

Should this be a unit test instead? How much time will writing this test save over running it manually? How often will it run? If a test takes 30 seconds to run manually every few weeks, spending 12 hours to automate it may not be the best use of resources.

2. Use intention-revealing test names

You should be able to figure out or at least get an idea of what a test is doing from the name.
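For example, a name can state the scenario and the expected outcome, so a failure report reads like a sentence. A minimal sketch (the feature names below are made up):

```python
# Hypothetical test names: each states the scenario and the expected
# outcome, so you know what broke without reading the test body.
def test_login_with_valid_credentials_redirects_to_dashboard():
    ...

def test_checkout_with_expired_card_shows_payment_error():
    ...
```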

3. Use your public API as much as possible

Otherwise, you’re creating extra endpoints and calls to maintain whenever the application changes.
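As a sketch, an integration test can exercise the same public HTTP endpoint a customer’s client would call. The URL, payload and response fields below are assumptions, not a real API:

```python
# A sketch of testing through the public API; the endpoint, payload
# and response fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # assumed base URL

def test_create_order_returns_an_order_id():
    response = requests.post(
        f"{BASE_URL}/v1/orders",
        json={"sku": "WIDGET-1", "quantity": 2},
        timeout=10,
    )
    assert response.status_code == 201
    assert "order_id" in response.json()
```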

4. Create a new API when one isn’t available

Rather than resorting to one of the Don’ts below.

5. Use the same UI as your customers

Or you might miss visual issues that your customers wouldn’t.

6. Use command line parameters for values that will change when tests are re-run

Examples include the site name, username, password and so on.
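With pytest, for instance, you can register command-line options in a conftest.py and read them from a fixture. The option names and default URL below are illustrative assumptions:

```python
# conftest.py -- a sketch of passing changeable values on the command line.
import pytest

def pytest_addoption(parser):
    # Option names and the default URL are assumptions for this sketch.
    parser.addoption("--site-url", action="store",
                     default="https://staging.example.com")
    parser.addoption("--username", action="store")
    parser.addoption("--password", action="store")

@pytest.fixture
def site_url(request):
    return request.config.getoption("--site-url")
```

A re-run against a different environment then needs no code changes, e.g. `pytest --site-url=https://prod.example.com --username=qa-user --password=...`.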

7. Test using all the same steps your customers will perform

The closer your tests are to the real thing, the more valuable they’ll become.

8. Switch your system under test back to the original state

Or at least as close to it as you can. If you create a lot of things, try to delete them all.
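One way to do this in pytest is a fixture whose teardown removes everything the test registered, even when the test fails. A minimal sketch (the created objects are assumed to expose a delete method):

```python
import pytest

@pytest.fixture
def created_records():
    # Hand the test a list on which to register whatever it creates.
    records = []
    yield records
    # Teardown runs even if the test failed: delete in reverse order
    # to return the system (close) to its original state.
    for record in reversed(records):
        record.delete()  # hypothetical delete method on the created object
```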

9. Listen to your customers and support team

They will find ways to use your systems that you will never expect. Use this to your advantage in creating real-world beta tests.

When Creating Integration Tests Don’t…

1. Write an integration test when a unit test suffices

It’ll be extra effort for no benefit.

2. Use anything that a customer cannot use

Databases, web servers and system configurations are all off limits. If your customer can’t touch it, your tests have no business touching it either.

3. Access any part of the system directly

Shortcuts just reduce the quality of your tests.

4. Use constants in the body of your tests

If you must use constants, put them in a block at the top of your test file or a configuration file. There is nothing worse than having to search through all your source files because you changed a price from $199.95 to $199.99.
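For instance, pull such values into one clearly named block at the top of the file. The price, plan name and render_invoice stand-in below are illustrative assumptions:

```python
# Values that may change between releases live in one obvious place.
PREMIUM_PLAN_PRICE = "199.99"  # change it here once, not in every test

def render_invoice(plan: str) -> str:
    """Stand-in for the real application code in this sketch."""
    return f"Invoice: {plan} plan at ${PREMIUM_PLAN_PRICE}/month"

def test_invoice_shows_the_premium_price():
    assert PREMIUM_PLAN_PRICE in render_invoice(plan="premium")
```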

5. Create an internal-only API

Unless necessary for security or administration.

6. Create an internal-only UI

You’re supposed to test what the customer will see, after all.

7. Make your test too complex

No matter how brilliant your test is, keep it simple. Complexity just breaks later. If you are finding it hard to write, it will be hard to maintain too.

8. Test more than one thing

Stick to what you need to test. If you try to do too much in one test, it will just get more complex and more fragile.
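As a sketch, prefer two focused tests over one long test that signs up, logs in and checks out in a single run; when a focused test fails, its name alone points at the problem. The app fixture and its methods here are hypothetical:

```python
# Each test verifies exactly one behavior; a failure pinpoints it.
def test_signup_creates_an_active_account(app):
    account = app.sign_up("user@example.com", "s3cret!")
    assert account.is_active

def test_login_rejects_a_wrong_password(app):
    result = app.log_in("user@example.com", "wrong-password")
    assert result.error == "invalid credentials"
```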

9. Leave the test system in a bad/unknown state

This means a broken or unusable site, database or UI.


How Low Should You Go? Level of Detail in Test Cases

It can be difficult to know just how much detail you should include in your test documentation and particularly in test cases.

Each case has a different set of requirements in terms of purpose, usage, frequency and administrative needs.

If it’s written at too high a level, you’re leaving it open to too much interpretation and risking the accuracy of the testing. If it’s at too low a level, you’re wasting your own time: maintenance becomes more difficult, and there’s an opportunity cost to other projects with demands on your time.

In this post, we break down some of the factors you should consider to help you find the right level.

Understand the Wider Context

Each of your project’s stakeholders will have concerns that affect the amount of detail you need to provide, from your organization’s internal politics and appetite for risk to the extent to which the product is relied upon. This provides a wider context for your test cases and starts to sharpen your thinking. The documentation expectations at a lean startup may differ greatly from those at a financial institution.

Test Requirements and Resources

You need to provide enough information to describe the intent of the test case. This should make clear all the elements that need to be tested. Special consideration should be given to any specific input values or any particular sequence of actions.

The amount of time you can invest in the test case, and the human or IT resources available to run the tests, are obviously other key factors.

Know Your Audience

Also, consider the audience for each case. How technical are they? How much product knowledge do they have, and how experienced at testing are they? More experienced testers who are familiar with the product will need fewer details, but is the team likely to change in the foreseeable future? If so, you might want to head off re-writes later by providing extra details now for those with less experience.

Some organizations have specific requirements to provide evidence of test coverage. Usually this is to demonstrate compliance with a standard or certification, or to satisfy other legal requirements.

Test and Product Considerations

Each test is different, from how important it is to how long it will be in use. If it’s likely to be converted to an automated test script in the future, including more detail now might make that easier. There are similar considerations for the product you’re testing. Will the application be used long-term? And where is it in its lifecycle? The amount of change you can expect for a recently built agile application is far greater than for an old system you’re maintaining. Unless it’s a wild, testless code beast, that is.

There’s a Balance to be Found

These factors don’t automatically mean you should include more detail, but crucial and long-lasting tests justify the extra time where it’s needed. However, there’s a balance to be sought. If you create highly specific tests, even minor design or functionality changes may force you to re-write the cases. Highly specific tests also lead testers to raise bugs for what turn out to be problems with the test documentation rather than issues affecting customers. And they can have a knock-on effect: they encourage the tester to consider only the specific paths through the application detailed in the case, meaning they might not consider the functionality from a broader perspective.

There’s no silver bullet here: each organization’s requirements differ, and those requirements change depending on the project, the product and the individual tests. However, by considering the factors above, you can find a level that works for you and your team.

Taming a Wild, Testless Code Beast — 4 Steps To Improve Test Coverage

Whether you’re working on an existing application or a new one, you’ll often find yourself playing catch-up when it comes to tests. Soon, deploying code changes feels like poking at some ugly, sleeping code monster: you aren’t sure what’s going to happen, but you know it won’t be good.

Here are the 4 things you should do first to tame the beast and improve test coverage:

1. Add the Right Tests

Start by adding tests in the areas where it is easiest. It’s important to consider the order in which you do this to make sure you get the most out of your scarce resources. You want to start adding tests in the following order:

  • i. Create tests as you fix bugs

Add tests to prove that your specific fix works, and keep them running to show that the bug does not come back (see the sketch after this list). This approach is also naturally targeted: you are creating tests in your weakest areas first. The weaker an area is (i.e. the more bugs it has), the faster you will build up tests there.

  • ii. Create tests with all new features

All new features will need to include tests created to prove that the feature works as expected. If you’re covering the new aspects of your application, then at least things aren’t getting worse.

  • iii. Create tests as you add to old features

When updating old features, add tests as you go to show that the older functionality is not breaking in unexpected ways.

  • iv. Create tests in identified risk areas

Talk to the developers and testers on your team and ask them to point out any weak spots or issues they have experienced. Also talk to your support team: they are an excellent resource with a direct line to the customer, and they’ll know the features of your product that frequently cause issues.
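As mentioned in step i above, a bug-fix test can be as small as pinning down the exact failing input. A sketch with a made-up rounding bug (the function and values are illustrative):

```python
# Hypothetical regression test: order totals used to be truncated
# instead of rounded. This test proves the fix and keeps the bug out.
from decimal import Decimal

def order_total(price: Decimal, quantity: int) -> Decimal:
    """Stand-in for the fixed production function in this sketch."""
    return (price * quantity).quantize(Decimal("0.01"))

def test_order_total_rounds_to_the_nearest_cent():
    assert order_total(Decimal("19.995"), 1) == Decimal("20.00")
```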

2. Turn on Code Coverage

Code coverage is a tool included in most continuous integration systems (or one that can be added with a plugin). It instruments and monitors your code as the tests run to determine how much of your code is exercised by them. For this to be useful, follow these steps (a sketch of a coverage gate follows the list):

  • Start running code coverage against all your code
  • Get a baseline

Find out what the tool can see and where you currently stand.

  • Determine areas that you want to exclude.

There are likely areas of your code that you don’t want to cover — third-party tools, ancient code untouched for years etc.

  • Determine coverage goals

Sit down with your team and discuss what your current coverage is and what your ideal can realistically be (usually 90% or above).

  • Work out steps to improve your coverage

You aren’t going to fix this problem overnight. Put in place some specific tasks which are going to help you achieve your goals over time.

  • Determine your pass/fail criteria

Is staying the same OK, or should it always go up? Do you define any drop as a fail?

  • Run Code Coverage constantly

Use automation to run your coverage measurement constantly, reporting a pass or fail based on the criteria your team determined. It is a lot easier to add tests while the code is still front and center in your mind than later on.
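As a sketch of the pass/fail gate mentioned above, the coverage.py API can be driven from a small script your automation runs on every cycle. The 90% threshold, test path and omit pattern are assumptions to adapt:

```python
# coverage_gate.py -- a sketch of an automated coverage pass/fail gate.
import coverage
import pytest

THRESHOLD = 90.0  # assumed team goal; adjust to your agreed criteria

cov = coverage.Coverage(omit=["*/third_party/*"])  # excluded areas
cov.start()
exit_code = pytest.main(["tests/"])  # run the suite under measurement
cov.stop()
cov.save()

total = cov.report()  # prints a summary and returns the total percentage
if exit_code != 0 or total < THRESHOLD:
    raise SystemExit(f"FAIL: tests failed or coverage {total:.1f}% < {THRESHOLD}%")
```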

3. Run your Tests on a Scheduled Basis

You should run your tests regularly, on several schedules:

  • Run them on every check-in

Use CI tools like Jenkins to run (at least) your unit tests on every check-in. Run them in parallel if they take too long at this frequency.

  • Run them on every build (package)

Depending on how your systems work, your CI infrastructure can help you with this. This could be on every check-in if you practice Continuous Deployment, or on whatever daily/weekly/monthly cadence you use. It should be a clean install on a test environment and a full run of all your automated tests.

  • Run them on every deploy

You should run all your automated tests against your environments immediately after a deploy.

  • Run them every X days/hours/minutes

Run your automation suite against your long-lived environments (production, staging, etc.) as often as you can. This should be at least a once-a-day task, taking place during off-peak times when it won’t interrupt others too much. You can increase the frequency further if your tests are short; just be mindful not to overload the system.

4. Provide a Button to Run the Tests

Again, use a tool like Jenkins to turn test runs into a self-service operation. A developer should not be delayed by having to ask a QA person to run a test for them. Get a system in place where your tests will run, and just give them a button to press. Remove as many barriers as possible so that everyone can run the tests.

If you follow these steps, over time you’ll turn an unwieldy application into something more manageable. First by adding tests to the key areas, then by making things as easy as possible, you can build confidence around your code changes and deploys.

Stop More Bugs With This Code Review Checklist!

Checklists are a great tool in code reviews — they ensure that reviews are consistently performed throughout your team. They’re also a handy way to ensure that common issues are identified and resolved.

Research by the Software Engineering Institute suggests that programmers make 15–20 common mistakes. So by adding such mistakes to a checklist, you can make sure that you spot them whenever they occur and help drive them out over time.

To get you started with a checklist, here’s a list of typical items:

Code Review Checklist

General

  • Does the code work? Does it perform its intended function, is the logic correct, and so on?
  • Is all the code easily understood?
  • Does it conform to your agreed coding conventions? These will usually cover the location of braces, variable and function names, line length, indentations, formatting, and comments.
  • Is there any redundant or duplicate code?
  • Is the code as modular as possible?
  • Can any global variables be replaced?
  • Is there any commented out code?
  • Do loops have a set length and correct termination conditions?
  • Can any of the code be replaced with library functions?
  • Can any logging or debugging code be removed?

Security

  • Are all data inputs checked (for the correct type, length, format, and range) and encoded?
  • Where third-party utilities are used, are returned errors being caught?
  • Are output values checked and encoded?
  • Are invalid parameter values handled?

Documentation

  • Do comments exist and describe the intent of the code?
  • Are all functions commented?
  • Is any unusual behavior or edge-case handling described?
  • Is the use and function of third-party libraries documented?
  • Are data structures and units of measurement explained?
  • Is there any incomplete code? If so, should it be removed or flagged with a suitable marker like ‘TODO’?

Testing

  • Is the code testable? i.e. it shouldn’t add too many dependencies or hide them, objects should be easy to initialize, and methods should be usable by test frameworks.
  • Do tests exist, and are they comprehensive? i.e. do they provide at least your agreed level of code coverage?
  • Do unit tests actually test that the code is performing the intended functionality?
  • Are arrays checked for ‘out-of-bound’ errors?
  • Could any test code be replaced with the use of an existing API?

You’ll also want to add to this checklist any language-specific issues that can cause problems.

The checklist deliberately doesn’t cover every issue that can arise. You don’t want a checklist so long that no one ever uses it. It’s better to just cover the common issues.

Optimize Your Checklist

Using the checklist as a starting point, you should optimize it for your specific use-case. A great way to do this is to get your team to note the issues that arise during code reviews for a short period of time. With this data, you’ll be able to identify your team’s common mistakes, which you can then build into a custom checklist. Make sure to remove any items that don’t come up (you may wish to keep rarely occurring, yet critical items such as security-related issues).

Get Buy-in and Keep It Up To Date

As a general rule, any items on the checklist should be specific and if possible, something you can make a binary decision about. This helps to avoid inconsistency in judgments. It is also a good idea to share the list with your team and get their approval on the content. Make sure to review the checklist periodically too, to check that each item is still relevant.

Armed with a great checklist, you can raise the number of defects you detect during code reviews. This will help you to drive up coding standards and avoid inconsistent code review quality.

What Great Software Does

Great software helps you out when you misunderstand it. If you try to drag a file to a button in the taskbar, Windows pops up a message that says, essentially, “You can’t do that!”, but then it goes on to tell you how you can accomplish what you’re obviously trying to do (try it!).

Great software pops up messages that show that the designers have thought about the problem you’re working on, probably more than you have. In FogBugz, for example, if you try to reply to an email message but someone else tries to reply to that same email at the same time, you get a warning and your response is not sent until you can check out what’s going on.

Great software works the way everybody expects it to. What great software has in common is being deeply debugged, and the only way to get software that’s deeply debugged is to keep track of your bugs.

A bug tracking database is not just a memory aid or a scheduling tool. It doesn’t merely make it easier to produce great software; it makes it possible to create great software.

With bug tracking, every idea gets into the system. Every flaw gets into the system. Every tester’s possible misinterpretation of the user interface gets into the system. Every possible improvement that anybody thinks about gets into the system.

Bug tracking software captures the cosmic rays that cause the genetic mutations that make your software evolve into something superior.

And as you constantly evaluate, reprioritize, triage, punt and assign these flaws, the software evolves and gets better and better. It learns to deal with more weird situations, more users who misunderstand it, and more scenarios.

That’s when something magical happens and your software becomes better than just the sum of its features. Suddenly it becomes reliable. Reliable, meaning it never screws up. It never makes its users angry. It never makes its customers wish they had purchased something else.