Test optimization for continuous integration

“Test early and often.” If you’ve been following my testing agenda, you’re probably sick of hearing me repeat it. Still, it stands to reason that if your tests detect a problem soon after it is introduced, it will be easier to fix. This is one of the guiding ideas that makes continuous integration such an effective practice. I’ve come across several teams that have lots of automated tests but don’t run them as part of a continuous integration process. There are usually various reasons why the team believes these tests cannot be used with continuous integration: perhaps the tests take too long, or they are not reliable enough to produce trustworthy results on their own and require human interpretation.

I begin my evaluation of these suites with a simple exercise. I start by drawing two axes on a whiteboard. The vertical axis represents the value of the tests, while the horizontal axis represents the time it takes to execute the suite. Then the team and I write the name of each test suite on a sticky note and stick it in the appropriate place on the board. The grid below shows an example of how each test suite is rated.

Here is an example that you can adapt to your situation.

Test suite importance \ runtime | <15 minutes | <45 minutes | >45 minutes
--------------------------------|-------------|-------------|------------
High                            | TS 1        | TS 5        | TS 3
Average                         | N/A         | TS 2        | N/A
Low                             | TS 6        | N/A         | TS 4

We base the importance of the tests on the team’s own judgment, and we keep the options simple: low value, medium value, high value. This rating reflects the reliability of the tests, that is, their ability to produce the correct result every time they run, and the amount of confidence the tests give the team about the quality of the system. Some test suites, for example, are required, but their results are inconsistent, and when they fail for no apparent reason, someone must manually rerun the failed tests. We can still label such a suite medium value; if it passed reliably every time, it would be high value.

On the other hand, there may be a test suite that is run because it is part of a checklist, but no one understands what the results show. Perhaps the original creator has left the team and no one maintains that suite anymore. This suite belongs in the low-value category. The horizontal axis is simpler: it is just the time it takes to run the suite. Now that you’ve rated each suite, think about how you can improve them by making them more valuable or faster. I prefer to divide continuous integration tests into these categories:

  1. High-value tests that run in 15 minutes or less – These tests can be run on every build. They are used to accept the build for further testing; until these tests pass, the team should consider the build broken. Your developers will not be happy if they wait more than 10 minutes for build results.
  2. High-value tests that run in 45 minutes or less – These tests can be run continuously. For example, you can schedule them to run every hour, starting again as soon as they finish. If a new build is not yet available when they finish, they wait until the next build is complete.
  3. High-value tests that take longer than 45 minutes – These tests can be run on a daily or nightly basis so that results are ready when your team’s work day begins.
  4. Medium-value tests – These tests can be run once a week or once per release cycle.

You’ll notice that I’ve excluded the low-value tests. They should either be removed from your runs or improved so that they deliver value. Keeping test suites that don’t add value makes no sense. I set the time limits of 15 and 45 minutes based on feedback from the development teams; they need a fast response. Consider a developer who waits for the build results to come back green before leaving for lunch. Your limits may vary depending on your circumstances; this is just a framework to show the reasoning behind assigning tests to build-triggered and timer-triggered runs.
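The bucketing above can be sketched as a tiny decision function. This is purely illustrative: the function name and the “remove or improve” label are mine, while the 15- and 45-minute thresholds come from the categories described here.

```python
# Illustrative sketch of the categories above. The function name and the
# "remove or improve" bucket are my own labels; the 15- and 45-minute
# thresholds mirror the text.
def schedule_for(value, runtime_minutes):
    """Return a run cadence for a test suite, given its value and runtime."""
    if value == "low":
        return "remove or improve"      # low-value suites should not be kept as-is
    if value == "medium":
        return "weekly or per release"  # category 4
    if runtime_minutes <= 15:
        return "every build"            # category 1
    if runtime_minutes <= 45:
        return "continuously"           # category 2
    return "daily or nightly"           # category 3
```

For instance, a high-value suite that finishes in 10 minutes lands in the “every build” bucket, while the same suite at two hours runs nightly.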

A significant advantage of running tests this often is that very few code changes land between a passing run and a failing run, making it easier to find the change that caused the test to fail. Several techniques have proven useful for improving existing continuous integration test suites. Here are seven proven and effective methods.

Start tests automatically

You may have test suites that are typically started by hand during the testing phase of a project. Bringing these tests into continuous integration is often as simple as writing a small PowerShell script. Performance, load, and security tests are examples of tests that are often run by a specialist who is not part of the traditional test team and therefore are rarely set up to run automatically. Another advantage of running these tests regularly is that the problems they find are usually difficult to fix, so the sooner a problem is identified, the more time the team has to solve it. These tests are generally classified as high value, but since they take more than an hour to complete, they are usually run on a daily or nightly schedule.
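The wrapper that turns a manually started suite into something a CI job can trigger really can be this small. Here is a minimal sketch in Python rather than PowerShell; the function name, the suite command, and the timeout are all assumptions for illustration.

```python
import subprocess
import sys

# Hypothetical wrapper a CI job can call to start a suite that used to be
# kicked off by hand. The command and timeout below are illustrative.
def run_suite(command, timeout_seconds=3600):
    """Run a test suite command and return True if it passed."""
    try:
        result = subprocess.run(
            command,
            timeout=timeout_seconds,  # fail fast instead of hanging the pipeline
            capture_output=True,
            text=True,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

if __name__ == "__main__":
    # Trivially passing "suite" so the script is self-contained.
    passed = run_suite([sys.executable, "-c", "print('suite ok')"])
    print("PASS" if passed else "FAIL")
```

The CI server only needs to invoke this script on its schedule and read the exit status, which is exactly what build-triggered and timer-triggered jobs do best.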

Remove the uncertainty

The whole point of automation is to get reliable, accurate test results. When a test fails, someone needs to figure out what went wrong. As false positives and inconsistencies increase, so does the time required to analyze failures. To avoid this, remove unstable tests from your regression suites. Older automated tests may also miss important checks; avoid this by doing enough test planning before automating anything. Keep track of whether each test is up to date, and verify the sanity and validity of your automated tests during each test cycle.
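One simple way to find candidates for removal is to rerun a suspect test several times and see whether the outcomes agree. This is a minimal sketch under my own naming; real flaky-test detection in a CI system would track results across builds rather than in one process.

```python
# Minimal sketch for spotting unstable tests: run a test callable several
# times and flag it as flaky when the outcomes disagree. Names are mine.
def is_flaky(test_fn, runs=5):
    """Return True if repeated runs of test_fn do not all agree."""
    outcomes = {bool(test_fn()) for _ in range(runs)}
    return len(outcomes) > 1

class AlternatingTest:
    """Stand-in for a flaky test: passes only on every second call."""
    def __init__(self):
        self.calls = 0

    def __call__(self):
        self.calls += 1
        return self.calls % 2 == 0
```

A test flagged this way goes into quarantine: it is pulled out of the regression suite until it is fixed, so its noise stops inflating analysis time.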

Be smart about your wait times

We’ve all done it: a test keeps failing because the backend didn’t respond fast enough or because a resource is still being processed, so we add a sleep statement. We meant it as a temporary fix, but that was almost a year ago. Look for those horrible sleep statements and see if you can replace them with a better wait that returns as soon as the event occurs rather than after a predetermined time.
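A polling wait is the usual replacement. Here is a minimal sketch (the function and parameter names are mine): it checks the condition repeatedly and returns the moment it holds, instead of always paying the full, pessimistic sleep.

```python
import time

# Sketch of an event-based wait: poll a condition and return as soon as it
# holds, or raise after a timeout instead of sleeping a fixed duration.
def wait_until(condition, timeout=10.0, poll_interval=0.1):
    """Poll condition() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")
```

So instead of `time.sleep(30)` followed by a check, the test calls `wait_until(lambda: backend_is_ready())` (where `backend_is_ready` is whatever readiness check your test already has) and finishes as soon as the backend actually responds.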

Collective ownership of tests

Don’t outsource entire automated testing initiatives to a single tester or developer. The rest of the team won’t be able to make a meaningful contribution if they’re not constantly up-to-date. To properly incorporate automation into the test infrastructure, the entire team must be on board at all times. This allows each team member to be aware of the process, communicate more clearly, and make educated decisions about how to create and execute appropriate tests.

Refactor the test setup

Tests usually perform some setup and then a verification. For example, one team had a suite of UI-based tests that took a long time to run and produced many spurious failures due to timing issues and minor UI tweaks. We refactored that suite to perform the test setup through API calls and the verification through the UI. The improved suite had the same functional coverage, but it ran 85% faster and produced about half as many false positives from UI changes.
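The shape of that refactoring looks roughly like this. Everything here is a stand-in: `ApiClient` and `UiDriver` play the roles of a real HTTP client and browser driver, and the order API is invented for illustration.

```python
# Hedged sketch of the refactoring described above. ApiClient and UiDriver
# are stand-ins for a real HTTP client and a real browser driver.
class ApiClient:
    def __init__(self):
        self.orders = {}

    def create_order(self, order_id, item):
        # Setup goes through the API: fast and immune to cosmetic UI changes.
        self.orders[order_id] = item
        return order_id


class UiDriver:
    def __init__(self, api):
        self.api = api

    def order_summary_text(self, order_id):
        # Stand-in for reading the rendered page through a browser driver.
        return f"Order {order_id}: {self.api.orders[order_id]}"


def test_order_appears_in_ui():
    api = ApiClient()
    order_id = api.create_order("42", "widget")  # setup via API, not clicks
    ui = UiDriver(api)
    # Only the final verification exercises the UI layer.
    assert "widget" in ui.order_summary_text(order_id)
```

The slow, brittle part (driving the UI) shrinks to the one step that actually needs it, which is where both the speedup and the drop in false positives come from.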

Run the tests in parallel to maximize the value of every minute of execution

Running tests in parallel is significantly more economical thanks to virtual servers, cloud technologies, and services that automatically create environments and deploy your code. Look at test suites that take a long time and see whether there are opportunities to run those tests at the same time. We had a very important test suite with 5,000 test cases. We didn’t run it very often because it took many hours to complete, even though it was a very thorough suite covering a wide range of features. We were able to split that suite into about a dozen smaller suites that could run in parallel, allowing us to run the tests more frequently (daily instead of weekly), and because the new suites were split by component, we could identify the source of any failure more quickly.
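Once the big suite is split into independent pieces, fanning them out is straightforward. A minimal sketch, where the suite commands are placeholders for your real per-component suites:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch (suite commands are placeholders): run independent test
# suites at the same time instead of back to back, and collect pass/fail.
def run_suite(command):
    return subprocess.run(command, capture_output=True).returncode == 0

def run_suites_in_parallel(commands, max_workers=4):
    # Threads suffice here because each suite runs as a separate process.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_suite, commands))
```

Because the results come back per suite, a failure immediately points at one component instead of somewhere inside a monolithic multi-hour run.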

Create small but effective test suites

Take the most important tests and combine them into a smaller, faster-running suite. These are often relatively basic tests, but they are required to validate the system for further testing. There is no point going forward if these tests fail. We usually call them build acceptance tests or build verification tests. If you already have these suites, that’s fantastic; just make sure they run fast.
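A build verification suite can be as plain as a handful of fast checks in your test framework of choice. In this sketch the two checks are placeholders for real smoke tests (for example, the service boots, a health endpoint answers):

```python
import unittest

# Sketch of a small build-verification suite. The two checks are placeholders
# for real smoke tests such as "the service boots" or "/health returns 200".
def service_starts():
    return True  # placeholder: would launch the app and confirm it starts

def health_endpoint_ok():
    return True  # placeholder: would hit a health URL and expect HTTP 200

class BuildVerificationTests(unittest.TestCase):
    def test_service_starts(self):
        self.assertTrue(service_starts())

    def test_health_endpoint(self):
        self.assertTrue(health_endpoint_ok())
```

Run this suite on every build; only if it passes does the build earn the longer-running tests described above.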
