The art of testing the untestable

It is strange to hear someone declare, “This cannot be tested.” In response, I claim that everything can be tested; one must simply be prepared to accept the outcome of the testing, which may include failure, financial loss, or personal injury. With that understanding in place, is there anything that truly cannot be tested?

I first heard this comment in a slightly different context, and now that I’ve spent more time in testing roles, I’m reconsidering what it might imply. My testing roles, if nothing else, have taught me communication skills and patience. When someone comes up to you and makes this comment, what else are they expressing about the product being tested, or perhaps about themselves?

When you are told that something is not testable, take a deep breath and open a discussion with the individual. The discussion is not about why something is not testable, at least not overtly. It is a dialogue to better understand what the person is experiencing, to investigate alternative interpretations of the facts, and to help make the untestable testable.

Is that a fact, or simply a conclusion?

My thought process starts with the reason for the tester’s comment, and I want to explore his thinking along those lines. I am not going to defend the product or its testability. Rather, I am interested in whether the tester is stating a fact or drawing an inference. A fact serves as a platform for further discussion: it clarifies where we start on the testability spectrum, and “This cannot be tested” sits at the very bottom of that scale. In this situation, I want to test the tester’s claim by asking about his knowledge of the product and his thoughts on what, exactly, cannot be tested.

Likewise, if he is drawing a conclusion, on what knowledge does he base it? Maybe he tried to evaluate some function, had difficulty getting a result, and so declared: “This cannot be tested.” My goal is to explore and understand his perceptions.

Let’s examine facts and conclusions using the following examples.

Root cause: Missing test conditions

A few years ago, I managed a global testing project with a large team of testers, and I remember one tester in particular who stood out, though not in a positive way. For almost any testing task given to him, the first thing he would claim was “I can’t test it because I don’t have the right test conditions.” This intrigued me, so I started investigating this person’s test conditions with questions like these:

  1. Do you have a dependency that is blocking you?
  2. Are you having problems with the test environment?
  3. Do you have the new code in a stable build?
  4. Do you need help creating custom test scripts?
  5. Do you need help creating custom product feeds?
  6. Is it the negative tests that cannot be run?

Root cause: It is not a production environment

Differences between non-production (test) and production environments are often cited as reasons why certain features cannot or should not be tested. These differences affect both functional and non-functional testing (load, endurance, etc.).

In production, there may be automated processes that a tester must perform manually in a test environment. Production data represents real customers and real circumstances that may or may not exist in a test environment, and it may contain sensitive information. Production hardware is built for high performance and is usually clustered. Finally, separate credentials may be required for applications to connect to databases and external services.

Many of these differences are product risks that development teams should be aware of. To reduce these risks, minimize the differences by asking yourself the following questions:

  • Is it possible to automate the manual tasks within a day?
  • Is it possible to extract and clean production data for use in the test environment, or does it need to be simulated?
  • Is it beneficial to build test environments with infrastructure and clustering comparable to production?
  • Can resource availability be evaluated in the production environment to minimize the application’s impact in large deployments?
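The question about extracting and cleaning production data often comes down to masking sensitive fields before the data enters a test environment. Here is a minimal sketch of that idea in Python; the record layout and the set of sensitive fields are hypothetical, not drawn from any particular product.

```python
# A minimal sketch; the record layout and the set of sensitive fields
# are hypothetical, not drawn from any particular product.
import hashlib

SENSITIVE_FIELDS = {"email", "phone", "ssn"}

def mask_record(record: dict) -> dict:
    """Replace sensitive values with stable, non-reversible placeholders."""
    cleaned = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            # Hashing keeps distinct values distinct (useful for joins)
            # without exposing the original data in the test environment.
            cleaned[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            cleaned[key] = value
    return cleaned

production_rows = [{"id": 1, "name": "A. Customer", "email": "a@example.com"}]
test_rows = [mask_record(row) for row in production_rows]
```

A stable hash, rather than random noise, means the same customer still looks like the same customer across tables, which keeps relational test scenarios intact.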

Root cause: Deeply embedded code

Whether it is scaffolding code or some other highly embedded functionality, I sometimes find that a module is difficult to test because it is so deeply embedded. The same can be true for infrequent procedures (such as month-end processing) or long-running operations. Deeply embedded functionality may include programs located several levels down the hierarchy, or even a hard-to-reach component on an automotive engine.

In many circumstances, producing proof of execution is difficult. There may be some evidence to suggest that the procedure took place, but if there is none, I suggest making the execution observable (e.g., adding logging to demonstrate that the procedure ran).
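As a minimal sketch of that idea, assume a hypothetical month-end routine; the logger name and messages here are illustrative. Logging at entry and exit gives a tester something concrete to verify, even when the code is buried deep in the system.

```python
# A minimal sketch, assuming a hypothetical month-end routine; the
# logger name and messages are illustrative. Each run leaves evidence
# a tester can check, even when the code is deeply embedded.
import logging

logger = logging.getLogger("billing.month_end")

def close_month(period: str) -> None:
    logger.info("month-end close started for %s", period)
    # ... deeply embedded processing happens here ...
    logger.info("month-end close finished for %s", period)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    close_month("2024-01")
```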

Root cause: The test is too risky

I have evaluated several products that carry some risk in their normal use. Obviously, testing rocket engines is not the same as testing a gaming application, but in both cases the methods carry some risk for the tester and for the product. Even when such products are less testable, my project manager and I debate whether to test them and how much risk to accept. In other words, what are the likely results of using this product if no specific testing is done? Let’s take the following example:

“I won’t be able to test this DELETE method because it deletes the entire database.”

I ask how we can reduce the risk while still conducting a test that yields reliable and relevant data.

  • Can we use tools to mock the transactions instead of running them?
  • Is it possible to use a mock to simulate the database? (See the sketch after this list.)
  • Can we build a similar structure and feed it with data from the actual table in the same database?
  • Can we abort the query and examine its syntax before executing it?
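To make the mocking option concrete, here is a minimal sketch using the Python standard library’s unittest.mock. The repository object and its delete_all() method are hypothetical stand-ins for whatever data layer the product actually uses.

```python
# A minimal sketch using the standard library's unittest.mock. The
# repository and its delete_all() method are hypothetical stand-ins
# for whatever data layer the product actually uses.
from unittest.mock import MagicMock

def purge_customers(repository) -> bool:
    """The risky operation under test: wipes the customers table."""
    repository.delete_all("customers")
    return True

def test_purge_calls_delete_without_touching_real_data():
    fake_repo = MagicMock()  # stands in for the real database layer
    assert purge_customers(fake_repo)
    # Verify the dangerous call happened, with the expected arguments,
    # while no real data was ever at risk.
    fake_repo.delete_all.assert_called_once_with("customers")
```

The dangerous call is exercised and its arguments are verified, yet no real data is ever touched.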

Alert your development team when you run such a test. They can prepare for unusual occurrences in the environment, anticipate additional problems, or even join in the bug hunting.

Root cause: The change is too small or too simple to test

As a tester, I have heard far more often than I did as a developer that a change is too simple to need testing. A change may seem trivial to the requester: just one line of code, or an update to a configuration file. Other “simple” adjustments include adding a phrase to an error message, correcting the spelling of text that customers see, or updating contact information. Yet simple changes can still introduce errors. I have encountered cases where the corrected error message suddenly had a new spelling problem, or the updated contact information no longer worked. After experiencing a few of these, spelling fixes and “easy changes” have become two of my highest-priority test scenarios. And while such changes may pass code review, it is better to verify them after they have been deployed.
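One cheap way to verify an “easy change” after deployment is a small smoke check. The sketch below assumes a hypothetical error page URL and contact address; both are placeholders, not real endpoints.

```python
# A minimal sketch of a post-deployment smoke check; the URL and the
# expected contact string are hypothetical placeholders.
import urllib.request

EXPECTED_CONTACT = "support@example.com"

def error_page_mentions_contact(url: str) -> bool:
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8", errors="replace")
    return EXPECTED_CONTACT in body

if __name__ == "__main__":
    assert error_page_mentions_contact("https://example.com/error")
```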

Root cause: Frustration

Once an upset tester approached me and said: “This cannot be tested.” Although my initial impulse was to help with the testing, I tried to be empathetic first. “Maybe it is untestable,” I conceded. We discussed the technical difficulties of that particular feature, as well as other factors affecting the testing. This conversation reduced his frustration, took his mind off the immediate task, and drew him willingly into the discussion.

I then started a joint investigation of the product under test. Remember to begin by verifying that the product has actually been deployed. How many times have you started evaluating something only to find out it wasn’t included in the latest build? Discuss the purpose of the testing and how the test plan works to understand behavior and identify problems. Is the plan aligned with the business expectations for the product?

With this as a starting point, I discussed his experiences and how he came to his conclusions. We used data from the application to validate our observations (database entries, log files, screenshots, etc.). Along the way, I kept checking whether anything testable emerged, and then headed in that direction. When our combined efforts yielded minimal results, we brought the developer into the discussion and explored ways to make the product more observable.

Root cause: Dealing with difficult personalities

On another occasion, I worked on a project with a developer who, while talented, was not always friendly to testers. This individual once approached me and rather brazenly stated that his code was untestable, and he advised me not to waste my time on it.

If you find yourself in this situation, start the conversation by asking what the individual has done that makes the product untestable. Inevitably, the questions must politely suggest that if a product is released without being tested, or under-tested, there is a risk of failure (how severe depends on the product, but I’m sure we can all imagine defects big enough to matter), along with the consequences that follow. My aim in the conversation is to find out how comfortable this person would be if that situation came to pass.

Alternatively, the discussion may focus on the motivation for reducing the product’s testability. While deliberately reducing testability may seem counterintuitive, an investigation may reveal security concerns, time constraints, ego (be careful when exploring this), or other factors beyond the person’s control. Regardless, as a tester, you may be accepting the risk of not reviewing all or part of the product.

Root cause: Different views on testing

During one project, I talked to a developer about some development activity and suggested that he release the code so I could examine it. He replied that there was nothing to test. In my mind, I went over the development work we had just discussed and the long hours he had put in over the past month. “Is there really nothing to test after so much time and effort?” I asked.

I expressed my concerns to my project manager, and we continued the work without my proposed testing. We later found bugs and missing requirements while testing the code. It’s a sad and oft-told story, but as we examined our accomplishments and missed opportunities after the incident, I discovered a new perspective.

To confirm his development path, the developer had experimented and created prototypes. His victories along the way encouraged him to push further. He saw his work as scaffolding: code that would help build other code. And he believed scaffolding was not worth testing.

As a result of this discussion, we worked with him on the following releases to review the scaffolding code early. I asked to examine the code to better understand how it came together in the final product. The testing team also provided input on inconsistencies in expected functionality and interface elements. We didn’t open formal bug reports (because we were looking at code under continuous development), but instead offered observations about product changes, an iterative approach that many agile teams experiment with.
