Test automation is an extremely large and complex topic, wouldn’t you agree?

There are so many important pieces of information that can improve your test automation.

Hence, I decided to document all the automation patterns and anti-patterns to help you. I will update and expand this post regularly.

Patterns

Tests Should Be Atomic

Your automated test should form a single irreducible unit. This means that your tests should be extremely focused and test only one thing. A single automated test should not attempt an entire end-to-end flow.

A good rule of thumb that I use on my teams is:

An automated acceptance test should not run longer than 1 minute on your local resources

If your test runs longer than 1 minute, keep reading to see why that might be dangerous. The advantages of writing atomic tests are numerous…
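By the way, if you use pytest, you can even enforce this rule of thumb automatically. Here is a minimal sketch, assuming the pytest-timeout plugin is installed; any test that exceeds the one-minute budget fails instead of silently growing.

```python
# Requires: pip install pytest pytest-timeout
import pytest

# Fail any test that runs longer than 60 seconds. The same budget can be
# applied suite-wide with --timeout=60 on the command line or
# `timeout = 60` in pytest.ini.
@pytest.mark.timeout(60)
def test_home_page_loads():
    # ...open the page and make one focused assertion...
    ...
```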

Atomic tests fail fast

First, writing atomic tests allows you to fail fast and fail early. This implies that you will get extremely fast and focused feedback. If you want to check the state of a feature, it will take you no longer than 1 minute.

Atomic tests decrease flaky behavior

Second, writing atomic tests reduces flakiness because it decreases the amount of possible breaking points in that test. Flakiness is less of a problem with unit or integration tests. But it is a large problem with acceptance UI automation.

Here’s an example:

  1. Open UltimateQA.com home page
  2. Assert that the page opened
  3. Assert that each section on the page exists
  4. Open Blog page
  5. Search for an article
  6. Assert that the article exists

For UI automation, every single step is a chance for something to go wrong. A locator may have changed, the interaction mechanism may have changed, your synchronization strategy may be broken, and so on. Therefore, the more steps that you add, the more likely your test is to break and convey false positives.

Atomic tests allow for better testing

The third benefit of writing atomic tests is that a failing test does not block other functionality from being tested. Take the test I mentioned above: if it fails on Step 3, you might never get to check that the Blog page works or that the Search functionality works, assuming you don’t have other tests covering that functionality. As a result of one large test, you reduce your test coverage.
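To make that concrete, here is a minimal pytest sketch of the same scenario split into atomic tests. The `browser` fixture and the `HomePage`/`BlogPage` page objects are hypothetical stand-ins for whatever abstraction your framework uses.

```python
# One long test chaining steps 1-6 becomes three independent tests.
# A failure in any one of them no longer blocks the others.
# `browser`, HomePage, and BlogPage are hypothetical.

def test_home_page_opens(browser):
    home = HomePage(browser).open()
    assert home.is_displayed()

def test_home_page_sections_exist(browser):
    home = HomePage(browser).open()
    assert home.all_sections_present()

def test_blog_search_finds_article(browser):
    blog = BlogPage(browser).open()
    results = blog.search("automation patterns")
    assert results.contains("automation patterns")
```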

Atomic tests are short and fast

Finally, another great benefit of writing small tests is that they will run quicker when parallelized…

If all of your tests run in one minute max and you have one hundred tests, then throwing one hundred VMs at that suite means that you can have feedback in one minute. However, if you have even a single test that runs in ten minutes, then applying the same number of VMs will return you results in ten minutes. That’s a 10X decrease in speed to feedback. Fast feedback loops are critical in test automation.

The total automated suite feedback time will be as slow as your slowest test
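A quick back-of-the-envelope sketch of that math, assuming one parallel executor per test:

```python
# With enough parallel executors, suite feedback time equals the
# duration of the slowest test, not the sum of all tests.
durations = [1] * 100         # one hundred 1-minute tests
print(max(durations))         # feedback in 1 minute

durations.append(10)          # add a single 10-minute test
print(max(durations))         # feedback now takes 10 minutes: 10x slower
```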

By the way, I have seen automated tests that take 30 – 90 minutes to execute. These tests are extremely annoying to run because they take so long. Even worse, I’ve never seen such a test produce valuable feedback in my entire career. Only false positives.

How to break up giant end-to-end UI tests

One of the most common problems I see with clients is that they have extremely long-running UI tests that break the Atomic Tests Pattern.

So the question is, how can you break up your giant UI test? Here is a simple scenario:

  1. Open Amazon.com
  2. Search for an item
  3. Add item to cart
  4. Checkout

The first problem is that many automation engineers assume that you must do an entire end-to-end flow for this automated test.

Basically, you must complete step 1 before step 2 and so on…

The correct approach is to be able to inject data to populate the state of the application prior to any UI interactions.

You can inject data via web requests or via updating your database directly. The former method is preferred.

If you can inject data between the seams of the application, then you can isolate each step and test it on its own.

For example:

  1. Use an API to send a web request that will generate a user
  2. Use an API that will generate an item in your Amazon cart
  3. Now you can pull up the UI to the cart page and checkout
  4. Clean up all test data after

This is the best-practice approach: you tested the checkout process without using the UI for the first three steps of the original flow.
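Here is a minimal sketch of that flow using Python with requests and Selenium. The API endpoints, URLs, and element locator are hypothetical; substitute whatever seams your application actually exposes.

```python
# A sketch of the injected-data approach: seed state via the API,
# exercise only the behavior under test through the UI, then clean up.
# All endpoints and locators here are hypothetical.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

API = "https://api.example.com"

def test_checkout():
    # Steps 1-2: create a user and a cart item through the API.
    user = requests.post(f"{API}/users", json={"name": "test-user"}).json()
    requests.post(f"{API}/users/{user['id']}/cart",
                  json={"sku": "ABC-123", "qty": 1})

    driver = webdriver.Chrome()
    try:
        # Step 3: the UI is used only for the checkout itself.
        driver.get(f"https://www.example.com/cart?user={user['id']}")
        driver.find_element(By.ID, "checkout").click()
        assert "Order confirmed" in driver.page_source
    finally:
        driver.quit()
        # Step 4: clean up the injected test data.
        requests.delete(f"{API}/users/{user['id']}")
```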

Using an API is extremely fast… A web request can execute in about 100 ms.

This means that steps 1, 2, and 4 can take less than one second to execute. The only other step you will need to do is finish the checkout process.

It gets better:

Using an API is much more robust than using a UI for test steps. As a result, you will drastically decrease test flakiness in your UI automation.

What if you don’t have the capability to inject data for testing?

I know that the world isn’t perfect. I’ve worked with clients that are not able to follow the prescribed process above. So they ask what they can do.

You have two options:

First, you should work with the developers to make your application more testable. Not being able to easily test your application is a sign of poor development.

Work with your team to provide seams where you can inject and manipulate test data and application state. I have done it myself, and I have seen it done with my clients.

It’s doable and most developers are happy to help.

Here’s the kicker, and you won’t want to hear it:

If you can’t work with the developers, then just don’t automate such a complicated scenario. I know that your manager asked you to automate it…

However, we are the automation engineers. We are the professionals. We must decide what to automate and what not to automate based on our understanding of the application requirements.

If your manager is non-technical, they should not be telling you how to do your job. You don’t see managers telling developers how to code. Why is it okay for managers to tell an automation engineer what to automate?

The answer is it’s not okay!

You must be the expert and decide on the correct approach to do your job.

Anti-Patterns

Anti-Pattern: Assuming that more UI automation is better

There are very few automation anti-patterns that will kill an automation effort faster than using UI automated testing to try and automate everything.

If you actually want test automation to succeed at your organization, then you must avoid this anti-pattern at all costs.

More automation is not necessarily better. In fact, I would argue that for an organization that is starting out, less automation that is stable is orders of magnitude better than more automation.

Here’s some cool info:

I’m super lucky in that I get to consult and work with clients all over the world. So I’ve seen all sorts of organizations.

This organization has executed 123K automated UI tests in 7 days!

If you want to kill your automation program really fast and have your organization not trust your results, follow this anti-pattern: automate everything from the UI layer.

Here’s the kicker:

Take a look at this graph and how only 15% of the tests passed.

Very low passing rate

Now, can this organization really say that 85% of the features being tested contain bugs?

In that case, this would mean that approximately 104,000 bugs were logged in the 7-day period. That seems highly unlikely, if not impossible…

So then, what are all of these failures and errors?

They’re called false positives: failing tests that are not the result of actual faults in the software being tested.

Who is sorting through all of these failures?

Is there really someone on the team who sits and sorts through all of these failures?

~104,000 non-passing tests… So why did they fail? Is there one bug in the application that caused 50,000 failures?

Are there two or more bugs causing all of these problems?

Or were no bugs found at all, and all the failures are the result of worthless automation efforts? I’d bet $104,000 that it’s this option :)

Here’s the problem:

How many automation engineers do you need to sort through 104,000 non-passing tests in one week?

When I ran a team of four automation engineers, we could barely keep up with a few non-passing automated tests per week.

So let’s be honest… nobody is analyzing these non-passing automated tests, would you agree?

So then what value are these test cases serving the entire organization? What decision do they help the business to make about the quality of the software?

If there was an 85% failure rate in your manual testing, do you move your software to production? Of course not…

So why is it acceptable for so many automated tests to run, not pass, and continue to run?

It’s because this automation is just noise now… Noise that nobody listens to… Not even the developers of the automation.

Automation Failed!

But, there’s hope…

There are organizations that do automation correctly as well. Here’s an example of one…

Automated tests executed over a year

Why is this automation suite more successful?

First, notice that it was executed over a year. And over a year there were not that many failures…

Yes, this doesn’t necessarily imply that the automation is successful. But let me ask you.

Which automation would you trust more? The one that passes for months at a time and gets a failure once every couple of months?

Or the one where only 15% of tests pass and ~104,000 are not passing?

Food for thought:

Think about a single feature – Facebook login or Amazon search, for example.

How often does that feature break, based on your experience? Very rarely, if ever, in my experience…

So if you have an automated test case for one of these features, which of the graphs above look more like how the development of the feature actually behaves?

That’s your answer…

Your automated UI tests should behave almost identically to how the actual development of a feature behaves.

Meaning, passing the vast majority of the time (around 99.5%) and failing once in a blue moon due to a real regression.

So what can you do to make your automation valuable?

It’s actually really simple…

If your automation is not providing a correct result more than 99.5% of the time, then stop automating and fix your reliability! You’re only allowed 5 false positives out of 1000 test executions. That’s called quality automation.
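Here is a tiny sketch of that 99.5% gate, just to make the arithmetic concrete:

```python
# 5 false positives per 1,000 executions is the most you are allowed.
RELIABILITY_TARGET = 0.995

def suite_is_trustworthy(executions: int, false_positives: int) -> bool:
    """True if the suite produced a correct result at least 99.5% of the time."""
    return (executions - false_positives) / executions >= RELIABILITY_TARGET

assert suite_is_trustworthy(1000, 5)        # exactly on budget
assert not suite_is_trustworthy(1000, 6)    # stop automating, fix reliability
```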

I know what you are thinking… Impossible, right?

Not at all. I actually ran the team that had these execution results below…

Automated tests executed over a year

Sadly, I no longer have the exact passing percentage of these metrics. But if you do a little estimation, you’ll be able to see that the pass rate of this graph is extremely high.

Furthermore, I can say that every failure on this graph was a bug that was introduced into the system, not a false positive, which is so common in UI automation.

By the way, I’m not saying this to impress you. Rather, to impress upon you the idea that 99.5% reliability from UI automation is possible and I’ve seen it.

Give it a shot and let me know your outcome 🙂

Anti-Pattern: Using a complicated data store such as Excel

One of the most common questions from my students and clients is how to use Excel for test data management in test automation.

Don’t use Excel to manage your automation test data

I understand the rationale behind using Excel for your test data. I’ve been doing test automation for a really long time and I know about Keyword Driven Frameworks and trying to let manual testers create automated tests in Excel. My friends…

It just doesn’t work… I wasn’t able to make it work myself and I’ve never seen anyone else make it work successfully.

Why is using Excel an anti-pattern?

  1. The logic to read and manage Excel adds unnecessary overhead to your test automation. You will need to write hundreds of lines of code just to manage an Excel object and read data based on column headers and row locations. It’s not easy and it’s prone to error; I did it many years ago. All of this will eat into your automation time and provide no value to the business that employs you.
  2. You will be required to manage an external solution component for your test automation. This means that you can never simply pull the code and have everything work. You will need a license for Excel, and you will need to download and install it in all of your automation environments: usually local, Dev, Test, and Prod. Managing this Excel instance in all of these environments is simply another waste of your time.

What are your solutions?

  1. The best solution is to have an API that you can use to read test data. This is a robust and lightweight solution.
  2. If you don’t have an API, you can talk directly to the database. This takes much less code and is much easier to manage than working with an external Excel object.
  3. If you must use some data source, use a .csv or .json file. CSV and JSON files are extremely lightweight, easy to read, and can be parsed directly in your automation code (see the sketch after this list). This means that you will be able to simply download the code and have everything work without needing to perform other installations.
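For example, here is a minimal sketch of option 3, reading test data from a JSON file with nothing but the standard library. The file name and record layout are made up for illustration.

```python
# Test data lives in a JSON file checked into the repo:
# no Excel license, no installation, no external dependencies.
import json
from pathlib import Path

def load_test_data(name: str) -> dict:
    """Return one named record from testdata.json (hypothetical file)."""
    records = json.loads(Path("testdata.json").read_text())
    return records[name]

# testdata.json might look like:
# {"valid_user": {"email": "qa@example.com", "password": "s3cret"}}
user = load_test_data("valid_user")
```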

Anti-Pattern: Trying to use UI automation to replace manual testing

Automated testing CANNOT replace manual testing

I have not read or seen any automation luminary who claims that automation can replace manual testing. Not right now at least… Our tools have a long way to go.

However, I know and have worked with managers and teams whose goal for test automation is exactly what cannot be done.

And so these individuals pursue a goal that is impossible… Obviously leading to failure.

Side note:

Use of automation can drastically enhance the testing process. If used correctly, automation can reduce, not replace, the manual testing resources required.

Why can’t automation replace manual testing?

First, it’s impossible to get 100% automated test coverage. It’s actually impossible to get 100% test coverage in general…

That’s why we still have bugs on all the apps in the world, right?

Anyways, if you can’t automate 100% of the application, that means that you will need some sort of manual testing to validate the uncovered portions.

Second, UI automation is too flaky, too hard to write, and too hard to maintain to get over 25% test coverage…

This is based on experience and is an approximation… I don’t have any hard data on this.

However, you actually don’t want higher coverage than 25%. I guess it’s possible that with a well designed, modular system, you might be able to get higher UI automation coverage.

But this is an exception, not the rule.

Here’s the kicker:

Using a combination of unit, integration, and UI automation, you might get close to 90% test coverage…

But that’s really hard. And this is just a side note.

Finally, there are some manual activities that cannot be automated technologically…

That’s for now and for at least a few years in my opinion.

Some examples include UX Testing and Exploratory Testing.

So again, if you are trying to use automation to replace manual testing, it will be a futile effort.

What is the solution?

Use the automation testing pyramid and don’t try to replace manual testing with UI automation.

Use UI automation and any automation to enhance the testing process.

A combination of manual testing and automated testing will create the best end user experience.

Almost Anti-patterns?

This section is a collection of automation techniques that I have seen cause my clients a lot of problems. I can’t quite classify them as “anti-patterns” because they are not widely accepted as such by the experts in the automation community. However, I do believe they are on the brink of being bad practices that you should strongly reconsider.

Anti-pattern: Using BDD tools for UI automation

Description coming soon… In the meantime, do your research and think how and whether it will actually help you succeed with test automation.

Nikolay Advolodkin is a self-driven SDET on a lifelong mission to create profound change in the IT world and ultimately leave a legacy for his loved ones, community, and the world at large. Today, he serves as the CEO and Test Automation Instructor at UltimateQA.com and contributes informative articles to leading test automation websites like SimpleProgrammer.com and TechBeacon.com.
