Test automation is an extremely large and complex topic, wouldn’t you agree?

There are so many important pieces of information that can improve your test automation. Hence, I decided to document all the automation patterns and anti-patterns to help you. I will update and expand this post regularly.

Patterns

Pattern: Tests should be fast

Your automated tests should be fast! I can’t stress this enough. Unit and integration tests are already really fast; they run in milliseconds. A unit test can be as fast as one millisecond. So we really don’t need to worry about those.

Here’s the kicker:

UI tests, on the other hand, are really slow: they run in seconds. That makes them thousands of times slower than unit and integration tests!

So it’s really important to focus on fast UI tests.

Your automated UI tests should take no longer than one minute on your local resources.

Nikolay Advolodkin, CEO at Ultimate QA

Danny McKeown stated at Automation Guild 2019 that his tests run no longer than two minutes.

Danny McKeown, Automation Guild 2019

The reason for this boils down to the fact that you want your builds to be fast, meaning no more than 20 minutes. Otherwise, nobody will have the patience to wait that long for automation feedback.

To accomplish fast builds, you need small, fast tests running in parallel. I’m sorry, there is no other way.
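
For example, if your suite runs on JUnit 5, turning on parallel execution is mostly configuration. Here is a minimal sketch; the right mode and thread counts depend on your project and runner:

    # src/test/resources/junit-platform.properties
    junit.jupiter.execution.parallel.enabled = true
    junit.jupiter.execution.parallel.mode.default = concurrent

Small, independent tests are exactly what make a switch like this safe to flip on.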


Pattern: Tests Should Be Atomic


Your automated test should form a single irreducible unit. This means that your tests should be extremely focused and test only a single thing. A single automated test should not do something like end-to-end automation.

A good rule of thumb that I use on my teams is:

An automated acceptance test should not run longer than one minute on your local resources

If your test runs longer than one minute, then keep reading to see why that might be dangerous. The advantages of writing atomic tests are numerous…

500% improvement by running atomic parallel tests.

In a recent case study, we found that 18 long end-to-end tests ran in ~20 min. Using a much larger suite of 180 atomic tests, with the exact same code coverage, running in parallel, we were able to decrease the entire suite execution time to ~4 min.

Atomic vs. non-atomic tests

Advantages of atomic tests

1. Atomic tests fail fast

First, writing atomic tests allows you to fail fast and fail early. This implies that you will get extremely fast and focused feedback. If you want to check the state of a feature, it will take you no longer than 1 minute.

2. Atomic tests decrease flaky behavior

Second, writing atomic tests reduces flakiness because it decreases the number of possible breaking points in that test. Flakiness is less of a problem with unit or integration tests. But it is a large problem with acceptance UI automation.

Here’s an example:

  1. Open UltimateQA.com home page
  2. Assert that the page opened
  3. Assert that each section on the page exists
  4. Open Blog page
  5. Search for an article
  6. Assert that the article exists

For UI automation, every single step is a chance for something to go wrong. A locator may have changed, the interaction mechanism may have changed, your synchronization strategy may be broken, and so on. Therefore, the more steps that you add, the more likely your test is to break and convey false positives.
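
To make that concrete, here is a rough sketch of how the scenario above could be split into two atomic tests, assuming JUnit 5, Selenium, and Chrome; the locators and assertions are hypothetical stand-ins for the real page structure:

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class UltimateQaAtomicTests {
        private WebDriver driver;

        @BeforeEach
        void setUp() { driver = new ChromeDriver(); }

        @AfterEach
        void tearDown() { driver.quit(); }

        @Test
        void homePageSectionsExist() {
            // Covers old steps 1-3 as one focused test
            driver.get("https://ultimateqa.com");
            Assertions.assertFalse(driver.findElements(By.cssSelector("section")).isEmpty());
        }

        @Test
        void blogSearchFindsArticle() {
            // Covers old steps 4-6: deep-link straight to the blog, no home page detour
            driver.get("https://ultimateqa.com/blog");
            driver.findElement(By.name("s")).sendKeys("selenium\n"); // hypothetical locator
            Assertions.assertTrue(driver.getTitle().toLowerCase().contains("selenium"));
        }
    }

Each test opens its own browser, checks one thing, and can fail without hiding the other.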

3. Atomic tests allow for better testing

The third benefit of writing atomic tests is that a failing test will not block other functionality from being tested. Take the test I mentioned above: if it fails on step 3, then you might never get to check that the Blog page works or that the Search functionality works, assuming you don’t have other tests covering that functionality. As a result of one large test, you reduce your test coverage.

4. Atomic tests are short and fast

Finally, another great benefit of writing small tests is that they will run quicker when parallelized…

How I got a 98% improvement in test execution speed with a single change

98% improvement in average test case execution time by having atomic, parallel tests

In the scenario above, I had a suite of 18 end-to-end tests that were NOT atomic and were not running in parallel.

Maintaining the same code coverage, I broke down my tests into 180 tiny, atomic tests…

Ran them in parallel and decreased the average test case time from 86 s to 1.76 s!

Your entire automation suite will only run as fast as your slowest test

Nikolay Advolodkin

By the way, I have seen automated tests that take 30 – 90 minutes to execute. These tests are extremely annoying to run because they take so long. Even worse, I’ve never seen such a test produce valuable feedback in my entire career. Only false positives.

Are you having errors in your Selenium automation code? Maybe this post on the Most Common Selenium Errors can help you.

How to break up giant end-to-end UI tests?


Okay, you believe me: atomic tests are good.

But how exactly do you break up your large end-to-end tests?

Trust me, you’re not the only one struggling with this situation…

It gets worse:

On a daily basis, I encounter clients that have the exact same issue.

Furthermore, I wish that I could provide a simple answer to this. But I cannot…

For most individuals, this challenge is one of technology and culture.

However, I will provide a step-by-step guide to help you get to atomic tests.

It won’t be easy… But when you achieve it, it will be SO Sweet!

Here is a simple scenario:

  1. Open Amazon.com
  2. Assert that the page opens
  3. Search for an item
  4. Assert that item is found
  5. Add item to cart
  6. Assert that item is added
  7. Checkout
  8. Assert that checkout is complete

The first problem is that many automation engineers assume that you must do an entire end-to-end flow for this automated test.

Basically, you must complete step 1 before step 2, and so on… because how can you get to the checkout process without having an item in the cart?

The best-practice approach is to be able to inject data that populates the state of the application prior to any UI interactions.

How to manipulate test data for UI automation?

You can inject data via three options:

  1. Using something like a RESTful API to set the application into a specific state
  2. Injecting data into the DB to set the application in a certain state
  3. Using cookies

If you can inject data between the seams of the application, then you can isolate each step and test it on its own.
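
Option 3, for instance, takes only a few lines with Selenium’s cookie API. Here is a minimal sketch, assuming the application keeps its session in a cookie; the cookie name, value, and URLs are hypothetical:

    import org.openqa.selenium.Cookie;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class CookieInjectionSketch {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            // Selenium only lets you set a cookie for the domain you are currently on
            driver.get("https://www.example.com");
            // Hypothetical cookie name and value; use whatever your app issues at login
            driver.manage().addCookie(new Cookie("session_id", "precomputed-session-token"));
            // Now reload as a logged-in user, skipping the UI login flow entirely
            driver.get("https://www.example.com/account");
            driver.quit();
        }
    }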

For example, here is how the API approach (option 1) might look for the Amazon scenario (a code sketch follows the list):

  1. Use an API to send a web request that will generate a user
  2. Use an API that will generate an item in your Amazon cart
  3. Now you can pull up the UI to the cart page and checkout using web automation
  4. Clean up all test data after
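
Here is a minimal sketch of steps 1 and 2 in Java, using the JDK’s built-in HTTP client (Java 11+). The endpoints and payloads are hypothetical stand-ins for your application’s real API:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CheckoutSetupSketch {
        private static final HttpClient HTTP = HttpClient.newHttpClient();

        // Step 1: create a user over the API (hypothetical endpoint and payload)
        static String createUser() throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/users"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"test-user\"}"))
                    .build();
            return HTTP.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }

        // Step 2: put an item in that user's cart (hypothetical endpoint)
        static void addItemToCart(String userId, String sku) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/users/" + userId + "/cart"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"" + sku + "\"}"))
                    .build();
            HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        }

        // Step 3 is the only UI step: drive the browser straight to the cart page
        // and complete checkout. Step 4 is a DELETE request to clean the data up.
    }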

This is the best-practice approach. You tested the checkout process without using the UI to set up any of the state.

Using an API is extremely fast… A web request can execute in roughly 100 ms.

This means that steps 1, 2, and 4 can take less than one second to execute. The only step you will need to do through the UI is to finish the checkout process.

It gets better:

Using an API is much more robust than using a UI for test steps. As a result, you will drastically decrease test flakiness in your UI automation.

What if you don’t have the capability to inject data for testing?

I know that the world isn’t perfect and many of us aren’t lucky enough to have applications that are developed with testability in mind.

So what can you do?

You have two options:

1. Work with developers to make the application more testable

Yes, you should work with the developers to make your application more testable. Not being able to easily test your application is a sign of poor development practices.

This does mean that you will need to leave your cube and communicate across silos to break down communication barriers.

Frankly, this is part of your job. You need to communicate across teams and work together to create a stable product.

If the product fails, the whole team fails, not just a specific group

Again, it’s not easy…

I’ve worked at one company where it took me two years to simply integrate developers, automation engineers, and manual QA into a single CI pipeline.

It was a weekly grind to get everyone to care about the outcome of the automation suite.

And in the end, our team was stronger and more agile than ever.

Trust me, this is doable and most developers are happy to help. But you must be willing to break down these barriers.

Here’s the second option, and you won’t want to hear it:

2. If your application is not automation-friendly, don’t automate

If you can’t work with the developers because you’re unwilling…

Or if the company culture doesn’t allow you to…

Then just don’t automate something that won’t provide value. I know that your manager asked you to automate it…

However, we are the automation engineers. We are the professionals.

We must decide what to automate and not to automate based on our understanding of application requirements.

We were hired because of our technical expertise, because of our abilities to say what is possible, what is not possible, and what will help the project to succeed.

Although it might feel easy to say “yes, I will automate your 30-minute scenario”, it’s not right to do so.

If your manager is non-technical, they should not be telling you how to do your job. You don’t see managers telling developers how to code. Why is it okay for managers to tell an automation engineer what to automate?

The answer is it’s not okay!

You must be the expert and decide on the correct approach to do your job.

If you don’t agree with me…

Check out this video from Robert Martin – arguably one of the best Software Developers of this century.

He does a better job explaining professionalism than I ever could 🙂

Automation Best Practices

If you like video format, I recorded a video presentation of the most important automation best practices that I could pack into 60 minutes. Enjoy 🙂


Anti-Patterns

Anti-Pattern: UI tests that expose interactions with web elements

The benefit of using Page Objects is that they abstract implementation logic from the tests. The tests can be focused on the scenarios and not implementation. The idea is that the scenario doesn’t change, but the implementation does.

For example, imagine a test method that performs every step of a login itself: typing the username, typing the password, clicking the button. At any point, we may need to change our steps. Maybe a new field got added and now we need to check a checkbox. Or maybe one of the fields gets removed.

Even more common, you want to add logging. In that case, every test will need to be updated for this new flow (and that could be thousands of tests).
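
Here is a sketch of what that anti-pattern looks like (Selenium with JUnit; the URL and locators are hypothetical). Notice how the test itself owns every element interaction:

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class LoginTestAntiPattern {
        private WebDriver driver;

        @BeforeEach
        void setUp() { driver = new ChromeDriver(); }

        @AfterEach
        void tearDown() { driver.quit(); }

        @Test
        void userCanSeeDashboard() {
            // Every element interaction lives inside the test itself
            driver.get("https://www.example.com/login");
            driver.findElement(By.id("username")).sendKeys("standard_user");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();
            Assertions.assertTrue(driver.findElement(By.id("dashboard")).isDisplayed());
        }
    }

If any of these login steps change, every test written this way has to change with them.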

It gets better:

The right way to solve this problem is to encapsulate all of the steps into a method called Login().

Now it doesn’t matter if we have to add logging, add an extra field, remove a field, and so on. There will be a single place to update the login steps: the Login() method. Take a look below. The test will only need to change for a single reason: if the requirements change.
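
Here is a sketch of the encapsulated version, using the same hypothetical locators as above:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class LoginPage {
        private final WebDriver driver;

        public LoginPage(WebDriver driver) { this.driver = driver; }

        // The only place that knows how logging in is implemented.
        // Add a field, remove a field, add logging: the tests never change.
        public void Login(String username, String password) {
            driver.get("https://www.example.com/login");
            driver.findElement(By.id("username")).sendKeys(username);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login-button")).click();
        }
    }

    // And the test shrinks to the scenario itself:
    //
    //     @Test
    //     void userCanSeeDashboard() {
    //         new LoginPage(driver).Login("standard_user", "secret");
    //         Assertions.assertTrue(driver.findElement(By.id("dashboard")).isDisplayed());
    //     }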


Anti-Pattern: Assuming that more UI automation is better

There are very few automation anti-patterns that will kill an automation effort faster than using automated UI testing to try to automate everything.

If you actually want test automation to succeed at your organization, then you must avoid this anti-pattern at all costs.

Automating less is better than automating more tests


More automation is not necessarily better. In fact, I would argue that for an organization that is starting out, a smaller amount of stable automation is orders of magnitude better than more automation.

Here’s some cool info:

I’m super lucky in that I get to consult and work with clients all over the world. So I’ve seen all sorts of organizations.

An organization that ran 123K automated UI tests in 7 days

This organization has executed 123K automated UI tests in 7 days!

If you want to kill your automation program really fast and have your organization not trust your results, follow this anti-pattern: automate everything from the UI layer.

Here’s the kicker:

Take a look at this graph and how only 15% of the tests passed.

A very low passing rate

Now, can this organization really say that 85% of the features being tested here contain bugs?

That would mean that approximately 104,000 bugs were logged in the 7-day period. That seems highly unlikely, if not impossible…

So then, what are all of these failures and errors?

They’re called false positives: failing tests that are not the result of actual faults in the software being tested.

Who is sorting through all of these failures?

Is there really someone on the team that is sitting and sorting through all of these failures?

~104,000 non-passing tests… So what is the reason that they failed? Because there is one bug in the application that caused 50,000 failures?

Because there are two or more bugs causing all of these problems?

Or is it because no bugs were found and all the failures are a result of worthless automation efforts? I’d bet $104,000 that it’s this option :)

Here’s the problem:

How many automation engineers do you need to sort through 104,000 non-passing tests in one week?

When I ran a team of four automation engineers, we could barely keep up with a few non-passing automated tests per week.

So let’s be honest… nobody is analyzing these non-passing automated tests, would you agree?

So then what value are these test cases serving the entire organization? What decision do they help the business to make about the quality of the software?

If there was an 85% failure rate in your manual testing, would you move your software to production? Of course not…

So why is it acceptable for so many automated tests to run, not pass, and continue to run?

It’s because this automation is just noise now… Noise that nobody listens to… Not even the developers of the automation.

Automation Failed!

But, there’s hope…

There are organizations that do automation correctly as well. Here’s an example of one…

Automated tests executed over a year

Why is this automation suite more successful?

First, notice that it was executed over a year. And over a year there were not that many failures…

Yes, this doesn’t necessarily imply that the automation is successful. But let me ask you.

Which automation would you trust more? One that is passing for months at a time and gets a failure once every couple months?

Or the automation where only 15% of tests pass and 104,000 executions are not passing?

Food for thought:

Think about a single feature, Facebook login or Amazon search, for example.

How often does that feature actually break? Very rarely, if ever, in my experience…

So if you have an automated test case for one of these features, which of the graphs above look more like how the development of the feature actually behaves?

That’s your answer…

Your automated UI tests should behave almost identical to how actual development of a feature happens.

Meaning: passing the majority of the time, like 99.5% of the time, and failing once in a blue moon due to a real regression.

So what can you do to make your automation valuable?

It’s actually really simple…

If your automation is not providing a correct result more than 99.5% of the time, then stop automating and fix your reliability! You’re only allowed 5 false positives out of 1000 test executions. That’s called quality automation.


I know what you are thinking… Impossible, right?

Not at all. I actually ran the team that had these execution results below…

Automated tests executed over a year

Sadly, I no longer have the exact passing percentage of these metrics. But if you do a little estimation, you’ll be able to see that the pass rate of this graph is extremely high.

Furthermore, I can say that every failure on this graph was a bug that was introduced into the system, not a false positive, which is so common in UI automation.

By the way, I’m not saying this to impress you. Rather, to impress upon you the idea that 99.5% reliability from UI automation is possible and I’ve seen it.

Give it a shot and let me know your outcome 🙂

Anti-Pattern: Using a complicated data store such as Excel


One of the most common questions from my students and clients is how to use Excel for test data management in test automation.

Don’t use Excel to manage your automation test data

I understand the rationale behind using Excel for your test data. I’ve been doing test automation for a really long time and I know about Keyword Driven Frameworks and trying to let manual testers create automated tests in Excel. My friends…

It just doesn’t work… I wasn’t able to make it work myself and I’ve never seen anyone else make it work successfully.

Why is using Excel an anti-pattern?

  1. The logic to read and manage Excel adds extra overhead to your test automation that isn’t necessary. You will need to write hundreds of lines of code just to manage an Excel object and read data based on column headers and row locations. It’s not easy and it’s prone to error. I did it many years ago. All of this will eat into your automation time and provide no value to the business that employs you.
  2. You will be required to manage an external solution component for your test automation. This means that you can never simply pull the code and have everything work. You will need to have a license for Excel. You will need to download and install it. And you will need to do this for all of your automation environments. Usually local, Dev, Test, and Prod. This means that you need to manage this Excel instance in all of these environments. This is simply another waste of your time.

What are your solutions?

  1. The best solution is an API that you can use to read test data. This is a robust and lightweight solution.
  2. If you don’t have an API, you can talk directly to the database. This takes much less code, and it’s much easier to manage than working with an external Excel object.
  3. If you must use some data source, use a .csv or .json file, as shown in the sketch below. CSV and JSON files are extremely lightweight, easy to read, and can be checked directly into your automation codebase. This means that you will be able to simply download the code and have everything work without needing to perform other installations.
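
Here is a minimal sketch of the JSON option, assuming the Jackson library; the file path and field names are invented for illustration:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.io.File;
    import java.util.Arrays;
    import java.util.List;

    public class TestDataLoader {
        // Matches a hypothetical src/test/resources/testdata.json:
        // [ {"username": "user1", "password": "pass1"},
        //   {"username": "user2", "password": "pass2"} ]
        public static class Credentials {
            public String username;
            public String password;
        }

        public static List<Credentials> load() throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            Credentials[] data = mapper.readValue(
                    new File("src/test/resources/testdata.json"), Credentials[].class);
            return Arrays.asList(data);
        }
    }

No Excel license, no external install: the test data ships with the repository.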

Anti-Pattern: Trying to use UI automation to replace manual testing

Automated testing CANNOT replace manual testing

I have not read or seen any automation luminary who claims that automation can replace manual testing. Not right now at least… Our tools have a long way to go.

However, I know and have worked with managers and teams whose goal for test automation is exactly this impossible replacement.

And so these individuals pursue a goal that is impossible… Obviously leading to failure.

Side note:

Use of automation can drastically enhance the testing process. If used correctly, automation can reduce, not replace, the manual testing resources required.

Why can’t automation replace manual testing?

First, it’s impossible to get 100% automated code coverage. It’s actually impossible to get 100% code coverage in general…

That’s why we still have bugs on all the apps in the world, right?

Anyways, if you can’t automate 100% of the application, that means that you will need some sort of manual testing to validate the uncovered portions.

Second, UI automation is too flaky, too hard to write, and too hard to maintain to get over 25% code coverage…

This is based on experience and is an approximation… I don’t have any hard data on this.

However, you actually don’t want UI coverage higher than about 25% anyway. I guess it’s possible that with a well-designed, modular system, you might be able to get higher UI automation coverage.

But this is an exception, not the rule.

Here’s the kicker:

Using a combination of unit, integration, and UI automation, you might get close to 90% test coverage…

But that’s really hard. And this is just a side note.

Finally, there are some manual activities that cannot be automated technologically…

That’s for now and for at least a few years in my opinion.

Some examples include UX Testing and Exploratory Testing.

So again, if you are trying to use automation to replace manual testing, it will be a futile effort.

What is the solution?

Use the automation testing pyramid and don’t try to replace manual testing with UI automation.

Use UI automation and any automation to enhance the testing process.

  • A combination of manual testing and automated testing will create the best end user experience.

Anti-Pattern: Mixing functional automation with performance testing

Description coming soon… In the meantime, do your research about whether this makes sense. Remember that, at the end of the day, it is always faster to run ten one-minute tests in parallel than to run a single five-minute test: that’s one minute to suite feedback versus five minutes. Even if each test takes longer because of setup and teardown, parallelization is still the most powerful way to scale your automation. Trying to scale your automation by combining tests together is not the right approach.

Anti-Pattern: Keyword Driven Testing

Description coming soon… In the meantime, do your research. What I can say quickly is that I know ZERO successful SDETs that use Keyword Driven Testing. The reason is that it exacerbates code duplication. Watch these videos that will put things into perspective:

Advantages and disadvantages of KDF

How Keyword Driven Tests fall short

Anti-Pattern: Giant BDD Tests

Description is coming soon…

However, you do not want to have large BDD tests with many “And” and “Then” steps, because it means that you are testing too much. It means that your tests will not be atomic. See the Atomic Tests pattern at the top.

Almost Anti-patterns?

This section is a collection of automation techniques that I have seen with my clients that cause them a lot of problems. I can’t quite classify them as “anti-patterns” because they are not widely accepted as such by the experts in the automation community. However, I do believe that they are on the brink of being bad practices that you should strongly reconsider.

Using BDD tools for UI automation

Description coming soon… In the meantime, do your research and think how and whether it will actually help you succeed with test automation.

It all really starts with a simple question…

What is Behavior Driven Development?

Behaviour-Driven Development (BDD) is a set of practices that aim to reduce some common wasteful activities in software development:

– Rework caused by misunderstood or vague requirements.

– Technical debt caused by reluctance to refactor code.

– Slow feedback cycles caused by silos and hand-overs

BDD aims to narrow the communication gaps between team members, foster better understanding of the customer and promote continuous communication with real world examples.

https://docs.cucumber.io/bdd/overview/

Now that we understand that, let me ask…

Did you see the word tool or tools used a single time?

No, we didn’t.

This implies that BDD is NOT a tool; it is a set of practices.

Here’s where it gets bad:

I’m fortunate in that I’m a Solutions Architect and I get to talk to dozens of new customers and hundreds of automation engineers every year. The common problem that I encounter is that almost nobody uses BDD as a set of practices.

No practices are implemented to remove all the waste and technical debt.

Instead, tools such as Cucumber or Serenity are used to write automated tests and then we claim that we are “doing BDD”.

This wouldn’t be so bad…

The problem is that using BDD tools adds an extra layer of complexity and dependency to test automation code. And if we aren’t using the BDD process for the actual advantages then all we are left with is more complexity.

These are the problems that I see when a BDD tool is used for automation without actually following the BDD practices:

BDD tools create more dependencies

Let’s take a look at a diagram that shows all the dependencies that are added when using a BDD automation framework (I didn’t include all the other dependencies such as test runners and so on as they’re not related to BDD).

Dependencies added by a BDD automation framework

When you add a BDD tool to your automation suite, the BDD framework, such as Cucumber, is used by your feature files. Feature files use step definitions, and step definitions use page objects.
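
To make those three layers concrete, here is a minimal sketch using Cucumber’s Java bindings; the feature text, step wording, and page object are hypothetical:

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;

    // The feature file sits one layer above this class:
    //   Scenario: Valid login
    //     Given I log in as a standard user
    //     Then I see my dashboard
    public class LoginSteps {
        private final LoginPage loginPage = new LoginPage(); // hypothetical page object

        @Given("I log in as a standard user")
        public void iLogInAsAStandardUser() {
            loginPage.login("standard_user", "secret");
        }

        @Then("I see my dashboard")
        public void iSeeMyDashboard() {
            loginPage.assertDashboardIsVisible();
        }
    }

Every scenario now travels through all three layers before a browser ever moves.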

Hold on to that thought for a second…

What if you don’t use a BDD automation tool?

Without a BDD tool

By not using a BDD tool, we can remove two extra dependencies.

Dependencies in software development are almost always bad. We want to limit the number of dependencies because each one is a chance for something to go wrong.

Most of the software development design patterns focus on dependency management…

Think Single Responsibility Principle or Open-Closed Principle.

In software development, we strive towards having our modules doing less while limiting the number of dependencies.

Nikolay Advolodkin

So why are we adding extra BDD tool dependencies to our automation if we aren’t using the process?

BDD tools help to create more readable code

Yes, this is true. I would agree that a test written in good Gherkin syntax is very readable.

However, is it that big of a difference when compared to a non-BDD test like this:
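
Here is a sketch of such a plain test (JUnit, reusing the same kind of hypothetical page object as above):

    import org.junit.jupiter.api.Test;

    class LoginTests {
        private final LoginPage loginPage = new LoginPage(); // hypothetical page object

        // No feature file, no step-definition layer: just the scenario
        @Test
        void standardUserSeesDashboardAfterLogin() {
            loginPage.login("standard_user", "secret");
            loginPage.assertDashboardIsVisible();
        }
    }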

I don’t believe it’s that drastic of a difference.

Is it really worth it to take the risk of extra dependency management for slightly more readable tests?

It gets worse:

The other problem that seems to happen with the majority of BDD tests is that they don’t follow actual BDD best practices.
