
8 Tips For Productive Testing

source link: https://railsadventures.wordpress.com/2019/07/25/8-tips-for-productive-testing/

In the previous post we discussed the “why” — we went over some of the benefits of integrating automatic testing into your development flow. In this post, we’ll go over the “how” — some guidelines for forming a healthy, safe and rapid development process around your test suite.

Continuous Integration (CI)

The first and most important thing you can do when dealing with tests is integrating them into your development and shipping process. To fully enforce the test suite we need to make sure two conditions are satisfied:

  1. The full test suite runs automatically on every change (e.g. on every pull request).
  2. A change cannot be merged unless all tests pass.

An example of a pull request approved by the CI

Under the assumption that any new code includes tests (more on that in the next section), enforcing these two simple rules in your development process ensures that changes don’t break existing behavior and adds confidence in new functionality added to the system.
CI lays the foundation for having a robust and safe development process around your test suite.

Another important benefit of running tests on a CI server is ensuring the system does not depend in any way on the developer’s local machine, since the test suite runs in a neutral, isolated environment.
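The post doesn’t prescribe a specific CI system. As one illustration, a minimal GitHub Actions workflow that runs the suite on every pull request could look like the sketch below; the workflow name and the lein test command are assumptions about the project, not something from the article:

```yaml
# .github/workflows/test.yml -- a hypothetical minimal CI workflow
name: tests
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the test suite
        run: lein test   # assumes a Leiningen-based Clojure project
```

Combined with branch protection that requires this check to pass, a failing test suite blocks the merge.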

Test Coverage

Ensuring no tests have failed isn’t enough — an empty test suite allows the project to pass CI since no tests have failed (0 tests = 0 failures). This means a commit that deletes the entire test suite would actually pass CI, which is obviously not a healthy situation for a project to be in. We need to make sure our test suite actually covers our source code.

Test coverage measures what percentage of the source code is executed by the test suite. The output of a test coverage report is pretty detailed — you can see each line of source code, colored green if it ran during the test suite or red if it did not. Some tools also show the number of times each line ran during the tests.
The report allows you to easily identify code paths that weren’t exercised by the tests:

An example test coverage report

Coverage reports can also be integrated into the CI process to get coverage insights on pull requests — has the test coverage increased or decreased between the two branches? In which files exactly has the coverage increased or decreased?
We’re using codecov.io to achieve that but there are many similar services that can be used for that purpose.

Test coverage integrated into PR process

The above status on the pull request shows us that the coverage has increased by 9.72% in this branch compared to the branch the pull request was opened against, which is generally a good sign.

If the PR introduces new code without proper tests, the coverage percentage drops, a fact that can be used to automatically flag such pull requests and block merging until the coverage reaches a threshold the team has agreed on.
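With codecov.io, for example, such a gate can be expressed in the repository’s codecov.yml. The values below are a sketch for illustration, not a recommendation:

```yaml
# codecov.yml -- fail the PR coverage status if project coverage
# drops by more than 1% relative to the base branch (illustrative)
coverage:
  status:
    project:
      default:
        target: auto
        threshold: 1%
```

When the status check fails, branch protection can keep the pull request from being merged until tests are added.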

Enforce Pull Request Approval & Pay Attention To Test Code

Most code reviews concentrate on the source code itself. I believe the test suite’s code is no less important than the actual source code. Like any other code base, if your test suite’s code isn’t being regularly reviewed, it will eventually become unmaintainable and a burden on your team rather than an enabler for rapid development.

When reviewing a pull request, try going over the test suite’s code first. Look for places to improve on the following aspects:

  1. Do the tests actually validate and verify the behavior they claim to test?
  2. Are they comprehensive? Do they cover enough cases?
  3. Is it clear from the test code/description what it is trying to validate?
  4. Is the test suite DRY? Can we reuse existing functionality or extract test functionality into a shared helper?

Approval by at least one team member should be enforced on pull requests to make sure both the added functionality and the accompanying tests are held to a high standard.

Pick Inputs Wisely

First I would like to explain what I consider as inputs. Our code’s behavior is affected by two factors: direct input values and state. For example, a function that serves vodka to users based on their age might look like this:

(def millis-in-year (* 1000 60 60 24 365))

(defn serve-vodka [user-id]
  (let [user (user/find-by-id user-id)
        millis-since-born (- (System/currentTimeMillis) (:birthtime user))
        years-since-born (/ millis-since-born millis-in-year)]
    (if (>= years-since-born 21)
      "Here's your vodka!"
      "Too Young!")))

The user-id here is a direct input value, while the user record in the database is the state. We use fixtures to set up the state in which our tests run, and arguments to pass direct input values to our test code.
When we think of our test inputs we have to take both state and direct input values into consideration.

Since testing every possible input combination is both impossible and counterproductive, picking the right inputs can make the difference between an efficient test suite and a useless one.
But how do we pick the right inputs?
There are two very clear rules and a third, somewhat fuzzier one:

  1. Pick values to cover all code paths:
    a. fixture of a user with a birth date more than 21 years ago + that user’s id
    b. fixture of a user with a birth date less than 21 years ago + that user’s id
  2. Pick values to challenge your code:
    a. fixture of a user with a birth date of exactly 21 years ago + that user’s id
    b. pass nil as the user-id argument
    c. pass a user-id that has no matching record in the database
    d. set up a fixture of a user without a birth date — relevant if that field isn’t mandatory — and pass its user-id as an argument
  3. Pick values to cover more user stories:
    “Cheating” on tests is pretty easy — you can follow the two rules above and have 100% covered code with all tests passing, but that doesn’t necessarily mean the test suite is good enough. Try to think about other states the system might be in and which inputs might be given to transition it. The feature (product) spec is your best source of ideas for which test cases should be added to the test suite. By covering more user stories we reduce the chances of bugs when releasing the feature.

As the developer assigned to the task, you know the system, its interactions and the spec you’re working from better than anyone else. Use that knowledge to make sure your test suite is comprehensive and covers a reasonable number of cases.
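Applied to serve-vodka above, rules 1 and 2 might translate into a test like the following sketch. Note that create-user! and birthtime-years-ago are hypothetical helpers (a fixture that inserts a user with the given birthtime and returns its id, and a time helper), not part of the original code:

```clojure
;; Sketch only: create-user! is a hypothetical fixture helper,
;; and clojure.test (deftest/testing/is) is assumed to be referred.
(defn birthtime-years-ago [n]
  (- (System/currentTimeMillis) (* n millis-in-year)))

(deftest serve-vodka-test
  (testing "covers both code paths (rule 1)"
    (is (= "Here's your vodka!"
           (serve-vodka (create-user! {:birthtime (birthtime-years-ago 30)}))))
    (is (= "Too Young!"
           (serve-vodka (create-user! {:birthtime (birthtime-years-ago 18)})))))
  (testing "challenges the code (rule 2)"
    ;; the boundary case: exactly 21 years ago
    (is (= "Here's your vodka!"
           (serve-vodka (create-user! {:birthtime (birthtime-years-ago 21)}))))
    ;; a missing user: as written, serve-vodka would throw a
    ;; NullPointerException here, which is exactly the kind of
    ;; bug these challenging inputs are meant to surface
    (is (thrown? NullPointerException (serve-vodka nil)))))
```

Whether a missing user should throw or return "Too Young!" is a product decision; the point is that the test forces the team to make it explicitly.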

Don’t Neglect Side Effects

A function has two distinct roles:

  1. It returns a value
  2. It might have one or more side effects

We have to make sure we test for both.
As an example let’s take a REST endpoint for user registration. A typical request spec would look like:
Request: POST /register {"email": "[email protected]", "password": "secret"}
Response: 201 CREATED {"id": 1, "email": "[email protected]"}

The server is expected to:
1. Create a new user record on the database with the user’s email and a randomly generated confirmation token
2. Send an email to [email protected] with a link allowing the user to confirm their email with the confirmation token
3. Return a 201 status code with the created user record. Obviously, the response should not include the confirmation token.

This is a classic case in which we have an input (the HTTP request), an output (the HTTP response) and two side effects: database changes and a confirmation email. The easiest test would be verifying a simple request <-> response flow but it leaves a large gap for bugs. We have to make sure we cover all of the endpoint’s responsibilities, including the side effects, to have a comprehensive test suite:

(deftest register-test
  (let [email "[email protected]"
        response (request :post "/register" {:email email})]
    (testing "http response"
      (testing "should return 201 status code"
        (is (= 201 (:status response))))
      (testing "should return the created user in response body"
        (is (= email (-> response :body :email)))
        (is (integer? (-> response :body :id))))
      (testing "should not return the confirmation token in the response body"
        (is (not (contains? (:body response) :confirmation_token)))))
    (testing "should create the user in the database"
      (let [created-user (user/find-by-email email)]
        (is (not (nil? created-user)))
        (testing "should have a confirmation token"
          (is (not (nil? (:confirmation_token created-user)))))))
    (testing "should send a confirmation email"
      (is (confirmation-sent? email)))))

Note: this is only a happy path test — tests for duplicate emails, wrong email format and other failure scenarios should exist but I wanted to keep this short.

When these tests pass, we can be pretty sure our registration endpoint works as expected. The tests will fail if any of the following occurs:
1. returned status code is not 201
2. we don’t return the created user and its id in the response body
3. we do return the confirmation token in the response body (security breach)
4. we don’t create a database record with the provided email
5. we don’t generate a confirmation token for the created user
6. we don’t send a confirmation email to the provided email.
The confirmation email is actually the only gap here, since confirmation-sent? is mocked to avoid network calls in our test suite and we haven’t verified the content of the email.

Use Mocks/Stubs Carefully

Mocks are parts of the system we fake solely for test purposes. We use mocks because sometimes we can’t run the whole system on our machine, or because interacting with some parts is very time consuming — something we can’t afford in tests.

Mocks should be used very carefully and generally should be avoided where possible. We must remember that since they are fake, mocks take us further away from the system as it runs on our production servers. The gap mocks create between our test environment and our actual runtime environment allows bugs to leak from our test suite into our staging/production environments.

Let’s take an example of an application that uses redis to keep an AuthenticationToken => UserID map. When a user logs in we issue a token and save it in redis with the user’s id as the value. When the user performs a request with a token, we can retrieve the matching user id from redis for authentication:

(ns user-authentication)

(defn get-user-by-token [token]
  (if-let [user-id (redis/get token)]
    (first (db/select users (where {:id user-id})))
    :unauthorized))

If the user id is stored (in redis) under the provided token key, we fetch it from the database and return it, otherwise the token is unauthorized.

Let’s assume that for test purposes we decided to mock redis as a simple key value storage:
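The original snippet is cut off here. A minimal sketch of such a mock could back the token map with an atom and redefine redis/get for the duration of the test; the seed-token! helper is hypothetical, and clojure.test is assumed to be referred:

```clojure
;; Sketch: an atom-backed key-value store standing in for redis.
(def fake-redis (atom {}))

(defn seed-token! [token user-id]
  (swap! fake-redis assoc token user-id))

(deftest get-user-by-token-test
  ;; with-redefs swaps redis/get for a lookup in the atom,
  ;; only within this test's dynamic scope
  (with-redefs [redis/get (fn [k] (get @fake-redis k))]
    (seed-token! "good-token" 1)
    (testing "an unknown token is unauthorized"
      (is (= :unauthorized (get-user-by-token "bad-token"))))
    ;; the happy path would additionally need a user with id 1
    ;; set up as a fixture in the test database
    ))
```

The danger the section warns about is visible even in this tiny sketch: the atom behaves like redis for simple get/set, but it has no expiry, no serialization and no network failures, so bugs in any of those areas would slip past the suite.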

