Author: Zahary Karadjov
Note: Instead of unittest.nim, please consider using the testament tool, which offers process isolation for your tests. Also, ``when isMainModule: doAssert conditionHere`` is usually a much simpler solution for testing purposes.
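For example, the simpler alternative mentioned above can look like this (the double proc is purely illustrative):

  proc double(x: int): int = 2 * x

  when isMainModule:
    # Executed only when this file is compiled as the main program.
    doAssert double(2) == 4
    doAssert double(-3) == -6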
This module implements boilerplate to make unit testing easy.
The test status and name are printed after any output or traceback.
Tests can be nested; however, failure of a nested test will not mark the parent test as failed. Setup and teardown are inherited. Setup can be overridden locally.
Compiled test files return the number of failed tests as their exit code, while nim c -r <testfile.nim> exits with 0 or 1.
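As a sketch of the nesting behaviour described above (the test names and checks are illustrative):

  import unittest

  test "outer":
    test "inner":
      check false   # only "inner" is marked as failed
    check true      # "outer" can still pass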
Running a single test
Specify the test name as a command line argument.
  nim c -r test "my test name" "another test"
Multiple arguments can be used.
Running a single test suite
Specify the suite name delimited by "::".
  nim c -r test "my test name::"
Selecting tests by pattern
A single "*" can be used for globbing.
Delimit the end of a suite name with "::".
Tests matching any of the arguments are executed.
  nim c -r test fast_suite::mytest1 fast_suite::mytest2
  nim c -r test "fast_suite::mytest*"
  nim c -r test "auth*::" "crypto::hashing*"
  # Run suites starting with 'bug #' and standalone tests starting with '#'
  nim c -r test 'bug #*::' '::#*'
Examples
suite "description for this stuff": echo "suite setup: run once before the tests" setup: echo "run before each test" teardown: echo "run after each test" test "essential truths": # give up and stop if this fails require(true) test "slightly less obvious stuff": # print a nasty message and move on, skipping # the remainder of this block check(1 != 1) check("asd"[2] == 'd') test "out of bounds error is thrown on bad access": let v = @[1, 2, 3] # you can do initialization here expect(IndexError): discard v[4] echo "suite teardown: run once after the tests"
Types
TestStatus = enum OK, FAILED, SKIPPED
- The status of a test when it is done.
OutputLevel = enum
  PRINT_ALL,      ## Print as much as possible.
  PRINT_FAILURES, ## Print only the failed tests.
  PRINT_NONE      ## Print nothing.
- The output verbosity of the tests.
TestResult = object
  suiteName*: string
    ## Name of the test suite that contains this test case.
    ## Can be ``nil`` if the test case is not in a suite.
  testName*: string
    ## Name of the test case.
  status*: TestStatus
OutputFormatter = ref object of RootObj
ConsoleOutputFormatter = ref object of OutputFormatter
  colorOutput: bool
    ## Have test results printed in color. Default is true for the non-js
    ## target, for which ``stdout`` is a tty. Setting the environment variable
    ## ``NIMTEST_COLOR`` to ``always`` or ``never`` changes the default for
    ## the non-js target to true or false respectively. The deprecated
    ## environment variable ``NIMTEST_NO_COLOR``, when set, changes the
    ## default to true, if ``NIMTEST_COLOR`` is undefined.
  outputLevel: OutputLevel
    ## Set the verbosity of test results. Default is ``PRINT_ALL``, unless
    ## the ``NIMTEST_OUTPUT_LVL`` environment variable is set for the
    ## non-js target.
  isInSuite: bool
  isInTest: bool
JUnitOutputFormatter = ref object of OutputFormatter
  stream: Stream
  testErrors: seq[string]
  testStartTime: float
  testStackTrace: string
Vars
abortOnError: bool
- Set to true in order to quit immediately on failure. Default is false, unless the NIMTEST_ABORT_ON_ERROR environment variable is set for the non-js target.
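A minimal sketch of using it (the failing check exists only to trigger the abort):

  import unittest

  abortOnError = true

  test "stops the whole run on the first failure":
    check 1 == 2   # the process quits here instead of continuing with later tests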
Procs
proc addOutputFormatter(formatter: OutputFormatter) {.raises: [], tags: [].}

proc delOutputFormatter(formatter: OutputFormatter) {.raises: [], tags: [].}
proc newConsoleOutputFormatter(outputLevel: OutputLevel = OutputLevel.PRINT_ALL;
                               colorOutput = true): ConsoleOutputFormatter {.raises: [], tags: [].}

proc defaultConsoleFormatter(): ConsoleOutputFormatter {.raises: [], tags: [ReadEnvEffect].}
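As an illustrative sketch, a console formatter that only reports failures could be constructed and registered before any tests are defined:

  import unittest

  addOutputFormatter(newConsoleOutputFormatter(OutputLevel.PRINT_FAILURES,
                                               colorOutput = false))

  test "quiet unless something breaks":
    check true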
proc newJUnitOutputFormatter(stream: Stream): JUnitOutputFormatter {.raises: [Defect, IOError, OSError], tags: [WriteIOEffect].}
- Creates a formatter that writes the report to the specified stream in JUnit format. The stream is NOT closed automatically when the tests are finished, because the formatter has no way to know when all tests are finished. You should invoke formatter.close() to finalize the report.
proc close(formatter: JUnitOutputFormatter) {.raises: [Defect, IOError, OSError, Exception], tags: [WriteIOEffect].}
- Completes the report and closes the underlying stream.
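Taken together, a sketch of producing a JUnit report (results.xml is an arbitrary file name):

  import unittest, streams

  let junit = newJUnitOutputFormatter(newFileStream("results.xml", fmWrite))
  addOutputFormatter(junit)

  test "recorded in the XML report":
    check true

  junit.close()   # finalizes the report and closes the stream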
proc checkpoint(msg: string) {.raises: [], tags: [].}
-
Set a checkpoint identified by msg. Upon test failure all checkpoints encountered so far are printed out. Example:
checkpoint("Checkpoint A") check((42, "the Answer to life and everything") == (1, "a")) checkpoint("Checkpoint B")
outputs "Checkpoint A" once it fails.
proc disableParamFiltering() {.raises: [], tags: [].}
- Disables filtering tests with the command line parameters.
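A minimal sketch, for a test program that wants to interpret its own command line arguments instead of having them treated as test filters:

  import unittest

  disableParamFiltering()   # arguments passed to the binary no longer select tests

  test "runs regardless of the arguments given":
    check true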
Methods
method suiteStarted(formatter: OutputFormatter; suiteName: string) {.base, gcsafe, raises: [], tags: [].}

method testStarted(formatter: OutputFormatter; testName: string) {.base, gcsafe, raises: [], tags: [].}

method failureOccurred(formatter: OutputFormatter; checkpoints: seq[string]; stackTrace: string) {.base, gcsafe, raises: [], tags: [].}
- stackTrace is provided only if the failure occurred due to an exception. checkpoints is never nil.

method testEnded(formatter: OutputFormatter; testResult: TestResult) {.base, gcsafe, raises: [], tags: [].}

method suiteEnded(formatter: OutputFormatter) {.base, gcsafe, raises: [], tags: [].}
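These base methods are the extension points for custom reporters. As an illustrative sketch (SummaryFormatter and its counts field are assumptions, not part of this module), a custom formatter can override one of them and be registered with addOutputFormatter:

  import unittest

  type
    SummaryFormatter = ref object of OutputFormatter
      counts: array[TestStatus, int]   # tally of results per status

  method testEnded(formatter: SummaryFormatter; testResult: TestResult) =
    inc formatter.counts[testResult.status]

  addOutputFormatter(SummaryFormatter())

  test "counted by the summary formatter":
    check true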
method suiteStarted(formatter: ConsoleOutputFormatter; suiteName: string) {.raises: [Exception, IOError], tags: [RootEffect, WriteIOEffect].}

method testStarted(formatter: ConsoleOutputFormatter; testName: string) {.raises: [], tags: [].}

method failureOccurred(formatter: ConsoleOutputFormatter; checkpoints: seq[string]; stackTrace: string) {.raises: [], tags: [].}

method testEnded(formatter: ConsoleOutputFormatter; testResult: TestResult) {.raises: [Exception, IOError], tags: [RootEffect, WriteIOEffect].}

method suiteEnded(formatter: ConsoleOutputFormatter) {.raises: [], tags: [].}
method suiteStarted(formatter: JUnitOutputFormatter; suiteName: string) {.raises: [Defect, IOError, OSError, ValueError], tags: [WriteIOEffect].}

method testStarted(formatter: JUnitOutputFormatter; testName: string) {.raises: [], tags: [TimeEffect].}

method failureOccurred(formatter: JUnitOutputFormatter; checkpoints: seq[string]; stackTrace: string) {.raises: [], tags: [].}
- stackTrace is provided only if the failure occurred due to an exception. checkpoints is never nil.

method testEnded(formatter: JUnitOutputFormatter; testResult: TestResult) {.raises: [Defect, IOError, OSError, ValueError], tags: [TimeEffect, WriteIOEffect].}

method suiteEnded(formatter: JUnitOutputFormatter) {.raises: [Defect, IOError, OSError], tags: [WriteIOEffect].}
Macros
macro check(conditions: untyped): untyped
-
Verify if a statement or a list of statements is true. A helpful error message and the checkpoints set so far are printed out on failure (if outputLevel is not PRINT_NONE). Example:
  import strutils

  check("AKB48".toLowerAscii() == "akb48")

  let teams = {'A', 'K', 'B', '4', '8'}

  check:
    "AKB48".toLowerAscii() == "akb48"
    'C' in teams
macro expect(exceptions: varargs[typed]; body: untyped): untyped
-
Test if body raises an exception found in the passed exceptions. The test passes if the raised exception is part of the acceptable exceptions. Otherwise, it fails. Example:
  import math, random, strutils

  proc defectiveRobot() =
    randomize()
    case random(1..4)
    of 1: raise newException(OSError, "CANNOT COMPUTE!")
    of 2: discard parseInt("Hello World!")
    of 3: raise newException(IOError, "I can't do that Dave.")
    else: assert 2 + 2 == 5

  expect IOError, OSError, ValueError, AssertionError:
    defectiveRobot()
Templates
template suite(name, body) {.dirty.}
-
Declare a test suite identified by name with optional setup and/or teardown section.
A test suite is a series of one or more related tests sharing a common fixture (setup, teardown). The fixture is executed for EACH test.
suite "test suite for addition": setup: let result = 4 test "2 + 2 = 4": check(2+2 == result) test "(2 + -2) != 4": check(2 + -2 != result) # No teardown needed
The suite will run the individual test cases in the order in which they were listed. With default global settings the above code prints:
  [Suite] test suite for addition
    [OK] 2 + 2 = 4
    [OK] (2 + -2) != 4
template test(name, body) {.dirty.}
-
Define a single test case identified by name.
test "roses are red": let roses = "red" check(roses == "red")
The above code outputs:
  [OK] roses are red
template fail()
-
Print out the checkpoints encountered so far and quit if abortOnError is true. Otherwise, erase the checkpoints and indicate the test has failed (change exit code and test status). This template is useful for debugging, but is otherwise mostly used internally. Example:
checkpoint("Checkpoint A") complicatedProcInThread() fail()
outputs "Checkpoint A" before quitting.
template skip()
-
Mark the test as skipped. Should be used directly in cases where it is not possible to perform the test for reasons depending on the outer environment, or on certain application logic conditions or configurations. The test code is still executed.
  if not isGLContextCreated():
    skip()
template require(conditions: untyped)
- Same as check except any failed test causes the program to quit immediately. Any teardown statements are not executed and the failed test output is not generated.
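For illustration, a sketch of guarding later checks with require (the config.ini file name is an arbitrary assumption):

  import unittest, os

  test "configuration can be read":
    require fileExists("config.ini")       # quits the whole run if the file is missing
    check readFile("config.ini").len > 0   # only reached when the file exists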