Idris2Doc : Test.Golden

Test.Golden

Core features required to perform Golden file testing.

We provide the core functionality to run a *single* golden file test, or
a whole test tree.
This allows the developer freedom to use as is or design the rest of the
test harness to their liking.

This was originally used as part of Idris2's own test suite, and
the core functionality is useful for the many and not the few.
Please see the Idris2 test harness for example usage.

# Test Structure

This harness works from the assumption that each individual golden test
consists of a directory with the following structure:

+ `run` a *shell* script that runs the test. We expect it to:
  * Use `$1` as the variable standing for the idris executable to be tested
  * Optionally use `${IDRIS2_TESTS_CG}` to pick a codegen that ought to work
  * Clean up after itself (e.g. by running `rm -rf build/`)

+ `expected` a file containing the expected output of `run` (a sketch of both files follows below)
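
As a concrete sketch, a test directory could contain a `run` script along the
following lines, with `expected` holding whatever output the script is supposed
to print. The file name `Main.idr`, the `chez` fallback codegen, and the exact
`idris2` flags are illustrative assumptions, not requirements of this library:

```sh
#!/bin/sh
# Hypothetical `run` script: $1 is the executable under test.
$1 --no-banner --no-color --cg "${IDRIS2_TESTS_CG:-chez}" --exec main Main.idr
rm -rf build/    # clean up after ourselves, as requested above
```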

During testing, the test harness will generate an artefact named `output`
and display both outputs if there is a failure.
During an interactive session the following command is used to compare them
as they are:

```sh
 git diff --no-index --exit-code --word-diff-regex=. --color expected output
```

If `git` fails then the runner will simply present the expected and 'given'
files side-by-side.

Of note, it is helpful to add `output` to a local `.gitignore` file
to ensure that it is not mistakenly versioned.

# Options

The test harness has several options that may be set:

+ `idris2`       The path of the executable we are testing.
+ `codegen`      The backend to use for code generation.
+ `onlyNames`    The tests to run relative to the generated executable.
+ `onlyFile`     The file listing the tests to run relative to the generated executable.
+ `interactive`  Whether to offer to update the expected file or not.
+ `timing`       Whether to display time taken for each test.
+ `threads`      The maximum number of threads to use (default: number of cores).
+ `failureFile`  The file in which to write the list of failing tests.

We provide an options parser (`options`) that takes the list of command line
arguments and constructs an `Options` value for you.
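
A minimal sketch of driving the parser directly: the argument list below is
assumed to mirror `getArgs`, i.e. to start with the program name, followed by
the executable under test and the flags; the paths are hypothetical.

```idris
import Test.Golden

checkArgs : IO ()
checkArgs = do
  -- Hypothetical arguments, laid out as if they came straight from getArgs.
  Just opts <- options ["runtests", "build/exec/idris2", "--timing", "--threads", "4"]
    | Nothing => putStrLn usage
  putStrLn $ "threads = " ++ show (opts.threads)
```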

# Usage

When compiled to an executable the expected usage is:

```sh
runtests <path to executable under test>
  [--timing]
  [--interactive]
  [--only-file PATH]
  [--failure-file PATH]
  [--threads N]
  [--cg CODEGEN]
  [[--only|--except] [NAMES]]
```

assuming that the test runner is compiled to an executable named `runtests`.
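
For example, a tiny runner executable might look like the following sketch. The
pool name and test directory names are hypothetical, and each listed directory
is expected to contain the `run`/`expected` pair described above:

```idris
module Main

import Test.Golden

-- One pool of golden tests, with no backend requirements and the default codegen.
basicTests : TestPool
basicTests = MkTestPool "Basic tests" [] Nothing
  [ "basic001", "basic002" ]

main : IO ()
main = runner [ basicTests ]
```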

Definitions

record Options : Type
  Options for the test driver.

Totality: total
Visibility: public export
Constructor: 
MkOptions : String -> Maybe String -> Maybe (String -> Bool) -> Bool -> Bool -> Bool -> Nat -> Maybe String -> Options

Projections:
.codegen : Options -> Maybe String
  Which codegen should we use?
.color : Options -> Bool
  Should we use colors?
.exeUnderTest : Options -> String
  Name of the idris2 executable
.failureFile : Options -> Maybe String
  Should we write the list of failing cases to a file?
.interactive : Options -> Bool
  Should we run the test suite interactively?
.onlyNames : Options -> Maybe (String -> Bool)
  Should we only run some specific cases?
.threads : Options -> Nat
  How many threads should we use?
.timing : Options -> Bool
  Should we time and display the tests
.exeUnderTest : Options -> String
  Name of the idris2 executable

Visibility: public export
exeUnderTest : Options -> String
  Name of the idris2 executable

Visibility: public export
.codegen : Options -> Maybe String
  Which codegen should we use?

Visibility: public export
codegen : Options -> Maybe String
  Which codegen should we use?

Visibility: public export
.onlyNames : Options -> Maybe (String -> Bool)
  Should we only run some specific cases?

Visibility: public export
onlyNames : Options -> Maybe (String -> Bool)
  Should we only run some specific cases?

Visibility: public export
.interactive : Options -> Bool
  Should we run the test suite interactively?

Visibility: public export
interactive : Options -> Bool
  Should we run the test suite interactively?

Visibility: public export
.color : Options -> Bool
  Should we use colors?

Visibility: public export
color : Options -> Bool
  Should we use colors?

Visibility: public export
.timing : Options -> Bool
  Should we time and display the tests

Visibility: public export
timing : Options -> Bool
  Should we time and display the tests

Visibility: public export
.threads : Options -> Nat
  How many threads should we use?

Visibility: public export
threads : Options -> Nat
  How many threads should we use?

Visibility: public export
.failureFile : Options -> Maybe String
  Should we write the list of failing cases to a file?

Visibility: public export
failureFile : Options -> Maybe String
  Should we write the list of failing cases to a file?

Visibility: public export
initOptions : String -> Bool -> Options
Visibility: export
usage : String
Visibility: export
options : List String -> IO (Maybe Options)
  Process the command line options.

Visibility: export
runTest : Options -> String -> IO Result
  Run the specified Golden test with the supplied options.
See the module documentation for more information.
@testPath the directory that contains the test.

Visibility: export
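
A hedged sketch of running one test in isolation: the directory name is
hypothetical, and `initOptions` is assumed to take the executable under test
and a colour flag, in that order.

```idris
import Test.Golden

runOne : IO ()
runOne = do
  res <- runTest (initOptions "idris2" True) "tests/basic001"
  -- Fold the single Result into a Summary and print any failing test names.
  let summary = updateSummary res initSummary
  traverse_ putStrLn summary.failure
```
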
pathLookup : List String -> IO (Maybe String)
  Find the first occurrence of an executable on `PATH`.

Visibility: export
data Requirement : Type
  Some tests may involve Idris' backends and have requirements.
We define here the ones supported by Idris.

Totality: total
Visibility: public export
Constructors:
C : Requirement
Chez : Requirement
Node : Requirement
Racket : Requirement
Gambit : Requirement

Hints:
Eq Requirement
Show Requirement
checkRequirement : Requirement -> IO (Maybe String)
Visibility: export
findCG : IO (Maybe String)
Visibility: export
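
For instance, a runner could probe for a backend before registering
backend-specific pools. The assumption here is that the `String` returned by
`checkRequirement` is the resolved executable:

```idris
import Test.Golden

checkChez : IO ()
checkChez = do
  Just chez <- checkRequirement Chez
    | Nothing => putStrLn "Chez not found; skipping Chez-only pools"
  putStrLn $ "Found Chez at: " ++ chez
```
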
data Codegen : Type
  A choice of a codegen

Totality: total
Visibility: public export
Constructors:
Nothing : Codegen
  Do NOT pass a cg argument to the executable being tested
Default : Codegen
  Use whatever the test runner was passed at the toplevel,
and if nothing was passed guess a sensible default using findCG
Just : Requirement -> Codegen
  Use exactly the given requirement
toList : Codegen -> List Requirement
Visibility: export
record TestPool : Type
  A test pool is characterised by
+ a name
+ a list of requirements
+ a choice of codegen (overriding the default)
+ and a list of directory paths

Totality: total
Visibility: public export
Constructor: 
MkTestPool : String -> List Requirement -> Codegen -> List String -> TestPool

Projections:
.codegen : TestPool -> Codegen
.constraints : TestPool -> List Requirement
.poolName : TestPool -> String
.testCases : TestPool -> List String
.poolName : TestPool -> String
Visibility: public export
poolName : TestPool -> String
Visibility: public export
.constraints : TestPool -> List Requirement
Visibility: public export
constraints : TestPool -> List Requirement
Visibility: public export
.codegen : TestPool -> Codegen
Visibility: public export
codegen : TestPool -> Codegen
Visibility: public export
.testCases : TestPool -> List String
Visibility: public export
testCases : TestPool -> List String
Visibility: public export
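
For example, a pool that only runs when a Chez Scheme backend is available, and
that pins that codegen for its test cases (directory names hypothetical), might
be built as:

```idris
import Test.Golden

chezTests : TestPool
chezTests = MkTestPool "Chez-specific tests" [Chez] (Just Chez)
  [ "chez001", "chez002" ]
```
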
testsInDir : String -> {default (const True) _ : (String -> Bool)} -> String -> {default [] _ : List Requirement} -> {default Nothing _ : Codegen} -> IO TestPool
  Find all the tests in the given directory.

Visibility: export
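
A sketch relying on the default implicit arguments (no name filter, no
requirements, default codegen); the directory name is hypothetical and every
test directory found in it becomes a test case:

```idris
import Test.Golden

basicPool : IO TestPool
basicPool = testsInDir "tests/basic" "Basic tests"
```
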
filterTests : Options -> List String -> List String
  Only keep the tests that have been asked for

Visibility: export
record Summary : Type
  The summary of a test pool run

Totality: total
Visibility: public export
Constructor: 
MkSummary : List String -> List String -> Summary

Projections:
.failure : Summary -> List String
.success : Summary -> List String

Hints:
Monoid Summary
Semigroup Summary
.success : Summary -> List String
Visibility: public export
success : Summary -> List String
Visibility: public export
.failure : Summary -> List String
Visibility: public export
failure : Summary -> List String
Visibility: public export
initSummary : Summary
  A new, blank summary

Visibility: export
updateSummary : Result -> Summary -> Summary
  Update the summary to contain the given result

Visibility: export
bulkUpdateSummary : List Result -> Summary -> Summary
  Update the summary to contain the given results

Visibility: export
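
A small sketch of accumulating results by hand; the helper name is
hypothetical, and the `Result` values would typically come from `runTest`:

```idris
import Test.Golden

report : List Result -> IO ()
report results =
  let summary = bulkUpdateSummary results initSummary in
  putStrLn $ show (length summary.failure) ++ " failure(s), "
          ++ show (length summary.success) ++ " success(es)"
```
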
data ThreadInstruction : Type
  An instruction to a thread which runs tests

Totality: total
Visibility: public export
Constructors:
Run : String -> ThreadInstruction
  A test to run
Stop : ThreadInstruction
  An indication for the thread to stop
testSender : Channel ThreadInstruction -> Nat -> List String -> IO ()
  Sends the given tests on the given @Channel@, then sends `nThreads` many
'Stop' @ThreadInstruction@s to stop the threads running the tests.

@testChan The channel to send the tests over.
@nThreads The number of threads being used to run the tests.
@tests The list of tests to send to the runners/threads.

Visibility: export
data ThreadResult : Type
  A result from a test-runner/thread

Totality: total
Visibility: public export
Constructors:
Res : Result -> ThreadResult
  The result of running a test
Done : ThreadResult
  An indication that the thread was told to stop
testReceiver : Channel ThreadResult -> Summary -> Channel Summary -> Nat -> IO ()
  Receives results on the given @Channel@, accumulating them as a @Summary@.
When all results have been received (i.e. @nThreads@ many 'Done'
@ThreadResult@s have been encountered), send the resulting Summary over
the @accChan@ Channel (necessary to be able to @fork@ this function and
still obtain the Summary at the end).

@resChan The channel to receives the results on.
@acc The Summary acting as an accumulator.
@accChan The Channel to send the final Summary over.
@nThreads The number of threads being used to run the tests.

Visibility: export
poolRunner : Options -> TestPool -> IO Summary
  A runner for a test pool. If there are tests in the @TestPool@ that we want
to run, spawns `opts.threads` many runners and sends them the tests,
collecting all the results in the @Summary@ returned at the end.

@opts The options for the TestPool.
@pool The TestPool to run.

Visibility: export
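
A sketch of running a single pool directly and reporting the counts; the helper
name is hypothetical, and the options are assumed to have been obtained from
`options` or `initOptions`:

```idris
import Test.Golden

runPool : Options -> TestPool -> IO ()
runPool opts pool = do
  summary <- poolRunner opts pool
  putStrLn $ show (length summary.success) ++ " passed, "
          ++ show (length summary.failure) ++ " failed"
```
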
runnerWith : Options -> List TestPool -> IO ()
Visibility: export
runner : List TestPool -> IO ()
  A runner for a whole test suite

Visibility: export
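
Finally, `runnerWith` makes it possible to bypass the bundled CLI parsing. A
sketch, assuming `initOptions` takes the executable under test and a colour
flag, and using Idris record-update syntax for the remaining fields; the test
directory is hypothetical:

```idris
module Main

import Test.Golden

main : IO ()
main = do
  pool <- testsInDir "tests/basic" "Basic tests"
  runnerWith ({ threads := 4, timing := True } (initOptions "idris2" False)) [pool]
```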