We need to fetch our list of tests both outside of our test binary and within it. We need
to get the list from within so that we can run the code that drives the test and runs
assertions. To get the list of tests we need to know where the root of the lazygit repo
is, given that the tests live in files under that root.
So far, we've used this GetLazyRootDirectory() function for that, but it assumes that
we're not in a test directory (it just looks for the first .git dir it can find). Because
we didn't want to properly fix this before, we've been setting the working directory of
the test command to the lazygit root, and using the --path CLI arg to override it when
the test itself ran. This was a terrible hack.
Now we're passing the lazygit root directory as an env var to the integration test, so
that we can set the working directory to the actual path of the test repo, removing the
need for the --path arg.
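As a rough sketch of the shape of this, assuming a runner that shells out to the test binary (the env var name, paths, and helper here are illustrative, not necessarily lazygit's actual names):

```go
package runner

import (
	"os"
	"os/exec"
)

// runIntegrationTest is an illustrative sketch, not lazygit's actual code:
// the runner tells the test where the lazygit repo root is via an env var,
// and sets the working directory to the test's own repo, so the test no
// longer needs the --path CLI arg to point back at the right place.
func runIntegrationTest(testBinary, lazygitRootDir, testRepoDir string) error {
	cmd := exec.Command(testBinary)
	cmd.Dir = testRepoDir // the actual path of the test repo
	cmd.Env = append(os.Environ(), "LAZYGIT_ROOT_DIR="+lazygitRootDir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
```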
This PR captures the code coverage from our unit and integration tests. At the
moment it simply pushes the result to Codacy, a platform that assists with
improving code health. Right now the focus is just on getting visibility, but I want
to experiment with alerting on PRs that cause a drop in code coverage.
To be clear, I'm not a dogmatist about this: I have no aspirations to get to
100% code coverage, and I don't consider lines-of-code-covered to be a perfect
metric, but it is a pretty good heuristic for how extensive your tests are.
The good news is that our coverage is actually pretty good, which was a surprise
to me!
As a conflict-of-interest statement: I'm in Codacy's 'Pioneers' program, which
provides funding and mentorship, and part of the arrangement is to use Codacy's
tooling on lazygit. This is something I'd have been happy to explore even
without being part of the program, and just like with any other static analysis
tool, we can tweak it to fit our use case and values.
## How we're capturing code coverage
This deserves its own section. Basically, when you build the lazygit binary you
can specify (via Go's `-cover` build flag) that you want the binary to capture
coverage information when it runs. Then, if you run the binary with a GOCOVERDIR
env var set, it will write coverage data to that directory before exiting.
It's a similar story with unit tests, except there you specify the
directory inline via `-test.gocoverdir`.
We run unit tests and integration tests separately in CI, _and_ we run them in
parallel across different OSes and git versions. So I've got each step uploading
its coverage files as an artefact, and then in a separate step we combine all
the artefacts and generate a combined coverage file, which we then upload to
Codacy (but in the future we can do other things with it, like warning in a PR
if code coverage decreases too much).
Another caveat is that when running integration tests, we want to obtain code
coverage not only from code executed by the test binary, but also from code
executed by the test runner. Otherwise, for each integration test you add, the
setup code (which is run by the test runner, not the test binary) will be
considered uncovered, and for a large setup step it may appear that your PR
_decreases_ coverage on net. Go doesn't easily let you exclude directories from
coverage reports, so it's better to just track the coverage of both the runner
and the binary.
The binary expects a GOCOVERDIR env var, but the test runner expects a
`-test.gocoverdir` arg. If you pass that arg, the test framework internally
overwrites GOCOVERDIR to some random temp directory, and if you then pass that
directory on to the test binary, it doesn't seem to actually get written to by
the time the test finishes. To get around that, we're using a LAZYGIT_GOCOVERDIR
env var, and within the test runner we map it to GOCOVERDIR before running the
test binary, so both end up writing to the same directory. Coverage data files
are named to avoid conflicts (each name includes something unique to the
process), so we don't need to worry about collisions between the test runner's
and the test binary's coverage files. We then merge the files together purely
for the sake of having fewer artefacts to upload.
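To make that concrete, here's a minimal sketch of the mapping, assuming a runner that launches the instrumented binary itself (names and structure are illustrative, not lazygit's actual runner code):

```go
package runner

import (
	"os"
	"os/exec"
)

// runInstrumentedBinary re-exposes LAZYGIT_GOCOVERDIR to the coverage-built
// lazygit binary as GOCOVERDIR, so the runner (which received the directory
// from CI) and the binary both write their coverage data files to the same
// place. Illustrative sketch only.
func runInstrumentedBinary(binaryPath string, args ...string) error {
	cmd := exec.Command(binaryPath, args...)
	cmd.Env = os.Environ()
	if coverDir := os.Getenv("LAZYGIT_GOCOVERDIR"); coverDir != "" {
		cmd.Env = append(cmd.Env, "GOCOVERDIR="+coverDir)
	}
	return cmd.Run()
}
```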
## Misc
Initially I was able to keep all the instances of '/tmp/code_coverage' confined
to ci.yml, which was good because it was all in one place. Now it's spread
across ci.yml and scripts/run_integration_tests.sh, which I don't feel great
about, but I can't think of a way to make it cleaner.
I believe there's a use case for running scripts/run_integration_tests.sh
outside of CI (so that you can run tests against older git versions locally),
so I've made it so that unless you pass the LAZYGIT_GOCOVERDIR env var to that
script, it skips all the code coverage steps.
On a separate note: it seems that Go's coverage report is based on the percentage
of statements executed, whereas Codacy cares more about lines of code executed,
so Codacy reports a higher percentage (e.g. 82%) than Go's own coverage report
(74%).
For the "cli" and "tui" modes of the test runner there's a "-race" parameter to
turn it on; for running tests on CI with go test, you turn it on by setting the
environment variable LAZYGIT_RACE_DETECTOR to a non-empty value.
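As a rough sketch of how both entry points can feed the same build step (illustrative only, not lazygit's actual code):

```go
package runner

import (
	"os"
	"os/exec"
)

// buildLazygit adds -race to the build when requested. The cli/tui runners
// would pass race from their own -race flag; the go-test entry point would
// derive it from the LAZYGIT_RACE_DETECTOR env var. Illustrative sketch only.
func buildLazygit(outputPath string, race bool) error {
	args := []string{"build", "-o", outputPath}
	if race {
		args = append(args, "-race")
	}
	args = append(args, ".") // package path is hypothetical
	return exec.Command("go", args...).Run()
}

// raceRequestedViaEnv is how the go-test path might read the env var.
func raceRequestedViaEnv() bool {
	return os.Getenv("LAZYGIT_RACE_DETECTOR") != ""
}
```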
This prevents commands like "go test ./..." from looking into it, and it
prevents VS Code's Problems panel from showing errors about the Go files in that
folder.
From the Go 1.19 release notes:

> Command and LookPath no longer allow results from a PATH search to be found relative to the current directory. This removes a common source of security problems but may also break existing programs that depend on using, say, exec.Command("prog") to run a binary named prog (or, on Windows, prog.exe) in the current directory. See the os/exec package documentation for information about how best to update such programs.
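For anyone affected by this, the os/exec documentation shows how to detect the new error via exec.ErrDot and, only if you really do want the old behaviour, opt back in explicitly:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Since Go 1.19, looking up "prog" no longer matches a binary in the
	// current directory; instead cmd.Err wraps exec.ErrDot.
	cmd := exec.Command("prog")
	if errors.Is(cmd.Err, exec.ErrDot) {
		cmd.Err = nil // explicitly accept resolving from the current directory
	}
	// The safer fix is to reference the binary by path, e.g. exec.Command("./prog").
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed:", err)
	}
}
```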
We've been sometimes using lo and sometimes using my slices package, and we need to pick one
for consistency. Lo is more extensive and better maintained, so we're going with that.
My slices package was a superset of Go's own slices package, so in some places I've just used
the official one (the methods were just wrappers anyway).
I've also moved the remaining methods into the utils package.
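For illustration, this is the kind of lo usage that replaces the old helpers (the data here is made up):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/samber/lo"
)

func main() {
	branchNames := []string{"master", "feature/coverage", "feature/lo"}

	// lo.Filter and lo.Map cover most of what the old slices helpers did.
	featureBranches := lo.Filter(branchNames, func(name string, _ int) bool {
		return strings.HasPrefix(name, "feature/")
	})
	upperCased := lo.Map(featureBranches, func(name string, _ int) string {
		return strings.ToUpper(name)
	})

	fmt.Println(upperCased) // [FEATURE/COVERAGE FEATURE/LO]
}
```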
Now that we are running each test 6 times on CI, the risk of flakiness
is higher. I want to fix these tests for good, but it'll take time, so
we're just retrying for now.