Without this, it's not reliably possible to check whether a given view is visible
by asking
windowHelper.TopViewInWindow(context.GetWindowName()) == context.GetView()
because there could be transient, invisible contexts after it in the Z order.
I guess it's a bit of a coincidence that this has never been a problem so far.
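For illustration, here's a minimal sketch of the kind of check that does account for
this, assuming a gocui-style view with a Visible flag; none of these names are the
actual lazygit helpers:

```go
// Hypothetical sketch: rather than trusting the topmost view in the Z order,
// walk down from the top and skip views that aren't currently visible.
// The View type and Visible field stand in for gocui's; topmostVisibleView
// is not a real lazygit function.
type View struct {
	Name    string
	Visible bool
}

// viewsInWindow is assumed to be ordered bottom-to-top (Z order).
func topmostVisibleView(viewsInWindow []*View) *View {
	for i := len(viewsInWindow) - 1; i >= 0; i-- {
		if viewsInWindow[i].Visible {
			return viewsInWindow[i]
		}
	}
	return nil
}
```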
The output of the GetWindowDimensions function is hard to understand just by looking at it,
so I've added a helper function in the tests that renders the window layout as text. To
create a new test you just come up with some args and paste the rendered output as the
expected output.
This has the same downsides that any snapshot-based testing has: it's more brittle than
targeted assertions. But it is much easier to make sense of these snapshots than it is
to make sense of more fine-grained assertions, and I like the fact that these tests can
serve as documentation.
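As a rough sketch of what such a test might look like (renderLayout and someArgs are
hypothetical names standing in for the real helper and arguments):

```go
func TestGetWindowDimensions(t *testing.T) {
	// renderLayout is a hypothetical stand-in for the helper described above:
	// it draws the returned window dimensions as text.
	actual := renderLayout(GetWindowDimensions(someArgs))

	// To add a new test, run it once and paste the rendered output here
	// as the expected snapshot.
	expected := `
╭status──╮╭files───────────╮
│        ││                │
╰────────╯╰────────────────╯`

	if actual != expected {
		t.Errorf("unexpected window layout, got:\n%s", actual)
	}
}
```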
We are also removing the single-character padding on the left/right edges of the bottom
line because it's unnecessary.
Unfortunately we need to create views for each spacer: it's not enough to just
lay out the existing views with padding in between, because gocui only renders
views, meaning that if there is no view in a given position, that position will just
render whatever was there previously (at least that's what I recall from talking
this through with Stefan: I could be way off).
Co-authored-by: Stefan Haller <stefan@haller-berlin.de>
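As a rough sketch of what a spacer view amounts to, using the upstream gocui API for
illustration (lazygit uses its own fork of gocui, so the real signatures and helpers
differ):

```go
import (
	"errors"

	"github.com/jroimartin/gocui"
)

// Hypothetical sketch: create a frameless, empty view so that gocui paints
// over whatever was previously drawn in that region of the screen.
func createSpacerView(g *gocui.Gui, name string, x0, y0, x1, y1 int) error {
	view, err := g.SetView(name, x0, y0, x1, y1)
	if err != nil && !errors.Is(err, gocui.ErrUnknownView) {
		// ErrUnknownView just means the view didn't exist before this call.
		return err
	}
	view.Frame = false // no border: the spacer should render as blank space
	return nil
}
```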
It sounds like at some point we only showed a slash as the search prompt, but I
dug a bit through the history and couldn't find a state of the code where that
was the case. (shrug)
This PR captures the code coverage from our unit and integration tests. At the
moment it simply pushes the result to Codacy, a platform that assists with
improving code health. Right now the focus is just getting visibility, but I want
to experiment with alerting on PRs that cause a drop in code coverage.
To be clear, I'm not a dogmatist about this: I have no aspirations to get to
100% code coverage, and I don't consider lines-of-code-covered to be a perfect
metric, but it is a pretty good heuristic for how extensive your tests are.
The good news is that our coverage is actually pretty good, which was a surprise
to me!
As a conflict-of-interest statement: I'm in Codacy's 'Pioneers' program, which
provides funding and mentorship, and part of the arrangement is to use Codacy's
tooling on lazygit. This is something I'd have been happy to explore even
without being part of the program, and just like with any other static analysis
tool, we can tweak it to fit our use case and values.
## How we're capturing code coverage
This deserves its own section. Basically, when you build the lazygit binary you
can specify that you want the binary to capture coverage information when it
runs. Then, if you run the binary with a GOCOVERDIR env var set, it will write
coverage data to that directory before exiting.
It's a similar story with unit tests, except that you specify the directory
inline via `-test.gocoverdir`.
We run both unit tests and integration tests separately in CI, _and_ we run them
in parallel across different OSes and git versions. So I've got each step uploading
its coverage files as an artefact, and then in a separate step we combine all
the artefacts and generate a single combined coverage file, which we then
upload to Codacy (but in future we can do other things with it, like warning in a PR
if code coverage decreases too much).
Another caveat is that when running integration tests, we want to obtain code
coverage not only from the code executed by the test binary, but also from the
code executed by the test runner. Otherwise, for each integration test you add,
the setup code (which is run by the test runner, not the test binary) will be
considered uncovered, and for a large setup step it may appear that your PR
_decreases_ net coverage. Go doesn't easily let you exclude directories from
coverage reports, so it's better to just track the coverage from both the runner
and the binary.
The binary expects a GOCOVERDIR env var, but the test runner expects a
`-test.gocoverdir` flag, and if you pass that flag, Go internally overwrites
GOCOVERDIR to some random temp directory; if you then pass that directory to the
test binary, the binary doesn't seem to actually write to it by the time the
test finishes. To get around that we're using a LAZYGIT_GOCOVERDIR env var, and
within the test runner we map it to GOCOVERDIR before running the test binary,
so they both end up writing to the same directory. Coverage data files are named
to avoid conflicts, with something unique to the process included in the name,
so we don't need to worry about collisions between the test runner's and the
test binary's coverage files. We then merge the files together purely for the
sake of having fewer artefacts to upload.
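As a sketch, the mapping inside the test runner amounts to something like the
following; the surrounding function is made up, and only the env var names come from
the description above:

```go
import (
	"os"
	"os/exec"
)

// Hypothetical sketch: translate LAZYGIT_GOCOVERDIR into GOCOVERDIR for the
// lazygit binary only, so Go's own test machinery never interferes with it.
func runLazygit(lazygitPath string, args ...string) error {
	cmd := exec.Command(lazygitPath, args...)
	cmd.Env = os.Environ()
	if covDir := os.Getenv("LAZYGIT_GOCOVERDIR"); covDir != "" {
		// The coverage-instrumented binary writes its counter files here on exit.
		cmd.Env = append(cmd.Env, "GOCOVERDIR="+covDir)
	}
	return cmd.Run()
}
```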
## Misc
Initially I was able to have all the instances of '/tmp/code_coverage' confined
to ci.yml, which was good because it was all in one place, but now it's spread
across ci.yml and scripts/run_integration_tests.sh. I don't feel great about
that, but I can't think of a way to make it cleaner.
I believe there's a use case for running scripts/run_integration_tests.sh
outside of CI (so that you can run tests against older git versions locally), so
I've made it so that unless you pass the LAZYGIT_GOCOVERDIR env var to that
script, it skips all the code coverage stuff.
On a separate note: it seems that Go's coverage report is based on the percentage
of statements executed, whereas Codacy cares more about lines of code executed, so
Codacy reports a higher percentage (e.g. 82%) than Go's own coverage report
(74%).
I changed the installation method for the Gentoo distribution from a user-owned
overlay to a community-owned one. GURU is the community-owned overlay; it's safer
to use and easier to maintain.