In a prior commit, we fixed a performance problem with the -w flag by
doing a little extra work to extract literals. It turns out that using
literals in this case when the -w flag is NOT used results in a
performance regression. The reason is that we end up using a "fast"
regex as a prefilter even though the regex engine itself already uses
an equivalent prefilter internally, so ripgrep ends up redoing a fair
amount of work.
Instead, we only do this extra work when we know the -w flag is enabled.
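As a rough illustration of that gating, here is a minimal sketch. The
function name and the naive literal extraction are placeholders, not
ripgrep's actual code, which is considerably more sophisticated:

```rust
/// A minimal sketch: only do the extra literal-extraction work when the
/// -w flag is enabled, since the regex engine's own prefilter already
/// covers the non -w case.
fn inner_literals(pattern: &str, word_flag: bool) -> Option<Vec<String>> {
    if !word_flag {
        // Without -w, extracting literals here would just duplicate the
        // prefilter work the regex engine does internally.
        return None;
    }
    // Naive placeholder: treat maximal runs of non-metacharacters as
    // candidate literals. ripgrep's real extraction is much smarter.
    let lits: Vec<String> = pattern
        .split(|c: char| r"\.+*?()|[]{}^$".contains(c))
        .filter(|s| !s.is_empty())
        .map(str::to_string)
        .collect();
    if lits.is_empty() { None } else { Some(lits) }
}

fn main() {
    // Only -w triggers the extra extraction work.
    assert_eq!(inner_literals(r"foo[0-9]+bar", false), None);
    assert!(inner_literals(r"foo[0-9]+bar", true).is_some());
}
```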
... and don't replace them with anything because crates.io does not
support GitHub Actions yet. But it's almost there:
https://github.com/rust-lang/crates.io/pull/1838
Thanks @atouchet for noticing this.
We should not assume that the commondir file actually exists. If it
doesn't, then just move on. Otherwise, ripgrep emits an error message
when searching normal submodules, which is not OK.
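A minimal sketch of that tolerant lookup, using a hypothetical helper
(not the actual code in the `ignore` crate), assuming `commondir`
contains a path interpreted relative to the git dir:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Resolve a git dir's `commondir` file if it is present; otherwise just
/// use the git dir itself rather than reporting an error.
fn resolve_git_commondir(git_dir: &Path) -> PathBuf {
    match fs::read_to_string(git_dir.join("commondir")) {
        // `commondir` (used by worktrees) names the real common git dir,
        // relative to the git dir.
        Ok(contents) => git_dir.join(contents.trim_end()),
        // No `commondir` file, e.g. a normal submodule: move on quietly.
        Err(_) => git_dir.to_path_buf(),
    }
}
```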
This regression was introduced in #1446.
Fixes #1520
This also updates the corpora used, so previous times (and counts) are
not comparable.
We also remove some tools, like pt, sift and ucg, since they appear to
be no longer maintained. ag isn't really maintained either, but it still
has significant mind share, so we retain a benchmark for it.
We also upgrade ack to version 3, and remove the clarification on how
`-w` is implemented.
We also add `git grep -P` (uses PCRE2) which appears to be much faster
than `git grep -E`.
Finally, we add ugrep, which is a new up-and-comer in this space.
Fixes #1474
If a literal is entirely whitespace, then it's quite likely that it is
very common. So when that case occurs, just don't do (inner) literal
optimizations at all.
The regex engine may still make sub-optimal decisions here, but that's a
problem for another day.
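Something like this check, sketched here with a hypothetical helper
name rather than ripgrep's actual code:

```rust
/// Returns true if a candidate inner literal is all whitespace and is
/// therefore likely too common to be a useful prefilter.
fn is_whitespace_literal(lit: &str) -> bool {
    !lit.is_empty() && lit.chars().all(char::is_whitespace)
}

fn main() {
    assert!(is_whitespace_literal("   "));
    assert!(!is_whitespace_literal(" foo "));
    // When this fires, skip the inner literal optimization entirely.
}
```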
Fixes #1087
The purpose of this flag is to force ripgrep to ignore all --ignore-file
flags (whether they come before or after --no-ignore-files).
This flag can be overridden with --ignore-files.
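A hypothetical sketch of those semantics (not ripgrep's actual
flag-handling code): the --ignore-file paths are only honored when
--no-ignore-files is not in effect, regardless of the order in which
the --ignore-file flags themselves appeared.

```rust
/// Illustrative only: `no_ignore_files` reflects whichever of
/// --no-ignore-files / --ignore-files came last on the command line.
struct IgnoreFileOpts {
    ignore_file_paths: Vec<String>, // from repeated --ignore-file flags
    no_ignore_files: bool,
}

impl IgnoreFileOpts {
    fn effective_ignore_files(&self) -> &[String] {
        if self.no_ignore_files {
            // --no-ignore-files drops every --ignore-file path, even
            // ones given after it.
            &[]
        } else {
            &self.ignore_file_paths
        }
    }
}

fn main() {
    let opts = IgnoreFileOpts {
        ignore_file_paths: vec![".myignore".to_string()],
        no_ignore_files: true,
    };
    assert!(opts.effective_ignore_files().is_empty());
}
```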
Fixes #1466
It doesn't really belong in the man page since it's an artifact of a
build/runtime configuration. Moreover, it inhibits reproducible builds.
Fixes #1441
This permits switching between the different regex engine modes that
ripgrep supports. The purpose of this flag is to make it easier to
extend ripgrep with additional regex engines.
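For illustration, here is a sketch of the kind of switch this flag
drives. The variants mirror the documented engine choices, but the
types and parsing below are hypothetical, not ripgrep's actual code:

```rust
/// Hypothetical representation of the --engine choices.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum EngineChoice {
    /// The default Rust regex engine.
    Default,
    /// PCRE2, when ripgrep was built with PCRE2 support.
    Pcre2,
    /// Dynamically pick PCRE2 only for patterns the default engine
    /// cannot handle (look-around, backreferences, etc.).
    Auto,
}

fn parse_engine(value: &str) -> Result<EngineChoice, String> {
    match value {
        "default" => Ok(EngineChoice::Default),
        "pcre2" => Ok(EngineChoice::Pcre2),
        "auto" => Ok(EngineChoice::Auto),
        other => Err(format!("unrecognized engine: {:?}", other)),
    }
}

fn main() {
    assert_eq!(parse_engine("auto"), Ok(EngineChoice::Auto));
}
```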
Closes #1488, Closes #1502
This is in preparation for adding a new --engine flag which is intended
to eventually supplant --auto-hybrid-regex.
While there are no immediate plans to add more regex engines to ripgrep,
this is intended to make it easier to maintain a patch to ripgrep with
an additional regex engine. See #1488 for more details.
This adds one new dependency, maybe-uninit, which is brought in by
crossbeam-channel[1]. Apparently, it exists to fix some unsound code
without bumping crossbeam's MSRV. Since ripgrep uses the latest stable
release
of Rust, the maybe-uninit crate should compile down to nothing and just
re-export std's `MaybeUninit` type.
[1] - https://github.com/crossbeam-rs/crossbeam/pull/458
It's not clear why removing this makes things work. I've submitted
PRs that passed CI with fetch-depth=1. Maybe it only fails when
PRs are submitted by external contributors?
Either way, for now, we remove this and absorb the extra cost in
order to get PRs passing CI again.
PR #1501
We can just ask the channel whether any work has been loaded. Normally
querying a channel for its length is a strong predictor of bugs, but in
this case, we do it before we ever attempt a `recv`, so it should work.
Kudos to @zsugabubus for suggesting this!
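Sketched with crossbeam-channel below; the helper name is made up, and
the real check lives inside the parallel walker:

```rust
use crossbeam_channel::Receiver;

/// Returns true if any work item has been loaded onto the channel. An
/// emptiness/length check on a channel is usually suspect because the
/// answer can go stale immediately, but here it is only a cheap probe
/// performed before the first `recv` attempt.
fn has_pending_work<T>(work: &Receiver<T>) -> bool {
    !work.is_empty()
}

fn main() {
    let (tx, rx) = crossbeam_channel::unbounded();
    assert!(!has_pending_work(&rx));
    tx.send("some work item").unwrap();
    assert!(has_pending_work(&rx));
}
```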
It turns out that the previous version wasn't quite correct. Namely, it
was possible for the following sequence to occur:
1. Consider that all workers, except for one, are `waiting`.
2. The last remaining worker finds one more job to do and sends it on
the channel.
3. One of the previously `waiting` workers wakes up from the job that
the last running worker sent, but `self.resume()` has not been
called yet.
4. The last worker, from (2), calls `get_work` and sees that the
channel has nothing on it, so it executes `self.waiting() ==
1`. Since the worker in (3) hasn't called `self.resume()` yet,
`self.waiting() == 1` evaluates to true.
5. This sets off a chain reaction that stops all workers, despite the
fact that the worker in (3) got more work (which could itself spawn
more work).
The end result is that the traversal may terminate while there are still
outstanding work items to process. This problem was observed through
spurious failures in CI. I was not actually able to reproduce the bug
locally.
We fix this by changing our strategy to detect termination using a
counter. Namely, we increment the counter just before sending new work
and decrement the counter just after finishing work. In this way, we
guarantee that the counter only ever reaches 0 once there is no more
work to process.
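A simplified sketch of that counter, with hypothetical names (the real
walker in the `ignore` crate tracks this alongside its other state):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Counts outstanding work items across all workers.
#[derive(Clone)]
struct WorkCounter(Arc<AtomicUsize>);

impl WorkCounter {
    fn new() -> WorkCounter {
        WorkCounter(Arc::new(AtomicUsize::new(0)))
    }

    /// Increment just *before* sending a new work item on the channel.
    fn before_send(&self) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }

    /// Decrement just *after* a work item has been fully processed. Any
    /// new items it produced were already counted by `before_send`, so
    /// the counter can only reach zero once no work remains anywhere.
    /// Returns true when this was the last outstanding item.
    fn after_finish(&self) -> bool {
        self.0.fetch_sub(1, Ordering::SeqCst) == 1
    }
}

fn main() {
    let counter = WorkCounter::new();
    counter.before_send(); // root work item queued
    assert!(counter.after_finish()); // ...and processed: safe to quit
}
```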
See #1337 for more discussion. Many thanks to @zsugabubus for helping me
work through this.