It seems like a trifle, but if the match frequency is high enough, the
allocation+formatting of line numbers (and columns and byte offsets)
starts to matter. We squash that part of the profile in this commit by
doing our own decimal formatting. I speculate that we get a speed-up
from this by avoiding the formatting machinery and also a possible
allocation.
An alternative would be to use the `itoa` crate, and it is indeed
marginally faster in ad hoc benchmarks, but I'm satisfied enough with
this solution.
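For illustration, a hand-rolled formatter along these lines might look
like this (a sketch, not the exact code in this commit):

    /// Writes the decimal representation of `n` into `buf` and returns
    /// the number of bytes written. Hand-rolling this sidesteps the
    /// general `std::fmt` machinery and any intermediate allocation.
    fn write_decimal(mut n: u64, buf: &mut Vec<u8>) -> usize {
        // 20 bytes is enough for the largest u64 value.
        let mut tmp = [0u8; 20];
        let mut i = tmp.len();
        loop {
            i -= 1;
            tmp[i] = b'0' + (n % 10) as u8;
            n /= 10;
            if n == 0 {
                break;
            }
        }
        buf.extend_from_slice(&tmp[i..]);
        tmp.len() - i
    }

    fn main() {
        let mut out = Vec::new();
        write_decimal(441450441, &mut out);
        assert_eq!(out, b"441450441".to_vec());
    }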
ripgrep does not, and likely never will, report which pattern matched.
Because of that, we can dedup the patterns via just their concrete
syntax without any fuss.
This is somewhat of a pathological case because you don't expect the end
user to pass duplicate patterns in general. But if the end user
generated a list of, say, names and did not dedup them, then ripgrep
could end up spending a lot of extra time on those duplicates if there
are many of them. By deduping them explicitly in the application, we
essentially remove their extra cost completely.
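A minimal sketch of the idea (the helper here is hypothetical;
ripgrep's actual deduping code differs):

    use std::collections::HashSet;

    /// Removes duplicate patterns while preserving the order of first
    /// occurrence. Since ripgrep never reports *which* pattern matched,
    /// dropping exact duplicates of the concrete syntax changes nothing
    /// observable, but avoids paying for each duplicate during a search.
    fn dedup_patterns(patterns: Vec<String>) -> Vec<String> {
        let mut seen = HashSet::new();
        patterns
            .into_iter()
            .filter(|p| seen.insert(p.clone()))
            .collect()
    }

    fn main() {
        let pats = vec!["foo".to_string(), "bar".to_string(), "foo".to_string()];
        assert_eq!(dedup_patterns(pats), vec!["foo", "bar"]);
    }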
It was apparently using a format specific to a particular plugin. I did
know that, but it turns out the plugin is neither ubiquitous nor a de
facto standard[1]. Thus, I think including it just leads to more
confusion. We definitely do not want to be in the business of bundling
aliases for every conceivable editor plugin, so just drop it. We expose
the ability to write your own format for exactly this sort of reason.
[1]: https://github.com/BurntSushi/ripgrep/discussions/2611#discussioncomment-7138302
In the time before, we just used a RegexSet from the regex crate. That
allocated unconditionally, so there was nothing we could do and it
didn't expose any APIs to reuse that memory. But now that we're using
the lower level regex-automata, we can reuse a PatternSet.
Ideally we would just provide a way for the caller to build a PatternSet
(perhaps via an opaque type) so that we don't have to shuffle data into
a PatternSet and then back into the caller's `Vec<usize>`. But this at
least avoids allocating for every search.
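The shape of that reuse looks roughly like this (a sketch assuming
regex-automata's `meta::Regex::which_overlapping_matches` and
`PatternSet` APIs; not the actual grep-regex code):

    use regex_automata::{meta::Regex, Input, MatchKind, PatternSet};

    /// A multi-pattern matcher that reuses a single PatternSet across
    /// searches instead of allocating a fresh set on every call.
    struct MultiMatcher {
        re: Regex,
        patset: PatternSet,
    }

    impl MultiMatcher {
        fn new(patterns: &[&str]) -> MultiMatcher {
            let re = Regex::builder()
                .configure(Regex::config().match_kind(MatchKind::All))
                .build_many(patterns)
                .unwrap();
            let patset = PatternSet::new(re.pattern_len());
            MultiMatcher { re, patset }
        }

        /// Pushes the indices of all matching patterns into `which`.
        /// Both the PatternSet and the caller's Vec are cleared and
        /// reused, so repeated searches don't allocate.
        fn which_match(&mut self, haystack: &[u8], which: &mut Vec<usize>) {
            which.clear();
            self.patset.clear();
            self.re
                .which_overlapping_matches(&Input::new(haystack), &mut self.patset);
            which.extend(self.patset.iter().map(|pid| pid.as_usize()));
        }
    }

    fn main() {
        let mut m = MultiMatcher::new(&["fast", r"\w+ search"]);
        let mut which = Vec::new();
        m.which_match(b"recursively searches directories", &mut which);
        // Only the second pattern matches this haystack.
        assert_eq!(which, vec![1]);
    }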
This brings the code in line with my current style. It also inlines the
dozen or so lines of code for FNV hashing instead of bringing in a
micro-crate for it. Finally, it drops the dependency on regex in favor
of using regex-syntax and regex-automata directly.
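For reference, inlining FNV amounts to roughly the following (standard
64-bit FNV-1a constants; not necessarily byte-for-byte what this commit
adds):

    /// A std-compatible hasher implementing 64-bit FNV-1a. This is
    /// roughly the dozen lines one inlines instead of depending on a
    /// micro-crate.
    struct Fnv(u64);

    impl Default for Fnv {
        fn default() -> Fnv {
            // FNV-1a 64-bit offset basis.
            Fnv(0xcbf29ce484222325)
        }
    }

    impl std::hash::Hasher for Fnv {
        fn finish(&self) -> u64 {
            self.0
        }

        fn write(&mut self, bytes: &[u8]) {
            for &b in bytes {
                self.0 ^= u64::from(b);
                // FNV-1a 64-bit prime.
                self.0 = self.0.wrapping_mul(0x100000001b3);
            }
        }
    }

    fn main() {
        use std::collections::HashMap;
        use std::hash::BuildHasherDefault;

        // Use it as the hasher for a HashMap keyed by small strings.
        let mut map: HashMap<&str, u32, BuildHasherDefault<Fnv>> = HashMap::default();
        map.insert("rg", 1);
        assert_eq!(map.get("rg"), Some(&1));
    }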
This essentially takes the work done in #2483 and does a bit of a
facelift. A brief summary:
* We reduce the hyperlink API we expose to just the format, a
configuration and an environment.
* We move buffer management into a hyperlink-specific interpolator.
* We expand the documentation on --hyperlink-format.
* We rewrite the hyperlink format parser to be a simple state machine
with support for escaping '{{' and '}}'. (A rough sketch of the idea
follows this list.)
* We remove the 'gethostname' dependency and instead insist that the
caller provide the hostname. (So grep-printer doesn't get it itself,
but the application will.) Similarly for the WSL prefix.
* Probably some other things.
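Here is the rough sketch of a state-machine parser with '{{' and '}}'
escaping mentioned above (a simplified stand-in, not the actual
grep-printer implementation):

    /// One piece of a parsed format string: literal text or a named
    /// variable to interpolate (e.g. "host", "path", "line").
    #[derive(Debug, PartialEq)]
    enum Part {
        Text(String),
        Var(String),
    }

    /// Parses a format string, treating '{{' and '}}' as escaped
    /// literal braces.
    fn parse(fmt: &str) -> Result<Vec<Part>, String> {
        enum State {
            Literal,
            Var,
        }
        let mut parts = Vec::new();
        let mut buf = String::new();
        let mut state = State::Literal;
        let mut chars = fmt.chars().peekable();
        while let Some(ch) = chars.next() {
            match state {
                State::Literal => match ch {
                    '{' if chars.peek() == Some(&'{') => {
                        chars.next();
                        buf.push('{');
                    }
                    '}' if chars.peek() == Some(&'}') => {
                        chars.next();
                        buf.push('}');
                    }
                    '{' => {
                        if !buf.is_empty() {
                            parts.push(Part::Text(std::mem::take(&mut buf)));
                        }
                        state = State::Var;
                    }
                    '}' => return Err("unescaped '}' in format string".to_string()),
                    _ => buf.push(ch),
                },
                State::Var => match ch {
                    '}' => {
                        parts.push(Part::Var(std::mem::take(&mut buf)));
                        state = State::Literal;
                    }
                    _ => buf.push(ch),
                },
            }
        }
        match state {
            State::Var => Err("unclosed '{' in format string".to_string()),
            State::Literal => {
                if !buf.is_empty() {
                    parts.push(Part::Text(buf));
                }
                Ok(parts)
            }
        }
    }

    fn main() {
        let parts = parse("{{rg}} {host}{path}").unwrap();
        assert_eq!(
            parts,
            vec![
                Part::Text("{rg} ".to_string()),
                Part::Var("host".to_string()),
                Part::Var("path".to_string()),
            ]
        );
    }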
Overall, the general structure of #2483 was kept. The biggest change is
probably requiring the caller to pass in things like a hostname instead
of having the crate do it. I did this for a couple reasons:
1. I feel uncomfortable with code deep inside the printing logic
reaching out into the environment to assume responsibility for
retrieving the hostname. This feels more like an application-level
responsibility. Arguably, path canonicalization falls into this same
bucket, but it is more difficult to rip that out. (And we can do it
in the future in a backwards compatible fashion I think.)
2. I wanted to permit end users to tell ripgrep about their system's
hostname in their own way, e.g., by running a custom executable. I
want this because I know at least for my own use cases, I sometimes
log into systems using an SSH hostname that is distinct from the
system's actual hostname (usually because the system is shared in
some way or changing its hostname is not allowed/practical).
I think that's about it.
Closes #665, Closes #2483
I originally did not put PathPrinter into grep-printer because I
considered it somewhat extraneous to what a "grep" program does, and
also because its implementation was rather simple. But now with hyperlink
support, its implementation has grown a smidge more complicated. And
more importantly, its existence required exposing a lot more of the
hyperlink guts. Without it, we can keep things like HyperlinkPath and
HyperlinkSpan completely private.
We can now also keep `PrinterPath` completely private. And this is a
breaking change.
Like a previous commit did for the grep-cli crate, this does some
polishing to the grep-printer crate. We aren't able to achieve as much
as we did with grep-cli, but we at least eliminate all rust-analyzer
lints and group imports in the way I've been doing recently.
Next we'll start doing some more invasive changes.
This will enable us to query for the current system's hostname in both
Unix and Windows environments.
We could have pulled in the 'gethostname' crate for this, but:
1. I'm not a huge fan of micro-crates.
2. The 'gethostname' crate panics if an error occurs. (Which, to be
fair, an error should never occur, but it seems plausible on borked
systems? ripgrep runs in a lot of places, so I'd rather not take the
chance of a panic bringing down ripgrep for an optional convenience
feature.)
3. The 'gethostname' crate uses the 'windows-targets' crate from
Microsoft. This is arguably the "right" thing to do, but ripgrep
doesn't use them yet and they appear high-churn.
So I just added a safe wrapper to do this to winapi-util[1] and then
inlined the Unix version here. This brings in no extra dependencies and
the routine is fallible so that callers can recover from potentially
strange failures.
[1]: https://github.com/BurntSushi/winapi-util/pull/14
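For reference, the inlined Unix half looks roughly like this (a sketch
assuming the `libc` crate; the Windows half goes through the new
winapi-util routine):

    use std::ffi::OsString;
    use std::io;
    use std::os::unix::ffi::OsStringExt;

    /// Returns the system hostname by calling gethostname(2) directly.
    /// This is fallible so that callers can recover from odd failures
    /// instead of panicking.
    fn hostname() -> io::Result<OsString> {
        // 256 bytes is enough on common systems (Linux caps hostnames at
        // 64); a more careful version would consult sysconf.
        let mut buf = vec![0u8; 256];
        let code = unsafe {
            libc::gethostname(buf.as_mut_ptr().cast::<libc::c_char>(), buf.len())
        };
        if code != 0 {
            return Err(io::Error::last_os_error());
        }
        // On success the buffer is NUL terminated; keep only the name.
        let end = buf.iter().position(|&b| b == 0).unwrap_or(buf.len());
        buf.truncate(end);
        Ok(OsString::from_vec(buf))
    }

    fn main() {
        match hostname() {
            Ok(name) => println!("hostname: {:?}", name),
            Err(err) => eprintln!("could not get hostname: {}", err),
        }
    }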
This does a variety of polishing.
1. Deprecate the tty methods in favor of std's IsTerminal trait. (A
tiny example is sketched below.)
2. Trim down un-needed dependencies.
3. Use bstr to implement escaping.
4. Various aesthetic polishing.
I'm doing this as prep work before adding more to this crate. And as
part of a general effort toward reducing ripgrep's dependencies.
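As a tiny example, the IsTerminal replacement for the old tty helpers
looks like this (std only, Rust 1.70+):

    use std::io::IsTerminal;

    fn main() {
        // std's IsTerminal trait covers what the deprecated tty methods
        // did: ask whether a stream is attached to a terminal.
        if std::io::stdout().is_terminal() {
            println!("stdout is a tty; colors and hyperlinks are fair game");
        } else {
            println!("stdout is redirected; keep the output plain");
        }
    }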
This commit represents the initial work to get hyperlinks working and
was submitted as part of PR #2483. Subsequent commits largely retain the
functionality and structure of the hyperlink support added here, but
rejigger some things around.
This represents yet another iteration on how `ignore` enqueues and
distributes work in parallel. The original implementation used a
multi-producer/multi-consumer thread safe queue from crossbeam. At some
point, I migrated to a simple `Arc<Mutex<Vec<_>>>` and treated it as a
stack so that we did depth first traversal. This helped with memory
usage in very wide directories.
But it turns out that a naive stack-behind-a-mutex can be quite a bit
slower than something that's a little smarter, such as the
work-stealing stack used in this commit. My hypothesis for why this helps is that
without the stealing component, work distribution can get stuck in
sub-optimal configurations that depend on which directory entries get
assigned to a particular worker. It's likely that this can result in
some workers getting "more" work than others just by chance, while
other workers sit idle. The work-stealing approach heads that off.
This does re-introduce a dependency on parts of crossbeam which is kind
of a bummer, but it's carrying its weight for now.
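A minimal sketch of the work-stealing shape (using crossbeam-deque; the
names and wiring here are illustrative, not the actual `ignore`
internals):

    use crossbeam_deque::{Steal, Stealer, Worker};

    /// Each worker owns a LIFO deque (so local work stays depth-first)
    /// and holds stealers for every other worker's deque. When its own
    /// deque runs dry, it tries to steal instead of going idle.
    struct WorkQueue<T> {
        local: Worker<T>,
        stealers: Vec<Stealer<T>>,
    }

    impl<T> WorkQueue<T> {
        fn push(&self, work: T) {
            self.local.push(work);
        }

        fn pop(&self) -> Option<T> {
            // Prefer our own stack; fall back to stealing from siblings.
            self.local.pop().or_else(|| {
                self.stealers.iter().find_map(|s| loop {
                    match s.steal() {
                        Steal::Success(work) => return Some(work),
                        Steal::Empty => return None,
                        Steal::Retry => continue,
                    }
                })
            })
        }
    }

    fn main() {
        let workers: Vec<Worker<u64>> = (0..4).map(|_| Worker::new_lifo()).collect();
        let stealers: Vec<Stealer<u64>> = workers.iter().map(Worker::stealer).collect();
        let queues: Vec<WorkQueue<u64>> = workers
            .into_iter()
            .map(|local| WorkQueue { local, stealers: stealers.clone() })
            .collect();
        queues[0].push(42);
        // Worker 3 has nothing local, so it steals worker 0's item.
        assert_eq!(queues[3].pop(), Some(42));
    }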
Closes #1823, Closes #2591
Ref https://github.com/sharkdp/fd/issues/28
When searching subdirectories, the path was not built correctly and
included duplicate parts. This fix removes the duplicated part when
possible.
Fixes #1757, Closes #2295
This brings in aarch64 SIMD support for Teddy[1]. In effect, searches
from which multiple (but a small number of) literals are extracted will
likely get much faster on aarch64 (i.e., Apple silicon). For example,
from the PR, on my M2 mac mini:
$ time rg-before-teddy-aarch64 -i -c 'Sherlock Holmes' OpenSubtitles2018.half.en
3055
real 8.196
user 7.726
sys 0.469
maxmem 5728 MB
faults 17
$ time rg-after-teddy-aarch64 -i -c 'Sherlock Holmes' OpenSubtitles2018.half.en
3055
real 1.127
user 0.701
sys 0.425
maxmem 4880 MB
faults 13
w00t.
[1]: https://github.com/BurntSushi/aho-corasick/pull/129
As of the memchr 2.6 release, its Iterator::count method is specialized
to only count the number of occurrences instead of finding the offset of
each occurrence. This replaces ripgrep's use of the bytecount crate.
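In code terms, the line counting boils down to something like this (a
sketch):

    /// Counts lines by counting '\n' bytes. With memchr 2.6+, `count`
    /// on `memchr_iter` is specialized to count occurrences directly
    /// instead of computing the offset of each one, which is what
    /// bytecount used to provide.
    fn count_lines(haystack: &[u8]) -> u64 {
        memchr::memchr_iter(b'\n', haystack).count() as u64
    }

    fn main() {
        assert_eq!(count_lines(b"a\nb\nc\n"), 3);
    }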
While micro-benchmarks suggest that memchr's method has better
throughput than bytecount, it turned out to be an illusion. Namely, on a
~13GB haystack prior to this change:
$ time rg-bytecount 'You killed my friend, my best friend, my lifelong friend!' OpenSubtitles2018.raw.en --line-number
441450441:- You killed my friend, my best friend, my lifelong friend!
real 1.473
user 1.186
sys 0.286
maxmem 12512 MB
faults 0
And then after:
$ time rg 'You killed my friend, my best friend, my lifelong friend!' OpenSubtitles2018.raw.en --line-number
441450441:- You killed my friend, my best friend, my lifelong friend!
real 1.532
user 1.280
sys 0.250
maxmem 12512 MB
faults 0
But real-world perf is just about in the same ballpark. That's good
enough for me, at least for now, to justify dropping the extra
dependency.
I did this because the marginal cost of adding the Iterator::count()
specialization to memchr was extremely small.
We drop our MIPS target because it no longer works.[1] We were
previously using it as a means of testing ripgrep in a big endian
environment. So to achieve that without MIPS, we test on powerpc64 and
s390x. (No particular reason to do both, but why not.)
We also add aarch64 as a proxy for at least ensuring everything works
for the same architecture as Apple silicon. It's not a guarantee that
everything works, but it seems better than nothing until we can actually
test Apple silicon in CI.
[1]: c788378d6f
We currently implement globs by converting them to regexes, and in doing
so, sometimes use grouping. In all but one case, we used non-capturing
groups. But for alternations, we used capturing groups, which was likely
just an oversight. We don't make use of capture groups at all, and while
they usually don't have any overhead, they lead to weird cases like this
one: https://github.com/rust-lang/regex/issues/1059
That particular issue is also a bug in the regex crate itself, which is
fixed in https://github.com/rust-lang/regex/pull/1062. Note though that
the bug fix in the regex crate is still required: even with this patch
to globset, memory usage is reduced (by about half in
rust-lang/regex#1059), but it does not return to where it was prior to
the regex 1.9 release.
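One way to see the translated regex for a glob with an alternation (the
exact output depends on the globset version):

    use globset::Glob;

    fn main() {
        // The `{rs,toml}` alternation compiles to a regex group; after
        // this change that group is non-capturing, i.e. roughly
        // (?:rs|toml) rather than (rs|toml).
        let glob = Glob::new("*.{rs,toml}").unwrap();
        println!("{}", glob.regex());
    }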
It turns out our fast path for -w/--word-regexp wasn't quite correct in
some cases. Namely, we use `(?m:^|\W)(<original-regex>)(?m:\W|$)` as the
implementation of -w/--word-regexp since `\b(<original-regex>)\b` has
some unintuitive results in certain cases, specifically when
<original-regex> matches non-word characters at match boundaries.
The problem is that using this formulation means that you need to
extract the capture group around <original-regex> to find the "real"
match, since the surrounding (^|\W) and (\W|$) aren't part of the match.
This is fine, but the capture group engine is usually slow, so we have a
fast path where we try to deduce the correct match boundary after an
initial match (before running capture groups). The problem is that doing
this is rather tricky because it's hard to know, in general, whether the
`^` or the `\W` matched.
This still doesn't seem quite right overall, but we at least fix one
more case.
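A small illustration of this formulation using the regex crate (not
ripgrep's actual matcher code):

    use regex::Regex;

    fn main() {
        // ripgrep's -w wrapping, roughly: the "real" match is capture
        // group 1; the surrounding (?m:^|\W) and (?m:\W|$) are matched
        // too, which is why a correct answer normally requires running
        // the capture engine.
        let original = r"foo\w+";
        let wrapped =
            Regex::new(&format!(r"(?m:^|\W)({})(?m:\W|$)", original)).unwrap();
        let hay = "xyz foobar xyz";
        let caps = wrapped.captures(hay).unwrap();
        // Group 0 includes the neighboring non-word characters...
        assert_eq!(&caps[0], " foobar ");
        // ...while group 1 is the match the user actually asked for.
        assert_eq!(&caps[1], "foobar");
    }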
Fixes #2574