Instead, we just roll our own. A slow version of this is pretty simple
to do, and that's what we write here. The `base64` crate supports a lot
more functionality and is quite fast, but we care about neither of those
things for this particular aspect of ripgrep. (base64 is only used for
non-UTF-8 data or file paths, which are both quite rare.)
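For reference, a slow-but-simple encoder has roughly the following
shape. This is just a sketch of the technique, not necessarily the
exact code added in this commit:

    // Standard base64 alphabet.
    const ALPHABET: &[u8; 64] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    fn encode(bytes: &[u8]) -> String {
        let mut out = String::new();
        for chunk in bytes.chunks(3) {
            // Pack up to 3 bytes into a 24-bit group.
            let mut group = 0u32;
            for (i, &b) in chunk.iter().enumerate() {
                group |= u32::from(b) << (16 - 8 * i);
            }
            // Emit one 6-bit alphabet index per input byte, plus one.
            for i in 0..=chunk.len() {
                let index = (group >> (18 - 6 * i)) & 0b111111;
                out.push(char::from(ALPHABET[index as usize]));
            }
            // Pad the last group out to 4 characters.
            for _ in chunk.len()..3 {
                out.push('=');
            }
        }
        out
    }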
As suggested by @epage[1].
Ad hoc timings on my i7-12900K:
before cargo build: 4.91s
before cargo build release: 8.05s
after cargo build: 4.69s
after cargo build release: 7.83s
... pretty underwhelming if you ask me. Ah well. And on my M2 mac mini:
before cargo build: 6.18s
before cargo build release: 14.50s
after cargo build: 5.52s
after cargo build release: 13.44s
Still kind of underwhelming, but definitely better. It shaves a full
second off of compile times in release mode. I went back to my
i7-12900K, but passed `-j1` to `cargo build` to force single threaded
mode:
before cargo build: 19.44s
before cargo build release: 50.64s
after cargo build: 16.76s
after cargo build release: 48.00s
Which seems pretty consistent with the modest improvements above.
Looking at `cargo build --timings`, the beefiest chunk of time is spent
in compiling `regex-automata`, by far. This is fine because it's core
functionality. I wish a fast general purpose regex engine with its
internals exposed as a separately versioned library didn't require so
much code... Blech.
[1]: https://old.reddit.com/r/rust/comments/17rd8ww/faster_compilation_with_the_parallel_frontend_in/k8igjlg/
The idea is that bringing derives in via serde's optional `derive`
feature inhibits compilation speed[1]. We try to fix that by depending on
`serde_derive` as a distinct dependency.
It does seem to improve overall compilation time, but only by about 0.5
seconds. With that said, my machine has a lot of cores, so it's possible
this will help more on less powerful CPUs.
[1]: https://old.reddit.com/r/rust/comments/17rd8ww/faster_compilation_with_the_parallel_frontend_in/k8igjlg/
ripgrep began its life with docopt for argument parsing. Then it moved
to Clap and stayed there for a number of years. Clap has served ripgrep
well, and it probably could continue to serve ripgrep well, but I ended
up deciding to move off of it.
Why?
The first time I had the thought of moving off of Clap was during the
2->3->4 transition. I thought the 3.x and 4.x releases were great, but
for me, it ended up moving a little too quickly. Since the release of
4.x was telegraphed around when 3.x came out, I decided to just hold off
and wait to migrate to 4.x instead of doing a 3.x migration followed
shortly by another 4.x migration. Of course, I just never ended up doing
the migration at all. I never got around to it and there just wasn't a
compelling reason for me to upgrade. While I never investigated it, I
saw an upgrade as a non-trivial amount of work in part because I didn't
encapsulate the usage of Clap enough.
The above is just what got me started thinking about it. It wasn't
enough to get me to move off of it on its own. What ended up pushing me
over the edge was a combination of factors:
* As mentioned above, I didn't want to run on the migration treadmill.
This has proven to not be much of an issue, but at the time of the
2->3->4 releases, I didn't know how long Clap 4.x would be out before a
5.x would come out.
* The release of lexopt[1] caught my eye. IMO, that crate demonstrates
exactly how something new can arrive on the scene and just thoroughly
solve a problem minimalistically. It has the docs, the reasoning, the
simple API, the tests and good judgment. It gets all the weird corner
cases right that Clap also gets right (and is part of why I was
originally attracted to Clap).
* I have an overall desire to reduce the size of my dependency tree. In
part because a smaller dependency tree tends to correlate with better
compile times, but also in part because it reduces my reliance on and
trust in others. It lets me be the "master" of ripgrep's destiny by reducing
the amount of behavior that is the result of someone else's decision
(whether good or bad).
* I perceived that Clap solves a more general problem than what I
actually need solved. Despite the vast number of flags that ripgrep has,
its requirements are actually pretty simple. We just need simple
switches and flags that support one value. No multi-value flags. No
sub-commands. And probably a lot of other functionality that Clap has
that makes it so flexible for so many different use cases. (I'm being
hand wavy on the last point.)
With all that said, perhaps most importantly, the future of ripgrep
possibly demands a more flexible CLI argument parser. In today's world,
I would really like, for example, flags like `--type` and `--type-not`
to be able to accumulate their repeated values into a single sequence
while respecting the order they appear on the CLI. For example, prior
to this migration, `rg regex-automata -Tlock -ttoml` would not return
results in `Cargo.lock` in this repository because the `-Tlock` always
took priority even though `-ttoml` appeared after it. But with this
migration, `-ttoml` now correctly overrides `-Tlock`. We would like to
do similar things for `-g/--glob` and `--iglob` and potentially even
now introduce a `-G/--glob-not` flag instead of requiring users to use
`!` to negate a glob. (Which I had done originally to work around this
problem.) And some day, I'd like to add some kind of boolean matching to
ripgrep perhaps similar to how `git grep` does it. (Although I haven't
thought too carefully on a design yet.) In order to do that, I perceive
it would be difficult to implement correctly in Clap.
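To make the ordering requirement concrete, here is a sketch of how a
lexopt-style parsing loop makes order-respecting accumulation natural.
The flag handling below is hypothetical and much simplified, not
ripgrep's actual parser:

    use lexopt::prelude::*;

    enum TypeChange {
        Select(String),
        Negate(String),
    }

    fn parse() -> Result<Vec<TypeChange>, lexopt::Error> {
        // Every -t/--type and -T/--type-not occurrence lands in one Vec,
        // in exactly the order it appeared on the command line.
        let mut changes = vec![];
        let mut parser = lexopt::Parser::from_env();
        while let Some(arg) = parser.next()? {
            match arg {
                Short('t') | Long("type") => {
                    changes.push(TypeChange::Select(parser.value()?.string()?));
                }
                Short('T') | Long("type-not") => {
                    changes.push(TypeChange::Negate(parser.value()?.string()?));
                }
                _ => return Err(arg.unexpected()),
            }
        }
        Ok(changes)
    }

With the changes in hand as a single ordered sequence, a later flag can
override an earlier one, which is exactly the `-Tlock -ttoml` behavior
described above.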
I believe that this last point is possible to implement correctly in
Clap 2.x, although it is awkward to do so. I have not looked closely
enough at the Clap 4.x API to know whether it's still possible there. In
any case, these were enough reasons to move off of Clap and own more of
the argument parsing process myself.
This did require a few things:
* I had to write my own logic for how arguments are combined into one
single state object. Of course, I wanted this. This was part of the
upside. But it's still code I didn't have to write when using Clap.
* I had to write my own shell completion generator.
* I had to write my own `-h/--help` output generator.
* I also had to write my own man page generator. Well, I had to do this
with Clap 2.x too, although my understanding is that Clap 4.x supports
this. With that said, without having tried it, my guess is that I
probably wouldn't have liked the output it generated because I
ultimately had to write most of the roff by hand myself to get the man
page I wanted. (This also had the benefit of dropping the build
dependency on asciidoc/asciidoctor.)
While this is definitely a fair bit of extra work, it overall only cost
me a couple days. IMO, that's a good trade off given that this code is
unlikely to change again in any substantial way. And it should also
allow for more flexible semantics going forward.
Fixes #884, Fixes #1648, Fixes #1701, Fixes #1814, Fixes #1966
[1]: https://docs.rs/lexopt/0.3.0/lexopt/index.html
This commit adds `anyhow` as a dependency and switches over to it from
`Box<dyn Error>`.
It actually looks like I've kept all of my errors rather shallow, such
that we don't get a huge benefit from anyhow at present. But now that
anyhow is in use, I expect to use its "context" feature more going
forward.
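For example, this is the kind of thing I have in mind (a hypothetical
call site, not code from this commit):

    use anyhow::Context;

    fn load_config(path: &std::path::Path) -> anyhow::Result<String> {
        // Attach the file path to any I/O error so the message a user
        // sees says which file we failed to read.
        std::fs::read_to_string(path)
            .with_context(|| format!("failed to read config file {}", path.display()))
    }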
This brings the code in line with my current style. It also inlines the
dozen or so lines of code for FNV hashing instead of bringing in a
micro-crate for it. Finally, it drops the dependency on regex in favor
of using regex-syntax and regex-automata directly.
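For reference, the hash in question is 64-bit FNV-1a, which really is
only a handful of lines. A sketch using the standard FNV-1a parameters
(the inlined code may be organized a bit differently):

    fn fnv1a_64(bytes: &[u8]) -> u64 {
        // Standard 64-bit FNV-1a offset basis and prime.
        const OFFSET_BASIS: u64 = 0xcbf29ce484222325;
        const PRIME: u64 = 0x100000001b3;

        let mut hash = OFFSET_BASIS;
        for &b in bytes {
            hash ^= u64::from(b);
            hash = hash.wrapping_mul(PRIME);
        }
        hash
    }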
This essentially takes the work done in #2483 and does a bit of a
facelift. A brief summary:
* We reduce the hyperlink API we expose to just the format, a
configuration and an environment.
* We move buffer management into a hyperlink-specific interpolator.
* We expand the documentation on --hyperlink-format.
* We rewrite the hyperlink format parser to be a simple state machine
with support for escaping '{{' and '}}'. (A rough sketch of that
approach follows this list.)
* We remove the 'gethostname' dependency and instead insist that the
caller provide the hostname. (So grep-printer doesn't get it
itself, but the application will.) Similarly for the WSL prefix.
* Probably some other things.
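The sketch promised above: the format parser is just a small state
machine over the characters of the format string. The types below are
hypothetical and simplified, not the grep-printer internals:

    enum Part {
        Literal(String),
        Variable(String),
    }

    fn parse_format(fmt: &str) -> Result<Vec<Part>, String> {
        let mut parts = vec![];
        let mut literal = String::new();
        let mut chars = fmt.chars().peekable();
        while let Some(ch) = chars.next() {
            match ch {
                // '{{' and '}}' are escapes for literal braces.
                '{' if chars.peek() == Some(&'{') => {
                    chars.next();
                    literal.push('{');
                }
                '}' if chars.peek() == Some(&'}') => {
                    chars.next();
                    literal.push('}');
                }
                // A lone '{' starts a variable like '{path}'.
                '{' => {
                    if !literal.is_empty() {
                        parts.push(Part::Literal(std::mem::take(&mut literal)));
                    }
                    let mut name = String::new();
                    loop {
                        match chars.next() {
                            Some('}') => break,
                            Some(c) => name.push(c),
                            None => return Err("unclosed variable".to_string()),
                        }
                    }
                    parts.push(Part::Variable(name));
                }
                '}' => return Err("unescaped '}'".to_string()),
                c => literal.push(c),
            }
        }
        if !literal.is_empty() {
            parts.push(Part::Literal(literal));
        }
        Ok(parts)
    }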
Overall, the general structure of #2483 was kept. The biggest change is
probably requiring the caller to pass in things like a hostname instead
of having the crate do it. I did this for a couple reasons:
1. I feel uncomfortable with code deep inside the printing logic
reaching out into the environment to assume responsibility for
retrieving the hostname. This feels more like an application-level
responsibility. Arguably, path canonicalization falls into this same
bucket, but it is more difficult to rip that out. (And we can do it
in the future in a backwards compatible fashion I think.)
2. I wanted to permit end users to tell ripgrep about their system's
hostname in their own way, e.g., by running a custom executable. I
want this because I know at least for my own use cases, I sometimes
log into systems using an SSH hostname that is distinct from the
system's actual hostname (usually because the system is shared in
some way or changing its hostname is not allowed/practical).
I think that's about it.
Closes #665, Closes #2483
Like a previous commit did for the grep-cli crate, this does some
polishing to the grep-printer crate. We aren't able to achieve as much
as we did with grep-cli, but we at least eliminate all rust-analyzer
lints and group imports in the way I've been doing recently.
Next we'll start doing some more invasive changes.
This will enable us to query for the current system's hostname in both
Unix and Windows environments.
We could have pulled in the 'gethostname' crate for this, but:
1. I'm not a huge fan of micro-crates.
2. The 'gethostname' crate panics if an error occurs. (Which, to be
fair, an error should never occur, but it seems plausible on borked
systems? ripgrep runs in a lot of places, so I'd rather not take the
chance of a panic bringing down ripgrep for an optional convenience
feature.)
3. The 'gethostname' crate uses the 'windows-targets' crate from
Microsoft. This is arguably the "right" thing to do, but ripgrep
doesn't use Microsoft's crates yet and they appear to be high-churn.
So I just added a safe wrapper to do this to winapi-util[1] and then
inlined the Unix version here. This brings in no extra dependencies and
the routine is fallible so that callers can recover from potentially
strange failures.
[1]: https://github.com/BurntSushi/winapi-util/pull/14
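For reference, the inlined Unix version amounts to a fallible wrapper
around gethostname(2). A sketch of roughly what that looks like,
assuming the `libc` crate (buffer sizing and error handling in the
actual code may differ):

    #[cfg(unix)]
    fn hostname() -> std::io::Result<std::ffi::OsString> {
        use std::os::unix::ffi::OsStringExt;

        // POSIX host names fit in 255 bytes; use a generous buffer.
        let mut buf = vec![0u8; 256];
        let rc = unsafe { libc::gethostname(buf.as_mut_ptr().cast(), buf.len()) };
        if rc != 0 {
            return Err(std::io::Error::last_os_error());
        }
        // The result is NUL terminated; trim at the first NUL byte.
        let len = buf.iter().position(|&b| b == 0).unwrap_or(buf.len());
        buf.truncate(len);
        Ok(std::ffi::OsString::from_vec(buf))
    }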
This does a variety of polishing.
1. Deprecate the tty methods in favor of std's IsTerminal trait. (A
usage sketch follows this list.)
2. Trim down unneeded dependencies.
3. Use bstr to implement escaping.
4. Various aesthetic polishing.
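The usage sketch mentioned in item 1 (just the std trait in action, not
the grep-cli API itself):

    use std::io::IsTerminal;

    fn stdout_is_tty() -> bool {
        // Replaces the old grep-cli tty helpers with the std trait
        // stabilized in Rust 1.70.
        std::io::stdout().is_terminal()
    }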
I'm doing this as prep work before adding more to this crate. And as
part of a general effort toward reducing ripgrep's dependencies.
This commit represents the initial work to get hyperlinks working and
was submitted as part of PR #2483. Subsequent commits largely retain the
functionality and structure of the hyperlink support added here, but
rejigger some things around.
This represents yet another iteration on how `ignore` enqueues and
distributes work in parallel. The original implementation used a
multi-producer/multi-consumer thread safe queue from crossbeam. At some
point, I migrated to a simple `Arc<Mutex<Vec<_>>>` and treated it as a
stack so that we did depth first traversal. This helped with memory
usage in very wide directories.
But it turns out that a naive stack-behind-a-mutex can be quite a bit
slower than something that's a little smarter, such as a work-stealing
stack used in this commit. My hypothesis for why this helps is that
without the stealing component, work distribution can get stuck in
sub-optimal configurations that depend on which directory entries get
assigned to a particular worker. It's likely that this can result in
some workers getting "more" work than others, just by chance, while
other workers sit idle. But the work-stealing approach heads that off.
This does re-introduce a dependency on parts of crossbeam which is kind
of a bummer, but it's carrying its weight for now.
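For reference, the rough shape of the work-stealing setup, assuming
crossbeam-deque's Worker/Stealer types. This is a sketch; the real
implementation in the `ignore` crate has more moving parts (termination
detection, in particular):

    use crossbeam_deque::{Steal, Stealer, Worker};

    struct WorkQueue<T> {
        // This worker's own LIFO deque (depth-first, good for memory use).
        local: Worker<T>,
        // Handles for stealing from every other worker's deque.
        stealers: Vec<Stealer<T>>,
    }

    impl<T> WorkQueue<T> {
        fn push(&self, work: T) {
            self.local.push(work);
        }

        fn pop(&self) -> Option<T> {
            // Prefer our own stack, then try to steal from other workers
            // before going idle.
            self.local.pop().or_else(|| {
                self.stealers.iter().find_map(|s| loop {
                    match s.steal() {
                        Steal::Success(work) => return Some(work),
                        Steal::Empty => return None,
                        Steal::Retry => continue,
                    }
                })
            })
        }
    }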
Closes #1823, Closes #2591
Ref https://github.com/sharkdp/fd/issues/28
This brings in aarch64 SIMD support for Teddy[1]. In effect, it means
searches where multiple (but a small number of) literals are extracted
will likely get much faster on aarch64 (i.e., Apple silicon). For
example, from the PR, on my M2 mac mini:
$ time rg-before-teddy-aarch64 -i -c 'Sherlock Holmes' OpenSubtitles2018.half.en
3055
real 8.196
user 7.726
sys 0.469
maxmem 5728 MB
faults 17
$ time rg-after-teddy-aarch64 -i -c 'Sherlock Holmes' OpenSubtitles2018.half.en
3055
real 1.127
user 0.701
sys 0.425
maxmem 4880 MB
faults 13
w00t.
[1]: https://github.com/BurntSushi/aho-corasick/pull/129
As of the memchr 2.6 release, its Iterator::count method is specialized
to only count the number of occurrences instead of finding the offset of
each occurrence. This replaces ripgrep's use of the bytecount crate.
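The change itself is roughly of this shape (a sketch, not the exact
diff):

    fn count_lines(haystack: &[u8]) -> u64 {
        // memchr 2.6+ specializes Iterator::count so this only counts
        // occurrences rather than computing the offset of each one.
        memchr::memchr_iter(b'\n', haystack).count() as u64
    }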
While micro-benchmarks suggest that memchr's method has better
throughput than bytecount, it turned out to be an illusion. Namely, on a
~13GB haystack prior to this change:
$ time rg-bytecount 'You killed my friend, my best friend, my lifelong friend!' OpenSubtitles2018.raw.en --line-number
441450441:- You killed my friend, my best friend, my lifelong friend!
real 1.473
user 1.186
sys 0.286
maxmem 12512 MB
faults 0
And then after:
$ time rg 'You killed my friend, my best friend, my lifelong friend!' OpenSubtitles2018.raw.en --line-number
441450441:- You killed my friend, my best friend, my lifelong friend!
real 1.532
user 1.280
sys 0.250
maxmem 12512 MB
faults 0
But perf is just about in the same ballpark. That's good enough for me
at the moment in order to drop the extra dependency.
I did this because the marginal cost of adding the Iterator::count()
specialization to memchr was extremely small.