mirror of https://github.com/BurntSushi/ripgrep.git, synced 2024-12-12 19:18:24 +02:00
082245dadb
ripgrep began its life with docopt for argument parsing. Then it moved to Clap and stayed there for a number of years. Clap has served ripgrep well, and it probably could continue to serve ripgrep well, but I ended up deciding to move off of it. Why?

The first time I had the thought of moving off of Clap was during the 2->3->4 transition. I thought the 3.x and 4.x releases were great, but for me, it ended up moving a little too quickly. Since the release of 4.x was telegraphed around when 3.x came out, I decided to just hold off and wait to migrate to 4.x instead of doing a 3.x migration followed shortly by a 4.x migration. Of course, I just never ended up doing the migration at all. I never got around to it and there just wasn't a compelling reason for me to upgrade. While I never investigated it, I saw an upgrade as a non-trivial amount of work, in part because I didn't encapsulate the usage of Clap enough.

The above is just what got me started thinking about it. It wasn't enough to get me to move off of it on its own. What ended up pushing me over the edge was a combination of factors:

* As mentioned above, I didn't want to run on the migration treadmill. This has proven to not be much of an issue, but at the time of the 2->3->4 releases, I didn't know how long Clap 4.x would be out before a 5.x would come out.
* The release of lexopt[1] caught my eye. IMO, that crate demonstrates exactly how something new can arrive on the scene and just thoroughly solve a problem minimalistically. It has the docs, the reasoning, the simple API, the tests and good judgment. It gets all the weird corner cases right that Clap also gets right (which is part of why I was originally attracted to Clap).
* I have an overall desire to reduce the size of my dependency tree, in part because a smaller dependency tree tends to correlate with better compile times, but also in part because it reduces my reliance on and trust in others. It lets me be the "master" of ripgrep's destiny by reducing the amount of behavior that is the result of someone else's decision (whether good or bad).
* I perceived that Clap solves a more general problem than what I actually need solved. Despite the vast number of flags that ripgrep has, its requirements are actually pretty simple. We just need simple switches and flags that support one value. No multi-value flags. No sub-commands. And probably a lot of other functionality that Clap has that makes it so flexible for so many different use cases. (I'm being hand-wavy on the last point.)

With all that said, perhaps most importantly, the future of ripgrep possibly demands a more flexible CLI argument parser. In today's world, I would really like, for example, flags like `--type` and `--type-not` to be able to accumulate their repeated values into a single sequence while respecting the order in which they appear on the CLI. For example, prior to this migration, `rg regex-automata -Tlock -ttoml` would not return results in `Cargo.lock` in this repository because `-Tlock` always took priority even though `-ttoml` appeared after it. But with this migration, `-ttoml` now correctly overrides `-Tlock`. We would like to do similar things for `-g/--glob` and `--iglob`, and potentially even introduce a `-G/--glob-not` flag instead of requiring users to use `!` to negate a glob. (Which I had done originally to work around this problem.) And some day, I'd like to add some kind of boolean matching to ripgrep, perhaps similar to how `git grep` does it. (Although I haven't thought too carefully about a design yet.) I perceive that this would be difficult to implement correctly in Clap. I believe the order-respecting semantics above are possible to implement correctly in Clap 2.x, although it is awkward to do so. I have not looked closely enough at the Clap 4.x API to know whether it's still possible there.
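The order-respecting semantics described above can be sketched without any parsing library: record each `-t`/`-T` occurrence, in CLI order, in a single sequence, and let the last flag that matches a file's types win. The names below (`TypeChoice`, `file_included`) are hypothetical illustrations, not ripgrep's actual internals.

```rust
// Sketch of order-respecting type flags. Hypothetical names; this is not
// ripgrep's actual implementation.
#[derive(Debug, Clone, PartialEq)]
enum TypeChoice {
    Select(String), // -t/--type
    Negate(String), // -T/--type-not
}

/// Decide whether a file is searched, given the type flags in CLI order and
/// the set of type names the file matches (a file can match several; e.g.
/// `Cargo.lock` matches both `lock` and `toml`). Later flags override
/// earlier ones.
fn file_included(choices: &[TypeChoice], file_types: &[&str]) -> bool {
    // With no positive selection at all, everything is included by default.
    let mut included =
        !choices.iter().any(|c| matches!(c, TypeChoice::Select(_)));
    for choice in choices {
        match choice {
            TypeChoice::Select(t) if file_types.contains(&t.as_str()) => {
                included = true
            }
            TypeChoice::Negate(t) if file_types.contains(&t.as_str()) => {
                included = false
            }
            _ => {}
        }
    }
    included
}

fn main() {
    // `rg regex-automata -Tlock -ttoml`: Cargo.lock matches both the `lock`
    // and `toml` types, and the later -ttoml wins over the earlier -Tlock.
    let choices = vec![
        TypeChoice::Negate("lock".to_string()),
        TypeChoice::Select("toml".to_string()),
    ];
    assert!(file_included(&choices, &["lock", "toml"]));
    // With the flags reversed, the negation wins instead.
    let reversed = vec![
        TypeChoice::Select("toml".to_string()),
        TypeChoice::Negate("lock".to_string()),
    ];
    assert!(!file_included(&reversed, &["lock", "toml"]));
}
```

The key design point is that both flag kinds land in one ordered sequence instead of two independent buckets, which is what a parser that splits `-t` and `-T` into separate multi-value options cannot express.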
In any case, these were enough reasons to move off of Clap and own more of the argument parsing process myself. This did require a few things:

* I had to write my own logic for how arguments are combined into one single state object. Of course, I wanted this. This was part of the upside. But it's still code I didn't have to write with Clap.
* I had to write my own shell completion generator.
* I had to write my own `-h/--help` output generator.
* I also had to write my own man page generator. Well, I had to do this with Clap 2.x too, although my understanding is that Clap 4.x supports this. With that said, without having tried it, my guess is that I probably wouldn't have liked the output it generated, because I ultimately had to write most of the roff by hand myself to get the man page I wanted. (This also had the benefit of dropping the build dependency on asciidoc/asciidoctor.)

While this is definitely a fair bit of extra work, it overall only cost me a couple of days. IMO, that's a good trade-off given that this code is unlikely to change again in any substantial way. And it should also allow for more flexible semantics going forward.

Fixes #884, Fixes #1648, Fixes #1701, Fixes #1814, Fixes #1966

[1]: https://docs.rs/lexopt/0.3.0/lexopt/index.html
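The first item in that list, combining arguments into one single state object, can be sketched as a fold: each parsed flag, in CLI order, applies itself to one mutable state, which is also what makes "last flag wins" semantics fall out naturally. The names here (`Args`, `Flag`, `combine`) are hypothetical and not ripgrep's actual types.

```rust
// Sketch: fold CLI flags, in order, into one state object. Hypothetical
// names; not ripgrep's actual implementation.
#[derive(Debug, Default, PartialEq)]
struct Args {
    ignore_case: bool,
    context: usize,
}

enum Flag {
    IgnoreCase,     // -i
    CaseSensitive,  // -s
    Context(usize), // -C NUM
}

/// Apply each flag to the accumulated state. Because flags are applied in
/// the order they appear, a later flag overrides an earlier one.
fn combine(flags: impl IntoIterator<Item = Flag>) -> Args {
    let mut args = Args::default();
    for flag in flags {
        match flag {
            Flag::IgnoreCase => args.ignore_case = true,
            Flag::CaseSensitive => args.ignore_case = false,
            Flag::Context(n) => args.context = n,
        }
    }
    args
}

fn main() {
    // `-i -C2 -s`: the later -s overrides the earlier -i.
    let args =
        combine([Flag::IgnoreCase, Flag::Context(2), Flag::CaseSensitive]);
    assert_eq!(args, Args { ignore_case: false, context: 2 });
}
```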
374 lines
10 KiB
Rust
use std::time;

use serde_derive::Deserialize;
use serde_json as json;

use crate::hay::{SHERLOCK, SHERLOCK_CRLF};
use crate::util::{Dir, TestCommand};

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[serde(tag = "type", content = "data")]
#[serde(rename_all = "snake_case")]
enum Message {
    Begin(Begin),
    End(End),
    Match(Match),
    Context(Context),
    Summary(Summary),
}

impl Message {
    fn unwrap_begin(&self) -> Begin {
        match *self {
            Message::Begin(ref x) => x.clone(),
            ref x => panic!("expected Message::Begin but got {:?}", x),
        }
    }

    fn unwrap_end(&self) -> End {
        match *self {
            Message::End(ref x) => x.clone(),
            ref x => panic!("expected Message::End but got {:?}", x),
        }
    }

    fn unwrap_match(&self) -> Match {
        match *self {
            Message::Match(ref x) => x.clone(),
            ref x => panic!("expected Message::Match but got {:?}", x),
        }
    }

    fn unwrap_context(&self) -> Context {
        match *self {
            Message::Context(ref x) => x.clone(),
            ref x => panic!("expected Message::Context but got {:?}", x),
        }
    }

    fn unwrap_summary(&self) -> Summary {
        match *self {
            Message::Summary(ref x) => x.clone(),
            ref x => panic!("expected Message::Summary but got {:?}", x),
        }
    }
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Begin {
    path: Option<Data>,
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct End {
    path: Option<Data>,
    binary_offset: Option<u64>,
    stats: Stats,
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Summary {
    elapsed_total: Duration,
    stats: Stats,
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Match {
    path: Option<Data>,
    lines: Data,
    line_number: Option<u64>,
    absolute_offset: u64,
    submatches: Vec<SubMatch>,
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Context {
    path: Option<Data>,
    lines: Data,
    line_number: Option<u64>,
    absolute_offset: u64,
    submatches: Vec<SubMatch>,
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct SubMatch {
    #[serde(rename = "match")]
    m: Data,
    start: usize,
    end: usize,
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[serde(untagged)]
enum Data {
    Text { text: String },
    // This variant is used when the data isn't valid UTF-8. The bytes are
    // base64 encoded, so using a String here is OK.
    Bytes { bytes: String },
}

impl Data {
    fn text(s: &str) -> Data {
        Data::Text { text: s.to_string() }
    }
    fn bytes(s: &str) -> Data {
        Data::Bytes { bytes: s.to_string() }
    }
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Stats {
    elapsed: Duration,
    searches: u64,
    searches_with_match: u64,
    bytes_searched: u64,
    bytes_printed: u64,
    matched_lines: u64,
    matches: u64,
}

#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
struct Duration {
    #[serde(flatten)]
    duration: time::Duration,
    human: String,
}

/// Decode JSON Lines into a Vec<Message>. If there was an error decoding,
/// this function panics.
fn json_decode(jsonlines: &str) -> Vec<Message> {
    json::Deserializer::from_str(jsonlines)
        .into_iter()
        .collect::<Result<Vec<Message>, _>>()
        .unwrap()
}

rgtest!(basic, |dir: Dir, mut cmd: TestCommand| {
    dir.create("sherlock", SHERLOCK);
    cmd.arg("--json").arg("-B1").arg("Sherlock Holmes").arg("sherlock");

    let msgs = json_decode(&cmd.stdout());

    assert_eq!(
        msgs[0].unwrap_begin(),
        Begin { path: Some(Data::text("sherlock")) }
    );
    assert_eq!(
        msgs[1].unwrap_context(),
        Context {
            path: Some(Data::text("sherlock")),
            lines: Data::text(
                "Holmeses, success in the province of \
                 detective work must always\n",
            ),
            line_number: Some(2),
            absolute_offset: 65,
            submatches: vec![],
        }
    );
    assert_eq!(
        msgs[2].unwrap_match(),
        Match {
            path: Some(Data::text("sherlock")),
            lines: Data::text(
                "be, to a very large extent, the result of luck. \
                 Sherlock Holmes\n",
            ),
            line_number: Some(3),
            absolute_offset: 129,
            submatches: vec![SubMatch {
                m: Data::text("Sherlock Holmes"),
                start: 48,
                end: 63,
            },],
        }
    );
    assert_eq!(msgs[3].unwrap_end().path, Some(Data::text("sherlock")));
    assert_eq!(msgs[3].unwrap_end().binary_offset, None);
    assert_eq!(msgs[4].unwrap_summary().stats.searches_with_match, 1);
    assert_eq!(msgs[4].unwrap_summary().stats.bytes_printed, 494);
});

rgtest!(quiet_stats, |dir: Dir, mut cmd: TestCommand| {
    dir.create("sherlock", SHERLOCK);
    cmd.arg("--json")
        .arg("--quiet")
        .arg("--stats")
        .arg("Sherlock Holmes")
        .arg("sherlock");

    let msgs = json_decode(&cmd.stdout());
    assert_eq!(msgs[0].unwrap_summary().stats.searches_with_match, 1);
    assert_eq!(msgs[0].unwrap_summary().stats.bytes_searched, 367);
});

#[cfg(unix)]
rgtest!(notutf8, |dir: Dir, mut cmd: TestCommand| {
    use std::ffi::OsStr;
    use std::os::unix::ffi::OsStrExt;

    // This test does not work with PCRE2 because PCRE2 does not support the
    // `u` flag.
    if dir.is_pcre2() {
        return;
    }
    // macOS doesn't like this either... sigh.
    if cfg!(target_os = "macos") {
        return;
    }

    let name = &b"foo\xFFbar"[..];
    let contents = &b"quux\xFFbaz"[..];

    // APFS does not support creating files with invalid UTF-8 bytes, so just
    // skip the test if we can't create our file. Presumably we don't need this
    // check if we're already skipping it on macOS, but maybe other file
    // systems won't like this test either?
    if !dir.try_create_bytes(OsStr::from_bytes(name), contents).is_ok() {
        return;
    }
    cmd.arg("--json").arg(r"(?-u)\xFF");

    let msgs = json_decode(&cmd.stdout());

    assert_eq!(
        msgs[0].unwrap_begin(),
        Begin { path: Some(Data::bytes("Zm9v/2Jhcg==")) }
    );
    assert_eq!(
        msgs[1].unwrap_match(),
        Match {
            path: Some(Data::bytes("Zm9v/2Jhcg==")),
            lines: Data::bytes("cXV1eP9iYXo="),
            line_number: Some(1),
            absolute_offset: 0,
            submatches: vec![SubMatch {
                m: Data::bytes("/w=="),
                start: 4,
                end: 5,
            },],
        }
    );
});

rgtest!(notutf8_file, |dir: Dir, mut cmd: TestCommand| {
    use std::ffi::OsStr;

    // This test does not work with PCRE2 because PCRE2 does not support the
    // `u` flag.
    if dir.is_pcre2() {
        return;
    }

    let name = "foo";
    let contents = &b"quux\xFFbaz"[..];

    // APFS does not support creating files with invalid UTF-8 bytes, so just
    // skip the test if we can't create our file.
    if !dir.try_create_bytes(OsStr::new(name), contents).is_ok() {
        return;
    }
    cmd.arg("--json").arg(r"(?-u)\xFF");

    let msgs = json_decode(&cmd.stdout());

    assert_eq!(
        msgs[0].unwrap_begin(),
        Begin { path: Some(Data::text("foo")) }
    );
    assert_eq!(
        msgs[1].unwrap_match(),
        Match {
            path: Some(Data::text("foo")),
            lines: Data::bytes("cXV1eP9iYXo="),
            line_number: Some(1),
            absolute_offset: 0,
            submatches: vec![SubMatch {
                m: Data::bytes("/w=="),
                start: 4,
                end: 5,
            },],
        }
    );
});

// See: https://github.com/BurntSushi/ripgrep/issues/416
//
// This test in particular checks that our match does _not_ include the `\r`
// even though the '$' may be rewritten as '(?:\r??$)' and could thus include
// `\r` in the match.
rgtest!(crlf, |dir: Dir, mut cmd: TestCommand| {
    dir.create("sherlock", SHERLOCK_CRLF);
    cmd.arg("--json").arg("--crlf").arg(r"Sherlock$").arg("sherlock");

    let msgs = json_decode(&cmd.stdout());

    assert_eq!(
        msgs[1].unwrap_match().submatches[0].clone(),
        SubMatch { m: Data::text("Sherlock"), start: 56, end: 64 },
    );
});

// See: https://github.com/BurntSushi/ripgrep/issues/1095
//
// This test checks that we don't drop the \r\n in a matching line when --crlf
// mode is enabled.
rgtest!(r1095_missing_crlf, |dir: Dir, mut cmd: TestCommand| {
    dir.create("foo", "test\r\n");

    // Check without --crlf flag.
    let msgs = json_decode(&cmd.arg("--json").arg("test").stdout());
    assert_eq!(msgs.len(), 4);
    assert_eq!(msgs[1].unwrap_match().lines, Data::text("test\r\n"));

    // Now check with --crlf flag.
    let msgs = json_decode(&cmd.arg("--crlf").stdout());
    assert_eq!(msgs.len(), 4);
    assert_eq!(msgs[1].unwrap_match().lines, Data::text("test\r\n"));
});

// See: https://github.com/BurntSushi/ripgrep/issues/1095
//
// This test checks that we don't return empty submatches when matching a `\n`
// in CRLF mode.
rgtest!(r1095_crlf_empty_match, |dir: Dir, mut cmd: TestCommand| {
    dir.create("foo", "test\r\n\n");

    // Check without --crlf flag.
    let msgs = json_decode(&cmd.arg("-U").arg("--json").arg("\n").stdout());
    assert_eq!(msgs.len(), 4);

    let m = msgs[1].unwrap_match();
    assert_eq!(m.lines, Data::text("test\r\n\n"));
    assert_eq!(m.submatches[0].m, Data::text("\n"));
    assert_eq!(m.submatches[1].m, Data::text("\n"));

    // Now check with --crlf flag.
    let msgs = json_decode(&cmd.arg("--crlf").stdout());
    assert_eq!(msgs.len(), 4);

    let m = msgs[1].unwrap_match();
    assert_eq!(m.lines, Data::text("test\r\n\n"));
    assert_eq!(m.submatches[0].m, Data::text("\n"));
    assert_eq!(m.submatches[1].m, Data::text("\n"));
});

// See: https://github.com/BurntSushi/ripgrep/issues/1412
rgtest!(r1412_look_behind_match_missing, |dir: Dir, mut cmd: TestCommand| {
    // Only PCRE2 supports look-around.
    if !dir.is_pcre2() {
        return;
    }

    dir.create("test", "foo\nbar\n");

    let msgs = json_decode(
        &cmd.arg("-U").arg("--json").arg(r"(?<=foo\n)bar").stdout(),
    );
    assert_eq!(msgs.len(), 4);

    let m = msgs[1].unwrap_match();
    assert_eq!(m.lines, Data::text("bar\n"));
    assert_eq!(m.submatches.len(), 1);
});