139f186e57
This replaces the use of channels in the parallel directory traversal with a simple stack. The primary motivation for this change is to reduce peak memory usage. In particular, when using a channel (which is a queue), we wind up visiting files in a breadth first fashion. Using a stack switches us to a depth first traversal. While there are no real intrinsic differences, depth first traversal generally tends to use less memory because directory trees are more commonly wide than they are deep.

In particular, the queue/stack size itself is not the only concern. In one recent case documented in #1550, a user wanted to search all Rust crates. The directory structure was shallow but extremely wide, with a single directory containing all crates. This in turn results in descending into each of those directories and building a gitignore matcher for each (since most crates have `.gitignore` files) before ever searching a single file. This means that ripgrep has all such matchers in memory simultaneously, which winds up using quite a bit of memory. In a depth first traversal, peak memory usage is much lower because gitignore matchers are built and discarded more quickly.

In the case of searching all crates, the peak memory usage decrease is dramatic. On my system, it shrinks by an order of magnitude, from almost 1GB to 50MB. The decline in peak memory usage is consistent across other use cases as well, but is typically more modest. For example, searching the Linux repo has a 50% decrease in peak memory usage, and searching the Chromium repo has a 25% decrease.

Search times generally remain unchanged, although some ad hoc benchmarks that I typically run have gotten a bit slower. As far as I can tell, this appears to be a result of scheduling changes. Namely, the depth first traversal seems to result in searching some very large files towards the end of the search, which reduces the effectiveness of parallelism and makes the overall search take longer. This suggests that a stack isn't optimal. It would perhaps be better to prioritize searching larger files first, but it's not clear how to do this without introducing more overhead (getting the file size for each file requires a stat call).

Fixes #1550
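To make the queue-versus-stack difference concrete, here is a minimal single-threaded sketch, not the ignore crate's actual parallel implementation; the walk function and its structure are purely illustrative. Popping pending directories from the back of the work list gives a depth first order, popping from the front gives a breadth first order:

use std::collections::VecDeque;
use std::fs;
use std::io;
use std::path::PathBuf;

fn walk(root: PathBuf, depth_first: bool) -> io::Result<Vec<PathBuf>> {
    let mut work: VecDeque<PathBuf> = VecDeque::new();
    work.push_back(root);
    let mut visited = Vec::new();
    while !work.is_empty() {
        // A stack pops the most recently discovered directory (depth first),
        // while a queue pops the oldest one (breadth first).
        let dir = if depth_first {
            work.pop_back().unwrap()
        } else {
            work.pop_front().unwrap()
        };
        for entry in fs::read_dir(&dir)? {
            let path = entry?.path();
            if path.is_dir() {
                // In the breadth first order, per-directory state (such as a
                // gitignore matcher) for every sibling is alive at once; in
                // the depth first order it can be dropped much sooner.
                work.push_back(path.clone());
            }
            visited.push(path);
        }
    }
    Ok(visited)
}

fn main() -> io::Result<()> {
    for path in walk(PathBuf::from("./"), true)? {
        println!("{}", path.display());
    }
    Ok(())
}

With a wide, shallow tree (one directory containing thousands of subdirectories), the breadth first variant holds every subdirectory in the work list, and any per-directory state, before visiting a single file, which mirrors the peak memory behavior described above.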
ignore
The ignore crate provides a fast recursive directory iterator that respects various filters such as globs, file types and .gitignore files. This crate also provides lower level direct access to gitignore and file type matchers.

Dual-licensed under MIT or the UNLICENSE.
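As a sketch of that lower level access, a gitignore matcher can be built from a single .gitignore file and queried directly. The paths below ("./.gitignore", "target/debug") are assumptions for illustration:

use ignore::gitignore::Gitignore;

// Build a matcher from one .gitignore file. Any error is returned
// alongside the matcher rather than aborting construction.
let (gitignore, err) = Gitignore::new("./.gitignore");
if let Some(err) = err {
    eprintln!("problem reading .gitignore: {}", err);
}
// Ask whether a path would be ignored. The second argument indicates
// whether the path is a directory.
let m = gitignore.matched("target/debug", true);
println!("ignored: {}", m.is_ignore());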
Documentation
Usage
Add this to your Cargo.toml:
[dependencies]
ignore = "0.4"
and this to your crate root:
extern crate ignore;
Example
This example shows the most basic usage of this crate. This code will recursively traverse the current directory while automatically filtering out files and directories according to ignore globs found in files like .ignore and .gitignore:
use ignore::Walk;

for result in Walk::new("./") {
    // Each item yielded by the iterator is either a directory entry or an
    // error, so either print the path or the error.
    match result {
        Ok(entry) => println!("{}", entry.path().display()),
        Err(err) => println!("ERROR: {}", err),
    }
}
Example: advanced
By default, the recursive directory iterator will ignore hidden files and directories. This can be disabled by building the iterator with WalkBuilder:
use ignore::WalkBuilder;

for result in WalkBuilder::new("./").hidden(false).build() {
    println!("{:?}", result);
}
See the documentation for WalkBuilder for many other options.
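For instance, the builder also exposes the parallel walker that the commit message above refers to. A minimal sketch; the thread count here is an arbitrary choice for illustration:

use ignore::{WalkBuilder, WalkState};

// Traverse in parallel; the closure passed to `run` builds one visitor
// per worker thread.
WalkBuilder::new("./")
    .threads(4) // arbitrary thread count, purely for illustration
    .build_parallel()
    .run(|| {
        Box::new(|result| {
            match result {
                Ok(entry) => println!("{}", entry.path().display()),
                Err(err) => eprintln!("ERROR: {}", err),
            }
            WalkState::Continue
        })
    });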