mirror of
https://github.com/BurntSushi/ripgrep.git
synced 2024-12-12 19:18:24 +02:00
813c676eca
This commit fixes a subtle bug in how the line buffer was rolling its contents. Specifically, when ripgrep searches without memory maps, it uses a "roll" buffer for incremental line oriented search without needing to read the entire file into memory at once. The roll buffer works by reading a chunk of bytes from the file into memory, and then searching everything in that buffer up to the last `\n` byte. The bytes *after* the last `\n` byte are preserved, since they likely correspond to *part* of the next line.

Once ripgrep is done searching the buffer, it "rolls" the buffer such that the start of the next line is at the beginning of the buffer, and then ripgrep reads more data into the buffer starting at the (possibly) partial end of that line.

The implication of this strategy, necessarily so, is that a buffer must be big enough to fit a single line in memory. This is because the regex engine needs a contiguous block of memory to search, so there is no way to search anything smaller than a single line. So if a file contains a single line with 7.5 million bytes, then the buffer will grow to be at least that size. (Many files have super long lines like this, but they tend to be *binary* files, which ripgrep will detect and stop searching unless the user forces it with the `-a/--text` flag. So in practice, they aren't usually a problem. However, in this case, #1335 found a case where a plain text file had a line with 7.5 million bytes.)

Now, for performance reasons, ripgrep reuses these buffers across its search. Typically, it will create `N` of these line buffers when it starts (where `N` is the number of threads it is using), and then reuse them without creating any new ones as it searches through files. This means that if you search a file with a very long line, that buffer will expand to be big enough to store that line. ripgrep never contracts these buffers, so once it searches the next file, ripgrep will continue to use this large buffer.
While it might be prudent to contract these buffers in some circumstances, this isn't otherwise inherently a problem. The memory has already been allocated, and there isn't much cost to using it, other than the fact that ripgrep hangs on to it and never gives it back to the OS.

However, the `roll` implementation described above had a really important bug in it that was impacted by the size of the buffer. Specifically, it used the following to "roll" the partial line at the end of the buffer to the beginning:

    self.buf.copy_within_str(self.pos.., 0);

This means that if the buffer is very large, ripgrep will copy *everything* from `self.pos` (which might be very small, e.g., for small files) to the end of the buffer, and move it to the beginning of the buffer. This will happen repeatedly each time the buffer is used to search small files, which winds up being quite a large slowdown if the line was exceptionally large (say, megabytes).

It turns out that copying everything is completely unnecessary. We only need to copy the remainder of the last read to the beginning of the buffer. Everything *after* the last read in the buffer is just free space that can be filled for the next read. So, all we need to do is copy just those bytes:

    self.buf.copy_within_str(self.pos..self.end, 0);

... which is typically much, much smaller than the rest of the buffer.

This was likely also causing small performance losses in other cases as well. For example, when searching a lot of small files, ripgrep would likely do a lot more copying than necessary. Although, given that the default buffer size is 8KB, this extra copying was likely pretty small, and was thus harder to observe.

Fixes #1335
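The roll step described above can be illustrated with a small stand-alone sketch. Note this uses a plain byte slice and a free `roll` function, not ripgrep's actual `LineBuffer` type or its `copy_within_str` helper, which are internal to the `grep-searcher` crate:

```rust
// Sketch of the "roll" step. `pos` is the start of the trailing partial
// line, `end` is the end of the last read. Everything in `end..buf.len()`
// is free space left over from an earlier, larger search and must not
// be copied.
fn roll(buf: &mut [u8], pos: usize, end: usize) -> usize {
    // The bug: `buf.copy_within(pos.., 0)` copied all of `pos..buf.len()`,
    // which can be megabytes of stale free space. The fix copies only the
    // partial line `pos..end`:
    buf.copy_within(pos..end, 0);
    end - pos // the new `end`; the next read starts filling from here
}

fn main() {
    // A 16-byte buffer whose last read filled only bytes 0..7: one
    // complete line "abc\n" plus a partial line "def". The trailing
    // 'X' bytes stand in for stale free space.
    let mut buf = b"abc\ndefXXXXXXXXX".to_vec();
    let new_end = roll(&mut buf, 4, 7);
    assert_eq!(new_end, 3);
    assert_eq!(&buf[..new_end], b"def");
    println!("rolled {} bytes to the front", new_end);
}
```

With the buggy range, twelve bytes (including all the stale free space) would have been copied instead of three, on every buffer refill.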
examples
src
Cargo.toml
LICENSE-MIT
README.md
UNLICENSE
grep-searcher
A high-level library for executing fast line-oriented searches. This handles things like reporting contextual lines, counting lines, inverting a search, detecting binary data, automatic UTF-16 transcoding and deciding whether or not to use memory maps.
Dual-licensed under MIT or the UNLICENSE.
Documentation

NOTE: You probably don't want to use this crate directly. Instead, you should prefer the facade defined in the grep crate.
Usage
Add this to your Cargo.toml:
[dependencies]
grep-searcher = "0.1"
and this to your crate root:
extern crate grep_searcher;