This frame is invalid because `Window_Size = 0`, which makes
`Block_Maximum_Size = min(128 KB, Window_Size) = 0`. But the empty
compressed block has a `Block_Content` size of 2, which exceeds
`Block_Maximum_Size` and is therefore invalid.
The fix is to use a `Window_Descriptor` instead of the
`Single_Segment_Flag`, which sets `Window_Size = 1024`.
Hexdump before this PR: `28b5 2ffd 2000 1500 0000 00`
Hexdump after this PR: `28b5 2ffd 0000 1500 0000 00`
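For reference, a minimal sketch (not the decoder's actual code) of how
`Window_Size` is derived from the `Window_Descriptor` byte per the
Zstandard frame format, showing why the new `Window_Descriptor` byte
(the `00` following the frame header descriptor in the second hexdump)
yields 1024:

```c
#include <assert.h>

/* Window_Descriptor -> Window_Size, following the Zstandard frame
 * format (RFC 8878). Sketch for illustration only. */
static unsigned long long windowSizeFromDescriptor(unsigned char wd)
{
    unsigned const exponent = wd >> 3;  /* top 5 bits */
    unsigned const mantissa = wd & 7;   /* bottom 3 bits */
    unsigned long long const windowBase = 1ULL << (10 + exponent);
    unsigned long long const windowAdd  = (windowBase / 8) * mantissa;
    return windowBase + windowAdd;
}

int main(void)
{
    /* Descriptor 0x00: exponent = 0, mantissa = 0
     * => Window_Size = 1 << 10 = 1024. */
    assert(windowSizeFromDescriptor(0x00) == 1024);
    return 0;
}
```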
For issue #3482.
Such a scenario can happen, for example,
when attempting a decompression-only benchmark on invalid data.
Other possibilities include an allocation error in an intermediate step.
Previously, the benchmark would return immediately, but still return 0.
On the command line, this was confusing: the program appeared successful
(even though it displayed no success message).
Now it returns `!0`, which the command line can interpret as an error.
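A minimal sketch of the intended behavior (the function name here is
hypothetical; the real entry points live in the benchmark code under
`programs/`):

```c
#include <stdlib.h>

/* Hypothetical name, for illustration only: a benchmark that cannot run
 * now reports failure, and main() forwards it as a nonzero exit status
 * instead of silently returning 0. */
static int runDecodeOnlyBench(const char* fileName)
{
    (void)fileName;
    return 1;  /* e.g. invalid input or allocation error */
}

int main(int argc, char** argv)
{
    int const status = runDecodeOnlyBench(argc > 1 ? argv[1] : "");
    return status ? EXIT_FAILURE : EXIT_SUCCESS;  /* shell sees $? != 0 */
}
```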
In 32-bit mode, `ZSTD_getOffsetInfo()` can be called when `nbSeq == 0`,
in which case the offset table is uninitialized. The function should just
return 0 for both values, because there are no sequences.
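A sketch of the guard, assuming the general shape of
`ZSTD_getOffsetInfo()` (the struct and field names here are
illustrative, not necessarily the exact source):

```c
typedef struct ZSTD_seqSymbol_s ZSTD_seqSymbol;  /* opaque here; real type in zstd */

typedef struct {
    unsigned longOffsetShare;      /* illustrative field names */
    unsigned maxNbAdditionalBits;
} ZSTD_OffsetInfo;

static ZSTD_OffsetInfo ZSTD_getOffsetInfo(const ZSTD_seqSymbol* offTable, int nbSeq)
{
    ZSTD_OffsetInfo info = { 0, 0 };
    /* With nbSeq == 0 there are no sequences, and offTable may be
     * uninitialized, so return zeros without reading the table. */
    if (nbSeq != 0) {
        /* ... scan offTable and fill in both fields, as before ... */
        (void)offTable;
    }
    return info;
}
```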
Credit to OSS-Fuzz
The 32-bit decoder could corrupt the regenerated data by using regular
offset mode when there were actually long offsets. This is because we
were only considering the window size in the calculation, not the
dictionary size, so a large dictionary could allow offsets longer than
the window size.
Fix this in two ways:
1. Instead of looking at the window size, look at the total number of
referenceable bytes in the history buffer, and use that in the
comparison. Additionally, we were comparing against the wrong value;
it was too low. Fix that by computing exactly the maximum offset for
regular sequence decoding.
2. If long offsets are still possible after (1), check the offset code
decoding table: if the decoding table's maximum number of additional
bits is no more than `STREAM_ACCUMULATOR_MIN`, then we can't have long
offsets.
This gates the long-offsets decoder so it is used only when we are very
likely to actually have long offsets.
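Putting the two checks together, a hedged sketch of the gating logic
(the helper and parameter names are assumptions, not the exact source):

```c
#include <stddef.h>

/* 25 is zstd's STREAM_ACCUMULATOR_MIN on 32-bit builds. */
#define STREAM_ACCUMULATOR_MIN 25

/* Returns 1 only when the long-offsets decoder is actually needed.
 * totalHistorySize:     referenceable bytes (window + dictionary), fix (1)
 * maxShortOffset:       exact max offset for regular decoding, fix (1)
 * maxOfbAdditionalBits: max additional bits in the offset table, fix (2) */
static int useLongOffsetsDecoder(size_t totalHistorySize,
                                 size_t maxShortOffset,
                                 unsigned maxOfbAdditionalBits)
{
    if (totalHistorySize <= maxShortOffset)
        return 0;  /* (1) no referenceable byte needs a long offset */
    if (maxOfbAdditionalBits <= STREAM_ACCUMULATOR_MIN)
        return 0;  /* (2) the table cannot produce a long offset */
    return 1;
}
```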
Note that this bug only affects decoding of the data; the original
compressed data, if re-read with a patched decoder, will correctly
regenerate the original data. The exception is that the encoder also
previously had the same issue.
This fixes both of the open OSS-Fuzz issues.
Credit to OSS-Fuzz
The previous code had an issue: when `bitsConsumed == 32`, it would read
0 bits for the `ofBits` read, which violates the precondition of
`BIT_readBitsFast()`. This can happen when the stream is corrupted.
Fix this issue by always reading the maximum possible number of extra
bits. I've measured neutral decoding performance, likely because this
branch is rarely taken, but it should be faster anyway. And if not, it
only affects 32-bit decoding, where performance isn't as critical.
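A sketch of the fixed read path, excerpt-style, following the shape of
the 32-bit branch in sequence decoding (the constant name and
surrounding variables are assumptions):

```c
/* Inside the 32-bit path of sequence decoding (sketch). Rather than
 * computing a data-dependent split that can ask BIT_readBitsFast() for
 * 0 bits, always read a fixed number of extra bits in the second read,
 * so both reads request at least 1 bit. */
if (MEM_32bits() && (ofBits >= STREAM_ACCUMULATOR_MIN_32)) {
    U32 const extraBits = LONG_OFFSETS_MAX_EXTRA_BITS_32;  /* assumed constant, > 0 here */
    offset = ofBase + (BIT_readBitsFast(&seqState->DStream, ofBits - extraBits) << extraBits);
    BIT_reloadDStream(&seqState->DStream);
    offset += BIT_readBitsFast(&seqState->DStream, extraBits);
}
```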
Credit to OSS-Fuzz
Delete all unused FSE functions, now that we are no longer syncing
to/from upstream.
This avoids confusion about Zstd's stack usage like in Issue #3453.
It also removes dead code, which is always a plus.