This fixes clipping if the encoder input used the full 16 bit
input range (samples with a magnitude below 16383 worked fine).
The filtered subband samples should be at most 15 bits, while
the code earlier produced them scaled to 16 bits.
This makes the decoder output have double the magnitude
compared to before.
The spec reference samples don't test the QMF at all, which
is why this part slipped past initially.
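
A rough sketch of the scaling in question (illustrative C only; the
accumulator names and the shift count are placeholders, not the
actual libavcodec code):

    #include <stdint.h>

    /* A subband sample must fit in 15 bits: [-16384, 16383]. */
    static int16_t clip_subband(int32_t v)
    {
        if (v < -16384) return -16384;
        if (v >  16383) return  16383;
        return (int16_t)v;
    }

    /* sum_low/sum_high stand in for the 32 bit QMF filter sums. */
    static void qmf_output(int32_t sum_low, int32_t sum_high,
                           int16_t *xlow, int16_t *xhigh)
    {
        /* One bit less of shift produced samples scaled to 16 bits,
         * which clipped for full-range input; the extra shift keeps
         * them at 15 bits maximum. */
        *xlow  = clip_subband(sum_low  >> 13);
        *xhigh = clip_subband(sum_high >> 13);
    }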
Signed-off-by: Martin Storsjö <martin@martin.st>
Earlier, bits per sample was defined as 8, since
bits_per_coded_sample took the values 6, 7 or 8 and was used to
indicate whether to ignore the lower bits of the codeword; a
sketch of that use follows below.
g722 encodes 2 samples into one byte codeword, so the bits per
sample is 4.
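
A hedged illustration of that earlier use of bits_per_coded_sample
(the helper name is hypothetical, not the actual decoder code):

    #include <stdint.h>

    static uint8_t usable_codeword(uint8_t byte, int bits_per_coded_sample)
    {
        /* bits_per_coded_sample of 8, 7 or 6 corresponds to 64, 56
         * or 48 kbit/s; the lowest 0, 1 or 2 bits of each codeword
         * are masked off and ignored. */
        return byte & (uint8_t)(0xff << (8 - bits_per_coded_sample));
    }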
Changing this makes timestamp generation for g722 data correct,
both when encoding and when demuxing from raw g722 files.
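
A worked example of the duration arithmetic this change fixes (the
helper is illustrative, not actual libavformat code):

    #include <stdint.h>
    #include <stdio.h>

    static int64_t samples_in_packet(int64_t bytes, int bits_per_sample)
    {
        return bytes * 8 / bits_per_sample;
    }

    int main(void)
    {
        int64_t bytes = 160; /* 20 ms of G.722 at 64 kbit/s, 16 kHz */
        /* Old value 8: 160 samples, i.e. 10 ms at 16 kHz - wrong.   */
        printf("8 bits/sample: %lld\n", (long long)samples_in_packet(bytes, 8));
        /* New value 4: 320 samples, i.e. 20 ms at 16 kHz - correct. */
        printf("4 bits/sample: %lld\n", (long long)samples_in_packet(bytes, 4));
        return 0;
    }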
Signed-off-by: Martin Storsjö <martin@martin.st>