The encoder is sensitive to changes in precision, and its test target
was a compromise. It was already close to failing on x87 FPUs.
ff_mdct_init used double precision throughout, from the scale to the
computation of the MDCT exp tables. av_tx_init uses single precision
for the scale, a small input change that was enough to tip the test
into failing on x87 FPUs.
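A minimal sketch of the difference, with hypothetical names (the real
init code computes far more than this):

    #include <math.h>

    /* ff_mdct_init-style: the scale stays in double precision */
    static float mdct_scale_double(double scale)
    {
        return (float)sqrt(fabs(scale));
    }

    /* av_tx_init-style: the scale is rounded to float first */
    static float mdct_scale_single(double scale)
    {
        return sqrtf(fabsf((float)scale));
    }

On x87, intermediates are kept in 80-bit registers, so even this one
extra rounding step can push a borderline test over its target.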
Increase the fuzz factor, in line with the other AAC encoder tests, to fix this.
The twoloop coder is highly loaded with (pseudo-)perceptual metrics,
and the aim of the tests is to piece-wise test each function of the
encoder, for which the 'fast' coder is perfect, since it only decides
on which scalefactors to use, rather than enabling or disabling
encoder features.
Will prevent FATE from breaking once LIBAVCODEC_VERSION_MINOR is bumped to 100.
Reported-by: zane
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
It tests a useless profile which sounds no better than regular AAC and
which takes extremely long to encode anything. It has also been behind
the experimental flag for as long as it has been supported.
Should be removed altogether sometime in the future.
The twoloop coder sounds decent at low bitrates; however, at higher
bitrates it sounds worse than the fast coder (which used to be the old
twoloop coder before October 2015) and needs quite a lot more CPU.
Change the default to fast. It has been well tested and has had few
changes over the years, so it's been confirmed to be quite stable.
Also change its description (which has not been valid for more than a
year) and the documentation.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This removes the current API-violating behavior of overwriting the
stream's extradata during packet filtering, something that should not
happen after the av_bsf_init() call.
The extradata generated by the bitstream filter is no longer available
during write_header(), and as such is not usable with non-seekable
output. The FATE tests are updated to reflect this.
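A minimal sketch of the intended flow under this rule, with error
handling omitted and st/pkt assumed from the surrounding muxing code:

    #include <libavcodec/avcodec.h>

    const AVBitStreamFilter *f = av_bsf_get_by_name("aac_adtstoasc");
    AVBSFContext *bsf;

    av_bsf_alloc(f, &bsf);
    avcodec_parameters_copy(bsf->par_in, st->codecpar);
    av_bsf_init(bsf);
    /* bsf->par_out->extradata is final from this point on; filtering
     * packets below must no longer rewrite it */
    av_bsf_send_packet(bsf, pkt);
    while (av_bsf_receive_packet(bsf, pkt) == 0) {
        /* write the filtered packet */
    }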
Signed-off-by: James Almer <jamrial@gmail.com>
* commit 'b55566db4c51d920a6496455bb30a608e5a50a41':
avconv: use avcodec_parameters_copy() with streamcopy
The fate-aac-autobsf-adtstoasc result changes because the audio bit
depth is now written based on the sample format, which is now available.
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
I/S energy, especially when it comes to phase cancellations,
needs to use signed coefficients as input, yet it was using
abs'd coefficients. That was a slight bug.
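A minimal sketch of the difference (names hypothetical):

    float ener0 = 0.0f, ener1 = 0.0f, ener01 = 0.0f;
    for (int i = 0; i < n; i++) {
        ener0  += L[i] * L[i];
        ener1  += R[i] * R[i];
        /* signed sum: opposite-phase coefficients cancel here, as they
         * would on decode; fabsf(L[i]) + fabsf(R[i]) hides that */
        ener01 += (L[i] + R[i]) * (L[i] + R[i]);
    }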
The relative error between two encoding strategies is the simple
difference of rate-distortion values, and not the absolute
difference. An absolute measure would allow the quantization error to
worsen as well as improve.
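In other words (a sketch, with hypothetical names):

    /* signed difference: negative means the new strategy is strictly
     * better in rate-distortion terms */
    float delta = (dist_new + lambda * bits_new)
                - (dist_old + lambda * bits_old);
    if (delta < 0.0f)
        use_new_strategy();
    /* comparing fabsf(delta) against a threshold would also accept
     * changes that make things worse */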
1. Fix sf_idx and band_type indexing to address only the first
subwindow in the group (others could hold garbage values; see the
sketch after this list)
2. Don't step on ms_mask when is_mask is set. I/S selection
already sets the ms_mask properly and shouldn't be overridden.
3. Use the mid/side cb/sf when computing the coding error, as it
should be, since those are the cb/sfs that will eventually be set.
4. Fix distortion computation on multi-subwindow groups (was
subtracting the bits terms multiple times)
5. Clear ms_mask when one side uses PNS and the other doesn't.
When using PNS, ms_mask signals correlated noise, which can be
detected just like regular M/S detection, so we don't skip
noise bands, but when only one side uses PNS setting the flag
can confuse some encoders, so avoid that.
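A sketch of fix 1, using the usual w*16+g indexing (names assumed):

    /* w is the first window of the group; only that subwindow holds
     * valid sf_idx/band_type entries for the whole group */
    int sf = sce->sf_idx[w*16 + g];
    int cb = sce->band_type[w*16 + g];
    /* reading sf_idx[(w + w2)*16 + g] for w2 > 0 may pick up garbage */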
This should fix this test failing on kfreebsd, a regression since
6e5dbe7, which decreased the CMP_TARGET by 1.
Reviewed-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
With at most 7 coefficients per short window, the extra precision
makes a difference and seems to reduce crackling and stddev even
further.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This patch does 4 things, all of which interact and thus it
wouldn't be possible to commit them separately without causing
either quality regressions or assertion failures.
Fate comparison targets don't all reflect improvements in
quality, yet listening tests show substantially improved quality
and stability.
1. Increase SF range utilization.
The spec requires SF delta values to be constrained within the
range -60..60. The previous code was applying that range to the
whole SF array rather than to the deltas of consecutive values,
because doing the latter requires smarter code: zeroing or otherwise
skipping a band may invalidate lots of SF choices.
This patch implements that logic to allow the coders to utilize
the full dynamic range of scalefactors, increasing quality quite
considerably, and fixing delta-SF-related assertion failures,
since now the limitation is enforced rather than asserted.
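A minimal sketch of the delta constraint (band_is_zeroed is a
hypothetical predicate; the real logic also has to revisit earlier
choices when a band gets zeroed):

    int prev = -1;
    for (int g = 0; g < nb_bands; g++) {
        if (band_is_zeroed(g))   /* no scalefactor is transmitted */
            continue;
        if (prev >= 0)           /* constrain the delta, not the value */
            sf[g] = av_clip(sf[g], prev - 60, prev + 60);
        prev = sf[g];
    }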
2. PNS tweaks
The previous modification makes big improvements in twoloop's
efficiency, and every time that happens PNS logic needs to be
tweaked accordingly to keep it from stepping all over twoloop's
decisions. This patch includes such modifications.
3. Account for lowpass cutoff during PSY analysis
The closer PSY's allocation is to the final allocation, the better
the quality is, and given these modifications, twoloop is now
very efficient at avoiding holes. Thus, to compute accurate
thresholds, PSY needs to account for the lowpass applied
implicitly during twoloop (by zeroing high bands).
This patch makes twoloop set the cutoff in psymodel's context
the first time it runs, and makes PSY account for it during
threshold computation, making PE and threshold computations
closer to the final allocation and thus achieving better
subjective quality.
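A sketch of the handshake (field and helper names are assumptions):

    /* twoloop, first run: publish the cutoff it enforces by zeroing */
    if (!s->psy.cutoff)
        s->psy.cutoff = cutoff;

    /* psymodel: bands wholly above the cutoff contribute no energy,
     * hence no PE and no threshold, matching the coder's zeroing */
    if (band_start(g) >= s->psy.cutoff)
        band->energy = 0.0f;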
4. Tweaks to RC lambda tracking loop in relation to PNS
Without this tweak some corner cases cause quality regressions.
Basically, lambda needs to react faster to overall bitrate
efficiency changes, since PNS can now be quite successful in
enforcing maximum bitrates when PSY allocates too many bits
to the lower bands; that suppresses the signals RC logic uses to
lower lambda in those cases and causes overly aggressive PNS.
This tweak makes PNS much less aggressive, though it can still
use some further tweaks.
Also update MIPS specializations and adjust fuzz
Also in lavc/mips/aacpsy_mips.h: remove trailing whitespace
As noted in a comment, pe.min in the reference encoder
is centered around current pe. The bit reservoir algo
needs pe.min to be a local minimum, because it can only
account for local PE variations. If it's set to a global
minimum as was being done, bit reservoir logic doesn't
work as efficiently.
This patch tries to forget old minimums and converge to
a local minimum without losing the stability of the
previous solution. Listening tests until now suggest this
solves numerous RC issues.
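A sketch of the idea (the forget rate is a made-up constant):

    if (pe < ctx->pe.min) {
        ctx->pe.min = pe;            /* track a new local minimum */
    } else {
        /* slowly forget the old minimum so it stays local */
        ctx->pe.min += (pe - ctx->pe.min) * 0.05f;
    }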
This fixes a FATE failure after bumping the minor version.
It's unknown why this is not needed for the other AAC tests; more
investigation is needed, but for now I don't want to leave it broken
while it's investigated.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
There were some errors in the calculation as well as an entire
unnecessary loop to find the gain coefficient. Merge the
two loops.
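Roughly (a sketch; the actual quantities depend on the surrounding
code):

    /* before: one loop for the band energy and a second one just to
     * find the gain; after: both in a single pass */
    float energy = 0.0f, maxval = 0.0f;
    for (int i = 0; i < n; i++) {
        energy += coefs[i] * coefs[i];
        maxval  = fmaxf(maxval, fabsf(coefs[i]));
    }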
Thanks to @ubitux for the suggestions and testing.
The fate test command line is supposed to serve as an example. It's
nicer to explicitly state the profile rather than relying on options
that force it for you.
GCC 3.4 miscompiles it on sunos. Date of release? The second of
August two thousand and five, anno Domini. That's ten years two
months and fourteen days ago. Three thousand seven hundred and
twenty seven days ago. One sixth of the average life expectancy
of a person living in a country with a human development index
of zero point eight hundred and eight, equality adjusted.
GCC 4.3 also miscompiles it, though not as badly.
LTP encoding, and thus the test, is currently a bit slow, taking
twice as long as the other tests do, so the total time to encode on
that test might be cut down in the future.
This finalizes merging of the work in the patches in ticket #2686.
Improvements to twoloop and RC logic are extensive.
The non-exhaustive list of twoloop improvements includes:
- Tweaks to distortion limits on the RD optimization phase of twoloop
- Deeper search in twoloop
- PNS information marking to let twoloop decide when to use it
(turned out having the decision made separately wasn't working)
- Tonal band detection and prioritization
- Better band energy conservation rules
- Strict hole avoidance
For rate control:
- Use psymodel's bit allocation to allow proper use of the bit
reservoir. Don't work against the bit reservoir by moving lambda
in the opposite direction when psymodel decides to allocate more/less
bits to a frame.
- Retry the encode if the effective rate lies outside a reasonable
margin of psymodel's allocation or the selected ABR.
- Log average lambda at the end. Useful info for everyone, but especially
for tuning of the various encoder constants that relate to lambda
feedback.
Psy:
- Do not apply lowpass with a FIR filter, instead just let the coder
zero bands above the cutoff. The FIR filter induces group delay,
and while zeroing bands causes ripple, it's lost in the quantization
noise.
- Experimental VBR bit allocation code
- Tweak automatic lowpass filter threshold to maximize audio bandwidth
at all bitrates while still providing acceptable, stable quality.
I/S:
- Phase decision fixes. Unrelated to #2686, but the bugs only surfaced
when the merge was finalized. Measure I/S band energy accounting for
phase, and prevent I/S and M/S from both being applied.
PNS:
- Avoid marking short bands with PNS when they're part of a window
group in which there's a large variation of energy from one window
to the next. PNS can't preserve those and the effect is extremely
noticeable.
M/S:
- Implement BMLD protection similar to that specified in
  ISO/IEC 13818-7:2003, Appendix C Section 6.1. Since the M/S decision
  doesn't conform to Section 6.1, a different method had to be
  implemented, but it should provide equivalent protection.
- Move the decision logic closer to the method specified in
  ISO/IEC 13818-7:2003, Appendix C Section 6.1. Specifically, make
  sure M/S needs fewer bits than dual stereo (see the sketch after
  this list).
- Don't apply M/S in bands that are using I/S
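A sketch of that bits comparison, also honoring the I/S exclusion
(names hypothetical):

    /* keep M/S only when it beats dual (L/R) stereo in RD terms, and
     * never in bands already coded with I/S */
    float cost_lr = dist_l   + dist_r    + lambda * (bits_l   + bits_r);
    float cost_ms = dist_mid + dist_side + lambda * (bits_mid + bits_side);
    cpe->ms_mask[w*16 + g] = !cpe->is_mask[w*16 + g] && cost_ms < cost_lr;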
Now, this of course needed adjustments in the compare targets and
fuzz factors of the AAC encoder's fate tests, but if you're wondering
why the targets go up (more distortion), consider that the previous
coder was using too many bits on LF content (far more than required
by psy), and thus those signals will now be more distorted, not less.
The extra distortion isn't audible, though; I carried out extensive
ABX testing to make sure.
A very similar patch was also extensively tested by Kamendo2 in
the context of #2686.
This patch tweaks search_for_pns to be both more
aggressive and more careful when applying PNS. On
the one hand, it will again try to use PNS on zero
(or effectively zero) bands. For this, both zeroes
and band_type have to be checked (some ZERO bands
aren't marked in zeroes). On the other hand, a more
accurate rate-distortion measure avoids using PNS
where it would cause audible distortion.
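The rate-distortion measure, roughly (names hypothetical):

    /* accept PNS only when its RD cost beats actually coding the band */
    float cost_band = dist_band + lambda * bits_band;
    float cost_pns  = dist_pns  + lambda * bits_pns; /* PNS bits are tiny */
    if (cost_pns < cost_band)
        sce->band_type[w*16 + g] = NOISE_BT;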
Also fixed a small bug in the computation of freq
that caused PNS usage on low-frequency bands during
8-short windows. This allows re-enabling PNS during
8-short.
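The freq fix, sketched (names hypothetical): the band's start
frequency must be measured from the start of the current short
window, not from the start of the frame.

    float freq_mult = sample_rate / (1024.0f / num_windows) / 2.0f;
    float freq      = (start - wstart) * freq_mult; /* wstart = w * 128 */
    if (freq < NOISE_LOW_LIMIT)  /* too low in frequency for PNS */
        continue;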
This patch modifies the encode frame function to
retry encoding the frame when the resulting bit count
is too far off target, but only adjusts lambda
in small, incremental steps. It also makes the logic
more conservative - otherwise it would contend with
bit reservoir-related variations in bit allocation,
and result in artifacts when frames have to be truncated
(usually at high bit rates transitioning from low
complexity to high complexity).
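A sketch of the retry loop (constants are made up):

    int tries = 0;
    do {
        int frame_bits = encode_frame(s, frame);    /* hypothetical */
        float ratio    = frame_bits / (float)target_bits;
        if      (ratio > 1.25f) s->lambda *= 1.05f; /* over: quantize harder */
        else if (ratio < 0.80f) s->lambda *= 0.95f; /* under: relax a bit */
        else break;                                 /* close enough */
    } while (++tries < 5);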
This patch refactors the AAC coders to reuse code
between the MIPS port and the regular, portable C code.
There were two main functions that had to use
hand-optimized versions of quantization code:
- search_for_quantizers_twoloop
- codebook_trellis_rate
Those two were split into their own template header
files so they can be inlined inside both the MIPS port
and the generic code. In each context, they'll link
to their specialized implementations, and thus be
optimized by the compiler.
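The shape of the pattern, sketched with made-up names:

    /* twoloop_template.h: the shared body, inlined where included */
    static void search_twoloop(float *coefs, int n)
    {
        for (int i = 0; i < n; i++)
            quantize_band(&coefs[i]); /* the includer's specialization */
    }

    /* aaccoder.c */
    static void quantize_band(float *c) { /* portable C version */ }
    #include "twoloop_template.h"

    /* mips/aaccoder_mips.c */
    static void quantize_band(float *c) { /* hand-optimized version */ }
    #include "twoloop_template.h"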
This approach I believe is better than maintaining
several copies of each function. As past experience has
proven, having to keep those in sync was error prone.
In this way, they will remain in sync by default.
Also, an implementation of the dequantized output
argument for the optimized quantize_and_encode
functions is included in the patch. While the current
implementation of search_for_pred still isn't using
it, future iterations of main prediction probably will.
It should not imply any measurable performance hit while
not being used.
The recent commits change the value slightly. Even though it's
within the threshold, it's better to risk as little as possible,
especially when different systems, processors, FPUs and compilers
are involved.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit changes a few things about the noise substitution
logic:
- Brings back the quantization factor (reduced to 3) during
scalefactor index calculations.
- Rejects any zeroed bands. They should be inaudible and it's
  a waste to transmit the scalefactor indices for these.
- Uses swb_offsets instead of incrementing a 'start' with every
window group size.
- Rejects all PNS during short windows.
Overall improves quality. There was a plan to use the lfg system
to create the random numbers instead of using whatever the decoder
uses but for now this works fine. Entropy is far from important here.
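The swb_offset change from the list above, sketched (names assumed):

    for (int g = 0; g < sce->ics.num_swb; g++) {
        /* read the band start straight from the offset table instead
         * of accumulating start += swb_size[g] across window groups */
        int start = sce->ics.swb_offset[g];
        /* ... */
    }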
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit once again improves the PNS implementation by scaling the
thresholds with frequency. The thresholds get looser as the frequency
increases since higher frequencies are basically noise to human ears.
Also, this introduces quantization error correction for PNS. Should
the error be too large, no PNS will be used. The energy_ratio is used
to regulate the actual encoded PNS energy: if the generated PNS
energy is higher than the energy from the psy system, energy_ratio
is used to correct it so that hopefully once requantized and
transmitted the value in the decoder will be closer to what the
encoder has.
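Both mechanisms, sketched (constants and names are assumptions):

    /* the acceptance threshold loosens as frequency rises */
    float thr_scale = fmaxf(freq / NOISE_LOW_LIMIT, 1.0f);
    if (pns_error > band_thr * thr_scale)
        continue;  /* quantization error too large: don't use PNS */

    /* fold requantization error back into the transmitted energy */
    float energy_ratio = psy_energy / pns_energy;
    sce->pns_ener[w*16 + g] = energy_ratio * psy_energy;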
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This was an oversight when the IS system was being first implemented.
The ener01 part was largely a result of trial and error, and the
fact that the sum of coef0 and coef1 could come out to zero was
overlooked. Once ener01 turns to zero it's used to divide the left
channel energy, which doesn't turn out so well: it fills IS[]
with -nan's and inf's, which in turn confuse quantize_band_cost.
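The guard, sketched (names hypothetical):

    float ener01 = 0.0f;
    for (int i = 0; i < n; i++) {
        float sum = coef0[i] + coef1[i];
        ener01 += sum * sum;         /* can cancel to exactly zero */
    }
    if (ener01 <= 0.0f || ener0 <= 0.0f)
        return 0;                    /* I/S can't be evaluated here */
    float is_ratio = ener0 / ener01; /* safe: no NaNs/infs enter IS[] */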
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>