Fixes undefined operation
Fixes part of 668007-media
Found-by: Matt Wolenetz <wolenetz@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
gives very bad quality with the soxr resampler.
linear_interp is intended to use linear interpolation
between filter banks so that quality is better.
I guess this was misunderstood as "do not use a filter bank,
but directly interpolate linearly between samples".
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
Separate dsp.resample into dsp.resample_common and dsp.resample_linear,
and choose to call the faster resample_common even when linear_interp=on,
as long as c->frac and c->dst_incr_mod are both zero.
This speeds up resampling when exact_rational and linear_interp are both
enabled, because exact_rational forces c->frac and c->dst_incr_mod to
zero when soft compensation does not happen.
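A minimal self-contained sketch of the dispatch (hypothetical types and field
names, not the actual swresample code):

    #include <stdio.h>

    typedef struct ResampleContext {
        int linear;          /* linear_interp requested */
        int frac;            /* fractional position accumulator */
        int dst_incr_mod;    /* non-zero only during soft compensation */
    } ResampleContext;

    static const char *pick_resample_fn(const ResampleContext *c)
    {
        /* linear interpolation between filter banks is only needed while
           frac/dst_incr_mod can be non-zero, i.e. during soft compensation */
        if (c->linear && (c->frac || c->dst_incr_mod))
            return "dsp.resample_linear";
        return "dsp.resample_common";
    }

    int main(void)
    {
        ResampleContext c = { 1, 0, 0 };      /* linear_interp=on, no compensation */
        printf("%s\n", pick_resample_fn(&c)); /* dsp.resample_common */
        c.dst_incr_mod = 3;                   /* soft compensation active */
        printf("%s\n", pick_resample_fn(&c)); /* dsp.resample_linear */
        return 0;
    }
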
benchmark on exact_rational=on:linear_interp=on
          old       new
real      8.432s    5.097s
user      7.679s    4.989s
sys       0.125s    0.107s
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
A high phase_count is only useful when dst_incr_mod is non-zero;
in other words, it is only useful for soft compensation.
On init, build the filter with a low phase_count,
but when soft compensation is enabled, rebuild the filter
with a high phase_count.
This approach saves a lot of memory.
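Roughly, the lazy rebuild looks like this (hypothetical names and sizes; the
real build_filter takes many more parameters):

    #include <stdio.h>
    #include <stdlib.h>

    #define LOW_PHASE_COUNT   128    /* illustrative values, not the real defaults */
    #define HIGH_PHASE_COUNT 1024
    #define TAP_COUNT          16

    typedef struct ResampleContext {
        double *filter_bank;
        int     phase_count;
    } ResampleContext;

    /* stand-in for the real filter construction */
    static int build_filter(ResampleContext *c, int phase_count)
    {
        double *bank = realloc(c->filter_bank,
                               sizeof(*bank) * phase_count * TAP_COUNT);
        if (!bank)
            return -1;
        c->filter_bank = bank;
        c->phase_count = phase_count;
        return 0;
    }

    /* called the first time soft compensation makes dst_incr_mod non-zero */
    static int ensure_high_phase_count(ResampleContext *c)
    {
        if (c->phase_count >= HIGH_PHASE_COUNT)
            return 0;
        return build_filter(c, HIGH_PHASE_COUNT);
    }

    int main(void)
    {
        ResampleContext c = { NULL, 0 };
        build_filter(&c, LOW_PHASE_COUNT);       /* cheap filter bank on init */
        printf("init: %d phases\n", c.phase_count);
        ensure_high_phase_count(&c);             /* rebuilt only when compensating */
        printf("after compensation: %d phases\n", c.phase_count);
        free(c.filter_bank);
        return 0;
    }
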
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
because exact_rational does not guarantee
that phase_count is even
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
Gives high quality resampling:
as good as with linear_interp=on,
as fast as without linear_interp=on.
Tested visually with ffplay:
ffplay -f lavfi "aevalsrc='sin(10000*t*t)', aresample=osr=48000, showcqt=gamma=5"
ffplay -f lavfi "aevalsrc='sin(10000*t*t)', aresample=osr=48000:linear_interp=on, showcqt=gamma=5"
ffplay -f lavfi "aevalsrc='sin(10000*t*t)', aresample=osr=48000:exact_rational=on, showcqt=gamma=5"
Slight speed improvement.
For a fair comparison, benchmarked with -cpuflags 0.
audio.wav is a ~1 hour, 44100 Hz, stereo, 16-bit WAV file.
ffmpeg -i audio.wav -af aresample=osr=48000 -f null -
          old        new
real      13.498s    13.121s
user      13.364s    12.987s
sys       0.131s     0.129s
linear_interp=on
          old        new
real      23.035s    23.050s
user      22.907s    22.917s
sys       0.119s     0.125s
exact_rational=on
real      12.418s
user      12.298s
sys       0.114s
It also opens the possibility to decrease memory usage if soft compensation is ignored.
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
This fixes the sum of the integer coefficients ending up
larger than the value representing unity.
The issue occurs with qN0.dts when converting to stereo.
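For illustration only (a hypothetical Q15 rematrix row, not necessarily the
actual fix): rounding each coefficient independently can push the row sum
above the value representing unity, and one remedy is to take the excess out
of the largest coefficient:

    #include <math.h>
    #include <stdio.h>

    #define ONE_Q15 (1 << 15)

    static void row_to_q15(const double *coef, int *q, int n)
    {
        int sum = 0, imax = 0;
        for (int i = 0; i < n; i++) {
            q[i] = (int)lrint(coef[i] * ONE_Q15);
            sum += q[i];
            if (q[i] > q[imax])
                imax = i;
        }
        if (sum > ONE_Q15)               /* rounding pushed the sum above unity */
            q[imax] -= sum - ONE_Q15;
    }

    int main(void)
    {
        /* three equal weights: 3 * round(32768 / 3) = 32769 > 32768 without the fixup */
        const double coef[3] = { 1.0 / 3, 1.0 / 3, 1.0 / 3 };
        int q[3];
        row_to_q15(coef, q, 3);
        printf("%d %d %d (sum %d)\n", q[0], q[1], q[2], q[0] + q[1] + q[2]);
        /* prints: 10922 10923 10923 (sum 32768) */
        return 0;
    }
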
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This may be a slightly surprising optimization, but is actually based on
an understanding of how math libraries compute trigonometric functions.
Explanation is given here so that future development uses libm more effectively
across the codebase.
All libm implementations essentially compute transcendental functions via some
kind of polynomial approximation, be it Taylor-Maclaurin or Chebyshev.
Correction terms are added via polynomial correction factors when needed
to squeeze out the last bits of accuracy. Lookup tables are also
inserted strategically.
In the case of trigonometric functions, periodicity is exploited via
first doing a range reduction to an interval around zero, and then using
some polynomial approximation.
This range reduction is the most natural way of doing things; otherwise one
would need separate polynomials for ranges in different periods, which makes no
sense whatsoever.
To avoid the need for the range reduction, it is helpful to feed in
arguments as close to the origin as possible for the trigonometric
functions. In fact, this also makes sense from an accuracy point of view:
IEEE floating point has far more resolution for small numbers than big ones.
This patch does this for the Blackman-Nuttall filter, and yields a
non-negligible speedup.
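As an illustration of the argument-centering idea (generic Blackman-Nuttall
coefficients; a sketch, not the actual swresample code): shifting the window
argument from [0, 2*pi] to [-pi, pi] keeps the cos() arguments near the
origin, so libm needs little or no range reduction.

    #include <math.h>
    #include <stdio.h>

    static double bn_plain(double w)      /* w in [0, 2*pi] */
    {
        return 0.3635819 - 0.4891775 * cos(w)
                         + 0.1365995 * cos(2 * w)
                         - 0.0106411 * cos(3 * w);
    }

    static double bn_centered(double u)   /* u = w - pi, in [-pi, pi] */
    {
        /* cos(u + pi) = -cos(u), cos(2u + 2pi) = cos(2u), cos(3u + 3pi) = -cos(3u) */
        return 0.3635819 + 0.4891775 * cos(u)
                         + 0.1365995 * cos(2 * u)
                         + 0.0106411 * cos(3 * u);
    }

    int main(void)
    {
        for (int i = 0; i <= 8; i++) {
            double w = 2 * M_PI * i / 8;
            /* differences are at the rounding-error level */
            printf("%8.5f  % .17e\n", w, bn_plain(w) - bn_centered(w - M_PI));
        }
        return 0;
    }
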
Sample benchmark (x86-64, Haswell, GNU/Linux)
test: fate-swr-resample-dblp-2626-44100
old:
18893514 decicycles in build_filter (loop 1000), 256 runs, 0 skips
18599863 decicycles in build_filter (loop 1000), 512 runs, 0 skips
18445574 decicycles in build_filter (loop 1000), 1000 runs, 24 skips
new:
16290697 decicycles in build_filter (loop 1000), 256 runs, 0 skips
16267172 decicycles in build_filter (loop 1000), 512 runs, 0 skips
16251105 decicycles in build_filter (loop 1000), 1000 runs, 24 skips
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
When upsampling, factor is set to 1, so sines need to be evaluated only
once for each phase and the complexity should not depend on the number
of filter taps. This patch does the desired precomputation, yielding
significant speedups. Hard guarantees on the gain are not possible, but gains
themselves are obvious and are illustrated below.
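The key identity (illustrative, not the actual build_filter code): for integer
k and phase fraction p, sin(pi*(k + p)) = (-1)^k * sin(pi*p), so with
factor == 1 a single sin() per phase suffices and only the sign changes
across taps.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const int phase_count = 4, tap_count = 8, center = tap_count / 2;

        for (int ph = 0; ph < phase_count; ph++) {
            double p = (double)ph / phase_count;
            double s = sin(M_PI * p);                  /* one sin() per phase */
            for (int i = 0; i < tap_count; i++) {
                int    k     = i - center;
                double naive = sin(M_PI * (k + p));    /* per-tap sin() */
                double fast  = (k % 2 != 0) ? -s : s;  /* sign flip only */
                printf("ph=%d i=%d  % .17f  % .17f\n", ph, i, naive, fast);
            }
        }
        return 0;
    }
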
Sample benchmark (x86-64, Haswell, GNU/Linux)
test: fate-swr-resample-dblp-2626-44100
old:
29161085 decicycles in build_filter (loop 1000), 256 runs, 0 skips
28821467 decicycles in build_filter (loop 1000), 512 runs, 0 skips
28668201 decicycles in build_filter (loop 1000), 1000 runs, 24 skips
new:
14351936 decicycles in build_filter (loop 1000), 256 runs, 0 skips
14306652 decicycles in build_filter (loop 1000), 512 runs, 0 skips
14299923 decicycles in build_filter (loop 1000), 1000 runs, 24 skips
Note that this does not statically allocate the sin lookup table. This
may be done for the default 1024 phases, yielding a 512*8 = 4 kB array,
which should be small enough.
That should yield a further small improvement. Nevertheless, it is separate from
this patch, is more ambiguous due to the binary size increase, and requires a
LUT to be generated offline.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This improves accuracy for the bessel function at large arguments, and this in turn
should improve the quality of the Kaiser window. It also improves the
performance of the bessel function and hence build_filter by ~ 20%.
Details are given below.
Algorithm: taken from the Boost project, which has done a detailed
investigation of the accuracy of its method, as compared with e.g. the
GNU Scientific Library (GSL):
http://www.boost.org/doc/libs/1_52_0/libs/math/doc/sf_and_dist/html/math_toolkit/special/bessel/mbessel.html.
Boost source code (also cited and licensed in the code):
https://searchcode.com/codesearch/view/14918379/.
Accuracy: sample values may be obtained as follows. i0 denotes the old bessel code,
i0_boost the approach here, and i0_real an arbitrary precision result (truncated) from Wolfram Alpha:
type "bessel i0(6.0)" to reproduce. These are evaluation points that occur for
the default kaiser_beta = 9.
Some illustrations:
bessel(8.0)
i0 (8.000000) = 427.564115721804739678191254
i0_boost(8.000000) = 427.564115721804796521610115
i0_real (8.000000) = 427.564115721804785177396791
bessel(6.0)
i0 (6.000000) = 67.234406976477956163762428
i0_boost(6.000000) = 67.234406976477970374617144
i0_real (6.000000) = 67.234406976477975326188025
Reason for accuracy: the main accuracy benefits come at larger bessel arguments,
where the Taylor-Maclaurin method is not that good: the 23+ iterations
needed there (since the series is expanded about 0) can cause
significant floating point error accumulation.
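For orientation only, a self-contained sketch of the small/large-argument
split; the truncated Taylor and asymptotic series below use generic
coefficients and an illustrative split point, and are far less accurate than
the Boost minimax rational approximations used by the actual patch:

    #include <math.h>
    #include <stdio.h>

    static double i0_sketch(double x)
    {
        x = fabs(x);
        if (x < 7.75) {                      /* small x: power series */
            double q = x * x / 4, term = 1, sum = 1;
            for (int k = 1; term > 1e-20 * sum; k++) {
                term *= q / (k * (double)k);
                sum  += term;
            }
            return sum;
        } else {                             /* large x: asymptotic expansion */
            double term = 1, sum = 1;
            for (int k = 1; k < 2 * x; k++) {    /* stop before terms grow again */
                term *= (2.0 * k - 1) * (2.0 * k - 1) / (8.0 * k * x);
                sum  += term;
            }
            return exp(x) / sqrt(2 * M_PI * x) * sum;
        }
    }

    int main(void)
    {
        /* compare against the i0_real values quoted above; the large-x branch
           of this sketch is only good to ~7 digits */
        printf("i0_sketch(6.0) = %.15f\n", i0_sketch(6.0));
        printf("i0_sketch(8.0) = %.15f\n", i0_sketch(8.0));
        return 0;
    }
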
Benchmarks: Obtained on x86-64, Haswell, GNU/Linux via a loop calling
build_filter 1000 times:
test: fate-swr-resample-dblp-44100-2626
new:
995894468 decicycles in build_filter(loop 1000), 256 runs, 0 skips
1029719302 decicycles in build_filter(loop 1000), 512 runs, 0 skips
984101131 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
old:
1250020763 decicycles in build_filter(loop 1000), 256 runs, 0 skips
1246353282 decicycles in build_filter(loop 1000), 512 runs, 0 skips
1220017565 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
A further ~ 5% may be squeezed by enabling -ftree-vectorize. However,
this is a separate issue from this patch.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
Kaiser windows inherently don't require beta to be an integer. This was
an arbitrary restriction. Moreover, soxr does not require it, and in
fact often estimates beta to a non-integral value.
Thus, this patch allows greater flexibility for swresample clients.
Micro version is updated.
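For reference, the Kaiser window itself (standard textbook form, not the
swresample implementation) is perfectly well defined for non-integral beta:

    #include <math.h>
    #include <stdio.h>

    static double i0_series(double x)   /* plain Taylor series, fine for small beta */
    {
        double q = x * x / 4, term = 1, sum = 1;
        for (int k = 1; term > 1e-16 * sum; k++) {
            term *= q / (k * (double)k);
            sum  += term;
        }
        return sum;
    }

    int main(void)
    {
        const int    N    = 9;
        const double beta = 8.6;        /* e.g. a soxr-style estimate, not an integer */
        for (int n = 0; n < N; n++) {
            double t = 2.0 * n / (N - 1) - 1.0;
            printf("w[%d] = %f\n", n,
                   i0_series(beta * sqrt(1 - t * t)) / i0_series(beta));
        }
        return 0;
    }
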
Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This uses the trigonometric double and triple angle formulae to avoid
repeated (expensive) evaluation of libc's cos().
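Concretely (generic Blackman-Nuttall coefficients, illustration only, not the
patch itself): cos(2w) and cos(3w) can be derived from a single c = cos(w)
via cos(2w) = 2c^2 - 1 and cos(3w) = 4c^3 - 3c.

    #include <math.h>
    #include <stdio.h>

    static double bn_three_cos(double w)
    {
        return 0.3635819 - 0.4891775 * cos(w)
                         + 0.1365995 * cos(2 * w)
                         - 0.0106411 * cos(3 * w);
    }

    static double bn_one_cos(double w)
    {
        double c  = cos(w);                 /* single libm call */
        double c2 = 2 * c * c - 1;          /* double angle     */
        double c3 = (4 * c * c - 3) * c;    /* triple angle     */
        return 0.3635819 - 0.4891775 * c + 0.1365995 * c2 - 0.0106411 * c3;
    }

    int main(void)
    {
        for (int i = 0; i <= 8; i++) {
            double w = 2 * M_PI * i / 8;
            /* differences are at the rounding-error level */
            printf("%8.5f  % .17e\n", w, bn_three_cos(w) - bn_one_cos(w));
        }
        return 0;
    }
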
Sample benchmark (x86-64, Haswell, GNU/Linux)
test: fate-swr-resample-dblp-44100-2626
old:
1104466600 decicycles in build_filter(loop 1000), 256 runs, 0 skips
1096765286 decicycles in build_filter(loop 1000), 512 runs, 0 skips
1070479590 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
new:
588861423 decicycles in build_filter(loop 1000), 256 runs, 0 skips
591262754 decicycles in build_filter(loop 1000), 512 runs, 0 skips
577355145 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
This results in small differences with the old expression:
difference (worst case on [0, 2*M_PI]), argmax 0.008:
max diff (relative): 0.000000000000157289807188
blackman_old(0.008): 0.000363951585488813192382
blackman_new(0.008): 0.000363951585488755946507
These are judged to be insignificant given the performance gain. The PSNR to the
reference file is unchanged up to the second decimal point, for instance.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This speeds up build_filter by ~ 50%. This gain should be pretty
consistent across all architectures and platforms.
Essentially, this relies on the observation that the filters have some
even/odd symmetry that may be exploited during the construction of the
polyphase filter bank. In particular, phases (scaled to [0, 1]) in [0.5, 1] are
easily derived from [0, 0.5] and expensive reevaluation of function
points are unnecessary. This requires some rather annoying even/odd
bookkeeping as can be seen from the patch.
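A small numerical demonstration of the symmetry being exploited (hypothetical
prototype and indexing convention, not the actual build_filter): for an even
prototype g and center = tap_count/2, the coefficient for phase
phase_count - ph, tap tap_count - 1 - i equals the one for phase ph, tap i,
so only half of the phases need to be evaluated.

    #include <math.h>
    #include <stdio.h>

    static double g(double t) { return exp(-t * t); }   /* any even prototype */

    static double coeff(int ph, int i, int phase_count, int tap_count)
    {
        int center = tap_count / 2;
        return g((i - center) + (double)ph / phase_count);
    }

    int main(void)
    {
        const int phase_count = 8, tap_count = 16;
        double max_diff = 0;
        for (int ph = 1; ph < phase_count; ph++)          /* 0 < ph < phase_count */
            for (int i = 0; i < tap_count; i++) {
                double a = coeff(ph, i, phase_count, tap_count);
                double b = coeff(phase_count - ph, tap_count - 1 - i,
                                 phase_count, tap_count);
                double d = fabs(a - b);
                if (d > max_diff) max_diff = d;
            }
        printf("max mirror difference: %g\n", max_diff);  /* 0 */
        return 0;
    }
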
I vaguely recall from signal processing theory more general symmetries allowing even greater
optimization of the construction. At a high level, "even functions"
correspond to a factor of 2, and one can imagine variations.
of some generality and because of existing filters, this is all that is
being exploited.
Currently, this patch relies on phase_count being even or (trivially) 1,
though this is not an inherent limitation to the approach. This
assumption is safe as phase_count is 1 << phase_bits, and is hence a
power of two. There is no way for the user API to set it to a nontrivial odd
number. This assumption has been placed as an assert in the code.
To repeat, this assumes even symmetry of the filters, which is the most common
way to get generalized linear phase anyway and is true of all currently
supported filters.
As a side note, accuracy should be identical or perhaps slightly better,
since this "forcing" of filter symmetries leads to a better phase
characteristic. As before, I can't test this claim easily, though it may
be of interest.
Patch tested with FATE.
Sample benchmark (x86-64, Haswell, GNU/Linux):
test: swr-resample-dblp-44100-2626
new:
527376779 decicycles in build_filter(loop 1000), 256 runs, 0 skips
524361765 decicycles in build_filter(loop 1000), 512 runs, 0 skips
516552574 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
old:
974178658 decicycles in build_filter(loop 1000), 256 runs, 0 skips
972794408 decicycles in build_filter(loop 1000), 512 runs, 0 skips
954350046 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
Note that lower-level optimizations are entirely possible; I focused on
getting the high-level semantics correct. In any case, this should
provide a good foundation.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This adds const-correctness when needed for the comparators.
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
It is well known that fabs and fabsf are at least as fast and sometimes
faster than the FFABS macro, at least on the gcc+glibc combination.
For instance, see the reference:
http://patchwork.sourceware.org/patch/6735/.
This was a patch to glibc to remove its usages of such a macro.
The reason essentially boils down to fabs using the compiler's __builtin_fabs,
while with FFABS the compiler has to infer that it can avoid a branch and
simply clear the sign bit. Usually the inference works, but sometimes
it does not. This may easily be checked by looking at the asm.
This also has the added benefit of reducing macro usage, which has
problems with side-effects.
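As a reminder of the side-effect problem (FFABS as defined in
libavutil/common.h; the example itself is only illustrative):

    #include <math.h>
    #include <stdio.h>

    #define FFABS(a) ((a) >= 0 ? (a) : (-(a)))

    int main(void)
    {
        double x = -2.0, y = -2.0;
        double a = FFABS(x++);   /* argument evaluated twice */
        double b = fabs(y++);    /* argument evaluated once  */
        printf("FFABS: value %g, x is now %g\n", a, x);  /* value 1, x is now 0  */
        printf("fabs : value %g, y is now %g\n", b, y);  /* value 2, y is now -1 */
        return 0;
    }
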
Note that avcodec is not handled here, as it is huge and
most things there are integer arithmetic anyway.
Tested with FATE.
Reviewed-by: Clément Bœsch <u@pkh.me>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This will trigger a few warnings that need to be fixed.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
Proper names should be capitalized in all user-facing API as far as
possible. The option names themselves have not been changed since:
1. We consistently keep option names in lower case.
2. Changing them would break existing scripts.
3. I suspect that we want to be similar to SoX and its relevant options.
The converse is also true: improper names should not be capitalized
generally.
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>