These bits are used by channel layouts such as 22.2. If they are dropped,
the channel layout returned by this function no longer matches the
corresponding AV_CH_LAYOUT define.
Rematrixing supports up to 64 channels, but only a limited number of channel
layouts is defined. Since the in/out channel count is currently derived from
the channel layout, rematrixing fails for undefined layouts (e.g. 9, 10 or
11 channels).
This patch changes the rematrix init methods to use the in (used) and out
channel counts directly instead of computing them from the channel layout.
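Roughly, the idea is the following (illustrative sketch only, with a
hypothetical helper name; the actual change is inside the rematrix init
code):

    #include <stdint.h>
    #include <libavutil/channel_layout.h>

    /* Prefer an explicitly provided channel count and fall back to deriving
     * it from the layout. A 9-, 10- or 11-channel stream has no
     * AV_CH_LAYOUT_* define, so its layout is 0 and the derived count would
     * also be 0, which is what made rematrix init fail. */
    static int effective_channel_count(uint64_t layout, int explicit_count)
    {
        if (explicit_count > 0)
            return explicit_count;
        return av_get_channel_layout_nb_channels(layout);
    }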
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
A branch to a global symbol results in a reference through the PLT and, when
compiling for Thumb-2, in an R_ARM_THM_JUMP19 relocation. Some linkers do not
support this relocation (ld.gold), while others can end up truncating the
relocation to fit (ld.bfd).
Convert this branch through the PLT into a direct branch that the assembler
can resolve locally.
See https://github.com/android-ndk/ndk/issues/337 for background.
The current workaround is to disable NEON during the GStreamer build, which
is not optimal and can be reverted after this patch:
41556c4157
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Removed the +len1 in the call to s->mix_2_1_f(), as I found no logical explanation for it. After the removal, the problem was gone.
Signed-off-by: Hendrik Schreiber <hs@tagtraum.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Prefer direct in/out channel count values over the channel layout, when
available. Fixes a pan filter bug (ticket #6790).
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
* commit '07a2b155949eb267cdfc7805f42c7b3375f9c7c5':
Bump major versions of all libraries
A few APIs deprecated ~2 years ago or more are also postponed here for
varying reasons.
FF_API_LOWRES:
Since this functionality depends on AVStream->codec, I figure the two can
be removed at the same time in the next bump or so.
FF_API_AVCTX_TIMEBASE:
Couldn't get this one to work. Not just libavcodec but apparently also
libavformat and ffmpeg.c expect AVCodecContext->time_base to be set for
decoding. Upon removal, some tests report a different generic stream time
base (like 1/25), and others lose packet duration values. I guess it's
somehow tied to the AVStream->codec clusterfuck.
It can be dealt with alongside FF_API_LAVF_AVCTX in the next bump.
FF_API_OLD_FILTER_OPTS_ERROR:
This one is meant to remain after FF_API_OLD_FILTER_OPTS is removed.
Its purpose is to display the corrected command line, using the new syntax,
as a suggestion in the error message.
Merged-by: James Almer <jamrial@gmail.com>
When 'out' is an AVFrame that does not have buffers preallocated,
swr_convert_frame tries to allocate buffers of the right size. However, in
calculating this size it failed to check whether 'in' is NULL
(which requests that swr's internal buffers be flushed).
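A minimal sketch of the guarded size computation (illustrative only, not the
literal code in swr_convert_frame):

    #include <libavutil/frame.h>
    #include <libavutil/mathematics.h>
    #include <libswresample/swresample.h>

    /* When 'in' is NULL the call is a flush request, so the needed output
     * size comes from swr's buffered data alone and 'in' must not be
     * dereferenced. */
    static int needed_out_samples(SwrContext *s, const AVFrame *in, int out_rate)
    {
        int64_t n = swr_get_delay(s, out_rate);  /* buffered data, in output samples */
        if (in)
            n += av_rescale_rnd(in->nb_samples, out_rate, in->sample_rate,
                                AV_ROUND_UP);
        return (int)n;
    }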
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '92db5083077a8b0f8e1050507671b456fd155125':
build: Generate pkg-config files from Make and not from configure
build: Store library version numbers in .version files
Includes cherry-picked commits 8a34f36593 and
ee164727dd to fix issues.
Changes were also made to retain support for raise_major and build_suffix.
Reviewed-by: ubitux
Merged-by: James Almer <jamrial@gmail.com>
* commit '11a9320de54759340531177c9f2b1e31e6112cc2':
build: Move build-system-related helper files to a separate subdirectory
"ffbuild" directory name is used instead of "avbuild".
Merged-by: Clément Bœsch <u@pkh.me>
Use fltp when doing s32 -> s32 resampling, because s32p has no SIMD
optimization.
benchmark:
old: 17.913s
new:  7.584s (uses fma3)
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
When set_compensation is called with a zero sample_delta, compensation does
not happen (because dst_incr == ideal_dst_incr), but compensation_distance
is still set.
This is a regression since 01ebb57c03.
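The intended behaviour, as a rough sketch (struct and field names here are
hypothetical stand-ins for the resampler internals):

    typedef struct {
        int dst_incr, ideal_dst_incr;
        int compensation_distance;
    } Resampler;   /* hypothetical stand-in for the real context */

    static void set_compensation(Resampler *c, int sample_delta,
                                 int compensation_distance)
    {
        if (!sample_delta) {
            /* nothing to compensate: keep the ideal increment and, crucially,
             * leave compensation_distance at 0 so it cannot stay set */
            c->dst_incr = c->ideal_dst_incr;
            c->compensation_distance = 0;
            return;
        }
        c->compensation_distance = compensation_distance;
        /* ... adjust dst_incr so sample_delta is absorbed over the distance ... */
    }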
Found-by: wm4 <nfxjfg@googlemail.com>
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
So the tsf option in aresample will have an effect; previously
tsf/internal_sample_format had no effect.
FATE is updated: s32p previously used fltp internally, and dblp previously
used fltp/dblp internally.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
Except for filter_length == 1. An odd filter_length gives a worse frequency
response, even when compared with a shorter filter_length.
This also makes build_filter simpler.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
Integrate it inside multiple_resample; this allows some calculations to be
performed outside the loop.
Suggested-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
This is faster: 2871 -> 2189 cycles for int16 matrixbench -> 23456hz.
Fixes an integer overflow in an artificial corner case.
Fixes part of 668007-media
Found-by: Matt Wolenetz <wolenetz@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes undefined operation
Fixes part of 668007-media
Found-by: Matt Wolenetz <wolenetz@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
It gives very bad quality with the soxr resampler.
linear_interp is intended to use linear interpolation between filter-bank
phases, so quality will be better.
I guess this was misunderstood as 'do not use a filter bank, but directly
interpolate linearly between samples'.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
Separate dsp.resample into dsp.resample_common and dsp.resample_linear, and
call the faster resample_common even when linear_interp=on if c->frac and
c->dst_incr_mod are both zero.
This speeds up resampling when exact_rational and linear_interp are both
enabled, because exact_rational forces c->frac and c->dst_incr_mod to zero
when soft compensation does not happen.
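A hypothetical sketch of the dispatch rule described above (names are
stand-ins; the real decision uses the resampler context's fields):

    typedef void (*resample_fn)(void);  /* stand-in for the real DSP function type */

    /* Linear interpolation between filter-bank phases only pays off when the
     * increment actually has a fractional part, i.e. when frac or
     * dst_incr_mod is nonzero; otherwise the common path gives the same
     * result faster. */
    static resample_fn pick_resampler(int linear_interp, int frac, int dst_incr_mod,
                                      resample_fn common, resample_fn linear)
    {
        if (linear_interp && (frac || dst_incr_mod))
            return linear;
        return common;
    }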
benchmark on exact_rational=on:linear_interp=on
        old      new
real    8.432s   5.097s
user    7.679s   4.989s
sys     0.125s   0.107s
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
A high phase_count is only useful when dst_incr_mod is nonzero, in other
words only during soft compensation.
On init, the filter is built with a low phase_count; when soft compensation
is enabled, the filter is rebuilt with a high phase_count.
This approach saves a lot of memory.
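To make the memory point concrete, a rough estimate with illustrative
numbers (the real phase counts and coefficient layout may differ):

    #include <stddef.h>

    /* A polyphase filter bank needs roughly phase_count * filter_length
     * coefficients, so e.g. going from the default 1024 phases down to a
     * much smaller count for the common (no soft compensation) case shrinks
     * the bank proportionally; the full bank is only rebuilt once
     * compensation actually happens. */
    static size_t filter_bank_bytes(int phase_count, int filter_length,
                                    size_t coeff_size)
    {
        return (size_t)phase_count * filter_length * coeff_size;
    }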
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
because exact_rational does not guarantee
that phase_count is even
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
Gives high quality resampling:
as good as with linear_interp=on,
as fast as without linear_interp=on.
Tested visually with ffplay:
ffplay -f lavfi "aevalsrc='sin(10000*t*t)', aresample=osr=48000, showcqt=gamma=5"
ffplay -f lavfi "aevalsrc='sin(10000*t*t)', aresample=osr=48000:linear_interp=on, showcqt=gamma=5"
ffplay -f lavfi "aevalsrc='sin(10000*t*t)', aresample=osr=48000:exact_rational=on, showcqt=gamma=5"
Slight speed improvement.
For a fair comparison, benchmarked with -cpuflags 0.
audio.wav is a ~1 hour, 44100 Hz, stereo, 16-bit WAV file.
ffmpeg -i audio.wav -af aresample=osr=48000 -f null -
        old        new
real    13.498s    13.121s
user    13.364s    12.987s
sys      0.131s     0.129s

linear_interp=on
        old        new
real    23.035s    23.050s
user    22.907s    22.917s
sys      0.119s     0.125s

exact_rational=on
real    12.418s
user    12.298s
sys      0.114s
There is also the possibility of decreasing memory usage if soft
compensation is ignored.
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
This fixes the integer coefficients summing to a value larger than the
value representing unity.
The issue occurs with qN0.dts when converting to stereo.
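A worked example of how the sum can exceed unity, with illustrative Q15
values rather than the exact fixed-point format used by rematrix:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Three equal downmix gains of 1/3 each, quantized to Q15: each one
         * rounds to 10923, and 3 * 10923 = 32769, one more than the 32768
         * that represents unity. */
        int32_t unity = 1 << 15;
        int32_t coeff = (int32_t)lrint(unity / 3.0);
        printf("sum = %d, unity = %d\n", 3 * coeff, unity);
        return 0;
    }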
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This may be a slightly surprising optimization, but it is actually based on
an understanding of how math libraries compute trigonometric functions.
The explanation is given here so that future development can use libm more
effectively across the codebase.
All libm implementations essentially compute transcendental functions via
some kind of polynomial approximation, be it Taylor-Maclaurin or Chebyshev.
Correction terms are added via polynomial correction factors when needed
to squeeze out the last bits of accuracy. Lookup tables are also
inserted strategically.
In the case of trigonometric functions, periodicity is exploited via
first doing a range reduction to an interval around zero, and then using
some polynomial approximation.
This range reduction is the most natural way of doing things; otherwise one
would need polynomials for ranges in different periods, which makes no sense
whatsoever.
To avoid the need for the range reduction, it is helpful to feed in
arguments as close to the origin as possible for the trigonometric
functions. In fact, this also makes sense from an accuracy point of view:
IEEE floating point has far more resolution for small numbers than big ones.
This patch does this for the Blackman-Nuttall filter, and yields a
non-negligible speedup.
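As an illustration of the recentring idea (standard Blackman-Nuttall
coefficients; not necessarily the literal expression in the patch):

    #include <math.h>

    /* Plain evaluation on x in [0, 1]: the cosine arguments span [0, 2*pi]
     * (and multiples of it), so libm must range-reduce on every call. */
    static double bn_plain(double x)
    {
        double v = 2 * M_PI * x;
        return 0.3635819 - 0.4891775 * cos(v) + 0.1365995 * cos(2 * v)
                         - 0.0106411 * cos(3 * v);
    }

    /* Same window with the arguments recentred around 0: with w = v - pi in
     * [-pi, pi], cos(k*v) = cos(k*w) for even k and -cos(k*w) for odd k, so
     * only signs change and libm's range reduction becomes (nearly) a no-op. */
    static double bn_centred(double x)
    {
        double w = 2 * M_PI * (x - 0.5);
        return 0.3635819 + 0.4891775 * cos(w) + 0.1365995 * cos(2 * w)
                         + 0.0106411 * cos(3 * w);
    }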
Sample benchmark (x86-64, Haswell, GNU/Linux)
test: fate-swr-resample-dblp-2626-44100
old:
18893514 decicycles in build_filter (loop 1000), 256 runs, 0 skips
18599863 decicycles in build_filter (loop 1000), 512 runs, 0 skips
18445574 decicycles in build_filter (loop 1000), 1000 runs, 24 skips
new:
16290697 decicycles in build_filter (loop 1000), 256 runs, 0 skips
16267172 decicycles in build_filter (loop 1000), 512 runs, 0 skips
16251105 decicycles in build_filter (loop 1000), 1000 runs, 24 skips
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
When upsampling, factor is set to 1 and the sines need to be evaluated only
once for each phase, so the complexity should not depend on the number of
filter taps. This does the desired precomputation, yielding significant
speedups. Hard guarantees on the gain are not possible, but the gains
themselves are obvious and are illustrated below.
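A self-contained illustration of the identity behind the precomputation
(not the project's build_filter code):

    #include <math.h>

    /* With factor = 1 the sinc prototype is sampled at x = phase + k for
     * integer k, and sin(pi*(phase + k)) = +/- sin(pi*phase), so a single
     * sine per phase suffices and only the sign alternates across taps. */
    static void sinc_taps_for_phase(double *taps, int tap_count, double phase)
    {
        double s = sin(M_PI * phase);      /* the one sine evaluation per phase */
        for (int i = 0; i < tap_count; i++) {
            int    k    = i - tap_count / 2;   /* tap position relative to centre */
            double x    = phase + k;
            double sign = (k % 2) ? -1.0 : 1.0;
            taps[i] = (x == 0.0) ? 1.0 : sign * s / (M_PI * x);   /* sinc(x) */
        }
    }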
Sample benchmark (x86-64, Haswell, GNU/Linux)
test: fate-swr-resample-dblp-2626-44100
old:
29161085 decicycles in build_filter (loop 1000), 256 runs, 0 skips
28821467 decicycles in build_filter (loop 1000), 512 runs, 0 skips
28668201 decicycles in build_filter (loop 1000), 1000 runs, 24 skips
new:
14351936 decicycles in build_filter (loop 1000), 256 runs, 0 skips
14306652 decicycles in build_filter (loop 1000), 512 runs, 0 skips
14299923 decicycles in build_filter (loop 1000), 1000 runs, 24 skips
Note that this does not statically allocate the sin lookup table. This
may be done for the default 1024 phases, yielding a 512*8 = 4kB array
which should be small enough.
This should yield a small improvement. Nevertheless, this is separate from
this patch, is more ambiguous due to the binary increase, and requires a
lut to be generated offline.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This improves the accuracy of the Bessel function at large arguments, and
this in turn should improve the quality of the Kaiser window. It also
improves the performance of the Bessel function, and hence of build_filter,
by ~ 20%.
Details are given below.
Algorithm: taken from the Boost project, who have done a detailed
investigation of the accuracy of their method, as compared with e.g the
GNU Scientific Library (GSL):
http://www.boost.org/doc/libs/1_52_0/libs/math/doc/sf_and_dist/html/math_toolkit/special/bessel/mbessel.html.
Boost source code (also cited and licensed in the code):
https://searchcode.com/codesearch/view/14918379/.
Accuracy: sample values may be obtained as follows. i0 denotes the old bessel code,
i0_boost the approach here, and i0_real an arbitrary precision result (truncated) from Wolfram Alpha:
type "bessel i0(6.0)" to reproduce. These are evaluation points that occur for
the default kaiser_beta = 9.
Some illustrations:
bessel(8.0)
i0 (8.000000) = 427.564115721804739678191254
i0_boost(8.000000) = 427.564115721804796521610115
i0_real (8.000000) = 427.564115721804785177396791
bessel(6.0)
i0 (6.000000) = 67.234406976477956163762428
i0_boost(6.000000) = 67.234406976477970374617144
i0_real (6.000000) = 67.234406976477975326188025
Reason for accuracy: Main accuracy benefits come at larger bessel arguments, where the
Taylor-Maclaurin method is not that good: 23+ iterations
(at large arguments, since the series is about 0) can cause
significant floating point error accumulation.
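For reference, the straightforward Taylor-Maclaurin series looks like this
(shown only to make the accumulation remark concrete, not as the project's
old code):

    #include <math.h>

    /* I0(x) = sum_k ((x^2/4)^k / (k!)^2).  At x around 8 the series needs
     * 20+ terms, and the rounding error of each term accumulates; the
     * Boost-style approach avoids this. */
    static double i0_series(double x)
    {
        double term = 1.0, sum = 1.0, q = x * x / 4.0;
        for (int k = 1; k < 60; k++) {
            term *= q / ((double)k * k);
            sum  += term;
            if (term < sum * 1e-21)
                break;
        }
        return sum;
    }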
Benchmarks: Obtained on x86-64, Haswell, GNU/Linux via a loop calling
build_filter 1000 times:
test: fate-swr-resample-dblp-44100-2626
new:
995894468 decicycles in build_filter(loop 1000), 256 runs, 0 skips
1029719302 decicycles in build_filter(loop 1000), 512 runs, 0 skips
984101131 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
old:
1250020763 decicycles in build_filter(loop 1000), 256 runs, 0 skips
1246353282 decicycles in build_filter(loop 1000), 512 runs, 0 skips
1220017565 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
A further ~ 5% may be squeezed by enabling -ftree-vectorize. However,
this is a separate issue from this patch.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
Kaiser windows inherently don't require beta to be an integer. This was
an arbitrary restriction. Moreover, soxr does not require it, and in
fact often estimates beta to a non-integral value.
Thus, this patch allows greater flexibility for swresample clients.
Micro version is updated.
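Illustrative usage (option names as exposed by libswresample; 8.6 is just an
example value and error handling is omitted):

    #include <libavutil/opt.h>
    #include <libswresample/swresample.h>

    static SwrContext *alloc_kaiser_resampler(void)
    {
        SwrContext *s = swr_alloc();
        if (s) {
            av_opt_set(s, "filter_type", "kaiser", 0);
            av_opt_set_double(s, "kaiser_beta", 8.6, 0);  /* non-integral beta */
        }
        return s;
    }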
Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This uses the trigonometric double and triple angle formulae to avoid
repeated (expensive) evaluation of libc's cos().
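A sketch of the identities involved, using the standard Blackman-Nuttall
coefficients for concreteness (the constants in the actual code may differ):

    #include <math.h>

    /* One libm call instead of three: cos(2v) and cos(3v) follow from cos(v)
     * via the double and triple angle formulae. */
    static double bn_one_cos(double v)
    {
        double c  = cos(v);
        double c2 = 2 * c * c - 1;     /* cos(2v) */
        double c3 = c * (2 * c2 - 1);  /* cos(3v) */
        return 0.3635819 - 0.4891775 * c + 0.1365995 * c2 - 0.0106411 * c3;
    }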
Sample benchmark (x86-64, Haswell, GNU/Linux)
test: fate-swr-resample-dblp-44100-2626
old:
1104466600 decicycles in build_filter(loop 1000), 256 runs, 0 skips
1096765286 decicycles in build_filter(loop 1000), 512 runs, 0 skips
1070479590 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
new:
588861423 decicycles in build_filter(loop 1000), 256 runs, 0 skips
591262754 decicycles in build_filter(loop 1000), 512 runs, 0 skips
577355145 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
This results in small differences with the old expression:
difference (worst case on [0, 2*M_PI]), argmax 0.008:
max diff (relative): 0.000000000000157289807188
blackman_old(0.008): 0.000363951585488813192382
blackman_new(0.008): 0.000363951585488755946507
These are judged to be insignificant given the performance gain. For
instance, the PSNR to the reference file is unchanged up to the second
decimal point.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This speeds up build_filter by ~ 50%. This gain should be fairly consistent
across architectures and platforms.
Essentially, this relies on the observation that the filters have an even/odd
symmetry that may be exploited during the construction of the polyphase
filter bank. In particular, phases (scaled to [0, 1]) in [0.5, 1] are easily
derived from those in [0, 0.5], so expensive reevaluation of the function
points is unnecessary. This requires some rather annoying even/odd
bookkeeping, as can be seen from the patch.
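A rough sketch of the mirroring (array layout, names and the even tap_count
assumption are illustrative, not the actual build_filter code):

    /* filter[p * tap_count + t] holds tap t of phase p, the phase fraction
     * being p / phase_count.  For an even-symmetric prototype h() and an
     * even tap_count, the taps of phase 1 - phi are the taps of phase phi in
     * reverse order, so only the lower half of the phases needs the
     * expensive h() evaluations. */
    static void build_bank(double *filter, int phase_count, int tap_count,
                           double (*h)(double))  /* prototype, even about 0 */
    {
        int centre = tap_count / 2;
        for (int p = 0; p <= phase_count / 2; p++)
            for (int t = 0; t < tap_count; t++)
                filter[p * tap_count + t] = h(t - centre + (double)p / phase_count);
        for (int p = phase_count / 2 + 1; p < phase_count; p++)
            for (int t = 0; t < tap_count; t++)   /* mirrored copy, no h() calls */
                filter[p * tap_count + t] =
                    filter[(phase_count - p) * tap_count + (tap_count - 1 - t)];
    }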
I vaguely recall from signal processing theory more general symmetries allowing even greater
optimization of the construction. At a high level, "even functions"
correspond to 2, and one can imagine variations. Nevertheless, for the sake
of some generality and because of existing filters, this is all that is
being exploited.
Currently, this patch relies on phase_count being even or (trivially) 1,
though this is not an inherent limitation to the approach. This
assumption is safe as phase_count is 1 << phase_bits, and is hence a
power of two. There is no way for the user API to set it to a nontrivial odd
number. This assumption has been placed as an assert in the code.
To repeat, this assumes even symmetry of the filters, which is the most common
way to get generalized linear phase anyway and is true of all currently
supported filters.
As a side note, accuracy should be identical or perhaps slightly better,
since this "forcing" of filter symmetries leads to a better phase
characteristic. As before, I can't test this claim easily, though it may
be of interest.
Patch tested with FATE.
Sample benchmark (x86-64, Haswell, GNU/Linux):
test: swr-resample-dblp-44100-2626
new:
527376779 decicycles in build_filter(loop 1000), 256 runs, 0 skips
524361765 decicycles in build_filter(loop 1000), 512 runs, 0 skips
516552574 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
old:
974178658 decicycles in build_filter(loop 1000), 256 runs, 0 skips
972794408 decicycles in build_filter(loop 1000), 512 runs, 0 skips
954350046 decicycles in build_filter(loop 1000), 1024 runs, 0 skips
Note that lower level optimizations are entirely possible; I focused on
getting the high level semantics correct. In any case, this should
provide a good foundation.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This adds const-correctness when needed for the comparators.
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
It is well known that fabs and fabsf are at least as fast as, and sometimes
faster than, the FFABS macro, at least with the gcc+glibc combination.
For instance, see the reference:
http://patchwork.sourceware.org/patch/6735/.
This was a patch to glibc to remove their usages of such a macro.
The reason essentially boils down to fabs using the compiler's
__builtin_fabs, while for FFABS the compiler has to infer that it can avoid
a branch and simply clear the sign bit. Usually the inference works, but
sometimes it does not; this is easily checked by looking at the generated
asm.
This also has the added benefit of reducing macro usage, which is prone to
problems with side effects.
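A small illustration of the side-effect point (FFABS shown as commonly
defined in libavutil/common.h):

    #include <math.h>

    #define FFABS(a) ((a) >= 0 ? (a) : (-(a)))

    /* The macro duplicates its argument textually, so an expression with a
     * side effect such as *(*p)++ would be evaluated more than once; with
     * fabs() it is a normal function argument and is evaluated exactly once. */
    static double abs_and_advance(const double **p)
    {
        return fabs(*(*p)++);          /* safe */
        /* return FFABS(*(*p)++);         would advance *p twice */
    }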
Note that avcodec is not handled here, as it is huge and
most things there are integer arithmetic anyway.
Tested with FATE.
Reviewed-by: Clément Bœsch <u@pkh.me>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
This will trigger a few warnings that need to be fixed.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
Proper names should be capitalized in all user-facing API as far as
possible. The option names themselves have not been changed since:
1. We consistently keep option names in lower case.
2. Changing them would break existing scripts.
3. I suspect that we want to stay similar to SoX and its relevant options.
The converse is also true: improper names should generally not be
capitalized.
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>