mirror of https://github.com/FFmpeg/FFmpeg.git synced 2024-12-18 03:19:31 +02:00
Commit Graph

564 Commits

Author SHA1 Message Date
Lynne
bb40f800bd
x86/tx_float: fix forgotten 2-argument mulps
Yasm *really* cannot deal with any omitted arguments at all.
2021-04-24 22:33:42 +02:00
Lynne
e2cf0a1f68
x86/tx_float: use all arguments on vperm2f and vpermilps and reindent comments
Apparently even old nasm isn't required to accept incomplete instructions.
2021-04-24 22:21:13 +02:00
James Almer
fddddc7ec2 x86/tx_float: Fixes compilation with old yasm
Use three operand format on some instructions, and lea to load effective
addresses of tables.

Signed-off-by: James Almer <jamrial@gmail.com>
2021-04-24 17:02:31 -03:00
Lynne
e448a4b4ea
lavu/x86/tx_float: fix FMA3 implying AVX2 is available
It's the other way around - AVX2 implies FMA3 is available.
2021-04-24 19:00:27 +02:00
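In FFmpeg's CPU-flag terms this means a dispatcher has to test AVX2 before falling back to FMA3, never the reverse; a minimal sketch using the public libavutil API (av_get_cpu_flags(), AV_CPU_FLAG_AVX2 and AV_CPU_FLAG_FMA3 are real, the helper around them is hypothetical):

    #include <libavutil/cpu.h>

    /* 2 = AVX2+FMA3 kernels, 1 = FMA3-only kernels, 0 = scalar/SSE fallback */
    static int pick_tx_level(void)
    {
        int cpu_flags = av_get_cpu_flags();

        if (cpu_flags & AV_CPU_FLAG_AVX2)   /* AVX2 implies FMA3 is present */
            return 2;
        if (cpu_flags & AV_CPU_FLAG_FMA3)   /* FMA3 does NOT imply AVX2     */
            return 1;
        return 0;
    }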
Lynne
119a3f7e8d
lavu/x86: add FFT assembly
This commit adds a pure x86 assembly SIMD version of the FFT in libavutil/tx.
The design of this pure assembly FFT is pretty unconventional.

On the lowest level, instead of splitting the complex numbers into
real and imaginary parts, we keep complex numbers together but split
them in terms of parity. This saves a number of shuffles in each transform,
but more importantly, it splits each transform into two independent
paths, which we process using separate registers in parallel.
This allows us to keep all units saturated and lets us use all available
registers to avoid dependencies.
Moreover, it allows us to double the granularity of our per-load permutation,
skipping many expensive lookups and allowing us to use just 4 loads per register,
rather than 8, or, in the case of FMA3 (and by extension AVX2), to use the vgatherdpd
instruction, which is at least as fast as 4 separate loads on old hardware
and quite a bit faster on modern CPUs.

Higher up, we go for a bottom-up construction of large transforms, foregoing
the traditional per-transform call-return recursion chains. Instead, we always
start at the bottom-most basis transform (in this case, a 32-point transform),
and continue constructing larger and larger transforms until we return to the
top-most transform.
This way, we only touch the stack 3 times per complete target transform:
once for the 1/2 length transform and two times for the 1/4 length transform.

The combination algorithm we use is a standard Split-Radix algorithm,
as used in our C code. Although a version with fewer operations exists
(Steven G. Johnson and Matteo Frigo's "A modified split-radix FFT with fewer
arithmetic operations", IEEE Trans. Signal Process. 55 (1), 111–119 (2007),
which is the one FFTW uses), it has only 2% fewer operations and requires at least 4x
the binary code (due to it needing 4 different paths to do a single transform).
That version also has other issues that prevent it from being implemented as
efficiently with SIMD, so it loses the marginal gains it offered, and it cannot
be performed bottom-up, requiring many recursive call-return chains whose
overhead adds up.

We go through a lot of effort to minimize loads/stores by keeping as much as
possible in registers between constructing transforms. This saves us around 32 cycles,
on paper, but in reality a lot more due to load/store aliasing (a load from a
memory location cannot be issued while there's a store pending, and there are
only so many (2 for Zen 3) load/store units in a CPU).
Also, we interleave coefficients during the last stage to save on a store+load
per register.

Each of the smallest, basis transforms (4, 8 and 16-point in our case)
has been extremely optimized. Our 8-point transform is barely 20 instructions
in total, beating our old implementation's 8-point transform by 1 instruction.
Our 2x8-point transform is 23 instructions, beating our old implementation by
6 instructions and needing 50% fewer cycles. Our 16-point transform's combination
code takes slightly more instructions than our old implementation's, but makes up
for it by requiring far fewer arithmetic operations.

Overall, the transform was optimized for the timings of Zen 3, which at the
time of writing has the highest IPC of all documented CPUs. Shuffles were
preferred over arithmetic operations due to their 1/0.5 latency/throughput.

On average, this code is 30% faster than our old libavcodec implementation.
It's able to trade blows with the previously-untouchable FFTW on small transforms,
and due to its tiny size and better prediction, outdoes FFTW on larger transforms
by 11% on the largest currently supported size.
2021-04-24 17:19:18 +02:00
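For reference, the standard split-radix combination step mentioned above can be written in scalar C roughly as follows. This is a hypothetical, unoptimized recursive sketch using <complex.h> (link with -lm), not the parity-split, bottom-up SIMD layout the commit actually implements:

    #include <complex.h>

    /* Recursive split-radix DIT FFT of n complex samples read with stride s.
     * Sub-transforms: an n/2-point FFT over the even samples and two n/4-point
     * FFTs over the 4m+1 and 4m+3 samples, followed by one in-place combine
     * pass. out must not alias in. */
    static void splitradix_fft(double complex *out, const double complex *in,
                               int n, int s)
    {
        const double pi = 3.14159265358979323846;

        if (n == 1) { out[0] = in[0]; return; }
        if (n == 2) { out[0] = in[0] + in[s]; out[1] = in[0] - in[s]; return; }

        splitradix_fft(out,           in,       n/2, 2*s); /* even samples */
        splitradix_fft(out + n/2,     in + s,   n/4, 4*s); /* x[4m+1]      */
        splitradix_fft(out + 3*(n/4), in + 3*s, n/4, 4*s); /* x[4m+3]      */

        for (int k = 0; k < n/4; k++) {
            double complex w1 = cexp(-2*I*pi*k/n);   /* twiddle W^k  */
            double complex w3 = cexp(-2*I*pi*3*k/n); /* twiddle W^3k */
            double complex z  = w1 * out[n/2 + k];
            double complex zp = w3 * out[3*(n/4) + k];
            double complex ue = out[k];              /* U[k]         */
            double complex uo = out[k + n/4];        /* U[k + n/4]   */

            out[k]           = ue + (z + zp);
            out[k + n/2]     = ue - (z + zp);
            out[k + n/4]     = uo - I*(z - zp);
            out[k + 3*(n/4)] = uo + I*(z - zp);
        }
    }

The assembly in this commit replaces the recursion above with a bottom-up loop starting at its 32-point basis transform and keeps the data split by parity, as described in the message.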
Andreas Rheinhardt
f3c197b129 Include attributes.h directly
Some files currently rely on libavutil/cpu.h to include it for them;
yet said file won't include it any more after the currently
deprecated functions are removed, so include attributes.h directly.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2021-04-19 14:34:10 +02:00
Henrik Gramner
0b2b03568f avutil/x86inc: fix warnings when assembling with Nasm 2.15
Some new warnings regarding the use of empty macro parameters have
been added, so adjust some x86inc code to silence those.

Fixes part of ticket #8771

Signed-off-by: James Almer <jamrial@gmail.com>
2020-07-12 11:30:23 -03:00
Martin Storsjö
1001b6a750 libavutil: x86: Include stdlib.h before using _byteswap_ulong
When clang works in MSVC mode, it does have the _byteswap_ulong
builtin, but one has to include stdlib.h before using it.

Signed-off-by: Martin Storsjö <martin@martin.st>
2020-01-23 18:30:26 +02:00
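A minimal illustration of the fix (compiles only with MSVC or clang in MSVC mode; swap32() is a hypothetical wrapper):

    #include <stdlib.h>  /* declares _byteswap_ulong for MSVC and clang-cl */

    static unsigned long swap32(unsigned long x)
    {
        return _byteswap_ulong(x);  /* e.g. 0x12345678 -> 0x78563412 */
    }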
James Almer
9d002d7818 x86/float_dsp: add ff_vector_dmul_{sse2,avx}
~3x to 5x faster.

Signed-off-by: James Almer <jamrial@gmail.com>
2018-09-14 12:54:42 -03:00
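For context, vector_dmul is an element-wise double multiply; a scalar sketch of the behaviour the new SSE2/AVX versions accelerate (assuming the usual float_dsp convention that len is a multiple of the vector width and the buffers are suitably aligned):

    /* dst[i] = src0[i] * src1[i] for i in [0, len) */
    static void vector_dmul_c(double *dst, const double *src0,
                              const double *src1, int len)
    {
        for (int i = 0; i < len; i++)
            dst[i] = src0[i] * src1[i];
    }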
James Almer
481741ece0 x86/pixelutils: don't use the AVX2 functions on CPUs known to be slow with them
Signed-off-by: James Almer <jamrial@gmail.com>
2018-07-31 22:14:53 -03:00
James Almer
d5b3077ecf x86/pixelutils: add missing preprocessor wrapper to the AVX2 functions
Should fix compilation with old yasm/nasm

Signed-off-by: James Almer <jamrial@gmail.com>
2018-07-31 22:14:42 -03:00
Jun Zhao
d36b8394f4 avutil/pixelutils: sad_32x32 sse2/avx2 optimizations.
add ff_pixelutils_sad_32x32_sse2, ff_pixelutils_sad_{a,u}_32x32_sse2,
ff_pixelutils_sad_32x32_avx2, ff_pixelutils_sad_{a,u}_32x32_avx2

Profiling with perf record/report gives the following instructions:u breakdown for the avx2 sad_32x32:

  72.05%  pixelutils  pixelutils     [.] block_sad_32x32_c
  18.50%  pixelutils  pixelutils     [.] block_sad_16x16_c
   4.78%  pixelutils  pixelutils     [.] block_sad_8x8_c
   2.69%  pixelutils  pixelutils     [.] block_sad_4x4_c
   0.89%  pixelutils  pixelutils     [.] block_sad_2x2_c
   0.16%  pixelutils  pixelutils     [.] ff_pixelutils_sad_32x32_avx2
   0.16%  pixelutils  pixelutils     [.] ff_pixelutils_sad_u_32x32_avx2
   0.12%  pixelutils  pixelutils     [.] ff_pixelutils_sad_a_32x32_avx2

and for the sse2 sad_32x32:

  71.86%  pixelutils  pixelutils     [.] block_sad_32x32_c
  18.42%  pixelutils  pixelutils     [.] block_sad_16x16_c
   4.81%  pixelutils  pixelutils     [.] block_sad_8x8_c
   2.68%  pixelutils  pixelutils     [.] block_sad_4x4_c
   0.88%  pixelutils  pixelutils     [.] block_sad_2x2_c
   0.29%  pixelutils  pixelutils     [.] ff_pixelutils_sad_32x32_sse2
   0.26%  pixelutils  pixelutils     [.] ff_pixelutils_sad_u_32x32_sse2
   0.23%  pixelutils  pixelutils     [.] ff_pixelutils_sad_a_32x32_sse2

Signed-off-by: Jun Zhao <mypopydev@gmail.com>
2018-07-31 19:17:51 +08:00
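For reference, each of the new functions computes a plain 32x32 sum of absolute differences; a scalar sketch of that operation (hypothetical helper, mirroring the C reference being profiled above):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Sum of absolute differences over a 32x32 block of 8-bit pixels. */
    static int sad_32x32(const uint8_t *src1, ptrdiff_t stride1,
                         const uint8_t *src2, ptrdiff_t stride2)
    {
        int sum = 0;
        for (int y = 0; y < 32; y++) {
            for (int x = 0; x < 32; x++)
                sum += abs(src1[x] - src2[x]);
            src1 += stride1;
            src2 += stride2;
        }
        return sum;
    }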
alexander schmid
b23c4a9dbd lavu/x86/cpu: Fix aesni detection 2018-07-19 20:17:44 +02:00
Jun Zhao
09628cb1b4 avutil/pixelutils: correct the function name in comments
Signed-off-by: Jun Zhao <mypopydev@gmail.com>
2018-07-11 20:12:33 +08:00
James Almer
35347e7e9b Merge commit '4cf84e254ae75b524e1cacae499a97d7cc9e5906'
* commit '4cf84e254ae75b524e1cacae499a97d7cc9e5906':
  Drop some unnecessary config.h #includes

Merged-by: James Almer <jamrial@gmail.com>
2018-02-11 23:08:48 -03:00
Diego Biurrun
4cf84e254a Drop some unnecessary config.h #includes 2018-02-06 10:03:15 +01:00
Henrik Gramner
6f62b0bd4f x86inc: Drop cpuflags_slowctz 2018-01-20 19:23:37 +01:00
Henrik Gramner
eb5f063e7c x86inc: Correctly set mmreg variables 2018-01-20 19:23:37 +01:00
Henrik Gramner
6b6edd1216 x86inc: Support creating global symbols from local labels
On ELF platforms such symbols need to be flagged as functions with the
correct visibility to please certain linkers in some scenarios.
2018-01-20 19:23:37 +01:00
Henrik Gramner
9e4b3675f2 x86inc: Use .rdata instead of .rodata on Windows
The standard section for read-only data on Windows is .rdata. Nasm will
flag non-standard sections as executable by default which isn't ideal.
2018-01-20 19:23:37 +01:00
Henrik Gramner
3a02cbe3fa x86inc: Enable AVX emulation for floating-point pseudo-instructions
There are 32 pseudo-instructions for each floating-point comparison
instruction, but only 8 of them are actually valid in legacy-encoded mode.
The remaining 24 require the use of VEX-encoded (v-prefixed) instructions
and can therefore be disregarded for this purpose.
2018-01-20 19:23:37 +01:00
James Almer
90d216cb90 x86inc: set the correct number of simd regs in x86_64 when avx512 is enabled but not used
Fixes compilation of libavresample/x86/audio_mix.asm

Reviewed-by: Gramner
Signed-off-by: James Almer <jamrial@gmail.com>
2017-12-24 23:02:54 -03:00
Henrik Gramner
f7197f68dc x86inc: AVX-512 support
AVX-512 consists of a plethora of different extensions, but in order to keep
things a bit more manageable we group together the following extensions
under a single baseline cpu flag which should cover SKL-X and future CPUs:
 * AVX-512 Foundation (F)
 * AVX-512 Conflict Detection Instructions (CD)
 * AVX-512 Byte and Word Instructions (BW)
 * AVX-512 Doubleword and Quadword Instructions (DQ)
 * AVX-512 Vector Length Extensions (VL)

On x86-64 AVX-512 provides 16 additional vector registers, prefer using
those over existing ones since it allows us to avoid using `vzeroupper`
unless more than 16 vector registers are required. They also happen to
be volatile on Windows which means that we don't need to save and restore
existing xmm register contents unless more than 22 vector registers are
required.

Big thanks to Intel for their support.
2017-12-24 22:02:41 +01:00
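On the C side all of these extensions sit behind that single baseline flag, so a runtime check stays simple (usage sketch; have_avx512() is hypothetical, the flag and av_get_cpu_flags() are the public API):

    #include <libavutil/cpu.h>

    static int have_avx512(void)
    {
        /* Set only when F, CD, BW, DQ and VL are all present. */
        return !!(av_get_cpu_flags() & AV_CPU_FLAG_AVX512);
    }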
James Darnley
e2218ed8ce avutil: add alignment needed for AVX-512 2017-12-24 22:02:41 +01:00
James Darnley
4783a01c11 avutil: detect when AVX-512 is available 2017-12-24 22:02:41 +01:00
James Darnley
8b81eabe57 avutil: add AVX-512 flags 2017-12-24 22:02:41 +01:00
Martin Vignali
b37196adff avutil/x86util : add macro for loading a 128 bits constants in an xmm or in each part of an ymm in order to simplify avx2 asm func 2017-12-02 18:25:15 +01:00
Dale Curtis
50e30d9bb7 Don't use _tzcnt intrinsics with clang for Windows w/o BMI.
Technically _tzcnt* intrinsics are only available when the BMI
instruction set is present. However the instruction encoding
degrades to "rep bsf" on older processors.

Clang for Windows debatably restricts the _tzcnt* intrinsics behind
the __BMI__ architecture define, so check for its presence or
exclude the usage of these intrinsics when clang is present.

See also:
https://ffmpeg.org/pipermail/ffmpeg-devel/2015-November/183404.html
https://bugs.llvm.org/show_bug.cgi?id=30506
http://lists.llvm.org/pipermail/cfe-dev/2016-October/051034.html

Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Reviewed-by: Matt Oliver <protogonoi@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2017-10-25 21:50:37 +02:00
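A sketch of the kind of guard the message describes (ctz32 is a hypothetical wrapper, not FFmpeg's actual code):

    #if defined(__BMI__) || (defined(_MSC_VER) && !defined(__clang__))
    #include <immintrin.h>
    /* BMI is enabled, or this is MSVC proper, where _tzcnt_u32 is always
     * usable and merely degrades to "rep bsf" on pre-BMI CPUs. */
    #define ctz32(x) _tzcnt_u32(x)
    #else
    /* clang in MSVC mode without -mbmi (and other compilers): use the builtin. */
    #define ctz32(x) __builtin_ctz(x)
    #endif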
James Almer
2904db9045 Merge commit '994c4bc10751e39c7ed9f67ffd0c0dea5223daf2'
* commit '994c4bc10751e39c7ed9f67ffd0c0dea5223daf2':
  x86util: Port all macros to cpuflags

See d5f8a642f6

Merged-by: James Almer <jamrial@gmail.com>
2017-10-21 12:15:57 -03:00
James Almer
3d828c9fd5 cpu: split flag checks per arch in av_cpu_max_align()
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
2017-10-09 11:48:24 +02:00
James Almer
3b345d389b avutil/cpu: split flag checks per arch in av_cpu_max_align()
Signed-off-by: James Almer <jamrial@gmail.com>
2017-09-27 23:10:09 -03:00
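av_cpu_max_align() is the function whose internals these two commits reorganize; a small usage sketch (the allocation helper around it is hypothetical):

    #include <stdint.h>
    #include <libavutil/cpu.h>
    #include <libavutil/mem.h>

    static uint8_t *alloc_simd_buffer(size_t size)
    {
        size_t align = av_cpu_max_align();  /* e.g. 32 when AVX2 is usable */
        /* Pad the size up to a multiple of the alignment (illustrative). */
        return av_malloc((size + align - 1) & ~(align - 1));
    }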
James Almer
0c005fa86f Merge commit '7abdd026df6a9a52d07d8174505b33cc89db7bf6'
* commit '7abdd026df6a9a52d07d8174505b33cc89db7bf6':
  asm: Consistently uppercase SECTION markers

Merged-by: James Almer <jamrial@gmail.com>
2017-09-26 18:48:06 -03:00
Ivan Kalvachev
30ae07d7ef Add macros to x86util.asm.
Improved version of VBROADCASTSS that works like the avx2 instruction.
Emulation of vpbroadcastd.
Horizontal sum HSUMPS that places the result in all elements.
Emulation of blendvps and pblendvb.

Signed-off-by: Ivan Kalvachev <ikalvachev@gmail.com>
2017-08-18 17:18:32 +01:00
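In scalar terms, the HSUMPS semantics described above are simply the following (a sketch; the real macro operates on xmm/ymm registers):

    /* Horizontal sum of a 4-float vector, result broadcast to all elements. */
    static void hsumps_ref(float v[4])
    {
        float s = v[0] + v[1] + v[2] + v[3];
        v[0] = v[1] = v[2] = v[3] = s;
    }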
James Almer
4d62ee6746 x86inc: don't use read-only data sections on COFF targets
Yasm:
src/libavfilter/x86/af_volume.asm:24: warning: Standard COFF does not support read-only data sections
src/libavfilter/x86/af_volume.asm:24: warning: Unrecognized qualifier `align'

Nasm:
src/libavfilter/x86/af_volume.asm:24: error: standard COFF does not support section alignment specification
src/libavutil/x86/x86inc.asm:92: ... from macro `SECTION_RODATA' defined here

Tested-by: Clément Bœsch <u@pkh.me>
Signed-off-by: James Almer <jamrial@gmail.com>
2017-06-27 12:48:04 -03:00
Diego Biurrun
fd502f4f5f build: Generalize yasm/nasm-related variable names
None of them are specific to the YASM assembler.

(Cherry-picked from libav commit 39e208f4d4)

Signed-off-by: James Almer <jamrial@gmail.com>
2017-06-21 17:00:29 -03:00
James Almer
e229df9478 x86/aacpsdsp: add ff_ps_hybrid_synthesis_deint_{sse,sse4}
About 2x faster than the c version.
2017-06-18 22:33:27 -03:00
Henrik Gramner
aad1b6786e x86inc: Add some additional cpuflag relations
Simplifies writing assembly code that depends on available instructions.

LZCNT implies SSE2
BMI1 implies AVX+LZCNT
AVX2 implies BMI2
2017-06-12 11:41:25 +02:00
Anton Mitrofanov
d991b3e8a8 x86inc: Remove argument from WIN64_RESTORE_XMM
The use of rsp was pretty much hardcoded there and probably didn't work
otherwise with stack_size > 0.
2017-06-09 13:43:01 +02:00
Henrik Gramner
cd4ca82459 x86inc: Prefer r14/r15 over r12/r13 on x86-64
Due to a peculiarity in the ModR/M addressing encoding, the r12 and r13
registers sometimes require an additional byte when used as a base register.

r14 and r15 don't have that issue, so prefer using them.
2017-06-09 13:43:00 +02:00
Henrik Gramner
88dcdfad09 x86inc: Make REP_RET identical to RET in SSSE3+ functions
There's no point in emitting a rep prefix before ret on modern CPUs.
2017-06-09 13:43:00 +02:00
Henrik Gramner
406e0ddc0b x86inc: Fix call with memory operands
We overload the `call` instruction with a macro, but it would misbehave when
the macro argument wasn't a valid identifier. Fix it by explicitly checking
if the argument is an identifier.
2017-06-09 13:43:00 +02:00
James Almer
0fbc7a2169 x86/float_dsp: remove usage of integer instructions 2017-05-12 23:34:49 -03:00
James Almer
f1d80bc630 x86/float_dsp: add ff_vector_fmul_reverse_avx2
~20% faster than AVX.

Signed-off-by: James Almer <jamrial@gmail.com>
2017-04-11 21:35:35 -03:00
James Almer
ed9b25a148 x86/float_dsp: add ff_vector_dmac_scalar_{sse2,avx,fma3} 2017-04-10 12:18:55 -03:00
Clément Bœsch
f291a9a1ad Merge commit '99434f4df81b6801b2b535d5b9143305595784f6'
* commit '99434f4df81b6801b2b535d5b9143305595784f6':
  float_dsp: Have implementation match function pointer prototype

Merged-by: Clément Bœsch <cboesch@gopro.com>
2017-03-30 10:23:25 +02:00
James Almer
c97e986e90 Merge commit '7911186ed616ae81dd8617d6d0e8b08c818db9d8'
* commit '7911186ed616ae81dd8617d6d0e8b08c818db9d8':
  emms: Give apriv_emms_yasm() a more general name

Merged-by: James Almer <jamrial@gmail.com>
2017-03-23 18:28:56 -03:00
James Almer
29db87af52 Merge commit '6be7944ee2ec2f045e6eb9a93237e992c8b20ac4'
* commit '6be7944ee2ec2f045e6eb9a93237e992c8b20ac4':
  x86: Add missing colons after assembly labels

Merged-by: James Almer <jamrial@gmail.com>
2017-03-23 18:05:27 -03:00
James Almer
d8962ffbd8 avutil/x86util: don't use movss in VBROADCASTSS macro when src and dst args are the same
Reviewed-by: Henrik Gramner <henrik@gramner.com>
Signed-off-by: James Almer <jamrial@gmail.com>
2017-03-21 19:15:00 -03:00
Clément Bœsch
3898e346b3 Merge commit '07e1f99a1bb41d1a615676140eefc85cf69fa793'
* commit '07e1f99a1bb41d1a615676140eefc85cf69fa793':
  x86util: Document SBUTTERFLY macro

Merged-by: Clément Bœsch <u@pkh.me>
2017-03-20 18:38:07 +01:00
Clément Bœsch
8200b16a9c Merge commit 'd7bc52bf456deba0f32d9fe5c288ec441f1ebef5'
* commit 'd7bc52bf456deba0f32d9fe5c288ec441f1ebef5':
  imgutils: add a function for copying image data from GPU mapped memory

Merged-by: Clément Bœsch <u@pkh.me>
2017-03-20 08:34:10 +01:00
Diego Biurrun
994c4bc107 x86util: Port all macros to cpuflags
Also do some small cosmetic changes: drop the pointless _MMX suffix from the ABSD2
macro name, drop a pointless check for MMX support (we always assume MMX is
available in our SIMD code), and fix spelling.
2017-03-14 17:23:32 +01:00
Diego Biurrun
39e208f4d4 build: Generalize yasm/nasm-related variable names
None of them are specific to the YASM assembler.
2017-03-01 10:18:15 +01:00
James Darnley
5336887867 avcodec/h264: sse2, avx h luma mbaff deblock/loop filter
x86-64 only

Yorkfield:
- sse2: ~2.17x (434 vs. 200 cycles)

Nehalem:
- sse2: ~2.94x (409 vs. 139 cycles)

Skylake:
- sse2: ~3.10x (370 vs. 119 cycles)
- avx:  ~3.29x (370 vs. 112 cycles)
2017-02-18 20:26:52 +01:00
James Darnley
7627df15d4 x86util: import MOVHL macro
Originally committed to x264 in 1637239a by Henrik Gramner who has
agreed to re-license it as LGPL.  Original commit message follows.

    x86: Avoid some bypass delays and false dependencies

    A bypass delay of 1-3 clock cycles may occur on some CPUs when transitioning
    between int and float domains, so try to avoid that if possible.
2017-02-18 20:26:51 +01:00
James Darnley
9d815b7424 avcodec/x86: deduplicate PASS8ROWS macro 2017-02-18 20:26:49 +01:00
Diego Biurrun
7abdd026df asm: Consistently uppercase SECTION markers 2017-02-03 11:37:53 +01:00
James Almer
8d5df204d0 Merge commit '8e9cd81d291b1010c625b2766058aadf4affb537'
* commit '8e9cd81d291b1010c625b2766058aadf4affb537':
  x86: cpu: Detect Conroe CPUs and their slow shuffle unit

Merged-by: James Almer <jamrial@gmail.com>
2017-01-31 15:20:54 -03:00
James Almer
2eab48177d Merge commit '7d7355aa92bb36ca0765c49a569a999bcb96f332'
* commit '7d7355aa92bb36ca0765c49a569a999bcb96f332':
  x86: Add SSSE3_SLOW CPU flag and related convenience macros

Merged-by: James Almer <jamrial@gmail.com>
2017-01-31 15:17:19 -03:00
Henrik Gramner
cd09e3b349 x86inc: Avoid using eax/rax for storing the stack pointer
When allocating stack space with an alignment requirement that is larger
than the current stack alignment we need to store a copy of the original
stack pointer in order to be able to restore it later.

If we choose to use another register for this purpose we should not pick
eax/rax since it can be overwritten as a return value.
2017-01-09 16:00:29 +01:00
Henrik Gramner
3cba1ad76d x86inc: Avoid using eax/rax for storing the stack pointer
When allocating stack space with an alignment requirement that is larger
than the current stack alignment we need to store a copy of the original
stack pointer in order to be able to restore it later.

If we choose to use another register for this purpose we should not pick
eax/rax since it can be overwritten as a return value.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2017-01-09 13:21:12 +01:00
Diego Biurrun
99434f4df8 float_dsp: Have implementation match function pointer prototype
libavutil/x86/float_dsp_init.c(144) : warning C4028: formal parameter 1 different from declaration
libavutil/x86/float_dsp_init.c(144) : warning C4028: formal parameter 2 different from declaration
2016-11-03 17:43:55 +01:00
Michael Niedermayer
051517648b avutil/x86/emms: Document the emms_c() vs alloc/free relation.
Reviewed-by: Andreas Cadhalpun <andreas.cadhalpun@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2016-10-23 13:02:37 +02:00
Diego Biurrun
7911186ed6 emms: Give apriv_emms_yasm() a more general name 2016-10-18 13:09:09 +02:00
Diego Biurrun
6be7944ee2 x86: Add missing colons after assembly labels
This fixes many warnings of the sort
warning: label alone on a line without a colon might be in error
2016-10-17 16:31:26 +02:00
Alexandra Hájková
07e1f99a1b x86util: Document SBUTTERFLY macro
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
2016-09-19 10:02:43 +02:00
Anton Khirnov
d7bc52bf45 imgutils: add a function for copying image data from GPU mapped memory
See https://software.intel.com/en-us/articles/copying-accelerated-video-decode-frame-buffers
2016-08-31 08:15:47 +02:00
Fiona Glaser
8e9cd81d29 x86: cpu: Detect Conroe CPUs and their slow shuffle unit 2016-07-20 18:43:28 +02:00
Diego Biurrun
7d7355aa92 x86: Add SSSE3_SLOW CPU flag and related convenience macros 2016-07-20 18:43:28 +02:00
James Almer
fd5e6a095f x86util: Extend SPLATW for avx2
Integration into Libav by Josh de Kock <josh@itanimul.li>.

Signed-off-by: Alexandra Hájková <alexandra@khirnov.net>
2016-07-18 15:27:13 +02:00
Ronald S. Bultje
f0a2b6249b vp9: add 16x16 idct avx2 (8-bit).
checkasm --bench, 10k runs, for *_add_${bpc}_${sub_idct}_${opt}, shows
that it's about 1.65x as fast as the AVX version for the full IDCT, and
similar speedups for the sub-IDCTs:

nop: 24.6
vp9_inv_dct_dct_16x16_add_8_1_c: 6444.8
vp9_inv_dct_dct_16x16_add_8_1_sse2: 638.6
vp9_inv_dct_dct_16x16_add_8_1_ssse3: 484.4
vp9_inv_dct_dct_16x16_add_8_1_avx: 661.2
vp9_inv_dct_dct_16x16_add_8_1_avx2: 311.5
vp9_inv_dct_dct_16x16_add_8_2_c: 6665.7
vp9_inv_dct_dct_16x16_add_8_2_sse2: 646.9
vp9_inv_dct_dct_16x16_add_8_2_ssse3: 455.2
vp9_inv_dct_dct_16x16_add_8_2_avx: 521.9
vp9_inv_dct_dct_16x16_add_8_2_avx2: 304.3
vp9_inv_dct_dct_16x16_add_8_4_c: 7022.7
vp9_inv_dct_dct_16x16_add_8_4_sse2: 647.4
vp9_inv_dct_dct_16x16_add_8_4_ssse3: 467.1
vp9_inv_dct_dct_16x16_add_8_4_avx: 446.1
vp9_inv_dct_dct_16x16_add_8_4_avx2: 297.0
vp9_inv_dct_dct_16x16_add_8_8_c: 6800.4
vp9_inv_dct_dct_16x16_add_8_8_sse2: 598.6
vp9_inv_dct_dct_16x16_add_8_8_ssse3: 465.7
vp9_inv_dct_dct_16x16_add_8_8_avx: 440.9
vp9_inv_dct_dct_16x16_add_8_8_avx2: 290.2
vp9_inv_dct_dct_16x16_add_8_16_c: 6626.6
vp9_inv_dct_dct_16x16_add_8_16_sse2: 599.5
vp9_inv_dct_dct_16x16_add_8_16_ssse3: 475.0
vp9_inv_dct_dct_16x16_add_8_16_avx: 469.9
vp9_inv_dct_dct_16x16_add_8_16_avx2: 286.4
2016-07-11 10:14:58 -04:00
Matthieu Bouron
9eb3da2f99 asm: FF_-prefix internal macros used in inline assembly
See merge commit '39d6d3618d48625decaff7d9bdbb45b44ef2a805'.
2016-06-27 17:21:18 +02:00
Clément Bœsch
8ef57a0d61 Merge commit '41ed7ab45fc693f7d7fc35664c0233f4c32d69bb'
* commit '41ed7ab45fc693f7d7fc35664c0233f4c32d69bb':
  cosmetics: Fix spelling mistakes

Merged-by: Clément Bœsch <u@pkh.me>
2016-06-21 21:55:34 +02:00
Matt Oliver
5ca44ebd99 lavu/intmath.h: fix compilation with msvc10.
Signed-off-by: Matt Oliver <protogonoi@gmail.com>
2016-06-13 13:49:24 +10:00
James Almer
172af20852 x86/showcqt: use three operand format for some instructions
Fixes failures with yasm 1.1.0 and older

Signed-off-by: James Almer <jamrial@gmail.com>
2016-06-08 19:37:08 -03:00
James Almer
99b899483e avutil/x86util: move haddps sse emulation from showcqt
Signed-off-by: James Almer <jamrial@gmail.com>
2016-06-08 14:18:00 -03:00
Diego Biurrun
1e9c5bf4c1 asm: FF_-prefix internal macros used in inline assembly
These macros conflict with system macros on Solaris, producing
truckloads of warnings about macro redefinition.
2016-05-28 19:18:26 +02:00
Anton Mitrofanov
2fb1d17a5a x86inc: Enable AVX emulation in additional cases
Allows emulation to work when dst is equal to src2 as long as the
instruction is commutative, e.g. `addps m0, m1, m0`.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-05-16 10:31:24 +02:00
Anton Mitrofanov
300fb0df84 x86inc: Improve handling of %ifid with multi-token parameters
The yasm/nasm preprocessor only checks the first token, which means that
parameters such as `dword [rax]` are treated as identifiers, which is
generally not what we want.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-05-16 10:31:20 +02:00
Anton Mitrofanov
8d02579fae x86inc: Fix AVX emulation of some instructions
Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-05-16 10:31:17 +02:00
Henrik Gramner
ba3eb745cc x86inc: Fix AVX emulation of scalar float instructions
Those instructions are not commutative since they only change the first
element in the vector and leave the rest unmodified.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-05-16 10:31:13 +02:00
Vittorio Giovara
41ed7ab45f cosmetics: Fix spelling mistakes
Signed-off-by: Diego Biurrun <diego@biurrun.de>
2016-05-04 18:16:21 +02:00
Anton Mitrofanov
e428f3b30c x86inc: Enable AVX emulation in additional cases
Allows emulation to work when dst is equal to src2 as long as the
instruction is commutative, e.g. `addps m0, m1, m0`.
2016-04-20 19:16:22 +02:00
Anton Mitrofanov
4bd5583ace x86inc: Improve handling of %ifid with multi-token parameters
The yasm/nasm preprocessor only checks the first token, which means that
parameters such as `dword [rax]` are treated as identifiers, which is
generally not what we want.
2016-04-20 19:16:22 +02:00
Anton Mitrofanov
42be240ad6 x86inc: Fix AVX emulation of some instructions 2016-04-20 19:16:22 +02:00
Henrik Gramner
8dd3ee9ddd x86inc: Fix AVX emulation of scalar float instructions
Those instructions are not commutative since they only change the first
element in the vector and leave the rest unmodified.
2016-04-20 19:16:22 +02:00
James Almer
70d685a77f x86: use the new helper macros where useful
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
2016-02-14 20:00:21 -03:00
James Almer
73a4589d4b x86: add some more helper macros to check for slow cpuflags
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
2016-02-14 20:00:17 -03:00
James Almer
be22bd32fe x86/cpu: set avxslow cpuflag on btver2 CPUs
They are also slow when using 256 bit wide registers

Reviewed-by: Hendrik Leppkes <h.leppkes@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
2016-02-07 16:39:21 -03:00
James Almer
b3b0ecee15 x86/emms: empty the mmx state unconditionally on supported targets
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
2016-02-04 01:49:01 -03:00
Timothy Gu
44304ae322 all: Add missing header guards 2016-01-28 19:49:48 -08:00
James Almer
b624f0660b x86: Add ymm_reg struct
Needed to declare 32-byte long constants

Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
2016-01-28 00:41:19 +01:00
Geza Lore
cc602061ee x86inc: Add debug symbols indicating sizes of compiled functions
Some debuggers/profilers use this metadata to determine which function a
given instruction is in; without it they can get confused by local labels
(if you haven't stripped those). On the other hand, some tools are still
confused even with this metadata. e.g. this fixes `gdb`, but not `perf`.

Currently only implemented for ELF.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-01-23 20:46:28 +01:00
Henrik Gramner
002c47798d x86inc: Avoid creating unnecessary local labels
The REP_RET workaround is only needed on old AMD cpus, and the labels clutter
up the symbol table and confuse debugging/profiling tools, so use EQU to
create SHN_ABS symbols instead of creating local labels. Furthermore, skip
the workaround completely in functions that definitely won't run on such cpus.

Note that EQU is just creating a local label when using nasm instead of yasm.
This is probably a bug, but at least it doesn't break anything.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-01-23 20:44:25 +01:00
Henrik Gramner
fd6ecac38e x86inc: Simplify AUTO_REP_RET
cpuflags is never undefined any more; it's set to 0 instead.

Also fix an incorrect comment.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-01-23 20:43:39 +01:00
Henrik Gramner
5ca8e195e5 x86inc: Use more consistent indentation
Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-01-23 20:42:59 +01:00
Henrik Gramner
91ed050f42 x86inc: Preserve arguments when allocating stack space
When allocating stack space with a larger alignment than the known stack
alignment a temporary register is used for storing the stack pointer.
Ensure that this isn't one of the registers used for passing arguments.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-01-23 20:41:59 +01:00
Henrik Gramner
715eb7ca24 x86inc: Improve FMA instruction handling
 * Correctly handle FMA instructions with memory operands.
 * Print a warning if FMA instructions are used without the correct cpuflag.
 * Simplify the instantiation code.
 * Clarify documentation.

Only the last operand in FMA3 instructions can be a memory operand. When
converting FMA4 instructions to FMA3 instructions we can utilize the fact
that multiply is a commutative operation and reorder operands if necessary
to ensure that a memory operand is used only as the last operand.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-01-23 20:30:30 +01:00
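The reordering rests on nothing more than multiplication being commutative; in C terms (a sketch using the standard fma() from <math.h>):

    #include <math.h>

    /* fma(a, b, c) computes a*b + c with a single rounding; swapping the two
     * factors gives an identical result, which is what allows a memory operand
     * to be moved into the last slot when translating FMA4 forms to FMA3. */
    static double fma_swapped(double a, double b, double c)
    {
        return fma(b, a, c);  /* same value as fma(a, b, c) */
    }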
Henrik Gramner
f60f06d989 x86inc: Be more verbose in assertion failures
Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-01-23 20:30:07 +01:00
Henrik Gramner
7adcd4e841 x86inc: Make cpuflag() and notcpuflag() return 0 or 1
Makes it possible to use them in arithmetic expressions.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2016-01-23 20:19:19 +01:00
Geza Lore
d39c229e54 x86inc: Add debug symbols indicating sizes of compiled functions
Some debuggers/profilers use this metadata to determine which function a
given instruction is in; without it they can get confused by local labels
(if you haven't stripped those). On the other hand, some tools are still
confused even with this metadata. e.g. this fixes `gdb`, but not `perf`.

Currently only implemented for ELF.
2016-01-21 23:19:46 +01:00