mirror of https://github.com/FFmpeg/FFmpeg.git synced 2024-12-18 03:19:31 +02:00
Commit Graph

19 Commits

Author SHA1 Message Date
Thilo Borgmann
d814a839ac Reinstate proper FFmpeg license for all files. 2013-08-30 15:47:38 +00:00
Michael Niedermayer
9d01bf7d66 Merge remote-tracking branch 'qatar/master'
* qatar/master:
  Consistently use "cpu_flags" as variable/parameter name for CPU flags

Conflicts:
	libavcodec/x86/dsputil_init.c
	libavcodec/x86/h264dsp_init.c
	libavcodec/x86/hpeldsp_init.c
	libavcodec/x86/motion_est.c
	libavcodec/x86/mpegvideo.c
	libavcodec/x86/proresdsp_init.c

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-18 09:53:47 +02:00
Diego Biurrun
3ac7fa81b2 Consistently use "cpu_flags" as variable/parameter name for CPU flags 2013-07-18 00:31:35 +02:00
Christophe Gisquet
2c299d4165 x86: sbrdsp: implement SSE2 qmf_pre_shuffle
From 253 to 51 cycles on Arrandale and Win64.
44 cycles on SandyBridge.

Signed-off-by: Anton Khirnov <anton@khirnov.net>
2013-05-10 09:31:27 +02:00
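
The pre-shuffle reorders the QMF samples into the layout expected by the following transform, interleaving the second half of the buffer with the negated, mirrored first half. A scalar C sketch of that kind of reordering is shown below; the function name, indexing, and signature are illustrative and may not match libavcodec/sbrdsp.c exactly.

    /* Sketch of a QMF pre-shuffle: expand 64 input samples into a
     * 128-entry buffer (z must hold 128 floats) by interleaving the
     * upper half with the negated, mirrored lower half.
     * Illustrative only; not the exact FFmpeg indexing. */
    static void qmf_pre_shuffle_sketch(float *z)
    {
        z[64] = z[0];
        z[65] = z[1];
        for (int k = 1; k < 32; k++) {
            z[64 + 2 * k]     = -z[64 - k];
            z[64 + 2 * k + 1] =  z[k + 1];
        }
    }

An SSE2 version can load four forward and four reversed samples per iteration and shuffle them into place.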
Christophe Gisquet
fc37cd4333 x86: sbrdsp: force PIC addressing for Win64
MSVC complains about the 32-bit addressing, while mingw/gcc does not.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-05-08 03:07:03 +02:00
Christophe Gisquet
5a97469a4f x86: sbrdsp: Implement SSE2 qmf_deint_bfly
Sandybridge: 47 cycles

Having a loop counter is a 7-cycle gain.
Unrolling is another 7-cycle gain.
Working in reverse scan saves another 6 cycles.

Signed-off-by: Diego Biurrun <diego@biurrun.de>
2013-05-03 18:23:14 +02:00
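
The operation being vectorized is a deinterleaving butterfly: sums and differences of one input scanned forward and the other scanned backward. A scalar C sketch, assuming the usual 128-entry output and two 64-entry inputs (names and signature are illustrative):

    /* Sketch of a deinterleave-and-butterfly step over 64 float pairs:
     * the front half of the output holds differences, the back half
     * (filled in reverse) holds sums. */
    static void qmf_deint_bfly_sketch(float *v, const float *src0, const float *src1)
    {
        for (int i = 0; i < 64; i++) {
            v[i]       = src0[i] - src1[63 - i];
            v[127 - i] = src0[i] + src1[63 - i];
        }
    }

The reverse scan mentioned in the message corresponds to walking src1 (and the back half of v) from the end, which pairs naturally with a single descending loop counter.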
Michael Niedermayer
fc69033371 avcodec/x86/sbrdsp_init: disable using the noise code in x86_64 MSVC, Try #2
This should fix building with MSVC until someone can change the
code so it works with MSVC

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-04-24 02:02:25 +02:00
Michael Niedermayer
7a617d6c17 avcodec/x86/sbrdsp_init: disable using the noise code in x86_64 MSVC
This should fix building with MSVC until someone can change the
code so it works with MSVC

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-04-23 12:46:28 +02:00
Christophe Gisquet
76c7277385 x86: sbrdsp: implement SSE2 hf_apply_noise
233 to 105 cycles on Arrandale and Win64.
Replacing the multiplication by s_m[m] with a pand and a pxor using
appropriate vectors is slower. Unrolling is a 15-cycle win.
An SSE version was 4 cycles slower.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-04-19 13:19:45 +02:00
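
hf_apply_noise mixes either a sinusoid of level s_m[m] or noise scaled by q_filt[m] into each complex output sample; the pand/pxor idea mentioned above presumably targets that ± multiply. A simplified C sketch follows; the noise table is passed in as a parameter and the per-sample sign handling is omitted, so this is illustrative rather than the real sbrdsp interface.

    /* Simplified sketch of an SBR hf_apply_noise-style loop.  The real
     * code uses a fixed 512-entry noise table and four sign variants;
     * here the table is a parameter and the sign handling is left out. */
    static void hf_apply_noise_sketch(float (*Y)[2], const float *s_m,
                                      const float *q_filt,
                                      const float (*noise_table)[2],
                                      int noise_index, int m_max)
    {
        for (int m = 0; m < m_max; m++) {
            noise_index = (noise_index + 1) & 0x1ff;
            if (s_m[m]) {
                Y[m][0] += s_m[m];                      /* sinusoidal part */
            } else {
                Y[m][0] += q_filt[m] * noise_table[noise_index][0];
                Y[m][1] += q_filt[m] * noise_table[noise_index][1];
            }
        }
    }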
Christophe Gisquet
2383068cbf x86: sbrdsp: implement SSE2 qmf_pre_shuffle
From 253 to 51 cycles on Arrandale and Win64.
44 cycles on SandyBridge.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-04-10 02:42:22 +02:00
Christophe Gisquet
e2946e5c34 x86: sbrdsp: implement SSE qmf_deint_bfly
From 312 to 89/68 (sse/sse2) cycles on Arrandale and Win64.
Sandybridge: 68/47 cycles.

Having a loop counter is a 7-cycle gain.
Unrolling is another 7-cycle gain.
Working in reverse scan saves another 6 cycles.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-04-08 02:26:34 +02:00
Christophe Gisquet
f4b0d12f5b x86: sbrdsp: Implement SSE neg_odd_64
Timing on Arrandale:
        C   SSE
Win32:  57   44
Win64:  47   38
Unrolling and not storing mask both save some cycles.

Signed-off-by: Diego Biurrun <diego@biurrun.de>
2013-04-05 22:47:04 +02:00
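
neg_odd_64 flips the sign of every odd-indexed float in a 64-element array; the mask the message refers to is presumably the constant the SSE code XORs against the sign bits. A scalar C sketch of the behaviour:

    /* Scalar equivalent of a neg_odd_64-style routine: negate the
     * odd-indexed elements of a 64-float array.  An SSE version can do
     * this by XORing the sign bits with a constant mask instead of
     * multiplying. */
    static void neg_odd_64_sketch(float *x)
    {
        for (int i = 1; i < 64; i += 2)
            x[i] = -x[i];
    }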
Diego Biurrun
c9f933b5b6 Add av_cold attributes to arch-specific init functions 2013-02-05 17:01:05 +01:00
Christophe Gisquet
4f50646697 x86: sbrdsp: Implement SSE qmf_post_shuffle
255 to 174 cycles on Arrandale / Win64. Unrolling yields no gain.

Signed-off-by: Diego Biurrun <diego@biurrun.de>
2013-01-06 13:57:01 +01:00
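
Roughly, the post-shuffle builds 32 complex pairs from a 64-float buffer, pairing each forward sample with a negated sample read in reverse. A scalar C sketch (signature inferred from the access pattern, not copied from sbrdsp.c):

    /* Sketch of a QMF post-shuffle: combine the forward half and the
     * negated, reversed half of z into 32 (real, imag) pairs. */
    static void qmf_post_shuffle_sketch(float W[32][2], const float *z)
    {
        for (int k = 0; k < 32; k++) {
            W[k][0] = -z[63 - k];
            W[k][1] =  z[k];
        }
    }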
Christophe Gisquet
44a0036d10 x86: sbrdsp: Implement SSE sum64x5
698 to 174 cycles on Arrandale. Unrolling is a 6-cycle gain.

Signed-off-by: Diego Biurrun <diego@biurrun.de>
2013-01-06 13:57:01 +01:00
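
As the name suggests, sum64x5 sums five consecutive 64-float blocks into the first one, an operation that maps directly onto four-wide packed adds. A scalar sketch:

    /* Scalar sketch of sum64x5: accumulate five consecutive 64-float
     * blocks into the first block (z must hold at least 320 floats).
     * Four elements at a time map naturally onto 128-bit SSE adds. */
    static void sum64x5_sketch(float *z)
    {
        for (int k = 0; k < 64; k++)
            z[k] += z[k + 64] + z[k + 128] + z[k + 192] + z[k + 256];
    }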
Christophe Gisquet
2aef3d66c9 SBR DSP x86: implement SSE sbr_hf_gen
Start and end indices are multiples of 2, which guarantees aligned access.
This also allows generating 4 floats per loop iteration, keeping the
alignment throughout.

Timing:
- 32 bits: 326c -> 172c
- 64 bits: 323c -> 156c

Signed-off-by: Diego Biurrun <diego@biurrun.de>
2012-12-07 11:04:26 +01:00
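
The high-frequency generator is a two-tap complex linear predictor over the low-band QMF samples, with the taps scaled by bw and bw². A scalar C sketch of the recurrence follows; coefficient names mirror the SBR spec, and the exact FFmpeg signature may differ.

    /* Scalar sketch of SBR high-frequency generation: each output sample
     * is the input plus a complex two-tap prediction from the previous
     * two samples, with alpha0 scaled by bw and alpha1 by bw*bw. */
    static void hf_gen_sketch(float (*X_high)[2], const float (*X_low)[2],
                              const float alpha0[2], const float alpha1[2],
                              float bw, int start, int end)
    {
        float a0r = alpha0[0] * bw,      a0i = alpha0[1] * bw;
        float a1r = alpha1[0] * bw * bw, a1i = alpha1[1] * bw * bw;

        for (int i = start; i < end; i++) {
            X_high[i][0] = X_low[i - 2][0] * a1r - X_low[i - 2][1] * a1i +
                           X_low[i - 1][0] * a0r - X_low[i - 1][1] * a0i +
                           X_low[i][0];
            X_high[i][1] = X_low[i - 2][1] * a1r + X_low[i - 2][0] * a1i +
                           X_low[i - 1][1] * a0r + X_low[i - 1][0] * a0i +
                           X_low[i][1];
        }
    }

With start and end even, each pair of complex samples sits on a 16-byte boundary, which is the alignment property the message relies on to emit 4 floats per loop.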
Diego Biurrun
e0c6cce447 x86: Replace checks for CPU extensions and flags by convenience macros
This separates code relying on inline assembly from code relying on external
assembly and fixes instances where the coalesced check was incorrect.
2012-09-08 18:18:34 +02:00
Christophe GISQUET
2784d18791 SBR DSP x86: implement SSE sbr_hf_g_filt
Unrolling the main loop to process a number of elements other than 4:
- 8: a minor gain of 2 cycles (not worth the extra object size)
- 2: a loss of 8 cycles.

Assigning STEP to a register is a loss. The output address (Y) is almost
always unaligned.

Timings:
- C (32/64 bits): 117/109 cycles
- SSE: 57 cycles

Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
2012-02-23 15:50:09 -08:00
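
g_filt scales one complex X_high sample per channel by a per-channel gain; STEP in the message is presumably the fixed stride between consecutive X_high rows. A scalar C sketch, with the inner X_high dimension assumed to be the usual 40 QMF time slots (illustrative, not the exact signature):

    /* Scalar sketch of an hf_g_filt-style loop: multiply one complex
     * X_high sample per channel by the per-channel gain g_filt[m].
     * The 40-entry inner dimension (the row stride, i.e. STEP) is an
     * assumption for illustration. */
    static void hf_g_filt_sketch(float (*Y)[2], const float (*X_high)[40][2],
                                 const float *g_filt, int m_max, int ixh)
    {
        for (int m = 0; m < m_max; m++) {
            Y[m][0] = X_high[m][ixh][0] * g_filt[m];
            Y[m][1] = X_high[m][ixh][1] * g_filt[m];
        }
    }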
Christophe GISQUET
34454c761f SBR DSP x86: implement SSE sbr_sum_square_sse
The 32-bit targets have been compiled with -mfpmath=sse for a proper reference.
sbr_sum_square C  /32bits: 82c (unrolled)/102c
               C  /64bits: 69c (unrolled)/82c
               SSE/32bits: 42c
               SSE/64bits: 31c

Use of SSE4.1 dpps to perform the final sum is slower.
Not unrolling to perform 8 operations in a loop yields 10 more cycles.

Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
2012-02-23 15:50:06 -08:00
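
sbr_sum_square accumulates the energy (sum of squared real and imaginary parts) of n complex samples. A scalar C sketch using two accumulators, in the spirit of the unrolling discussed above (n is assumed even here):

    /* Scalar sketch of sbr_sum_square: energy of n complex samples.
     * Two accumulators give the unrolled loop independent dependency
     * chains; n is assumed even. */
    static float sum_square_sketch(const float (*x)[2], int n)
    {
        float sum0 = 0.0f, sum1 = 0.0f;
        for (int i = 0; i + 1 < n; i += 2) {
            sum0 += x[i][0]     * x[i][0]     + x[i][1]     * x[i][1];
            sum1 += x[i + 1][0] * x[i + 1][0] + x[i + 1][1] * x[i + 1][1];
        }
        return sum0 + sum1;
    }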