This replaces the riscv-specific handling from
7212466e73 (which is essentially
reverted) with a different implementation of the same feature (plus
a bit more), based on the corresponding feature in dav1d's checkasm,
supporting both Unix and Windows.
See in particular the dav1d commits
0b6ee30eab2400e4f85b735ad29a68a842c34e21,
0421f787ea592fd2cc74c887f20b8dc31393788b,
8501a4b20135f93a4c3b426468e2240e872949c5 and
d23e87f7aee26ddcf5f7a2e185112031477599a7, authored by Henrik Gramner.
The overall approach is the same as in the existing riscv
implementation: set up a signal handler, store the state with
sigsetjmp, and jump out of the crashing function with siglongjmp.
The main difference is in what happens when the signal handler
is invoked. In the previous implementation, it would resume from
right before calling the crashing function, and then skip that call
based on the setjmp return value.
In the imported implementation from dav1d, we return to right before
the check_func() call, which will skip testing the current function
(as the pointer is the same as it was before).
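On Unix, the mechanism boils down to roughly the following (a
simplified sketch, not the literal checkasm code; installing the
handler with sigaction is omitted, and the real version wraps this
around the check_func() call and handles Windows separately via
AddVectoredExceptionHandler/SEH):

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf checkasm_context;
    static volatile sig_atomic_t handler_armed;

    static void signal_handler(int sig)
    {
        if (handler_armed) {
            handler_armed = 0;                 /* catch at most one signal */
            siglongjmp(checkasm_context, sig); /* back to before the call */
        }
        /* not armed: fall back to the default handling */
        signal(sig, SIG_DFL);
        raise(sig);
    }

    static int test_one_function(void (*tested_func)(void))
    {
        int sig = sigsetjmp(checkasm_context, 1);
        if (sig) {
            /* the tested function crashed; report and skip it */
            fprintf(stderr, "fatal signal %d in tested function\n", sig);
            return -1;
        }
        handler_armed = 1;
        tested_func(); /* a crash here jumps back to the sigsetjmp above */
        handler_armed = 0;
        return 0;
    }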
Other differences are:
- Support for other signal handling mechanisms (Windows
AddVectoredExceptionHandler)
- Using RtlCaptureContext/RtlRestoreContext instead of setjmp/longjmp
on Windows with SEH
- Only catching signals once per function - if more than one
signal is delivered before signal handling is re-enabled, any
further signal is handled as it would be without our handler
- Not using an arch-specific signal handler written in assembly
Signed-off-by: Martin Storsjö <martin@martin.st>
For memcpy and memcmp, we need to multiply by the element size;
otherwise we're copying and comparing only a fraction of the buffer.
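Schematically (assuming int32_t elements; p1, p1_2 and BUF_SIZE as in
the test):

    int32_t p1[BUF_SIZE], p1_2[BUF_SIZE];

    /* wrong: copies/compares only BUF_SIZE bytes, a quarter of the buffer */
    /* memcpy(p1_2, p1, BUF_SIZE); */

    /* right: multiply by the element size */
    memcpy(p1_2, p1, BUF_SIZE * sizeof(*p1));
    if (memcmp(p1, p1_2, BUF_SIZE * sizeof(*p1)))
        fail();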
For decorrelate_rs, the buffer p1 is the one that is mutated;
copy and check p1 instead of p2.
For decorrelate_ms, both buffers are mutated, so copy and check
both of them.
For decorrelate_ms, the memcpy initialization of p1 and p1_2 was
reversed - p1 is filled by randomize, but then memcpy copied from
p1_2 to p1. As p1_2 is uninitialized at this point, clang concluded
that the copy was bogus and omitted it entirely, triggering failures
in this test on x86 (where there was an existing assembly implementation
to test).
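In code, the reversed initialization looked roughly like this (a
sketch of the pattern; randomize_buffers stands in for the test's
actual randomization helper):

    randomize_buffers(p1, BUF_SIZE);
    /* bug: source and destination swapped; p1_2 is still uninitialized
     * here, so clang legally discarded the whole copy */
    /* memcpy(p1, p1_2, BUF_SIZE * sizeof(*p1)); */

    /* fix: copy the randomized p1 into the reference copy p1_2 */
    memcpy(p1_2, p1, BUF_SIZE * sizeof(*p1));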
Signed-off-by: Martin Storsjö <martin@martin.st>
The ffmpeg coding style doesn't usually use const on scalar
parameters (or on the pointer values - as opposed to the type
that is pointed to, where it has a semantic meaning), contrary
to the dav1d coding style (which this code was imported from).
This avoids warnings about differences in the type signatures
between declaration and definition of this function, with older
versions of MSVC.
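For illustration, the style difference looks like this (a hypothetical
prototype, not the actual checkasm function):

    /* dav1d style: const on scalar parameters and pointer values */
    void report_result(const char *const name, const int pass);

    /* ffmpeg style: const only where it is semantically meaningful */
    void report_result(const char *name, int pass);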
The issue was observed with one version of MSVC 2017,
19.16.27024.1, with warnings like these:
src/tests/checkasm/checkasm.c(969): warning C4028: formal parameter 3 different from declaration
The warning itself is bogus as the const here is harmless, and
newer versions of MSVC no longer warn about this.
Signed-off-by: Martin Storsjö <martin@martin.st>
The len parameter was changed from unsigned int to size_t in
567c67c6c8.
This fixes crashes in the reference C code when running checkasm
on aarch64, where the upper 32 bits of a 64-bit register holding a
32-bit argument are not guaranteed to be zero, so the mismatched
prototype could produce a garbage length.
Signed-off-by: Martin Storsjö <martin@martin.st>
Terminating the whole checkasm process is not very helpful. This will
instead report if an illegal instruction occurs while executing a
tested function. This is a common occurrence while developing RISC-V
assembler, since the compatibility between the vector configuration
and each instruction is only checked at run-time.
decorrelate_ls, _rs and _ms are decorrelate[1], [2] and [3] respectively.
The code ended up testing indep ([0]) twice, ms never, and misnaming
the other two.
The tested functions treat s_m[i] == 0 as a special case. Other than
that, the functions are slightly complicated vector additions.
This change makes the zero case actually occur pseudorandomly.
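A sketch of the idea (the probability and value range are illustrative
assumptions; rnd() is checkasm's PRNG helper):

    for (int i = 0; i < n; i++)
        /* make the s_m[i] == 0 special case occur pseudorandomly,
         * roughly one time in four */
        s_m[i] = (rnd() & 3) ? rnd() : 0;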
This already avoids unnecessary indirectly included headers
in the arch-specific vf_bwdif_init.c files; it is also in
preparation for splitting the actual functions out of vf_bwdif.c.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Not every user of idctdsp.h wants to initialize an IDCTDSPContext;
e.g. the proresdsp only uses ff_init_scantable_permutation()
and the IDCT permutation enum; similarly for cavsdsp and wmv2dsp.
Using a forward declaration here avoids an avcodec.h dependency
in the relevant files.
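The pattern, schematically (simplified; IDCTDSPContext is declared
earlier in the header, and the actual prototype carries more context):

    /* idctdsp.h: no #include "avcodec.h" needed - a forward declaration
     * suffices, as only a pointer to the struct appears in the prototype */
    struct AVCodecContext;

    void ff_idctdsp_init(IDCTDSPContext *c, struct AVCodecContext *avctx);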
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This makes the test stricter, because it now also checks that the
MMX registers are not accidentally clobbered.
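Schematically (checkasm's declare_func_emms() takes the CPU flags that
imply MMX state; the argument list here is made up):

    /* before: the wrapper executed emms for MMX code, so leftover MMX
     * state went unnoticed */
    declare_func_emms(AV_CPU_FLAG_MMX, void, int16_t *block);

    /* after: no MMX code is tested, so also verify that the function
     * returns with a clean FPU/MMX state */
    declare_func(void, int16_t *block);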
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Only sub_median_pred has an MMXEXT version, so one can use
the version with the stricter check (that checks that
the MMX registers have not been clobbered) for sub_left_predict.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Only the idct_dc and add_residual functions have MMX versions,
so one can use the version with the stricter check (that checks
that the MMX registers have not been clobbered) for all the other
checks.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
check_func() might return NULL, in which case the function is not to be
benched. Introduced in cc679054c7.
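The usual checkasm pattern keeps bench_new() inside the check_func()
block (names and arguments below are placeholders):

    declare_func(void, uint8_t *dst, const uint8_t *src, int len);

    if (check_func(ctx.fn, "some_function")) {
        call_ref(dst_ref, src, len);
        call_new(dst_new, src, len);
        if (memcmp(dst_ref, dst_new, len))
            fail();
        bench_new(dst_new, src, len); /* only reached when check_func()
                                         returned non-NULL */
    }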
Signed-off-by: Matthias Dressel <code@deadcode.eu>
Signed-off-by: Martin Storsjö <martin@martin.st>
The code was blindly assuming that Zbb or V implied Zba. While the
former is practically always true, the latter broke some QEMU setups,
as V was introduced earlier than Zba.
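A sketch of the corrected logic (the flag names are libavutil's; the
probing helpers are hypothetical):

    int flags = 0;

    /* probe each extension independently; do not infer Zba from Zbb or V */
    if (cpu_has_zba()) flags |= AV_CPU_FLAG_RVB_ADDR;  /* Zba */
    if (cpu_has_zbb()) flags |= AV_CPU_FLAG_RVB_BASIC; /* Zbb */
    if (cpu_has_v())   flags |= AV_CPU_FLAG_RVV_I32;   /* V */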
Add an optional filter_line3 to the available optimisations.
filter_line3 is equivalent to filter_line, memcpy, filter_line.
filter_line shares quite a number of loads and some calculations
with its next iteration, and testing shows that the aarch64 NEON
filter_line3's performance is 30% better than two filter_lines and
a memcpy.
Adds a test for vf_bwdif filter_line3 to checkasm
Rounds job start lines down to a multiple of 4. This means that if
filter_line3 exists, then filter_line will not sometimes be called
once at the end of a slice, depending on thread count. The final
slice may do up to 3 extra lines, but filter_edge is faster than
filter_line, so it is unlikely to create any noticeable thread load
variation.
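The slice bounds then look something like this (a sketch of the
arithmetic, not the literal vf_bwdif code):

    /* round each job's start line down to a multiple of 4, so slices
     * start on filter_line3-friendly boundaries; the last slice keeps
     * the remainder (up to 3 extra lines) */
    int slice_start = (height *  jobnr      / nb_jobs) & ~3;
    int slice_end   = (jobnr == nb_jobs - 1) ? height
                    : (height * (jobnr + 1) / nb_jobs) & ~3;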
Signed-off-by: John Cox <jc@kynesim.co.uk>
Signed-off-by: Martin Storsjö <martin@martin.st>
From x86inc:
> On AMD cpus <=K10, an ordinary ret is slow if it immediately follows either
> a branch or a branch target. So switch to a 2-byte form of ret in that case.
> We can automatically detect "follows a branch", but not a branch target.
> (SSSE3 is a sufficient condition to know that your cpu doesn't have this problem.)
x86inc can automatically determine whether to use REP_RET rather than
RET in most of these cases, so the impact is minimal. Additionally, a
few REP_RETs were used unnecessarily, despite the return being nowhere
near a branch.
The only CPUs affected were AMD K10s, made between 2007 and 2011, 16
years ago and 12 years ago, respectively.
In the future, everyone involved with x86inc should consider dropping
REP_RETs altogether.
This commit enables assembly code with Intel AVX512 VNNI and adds a
checkasm test for the sobel filter:

sobel_c: 4537
sobel_avx512icl: 2136
Signed-off-by: bwang30 <bin.wang@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
There is no MMX code for (add|put|put_signed)_pixels_clamped
since commit bfb28b5ce8, so use
declare_func() instead of declare_func_emms() to also test that
we are not in MMX mode after return.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>