
Lines matching refs: Fast

1859                                                   bool *Fast) const {
1860   if (Fast) {
1863       // 8-byte and under are always assumed to be fast.
1864       *Fast = true;
1867       *Fast = !Subtarget.isUnalignedMem16Slow();
1870       *Fast = !Subtarget.isUnalignedMem32Slow();
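
The matches at 1859-1870 all sit in the target hook that reports whether a misaligned memory access is cheap. A minimal sketch of how those lines plausibly fit together follows; the surrounding signature, the switch over the access width, and the trailing return are reconstructions, not quoted source (only the numbered lines above are verbatim).

    // Sketch, not verbatim source: parameter names and the switch framing are assumed.
    bool X86TargetLowering::allowsMisalignedMemoryAccesses(EVT VT, unsigned /*AddrSpace*/,
                                                           unsigned /*Align*/,
                                                           bool *Fast) const {
      if (Fast) {
        switch (VT.getSizeInBits()) {
        default:
          // 8-byte and under are always assumed to be fast.
          *Fast = true;
          break;
        case 128:
          *Fast = !Subtarget.isUnalignedMem16Slow();
          break;
        case 256:
          *Fast = !Subtarget.isUnalignedMem32Slow();
          break;
        }
      }
      // x86 permits misaligned accesses of every width; *Fast only reports their cost.
      return true;
    }

The callers listed further down (12775, 28791, 29178) consume this *Fast out-parameter to decide whether an unaligned access is worth forming.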
2334 // C & StdCall & Fast Calling Convention implementation
2340 // For info on fast calling convention see Fast Calling Convention (tail call)
2393 return (CC == CallingConv::Fast || CC == CallingConv::GHC ||
3440 // Fast Calling Convention (tail call) implementation
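
The fragment at 2393 reads like the predicate that decides which calling conventions can guarantee tail-call optimization. A hedged completion of that return statement is sketched below; the enclosing function name and any convention beyond the two visible ones are assumptions inferred from the visible prefix.

    // Sketch: the name canGuaranteeTCO and the HiPE entry are assumed; only
    // CallingConv::Fast and CallingConv::GHC are confirmed by the listing above.
    static bool canGuaranteeTCO(CallingConv::ID CC) {
      return (CC == CallingConv::Fast || CC == CallingConv::GHC ||
              CC == CallingConv::HiPE);
    }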
7139 /// This is a fast way to test a shuffle mask against a fixed pattern:
7659 /// shuffle+blend operations on newer X86 ISAs where we have very fast blend
8999 // onward this has a single fast instruction with no scary immediates.
9244 // when the V2 input is targeting element 0 of the mask -- that is the fast
9309 // onward this has a single fast instruction with no scary immediates.
10858 /// This allows for fast cases such as subvector extraction/insertion
12775 bool Fast;
12780 OpVT, AS, Alignment, &Fast) && Fast) {
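
Lines 12775 and 12780 show the first of several query-then-guard uses of the hook sketched above: declare a local Fast flag, ask the target whether the access is both legal and fast, and only then perform the combine. A hedged reconstruction of that shape follows; the TLI/DAG names and the body of the if are assumptions, while the argument tail OpVT, AS, Alignment, &Fast is taken from the listing.

    // Sketch of the guard at 12775/12780: proceed only when the access is fast.
    bool Fast;
    if (TLI.allowsMemoryAccess(*DAG.getContext(), DAG.getDataLayout(),
                               OpVT, AS, Alignment, &Fast) &&
        Fast) {
      // ... form the wider / unaligned memory access ...
    }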
13499 // TODO: Are there any fast-math-flags to propagate here?
13549 // TODO: Are there any fast-math-flags to propagate here?
13583 // as non-fast and always be enabled. Why isn't SDAG FMF enough? Because
13651 // TODO: Are there any fast-math-flags to propagate here?
13783 // TODO: Are there any fast-math-flags to propagate here?
17468 // TODO: Intrinsics should have fast-math-flags to propagate.
18855 case CallingConv::Fast:
21026 // We can't use the fast LUT approach, so fall back on vectorized bitmath.
21899 // TODO: Are there any fast-math-flags to propagate here?
25259 // instructions, but in practice PSHUFB tends to be *very* fast so we're
28791 bool Fast;
28797 AddressSpace, Alignment, &Fast) && !Fast) {
29178 bool Fast;
29183 AddressSpace, Alignment, &Fast) &&
29184 !Fast) {
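
Lines 28791-28797 and 29178-29184 use the same hook with the opposite polarity: the transform is abandoned when the access would be legal but slow on the current subtarget. A hedged sketch of that bail-out pattern follows; the value type name and the early return are assumptions, while AddressSpace, Alignment, &Fast and the !Fast test come from the listing.

    // Sketch of the inverse guard at 28797 and 29183-29184.
    bool Fast;
    if (TLI.allowsMemoryAccess(*DAG.getContext(), DAG.getDataLayout(),
                               VT, AddressSpace, Alignment, &Fast) &&
        !Fast) {
      // Unaligned access is permitted but reported slow: skip this fold.
      return SDValue();
    }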