//===---------------------------------------------------------------------===//
// Random notes about and ideas for the SystemZ backend.
//===---------------------------------------------------------------------===//

The initial backend is deliberately restricted to z10. We should add support
for later architectures at some point.

--

SystemZDAGToDAGISel::SelectInlineAsmMemoryOperand() is passed "m" for all
inline asm memory constraints; it doesn't get to see the original constraint.
This means that it must conservatively treat all inline asm constraints
as the most restricted type, "R".

--

If an inline asm ties an i32 "r" result to an i64 input, the input
will be treated as an i32, leaving the upper bits uninitialised.
For example:

define void @f4(i32 *%dst) {
  %val = call i32 asm "blah $0", "=r,0" (i64 103)
  store i32 %val, i32 *%dst
  ret void
}

from CodeGen/SystemZ/asm-09.ll will use LHI rather than LGHI to load 103.
This seems to be a general target-independent problem.

--

The tuning of the choice between LOAD ADDRESS (LA) and addition in
SystemZISelDAGToDAG.cpp is suspect. It should be tweaked based on
performance measurements.

--

We don't support tail calls at present.

--

We don't support prefetching yet.

--

There is no scheduling support.

--

We don't use the BRANCH ON COUNT or BRANCH ON INDEX families of instructions.

--

We might want to use BRANCH ON CONDITION for conditional indirect calls
and conditional returns.

--

We don't use the condition code results of anything except comparisons.

Implementing this may need something more finely grained than the z_cmp
and z_ucmp that we have now. It might (or might not) also be useful to
have a mask of "don't care" values in conditional branches. For example,
integer comparisons never set CC to 3, so the bottom bit of the CC mask
isn't particularly relevant. JNLH and JE are equally good for testing
equality after an integer comparison, etc.

--

We don't use the LOAD AND TEST or TEST DATA CLASS instructions.

--

We could use the generic floating-point forms of LOAD COMPLEMENT,
LOAD NEGATIVE and LOAD POSITIVE in cases where we don't need the
condition codes. For example, we could use LCDFR instead of LCDBR.

--

We don't optimize block memory operations.

It's definitely worth using things like MVC, CLC, NC, XC and OC with
constant lengths. MVCIN may be worthwhile too.

We should probably implement things like memcpy using MVC with EXECUTE.
Likewise memcmp and CLC. MVCLE and CLCLE could be useful too.

--

We don't optimize string operations.

MVST, CLST, SRST and CUSE could be useful here. Some of the TRANSLATE
family might be too, although they are probably more difficult to exploit.

--

We don't take full advantage of builtins like fabsl because the calling
conventions require f128s to be returned by invisible reference.

--

ADD LOGICAL WITH SIGNED IMMEDIATE could be useful when we need to
produce a carry. SUBTRACT LOGICAL IMMEDIATE could be useful when we
need to produce a borrow. (Note that there are no memory forms of
ADD LOGICAL WITH CARRY and SUBTRACT LOGICAL WITH BORROW, so the high
part of 128-bit memory operations would probably need to be done
via a register.)
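
For example, in a hypothetical test in the style of
CodeGen/SystemZ/int-add-08.ll:

define void @f1(i128 *%ptr) {
  ; The i128 addition is split into a low half, which produces a carry,
  ; and a high half, which consumes it.
  %val = load i128 *%ptr
  %add = add i128 %val, 1
  store i128 %add, i128 *%ptr
  ret void
}

ADD LOGICAL WITH SIGNED IMMEDIATE could perform the low-half addition
directly in memory, with the high half then handled via a register
as noted above.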

--

We don't use the halfword forms of LOAD REVERSED and STORE REVERSED
(LRVH and STRVH).

--

We could take advantage of the various ... UNDER MASK instructions,
such as ICM and STCM.

--

DAGCombiner can detect integer absolute, but there's not yet an associated
ISD opcode. We could add one and implement it using LOAD POSITIVE.
Negated absolutes could use LOAD NEGATIVE.

--

DAGCombiner doesn't yet fold truncations of extended loads. Functions like:

unsigned long f (unsigned long x, unsigned short *y)
{
  return (x << 32) | *y;
}

therefore end up as:

    sllg %r2, %r2, 32
    llgh %r0, 0(%r3)
    lr %r2, %r0
    br %r14

but truncating the load would give:

    sllg %r2, %r2, 32
    lh %r2, 0(%r3)
    br %r14

--

Functions like:

define i64 @f1(i64 %a) {
  %and = and i64 %a, 1
  ret i64 %and
}

ought to be implemented as:

    lhi %r0, 1
    ngr %r2, %r0
    br %r14

but two-address optimisations reverse the order of the AND and force:

    lhi %r0, 1
    ngr %r0, %r2
    lgr %r2, %r0
    br %r14

CodeGen/SystemZ/and-04.ll has several examples of this.

--

Out-of-range displacements are usually handled by loading the full
address into a register. In many cases it would be better to create
an anchor point instead. E.g. for:

define void @f4a(i128 *%aptr, i64 %base) {
  %addr = add i64 %base, 524288
  %bptr = inttoptr i64 %addr to i128 *
  %a = load volatile i128 *%aptr
  %b = load i128 *%bptr
  %add = add i128 %a, %b
  store i128 %add, i128 *%aptr
  ret void
}

(from CodeGen/SystemZ/int-add-08.ll) we load %base+524288 and %base+524296
into separate registers, rather than using %base+524288 as a base for both.

--

Dynamic stack allocations round the size to 8 bytes and then allocate
that rounded amount. It would be simpler to subtract the unrounded
size from the copy of the stack pointer and then align the result.
See CodeGen/SystemZ/alloca-01.ll for an example.

--

Atomic loads and stores use the default compare-and-swap based implementation.
This is much too conservative in practice, since the architecture guarantees
that 1-, 2-, 4- and 8-byte loads and stores to aligned addresses are
inherently atomic. (See the example at the end of these notes.)

--

If needed, we can support 16-byte atomics using LPQ, STPQ and CSDG.

--

We might want to model all access registers and use them to spill
32-bit values.
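
--

As an illustration of the atomic load/store note above, a hypothetical
test like:

define i64 @f1(i64 *%src) {
  ; An aligned 8-byte load is inherently atomic on this architecture.
  %val = load atomic i64 *%src monotonic, align 8
  ret i64 %val
}

should be able to use a plain LG rather than a compare-and-swap loop.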