1 Target Independent Opportunities:
2
3 //===---------------------------------------------------------------------===//
4
We should recognize various "overflow detection" idioms and translate them into
6 llvm.uadd.with.overflow and similar intrinsics. Here is a multiply idiom:
7
8 unsigned int mul(unsigned int a,unsigned int b) {
9 if ((unsigned long long)a*b>0xffffffff)
10 exit(0);
11 return a*b;
12 }
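
An addition variant of the same idiom (a hypothetical sketch of the kind of
pattern that should map onto llvm.uadd.with.overflow):

unsigned int add(unsigned int a, unsigned int b) {
  if (a + b < a)   /* unsigned addition wrapped around */
    exit(0);
  return a + b;
}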
13
14 The legalization code for mul-with-overflow needs to be made more robust before
15 this can be implemented though.
16
17 //===---------------------------------------------------------------------===//
18
19 Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
21 safe in general, even on darwin. See the libm implementation of hypot for
22 examples (which special case when x/y are exactly zero to get signed zeros etc
23 right).
24
25 //===---------------------------------------------------------------------===//
26
27 On targets with expensive 64-bit multiply, we could LSR this:
28
29 for (i = ...; ++i) {
30 x = 1ULL << i;
31
32 into:
33 long long tmp = 1;
34 for (i = ...; ++i, tmp+=tmp)
35 x = tmp;
36
37 This would be a win on ppc32, but not x86 or ppc64.
38
39 //===---------------------------------------------------------------------===//
40
41 Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
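
The corresponding C-level idiom is a sign test on a loaded word; only the byte
holding the sign bit is needed (a sketch; that byte is P+3 on a little-endian
target and P itself on a big-endian one):

int is_negative(int *P) {
  return *P < 0;   /* only the sign bit of the 32-bit load is used */
}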
42
43 //===---------------------------------------------------------------------===//
44
45 Reassociate should turn things like:
46
47 int factorial(int X) {
48 return X*X*X*X*X*X*X*X;
49 }
50
51 into llvm.powi calls, allowing the code generator to produce balanced
52 multiplication trees.
53
54 First, the intrinsic needs to be extended to support integers, and second the
55 code generator needs to be enhanced to lower these to multiplication trees.
56
57 //===---------------------------------------------------------------------===//
58
59 Interesting? testcase for add/shift/mul reassoc:
60
61 int bar(int x, int y) {
62 return x*x*x+y+x*x*x*x*x*y*y*y*y;
63 }
64 int foo(int z, int n) {
65 return bar(z, n) + bar(2*z, 2*n);
66 }
67
68 This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
is that we end up getting t = 2*X, s = t*t, and don't turn this into 4*X*X,
70 which is the same number of multiplies and is canonical, because the 2*X has
71 multiple uses. Here's a simple example:
72
73 define i32 @test15(i32 %X1) {
74 %B = mul i32 %X1, 47 ; X1*47
75 %C = mul i32 %B, %B
76 ret i32 %C
77 }
78
79
80 //===---------------------------------------------------------------------===//
81
82 Reassociate should handle the example in GCC PR16157:
83
84 extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
85 void f () { /* this can be optimized to four additions... */
86 b4 = a4 + a3 + a2 + a1 + a0;
87 b3 = a3 + a2 + a1 + a0;
88 b2 = a2 + a1 + a0;
89 b1 = a1 + a0;
90 }
91
92 This requires reassociating to forms of expressions that are already available,
93 something that reassoc doesn't think about yet.
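
For reference, the four-addition form that reassociation would need to reach
(a sketch, ignoring the order of the stores):

b1 = a1 + a0;
b2 = a2 + b1;
b3 = a3 + b2;
b4 = a4 + b3;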
94
95
96 //===---------------------------------------------------------------------===//
97
98 This function: (derived from GCC PR19988)
99 double foo(double x, double y) {
100 return ((x + 0.1234 * y) * (x + -0.1234 * y));
101 }
102
103 compiles to:
104 _foo:
105 movapd %xmm1, %xmm2
106 mulsd LCPI1_1(%rip), %xmm1
107 mulsd LCPI1_0(%rip), %xmm2
108 addsd %xmm0, %xmm1
109 addsd %xmm0, %xmm2
110 movapd %xmm1, %xmm0
111 mulsd %xmm2, %xmm0
112 ret
113
114 Reassociate should be able to turn it into:
115
116 double foo(double x, double y) {
117 return ((x + 0.1234 * y) * (x - 0.1234 * y));
118 }
119
120 Which allows the multiply by constant to be CSE'd, producing:
121
122 _foo:
123 mulsd LCPI1_0(%rip), %xmm1
124 movapd %xmm1, %xmm2
125 addsd %xmm0, %xmm2
126 subsd %xmm1, %xmm0
127 mulsd %xmm2, %xmm0
128 ret
129
130 This doesn't need -ffast-math support at all. This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
132 doesn't have this problem.
133
134 //===---------------------------------------------------------------------===//
135
136 These two functions should generate the same code on big-endian systems:
137
138 int g(int *j,int *l) { return memcmp(j,l,4); }
139 int h(int *j, int *l) { return *j - *l; }
140
This could be done in SelectionDAGISel.cpp, along with other special cases,
142 for 1,2,4,8 bytes.
143
144 //===---------------------------------------------------------------------===//
145
146 It would be nice to revert this patch:
147 http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html
148
149 And teach the dag combiner enough to simplify the code expanded before
150 legalize. It seems plausible that this knowledge would let it simplify other
151 stuff too.
152
153 //===---------------------------------------------------------------------===//
154
155 For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
156 to the type size. It works but can be overly conservative as the alignment of
specific vector types is target dependent.
158
159 //===---------------------------------------------------------------------===//
160
161 We should produce an unaligned load from code like this:
162
163 v4sf example(float *P) {
164 return (v4sf){P[0], P[1], P[2], P[3] };
165 }
166
167 //===---------------------------------------------------------------------===//
168
169 Add support for conditional increments, and other related patterns. Instead
170 of:
171
172 movl 136(%esp), %eax
173 cmpl $0, %eax
174 je LBB16_2 #cond_next
175 LBB16_1: #cond_true
176 incl _foo
177 LBB16_2: #cond_next
178
179 emit:
180 movl _foo, %eax
181 cmpl $1, %edi
182 sbbl $-1, %eax
183 movl %eax, _foo
184
185 //===---------------------------------------------------------------------===//
186
187 Combine: a = sin(x), b = cos(x) into a,b = sincos(x).
188
189 Expand these to calls of sin/cos and stores:
190 double sincos(double x, double *sin, double *cos);
191 float sincosf(float x, float *sin, float *cos);
192 long double sincosl(long double x, long double *sin, long double *cos);
193
194 Doing so could allow SROA of the destination pointers. See also:
195 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
196
197 This is now easily doable with MRVs. We could even make an intrinsic for this
198 if anyone cared enough about sincos.
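
A hypothetical example of the source pattern that should combine:

#include <math.h>
void test(double x, double *s, double *c) {
  *s = sin(x);   /* a = sin(x) ...                        */
  *c = cos(x);   /* ... b = cos(x) -> one sincos(x, s, c) */
}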
199
200 //===---------------------------------------------------------------------===//
201
202 quantum_sigma_x in 462.libquantum contains the following loop:
203
204 for(i=0; i<reg->size; i++)
205 {
206 /* Flip the target bit of each basis state */
207 reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
208 }
209
210 Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
211 so cool to turn it into something like:
212
213 long long Res = ((MAX_UNSIGNED) 1 << target);
214 if (target < 32) {
215 for(i=0; i<reg->size; i++)
216 reg->node[i].state ^= Res & 0xFFFFFFFFULL;
217 } else {
218 for(i=0; i<reg->size; i++)
reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
220 }
221
222 ... which would only do one 32-bit XOR per loop iteration instead of two.
223
It would also be nice to recognize that reg->size doesn't alias reg->node[i], but
225 this requires TBAA.
226
227 //===---------------------------------------------------------------------===//
228
229 This isn't recognized as bswap by instcombine (yes, it really is bswap):
230
231 unsigned long reverse(unsigned v) {
232 unsigned t;
233 t = v ^ ((v << 16) | (v >> 16));
234 t &= ~0xff0000;
235 v = (v << 24) | (v >> 8);
236 return v ^ (t >> 8);
237 }
238
239 //===---------------------------------------------------------------------===//
240
241 [LOOP DELETION]
242
We don't delete this output-free loop, because trip count analysis doesn't
244 realize that it is finite (if it were infinite, it would be undefined). Not
245 having this blocks Loop Idiom from matching strlen and friends.
246
247 void foo(char *C) {
248 int x = 0;
249 while (*C)
250 ++x,++C;
251 }
252
253 //===---------------------------------------------------------------------===//
254
255 [LOOP RECOGNITION]
256
257 These idioms should be recognized as popcount (see PR1488):
258
259 unsigned countbits_slow(unsigned v) {
260 unsigned c;
261 for (c = 0; v; v >>= 1)
262 c += v & 1;
263 return c;
264 }
265 unsigned countbits_fast(unsigned v){
266 unsigned c;
267 for (c = 0; v; c++)
268 v &= v - 1; // clear the least significant bit set
269 return c;
270 }
271
272 BITBOARD = unsigned long long
273 int PopCnt(register BITBOARD a) {
274 register int c=0;
275 while(a) {
276 c++;
277 a &= a - 1;
278 }
279 return c;
280 }
281 unsigned int popcount(unsigned int input) {
282 unsigned int count = 0;
283 for (unsigned int i = 0; i < 4 * 8; i++)
count += (input >> i) & 1;
285 return count;
286 }
287
288 This should be recognized as CLZ: rdar://8459039
289
290 unsigned clz_a(unsigned a) {
291 int i;
292 for (i=0;i<32;i++)
293 if (a & (1<<(31-i)))
294 return i;
295 return 32;
296 }
297
298 This sort of thing should be added to the loop idiom pass.
299
300 //===---------------------------------------------------------------------===//
301
302 These should turn into single 16-bit (unaligned?) loads on little/big endian
303 processors.
304
305 unsigned short read_16_le(const unsigned char *adr) {
306 return adr[0] | (adr[1] << 8);
307 }
308 unsigned short read_16_be(const unsigned char *adr) {
309 return (adr[0] << 8) | adr[1];
310 }
311
312 //===---------------------------------------------------------------------===//
313
314 -instcombine should handle this transform:
315 icmp pred (sdiv X / C1 ), C2
316 when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.
317
318 Currently InstCombine avoids this transform but will do it when the signs of
319 the operands and the sign of the divide match. See the FIXME in
320 InstructionCombining.cpp in the visitSetCondInst method after the switch case
321 for Instruction::UDiv (around line 4447) for more details.
322
323 The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
324 this construct.
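
A hypothetical example of the idiom with unsigned operands:

int in_bucket(unsigned x) {
  return x / 10 == 5;   /* foldable to (x - 50) < 10, i.e. 50 <= x <= 59 */
}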
325
326 //===---------------------------------------------------------------------===//
327
328 [LOOP OPTIMIZATION]
329
330 SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
331 opportunities in its double_array_divs_variable function: it needs loop
332 interchange, memory promotion (which LICM already does), vectorization and
333 variable trip count loop unrolling (since it has a constant trip count). ICC
334 apparently produces this very nice code with -ffast-math:
335
336 ..B1.70: # Preds ..B1.70 ..B1.69
337 mulpd %xmm0, %xmm1 #108.2
338 mulpd %xmm0, %xmm1 #108.2
339 mulpd %xmm0, %xmm1 #108.2
340 mulpd %xmm0, %xmm1 #108.2
341 addl $8, %edx #
342 cmpl $131072, %edx #108.2
343 jb ..B1.70 # Prob 99% #108.2
344
345 It would be better to count down to zero, but this is a lot better than what we
346 do.
347
348 //===---------------------------------------------------------------------===//
349
350 Consider:
351
352 typedef unsigned U32;
353 typedef unsigned long long U64;
354 int test (U32 *inst, U64 *regs) {
355 U64 effective_addr2;
356 U32 temp = *inst;
357 int r1 = (temp >> 20) & 0xf;
358 int b2 = (temp >> 16) & 0xf;
359 effective_addr2 = temp & 0xfff;
360 if (b2) effective_addr2 += regs[b2];
361 b2 = (temp >> 12) & 0xf;
362 if (b2) effective_addr2 += regs[b2];
363 effective_addr2 &= regs[4];
364 if ((effective_addr2 & 3) == 0)
365 return 1;
366 return 0;
367 }
368
369 Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
370 we don't eliminate the computation of the top half of effective_addr2 because
371 we don't have whole-function selection dags. On x86, this means we use one
372 extra register for the function when effective_addr2 is declared as U64 than
373 when it is declared U32.
374
375 PHI Slicing could be extended to do this.
376
377 //===---------------------------------------------------------------------===//
378
379 Tail call elim should be more aggressive, checking to see if the call is
380 followed by an uncond branch to an exit block.
381
382 ; This testcase is due to tail-duplication not wanting to copy the return
383 ; instruction into the terminating blocks because there was other code
384 ; optimized out of the function after the taildup happened.
385 ; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call
386
387 define i32 @t4(i32 %a) {
388 entry:
389 %tmp.1 = and i32 %a, 1 ; <i32> [#uses=1]
390 %tmp.2 = icmp ne i32 %tmp.1, 0 ; <i1> [#uses=1]
391 br i1 %tmp.2, label %then.0, label %else.0
392
393 then.0: ; preds = %entry
394 %tmp.5 = add i32 %a, -1 ; <i32> [#uses=1]
395 %tmp.3 = call i32 @t4( i32 %tmp.5 ) ; <i32> [#uses=1]
396 br label %return
397
398 else.0: ; preds = %entry
399 %tmp.7 = icmp ne i32 %a, 0 ; <i1> [#uses=1]
400 br i1 %tmp.7, label %then.1, label %return
401
402 then.1: ; preds = %else.0
403 %tmp.11 = add i32 %a, -2 ; <i32> [#uses=1]
404 %tmp.9 = call i32 @t4( i32 %tmp.11 ) ; <i32> [#uses=1]
405 br label %return
406
407 return: ; preds = %then.1, %else.0, %then.0
408 %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
409 [ %tmp.9, %then.1 ]
410 ret i32 %result.0
411 }
412
413 //===---------------------------------------------------------------------===//
414
415 Tail recursion elimination should handle:
416
417 int pow2m1(int n) {
418 if (n == 0)
419 return 0;
420 return 2 * pow2m1 (n - 1) + 1;
421 }
422
423 Also, multiplies can be turned into SHL's, so they should be handled as if
424 they were associative. "return foo() << 1" can be tail recursion eliminated.
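
For reference, a hypothetical accumulator form that TRE could produce for
pow2m1, treating the pending multiply as a scale factor:

int pow2m1_loop(int n) {
  int acc = 0, scale = 1;
  for (; n != 0; --n) {
    acc += scale;   /* the "+ 1" contributed at this recursion depth */
    scale *= 2;     /* the pending "2 *" factor, i.e. a left shift */
  }
  return acc;
}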
425
426 //===---------------------------------------------------------------------===//
427
428 Argument promotion should promote arguments for recursive functions, like
429 this:
430
431 ; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val
432
433 define internal i32 @foo(i32* %x) {
434 entry:
435 %tmp = load i32* %x ; <i32> [#uses=0]
436 %tmp.foo = call i32 @foo( i32* %x ) ; <i32> [#uses=1]
437 ret i32 %tmp.foo
438 }
439
440 define i32 @bar(i32* %x) {
441 entry:
442 %tmp3 = call i32 @foo( i32* %x ) ; <i32> [#uses=1]
443 ret i32 %tmp3
444 }
445
446 //===---------------------------------------------------------------------===//
447
448 We should investigate an instruction sinking pass. Consider this silly
449 example in pic mode:
450
451 #include <assert.h>
452 void foo(int x) {
453 assert(x);
454 //...
455 }
456
457 we compile this to:
458 _foo:
459 subl $28, %esp
460 call "L1$pb"
461 "L1$pb":
462 popl %eax
463 cmpl $0, 32(%esp)
464 je LBB1_2 # cond_true
465 LBB1_1: # return
466 # ...
467 addl $28, %esp
468 ret
469 LBB1_2: # cond_true
470 ...
471
472 The PIC base computation (call+popl) is only used on one path through the
473 code, but is currently always computed in the entry block. It would be
474 better to sink the picbase computation down into the block for the
475 assertion, as it is the only one that uses it. This happens for a lot of
476 code with early outs.
477
478 Another example is loads of arguments, which are usually emitted into the
479 entry block on targets like x86. If not used in all paths through a
480 function, they should be sunk into the ones that do.
481
482 In this case, whole-function-isel would also handle this.
483
484 //===---------------------------------------------------------------------===//
485
486 Investigate lowering of sparse switch statements into perfect hash tables:
487 http://burtleburtle.net/bob/hash/perfect.html
488
489 //===---------------------------------------------------------------------===//
490
491 We should turn things like "load+fabs+store" and "load+fneg+store" into the
492 corresponding integer operations. On a yonah, this loop:
493
494 double a[256];
495 void foo() {
496 int i, b;
497 for (b = 0; b < 10000000; b++)
498 for (i = 0; i < 256; i++)
499 a[i] = -a[i];
500 }
501
502 is twice as slow as this loop:
503
504 long long a[256];
505 void foo() {
506 int i, b;
507 for (b = 0; b < 10000000; b++)
508 for (i = 0; i < 256; i++)
509 a[i] ^= (1ULL << 63);
510 }
511
512 and I suspect other processors are similar. On X86 in particular this is a
513 big win because doing this with integers allows the use of read/modify/write
514 instructions.
515
516 //===---------------------------------------------------------------------===//
517
518 DAG Combiner should try to combine small loads into larger loads when
519 profitable. For example, we compile this C++ example:
520
521 struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
522 extern THotKey m_HotKey;
523 THotKey GetHotKey () { return m_HotKey; }
524
525 into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):
526
527 __Z9GetHotKeyv: ## @_Z9GetHotKeyv
528 movq _m_HotKey@GOTPCREL(%rip), %rax
529 movzwl (%rax), %ecx
530 movzbl 2(%rax), %edx
531 shlq $16, %rdx
532 orq %rcx, %rdx
533 movzbl 3(%rax), %ecx
534 shlq $24, %rcx
535 orq %rdx, %rcx
536 movzbl 4(%rax), %eax
537 shlq $32, %rax
538 orq %rcx, %rax
539 ret
540
541 //===---------------------------------------------------------------------===//
542
543 We should add an FRINT node to the DAG to model targets that have legal
544 implementations of ceil/floor/rint.
545
546 //===---------------------------------------------------------------------===//
547
548 Consider:
549
550 int test() {
551 long long input[8] = {1,0,1,0,1,0,1,0};
552 foo(input);
553 }
554
555 Clang compiles this into:
556
557 call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
558 %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
559 store i64 1, i64* %0, align 16
560 %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
561 store i64 1, i64* %1, align 16
562 %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
563 store i64 1, i64* %2, align 16
564 %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
565 store i64 1, i64* %3, align 16
566
567 Which gets codegen'd into:
568
569 pxor %xmm0, %xmm0
570 movaps %xmm0, -16(%rbp)
571 movaps %xmm0, -32(%rbp)
572 movaps %xmm0, -48(%rbp)
573 movaps %xmm0, -64(%rbp)
574 movq $1, -64(%rbp)
575 movq $1, -48(%rbp)
576 movq $1, -32(%rbp)
577 movq $1, -16(%rbp)
578
579 It would be better to have 4 movq's of 0 instead of the movaps's.
580
581 //===---------------------------------------------------------------------===//
582
583 http://llvm.org/PR717:
584
585 The following code should compile into "ret int undef". Instead, LLVM
586 produces "ret int 0":
587
588 int f() {
589 int x = 4;
590 int y;
591 if (x == 3) y = 0;
592 return y;
593 }
594
595 //===---------------------------------------------------------------------===//
596
597 The loop unroller should partially unroll loops (instead of peeling them)
598 when code growth isn't too bad and when an unroll count allows simplification
599 of some code within the loop. One trivial example is:
600
601 #include <stdio.h>
602 int main() {
603 int nRet = 17;
604 int nLoop;
605 for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
606 if ( nLoop & 1 )
607 nRet += 2;
608 else
609 nRet -= 1;
610 }
611 return nRet;
612 }
613
614 Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
615 reduction in code size. The resultant code would then also be suitable for
616 exit value computation.
617
618 //===---------------------------------------------------------------------===//
619
620 We miss a bunch of rotate opportunities on various targets, including ppc, x86,
621 etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
622 matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:
624
625 unsigned long long f5(unsigned long long x, unsigned long long y) {
626 return (x << 8) | ((y >> 48) & 0xffull);
627 }
628 unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
629 switch(z) {
630 case 1:
631 return (x << 8) | ((y >> 48) & 0xffull);
632 case 2:
633 return (x << 16) | ((y >> 40) & 0xffffull);
634 case 3:
635 return (x << 24) | ((y >> 32) & 0xffffffull);
636 case 4:
637 return (x << 32) | ((y >> 24) & 0xffffffffull);
638 default:
639 return (x << 40) | ((y >> 16) & 0xffffffffffull);
640 }
641 }
642
643 //===---------------------------------------------------------------------===//
644
645 This (and similar related idioms):
646
647 unsigned int foo(unsigned char i) {
648 return i | (i<<8) | (i<<16) | (i<<24);
649 }
650
651 compiles into:
652
653 define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
654 entry:
655 %conv = zext i8 %i to i32
656 %shl = shl i32 %conv, 8
657 %shl5 = shl i32 %conv, 16
658 %shl9 = shl i32 %conv, 24
659 %or = or i32 %shl9, %conv
660 %or6 = or i32 %or, %shl5
661 %or10 = or i32 %or6, %shl
662 ret i32 %or10
663 }
664
665 it would be better as:
666
667 unsigned int bar(unsigned char i) {
668 unsigned int j=i | (i << 8);
669 return j | (j<<16);
670 }
671
672 aka:
673
674 define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
675 entry:
676 %conv = zext i8 %i to i32
677 %shl = shl i32 %conv, 8
678 %or = or i32 %shl, %conv
679 %shl5 = shl i32 %or, 16
680 %or6 = or i32 %shl5, %or
681 ret i32 %or6
682 }
683
684 or even i*0x01010101, depending on the speed of the multiplier. The best way to
685 handle this is to canonicalize it to a multiply in IR and have codegen handle
686 lowering multiplies to shifts on cpus where shifts are faster.
687
688 //===---------------------------------------------------------------------===//
689
690 We do a number of simplifications in simplify libcalls to strength reduce
691 standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)+1) -> strcpy. This can only
693 be done safely if "b" isn't modified between the strlen and memcpy of course.
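
A hypothetical example of the pattern:

#include <string.h>
void f(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);   /* -> strcpy(a, b), if b is unchanged between
                                    the strlen and the memcpy */
}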
694
695 //===---------------------------------------------------------------------===//
696
697 We compile this program: (from GCC PR11680)
698 http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487
699
700 Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):
702
703 $ llvm-g++ perf.cpp -O3 -fno-exceptions
704 $ time ./a.out fast
705 1.821u 0.003s 0:01.82 100.0% 0+0k 0+0io 0pf+0w
706
707 $ g++ perf.cpp -O3 -fno-exceptions
708 $ time ./a.out fast
709 0.821u 0.001s 0:00.82 100.0% 0+0k 0+0io 0pf+0w
710
711 It looks like we are making the same inlining decisions, so this may be raw
712 codegen badness or something else (haven't investigated).
713
714 //===---------------------------------------------------------------------===//
715
716 Divisibility by constant can be simplified (according to GCC PR12849) from
717 being a mulhi to being a mul lo (cheaper). Testcase:
718
719 void bar(unsigned n) {
720 if (n % 3 == 0)
721 true();
722 }
723
724 This is equivalent to the following, where 2863311531 is the multiplicative
725 inverse of 3, and 1431655766 is ((2^32)-1)/3+1:
726 void bar(unsigned n) {
727 if (n * 2863311531U < 1431655766U)
728 true();
729 }
730
731 The same transformation can work with an even modulo with the addition of a
732 rotate: rotate the result of the multiply to the right by the number of bits
733 which need to be zero for the condition to be true, and shrink the compare RHS
734 by the same amount. Unless the target supports rotates, though, that
735 transformation probably isn't worthwhile.
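
For instance, divisibility by 6 could use the same inverse of 3 plus a rotate
by one bit (a sketch; constants assume 32-bit unsigned, and 715827883 is
((2^32)-1)/6+1):

void bar_even(unsigned n) {
  unsigned m = n * 2863311531U;   /* multiply by the inverse of 3 */
  m = (m >> 1) | (m << 31);       /* rotate right by 1 for the factor of 2 */
  if (m < 715827883U)
    true();
}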
736
737 The transformation can also easily be made to work with non-zero equality
738 comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
739
740 //===---------------------------------------------------------------------===//
741
742 Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
743 bunch of other stuff from this example (see PR1604):
744
745 #include <cstdio>
746 struct test {
747 int val;
748 virtual ~test() {}
749 };
750
751 int main() {
752 test t;
753 std::scanf("%d", &t.val);
754 std::printf("%d\n", t.val);
755 }
756
757 //===---------------------------------------------------------------------===//
758
759 These functions perform the same computation, but produce different assembly.
760
761 define i8 @select(i8 %x) readnone nounwind {
762 %A = icmp ult i8 %x, 250
763 %B = select i1 %A, i8 0, i8 1
764 ret i8 %B
765 }
766
767 define i8 @addshr(i8 %x) readnone nounwind {
768 %A = zext i8 %x to i9
769 %B = add i9 %A, 6 ;; 256 - 250 == 6
770 %C = lshr i9 %B, 8
771 %D = trunc i9 %C to i8
772 ret i8 %D
773 }
774
775 //===---------------------------------------------------------------------===//
776
777 From gcc bug 24696:
778 int
779 f (unsigned long a, unsigned long b, unsigned long c)
780 {
781 return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
782 }
783 int
784 f (unsigned long a, unsigned long b, unsigned long c)
785 {
786 return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
787 }
788 Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
789 "clang -emit-llvm-bc | opt -std-compile-opts".
790
791 //===---------------------------------------------------------------------===//
792
793 From GCC Bug 20192:
794 #define PMD_MASK (~((1UL << 23) - 1))
795 void clear_pmd_range(unsigned long start, unsigned long end)
796 {
797 if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
798 f();
799 }
The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
803
804 //===---------------------------------------------------------------------===//
805
806 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return
807 i;}
808 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
809 These should combine to the same thing. Currently, the first function
810 produces better code on X86.
811
812 //===---------------------------------------------------------------------===//
813
814 From GCC Bug 15784:
815 #define abs(x) x>0?x:-x
816 int f(int x, int y)
817 {
818 return (abs(x)) >= 0;
819 }
820 This should optimize to x == INT_MIN. (With -fwrapv.) Currently not
821 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
822
823 //===---------------------------------------------------------------------===//
824
825 From GCC Bug 14753:
826 void
827 rotate_cst (unsigned int a)
828 {
829 a = (a << 10) | (a >> 22);
830 if (a == 123)
831 bar ();
832 }
833 void
834 minus_cst (unsigned int a)
835 {
836 unsigned int tem;
837
838 tem = 20 - a;
839 if (tem == 5)
840 bar ();
841 }
842 void
843 mask_gt (unsigned int a)
844 {
845 /* This is equivalent to a > 15. */
846 if ((a & ~7) > 8)
847 bar ();
848 }
849 void
850 rshift_gt (unsigned int a)
851 {
852 /* This is equivalent to a > 23. */
853 if ((a >> 2) > 5)
854 bar ();
855 }
856
857 All should simplify to a single comparison. All of these are
858 currently not optimized with "clang -emit-llvm-bc | opt
859 -std-compile-opts".
860
861 //===---------------------------------------------------------------------===//
862
863 From GCC Bug 32605:
864 int c(int* x) {return (char*)x+2 == (char*)x;}
865 Should combine to 0. Currently not optimized with "clang
866 -emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).
867
868 //===---------------------------------------------------------------------===//
869
870 int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
871 Should be combined to "((b >> 1) | b) & 1". Currently not optimized
872 with "clang -emit-llvm-bc | opt -std-compile-opts".
873
874 //===---------------------------------------------------------------------===//
875
876 unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
877 Should combine to "x | (y & 3)". Currently not optimized with "clang
878 -emit-llvm-bc | opt -std-compile-opts".
879
880 //===---------------------------------------------------------------------===//
881
882 int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
883 Should fold to "(~a & c) | (a & b)". Currently not optimized with
884 "clang -emit-llvm-bc | opt -std-compile-opts".
885
886 //===---------------------------------------------------------------------===//
887
888 int a(int a,int b) {return (~(a|b))|a;}
889 Should fold to "a|~b". Currently not optimized with "clang
890 -emit-llvm-bc | opt -std-compile-opts".
891
892 //===---------------------------------------------------------------------===//
893
894 int a(int a, int b) {return (a&&b) || (a&&!b);}
895 Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
896 | opt -std-compile-opts".
897
898 //===---------------------------------------------------------------------===//
899
900 int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
901 Should fold to "a ? b : c", or at least something sane. Currently not
902 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
903
904 //===---------------------------------------------------------------------===//
905
906 int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
907 Should fold to a && (b || c). Currently not optimized with "clang
908 -emit-llvm-bc | opt -std-compile-opts".
909
910 //===---------------------------------------------------------------------===//
911
912 int a(int x) {return x | ((x & 8) ^ 8);}
913 Should combine to x | 8. Currently not optimized with "clang
914 -emit-llvm-bc | opt -std-compile-opts".
915
916 //===---------------------------------------------------------------------===//
917
918 int a(int x) {return x ^ ((x & 8) ^ 8);}
919 Should also combine to x | 8. Currently not optimized with "clang
920 -emit-llvm-bc | opt -std-compile-opts".
921
922 //===---------------------------------------------------------------------===//
923
924 int a(int x) {return ((x | -9) ^ 8) & x;}
925 Should combine to x & -9. Currently not optimized with "clang
926 -emit-llvm-bc | opt -std-compile-opts".
927
928 //===---------------------------------------------------------------------===//
929
930 unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
931 Should combine to "a * 0x88888888 >> 31". Currently not optimized
932 with "clang -emit-llvm-bc | opt -std-compile-opts".
933
934 //===---------------------------------------------------------------------===//
935
936 unsigned a(char* x) {if ((*x & 32) == 0) return b();}
937 There's an unnecessary zext in the generated code with "clang
938 -emit-llvm-bc | opt -std-compile-opts".
939
940 //===---------------------------------------------------------------------===//
941
942 unsigned a(unsigned long long x) {return 40 * (x >> 1);}
943 Should combine to "20 * (((unsigned)x) & -2)". Currently not
944 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
945
946 //===---------------------------------------------------------------------===//
947
948 int g(int x) { return (x - 10) < 0; }
949 Should combine to "x <= 9" (the sub has nsw). Currently not
950 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
951
952 //===---------------------------------------------------------------------===//
953
954 int g(int x) { return (x + 10) < 0; }
955 Should combine to "x < -10" (the add has nsw). Currently not
956 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
957
958 //===---------------------------------------------------------------------===//
959
960 int f(int i, int j) { return i < j + 1; }
961 int g(int i, int j) { return j > i - 1; }
962 Should combine to "i <= j" (the add/sub has nsw). Currently not
963 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
964
965 //===---------------------------------------------------------------------===//
966
967 This was noticed in the entryblock for grokdeclarator in 403.gcc:
968
969 %tmp = icmp eq i32 %decl_context, 4
970 %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
971 %tmp1 = icmp eq i32 %decl_context_addr.0, 1
972 %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0
973
974 tmp1 should be simplified to something like:
975 (!tmp || decl_context == 1)
976
977 This allows recursive simplifications, tmp1 is used all over the place in
978 the function, e.g. by:
979
980 %tmp23 = icmp eq i32 %decl_context_addr.1, 0 ; <i1> [#uses=1]
981 %tmp24 = xor i1 %tmp1, true ; <i1> [#uses=1]
982 %or.cond8 = and i1 %tmp23, %tmp24 ; <i1> [#uses=1]
983
984 later.
985
986 //===---------------------------------------------------------------------===//
987
988 [STORE SINKING]
989
990 Store sinking: This code:
991
992 void f (int n, int *cond, int *res) {
993 int i;
994 *res = 0;
995 for (i = 0; i < n; i++)
996 if (*cond)
997 *res ^= 234; /* (*) */
998 }
999
1000 On this function GVN hoists the fully redundant value of *res, but nothing
1001 moves the store out. This gives us this code:
1002
1003 bb: ; preds = %bb2, %entry
1004 %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
1005 %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
1006 %1 = load i32* %cond, align 4
1007 %2 = icmp eq i32 %1, 0
1008 br i1 %2, label %bb2, label %bb1
1009
1010 bb1: ; preds = %bb
1011 %3 = xor i32 %.rle, 234
1012 store i32 %3, i32* %res, align 4
1013 br label %bb2
1014
1015 bb2: ; preds = %bb, %bb1
1016 %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
1017 %indvar.next = add i32 %i.05, 1
1018 %exitcond = icmp eq i32 %indvar.next, %n
1019 br i1 %exitcond, label %return, label %bb
1020
1021 DSE should sink partially dead stores to get the store out of the loop.
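
In source terms, the desired result is roughly (a sketch, assuming *cond and
*res don't alias):

void f(int n, int *cond, int *res) {
  int i, tmp = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      tmp ^= 234;
  *res = tmp;   /* single store, outside the loop */
}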
1022
1023 Here's another partial dead case:
1024 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
1025
1026 //===---------------------------------------------------------------------===//
1027
1028 Scalar PRE hoists the mul in the common block up to the else:
1029
1030 int test (int a, int b, int c, int g) {
1031 int d, e;
1032 if (a)
1033 d = b * c;
1034 else
1035 d = b - c;
1036 e = b * c + g;
1037 return d + e;
1038 }
1039
1040 It would be better to do the mul once to reduce codesize above the if.
1041 This is GCC PR38204.
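
The preferred shape would be roughly (a sketch):

int test (int a, int b, int c, int g) {
  int m = b * c;            /* computed once, above the if */
  int d = a ? m : b - c;
  int e = m + g;
  return d + e;
}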
1042
1043
1044 //===---------------------------------------------------------------------===//
1045 This simple function from 179.art:
1046
1047 int winner, numf2s;
1048 struct { double y; int reset; } *Y;
1049
1050 void find_match() {
1051 int i;
1052 winner = 0;
1053 for (i=0;i<numf2s;i++)
1054 if (Y[i].y > Y[winner].y)
1055 winner =i;
1056 }
1057
1058 Compiles into (with clang TBAA):
1059
1060 for.body: ; preds = %for.inc, %bb.nph
1061 %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
1062 %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
1063 %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
1064 %tmp5 = load double* %tmp4, align 8, !tbaa !4
1065 %idxprom7 = sext i32 %i.01718 to i64
1066 %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
1067 %tmp11 = load double* %tmp10, align 8, !tbaa !4
1068 %cmp12 = fcmp ogt double %tmp5, %tmp11
1069 br i1 %cmp12, label %if.then, label %for.inc
1070
1071 if.then: ; preds = %for.body
1072 %i.017 = trunc i64 %indvar to i32
1073 br label %for.inc
1074
1075 for.inc: ; preds = %for.body, %if.then
1076 %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
1077 %indvar.next = add i64 %indvar, 1
1078 %exitcond = icmp eq i64 %indvar.next, %tmp22
1079 br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body
1080
1081
It is good that we hoisted the reloads of numf2s and Y out of the loop and
1083 sunk the store to winner out.
1084
1085 However, this is awful on several levels: the conditional truncate in the loop
1086 (-indvars at fault? why can't we completely promote the IV to i64?).
1087
1088 Beyond that, we have a partially redundant load in the loop: if "winner" (aka
1089 %i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
1090 Similarly, the addressing that feeds it (including the sext) is redundant. In
1091 the end we get this generated assembly:
1092
1093 LBB0_2: ## %for.body
1094 ## =>This Inner Loop Header: Depth=1
1095 movsd (%rdi), %xmm0
1096 movslq %edx, %r8
1097 shlq $4, %r8
1098 ucomisd (%rcx,%r8), %xmm0
1099 jbe LBB0_4
1100 movl %esi, %edx
1101 LBB0_4: ## %for.inc
1102 addq $16, %rdi
1103 incq %rsi
1104 cmpq %rsi, %rax
1105 jne LBB0_2
1106
1107 All things considered this isn't too bad, but we shouldn't need the movslq or
1108 the shlq instruction, or the load folded into ucomisd every time through the
1109 loop.
1110
On an x86-specific topic, if the loop can't be restructured, the movl should be a
1112 cmov.
1113
1114 //===---------------------------------------------------------------------===//
1115
1116 [STORE SINKING]
1117
1118 GCC PR37810 is an interesting case where we should sink load/store reload
1119 into the if block and outside the loop, so we don't reload/store it on the
1120 non-call path.
1121
1122 for () {
1123 *P += 1;
1124 if ()
1125 call();
1126 else
...
}
->
1129 tmp = *P
1130 for () {
1131 tmp += 1;
1132 if () {
1133 *P = tmp;
1134 call();
1135 tmp = *P;
1136 } else ...
1137 }
1138 *P = tmp;
1139
1140 We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
1141 we don't sink the store. We need partially dead store sinking.
1142
1143 //===---------------------------------------------------------------------===//
1144
1145 [LOAD PRE CRIT EDGE SPLITTING]
1146
1147 GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
1148 leading to excess stack traffic. This could be handled by GVN with some crazy
1149 symbolic phi translation. The code we get looks like (g is on the stack):
1150
1151 bb2: ; preds = %bb1
1152 ..
1153 %9 = getelementptr %struct.f* %g, i32 0, i32 0
store i32 %8, i32* %9, align 4
br label %bb3
1155
1156 bb3: ; preds = %bb1, %bb2, %bb
1157 %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
1158 %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
1159 %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
1160 %11 = load i32* %10, align 4
1161
%11 is partially redundant, and in BB2 it should have the value %8.
1163
1164 GCC PR33344 and PR35287 are similar cases.
1165
1166
1167 //===---------------------------------------------------------------------===//
1168
1169 [LOAD PRE]
1170
1171 There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
1172 GCC testsuite, ones we don't get yet are (checked through loadpre25):
1173
1174 [CRIT EDGE BREAKING]
1175 loadpre3.c predcom-4.c
1176
1177 [PRE OF READONLY CALL]
1178 loadpre5.c
1179
1180 [TURN SELECT INTO BRANCH]
1181 loadpre14.c loadpre15.c
1182
1183 actually a conditional increment: loadpre18.c loadpre19.c
1184
1185 //===---------------------------------------------------------------------===//
1186
1187 [LOAD PRE / STORE SINKING / SPEC HACK]
1188
1189 This is a chunk of code from 456.hmmer:
1190
1191 int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
1192 int *tpdm, int xmb, int *bp, int *ms) {
1193 int k, sc;
1194 for (k = 1; k <= M; k++) {
1195 mc[k] = mpp[k-1] + tpmm[k-1];
1196 if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
1197 if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
1198 if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
1199 mc[k] += ms[k];
1200 }
1201 }
1202
1203 It is very profitable for this benchmark to turn the conditional stores to mc[k]
1204 into a conditional move (select instr in IR) and allow the final store to do the
1205 store. See GCC PR27313 for more details. Note that this is valid to xform even
1206 with the new C++ memory model, since mc[k] is previously loaded and later
1207 stored.
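
The loop could be rewritten roughly as follows, keeping the running maximum in
a local and doing one unconditional store per iteration (a sketch, ignoring
possible aliasing between mc and the other arrays, which the optimizer would
have to account for):

for (k = 1; k <= M; k++) {
  int v = mpp[k-1] + tpmm[k-1];
  if ((sc = ip[k-1] + tpim[k-1]) > v) v = sc;
  if ((sc = dpp[k-1] + tpdm[k-1]) > v) v = sc;
  if ((sc = xmb + bp[k]) > v) v = sc;
  mc[k] = v + ms[k];
}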
1208
1209 //===---------------------------------------------------------------------===//
1210
1211 [SCALAR PRE]
1212 There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
1213 GCC testsuite.
1214
1215 //===---------------------------------------------------------------------===//
1216
1217 There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
1218 GCC testsuite. For example, we get the first example in predcom-1.c, but
1219 miss the second one:
1220
1221 unsigned fib[1000];
1222 unsigned avg[1000];
1223
1224 __attribute__ ((noinline))
1225 void count_averages(int n) {
1226 int i;
1227 for (i = 1; i < n; i++)
1228 avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
1229 }
1230
1231 which compiles into two loads instead of one in the loop.
1232
1233 predcom-2.c is the same as predcom-1.c
1234
1235 predcom-3.c is very similar but needs loads feeding each other instead of
1236 store->load.
1237
1238
1239 //===---------------------------------------------------------------------===//
1240
1241 [ALIAS ANALYSIS]
1242
1243 Type based alias analysis:
1244 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705
1245
We should do better analysis of posix_memalign. At the least it should
no-capture its pointer argument; at best, we should know that the out-value
1248 result doesn't point to anything (like malloc). One example of this is in
1249 SingleSource/Benchmarks/Misc/dt.c
1250
1251 //===---------------------------------------------------------------------===//
1252
1253 Interesting missed case because of control flow flattening (should be 2 loads):
1254 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
1255 With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
1256 opt -mem2reg -gvn -instcombine | llvm-dis
1257 we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
1258 VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS
1259
1260 //===---------------------------------------------------------------------===//
1261
1262 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
1263 We could eliminate the branch condition here, loading from null is undefined:
1264
1265 struct S { int w, x, y, z; };
1266 struct T { int r; struct S s; };
1267 void bar (struct S, int);
1268 void foo (int a, struct T b)
1269 {
1270 struct S *c = 0;
1271 if (a)
1272 c = &b.s;
1273 bar (*c, a);
1274 }
1275
1276 //===---------------------------------------------------------------------===//
1277
1278 simplifylibcalls should do several optimizations for strspn/strcspn:
1279
1280 strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):
1281
1282 size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
1283 int __reject3) {
1284 register size_t __result = 0;
1285 while (__s[__result] != '\0' && __s[__result] != __reject1 &&
1286 __s[__result] != __reject2 && __s[__result] != __reject3)
1287 ++__result;
1288 return __result;
1289 }
1290
1291 This should turn into a switch on the character. See PR3253 for some notes on
1292 codegen.
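
For example, once the reject characters are known constants (as after inlining
strcspn(x, "+-*")), the loop could become a switch with constant cases (a
hypothetical sketch):

#include <stddef.h>
size_t scan(const char *s) {
  size_t n = 0;
  for (;; ++n) {
    switch (s[n]) {
    case '\0': case '+': case '-': case '*':
      return n;   /* hit the terminator or a reject character */
    default:
      break;
    }
  }
}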
1293
1294 456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.
1295
1296 //===---------------------------------------------------------------------===//
1297
1298 simplifylibcalls should turn these snprintf idioms into memcpy (GCC PR47917)
1299
1300 char buf1[6], buf2[6], buf3[4], buf4[4];
1301 int i;
1302
1303 int foo (void) {
1304 int ret = snprintf (buf1, sizeof buf1, "abcde");
1305 ret += snprintf (buf2, sizeof buf2, "abcdef") * 16;
1306 ret += snprintf (buf3, sizeof buf3, "%s", i++ < 6 ? "abc" : "def") * 256;
1307 ret += snprintf (buf4, sizeof buf4, "%s", i++ > 10 ? "abcde" : "defgh")*4096;
1308 return ret;
1309 }
1310
1311 //===---------------------------------------------------------------------===//
1312
1313 "gas" uses this idiom:
1314 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1315 ..
1316 else if (strchr ("<>", *intel_parser.op_string)
1317
1318 Those should be turned into a switch.
1319
1320 //===---------------------------------------------------------------------===//
1321
1322 252.eon contains this interesting code:
1323
1324 %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
1325 %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
1326 %strlen = call i32 @strlen(i8* %3072) ; uses = 1
1327 %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
1328 call void @llvm.memcpy.i32(i8* %endptr,
1329 i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
1330 %3074 = call i32 @strlen(i8* %endptr) nounwind readonly
1331
This is interesting for a couple of reasons. First, the strlen that follows
the memcpy can be replaced with:
1335
1336 %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly
1337
1338 Because the destination was just copied into the specified memory buffer. This,
1339 in turn, can be constant folded to "4".
1340
1341 In other code, it contains:
1342
1343 %endptr6978 = bitcast i8* %endptr69 to i32*
1344 store i32 7107374, i32* %endptr6978, align 1
1345 %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly
1346
1347 Which could also be constant folded. Whatever is producing this should probably
1348 be fixed to leave this as a memcpy from a string.
1349
1350 Further, eon also has an interesting partially redundant strlen call:
1351
1352 bb8: ; preds = %_ZN18eonImageCalculatorC1Ev.exit
1353 %682 = getelementptr i8** %argv, i32 6 ; <i8**> [#uses=2]
1354 %683 = load i8** %682, align 4 ; <i8*> [#uses=4]
1355 %684 = load i8* %683, align 1 ; <i8> [#uses=1]
1356 %685 = icmp eq i8 %684, 0 ; <i1> [#uses=1]
1357 br i1 %685, label %bb10, label %bb9
1358
1359 bb9: ; preds = %bb8
1360 %686 = call i32 @strlen(i8* %683) nounwind readonly
1361 %687 = icmp ugt i32 %686, 254 ; <i1> [#uses=1]
1362 br i1 %687, label %bb10, label %bb11
1363
1364 bb10: ; preds = %bb9, %bb8
1365 %688 = call i32 @strlen(i8* %683) nounwind readonly
1366
1367 This could be eliminated by doing the strlen once in bb8, saving code size and
1368 improving perf on the bb8->9->10 path.
1369
1370 //===---------------------------------------------------------------------===//
1371
1372 I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
1373 which looks like:
1374 %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0
1375
1376
1377 bb62: ; preds = %bb55, %bb53
1378 %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
1379 %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
1380 %172 = add i32 %171, -1 ; <i32> [#uses=1]
1381 %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
1382
1383 ... no stores ...
1384 br i1 %or.cond, label %bb65, label %bb72
1385
1386 bb65: ; preds = %bb62
1387 store i8 0, i8* %173, align 1
1388 br label %bb72
1389
1390 bb72: ; preds = %bb65, %bb62
1391 %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
1392 %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
1393
1394 Note that on the bb62->bb72 path, that the %177 strlen call is partially
1395 redundant with the %171 call. At worst, we could shove the %177 strlen call
1396 up into the bb65 block moving it out of the bb62->bb72 path. However, note
1397 that bb65 stores to the string, zeroing out the last byte. This means that on
1398 that path the value of %177 is actually just %171-1. A sub is cheaper than a
1399 strlen!
1400
1401 This pattern repeats several times, basically doing:
1402
1403 A = strlen(P);
1404 P[A-1] = 0;
1405 B = strlen(P);
1406 where it is "obvious" that B = A-1.
1407
1408 //===---------------------------------------------------------------------===//
1409
1410 186.crafty has this interesting pattern with the "out.4543" variable:
1411
1412 call void @llvm.memcpy.i32(
1413 i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
1414 i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind
1416
1417 It is basically doing:
1418
1419 memcpy(globalarray, "string");
1420 printf(..., globalarray);
1421
1422 Anyway, by knowing that printf just reads the memory and forward substituting
1423 the string directly into the printf, this eliminates reads from globalarray.
1424 Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
1425 other similar functions) there are many stores to "out". Once all the printfs
1426 stop using "out", all that is left is the memcpy's into it. This should allow
1427 globalopt to remove the "stored only" global.
1428
1429 //===---------------------------------------------------------------------===//
1430
1431 This code:
1432
1433 define inreg i32 @foo(i8* inreg %p) nounwind {
1434 %tmp0 = load i8* %p
1435 %tmp1 = ashr i8 %tmp0, 5
1436 %tmp2 = sext i8 %tmp1 to i32
1437 ret i32 %tmp2
1438 }
1439
1440 could be dagcombine'd to a sign-extending load with a shift.
1441 For example, on x86 this currently gets this:
1442
1443 movb (%eax), %al
1444 sarb $5, %al
1445 movsbl %al, %eax
1446
1447 while it could get this:
1448
1449 movsbl (%eax), %eax
1450 sarl $5, %eax
1451
1452 //===---------------------------------------------------------------------===//
1453
1454 GCC PR31029:
1455
1456 int test(int x) { return 1-x == x; } // --> return false
1457 int test2(int x) { return 2-x == x; } // --> return x == 1 ?
1458
Always foldable for odd constants; what is the rule for even?
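
(For what it's worth: modulo 2^32, "C - x == x" means "2*x == C". For odd C
there is no solution, so the compare folds to false; for even C there are
exactly two solutions, x == C/2 and x == C/2 + 2^31, and only x == C/2 if the
subtraction is known not to overflow.)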
1460
1461 //===---------------------------------------------------------------------===//
1462
1463 PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
1464 for next field in struct (which is at same address).
1465
1466 For example: store of float into { {{}}, float } could be turned into a store to
1467 the float directly.
1468
1469 //===---------------------------------------------------------------------===//
1470
1471 The arg promotion pass should make use of nocapture to make its alias analysis
1472 stuff much more precise.
1473
1474 //===---------------------------------------------------------------------===//
1475
1476 The following functions should be optimized to use a select instead of a
1477 branch (from gcc PR40072):
1478
1479 char char_int(int m) {if(m>7) return 0; return m;}
1480 int int_char(char m) {if(m>7) return 0; return m;}
1481
1482 //===---------------------------------------------------------------------===//
1483
1484 int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }
1485
1486 Generates this:
1487
1488 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
1489 entry:
1490 %0 = and i32 %a, 128 ; <i32> [#uses=1]
1491 %1 = icmp eq i32 %0, 0 ; <i1> [#uses=1]
1492 %2 = or i32 %b, 128 ; <i32> [#uses=1]
1493 %3 = and i32 %b, -129 ; <i32> [#uses=1]
1494 %b_addr.0 = select i1 %1, i32 %3, i32 %2 ; <i32> [#uses=1]
1495 ret i32 %b_addr.0
1496 }
1497
1498 However, it's functionally equivalent to:
1499
1500 b = (b & ~0x80) | (a & 0x80);
1501
1502 Which generates this:
1503
1504 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
1505 entry:
1506 %0 = and i32 %b, -129 ; <i32> [#uses=1]
1507 %1 = and i32 %a, 128 ; <i32> [#uses=1]
1508 %2 = or i32 %0, %1 ; <i32> [#uses=1]
1509 ret i32 %2
1510 }
1511
1512 This can be generalized for other forms:
1513
1514 b = (b & ~0x80) | (a & 0x40) << 1;
1515
1516 //===---------------------------------------------------------------------===//
1517
1518 These two functions produce different code. They shouldn't:
1519
1520 #include <stdint.h>
1521
1522 uint8_t p1(uint8_t b, uint8_t a) {
1523 b = (b & ~0xc0) | (a & 0xc0);
1524 return (b);
1525 }
1526
1527 uint8_t p2(uint8_t b, uint8_t a) {
1528 b = (b & ~0x40) | (a & 0x40);
1529 b = (b & ~0x80) | (a & 0x80);
1530 return (b);
1531 }
1532
1533 define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
1534 entry:
1535 %0 = and i8 %b, 63 ; <i8> [#uses=1]
1536 %1 = and i8 %a, -64 ; <i8> [#uses=1]
1537 %2 = or i8 %1, %0 ; <i8> [#uses=1]
1538 ret i8 %2
1539 }
1540
1541 define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
1542 entry:
1543 %0 = and i8 %b, 63 ; <i8> [#uses=1]
1544 %.masked = and i8 %a, 64 ; <i8> [#uses=1]
1545 %1 = and i8 %a, -128 ; <i8> [#uses=1]
1546 %2 = or i8 %1, %0 ; <i8> [#uses=1]
1547 %3 = or i8 %2, %.masked ; <i8> [#uses=1]
1548 ret i8 %3
1549 }
1550
1551 //===---------------------------------------------------------------------===//
1552
1553 IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers. This includes functions
1555 with normal external linkage as well as templates, C99 inline functions etc.
1556 Specifically, it does nothing to:
1557
1558 define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
1559 entry:
1560 %0 = add nsw i32 %y, %z
1561 %1 = mul i32 %0, %x
1562 %2 = mul i32 %y, %z
1563 %3 = add nsw i32 %1, %2
1564 ret i32 %3
1565 }
1566
1567 define i32 @test2() nounwind {
1568 entry:
1569 %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
1570 ret i32 %0
1571 }
1572
It would be interesting to extend IPSCCP to be able to handle simple cases like
1574 this, where all of the arguments to a call are constant. Because IPSCCP runs
1575 before inlining, trivial templates and inline functions are not yet inlined.
1576 The results for a function + set of constant arguments should be memoized in a
1577 map.
1578
1579 //===---------------------------------------------------------------------===//
1580
1581 The libcall constant folding stuff should be moved out of SimplifyLibcalls into
1582 libanalysis' constantfolding logic. This would allow IPSCCP to be able to
1583 handle simple things like this:
1584
1585 static int foo(const char *X) { return strlen(X); }
1586 int bar() { return foo("abcd"); }
1587
1588 //===---------------------------------------------------------------------===//
1589
1590 functionattrs doesn't know much about memcpy/memset. This function should be
1591 marked readnone rather than readonly, since it only twiddles local memory, but
1592 functionattrs doesn't handle memset/memcpy/memmove aggressively:
1593
1594 struct X { int *p; int *q; };
1595 int foo() {
1596 int i = 0, j = 1;
1597 struct X x, y;
1598 int **p;
1599 y.p = &i;
1600 x.q = &j;
1601 p = __builtin_memcpy (&x, &y, sizeof (int *));
1602 return **p;
1603 }
1604
1605 This can be seen at:
1606 $ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S
1607
1608
1609 //===---------------------------------------------------------------------===//
1610
1611 Missed instcombine transformation:
1612 define i1 @a(i32 %x) nounwind readnone {
1613 entry:
1614 %cmp = icmp eq i32 %x, 30
1615 %sub = add i32 %x, -30
1616 %cmp2 = icmp ugt i32 %sub, 9
1617 %or = or i1 %cmp, %cmp2
1618 ret i1 %or
1619 }
1620 This should be optimized to a single compare. Testcase derived from gcc.
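
(Concretely, the disjunction is false exactly for x in [31,39], so something
like "(x - 31) >u 8" would do; this is just an illustration of the expected
result.)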
1621
1622 //===---------------------------------------------------------------------===//
1623
1624 Missed instcombine or reassociate transformation:
1625 int a(int a, int b) { return (a==12)&(b>47)&(b<58); }
1626
1627 The sgt and slt should be combined into a single comparison. Testcase derived
1628 from gcc.
1629
1630 //===---------------------------------------------------------------------===//
1631
1632 Missed instcombine transformation:
1633
1634 %382 = srem i32 %tmp14.i, 64 ; [#uses=1]
1635 %383 = zext i32 %382 to i64 ; [#uses=1]
1636 %384 = shl i64 %381, %383 ; [#uses=1]
1637 %385 = icmp slt i32 %tmp14.i, 64 ; [#uses=1]
1638
1639 The srem can be transformed to an and because if %tmp14.i is negative, the
1640 shift is undefined. Testcase derived from 403.gcc.
1641
1642 //===---------------------------------------------------------------------===//
1643
1644 This is a range comparison on a divided result (from 403.gcc):
1645
1646 %1337 = sdiv i32 %1336, 8 ; [#uses=1]
1647 %.off.i208 = add i32 %1336, 7 ; [#uses=1]
1648 %1338 = icmp ult i32 %.off.i208, 15 ; [#uses=1]
1649
We already catch this (removing the sdiv) if there isn't an add; we should
handle the 'add' as well. This is a common idiom with its builtin_alloca code.
1652 C testcase:
1653
1654 int a(int x) { return (unsigned)(x/16+7) < 15; }
1655
1656 Another similar case involves truncations on 64-bit targets:
1657
1658 %361 = sdiv i64 %.046, 8 ; [#uses=1]
1659 %362 = trunc i64 %361 to i32 ; [#uses=2]
1660 ...
1661 %367 = icmp eq i32 %362, 0 ; [#uses=1]
1662
1663 //===---------------------------------------------------------------------===//
1664
1665 Missed instcombine/dagcombine transformation:
1666 define void @lshift_lt(i8 zeroext %a) nounwind {
1667 entry:
1668 %conv = zext i8 %a to i32
1669 %shl = shl i32 %conv, 3
1670 %cmp = icmp ult i32 %shl, 33
1671 br i1 %cmp, label %if.then, label %if.end
1672
1673 if.then:
1674 tail call void @bar() nounwind
1675 ret void
1676
1677 if.end:
1678 ret void
1679 }
1680 declare void @bar() nounwind
1681
1682 The shift should be eliminated. Testcase derived from gcc.
1683
1684 //===---------------------------------------------------------------------===//
1685
These compile into different code: one gets recognized as a switch and the
1687 other doesn't due to phase ordering issues (PR6212):
1688
1689 int test1(int mainType, int subType) {
1690 if (mainType == 7)
1691 subType = 4;
1692 else if (mainType == 9)
1693 subType = 6;
1694 else if (mainType == 11)
1695 subType = 9;
1696 return subType;
1697 }
1698
1699 int test2(int mainType, int subType) {
1700 if (mainType == 7)
1701 subType = 4;
1702 if (mainType == 9)
1703 subType = 6;
1704 if (mainType == 11)
1705 subType = 9;
1706 return subType;
1707 }
1708
1709 //===---------------------------------------------------------------------===//
1710
1711 The following test case (from PR6576):
1712
1713 define i32 @mul(i32 %a, i32 %b) nounwind readnone {
1714 entry:
1715 %cond1 = icmp eq i32 %b, 0 ; <i1> [#uses=1]
1716 br i1 %cond1, label %exit, label %bb.nph
1717 bb.nph: ; preds = %entry
1718 %tmp = mul i32 %b, %a ; <i32> [#uses=1]
1719 ret i32 %tmp
1720 exit: ; preds = %entry
1721 ret i32 0
1722 }
1723
1724 could be reduced to:
1725
1726 define i32 @mul(i32 %a, i32 %b) nounwind readnone {
1727 entry:
1728 %tmp = mul i32 %b, %a
1729 ret i32 %tmp
1730 }
1731
1732 //===---------------------------------------------------------------------===//
1733
1734 We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
1735 See GCC PR34949
1736
Another interesting case: something similar could be done for variables that
become const after their ctor has finished. In these cases, globalopt (which
1739 can statically run the constructor) could mark the global const (so it gets put
1740 in the readonly section). A testcase would be:
1741
1742 #include <complex>
1743 using namespace std;
1744 const complex<char> should_be_in_rodata (42,-42);
1745 complex<char> should_be_in_data (42,-42);
1746 complex<char> should_be_in_bss;
1747
We currently evaluate the ctors, but the globals don't become const because the
optimizer doesn't know they "become const" once the ctor is done. See
GCC PR4131 for more examples.
1751
1752 //===---------------------------------------------------------------------===//
1753
1754 In this code:
1755
1756 long foo(long x) {
1757 return x > 1 ? x : 1;
1758 }
1759
1760 LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
1761 and cheaper on most targets.
1762
LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.
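
For reference, a zero-based form that computes the same result (hand-written,
not what LLVM currently emits):

long foo(long x) {
  return x > 0 ? x : 1;  /* x == 1 takes the other arm but still yields 1 */
}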
1765
1766 //===---------------------------------------------------------------------===//
1767
1768 define void @a(i32 %x) nounwind {
1769 entry:
1770 switch i32 %x, label %if.end [
1771 i32 0, label %if.then
1772 i32 1, label %if.then
1773 i32 2, label %if.then
1774 i32 3, label %if.then
1775 i32 5, label %if.then
1776 ]
1777 if.then:
1778 tail call void @foo() nounwind
1779 ret void
1780 if.end:
1781 ret void
1782 }
1783 declare void @foo()
1784
1785 Generated code on x86-64 (other platforms give similar results):
1786 a:
1787 cmpl $5, %edi
	ja	.LBB0_2
1789 cmpl $4, %edi
	jne	.LBB0_3
1791 .LBB0_2:
1792 ret
1793 .LBB0_3:
1794 jmp foo # TAILCALL
1795
1796 If we wanted to be really clever, we could simplify the whole thing to
1797 something like the following, which eliminates a branch:
1798 xorl $1, %edi
1799 cmpl $4, %edi
	jbe      .LBB0_2
1801 ret
1802 .LBB0_2:
1803 jmp foo # TAILCALL
1804
1805 //===---------------------------------------------------------------------===//
1806
1807 We compile this:
1808
1809 int foo(int a) { return (a & (~15)) / 16; }
1810
1811 Into:
1812
1813 define i32 @foo(i32 %a) nounwind readnone ssp {
1814 entry:
1815 %and = and i32 %a, -16
1816 %div = sdiv i32 %and, 16
1817 ret i32 %div
1818 }
1819
but (X & -A)/A is simply X >> log2(A) when A is a power of 2, so this case
should be instcombined into just "a >> 4".
1822
1823 We do get this at the codegen level, so something knows about it, but
1824 instcombine should catch it earlier:
1825
1826 _foo: ## @foo
1827 ## BB#0: ## %entry
1828 movl %edi, %eax
1829 sarl $4, %eax
1830 ret
1831
1832 //===---------------------------------------------------------------------===//
1833
1834 This code (from GCC PR28685):
1835
1836 int test(int a, int b) {
1837 int lt = a < b;
1838 int eq = a == b;
1839 if (lt)
1840 return 1;
1841 return eq;
1842 }
1843
1844 Is compiled to:
1845
1846 define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
1847 entry:
1848 %cmp = icmp slt i32 %a, %b
1849 br i1 %cmp, label %return, label %if.end
1850
1851 if.end: ; preds = %entry
1852 %cmp5 = icmp eq i32 %a, %b
1853 %conv6 = zext i1 %cmp5 to i32
1854 ret i32 %conv6
1855
1856 return: ; preds = %entry
1857 ret i32 1
1858 }
1859
1860 it could be:
1861
1862 define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
1863 entry:
1864 %0 = icmp sle i32 %a, %b
1865 %retval = zext i1 %0 to i32
1866 ret i32 %retval
1867 }
1868
1869 //===---------------------------------------------------------------------===//
1870
1871 This code can be seen in viterbi:
1872
1873 %64 = call noalias i8* @malloc(i64 %62) nounwind
1874 ...
1875 %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
1876 %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind
1877
1878 llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
1879 fold to %62. This is a security win (overflows of malloc will get caught)
1880 and also a performance win by exposing more memsets to the optimizer.
1881
1882 This occurs several times in viterbi.
1883
Note that this would change the semantics of @llvm.objectsize, which by its
current definition always folds to a constant. We should also make sure that
we remove the check in code like:
1887
1888 char *p = malloc(strlen(s)+1);
1889 __strcpy_chk(p, s, __builtin_objectsize(p, 0));
1890
1891 //===---------------------------------------------------------------------===//
1892
1893 This code (from Benchmarks/Dhrystone/dry.c):
1894
1895 define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
1896 entry:
1897 %sext = shl i32 %0, 24
1898 %conv = ashr i32 %sext, 24
1899 %sext6 = shl i32 %1, 24
1900 %conv4 = ashr i32 %sext6, 24
1901 %cmp = icmp eq i32 %conv, %conv4
1902 %. = select i1 %cmp, i32 10000, i32 0
1903 ret i32 %.
1904 }
1905
1906 Should be simplified into something like:
1907
1908 define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
1909 entry:
1910 %sext = shl i32 %0, 24
  %conv = and i32 %sext, -16777216              ; 0xFF000000
1912 %sext6 = shl i32 %1, 24
  %conv4 = and i32 %sext6, -16777216            ; 0xFF000000
1914 %cmp = icmp eq i32 %conv, %conv4
1915 %. = select i1 %cmp, i32 10000, i32 0
1916 ret i32 %.
1917 }
1918
1919 and then to:
1920
1921 define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
1922 entry:
  %conv = and i32 %0, 255
  %conv4 = and i32 %1, 255
1925 %cmp = icmp eq i32 %conv, %conv4
1926 %. = select i1 %cmp, i32 10000, i32 0
1927 ret i32 %.
1928 }

//===---------------------------------------------------------------------===//
1930
1931 clang -O3 currently compiles this code
1932
1933 int g(unsigned int a) {
1934 unsigned int c[100];
1935 c[10] = a;
1936 c[11] = a;
1937 unsigned int b = c[10] + c[11];
1938 if(b > a*2) a = 4;
1939 else a = 8;
1940 return a + 7;
1941 }
1942
1943 into
1944
define i32 @g(i32 %a) nounwind readnone {
1946 %add = shl i32 %a, 1
1947 %mul = shl i32 %a, 1
1948 %cmp = icmp ugt i32 %add, %mul
1949 %a.addr.0 = select i1 %cmp, i32 11, i32 15
1950 ret i32 %a.addr.0
1951 }
1952
1953 The icmp should fold to false. This CSE opportunity is only available
1954 after GVN and InstCombine have run.
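
With the icmp folded to false, the select always yields 15 (a = 8, plus 7), so
the whole function should collapse to:

define i32 @g(i32 %a) nounwind readnone {
  ret i32 15
}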
1955
1956 //===---------------------------------------------------------------------===//
1957
1958 memcpyopt should turn this:
1959
1960 define i8* @test10(i32 %x) {
1961 %alloc = call noalias i8* @malloc(i32 %x) nounwind
1962 call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
1963 ret i8* %alloc
1964 }
1965
1966 into a call to calloc. We should make sure that we analyze calloc as
1967 aggressively as malloc though.
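
The expected result would be roughly (assuming the calloc(1, %x) form):

define i8* @test10(i32 %x) {
  %alloc = call noalias i8* @calloc(i32 1, i32 %x) nounwind
  ret i8* %alloc
}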
1968
1969 //===---------------------------------------------------------------------===//
1970
1971 clang -O3 doesn't optimize this:
1972
1973 void f1(int* begin, int* end) {
1974 std::fill(begin, end, 0);
1975 }
1976
1977 into a memset. This is PR8942.
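
i.e., it should end up equivalent to this hand-written form:

void f1(int* begin, int* end) {
  __builtin_memset(begin, 0, (end - begin) * sizeof(int));
}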
1978
1979 //===---------------------------------------------------------------------===//
1980
1981 clang -O3 -fno-exceptions currently compiles this code:
1982
1983 void f(int N) {
1984 std::vector<int> v(N);
1985
1986 extern void sink(void*); sink(&v);
1987 }
1988
1989 into
1990
1991 define void @_Z1fi(i32 %N) nounwind {
1992 entry:
1993 %v2 = alloca [3 x i32*], align 8
1994 %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
1995 %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
1996 %conv = sext i32 %N to i64
1997 store i32* null, i32** %v2.sub, align 8, !tbaa !0
1998 %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
1999 store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2000 %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
2001 store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2002 %cmp.i.i.i.i = icmp eq i32 %N, 0
2003 br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i
2004
2005 _ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
2006 store i32* null, i32** %v2.sub, align 8, !tbaa !0
2007 store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2008 %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
2009 store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2010 br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
2011
2012 cond.true.i.i.i.i: ; preds = %entry
2013 %cmp.i.i.i.i.i = icmp slt i32 %N, 0
2014 br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i
2015
2016 if.then.i.i.i.i.i: ; preds = %cond.true.i.i.i.i
2017 call void @_ZSt17__throw_bad_allocv() noreturn nounwind
2018 unreachable
2019
2020 _ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i: ; preds = %cond.true.i.i.i.i
2021 %mul.i.i.i.i.i = shl i64 %conv, 2
2022 %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
2023 %0 = bitcast i8* %call3.i.i.i.i.i to i32*
2024 store i32* %0, i32** %v2.sub, align 8, !tbaa !0
2025 store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2026 %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
2027 store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2028 call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
2029 br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
2030
This is just the code handling the construction of the vector. Most surprising here
2032 is the fact that all three null stores in %entry are dead (because we do no
2033 cross-block DSE).
2034
2035 Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
This is because the client of LazyValueInfo doesn't simplify all instruction
2037 operands, just selected ones.
2038
2039 //===---------------------------------------------------------------------===//
2040
2041 clang -O3 -fno-exceptions currently compiles this code:
2042
2043 void f(char* a, int n) {
2044 __builtin_memset(a, 0, n);
2045 for (int i = 0; i < n; ++i)
2046 a[i] = 0;
2047 }
2048
2049 into:
2050
2051 define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
2052 entry:
2053 %conv = sext i32 %n to i64
2054 tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
2055 %cmp8 = icmp sgt i32 %n, 0
2056 br i1 %cmp8, label %for.body.lr.ph, label %for.end
2057
2058 for.body.lr.ph: ; preds = %entry
2059 %tmp10 = add i32 %n, -1
2060 %tmp11 = zext i32 %tmp10 to i64
2061 %tmp12 = add i64 %tmp11, 1
2062 call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
2063 ret void
2064
2065 for.end: ; preds = %entry
2066 ret void
2067 }
2068
This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
the two memsets together.
2071
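A sketch of the ideal output, assuming the loop's memset is recognized as
redundant with the first one:

define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
entry:
  %conv = sext i32 %n to i64
  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
  ret void
}
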
2072 The issue with the addition only occurs in 64-bit mode, and appears to be at
2073 least partially caused by Scalar Evolution not keeping its cache updated: it
2074 returns the "wrong" result immediately after indvars runs, but figures out the
2075 expected result if it is run from scratch on IR resulting from running indvars.
2076
2077 //===---------------------------------------------------------------------===//
2078
2079 clang -O3 -fno-exceptions currently compiles this code:
2080
2081 struct S {
2082 unsigned short m1, m2;
2083 unsigned char m3, m4;
2084 };
2085
2086 void f(int N) {
2087 std::vector<S> v(N);
2088 extern void sink(void*); sink(&v);
2089 }
2090
2091 into poor code for zero-initializing 'v' when N is >0. The problem is that
2092 S is only 6 bytes, but each element is 8 byte-aligned. We generate a loop and
4 stores on each iteration. If the struct were 8 bytes, this would get turned
into a memset.
2095
2096 In order to handle this we have to:
2097 A) Teach clang to generate metadata for memsets of structs that have holes in
2098 them.
2099 B) Teach clang to use such a memset for zero init of this struct (since it has
2100 a hole), instead of doing elementwise zeroing.
2101
2102 //===---------------------------------------------------------------------===//
2103
2104 clang -O3 currently compiles this code:
2105
2106 extern const int magic;
2107 double f() { return 0.0 * magic; }
2108
2109 into
2110
2111 @magic = external constant i32
2112
2113 define double @_Z1fv() nounwind readnone {
2114 entry:
2115 %tmp = load i32* @magic, align 4, !tbaa !0
2116 %conv = sitofp i32 %tmp to double
2117 %mul = fmul double %conv, 0.000000e+00
2118 ret double %mul
2119 }
2120
We should be able to fold away this fmul to 0.0. More generally, fmul(x,0.0)
can be folded to 0.0 if we can prove that the LHS is not negative (not even
-0.0), not a NaN, and not an INF. The CannotBeNegativeZero predicate in value
tracking should be extended to support general "fpclassify" operations that can
return yes/no/unknown for each of these predicates.
2126
With such a predicate, we know that the result of uitofp is trivially never
NaN, never negative (and so never -0.0), and we know that it isn't +/-Inf if
the floating point type has enough exponent bits to represent the largest
integer value as a finite number.
2130
2131 //===---------------------------------------------------------------------===//
2132
2133 When optimizing a transformation that can change the sign of 0.0 (such as the
2134 0.0*val -> 0.0 transformation above), it might be provable that the sign of the
2135 expression doesn't matter. For example, by the above rules, we can't transform
2136 fmul(sitofp(x), 0.0) into 0.0, because x might be -1 and the result of the
2137 expression is defined to be -0.0.
2138
2139 If we look at the uses of the fmul for example, we might be able to prove that
2140 all uses don't care about the sign of zero. For example, if we have:
2141
2142 fadd(fmul(sitofp(x), 0.0), 2.0)
2143
2144 Since we know that x+2.0 doesn't care about the sign of any zeros in X, we can
2145 transform the fmul to 0.0, and then the fadd to 2.0.
2146
2147 //===---------------------------------------------------------------------===//
2148
We should enhance memcpy/memmove/memset to allow a metadata node on them
2150 indicating that some bytes of the transfer are undefined. This is useful for
2151 frontends like clang when lowering struct copies, when some elements of the
2152 struct are undefined. Consider something like this:
2153
2154 struct x {
2155 char a;
2156 int b[4];
2157 };
2158 void foo(struct x*P);
2159 struct x testfunc() {
2160 struct x V1, V2;
2161 foo(&V1);
2162 V2 = V1;
2163
2164 return V2;
2165 }
2166
2167 We currently compile this to:
2168 $ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S
2169
2170
2171 %struct.x = type { i8, [4 x i32] }
2172
2173 define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
2174 entry:
2175 %V1 = alloca %struct.x, align 4
2176 call void @foo(%struct.x* %V1)
2177 %tmp1 = bitcast %struct.x* %V1 to i8*
2178 %0 = bitcast %struct.x* %V1 to i160*
2179 %srcval1 = load i160* %0, align 4
2180 %tmp2 = bitcast %struct.x* %agg.result to i8*
2181 %1 = bitcast %struct.x* %agg.result to i160*
2182 store i160 %srcval1, i160* %1, align 4
2183 ret void
2184 }
2185
This happens because SRoA sees that the temp alloca is being memcpy'd into and
out of, and that it has holes, so it has to be conservative. If we knew about
the holes, then this could be much, much better.
2189
2190 Having information about these holes would also improve memcpy (etc) lowering at
2191 llc time when it gets inlined, because we can use smaller transfers. This also
2192 avoids partial register stalls in some important cases.
2193
2194 //===---------------------------------------------------------------------===//
2195
2196 We don't fold (icmp (add) (add)) unless the two adds only have a single use.
There are a lot of cases that we're refusing to fold, for example in
256.bzip2:
2199
2200 %indvar.next90 = add i64 %indvar89, 1 ;; Has 2 uses
2201 %tmp96 = add i64 %tmp95, 1 ;; Has 1 use
2202 %exitcond97 = icmp eq i64 %indvar.next90, %tmp96
2203
We don't fold this because we don't want to introduce an overlapping live range
of the ivar. However, we can make this more aggressive without causing
performance issues in two ways:
2207
2208 1. If *either* the LHS or RHS has a single use, we can definitely do the
2209 transformation. In the overlapping liverange case we're trading one register
2210 use for one fewer operation, which is a reasonable trade. Before doing this
2211 we should verify that the llc output actually shrinks for some benchmarks.
2212 2. If both ops have multiple uses, we can still fold it if the operations are
2213 both sinkable to *after* the icmp (e.g. in a subsequent block) which doesn't
2214 increase register pressure.
2215
There are a ton of icmps we aren't simplifying because of this register
pressure concern. Care is warranted here, though, because many of these involve
induction variables and other cases that matter a lot to performance, like the
above.
2219 Here's a blob of code that you can drop into the bottom of visitICmp to see some
2220 missed cases:
2221
2222 { Value *A, *B, *C, *D;
2223 if (match(Op0, m_Add(m_Value(A), m_Value(B))) &&
2224 match(Op1, m_Add(m_Value(C), m_Value(D))) &&
2225 (A == C || A == D || B == C || B == D)) {
2226 errs() << "OP0 = " << *Op0 << " U=" << Op0->getNumUses() << "\n";
2227 errs() << "OP1 = " << *Op1 << " U=" << Op1->getNumUses() << "\n";
2228 errs() << "CMP = " << I << "\n\n";
2229 }
2230 }
2231
2232 //===---------------------------------------------------------------------===//
2233
2234 define i1 @test1(i32 %x) nounwind {
2235 %and = and i32 %x, 3
2236 %cmp = icmp ult i32 %and, 2
2237 ret i1 %cmp
2238 }
2239
2240 Can be folded to (x & 2) == 0.
2241
2242 define i1 @test2(i32 %x) nounwind {
2243 %and = and i32 %x, 3
2244 %cmp = icmp ugt i32 %and, 1
2245 ret i1 %cmp
2246 }
2247
2248 Can be folded to (x & 2) != 0.
2249
2250 SimplifyDemandedBits shrinks the "and" constant to 2 but instcombine misses the
2251 icmp transform.
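
For test1, the expected folded form would be something like (and the same with
icmp ne for test2):

define i1 @test1(i32 %x) nounwind {
  %and = and i32 %x, 2
  %cmp = icmp eq i32 %and, 0
  ret i1 %cmp
}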
2252
2253 //===---------------------------------------------------------------------===//
2254
2255 This code:
2256
2257 typedef struct {
2258 int f1:1;
2259 int f2:1;
2260 int f3:1;
2261 int f4:29;
2262 } t1;
2263
2264 typedef struct {
2265 int f1:1;
2266 int f2:1;
2267 int f3:30;
2268 } t2;
2269
2270 t1 s1;
2271 t2 s2;
2272
2273 void func1(void)
2274 {
2275 s1.f1 = s2.f1;
2276 s1.f2 = s2.f2;
2277 }
2278
2279 Compiles into this IR (on x86-64 at least):
2280
2281 %struct.t1 = type { i8, [3 x i8] }
2282 @s2 = global %struct.t1 zeroinitializer, align 4
2283 @s1 = global %struct.t1 zeroinitializer, align 4
2284 define void @func1() nounwind ssp noredzone {
2285 entry:
2286 %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
2287 %bf.val.sext5 = and i32 %0, 1
2288 %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
2289 %2 = and i32 %1, -4
2290 %3 = or i32 %2, %bf.val.sext5
2291 %bf.val.sext26 = and i32 %0, 2
2292 %4 = or i32 %3, %bf.val.sext26
2293 store i32 %4, i32* bitcast (%struct.t1* @s1 to i32*), align 4
2294 ret void
2295 }
2296
The two 'or's and the two 'and's should each be merged into one.
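
A hand-merged sketch of the desired result (value names are illustrative):

define void @func1() nounwind ssp noredzone {
entry:
  %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
  %bf.val = and i32 %0, 3
  %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
  %2 = and i32 %1, -4
  %3 = or i32 %2, %bf.val
  store i32 %3, i32* bitcast (%struct.t1* @s1 to i32*), align 4
  ret void
}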
2298
2299 //===---------------------------------------------------------------------===//
2300
2301 Machine level code hoisting can be useful in some cases. For example, PR9408
2302 is about:
2303
2304 typedef union {
2305 void (*f1)(int);
2306 void (*f2)(long);
2307 } funcs;
2308
2309 void foo(funcs f, int which) {
2310 int a = 5;
2311 if (which) {
2312 f.f1(a);
2313 } else {
2314 f.f2(a);
2315 }
2316 }
2317
2318 which we compile to:
2319
2320 foo: # @foo
2321 # BB#0: # %entry
2322 pushq %rbp
2323 movq %rsp, %rbp
2324 testl %esi, %esi
2325 movq %rdi, %rax
2326 je .LBB0_2
2327 # BB#1: # %if.then
2328 movl $5, %edi
2329 callq *%rax
2330 popq %rbp
2331 ret
2332 .LBB0_2: # %if.else
2333 movl $5, %edi
2334 callq *%rax
2335 popq %rbp
2336 ret
2337
Note that the two call blocks (%if.then and %if.else) are identical. This
merging doesn't happen at the IR level because one call passes an i32 and the
other passes an i64.
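
With the two identical blocks merged (which is what machine-level hoisting or
cross-jumping would do here), the expected output would be roughly:

foo:                                    # @foo
	pushq	%rbp
	movq	%rsp, %rbp
	movq	%rdi, %rax
	movl	$5, %edi
	callq	*%rax
	popq	%rbp
	ret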
2340
2341 //===---------------------------------------------------------------------===//
2342
2343 I see this sort of pattern in 176.gcc in a few places (e.g. the start of
2344 store_bit_field). The rem should be replaced with a multiply and subtract:
2345
2346 %3 = sdiv i32 %A, %B
2347 %4 = srem i32 %A, %B
2348
2349 Similarly for udiv/urem. Note that this shouldn't be done on X86 or ARM,
2350 which can do this in a single operation (instruction or libcall). It is
2351 probably best to do this in the code generator.
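
For targets where the rewrite is profitable, it would look something like this
(%mul is just an illustrative name):

  %3 = sdiv i32 %A, %B
  %mul = mul i32 %3, %B
  %4 = sub i32 %A, %mul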
2352
2353 //===---------------------------------------------------------------------===//
2354
2355 unsigned foo(unsigned x, unsigned y) { return (x & y) == 0 || x == 0; }
2356 should fold to (x & y) == 0.
2357
2358 //===---------------------------------------------------------------------===//
2359
2360 unsigned foo(unsigned x, unsigned y) { return x > y && x != 0; }
2361 should fold to x > y.
2362
2363 //===---------------------------------------------------------------------===//
2364
2365 int f(double x) { return __builtin_fabs(x) < 0.0; }
2366 should fold to false.
2367
2368 //===---------------------------------------------------------------------===//
2369