Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics.  Here is a multiply idiom:

unsigned int mul(unsigned int a, unsigned int b) {
  if ((unsigned long long)a*b > 0xffffffff)
    exit(0);
  return a*b;
}

The legalization code for mul-with-overflow needs to be made more robust before
this can be implemented though.
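
A sketch of the form we would like this to become, written with the clang/GCC
__builtin_umul_overflow builtin (which lowers to llvm.umul.with.overflow):

#include <stdlib.h>

unsigned int mul(unsigned int a, unsigned int b) {
  unsigned int r;
  if (__builtin_umul_overflow(a, b, &r))  /* one flag test, no 64-bit multiply */
    exit(0);
  return r;
}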

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)  This
isn't safe in general, even on darwin.  See the libm implementation of hypot
for examples (which special case when x/y are exactly zero to get signed zeros
etc right).

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;
}

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
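
Presumably "Phi" is the high (sign-carrying) byte of P: a sign test of a 4-byte
load only needs the byte holding the sign bit.  A little-endian sketch at the
source level:

int is_negative(int *P) {
  return *P < 0;                        /* (setlt (loadi32 P), 0)          */
  /* return ((signed char *)P)[3] < 0;     same answer from a 1-byte load  */
}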

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees (e.g. computing X^8 with three squarings instead of seven
serial multiplies).

First, the intrinsic needs to be extended to support integers, and second, the
code generator needs to be enhanced to lower these to multiplication trees.

//===---------------------------------------------------------------------===//

An interesting(?) testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above).  The issue
is that we end up getting t = 2*X, s = t*t, and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses.  Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47   ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.

//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)
double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to:
_foo:
	movapd	%xmm1, %xmm2
	mulsd	LCPI1_1(%rip), %xmm1
	mulsd	LCPI1_0(%rip), %xmm2
	addsd	%xmm0, %xmm1
	addsd	%xmm0, %xmm2
	movapd	%xmm1, %xmm0
	mulsd	%xmm2, %xmm0
	ret

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

Which allows the multiply by constant to be CSE'd, producing:

_foo:
	mulsd	LCPI1_0(%rip), %xmm1
	movapd	%xmm1, %xmm2
	addsd	%xmm0, %xmm2
	subsd	%xmm1, %xmm0
	mulsd	%xmm2, %xmm0
	ret

This doesn't need -ffast-math support at all.  This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) {  return memcmp(j, l, 4);  }
int h(int *j, int *l) {  return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

	movl 136(%esp), %eax
	cmpl $0, %eax
	je LBB16_2	#cond_next
LBB16_1:	#cond_true
	incl _foo
LBB16_2:	#cond_next

emit:
	movl	_foo, %eax
	cmpl	$1, %edi
	sbbl	$-1, %eax
	movl	%eax, _foo
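
In C terms the branchless form folds the condition into the add; the cmpl/sbbl
pair above computes foo + (x != 0):

void bump(int x) {
  extern int foo;
  foo += (x != 0);   /* conditional increment without a branch */
}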

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs.  We could even make an intrinsic for this
if anyone cared enough about sincos.

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

[LOOP DELETION]

We don't delete this output-free loop, because trip count analysis doesn't
realize that it is finite (if it were infinite, it would be undefined).  Not
having this blocks Loop Idiom from matching strlen and friends.

void foo(char *C) {
  int x = 0;
  while (*C)
    ++x,++C;
}

//===---------------------------------------------------------------------===//

[LOOP RECOGNITION]

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}
unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

typedef unsigned long long BITBOARD;
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}
unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i =  0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This should be recognized as CLZ:  rdar://8459039

unsigned clz_a(unsigned a) {
  int i;
  for (i=0;i<32;i++)
    if (a & (1<<(31-i)))
      return i;
  return 32;
}

This sort of thing should be added to the loop idiom pass.

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
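
A concrete instance of the transform, assuming fully unsigned operands:

int t(unsigned X) {
  return X / 10 < 2;   /* (X /u 10) <u 2  should fold to  X <u 20 */
}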

//===---------------------------------------------------------------------===//

[LOOP OPTIMIZATION]

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
variable trip count loop unrolling (since it has a constant trip count). ICC
apparently produces this very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       addl      $8, %edx                                      #
       cmpl      $131072, %edx                                 #108.2
       jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.

//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.
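
A sketch of the accumulator rewrite tailcallelim would need for pow2m1,
carrying the pending "*2" and "+1" in accumulators:

int pow2m1_iter(int n) {
  int mul_acc = 1, add_acc = 0;   /* invariant: result = mul_acc*pow2m1(n) + add_acc */
  while (n != 0) {
    add_acc += mul_acc;           /* absorb the "+ 1" */
    mul_acc *= 2;                 /* absorb the "2 *" */
    --n;
  }
  return add_acc;                 /* base case contributes mul_acc * 0 */
}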

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
	subl	$28, %esp
	call	"L1$pb"
"L1$pb":
	popl	%eax
	cmpl	$0, 32(%esp)
	je	LBB1_2	# cond_true
LBB1_1:	# return
	# ...
	addl	$28, %esp
	ret
LBB1_2:	# cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html
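
A hand-made sketch of the idea (not an actual implementation): when the case
values hash without collisions, the switch becomes one hash, one table load,
and one confirming compare.  The key set and hash below were picked by hand to
be collision-free:

static const unsigned keys[4] = { 8, 17, 34, 3 };    /* sparse case values */
static const int      vals[4] = { 10, 20, 30, 40 };  /* per-case results   */

int lowered_switch(unsigned x) {
  unsigned h = x & 3;                   /* perfect hash for this key set */
  return keys[h] == x ? vals[h] : -1;   /* -1 stands in for the default  */
}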

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
  for (i = 0; i < 256; i++)
    a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
  for (i = 0; i < 256; i++)
    a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:                         ## @_Z9GetHotKeyv
	movq	_m_HotKey@GOTPCREL(%rip), %rax
	movzwl	(%rax), %ecx
	movzbl	2(%rax), %edx
	shlq	$16, %rdx
	orq	%rcx, %rdx
	movzbl	3(%rax), %ecx
	shlq	$24, %rcx
	orq	%rdx, %rcx
	movzbl	4(%rax), %eax
	shlq	$32, %rax
	orq	%rcx, %rax
	ret

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

Consider:

int test() {
  long long input[8] = {1,0,1,0,1,0,1,0};
  foo(input);
}

Clang compiles this into:

  call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
  %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
  store i64 1, i64* %0, align 16
  %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
  store i64 1, i64* %1, align 16
  %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
  store i64 1, i64* %2, align 16
  %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
  store i64 1, i64* %3, align 16

Which gets codegen'd into:

	pxor	%xmm0, %xmm0
	movaps	%xmm0, -16(%rbp)
	movaps	%xmm0, -32(%rbp)
	movaps	%xmm0, -48(%rbp)
	movaps	%xmm0, -64(%rbp)
	movq	$1, -64(%rbp)
	movq	$1, -48(%rbp)
	movq	$1, -32(%rbp)
	movq	$1, -16(%rbp)

It would be better to have 4 movq's of 0 instead of the movaps's.

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.
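
For illustration, the unrolled-by-2 body (no remainder loop is needed since
the trip count of 1000 is even):

    for ( nLoop = 0; nLoop < 1000; nLoop += 2 ) {
        nRet -= 1;      /* even iteration: nLoop & 1 is 0 */
        nRet += 2;      /* odd iteration:  nLoop & 1 is 1 */
    }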

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc.  On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

//===---------------------------------------------------------------------===//

This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j=i | (i << 8);
  return j | (j<<16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier.  The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on cpus where shifts are faster.

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
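
A sketch of the pattern (with the +1 that also copies the NUL terminator,
which is what makes the strcpy form exact):

#include <string.h>

void copy(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);   /* mergeable into strcpy(a, b) */
}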

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a multiply-high to being a multiply-low (cheaper).  Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount.  Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
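
For example, a sketch of the even-modulo case for n % 6 == 0, assuming 32-bit
unsigned arithmetic (2863311531 is again the inverse of the odd part 3, the
rotate handles the factor of 2, and 715827882 is ((2^32)-1)/6):

int divisible_by_6(unsigned n) {
  unsigned q = n * 2863311531U;   /* multiply by the inverse of the odd part */
  q = (q >> 1) | (q << 31);       /* rotate right by 1 for the power of two  */
  return q <= 715827882U;
}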

//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct test {
    int val;
    virtual ~test() {}
};

int main() {
    test t;
    std::scanf("%d", &t.val);
    std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6       ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

From gcc bug 24696:
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}
Both should combine to ((a|b) & (c-1)) != 0.  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 20192:
#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
   if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
       f();
}
The expression should optimize to something like
"!((start|end)&~PMD_MASK)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
These should combine to the same thing.  Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

From GCC Bug 15784:
#define abs(x) x>0?x:-x
int f(int x, int y)
{
 return (abs(x)) >= 0;
}
This should optimize to x == INT_MIN. (With -fwrapv.)  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 14753:
void
rotate_cst (unsigned int a)
{
 a = (a << 10) | (a >> 22);
 if (a == 123)
   bar ();
}
void
minus_cst (unsigned int a)
{
 unsigned int tem;

 tem = 20 - a;
 if (tem == 5)
   bar ();
}
void
mask_gt (unsigned int a)
{
 /* This is equivalent to a > 15.  */
 if ((a & ~7) > 8)
   bar ();
}
void
rshift_gt (unsigned int a)
{
 /* This is equivalent to a > 23.  */
 if ((a >> 2) > 5)
   bar ();
}

All should simplify to a single comparison.  All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 32605:
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)".  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a".  Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c).  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int g(int x) { return (x - 10) < 0; }
Should combine to "x <= 9" (the sub has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int g(int x) { return (x + 10) < 0; }
Should combine to "x < -10" (the add has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int f(int i, int j) { return i < j + 1; }
int g(int i, int j) { return j > i - 1; }
Should combine to "i <= j" (the add/sub has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned f(unsigned x) { return ((x & 7) + 1) & 15; }
The & 15 part should be optimized away; it doesn't change the result. Currently
not optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp && decl_context == 1)
which in turn folds to just (decl_context == 1), since decl_context == 1
already implies !tmp.

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0            ; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true             ; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24               ; <i1> [#uses=1]

later.

//===---------------------------------------------------------------------===//

[STORE SINKING]

Store sinking: This code:

void f (int n, int *cond, int *res) {
    int i;
    *res = 0;
    for (i = 0; i < n; i++)
        if (*cond)
            *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.  This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
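
The sunk form at the C level (assuming cond and res don't alias, which the
transformation would have to prove):

void f (int n, int *cond, int *res) {
    int i, tmp = 0;
    for (i = 0; i < n; i++)
        if (*cond)
            tmp ^= 234;
    *res = tmp;        /* single store, outside the loop */
}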

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

This simple function from 179.art:

int winner, numf2s;
struct { double y; int   reset; } *Y;

void find_match() {
   int i;
   winner = 0;
   for (i=0;i<numf2s;i++)
       if (Y[i].y > Y[winner].y)
              winner =i;
}

Compiles into (with clang TBAA):

for.body:                                         ; preds = %for.inc, %bb.nph
  %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
  %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
  %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
  %tmp5 = load double* %tmp4, align 8, !tbaa !4
  %idxprom7 = sext i32 %i.01718 to i64
  %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
  %tmp11 = load double* %tmp10, align 8, !tbaa !4
  %cmp12 = fcmp ogt double %tmp5, %tmp11
  br i1 %cmp12, label %if.then, label %for.inc

if.then:                                          ; preds = %for.body
  %i.017 = trunc i64 %indvar to i32
  br label %for.inc

for.inc:                                          ; preds = %for.body, %if.then
  %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
  %indvar.next = add i64 %indvar, 1
  %exitcond = icmp eq i64 %indvar.next, %tmp22
  br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body

It is good that we hoisted the reloads of numf2s and Y out of the loop and
sunk the store to winner out.

However, this is awful on several levels: the conditional truncate in the loop
(-indvars at fault? why can't we completely promote the IV to i64?).

Beyond that, we have a partially redundant load in the loop: if "winner" (aka
%i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
Similarly, the addressing that feeds it (including the sext) is redundant. In
the end we get this generated assembly:

LBB0_2:                                 ## %for.body
                                        ## =>This Inner Loop Header: Depth=1
	movsd	(%rdi), %xmm0
	movslq	%edx, %r8
	shlq	$4, %r8
	ucomisd	(%rcx,%r8), %xmm0
	jbe	LBB0_4
	movl	%esi, %edx
LBB0_4:                                 ## %for.inc
	addq	$16, %rdi
	incq	%rsi
	cmpq	%rsi, %rax
	jne	LBB0_2

All things considered this isn't too bad, but we shouldn't need the movslq or
the shlq instruction, or the load folded into ucomisd every time through the
loop.

On an x86-specific topic, if the loop can't be restructured, the movl should be
a cmov.

//===---------------------------------------------------------------------===//

[STORE SINKING]

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

for () {
  *P += 1;
  if ()
    call();
  else
    ...
}
->
tmp = *P
for () {
  tmp += 1;
  if () {
    *P = tmp;
    call();
    tmp = *P;
  } else ...
}
*P = tmp;

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store.  We need partially dead store sinking.

//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation.  The code we get looks like (g is on the stack):

bb2:		; preds = %bb1
..
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant; in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//

[LOAD PRE]

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//

[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
     int *tpdm, int xmb, int *bp, int *ms) {
 int k, sc;
 for (k = 1; k <= M; k++) {
     mc[k] = mpp[k-1]   + tpmm[k-1];
     if ((sc = ip[k-1]  + tpim[k-1]) > mc[k])  mc[k] = sc;
     if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k])  mc[k] = sc;
     if ((sc = xmb  + bp[k])         > mc[k])  mc[k] = sc;
     mc[k] += ms[k];
   }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store.  See GCC PR27313 for more details.  Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.
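
The desired form keeps mc[k] in a scalar and does a single unconditional store,
letting the compares become selects:

 for (k = 1; k <= M; k++) {
     int t = mpp[k-1] + tpmm[k-1];
     if ((sc = ip[k-1]  + tpim[k-1]) > t)  t = sc;   /* -> select */
     if ((sc = dpp[k-1] + tpdm[k-1]) > t)  t = sc;   /* -> select */
     if ((sc = xmb  + bp[k])         > t)  t = sc;   /* -> select */
     mc[k] = t + ms[k];                              /* one store */
   }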

//===---------------------------------------------------------------------===//

[SCALAR PRE]
There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite.  For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c.

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//

[ALIAS ANALYSIS]

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign.  At the least it should mark
its pointer argument nocapture; at best, we should know that the out-value
result doesn't point to anything (like malloc).  One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
             opt -mem2reg -gvn -instcombine | llvm-dis
we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character.  See PR3253 for some notes on
codegen.
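
For a constant reject set, e.g. strcspn(x, "ab"), the per-character test in the
inlined loop becomes a small switch:

size_t strcspn_ab(const char *s) {
  size_t n = 0;
  for (;; ++n) {
    switch (s[n]) {            /* one multiway branch per character */
    case '\0': case 'a': case 'b':
      return n;
    }
  }
}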

456.hmmer apparently uses strcspn and strspn a lot.  471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//

simplifylibcalls should turn these snprintf idioms into memcpy (GCC PR47917)

char buf1[6], buf2[6], buf3[4], buf4[4];
int i;

int foo (void) {
  int ret = snprintf (buf1, sizeof buf1, "abcde");
  ret += snprintf (buf2, sizeof buf2, "abcdef") * 16;
  ret += snprintf (buf3, sizeof buf3, "%s", i++ < 6 ? "abc" : "def") * 256;
  ret += snprintf (buf4, sizeof buf4, "%s", i++ > 10 ? "abcde" : "defgh")*4096;
  return ret;
}

//===---------------------------------------------------------------------===//

"gas" uses this idiom:
  else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
..
  else if (strchr ("<>", *intel_parser.op_string)

Those should be turned into a switch.

//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple of reasons.  First, the strlen that follows
the memcpy can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer.  This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded.  Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:            ; preds = %_ZN18eonImageCalculatorC1Ev.exit
        %682 = getelementptr i8** %argv, i32 6          ; <i8**> [#uses=2]
        %683 = load i8** %682, align 4          ; <i8*> [#uses=4]
        %684 = load i8* %683, align 1           ; <i8> [#uses=1]
        %685 = icmp eq i8 %684, 0               ; <i1> [#uses=1]
        br i1 %685, label %bb10, label %bb9

bb9:            ; preds = %bb8
        %686 = call i32 @strlen(i8* %683) nounwind readonly
        %687 = icmp ugt i32 %686, 254           ; <i1> [#uses=1]
        br i1 %687, label %bb10, label %bb11

bb10:           ; preds = %bb9, %bb8
        %688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.
   1375 
   1376 //===---------------------------------------------------------------------===//
   1377 
   1378 I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
   1379 which looks like:
   1380        %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0 
   1381  
   1382 
   1383 bb62:           ; preds = %bb55, %bb53
   1384         %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]             
   1385         %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
   1386         %172 = add i32 %171, -1         ; <i32> [#uses=1]
   1387         %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172       
   1388 
   1389 ...  no stores ...
   1390        br i1 %or.cond, label %bb65, label %bb72
   1391 
   1392 bb65:           ; preds = %bb62
   1393         store i8 0, i8* %173, align 1
   1394         br label %bb72
   1395 
   1396 bb72:           ; preds = %bb65, %bb62
   1397         %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]            
   1398         %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
   1399 
   1400 Note that on the bb62->bb72 path, that the %177 strlen call is partially
   1401 redundant with the %171 call.  At worst, we could shove the %177 strlen call
   1402 up into the bb65 block moving it out of the bb62->bb72 path.   However, note
   1403 that bb65 stores to the string, zeroing out the last byte.  This means that on
   1404 that path the value of %177 is actually just %171-1.  A sub is cheaper than a
   1405 strlen!
   1406 
   1407 This pattern repeats several times, basically doing:
   1408 
   1409   A = strlen(P);
   1410   P[A-1] = 0;
   1411   B = strlen(P);
   1412   where it is "obvious" that B = A-1.
   1413 
   1414 //===---------------------------------------------------------------------===//
   1415 
   1416 186.crafty has this interesting pattern with the "out.4543" variable:
   1417 
   1418 call void @llvm.memcpy.i32(
   1419         i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
   1420        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1) 
   1421 %101 = call@printf(i8* ...   @out.4543, i32 0, i32 0)) nounwind 
   1422 
   1423 It is basically doing:
   1424 
   1425   memcpy(globalarray, "string");
   1426   printf(...,  globalarray);
   1427   
   1428 Anyway, by knowing that printf just reads the memory and forward substituting
   1429 the string directly into the printf, this eliminates reads from globalarray.
   1430 Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
   1431 other similar functions) there are many stores to "out".  Once all the printfs
   1432 stop using "out", all that is left is the memcpy's into it.  This should allow
   1433 globalopt to remove the "stored only" global.
   1434 
   1435 //===---------------------------------------------------------------------===//
   1436 
   1437 This code:
   1438 
   1439 define inreg i32 @foo(i8* inreg %p) nounwind {
   1440   %tmp0 = load i8* %p
   1441   %tmp1 = ashr i8 %tmp0, 5
   1442   %tmp2 = sext i8 %tmp1 to i32
   1443   ret i32 %tmp2
   1444 }
   1445 
could be dagcombine'd to a sign-extending load followed by a shift.
For example, on x86 we currently generate:
   1448 
   1449 	movb	(%eax), %al
   1450 	sarb	$5, %al
   1451 	movsbl	%al, %eax
   1452 
while we could instead generate:
   1454 
   1455 	movsbl	(%eax), %eax
   1456 	sarl	$5, %eax
   1457 
   1458 //===---------------------------------------------------------------------===//
   1459 
   1460 GCC PR31029:
   1461 
   1462 int test(int x) { return 1-x == x; }     // --> return false
   1463 int test2(int x) { return 2-x == x; }    // --> return x == 1 ?
   1464 
Always foldable to false for odd constants; what is the rule for even ones?
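
At the IR level, where the subtract wraps, C-x == x holds iff 2*x == C (mod
2^32); for even C that means x matches C/2 in the low 31 bits.  So test2 could
fold to something like:

int test2(int x) { return (x & 0x7fffffff) == 1; }   // 2-x == x, modulo wrap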
   1466 
   1467 //===---------------------------------------------------------------------===//
   1468 
PR 3381: a GEP to a field of size 0 inside a struct could be turned into a GEP
to the next field in the struct (which is at the same address).
   1471 
   1472 For example: store of float into { {{}}, float } could be turned into a store to
   1473 the float directly.
   1474 
   1475 //===---------------------------------------------------------------------===//
   1476 
The arg promotion pass should make use of nocapture to make its alias analysis
much more precise.
   1479 
   1480 //===---------------------------------------------------------------------===//
   1481 
   1482 The following functions should be optimized to use a select instead of a
   1483 branch (from gcc PR40072):
   1484 
   1485 char char_int(int m) {if(m>7) return 0; return m;}
   1486 int int_char(char m) {if(m>7) return 0; return m;}
   1487 
   1488 //===---------------------------------------------------------------------===//
   1489 
   1490 int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }
   1491 
   1492 Generates this:
   1493 
   1494 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
   1495 entry:
   1496   %0 = and i32 %a, 128                            ; <i32> [#uses=1]
   1497   %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
   1498   %2 = or i32 %b, 128                             ; <i32> [#uses=1]
   1499   %3 = and i32 %b, -129                           ; <i32> [#uses=1]
   1500   %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
   1501   ret i32 %b_addr.0
   1502 }
   1503 
   1504 However, it's functionally equivalent to:
   1505 
   1506          b = (b & ~0x80) | (a & 0x80);
   1507 
   1508 Which generates this:
   1509 
   1510 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
   1511 entry:
   1512   %0 = and i32 %b, -129                           ; <i32> [#uses=1]
   1513   %1 = and i32 %a, 128                            ; <i32> [#uses=1]
   1514   %2 = or i32 %0, %1                              ; <i32> [#uses=1]
   1515   ret i32 %2
   1516 }
   1517 
   1518 This can be generalized for other forms:
   1519 
   1520      b = (b & ~0x80) | (a & 0x40) << 1;
   1521 
   1522 //===---------------------------------------------------------------------===//
   1523 
   1524 These two functions produce different code. They shouldn't:
   1525 
   1526 #include <stdint.h>
   1527  
   1528 uint8_t p1(uint8_t b, uint8_t a) {
   1529   b = (b & ~0xc0) | (a & 0xc0);
   1530   return (b);
   1531 }
   1532  
   1533 uint8_t p2(uint8_t b, uint8_t a) {
   1534   b = (b & ~0x40) | (a & 0x40);
   1535   b = (b & ~0x80) | (a & 0x80);
   1536   return (b);
   1537 }
   1538 
   1539 define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
   1540 entry:
   1541   %0 = and i8 %b, 63                              ; <i8> [#uses=1]
   1542   %1 = and i8 %a, -64                             ; <i8> [#uses=1]
   1543   %2 = or i8 %1, %0                               ; <i8> [#uses=1]
   1544   ret i8 %2
   1545 }
   1546 
   1547 define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
   1548 entry:
   1549   %0 = and i8 %b, 63                              ; <i8> [#uses=1]
   1550   %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
   1551   %1 = and i8 %a, -128                            ; <i8> [#uses=1]
   1552   %2 = or i8 %1, %0                               ; <i8> [#uses=1]
   1553   %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
   1554   ret i8 %3
   1555 }
   1556 
   1557 //===---------------------------------------------------------------------===//
   1558 
IPSCCP does not currently propagate argument-dependent constants through
functions where it does not know all of the callers.  This includes functions
with normal external linkage, as well as templates, C99 inline functions, etc.
Specifically, it does nothing to:
   1563 
   1564 define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
   1565 entry:
   1566   %0 = add nsw i32 %y, %z                         
   1567   %1 = mul i32 %0, %x                             
   1568   %2 = mul i32 %y, %z                             
   1569   %3 = add nsw i32 %1, %2                         
   1570   ret i32 %3
   1571 }
   1572 
   1573 define i32 @test2() nounwind {
   1574 entry:
   1575   %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
   1576   ret i32 %0
   1577 }
   1578 
It would be interesting to extend IPSCCP to handle simple cases like this,
where all of the arguments to a call are constant.  Because IPSCCP runs
   1581 before inlining, trivial templates and inline functions are not yet inlined.
   1582 The results for a function + set of constant arguments should be memoized in a
   1583 map.
   1584 
   1585 //===---------------------------------------------------------------------===//
   1586 
   1587 The libcall constant folding stuff should be moved out of SimplifyLibcalls into
   1588 libanalysis' constantfolding logic.  This would allow IPSCCP to be able to
   1589 handle simple things like this:
   1590 
   1591 static int foo(const char *X) { return strlen(X); }
   1592 int bar() { return foo("abcd"); }
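
With strlen("abcd") constant-folded to 4, this should become simply:

int bar() { return 4; }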
   1593 
   1594 //===---------------------------------------------------------------------===//
   1595 
   1596 functionattrs doesn't know much about memcpy/memset.  This function should be
   1597 marked readnone rather than readonly, since it only twiddles local memory, but
   1598 functionattrs doesn't handle memset/memcpy/memmove aggressively:
   1599 
   1600 struct X { int *p; int *q; };
   1601 int foo() {
   1602  int i = 0, j = 1;
   1603  struct X x, y;
   1604  int **p;
   1605  y.p = &i;
   1606  x.q = &j;
   1607  p = __builtin_memcpy (&x, &y, sizeof (int *));
   1608  return **p;
   1609 }
   1610 
   1611 This can be seen at:
   1612 $ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S
   1613 
   1614 
   1615 //===---------------------------------------------------------------------===//
   1616 
   1617 Missed instcombine transformation:
   1618 define i1 @a(i32 %x) nounwind readnone {
   1619 entry:
   1620   %cmp = icmp eq i32 %x, 30
   1621   %sub = add i32 %x, -30
   1622   %cmp2 = icmp ugt i32 %sub, 9
   1623   %or = or i1 %cmp, %cmp2
   1624   ret i1 %or
   1625 }
   1626 This should be optimized to a single compare.  Testcase derived from gcc.
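
The or of the two conditions covers everything outside the range [31,39], so a
single-compare form would be something like:

define i1 @a(i32 %x) nounwind readnone {
entry:
  %sub = add i32 %x, -31
  %cmp = icmp ugt i32 %sub, 8
  ret i1 %cmp
}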
   1627 
   1628 //===---------------------------------------------------------------------===//
   1629 
   1630 Missed instcombine or reassociate transformation:
   1631 int a(int a, int b) { return (a==12)&(b>47)&(b<58); }
   1632 
   1633 The sgt and slt should be combined into a single comparison. Testcase derived
   1634 from gcc.
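
That is, the two signed comparisons can become one unsigned range check,
roughly:

int a(int a, int b) { return (a==12) & ((unsigned)(b-48) < 10); }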
   1635 
   1636 //===---------------------------------------------------------------------===//
   1637 
   1638 Missed instcombine transformation:
   1639 
   1640   %382 = srem i32 %tmp14.i, 64                    ; [#uses=1]
   1641   %383 = zext i32 %382 to i64                     ; [#uses=1]
   1642   %384 = shl i64 %381, %383                       ; [#uses=1]
   1643   %385 = icmp slt i32 %tmp14.i, 64                ; [#uses=1]
   1644 
The srem can be transformed to an and: the two agree when %tmp14.i is
non-negative, and when %tmp14.i is negative the zext of the negative remainder
yields a shift amount >= 64, so the shl is undefined anyway.  Testcase derived
from 403.gcc.
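
It could be rewritten to something like:

  %382 = and i32 %tmp14.i, 63
  %383 = zext i32 %382 to i64
  %384 = shl i64 %381, %383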
   1647 
   1648 //===---------------------------------------------------------------------===//
   1649 
   1650 This is a range comparison on a divided result (from 403.gcc):
   1651 
   1652   %1337 = sdiv i32 %1336, 8                       ; [#uses=1]
   1653   %.off.i208 = add i32 %1336, 7                   ; [#uses=1]
   1654   %1338 = icmp ult i32 %.off.i208, 15             ; [#uses=1]
   1655   
We already catch this (removing the sdiv) when there isn't an add; we should
handle the 'add' as well.  This is a common idiom in its builtin_alloca code.
   1658 C testcase:
   1659 
   1660 int a(int x) { return (unsigned)(x/16+7) < 15; }
   1661 
   1662 Another similar case involves truncations on 64-bit targets:
   1663 
   1664   %361 = sdiv i64 %.046, 8                        ; [#uses=1]
   1665   %362 = trunc i64 %361 to i32                    ; [#uses=2]
   1666 ...
   1667   %367 = icmp eq i32 %362, 0                      ; [#uses=1]
   1668 
   1669 //===---------------------------------------------------------------------===//
   1670 
   1671 Missed instcombine/dagcombine transformation:
   1672 define void @lshift_lt(i8 zeroext %a) nounwind {
   1673 entry:
   1674   %conv = zext i8 %a to i32
   1675   %shl = shl i32 %conv, 3
   1676   %cmp = icmp ult i32 %shl, 33
   1677   br i1 %cmp, label %if.then, label %if.end
   1678 
   1679 if.then:
   1680   tail call void @bar() nounwind
   1681   ret void
   1682 
   1683 if.end:
   1684   ret void
   1685 }
   1686 declare void @bar() nounwind
   1687 
   1688 The shift should be eliminated.  Testcase derived from gcc.
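
Since the shifted value is 8*%conv and cannot wrap, the compare could simply
become:

  %cmp = icmp ult i32 %conv, 5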
   1689 
   1690 //===---------------------------------------------------------------------===//
   1691 
These compile into different code: one gets recognized as a switch and the
other doesn't, due to phase ordering issues (PR6212):
   1694 
   1695 int test1(int mainType, int subType) {
   1696   if (mainType == 7)
   1697     subType = 4;
   1698   else if (mainType == 9)
   1699     subType = 6;
   1700   else if (mainType == 11)
   1701     subType = 9;
   1702   return subType;
   1703 }
   1704 
   1705 int test2(int mainType, int subType) {
   1706   if (mainType == 7)
   1707     subType = 4;
   1708   if (mainType == 9)
   1709     subType = 6;
   1710   if (mainType == 11)
   1711     subType = 9;
   1712   return subType;
   1713 }
   1714 
   1715 //===---------------------------------------------------------------------===//
   1716 
   1717 The following test case (from PR6576):
   1718 
   1719 define i32 @mul(i32 %a, i32 %b) nounwind readnone {
   1720 entry:
   1721  %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
   1722  br i1 %cond1, label %exit, label %bb.nph
   1723 bb.nph:                                           ; preds = %entry
   1724  %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
   1725  ret i32 %tmp
   1726 exit:                                             ; preds = %entry
   1727  ret i32 0
   1728 }
   1729 
   1730 could be reduced to:
   1731 
   1732 define i32 @mul(i32 %a, i32 %b) nounwind readnone {
   1733 entry:
   1734  %tmp = mul i32 %b, %a
   1735  ret i32 %tmp
   1736 }
   1737 
   1738 //===---------------------------------------------------------------------===//
   1739 
   1740 We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
   1741 See GCC PR34949
   1742 
Another interesting case: something similar could be used for variables that
become const after their ctor has finished.  In these cases, globalopt (which
   1745 can statically run the constructor) could mark the global const (so it gets put
   1746 in the readonly section).  A testcase would be:
   1747 
   1748 #include <complex>
   1749 using namespace std;
   1750 const complex<char> should_be_in_rodata (42,-42);
   1751 complex<char> should_be_in_data (42,-42);
   1752 complex<char> should_be_in_bss;
   1753 
Currently we evaluate the ctors, but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done.  See
   1756 GCC PR4131 for more examples.
   1757 
   1758 //===---------------------------------------------------------------------===//
   1759 
   1760 In this code:
   1761 
   1762 long foo(long x) {
   1763   return x > 1 ? x : 1;
   1764 }
   1765 
   1766 LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
   1767 and cheaper on most targets.
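
That is, it could instead emit code for the equivalent:

long foo(long x) {
  return x > 0 ? x : 1;
}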
   1768 
LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.
   1771 
   1772 //===---------------------------------------------------------------------===//
   1773 
   1774 define void @a(i32 %x) nounwind {
   1775 entry:
   1776   switch i32 %x, label %if.end [
   1777     i32 0, label %if.then
   1778     i32 1, label %if.then
   1779     i32 2, label %if.then
   1780     i32 3, label %if.then
   1781     i32 5, label %if.then
   1782   ]
   1783 if.then:
   1784   tail call void @foo() nounwind
   1785   ret void
   1786 if.end:
   1787   ret void
   1788 }
   1789 declare void @foo()
   1790 
   1791 Generated code on x86-64 (other platforms give similar results):
   1792 a:
   1793 	cmpl	$5, %edi
	ja	.LBB0_2
	cmpl	$4, %edi
	jne	.LBB0_3
   1797 .LBB0_2:
   1798 	ret
   1799 .LBB0_3:
   1800 	jmp	foo  # TAILCALL
   1801 
   1802 If we wanted to be really clever, we could simplify the whole thing to
   1803 something like the following, which eliminates a branch:
   1804 	xorl    $1, %edi
   1805 	cmpl	$4, %edi
	jbe	.LBB0_2
   1807 	ret
   1808 .LBB0_2:
   1809 	jmp	foo  # TAILCALL
   1810 
   1811 //===---------------------------------------------------------------------===//
   1812 
   1813 We compile this:
   1814 
   1815 int foo(int a) { return (a & (~15)) / 16; }
   1816 
   1817 Into:
   1818 
   1819 define i32 @foo(i32 %a) nounwind readnone ssp {
   1820 entry:
   1821   %and = and i32 %a, -16
   1822   %div = sdiv i32 %and, 16
   1823   ret i32 %div
   1824 }
   1825 
but (X & -A)/A is X >> log2(A) when A is a power of 2, so this case should be
instcombined into just "a >> 4".
   1828 
   1829 We do get this at the codegen level, so something knows about it, but 
   1830 instcombine should catch it earlier:
   1831 
   1832 _foo:                                   ## @foo
   1833 ## BB#0:                                ## %entry
   1834 	movl	%edi, %eax
   1835 	sarl	$4, %eax
   1836 	ret
   1837 
   1838 //===---------------------------------------------------------------------===//
   1839 
   1840 This code (from GCC PR28685):
   1841 
   1842 int test(int a, int b) {
   1843   int lt = a < b;
   1844   int eq = a == b;
   1845   if (lt)
   1846     return 1;
   1847   return eq;
   1848 }
   1849 
   1850 Is compiled to:
   1851 
   1852 define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
   1853 entry:
   1854   %cmp = icmp slt i32 %a, %b
   1855   br i1 %cmp, label %return, label %if.end
   1856 
   1857 if.end:                                           ; preds = %entry
   1858   %cmp5 = icmp eq i32 %a, %b
   1859   %conv6 = zext i1 %cmp5 to i32
   1860   ret i32 %conv6
   1861 
   1862 return:                                           ; preds = %entry
   1863   ret i32 1
   1864 }
   1865 
   1866 it could be:
   1867 
   1868 define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
   1869 entry:
   1870   %0 = icmp sle i32 %a, %b
   1871   %retval = zext i1 %0 to i32
   1872   ret i32 %retval
   1873 }
   1874 
   1875 //===---------------------------------------------------------------------===//
   1876 
   1877 This code can be seen in viterbi:
   1878 
   1879   %64 = call noalias i8* @malloc(i64 %62) nounwind
   1880 ...
   1881   %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
   1882   %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind
   1883 
   1884 llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
   1885 fold to %62.  This is a security win (overflows of malloc will get caught)
   1886 and also a performance win by exposing more memsets to the optimizer.
   1887 
   1888 This occurs several times in viterbi.
   1889 
Note that this would change the semantics of @llvm.objectsize, which by its
current definition always folds to a constant.  We should also make sure that
we remove the checking in code like:

  char *p = malloc(strlen(s)+1);
  __strcpy_chk(p, s, __builtin_objectsize(p, 0));

Here the object size is exactly the number of bytes copied, so the
__strcpy_chk can be folded to a plain strcpy.
   1896 
   1897 //===---------------------------------------------------------------------===//
   1898 
   1899 This code (from Benchmarks/Dhrystone/dry.c):
   1900 
   1901 define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
   1902 entry:
   1903   %sext = shl i32 %0, 24
   1904   %conv = ashr i32 %sext, 24
   1905   %sext6 = shl i32 %1, 24
   1906   %conv4 = ashr i32 %sext6, 24
   1907   %cmp = icmp eq i32 %conv, %conv4
   1908   %. = select i1 %cmp, i32 10000, i32 0
   1909   ret i32 %.
   1910 }
   1911 
   1912 Should be simplified into something like:
   1913 
   1914 define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
   1915 entry:
   1916   %sext = shl i32 %0, 24
   1917   %conv = and i32 %sext, 0xFF000000
   1918   %sext6 = shl i32 %1, 24
   1919   %conv4 = and i32 %sext6, 0xFF000000
   1920   %cmp = icmp eq i32 %conv, %conv4
   1921   %. = select i1 %cmp, i32 10000, i32 0
   1922   ret i32 %.
   1923 }
   1924 
   1925 and then to:
   1926 
   1927 define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
   1928 entry:
   1929   %conv = and i32 %0, 0xFF
   1930   %conv4 = and i32 %1, 0xFF
   1931   %cmp = icmp eq i32 %conv, %conv4
   1932   %. = select i1 %cmp, i32 10000, i32 0
   1933   ret i32 %.
}

   1935 //===---------------------------------------------------------------------===//
   1936 
   1937 clang -O3 currently compiles this code
   1938 
   1939 int g(unsigned int a) {
   1940   unsigned int c[100];
   1941   c[10] = a;
   1942   c[11] = a;
   1943   unsigned int b = c[10] + c[11];
   1944   if(b > a*2) a = 4;
   1945   else a = 8;
   1946   return a + 7;
   1947 }
   1948 
   1949 into
   1950 
define i32 @g(i32 %a) nounwind readnone {
   1952   %add = shl i32 %a, 1
   1953   %mul = shl i32 %a, 1
   1954   %cmp = icmp ugt i32 %add, %mul
   1955   %a.addr.0 = select i1 %cmp, i32 11, i32 15
   1956   ret i32 %a.addr.0
   1957 }
   1958 
   1959 The icmp should fold to false. This CSE opportunity is only available
   1960 after GVN and InstCombine have run.
   1961 
   1962 //===---------------------------------------------------------------------===//
   1963 
   1964 memcpyopt should turn this:
   1965 
   1966 define i8* @test10(i32 %x) {
   1967   %alloc = call noalias i8* @malloc(i32 %x) nounwind
   1968   call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
   1969   ret i8* %alloc
   1970 }
   1971 
   1972 into a call to calloc.  We should make sure that we analyze calloc as
   1973 aggressively as malloc though.
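
The rewritten form would be roughly (assuming the usual calloc signature):

define i8* @test10(i32 %x) {
  %alloc = call noalias i8* @calloc(i32 1, i32 %x) nounwind
  ret i8* %alloc
}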
   1974 
   1975 //===---------------------------------------------------------------------===//
   1976 
   1977 clang -O3 doesn't optimize this:
   1978 
   1979 void f1(int* begin, int* end) {
   1980   std::fill(begin, end, 0);
   1981 }
   1982 
   1983 into a memset.  This is PR8942.
   1984 
   1985 //===---------------------------------------------------------------------===//
   1986 
   1987 clang -O3 -fno-exceptions currently compiles this code:
   1988 
   1989 void f(int N) {
   1990   std::vector<int> v(N);
   1991 
   1992   extern void sink(void*); sink(&v);
   1993 }
   1994 
   1995 into
   1996 
   1997 define void @_Z1fi(i32 %N) nounwind {
   1998 entry:
   1999   %v2 = alloca [3 x i32*], align 8
   2000   %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
   2001   %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
   2002   %conv = sext i32 %N to i64
   2003   store i32* null, i32** %v2.sub, align 8, !tbaa !0
   2004   %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
   2005   store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
   2006   %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
   2007   store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
   2008   %cmp.i.i.i.i = icmp eq i32 %N, 0
   2009   br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i
   2010 
   2011 _ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
   2012   store i32* null, i32** %v2.sub, align 8, !tbaa !0
   2013   store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
   2014   %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
   2015   store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
   2016   br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
   2017 
   2018 cond.true.i.i.i.i:                                ; preds = %entry
   2019   %cmp.i.i.i.i.i = icmp slt i32 %N, 0
   2020   br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i
   2021 
   2022 if.then.i.i.i.i.i:                                ; preds = %cond.true.i.i.i.i
   2023   call void @_ZSt17__throw_bad_allocv() noreturn nounwind
   2024   unreachable
   2025 
   2026 _ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i:    ; preds = %cond.true.i.i.i.i
   2027   %mul.i.i.i.i.i = shl i64 %conv, 2
   2028   %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
   2029   %0 = bitcast i8* %call3.i.i.i.i.i to i32*
   2030   store i32* %0, i32** %v2.sub, align 8, !tbaa !0
   2031   store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
   2032   %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
   2033   store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
   2034   call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
   2035   br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
   2036 
This is just the code handling the construction of the vector.  Most
surprising here is the fact that all three null stores in %entry are dead
(because we do no cross-block DSE).

Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
This is because the client of LazyValueInfo doesn't simplify all instruction
operands, just selected ones.
   2044 
   2045 //===---------------------------------------------------------------------===//
   2046 
   2047 clang -O3 -fno-exceptions currently compiles this code:
   2048 
   2049 void f(char* a, int n) {
   2050   __builtin_memset(a, 0, n);
   2051   for (int i = 0; i < n; ++i)
   2052     a[i] = 0;
   2053 }
   2054 
   2055 into:
   2056 
   2057 define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
   2058 entry:
   2059   %conv = sext i32 %n to i64
   2060   tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
   2061   %cmp8 = icmp sgt i32 %n, 0
   2062   br i1 %cmp8, label %for.body.lr.ph, label %for.end
   2063 
   2064 for.body.lr.ph:                                   ; preds = %entry
   2065   %tmp10 = add i32 %n, -1
   2066   %tmp11 = zext i32 %tmp10 to i64
   2067   %tmp12 = add i64 %tmp11, 1
   2068   call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
   2069   ret void
   2070 
   2071 for.end:                                          ; preds = %entry
   2072   ret void
   2073 }
   2074 
This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
the two memsets together.
   2077 
   2078 The issue with the addition only occurs in 64-bit mode, and appears to be at
   2079 least partially caused by Scalar Evolution not keeping its cache updated: it
   2080 returns the "wrong" result immediately after indvars runs, but figures out the
   2081 expected result if it is run from scratch on IR resulting from running indvars.
   2082 
   2083 //===---------------------------------------------------------------------===//
   2084 
   2085 clang -O3 -fno-exceptions currently compiles this code:
   2086 
   2087 struct S {
   2088   unsigned short m1, m2;
   2089   unsigned char m3, m4;
   2090 };
   2091 
   2092 void f(int N) {
   2093   std::vector<S> v(N);
   2094   extern void sink(void*); sink(&v);
   2095 }
   2096 
into poor code for zero-initializing 'v' when N > 0.  The problem is that
S is only 6 bytes, but each element is 8-byte aligned.  We generate a loop and
4 stores on each iteration.  If the struct were 8 bytes, this would be turned
into a memset.
   2101 
   2102 In order to handle this we have to:
   2103   A) Teach clang to generate metadata for memsets of structs that have holes in
   2104      them.
   2105   B) Teach clang to use such a memset for zero init of this struct (since it has
   2106      a hole), instead of doing elementwise zeroing.
   2107 
   2108 //===---------------------------------------------------------------------===//
   2109 
   2110 clang -O3 currently compiles this code:
   2111 
   2112 extern const int magic;
   2113 double f() { return 0.0 * magic; }
   2114 
   2115 into
   2116 
   2117 @magic = external constant i32
   2118 
   2119 define double @_Z1fv() nounwind readnone {
   2120 entry:
   2121   %tmp = load i32* @magic, align 4, !tbaa !0
   2122   %conv = sitofp i32 %tmp to double
   2123   %mul = fmul double %conv, 0.000000e+00
   2124   ret double %mul
   2125 }
   2126 
We should be able to fold away this fmul to 0.0.  More generally, fmul(x, 0.0)
can be folded to 0.0 if we can prove that x is not a NaN, not an INF, and has
a clear sign bit (so it is neither -0.0 nor any negative value).  The
CannotBeNegativeZero predicate in value tracking should be extended to support
general "fpclassify" operations that can return yes/no/unknown for each of
these predicates.

For this predicate, we know that uitofp trivially never produces a NaN, a
negative value, or -0.0, and we know that it isn't +/-Inf if the floating
point type has enough exponent bits to represent the largest integer value as
a finite number.
   2136 
   2137 //===---------------------------------------------------------------------===//
   2138 
When considering a transformation that can change the sign of 0.0 (such as the
0.0*val -> 0.0 transformation above), it might be provable that the sign of the
   2141 expression doesn't matter.  For example, by the above rules, we can't transform
   2142 fmul(sitofp(x), 0.0) into 0.0, because x might be -1 and the result of the
   2143 expression is defined to be -0.0.
   2144 
If we look at the uses of the fmul, we might be able to prove that all uses
don't care about the sign of zero.  For example, if we have:
   2147 
   2148   fadd(fmul(sitofp(x), 0.0), 2.0)
   2149 
Since we know that x+2.0 doesn't care about the sign of any zeros in x, we can
   2151 transform the fmul to 0.0, and then the fadd to 2.0.
   2152 
   2153 //===---------------------------------------------------------------------===//
   2154 
We should enhance memcpy/memmove/memset to allow a metadata node on them
indicating that some bytes of the transfer are undefined.  This is useful for
frontends like clang when lowering struct copies where some elements of the
struct are undefined.  Consider something like this:
   2159 
   2160 struct x {
   2161   char a;
   2162   int b[4];
   2163 };
   2164 void foo(struct x*P);
   2165 struct x testfunc() {
   2166   struct x V1, V2;
   2167   foo(&V1);
   2168   V2 = V1;
   2169 
   2170   return V2;
   2171 }
   2172 
   2173 We currently compile this to:
   2174 $ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S
   2175 
   2176 
   2177 %struct.x = type { i8, [4 x i32] }
   2178 
   2179 define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
   2180 entry:
   2181   %V1 = alloca %struct.x, align 4
   2182   call void @foo(%struct.x* %V1)
   2183   %tmp1 = bitcast %struct.x* %V1 to i8*
   2184   %0 = bitcast %struct.x* %V1 to i160*
   2185   %srcval1 = load i160* %0, align 4
   2186   %tmp2 = bitcast %struct.x* %agg.result to i8*
   2187   %1 = bitcast %struct.x* %agg.result to i160*
   2188   store i160 %srcval1, i160* %1, align 4
   2189   ret void
   2190 }
   2191 
This happens because SRoA sees that the temp alloca is being memcpy'd into
and out of, that it has holes, and that it therefore has to be conservative.
If we knew about the holes, this could be much better.
   2195 
   2196 Having information about these holes would also improve memcpy (etc) lowering at
   2197 llc time when it gets inlined, because we can use smaller transfers.  This also
   2198 avoids partial register stalls in some important cases.
   2199 
   2200 //===---------------------------------------------------------------------===//
   2201 
   2202 We don't fold (icmp (add) (add)) unless the two adds only have a single use.
There are a lot of cases that we're refusing to fold, for example in
256.bzip2:
   2205 
   2206  %indvar.next90 = add i64 %indvar89, 1     ;; Has 2 uses
   2207  %tmp96 = add i64 %tmp95, 1                ;; Has 1 use
   2208  %exitcond97 = icmp eq i64 %indvar.next90, %tmp96
   2209 
We don't fold this because we don't want to introduce an overlapping live
range of the ivar.  However, we can make this more aggressive without causing
performance issues in two ways:
   2213 
   2214 1. If *either* the LHS or RHS has a single use, we can definitely do the
   2215    transformation.  In the overlapping liverange case we're trading one register
   2216    use for one fewer operation, which is a reasonable trade.  Before doing this
   2217    we should verify that the llc output actually shrinks for some benchmarks.
   2218 2. If both ops have multiple uses, we can still fold it if the operations are
   2219    both sinkable to *after* the icmp (e.g. in a subsequent block) which doesn't
   2220    increase register pressure.
   2221 
   2222 There are a ton of icmp's we aren't simplifying because of the reg pressure
   2223 concern.  Care is warranted here though because many of these are induction
   2224 variables and other cases that matter a lot to performance, like the above.
   2225 Here's a blob of code that you can drop into the bottom of visitICmp to see some
   2226 missed cases:
   2227 
   2228   { Value *A, *B, *C, *D;
   2229     if (match(Op0, m_Add(m_Value(A), m_Value(B))) && 
   2230         match(Op1, m_Add(m_Value(C), m_Value(D))) &&
   2231         (A == C || A == D || B == C || B == D)) {
   2232       errs() << "OP0 = " << *Op0 << "  U=" << Op0->getNumUses() << "\n";
   2233       errs() << "OP1 = " << *Op1 << "  U=" << Op1->getNumUses() << "\n";
   2234       errs() << "CMP = " << I << "\n\n";
   2235     }
   2236   }
   2237 
   2238 //===---------------------------------------------------------------------===//
   2239 
   2240 define i1 @test1(i32 %x) nounwind {
   2241   %and = and i32 %x, 3
   2242   %cmp = icmp ult i32 %and, 2
   2243   ret i1 %cmp
   2244 }
   2245 
   2246 Can be folded to (x & 2) == 0.
   2247 
   2248 define i1 @test2(i32 %x) nounwind {
   2249   %and = and i32 %x, 3
   2250   %cmp = icmp ugt i32 %and, 1
   2251   ret i1 %cmp
   2252 }
   2253 
   2254 Can be folded to (x & 2) != 0.
   2255 
   2256 SimplifyDemandedBits shrinks the "and" constant to 2 but instcombine misses the
   2257 icmp transform.
   2258 
   2259 //===---------------------------------------------------------------------===//
   2260 
   2261 This code:
   2262 
typedef struct {
  int f1:1;
  int f2:1;
  int f3:1;
  int f4:29;
} t1;

typedef struct {
  int f1:1;
  int f2:1;
  int f3:30;
} t2;

t1 s1;
t2 s2;

void func1(void)
{
  s1.f1 = s2.f1;
  s1.f2 = s2.f2;
}
   2284 
   2285 Compiles into this IR (on x86-64 at least):
   2286 
   2287 %struct.t1 = type { i8, [3 x i8] }
   2288 @s2 = global %struct.t1 zeroinitializer, align 4
   2289 @s1 = global %struct.t1 zeroinitializer, align 4
   2290 define void @func1() nounwind ssp noredzone {
   2291 entry:
   2292   %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
   2293   %bf.val.sext5 = and i32 %0, 1
   2294   %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
   2295   %2 = and i32 %1, -4
   2296   %3 = or i32 %2, %bf.val.sext5
   2297   %bf.val.sext26 = and i32 %0, 2
   2298   %4 = or i32 %3, %bf.val.sext26
   2299   store i32 %4, i32* bitcast (%struct.t1* @s1 to i32*), align 4
   2300   ret void
   2301 }
   2302 
   2303 The two or/and's should be merged into one each.
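
i.e. roughly (with a made-up name for the merged mask):

  %bf.val = and i32 %0, 3
  %3 = or i32 %2, %bf.val
  store i32 %3, i32* bitcast (%struct.t1* @s1 to i32*), align 4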
   2304 
   2305 //===---------------------------------------------------------------------===//
   2306 
   2307 Machine level code hoisting can be useful in some cases.  For example, PR9408
   2308 is about:
   2309 
   2310 typedef union {
   2311  void (*f1)(int);
   2312  void (*f2)(long);
   2313 } funcs;
   2314 
   2315 void foo(funcs f, int which) {
   2316  int a = 5;
   2317  if (which) {
   2318    f.f1(a);
   2319  } else {
   2320    f.f2(a);
   2321  }
   2322 }
   2323 
   2324 which we compile to:
   2325 
   2326 foo:                                    # @foo
   2327 # BB#0:                                 # %entry
   2328        pushq   %rbp
   2329        movq    %rsp, %rbp
   2330        testl   %esi, %esi
   2331        movq    %rdi, %rax
   2332        je      .LBB0_2
   2333 # BB#1:                                 # %if.then
   2334        movl    $5, %edi
   2335        callq   *%rax
   2336        popq    %rbp
   2337        ret
   2338 .LBB0_2:                                # %if.else
   2339        movl    $5, %edi
   2340        callq   *%rax
   2341        popq    %rbp
   2342        ret
   2343 
Note that BB#1 (%if.then) and .LBB0_2 (%if.else) are identical.  This doesn't
happen at the IR level because one call is passing an i32 and the other is
passing an i64.
   2346 
   2347 //===---------------------------------------------------------------------===//
   2348 
   2349 I see this sort of pattern in 176.gcc in a few places (e.g. the start of
   2350 store_bit_field).  The rem should be replaced with a multiply and subtract:
   2351 
   2352   %3 = sdiv i32 %A, %B
   2353   %4 = srem i32 %A, %B
   2354 
   2355 Similarly for udiv/urem.  Note that this shouldn't be done on X86 or ARM,
   2356 which can do this in a single operation (instruction or libcall).  It is
   2357 probably best to do this in the code generator.
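
That is, at whatever level it is done, rewrite the pair to reuse the quotient,
roughly:

  %3 = sdiv i32 %A, %B
  %mul = mul i32 %3, %B
  %4 = sub i32 %A, %mul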
   2358 
   2359 //===---------------------------------------------------------------------===//
   2360 
   2361 unsigned foo(unsigned x, unsigned y) { return (x & y) == 0 || x == 0; }
   2362 should fold to (x & y) == 0.
   2363 
   2364 //===---------------------------------------------------------------------===//
   2365 
   2366 unsigned foo(unsigned x, unsigned y) { return x > y && x != 0; }
   2367 should fold to x > y.
   2368 
   2369 //===---------------------------------------------------------------------===//
   2370