Editor's note: This document is _heavily_ cribbed from the Linux Kernel, with
really only the section about "Alignment vs. Networking" removed.

UNALIGNED MEMORY ACCESSES
=========================

Linux runs on a wide variety of architectures which have varying behaviour
when it comes to memory access. This document presents some details about
unaligned accesses, why you need to write code that doesn't cause them,
and how to write such code!


The definition of an unaligned access
=====================================

Unaligned memory accesses occur when you try to read N bytes of data starting
from an address that is not evenly divisible by N (i.e. addr % N != 0).
For example, reading 4 bytes of data from address 0x10004 is fine, but
reading 4 bytes of data from address 0x10005 would be an unaligned memory
access.

The above may seem a little vague, as memory access can happen in different
ways. The context here is at the machine code level: certain instructions read
or write a number of bytes to or from memory (e.g. movb, movw, movl in x86
assembly). As will become clear, it is relatively easy to spot C statements
which will compile to multiple-byte memory access instructions, namely when
dealing with types such as u16, u32 and u64.


Natural alignment
=================

The rule mentioned above forms what we refer to as natural alignment:
when accessing N bytes of memory, the base memory address must be evenly
divisible by N, i.e. addr % N == 0.

When writing code, assume the target architecture has natural alignment
requirements.

In reality, only a few architectures require natural alignment on all sizes
of memory access. However, we must consider ALL supported architectures;
writing code that satisfies natural alignment requirements is the easiest way
to achieve full portability.


Why unaligned access is bad
===========================

The effects of performing an unaligned memory access vary from architecture
to architecture. It would be easy to write a whole document on the differences
here; a summary of the common scenarios is presented below:

- Some architectures are able to perform unaligned memory accesses
  transparently, but there is usually a significant performance cost.
- Some architectures raise processor exceptions when unaligned accesses
  happen. The exception handler is able to correct the unaligned access,
  at significant cost to performance.
- Some architectures raise processor exceptions when unaligned accesses
  happen, but the exceptions do not contain enough information for the
  unaligned access to be corrected.
- Some architectures are not capable of unaligned memory access, but will
  silently perform a different memory access to the one that was requested,
  resulting in a subtle code bug that is hard to detect!

It should be obvious from the above that if your code causes unaligned
memory accesses to happen, your code will not work correctly on certain
platforms and will cause performance problems on others.


Code that does not cause unaligned access
=========================================

At first, the concepts above may seem a little hard to relate to actual
coding practice. After all, you don't have a great deal of control over
memory addresses of certain variables, etc.
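
One thing you can always do, however, is test whether a given address
satisfies the natural alignment rule. Here is a minimal sketch of the
addr % N == 0 check (is_naturally_aligned() is an illustrative helper for
this document, not a kernel API):

        /* Illustrative helper, not a kernel API: nonzero if ptr is
         * suitably aligned for an n-byte access, i.e. the address is
         * evenly divisible by n. */
        static inline int is_naturally_aligned(const void *ptr, unsigned long n)
        {
                return ((unsigned long)ptr % n) == 0;
        }

For instance, is_naturally_aligned((void *)0x10005, 4) evaluates to 0,
matching the unaligned 4-byte read in the earlier example.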

Fortunately things are not too complex, as in most cases the compiler
ensures that things will work for you. For example, take the following
structure:

        struct foo {
                u16 field1;
                u32 field2;
                u8 field3;
        };

Let us assume that an instance of the above structure resides in memory
starting at address 0x10000. With a basic level of understanding, it would
not be unreasonable to expect that accessing field2 would cause an unaligned
access. You'd be expecting field2 to be located at offset 2 bytes into the
structure, i.e. address 0x10002, but that address is not evenly divisible
by 4 (remember, we're reading a 4 byte value here).

Fortunately, the compiler understands the alignment constraints, so in the
above case it would insert 2 bytes of padding in between field1 and field2.
Therefore, for standard structure types you can always rely on the compiler
to pad structures so that accesses to fields are suitably aligned (assuming
you do not cast the field to a type of a different length).

Similarly, you can also rely on the compiler to align variables and function
parameters to a naturally aligned scheme, based on the size of the type of
the variable.

At this point, it should be clear that accessing a single byte (u8 or char)
will never cause an unaligned access, because all memory addresses are evenly
divisible by one.

On a related topic, with the above considerations in mind you may observe
that you could reorder the fields in the structure in order to place fields
where padding would otherwise be inserted, and hence reduce the overall
resident memory size of structure instances. The optimal layout of the
above example is:

        struct foo {
                u32 field2;
                u16 field1;
                u8 field3;
        };

For a natural alignment scheme, the compiler would only have to add a single
byte of padding at the end of the structure. This padding is added in order
to satisfy alignment constraints for arrays of these structures.

Another point worth mentioning is the use of __attribute__((packed)) on a
structure type. This GCC-specific attribute tells the compiler never to
insert any padding within structures, which is useful when you want to use a
C struct to represent some data that comes in a fixed arrangement 'off the
wire'.

You might be inclined to believe that usage of this attribute can easily
lead to unaligned accesses when accessing fields that do not satisfy
architectural alignment requirements. However, again, the compiler is aware
of the alignment constraints and will generate extra instructions to perform
the memory access in a way that does not cause unaligned access. Of course,
the extra instructions cause a loss in performance compared to the
non-packed case, so the packed attribute should only be used when avoiding
structure padding is of importance.
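
As a short illustration (the structure and field names below are ours, not
taken from the kernel), consider a packed header for a hypothetical wire
format:

        /* Hypothetical wire-format header, for illustration only. */
        struct wire_header {
                u8 type;        /* offset 0 */
                u32 length;     /* offset 1: unaligned, no padding inserted */
                u16 checksum;   /* offset 5: also unaligned */
        } __attribute__((packed));

Without the packed attribute, the compiler would place length at offset 4
and checksum at offset 8, growing sizeof(struct wire_header) from 7 to 12
bytes; with it, the layout matches the wire format exactly, and on a
strict-alignment architecture the compiler reads length using several
byte-sized loads rather than a single 32-bit load.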


Code that causes unaligned access
=================================

With the above in mind, let's move on to a real life example of a function
that can cause an unaligned memory access. The following function, taken
from the Linux Kernel's include/linux/etherdevice.h, is an optimized routine
to compare two ethernet MAC addresses for equality.

        bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
        {
        #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
                u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
                           ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));

                return fold == 0;
        #else
                const u16 *a = (const u16 *)addr1;
                const u16 *b = (const u16 *)addr2;

                return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) == 0;
        #endif
        }

In the above function, when the hardware has efficient unaligned access
capability, there is no issue with this code. But when the hardware isn't
able to access memory on arbitrary boundaries, the reference to a[0] causes
2 bytes (16 bits) to be read from memory starting at address addr1.

Think about what would happen if addr1 was an odd address such as 0x10003.
(Hint: it'd be an unaligned access.)

Despite the potential unaligned access problems with the above function, it
is included in the kernel anyway but is understood to only work normally on
16-bit-aligned addresses. It is up to the caller to ensure this alignment or
not use this function at all. This alignment-unsafe function is still useful
as it is a decent optimization for the cases when you can ensure alignment,
which is true almost all of the time in the ethernet networking context.

Here is another example of some code that could cause unaligned accesses:

        void myfunc(u8 *data, u32 value)
        {
                [...]
                *((u32 *) data) = cpu_to_le32(value);
                [...]
        }

This code will cause unaligned accesses every time the data parameter points
to an address that is not evenly divisible by 4.

In summary, the 2 main scenarios where you may run into unaligned access
problems involve:

 1. Casting variables to types of different lengths
 2. Pointer arithmetic followed by access to at least 2 bytes of data


Avoiding unaligned accesses
===========================

The easiest way to avoid unaligned access is to use the get_unaligned() and
put_unaligned() macros provided by the <asm/unaligned.h> header file.

Going back to an earlier example of code that potentially causes unaligned
access:

        void myfunc(u8 *data, u32 value)
        {
                [...]
                *((u32 *) data) = cpu_to_le32(value);
                [...]
        }

To avoid the unaligned memory access, you would rewrite it as follows:

        void myfunc(u8 *data, u32 value)
        {
                [...]
                value = cpu_to_le32(value);
                put_unaligned(value, (u32 *) data);
                [...]
        }

The get_unaligned() macro works similarly. Assuming 'data' is a pointer to
memory and you wish to avoid unaligned access, its usage is as follows:

        u32 value = get_unaligned((u32 *) data);

These macros work for memory accesses of any length (not just 32 bits as
in the examples above). Be aware that when compared to standard access of
aligned memory, using these macros to access unaligned memory can be costly
in terms of performance.

If use of such macros is not convenient, another option is to use memcpy(),
where the source or destination (or both) are of type u8* or unsigned char*.
Due to the byte-wise nature of this operation, unaligned accesses are avoided.
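
For example, the myfunc() routine above could be written with memcpy()
instead of the macros. This is a sketch in the same style as the other
snippets here (the kernel types and the cpu_to_le32() helper are assumed):

        void myfunc(u8 *data, u32 value)
        {
                u32 le_value = cpu_to_le32(value);

                /* Byte-wise copy: no alignment requirement on 'data'. */
                memcpy(data, &le_value, sizeof(le_value));
        }

Modern compilers typically recognize such a small fixed-size memcpy() and
emit the cheapest safe store available, so this form often costs little or
nothing compared to the cast.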

--
In the Linux Kernel,
Authors: Daniel Drake <dsd@gentoo.org>,
         Johannes Berg <johannes@sipsolutions.net>
With help from: Alan Cox, Avuton Olrich, Heikki Orsila, Jan Engelhardt,
Kyle McMartin, Kyle Moffett, Randy Dunlap, Robert Hancock, Uli Kunitz,
Vadim Lobanov