I'm using godbolt to get the assembly of the following program:
#include <stdio.h>
volatile int a = 5;
volatile int res = 0;
int main() {
    res = a * 36;
    return 1;
}
If I use -Os optimization, the generated code is natural:
mov eax, DWORD PTR a[rip]
imul eax, eax, 36
mov DWORD PTR res[rip], eax
But if I use -O2, the generated code is this:
mov eax, DWORD PTR a[rip]
lea eax, [rax+rax*8]
sal eax, 2
mov DWORD PTR res[rip], eax
So instead of multiplying 5*36 directly, it does 5 -> 5 + 5*8 = 45 -> 45*4 = 180. I assume this is because one imul is slower than one lea plus one shift left.
But the lea instruction needs to calculate rax + rax*8, which contains 1 addition and 1 multiplication. So why is it still faster than a single imul? Is it because the memory-addressing arithmetic inside lea is free?
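To double-check the transformation, here is a small C sanity check; the shifted expression mirrors what the lea + sal pair computes (36 = (1 + 8) * 4):

#include <stdio.h>

/* 36 = (1 + 8) * 4, so a*36 == (a + a*8) << 2.
   The lea computes a + a*8, the sal does the remaining *4. */
int main() {
    for (unsigned a = 0; a < 1000000; a++) {
        unsigned via_lea_sal = (a + a * 8) << 2;
        if (via_lea_sal != a * 36) {
            printf("mismatch at %u\n", a);
            return 1;
        }
    }
    printf("(a + a*8) << 2 == a*36 for all tested values\n");
    return 0;
}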
Edit 1: also, how does [rax+rax*8] get translated into machine code? Does it get compiled down to 2 additional instructions (shl rbx, rax, 3; add rax, rax, rbx;), or something else?
Edit 2: Surprising results below. I put the statement in a loop, generate code with -O2, then copy the file and replace the segment above with the code from -Os. So the 2 assembly files are identical everywhere except for the instructions we're benchmarking. Running on Windows, the commands are:
gcc mul.c -O2 -S -masm=intel -o mulo2.s
gcc mulo2.s -o mulo2
// replace the lea/sal lines in mulo2.s with imul, save as muls.s
gcc muls.s -o muls
cmd /v:on /c "echo !time! & START "TestAgente" /W mulo2 & echo !time!"
cmd /v:on /c "echo !time! & START "TestAgente" /W muls & echo !time!"
#include <stdio.h>
volatile int a = 5;
volatile int res = 0;
int main() {
    size_t LOOP = 1000 * 1000 * 1000;
    LOOP = LOOP * 10;
    size_t i = 0;
    while (i < LOOP) {
        i++;
        res = a * 36;
    }
    return 0;
}
; mulo2.s
.file "mul.c"
.intel_syntax noprefix
.text
.def __main; .scl 2; .type 32; .endef
.section .text.startup,"x"
.p2align 4
.globl main
.def main; .scl 2; .type 32; .endef
.seh_proc main
main:
sub rsp, 40
.seh_stackalloc 40
.seh_endprologue
call __main
movabs rdx, 10000000000
.p2align 4,,10
.p2align 3
.L2:
mov eax, DWORD PTR a[rip]
lea eax, [rax+rax*8] ; replace these 2 lines with
sal eax, 2 ; imul eax, eax, 36
mov DWORD PTR res[rip], eax
sub rdx, 1
jne .L2
xor eax, eax
add rsp, 40
ret
.seh_endproc
.globl res
.bss
.align 4
res:
.space 4
.globl a
.data
.align 4
a:
.long 5
.ident "GCC: (GNU) 9.3.0"
Surprisingly, the result is that the -Os version is consistently faster than the -O2 one (4.1s vs 5s on average, Intel i7-8750H CPU, each .exe file run several times). So in this case the compiler has optimized in the wrong direction. Could someone provide an explanation that accounts for this benchmark?
Edit 3: To measure the effect of instruction cache lines, here's a Python script that generates different addresses for the main loop by inserting nop instructions right before it. It's written for Windows; for Linux it only needs small modifications.
#cd "D:\Learning\temp"
import os
import time
import datetime as dt
f = open("mulo2.s","r")
lines = [line for line in f]
f.close()
def addNop(cnt, outputname):
f = open(outputname, "w")
for i in range(17):
f.write(lines[i])
for i in range(cnt):
f.write("\tnop\n")
for i in range(17, len(lines)):
f.write(lines[i])
f.close()
if os.path.isdir("nop_files")==False:
os.mkdir("nop_files")
MAXN = 100
for t in range(MAXN 1):
sourceFile = "nop_files\\mulo2_" str(t) ".s" # change \\ to / on Linux
exeFile = "nop_files\\mulo2_" str(t)
if os.path.isfile(sourceFile)==False:
addNop(t, sourceFile)
os.system("gcc " sourceFile " -o " exeFile)
runtime = os.popen("timecmd " exeFile).read() # use time
print(str(t) " nop: " str(runtime))
Result:
0 nop: command took 0:0:4.96 (4.96s total)
1 nop: command took 0:0:4.94 (4.94s total)
2 nop: command took 0:0:4.90 (4.90s total)
3 nop: command took 0:0:4.90 (4.90s total)
4 nop: command took 0:0:5.26 (5.26s total)
5 nop: command took 0:0:4.94 (4.94s total)
6 nop: command took 0:0:4.92 (4.92s total)
7 nop: command took 0:0:4.98 (4.98s total)
8 nop: command took 0:0:5.02 (5.02s total)
9 nop: command took 0:0:4.97 (4.97s total)
10 nop: command took 0:0:5.12 (5.12s total)
11 nop: command took 0:0:5.01 (5.01s total)
12 nop: command took 0:0:5.01 (5.01s total)
13 nop: command took 0:0:5.07 (5.07s total)
14 nop: command took 0:0:5.08 (5.08s total)
15 nop: command took 0:0:5.07 (5.07s total)
16 nop: command took 0:0:5.09 (5.09s total)
17 nop: command took 0:0:7.96 (7.96s total) # slow 17
18 nop: command took 0:0:7.93 (7.93s total)
19 nop: command took 0:0:7.88 (7.88s total)
20 nop: command took 0:0:7.88 (7.88s total)
21 nop: command took 0:0:7.94 (7.94s total)
22 nop: command took 0:0:7.90 (7.90s total)
23 nop: command took 0:0:7.92 (7.92s total)
24 nop: command took 0:0:7.99 (7.99s total)
25 nop: command took 0:0:7.89 (7.89s total)
26 nop: command took 0:0:7.88 (7.88s total)
27 nop: command took 0:0:7.88 (7.88s total)
28 nop: command took 0:0:7.84 (7.84s total)
29 nop: command took 0:0:7.84 (7.84s total)
30 nop: command took 0:0:7.88 (7.88s total)
31 nop: command took 0:0:7.91 (7.91s total)
32 nop: command took 0:0:7.89 (7.89s total)
33 nop: command took 0:0:7.88 (7.88s total)
34 nop: command took 0:0:7.94 (7.94s total)
35 nop: command took 0:0:7.81 (7.81s total)
36 nop: command took 0:0:7.89 (7.89s total)
37 nop: command took 0:0:7.90 (7.90s total)
38 nop: command took 0:0:7.92 (7.92s total)
39 nop: command took 0:0:7.83 (7.83s total)
40 nop: command took 0:0:4.95 (4.95s total) # fast 40
41 nop: command took 0:0:4.91 (4.91s total)
42 nop: command took 0:0:4.97 (4.97s total)
43 nop: command took 0:0:4.97 (4.97s total)
44 nop: command took 0:0:4.97 (4.97s total)
45 nop: command took 0:0:5.11 (5.11s total)
46 nop: command took 0:0:5.13 (5.13s total)
47 nop: command took 0:0:5.01 (5.01s total)
48 nop: command took 0:0:5.01 (5.01s total)
49 nop: command took 0:0:4.97 (4.97s total)
50 nop: command took 0:0:5.03 (5.03s total)
51 nop: command took 0:0:5.32 (5.32s total)
52 nop: command took 0:0:4.95 (4.95s total)
53 nop: command took 0:0:4.97 (4.97s total)
54 nop: command took 0:0:4.94 (4.94s total)
55 nop: command took 0:0:4.99 (4.99s total)
56 nop: command took 0:0:4.99 (4.99s total)
57 nop: command took 0:0:5.04 (5.04s total)
58 nop: command took 0:0:4.97 (4.97s total)
59 nop: command took 0:0:4.97 (4.97s total)
60 nop: command took 0:0:4.95 (4.95s total)
61 nop: command took 0:0:4.99 (4.99s total)
62 nop: command took 0:0:4.94 (4.94s total)
63 nop: command took 0:0:4.94 (4.94s total)
64 nop: command took 0:0:4.92 (4.92s total)
65 nop: command took 0:0:4.91 (4.91s total)
66 nop: command took 0:0:4.98 (4.98s total)
67 nop: command took 0:0:4.93 (4.93s total)
68 nop: command took 0:0:4.95 (4.95s total)
69 nop: command took 0:0:4.92 (4.92s total)
70 nop: command took 0:0:4.93 (4.93s total)
71 nop: command took 0:0:4.97 (4.97s total)
72 nop: command took 0:0:4.93 (4.93s total)
73 nop: command took 0:0:4.94 (4.94s total)
74 nop: command took 0:0:4.96 (4.96s total)
75 nop: command took 0:0:4.91 (4.91s total)
76 nop: command took 0:0:4.92 (4.92s total)
77 nop: command took 0:0:4.91 (4.91s total)
78 nop: command took 0:0:5.03 (5.03s total)
79 nop: command took 0:0:4.96 (4.96s total)
80 nop: command took 0:0:5.20 (5.20s total)
81 nop: command took 0:0:7.93 (7.93s total) # slow 81
82 nop: command took 0:0:7.88 (7.88s total)
83 nop: command took 0:0:7.85 (7.85s total)
84 nop: command took 0:0:7.91 (7.91s total)
85 nop: command took 0:0:7.93 (7.93s total)
86 nop: command took 0:0:8.06 (8.06s total)
87 nop: command took 0:0:8.03 (8.03s total)
88 nop: command took 0:0:7.85 (7.85s total)
89 nop: command took 0:0:7.88 (7.88s total)
90 nop: command took 0:0:7.91 (7.91s total)
91 nop: command took 0:0:7.86 (7.86s total)
92 nop: command took 0:0:7.99 (7.99s total)
93 nop: command took 0:0:7.86 (7.86s total)
94 nop: command took 0:0:7.91 (7.91s total)
95 nop: command took 0:0:8.12 (8.12s total)
96 nop: command took 0:0:7.88 (7.88s total)
97 nop: command took 0:0:7.81 (7.81s total)
98 nop: command took 0:0:7.88 (7.88s total)
99 nop: command took 0:0:7.85 (7.85s total)
100 nop: command took 0:0:7.90 (7.90s total)
101 nop: command took 0:0:7.93 (7.93s total)
102 nop: command took 0:0:7.85 (7.85s total)
103 nop: command took 0:0:7.88 (7.88s total)
104 nop: command took 0:0:5.00 (5.00s total) # fast 104
105 nop: command took 0:0:5.03 (5.03s total)
106 nop: command took 0:0:4.97 (4.97s total)
107 nop: command took 0:0:5.06 (5.06s total)
108 nop: command took 0:0:5.01 (5.01s total)
109 nop: command took 0:0:5.00 (5.00s total)
110 nop: command took 0:0:4.95 (4.95s total)
111 nop: command took 0:0:4.91 (4.91s total)
112 nop: command took 0:0:4.94 (4.94s total)
113 nop: command took 0:0:4.93 (4.93s total)
114 nop: command took 0:0:4.92 (4.92s total)
115 nop: command took 0:0:4.92 (4.92s total)
116 nop: command took 0:0:4.92 (4.92s total)
117 nop: command took 0:0:5.13 (5.13s total)
118 nop: command took 0:0:4.94 (4.94s total)
119 nop: command took 0:0:4.97 (4.97s total)
120 nop: command took 0:0:5.14 (5.14s total)
121 nop: command took 0:0:4.94 (4.94s total)
122 nop: command took 0:0:5.17 (5.17s total)
123 nop: command took 0:0:4.95 (4.95s total)
124 nop: command took 0:0:4.97 (4.97s total)
125 nop: command took 0:0:4.99 (4.99s total)
126 nop: command took 0:0:5.20 (5.20s total)
127 nop: command took 0:0:5.23 (5.23s total)
128 nop: command took 0:0:5.19 (5.19s total)
129 nop: command took 0:0:5.21 (5.21s total)
130 nop: command took 0:0:5.33 (5.33s total)
131 nop: command took 0:0:4.92 (4.92s total)
132 nop: command took 0:0:5.02 (5.02s total)
133 nop: command took 0:0:4.90 (4.90s total)
134 nop: command took 0:0:4.93 (4.93s total)
135 nop: command took 0:0:4.99 (4.99s total)
136 nop: command took 0:0:5.08 (5.08s total)
137 nop: command took 0:0:5.02 (5.02s total)
138 nop: command took 0:0:5.15 (5.15s total)
139 nop: command took 0:0:5.07 (5.07s total)
140 nop: command took 0:0:5.03 (5.03s total)
141 nop: command took 0:0:4.94 (4.94s total)
142 nop: command took 0:0:4.92 (4.92s total)
143 nop: command took 0:0:4.96 (4.96s total)
144 nop: command took 0:0:4.92 (4.92s total)
145 nop: command took 0:0:7.86 (7.86s total) # slow 145
146 nop: command took 0:0:7.87 (7.87s total)
147 nop: command took 0:0:7.83 (7.83s total)
148 nop: command took 0:0:7.83 (7.83s total)
149 nop: command took 0:0:7.84 (7.84s total)
150 nop: command took 0:0:7.87 (7.87s total)
151 nop: command took 0:0:7.84 (7.84s total)
152 nop: command took 0:0:7.88 (7.88s total)
153 nop: command took 0:0:7.87 (7.87s total)
154 nop: command took 0:0:7.83 (7.83s total)
155 nop: command took 0:0:7.85 (7.85s total)
156 nop: command took 0:0:7.91 (7.91s total)
157 nop: command took 0:0:8.18 (8.18s total)
158 nop: command took 0:0:7.94 (7.94s total)
159 nop: command took 0:0:7.92 (7.92s total)
160 nop: command took 0:0:7.92 (7.92s total)
161 nop: command took 0:0:7.97 (7.97s total)
162 nop: command took 0:0:8.12 (8.12s total)
163 nop: command took 0:0:7.89 (7.89s total)
164 nop: command took 0:0:7.92 (7.92s total)
165 nop: command took 0:0:7.88 (7.88s total)
166 nop: command took 0:0:7.80 (7.80s total)
167 nop: command took 0:0:7.82 (7.82s total)
168 nop: command took 0:0:4.97 (4.97s total) # fast
169 nop: command took 0:0:4.97 (4.97s total)
170 nop: command took 0:0:4.95 (4.95s total)
171 nop: command took 0:0:5.00 (5.00s total)
172 nop: command took 0:0:4.95 (4.95s total)
173 nop: command took 0:0:4.93 (4.93s total)
174 nop: command took 0:0:4.91 (4.91s total)
175 nop: command took 0:0:4.92 (4.92s total)
The points where the program switches from fast to slow (and back) are: 17S - 40F - 81S - 104F - 145S - 168F. The distance from a slow point to the next fast point is 23 nops, and from a fast point to the next slow point is 41 nops. objdump shows that the main loop occupies 24 bytes; that means that if we place it at the start of a cache line (address mod 64 == 0), inserting 41 bytes pushes the main loop across the cache-line boundary, causing the slowdown. So in the default code (no nops added), the main loop already sits inside a single cache line.
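A small C model of this arithmetic reproduces the 23/41 pattern. Note that the start offset used below (24 bytes into its cache line with 0 nops) is back-calculated from the transition points, not read from objdump, so treat it as an assumption:

#include <stdio.h>

#define LINE_BYTES   64
#define LOOP_BYTES   24   /* size of the -O2 main loop, from objdump     */
#define START_OFFSET 24   /* assumed loop offset in its line with 0 nops */

int main() {
    /* each inserted nop is 1 byte; the loop is predicted "slow" whenever
       its LOOP_BYTES straddle a 64-byte boundary */
    for (int nops = 0; nops <= 175; nops++) {
        int off = (START_OFFSET + nops) % LINE_BYTES;
        int crosses = off + LOOP_BYTES > LINE_BYTES;
        printf("%3d nop: %s\n", nops, crosses ? "slow (straddles 2 lines)"
                                              : "fast (fits in 1 line)");
    }
    return 0;
}

This predicts slow ranges at 17-39, 81-103 and 145-167 nops, matching the measurements above.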
So we know that the -O2 version being slower is not caused by instruction address alignment. The only culprit left is instruction decoding speed, as in @Jérôme Richard's answer.
Edit 4: Skylake decodes 16 bytes per cycle. However, the -Os and -O2 loops are 21 and 24 bytes respectively, so both require 2 cycles to read the main loop. So where does the speed difference come from?
Conclusion: while the compiler is theoretically correct (lea + sal are 2 very cheap instructions, and the addressing arithmetic inside lea is free since it uses a separate hardware unit), in practice a single more expensive imul instruction can be faster because of instruction cache lines and decoding speed.
CodePudding user response:
You can look up the cost of instructions for most mainstream architectures in published instruction tables. Based on those, and assuming for example an Intel Skylake processor, one 32-bit imul instruction can be executed per cycle, but with a latency of 3 cycles. In the optimized code, two lea instructions (which are very cheap) can be executed per cycle, each with a 1-cycle latency. The same applies to the sal instruction (2 per cycle, 1 cycle of latency).
This means that the optimized version can be executed with only 2 cycles of latency, while the first one takes 3 cycles of latency (not counting the load/store instructions, which are the same). Moreover, the second version can be pipelined better, since the two instructions can be executed for two different inputs in parallel thanks to superscalar out-of-order execution. Note that two loads can also be executed in parallel, although only one store can be executed per cycle. This means the execution is bounded by the throughput of the store instructions: overall, only one value can be computed per cycle. AFAIK, recent Intel Ice Lake processors can do two stores in parallel, like the newer AMD Ryzen processors. So the second version is expected to be as fast as, or possibly faster than, the first on the chosen target (Intel Skylake), and it should be significantly faster on very recent x86-64 processors.
Note that the lea instruction is very fast because the multiply-add is done by a dedicated (hard-wired) CPU unit, and it only supports specific constants for the multiplication: the supported scale factors are 1, 2, 4 and 8, which means lea can be used to multiply an integer by the constants 2, 3, 4, 5, 8 and 9. This is why lea is faster than imul/mul.
UPDATE:
I can reproduce the slower execution with -O2 using GCC 10.3.
This slower execution may be due to the alignment of the loop instructions. Indeed, the loop instructions of the -O2 version cross a cache-line boundary (they are spread over 2 cache lines), while this is not the case with -Os (only 1 cache line is used). This introduces an additional cost, as many processors can load and decode no more than one cache line of code per cycle.
The generated assembly for the two versions is shown below. With -Os, we can see this loop:
401029: (16 bytes)
mov edx,DWORD PTR [rip+0x2ff9] # 404028 <a>
imul edx,edx,0x24
mov DWORD PTR [rip+0x2ff8],edx # 404030 <res>
dec rax
jne 401029 <main+0x9>
With -O2, we can see this loop:
401030: (20 bytes)
mov eax,DWORD PTR [rip+0x2ff2] # 404028 <a>
lea eax,[rax+rax*8]
shl eax,0x2
mov DWORD PTR [rip+0x2fee],eax # 404030 <res>
sub rdx,0x1
jne 401030 <main+0x10>
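A quick way to check the cache-line claim from the addresses and sizes above (assuming 64-byte lines):

#include <stdio.h>

int main() {
    /* start addresses and byte sizes of the two loops, taken from the
       objdump output above */
    struct { const char *opt; unsigned start, size; } loops[] = {
        { "-Os", 0x401029, 16 },
        { "-O2", 0x401030, 20 },
    };
    for (int i = 0; i < 2; i++) {
        unsigned off = loops[i].start % 64;
        printf("%s: offset %u + %u bytes -> %s\n", loops[i].opt, off,
               loops[i].size,
               off + loops[i].size > 64 ? "crosses a cache-line boundary"
                                        : "fits in one cache line");
    }
    return 0;
}

This prints that the -Os loop (offset 41 in its line) fits in one cache line while the -O2 loop (offset 48) spills into a second one.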
Another probable source of slowdown with -O2 is that the loop is bigger and needs more time to be decoded. This can have a big impact on performance here, since Skylake can only decode 16 bytes per cycle. Thus, with -O2 the loop is likely bound by the speed of the decoders.
Related:
- What's the purpose of the LEA instruction?
- Why is my loop much faster when it is contained in one cache line?
- How many ways-superscalar are modern Intel processors?
- https://en.wikipedia.org/wiki/Superscalar_processor
CodePudding user response:
tl;dr: Because LEA doesn't do full-fledged multiplication.
While @JeromeRichard's answer is correct, the underlying kernel of truth is hidden in its last sentence: with LEA, you can only multiply (scale) by a few specific constants, each a power of two. Thus, instead of needing a large dedicated circuit for multiplication, it only needs a small sub-circuit for shifting one of its operands by a fixed amount.
LEA can also handle a few other fixed multipliers, such as 5 or 10, but those are still not general multiplication: they can be realized as 5x = (x << 2) + x or 10x = (x << 3) + (x << 1), which is still much smaller in circuitry than a general multiplier.
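As a minimal sketch, the two decompositions can be verified directly (the constants come from the identities above, not from any particular compiler's output):

#include <assert.h>

int main() {
    /* verifies 5x = (x << 2) + x and 10x = (x << 3) + (x << 1) */
    for (unsigned x = 0; x < 1000000; x++) {
        assert(((x << 2) + x) == x * 5);
        assert(((x << 3) + (x << 1)) == x * 10);
    }
    return 0;
}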