I just got my new MacBook Pro with an M1 Max chip and am setting up Python. I've tried several combinations of settings to test speed, and now I'm quite confused. First, my questions:
- Why does Python running natively on the M1 Max run much (~100%) slower than on my old MacBook Pro 2016 with an Intel i5?
- On the M1 Max, why is there no significant speed difference between the native run (via miniforge) and the run via Rosetta (via Anaconda), which is supposed to be ~20% slower?
- On the M1 Max with a native run, why is there no significant speed difference between conda-installed NumPy and TensorFlow-installed NumPy, which is supposed to be faster?
- On the M1 Max, why is running in the PyCharm IDE consistently ~20% slower than running from the terminal? This doesn't happen on my old Intel Mac.
Evidence supporting my questions is as follows:
Here are the settings I've tried:
1. Python installed by
- Miniforge-arm64, so that Python runs natively on the M1 Max chip. (Checked in Activity Monitor: the `Kind` of the python process is `Apple`.)
- Anaconda, so that Python runs via Rosetta. (Checked in Activity Monitor: the `Kind` of the python process is `Intel`.)
2. Numpy installed by
- `conda install numpy`: NumPy from the original conda-forge channel, or pre-installed with Anaconda.
- Apple TensorFlow: with Python installed by miniforge, I install TensorFlow directly, and NumPy gets installed along with it. It's said that NumPy installed this way is optimized for the Apple M1 and will be faster. Here are the installation commands:
conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
3. Run from
- Terminal.
- PyCharm (Apple Silicon version).
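As a side note, the Activity Monitor `Kind` check from setting 1 can also be done from inside Python itself; this is a small sketch using only the standard library:

```python
import platform

# A scriptable version of the Activity Monitor "Kind" check:
# 'arm64' means the interpreter runs natively on Apple silicon,
# 'x86_64' means it runs under Rosetta 2 translation (or on an Intel Mac).
print(platform.machine())
```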
Here is the test code:
import time
import numpy as np

np.random.seed(42)
a = np.random.uniform(size=(300, 300))
runtimes = 10

timecosts = []
for _ in range(runtimes):
    s_time = time.time()
    for i in range(100):
        a += 1
        np.linalg.svd(a)
    timecosts.append(time.time() - s_time)

print(f'mean of {runtimes} runs: {np.mean(timecosts):.5f}s')
and here are the results:
| NumPy installed by ↓ \ Run from → | Miniforge (native M1), Terminal | Miniforge (native M1), PyCharm | Anaconda (Rosetta), Terminal | Anaconda (Rosetta), PyCharm |
|---|---|---|---|---|
| Apple TensorFlow | 4.19151 | 4.86248 | / | / |
| `conda install numpy` | 4.29386 | 4.98370 | 4.10029 | 4.99271 |
This is quite slow. For comparison:
- Running the same code on my old MacBook Pro 2016 with an i5 chip costs 2.39917s.
- Another post (not in English) reports that on an M1 chip (not Pro or Max), miniforge + conda-installed NumPy takes 2.53214s, and miniforge + Apple TensorFlow NumPy takes 1.00613s.
- You may also try it on your own device.
Here are the CPU details:
- My old i5:
$ sysctl -a | grep -e brand_string -e cpu.core_count
machdep.cpu.brand_string: Intel(R) Core(TM) i5-6360U CPU @ 2.00GHz
machdep.cpu.core_count: 2
- My new M1 Max:
% sysctl -a | grep -e brand_string -e cpu.core_count
machdep.cpu.brand_string: Apple M1 Max
machdep.cpu.core_count: 10
I followed the instructions from tutorials strictly - so why does all this happen? Is it because of flaws in my installation, or because of the M1 Max chip? Since my work relies heavily on local runs, local speed is very important to me. Any suggestions for possible solutions, or any data points from your own device, would be greatly appreciated :)
CodePudding user response:
Possible Cause: Different BLAS Libraries
Since the benchmark is running linear algebra routines, what is likely being tested here are the BLAS implementations. A default Anaconda distribution for the osx-64 platform ships with Intel's MKL implementation; the osx-arm64 platform only offers the generic Netlib BLAS and the OpenBLAS implementation.
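A quick way to confirm which BLAS a given NumPy build is linked against is NumPy's own build introspection (standard NumPy, nothing specific to this setup):

```python
import numpy as np

# Print NumPy's build configuration; the blas/lapack sections name the
# linked implementation (e.g. MKL, OpenBLAS, Netlib, or Apple Accelerate).
np.show_config()
```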
For me (macOS with an Intel i9), I get the following benchmark results:

| BLAS Implementation | Mean Timing (s) |
|---|---|
| mkl | 0.95932 |
| blis | 1.72059 |
| openblas | 2.17023 |
| netlib | 5.72782 |
So, I suspect the old MBP had MKL installed, and the M1 system is getting either Netlib or OpenBLAS. Maybe try figuring out whether Netlib or OpenBLAS is faster on the M1, and keep the faster one.
Specifying BLAS Implementation
Here are specifically the different environments I tested:
# MKL
conda create -n np_mkl python=3.9 numpy blas=*=*mkl*
# BLIS
conda create -n np_blis python=3.9 numpy blas=*=*blis*
# OpenBLAS
conda create -n np_openblas python=3.9 numpy blas=*=*openblas*
# Netlib
conda create -n np_netlib python=3.9 numpy blas=*=*netlib*
and ran the benchmark script (`so-np-bench.py`) with
conda run -n np_mkl python so-np-bench.py
# etc.
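If you want a check shorter than the full benchmark, a minimal `timeit` sketch like the following can be run inside each environment (the matrix size and repeat count here are arbitrary choices):

```python
import timeit

import numpy as np

# Time a batch of SVDs; the runtime is dominated by the linked BLAS/LAPACK,
# so comparing this number across environments compares the BLAS builds.
a = np.random.default_rng(42).uniform(size=(100, 100))
elapsed = timeit.timeit(lambda: np.linalg.svd(a), number=20)
print(f'20 SVDs of a 100x100 matrix: {elapsed:.3f}s')
```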
CodePudding user response:
How to install NumPy on the M1 Max with the best accelerated performance (Apple's vecLib)? Here's the answer as of Dec 6, 2021.
Steps
I. Install miniforge
So that your Python runs natively on arm64, not translated via Rosetta.
- Download Miniforge3-MacOSX-arm64.sh.
- Run the script, then open another shell:
$ bash Miniforge3-MacOSX-arm64.sh
- Create an environment (here I use the name `np_veclib`):
$ conda create -n np_veclib python=3.9
$ conda activate np_veclib
II. Install Numpy with BLAS interface specified as vecLib
- To compile `numpy`, first install `cython` and `pybind11`:
$ conda install cython pybind11
- Compile `numpy` by (thanks to @Marijn's answer) - don't use `conda install`!
$ pip install --no-binary :all: --no-use-pep517 numpy
- An alternative to 2. is to build from source:
$ git clone https://github.com/numpy/numpy
$ cd numpy
$ cp site.cfg.example site.cfg
$ nano site.cfg
Edit the copied `site.cfg`: add the following lines:
[accelerate]
libraries = Accelerate, vecLib
Then build and install:
$ NPY_LAPACK_ORDER=accelerate python setup.py build
$ python setup.py install
- After either 2 or 3, now test whether numpy is using vecLib:
>>> import numpy
>>> numpy.show_config()
Then, info like `/System/Library/Frameworks/vecLib.framework/Headers` should be printed.
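The same check can be scripted by capturing the output of `numpy.show_config()` and searching it for Accelerate/vecLib markers (a small sketch; the exact strings printed vary across NumPy versions):

```python
import contextlib
import io

import numpy as np

# Capture show_config()'s printed output and look for Accelerate/vecLib
# markers; finding one suggests NumPy is linked against Apple's framework.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    np.show_config()
out = buf.getvalue().lower()
print('accelerate' in out or 'veclib' in out)
```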
III. For further installing other packages using conda
Make conda recognize packages installed by pip
conda config --set pip_interop_enabled true
This must be done; otherwise, if you e.g. `conda install pandas`, then `numpy` will appear in the `The following packages will be installed` list and be installed again. But the newly installed one comes from the `conda-forge` channel and is slow.
Comparisons to other installations:
1. Competitors:
Besides the optimal one above, I also tried several other installations:
- A. `np_default`: conda create -n np_default python=3.9 numpy
- B. `np_openblas`: conda create -n np_openblas python=3.9 numpy blas=*=*openblas*
- C. `np_netlib`: conda create -n np_netlib python=3.9 numpy blas=*=*netlib*
The above options A-C are installed directly from the conda-forge channel. `numpy.show_config()` will show identical results for them. To see the difference, examine with `conda list` - e.g. `openblas` packages are installed in B. Note that `mkl` and `blis` are not supported on arm64.
- D. `np_openblas_source`: First install openblas by `brew install openblas`. Then add the `[openblas]` path `/opt/homebrew/opt/openblas` to `site.cfg` and build NumPy from source.
- M1 and i9–9880H in this post.
- My old i5-6360U (2 cores) on MacBook Pro 2016 13in.
2. Benchmarks:
Here I use two benchmarks:
`mysvd.py`: my SVD decomposition benchmark:
import time
import numpy as np

np.random.seed(42)
a = np.random.uniform(size=(300, 300))
runtimes = 10

timecosts = []
for _ in range(runtimes):
    s_time = time.time()
    for i in range(100):
        a += 1
        np.linalg.svd(a)
    timecosts.append(time.time() - s_time)

print(f'mean of {runtimes} runs: {np.mean(timecosts):.5f}s')
`dario.py`: a benchmark script by Dario Radečić in the post above.
3. Results:
| sec | np_veclib | np_default | np_openblas | np_netlib | np_openblas_source | M1 | i9–9880H | i5-6360U |
|---|---|---|---|---|---|---|---|---|
| mysvd | 1.02300 | 4.29386 | 4.13854 | 4.75812 | 12.57879 | / | / | 2.39917 |
| dario | 21 | 41 | 39 | 323 | 40 | 33 | 23 | 78 |