I have the following MATLAB snippet:
>> R = randn(3000,6000); % build a random 3000 by 6000 matrix
>> tic; norm(R, 1); toc;
Elapsed time is 0.005586 seconds.
>> tic; norm(R, 2); toc;
Elapsed time is 3.019667 seconds.
>> tic; norm(R, inf); toc;
Elapsed time is 0.005393 seconds.
>>
My question is: why is the 2-norm computation so much slower than the 1-norm or infinity-norm? This is a random matrix for testing purposes, of course, but I see a similar pattern in elapsed time for the actual matrix in my work.
However, in Julia the results are as follows:
julia> @time norm(R, 1);
0.007156 seconds (1 allocation: 16 bytes)
julia> @time norm(R, 2);
0.009142 seconds (1 allocation: 16 bytes)
julia> @time norm(R, Inf);
0.034633 seconds (1 allocation: 16 bytes)
This makes no sense at all. Any help is appreciated!
CodePudding user response:
I invite you to read the documentation for norm. It is a good idea to always read the documentation for a function and not make assumptions about what it does. In short, with a matrix input, norm computes the matrix norm:
norm(R,1) is the maximum absolute column sum of R.
norm(R,Inf) is the maximum absolute row sum of R.
norm(R,2) is approximately max(svd(R)).
The 1-norm and infinity-norm of the matrix are computed in a similar way, and are therefore expected to be similar in cost. Computing the sum over rows or columns, and the max of the result, is quite cheap.
The 2-norm of the matrix, in contrast, requires a singular value decomposition, which is significantly more expensive.
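You can verify these definitions yourself. Here is a minimal MATLAB sketch (assuming the same random R as in the question; each difference should be zero up to rounding error):
>> R = randn(3000,6000);                % same test matrix as above
>> norm(R,1)   - max(sum(abs(R),1))     % 1-norm: maximum absolute column sum
>> norm(R,Inf) - max(sum(abs(R),2))     % Inf-norm: maximum absolute row sum
>> norm(R,2)   - max(svd(R))            % 2-norm: largest singular value, requires an SVD
The first two lines only touch each element once, whereas the last one computes singular values, which is why norm(R,2) takes seconds rather than milliseconds in the timings above.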
In Julia, norm computes the vector norm, i.e. the norm of the matrix reshaped into one long vector of entries. To compute a matrix (operator) norm, use opnorm.
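A minimal Julia sketch of the distinction (assuming the same R as above; opnorm comes from the standard LinearAlgebra library):
julia> using LinearAlgebra
julia> R = randn(3000, 6000);
julia> norm(R, 2) ≈ norm(vec(R))          # vector norm: treats R as one long vector
true
julia> opnorm(R, 2) ≈ maximum(svd(R).S)   # operator norm: largest singular value
true
So opnorm(R, 2) is the Julia call that corresponds to MATLAB's norm(R, 2), and it is the one that carries the cost of an SVD.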
To compute the vector norm of the rows or columns of a matrix in MATLAB, use vecnorm (introduced in R2017b). To compute the norm of the vectorized matrix, which is what Julia's norm returns, use norm(R(:)).
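For instance, a sketch with the same R (the sizes in the comments assume the 3000-by-6000 matrix from the question):
>> vecnorm(R, 2, 1);   % 2-norm of each column, a 1-by-6000 row vector
>> vecnorm(R, 2, 2);   % 2-norm of each row, a 3000-by-1 column vector
>> norm(R(:))          % 2-norm of all entries, matching Julia's norm(R, 2)
All of these only scan the data once, so they are cheap compared with the matrix 2-norm.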
PS: The real question is: why is the infinity-norm in Julia so slow? It should be cheaper than the 1-norm and much cheaper than the 2-norm!