Julia Vector: Excessive Memory Usage


I want to brute-force a 64-bit RSA-encrypted text using a meet-in-the-middle attack (this is for university, nothing malicious).

To do this, I essentially created a Julia vector with 2^34 BigInt values and broadcast the powermod() function over it to replace the values with the results.

v = powermod.(collect(1:2^34), e, n)

n in this case is 1024 bits long, which should theoretically result in a vector of 2^34 * 1024 bits plus overhead. However, even a much smaller vector (e.g., 2^24 entries) already allocates 4 GiB of memory.
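For reference, a back-of-the-envelope estimate of the raw digit data alone (ignoring BigInt struct headers and the pointer array inside the Vector{BigInt}):

```julia
# Each result is at most 1024 bits = 128 bytes of limb data.
entries = 2^34
bytes_per_entry = 1024 ÷ 8
total = entries * bytes_per_entry    # 2^41 bytes
total / 2^40                         # ≈ 2.0 TiB for the full 2^34-element vector
```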

const e = 65537
const n = 146524179203462820907751077702895222709717245613911342138636679265720963659264803540209990978140003809112749926543448691815554807130673470903067642157383639213843567573216381956709789503739105865173848988830139432801516289108538638198344024523424071181688467967187076534718264943427915623567859427045475866239

@time begin
    v = (powermod.(collect(1:2^24), e, n))
end

Output of @time:

125.598926 seconds (117.44 M allocations: 4.000 GiB, 5.35% gc time)

I am not sure if, or what, I am doing wrong here. Any help would be appreciated.

CodePudding user response:

I don't think you're doing anything wrong. When you're using a BigInt modulus you're hitting a different method of powermod, one that isn't allocation-free:

julia> @btime powermod(1000, e, 100);
  370.048 ns (0 allocations: 0 bytes)

julia> @btime powermod(1000, e, n);
  6.320 μs (9 allocations: 280 bytes)

julia> @which powermod(1000, e, 100)
powermod(x::Integer, p::Integer, m::T) where T<:Integer in Base at intfuncs.jl:358

julia> @which powermod(1000, e, n)
powermod(x::Integer, p::Integer, m::BigInt) in Base.GMP at gmp.jl:615

So you're not just allocating the result vector, you're also allocating during the intermediate calculations. There are quite a few discussions on the Julia Discourse about the performance limitations of BigInt, but I've never really worked with them, so unfortunately I don't have any advice on how to speed things up here!
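One small thing that helps regardless: broadcasting directly over the range (or filling a preallocated destination) avoids materializing the Int temporary that collect creates. A sketch with a stand-in modulus and a short length so it runs quickly; the per-call BigInt allocations inside powermod remain either way:

```julia
# Sketch: avoid collect's temporary array and fill a preallocated destination.
# Stand-in exponent/modulus; the question's 1024-bit n works identically.
e = 65537
n = big(2)^127 - 1
len = 2^10                        # small length for the sketch
v = Vector{BigInt}(undef, len)
for i in eachindex(v)
    v[i] = powermod(i, e, n)      # each call still allocates its BigInt result
end

# Equivalent broadcast, iterating the range directly instead of collect(1:len):
v2 = powermod.(1:len, e, n)
```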

CodePudding user response:

Here is a lazy alternative: something that can be indexed and iterated like v, but that performs the calculation upon indexing. Since it doesn't allocate an array of results, you cannot set an index like v[i] = x, which might be something you need.

struct Laz{F}
  f::F
  size::Int64
end

# v[i]
function Base.getindex(v::Laz, i)
  if i < 1 || i > v.size
    throw(BoundsError(v, i))
  else
    v.f(i)
  end
end

# for el in v
function Base.iterate(v::Laz, state=1)
  if state > v.size
    nothing
  else
    (v[state], state + 1)
  end
end

v_lazy = Laz(x -> powermod(x, e, n), 2^24)

v_lazy is a tiny immutable struct (Base.summarysize(v_lazy) reports 8 bytes), so this is as memory-efficient as you can get. Each time you index, you still allocate the BigInt result, though. If you iterate all of v_lazy you end up allocating the same total amount of memory as v, just not all at once, and each result can be freed after use.
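For completeness, a self-contained demo of the same idea (note the extra Base.length method, which generic code such as collect needs under Julia's default iterator-size trait; it is an addition, not part of the answer above):

```julia
# Compact restatement of the lazy-vector wrapper, plus a length method.
struct Laz{F}
    f::F
    size::Int64
end
Base.getindex(v::Laz, i) = 1 <= i <= v.size ? v.f(i) : throw(BoundsError(v, i))
Base.iterate(v::Laz, state=1) = state > v.size ? nothing : (v[state], state + 1)
Base.length(v::Laz) = v.size

e, n = 65537, big(2)^127 - 1                 # stand-in values for the demo
v_lazy = Laz(x -> powermod(x, e, n), 16)

v_lazy[5] == powermod(5, e, n)               # element computed on demand
collect(v_lazy) == [powermod(i, e, n) for i in 1:16]
```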
