In my code I am converting lots of lists represented as strings into list objects (in a loop), like the following:
s="[[1,2,3],[4,5,6]]"
ast.literal_eval(s)
The problem is that for a big list this takes a lot of time.
Is there any way to make this process faster?
CodePudding user response:
I would suggest json.loads. It's significantly faster than ast.literal_eval, because the latter has to go through the more complex Python parser/compiler machinery.
Of course, this only applies if you have to store your data as strings and want to stick with the stdlib. If you can use binary formats, such as NumPy's binary serialization or the pickle module, they should be much more performant. A quick benchmark:
from ast import literal_eval
from json import loads as j_loads
from json import dumps as j_dumps
from pickle import loads as p_loads
from pickle import dumps as p_dumps
from timeit import timeit

# a large nested list, plus its JSON string and pickle byte representations
lst = [[1, 2, 3, 4]] * 100_000
lst_str = j_dumps(lst)
pickled = p_dumps(lst)

def use_json():
    return j_loads(lst_str)

def use_ast():
    return literal_eval(lst_str)

def use_pickle():
    return p_loads(pickled)

# sanity check: all three routes decode to the same list
print(use_ast() == use_json() == use_pickle())

# time each approach (5 runs each)
print(timeit(use_json, globals=globals(), number=5))
print(timeit(use_ast, globals=globals(), number=5))
print(timeit(use_pickle, globals=globals(), number=5))
output:
True
0.13874139200015634
4.015027895000458
0.006450980001318385
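If NumPy is an option and the inner lists all have the same length (so the data forms a regular array), a minimal sketch of the binary route could look like this; the in-memory BytesIO buffer is just for illustration, a file on disk works the same way:

from io import BytesIO
import numpy as np

lst = [[1, 2, 3, 4]] * 100_000

# write the array in NumPy's .npy binary format to an in-memory buffer
buf = BytesIO()
np.save(buf, np.array(lst))

# read it back; .tolist() converts the ndarray to nested Python lists
buf.seek(0)
arr = np.load(buf)
print(arr.tolist() == lst)  # True

Note that converting back with .tolist() has its own cost, so this pays off mainly when you can keep working with the ndarray directly.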