I'm currently reviewing a program we use to populate a custom table, because it has been experiencing performance issues, and I'm looking for ways to optimize it. One idea I came up with was to consolidate loops that iterate over the same table, but I'm not sure how that would affect performance and would like confirmation and/or recommendations if possible. So, as the title says: does minimizing the number of LOOP AT statements improve overall runtime efficiency?
Below is a simplified example of the original code, with two loops over the same table that differ only in their WHERE clauses, followed by my idea: consolidate them into a single loop and branch with IF/ELSEIF as needed.
ORIGINAL CODE:
IF param = 'X'.
  LOOP AT lt_intab
       INTO wa_intab
       WHERE field_one = const_one.
    SELECT *
      FROM db_table
      WHERE field_two = wa_intab-field_two.
      MOVE-CORRESPONDING db_table TO wa_outtab.
      APPEND wa_outtab TO lt_outtab.
      CLEAR wa_outtab.
    ENDSELECT.
  ENDLOOP.
ENDIF.

LOOP AT lt_intab
     INTO wa_intab
     WHERE field_one = const_two.
  SELECT *
    FROM db_table
    WHERE field_three = wa_intab-field_three.
    MOVE-CORRESPONDING db_table TO wa_outtab.
    APPEND wa_outtab TO lt_outtab.
    CLEAR wa_outtab.
  ENDSELECT.
ENDLOOP.
OPTIMIZED CODE:
LOOP AT lt_intab
     INTO wa_intab.
  IF param = 'X'
  AND wa_intab-field_one = const_one.
    SELECT *
      FROM db_table
      WHERE field_two = wa_intab-field_two.
      MOVE-CORRESPONDING db_table TO wa_outtab.
      APPEND wa_outtab TO lt_outtab.
      CLEAR wa_outtab.
    ENDSELECT.
  ELSEIF wa_intab-field_one = const_two.
    SELECT *
      FROM db_table
      WHERE field_three = wa_intab-field_three.
      MOVE-CORRESPONDING db_table TO wa_outtab.
      APPEND wa_outtab TO lt_outtab.
      CLEAR wa_outtab.
    ENDSELECT.
  ENDIF.
ENDLOOP.
Is the optimized code a better approach to looping over similar tables?
CodePudding user response:
Assuming field_one isn't accessible via any sorted or hashed key:

Original code:
  param = 'X':  n + n loop iterations
  param = '':   n loop iterations

Optimized code:
  always n loop iterations

You can basically already aggregate the two loops into one with WHERE field_one = const_one OR field_one = const_two.
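A minimal sketch of that aggregated WHERE clause, reusing the names from the question (the param check for the const_one branch still has to happen inside the loop body):

```abap
" Only rows matching either constant enter the loop at all;
" all other rows are skipped by the WHERE condition.
LOOP AT lt_intab INTO wa_intab
     WHERE field_one = const_one
        OR field_one = const_two.
  IF wa_intab-field_one = const_one AND param = 'X'.
    " ... SELECT on db_table~field_two as in the question ...
  ELSEIF wa_intab-field_one = const_two.
    " ... SELECT on db_table~field_three as in the question ...
  ENDIF.
ENDLOOP.
```

Note that without a sorted or hashed key on field_one this still scans all n rows; the WHERE only saves the loop-body executions for non-matching rows.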
But I'd expect a much bigger performance gain from using FOR ALL ENTRIES, and also from using LOOP AT ... ASSIGNING instead. So group the keys first and then hit the database once with a list of values for db_table~field_two and db_table~field_three.
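A sketch of that approach for the field_two branch, under the structures implied by the question. The type names ty_intab and ty_dbtab and the tables lt_keys and lt_result are hypothetical helpers, not from the original code:

```abap
" Hypothetical helper tables; ty_intab / ty_dbtab stand in for the
" real line types of lt_intab and db_table.
DATA: lt_keys   TYPE STANDARD TABLE OF ty_intab,
      lt_result TYPE STANDARD TABLE OF ty_dbtab.

" 1) Collect the keys first. ASSIGNING avoids copying each row
"    into a separate work area.
LOOP AT lt_intab ASSIGNING FIELD-SYMBOL(<fs_intab>)
     WHERE field_one = const_one.
  APPEND <fs_intab> TO lt_keys.
ENDLOOP.

" 2) One database round trip instead of one SELECT per loop row.
"    The empty-table check is essential: FOR ALL ENTRIES with an
"    empty driver table selects ALL rows of db_table.
IF lt_keys IS NOT INITIAL.
  SELECT * FROM db_table
    FOR ALL ENTRIES IN @lt_keys
    WHERE field_two = @lt_keys-field_two
    INTO TABLE @lt_result.
  " Table-to-table MOVE-CORRESPONDING (ABAP 7.40+).
  MOVE-CORRESPONDING lt_result TO lt_outtab KEEPING TARGET LINES.
ENDIF.
```

The const_two branch would be handled the same way with field_three, so the whole program issues two SELECTs instead of one per input row.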
It would be interesting to know how much time you actually spend in the database. Transaction SAT and the ABAP Profiler in ADT can show you the time spent on internal table operations and on database access.
CodePudding user response:
As peterulb pointed out, the loop isn't your problem in the first place. Normally, especially with a HANA database, you want to read only the data you really need from the database. There are several ways to do that, such as FOR ALL ENTRIES, or, if you want to read entries matching more than one condition, you could use a UNION to combine two SELECT statements. That way you might be able to drop the LOOP AT entirely.
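A rough sketch of the UNION idea, pushing both branches into one ABAP SQL statement. This assumes a fairly recent release (UNION since 7.50, joining an internal table as data source since 7.52), and the selected field list is a placeholder, since UNION requires identical column lists in both branches:

```abap
" Both branches in one statement; the database resolves the keys,
" so no LOOP AT is needed on the ABAP side.
SELECT db~field_two, db~field_three
  FROM db_table AS db
  INNER JOIN @lt_intab AS keys ON db~field_two = keys~field_two
  WHERE keys~field_one = @const_one
UNION
SELECT db~field_two, db~field_three
  FROM db_table AS db
  INNER JOIN @lt_intab AS keys ON db~field_three = keys~field_three
  WHERE keys~field_one = @const_two
  INTO TABLE @DATA(lt_result).
```

On older releases the same effect needs two FOR ALL ENTRIES selects appended into one result table.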