Reducing nested for loop


My nested for loop looks like this:

for(int i = 0; i < productGroups.size(); i++) {
    for(int j = 0; j < productGroups.get(i).getProducts().size(); j++) {
        AmountEntity storageValue = productGroups.get(i).getStorageValue();
        productGroups.get(i).getProducts().get(j).setQuantity(storageValue);
    }
}

I am pretty sure there must be a more efficient way to do this without n² complexity.

productGroups is a List whose elements each contain a List of Products, and I am just traversing both lists to set the amount.

I would be very happy if someone can help me here.

CodePudding user response:

If you want to set a value on all products in all productGroups, you will need to visit each one. How many? Well, if you have x productGroups and y products per productGroup (for argument's sake), then you would need to visit x * y products. There is no way around that. You could make the code a bit more concise and readable though, which is usually far more important anyway:

for(ProductGroup productGroup : productGroups) {
    AmountEntity storageValue = productGroup.getStorageValue();
    for(Product product : productGroup.getProducts()) {
        product.setQuantity(storageValue);
    }
}

or in lambda form:

productGroups.forEach(pg -> {
    AmountEntity storageValue = pg.getStorageValue();
    pg.getProducts().forEach(p -> p.setQuantity(storageValue));
});

Note that each product in a productGroup gets the same storageValue, so you can define that variable in the outer loop as shown above. If you can change the domain model, you could ask yourself why the storageValue needs to be copied from the productGroup to each product at all, since this is basically redundant information; but maybe there is a good reason for that.
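
If removing that redundancy is an option, a minimal sketch of what it could look like is below. The back-reference from Product to its owning ProductGroup and the derived getQuantity() are assumptions for illustration, not part of your existing model:

import java.util.List;

// Placeholder stub for the real AmountEntity from the question.
class AmountEntity { }

class ProductGroup {
    private AmountEntity storageValue;
    private List<Product> products;

    AmountEntity getStorageValue() { return storageValue; }
    List<Product> getProducts()    { return products; }
}

class Product {
    // Hypothetical back-reference to the owning group.
    private final ProductGroup group;

    Product(ProductGroup group) { this.group = group; }

    // The quantity is derived on demand instead of being copied,
    // so it can never get out of sync with the group's storage value.
    AmountEntity getQuantity() { return group.getStorageValue(); }
}

With that shape there is nothing left to copy, and the nested loop disappears entirely.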

If I saw logic like this I would not worry about the performance initially. Calling getters and setters is usually not a performance bottleneck. Only if you do this for millions and millions of products might it take some time. In that case the solution would not lie in speeding up these iterations, but in reducing the number of products fetched (e.g. by paging).
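
For completeness, a rough sketch of what paging could look like. The ProductRepository interface, its findProducts method, and the page size are hypothetical placeholders for whatever data access layer you actually use:

import java.util.List;

// Minimal stubs for the types from the question.
class AmountEntity { }
class Product {
    void setQuantity(AmountEntity quantity) { /* ... */ }
}

// Hypothetical data access interface; replace with whatever actually loads the products.
interface ProductRepository {
    // Returns one page of products; an empty list signals the end.
    List<Product> findProducts(int page, int pageSize);
}

class QuantityUpdater {
    private static final int PAGE_SIZE = 500;

    void updateQuantities(ProductRepository repository, AmountEntity storageValue) {
        int page = 0;
        List<Product> batch;
        // Work through the products page by page instead of loading them all at once.
        while (!(batch = repository.findProducts(page++, PAGE_SIZE)).isEmpty()) {
            for (Product product : batch) {
                product.setQuantity(storageValue);
            }
        }
    }
}

The total work is still proportional to the number of products, but memory use stays bounded because only one page is held at a time.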
