Prolog: Reducing Inferences


I am trying to lower the number of inferences needed to execute my program in Prolog. My task is the same as in the questions "Prolog: Comparing Lists from Lists of Lists" and "Prolog - split a list into lists of lists". My code so far:

make_dessert(Starter, Main, Dessert, List_of_Persons, Size):-
    permutation(List_of_Persons, Permuted),
    split_list(Permuted, Size, Dessert),
    at_most_one_common(Starter, Dessert),
    at_most_one_common(Main, Dessert).

split_list(List, N, Splitted_List) :-
    split_helper(List, N, [], Splitted_List).

split_helper([], _, Acc, Acc).

split_helper(List, N, Acc, Splitted_List) :-
    append(H, T, List),
    length(H, N),
    split_helper(T, N, [H|Acc], Splitted_List).

at_most_one_common([], _).
at_most_one_common([H|T], List2) :-
    check_list(H, List2),
    at_most_one_common(T, List2).

check_list(_, []).
check_list(X, [H|T]) :-
    intersection(X, H, Z),
    length(Z, L),
    L =< 1,
    check_list(X, T).

The following query takes more than 4,000,000 inferences:

?- time(make_dessert([[1,2,3],[4,5,6],[7,8,9]], [[1,4,7],[2,5,8],[3,6,9]], D, [1,2,3,4,5,6,7,8,9], 3)).
% 4,380,057 inferences, 0.281 CPU in 0.286 seconds (98% CPU, 15578798 Lips)
D = [[3, 4, 8], [2, 6, 7], [1, 5, 9]].

None of the predicates produces a high number of inferences when tested on its own. I see room for improvement in perhaps finding the right permutation faster, or in stopping earlier once a permutation is proven not to satisfy the constraints. I am grateful for any small improvement.

CodePudding user response:

I am grateful for any small improvement.

Get rid of split_list and split_helper and use Prolog unification to split the list into threes:

make_dessert(Starter, Main, Dessert, List_of_Persons, _):-
    permutation(List_of_Persons, [A,B,C,D,E,F,G,H,I]),
    Dessert = [[A,B,C], [D,E,F], [G,H,I]],
    at_most_one_common(Starter, Dessert),
    at_most_one_common(Main, Dessert).

at_most_one_common([], _).
at_most_one_common([H|T], List2) :-
    check_list(H, List2),
    at_most_one_common(T, List2).

check_list(_, []).
check_list(X, [H|T]) :-
    intersection(X, H, Z),
    length(Z, L),
    L =< 1,
    check_list(X, T).

That drops the inferences from 3.4 million to 1.06 million on SWISH:

% before
?- time(make_dessert([[1,2,3],[4,5,6],[7,8,9]],[[1,4,7],[2,5,8],[3,6,9]], D, [1,2,3,4,5,6,7,8,9], 3)).
D = [[3, 4, 8], [2, 6, 7], [1, 5, 9]]

3,397,265 inferences, 2.034 CPU in 2.034 seconds (100% CPU, 1670143 Lips)


% after
?- time( make_dessert([[1,2,3],[4,5,6],[7,8,9]],[[1,4,7],[2,5,8],[3,6,9]], D, [1,2,3,4,5,6,7,8,9], 3)).
D = [[1, 5, 9], [2, 6, 7], [3, 4, 8]]

1,061,482 inferences, 0.508 CPU in 0.508 seconds (100% CPU, 2091297 Lips)

It is possible to use length/2 and maplist/2 to make the shape of Dessert dynamic, driven by the input Size, instead of hard-coding three groups of three.
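One way that dynamic variant could look (a sketch, not from the answer; it uses SWI-Prolog's library(yall) lambda syntax and assumes Size divides the number of persons evenly):

```prolog
% Sketch: build Dessert as NGroups lists of Size unbound variables,
% flatten them, and let permutation/2 fill in the persons.
make_dessert(Starter, Main, Dessert, List_of_Persons, Size) :-
    length(List_of_Persons, Len),
    NGroups is Len // Size,
    length(Dessert, NGroups),                  % NGroups fresh group lists
    maplist([G]>>length(G, Size), Dessert),    % each group has Size slots
    append(Dessert, Flat),                     % Flat: Len fresh variables
    permutation(List_of_Persons, Flat),        % bind them in some order
    at_most_one_common(Starter, Dessert),
    at_most_one_common(Main, Dessert).
```

This keeps the unification-based splitting of the answer above, but derives the group structure from Size at run time.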

CodePudding user response:

permutation creates too many unwanted, erm, permutations. Instead, choose "1 from each segment", to greatly reduce the search space:

permute_segments(Segs, Perm) :-
    length(Perm, 3),
    permute_segments_(Perm, Segs).

permute_segments_([], _).
permute_segments_([E|T], Segs) :-
    length(E, 3),
    select_segments(Segs, E, Segs0),
    permute_segments_(T, Segs0).

select_segments([], [], []).
select_segments([H|T], [E|P], [H0|L]) :-
    select(E, H, H0),
    select_segments(T, P, L).

Result in SWI-Prolog:

?- permute_segments([[1,2,3],[4,5,6],[7,8,9]], P).
P = [[1, 4, 7], [2, 5, 8], [3, 6, 9]] ;
P = [[1, 4, 7], [2, 5, 9], [3, 6, 8]] ;
...
P = [[3, 6, 9], [2, 5, 7], [1, 4, 8]] ;
P = [[3, 6, 9], [2, 5, 8], [1, 4, 7]].

To show the efficiency improvement:

?- bagof(P, permute_segments([[1,2,3],[4,5,6],[7,8,9]], P), Ps), length(Ps, Len).
Len = 216.

?- bagof(P, permutation([1,2,3,4,5,6,7,8,9], P), Ps), length(Ps, Len).
Len = 362880.
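As glue code (a sketch, not part of the answer): since permute_segments/2 builds each dessert group by taking exactly one person from each Starter group, the "at most one common" constraint against Starter holds by construction, so only Main still needs checking:

```prolog
% Hypothetical wrapper: use the Starter groups as the segments.
% List_of_Persons and Size are unused here because permute_segments/2
% hard-codes three groups of three, as in the answer above.
make_dessert(Starter, Main, Dessert, _List_of_Persons, _Size) :-
    permute_segments(Starter, Dessert),
    at_most_one_common(Main, Dessert).
```

With only 216 candidate desserts instead of 362,880 permutations, the constraint check runs far fewer times.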