I have a list s:
s = [[1, 2, 1], [2, 2, 1], [1, 2, 1]]
Case 1 : Remove the duplicate groups in the list
Case 2 : Remove the duplicate values within the groups
Desired Result:
Case 1 : [[1,2,1],[2,2,1]]
Case 2 : [[1,2],[2,1],[1,2]]
I tried using list(set(s)) but it throws an error: TypeError: unhashable type: 'list'
CodePudding user response:
IIUC,
Case 1:
Convert the inner lists to tuples so they are hashable, then apply set on the list of tuples to remove the duplicates. Finally, convert back to lists.
out1 = list(map(list, set(map(tuple, s))))
# [[1, 2, 1], [2, 2, 1]]
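Note that set does not guarantee the groups come out in their original order. If the order of first appearance matters, a possible variation (just a sketch, assuming Python 3.7+ where dicts preserve insertion order) deduplicates via dict.fromkeys instead:
# dict.fromkeys keeps only the first occurrence of each tuple, in order
out1_ordered = [list(t) for t in dict.fromkeys(map(tuple, s))]
# [[1, 2, 1], [2, 2, 1]]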
Case 2:
For each sublist, remove the duplicates while keeping order by converting to dictionary keys (which are unique), then back to a list:
out2 = [list(dict.fromkeys(l)) for l in s]
# [[1, 2], [2, 1], [1, 2]]
CodePudding user response:
You need to know that it's not possible in Python to have a set of lists, because lists are not hashable. The simplest way to handle the first case is to build a new list of lists without duplicates, as below:
temp_list = []
for each_element in [[1, 2, 1], [2, 2, 1], [1, 2, 1]]:
    if each_element not in temp_list:
        temp_list.append(each_element)
print(temp_list)
The output:
[[1, 2, 1], [2, 2, 1]]
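If the input is large, the each_element not in temp_list test rescans the whole result list for every group. A faster variant (just a sketch; the names seen and result are mine) keeps a set of the tuples already encountered, so each membership check is constant time:
seen = set()      # tuples of the groups already kept
result = []
for each_element in [[1, 2, 1], [2, 2, 1], [1, 2, 1]]:
    key = tuple(each_element)   # lists are unhashable, tuples are hashable
    if key not in seen:
        seen.add(key)
        result.append(each_element)
print(result)   # [[1, 2, 1], [2, 2, 1]]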
Case 2 is even simpler:
temp_list = []
for each_element in [[1, 2, 1], [2, 2, 1], [1, 2, 1]]:
    temp_list.append(list(set(each_element)))
print(temp_list)
And this is the output:
[[1, 2], [1, 2], [1, 2]]
However, these snippets are not the most Pythonic way of doing things; they are written simply so that beginners can understand them.
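One caveat: set(each_element) does not preserve the order of the values inside a group, which is why the second group comes out as [1, 2] rather than the desired [2, 1]. If that order matters, an explicit loop (a sketch of the same idea as the dict.fromkeys answer above) keeps only the first occurrence of each value:
temp_list = []
for each_element in [[1, 2, 1], [2, 2, 1], [1, 2, 1]]:
    deduped = []
    for value in each_element:
        if value not in deduped:   # keep the first occurrence only
            deduped.append(value)
    temp_list.append(deduped)
print(temp_list)   # [[1, 2], [2, 1], [1, 2]]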