I have this function, whose task is to find out how many numbers can be selected from the list such that the difference between any two selected numbers is not greater than t. How can I make its time complexity O(n log n)?
def find_numbers(num_list, t):
    sorted_list = sorted(num_list)
    counter = 0
    k = 0
    n = 0
    for i in range(len(sorted_list)):
        for j in range(k, len(sorted_list)):
            if sorted_list[j] - sorted_list[k] <= t:
                counter += 1
            else:
                break
        k += 1
        if counter > n:
            n = counter
        counter = 0
    return n
Some examples of how it should work:
print(find_numbers([2, 7, 14, 11, 7, 15], 11)) # 5
print(find_numbers([4, 2, 7, 1], 0)) # 1
print(find_numbers([7, 3, 1, 5, 2], 2)) # 3
In the third example, three numbers can be selected from the list [7,3,1,5,2]: 3, 1 and 2, and the differences between these numbers are all at most 2.
CodePudding user response:
You can do it with combinations:
from itertools import combinations

def find_numbers(num_list, t):
    return len([pair for pair in combinations(num_list, 2) if abs(pair[0] - pair[1]) > t])
CodePudding user response:
Sort it once and then keep an index at the front and end of the sorted list, e.g. arr = [0, 1, 2, 3] with i = 0 and j = 3. If arr[i] + arr[j] < num then you can increment the count and shift i to the right. If arr[i] + arr[j] >= num then you shift j to the left. Do this until i >= j.
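Taken literally, this compares sums against a threshold (a hypothetical num) rather than differences against t, so treat the following only as a sketch of the two-pointer scan described above, not as a drop-in answer to the question:
def two_pointer_scan(arr, num):
    # Sketch of the steps described above; `num` is a hypothetical threshold,
    # not the t from the question.
    arr = sorted(arr)           # sort it once
    i, j = 0, len(arr) - 1      # indexes at the front and end
    count = 0
    while i < j:                # do this until i >= j
        if arr[i] + arr[j] < num:
            count += 1          # increment the count
            i += 1              # and shift i to the right
        else:
            j -= 1              # otherwise shift j to the left
    return count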
CodePudding user response:
Your general design, of sorting the list and then scanning through to see what the longest run of numbers separated by no more than t is, is sound. However, you can be more efficient with your scan if you realize you don't need to reset j each time you increase i. All numbers up to sorted_list[j] that were within t of sorted_list[i] will also be within that range of sorted_list[i+1], since the latter is at least as large. This means the scan can be O(n) rather than O(n**2):
def find_numbers(num_list, t):
    sorted_list = sorted(num_list)
    n = 0
    j = 1  # j points to the first index that might be larger by more than t
    for i in range(len(sorted_list)):
        while j < len(sorted_list) and sorted_list[j] - sorted_list[i] <= t:
            j += 1
        if j - i > n:  # no need to manually count, the indexes can do that for us
            n = j - i
    return n
There might be a slightly more elegant way to code this, but this version solves the complexity issue you were asking about.
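For reference, the example calls from the question produce the expected results with this version:
print(find_numbers([2, 7, 14, 11, 7, 15], 11))  # 5
print(find_numbers([4, 2, 7, 1], 0))  # 1
print(find_numbers([7, 3, 1, 5, 2], 2))  # 3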