Given a large unordered list with duplicates, how do I find the count of values in the list that lie between a lower and an upper bound, inclusive, with good time and space complexity? It would be great if there's an explanation in Python. Looking for an O(n log n) approach.
Sample input
5 # number of elements in unordered list
2 4 98 3 100 # unordered list. values in list from 1 to 10 ^7
4 # number of subsequent bounds as input
99 101 # left is lower bound right is upper bound
1 5
100 100
2 2
Sample output
1 # 1 number in list between 99 and 101 inclusive
3 # 3 numbers in list between 1 and 5 inclusive
1
1
CodePudding user response:
Unsure if this is what you are looking for, but to count how many times a single element occurs in an interval of a list you could do something like the following (note this counts one value, not a range of values):
list[start:stop].count(element)
CodePudding user response:
One way to do this is to iterate the list and compare each value with each pair of bounds. This is O(n*m), where n is the length of the list and m is the number of bound pairs. Unless m is large relative to log n, this will be cheaper than an O(n log n) sort-based approach. Space-wise this uses only m additional counters.
ll = [2, 4, 98, 3, 100]
bounds = [[99, 101], [1, 5], [100, 100], [2, 2]]
result = [0] * len(bounds)
for num in ll:
    for i, (lower, upper) in enumerate(bounds):
        if lower <= num <= upper:  # inclusive range check
            result[i] += 1         # count each match, not just flag it
print(result)
Output:
[1, 3, 1, 1]
CodePudding user response:
Solved it in Python:
- Sorted the list (merge sort is O(n log n); Python's built-in sort works too)
- Used binary search (the bisect library) on the sorted list for each bound pair
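The sort-plus-bisect approach above might be sketched like this (the helper name `count_in_range` is my own; the answer only names the technique). `bisect_left` finds the first index at which `lo` could be inserted, `bisect_right` the first index past all elements equal to `hi`, so their difference is the count of values in `[lo, hi]`. Sorting is O(n log n) and each query is O(log n):

```python
from bisect import bisect_left, bisect_right

def count_in_range(sorted_values, lo, hi):
    # Number of elements x in sorted_values with lo <= x <= hi.
    return bisect_right(sorted_values, hi) - bisect_left(sorted_values, lo)

values = sorted([2, 4, 98, 3, 100])  # O(n log n) sort, done once
bounds = [(99, 101), (1, 5), (100, 100), (2, 2)]

for lo, hi in bounds:
    print(count_in_range(values, lo, hi))  # prints 1, 3, 1, 1
```

With q queries the total cost is O((n + q) log n), versus O(n*q) for the brute-force scan, so this wins once there are many bound pairs.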