I wrote two functions, `maxSubSum2` and `maxSubSum3`, both of which try to find the maximum contiguous sub-sequence sum in a given sequence.
`maxSubSum2()`:

- Loops through the entire vector, setting a beginning marker `i` on each iteration.
- For each beginning marker `i`, it sets a corresponding ending marker `j`.
- The elements bounded by these two markers form a valid sub-sequence, so it calculates their sum.
- It then checks whether that sum is greater than the previous highest sum.
```cpp
int maxSubSum2(const std::vector<int> &v)
{
    int maxSum = 0;
    for(std::size_t begin = 0; begin < v.size(); ++begin)
    {
        int thisSum = 0;                  // running sum of v[begin..end]
        for(std::size_t end = begin; end < v.size(); ++end)
        {
            thisSum += v[end];
            if(thisSum > maxSum)
                maxSum = thisSum;
        }
    }
    return maxSum;
}
```
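(The running `thisSum` is what makes this the "optimized" brute force: reusing the partial sum as `end` advances avoids recomputing each sub-sequence's sum from scratch with a third nested loop.)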
`maxSubSum3()`:

- Is itself a driver function for the recursive function `maxSumRec()`.
- `maxSumRec()` uses divide-and-conquer to calculate the maximum sub-sequence sum.
- The maximum sub-sequence sum lies either entirely in the left half, entirely in the right half, or spanning the two (in which case it is the sum of the maximum sum in the left half that includes its border element `center`, plus the maximum sum in the right half that includes its border element `center + 1`).
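For example, for the sequence `[4, -3, 5, -2]` with `center` falling between `-3` and `5`, the best left-border sum is `4 + (-3) = 1`, the best right-border sum is `5`, and their total, `6`, is indeed the maximum sub-sequence sum (from `[4, -3, 5]`).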
```cpp
int maxSubSum3(const std::vector<int> &v)
{
    return maxSumRec(v, 0, v.size() - 1);   // note: assumes v is non-empty
}
```
```cpp
int maxSumRec(const std::vector<int> &v, std::vector<int>::size_type left, std::vector<int>::size_type right)
{
    if(left == right)                     // base case: a single element
        if(v[left] > 0)
            return v[left];
        else
            return 0;
    std::vector<int>::size_type center = (left + right) / 2;
    int maxLeftSum = maxSumRec(v, left, center);
    int maxRightSum = maxSumRec(v, center + 1, right);
    // best sum in the left half that ends at center
    int maxLeftBorderSum = 0;
    int leftBorderSum = 0;
    for(std::vector<int>::size_type idx = center; idx < v.size(); --idx)
    {
        leftBorderSum += v[idx];
        if(leftBorderSum > maxLeftBorderSum)
            maxLeftBorderSum = leftBorderSum;
    }
    // best sum in the right half that starts at center + 1
    int maxRightBorderSum = 0;
    int rightBorderSum = 0;
    for(std::vector<int>::size_type idx = center + 1; idx <= right; ++idx)
    {
        rightBorderSum += v[idx];
        if(rightBorderSum > maxRightBorderSum)
            maxRightBorderSum = rightBorderSum;
    }
    return max3(maxLeftSum, maxRightSum, maxLeftBorderSum + maxRightBorderSum);
}
```
```cpp
int max3(int n1, int n2, int n3)
{
    if(n1 >= n2 && n1 >= n3) return n1;
    if(n2 >= n1 && n2 >= n3) return n2;
    if(n3 >= n1 && n3 >= n2) return n3;
    return 0; // <--- Should never happen
}
```
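(As an aside, `max3` could presumably be replaced by the initializer-list overload of `std::max` from `<algorithm>`; a minimal drop-in sketch:)

```cpp
#include <algorithm>

// Equivalent to max3 using std::max's initializer-list overload (C++11).
int max3(int n1, int n2, int n3)
{
    return std::max({n1, n2, n3});
}
```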
Because `maxSubSum2()` has doubly nested `for` loops, its time complexity should be O(N^2), and because `maxSubSum3()` uses divide and conquer, its time complexity should be O(N log N).
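(My reasoning for the divide-and-conquer bound, assuming each border scan is linear in its subrange: the recurrence is T(N) = 2T(N/2) + O(N), which solves to O(N log N).)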
However, I created a simple timing function, `stopwatch()`, to measure the actual running time of each function. It looks like the following:
```cpp
// Requires <chrono>, <iostream>, and <iomanip>.
void stopwatch(int (*maxSubSumN)(const std::vector<int>&), const std::vector<int> &v)
{
    auto start = std::chrono::steady_clock::now();
    maxSubSumN(v);
    auto end = std::chrono::steady_clock::now();
    std::chrono::duration<double> runtime = end - start;
    std::cout << std::fixed << std::setprecision(9) << std::left << std::setw(9) << runtime.count();
}
```
Before the measurements begin, I populate two vectors, `small` and `big`, with 1000 and 10000 randomly generated `int`s respectively, like this:
```cpp
// Requires <cstdlib>. Returns a pseudo-random int in [-50, 50].
int randInt()
{
    return std::rand() % 101 - 50;
}

void populate(std::vector<int> &v)
{
    for(int &i : v)
        i = randInt();
}
```
```cpp
// Requires <ctime> for std::time.
int main()
{
    std::srand(std::time(NULL));
    std::vector<int> small(1000);
    std::vector<int> big(10000);
    populate(small);
    populate(big);

    std::cout << "[OPTIMIZED BRUTE FORCE] \t: ";
    stopwatch(maxSubSum2, small);
    std::cout << std::endl;
    std::cout << "[OPTIMIZED BRUTE FORCE] \t: ";
    stopwatch(maxSubSum2, big);
    std::cout << std::endl;
    std::cout << "[DIVIDE AND CONQUER] \t\t: ";
    stopwatch(maxSubSum3, small);
    std::cout << std::endl;
    std::cout << "[DIVIDE AND CONQUER] \t\t: ";
    stopwatch(maxSubSum3, big);
    std::cout << std::endl;
    return 0;
}
```
However, after running the program several times, the execution times of `maxSubSum2` and `maxSubSum3` are almost identical. Below are the results on my machine.
`small.size() == 10 && big.size() == 100`

```
[OPTIMIZED BRUTE FORCE]  : 0.000001015
[OPTIMIZED BRUTE FORCE]  : 0.000041509
[DIVIDE AND CONQUER]     : 0.000001789
[DIVIDE AND CONQUER]     : 0.000058110
```

`small.size() == 100 && big.size() == 1000`

```
[OPTIMIZED BRUTE FORCE]  : 0.000042093
[OPTIMIZED BRUTE FORCE]  : 0.003870208
[DIVIDE AND CONQUER]     : 0.000053203
[DIVIDE AND CONQUER]     : 0.003899243
```

`small.size() == 1000 && big.size() == 10000`

```
[OPTIMIZED BRUTE FORCE]  : 0.002765456
[OPTIMIZED BRUTE FORCE]  : 0.271172096
[DIVIDE AND CONQUER]     : 0.002931273
[DIVIDE AND CONQUER]     : 0.274476880
```

`small.size() == 1000 && big.size() == 1000000`

```
[OPTIMIZED BRUTE FORCE]  : 0.002730444
[OPTIMIZED BRUTE FORCE]  : 26.383375030
[DIVIDE AND CONQUER]     : 0.002903615
[DIVIDE AND CONQUER]     : 26.508168165
```
- Is there something I did wrong in the calculation of the time complexities?
- Did I make some mistake in the implementation of `maxSumRec`? It gives the right results, though.
- Is there some other bottleneck in the implementation that I did not consider?
- Or is there something that I am missing or did not understand?

Any help would be highly appreciated. For reference, I am learning from the book Data Structures and Algorithms in C++ (4th Edition).
CodePudding user response:
I believe `maxSubSum3` (the divide and conquer) is still O(N^2). You're not doing anything clever to avoid calculating all the same sums that were calculated by `maxSubSum2` (brute force); you're just dividing up the work in a way that makes it harder to tell what is going on. If you're not convinced, I suggest you add a counter that counts how many times the `+=` operator is used in either algorithm, and look at that.
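For instance, a minimal sketch of such a counter (illustrative: the name `gAddCount` and the instrumented copy are not part of the original code):

```cpp
long long gAddCount = 0;   // hypothetical global counter

int maxSubSum2Counted(const std::vector<int> &v)
{
    int maxSum = 0;
    for(std::size_t begin = 0; begin < v.size(); ++begin)
    {
        int thisSum = 0;
        for(std::size_t end = begin; end < v.size(); ++end)
        {
            thisSum += v[end];
            ++gAddCount;   // one count per += on the running sum
            if(thisSum > maxSum)
                maxSum = thisSum;
        }
    }
    return maxSum;
}
// Increment gAddCount the same way in maxSumRec's two border loops,
// then reset and compare the counts after running each algorithm.
```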
CodePudding user response:
```cpp
for(std::vector<int>::size_type idx = center; idx < v.size(); --idx)
```
This loop doesn't make sense. You're decreasing `idx` while `idx < v.size()`, which for an unsigned number is the same as decreasing it until it underflows past 0. That will both increase your asymptotic runtime and possibly give incorrect results.
You may have intended `idx >= left`, but that won't work for `left == 0` due to using unsigned types, so you may want to use `difference_type` instead of `size_type`.
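Alternatively, here is a sketch of one way to keep the unsigned `size_type` and still stop at `left` (assuming the loop is meant to scan from `center` down to `left`):

```cpp
int maxLeftBorderSum = 0;
int leftBorderSum = 0;
// idx-- > left compares first and decrements after, so the body sees
// idx = center, center - 1, ..., left; left == 0 is handled safely.
for(std::vector<int>::size_type idx = center + 1; idx-- > left; )
{
    leftBorderSum += v[idx];
    if(leftBorderSum > maxLeftBorderSum)
        maxLeftBorderSum = leftBorderSum;
}
```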
In any case, I suggest making sure that both algorithms give the same results if you haven't done so.