I want to make the biggersort function call itself recursively on the tail of the list. My code below works and gives me the following output:
[6,7,8]
I want to continue by starting next with 3, then 1, and so on up to the last element.
What I want is something like :
[[6,7,8],[3,5,7,8],[1,5,7,8]..]
My code:
import Data.List
import System.IO
biggersort :: Ord a => [a] -> [a]
biggersort [] = []
biggersort (p:xs) = [p] ++ biggersort greater
    where
        greater = filter (>= p) xs

main = do
    print $ biggersort [6,3,1,5,2,7,8,1]
CodePudding user response:
You can make a function that uses biggersort as a helper function, so:
biggersort :: Ord a => [a] -> [a]
biggersort [] = []
biggersort (p:xs) = p : biggersort (filter (>= p) xs)

biggersorts :: Ord a => [a] -> [[a]]
biggersorts [] = []
biggersorts xa@(_:xs) = biggersort xa : biggersorts xs

main = print (biggersorts [6,3,1,5,2,7,8,1])
This then prints:
Prelude> biggersorts [6,3,1,5,2,7,8,1]
[[6,7,8],[3,5,7,8],[1,5,7,8],[5,7,8],[2,7,8],[7,8],[8],[1]]
CodePudding user response:
You can access the list starting from each position using tails, so that a particularly concise implementation of the function you want in terms of the function you have is:
biggersorts = map biggersort . tails
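For reference (assuming biggersort is the helper defined above and tails comes from Data.List), this gives the same lists as the previous answer, plus a trailing [] because tails also yields the empty suffix:
Prelude> map biggersort (tails [6,3,1,5,2,7,8,1])
[[6,7,8],[3,5,7,8],[1,5,7,8],[5,7,8],[2,7,8],[7,8],[8],[1],[]]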
There is one very noticeable downside (and one less noticeable downside) to this implementation, though. The noticeable downside is that computation on shorter lists is repeated when processing longer lists, leading to O(n^2) best-case time; the less obvious downside is that there is no memory sharing between elements of the result list, leading to O(n^2) worst-case memory usage. These bounds can be improved to O(n) average-case/O(n^2) worst-case time* and O(n) worst-case memory usage.
The idea is to start from the end of the list, and build towards the front. At each step, we look at all the rest of the results to see if there's one we can reuse. So:
import Data.List (find)
import Data.Maybe (fromMaybe)

biggersorts :: Ord a => [a] -> [[a]]
biggersorts [] = [[]]
biggersorts (a:as) = (a:as') : rec where
    -- reuse the first later result whose head is at least a
    as' = fromMaybe [] (find bigger rec)
    rec = biggersorts as
    bigger (a':_) = a <= a'
    bigger _ = False -- True is also fine
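On the sample input this should produce exactly the same results as the tails-based one-liner above:
Prelude> biggersorts [6,3,1,5,2,7,8,1]
[[6,7,8],[3,5,7,8],[1,5,7,8],[5,7,8],[2,7,8],[7,8],[8],[1],[]]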
Beware that because of the sharing, it can be tricky to compare performance of these two fairly. The usual tricks for fully evaluating the output of this function don't play super nicely with sharing; so it's tricky to write something that obviously fully evaluates the outputs but also visits each subterm O(1) times (I think it's less important that this latter property be obvious). The most obvious way to do it (to me) involves rewriting both functions using some mildly advanced techniques, so I will elide that here.
* It seems like it ought to be possible to improve this to O(n*log(n)) worst-case time. During the recursion, build a cache :: Map a [a] to go with the rec :: [[a]]; the intuition for cache is that it tells which element of rec to use at each existing "boundary" a value. Updating the cache at each step involves splitting it at the current value and throwing away the bottom half.
I find the average-case time analysis harder for this variant; is it still O(n)? It seems plausible that the Map has O(1) average size during the run of this variant, but I wasn't able to convince myself of this guess. If not, is there another variant that achieves O(n) average-case/O(n*log(n)) worst-case? ...is there one that does O(n) worst-case? (Probably the same counting argument used to bound the runtime of sorting rules this out...?) Dunno!