Algorithm to arrange audio tracks on a number of sides of vinyl, optimizing for smallest maximum side length

We have a number of audio tracks, and a number of groups called “sides” (each corresponding to one side of a vinyl record). For each track, we know its name and its length in seconds. Each side also has a length, which is the sum of the lengths of the tracks on that side. Typically one side will be longer than all the others. This is the maximum side length, and it varies depending on how we arrange the tracks. What arrangement of the tracks results in the SMALLEST maximum side length?

I have already developed a solution which is exact, in the sense that it finds the optimal solution. It uses a recursive tree-based search that considers all the possible combinations. Unfortunately this means processing time increases exponentially with the number of tracks. Above twenty tracks, it takes too long (days or weeks).
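
(For a sense of scale: the number of ways to split n tracks into k non-empty groups is the Stirling number of the second kind; for k = 4 it is S(n, 4) = (4^n − 4·3^n + 6·2^n − 4)/24, so 20 tracks on 4 sides already give S(20, 4) ≈ 4.5 × 10^10 distinct groupings, which is roughly why the search becomes impractical past twenty tracks.)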

I considered parallel processing, but I concluded it was unlikely to help enough to justify the effort. Such situations can really only be solved by finding a more efficient algorithm, one that does fewer iterations. It’s possible that the problem is unsolvable without relaxing the constraints, but I am skeptical of that. The problem seems possibly related to the cutting stock problem.

Here's a typical input and output:

Input file (CSV format):

23 skidoo,5:38
400 blows,6:07
A Certain Ratio,3:15
African Headcharge,3:36
Allez Allez ,3:40
Animal Magic,3:49
Basement 5,5:14
Disconnection,5:15
EP4,4:42
Lifetones,6:43
Mark Stewart,3:43
Maximum Joy,7:53
PIL,7:35
Snakefinger,4:41
Startled Insects,5:35
The Unknown Case,5:51

Output for four sides (despite diverse track lengths, this one achieved an excellent fit, with a standard deviation of only one second):

A 20:51
01 07:53 "Maximum Joy"
02 06:07 "400 blows"
03 03:36 "African Headcharge"
04 03:15 "A Certain Ratio"

B 20:49
01 07:35 "PIL"
02 05:51 "The Unknown Case"
03 03:43 "Mark Stewart"
04 03:40 "Allez Allez "

C 20:49
01 06:43 "Lifetones"
02 05:35 "Startled Insects"
03 04:42 "EP4"
04 03:49 "Animal Magic"

D 20:48
01 05:38 "23 skidoo"
02 05:15 "Disconnection"
03 05:14 "Basement 5"
04 04:41 "Snakefinger"

min: 20:48
max: 20:51
mean: 20:49
stdev: 00:01

I have tried various tree-pruning strategies, similar to those used in chess programs, but so far none has proved adequate. For example, I tried introducing a side length limit, and abandoning a branch if it exceeds that limit. This can improve performance, but it also requires a method of estimating a workable limit, because if the limit is too low, no solution is possible. I suspect that this problem can be solved with linear programming, but I haven't figured out how to reduce it to a set of constraints that a linear programming solver will accept.
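
One cheap way to seed such a limit (not something my current code does) is a greedy pass that drops each track, longest first, onto the side that is currently shortest; the resulting maximum side length is always achievable, so it is a safe starting limit. A minimal sketch in plain C++, separate from the MFC code below:

#include <algorithm>
#include <functional>
#include <vector>

// Greedy "longest track onto the currently shortest side" pass.
// Returns a feasible maximum side length, usable as an initial pruning limit.
int GreedyUpperBound(std::vector<int> trackLens, int nSides)
{
    std::sort(trackLens.begin(), trackLens.end(), std::greater<int>());
    std::vector<int> sideLen(nSides, 0);
    for (int len : trackLens) {
        int iShortest = int(std::min_element(sideLen.begin(), sideLen.end()) - sideLen.begin());
        sideLen[iShortest] += len;      // place track on the currently shortest side
    }
    return *std::max_element(sideLen.begin(), sideLen.end());
}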

Edit: code below

int Round(double x)
{
    return x < 0 ? int(x - 0.5) : int(x + 0.5);
}

INT_PTR CountCombinations(INT_PTR n, INT_PTR k)
{
    INT_PTR count = n;
    for (INT_PTR x = 1; x <= k - 1; x++)
    {
        count = count * (n - x) / x;
    }
    return count / k;
}

CString SecsToStr(int nSecs)
{
    CTimeSpan   ts(0, 0, 0, nSecs);
    return ts.Format(_T("%M:%S"));
}

int StrToSecs(LPCTSTR pszStr)
{
    int nMinutes, nSeconds;
    if (_stscanf_s(pszStr, _T("%d:%d"), &nMinutes, &nSeconds) == 2) {
        return nMinutes * 60 + nSeconds;
    }
    return 0;
}

class CLayout {
public:
// Types
    typedef CArrayEx<CIntArrayEx, CIntArrayEx&> CLayoutArray;

// Operations
    void    GetLayouts(int nGroups, int nItems, CLayoutArray& arrLayout);

protected:
// Data members
    int     m_nGroups;
    int     m_nItems;
    int     m_nItemsRemain;
    int     m_iDepth;
    CLayoutArray    *m_parrLayout;
    CIntArrayEx m_arrState;

// Helpers
    void    Recurse(int nItems);
    static int  SortCompare(const void *p1, const void *p2);
};

int CLayout::SortCompare(const void *p1, const void *p2)
{
    CIntArrayEx *parr1 = *(CIntArrayEx **)p1;
    CIntArrayEx *parr2 = *(CIntArrayEx **)p2;
    int nItems1 = parr1->GetSize();
    int nItems2 = parr2->GetSize();
    int nItems = min(nItems1, nItems2);
    for (int iItem = 0; iItem < nItems; iItem++) {  // for each item
        int a = parr1->GetAt(iItem);
        int b = parr2->GetAt(iItem);
        if (a < b)
            return -1;
        if (a > b)
            return 1;
    }
    if (nItems1 < nItems2)
        return -1;
    if (nItems1 > nItems2)
        return 1;
    return 0;
}

void CLayout::GetLayouts(int nGroups, int nItems, CLayoutArray& arrLayout)
{
    m_nGroups = nGroups;
    m_nItems = nItems;
    m_iDepth = 0;
    m_arrState.SetSize(nGroups);
    arrLayout.RemoveAll();
    m_parrLayout = &arrLayout;
    Recurse(nItems);
    arrLayout.SortIndirect(SortCompare);
}

void CLayout::Recurse(int nItems)
{
    m_iDepth++;
    int nStart = nItems / m_nGroups;
    nStart = max(nStart, 1);
    for (int iItem = nStart; iItem < nItems; iItem++) { // for each item
        m_arrState[m_iDepth - 1] = iItem;
        int nRemain = nItems - iItem;
        if (m_iDepth < m_nGroups - 1) { // if not at maximum depth
            Recurse(nRemain);
        } else {    // maximum depth reached
            ASSERT(m_iDepth == m_nGroups - 1);
            m_arrState[m_iDepth] = nRemain;
            CIntArrayEx arrLayout(m_arrState);
            // sort descending, so bigger groups can be eliminated without recursion
            arrLayout.Sort(true);   // descending order
            if (m_parrLayout->Find(arrLayout) < 0) {    // if not duplicate layout
                m_parrLayout->Add(arrLayout);
            }
        }
    }
    m_iDepth--;
}

class CPacker {
public:
// Types
    class CTrackInfo {
    public:
        int     m_nLength;
        CString m_sTitle;
        bool    operator<(const CTrackInfo& info) const { return m_nLength < info.m_nLength; }
        bool    operator>(const CTrackInfo& info) const { return m_nLength > info.m_nLength; }
    };
    class CSideInfo {
    public:
        int     m_nLength;
        CIntArrayEx m_arrTrackIdx;
        bool    operator==(const CSideInfo& info) const { return m_nLength == info.m_nLength && m_arrTrackIdx == info.m_arrTrackIdx; }
        bool    operator!=(const CSideInfo& info) const { return !operator==(info); }
    };
    typedef CArrayEx<CTrackInfo, CTrackInfo&> CTrackArray;
    typedef CArrayEx<CSideInfo, CSideInfo&> CSideArray;
    typedef CArrayEx<CSideArray, CSideArray&> CSideArrayArray;
    class CStats {
    public:
        int     m_nMin;
        int     m_nMax;
        double  m_fMean;
        double  m_fStdDev;
    };

// Operations
    bool    Pack(const CTrackArray& arrTrack, int nSides, CSideArrayArray& arrSide);
    static  int     GetTotalLength(const CTrackArray& arrTrack);
    static  void    GetStats(const CSideArray& arrSide, CStats& stats);

protected:
// Types
    class CState {
    public:
        CByteArrayEx    m_arrCombo;
        CIntArrayEx m_arrTrackIdx;
    };
    typedef CArrayEx<CState, CState&> CStateArray;
    class CResultIdx {
    public:
        double  m_fStdDev;
        int     m_iResult;
        bool    operator<(const CResultIdx& idx) const { return m_fStdDev < idx.m_fStdDev; }
        bool    operator>(const CResultIdx& idx) const { return m_fStdDev > idx.m_fStdDev; }
    };
    typedef CArrayEx<CResultIdx, CResultIdx&> CResultIdxArray;

// Data members
    CLayout::CLayoutArray   m_arrLayout;
    int     m_iLayout;
    int     m_iDepth;
    int     m_nSides;
    int     m_nMaxSideLen;
    int     m_nWinnerMaxSideLen;
    CSideArray  m_arrWinner;
    CSideArrayArray m_arrResult;
    CStateArray m_arrState;
    CIntArrayEx m_arrSideLen;
    CIntArrayEx m_arrTrackLen;

// Helpers
    void    Init(int nSides, int nTracks, int nMaxSideLen);
    void    Recurse();
    void    UpdateWinner();
    static int  SideSortCompare(const void *p1, const void *p2);
};

int CPacker::GetTotalLength(const CTrackArray& arrTrack)
{
    int nTotalLength = 0;
    int nTracks = arrTrack.GetSize();
    for (int iTrack = 0; iTrack < nTracks; iTrack++) {  // for each track
        nTotalLength += arrTrack[iTrack].m_nLength;
    }
    return nTotalLength;
}

void CPacker::Init(int nSides, int nTracks, int nMaxSideLen)
{
    m_iLayout = 0;
    m_iDepth = 0;
    m_nSides = nSides;
    m_nMaxSideLen = nMaxSideLen;
    m_nWinnerMaxSideLen = INT_MAX;
    m_arrWinner.SetSize(nSides);
    m_arrResult.FastRemoveAll();
    m_arrState.SetSize(nSides);
    m_arrState[0].m_arrTrackIdx.SetSize(nTracks);
    m_arrSideLen.SetSize(nSides);
    m_arrTrackLen.SetSize(nTracks);
}

// Koepf's generator (gen_comb_norep_lex_*) is at least three times faster than CCombination

void CPacker::Recurse()
{
    CState& state = m_arrState[m_iDepth];
    int nTracks = state.m_arrTrackIdx.GetSize();
    int nGroupSize = m_arrLayout[m_iLayout][m_iDepth];
    int nSides = m_nSides;
    state.m_arrCombo.SetSize(nGroupSize);
    int gen_result = gen_comb_norep_lex_init(state.m_arrCombo.GetData(), nTracks, nGroupSize);
    while (gen_result == GEN_NEXT) {    // while combinations remain
        int nSideLength = 0;
        for (int iVal = 0; iVal < nGroupSize; iVal++) { // for each group member
            int iTrack = state.m_arrTrackIdx[state.m_arrCombo[iVal]];
            nSideLength += m_arrTrackLen[iTrack];
        }
        if (nSideLength >= m_nMaxSideLen) { // if side is too long
            goto lblNextCombo;  // don't waste more time on this branch
        }
        m_arrSideLen[m_iDepth] = nSideLength;
        if (m_iDepth < nSides - 1) {    // if not at maximum depth
            m_iDepth++;
            CState& stateNew = m_arrState[m_iDepth];
            int nNewGroupSize = nTracks - nGroupSize;
            stateNew.m_arrTrackIdx.FastSetSize(nNewGroupSize);
            int iSrcPos = 0;
            int iDestPos = 0;
            for (int iTrack = 0; iTrack < nTracks; iTrack++) {  // for each track
                if (iSrcPos < nGroupSize && state.m_arrCombo[iSrcPos] == iTrack) {  // if group member
                    iSrcPos++;
                } else {    // track doesn't belong to current group
                    stateNew.m_arrTrackIdx[iDestPos] = state.m_arrTrackIdx[iTrack]; // copy track to new state
                    iDestPos++;
                }
                }
            }
            if (m_iDepth < nSides - 1) {    // if not at maximum depth
                Recurse();  // traverse new state
                m_iDepth--;
            } else {    // final group has only one combination, so no need for recursion
                m_iDepth--;
                int nNewSideLength = 0;
                for (int iVal = 0; iVal < nNewGroupSize; iVal++) {  // for each group member
                    int iTrack = stateNew.m_arrTrackIdx[iVal];
                    nNewSideLength += m_arrTrackLen[iTrack];
                }
                if (nNewSideLength >= m_nMaxSideLen) {  // if side is too long
                    goto lblNextCombo;  // don't waste more time on this branch
                }
                stateNew.m_arrCombo.SetSize(nNewGroupSize);
                for (int iVal = 0; iVal < nNewGroupSize; iVal++) {  // for each group member
                    stateNew.m_arrCombo[iVal] = iVal;   // sequential order
                }
                m_arrSideLen[nSides - 1] = nNewSideLength;
                goto lblTallySides;
            }
        } else {    // maximum depth reached
lblTallySides:
            int nMaxSideLen = m_arrSideLen[0];
            for (int iSide = 1; iSide < nSides; iSide++) {  // for each side
                int nSideLen = m_arrSideLen[iSide];
                if (nSideLen > nMaxSideLen)
                    nMaxSideLen = nSideLen;
            }
            if (nMaxSideLen < m_nWinnerMaxSideLen) {    // if maximum side length beats winner
                m_nWinnerMaxSideLen = nMaxSideLen;
                UpdateWinner();
                m_arrResult.SetSize(1);
                m_arrResult[0] = m_arrWinner;
            } else if (nMaxSideLen == m_nWinnerMaxSideLen) {    // if tie
                UpdateWinner();
                if (m_arrResult.Find(m_arrWinner) < 0) {    // if new solution
                    m_arrResult.Add(m_arrWinner);   // store solution
                }
            }
        }
lblNextCombo:
        gen_result = gen_comb_norep_lex_next(state.m_arrCombo.GetData(), nTracks, nGroupSize);
    }
}

void CPacker::UpdateWinner()
{
    int nSides = m_nSides;
    for (int iSide = 0; iSide < nSides; iSide++) {
        const CState& state = m_arrState[iSide];
        int nGroupSize = m_arrLayout[m_iLayout][iSide];
        m_arrWinner[iSide].m_nLength = m_arrSideLen[iSide];
        m_arrWinner[iSide].m_arrTrackIdx.FastSetSize(nGroupSize);
        for (int iVal = 0; iVal < nGroupSize; iVal++) {
            int iTrack = state.m_arrTrackIdx[state.m_arrCombo[iVal]];
            m_arrWinner[iSide].m_arrTrackIdx[iVal] = iTrack;
        }
        m_arrWinner[iSide].m_arrTrackIdx.Sort();
    }
    m_arrWinner.SortIndirect(SideSortCompare);
}

int CPacker::SideSortCompare(const void *p1, const void *p2)
{
    CSideInfo   *ps1 = *(CSideInfo **)p1;
    CSideInfo   *ps2 = *(CSideInfo **)p2;
    if (ps1->m_arrTrackIdx[0] < ps2->m_arrTrackIdx[0])
        return -1;
    if (ps1->m_arrTrackIdx[0] > ps2->m_arrTrackIdx[0])
        return 1;
    return 0;
}

bool CPacker::Pack(const CTrackArray& arrTrack, int nSides, CSideArrayArray& arrSide)
{
    arrSide.RemoveAll();    // empty destination array first
    int nTracks = arrTrack.GetSize();
    if (nTracks < nSides) { // if fewer tracks than sides
        return false;   // error, empty sides not allowed
    }
    if (nTracks > nSides) { // if more tracks than sides
        {
            CLayout layout;
            layout.GetLayouts(nSides, nTracks, m_arrLayout);    // get possible layouts
        }
        int nTotalLength = GetTotalLength(arrTrack);
        int nMaxSideLen = nTotalLength / nSides;    // start from average
        while (1) {
            Init(nSides, nTracks, nMaxSideLen);
            for (int iTrack = 0; iTrack < nTracks; iTrack++) {  // for each track
                m_arrTrackLen[iTrack] = arrTrack[iTrack].m_nLength;
                m_arrState[0].m_arrTrackIdx[iTrack] = iTrack;   // root state's track index always contains all tracks
            }
            int nLayouts = m_arrLayout.GetSize();
            for (m_iLayout = 0; m_iLayout < nLayouts; m_iLayout++) {    // for each possible layout
                Recurse();
            }
            if (m_nWinnerMaxSideLen != INT_MAX)
                break;
            nMaxSideLen++;
        }
    } else {    // one track per side; trivial case
        m_arrResult.SetSize(1);
        m_arrResult[0].SetSize(nSides);
        for (int iSide = 0; iSide < nSides; iSide++) {  // for each side
            m_arrResult[0][iSide].m_arrTrackIdx.Add(iSide);
            m_arrResult[0][iSide].m_nLength = arrTrack[iSide].m_nLength;
        }
    }
    // sort results by standard deviation
    CResultIdxArray arrIdx;
    int nResults = m_arrResult.GetSize();
    arrIdx.SetSize(nResults);
    for (int iResult = 0; iResult < nResults; iResult++) {
        CStats  stats;
        GetStats(m_arrResult[iResult], stats);
        arrIdx[iResult].m_fStdDev = stats.m_fStdDev;
        arrIdx[iResult].m_iResult = iResult;
    }
    arrIdx.Sort();
    arrSide.SetSize(nResults);
    for (int iResult = 0; iResult < nResults; iResult++) {
        arrSide[iResult] = m_arrResult[arrIdx[iResult].m_iResult];
    }
    return true;
}

void CPacker::GetStats(const CSideArray& arrSide, CStats& stats)
{
    int nTotLen = 0;
    int nMinLen = INT_MAX;
    int nMaxLen = 0;
    int nSides = arrSide.GetSize();
    for (int iSide = 0; iSide < nSides; iSide++) {
        const CSideInfo& side = arrSide[iSide];
        nTotLen += side.m_nLength;
        if (side.m_nLength < nMinLen)
            nMinLen = side.m_nLength;
        if (side.m_nLength > nMaxLen)
            nMaxLen = side.m_nLength;
    }
    double  fMean = double(nTotLen) / nSides;
    double  fSum = 0;
    for (int iSide = 0; iSide < nSides; iSide++) {
        const CSideInfo& side = arrSide[iSide];
        double  fDev = side.m_nLength - fMean;
        fSum += fDev * fDev;
    }
    stats.m_nMin = nMinLen;
    stats.m_nMax = nMaxLen;
    stats.m_fMean = fMean;
    stats.m_fStdDev = sqrt(fSum / nSides);
}

bool OutputResults(const CPacker::CTrackArray& arrTrack, const CPacker::CSideArrayArray& arrSide, CStdioFile& fOut, bool bShowAll)
{
    int nResults;
    if (bShowAll) {
        nResults = arrSide.GetSize();
    } else {
        nResults = 1;
    }
    CString sLine;
    for (int iResult = 0; iResult < nResults; iResult++) {
        if (bShowAll) {
            sLine.Format(_T("Solution %d\n\n"), iResult + 1);   // separator
            fOut.WriteString(sLine);
        }
        int nSides = arrSide[iResult].GetSize();
        for (int iSide = 0; iSide < nSides; iSide++) {
            const CPacker::CSideInfo& side = arrSide[iResult][iSide];
            sLine.Format(_T("%c %s\n"), iSide + 'A', SecsToStr(side.m_nLength));
            fOut.WriteString(sLine);
            int nSideTracks = side.m_arrTrackIdx.GetSize();
            for (int iSeq = 0; iSeq < nSideTracks; iSeq++) {
                int iTrack = side.m_arrTrackIdx[iSeq];
                sLine.Format(_T("%02d %s \"%s\"\n"), iSeq + 1, SecsToStr(arrTrack[iTrack].m_nLength), arrTrack[iTrack].m_sTitle);
                fOut.WriteString(sLine);
            }
            fOut.WriteString(_T("\n"));
        }
        CPacker::CStats stats;
        CPacker::GetStats(arrSide[iResult], stats);
        sLine.Format(_T("min: %s\nmax: %s\nmean: %s\nstdev: %s\n\n"), 
            SecsToStr(stats.m_nMin), SecsToStr(stats.m_nMax), SecsToStr(Round(stats.m_fMean)), SecsToStr(Round(stats.m_fStdDev)));
        fOut.WriteString(sLine);
    }
    return true;
}

struct TRACK {
    int     nLength;
    LPCTSTR pszTitle;
};

void ReadCSVFile(LPCTSTR pszTracksFilePath, CPacker::CTrackArray& arrTrack)
{
    arrTrack.RemoveAll();
    CStdioFile fIn(pszTracksFilePath, CFile::modeRead);
    CString sLine;
    while (fIn.ReadString(sLine)) {
        CParseCSV   parse(sLine);
        CString sTitle, sLength;
        if (parse.GetString(sTitle) && parse.GetString(sLength)) {
            int nSeconds = StrToSecs(sLength);
            if (nSeconds) {
                CPacker::CTrackInfo info;
                info.m_sTitle = sTitle;
                info.m_nLength = nSeconds;
                arrTrack.Add(info);
            }
        }
    }
}

bool Pack(LPCTSTR pszTracksFilePath, int nSides, CString sOutPath, bool bShowAll)
{
    CPacker::CTrackArray    arrTrack;
    ReadCSVFile(pszTracksFilePath, arrTrack);
    if (nSides <= 1) {
        printf("number of sides must be greater than one\n");
        return false;
    }
    if (nSides > 26) {
        printf("number of sides is too big\n");
        return false;
    }
    if (!arrTrack.GetSize()) {
        printf("no track definitions found\n");
        return false;
    }
    if (nSides > arrTrack.GetSize()) {
        printf("more sides than tracks\n");
        return false;
    }
    arrTrack.Sort(true);    // improves performance
    CPacker packer;
    CPacker::CSideArrayArray arrSide;
//  ULONGLONG   nStart = GetTickCount64();
    if (!packer.Pack(arrTrack, nSides, arrSide))
        return false;
//  ULONGLONG   nStop = GetTickCount64(); printf("%lld ms\n", nStop - nStart);
    if (sOutPath.IsEmpty()) {
        CStdioFile  fOut(stdout);
        OutputResults(arrTrack, arrSide, fOut, bShowAll);
    } else {
        CStdioFile  fOut(sOutPath, CFile::modeCreate | CFile::modeWrite);
        OutputResults(arrTrack, arrSide, fOut, bShowAll);
    }
    return true;
}

class CMyCommandLineInfo : public CCommandLineInfo {
public:
    enum {  // define parameters
        PARAM_SIDES,    // number of sides
        PARAM_OUT_PATH, // output file path
        PARAM_SHOW_ALL, // show all solutions
    };
    static const LPCTSTR m_pszParam[];
    CMyCommandLineInfo();
    virtual void ParseParam(const TCHAR* pszParam, BOOL bFlag, BOOL bLast);
    static  int     FindParam(const TCHAR* pszParam);
    int     m_nSides;
    CString m_sOutPath;
    int     m_iParam;
    bool    m_bShowAll;
};

const LPCTSTR CMyCommandLineInfo::m_pszParam[] = {
    _T("sides"),
    _T("out"),
    _T("all"),
};

CMyCommandLineInfo::CMyCommandLineInfo()
{
    m_iParam = -1;
    m_nSides = 2;
    m_bShowAll = false;
}

int CMyCommandLineInfo::FindParam(const TCHAR* pszParam)
{
    int iParam;
    int nParams = _countof(m_pszParam);
    for (iParam = 0; iParam < nParams; iParam++) {
        if (!_tcsicmp(pszParam, m_pszParam[iParam])) {  // case insensitive
            return iParam;
        }
    }
    return -1;
}

void CMyCommandLineInfo::ParseParam(const TCHAR* pszParam, BOOL bFlag, BOOL bLast)
{
    if (bFlag) {
        m_iParam = FindParam(pszParam);
        switch (m_iParam) {
        case PARAM_SHOW_ALL:
            m_bShowAll = true;
            m_iParam = -1;
            break;
        }
    } else {
        if (m_iParam >= 0) {    // if parameter is ours
            switch (m_iParam) {
            case PARAM_SIDES:
                _stscanf_s(pszParam, _T("%d"), &m_nSides);
                break;
            case PARAM_OUT_PATH:
                m_sOutPath = pszParam;
                break;
            }
            m_iParam = -1;
            return; // skip base class behavior
        }
    }
    CCommandLineInfo::ParseParam(pszParam, bFlag, bLast);
}

static const LPCSTR pszHelp = 
"Optimally packs tracks onto a specified number of sides.\n\n"
"TrackPacker [-sides numSides] [-out outPath] [-all] inPath\n\n"
"numSides  Number of sides to pack onto; defaults to two.\n"
"outPath   Path of output layout file; defaults to console.\n"
"all       Show all solutions, otherwise only show best one.\n"
"inPath    Path of input CSV file; required.\n\n"
"Example:\n"
"TrackPacker -sides 4 -out layout.txt tracks.csv\n\n"
"Input CSV format is title,length\n"
"If title contains commas, enclose it in double quotes.\n"
"Length must be in mm:ss (minutes:seconds) time format.\n\n"
"Sample CSV file:\n"
"Ida Lupino,1:23\n"
"\"Open, to Love\",4:56\n";

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    int nRetCode = 0;

    HMODULE hModule = ::GetModuleHandle(NULL);

    if (hModule != NULL)
    {
        // initialize MFC and print an error on failure
        if (!AfxWinInit(hModule, NULL, ::GetCommandLine(), 0))
        {
            // TODO: change error code to suit your needs
            _tprintf(_T("Fatal Error: MFC initialization failed\n"));
            nRetCode = 1;
        }
        else
        {
            CMyCommandLineInfo  cmdInfo;
            theApp.ParseCommandLine(cmdInfo);
            if (!cmdInfo.m_strFileName.IsEmpty()) {
                TRY {
                    Pack(cmdInfo.m_strFileName, cmdInfo.m_nSides, cmdInfo.m_sOutPath, cmdInfo.m_bShowAll);
                }
                CATCH(CException, e) {
                    TCHAR   szErrMsg[MAX_PATH];
                    e->GetErrorMessage(szErrMsg, _countof(szErrMsg));
                    _tprintf(_T("%s\n"), szErrMsg);
                }
                END_CATCH;
            } else {
                printf(pszHelp);
            }
        }
    }
    else
    {
        // TODO: change error code to suit your needs
        _tprintf(_T("Fatal Error: GetModuleHandle failed\n"));
        nRetCode = 1;
    }

    return nRetCode;
}

CodePudding user response:

The problem is NP-complete. It is closely related to the bin-packing problem. Here is a proof:

You have a set of N items (audio tracks) that you want to distribute over M containers (sides) so as to minimize the size of the largest container (the length of a side). The bin-packing decision problem is basically: decide whether the N items can be packed so that they fit into M containers of a given size. This decision problem is NP-complete. You can reduce your problem to this bin-packing one. Indeed, the container size (the same for all containers in your case) is bounded: you can easily find a lower bound below which the items are guaranteed not to fit into the M containers whatever the assignment, and an upper bound at which all items are guaranteed to fit. This can be done by sorting the item durations and summing the K first/last items. The idea is then to do a binary search to find the minimal container size at which all items still fit. The reduction can be done in O(log(U-L) * T) time, where U is the upper-bound duration, L is the lower-bound duration and T is the cost of solving one bin-packing decision instance.
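
A sketch of that binary-search driver in plain C++ (CanPack here is a stand-in for whatever exact bin-packing decision procedure you plug in; it is assumed, not provided):

#include <algorithm>
#include <numeric>
#include <vector>

// Hypothetical decision procedure: can the tracks be packed onto nSides sides
// of capacity cap seconds each? (Any exact bin-packing algorithm works here.)
bool CanPack(const std::vector<int>& trackLens, int nSides, int cap);

int MinMaxSideLen(const std::vector<int>& trackLens, int nSides)
{
    int total = std::accumulate(trackLens.begin(), trackLens.end(), 0);
    int longest = *std::max_element(trackLens.begin(), trackLens.end());
    int lo = std::max(longest, (total + nSides - 1) / nSides);  // lower bound: longest track, average side
    int hi = total;                                             // upper bound: everything on one side
    while (lo < hi) {                   // invariant: the optimum lies in [lo, hi]
        int mid = lo + (hi - lo) / 2;
        if (CanPack(trackLens, nSides, mid))
            hi = mid;                   // feasible: try a smaller capacity
        else
            lo = mid + 1;               // infeasible: capacity must be larger
    }
    return lo;                          // smallest feasible maximum side length
}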

Note that the reduction can also be applied in the reverse direction: some instances of the bin-packing decision problem (the ones where all containers have the same size, as above), which is known to be NP-complete, can be reduced to your problem. Indeed, one can compute the optimal container size and check whether it is bigger than the given size.

Since the best known algorithms for bin-packing are exponential, the best known algorithm for your problem is exponential too (if you managed to find a polynomial algorithm, it would mean either that P=NP or that the above proof is wrong). The logarithmic factor in the reduction is great news: it should usually be small in your case (typically 10-15), so you can get an algorithm close in speed to the best bin-packing algorithm. The good news is that you can reuse the best known bin-packing algorithms from decades of research. So while this strategy may not give you the best possible implementation, it certainly gives you a very good algorithm based on the state of the art (likely better than an algorithm written from scratch). Note that the tighter the lower/upper bounds, the faster the binary search, so it is worth computing reasonably good bounds. Randomized methods can quickly find a relatively small range.

CodePudding user response:

One improvement I can think of: in order to eliminate redundant combinations, always select the longest title still available when "starting" a new side -- including the first one. This ensures the following invariant: the longest title overall will always be on side A, the longest title that is not on A will be on B, and so on.

Also, keep track of the "globally" best maximum side length you have found so far, and don't consider combinations where any side exceeds this length (it can be initialized to maxint or infinity).

(This assumes that you fill records sequentially -- it's typically best to include your solution in the question so people can better see what you have done so far...)

Edit: To make your code more readable, I'd recommend reducing the data representation to just the track lengths in seconds, as integers. The names can be assigned at the end; they are not really relevant to the core of the problem here, and they reduce readability.

You'll probably want to represent the data as follows:

  • Fixed:

    • sorted integer array of title lengths (descending)
  • Recursion State

    • current total length assigned
    • sorted integer list of indices of currently unused titles (ascending)
    • list of used title indices in assignment order
    • list of current side lengths
  • "Winner"

    • minimum side length so far
    • list of globally best side lengths so far
    • list of used titles for best side lengths so far

Where possible, operate on data structures in place during recursion, restoring the state as you exit a recursion step. This avoids data handover / allocation costs.
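
Putting the suggestions above together, a minimal sketch in plain C++ (integer lengths only, sorted descending; names, I/O and the MFC types are deliberately left out):

#include <algorithm>
#include <climits>
#include <vector>

struct Partitioner {
    std::vector<int> len;           // track lengths in seconds, sorted descending
    std::vector<int> sideLen;       // current length of each side (one entry per side)
    std::vector<int> assign;        // assign[i] = side currently holding track i
    std::vector<int> bestAssign;    // best assignment found so far
    int best = INT_MAX;             // best (smallest) maximum side length so far

    void Search(int iTrack, int nOpenSides) {
        int tracksLeft = int(len.size()) - iTrack;
        int sidesToOpen = int(sideLen.size()) - nOpenSides;
        if (tracksLeft < sidesToOpen)
            return;                 // some side would end up empty
        if (tracksLeft == 0) {      // all tracks placed: record if it beats the best so far
            int mx = *std::max_element(sideLen.begin(), sideLen.end());
            if (mx < best) { best = mx; bestAssign = assign; }
            return;
        }
        for (int s = 0; s < nOpenSides; s++) {              // try each already-opened side
            if (sideLen[s] + len[iTrack] >= best)
                continue;                                   // prune: cannot beat the best so far
            sideLen[s] += len[iTrack]; assign[iTrack] = s;
            Search(iTrack + 1, nOpenSides);
            sideLen[s] -= len[iTrack];                      // restore state in place
        }
        // Open a new side. Because tracks are visited longest first, a new side is
        // always started by the longest unassigned track: side A gets the longest
        // track overall, side B the longest track not on A, and so on.
        if (nOpenSides < int(sideLen.size()) && len[iTrack] < best) {
            sideLen[nOpenSides] = len[iTrack]; assign[iTrack] = nOpenSides;
            Search(iTrack + 1, nOpenSides + 1);
            sideLen[nOpenSides] = 0;
        }
    }
};

// usage sketch:
//   Partitioner p;
//   p.len = trackLens;                    // seconds, sorted descending
//   p.sideLen.assign(nSides, 0);
//   p.assign.resize(p.len.size());
//   p.Search(0, 0);                       // p.best / p.bestAssign hold the optimum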

CodePudding user response:

The mathematical model for a MIP solver is:

Let the decision variables be:
   x[i,j]=1 if song i is assigned to side j
          0 otherwise
   z : maximum side length
 

min z
    z >= sum(i, x[i,j]*length[i])     ∀j
    sum(j, x[i,j]) = 1                ∀i
    x[i,j] ∈ {0,1}

This should work with any MIP solver.
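
For instance, a sketch of this model using the Gurobi C++ API (any MIP solver with binary variables will do; the track lengths in seconds and the number of sides are assumed to be supplied by the caller, and error handling is omitted):

#include <vector>
#include "gurobi_c++.h"

// Returns the side index chosen for each track in an optimal assignment.
std::vector<int> SolveWithMip(const std::vector<int>& length, int nSides)
{
    int n = int(length.size());
    GRBEnv env;
    GRBModel model(env);
    // x[i][j] = 1 if track i is assigned to side j
    std::vector<std::vector<GRBVar>> x(n, std::vector<GRBVar>(nSides));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < nSides; j++)
            x[i][j] = model.addVar(0.0, 1.0, 0.0, GRB_BINARY, "x");
    // z = maximum side length; objective coefficient 1.0 plus the default
    // minimization sense makes the solver minimize z
    GRBVar z = model.addVar(0.0, GRB_INFINITY, 1.0, GRB_CONTINUOUS, "z");
    model.update();                     // make the new variables usable in constraints
    for (int i = 0; i < n; i++) {       // each track goes on exactly one side
        GRBLinExpr assigned;
        for (int j = 0; j < nSides; j++) assigned += x[i][j];
        model.addConstr(assigned == 1.0);
    }
    for (int j = 0; j < nSides; j++) {  // z >= length of every side
        GRBLinExpr sideLen;
        for (int i = 0; i < n; i++) sideLen += double(length[i]) * x[i][j];
        model.addConstr(sideLen <= z);
    }
    model.optimize();
    std::vector<int> side(n);           // read back the chosen side of each track
    for (int i = 0; i < n; i++)
        for (int j = 0; j < nSides; j++)
            if (x[i][j].get(GRB_DoubleAttr_X) > 0.5) side[i] = j;
    return side;
}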
