Iterating massive CSVs for comparisons


I have two very large CSV files that will only continue to get larger with time. The documents I'm using to test are 170 columns wide and roughly 57,000 rows. This uses data from 2018 to now; ideally the end result will be able to handle CSVs with data going as far back as 2008, which will make the files massive.

Currently I'm using Univocity, but the creator has not been answering questions for quite some time and their website has been down for weeks, so I'm open to changing parsers if need be.

Right now I have the following code:

public void test() throws IOException {
    CsvParserSettings parserSettings = new CsvParserSettings();
    parserSettings.setLineSeparatorDetectionEnabled(true);
    parserSettings.setHeaderExtractionEnabled(false);
    CsvParser sourceParser = new CsvParser(parserSettings);
    sourceParser.beginParsing(sourceFile);

    Writer writer = new OutputStreamWriter(new FileOutputStream(outputPath), StandardCharsets.UTF_8);
    CsvWriterSettings writerSettings = new CsvWriterSettings();
    CsvWriter csvWriter = new CsvWriter(writer, writerSettings);
    csvWriter.writeRow(headers);

    String[] sourceRow;
    String[] compareRow;
    while ((sourceRow = sourceParser.parseNext()) != null) {
        // Re-open the compare file for every source row, since row order is not guaranteed.
        CsvParser compareParser = new CsvParser(parserSettings);
        compareParser.beginParsing(Path.of("src/test/resources/" + compareCsv + ".csv").toFile());
        while ((compareRow = compareParser.parseNext()) != null) {
            if (Arrays.equals(sourceRow, compareRow)) {
                break;
            } else if (compareRow[KEY_A].trim().equals(sourceRow[KEY_A].trim()) &&
                       compareRow[KEY_B].trim().equals(sourceRow[KEY_B].trim()) &&
                       compareRow[KEY_C].trim().equals(sourceRow[KEY_C].trim())) {
                // Same keys but differing values: write out only the differences.
                for (String[] result : getOnlyDifferentValues(sourceRow, compareRow)) {
                    csvWriter.writeRow(result);
                }
                break;
            }
        }
        compareParser.stopParsing();
    }
}

This all works exactly as I need it to, but as you can tell it takes forever. I'm stopping and restarting the parsing of the compare file because order is not guaranteed in these files, so what is in row 1 of the source CSV could be in row 52,000 of the compare CSV.

The Question:

How do I get this faster? Here are my requirements:

Print row under following conditions:

  • KEY_A, KEY_B, KEY_C are equal but any other column is not equal
  • Source row is not found in compare CSV
  • Compare row is not found in source CSV

Presently I only have the first requirement working, but I need to tackle the speed issue first and foremost. Also, if I try to parse the file into memory I immediately run out of heap space and the application laughs at me.

Thanks in advance.

CodePudding user response:

Also, if I try to parse the file into memory I immediately run out of heap space

Have you tried increasing the heap size? You don't say how large your data file is, but 57,000 rows * 170 columns * 100 bytes per cell ≈ 1 GB, which should pose no difficulty on modern hardware. Then you can keep the comparison file in a HashMap for efficient lookup by key.
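For illustration, here is a minimal sketch of that approach (run with a larger heap, e.g. -Xmx4g). It reuses the parserSettings, csvWriter, KEY_A/KEY_B/KEY_C indices and getOnlyDifferentValues(...) from your code, so it isn't self-contained; the null-byte key separator and the use of remove() to track leftovers are my own assumptions, as is the assumption that the key combination is unique within each file.

// Needs java.util.Map and java.util.HashMap in addition to the univocity imports already in use.
Map<String, String[]> compareByKey = new HashMap<>();

// Load the compare file once, keyed by the three trimmed key columns.
CsvParser compareParser = new CsvParser(parserSettings);
compareParser.beginParsing(Path.of("src/test/resources/" + compareCsv + ".csv").toFile());
String[] row;
while ((row = compareParser.parseNext()) != null) {
    String key = row[KEY_A].trim() + "\u0000" + row[KEY_B].trim() + "\u0000" + row[KEY_C].trim();
    compareByKey.put(key, row);
}

// Stream the source file and look each row up by key.
CsvParser sourceParser = new CsvParser(parserSettings);
sourceParser.beginParsing(sourceFile);
String[] sourceRow;
while ((sourceRow = sourceParser.parseNext()) != null) {
    String key = sourceRow[KEY_A].trim() + "\u0000" + sourceRow[KEY_B].trim() + "\u0000" + sourceRow[KEY_C].trim();
    String[] compareRow = compareByKey.remove(key);    // remove, so leftovers are compare-only rows
    if (compareRow == null) {
        csvWriter.writeRow(sourceRow);                 // requirement 2: source row not found in compare CSV
    } else if (!Arrays.equals(sourceRow, compareRow)) {
        for (String[] result : getOnlyDifferentValues(sourceRow, compareRow)) {
            csvWriter.writeRow(result);                // requirement 1: same keys, differing values
        }
    }
}

// Whatever is still in the map was never matched by a source row.
for (String[] leftover : compareByKey.values()) {
    csvWriter.writeRow(leftover);                      // requirement 3: compare row not found in source CSV
}

This makes a single pass over each file and covers all three of your output conditions.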

Alternatively, you could import the CSVs into a database and make use of its join algorithms.
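If you go that route, a rough sketch with an embedded database such as H2 (which can load a CSV directly into a table via CSVREAD) could look like the following. The JDBC URL, file names, table names and column names here are illustrative rather than taken from your setup, and you'd need the H2 driver plus java.sql imports.

// import java.sql.Connection; import java.sql.DriverManager;
// import java.sql.ResultSet; import java.sql.Statement;
try (Connection conn = DriverManager.getConnection("jdbc:h2:./csvdiff");
     Statement st = conn.createStatement()) {
    // Load each CSV into its own table; H2 reads the header row for column names.
    st.execute("CREATE TABLE source AS SELECT * FROM CSVREAD('source.csv')");
    st.execute("CREATE TABLE compare AS SELECT * FROM CSVREAD('compare.csv')");
    st.execute("CREATE INDEX idx_compare_keys ON compare(KEY_A, KEY_B, KEY_C)");

    // Rows present in source but missing from compare (requirement 2); swap the tables
    // for requirement 3, and compare the non-key columns of matched rows for requirement 1.
    try (ResultSet rs = st.executeQuery(
            "SELECT s.* FROM source s LEFT JOIN compare c " +
            "ON s.KEY_A = c.KEY_A AND s.KEY_B = c.KEY_B AND s.KEY_C = c.KEY_C " +
            "WHERE c.KEY_A IS NULL")) {
        while (rs.next()) {
            // write the row out, e.g. with the existing csvWriter
        }
    }
}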

Or, if you'd rather reinvent the wheel while scrupulously avoiding memory use, you could first sort the CSVs (by partitioning them into chunks small enough to sort in memory, then doing a k-way merge of the sorted sublists) and then do a merge join. But the other solutions are likely to be a lot easier to implement :-)
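For completeness, the final merge-join step of that pipeline could look roughly like the sketch below. It assumes both files have already been externally sorted by the three key columns and that the key combination is unique within each file; compareKeys(...) is a hypothetical helper that compares the trimmed key columns lexicographically, and the parsers, writer and getOnlyDifferentValues(...) are the ones from the question.

String[] s = sourceParser.parseNext();
String[] c = compareParser.parseNext();
while (s != null || c != null) {
    // Treat an exhausted side as "greater" so the other side drains to the end.
    int cmp = (s == null) ? 1 : (c == null) ? -1 : compareKeys(s, c);
    if (cmp < 0) {
        csvWriter.writeRow(s);              // source row with no matching key in compare
        s = sourceParser.parseNext();
    } else if (cmp > 0) {
        csvWriter.writeRow(c);              // compare row with no matching key in source
        c = compareParser.parseNext();
    } else {
        if (!Arrays.equals(s, c)) {
            for (String[] result : getOnlyDifferentValues(s, c)) {
                csvWriter.writeRow(result); // same keys, differing values
            }
        }
        s = sourceParser.parseNext();
        c = compareParser.parseNext();
    }
}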
