How to add name of csv files as values in a column while merging 1000 files?


I am trying to merge 1000 csv files using the following code:

import glob
import shutil

path = r'path_to_files/'
all_files = glob.glob(path + "/*.csv")

with open('updated_thirteen_jan.csv', 'wb') as wfd:
    for f in all_files:
        with open(f, 'rb') as fd:
            # append each file's raw bytes to the combined output
            shutil.copyfileobj(fd, wfd)

I am using the above code to avoid RAM crashes, and it works fine. However, I would also like to do what the following code does:

import os
import glob
import pandas as pd

path = r'path_to_files/'
all_files = glob.glob(path + "/*.csv")
fields = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8']
li = []

first_one = True
for filename in all_files:

    if not first_one:  # if it is not the first csv file then skip the header row (row 0) of that file
        skip_row = [0]
    else:
        skip_row = []
        first_one = False

    df = pd.read_csv(filename, index_col=None, skiprows=skip_row, engine='python', usecols=fields)
    df = df[df['lang'] == 'en']  # note: 'lang' must be listed in fields (usecols) for this filter to work
    df['file_name'] = os.path.basename(filename)  # record which file each row came from

    li.append(df)

frame = pd.concat(li, axis=0, ignore_index=True)

From this code, I would like to keep the column selection (fields), the row skipping, and the addition of file_name as a column value.

Any guidance please?

CodePudding user response:

If memory is the constraint, then one pandas-based solution is to iterate over chunks of rows:

import os

import pandas as pd

print(pd.__version__)
# works with this version: '1.3.4'

# gen sample files
all_files = [f"{_}.csv" for _ in range(3)]
for filename in all_files:
    df = pd.DataFrame(range(3))
    df.to_csv(filename, index=False)

# combine into one
mode = "w"
header = True
for filename in all_files:
    with pd.read_csv(
        filename,
        engine="python",
        iterator=True,
        chunksize=10_000,
    ) as reader:
        filename = os.path.basename(filename)
        for df in reader:
            df["file_name"] = filename
            # the first chunk overwrites the output and writes the header;
            # every later chunk is appended without one
            df.to_csv("some_file.csv", index=False, mode=mode, header=header)
            mode = "a"
            header = False
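
The same chunked loop can also handle the column selection and the 'en' filter from the question; here is a minimal sketch, assuming the placeholder column names from the question plus a 'lang' column (adjust fields to the real headers). Because each file is read with its own header, the skip-row logic is not needed:

import glob
import os

import pandas as pd

path = r'path_to_files/'
all_files = glob.glob(path + "/*.csv")
# 'lang' must be read in order to filter on it (assumed column names)
fields = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8', 'lang']

mode = "w"
header = True
for filename in all_files:
    with pd.read_csv(
        filename,
        usecols=fields,       # column selection
        engine="python",
        iterator=True,
        chunksize=10_000,
    ) as reader:
        base = os.path.basename(filename)
        for df in reader:
            df = df[df["lang"] == "en"]   # row filtering
            df["file_name"] = base        # add the source file name
            df.to_csv("updated_thirteen_jan.csv", index=False, mode=mode, header=header)
            mode = "a"
            header = False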

CodePudding user response:

Another solution is to use dask:

# pip install dask
import dask.dataframe as dd

# dd.read_csv is mostly compatible with pd.read_csv options
# so can specify reading specific columns, etc.
ddf = dd.read_csv("some_path/*.csv")
ddf.to_csv('merged_file.csv', index=False, single_file=True)
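
To also cover column selection and the per-row file name, dd.read_csv accepts usecols (forwarded to pandas) and an include_path_column option that records each row's source path; a minimal sketch, assuming the placeholder column names from the question and a reasonably recent dask version:

# pip install dask
import dask.dataframe as dd

# assumed column names; 'lang' is read so it can be filtered on
fields = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8', 'lang']

ddf = dd.read_csv(
    "some_path/*.csv",
    usecols=fields,                    # column selection, as in pandas
    include_path_column="file_name",   # adds each row's source file path as a column
)
ddf = ddf[ddf["lang"] == "en"]         # row filtering
# note: the file_name column holds the full path; shorten it afterwards if needed
ddf.to_csv("merged_file.csv", index=False, single_file=True)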

CodePudding user response:

The good old csv module can process one row at a time, so memory will not be an issue. The following code concatenates the csv files, keeping only the first header and adding a filename column populated with each file's path.

import csv
import glob

path = r'path_to_files/'
all_files = glob.glob(path + "/*.csv")

with open('updated_thirteen_jan.csv','w', newline='') as wfd:
    wr = csv.writer(wfd)
    first = True
    for f in all_files:
        with open(f) as fd:
            rd = csv.reader(fd)
            # skip header line, except for the first file
            row = next(rd)
            if first:
                row.append('filename')
                wr.writerow(row)
                first = False
            for row in rd:
                row.append(f)
                wr.writerow(row)
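
If the column selection and the 'en' filter are needed as well, csv.DictReader and csv.DictWriter keep it memory-friendly; a minimal sketch with the placeholder column names from the question (assumed, including a 'lang' column):

import csv
import glob
import os

path = r'path_to_files/'
all_files = glob.glob(path + "/*.csv")
fields = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8', 'lang']

with open('updated_thirteen_jan.csv', 'w', newline='') as wfd:
    wr = csv.DictWriter(wfd, fieldnames=fields + ['filename'], extrasaction='ignore')
    wr.writeheader()
    for f in all_files:
        with open(f, newline='') as fd:
            for row in csv.DictReader(fd):
                if row.get('lang') != 'en':        # row filtering
                    continue
                row['filename'] = os.path.basename(f)
                wr.writerow(row)                   # columns outside fieldnames are dropped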

CodePudding user response:

Read one file at a time into a pandas dataframe, add the new column to it, and append it to a single output file.

import os
import glob
import pathlib

import pandas as pd

path = 'path_to_files/'
out_file = 'updated_thirteen_jan.csv'
all_files = glob.glob(path + '*.csv')
all_files = sorted([pathlib.Path(i) for i in all_files])

keep_cols = ['list', 'of', 'columns', 'to', 'keep']
skip_row = 2  # number of rows to skip

for fn in all_files:
    temp = pd.read_csv(fn, usecols=keep_cols, skiprows=skip_row)
    temp['filename'] = fn.stem
    temp.to_csv(out_file, mode='a', index=False, header=not os.path.isfile(out_file))

If reading an entire csv into memory isn't feasible, then use chunksize. Adjust this value to your machine's capacity.

for fn in all_files:
    reader = pd.read_csv(fn, usecols=keep_cols, skiprows=skip_row, chunksize=5000)
    for idx, df in enumerate(reader):
        df['filename'] = fn.stem
        df.to_csv(out_file, mode='a', index=False, header=not os.path.isfile(out_file))
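
One caveat with header=not os.path.isfile(out_file): if the output file is left over from a previous run, no header is written and new rows are appended after the old ones. A small guard before the loop avoids that:

import os

out_file = 'updated_thirteen_jan.csv'
# start from a clean output file so the header check behaves as intended
if os.path.isfile(out_file):
    os.remove(out_file)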