I use .py files on two different PCs and sync the files using Google Drive. As I often handle files in subfolders, I use the complete path to read CSVs:
passport = pd.read_csv(r'C:\Users\turbo\Google Drive\Studium\Master_thesis\Python\Databases\passport_uzb.csv')
However, when switching PCs I have to change the path manually, since on my second PC it's:
C:\Users\turbo\My Drive\Studium\Master_thesis\Python\Databases
So the only real difference is 'Google Drive' vs. 'My Drive'.
Is there a workaround for using the complete file path to read files?
CodePudding user response:
You can use a relative path to access the CSV instead of an absolute one. The pathlib module is useful for this. For example, assuming your script is directly inside the ...Python/Databases folder, you can compute the path to the CSV like so, using the __file__ module attribute:
import pandas as pd
from pathlib import Path
# this is a path object that always refers to the script in which it is defined
this_file = Path(__file__).resolve()
# this path refers to .../Python/Databases, no matter where it is located
this_folder = this_file.parent
csv_path = this_folder / "passport_uzb.csv"
passport = pd.read_csv(csv_path)
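If you want to confirm this resolves correctly on both machines, a quick optional check might look like this (the prints are just a sketch):
# this_folder will point at whichever Drive folder exists on that PC
print(this_folder)
print(csv_path.exists())  # should print True on both PCs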
Edit: no matter where your script is located, you can use some combination of .parent and / "child" to construct a relative path that will work. If your script is in ...Python/Databases/nasapower, then simply add another .parent:
this_file = Path(__file__).resolve()
nasapower_folder = this_file.parent           # .../Python/Databases/nasapower
databases_folder = nasapower_folder.parent    # .../Python/Databases
Or you can use the .parents sequence to get there faster:
databases_folder = Path(__file__).resolve().parents[1]
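To see how .parent and .parents line up, here is a small illustrative sketch (the script path below is hypothetical):
from pathlib import Path

script = Path(r"C:\Users\turbo\My Drive\Studium\Master_thesis\Python\Databases\nasapower\script.py")
print(script.parent)      # ...\Python\Databases\nasapower (same as script.parents[0])
print(script.parents[1])  # ...\Python\Databases
print(script.parents[2])  # ...\Python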
Likewise, for the output folder:
output_folder = Path(__file__).resolve().parent / "json"
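If the json folder might not exist yet on the other PC, you can create it before writing to it; the file name below is only a placeholder:
import json
from pathlib import Path

output_folder = Path(__file__).resolve().parent / "json"
output_folder.mkdir(parents=True, exist_ok=True)  # create the folder if it is missing

# placeholder output file, just to show writing into the derived folder
with open(output_folder / "output.json", "w") as f:
    json.dump({"status": "ok"}, f)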