SQL vs PySpark/Spark SQL


Could someone please help me understand why we need to use PySpark or Spark SQL, etc., if the source and the target of my data are the same DB?

For example, let's say I need to load data into table X in a Postgres DB from tables X and Y. Would it not be simpler and faster to just do it in Postgres instead of using Spark SQL or PySpark?

I understand the need for these solutions when the data comes from multiple sources, but if it all comes from the same source, do I need to use PySpark?
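For context, here is a minimal sketch of the "just do it in Postgres" approach described above. The table names x and y come from the question; the connection details, columns, and join key are hypothetical, and psycopg2 is only used here as a convenient client (the same statement could be run from psql or any other tool):

```python
import psycopg2

# Hypothetical connection details; adjust to your environment.
conn = psycopg2.connect(
    host="localhost", dbname="mydb", user="etl_user", password="secret"
)

# The whole load stays inside Postgres: a single INSERT ... SELECT,
# so no data ever leaves the database server.
with conn, conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO x (id, amount)
        SELECT y.id, y.amount
        FROM y
        JOIN x ON x.id = y.id
        """
    )

conn.close()
```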

CodePudding user response:

You can use Spark when you need heavy data transformations; its distributed processing makes large loads and transformations easier to handle.

It really depends on how large the data is and how you want to transform it.

Staying in Postgres is a good idea if the data is relatively small and little or no transformation is required.

CodePudding user response:

It is not necessary to use PySpark. Both PySpark and Spark SQL have their value when managing and manipulating large volumes of data (a few hundred GBs, TBs, or PBs) in a distributed computing setup. If that is your case, use PySpark: it will be more efficient for loading, manipulating, and processing/shaping the data before inserting it into another table.
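For illustration, a minimal PySpark sketch of that kind of pipeline: read the tables from Postgres over JDBC, transform them across the cluster, and write the result back. The JDBC URL, credentials, column names, and target table are all hypothetical placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical connection details for the example.
jdbc_url = "jdbc:postgresql://localhost:5432/mydb"
props = {"user": "etl_user", "password": "secret", "driver": "org.postgresql.Driver"}

spark = (
    SparkSession.builder
    .appName("postgres-etl-sketch")
    # The Postgres JDBC driver must be available to Spark.
    .config("spark.jars.packages", "org.postgresql:postgresql:42.7.3")
    .getOrCreate()
)

# Read the source tables; Spark distributes the work across executors.
x = spark.read.jdbc(url=jdbc_url, table="x", properties=props)
y = spark.read.jdbc(url=jdbc_url, table="y", properties=props)

# Illustrative "heavy" transformation: join and aggregate.
result = (
    x.join(y, "id")
     .groupBy("id")
     .agg(F.sum("amount").alias("total_amount"))
)

# Write the shaped data back into Postgres.
result.write.jdbc(url=jdbc_url, table="x_target", mode="append", properties=props)

spark.stop()
```

For a single small-to-medium table-to-table load like the one in the question, this setup is more moving parts than a plain INSERT ... SELECT; the distributed read/transform/write only pays off at the data volumes mentioned above.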
