I’m sorry… I still don’t understand. I thought if I sampled it, it would be faster? Isn’t that what people do with large datasets? And if it’s like you say, what’s the alternative during the development phase? I can’t really wait 15 minutes between instructions (if I want to keep my job, haha).
So, if I take a sample and save it to disk with df.write.parquet(…, it will become a separate entity from the original table, right?
Sorry, you must find these questions trivial, but for a newbie like me your answers are super helpful.