If I have, for example, a (multi-task) Databricks job with 3 tasks in series and the second one fails, is there a way to restart from the second task instead of running the whole pipeline again?
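For reference, a job like this (three tasks chained with `depends_on`) could be defined roughly as follows through the Jobs API; the job name, task keys, and notebook paths here are hypothetical, and cluster settings are omitted for brevity:

```json
{
  "name": "example-three-task-job",
  "tasks": [
    {
      "task_key": "task_1",
      "notebook_task": { "notebook_path": "/Jobs/step1" }
    },
    {
      "task_key": "task_2",
      "depends_on": [ { "task_key": "task_1" } ],
      "notebook_task": { "notebook_path": "/Jobs/step2" }
    },
    {
      "task_key": "task_3",
      "depends_on": [ { "task_key": "task_2" } ],
      "notebook_task": { "notebook_path": "/Jobs/step3" }
    }
  ]
}
```

Each task declares its predecessor via `depends_on`, so the three tasks run strictly in series.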
CodePudding user response:
Right now this is not possible, but if you look at Databricks' Q3 public roadmap, there are items around improving multi-task jobs, so this capability may be coming.