• Design and develop data pipelines using Databricks, Spark SQL, and Scala/Python/Java.
• Create ETL processes to extract, transform, and load data from various sources.
• Perform data analysis and data modeling on large datasets.
• Monitor and optimize data pipelines for efficiency and performance.
• Collaborate with other data engineers and data scientists to ensure the accuracy of data.
• Troubleshoot and debug data pipelines and data quality issues.
• Work with business stakeholders to ensure data pipelines meet their needs.
• Develop Power BI reports leveraging datasets available in the Databricks lakehouse platform.
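The ETL responsibilities above follow a standard extract-transform-load pattern. A minimal, library-free Python sketch of that pattern is shown below; the column names (`name`, `amount`), the sample data, and the in-memory "load" step are illustrative assumptions only — a real Databricks pipeline would read from source systems and write to lakehouse tables.

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize names and cast amounts to floats.
    Column names are hypothetical, for illustration only."""
    return [
        {"name": r["name"].strip().title(), "amount": float(r["amount"])}
        for r in rows
    ]

def load(rows):
    """Load: aggregate into a summary dict. A production pipeline
    would instead write to a warehouse or lakehouse table."""
    return {"row_count": len(rows),
            "total_amount": sum(r["amount"] for r in rows)}

# Illustrative end-to-end run on sample data.
raw = "name,amount\n alice ,10.5\nBOB,4.5\n"
summary = load(transform(extract(raw)))
print(summary)  # {'row_count': 2, 'total_amount': 15.0}
```

Keeping each stage as a separate function makes the pipeline easy to test and monitor stage by stage, which mirrors how the troubleshooting and optimization duties above are typically carried out.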