Location: Plano, TX (100% Remote)
Overview:
10+ years of work experience with expertise in data engineering and enterprise data warehousing.
Strong experience with Databricks (PySpark), ETL (Talend and Informatica Cloud), programming (Python), AWS, SQL (Azure SQL, SQL Server, Teradata, Redshift, and PostgreSQL), scripting (Unix, PowerShell), and data modeling, along with solid business acumen.
Job Duties:
Design, develop, and maintain scalable and robust applications using Databricks, Python, SQL, and various AWS technologies.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and data lakes.
Optimize and troubleshoot existing data pipelines for performance and reliability.
Ensure data quality and integrity across various data sources.
Implement data security and compliance best practices.
Monitor data pipeline performance and conduct necessary maintenance and updates.
Document data pipeline processes and technical specifications.
Mandatory Skills:
10+ years of experience in data engineering, data modeling, and data warehousing.
Experience and proficiency with Databricks and PySpark.
10+ years of SQL experience, including work with relational databases (Azure SQL, SQL Server, Teradata, Redshift, and PostgreSQL).
Strong grasp of data warehousing concepts and ETL processes (Talend and Informatica Cloud).
Experience with big data tools such as Kafka, Spark, and Hadoop.
Experience with scripting on Unix/Linux/CentOS: shell, PowerShell, Perl, Python, and regular expressions.
Hands-on experience with AWS services including S3, Lambda, API Gateway, and SQS.
Strong skills in data engineering on AWS, with proficiency in Python.
Experience with batch job scheduling and managing data dependencies.
Excellent problem-solving and analytical skills.
Basic Qualifications:
Bachelor’s degree in Computer Science, Engineering, MIS, or a related field.
Nice to have:
Experience with AWS Big Data services like Amazon EMR and Kinesis.
Familiarity with data visualization tools such as Tableau or Power BI.
Knowledge of containerization technologies like Docker and Kubernetes.