Data Engineer
Location: Dallas, TX (Hybrid – 3 days onsite)
Duration: Long-term Project | Interview: In-person
Join our growing data team working on exciting cloud migration and real-time data projects. We’re looking for a hands-on Data Engineer who can build reliable pipelines, work with cloud tooling, and help us manage and process data at scale.
Roles and Responsibilities:
- Migrate data from on-prem Teradata to AWS Redshift with zero data loss.
- Build batch and real-time data pipelines using Python.
- Set up and manage Kafka on AWS for streaming data.
- Automate ETL workflows using AWS Glue and store data in S3.
- Design data solutions in Redshift for fast querying.
- Create reusable frameworks to reduce development time and improve code quality.
- Build dashboards and reports for client data insights.
- Use GitLab CI/CD for testing and deployments.
- Work with AWS tools like Lambda, SQS, SNS, Glue, DynamoDB, OpenSearch, and more.
- Develop tools to generate mock data (JSON, CSV, XML); see the sketch after this list.
- Deploy containerized apps using Kubernetes.
- Follow best practices to ensure data security and compliance.
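To give candidates a feel for the day-to-day work, here is a minimal Python sketch of a mock-data utility that writes JSON Lines to S3 with boto3. The bucket name, object key, and record schema are placeholders for illustration, not project specifics.

import json
import uuid
from datetime import datetime, timezone

import boto3  # AWS SDK for Python


def make_mock_record() -> dict:
    """Build one illustrative mock event; this schema is hypothetical."""
    return {
        "event_id": str(uuid.uuid4()),
        "event_time": datetime.now(timezone.utc).isoformat(),
        "amount": 42.50,
        "status": "ok",
    }


def upload_mock_batch(bucket: str, key: str, n: int = 100) -> None:
    """Serialize n mock records as JSON Lines and upload them to S3."""
    body = "\n".join(json.dumps(make_mock_record()) for _ in range(n))
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))


if __name__ == "__main__":
    # "my-data-lake-bucket" is a placeholder; point this at a real bucket.
    upload_mock_batch("my-data-lake-bucket", "mock/events.jsonl")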
Qualifications:
- Strong experience with Python and AWS cloud technologies.
- Good understanding of Kafka, ETL, and data warehouses (see the producer sketch after this list).
- Familiarity with CI/CD, boto3, and Kubernetes is a big plus.
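For the Kafka point above, here is a minimal producer sketch using the kafka-python client; the broker address, topic name, and payload are illustrative assumptions, not details of our stack.

import json

from kafka import KafkaProducer  # pip install kafka-python

# Broker and topic are placeholders for a real Kafka/MSK cluster.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("events", {"event_id": 1, "status": "ok"})
producer.flush()  # block until buffered messages are delivered
producer.close()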