Dallas job – local candidates only

Contract

Tachyon Technology

Hi,

 

Hope you are doing well.

 

Please find below our high-priority C2C contract requirements. We are looking for senior candidates for these positions.

All positions are Irving, TX (Hybrid), Long-Term Contracts, and require senior-level consultants.

 

Experience level – 14+ years

Kindly share suitable candidates along with the following details for each candidate:
Full Name | Contact | Email | Visa | LinkedIn | Location | Rate | Availability

 

 

1. Senior Data Engineer – Database Specialist (Heavy DB Focus) – C2C – 1 Position

Location: Remote / Hybrid (as per project) | Duration: Long-term Contract (C2C)

Key Responsibilities:

Design, develop, and optimize large-scale data models and databases on AWS (RDS PostgreSQL, Aurora)
Performance tuning, partitioning, indexing, and query optimization for high-volume transactional and analytical workloads
Implement data warehousing / data lakehouse patterns (S3 + PostgreSQL + Athena/Redshift exposure)
Build and maintain dashboards & reports (QuickSight, Tableau, or Power BI)
Ensure data quality, governance, security, and backup/recovery strategies

Must-Have Skills:

8+ years in Data Engineering with strong RDBMS expertise
Expert-level PostgreSQL (or Aurora PostgreSQL) – complex SQL, PL/pgSQL, performance tuning
AWS S3, Glue Catalog, Athena
Python for scripting and data manipulation
Dashboards / Reporting tools (QuickSight preferred)
Solid understanding of data modeling (3NF, Dimensional modeling)

Good to Have:

Glue ETL, Spark (PySpark)
Exposure to Step Functions or Airflow
 

2. Senior Data Engineer – Integration & Orchestration Specialist – C2C – 1 Position

Location: Remote / Hybrid | Duration: Long-term Contract (C2C)

Key Responsibilities:

Design and implement end-to-end data pipelines and integration workflows
Build and maintain complex ETL/ELT jobs using AWS Glue and custom Python/Spark
Orchestrate workflows using AWS Step Functions and/or Apache Airflow (MWAA)
Integrate with multiple source systems using Boomi and native AWS services
Ensure scalability, monitoring, error handling, and retry logic in pipelines

Must-Have Skills:

8+ years in Data Integration & Orchestration
AWS Glue ETL (Crawlers, Jobs, PySpark)
Python + PySpark (large-scale data processing)
AWS Step Functions (state machines, complex workflows)
Apache Airflow (MWAA) – DAG authoring and management
Boomi integration experience (iPaaS)
S3 data lake architecture

Good to Have:

PostgreSQL exposure
CI/CD for data pipelines
 

3. Senior MLOps Engineer / ML Engineer (SageMaker Heavy) – Lead/Architect Level – 1 Position

Location: Remote / Hybrid | Duration: Long-term Contract

Key Responsibilities:

Architect and implement end-to-end MLOps pipelines on AWS SageMaker
Productionize models (training, deployment, monitoring, retraining)
Build feature stores, model registries, and CI/CD for ML
Work with data scientists on time-series forecasting models

Must-Have Skills:

8–12+ years overall, 4+ years hands-on MLOps
AWS SageMaker (full lifecycle – notebooks, training jobs, endpoints, pipelines, model monitor)
SageMaker Feature Store, Clarify, Model Registry
Python (pandas, numpy, scikit-learn, etc.)
Jupyter Notebooks (including SageMaker Studio)
MLOps best practices (GitOps, automated retraining, drift detection)
Exposure to time-series algorithms (Prophet, ARIMA, LSTM, CNN-QR, etc.)

Good to Have:

Terraform / CloudFormation for ML infrastructure
Kubeflow or similar experience
 

4. Senior Software Engineer / Full-Stack Developer – 1 Position

Location: Remote / Hybrid | Duration: Long-term Contract

Key Responsibilities:

Develop full-stack applications supporting the data & ML platform
Build internal tools, UIs for data exploration, model monitoring dashboards, and operational portals
Serverless and traditional backend development

Must-Have Skills:

Python (FastAPI, Flask, or Django)
JavaScript / TypeScript + modern frameworks (React.js / Next.js preferred)
AWS Lambda, API Gateway, EC2
PostgreSQL (query writing, basic administration)
UI/UX design sensibility (responsive, clean dashboards)
REST/GraphQL APIs

Good to Have:

AWS Amplify / AppSync
Experience building data-centric internal tools
 

5. Cloud Infrastructure Engineer – Deployment & Infra Lead

Location: Remote / Hybrid | Duration: Long-term Contract

Key Responsibilities:

Own the cloud infrastructure and deployment strategy for the entire platform
Implement IaC, CI/CD pipelines, security, and backup strategies
Support SageMaker, Lambda, EC2, RDS, and data pipeline deployments

Must-Have Skills:

8–12+ years in cloud infrastructure (AWS expert)
EC2, Lambda, RDS (PostgreSQL/Aurora), S3
SageMaker production deployments (endpoints, async, multi-model, etc.)
CloudFormation (or Terraform)
CI/CD (CodePipeline, GitHub Actions, Jenkins)
Security best practices (IAM, VPC, Encryption, WAF, GuardDuty)
Backup, DR, and monitoring (CloudWatch, backups)
 

 

Thank You!

Talent Acquisition Specialist

E: Javed.r@tachyontech.com 

