Role – DevOps Engineer
Location: Remote USA or Canada
#req 1: DevOps Engineer with automation using Groovy/Java/Python – strong in Python scripting, Kubernetes, Helm charts, Terraform, Jenkins, and TDD; a development background is mandatory
#req 2: DevOps Engineer – strong in Terraform, Ansible, Jenkins, and Kubernetes; a development background is mandatory
Responsibilities:
- Set up CI/CD pipelines for application deployments
- Work in the DevOps team to build new shared infrastructure services for the on-premises failover environment: S3, Kafka, and a data store
- Work with the DevOps team to establish connectivity between the new failover shared services and existing shared services: secrets, identity, LDAP, DNS, Artifactory, Jenkins, and Splunk
- Work with the DevOps team to automate the DR strategy: automated data replication between the cloud and the failover environment is required for all applications
- Continuously improve the processes and the DevOps team using thoughtful, calculated approaches to identify opportunities, and challenge those around you to strive for perfection.
- Ideate solutions to complex technical challenges; code, test, troubleshoot, debug, and document the solutions you develop. Use an agile software development model to produce well-designed programs, scripts, and tools required to provision, configure, and monitor new shared infrastructure services for the on-premises failover environment
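The DR data-replication responsibility above could be sketched as a small, testable check. This is a minimal illustration only, assuming each side's object store can be listed into a set of keys; the `diff_object_keys` helper and the sample key lists are hypothetical, not from the posting:

```python
def diff_object_keys(primary_keys, failover_keys):
    """Compare object-key listings from the primary (cloud) bucket and
    the failover (on-prem S3/Cloudian) bucket.

    Returns the keys still missing on the failover side and the keys
    present only on the failover side (possibly stale objects).
    """
    primary = set(primary_keys)
    failover = set(failover_keys)
    missing = sorted(primary - failover)   # not yet replicated to failover
    stale = sorted(failover - primary)     # removed upstream, still on failover
    return missing, stale


# Example: two listings as they might come back from an S3 list call.
missing, stale = diff_object_keys(
    ["app/config.yaml", "app/data.db", "app/logs/1.log"],
    ["app/config.yaml", "app/old.bak"],
)
```

In practice the listings would come from the cloud and failover S3 endpoints, and the result would feed a replication alert or reconciliation job.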
Must have:
- Expert at setting up CI/CD pipelines with DevOps tools such as Jenkins, Artifactory, Vault, SonarQube, GitHub, Terraform, Rancher, and Harness
- Experience deploying to Kubernetes using Rancher or Harness
- Experience deploying Kafka MRC and using monitoring tools such as LogicMonitor and Splunk
- Experience with AWS services, especially setting up S3 and AWS Artifactory, and with S3 replication
- Experience with Kubernetes (k8s) and S3/Cloudian for shared infrastructure
- Specialization in cloud infrastructure modernization, virtualization, data center setup, DR/BC strategies, and DevOps
- Experience with on-premises data center operations and AWS-hosted data center and operations management
- Experience piloting Kafka data replication between AWS and on-prem environments
- Experience piloting a SQL database (CockroachDB) running in AWS and on-prem, with data replication between the two
- Hands-on experience with migration tools
- Experience with continuous deployment tools, techniques, and automation frameworks – especially Terraform Enterprise and Ansible.
- Hands-on experience writing testable scripts using Python or other languages.
- Experience managing helm charts and deploying into Kubernetes (k8s)
- Expertise with monitoring related tools and frameworks like Splunk, LogicMonitor, SignalFX, and Prometheus.
- Worked on projects involving the deployment and management of microservices and hybrid cloud/on-prem infrastructure
- Intermediate working knowledge of development tools like Maven/Gradle, Java, and distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
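The "testable scripts using Python" requirement above could look like the following plain-stdlib sketch: a small, unit-testable helper that splits a container image reference into registry, repository, and tag, the kind of utility Helm/Kubernetes deployment automation often needs. The function name and parsing rules are illustrative assumptions, not part of the posting:

```python
def parse_image_ref(ref):
    """Split a container image reference into (registry, repository, tag).

    Hypothetical helper for illustration. Defaults mirror common Docker
    conventions: no registry component -> "docker.io", no tag -> "latest".
    """
    registry, repo = "docker.io", ref
    first = ref.split("/", 1)[0]
    # The first path component is a registry if it contains a dot,
    # a port colon, or is "localhost".
    if "/" in ref and ("." in first or ":" in first or first == "localhost"):
        registry, repo = ref.split("/", 1)
    tag = "latest"
    # A colon in the last path segment separates the tag.
    if ":" in repo.rsplit("/", 1)[-1]:
        repo, tag = repo.rsplit(":", 1)
    return registry, repo, tag


assert parse_image_ref("nginx:1.25") == ("docker.io", "nginx", "1.25")
assert parse_image_ref("registry.example.com/team/app") == ("registry.example.com", "team/app", "latest")
```

Because the logic is a pure function, it is straightforward to cover with unit tests, which is presumably the point of the "testable scripts" requirement.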
Good to have:
- Knowledge of setting up CI/CD pipelines for streaming Flink-based applications is an added advantage
Thanks
Yogeshsharma K,
Reveille Technologies, Inc
yogesh@reveilletechnologies.com