Role – Big Data-Python Engineer || Location: Malvern, PA

Note: Only H1B candidates are considered, and a passport copy is a must. The candidate must be local; please mention the candidate's location when submitting.

Role – Big Data-Python Engineer

Location: Malvern, PA

Skill matrix:

Skill                                                        | Years of Experience | Proficiency
-------------------------------------------------------------|---------------------|------------
Spark                                                        |                     |
Hive                                                         |                     |
Presto                                                       |                     |
Python                                                       |                     |
SQL                                                          |                     |
Developing distributed computing applications using PySpark  |                     |
Building APIs                                                |                     |
Trading and investment data (Preferred)                      |                     |

Responsibilities

Vanguard’s Trading Analytics and Strategy (TAS) team works with our global trading desks to optimize their trading strategies, saving millions of dollars for clients every year. The team partners with traders and portfolio managers across asset classes, covering both passive and active mandates, to conduct data-driven analyses and build tools that help shape our trading strategy. As an Engineer on the full-stack team, you will help design, implement, and maintain a modern, robust, and scalable platform that enables the TAS team to meet the increasing demands of the various trading desks.

 

Qualifications

Skillsets needed:

Proficiency in Python programming

Strong expertise in SQL, Presto, Hive, and Spark

Knowledge of trading and investment data

Experience in big data technologies such as Spark and developing distributed computing applications using PySpark

Experience with libraries for data manipulation and analysis, such as Pandas, Polars, and NumPy

Understanding of data pipelines, ETL processes, and data warehousing concepts

Strong experience in building and orchestrating data pipelines

Experience in building APIs

Ability to write, maintain, and execute automated unit tests in Python

Experience following Test-Driven Development (TDD) practices at all stages of software development

Extensive experience with key AWS services/components including EMR, Lambda, Glue ETL, Step Functions, S3, ECS, Kinesis, IAM, RDS PostgreSQL, DynamoDB, a time-series database, CloudWatch Events/EventBridge, Athena, SNS, SQS, and VPC

Proficiency in developing serverless architectures using AWS services

Experience with both relational and NoSQL databases

Skills in designing and implementing data models, including normalization, denormalization, and schema design

Knowledge of data warehousing solutions like Amazon Redshift

Strong analytical skills with the ability to troubleshoot data issues

Good understanding of source control, unit testing, test-driven development, and CI/CD

Ability to write clean, maintainable code and comprehend code written by others

Strong communication skills

Experience with OneTick or KDB is a must

Skillsets Preferred:

Proficiency in data visualization tools and ability to create visual representations of data, particularly using Tableau


