Data Engineer 2
Intuit
Company Overview
Intuit is the global financial technology platform that powers prosperity for the people and communities we serve. With approximately 100 million customers worldwide using products such as TurboTax, Credit Karma, QuickBooks, and Mailchimp, we believe that everyone should have the opportunity to prosper. We never stop working to find new, innovative ways to make that possible.
Job Overview
Come join the Unified Ingestion Platform (UIP) team within the A2D org as a Data Engineer 2. UIP is the designated paved platform at Intuit for data ingestion and movement from one hosting location to another. As a Data Engineer, you will work with cutting-edge technologies to create a world-class data movement platform. This is the place to be if your passion is building highly reliable, scalable ingestion capabilities on the cloud and pushing the boundaries of automation!
Responsibilities
- Design and build fault-tolerant capabilities to support batch and real-time ingestion at scale using open-source technologies.
- Design solutions that involve complex, multi-system and multi-cloud integration, possibly across BUs or domains.
- Own engineering end to end – design, development, testing, deployment, and operations.
- Work in a dynamic environment, adapting to business requirements using Agile methodologies and a DevOps culture.
- Conduct code reviews to ensure code quality, consistency, and adherence to best practices.
- Conduct quick proofs of concept (POCs) for feasibility studies and take them to production.
- Lead by example, demonstrating best practices for unit testing, CI/CD, performance testing, capacity planning, documentation, monitoring, alerting, and incident response.
Qualifications
- BE/B.Tech/MS in Computer Science (or equivalent)
- 2 to 5 years of experience in a Data Engineering role with solid software engineering knowledge.
- Strong CS fundamentals, including data structures, algorithms, and distributed systems.
- Strong problem-solving, decision-making, and analytical skills.
- Expert-level experience in designing high-throughput data solutions/services.
- Hands-on experience with AWS (EC2, EMR, S3, Athena, Kinesis, Lambda, etc.). Knowledge of GCP (Dataproc, GCS, BigQuery, etc.) is a plus.
- Strong programming knowledge in at least one of Java, Scala, or Python.
- Expert-level experience developing data pipelines/solutions using processing engines such as Hive, Spark, Spark Streaming, and Flink.
- Good knowledge of Lakehouse architecture for data persistence. Delta Lake, Iceberg, or Hudi knowledge is a plus.
- Adequate experience with RESTful web services and microservice architectures.