Staff Data Engineer - Trust and Safety

Intuit

Software Engineering, Data Science
New York, NY, USA
USD 197,000-266,500 / year + Equity
Posted on Dec 12, 2025

Category: Software Engineering | Location: New York, New York | Job ID: 18686

Company Overview

Intuit is the global financial technology platform that powers prosperity for the people and communities we serve. With approximately 100 million customers worldwide using products such as TurboTax, Credit Karma, QuickBooks, and Mailchimp, we believe that everyone should have the opportunity to prosper. We never stop working to find new, innovative ways to make that possible.

Job Overview

Come join the Trust & Safety Data Engineering Team in New York as a Staff Data Engineer (Staff Software Engineer role). We are leveraging big data technologies to develop solutions to detect and combat fraudulent activities initiated by external users across all Intuit’s products. Some of the technologies we are leveraging include Apache Spark, Athena, Redshift, Kafka, Hive, S3, EMR, AWS, and GCP. We foster an open team environment where we value direct interactions and working code above working in a cave.

You will lead the entire product lifecycle for multiple big data solutions that are broad in scope and complexity, applying a full understanding of software engineering methodologies and industry best practices for data products. You will work with industry experts, including Senior, Staff, and Principal Engineers, and apply specialized knowledge to develop and maintain Intuit's data solutions.


Responsibilities

  • Own the tech vision and strategy for multiple domains or capabilities

  • Drive Strategic Impact. Focus on the direct and strategic application of data to deliver measurable business outcomes, even if it means working with imperfect data to capture significant value.

  • Hands-on development in all phases of the software life cycle.

  • Build batch and real-time/streaming data pipelines that are able to handle large volumes of data

  • Design data models

  • Gather data needs from internal customers, such as product managers and analysts, and translate those requirements into a working big data solution design.

  • Partner with product managers, fraud analysts, data scientists, data architects, software engineers, and other data engineers from around the world.

  • Participate in design and code reviews

  • Design and develop ETL jobs across multiple big data platforms and tools, including S3, EMR, Hive, Spark SQL, and PySpark.

  • Rapidly fix bugs and solve problems; effectively remediate defects

  • Clean, transform and validate data for use in analytics and reporting

  • Proactively monitor data quality and pipeline performance, troubleshoot and resolve data issues

  • Collaborate effectively with senior engineers and architects to solve problems spanning their respective areas to deliver end-to-end quality in our technology and customer experience.

  • Help align work to overall strategies and reconcile competing priorities across the organization.

  • Contribute to the design and architecture of projects across the data landscape.

  • Actively stay abreast of industry best practices, share learnings, and experiment and apply cutting edge technologies while proactively identifying opportunities to enhance software applications with AI technology

  • Participate in a scheduled on-call rotation to handle production incidents outside of business hours

  • Mentor other team members and contribute to the growth of the team

  • Be available to work in a hybrid mode (3 days in the office and 2 days remote per week)


Qualifications

  • 8+ years of hands-on experience building big data solutions and ETL data pipelines using Apache Spark or similar technologies

  • BS or MS in Computer Science, Data Engineering, or a related field

  • Hands-on experience developing data solutions on the AWS platform (preferred), GCP, or Azure

  • Experience with Agile Development, SCRUM, and/or Extreme Programming methodologies

  • Expert in SQL, Python, and Linux, with strong working knowledge of XML, JSON, and YAML

  • Knowledgeable with tools and frameworks such as Docker, Spark, Jupyter Notebook, Databricks Notebook, Kubernetes, feature management platforms, and SageMaker

  • Advanced experience with a scripting language (Python or shell) is a must-have

  • Expert knowledge of software development methodologies and practices

  • Experience with cloud platforms such as AWS, Azure, or GCP, including AWS services like EC2, S3, EMR, and Redshift, or equivalent cloud computing approaches

  • Strong expertise in Data Warehousing and analytic architecture

  • Experience working with large data volumes and data visualization

  • Experience with low-latency NoSQL datastores (such as DynamoDB or HBase) is a plus

  • Experience with building stream-processing applications using Spark Streaming, Flink, etc. is a plus

  • Hands-on experience in AI

  • Solid communication skills

  • Demonstrated expertise in software design and architecture processes

  • Experience with unit testing and data quality automation checks

  • Results-oriented, self-motivated, accountable, and able to work under minimal supervision


Intuit provides a competitive compensation package with a strong pay-for-performance rewards approach. This position will be eligible for a cash bonus, equity rewards, and benefits, in accordance with our applicable plans and programs (see more about our compensation and benefits at Intuit®: Careers | Benefits). Pay offered is based on factors such as job-related knowledge, skills, experience, and work location. To drive ongoing fair pay for employees, Intuit conducts regular comparisons across categories of ethnicity and gender. The expected base pay range for this position is $197,000 - $266,500 per year.