Data Engineer, WW Ops Finance S&A
Amazon
Description
Are you passionate about standardizing data platforms and automating data engineering to drive analytics and reporting? Do you excel in dynamic, fast-paced environments and find joy in converting data into actionable insights? Are you adept at implementing data governance practices and defining data access and quality standards? If you thrive on innovation and can deliver scalable data engineering solutions, then the Worldwide Operations Finance Standardization & Automation (S&A) team has an exciting opportunity for you! We are seeking a customer-centric Data Engineer to establish a reliable and accessible data platform, ensuring Operations Finance customers have complete trust in the data, technology, and tools they use to make data-driven business decisions. We are looking for a top-notch Data Engineer to join our Global Data Delivery organization, working with one of the world's largest and most complex data warehouse environments.
As a Data Engineer in WW Ops S&A, you will design, implement, and support scalable data infrastructure solutions and build complex data models for the Amazon Customer Fulfillment business. You will create solutions that integrate with multiple heterogeneous data sources, aggregate and retrieve data quickly and safely, and curate data for reporting, analysis, GenAI models, and ad-hoc data requests. You should have excellent business and communication skills, enabling you to work with business owners, Finance and Product teams, and tech leaders to gather infrastructure requirements, design data infrastructure, and build the data pipelines and datasets needed to meet business goals. You will be responsible for developing and operating a data service platform using Python, Airflow, and SQL to build ETL, analytics, and data quality components. You'll automate deployments using AWS CodeDeploy, AWS CodePipeline, AWS Cloud Development Kit (CDK), and AWS CloudFormation. You will work with AWS services such as Redshift, Glue, S3, IAM, and CloudWatch. Strong experience in Data Warehouse and Business Intelligence application development, expert knowledge of SQL query optimization, and experience with programming languages such as Scala or Python are essential for this role.
Key job responsibilities
• Design, build, and maintain complex data solutions and ETL pipelines using Python, Spark, SQL, and AWS services (S3, Glue, Redshift, MWAA, EMR, Lambda)
• Develop high-quality data architecture and scalable pipelines to support customer reporting needs, while implementing and supporting analytics infrastructure for internal Finance customers
• Interface with technology teams to extract, transform, and load data from various sources, ensuring data quality and making appropriate trade-offs in design decisions
• Collaborate with business users, developers, and BI Engineers to deliver on data architecture projects and next-generation financial solutions
• Actively participate in code reviews, design discussions, and team planning, while continually improving reporting processes and automating self-service support
• Identify and implement process improvements to drive innovation, scale existing solutions, and create new ones based on stakeholder needs
• Diagnose and resolve operational issues through detailed root cause analysis, maintaining high standards of system availability and reliability
About the team
The Operations Finance data ecosystem provides global data management solutions, including scaling solutions that support 10+ data producer teams and 2,500 finance customers. In this role, you will work with BIE/DE teams across the organization, identifying critical customer pain points and helping to design, implement, and drive customers' data journeys. In the age of AI capabilities, you will identify new mechanisms to incorporate AI-driven workflows for both our builder and customer communities.