Data Engineer III - GBS IND

Bank of America

Data Science
Chennai, Tamil Nadu, India
Posted on Jul 28, 2025

Job Description:

About Us

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day.

One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being.

Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization.

Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services

Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations.

Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation.

In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*

The Data Analytics Strategy platform and decision tool team is responsible for the data strategy across all of CSWT and for developing the platforms that support it. The Data Science Platform, Graph Data Platform, and Enterprise Events Hub are the key platforms of the Data Platform initiative.

Job Description*

We're looking for a highly skilled Container Platform Engineer to architect, implement, and manage our cloud-agnostic Data Science and Analytical Platform. Leveraging OpenShift (or another Kubernetes distribution) as the core container orchestration layer, you'll build the scalable, secure infrastructure that ML workloads and shared services depend on. This role is key to establishing a robust hybrid architecture that paves the way for a seamless future migration to AWS, Azure, or GCP. You will work closely with data scientists, MLOps engineers, and platform teams to enable efficient model development, versioning, deployment, and monitoring within a multi-tenant environment.

Responsibilities*

  • Responsible for developing risk solutions to meet enterprise-wide regulatory requirements.
  • Monitor and manage large systems and platforms efficiently.
  • Contribute to story refinement and the definition of requirements.
  • Participate in estimating the work necessary to realize a story/requirement through the delivery lifecycle.
  • Mentor team members, advocate best practices, and promote a culture of continuous improvement and innovation in engineering processes.
  • Develop efficient utilities, automation frameworks, and data science platforms that can be used across multiple Data Science teams.
  • Propose and build a variety of efficient data pipelines to support ML model building and deployment.
  • Propose and build automated deployment pipelines that enable a self-service continuous deployment process for Data Science teams (a minimal sketch follows this list).
  • Analyze, understand, execute, and resolve issues in user scripts, models, and code.
  • Perform release and upgrade activities as required.
  • Stay well versed in open-source technology and keep abreast of emerging third-party technologies and tools in the AI/ML space.
  • Firefight, propose fixes, and guide the team through day-to-day issues in production.
  • Train partner Data Science teams on the platform and its frameworks.
  • Be flexible with time and shifts to support project requirements; this role does not include night shifts.
  • This position does not include any L1 or L2 (first or second line of support) responsibility.
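
To make the deployment-pipeline bullets above concrete, here is a minimal sketch of a model-promotion step, assuming MLflow as the model registry; the tracking URI and model name are hypothetical placeholders:

```python
# Minimal sketch: promote the latest registered model version to Production
# via the MLflow Model Registry. Names and URIs are illustrative only.
from mlflow.tracking import MlflowClient

TRACKING_URI = "http://mlflow.internal:5000"   # hypothetical endpoint
MODEL_NAME = "fraud-scoring"                   # hypothetical model name

client = MlflowClient(tracking_uri=TRACKING_URI)

# Pick the newest registered version of the model.
versions = client.search_model_versions(f"name='{MODEL_NAME}'")
latest = max(versions, key=lambda v: int(v.version))

# Promote it; a CI/CD job (e.g., triggered by ArgoCD or Jenkins) would
# typically run this step after automated validation gates pass.
client.transition_model_version_stage(
    name=MODEL_NAME, version=latest.version, stage="Production"
)
print(f"Promoted {MODEL_NAME} v{latest.version} to Production")
```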

Requirements*

Education*

  • Graduation / Post Graduation: BE/B.Tech/MCA/MTech

Certifications If Any: Azure, AWS, GCP, Databricks

Experience Range*

  • 9+ Years

Foundational Skills*

  • Platform Design & Deployment: Design and deploy a comprehensive data science tech stack on OpenShift (or other Kubernetes distributions), including support for Jupyter notebooks, model training pipelines, inference services, and internal APIs.
  • Cloud-Agnostic Architecture: Proven ability to build a cloud-agnostic container platform capable of seamless migration from on-prem OpenShift to cloud-native Kubernetes on AWS, Azure, or GCP.
  • Container Platform Management: Expertise in configuring and managing multi-tenant namespaces, RBAC, network policies, and resource quotas within Kubernetes/OpenShift environments (a provisioning sketch follows this list).
  • API Gateway & Security: Hands-on experience with API gateway technologies like Apache APISIX (or similar tools) for managing and securing API traffic, including JWT/OAuth2-based authentication.
  • MLOps Toolchain Support: Experience deploying and maintaining critical MLOps toolchains such as MLflow, Kubeflow, model registries, and feature stores.
  • CI/CD & GitOps: Strong integration experience with GitOps and CI/CD tools (e.g., ArgoCD, Jenkins, GitHub Actions) for automating ML model and infrastructure deployment workflows.
  • Microservices Deployment: Ability to deploy and maintain containerized microservices using Python frameworks (FastAPI, Flask) or Node.js to serve ML APIs (see the service sketch after this list).
  • Observability: Ensure comprehensive observability across platform components using industry-standard tools like Prometheus, Grafana, and EFK/ELK stacks.
  • Infrastructure as Code (IaC): Proficiency in automating platform provisioning and configuration using Infrastructure as Code tools (Terraform, Ansible, or Helm).
  • Policy & Governance: Expertise with Open Policy Agent (OPA) or similar policy-as-code frameworks for implementing and enforcing robust governance policies.
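
As a concrete illustration of the multi-tenant platform management bullet, the following is a minimal sketch of tenant namespace provisioning using the official kubernetes Python client; the tenant name and quota values are illustrative assumptions:

```python
# Minimal sketch: provision an isolated tenant namespace with a resource
# quota, using the official `kubernetes` Python client. Values are
# illustrative; real quotas would come from tenant onboarding config.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
core = client.CoreV1Api()

TENANT = "ds-team-alpha"  # hypothetical tenant name

# Create the namespace, labeled so network policies can select it.
core.create_namespace(client.V1Namespace(
    metadata=client.V1ObjectMeta(name=TENANT, labels={"tenant": TENANT})
))

# Cap CPU/memory so one tenant cannot starve the shared platform.
core.create_namespaced_resource_quota(TENANT, client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name=f"{TENANT}-quota"),
    spec=client.V1ResourceQuotaSpec(hard={
        "requests.cpu": "8", "requests.memory": "32Gi",
        "limits.cpu": "16", "limits.memory": "64Gi",
    }),
))
```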
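
Tying together the API security, microservices, and observability bullets, here is a minimal sketch of a FastAPI service that guards a prediction endpoint with JWT validation and exposes Prometheus metrics; the secret, endpoint, and metric names are hypothetical:

```python
# Minimal sketch: a containerized ML-serving microservice with JWT-based
# auth and Prometheus metrics. Secret, names, and response are placeholders.
import jwt  # PyJWT
from fastapi import FastAPI, Header, HTTPException
from prometheus_client import Counter, make_asgi_app

app = FastAPI()
app.mount("/metrics", make_asgi_app())  # scraped by Prometheus

JWT_SECRET = "replace-me"  # in practice, injected via a Kubernetes Secret
PREDICTIONS = Counter("predictions_total", "Served predictions")

@app.post("/predict")
def predict(features: dict, authorization: str = Header(...)):
    # Validate the bearer token issued upstream (e.g., by an API gateway
    # such as Apache APISIX).
    try:
        jwt.decode(authorization.removeprefix("Bearer "),
                   JWT_SECRET, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="invalid token")
    PREDICTIONS.inc()
    # A real service would invoke the loaded model here.
    return {"score": 0.5, "inputs": len(features)}
```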

Desired Skills*

  • Lead the design, development, and implementation of scalable, high-performance applications using Python/Java/Scala.
  • Apply expertise in Machine Learning (ML) to build predictive models, enhance decision-making capabilities, and drive business insights.
  • Collaborate with cross-functional teams to design, implement, and optimize cloud-based architectures on AWS and Azure.
  • Work with large-scale distributed technologies like Apache Kafka, Apache Spark, and Apache Storm to ensure seamless data processing and messaging at scale (a short pipeline sketch follows this list).
  • Provide expertise in Java multi-threading, concurrency, and other advanced Java concepts to ensure the development of high-performance, thread-safe, and optimized applications.
  • Architect and build data lakes and data pipelines for large-scale data ingestion, processing, and analytics.
  • Ensure integration of complex systems and applications across various platforms while adhering to best practices in coding, testing, and deployment.
  • Collaborate closely with stakeholders to understand business requirements and translate them into technical specifications.
  • Manage technical risk and work on performance tuning, scalability, and optimization of systems.
  • Provide leadership to junior team members, offering guidance and mentorship to help develop their technical skills.
  • Effective communication and strong stakeholder engagement skills, with a proven ability to lead and mentor a team of software engineers in a dynamic environment.
  • Security Architecture: Understanding of zero-trust security architecture and secure API design patterns.
  • Model Serving Frameworks: Knowledge of specialized model serving frameworks like Triton Inference Server.
  • Vector Databases: Familiarity with Vector databases (e.g., Redis, Qdrant) and embedding stores.
  • Data Lineage & Metadata: Exposure to data lineage and metadata management using tools like DataHub or OpenMetadata.
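
As a brief illustration of the distributed-processing bullet above, a minimal PySpark structured-streaming sketch that counts events from a Kafka topic; the broker address and topic name are assumptions, and the spark-sql-kafka connector must be on the classpath:

```python
# Minimal sketch: a structured-streaming pipeline that counts events per
# key from a Kafka topic. Broker and topic names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("event-counts")
         .getOrCreate())

# Read a stream of raw events from Kafka.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "kafka.internal:9092")
          .option("subscribe", "model-events")
          .load())

# Count events per key; a real pipeline would parse and enrich the payload.
counts = (events
          .select(F.col("key").cast("string"))
          .groupBy("key")
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```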

Work Timings*

  • 11:30 AM to 8:30 PM IST

Job Location*

Chennai