AnitaB.org Talent Network


Software Developer 3

Oracle

Software Engineering
India
Posted on Feb 17, 2026
Senior Software Engineer (IC3) – Lakehouse / Batch Data Platform (Oracle Health Data Intelligence, HDI)

Location: Bangalore, India

About the Role

Oracle Health Data Intelligence (HDI) is hiring a Senior Software Engineer (IC3) to help build and evolve our next-generation Data Platform powering intelligent AI agents at scale. This role is focused on lakehouse and batch processing—designing reliable, scalable ETL/ELT pipelines and foundational data platform services on OCI to ingest, transform, curate, and serve high-quality healthcare data for analytics and AI workloads.

You’ll work closely with platform engineers, data engineers, and applied AI teams to deliver durable, governed datasets and platform capabilities that enable downstream product experiences.

Key Responsibilities
  • Build and operate batch-first ETL/ELT pipelines that ingest and transform data into curated lakehouse layers (e.g., raw → refined → curated).
  • Design scalable data processing jobs using distributed compute frameworks (e.g., Spark/Beam) with strong attention to correctness, performance, and cost.
  • Contribute to the architecture and evolution of our data lakehouse, including data layout/partitioning, compaction strategies, schema evolution, and backfills/reprocessing.
  • Develop and maintain platform components for metadata management, dataset publishing, and pipeline orchestration.
  • Implement data quality validation, lineage/metadata capture, and operational best practices (SLAs/SLOs, alerting, runbooks, auditing).
  • Optimize pipelines for reliability and efficiency in a distributed environment (idempotency, retries, incremental loads, late-arriving data handling).
  • Participate in code reviews, design discussions, and technical planning; collaborate across teams to deliver end-to-end solutions on OCI.
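The idempotency and incremental-load responsibilities above usually come down to one pattern: a rerunnable batch step overwrites whole output partitions instead of appending to them. A minimal pure-Python sketch of that pattern (all function, field, and store names are invented for illustration; at scale this is what Spark's dynamic partition overwrite provides):

```python
from collections import defaultdict

def run_batch(raw_rows, refined_store):
    """Idempotent raw -> refined step: group rows by partition date and
    overwrite each affected partition wholesale, so a rerun of the same
    batch never duplicates data."""
    by_partition = defaultdict(list)
    for row in raw_rows:
        by_partition[row["event_date"]].append(
            {"id": row["id"], "value": row["value"].strip().lower()}
        )
    for partition, rows in by_partition.items():
        refined_store[partition] = rows  # overwrite, never append
    return refined_store

store = {}
batch = [
    {"id": 1, "event_date": "2026-02-17", "value": " A "},
    {"id": 2, "event_date": "2026-02-17", "value": "B"},
]
run_batch(batch, store)
run_batch(batch, store)  # rerun after a retry: same result, no duplicates
```

Because the unit of overwrite is the partition, retries and backfills reduce to "recompute the affected partitions and write them again," which is far easier to reason about than append-only outputs.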
Required Qualifications
  • 5–10 years of relevant industry experience in software engineering and/or data engineering.
  • Strong programming skills in Java, Python, Scala, or Go with solid software engineering fundamentals (OO/design, testing, debugging, performance).
  • Hands-on experience building large-scale batch pipelines using Apache Spark (preferred) and/or Apache Beam (or equivalent).
  • Experience with lakehouse/data platform concepts: partitioning, schema management, incremental processing, file formats (Parquet/ORC), and dataset versioning.
  • Exposure to cloud data services (OCI preferred) such as Object Storage, compute, networking/IAM, and managed data/processing services (e.g., Oracle BDS or equivalents on AWS/GCP/Azure).
  • Strong understanding of data modeling and governance fundamentals (access controls, auditing, retention, PII handling concepts).
  • Practical experience with pipeline observability: metrics, logs, alerts, job monitoring, and troubleshooting production workflows.
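The incremental-processing and late-arriving-data qualifications above typically rest on watermark-based loads: each run picks up only rows newer than the last processed timestamp and upserts them by key. A hedged sketch, with all names invented for illustration (real pipelines would persist the watermark and route genuinely late rows to a backfill path):

```python
def incremental_load(source_rows, target, watermark):
    """Process only rows newer than the watermark; upserting by id keeps
    overlapping reruns idempotent. Returns the advanced watermark."""
    fresh = [r for r in source_rows if r["ts"] > watermark]
    for r in fresh:
        target[r["id"]] = r  # upsert keyed by id
    return max((r["ts"] for r in fresh), default=watermark)

target = {}
wm = 0
wm = incremental_load([{"id": "a", "ts": 1}, {"id": "b", "ts": 2}], target, wm)
# Overlapping delivery of "b" is filtered by the watermark; only "c" is new.
wm = incremental_load([{"id": "b", "ts": 2}, {"id": "c", "ts": 3}], target, wm)
```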
Preferred Qualifications (Bonus)
  • Experience with feature stores, metadata catalogs, or data discovery/governance tooling.
  • Familiarity with semantic indexing / vector search (e.g., Oracle Database 23ai vector capabilities) and/or building retrieval datasets for AI workloads.
  • Experience with Docker and Kubernetes; CI/CD for data/compute workloads.
  • Healthcare domain exposure and comfort operating in regulated-data environments.
Why Join Oracle HDI
  • Work on foundational data infrastructure that directly enables AI-driven healthcare intelligence.
  • Build at scale on OCI with high-impact ownership and strong cross-team collaboration.
  • Solve challenging problems in data reliability, governance, and performance for real-world enterprise workloads.

Only Oracle brings together the data, infrastructure, applications, and expertise to power everything from industry innovations to life-saving care. And with AI embedded across our products and services, we help customers turn that promise into a better future for all. Discover your potential at a company leading the way in AI and cloud solutions that impact billions of lives.

True innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing a workforce that promotes opportunities for all with competitive benefits that support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling 1-888-404-2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.


Summary: Senior Software Engineer (5–10 years) to build scalable, cloud-native data pipelines and lakehouse infrastructure on OCI for AI agents: streaming and batch ETL (Spark/Beam/Flink) plus semantic indexing/vector search (Oracle Database 23ai) to power healthcare data intelligence at scale.

Career Level - IC3
