Senior Data Engineer – PySpark, Azure, DevOps

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Design, build, and optimize data ingestion and transformation pipelines on Azure Databricks (Delta Lake/Spark) with an eye for performance, cost, and operational simplicity
- Implement streaming and batch patterns; leverage CDC, eventing (Service Bus/Event Grid), and orchestration (Azure Data Factory, Functions) to move and process data reliably
- Engineer high-quality PySpark code and SQL, enforce coding standards, and enable reuse via shared libraries and notebooks
- Build/extend APIs and microservices that expose curated data for real-time and near-real-time use cases; ensure backward compatibility and observability
- Contribute to CI/CD (Azure Pipelines or equivalent), infrastructure as code, secrets management (Key Vault), and automated testing for data and services
- Partner with architects/SMEs to migrate mainframe workloads (batch & inquiry APIs) to cloud-native paradigms with clear SLAs and error budgets
- Instrument solutions with end-to-end monitoring/alerting; drive MTTR down through robust runbooks, dashboards, and noise-free alerts
- Collaborate with business & platform teams to onboard new consumers, model data contracts, and scale for rapid growth in volume and users
- Contribute to data governance (catalog, lineage, access controls/Unity Catalog) and ensure compliance with enterprise standards
- Participate in on-call/production support rotations to maintain 24×7 reliability and SLA adherence; perform root cause analysis and preventive hardening
- Document designs, patterns, and operational guides that enable fast onboarding and consistent delivery across teams
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Bachelor's degree in Software Engineering or a related field, with 4+ years of relevant experience
- Hands-on experience with Azure Databricks (Spark, Delta Lake) and strong PySpark and SQL skills for large-scale data processing
- Experience building event-driven and batch data flows using Azure Data Factory, Service Bus/Event Grid, and Azure Functions
- API engineering (REST/gRPC), contract versioning, and service observability (logs, metrics, traces)
- CI/CD exposure on Azure (Pipelines/GitHub), including test automation and deployment strategies for data & services
- Familiarity with secrets and access management via Azure Key Vault, and role-based access in data platforms (Unity Catalog/ACLs)
- Production mindset: on-call readiness, incident response, and proactive post-incident improvements to protect SLAs for a large consumer base
- Proven ability to design for reliability (idempotency, retries, transactional writes, schema evolution) and to operate at scale (performance & cost tuning)
Preferred Qualifications:
- Experience migrating mainframe data and workloads to cloud; healthcare claims domain exposure
- Experience with Azure DevOps governance and enterprise templates
- Delta Live Tables, Structured Streaming, or Kafka ingestion patterns
- Data quality frameworks (expectations, unit tests), and lineage/catalog tools
- Performance engineering for Spark (partitioning, caching, joins, skew mitigation) and SQL Warehouse optimization
- Secure engineering practices (PII handling, encryption at rest/in transit, network isolation)
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Additional Job Detail Information
Requisition Number: 2344704
Business Segment: Optum
Employee Status: Regular
Travel: No
Country: IN
Overtime Status: Exempt
Schedule: Full-time
Shift: Day Job
Telecommuter Position: No

