Pursue your passion and potential
Data Engineering Analyst - Python, Scala, Spark
Hyderabad, India
Caring. Connecting. Growing together.
With these values to guide us, our people are committed to making a meaningful difference in the lives of those we are honored to serve.
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
The Data Engineer (Grade 25) will support the design, development, and maintenance of scalable Big Data solutions on cloud platforms. This role is ideal for early-career data engineers with hands-on experience in Spark-based data processing, Azure cloud services, and data pipeline orchestration.
The individual will work closely with senior engineers, architects, and cross-functional teams to build reliable data pipelines, improve existing solutions, and ensure secure and efficient data operations. The role emphasizes solid fundamentals in data engineering, cloud technologies, and production support, while providing opportunities to grow into advanced Big Data and cloud-native architectures.
Primary Responsibilities:
- Design, code, test, document, and maintain high-quality, scalable Big Data applications using PySpark and Scala Spark on Azure Cloud platforms
- Develop and manage data pipelines and schedule workflows using Apache Airflow, ensuring proper job dependencies and execution order
- Securely manage secrets and credentials using Azure Key Vault, following enterprise security best practices
- Analyze existing data pipelines and applications to identify gaps, risks, and opportunities for improvement
- Assist in analyzing data architecture and design frameworks, working with multiple databases and data warehouses
- Create prototypes and proof-of-concepts (POCs) and participate in design and code reviews
- Write and maintain technical documentation for data pipelines, workflows, and operational procedures
- Participate in production support activities, including monitoring, troubleshooting, and issue resolution
- Perform performance optimization of data pipelines and assist with application migration efforts across environments
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
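The orchestration responsibility above centers on dependency-driven execution order: in Apache Airflow, tasks run only after their upstream dependencies complete. As a hedged illustration only (not part of the role description), the ordering an Airflow scheduler derives from task dependencies is essentially a topological sort, which can be sketched with Python's standard library; the task names here are hypothetical, and a real pipeline would declare Airflow operators instead:

```python
# Illustrative sketch: deriving execution order from task dependencies,
# as an Airflow DAG scheduler does. Task names ("extract", "transform",
# "validate", "load") are hypothetical examples, not a real pipeline.
from graphlib import TopologicalSorter

# Map each downstream task to the set of upstream tasks it depends on.
dependencies = {
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
}

# static_order() yields tasks so that every upstream task comes before
# any task that depends on it.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

In Airflow itself the same ordering would be expressed with operator dependencies (e.g. `extract >> transform >> validate >> load` inside a DAG), and the scheduler enforces that order at runtime.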
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or equivalent, with more than 1 year of relevant work experience
- Hands-on experience with Python and Scala for data engineering and Big Data development
- Hands-on exposure to Azure Data Lake, Azure Databricks, Azure Data Factory, and Azure Key Vault
- Hands-on exposure to AI-assisted development tools such as Microsoft Copilot, with a basic understanding of prompt engineering to improve coding efficiency, data analysis productivity, and documentation quality
- Working experience with Apache Spark and good understanding of Hadoop ecosystem concepts
- Experience with job scheduling and orchestration tools, particularly Apache Airflow
- Experience with Snowflake and writing Shell scripts for automation and operational tasks
- Good experience working in cloud environments, preferably Microsoft Azure
- Solid experience in writing complex SQL and PL/SQL queries
- Exposure to CI/CD pipelines using tools such as Jenkins and GitHub Actions
- Basic understanding of software development best practices, version control, and collaborative development
- Proven analytical skills, attention to detail, and willingness to learn new technologies and frameworks
- Proven ability to work effectively in a team-oriented and fast-paced environment
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health that are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Benefits
Our mission of helping people live healthier lives extends to our team members. Learn more about our range of benefits designed to help you live well.
Life
Resources and support to focus on what matters most to you, in every facet of your life.
Emotional
Education, tools and resources to help you reduce and manage stress, build resilience and more.
Physical
Health plans and other coverage to support wellness for you and your loved ones.
Financial
Benefits for today and to help you plan for the future, including your retirement.
We’re honored to be recognized for our exceptional work culture
Connect with us