Caring. Connecting. Growing Together
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale.
Primary Responsibilities:
- Participate in scrum process and deliver stories/features according to the schedule
- Collaborate with team, architects and product stakeholders to understand the scope and design of a deliverable
- Participate in product support activities as needed by Team
- Understand product architecture, features being built and come up with product improvement ideas and POCs
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Deep experience in data analysis, including source data analysis, data profiling and mapping
- Solid experience building data pipelines using Azure Data Factory (ADF) and Azure Databricks
- Hands-on experience with large-scale data warehouses
- Hands-on experience migrating data from legacy systems to new solutions, such as from on-premises clusters to the cloud
- Experience with:
  - Building ML models
  - DevOps and implementation of big data solutions using Apache Spark on Azure Cloud
  - Large-scale data processing using PySpark on the Azure ecosystem
  - Implementation of a self-service analytics platform ETL framework using PySpark on Azure
- Working knowledge of generative AI, retrieval-augmented generation (RAG) and LLMs
- Tools/Technologies:
  - Programming languages: Python, PySpark
  - Cloud technologies: Azure Data Factory (ADF), Databricks, Web App, Key Vault, SQL Server, Function App, Logic App, Synapse, Azure Machine Learning, Azure DevOps
  - ML models, GPT, NLP algorithms
- Expert skills in Azure data processing tools such as Azure Data Factory and Azure Databricks
- Solid proficiency in SQL, including writing complex queries
- Proven ability to lead and adapt to new data technologies
- Proven problem-solving skills
- Proven communication skills
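To give a flavor of the source-data analysis and data-profiling work described above, here is a minimal, self-contained sketch using only the Python standard library. The column names and sample records are hypothetical; in practice this kind of profiling would typically run over PySpark DataFrames in Databricks rather than plain Python dicts:

```python
# Minimal data-profiling sketch: per-column row counts, null counts,
# distinct-value counts, and the most frequent value.
# Dataset and column names below are illustrative only.
from collections import Counter

def profile(records, columns):
    """Return basic profiling stats for each named column."""
    stats = {}
    for col in columns:
        values = [r.get(col) for r in records]
        non_null = [v for v in values if v is not None]
        stats[col] = {
            "rows": len(values),                      # total rows seen
            "nulls": len(values) - len(non_null),     # missing values
            "distinct": len(set(non_null)),           # cardinality
            "top": Counter(non_null).most_common(1),  # mode and its count
        }
    return stats

# Hypothetical sample records standing in for a source extract
rows = [
    {"member_id": 1, "state": "TX"},
    {"member_id": 2, "state": "CA"},
    {"member_id": 3, "state": None},
    {"member_id": 4, "state": "TX"},
]
report = profile(rows, ["member_id", "state"])
```

The same per-column metrics (counts, null rates, cardinality) are what a profiling pass computes before mapping source fields to a target warehouse schema.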
Preferred Qualifications:
- Knowledge of the US healthcare industry and pharmacy data
- Knowledge of or experience with Azure Synapse and Power BI
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health that are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.