Responsibilities
Design and support database and table schemas for new and existing data sources
Translate business needs into technical data designs
Design and implement ETL (Extract, Transform, Load) data pipelines (a brief sketch follows this list)
Work on feasibility studies of new technologies through proofs of concept
Work with data science and analytics development teams in the Plant Monitoring & Reliability area, and with cross-functional teams across the M&A Module and IT (Digitalization) teams, to ensure best practices, performance, and reliability are in place
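
To illustrate the kind of ETL pipeline work this role involves, here is a minimal sketch assuming a Databricks/Spark environment writing to Delta Lake; the paths, table names, and columns are hypothetical placeholders, not part of the role description.

# Minimal ETL sketch for a Databricks/Spark environment with Delta Lake.
# Paths, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sensor-etl").getOrCreate()

# Extract: read raw sensor readings landed as JSON
raw = spark.read.json("/mnt/landing/sensor_readings/")

# Transform: deduplicate, cast timestamps, drop empty readings
clean = (
    raw.dropDuplicates(["sensor_id", "event_time"])
       .withColumn("event_time", F.to_timestamp("event_time"))
       .filter(F.col("value").isNotNull())
)

# Load: append to a Delta Lake table
(clean.write
      .format("delta")
      .mode("append")
      .saveAsTable("plant_monitoring.sensor_readings"))
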
Qualifications
Minimum of 5-7 years' relevant experience with a Master's or PhD degree in STEM (Science, Technology, Engineering, and Mathematics)
Work experience where data engineering and data warehouse development were a major part of your role
Experience in automating software deployments and tests, and following a continuous delivery and deployment model
Experience with mathematical modeling
Competencies
Must have: Databricks, Spark, pandas, Python, Azure cloud, Azure Data Factory, SQL, Data Lake/Delta Lake, Git, Azure DevOps, testing, CI/CD
Good to have: cloud infrastructure deployment and IaC, Azure Bicep and ARM templates, Snowflake, Snowpark, Unity Catalog, dbt, Scala/Rust/R
Familiarity with: statistics, signal processing, Jupyter notebooks, data testing (a sample test follows this list), and data science best practices
Data warehouse data models and proficiency with ETL tools
MS SQL, SSIS, SSAS (MDX/DAX)
JIRA or similar tools
Experience with cloud data platforms (Snowflake)
Knowledge of Kafka, C#
DevOps process and associated tools, e.g. Azure DevOps
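
To illustrate the data-testing competency above, here is a minimal sketch using pandas and pytest; the loader function, sample data, and expectations are hypothetical placeholders, not requirements of the role.

# Minimal data-test sketch using pandas and pytest.
# The loader, data, and expectations below are hypothetical placeholders.
import pandas as pd

def load_sensor_readings() -> pd.DataFrame:
    # Stand-in for a query against a warehouse or Delta table.
    return pd.DataFrame({
        "sensor_id": ["A1", "A2", "A3"],
        "value": [10.5, 7.2, 9.9],
    })

def test_values_have_no_nulls():
    df = load_sensor_readings()
    assert df["value"].notna().all()

def test_sensor_ids_are_unique():
    df = load_sensor_readings()
    assert df["sensor_id"].is_unique
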
Education
Master's (M.E.)
Industry
Manufacturing