Posted

30+ days ago

Location

Dearborn, MI

Description

Position Description:

Data Engineer

You will be responsible for providing data support for enterprise data management tasks, including the ingestion, standardization, enrichment, mastering and assembly of data products for downstream applications. Key responsibilities:

- Provide visibility into data quality issues and work with business owners to fix them.
- Implement an enterprise data governance model and actively promote data standardization, integration, fusion and quality.
- Support the data requirements of functional teams such as MS&S, PD and Quality, and all regional KPI/metrics initiatives.
- Evaluate, explore and select the right data platform technologies, including Big Data, RDBMS and NoSQL, to meet analytics requirements.
- Continuously increase data coverage by working closely with stakeholders and data scientists, understanding and evaluating their data requirements to create meaningful, organized and structured information.
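
To give a concrete sense of the "standardize, master and assemble" work described above, here is a minimal, illustrative sketch in plain Python. The function names (`normalize_record`, `master_records`) and fields are hypothetical; in practice a pipeline like this would typically run in PySpark or an ETL tool at scale.

```python
# Hypothetical sketch of a standardize-then-master step: normalize
# free-text fields, then keep one "golden" record per business key.

def normalize_record(record):
    """Standardize free-text fields: trim whitespace, unify casing."""
    return {
        "id": record["id"],
        "name": record["name"].strip().title(),
        "country": record["country"].strip().upper(),
    }

def master_records(records):
    """Keep one golden record per id; later records overwrite earlier ones."""
    golden = {}
    for rec in map(normalize_record, records):
        golden[rec["id"]] = rec
    return list(golden.values())

raw = [
    {"id": 1, "name": "  alice smith ", "country": "us"},
    {"id": 1, "name": "Alice Smith", "country": "US"},
    {"id": 2, "name": "bob jones", "country": "ca"},
]
print(master_records(raw))  # two golden records, one per id
```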

Skills Required:

Candidates should have experience with data analysis tools such as Hive, PySpark, Apache Pig, Hadoop, SQL, QlikView and ETL tools.

- Post-graduate degree in Engineering / Computer Science or academic equivalent.
- Experience working within a complex business environment, including at least 5 years in a single function, with a deep understanding of the information constructs of that business.
- Demonstrated experience and expertise in conceptual thinking about how to apply information solutions to a business challenge.
- Strong problem-solving skills.
- Proven analytics capability to robustly examine large data sets and highlight patterns, anomalies, relationships and trends.
- Self-starter who demonstrates high levels of data integrity.
- Excellent data and statistical analysis skills in Excel.
- Ability to manage deliverables according to a robust project plan.
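
As a small illustration of "examining a data set to highlight anomalies," the sketch below flags values more than two standard deviations from the mean. The threshold, method and sample data are illustrative only, not part of the role's requirements.

```python
# Illustrative anomaly check: flag values far from the mean.
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

daily_orders = [102, 98, 105, 97, 101, 99, 350, 103]
print(find_anomalies(daily_orders))  # the 350 spike stands out: [350]
```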

Experience Required:

- Minimum of 5 years of experience in a Data Engineering role creating data products, writing code/queries/scripts and building data visualizations.
- Minimum of 3 years of experience in data design, data architecture and data modeling (both transactional and analytic).
- Minimum of 2 years of experience with Hadoop Big Data technologies (HDFS, Hive, PySpark, Oozie, QlikView, etc.), especially experience transforming and visualizing data with PySpark, Hive and QlikView and scheduling workflows with Oozie.
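
The kind of transform typically scheduled in such a workflow is a Hive-style aggregation. The sketch below runs an analogous query against an in-memory SQLite database purely for illustration; the table and column names are hypothetical, and in the role this would be a Hive or PySpark job scheduled via Oozie.

```python
# Hypothetical Hive-style aggregation, run in SQLite for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vehicle_events (region TEXT, defects INTEGER)")
conn.executemany(
    "INSERT INTO vehicle_events VALUES (?, ?)",
    [("NA", 3), ("NA", 1), ("EU", 2), ("EU", 4), ("APAC", 0)],
)

# Aggregate a per-region KPI, as a scheduled daily job might.
rows = conn.execute(
    "SELECT region, SUM(defects) FROM vehicle_events "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 0), ('EU', 6), ('NA', 4)]
```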

Education Required:

- Bachelor's Degree in Computer Science or related field from an accredited college or university

- Master's Degree in Computer Science or related field from an accredited college or university preferred