JOB DETAILS
LOCATION
Knoxville, TN
POSTED
30+ days ago

Description

Company Description


Job Description

The Enterprise Data Architect will play a crucial role in setting the strategy and technical direction for our next-generation data platform and in driving its adoption. This highly visible position will be front and center as Pilot Company works to continuously modernize its solutions and change the way we think about and apply data in solutions across our enterprise. A successful candidate must possess a background in data technologies that is both deep and broad, spanning solutions built on traditional and non-traditional platforms. 

As a key member of the architecture team, the candidate should be comfortable driving technical ideas and communicating clearly with both technical and non-technical audiences. 

Specific Responsibilities 

  • Gain an understanding of our current-state ecosystem, then define the target-state data architecture and strategy. 

  • Work as the technical subject matter expert, in partnership with our Product Owners and IT Solution Owners, to define the technical strategy for our data-driven products and applications. 

  • Provide leadership to project development teams to ensure successful implementation on the selected data platform. 

  • Work with our product organization to translate business requirements into architecture and integrate them into our longer-term data platform strategy. 

  • Define solution-level architecture for project teams, including guidance on development tools, target platforms, operations, and security. 

  • Drive adoption of the target-state architecture by executing on the strategy, demonstrating the leadership qualities necessary to drive change. 

  • Develop architectures for highly scalable and fault-tolerant applications using cloud, relational database, data warehousing, machine learning, and modern big data compute platforms. 

  • Provide technical and architectural oversight for systems and projects that must be reliable, massively scalable, highly available (99.999% uptime), and maintainable. 

  • Introduce best practices and principles that enable consistent delivery and alignment with the long-term direction. 

  • Lead and mentor other team members. Provide expertise to project team engineers as needed. Foster development best practices within the team. 

  • Identify and drive process improvements. Facilitate communication with cross-functional groups. 

  • Stay up to date on new tools and techniques in the data space. Conduct proof-of-concept activities with key business users in support of advanced use cases. 

Qualifications

  • BS in Computer Science, Data Analytics, Mathematics, or a related field from an accredited university required; MS or higher preferred. 

  • 10+ years of experience architecting, designing, and developing large-scale data solutions utilizing a mixture of big data, machine learning, and relational database platforms. 

  • Direct experience with AWS-based technologies. 

  • Proven experience leading teams to the successful deployment of applications built on data platforms. 

  • Experience migrating legacy applications to cloud-based solutions is preferred. 

  • Advanced relational database (RDBMS) experience in one or more of the following: Microsoft SQL Server, PostgreSQL. 

  • Experience as a technical lead, organizing and mentoring junior- and intermediate-level developers, DBAs, data engineers, data scientists, and data strategists. 

  • Experience developing software with Java, Scala, PySpark, and Python. 

  • Proven Linux experience, including: 

      • Basic administration 

      • Files and permissions 

      • Directory navigation 

      • Job scheduling 

      • Shell scripts 

  • Big Data experience, including: 

      • Use and setup of blocks, NameNodes, and DataNodes 

      • Filesystem interfaces, parallel copies, cluster balancing, and archiving 

      • Scaling out, including data flow, combiner functions, and running distributed jobs 

      • Data ingestion techniques such as streaming and ETL 

      • Hadoop Pipes 

      • Sorting, joins, and side data distribution 

      • Machine learning model management and training 

  • Big Data tools, including: 

      • Spark 

      • Kafka 

      • Sqoop and Flume 

      • Oozie 

      • MLflow 

      • Prefect 

      • Kedro 

      • Hive/Impala and other MapReduce techniques and approaches 

      • Tableau 


Additional Information



By submitting your interest in this job, you agree to receive text notifications with additional steps to complete your job application. You will receive up to 6 messages from the number "63879". Message & data rates may apply. Please refer to our privacy policy for more information.