Lake Buena Vista, FL
This role communicates data engineering progress to the project leadership team and actively participates in meetings and discussions.

Basic Qualifications:
- Bachelor's degree in Computer Science, Mathematics, Software Engineering, or a related field, or equivalent experience
- 5+ years of experience with ELT/ETL development using SQL and Python
- 2+ years of experience developing and maintaining data pipeline processing with a framework such as Apache Flink, Beam, or Kafka Streams
- Experience with and understanding of one or more business domains to assist in gathering and refining data requirements and data design solutions
- Experience developing in a multi-environment setup (Dev, QA, Prod, etc.) and with DevOps procedures for code deployment and promotion
- Strong understanding of database design and proficiency with various database platforms, such as PostgreSQL or Snowflake
- Experience managing and deploying code using a source control product such as GitLab or GitHub
- Able to effectively formulate solutions and communicate complex technical concepts to non-technical team members

Preferred Qualifications:
- Master's degree in Computer Science, Mathematics, Engineering, or a related field
- Experience with Infrastructure as Code, Docker and containerization, elastic scaling with Kubernetes or a similar framework, and AWS
- Experience working with large datasets and big data technologies, preferably cloud-based, such as Snowflake, Databricks, or similar
- Demonstrated proficiency with API development
- Knowledge of cloud architecture and product offerings, preferably AWS

Required Education:
- BS in a STEM field plus 5 years of experience; Master's degree plus 3 years; unrelated BA/BS plus 11 years; no degree requires 17 years of experience

About Software Resources:
Software Resources, founded more than three decades ago, is a trusted staffing partner specializing in Technology (IT, Creative, and Marketing), Finance, and Accounting placements.

Responsibilities of the Role:
Work assignments may cover activities such as participation in data requirements gathering, source-to-target mapping, data validation scripting and review, developing and monitoring ETL/ELT data pipelines, and producing datasets as input to science models and visualizations.