$140,000–$238,000 Per Year
Algorithms, Amazon Web Services (AWS), Analysis Skills, Big Data, Business Intelligence, Compensation and Benefits, Computer Programming, Computer Science, Continuous Deployment/Delivery, Continuous Improvement, Continuous Integration, Cross-Functional, Data Analysis, Data Formats, Data Management, Data Quality, Database Extract Transform and Load (ETL), Debugging Skills, Ecosystems, Internet/Online Service, Leadership, Machine Learning, Management of Information Systems/Technology (MIS), Mentoring, Metadata, Oracle Business Intelligence Enterprise Edition (OBIEE), People Management, Performance Analysis, Presentation/Verbal Skills, Problem Solving Skills, Process Improvement, Python Programming/Scripting Language, Resource Management, SQL (Structured Query Language), Software Engineering, Source Code/Configuration Management (SCM), Standards Development, Systems Reliability, Tableau, Team Lead/Manager, Unit Test
Requisition ID # 171886
Job Category: Information Technology
Job Level: Individual Contributor
Business Unit: Strategy & Growth
Work Type: Hybrid
Job Location: Oakland
Department Overview
The System Performance, Reliability and Resiliency Strategy team within the overall Electric Transmission and Distribution Engineering organization is responsible for planning, organizing, and managing the resources necessary to successfully execute PG&E's Electric Reliability Strategy and initiatives. This team of forward-thinking individuals is tasked with deploying the technology and infrastructure to achieve the company's reliability goals. The team is responsible for implementing programs required to modernize the electric grid, allowing for safe, resilient, and efficient operations.
Position Summary
Designs, develops, modifies, configures, debugs, and evaluates jobs that extract data from various sources, implement transformation logic, and store data in formats fit for use by stakeholders. Collects metadata about jobs, including data lineage and transformation logic. Works with teams, clients, data owners, and leadership throughout the development cycle, practicing continuous improvement.
This position is hybrid, working from your remote office and your assigned work location based on business need. The assigned work location will be within the Bay Area of the PG&E Service Territory.
PG&E is providing the salary range that the company in good faith believes it might pay for this position at the time of the job posting. This compensation range is specific to the locality of the job. The actual salary paid to an individual will be based on multiple factors, including, but not limited to, specific skills, education, licenses or certifications, experience, market value, geographic location, and internal equity. Although we estimate the successful candidate hired into this role will be placed towards the middle or entry point of the range, the decision will be made on a case-by-case basis related to these factors.
Bay Minimum: $140,000
Bay Maximum: $238,000
This job is also eligible to participate in PG&E’s discretionary incentive compensation programs.
Job Responsibilities
- Leads a team on moderately complex to complex data and analytics-centric problems having broad impact that require in-depth analysis and judgment to obtain results or solutions.
- May contribute to the resolution of uniquely complex data and analytics-centric problems having significant impact.
- Identifies, designs and implements internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Resolves application programming analysis problems of broad scope within procedural guidelines.
- Provides assistance to other programmers/analysts on unusual or especially complex problems that cross multiple functional/technology areas.
- Conceptualizes and builds infrastructure that makes big data accessible and analyzable, verifies data quality, and ensures metadata is appropriately captured and catalogued.
- Collaborates with peers to develop departmental standards, norms, and new goals/objectives.
- Plans work to meet assigned general objectives and reviews progress regularly; solutions may call for creative, non-standard approaches.
- Assesses data pipeline performance and suggests/implements changes as required.
- Communicates recommendations both orally and in writing.
- Mentors/provides guidance to less experienced colleagues.
Qualifications
Minimum:
- BA/BS in Computer Science, Management Information Systems or related field of study, or equivalent experience
- 7 years of experience with data engineering/ETL ecosystems such as AWS/Palantir Foundry, Spark, Python, SQL, Tableau/OBIEE
- Experience with multiple data engineering/ETL ecosystems
- Experience with machine learning algorithm deployment
Desired:
- Master's degree in Computer Science or a job-related discipline, or equivalent experience
- Leadership experience with development teams
- Business Intelligence and data access tool expertise.
- Knowledge of software engineering principles such as unit testing, CI/CD, and source control.
Powered by SonicJobs (an advertiser on Monster). By applying, you consent to share your data with SonicJobs and the employer. Neither Monster nor SonicJobs stores or uses your application data beyond facilitating the application.
See PG&E Corporation Privacy Policy at https://www.pge.com/en/privacy-center.html and SonicJobs Privacy Policy at https://www.sonicjobs.com/us/privacy-policy and Terms of Use at https://www.sonicjobs.com/us/terms-conditions