DLA Piper is, at its core, bold, exceptional, collaborative and supportive. Our people are the backbone, heart and soul of our firm. Wherever you are in your professional journey, DLA Piper is a place you can engage in meaningful work and grow your career. Let's see what we can achieve. Together.
Summary
The Data Engineer, Solutions & Data role designs, builds, and operates data pipelines and data integration processes that translate raw data into trusted, usable datasets for analytics, reporting, and downstream solutions. The role focuses on operationalizing pipelines with governance and service expectations (SLAs), improving data quality and reusability, and enabling secure access to integrated data in support of business initiatives. In current initiatives, data engineering includes consolidating data from multiple sources into a central SQL-based integration point and performing field mapping and transformations, so solution teams can consume data consistently.
Location
This position can sit in any of our U.S. offices and offers a hybrid work schedule.
Responsibilities
Data Pipeline Engineering & Integration
* Build and operationalize data pipelines across heterogeneous environments, aligning to governance principles and service expectations (SLAs).
* Build and maintain ingestion, transformation, and publication pipelines (core data engineering practice) to deliver analytics-ready data.
* Consolidate data from multiple sources into a centralized integration point (e.g., a single SQL Server instance) and manage field mappings and transformations to support consistent downstream consumption.
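The field-mapping and transformation work described above can be sketched as follows. This is a minimal illustration only, assuming hypothetical source systems ("crm", "billing") and made-up field names; a production implementation would load such mappings from governed metadata rather than hard-coding them.

```python
# Sketch: map fields from heterogeneous sources onto one canonical schema
# before loading into a central integration point. All names are hypothetical.
from datetime import date

# Map each source system's field names onto the canonical target schema.
FIELD_MAPS = {
    "crm": {"cust_id": "client_id", "full_nm": "client_name", "open_dt": "opened_on"},
    "billing": {"ClientNumber": "client_id", "ClientName": "client_name", "Opened": "opened_on"},
}

# Per-target-field transformations applied after mapping.
TRANSFORMS = {
    "client_id": lambda v: str(v).strip().upper(),
    "client_name": lambda v: " ".join(str(v).split()),  # collapse stray whitespace
    "opened_on": lambda v: v if isinstance(v, date) else date.fromisoformat(v),
}

def to_canonical(source: str, record: dict) -> dict:
    """Map one source record onto the canonical schema and apply transforms."""
    field_map = FIELD_MAPS[source]
    mapped = {field_map[k]: v for k, v in record.items() if k in field_map}
    return {k: TRANSFORMS.get(k, lambda v: v)(v) for k, v in mapped.items()}

row = to_canonical("crm", {"cust_id": " a-101 ", "full_nm": "Acme   Corp", "open_dt": "2023-04-01"})
print(row)  # {'client_id': 'A-101', 'client_name': 'Acme Corp', 'opened_on': datetime.date(2023, 4, 1)}
```

Centralizing the mapping and transform definitions in one place is what makes the integration layer reusable across downstream consumers.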
Data Platform & Storage
* Design and implement data pipelines using Azure data technologies (e.g., Azure Data Factory, Azure Databricks, Azure Event Hubs, SSIS) to ingest, process, and deliver data from sources such as APIs and other systems.
* Build and maintain data warehousing capabilities (e.g., Azure Synapse Analytics) to support analytics and reporting workloads.
Data Quality, Reliability & Operations
* Identify, troubleshoot, and resolve data issues including data quality, integrity, latency, and security concerns; apply monitoring and operational best practices to keep pipelines reliable and performant.
* Contribute to data quality and governance practices, including profiling datasets, defining quality rules, and establishing monitoring/remediation approaches.
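As a rough illustration of the profiling and quality-rule work above, the sketch below counts rule violations per field over a batch of records. The rule names and thresholds are invented for the example, not taken from the role description; in practice such rules would be defined with data stewards and fed into monitoring/remediation tooling.

```python
# Sketch: simple per-field data-quality rules and a profiling pass.
# Field names and thresholds are illustrative assumptions.

RULES = {
    "client_id": lambda v: v is not None and str(v).strip() != "",  # completeness
    "opened_on": lambda v: v is None or str(v) <= "2025-12-31",     # plausibility cutoff
}

def profile(records: list) -> dict:
    """Count rule violations per field; a monitoring job could alert on these."""
    violations = {field: 0 for field in RULES}
    for rec in records:
        for field, rule in RULES.items():
            if not rule(rec.get(field)):
                violations[field] += 1
    return violations

data = [
    {"client_id": "A-101", "opened_on": "2023-04-01"},
    {"client_id": "",      "opened_on": "2031-01-01"},  # fails both rules
]
print(profile(data))  # {'client_id': 1, 'opened_on': 1}
```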
Collaboration & Delivery (Agile Pod Model)
* Work cross-functionally with engineers, analysts, and stakeholders to understand requirements and deliver data solutions that support sprint-based delivery.
* Support pod-level delivery by producing reusable data assets and integration components that can be leveraged across multiple initiatives.
Desired Skills
* Proficiency in SQL and Python.
* Data pipeline tooling and cloud data services experience (Azure Data Factory, Azure Databricks, Azure Event Hubs, SSIS).
* Data warehousing experience (Azure Synapse Analytics) and strong fundamentals in data modeling, warehousing, and governance.
* Scripting/automation skills (PowerShell and related tooling) for platform operations and troubleshooting.
* Preferred: familiarity with additional programming languages such as Java, Scala, or Go.
* Preferred: experience integrating data from multiple enterprise source systems into a central SQL-based integration layer.
* Preferred: familiarity with DataOps concepts and with operating in cross-functional teams that include data engineering personas.
* Measures of success for this role include delivering data pipelines with trusted, quality data at agreed service levels; enabling faster onboarding of new data sources and more consistent analytics/AI consumption; and reducing manual effort through reusable integrations and standardized transformations.