Job Summary:

• Azure Data Factory - ability to develop a dynamic, table-driven pipeline in Azure Data Factory to ingest multiple cloud and on-premises data sources. The goal is a highly parameterized pipeline, configurable via a control table, with the ability to define data-load frequencies, incremental rules, error handling, notifications, logging, etc. Data sources include SQL, Google BigQuery, Teradata, Excel, etc.
• T-SQL - very strong SQL skills, including the creation of stored procedures, dynamic SQL, and performance tuning of very large datasets.
• Experienced in ingesting data from various data sources into a data lakehouse. Data will be landed in Delta Lake format, so partitioning schemes and other optimizations must be considered.
• Microsoft Fabric (or Synapse / Databricks) - Delta Lake tables will be exposed in the Fabric Bronze lakehouse, and the candidate will need to implement either PySpark notebooks or SQL queries to build the Silver and Gold layers for Power BI reporting.
• Power BI - experience building and optimizing Power BI semantic models; familiar with Direct Lake datasets; able to build Power BI reports, dashboards, and paginated reports.
• Familiar with lakehouse architecture, Bronze / Silver / Gold layers, and ETL/ELT best practices for populating layers using Spark, T-SQL, or enterprise ETL/ELT tools.
• Automation / orchestration of data loads.
• Use of Git / DevOps source control.

Desired:

• Familiar with Azure Client or Fabric Client / AI capabilities.
• Proficiency in PySpark (Synapse, Fabric, or Databricks experience).
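To illustrate the metadata-driven pattern the first requirement describes, here is a minimal, hypothetical sketch in plain Python of a control table that parameterizes per-source loads (full vs. incremental via a watermark column). The field names and query-building logic are illustrative assumptions, not the actual pipeline design; in Azure Data Factory the same idea is typically realized with a Lookup activity over the control table feeding a parameterized ForEach/Copy activity.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical control-table row: each entry parameterizes one ingestion run.
@dataclass
class ControlRow:
    source_name: str                      # logical source name, e.g. "sql_orders"
    source_type: str                      # "sql", "bigquery", "teradata", "excel", ...
    table_name: str                       # fully qualified source object
    load_mode: str                        # "full" or "incremental"
    watermark_column: Optional[str] = None  # column driving incremental loads
    last_watermark: Optional[str] = None    # high-water mark from the last run

def build_extract_query(row: ControlRow) -> str:
    """Build the extract statement for one control-table entry.

    Incremental loads filter on the watermark column so only new or
    changed rows are pulled; full loads re-read the entire table.
    """
    query = f"SELECT * FROM {row.table_name}"
    if row.load_mode == "incremental" and row.watermark_column and row.last_watermark:
        query += f" WHERE {row.watermark_column} > '{row.last_watermark}'"
    return query

# Example control table with one incremental and one full load.
control_table = [
    ControlRow("sql_orders", "sql", "dbo.Orders", "incremental",
               watermark_column="ModifiedDate", last_watermark="2024-01-01"),
    ControlRow("excel_budget", "excel", "Finance.Budget", "full"),
]

for row in control_table:
    print(row.source_name, "->", build_extract_query(row))
```

Adding a new source then means inserting a row into the control table rather than editing the pipeline, which is what makes frequencies, incremental rules, and error-handling behavior configurable per source.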