Driving the data operations engine that powers insights, decisions, and the business itself.

This phase is where everything gains velocity. From frontline support to cloud and data engineering, I now lead the operational heartbeat of Inspire Brands’ Enterprise Data Platform—fueling real-time transactions, customer intelligence, loyalty insights, marketing activations, and core business operations.

I manage a high-performing team of data engineers and own the reliability, scalability, and day-to-day performance of 700+ production pipelines across ADF, Databricks, Snowflake, ADLS, Airflow, and multi-cloud data sources.

I also leverage Azure OpenAI and Copilot to embed AI into data operations—powering intelligent automation, accelerating root-cause analysis, and generating actionable insights that improve platform reliability and operational efficiency.

My work sits at the center of data operations: directing critical incidents, strengthening platform resilience, modernizing workflows, enforcing data quality, and driving automation that removes friction from the business.

This is leadership rooted in operational excellence—ensuring pipelines run, SLAs hold, issues are resolved fast, and the platform continuously improves.

This chapter marks the shift from leading teams to leading the entire data operations engine—the pipelines, platforms, and processes that keep the organization running end to end.

The Present: Leading the Data Operations That Run the Enterprise

Manager – Data Engineering | Inspire Brands | Oct’23 – Present

As Manager – Data Engineering, I lead the operations and evolution of Inspire Brands’ Enterprise Data Platform—an ecosystem of 700+ high-stakes pipelines powering transactions, customer intelligence, loyalty, campaigns, and operational insights across the business.

I guide a high-performing team of engineers and own the stability, scalability, and performance of a deeply complex modern data stack spanning ADF, Databricks, Snowflake, Airflow, ADLS, and multi-cloud sources.

My work blends decisive leadership with hands-on technical oversight: critical incident management, platform hardening, architectural modernization, automation, and data quality governance.

This role is where I cement the engineering and operational rigor required to run a large-scale, cloud-native data platform that the business depends on every single day.

Highlights

  • Reduced job failure rates through robust retry logic, standardized error-handling patterns, and conditional handling across Airflow & ADF.

  • Led migration of ETL workflows from Databricks (PySpark) to Snowflake (SnowSQL), integrating orchestration into Airflow for unified monitoring and retries.

  • Designed and deployed centralized Power BI observability dashboards for Airflow DAGs, ADF pipelines, REST APIs, SLAs, and data validations.

  • Delivered real-time data lineage and job status visualizations, accelerating issue identification and preventing SLA breaches.

  • Standardized DQ checks and validation alerts across pipelines, improving trust and consistency in enterprise data.

  • Redesigned alerting logic to eliminate false positives and significantly reduce incident volume.
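The retry and alerting patterns behind the first and last highlights can be sketched in plain Python. This is a minimal illustration of the pattern only, not the actual Airflow/ADF configuration; `run_with_retries` and its parameters are hypothetical names:

```python
import time

def run_with_retries(task, max_retries=3, base_delay=1.0, on_failure=None):
    """Run a task, retrying transient failures with exponential backoff.

    The failure callback fires only once, after the final attempt, which
    keeps alert volume low and avoids false positives on transient errors.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_retries:
                if on_failure:
                    on_failure(exc)  # single alert on terminal failure only
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off: 1s, 2s, 4s, ...
```

In Airflow, the equivalent behavior comes from task-level `retries`, `retry_delay`, `retry_exponential_backoff`, and `on_failure_callback` settings; ADF activities expose a similar retry count and interval.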

Takeaways & Learnings

This role made me who I am today: I evolved into a modern data platform leader—driving reliability, quality, and innovation across a complex, cloud-scale engineering ecosystem.

  • Built deep expertise in operating and scaling a modern, cloud-native data platform across ADF, Snowflake, Databricks, ADLS, and Airflow.

  • Strengthened leadership capability by managing high-stakes operations, critical incidents, and cross-team engineering efforts.

  • Developed a product mindset toward pipelines—focusing on reliability, observability, automation, and continuous improvement.

  • Improved architectural thinking through large-scale migrations, workflow redesigns, and platform-wide reliability patterns.

  • Sharpened decision-making under pressure, balancing business impact with technical constraints.

  • Learned to build a high-performing team culture centered on accountability, ownership, and technical excellence.

Responsibilities

  • Operations Management, Technical Support, Troubleshooting & Incident Resolution

    • Manage end-to-end data operations.

    • Drive the shift from a reactive support culture to a proactive, prevention-first approach.

    • Provide L2/L3 support for the Data Platform and optimize performance and error-recovery mechanisms.

    • Leverage Copilot to enhance support efficiency: rapid code generation, knowledge retrieval, and guided troubleshooting that shorten resolution cycles.

    • Lead incident triage, RCA, and recovery for high-impact issues.

    • Develop troubleshooting playbooks and collaborate with cross-functional teams to resolve platform issues.

  • Development, Enhancements & Production Fixes | Data Quality & Observability

    • Own sprint planning, backlog prioritization, and resource allocation across operations, enhancements, and production fixes.

    • Oversee the design and deployment of new pipelines and enhancements; lead fixes for production issues.

    • Implement robust DQ checks, schema validations, and real-time anomaly detection across pipelines.

    • Build and maintain centralized observability dashboards (Power BI) and end-to-end lineage tracking.

  • Team Leadership & Management | Collaboration & Stakeholder Management

    • Mentor and lead a high-performing team of data engineers, fostering a collaborative, growth-driven culture.

    • Conduct performance reviews, support hiring, and ensure technical skill development within the team.

    • Liaise with data scientists, engineers, and business teams to define requirements, deliver solutions, and enforce data engineering best practices.

  • Process Improvement & Automation | Technology Adoption & Continuous Learning

    • Identify automation opportunities to reduce manual effort and improve operational efficiency.

    • Set up proactive monitoring, alerting, and self-healing mechanisms for critical pipelines.

    • Stay abreast of evolving trends in data engineering, cloud platforms, and automation tools.
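As one concrete example of the DQ checks and validation alerts described above, a minimal schema-and-volume gate might look like the following. This is an illustrative sketch only; `validate_batch` and its thresholds are hypothetical, not the platform's actual validation framework:

```python
def validate_batch(rows, required_columns, min_rows=1):
    """Lightweight data-quality gate: schema presence plus a volume sanity check.

    Returns a list of failure messages. An empty list means the batch passes
    and can proceed downstream; otherwise the pipeline alerts and halts.
    """
    failures = []
    if len(rows) < min_rows:
        failures.append(f"row count {len(rows)} below threshold {min_rows}")
    for i, row in enumerate(rows):
        missing = required_columns - row.keys()  # set difference against row schema
        if missing:
            failures.append(f"row {i}: missing columns {sorted(missing)}")
    return failures
```

Checks like this, run at pipeline boundaries and wired into alerting, are what turn data quality from an after-the-fact audit into a proactive gate.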