Senior Data Engineer – Customer Growth

R5349

Location

Mumbai

Career Track

Technology

Senior Data Engineer – Customer Growth

This role is eligible for our hybrid work model: Two days in-office.

Our Technology team is the backbone of our company: constantly creating, testing, learning and iterating to better meet the needs of our customers. If you thrive in a fast-paced, ideas-led environment, you’re in the right place. 

Why this job’s a big deal:

You will serve as the guardian of our measurement integrity, unifying diverse datasets—partner reports, internal order systems, and raw tracking signals—to build a complete, trustworthy view of marketing ROI. In this role, you’ll architect advanced data models within our modern data warehouse and tackle complex attribution challenges that shape how we invest millions in marketing spend.

In this role you will get to:

  • Own the end-to-end data lifecycle for campaign performance, transforming disparate data sources (partner reports, raw logs, internal orders) into a unified, clean, and modeled analytical asset.

  • Build next-generation ETL/ELT pipelines using modern orchestration and processing tools (Airflow, Spark) to automate the complex data joining and transformation required for multi-touch attribution.

  • Architect and optimize our columnar data warehouse (BigQuery) to ensure rapid query performance, enabling analysts to answer complex ROI questions on demand.

  • Be the ultimate owner of data quality, implementing robust governance and validation processes to ensure every dollar of marketing spend is accurately measured and reported.

  • Diagnose and troubleshoot any issues within the data infrastructure.

  • Work collaboratively with the team, provide guidance to junior members, and confidently champion the solutions you believe in—while engaging in respectful dialogue and finding constructive compromises when needed.
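The first responsibility above — unifying partner spend reports with internal order data into a single ROI view, with data-quality flags — can be sketched in miniature. This is an illustrative sketch only; the field names (`campaign_id`, `spend`, `revenue`) and record shapes are hypothetical, and a production version would live inside an orchestrated pipeline rather than plain functions:

```python
# Minimal sketch (hypothetical field names): join partner spend reports
# with internal order revenue per campaign, compute ROI, and flag
# campaigns whose spend has no matching revenue (a data-quality check).
from collections import defaultdict


def campaign_roi(partner_spend, internal_orders):
    """partner_spend: list of {'campaign_id', 'spend'};
    internal_orders: list of {'campaign_id', 'revenue'}."""
    revenue = defaultdict(float)
    for order in internal_orders:
        revenue[order["campaign_id"]] += order["revenue"]

    report = {}
    for row in partner_spend:
        cid, spend = row["campaign_id"], row["spend"]
        rev = revenue.get(cid, 0.0)
        report[cid] = {
            "spend": spend,
            "revenue": rev,
            # ROI as (revenue - spend) / spend; undefined for zero spend
            "roi": (rev - spend) / spend if spend else None,
            # data-quality flag: spend reported but no orders attributed
            "unmatched": cid not in revenue,
        }
    return report


spend = [{"campaign_id": "c1", "spend": 100.0},
         {"campaign_id": "c2", "spend": 50.0}]
orders = [{"campaign_id": "c1", "revenue": 250.0}]
print(campaign_roi(spend, orders))
```

In a real pipeline the same join-validate-report pattern would run as warehouse SQL or Spark transformations scheduled by Airflow, with the `unmatched` flag feeding an alerting or reconciliation step.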

Who you are: 

  • Bachelor’s degree in Computer Science or a related field.

  • 6+ years of experience in software engineering and development, with a strong focus on Big Data technologies and data-driven solutions.

  • Demonstrated expertise in Big Data solutions, including frameworks such as Apache Spark and Kafka. Proficiency in the Google Cloud Platform (GCP) stack is highly desirable.

  • Advanced knowledge in data modeling and schema design for optimized data storage and retrieval.

  • Strong command of databases, including CloudSQL (MySQL, PostgreSQL), BigQuery, BigTable, and Oracle.

  • Extensive experience with GCP tools, specifically Dataflow, Dataproc, and Datastream, for building and managing large-scale data processing pipelines.

  • Proficiency in data orchestration frameworks, particularly Apache Airflow, for scheduling and managing complex workflows.

  • Proven ability to analyze, debug, and resolve data processing issues efficiently.

  • Proficiency in Python and/or Java, plus Unix shell scripting, with the ability to write efficient, maintainable code.

  • Experience with data visualization tools, such as Tableau or Looker, for presenting data insights.

  • Familiarity with advanced data processing strategies, including indexing, machine-learned ranking algorithms, clustering techniques, and distributed computing concepts.

  • Demonstrated history of living the values core to Priceline: Customer, Innovation, Team, Accountability, and Trust.

  • The Right Results, the Right Way is not just a motto at Priceline; it’s a way of life. Unquestionable integrity and ethics are essential.

#LI-hybrid