Senior Data Engineer
This role is eligible for our hybrid work model: two days in-office.
Our Technology team is the backbone of our company: constantly creating, testing, learning and iterating to better meet the needs of our customers. If you thrive in a fast-paced, ideas-led environment, you’re in the right place.
Why this job’s a big deal:
As a Senior Data Engineer on our FinTech data engineering team, you'll own the cloud-scale data backbone that powers finance, accounting, and ERP across the company. Our financial data warehouse and GCP-based batch pipelines, which process millions of financial events per day, are the system of record for revenue, costs, and compliance reporting. You will design and harden the platforms that keep every transaction accurate, auditable, and timely, directly influencing decisions made by finance leaders and executives, while mentoring other engineers and shaping our long-term data architecture on GCP.
In this role you will get to:
- Participate in migrating and modernizing our current data systems for the cloud.
- Oversee data systems that serve millions of events a day.
- Design and maintain early-warning and analytics systems using cutting-edge big data technologies.
- Collaborate on building software that collects and queries data, and compose queries for investigations.
- Analyze data and provide data-supported recommendations to improve product performance and customer acquisition.
- Diagnose and troubleshoot issues within the data infrastructure.
- Collaborate effectively in team efforts, mentor junior team members, advocate for what you believe are the best solutions, and engage respectfully and compromise when necessary.
Who you are:
- Bachelor's degree in Computer Science or a related field.
- 5+ years of experience in software engineering and development, with a strong focus on big data technologies, including frameworks such as Apache Spark and Kafka. Proficiency with Google Cloud Platform (GCP), particularly Dataflow and Dataproc, for building and managing large-scale data processing pipelines.
- Advanced knowledge of data modeling and schema design for optimized data storage and retrieval.
- Strong command of databases, including Cloud SQL (MySQL, PostgreSQL), BigQuery, and Oracle.
- Proficiency in data orchestration frameworks, particularly Apache Airflow, for scheduling and managing complex workflows.
- Proven ability to analyze, debug, and resolve data processing issues efficiently.
- Experience with enterprise ETL tools such as Informatica (PowerCenter or similar) is a plus, especially for understanding and helping migrate legacy batch processes into GCP-based pipelines.
- Experience with data visualization tools, such as Tableau, is a plus for presenting data insights.
- Familiarity with advanced data processing strategies such as indexing, machine-learned ranking algorithms, clustering techniques, and distributed computing concepts, including advanced Spark concepts.
- Demonstrated history of living the values central to Priceline: Customer, Innovation, Team, Accountability, and Trust.
- The Right Results, the Right Way is not just a motto at Priceline; it's a way of life. Unquestionable integrity and ethics are essential.
#LI-hybrid