Data Engineer
Our Technology team is the backbone of our company: constantly creating, testing, learning and iterating to better meet the needs of our customers. If you thrive in a fast-paced, ideas-led environment, you’re in the right place.
Why this job’s a big deal
As a Data Engineer in the Product Operating Model (POM), your focus is on building and maintaining reliable, scalable, and high-performing data pipelines that deliver trusted data for analytics and product initiatives. You are accountable for designing, developing, and optimizing data ingestion, transformation, and validation processes, while collaborating closely with Data Architects, Product Managers, and other stakeholders to ensure data solutions meet business and product requirements.
In this role, you will get to:

1. Build and Optimize Data Pipelines

- Build and optimize data pipelines for ingestion, transformation, and validation
- Assist with CI/CD practices to automate pipeline deployment and monitoring
- Develop and maintain data validation and quality assurance tests to ensure reliable outputs
2. Data Architecture & Modeling

- Work in tandem with Data Architects to align on data architecture requirements
- Build and maintain data models that support analytics, reporting, and product features
- Define solution parameters for data preparation, integration, and storage
3. Collaboration & Cross-Functional Alignment

- Partner with Product Managers, Analysts, and Engineering teams to understand data requirements
- Facilitate communication between data, product, and development teams to ensure alignment
- Provide guidance on best practices for data management and pipeline design
4. Continuous Improvement

- Identify opportunities to improve pipeline performance, reliability, and maintainability
- Evaluate and implement new data tools, technologies, and frameworks
- Share knowledge and mentor team members on data engineering practices
5. Reporting & Quality Metrics

- Monitor and report on data pipeline performance, failures, and data quality issues
- Ensure transparency of data availability, integrity, and lineage for stakeholders
- Support release readiness by validating that data pipelines deliver accurate and timely data
Who you are:

- Bachelor’s degree in Computer Science or a related field
- 3–5 years of hands-on experience building and maintaining data pipelines and architectures in a corporate or product-driven environment
- Proficient with SQL and modern data tools (e.g., Spark, Airflow, Snowflake, dbt)
- Demonstrated expertise in Big Data solutions, including frameworks such as Apache Spark and Kafka; proficiency in the Google Cloud Platform (GCP) stack is highly desirable
- Strong command of databases, including Cloud SQL (MySQL, PostgreSQL), BigQuery, and Oracle
- Familiar with data governance, security protocols, and compliance best practices
- Proficiency in data orchestration frameworks, particularly Apache Airflow, for scheduling and managing complex workflows
- Proven ability to analyze, debug, and resolve data processing issues efficiently
- Proficient in Java or Python, with the ability to write efficient, maintainable code in either language
- Demonstrated history of living the values central to Priceline: Customer, Innovation, Team, Accountability and Trust
- The Right Results, the Right Way is not just a motto at Priceline; it’s a way of life. Unquestionable integrity and ethics are essential.
#LI-hybrid