Data Platform Owner

Job Locations RS-Beograd
ID
2026-6693
Category
Technology
Position Type
Regular Full-Time

About HireRight

HireRight is the premier global background screening and workforce solutions provider. We bring clarity and confidence to vetting and hiring decisions through integrated, tailored solutions, driving a higher standard of accuracy in everything we do. Combining in-house talent, personalized services, and proprietary technology, we ensure the best candidate experience possible. PBSA accredited and based in Nashville, Tennessee, we offer expertise from our regional centers across 200 countries and territories in The Americas, Europe, Asia, and the Middle East. Our commitment to get it right every time, everywhere, makes us the trusted partner of businesses and organizations worldwide.

Overview

The Data Platform Owner is responsible for the strategic direction, governance, and operational excellence of a cloud-based data platform built on a medallion architecture and powered by Databricks. This role ensures the platform delivers scalable, reliable, and high-quality data products that support analytics, data science, and business decision-making. The Data Platform Owner partners closely with engineering, analytics, and business teams to drive platform adoption, optimize performance, and continuously evolve data capabilities in alignment with organizational goals. 

Responsibilities

  • Design and maintain standards for data ingestion, transformation, and serving in our cloud data ecosystems.

  • Coordinate oversight of existing data pipelines, including implementing solutions that simplify them and improve performance.

  • Ensure the team follows best practices and executes architected techniques and solutions for data collection, management, and usage in support of the company-wide data governance and management framework.

  • Work closely with data analysts and scientists, and database and systems administrators to identify data solutions.  

  • Evaluate new data sources for quality and attribution to support product requirements. 

  • Document new and existing pipelines, datasets, and dataset lineage.

  • Drive cost optimization and monitoring strategies across cloud environments and Databricks.

  • Define data retention strategies (archival and deletion) in accordance with compliance and legal guidelines.

  • Set team priorities to ensure task completion.

  • Participate in the formulation of departmental objectives, plans, and policies. 

  • Act as a liaison between data engineering, business units, business analysis personnel, and IT infrastructure.

  • Establish platform engineering best practices.

  • Interpret customer needs and assess requirements; work with team members to clarify tasks and support the production of solutions.

  • Engage in recruiting and training new staff.

  • Provide concise status updates, including challenges, to senior management.

The Ideal Candidate Will Have:

  • Must be familiar with the Databricks Data Intelligence Platform (administration, workspace management, jobs/pipelines, clusters, Delta Lake, cost optimization).

  • Must be familiar and hands-on with Delta Lake concepts, with the ability to fine-tune Delta Lake performance.

  • Must be familiar with the Databricks Declarative Pipeline framework. 

  • Familiar with cloud services (AWS preferred, including S3, IAM, EC2, Lambda, Glue, and VPC). 

  • Strong communication skills, with the ability to present complex technical solutions to a general audience. This includes proficiency in PowerPoint, Visio, draw.io, etc.

  • Familiar with the software development life cycle as it relates to data integration and data quality. 

  • Good understanding of data governance and experience with data governance tools such as Alation, DataHub, Collate, or others.

  • Experience designing and building production data pipelines from ingestion to consumption within a hybrid data architecture, using PySpark, PySql, Java, Python, Scala, C#, etc.

  • Experience designing and implementing scalable and secure data processing pipelines.

  • Strong experience with common data warehouse modelling principles, including Kimball and Inmon.

  • Strong experience with data processing frameworks such as Apache Spark and Hadoop.

  • Ensuring data quality and consistency through data cleaning, transformation, and integration processes. 

  • Must have knowledge of DevOps processes (including CI/CD) and infrastructure as code.

  • Experience managing, monitoring, and troubleshooting data-related issues within the cloud environment to maintain high availability and performance.

  • Experience establishing data lineage frameworks. 

  • Collaborating with data scientists, business analysts, and other stakeholders to understand data requirements and implement appropriate data solutions. 

  • Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information. 

  • Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks. 

  • Knowledge of BI tools such as Tableau and the Microsoft BI stack (SSRS; SSAS Tabular with DAX and OLAP with MDX; SSIS) is desirable.

  • Keeping abreast of the latest AWS features and technologies to enhance data engineering processes and capabilities. 

  • Experience with AWS SageMaker Unified Studio is a plus.

  • Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards. 

  • Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging. 

  • Knowledge of FinOps / cloud cost optimization. 

  • Experience implementing Unity Catalog.

  • Skills in navigating interpersonal dynamics, resolving team conflicts while maintaining a positive work culture. 

Qualifications

  • BA/BS in Computer Science, Engineering, or a related field, or equivalent experience.
  • AWS Certified Solutions Architect – Associate or Professional.
  • Databricks Certified Data Engineer – Associate, Professional, or Administrator.
  • Certification as a Terraform Associate or FinOps Practitioner a plus.
  • 5+ years of experience in data management and/or data curation.
  • 5+ years of development experience with Oracle and/or SQL Server.
  • 5+ years of experience with Python/PySpark frameworks and libraries.
  • Ability to develop, implement, and optimize code using procedural languages such as PL/SQL, T-SQL, etc.
  • Experienced with SSIS and C# development.
  • Experienced with data normalization and denormalization techniques.
  • Experienced in implementing large-scale, event-based streaming architectures.
  • Experienced in data transformation and data processing techniques.
  • Knowledge of API and microservice development.
  • Experienced in Agile methodology and/or pair programming.
  • Preferred: knowledge of AI/ML concepts and technologies.
  • Preferred: experience with stream-processing systems.
  • Preferred: working knowledge of cloud architectures on AWS, Azure, and GCP.
  • Preferred: 2+ years of experience in a leadership role, managing and mentoring teams.
  • Strong English verbal and written communication skills.
  • Experienced in working with cross-functional teams, building alignment and collaboration.

What We Offer

  • Hybrid work (flexibility to work 2 days/week from home)

Please submit resume/CV in English.

 

All resumes are held in confidence. Only candidates whose profiles closely match requirements will be contacted during this search.

HireRight does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of HireRight and HireRight will not be obligated to pay a placement fee.
