Job Description
As a Data Reliability Engineer, you will design and maintain data pipelines and implement automated testing and validation frameworks that guarantee data integrity. You will work closely with senior engineers and product teams to deliver trusted data products. In this role, you will leverage modular capabilities across key pipelines and integrate teams, processes, and products to enable consistent, automated, and secure data management. By automating processes and implementing security and governance, you will simplify complex data operations and enable self-service predictive analytics. We are committed to adopting cutting-edge technology to build scalable tools that empower customers to perform self-service operations and meet business needs. This role requires a passion for building data products that are trusted, discoverable, and reusable.

Responsibilities:
- Develop and maintain ETL/ELT pipelines using tools such as dbt, MageAI, and Azure Data Factory (ADF).
- Implement test automation for data pipelines, including unit tests, integration tests, and data quality checks.
- Collaborate with QA and DataOps teams to design metadata-driven validation frameworks.
- Profile and monitor data quality using data profiling and data validation tests.
- Work with cloud platforms (Azure or AWS) and data warehouses (Snowflake, MSSQL).
- Ensure compliance with data governance and security standards.
- Partner with senior engineers to optimize pipeline performance and scalability.
- Contribute to documentation and knowledge sharing on pipeline QA best practices.
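For illustration only, the data quality checks mentioned in the responsibilities (e.g. not-null and uniqueness checks) might look like the minimal Python sketch below. The table shape, column names, and helper functions are hypothetical, not part of any specific framework used by the team:

```python
# Minimal sketch of standalone data quality checks.
# A table is represented as a list of dicts; column names are hypothetical.

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for row in rows:
        value = row.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

# Example data with one missing amount and one duplicated key.
orders = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},
    {"order_id": 2, "amount": 5.0},
]

null_rows = check_not_null(orders, "amount")   # row indices with missing amounts
dup_ids = check_unique(orders, "order_id")     # duplicated order IDs
```

In practice, checks like these are typically expressed declaratively (for example as dbt schema tests such as `not_null` and `unique`) and wired into the pipeline's CI rather than hand-rolled.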