Job Description

We are seeking a highly skilled Data Engineer with experience in cloud-based data platforms to build scalable, reliable data pipelines and robust data models. You will work closely with data, AI, and business stakeholders to maintain a solid data foundation that supports analytics, reporting, machine learning, and downstream data products.

Job Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines, including ingestion, cleaning, transformation, and loading into data lakes and data warehouses.
  • Collaborate with Data Science, BI, Product, and Backend teams to translate business and analytical needs into reliable data models and table structures.
  • Build and optimize Bronze, Silver, and Gold layers to ensure data consistency, performance, and usability (a minimal sketch of this layering follows the list).
  • Manage batch and streaming data processing frameworks such as Spark, Flink, or Kafka, ensuring system stability and efficiency.
  • Implement and maintain data quality monitoring, including schema validation, row-count checks, anomaly detection, and pipeline automation.
  • Provide foundational datasets and feature pipelines to support AI and analytics teams.
  • Work with platform and infrastructure teams to ensure availability, security, and scalability of the data platform.
  • Contribute to data governance practices, including metadata management, data cataloging, field definitions, and versioning standards.
  • Continuously improve pipeline performance, reduce processing costs, and enhance maintainability.
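
For candidates less familiar with the Medallion layering and quality checks named above, here is a minimal PySpark sketch of a Bronze-to-Silver promotion. The bucket paths, column names, and checks are hypothetical illustrations under assumed conventions, not details of this role's actual platform.

```python
# Minimal, hypothetical sketch of a Bronze -> Silver promotion.
# Bucket paths, column names, and checks are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_sketch").getOrCreate()

# Bronze: raw, append-only data as ingested (placeholder path).
bronze = spark.read.parquet("s3://example-bucket/bronze/trades/")

# Schema validation: fail fast if expected columns are missing.
expected_columns = {"trade_id", "symbol", "price", "event_time"}
missing = expected_columns - set(bronze.columns)
if missing:
    raise ValueError(f"Bronze batch is missing columns: {missing}")

# Row-count check: refuse to promote an empty batch.
if bronze.count() == 0:
    raise ValueError("Empty Bronze batch; aborting Silver load")

# Silver: deduplicated, filtered, conformed records.
silver = (
    bronze
    .dropDuplicates(["trade_id"])
    .filter(F.col("price").isNotNull() & (F.col("price") > 0))
    .withColumn("ingest_date", F.to_date("event_time"))
)

silver.write.mode("overwrite").partitionBy("ingest_date").parquet(
    "s3://example-bucket/silver/trades/"
)
```

A Gold layer would typically aggregate Silver tables into business-facing marts; the same validation pattern applies at each hop.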

Qualifications

  • 3–5 years of experience in data engineering or backend engineering, with hands-on experience in large-scale data processing.
  • Bachelor's degree or above in Computer Science, Information Systems, Data Engineering, or related fields.
  • Strong proficiency in SQL and experience with Python or Scala for data processing.
  • Experience with at least one major cloud provider (AWS / GCP / Azure); familiarity with S3, Glue, Lambda, Databricks, or similar platforms.
  • Knowledge of distributed data processing technologies such as Spark, Flink, or Kafka.
  • Solid understanding of data warehousing concepts and data modeling (Star Schema, Data Vault, Medallion Architecture).
  • Experience with ETL/ELT pipeline orchestration tools such as Airflow, dbt, or Dagster (an illustrative DAG sketch appears after this list).
  • Strong communication skills and ability to collaborate with cross-functional stakeholders.
  • Detail-oriented and proactive, with a strong problem-solving mindset.
  • Experience in fintech, trading platforms, or risk-related data is a strong advantage.
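
As a concrete picture of the orchestration experience listed above, here is a minimal Airflow DAG sketch. The DAG id, schedule, and task bodies are hypothetical placeholders, not a description of this team's pipelines.

```python
# Minimal, hypothetical Airflow DAG sketch (Airflow 2.x style).
# The dag_id, schedule, and task logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract: pull raw data from a source system")


def transform():
    print("transform: clean and conform the extracted batch")


def load():
    print("load: write conformed data into the warehouse")


with DAG(
    dag_id="example_daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```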

What we offer

  • A clearly scoped role with well-defined objectives
  • Extensive cross-functional and cross-regional collaboration opportunities
  • Diverse data scenarios with challenging product strategy initiatives
  • Fast-paced and dynamic industry environment
  • Strong sense of ownership
  • Competitive compensation package within a performance-driven culture

Job ID: 138842003
