What You’ll Do
- Design, build, and maintain scalable ETL/ELT pipelines
- Architect and optimize data warehouses and data lakes
- Implement data models to support analytics and product use cases
- Ensure data quality, reliability, and governance across systems
- Integrate third-party APIs and internal services into unified datasets
- Partner with product, engineering, and leadership to define data requirements
- Monitor and improve performance, cost efficiency, and scalability of data systems
What We’re Looking For
- 3+ years of experience in data engineering, or in backend engineering with a heavy data focus
- Strong proficiency in SQL and Python
- Experience with modern data stack tools (e.g., Airflow, dbt, Snowflake, BigQuery, Redshift, Databricks)
- Experience building pipelines in cloud environments (AWS, GCP, or Azure)
- Familiarity with dimensional data modeling (star and snowflake schemas)
- Experience working in fast-paced startup environments is preferred
- Strong ownership mindset and ability to operate autonomously