Careers
Work on Platforms That Matter
We're a small, senior team solving real data platform problems for enterprise clients. If you know your stack deeply and care about quality, we'd like to talk.
Open Positions
Senior Snowflake Data Engineer
About the Role
You'll be embedded in client data teams, designing and building scalable Snowflake-based data platforms. The work spans architecture decisions, hands-on development, and collaboration with client engineers to raise the quality bar. Expect varied clients, real problems, and no repetitive ticket-churning.
What You'll Do
- Design and implement Snowflake data warehouses and data vault / dimensional models
- Build and maintain ELT pipelines using dbt, Airflow, or similar orchestration tools
- Optimize query performance and warehouse costs through clustering, materialization strategies, and resource monitoring
- Implement data quality frameworks and testing strategies
- Advise clients on best practices around RBAC, data governance, and platform architecture
- Participate in architecture reviews and technical discussions with client stakeholders
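To make the "data quality frameworks" bullet concrete, here is a minimal sketch of the kind of check such a framework runs, written in plain Python. In practice this logic would live in dbt tests or a dedicated framework rather than hand-rolled code, and the `orders` table and column names here are invented for illustration.

```python
# Illustrative only: two common data-quality checks (not-null, uniqueness)
# on plain Python records. The table and columns are invented examples.

def check_not_null(rows, column):
    """Return the rows where `column` is missing or None."""
    return [r for r in rows if r.get(column) is None]

def check_unique(rows, column):
    """Return the values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        value = r.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

# A hypothetical orders table with one null and one duplicate key.
orders = [
    {"order_id": 1, "customer_id": "A"},
    {"order_id": 2, "customer_id": None},
    {"order_id": 2, "customer_id": "B"},
]

print(check_not_null(orders, "customer_id"))  # the row with a null customer
print(check_unique(orders, "order_id"))       # [2]
```

The same two checks map directly onto dbt's built-in `not_null` and `unique` schema tests, which is the more typical home for them in a Snowflake/dbt stack.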
Requirements
- 4+ years of hands-on experience with Snowflake in production environments
- Strong SQL skills and deep understanding of Snowflake-specific features (time travel, zero-copy cloning, streams, tasks)
- Experience with dbt for transformation layer development and testing
- Proficiency with Python for data engineering tasks and scripting
- Familiarity with data modeling approaches (dimensional, data vault, or medallion architecture)
- Experience with a cloud data platform (AWS, Azure, or GCP)
- Ability to communicate technical concepts clearly to non-technical stakeholders
- Professional proficiency in English
Send your CV to careers@southriversdata.com
Senior Databricks Data Engineer
About the Role
You'll work directly with enterprise clients building and optimizing Databricks-based lakehouse platforms. The role covers end-to-end platform work, from ingestion and transformation to governance and performance tuning. You'll be expected to own technical decisions and raise quality standards within client teams.
What You'll Do
- Design and implement lakehouse architectures using Databricks, Delta Lake, and Unity Catalog
- Build scalable batch and streaming data pipelines using PySpark and Spark SQL
- Implement medallion (bronze / silver / gold) architecture and enforce data quality at each layer
- Tune Spark jobs for performance and cluster cost efficiency
- Orchestrate pipelines using Databricks Workflows or external tools such as Airflow
- Advise clients on Unity Catalog setup, data governance, and access control patterns
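The medallion (bronze / silver / gold) bullet above can be sketched in a few lines. This is an illustration on plain Python records with an invented schema; a real pipeline would operate on PySpark DataFrames backed by Delta tables, with quality enforcement at each layer.

```python
# Illustrative bronze -> silver -> gold flow on plain Python records.
# The schema is invented; real layers would be Delta tables in PySpark.

# Bronze: raw events exactly as landed, including a bad record.
bronze = [
    {"user": " Alice ", "amount": "10.5"},
    {"user": "bob", "amount": "4.5"},
    {"user": None, "amount": "1.0"},  # fails the quality gate below
]

# Silver: cleaned and typed, with invalid records filtered out.
silver = [
    {"user": r["user"].strip().lower(), "amount": float(r["amount"])}
    for r in bronze
    if r["user"] is not None
]

# Gold: aggregated, business-ready totals per user.
gold = {}
for r in silver:
    gold[r["user"]] = gold.get(r["user"], 0.0) + r["amount"]

print(gold)  # {'alice': 10.5, 'bob': 4.5}
```

The point of the layering is that each hop narrows the contract: bronze promises only "everything we received", silver promises typed and validated rows, and gold promises aggregates that downstream consumers can trust.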
Requirements
- 4+ years of experience with Databricks or Apache Spark in production environments
- Strong proficiency in PySpark and Python for data pipeline development
- Hands-on experience with Delta Lake (ACID transactions, schema evolution, time travel)
- Experience with Databricks Workflows, Job Clusters, and Unity Catalog
- Solid understanding of data modeling and lakehouse architecture patterns
- Experience deploying Databricks workspaces on at least one major cloud (AWS, Azure, or GCP)
- Good understanding of CI/CD practices for data pipelines (Terraform, GitHub Actions, or similar)
- Professional proficiency in English
Send your CV to careers@southriversdata.com
Why SouthRivers
What Working Here Looks Like
Senior-only environment
You'll work alongside experienced engineers, not manage juniors. Technical depth is the norm here.
Varied, real problems
Different clients, different stacks, different challenges. No repetitive CRUD work. Every engagement teaches you something.
Async-friendly culture
Remote-first, flexible hours, no micromanagement. We care about the output, not the clock.