Overview

We are looking for a Principal Data Engineer who is excited about the mission and outcomes over the next 6-12 months.

Mission

Work closely with cross-functional teams to translate our business vision into impactful data solutions.
Drive the alignment of data architecture requirements with strategic goals, ensuring each solution meets analytical needs and advances core objectives.
Bridge the gap between business insights and technical execution by tackling complex challenges in data integration, modeling, and security, and by setting the stage for exceptional data performance and insights.
Shape the data roadmap, influence design decisions, and empower our team to deliver innovative, scalable, high-quality data solutions every day.

Outcomes

Architecture & Design
- Define the overall greenfield data architecture (batch + streaming) using GCP and BigQuery.
- Establish best practices for ingestion, transformation, data quality, and governance.

Data Ingestion & Processing
- Lead the design and implementation of ETL/ELT pipelines.
  - Ingestion: Datastream, Pub/Sub, Dataflow, Airbyte, Fivetran, Rivery
  - Storage & Compute: BigQuery, GCS
  - Transformations: dbt, Cloud Composer (Airflow), Dagster
- Ensure data quality and reliability with dbt tests, Great Expectations/Soda, and monitoring.

Governance & Security
- Implement Dataplex & Data Catalog for metadata, lineage, and discoverability.
- Define IAM policies, row/column-level security, DLP strategies, and compliance controls.

Monitoring, Observability & Reliability
- Define and enforce SLAs, SLOs, and SLIs for pipelines and data products.
- Implement observability tooling:
  - Cloud-native: Cloud Monitoring, Logging, Error Reporting, Cloud Trace
  - Third-party (nice-to-have): Monte Carlo, Datafold, Databand, Bigeye
- Build alerting and incident response playbooks for data failures and anomalies.
- Ensure pipeline resilience (idempotency, retries, backfills, incremental loads).
- Establish disaster recovery and high availability strategies (multi-region storage, backup/restore policies).

Analytics Enablement
- Partner with BI/analytics teams to deliver governed self-service through Looker, Looker Studio, and other tools.
- Support squad-level data product ownership with clear contracts and SLAs.

Team Leadership
- Mentor a small data engineering team; set coding, CI/CD, and operational standards.
- Collaborate with squads, product managers, and leadership to deliver trusted data.

Requirements
- 10+ years of experience in data engineering, architecture, or platform roles
- Strong expertise in the GCP data stack: BigQuery, GCS, Dataplex, Data Catalog, Pub/Sub, Dataflow
- Hands-on experience building ETL/ELT pipelines with dbt + orchestration (Composer/Airflow/Dagster)
- Deep knowledge of data modeling, warehousing, and partitioning/clustering strategies
- Experience with monitoring, reliability engineering, and observability for data systems
- Familiarity with data governance, lineage, and security policies (IAM, DLP, encryption)
- Strong SQL skills and solid knowledge of Python for data engineering

Nice-to-Have
- Experience with Snowflake, Databricks, AWS (Redshift, Glue, Athena), or Azure Synapse
- Knowledge of open-source catalogs (DataHub, Amundsen, OpenMetadata)
- Background in streaming systems (Kafka, Kinesis, Flink, Beam)
- Exposure to data observability tools (Monte Carlo, Bigeye, Datafold, Databand)
- Prior work with Looker, Hex, or other BI/analytics tools
- Startup or scale-up experience (fast-moving, resource-constrained environments)

Behavioural fit
Ownership, humility, structured thinking, attention to detail, excellent listener and clear communicator.

Interview Process

The successful candidate will participate in the interview stages described below.
The order might vary.
Expect the process to last no more than 3 weeks from start to finish.
Interviews may be conducted via video or in person depending on location and role.

- Screening call - 30-minute chat with Talent Acquisition to learn about you and your goals.
- Technical screening - 30-minute call with a Senior Data Engineer on core concepts.
- Technical competency panel - 60-minute panel focusing on Python and SQL.
- Behavioural panel interview - 60-minute conversation with business leaders.
- Final interview - Closing conversation with the CTO.
- Offer + references - Non-binding offer and references check.

Background checks & Compliance

Sleek is a regulated entity and performs background checks appropriate to the role.
Consent will be obtained.
Checks may include education verification, criminal history, political exposure, and bankruptcy/adverse credit history.
Depending on the role, an adverse result may affect probation.
By applying, you confirm you have read our Data Privacy Statement for Candidates at sleek.com.

Benefits
- Humility and kindness
- Flexibility to work from home 5 days per week (fully remote from anywhere in the world for 1 month per year)
- Financial benefits including market-competitive salaries, generous time off, and potential employee share ownership
- Personal growth through responsibility, autonomy, and training programs
Sleek is a certified B Corp and aims for carbon neutrality by 2030.
📌 Principal Data Engineer
🏢 Sleek
📍 Adelaide