Overview
Build the data foundations of our monetisation platform.

Thanks is building a customer-first monetisation platform that delivers growth without compromise - for advertisers, publishers, and customers. We are operating at scale today, and entering a phase where data reliability, performance, and intelligence are critical to everything we do.

We are hiring a Senior Data Engineer to build our data foundations from the ground up. This is our first dedicated data engineering hire: a senior individual contributor who will design and deliver the data architecture, models, and products that will scale the success of Thanks. This role is deeply hands-on, highly influential, and foundational to the future of our engineering and product organisation.
Responsibilities
* Build the data platform: Design and deliver a scalable platform that serves as the primary engine for the Thanks Network. Move beyond operational databases into a well-modeled environment that supports both business intelligence and high-scale feature engineering.
* Own the models: Own the data science, technical implementation, and performance of ranking systems. Take responsibility for the models that determine how we prioritise, personalise, and deliver content across the network.
* Build for real-time inference: Own the end-to-end lifecycle of models—from training and validation to real-time inference. Ensure the ranking system is fast, reliable, and fed by high-quality, near-real-time data.
* Unlock model experimentation: Build the framework that allows experiments on ranking systems, enabling accurate measurement of lift, attribution, and model success.
* Own pipelines & observability: Build robust batch and near-real-time pipelines that are resilient and observable. Ensure the data feeding models and experimentation frameworks is flawless.
* Enable self-serve analytics: Design clean, trusted datasets and data marts for product, engineering, and commercial teams to answer questions without bottlenecks.
* Set the data direction: Be opinionated about tooling, architecture, and trade-offs—define what to build, what to buy, and what to retire as data needs evolve.
* Lead through expertise: Act as the go-to data expert across the business, influencing roadmaps and decisions through strong technical judgment.
Requirements
* Experience operating as a senior, hands-on individual contributor in high-growth environments—able to build for scale without over-engineering early.
* Deep strength in both data engineering and applied data science—comfortable writing production-grade Python and performance-optimised SQL.
* Experience building and operating data pipelines in cloud environments.
* Hands-on experience with analytical databases and comfort across operational and analytical data stores.
* Familiarity with streaming or event-driven data architectures.
* Comfortable operating as a senior IC in a greenfield environment—balancing long-term direction with hands-on delivery.
* Excellent communication skills and the ability to partner effectively across Product, Engineering, and Commercial teams.
* Uses AI thoughtfully to augment exploration, modelling, and engineering workflows—accelerating experimentation, debugging, and analysis, while maintaining high standards for data quality, correctness, and ownership.
* Strong internal drive—care deeply about performance, correctness, and building systems that last.
Nice to have
* Experience in adtech, marketplaces, or performance-driven platforms.
* Exposure to experimentation frameworks and attribution models.
* Experience enabling analytics for non-technical teams.
Technical Skills
* Data Engineering: PySpark, dbt, strong SQL skills (must have).
* Workflow orchestration: one of Airflow, Dagster, Step Functions, or equivalent (must have, at least 1).
* DevOps / DataOps: Terraform, CloudFormation, Azure ARM, Kubernetes (must have, at least 1).
* Data Warehouse: Databricks, Snowflake, BigQuery, ClickHouse, Redshift, etc. (must have, at least 1).
* Data Catalog / Feature Store: Databricks Unity Catalog, Atlas (nice to have).
* Event Streaming: Kafka, Kinesis, or equivalent (nice to have).
* Data Analytics / Reporting: Experience with Tableau, Power BI, Superset, etc. (nice to have).
* Data QA: Great Expectations, dbt testing, etc. (nice to have).
Benefits
Why Thanks? At Thanks, we're building a customer-first monetisation platform that delivers growth without compromise - for advertisers, publishers, and customers alike. We power growth for the world's leading brands, delivering tens of millions of high-value "thanks" moments every month. This is a genuine inflection point for the business.
Foundational ownership: You'll build and own core data foundations from the ground up - shaping how ranking, reporting, and decision-making work as the business scales.
Impact you can see: Your work directly influences product performance, experimentation, and how the business learns and scales.
Strategy meets execution: This is a hands-on role - operating deep in the data while setting direction for a fast-growing, complex platform.
Growth, without chaos: You'll work closely with our founders and Head of Product in a culture that values courage over comfort, high standards without ego, and kindness without complacency.
Attractive compensation: Including meaningful equity.
Flexibility with intent: We're Sydney-headquartered and value in-person collaboration. That said, we care more about leadership, impact, and outcomes than rigid rules - and we're open to exceptional candidates across Australia's east coast.
We're building something deliberately - not copying what already exists. If you're excited by foundational ownership, complex data problems, and building systems that genuinely matter, we'd love to hear from you.
Let's build something extraordinary together.