Overview
At Mutinex, we believe marketing deserves to be treated as a performance discipline — not a cost centre. We're building the growth decision engine that replaces slow, conflicted measurement with fast, independent, AI-powered intelligence. Trusted by brands like Samsung and Domino's, our platform empowers marketers to make confident, data-driven decisions that drive real business growth. We're an Australian-born, globally scaling B2B SaaS company — and we're just getting started.
This role is hybrid across Sydney, Melbourne, and New York. We highly value communication, open-mindedness, and a culture of feedback.
The Role
This is a hybrid technical leadership and people management role. You'll lead a team building and scaling our core platform: GrowthOS / MAITE (our customer-facing growth co-pilot). You stay technical — you're in the code, directing agents, reviewing output, making architectural calls. But your primary impact is through your team: how they work, how fast they're learning, and the quality of what they ship.
What the work looks like
You're managing humans who manage AI agents. That's a new kind of leadership. The old playbook — sprint planning, ticket estimation, velocity tracking — breaks down when agents compress two days of work into hours. You need to design the operating rhythm, review processes, and guardrails that keep quality high when your team is shipping faster than ever before.
What We're Looking For
You've led engineering teams. You've built production systems. And you've genuinely adapted how you or your team work with AI. All three are required — the combination is what makes this role hard to fill.
Technical Leadership
You've led teams that shipped production software. You've made architectural decisions under real constraints, navigated technical debt, and built systems that scaled. You've hired, developed, and retained engineers. You know what good looks like — in code, in process, and in people.
You think in systems, not features. You design team structures, review processes, and delivery cadences — not just technical architectures. You understand that how a team works is as much an engineering problem as what they build.
You've managed the tension between speed and quality. You know that velocity without oversight creates haunted codebases. You've built the guardrails — testing strategies, review standards, deployment gates — that let teams move fast without accumulating problems they haven't found yet.
AI-Native Practice (Non-Negotiable)
You work this way yourself. You direct AI agents, review output, and have a genuine AI-driven development workflow. You can walk through how you approach a non-trivial feature from planning to shipping. You're opinionated about tools — Claude Code, Cursor, MCP servers — and you've built systematic guardrails based on real failure modes you've encountered.
You've started this journey — and you're ready to take it further with a team. Maybe you've already led a team through this transition. Maybe you've gone deep on AI-augmented development in your own work — nights, weekends, side projects — and you're itching to bring that to how your team operates. Either path is valid. What matters is that you've thought seriously about what changes in team rituals, code review, mentorship, and metrics when agents do most of the implementation. You have a point of view, even if you haven't had the environment to fully test it yet.
You're thinking about the mentorship challenge. You recognise that AI intercepts the learning loop for earlier-career engineers. Maybe you've already built approaches to keep learning visible — annotated reviews, architectural interrogation, deliberate pairing. Maybe you've just started thinking about how you'd tackle it. What matters is that you see it as a real problem, not a side concern. It's central to building a team that gets better, not just faster.
You've built and maintained production-grade systems. Distributed systems, data-intensive applications, cloud infrastructure — you've worked at this level. You understand SOLID principles, design patterns, concurrency, and the unique challenges of operating at scale. This depth is what lets you evaluate AI output and guide your team's architectural decisions.
Full-stack fluency. You've worked across backend and frontend. You can evaluate and direct work across the entire stack, even if you have a natural centre of gravity.
Our stack, for context: GCP, TypeScript, React, Python (in some places), Pulumi. Specific language experience matters less than architectural range and the ability to ramp fast.
What Matters Most
* Judgment over velocity. You optimise for sustainable speed. You know that the fastest team is the one that doesn't have to stop and untangle what it shipped last month.
* Builder who leads. You haven't left the code behind. You're selective about where you go deep, but you work the way you're asking your team to work.
* People development as craft. You take mentorship seriously — not as a checkbox, but as a skill you\'re deliberately building. You think about how your engineers grow, not just what they deliver.
* Comfort with ambiguity. The playbook for managing an AI-native team is being written in real time. You\'re energised by that, not paralysed.
Why This Role
You've led teams before. Here's what's different about leading one here.
You'll define how AI-native engineering management works — and you won't do it alone. The playbook doesn't exist yet. You'll write it alongside our Head of Engineering — the team rituals, the review cadences, the mentorship patterns, the metrics, how product gets delivered. If you've been experimenting with AI-augmented workflows and dreaming about what it would look like to run a whole team this way, this is the environment to make it real.
A team that's already moving. You're not dragging people toward AI adoption. You're leading engineers who've already made the shift — and developing hungry earlier-career talent who arrived AI-native. The challenge is quality, architecture, and growth at velocity — not convincing anyone to change.
Hard problems with real constraints. Complex data systems, cloud infrastructure at scale, a product that handles real investment decisions for global brands. This is worth building, and worth building well.
Autonomy over how your team works. We've bet on Claude Code and Forge as our foundation. How you build your team's operating rhythm on top of that is yours to design.
Speed of decision-making. Our product process is built to keep pace with engineering velocity. Your team won\'t ship in days and then wait weeks for the next decision.
Equity ownership. All team members receive equity (ESOP).
Generous time off. 20 days annual leave to start, plus 5 extra after your first year, and 1 additional day each year after that (up to 30).
Parental leave. 12 weeks paid for the primary carer, 6 weeks for the secondary carer after 2 years.
Work from anywhere. Up to 6 weeks each year to work from anywhere in the world, with time zone crossover with Australia.
Committed to inclusion. Mutinex is proud to be an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We welcome applicants of all backgrounds, experiences, identities, and abilities.
Ready?
This is a full-time permanent position.
If you\'ve led engineering teams and you\'ve been experimenting with AI-augmented ways of working — whether with a team or on your own — and you\'re excited about building the playbook for what engineering leadership looks like next, we want to hear from you.
Send us your resume and a short note. Tell us how you work today, how you\'d want your team to work, and what excites you about building this at Mutinex.
We move fast on candidates we\'re excited about.