About The Role
Maincode builds foundation models from first principles on Australian infrastructure. We design architectures, run our own compute, shape the training process, and operate the systems that serve our models.
We have built Matilda, the first large language model designed and trained from scratch in Australia. Our new compute cluster is live; we are scaling the next version of Matilda and deploying it for public access.
We are looking for AI researchers who want to work on the core architecture, training, and evaluation of large-scale language models that power Matilda.
This role is not focused on incremental benchmarking or paper output. You will work directly with the engineers running large-scale training systems and help design models that learn efficiently and behave reliably in production.
What You Would Actually Do
You will work across the model development loop, from research questions to training runs to evaluation.
This Includes
* Designing and testing architecture changes and training regimes for large language models
* Running controlled experiments at scale and isolating causal effects
* Studying failure modes in reasoning, generalisation, robustness, and representation
* Shaping objectives, data mixtures, and optimisation choices that influence model behaviour
* Building and refining evaluations that measure capability and reliability, not just scores
* Analysing training dynamics using logs, metrics, and model outputs
* Collaborating with ML systems engineers on distributed training and training operations
* Writing clear internal notes that turn experimental results into design decisions
You will spend substantial time in code, training runs, logs, and evaluation outputs. The goal is clarity about what improves the model and why.
What We Are Looking For
We care about depth of reasoning, experimental discipline, and the ability to make progress under ambiguity.
We Expect
* Hands-on experience writing and running production-grade ML or research code
* Strong Python and experience with PyTorch or JAX
* Solid understanding of transformer-based language models and the basics of pre-training and evaluation
* Ability to design experiments, interpret results, and communicate trade-offs clearly
* Comfort working close to infrastructure, performance constraints, and operational reality
* Interest in and exposure to reasoning-oriented architectures and training methods beyond standard approaches and standard LLMs
Nice to have
* Experience with distributed training concepts and tooling (data parallel, tensor parallel, sharding, checkpointing)
* Experience running training across multiple nodes and managing long training cycles
* Familiarity with large-model training stacks and frameworks (for example Megatron-style systems, DeepSpeed-like tooling, FSDP, or similar)
* Comfort across the full workflow: training, evaluation, and deployment constraints
* Experience working in ROCm-based environments
How You Would Work
This is hands-on research. You will use code as a primary tool for thinking.
You Will Be Expected To
* Move between theory and implementation quickly and precisely
* Prefer controlled experiments over broad sweeps
* Use logs, metrics, and model behaviour to guide decisions
* Work closely with engineering counterparts to scale and validate ideas
What This Role Is Not
* It is not a product research role
* It is not prompt engineering
* It is not fine-tuning someone else's model and shipping wrappers around external APIs
You will work on Matilda, trained from scratch on our infrastructure, and pushed until its behaviour is understood and improved.
Why Maincode
Maincode builds and operates the full stack: training infrastructure, model code, evaluation systems, and deployment. We run one of the largest private AI compute environments in Australia, built for the sole purpose of training and deploying large-scale models.
If you want to work directly on training and evaluating a large language model built from scratch, this is the only role in Australia that will put you inside that work.
Note
This is a full-time role based in Melbourne, working closely with our in-person team. At this time we are not able to offer visa sponsorship, so applicants must have existing and unrestricted work rights in Australia.