Job Opportunity
We're developing AI systems that control real-world machines, not just simulations or benchmarks.
The role involves designing and training Large Language Models from scratch, working with genuine datasets, and helping us build recursive agentic frameworks that think before they act.
Responsibilities:
* Designing, training, and fine-tuning LLMs without relying on shortcuts.
* Developing chain-of-thought frameworks and multi-step reasoning agents.
* Collaborating closely with robotics and speech teams to bring models to life.
* Contributing to an evolving agent architecture with plans for vision-language integration.
Requirements:
* Proven experience training transformer-based LLMs (e.g., GPT, LLaMA, or custom architectures).
* Comfort building and managing your own datasets.
* Familiarity with agentic AI, tool-calling, or recursive planning.
* Exposure to speech-to-text pipelines or the Model Context Protocol (MCP).
Preferred candidates have a research background in LLMs, NLP, or ML and are ready to leave academia behind to make a tangible impact. We also offer relocation support and visa sponsorship for the right candidate who is not yet based in Australia.