High Performance AI Engineer for Large Scale Generative Models

Brisbane
beBeeEngineering
Posted: 11 December

Job Description

AWS Neuron is the software stack powering AWS Inferentia and Trainium machine learning accelerators, designed to deliver high-performance, low-cost inference at scale. The Neuron Serving team develops infrastructure to serve modern machine learning models—including large language models (LLMs) and multimodal workloads—reliably and efficiently on AWS silicon.

We are seeking a skilled professional to lead and architect our next-generation model serving infrastructure, with a particular focus on large-scale generative AI applications.

Key Job Responsibilities:

* Architect and lead the design of distributed ML serving systems optimized for generative AI workloads
* Drive technical excellence in performance optimization and system reliability across the Neuron ecosystem
* Design and implement scalable solutions for both offline and online inference workloads
* Lead integration efforts with frameworks such as vLLM, SGLang, Torch XLA, TensorRT, and Triton
* Develop and optimize system components for tensor/data parallelism and disaggregated serving
* Implement and optimize custom PyTorch operators and NKI kernels
* Mentor team members and provide technical leadership across multiple work streams
* Drive architectural decisions that impact the entire Neuron serving stack
* Collaborate with customers, product owners, and engineering teams to define technical strategy
* Author technical documentation, design proposals, and architectural guidelines

A Day in the Life:

You'll lead critical technical initiatives while mentoring team members. You'll collaborate with cross-functional teams of applied scientists, system engineers, and product managers to architect and deliver state-of-the-art inference capabilities. Your day might involve:

* Leading design reviews and architectural discussions
* Rapidly prototyping software to demonstrate customer value
* Debugging complex performance issues across the stack
* Mentoring junior engineers on system design and optimization
* Collaborating with research teams on new ML serving capabilities
* Driving technical decisions that shape the future of Neuron's inference stack

About the Team:

The Neuron Serving team focuses on developing model-agnostic inference innovations, including disaggregated serving, distributed KV cache management, CPU offloading, and container-native solutions. We're committed to pushing the boundaries of what's possible in large-scale ML serving.
