This role focuses on optimizing AI models for efficiency and involves GPU/CPU code profiling, high-performance programming, and the development of performance tooling.
About Luma AI
Luma's mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable and useful systems, the next step function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
About the Role
The Performance Optimization team at Luma is dedicated to maximizing the efficiency and performance of our AI models. Working closely with both research and engineering teams, this group ensures that our cutting-edge multimodal models can be trained efficiently and deployed at scale while maintaining the highest quality standards.
Responsibilities
- Profile and optimize GPU/CPU/Accelerator code for maximum utilization and minimal latency
- Write high-performance PyTorch, Triton, and CUDA code, falling back to custom PyTorch operations when necessary
- Develop fused kernels that leverage tensor cores and other modern hardware features for optimal utilization across hardware platforms (a minimal Triton sketch follows this list)
- Optimize model architectures and implementations for distributed multi-node production deployment
- Build performance monitoring and analysis tools and automation
- Research and implement cutting-edge optimization techniques for transformer models
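For illustration, here is a minimal sketch of the kind of kernel fusion described above: a fused elementwise add + ReLU written in Triton and launched from PyTorch. The operation, function names, and block size are hypothetical examples chosen for the sketch, not anything from Luma's codebase.

```python
# Minimal sketch (hypothetical op): a fused add + ReLU elementwise kernel in Triton,
# avoiding the extra memory round-trip of running two separate PyTorch ops.
import torch
import triton
import triton.language as tl

@triton.jit
def fused_add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements            # guard the tail of the tensor
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, tl.maximum(x + y, 0.0), mask=mask)

def fused_add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    fused_add_relu_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Fusing the two elementwise ops into one kernel reads and writes each element once instead of twice, which is the basic memory-bandwidth motivation behind the fused-kernel work named in the responsibilities.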
Experience
- Expert-level proficiency in Triton/CUDA programming and GPU optimization
- Strong PyTorch skills
- Experience with PyTorch kernel development and custom operations
- Proficiency with profiling tools (NVIDIA Nsight, the PyTorch profiler, custom tooling); see the profiling sketch after this list
- Deep understanding of transformer architectures and attention mechanisms
- (Preferred) Experience with compilers/exporters such as torch.compile, TensorRT, ONNX, XLA
- (Preferred) Experience optimizing inference workloads for latency and throughput
- (Preferred) Experience with Triton compiler and kernel fusion techniques
- (Preferred) Knowledge of warp-level intrinsics and advanced CUDA optimization
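As a small illustration of the profiling workflow referenced above, the sketch below uses the PyTorch profiler to list the operators with the largest CUDA time in a single forward pass. The model and input shapes are placeholders chosen only for the example.

```python
# Minimal sketch (hypothetical model/shapes): profile one forward pass on a CUDA
# device and print the operators with the largest total CUDA time.
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).cuda().eval()
x = torch.randn(8, 128, 512, device="cuda")

with torch.no_grad():
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
                 record_shapes=True) as prof:
        model(x)

print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```

For kernel-level timelines, prof.export_chrome_trace("trace.json") can dump the same run for a trace viewer, and NVIDIA Nsight tools can be used for deeper GPU-side analysis.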
Your application is reviewed by real people.
Compensation
The base pay range for this role is $187,500 – $395,000 per year.
Top Skills
CUDA
NVIDIA Nsight
ONNX
PyTorch
TensorRT
PyTorch Profiler
Triton
XLA