Description
About the Mission
GM’s vision of Zero Crashes, Zero Emissions, and Zero Congestion guides everything we do in autonomous and assisted driving. The AV organization is building advanced automated driving technologies, including Level 4–capable fully self-driving systems, to move us toward safer, more sustainable, and more accessible mobility.
For the AI Kernels & Compilers team, that mission shows up in the details: turning cutting‑edge perception, prediction, and planning research into production‑grade software that can run efficiently and reliably on real vehicles at scale. We pioneer new approaches to model export, kernel development, and performance engineering so that every cycle on our accelerators translates into better situational awareness, faster reaction times, and more robust behavior on the road.
If you want your compiler and kernel work to directly influence how automated vehicles understand and react to the world, while operating at the safety, reliability, and scale of a company like GM, this is where that impact becomes real.
About the Team
The AI Kernels team builds high‑performance GPU kernels and custom libraries that sit at the heart of our on‑vehicle ML inference for ADAS and autonomous driving. We are responsible for making core AI workloads faster, more reliable, and easier to maintain and deploy on real cars, under real‑world constraints.
That means:
- Designing and implementing custom operators when vendor libraries hit their limits
- Integrating those kernels deep into our ML runtime stack
- Debugging and tuning GPU performance across the AV software stack, often on hardware‑in‑the‑loop
We partner closely with AI Solutions, AI Compilers, AI Architecture, and AI Tooling to ensure models deploy efficiently to the car while consistently meeting strict latency, throughput, and reliability targets. If you enjoy pushing GPUs to their limits and seeing your work directly impact how autonomous vehicles perceive and act in the world, this is the team for you.
What you’ll be doing (Responsibilities)
- Design, implement, benchmark, and iterate on CUDA-based kernels and custom operators to squeeze every last drop of performance out of on-vehicle inference workloads
- Build and improve tooling and infrastructure that make it easier to profile, debug, and validate CUDA kernels and accelerator-backend code across the AV stack
- Partner with AI Solutions, Compilers, and Architecture to translate model and system requirements into concrete kernel roadmaps, priorities, and project plans
- Collaborate with cross-functional teams (compiler, performance tooling, runtime, deployment solutions) to deliver reusable, reliable, high-performance libraries into production
- Maintain high technology standards, methodologies, processes, and guidelines for GPU kernel development and performance engineering through code review
- Manage relationships with internal customers to ensure our kernels and libraries meet real-world needs
Your Skills & Abilities (Required Qualifications)
- 2+ years of relevant industry experience or equivalent experience
- BS, MS, or PhD in Computer Science or a related technical field
- Excellent GPU programming skills in CUDA, with a thorough understanding of parallel programming patterns and GPU architecture
- Hands-on experience benchmarking, profiling, debugging, and optimizing accelerator libraries and kernels to extract optimal performance using the Nsight suite of tools or similar
- Strong background in software architecture, library design, and design patterns
- Strong C++ programming skills and the ability to work comfortably in large codebases
- Solid background in system performance, high-performance computing, and/or architecture-aware optimizations
- Strong communication skills and the ability to work collaboratively within a team
- Excellent analytical and problem-solving skills
What Will Give You A Competitive Edge (Preferred Qualifications)
- 2+ years of relevant industry experience or equivalent experience
- Experience with tensor core programming, CUTLASS, and/or CuTe
- Experience with ML model architectures, in particular transformer-based models
- Experience with low-latency or real-time systems
- Experience with lower levels of an accelerator software stack (e.g., drivers, runtimes, and compilers)
Compensation: The compensation information is a good faith estimate only. It is based on what a successful applicant might be paid in accordance with applicable state laws. The compensation may not be representative for positions located outside of New York, Colorado, California, or Washington.
- The salary range for this role is $128,700 to $261,300. The actual base salary a successful candidate will be offered within this range will vary based on factors relevant to the position.
- Bonus Potential: An incentive pay program offers payouts based on company performance, job level, and individual performance.
- Benefits: GM offers a variety of health and wellbeing benefit programs. Benefit options include medical, dental, vision, Health Savings Account, Flexible Spending Accounts, retirement savings plan, sickness and accident benefits, life insurance, paid vacation & holidays, tuition assistance programs, employee assistance program, GM vehicle discounts, and more.
#GM-AV-1
This role is categorized as hybrid. This means the successful candidate is expected to report to a specific worksite at least three days per week (or at another frequency dictated by their manager).
The successful candidate is expected to travel less than 25% of the time for this role.
This job may be eligible for relocation benefits.
Diversity Information
General Motors is committed to being a workplace that not only is free of unlawful discrimination but also genuinely fosters inclusion and belonging. We believe that a diverse environment empowers our employees to do their best work and develop better products for our customers. Accordingly, we encourage interested candidates to review the key responsibilities and qualifications for each role and to apply for any position that matches their skills and abilities. Applicants in the recruitment process may be required, where applicable, to successfully complete a role-related assessment(s) and/or a pre-employment screening prior to beginning employment. To learn more, please visit the GM hiring process overview.
Equal Employment Opportunity Statement (U.S.)
General Motors is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Accommodations (U.S. and Canada)
General Motors offers opportunities to all job seekers, including individuals with disabilities. If you need a reasonable accommodation to assist with your job search or application for employment, email us at [email protected] or call us at 800-865-7580. In your email, please include a description of the specific accommodation you are requesting as well as the job title and requisition number of the position for which you are applying.
