

MLSys 2025 Career Opportunities

Here we highlight career opportunities submitted by our exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting MLSys 2025.


Lead Security Engineer

Location: Redwood City, CA or New York, NY

About Us:

Here at Fireworks, we’re building the future of generative AI infrastructure. Fireworks offers the generative AI platform with the highest-quality models and the fastest, most scalable inference. We’ve been independently benchmarked to have the fastest LLM inference and have been getting great traction with innovative research projects, like our own function-calling and multi-modal models. Fireworks is funded by top investors like Benchmark and Sequoia, and we’re an ambitious, fun team composed primarily of veterans from PyTorch and Google Vertex AI.

The Role:

As the Lead Security Engineer at Fireworks AI, you will be responsible for envisioning and implementing a world-class security program from the ground up. Our cutting-edge infrastructure, world-class inference technology, proprietary in-house research, and the open-source large language models we host operate at extreme scale and are prime targets for sophisticated threat actors the world over. You will be entrusted with securing our AI platform, models, and infrastructure from all manner of attackers.

Key Responsibilities:

  • Hardening our multi-cloud infrastructure to secure customer models and compute clusters
  • Defining and implementing a right-sized Secure Software Development Lifecycle
  • Performing code, architecture, and system security reviews
  • Designing a scalable security program by leveraging automation wherever possible
  • Securing corporate-managed devices
  • Conducting security assessments and risk analyses
  • Ensuring compliance with security frameworks, regulations, and standards (e.g., SOC 2, ISO 27001, GDPR)

Minimum Qualifications:

  • 5+ years of experience in application, product, or infrastructure security
  • Experience working with Python and/or Go
  • Experience working with AWS, GCP, and/or Oracle Cloud
  • Knowledge of Docker and Kubernetes concepts
  • Experience integrating security tools such as Snyk or Semgrep with common CI tools such as Jenkins, CircleCI, or GitHub Actions
  • A high degree of comfort working in a Linux server environment, including on the CLI

Preferred Qualifications

  • Experience securing Kubernetes clusters
  • Familiarity with common web frameworks for Python and/or Go
  • Experience securing multi-cloud environments
  • Familiarity with Oracle Cloud security controls
  • Experience working in complex codebases
  • Experience working with EDR and/or XDR solutions
  • Experience with IaC technology, such as Terraform
  • Familiarity with mobile device management systems
  • Experience securing Google Workspace
  • Experience configuring identity providers such as Okta or OneLogin

Why Fireworks AI?

  • Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
  • Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
  • Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
  • Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.

Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.

Bath


Lecturer / Senior Lecturer

Department: Computer Science

Salary: Salary for a Lecturer (Grade 8) is £46,735 rising to £55,755 per annum. Salary for a Senior Lecturer (Grade 9) is £57,422 rising to £66,537 per annum

The Department of Computer Science wishes to appoint up to seven academics in Artificial Intelligence and Machine Learning.

About the role

You will work with colleagues, students and researchers to develop and publish papers. You will apply for research funding to support your ideas. You will find ways of making your research available to society.

You will design and deliver teaching materials for lectures, tutorials, and labs.

You will have a few internal roles to help the Department run smoothly.

The support and growth opportunities we provide

Training for an HEA fellowship qualification

New Lecturers will be enrolled in the Pathway to HEA Fellowship. Senior Lecturers will have the option to do so.

Mentoring

All of our staff are allocated a mentor when they join. Your mentor will support you in your day-to-day job and help you progress.

About you

Our ideal candidate for both Lecturer and Senior Lecturer positions will hold a PhD or equivalent in a relevant discipline, along with a UG degree or equivalent experience.

  • You should demonstrate substantial research experience in your field, with an emerging track record for Lecturers and an established research profile with funding success for Senior Lecturers
  • A deep conceptual understanding of your subject, alongside experience teaching at UG/PG levels, is essential
  • Strong written, verbal, and interpersonal skills, along with the ability to form positive collaborations, are required
  • Senior Lecturers should also exhibit academic leadership and a clear research vision. Both roles demand excellent organisational and administrative skills
  • A commitment to excellence in research and teaching, student experience, and ethical professional conduct is essential

Training Infrastructure Engineer

Location: Redwood City, CA or New York, NY


About Us:

Here at Fireworks, we’re building the future of generative AI infrastructure. Fireworks offers the generative AI platform with the highest-quality models and the fastest, most scalable inference. We’ve been independently benchmarked to have the fastest LLM inference and have been getting great traction with innovative research projects, like our own function-calling and multi-modal models. Fireworks is funded by top investors like Benchmark and Sequoia, and we’re an ambitious, fun team composed primarily of veterans from PyTorch and Google Vertex AI.

The Role:

As a Training Infrastructure Engineer, you'll design, build, and optimize the infrastructure that powers our large-scale model training operations. Your work will be essential to developing high-performance AI training infrastructure. You'll collaborate with AI researchers and engineers to create robust training pipelines, optimize distributed training workloads, and ensure reliable model development.

Key Responsibilities:

  • Design and implement scalable infrastructure for large-scale model training workloads
  • Develop and maintain distributed training pipelines for LLMs and multimodal models
  • Optimize training performance across multiple GPUs, nodes, and data centers
  • Implement monitoring, logging, and debugging tools for training operations
  • Architect and maintain data storage solutions for large-scale training datasets
  • Automate infrastructure provisioning, scaling, and orchestration for model training
  • Collaborate with researchers to implement and optimize training methodologies
  • Analyze and improve efficiency, scalability, and cost-effectiveness of training systems
  • Troubleshoot complex performance issues in distributed training environments

Minimum Qualifications:

  • Bachelor's degree in Computer Science, Computer Engineering, or related field, or equivalent practical experience
  • 3+ years of experience with distributed systems and ML infrastructure
  • Experience with PyTorch
  • Proficiency in cloud platforms (AWS, GCP, Azure)
  • Experience with containerization, orchestration (Kubernetes, Docker)
  • Knowledge of distributed training techniques (data parallelism, model parallelism, FSDP)

Preferred Qualifications:

  • Master's or PhD in Computer Science or related field
  • Experience training large language models or multimodal AI systems
  • Experience with ML workflow orchestration tools
  • Background in optimizing high-performance distributed computing systems
  • Familiarity with ML DevOps practices
  • Contributions to open-source ML infrastructure or related projects

Why Fireworks AI?

  • Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
  • Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
  • Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
  • Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.

Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.

San Francisco, California


Description

Founded in late 2020 by a small group of machine learning engineers and researchers, MosaicML enables companies to securely fine-tune, train and deploy custom AI models on their own data, for maximum security and control. Compatible with all major cloud providers, the MosaicML platform provides maximum flexibility for AI development. Introduced in 2023, MosaicML’s pretrained transformer models have established a new standard for open source, commercially usable LLMs and have been downloaded over 3 million times. MosaicML is committed to the belief that a company’s AI models are just as valuable as any other core IP, and that high-quality AI models should be available to all.

Now part of Databricks since July 2023, we are passionate about enabling our customers to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI platform so our customers can use deep data insights to improve their business. We leap at every opportunity to solve technical challenges, striving to empower our customers with the best data and AI capabilities.

You will:

  • Design and productionize state-of-the-art tooling and open-source technologies to enable the development of frontier foundation models for Databricks customers
  • Solve complex problems at scale around data preprocessing, model training, hyperparameter tuning, and model evaluation
  • Implement advanced optimization techniques to reduce the resource footprint of models while preserving their performance and balancing usability for our developers and customers
  • Collaborate with product managers and cross-functional teams to drive technology-first initiatives that enable novel business strategies and product roadmaps
  • Facilitate our user community through documentation, talks, tutorials, and collaborations
  • Contribute to the broader AI community by publishing research, presenting at conferences, and actively participating in open-source projects, enhancing Databricks’ reputation as an industry leader

Below are some example projects:

  • Composer: Large-scale distributed deep learning training library
  • Streaming: Library for efficient data loading from cloud object storage
  • LLM Foundry: Framework for developing and evaluating Large Language Models

We look for:

  • Hands-on experience with the internals of deep learning frameworks (e.g., PyTorch, TensorFlow) and GenAI models (e.g., GPT, Stable Diffusion)
  • Experience with large-scale, distributed training on GPUs (e.g., Nvidia, AMD) and alternative deep learning accelerators
  • Strong sense of design and usability
  • Effective communication skills and the ability to articulate complex technical ideas to cross-disciplinary internal and external stakeholders
  • Prior history of contributing to or developing open-source projects is a bonus but not a requirement

We value candidates who are curious about all parts of the company's success and are willing to learn new technologies along the way.

Technical Developer Advocate

Location: Redwood City, CA or New York, NY


About Us:

Here at Fireworks, we’re building the future of generative AI infrastructure. Fireworks offers the generative AI platform with the highest-quality models and the fastest, most scalable inference. We’ve been independently benchmarked to have the fastest LLM inference and have been getting great traction with innovative research projects, like our own function-calling and multi-modal models. Fireworks is funded by top investors like Benchmark and Sequoia, and we’re an ambitious, fun team composed primarily of veterans from PyTorch and Google Vertex AI.

The Role:

As a Technical Developer Advocate at Fireworks AI, you will serve as a technical ambassador for our state-of-the-art platform. In this role, you’ll focus on building strong relationships with developers, crafting compelling technical content, and sharing insights that directly influence our product evolution. This role calls for deep technical expertise, creative communication, and a genuine passion for community engagement.

Key Responsibilities:

  • Community Engagement: Foster a vibrant developer community through forums, social media, webinars, meetups, hackathons, and conferences. Drive initiatives that encourage connection, collaboration, and innovation.
  • Technical Evangelism: Deliver engaging presentations, live demos, and hands-on workshops that highlight Fireworks' capabilities and inspire platform adoption.
  • Content Creation & Thought Leadership: Produce high-quality technical content—blogs, tutorials, docs, and videos—that simplify complex AI topics and provide actionable value.
  • Advocacy & Feedback: Serve as a trusted advocate by capturing developer feedback and sharing insights with internal teams to inform product development.
  • Cross-Functional Collaboration: Partner with Product and Engineering to translate developer needs into impactful roadmap decisions and innovations.
  • Impact Measurement: Leverage data and metrics to assess engagement effectiveness and optimize community strategies.

Minimum qualifications:

  • Bachelor’s degree in Computer Science, Engineering, or a related field—or equivalent practical experience.
  • 3+ years in developer relations, developer advocacy, technical evangelism, or similar roles within developer products or SaaS.
  • Proven experience in engaging and nurturing developer communities.
  • Strong technical proficiency in programming languages (e.g., Python, JavaScript) and familiarity with AI/ML concepts.
  • Excellent communication and presentation skills, with the ability to explain complex technical topics in an accessible manner.
  • A data-driven mindset with strong analytical abilities.

Preferred qualifications:

  • Experience in the AI or machine learning industry, with a deep understanding of generative AI technologies.
  • Previous experience in a startup or fast-paced tech environment.
  • Demonstrated success in creating compelling technical content for diverse developer audiences.
  • Experience contributing to and engaging with open-source communities.
  • Familiarity with tools for community engagement, content management, and performance analytics.

Why Fireworks AI?

  • Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
  • Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
  • Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
  • Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.

Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.

Reader (Assistant/Associate Professor)
The Department of Computer Science at the University of Bath invites applications for up to seven faculty positions at various ranks from candidates who are passionate about research and teaching in artificial intelligence and machine learning. These are permanent positions with no tenure process. The start date is flexible.

The University of Bath is based on an attractive, single-site campus that facilitates interdisciplinary research. It is located on the edge of the World Heritage City of Bath and offers the lifestyle advantages of working and living in one of the most beautiful areas in the United Kingdom.

For more information and to apply, please visit: https://www.bath.ac.uk/campaigns/join-the-department-of-computer-science/

Senior Research Engineer

Location: Mountain View, CA


Description

Megagon Labs is an innovation hub within the Recruit Group, conducting top-notch research and building technologies in Mountain View and Tokyo. We make an impact through the Recruit Group’s worldwide services and products by collaborating with its subsidiaries, such as Indeed and Glassdoor. Our mission is to empower people with better information to make their best decisions. The areas we focus on are Data Management, Data Integration, Machine Learning, Natural Language Processing, and Human-Computer Interaction.

As a Senior Research Engineer, you will work on the productization of cutting-edge NLP work from our research team and play a key role in tech transfer to our subsidiaries for production deployment. Your role will require you to work closely with research and engineering teams and help bridge the gap in transforming research into deliverables, from reading academic research papers through designing, architecting, and building service-oriented systems. Additionally, you will interface with product managers to work towards the adoption of solutions, identifying and meeting technical and other requirements.

Responsibilities

  • Serve as a liaison between research and engineering teams and product managers
  • Lead technical aspects of the project and mentor junior engineers for professional development
  • Transform needs and requirements into technical specifications and solutions
  • Design software architectures and lead development of solutions
  • Keep track of new developments in research and engineering in ML/NLP and related fields
  • Utilize and develop state-of-the-art algorithms and models for ML/NLP, perform analysis to improve models, and clean and validate data for uniformity and accuracy
  • Lead technical development of solutions in dynamic research and engineering teams
  • Communicate effectively within cross-functional teams and with management to ensure timely delivery
  • Actively help build an open, transparent, and collaborative engineering culture

Qualifications

  • Degree in Computer Science or a related discipline, or equivalent practical experience
  • Ability to design software systems utilizing microservices architectures, container orchestration, data management, and cloud computing
  • Ability to apply engineering best practices when making architectural and design decisions in terms of functionality, user experience, performance, reliability, and scalability
  • Demonstrated experience with production-quality software development in Python, Java, and/or C or C++
  • Demonstrated problem-solving, interpersonal, and time-management skills to handle complex situations effectively
  • Strong communication skills and the ability to work as a team lead with high EQ
  • Self-motivated decision maker, frequently taking initiative to improve the code base and share best practices

Additional Benefits

We offer a very competitive salary package and full benefits (medical, dental, vision & life insurance, etc.). Megagon Labs is located in downtown Mountain View, CA, in close proximity to the resources and opportunities of Silicon Valley, and benefits from nearby leading universities such as Stanford and Berkeley. It is also close to many amenities, top schools, and outdoor activities. Megagon Labs provides a highly diverse environment and is proud to be an equal-opportunity employer.

New York


Quantitative Strategies / Technology

Overview

At the D. E. Shaw group, technology is integral to virtually everything we do. We’re seeking exceptional software developers with expertise in generative AI (GAI) to join our team. As a lead software developer in GAI, you’ll lead innovative projects and teams, leveraging your extensive experience and leadership skills to advance our GAI initiatives. By making GAI more accessible for both technical and non-technical users across the firm, you’ll drive substantial business impact.

What you’ll do day-to-day

You’ll join a dynamic environment, leading efforts in advancing GAI capabilities. Depending on your skills and interests, potential areas of focus may include:

  • Leading the development and maintenance of shared GAI infrastructure and applications, ensuring data is prepared and integrated for effective use in GAI initiatives, and enhancing software development team productivity through GAI.
  • Building sophisticated retrieval-augmented generation (RAG) pipelines over large document sets to improve data utility and accessibility across the firm.
  • Managing collaboration with internal groups and end users, accelerating AI product development and deployment, and customizing solutions to their needs.
  • Leading experimentation with new AI-driven tools and applications, integrating them into various platforms, and fostering collaboration to enhance AI effectiveness.
  • Driving greenfield projects, which offer significant opportunities for ownership and growth in a rapidly expanding GAI landscape.

Who we’re looking for

  • We’re looking for candidates who have a strong background in software development and a solid understanding of GAI technologies.
  • Successful developers have traditionally been top performers in their academic programs and possess a strong foundation in AI-related projects.
  • We’re particularly interested in outstanding candidates who have 6+ years of overall experience; who are eager to thrive in an inclusive, collaborative, and fast-paced environment; and who have a proven track record of leading projects and successfully leading or managing teams.
  • The expected annual base salary for this position is USD 275,000 to USD 350,000. Our compensation and benefits package includes substantial variable compensation in the form of a year-end bonus, guaranteed in the first year of hire, and benefits including medical and prescription drug coverage, 401(k) contribution matching, wellness reimbursement, family building benefits, and a charitable gift match program.

Location: San Jose, California, US
Alternate Location: San Francisco, CA; Seattle, WA


Meet the Team

Cisco’s AI Research team consists of AI research scientists, data scientists, and network engineers with subject matter expertise who collaborate on both basic and applied research projects. We are motivated by tackling unique research challenges that arise when connecting people and devices at a worldwide scale.


Who You’ll Work With

You will join a newly formed, dynamic AI team as one of the core members, and have the opportunity to influence the culture and direction of the growing team. Our team includes AI experts and networking domain experts who work together and learn from each other. We work closely with engineers, product managers and strategists who have deep expertise and experience in AI and/or distributed systems.


What You’ll Do

Your primary role is to produce research advances in the field of Generative AI that improve the capabilities of models or agents for networking automation, human-computer interaction, model safety, or other strategic gen-AI-powered networking areas. You will research and build domain-specific foundational representations relevant to networking that provide differentiated value across diverse sets of applications. You will be a thought leader in the global research community by publishing papers, giving technical talks, and organizing workshops.


Minimum qualifications

  • PhD in Computer Science or a relevant technical field and experience within an industry or academic research lab, or a Master’s degree with strong LLM pre-training and post-training experience within an industry or academic research lab, and a minimum of 3 publications at top AI venues such as ACL, EMNLP, ICLR, ICML, NAACL, or NeurIPS
  • Experience working with Machine Learning Models (MLMs) and familiarity with associated frameworks, such as TensorFlow, PyTorch, Hugging Face, or equivalent platforms

Preferred qualifications

  • Experience driving research projects within an industry or university lab
  • Interest in combining representation learning and problem-specific properties
  • Experience building and fine-tuning foundation models, including LLMs, multi-modal models, or domain-specific models
  • Ability to maintain cutting-edge knowledge in generative AI, Large Language Models (LLMs), and multi-modal models and apply these technologies innovatively to emerging business problems, use cases, and scenarios
  • Outstanding communication, interpersonal, and relationship-building skills conducive to collaboration
  • Experience working in an industrial research lab (full-time, internship, etc.)

Location: San Jose, California, US
Alternate Location: San Francisco, CA; Seattle, WA


Why You’ll Love Cisco

Everything is converging on the Internet, making networked connections more meaningful than ever before in our lives. Our employees' groundbreaking ideas impact everything. Here, that means we take creative ideas from the drawing board to build dynamic solutions that have real-world impact. You'll collaborate with Cisco leaders, partner with mentors, and develop incredible relationships with colleagues who share your interest in connecting the unconnected. You'll be part of a team that cares about its customers, enjoys having fun, and takes part in changing the lives of those in our local communities. Come prepared to be encouraged and inspired.


Who We Are

Cisco’s AI Research team consists of AI research scientists, data scientists, and network engineers with subject matter expertise who collaborate on both basic and applied research projects. We are motivated by tackling unique research challenges that arise when connecting people and devices at a worldwide scale.


Who You’ll Work With

You will join a newly formed, dynamic AI team as one of the core members, and have the opportunity to influence the culture and direction of the growing team. Our team includes AI experts and networking domain experts who work together and learn from each other. We work closely with engineers, product managers and strategists who have deep expertise and experience in AI and/or distributed systems.


What You’ll Do

Your primary role is to produce research advances in the field of Generative AI that improve the capabilities of models or agents for networking automation, human-computer interaction, model safety, or other strategic gen-AI-powered networking areas. You will research and build domain-specific foundational representations relevant to networking that provide differentiated value across diverse sets of applications. You will be a thought leader in the global research community by publishing papers, giving technical talks, and organizing workshops.


Minimum qualifications

  • PhD in Computer Science or a relevant technical field and experience within an industry or academic research lab, or a Master’s degree and 3+ years of experience within an industry or academic research lab, and a minimum of 3 publications at top AI venues such as ACL, EMNLP, ICLR, ICML, NAACL, or NeurIPS
  • Experience working with Machine Learning Models (MLMs) and familiarity with associated frameworks, such as TensorFlow, PyTorch, Hugging Face, or equivalent platforms

Preferred qualifications

  • Experience driving research projects within an industry or university lab
  • Interest in combining representation learning and problem-specific properties
  • Experience building and fine-tuning foundation models, including LLMs, multi-modal models, or domain-specific models
  • Ability to maintain cutting-edge knowledge in generative AI, Large Language Models (LLMs), and multi-modal models and apply these technologies innovatively to emerging business problems, use cases, and scenarios
  • Outstanding communication, interpersonal, and relationship-building skills conducive to collaboration
  • Experience working in an industrial research lab (full-time, internship, sabbatical, etc.)