Tech Session: Sustainability / Innovation, AI

AI & MLOps infrastructure for enterprise-grade LLMs

Despite the advent of commercially viable open-source large language models (LLMs) and generative AI, companies are struggling to turn cutting-edge models like Llama 2 and Stable Diffusion into production-ready applications. Building a simple demo page on a personal laptop is one thing; training, fine-tuning, and serving multi-billion-parameter LLMs on HPC-scale infrastructure, with proprietary enterprise data, is an entirely different engineering challenge. In this session, Yongseon, who co-founded VESSL AI and now leads its infrastructure team, explores the common data, security, and cost challenges of enterprise AI and shares how companies can use MLOps infrastructure to go from a model playground to deploying enterprise-scale LLM services in weeks.

Speakers

Yongseon Lee

VESSL AI
