
Solving GenAI's great challenge: Evaluating your LLM in production

Tech Horizons executive webinar series

Can you trust the accuracy and reliability of your Large Language Model (LLM) outputs? The opaque nature of LLMs is one of the biggest challenges preventing organizations from getting great AI concepts into production. Traditional machine learning evaluation techniques simply fall short with LLMs, and new LLM evaluation frameworks seem to be popping up every week. What’s the right approach for your RAG and fine-tuning use cases? In this webinar, our AI experts will discuss how to evaluate LLM effectiveness and risks.


Attendees will learn:

  • Tips for defining clear objectives for what the LLM should achieve.
  • What performance metrics to assess for accuracy, relevance, response time, toxicity and user satisfaction.
  • How to identify and categorize errors through error analysis.
  • Benchmarking considerations.
  • How and when to consider qualitative user feedback.


Date: Thursday, July 18, 2024

Time: 8am PDT | 11am EDT | 5pm CEST

Moderators and panelists

Prasanna Pendse

Director of AI Strategy, Thoughtworks (Moderator)

Aaron Erickson

Senior Engineering Manager, NVIDIA

Musa Parmaksiz

Head of AI and Data Center of Excellence, UBS Investment Bank

Shayan Mohanty

Head of AI Research, Thoughtworks

Register today

Register for this session by completing the form below.




Discover more insightful sessions in our series