Low Level Technicals of LLMs: Daniel Han

Published: June 2, 2025
on the channel: AI Engineer

This workshop will be split into three one-hour blocks:

How to analyze & fix LLMs - finding and fixing bugs in Gemma, Phi-3, Llama & tokenizers
Finetuning with Unsloth - continued pretraining, reward modelling, QLoRA & more (see the minimal QLoRA sketch after the setup note below)
Deep dive into LLM technicals - hand-deriving derivatives, SOTA finetuning tricks (a worked gradient example follows below)
It's recommended you have Python with PyTorch and Unsloth installed (or use online Google Colab / Kaggle notebooks). College-level math and programming experience would be helpful.
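As a rough starting point for the finetuning block, here's a minimal sketch of what a QLoRA setup with Unsloth can look like. The model id and LoRA hyperparameters below are illustrative assumptions, not settings taken from the workshop:

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (the QLoRA approach);
# the model id here is an example, not the workshop's choice.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed model id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank, alpha, and target modules are example values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

From there the model drops into a standard Hugging Face / TRL training loop (e.g. SFTTrainer); the workshop walks through the full flow.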
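And as a taste of the "hand-deriving derivatives" block, here is one worked example of the kind such deep dives cover (this particular derivation is an illustration, not taken from the workshop): the gradient of softmax cross-entropy.

```latex
% Cross-entropy loss over logits z with target class t:
%   p_i = exp(z_i) / sum_j exp(z_j),   L = -log p_t
% Since log p_t = z_t - log sum_j exp(z_j), differentiating gives:
\[
\frac{\partial L}{\partial z_i}
  = -\frac{\partial \log p_t}{\partial z_i}
  = -\bigl(\mathbb{1}[i = t] - p_i\bigr)
  = p_i - \mathbb{1}[i = t]
\]
% i.e. the gradient is simply "predicted probabilities minus one-hot target".
```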

Recorded live in San Francisco at the AI Engineer World's Fair. See the full schedule of talks at https://www.ai.engineer/worldsfair/20... & join us at the AI Engineer World's Fair in 2025! Get your tickets today at https://ai.engineer/2025

About Daniel
Hey, I'm Daniel, the algos guy behind Unsloth. I love making LLM training go fast! We're the team that fixed 8 of Google's Gemma bugs and a 2048 sliding window attention (SWA) issue in Phi-3, and found tokenization issues and fixed untrained tokens in Llama-3. I run Unsloth with my brother Michael!

Our open-source package makes finetuning LLMs 2x faster with 70% less VRAM and no accuracy degradation. I used to work at NVIDIA making GPU algos go fast, and I helped NASA engineers process data from a Mars rover faster!