Deploy Molmo-7B, an Open-Source Multimodal LLM, on Runpod

Published: December 21, 2024
Channel: Patrick Gerard

In this video I deploy the SOTA Molmo-7B multimodal LLM on a Runpod VPS with 48 GB of VRAM.

For your own deployment, use an L40S instead of the RTX GPU I am using in the video!
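
For reference, here is a minimal sketch of loading and prompting Molmo-7B with Hugging Face transformers on a single GPU. The checkpoint name allenai/Molmo-7B-D-0924, the placeholder image URL, and the processor/generation calls are assumptions based on the model's public quickstart, not taken from the video; see the code link below for what I actually run.

# Minimal sketch: load Molmo-7B and caption one image (assumptions noted above).
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

MODEL_ID = "allenai/Molmo-7B-D-0924"  # assumed checkpoint name

# Molmo ships custom modeling code, so trust_remote_code is required.
processor = AutoProcessor.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# Placeholder image URL; swap in your own image.
image = Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)

# Preprocess the image plus a text prompt into model inputs.
inputs = processor.process(images=[image], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# Generate a response and decode only the newly generated tokens.
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
generated = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(generated, skip_special_tokens=True))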

This model is just insane, and I am going to release another episode about it soon! Make sure you subscribe!

Check out https://finetunefast.com for more.

Code: https://lightning.ai/patrick-1hupo/st...

Runpod (Affiliate Link): https://runpod.io?ref=dcdwr5q2