We've talked about the theory and concepts of Prompt Engineering before, but let's see it in action. LangChain has several useful tools for building testable, repeatable prompts, one of which is the Prompt Template. But what is a prompt template? Where would we use it? How does it actually work?
In this video, Simon and Gavi walk through a quick Dolly 2.0 example, using LangChain to build a prompt template, inject parameters, and control the structure of the prompts being used, before discussing some of the techniques people use to further improve their results!
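To make the idea concrete, here is a minimal Python sketch of what a prompt template does: a fixed prompt skeleton with named slots that get filled in at call time. This is an illustration of the concept only, not the actual LangChain API; the class and variable names here are made up for the example.

```python
# Illustrative sketch of the prompt-template idea (not the LangChain API):
# a reusable template with named slots, filled in per call.

class SimplePromptTemplate:
    def __init__(self, template: str, input_variables: list[str]):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs) -> str:
        # Fail loudly if a declared variable is missing, so prompts
        # stay testable and repeatable.
        missing = [v for v in self.input_variables if v not in kwargs]
        if missing:
            raise KeyError(f"Missing template variables: {missing}")
        return self.template.format(**kwargs)

# The fixed parts (role, task, output constraints) control the prompt's
# structure; only the question is injected per call.
qa_template = SimplePromptTemplate(
    template=(
        "You are a helpful assistant.\n"
        "Answer the question below in one short paragraph.\n"
        "Question: {question}"
    ),
    input_variables=["question"],
)

print(qa_template.format(question="What is a prompt template?"))
```

The payoff is that every prompt sent to the model shares the same tested structure, and only the parameters vary, which is exactly what makes the prompts repeatable.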
If you're looking to integrate LLMs into your application and need help with fine-tuning, automating, and managing your models, give Advancing Analytics a call.