LLM apps typically require user data that is not part of the model.
One way of "making the user data part of the model" is Retrieval-Augmented Generation (RAG).
In this process, external data is retrieved (e.g., from a database or file system) and passed to the LLM at generation time.
In this stream, I'll go through the retrieval and document loaders section of LangChain and explore the features it offers.
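The retrieve-then-generate flow described above can be sketched in a few lines of plain Python. This is a simplified illustration (toy documents, naive word-overlap ranking instead of embeddings, and a prompt string instead of an actual LLM call), not LangChain's API:

```python
# Minimal sketch of the RAG flow: retrieve relevant external documents,
# then pass them to the LLM as context in the prompt.
# Documents and scoring are toy examples for illustration only.

documents = [
    "LangChain provides document loaders for files, databases, and web pages.",
    "Retrievers return documents relevant to a query.",
    "Prompt templates format context and questions for the LLM.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for a real retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Combine the retrieved context and the question into a single prompt."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using this context:\n{joined}\n\nQuestion: {query}"

query = "What do document loaders do?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)
```

In a real LangChain app, the document list would come from a document loader, and the ranking would be done by a retriever backed by a vector store; the overall shape of the flow stays the same.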
If you like the video, consider subscribing:
/ peterjausovec
▬▬▬▬▬▬ Resources ▬▬▬▬▬▬
▶ Github repo with code from the video: https://github.com/peterj/langchain-r...
▶ LangChain: https://www.langchain.com/
▶ Episode notes: https://github.com/peterj/aistreams
▶ Watch the previous episodes here: • AI topics
▬▬▬▬▬▬ Connect with me ▬▬▬▬▬▬
▶ Discord (AI chat): / discord
▶ Twitter: / pjausovec
▶ LinkedIn: / pjausovec