ChatGPT: In-Context Retrieval-Augmented Language Models (in-context RALM) | In-Context Learning (ICL) Examples

Published: 03 October 2024
on the channel: Discover AI

From ICL to In-Context Retrieval-Augmented Language Models (in-context RALM). Tune your ChatGPT, let it learn new stuff! I show you how. Even without paying for OpenAI's API.

Fine-tuning too expensive? And by the way, even if you pay for OpenAI's API, you can't currently fine-tune GPT-3.5-Turbo! So what's the alternative? ICL!

Within a free ChatGPT session, I show you how to provide new content to ChatGPT, from one-shot prompting to data extracted in real time from the internet (RALM).
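The RALM idea above can be sketched in a few lines: retrieve the document most relevant to the query and prepend it to the prompt, so the frozen model conditions on fresh information. The corpus, the naive word-overlap retriever, and the helper names below are illustrative assumptions, not code from the video.

```python
def retrieve(query, corpus):
    """Return the document sharing the most words with the query (toy retriever)."""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

def ralm_prompt(query, corpus):
    """Prepend the retrieved document so the frozen LM can condition on it."""
    doc = retrieve(query, corpus)
    return f"Context: {doc}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Python is a programming language created by Guido van Rossum.",
]
print(ralm_prompt("When was the Eiffel Tower completed?", corpus))
```

A real system would swap the word-overlap scorer for a dense or BM25 retriever, but the prompt-assembly step stays the same.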

What is ICL? During in-context learning (ICL), we give the LLM a prompt that consists of a list of input-output pairs demonstrating a task. At the end of the prompt, we append a test input and let the LLM make a prediction just by conditioning on the prompt and predicting the next tokens. This is also called "few-shot learning", or "in-context learning" when we allow as many demonstrations as will fit into the model's context window.
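The prompt format described above, demonstrations followed by a test input, can be sketched like this. The demonstration pairs and the `build_icl_prompt` helper are illustrative, not from the video.

```python
def build_icl_prompt(demonstrations, test_input):
    """Format input-output pairs as demonstrations, then append the test input."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in demonstrations]
    # The model completes the final "Output:" by plain next-token prediction.
    blocks.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(blocks)

demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
]
prompt = build_icl_prompt(demos, "A delightful surprise.")
print(prompt)
```

Paste a prompt like this into a free ChatGPT session and the model will continue after the last `Output:`, with no parameter updates involved.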

In-context learning (ICL) allows users to quickly build models for a new use case without worrying about fine-tuning and storing new parameters for each task. It typically requires very few training examples to get a prototype working, and the natural language interface is intuitive even for non-experts.
Nice: https://ai.stanford.edu/blog/understa...

FINE-TUNING large language models is becoming ever more impractical due to their rapidly-growing scale. This motivates instead the use of
1. parameter-efficient adaptation methods such as prompt tuning (PT), which adds a small number of tunable embeddings to an otherwise frozen model, and
2. in-context learning (ICL), in which demonstrations of the task are provided to the model in natural language (e.g., English) without any additional training of the system parameters.
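Point 1 above, prompt tuning, can be pictured as prepending a small matrix of trainable "soft prompt" embeddings to the frozen token embeddings; only that small matrix would receive gradient updates. The dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16   # embedding dimension of the (frozen) model
n_soft = 4     # number of tunable soft-prompt embeddings
seq_len = 10   # tokens in the actual input

soft_prompt = rng.normal(size=(n_soft, d_model))    # trainable parameters
token_embeds = rng.normal(size=(seq_len, d_model))  # output of the frozen embedding lookup

# The model consumes the concatenation; only soft_prompt is ever updated.
model_input = np.concatenate([soft_prompt, token_embeds], axis=0)
print(model_input.shape)  # (14, 16)
```

With only `n_soft * d_model` trainable values per task, storage cost is tiny compared to fine-tuning the whole model.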

Shout out to @OpenAI for providing the free ChatGPT access.

Literature:
In-Context Retrieval-Augmented Language Models
https://arxiv.org/pdf/2302.00083.pdf

How Does In-Context Learning Help Prompt Tuning? (Feb 2023)
https://arxiv.org/abs/2302.11521

#chatgpt
#chatgpttutorial
#chatgptprompts
#chatgptcoding
#chatgptexplained