From this blog post onwards, we will talk about different fine-tuning approaches for LLMs. As discussed in the last post, in-context learning helps in the following two situations:
1. We don’t have access to the full model, only to its API.
2. We don’t have much data to train a model of our own.
Below, using an OpenAI API key, I show how we can do in-context learning.
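Here is a minimal sketch of few-shot in-context learning with the OpenAI Python SDK (openai >= 1.0). The sentiment-classification task, the few-shot examples, and the model name are illustrative assumptions, not part of the original post; swap in your own task and any chat model you have access to.

```python
# A minimal sketch of few-shot in-context learning with the OpenAI Python SDK (openai>=1.0).
# The sentiment task, the example reviews, and the model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Few-shot examples are packed into the prompt itself -- no model weights are updated.
few_shot_prompt = """Classify the sentiment of the review as Positive or Negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: Positive

Review: It stopped working after a week and support never replied.
Sentiment: Negative

Review: Setup was quick and the sound quality blew me away.
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; use any chat model available to you
    messages=[{"role": "user", "content": few_shot_prompt}],
    max_tokens=5,
    temperature=0,
)

print(response.choices[0].message.content.strip())  # expected output: "Positive"
```

Notice that the "learning" happens entirely inside the prompt: every new task example is appended as text, which is exactly what drives the context-length limitation discussed next.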
One of the limitations of in-context learning is that the context length of the prompt grows with every example we add, which is not an efficient approach. If we have a lot of data, a better option is instruction fine-tuning, as described in the OpenAI documentation; a rough sketch of that workflow is shown below.
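The following is a rough sketch, assuming the OpenAI fine-tuning API (openai >= 1.0). The file name "train.jsonl" and the base model are assumptions; refer to the official OpenAI fine-tuning documentation for the exact data format and the list of supported models.

```python
# A rough sketch of instruction fine-tuning through the OpenAI fine-tuning API (openai>=1.0).
# "train.jsonl" and the base model are assumptions; see the OpenAI fine-tuning docs
# for the exact JSONL format and currently supported models.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file where each line holds one {"messages": [...]} training conversation.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on the uploaded data.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # assumed base model
)

print(job.id, job.status)
```

Once the job finishes, the resulting fine-tuned model can be called exactly like any other chat model, without stuffing examples into every prompt.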
Do like, share and comment if you have any questions.