In the last post we talked about how to do in-context finetuning using few-shot techniques. In-context finetuning works when we don’t have much data, or when we don’t have access to the full model. This technique has certain limitations: the more examples you add to the prompt, the more the context length grows, and…
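The few-shot setup described above can be sketched as follows. This is a minimal illustration, not code from the post; the helper name and the sentiment-classification framing are assumptions for the example.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a prompt from (text, label) example pairs plus a new query.

    Each added example is concatenated into the prompt, so the context
    length grows linearly with the number of shots -- the limitation
    mentioned above.
    """
    parts = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

examples = [
    ("The movie was fantastic!", "positive"),
    ("I fell asleep halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A solid, enjoyable film.")
```

Every extra example lengthens the prompt and eats into the model's fixed context window, which is why few-shot in-context learning stops scaling once the examples no longer fit.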
Tag: Finetuning
Generative AI: LLMs: In Context Learning 1.2
From this blog post onwards, we will talk about different fine-tuning approaches for LLMs. As discussed in the last post, in-context learning helps in the following two situations: 1. We don’t have access to the full model, only to the model’s API. 2. When we don’t have much data…
Generative AI: LLMs: Finetuning Approaches 1.1
In the last post in this Generative AI with LLMs series we talked about different types of LLM models and how they are generally pre-trained. These deep-learning language models with large numbers of parameters are generally trained on open-source data like Common Crawl, The Pile, MassiveText, blogs, Wikipedia, GitHub, etc. These datasets are generally…