Generative AI: LLMs: Semantic Search and Conversation Retrieval QA using Vector Store and LangChain 1.7

In the last few blog posts, we have gone through the basics of LLMs, different fine-tuning approaches, and the basics of LangChain. In this post we will mainly work with embeddings from an LLM: how we can store these embeddings in a vector store, and how we can use this persistent vector database to do semantic search.
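
To make the idea concrete up front, here is a minimal sketch of that workflow with LangChain. The specific choices of Chroma as the vector store and OpenAI embeddings are assumptions for illustration, not necessarily the components used later in this post.

```python
# Minimal sketch (illustrative, assumed components): embed a few texts,
# persist them in a Chroma vector store, and run a semantic similarity search.
from langchain.embeddings import OpenAIEmbeddings   # any embedding model could be swapped in
from langchain.vectorstores import Chroma

texts = [
    "LangChain provides wrappers around several vector stores.",
    "Fine-tuning adapts a pretrained LLM to a downstream task.",
    "Semantic search retrieves documents by embedding similarity.",
]

embeddings = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set in the environment
db = Chroma.from_texts(texts, embeddings, persist_directory="chroma_db")
db.persist()  # keep the index on disk so it can be reused as a persistent vector db

# Query the store by meaning rather than by keyword overlap.
results = db.similarity_search("How do I search documents by meaning?", k=2)
for doc in results:
    print(doc.page_content)
```

The same persisted store can later back a conversational retrieval QA chain, since the retriever only needs the vector database and an embedding function.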