Implementing Anthropic’s Contextual Retrieval for Powerful RAG Performance | by Eivind Kjosbakken | Oct, 2024


This article will show you how to implement the contextual retrieval concept proposed by Anthropic.

Retrieval augmented generation (RAG) is a powerful technique that uses large language models (LLMs) and vector databases to create more accurate responses to user queries. RAG allows LLMs to draw on large knowledge bases when responding to user queries, improving the quality of the responses. However, RAG also has some downsides. One downside is that RAG relies on vector similarity when retrieving context to respond to a user query. Vector similarity is not always consistent and can, for example, struggle with unique user keywords. Additionally, RAG struggles because the text is divided into smaller chunks, which prevents the LLM from utilizing the full context of documents when responding to queries. Anthropic’s article on contextual retrieval attempts to solve both problems by using BM25 indexing and adding context to chunks.
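Before diving into the implementation, a minimal sketch may help make the idea concrete. The snippet below is not the article's full code; it assumes the `rank_bm25` and `sentence-transformers` packages and a hypothetical `situate_chunk` helper standing in for the LLM call that writes a short situating context for each chunk. It illustrates the two ideas from Anthropic's article: prepending a document-level context to every chunk before indexing, and combining BM25 with embedding similarity at retrieval time (a simple score average stands in here for the rank-fusion step described in their article).

```python
# Minimal, illustrative sketch of contextual retrieval (not the article's full code).
# Assumes: pip install rank-bm25 sentence-transformers
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util


def situate_chunk(document: str, chunk: str) -> str:
    """Hypothetical helper: in the real approach an LLM writes a short context
    that situates the chunk within the whole document. A static placeholder is
    used here so the sketch stays runnable without an API key."""
    return f"This chunk is from a document that begins: {document[:100]}..."


document = "full document text goes here"
chunks = ["chunk one text", "chunk two text"]  # chunks produced by your text splitter

# 1) Contextualize each chunk before indexing
contextual_chunks = [f"{situate_chunk(document, c)}\n{c}" for c in chunks]

# 2) Build both a BM25 index and an embedding index over the contextualized chunks
bm25 = BM25Okapi([c.lower().split() for c in contextual_chunks])
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_embeddings = embedder.encode(contextual_chunks, convert_to_tensor=True)


def retrieve(query: str, top_k: int = 3):
    # Lexical scores from BM25 (helps with unique keywords)
    bm25_scores = list(bm25.get_scores(query.lower().split()))
    # Semantic scores from embeddings
    query_emb = embedder.encode(query, convert_to_tensor=True)
    sim_scores = util.cos_sim(query_emb, chunk_embeddings)[0].tolist()

    def normalize(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo + 1e-9) for x in xs]

    # Simple fusion of the two signals (Anthropic's article uses rank fusion)
    combined = [0.5 * b + 0.5 * s for b, s in zip(normalize(bm25_scores), normalize(sim_scores))]
    ranked = sorted(range(len(contextual_chunks)), key=lambda i: combined[i], reverse=True)
    return [contextual_chunks[i] for i in ranked[:top_k]]
```

In the full approach, the per-chunk context is generated by an LLM, and Anthropic's article uses prompt caching so the whole document only needs to be processed once while contextualizing many chunks.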

Learn how to implement Anthropic’s contextual retrieval RAG in this article. Image by ChatGPT.

My motivation for this article is twofold. First, I want to try out the newest models and techniques within machine learning. Keeping up to date with the latest developments in machine learning is essential for any ML engineer and data scientist to most…
