With textual materials comprising a large portion of its content, the web is a continuously growing repository of real-world knowledge. When information changes, new documents are added or older ones are revised, so multiple versions of the same information accumulate across different periods. Ensuring that people can always retrieve the most current and relevant information is a longstanding challenge in information retrieval.

With the advent of ChatGPT, question-answering systems powered by large language models (LLMs) have grown in popularity, adding another layer of difficulty to this problem. LLMs can absorb and process massive amounts of textual data, but that data is usually drawn from a static snapshot of web documents captured at a single point in time. Real-world information, by contrast, changes constantly, often daily, hourly, or even in real time.

An increasing number of researchers have begun to look at Retrieval Augmented Language Models (RALMs) as a potential solution to the issues caused by constantly changing information and by the tendency of LLMs to hallucinate. In contrast to traditional LLMs, which depend entirely on parametric memory, RALMs draw their knowledge from an external document corpus. Because this corpus is structured as an index that supports efficient retrieval, it can be updated to reflect the most recent versions of the documents it contains, such as web pages and Wikipedia articles. While RALMs excel at answering factual questions, they usually rely on a document index that holds only one version of each document. In many practical settings, however, fresh data is continually added without erasing or altering older records, producing numerous versions of the same document.
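To make the retrieve-then-generate pattern concrete, here is a minimal, self-contained sketch of a RALM-style pipeline. Everything in it is illustrative: the trigram-hash `embed` function stands in for a learned dense encoder, and the assembled prompt stands in for the input to a generator LM; none of this is Atlas's actual code.

```python
import numpy as np

# Toy stand-in for a learned dense encoder (illustrative only).
def embed(text: str) -> np.ndarray:
    """Hash character trigrams into a fixed-size vector."""
    vec = np.zeros(256)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class DocumentIndex:
    """An updatable index: new document versions can be appended at any time."""
    def __init__(self):
        self.docs, self.vectors = [], []

    def add(self, doc: str):
        self.docs.append(doc)
        self.vectors.append(embed(doc))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        scores = np.array(self.vectors) @ embed(query)  # dot-product similarity
        top = np.argsort(scores)[::-1][:k]
        return [self.docs[i] for i in top]

index = DocumentIndex()
index.add("Wimbledon 2022 men's champion: Novak Djokovic.")
index.add("Wimbledon 2023 men's champion: Carlos Alcaraz.")

# Retrieve-then-generate: the LM conditions on the retrieved passages,
# so updating the index updates the knowledge without retraining the LM.
passages = index.retrieve("Who won Wimbledon?")
prompt = "Context:\n" + "\n".join(passages) + "\nQuestion: Who won Wimbledon?"
print(prompt)  # in a real system, this prompt is fed to the generator LM
```

The key design property is that knowledge lives in the index rather than in the model weights, which is exactly why stale or duplicated document versions become the failure mode discussed next.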

Studies have demonstrated that RALMs struggle with temporal reasoning even in simpler, more structured settings. For example, researchers show that Atlas, a representative state-of-the-art RALM with few-shot learning extensions, typically fails to deliver an answer that is meaningful relative to the time of the question when the underlying information changes frequently, such as the name of the most recent Wimbledon tennis champion.

A new study from San Jose State University presents a simple, interpretable, and highly effective way to retrieve documents that are temporally appropriate for a given query, and uses it to enhance Atlas. In their model, TempRALM, the researchers extend the RALM retriever's document retrieval and ranking algorithm to consider both the semantic and the temporal relevance of each document to the query, rather than semantic similarity alone.
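The summary above describes the core idea, combining a semantic score with a temporal one, but not the paper's exact formula. The sketch below is therefore only a hedged illustration: `temporal_score` uses a simple proximity decay as a placeholder for the paper's temporal relevance function, and adding the two scores without learned scaling is an assumption.

```python
import numpy as np

def semantic_score(query_vec: np.ndarray, doc_vec: np.ndarray) -> float:
    """Standard dense-retrieval relevance: query-document dot product."""
    return float(query_vec @ doc_vec)

def temporal_score(query_time: float, doc_time: float, scale: float = 1.0) -> float:
    """Placeholder temporal relevance: highest when the document's timestamp
    is close to the query's timestamp (not the paper's actual function)."""
    return scale / (1.0 + abs(query_time - doc_time))

def temporal_rank(query_vec, query_time, docs, k=3):
    """Rank documents by combined semantic + temporal relevance,
    instead of semantic similarity alone."""
    scored = [(semantic_score(query_vec, d["vec"])
               + temporal_score(query_time, d["time"]), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored[:k]]

# Two versions of the "same" document: equal semantic scores, different dates.
docs = [{"id": "2022", "vec": np.ones(4), "time": 2022.5},
        {"id": "2023", "vec": np.ones(4), "time": 2023.5}]
top = temporal_rank(np.ones(4) / 4, query_time=2023.6, docs=docs, k=1)
print(top[0]["id"])  # "2023": with semantics tied, the closer timestamp wins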

Atlas was the first model to extend the Retrieval Augmented Language Model (RALM) architecture with few-shot learning. However, current RALM methods, including Atlas, cannot take the temporal aspects of a query into account. The researchers address this gap by equipping Atlas with a new temporal retrieval mechanism and evaluating the resulting model's efficacy.

The TempRALM retriever augments the standard Atlas-large configuration with these temporal extensions. Like Atlas, it pairs a dual-encoder retriever based on Contriever with a sequence-to-sequence generator that adapts T5-1.1 through the Fusion-in-Decoder architecture and a language-modeling modification. The researchers used the same pre-training for the generator and retriever as Atlas.
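As a concrete illustration of the dual-encoder retrieval step, the snippet below scores passages against a query with the public facebook/contriever checkpoint and the standard mean-pooling recipe. This is a usage sketch, not the authors' training code, and the Fusion-in-Decoder generation step is omitted.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Contriever-style dual encoder: one encoder embeds both queries and
# documents; relevance is the dot product of mean-pooled token states.
tokenizer = AutoTokenizer.from_pretrained("facebook/contriever")
encoder = AutoModel.from_pretrained("facebook/contriever")

def mean_pool(last_hidden_state, attention_mask):
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(1) / mask.sum(1)

@torch.no_grad()
def encode(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = encoder(**batch)
    return mean_pool(out.last_hidden_state, batch["attention_mask"])

query_vec = encode(["who won wimbledon 2023"])
doc_vecs = encode(["Carlos Alcaraz won Wimbledon 2023.",
                   "Novak Djokovic won Wimbledon 2022."])
scores = query_vec @ doc_vecs.T  # retrieval scores per passage
print(scores)  # top-scoring passages would then go to the T5/FiD generator
```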

Before settling on the final configurations for TempRALM and Atlas-large, the researchers experimented with different hyper-parameter values, including the number of training steps, the retriever and language-model learning rates, the sampling temperatures, and the number of documents retrieved per question. The team demonstrated that their method outperforms the basic Atlas model by as much as 74% while using fewer computational resources: TempRALM requires no pre-training, no recalculation or replacement of the document index, and no other computationally costly components.
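The summary names the hyper-parameters that were swept but not their values, so the grid below is purely hypothetical; it only shows how a sweep over those parameters could be organized, with `train` and `evaluate` as placeholder functions.

```python
from itertools import product

# Hypothetical search space over the hyper-parameters named above;
# the candidate values are illustrative, not the paper's actual grid.
grid = {
    "train_steps":          [500, 1000, 2000],
    "retriever_lr":         [1e-5, 4e-5],
    "lm_lr":                [1e-5, 4e-5],
    "sampling_temperature": [0.5, 1.0],
    "n_retrieved_docs":     [5, 10, 20],
}

configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(f"{len(configs)} candidate configurations")
# for cfg in configs: score = evaluate(train(cfg))  # keep the best on dev data
```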

For future work, the team intends to build on these findings in several ways, such as investigating the interplay between the LLM and the retriever and testing various learning methods for tuning the parameters of the temporal relevance function. They also highlight fact-checking, recommender systems, and retrieval-augmented dialog agents as applications worth exploring with their temporal retrieval method.




Dhanshree Shenwai is a Computer Science Engineer with solid experience at FinTech companies spanning the financial, cards & payments, and banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone's life easier in today's evolving world.





