TopicGPT: A Prompt-based AI Framework that Uses Large Language Models (LLMs) to Uncover Latent Topics in a Text Collection

Topic modeling is a technique for uncovering the underlying thematic structure of large text corpora. Traditional methods, such as Latent Dirichlet Allocation (LDA), often struggle to produce topics that are both specific and interpretable, making it hard to understand what documents are about or to draw meaningful connections between them. These models also offer limited control over the specificity and formatting of topics, which hinders their practical use in content analysis and other fields that require clear thematic categorization. The paper addresses these limitations by proposing a new method, TopicGPT, which leverages large language models (LLMs) to generate and refine topics in a corpus.

Traditional topic modeling methods, such as LDA, SeededLDA, and BERTopic, have been widely used for exploring latent thematic structures in text collections. LDA represents topics as distributions over words, which can result in incoherent and difficult-to-interpret topics. SeededLDA attempts to guide the topic generation process with user-defined seed words, while BERTopic uses contextualized embeddings for topic extraction. Despite their utility, these models often fail to produce high-quality and easily interpretable topics.
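
For contrast, a traditional word-distribution topic model can be fit in a few lines. The sketch below is illustrative only (the toy corpus and topic count are not from the paper); it uses scikit-learn's LatentDirichletAllocation and shows why its output, a ranked word list per topic, still requires manual interpretation.

```python
# Minimal LDA baseline sketch (illustrative toy corpus, not the paper's data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the senate passed a bill on renewable energy funding",
    "the team won the championship after a dramatic final game",
    "new vaccine trials show promising immune response results",
]

# Bag-of-words representation: LDA models each topic as a distribution over these words.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Each "topic" is just a ranked word list -- the user must infer what it means.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top_words)}")
```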

TopicGPT, a novel framework, stands out from traditional methods in several key ways. It leverages large language models (LLMs) for prompt-based topic generation and assignment, aiming to produce topics that are more in line with human categorizations. Unlike traditional methods, TopicGPT provides natural language labels and descriptions for topics, enhancing their interpretability. This framework also allows for the generation of high-quality topics and offers users the ability to refine and customize the topics without the need for model retraining.

TopicGPT operates in two main stages: topic generation and topic assignment. In the topic generation stage, the framework iteratively prompts an LLM to generate topics based on a sample of documents from the input dataset and a list of previously generated topics. This process encourages the creation of distinctive and specific topics. The generated topics are then refined to remove redundant and infrequent topics, ensuring a coherent and comprehensive set. The LLM used for topic generation is GPT-4, while GPT-3.5-turbo is used for the assignment phase.
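
A minimal sketch of this iterative generation loop might look like the following. The prompt wording, the `call_llm` helper, and the parsing logic are hypothetical placeholders, not the paper's actual prompts or code.

```python
# Hypothetical sketch of TopicGPT-style topic generation (prompt and helper are placeholders).
from typing import Callable, List

GENERATION_PROMPT = (
    "You will receive a document and a list of existing topics.\n"
    "Existing topics:\n{topics}\n\n"
    "Document:\n{document}\n\n"
    "If the document fits an existing topic, return that topic unchanged. "
    "Otherwise, propose one new top-level topic as 'Label: short description'."
)

def generate_topics(documents: List[str], call_llm: Callable[[str], str]) -> List[str]:
    """Iteratively prompt an LLM, growing the topic list as new themes appear."""
    topics: List[str] = []
    for doc in documents:
        prompt = GENERATION_PROMPT.format(
            topics="\n".join(topics) or "(none yet)",
            document=doc[:2000],  # truncate long documents to fit the context window
        )
        response = call_llm(prompt).strip()
        if response and response not in topics:
            topics.append(response)
    return topics

# `call_llm` would wrap an API call to GPT-4 (the model used for generation in the paper);
# a refinement pass to merge near-duplicate and drop rare topics would follow.
```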

In the topic assignment stage, the LLM assigns topics to new documents by providing a quotation from the document that supports its assignment, enhancing the verifiability of the topics. This method has been shown to produce higher-quality topics compared to traditional methods, achieving a harmonic mean purity of 0.74 against human-annotated Wikipedia topics, compared to 0.64 for the strongest baseline. TopicGPT’s topics are also more semantically aligned with human-labeled topics, with significantly fewer misaligned topics than LDA.
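
The assignment step could be sketched as follows; again, the prompt text and response parsing are illustrative assumptions rather than the paper's implementation. The final check highlights how the returned quotation makes each assignment verifiable against the source document.

```python
# Hypothetical sketch of TopicGPT-style topic assignment (prompt text is illustrative).
from typing import Callable, List

ASSIGNMENT_PROMPT = (
    "Assign the document to the single most relevant topic from the list below. "
    "Respond as 'Topic: <label>' followed by 'Quote: <verbatim supporting sentence>'.\n\n"
    "Topics:\n{topics}\n\nDocument:\n{document}"
)

def assign_topic(document: str, topics: List[str], call_llm: Callable[[str], str]) -> dict:
    """Ask the LLM (GPT-3.5-turbo in the paper's assignment phase) for a topic label
    plus a supporting quotation from the document."""
    response = call_llm(
        ASSIGNMENT_PROMPT.format(topics="\n".join(topics), document=document[:2000])
    )
    label, quote = "", ""
    for line in response.splitlines():
        if line.startswith("Topic:"):
            label = line.removeprefix("Topic:").strip()
        elif line.startswith("Quote:"):
            quote = line.removeprefix("Quote:").strip()
    # Simple verifiability check: the quotation should appear verbatim in the document.
    return {"topic": label, "quote": quote, "quote_found": quote in document}
```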

The framework’s performance was evaluated on two datasets: Wikipedia articles and Congressional bills. The results demonstrated that TopicGPT’s topics and assignments align more closely with human-annotated ground truth topics than those generated by LDA, SeededLDA, and BERTopic. The researchers measured topical alignment using external clustering metrics such as harmonic mean purity, normalized mutual information, and the adjusted Rand index, finding substantial improvements over baseline methods.
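
These alignment metrics can be computed with standard tools. The sketch below uses scikit-learn for NMI and ARI and derives harmonic mean purity from a contingency table; the toy label arrays are illustrative and do not reproduce the paper's numbers.

```python
# External clustering metrics for topical alignment (toy labels, not the paper's data).
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

def harmonic_purity(true_labels, pred_labels) -> float:
    """Harmonic mean of purity and inverse purity from the contingency table."""
    c = contingency_matrix(true_labels, pred_labels)  # rows: true topics, cols: predicted
    purity = c.max(axis=0).sum() / c.sum()            # best true match per predicted topic
    inverse_purity = c.max(axis=1).sum() / c.sum()    # best predicted match per true topic
    return 2 * purity * inverse_purity / (purity + inverse_purity)

true_labels = ["politics", "sports", "politics", "health"]
pred_labels = ["T1", "T2", "T1", "T3"]

print("Harmonic purity:", round(harmonic_purity(true_labels, pred_labels), 3))
print("NMI            :", round(normalized_mutual_info_score(true_labels, pred_labels), 3))
print("ARI            :", round(adjusted_rand_score(true_labels, pred_labels), 3))
```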

TopicGPT, a groundbreaking advancement in topic modeling, not only overcomes the limitations of traditional methods but also offers practical benefits. By using a prompt-based framework and the combined power of GPT-4 and GPT-3.5-turbo, TopicGPT generates coherent, human-aligned topics that are both interpretable and customizable. This versatility makes it a valuable tool for a wide range of applications in content analysis and beyond, promising to revolutionize the field of topic modeling.



Shreya Maji is a consulting intern at MarktechPost. She pursued her B.Tech at the Indian Institute of Technology (IIT), Bhubaneswar. An AI enthusiast, she enjoys staying updated on the latest advancements. Shreya is particularly interested in the real-life applications of cutting-edge technology, especially in the field of data science.
