CB-LMs: language models for central banking

BIS Working Papers | No 1215 | 1 October 2024

Summary

Focus

Economists are increasingly applying natural language processing (NLP) techniques to analyse monetary policy communications. While these studies offer valuable insights, they often rely on language models trained on collections of general texts. This limitation may hinder the models' ability to fully capture the nuances unique to central banking and monetary economics. Recent literature suggests that retraining language models on domain-specific data can enhance performance in specialised NLP analyses.

Contribution

We introduce CB-LMs (central bank language models) – language models retrained on a large-scale collection of central banking texts. By retraining prominent models such as BERT and RoBERTa on texts tailored to central banking – including speeches, policy notes and research papers – CB-LMs capture domain-specific semantics, terminology and contextual nuances. Our primary goal is to develop and publicly release CB-LMs to advance NLP analysis in monetary economics and central banking. Additionally, our comprehensive assessment of different large language models (LLMs) across various training settings provides insights into model selection tailored to central bankers' specific tasks and technical requirements.
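To make the retraining step concrete, the sketch below shows one common way to continue masked-language-model pretraining of a RoBERTa checkpoint on central banking texts with the Hugging Face libraries. The corpus, hyperparameters and output directory are placeholders, not the configuration used in the paper.

```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder corpus: in practice this would be the large-scale collection of
# central bank speeches, policy notes and research papers described above.
texts = [
    "The Committee decided to raise the target range for the federal funds rate.",
    "Inflation expectations remain well anchored over the medium term.",
]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

dataset = Dataset.from_dict({"text": texts})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard masked-language-model objective: 15% of tokens masked dynamically.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="cb-roberta",          # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```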

Findings

We find that CB-LMs outperform their foundational models in predicting masked words within central bank idioms. Some CB-LMs surpass not only their original models but also state-of-the-art generative LLMs in classifying monetary policy stances from Federal Open Market Committee statements. CB-LMs excel at understanding nuanced expressions of monetary policy, which could make them valuable tools for central banks in real-time analysis and decision-making. Nonetheless, in more challenging scenarios – such as when fine-tuning data are limited or text inputs are long – the largest LLMs, like ChatGPT-4 and Llama-3 70B, may outperform CB-LMs. Deploying these LLMs, however, presents substantial challenges for central banks regarding confidentiality, transparency, replicability and cost-efficiency.
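As an illustration of the masked-word test, a minimal sketch using the Hugging Face fill-mask pipeline is shown below; the checkpoint name and example sentence are illustrative assumptions, not items from the paper's evaluation set.

```python
from transformers import pipeline

# Hypothetical checkpoint: substitute a released CB-LM or its foundational model.
fill_mask = pipeline("fill-mask", model="roberta-base")

# Example central-bank-style sentence with one token masked (RoBERTa uses "<mask>").
sentence = "The Committee judges that further policy <mask> may be appropriate."

for prediction in fill_mask(sentence):
    # Each prediction carries a candidate token and its probability score;
    # a domain-adapted model should rank central banking terms more highly.
    print(f"{prediction['token_str']:>15}  {prediction['score']:.3f}")
```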


Abstract

We introduce central bank language models (CB-LMs) – specialised encoder-only language models retrained on a comprehensive corpus of central bank speeches, policy documents and research papers. We show that CB-LMs outperform their foundational models in predicting masked words in central bank idioms. Some CB-LMs not only outperform their foundational models, but also surpass state-of-the-art generative large language models (LLMs) in classifying monetary policy stance from Federal Open Market Committee (FOMC) statements. In more complex scenarios requiring sentiment classification of extensive news related to US monetary policy, we find that the largest LLMs outperform the domain-adapted encoder-only models. However, deploying such large LLMs presents substantial challenges for central banks in terms of confidentiality, transparency, replicability and cost-efficiency.
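For the stance-classification task, one plausible setup (an assumption for illustration, not the paper's exact configuration) is to fine-tune the encoder with a sequence-classification head. The label scheme, example statements and hyperparameters below are placeholders.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Illustrative three-way stance labels; the actual labelling scheme is assumed here.
labels = ["dovish", "neutral", "hawkish"]

# Placeholder examples standing in for annotated FOMC statement sentences.
train = Dataset.from_dict({
    "text": [
        "The Committee decided to lower the target range for the federal funds rate.",
        "The Committee decided to maintain the target range for the federal funds rate.",
        "The Committee decided to raise the target range for the federal funds rate.",
    ],
    "label": [0, 1, 2],
})

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # or a retrained CB-LM checkpoint
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(labels)
)

tokenized = train.map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    ),
    batched=True,
)

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="cb-stance",       # hypothetical output directory
        num_train_epochs=3,
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized,
).train()
```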

JEL Classification: E58, C55, C63, G17

Keywords: large language models, generative AI, central banks, monetary policy analysis