RAG Foundry

RAG Foundry: A Framework for Enhancing LLMs for Retrieval Augmented Generation

RAG Foundry is a library designed to improve LLMs' ability to use external information by fine-tuning models on specially created RAG-augmented datasets. Given a RAG technique, the library helps create the training data, train models with parameter-efficient fine-tuning (PEFT), and measure the resulting improvement with various RAG-specific metrics. The library is modular, and workflows are customizable through configuration files.

https://github.com/IntelLabs/RAGFoundry
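To give a feel for the parameter-efficient fine-tuning step such a workflow builds on, here is a minimal LoRA setup using the Hugging Face transformers and peft libraries. This is only a sketch, not RAG Foundry's own API; the model name and hyperparameters are placeholders.

```python
# Minimal LoRA (PEFT) setup with Hugging Face peft -- illustrative only,
# not RAG Foundry's API. Model name and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,                                   # low-rank dimension
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()          # only the LoRA adapters are trainable
```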

DeepEval: evaluating the performance of an LLM

In deepeval, a metric serves as a standard of measurement for evaluating the performance of an LLM output based on a specific criterion of interest. Essentially, while the metric acts as the ruler, a test case represents the thing you’re trying to measure (a minimal usage sketch follows the lists below). deepeval offers a range of default metrics for you to quickly get started with, such as:

  • G-Eval
  • Summarization
  • Faithfulness
  • Answer Relevancy
  • Contextual Relevancy
  • Contextual Precision
  • Contextual Recall
  • Ragas
  • Hallucination
  • Toxicity
  • Bias

deepeval also offers conversational metrics, which are metrics used to evaluate conversations instead of individual, granular LLM interactions. These include:

  • Conversation Completeness
  • Conversation Relevancy
  • Knowledge Retention

https://docs.confident-ai.com/docs/metrics-introduction
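For example, a single RAG interaction can be scored against one of the default metrics above. This is a minimal sketch following deepeval's test-case pattern; the strings and threshold are placeholders, and the metric relies on an LLM judge (OpenAI by default), so an API key must be configured.

```python
# Minimal deepeval sketch: score one RAG interaction for answer relevancy.
# The strings and threshold are placeholders; AnswerRelevancyMetric uses an
# LLM judge under the hood (OpenAI by default), so an API key must be set.
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What is the grace period for late premium payments?",
    actual_output="You have a 30-day grace period before the policy lapses.",
    retrieval_context=["Policyholders have 30 days to pay an overdue premium."],
)

metric = AnswerRelevancyMetric(threshold=0.7)
metric.measure(test_case)
print(metric.score, metric.reason)
```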

BANKER++ Embedding for RAG

Fine-tuning an embedding model is a powerful technique for optimizing retrieval augmented generation (RAG) systems in finance. By training a smaller open-source embedding model like BAAI/bge-small-en on a domain-specific dataset, the model learns more meaningful vector representations that capture the nuances and semantics of financial language. This leads to significantly improved retrieval performance compared to using generic pre-trained embeddings.

https://huggingface.co/baconnier/Finance_embedding_large_en-V1.5
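As a rough sketch of what such fine-tuning can look like with the Sentence-Transformers library: contrastive training on in-domain (query, passage) pairs. The pairs below are invented placeholders and the hyperparameters are arbitrary.

```python
# Sketch: contrastive fine-tuning of BAAI/bge-small-en on in-domain
# (query, passage) pairs. The pairs and hyperparameters are placeholders.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("BAAI/bge-small-en")

train_examples = [
    InputExample(texts=["What is the bank's Tier 1 capital ratio?",
                        "Tier 1 capital stood at 14.2% of risk-weighted assets."]),
    InputExample(texts=["How is duration risk hedged?",
                        "Interest-rate swaps are used to hedge duration risk."]),
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives

model.fit(train_objectives=[(train_loader, train_loss)], epochs=1, warmup_steps=100)
model.save("bge-small-en-finance")
```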

Fine-tuned financial embedding models, such as Banker++ RAG, demonstrate superior accuracy on tasks like semantic search, text similarity, and clustering. They enable RAG systems to better understand complex financial jargon and retrieve the most relevant information given a query.

Integrating these specialized embeddings is straightforward using libraries like LlamaIndex or Sentence-Transformers.
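For instance, here is a minimal semantic-search sketch with Sentence-Transformers, assuming the Hugging Face checkpoint above loads as a Sentence-Transformers model; the query and corpus are placeholder texts.

```python
# Sketch: semantic search with a fine-tuned financial embedding model.
# Assumes the checkpoint loads via Sentence-Transformers; texts are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("baconnier/Finance_embedding_large_en-V1.5")

corpus = [
    "The issuer's leverage ratio rose to 4.1x EBITDA after the acquisition.",
    "Net interest margin compressed by 15 basis points quarter over quarter.",
]
query = "How leveraged is the issuer?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]   # cosine similarity to each passage
best = scores.argmax().item()
print(corpus[best], float(scores[best]))
```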

As the financial industry increasingly adopts AI, fine-tuned embedding models will play a crucial role in powering domain-specific NLP applications. From analyzing market sentiment to personalizing investment recommendations, these optimized embeddings unlock new possibilities for harnessing unstructured financial data. By combining the power of open-source models with the domain expertise embedded in financial corpora, fine-tuning paves the way for more intelligent and impactful RAG systems in finance.

BANKER++

Banker++ is trained to act like a Senior Banker.

I’m excited to share a resource that could potentially be a valuable addition to your financial toolkit. It’s a Financial LLM Model tailored for corporate entities and financial institutions.

In today’s complex financial landscape, having access to reliable analytical tools is crucial. The model is available at the following links:

https://huggingface.co/spaces/baconnier/Finance

https://huggingface.co/baconnier/Finance_dolphin-2.9.1-yi-1.5-9b

It offers a framework for assessing various financial scenarios with a level of precision and insight that could benefit your decision-making processes.

However, it’s important to note that this content is strictly for educational purposes and should not be construed as financial advice. Please exercise caution when applying any information provided.

While I’m not suggesting it’s a one-size-fits-all solution or a replacement for professional financial advice, it may offer valuable insights into areas such as risk management, investment strategies, and portfolio optimization.

To illustrate, let’s consider a common question in finance:

Question: What is a CDS, and how does it compare to a swap?

Answer:

  • CDS: Credit Default Swap (CDS) is a financial derivative contract between two parties (buyer and seller) for insurance against default or credit risk associated with a bond or loan. The protection buyer pays a premium to the protection seller in exchange for the right to receive payment if a credit event occurs. Typically, the protection seller is a financial institution, while the protection buyer can be an investor or a bank.
  • Swap: A swap is an agreement between two parties to exchange cash flows, typically involving interest payments or principal at a future date. Common types of swaps include interest rate swaps, currency swaps, and commodity swaps.

CDS differs from swaps as it focuses specifically on credit risk protection, providing insurance against default events. In contrast, swaps involve the exchange of cash flows, often related to interest rates, currencies, or commodities, without directly addressing credit risk.

Feel free to explore the model and see if it aligns with your organization’s needs and objectives. Remember to approach its use with caution and consider consulting with financial experts when making significant decisions.

As we navigate the complexities of the financial world together, let’s remain humble in our pursuit of knowledge and improvement.

GPT in 60 Lines of NumPy

In this post, the author implements a GPT from scratch in just 60 lines of NumPy, then loads the trained GPT-2 model weights released by OpenAI into that implementation and generates some text. (A small attention sketch in the same spirit follows the notes below.)

Note:

  • This post assumes familiarity with Python, NumPy, and some basic experience training neural networks.
  • This implementation is missing tons of features on purpose to keep it as simple as possible while remaining complete. The goal is to provide a simple yet complete technical introduction to the GPT as an educational tool.
  • The GPT architecture is just one small part of what makes LLMs what they are today [1].
  • All the code for this blog post can be found at github.com/jaymody/picoGPT.
  • Hacker news thread
  • Chinese translation
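In that spirit, here is a tiny NumPy sketch of causal self-attention, the core operation a GPT block builds on. It is simplified and is not the post's exact code.

```python
# Tiny NumPy sketch of causal (masked) self-attention -- the core op a
# GPT block builds on. Simplified; not the exact code from the post.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(q, k, v):
    # q, k, v: [seq_len, head_dim]
    n = q.shape[0]
    mask = np.triu(np.full((n, n), -1e10), k=1)        # block attention to future tokens
    scores = q @ k.T / np.sqrt(q.shape[-1]) + mask     # scaled dot-product + causal mask
    return softmax(scores) @ v

q = k = v = np.random.randn(4, 8)
print(causal_self_attention(q, k, v).shape)            # (4, 8)
```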

Text splitting

Large language models (LLMs) can be used for many tasks, but they often have a limited context size that can be smaller than the documents you might want to use. To use longer documents, you often have to split your text into chunks that fit within this context size.

This crate provides methods for splitting longer pieces of text into smaller chunks, aiming to maximize a desired chunk size, but still splitting at semantically sensible boundaries whenever possible.
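As a conceptual illustration of that idea in plain Python (not the crate's actual API): split on the coarsest semantic boundary first, fall back to finer boundaries only when a piece is still too large, then greedily pack pieces up to the target chunk size.

```python
# Conceptual sketch of boundary-aware chunking -- not the text-splitter crate's API.
# Split on the coarsest boundary first (paragraphs), fall back to sentences,
# then greedily pack pieces up to the target chunk size.
import re

def chunk(text: str, max_chars: int = 500) -> list[str]:
    pieces = []
    for para in text.split("\n\n"):                      # coarsest boundary: paragraphs
        if len(para) <= max_chars:
            pieces.append(para)
        else:                                            # fall back to sentence boundaries
            pieces.extend(re.split(r"(?<=[.!?])\s+", para))

    chunks, current = [], ""
    for piece in pieces:                                 # greedily pack up to max_chars
        if current and len(current) + len(piece) + 1 > max_chars:
            chunks.append(current)
            current = piece
        else:
            current = f"{current} {piece}".strip()
    if current:
        chunks.append(current)
    return chunks
```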

Levels Of Text Splitting

Semantic text splitting library

https://github.com/benbrandt/text-splitter

Chunks Visualizer

https://chunkviz.up.railway.app/

DoRA: Weight-Decomposed Low-Rank Adaptation

  • Objective Exploration: Investigates the disparities between full fine-tuning (FT) and LoRA through a novel weight decomposition analysis.
  • Innovative Method: Introduces Weight-Decomposed Low-Rank Adaptation (DoRA), which splits pre-trained weights into magnitude and direction for fine-tuning.
  • Strategic Approach: Employs LoRA for directional updates, significantly reducing the number of trainable parameters.
  • Enhanced Performance: DoRA improves the learning capacity and training stability of LoRA without extra inference cost (a conceptual sketch follows the links below).
  • Proven Superiority: Demonstrates that DoRA outperforms LoRA in fine-tuning LLaMA, LLaVA, and VL-BART on tasks like commonsense reasoning, visual instruction tuning, and image/video-text understanding.
  • https://arxiv.org/abs/2402.09353

https://github.com/catid/dora
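A conceptual PyTorch sketch of the decomposition (not the paper's official implementation): the frozen pre-trained weight is split into a per-row magnitude and a direction, the direction gets a LoRA-style low-rank update, and the magnitude and low-rank factors are the only trainable parameters.

```python
# Conceptual DoRA sketch (not the official implementation): W' = m * V/||V||,
# where V = W0 + B @ A is the LoRA-updated direction and m is a learned magnitude.
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    def __init__(self, weight: torch.Tensor, rank: int = 8):
        super().__init__()
        out_f, in_f = weight.shape
        self.weight = nn.Parameter(weight.clone(), requires_grad=False)  # frozen W0
        self.magnitude = nn.Parameter(weight.norm(p=2, dim=1))           # m, per output row
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)       # low-rank factors
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))             # init so delta = 0

    def forward(self, x):
        v = self.weight + self.lora_B @ self.lora_A                      # updated direction
        v = v / v.norm(p=2, dim=1, keepdim=True)                         # normalise rows
        return x @ (self.magnitude.unsqueeze(1) * v).T                   # apply W' = m * v

layer = DoRALinear(torch.randn(32, 64))
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 32])
```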

Bunkatopics

Bunkatopics is a package designed for data cleaning, topic modeling visualization, and frame analysis. Its primary goal is to assist developers in gaining insights from unstructured data, potentially facilitating data cleaning and optimizing LLMs through fine-tuning. Bunkatopics is built on well-known libraries such as langchain, chroma, and transformers, enabling seamless integration into a variety of environments.

https://github.com/charlesdedampierre/BunkaTopics?tab=readme-ov-file

LiPO: Listwise Preference Optimization through Learning-to-Rank

  • Innovative Framework: LiPO frames language model alignment as a listwise ranking problem.
  • Cutting-Edge Techniques: Utilizes established learning-to-rank (LTR) algorithms for a more refined optimization process.
  • Superior Performance: The LiPO-λ method surpasses pairwise baselines such as DPO in aligning models with human preferences.
  • Enhanced Learning Efficiency: Offers a more effective learning paradigm that learns from ranked lists of responses (a rough illustration of the listwise objective follows the link below).
  • Scalable Solution: Shows promise for scaling up to larger language model policies across various applications.

https://arxiv.org/html/2402.01878v1#S1
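To illustrate the listwise idea, here is a generic ListNet-style objective in PyTorch; it is not the exact LiPO-λ loss from the paper. Human preference scores over a list of candidate responses act as a target distribution, and the policy's scores for the same list are pushed toward it.

```python
# Generic listwise (ListNet-style) objective -- illustrative only, not the
# exact LiPO-lambda loss. Scores and labels are per candidate response.
import torch
import torch.nn.functional as F

def listwise_loss(policy_scores: torch.Tensor, preference_labels: torch.Tensor) -> torch.Tensor:
    # policy_scores: [batch, list_size] model scores (e.g. length-normalised log-probs)
    # preference_labels: [batch, list_size] human preference scores for each response
    target = F.softmax(preference_labels, dim=-1)     # ranked labels -> target distribution
    log_pred = F.log_softmax(policy_scores, dim=-1)   # model's distribution over the list
    return -(target * log_pred).sum(dim=-1).mean()    # cross-entropy between the two

scores = torch.randn(2, 4)          # 2 prompts, 4 candidate responses each
labels = torch.tensor([[3., 2., 1., 0.], [0., 1., 3., 2.]])
print(listwise_loss(scores, labels))
```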

PyOD

PyOD is a versatile Python library for detecting anomalies in multivariate data. Whether you’re tackling a small-scale project or large datasets, it offers a range of algorithms to suit your needs.

https://pyod.readthedocs.io/en/latest/
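A minimal example with one of its detectors (an Isolation Forest here); the data is synthetic and the contamination rate is an arbitrary placeholder.

```python
# Minimal PyOD sketch: fit an Isolation Forest detector on synthetic data
# and score new points. Contamination rate is an arbitrary example value.
import numpy as np
from pyod.models.iforest import IForest

rng = np.random.default_rng(42)
X_train = rng.normal(size=(500, 5))                      # mostly "normal" points
X_test = np.vstack([rng.normal(size=(95, 5)),
                    rng.normal(loc=6.0, size=(5, 5))])   # a few obvious outliers

clf = IForest(contamination=0.05)
clf.fit(X_train)

labels = clf.predict(X_test)            # 0 = inlier, 1 = outlier
scores = clf.decision_function(X_test)  # higher = more anomalous
print(labels.sum(), "points flagged as outliers")
```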