Knowledge Graph RAG Query Engine

Graph RAG is a Knowledge Graph-enabled RAG approach that retrieves information from a Knowledge Graph for a given task. Typically, this means building context from the subgraph of entities related to the task, as sketched below.
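To make that concrete, here is a minimal, framework-agnostic sketch: take the entities extracted from the task, expand their neighborhood in the Knowledge Graph, and serialize the resulting subgraph as prompt context. The triple format and helper names (`subgraph_for_entities`, `build_context`) are illustrative assumptions, not any particular library's API.

```python
from collections import deque

# Toy Knowledge Graph as (subject, predicate, object) triples.
TRIPLES = [
    ("Guardians of the Galaxy 3", "directed_by", "James Gunn"),
    ("James Gunn", "works_for", "DC Studios"),
    ("Guardians of the Galaxy 3", "released_in", "2023"),
]

def subgraph_for_entities(entities, triples, hops=2):
    """Collect all triples within `hops` hops of the seed entities (BFS)."""
    frontier, seen, selected = deque(entities), set(entities), []
    for _ in range(hops):
        next_frontier = deque()
        while frontier:
            node = frontier.popleft()
            for s, p, o in triples:
                if node in (s, o) and (s, p, o) not in selected:
                    selected.append((s, p, o))
                    for neighbor in (s, o):
                        if neighbor not in seen:
                            seen.add(neighbor)
                            next_frontier.append(neighbor)
        frontier = next_frontier
    return selected

def build_context(task, entities, triples):
    """Serialize the task-related subgraph into textual prompt context."""
    facts = "\n".join(f"{s} {p} {o}." for s, p, o in subgraph_for_entities(entities, triples))
    return f"Answer using these facts:\n{facts}\n\nTask: {task}"

print(build_context(
    "Who directed Guardians of the Galaxy 3?",
    ["Guardians of the Galaxy 3"],  # entities extracted from the task
    TRIPLES,
))
```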

GraphStore-backed RAG vs. VectorStore RAG

As we compare how Graph RAG helps in several use cases in this tutorial, it becomes clear that the Knowledge Graph, as a uniquely structured format of information, can mitigate several issues caused by the nature of the "split and embed" RAG approach (sketched below for contrast).
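For contrast, here is a minimal sketch of the "split and embed" pipeline referred to above, with a bag-of-words vector standing in for a learned embedding model (real systems use dense neural embeddings):

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts, a stand-in for a neural encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

document = ("James Gunn directed Guardians of the Galaxy 3. "
            "He now works for DC Studios. The film was released in 2023.")
chunks = document.split(". ")            # 1) split the document into chunks
index = [(c, embed(c)) for c in chunks]  # 2) embed and index each chunk

query = embed("Who directed Guardians of the Galaxy 3?")
best = max(index, key=lambda pair: cosine(query, pair[1]))
print(best[0])                           # 3) retrieve the nearest chunk as context
```

Because each chunk is scored in isolation, facts spread across chunks ("He now works for…") can be missed at retrieval time; the Knowledge Graph's explicit entity links are what mitigates this.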

Graph RAG

Graph-Based Prompting and Reasoning with Language Models

  • Advanced prompting techniques (e.g., chain of thought and tree of thought) improve the problem-solving capabilities of large language models (LLMs).
  • These techniques require LLMs to construct step-by-step responses.
  • They assume linear reasoning, whereas human reasoning often involves multiple chains of thought and the combination of insights.
  • This overview focuses on prompting techniques that use a graph structure to capture such non-linear problem-solving patterns (see the sketch after this list).
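As a minimal sketch of what graph-structured prompting can look like in code: reasoning steps become nodes in a DAG, and an aggregation step combines several parent thoughts, which a strictly linear chain cannot express. The `Thought` shape and the `call_llm` stub are hypothetical stand-ins loosely inspired by graph-of-thoughts style approaches, not any specific paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    text: str
    parents: list["Thought"] = field(default_factory=list)

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model client."""
    return f"<model answer to: {prompt[:60]}...>"

def expand(thought: Thought, instruction: str) -> Thought:
    """Linear step: extend one chain of thought (chain-of-thought style)."""
    reply = call_llm(f"{thought.text}\n{instruction}")
    return Thought(reply, parents=[thought])

def aggregate(thoughts: list[Thought], instruction: str) -> Thought:
    """Non-linear step: merge several partial insights into one node."""
    merged = "\n".join(f"- {t.text}" for t in thoughts)
    reply = call_llm(f"Combine these partial results:\n{merged}\n{instruction}")
    return Thought(reply, parents=list(thoughts))

# Two independent chains explore the problem, then one aggregation node
# combines their insights — a pattern a single linear chain cannot express.
root = Thought("Problem: plan a 3-course menu under $30.")
a = expand(root, "Propose courses optimizing for taste.")
b = expand(root, "Propose courses optimizing for cost.")
answer = aggregate([a, b], "Produce one menu balancing both.")
print(answer.text)
```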

Graph Prompts

The Novice’s LLM Training Guide

https://rentry.org/llm-training

Modern Large Language Models (LLMs) are typically trained with the Transformers library, which leverages the Transformer network architecture. This architecture has revolutionized the field of natural language processing and is widely adopted for training LLMs. Python, a high-level programming language, is commonly used for implementing LLMs, making them more accessible and easier to comprehend than lower-level frameworks such as OpenXLA's IREE or GGML. Python's intuitive nature allows researchers and developers to focus on a model's logic and algorithms without getting caught up in intricate implementation details.
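As a concrete (if minimal) example of that workflow, here is a fine-tuning sketch using the Transformers `Trainer` API. The base model (`gpt2`), dataset (`wikitext-2`), and hyperparameters are placeholder assumptions; adapt them to your setup and check the current library docs.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # small placeholder model; swap in your base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Any text dataset works; wikitext-2 is just a small, public example.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=1, logging_steps=50),
    train_dataset=tokenized,
    # mlm=False => causal language modeling (next-token prediction)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```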

This rentry won't go over pre-training LLMs (training from scratch), but rather covers fine-tuning and low-rank adaptation (LoRA) methods. Pre-training is prohibitively expensive, and if you have the compute for it, you're likely smart enough not to need this rentry at all.
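That said, here is a minimal LoRA sketch using the PEFT library, to show how little code low-rank adaptation adds on top of a regular fine-tuning setup. The `target_modules` value is model-specific; `"c_attn"` assumes a GPT-2-style architecture, so verify the module names for your base model.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layer(s) to adapt (GPT-2-style)
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices train
```

The wrapped model drops into the same `Trainer` workflow as full fine-tuning, while only the small low-rank matrices receive gradients.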