BANKER++

Banker++ is trained to act like a Senior Banker.

I’m excited to share a resource that could be a valuable addition to your financial toolkit: a financial LLM tailored for corporate entities and financial institutions.

In today’s complex financial landscape, having access to reliable analytical tools is crucial. The model is available at the links below:

https://huggingface.co/spaces/baconnier/Finance

https://huggingface.co/baconnier/Finance_dolphin-2.9.1-yi-1.5-9b

It offers a framework for assessing various financial scenarios with a level of precision and insight that could benefit your decision-making processes.

However, it’s important to note that this content is strictly for educational purposes and should not be construed as financial advice. Please exercise caution when applying any information provided.

While I’m not suggesting it’s a one-size-fits-all solution or a replacement for professional financial advice, it may offer valuable insights into areas such as risk management, investment strategies, and portfolio optimization.

To illustrate, let’s consider a common question in finance:

Question: What is a CDS and how does it compare to a swap?

Answer:

  • CDS: A Credit Default Swap is a financial derivative contract between two parties (buyer and seller) that provides insurance against default or credit risk associated with a bond or loan. The protection buyer pays a premium to the protection seller in exchange for the right to receive a payment if a credit event occurs. Typically, the protection seller is a financial institution, while the protection buyer can be an investor or a bank.
  • Swap: A swap is an agreement between two parties to exchange cash flows, typically involving interest payments or principal at a future date. Common types of swaps include interest rate swaps, currency swaps, and commodity swaps.

CDS differs from swaps as it focuses specifically on credit risk protection, providing insurance against default events. In contrast, swaps involve the exchange of cash flows, often related to interest rates, currencies, or commodities, without directly addressing credit risk.
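To make the premium mechanics concrete, here is a toy single-period calculation. It is only a sketch with made-up numbers (not market data), using the standard approximation that the fair CDS spread is roughly the expected loss, i.e., default probability times loss given default:

    # Toy single-period CDS premium sketch (illustrative numbers, not market data).
    # Approximation: fair spread ~= default probability x (1 - recovery rate).
    default_prob = 0.02       # assumed 1-year probability of default
    recovery_rate = 0.40      # assumed recovery on the reference bond if default occurs
    notional = 10_000_000     # protected notional

    fair_spread = default_prob * (1 - recovery_rate)    # 0.012, i.e., 120 bps per year
    annual_premium = fair_spread * notional             # 120,000

    print(f"fair spread ~= {fair_spread * 1e4:.0f} bps per year")
    print(f"annual premium ~= {annual_premium:,.0f} on the notional")

Real CDS pricing discounts both legs over the contract's life and uses market-implied hazard rates, but the intuition is the same: the riskier the reference credit and the lower the recovery, the higher the premium.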

Feel free to explore the model and see if it aligns with your organization’s needs and objectives. Remember to approach its use with caution and consider consulting with financial experts when making significant decisions.

As we navigate the complexities of the financial world together, let’s remain humble in our pursuit of knowledge and improvement.

GPT in 60 Lines of NumPy

In this post, the author implements a GPT from scratch in just 60 lines of NumPy, then loads the trained GPT-2 model weights released by OpenAI into the implementation and generates some text.

Note:

  • This post assumes familiarity with Python, NumPy, and some basic experience training neural networks.
  • This implementation is missing tons of features on purpose to keep it as simple as possible while remaining complete. The goal is to provide a simple yet complete technical introduction to the GPT as an educational tool.
  • The GPT architecture is just one small part of what makes LLMs what they are today.[1]
  • All the code for this blog post can be found at github.com/jaymody/picoGPT.
  • Hacker News thread
  • Chinese translation
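To give a flavor of how compact the implementation is, below is a minimal scaled dot-product attention with a causal mask in NumPy, in the same spirit as the picoGPT code. This is only a sketch of one piece; the post's full implementation adds multi-head attention, layer norm, the feed-forward blocks, and weight loading:

    import numpy as np

    def softmax(x):
        # Numerically stable softmax over the last axis.
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def attention(q, k, v, mask):
        # Scaled dot-product attention: softmax(q k^T / sqrt(d) + mask) @ v
        return softmax(q @ k.T / np.sqrt(q.shape[-1]) + mask) @ v

    # Causal mask: position i may only attend to positions <= i.
    n_seq, d = 4, 8
    mask = (1 - np.tri(n_seq)) * -1e10
    q = k = v = np.random.randn(n_seq, d)
    out = attention(q, k, v, mask)   # shape (4, 8)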

Text splitting

Large language models (LLMs) can be used for many tasks, but they often have a limited context size that can be smaller than the documents you might want to use. To work with longer documents, you often have to split your text into chunks that fit within this context size.

This Rust crate provides methods for splitting longer pieces of text into smaller chunks, aiming to maximize a desired chunk size while still splitting at semantically sensible boundaries whenever possible.

Levels Of Text Splitting

Semantic text splitting library

https://github.com/benbrandt/text-splitter
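For a quick feel of the API, here is a minimal sketch using the crate's Python bindings (the semantic-text-splitter package on PyPI). The exact constructor and method names have shifted between releases, so treat this as an approximation and check the README:

    # pip install semantic-text-splitter  (Python bindings for the Rust crate;
    # API sketch for a recent release, names may differ in older versions)
    from semantic_text_splitter import TextSplitter

    max_characters = 1000              # desired upper bound per chunk
    splitter = TextSplitter(max_characters)

    text = "Your long document text goes here..."
    chunks = splitter.chunks(text)     # splits at semantic boundaries where possible
    for chunk in chunks:
        assert len(chunk) <= max_characters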

Chunks Visualizer

https://chunkviz.up.railway.app/

DoRA: Weight-Decomposed Low-Rank Adaptation

  • Objective Exploration: Investigates the disparities between full fine-tuning (FT) and LoRA through a novel weight decomposition analysis.
  • Innovative Method: Introduces Weight-Decomposed Low-Rank Adaptation (DoRA), which splits pre-trained weights into magnitude and direction components for fine-tuning (see the sketch below).
  • Strategic Approach: Employs LoRA for directional updates, significantly reducing the number of trainable parameters.
  • Enhanced Performance: DoRA improves the learning capacity and training stability of LoRA without extra inference costs.
  • Proven Superiority: Demonstrates that DoRA outperforms LoRA when fine-tuning LLaMA, LLaVA, and VL-BART on tasks like commonsense reasoning, visual instruction tuning, and image/video-text understanding.
  • https://arxiv.org/abs/2402.09353

https://github.com/catid/dora
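A minimal PyTorch sketch of the decomposition idea, assuming a plain linear layer. This follows the paper's formulation in spirit; the official code and catid/dora handle initialization, norm details, and efficiency more carefully:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DoRALinear(nn.Module):
        """Sketch: pretrained W is split into a magnitude vector m and a
        direction V; only m and the low-rank factors A, B are trained."""
        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            W = base.weight.detach()                                  # (out, in)
            self.W = nn.Parameter(W, requires_grad=False)             # frozen direction base
            self.m = nn.Parameter(W.norm(p=2, dim=0, keepdim=True))   # per-column magnitude
            self.A = nn.Parameter(torch.randn(rank, W.shape[1]) * 0.01)
            self.B = nn.Parameter(torch.zeros(W.shape[0], rank))      # B = 0 -> starts at W
            self.bias = base.bias

        def forward(self, x):
            delta = self.B @ self.A                                   # LoRA directional update
            v = self.W + delta
            v = v / v.norm(p=2, dim=0, keepdim=True)                  # renormalize direction
            return F.linear(x, self.m * v, self.bias)

Because B starts at zero, the layer initially reproduces the pretrained weights exactly; training then adjusts magnitude and direction separately.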

Bunkatopics

Bunkatopics is a package designed for data cleaning, topic modeling, visualization, and frame analysis. Its primary goal is to help developers gain insights from unstructured data, potentially facilitating data cleaning and optimizing LLMs through fine-tuning. Bunkatopics is built on well-known libraries like langchain, chroma, and transformers, enabling seamless integration into various environments.

https://github.com/charlesdedampierre/BunkaTopics
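As an illustration of the intended workflow, here is a minimal usage sketch. The method names follow the pattern in the project README at the time of writing; treat them as assumptions and check the repository for the current API:

    # pip install bunkatopics  (API sketch; verify names against the README)
    from bunkatopics import Bunka

    docs = [
        "The central bank raised interest rates again this quarter.",
        "New transformer models keep improving summarization quality.",
        "Retail investors moved into index funds last year.",
    ]

    bunka = Bunka()                          # optionally pass a custom embedding model
    bunka.fit(docs)                          # embeds and indexes the documents
    topics = bunka.get_topics(n_clusters=2)  # cluster the documents into topics
    print(topics)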

LiPO: Listwise Preference Optimization through Learning-to-Rank

  • Innovative Framework: LiPO frames language model alignment as a listwise ranking problem.
  • Cutting-Edge Techniques: Utilizes advanced LTR algorithms for a more refined optimization process.
  • Superior Performance: The LiPO-X method surpasses traditional methods in aligning models with human preferences.
  • Enhanced Learning Efficiency: Offers a more effective learning paradigm from ranked response lists.
  • Scalable Solution: Shows promise for scaling up to larger language model policies across various applications.

https://arxiv.org/html/2402.01878v1#S1
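The paper draws its losses from the learning-to-rank (LTR) literature. As a rough illustration only (not the exact LiPO-X objective), a ListNet-style listwise loss over k candidate responses might look like this sketch, where policy_logps and labels are hypothetical tensors of policy log-probabilities and human preference scores:

    import torch
    import torch.nn.functional as F

    def listnet_style_loss(policy_logps: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        """Illustrative listwise objective (not the exact LiPO-X loss).

        policy_logps: (batch, k) log-probabilities the policy assigns to k responses.
        labels:       (batch, k) human preference scores for the same k responses.
        """
        target = F.softmax(labels, dim=-1)               # ranked labels -> distribution
        log_probs = F.log_softmax(policy_logps, dim=-1)  # policy's distribution over the list
        return -(target * log_probs).sum(dim=-1).mean()  # listwise cross-entropy

    loss = listnet_style_loss(torch.randn(4, 5), torch.randn(4, 5))

Unlike pairwise objectives such as DPO, which compare two responses at a time, a listwise loss uses the whole ranked list in a single update.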

PyOD

PyOD is a versatile Python library for detecting anomalies in multivariate data.

Whether you’re tackling a small-scale project or large datasets, PyOD offers a range of algorithms to suit your needs.

https://pyod.readthedocs.io/en/latest/
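A minimal example with one of the bundled detectors (Isolation Forest here; the same fit/predict pattern applies across PyOD's algorithms):

    import numpy as np
    from pyod.models.iforest import IForest

    # Toy multivariate data: 200 inliers plus 5 obvious outliers.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, size=(200, 3)),
                   rng.normal(8, 1, size=(5, 3))])

    clf = IForest(contamination=0.05)   # expected fraction of outliers
    clf.fit(X)

    print(clf.labels_[-5:])             # 1 = flagged as outlier on the training data
    print(clf.decision_scores_[-5:])    # higher score = more anomalous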

Advanced Prompting Techniques for Enhancing Large Language Model Performance: A Comprehensive Guide

Here’s a list of prompting techniques, including Proactive Chain-of-Thought Prompting (ProCoT) from recent research:

  1. Zero-Shot Prompting:
  • Description: The model is given a task without any prior examples.
  • Goal: To provide a concise explanation or answer based on general knowledge.
  • Example 1: “Describe the process of photosynthesis.”
  • Example 2: “Explain the significance of the Treaty of Versailles.”
  2. Few-Shot Prompting:
  • Description: The model is provided with a few examples to better understand the task.
  • Goal: To generate responses that follow the pattern shown in the examples.
  • Example 1: “Text: Neil Armstrong landed on the moon in 1969. Event: Moon Landing, 1969. Text: The first iPhone was released in 2007. Event:”
  • Example 2: “Problem: If I have 5 apples and give away 2, how many do I have left? Solution: 3 apples. Problem: If a train travels 100 miles in 1 hour, how far will it travel in 3 hours? Solution:”
  3. Chain of Thought (CoT) Prompting:
  • Description: Guides the model to decompose the problem into intermediate steps before providing the final answer.
  • Goal: To facilitate complex problem-solving by breaking down the process.
  • Example 1: “To solve the math problem ‘8 divided by 2(2+2)’, let’s think step by step.”
  • Example 2: “To determine the capital of France, let’s consider the major cities in France and identify the one that is the political and cultural center.”
  4. Proactive Chain-of-Thought Prompting (ProCoT):
  • Description: Involves planning and taking initiative towards a conversational goal, enhancing proactivity in dialogue systems.
  • Goal: To develop a proactive and strategic response to a situation or problem.
  • Example 1: “A customer complains about a late delivery. Let’s plan the steps to address this issue.”
  • Example 2: “To decide on a marketing strategy for a new product, let’s first analyze the target market and then determine the most effective approach.”
  5. Contrastive Chain of Thoughts:
  • Description: Uses contrasting explanations (correct and incorrect) to enhance understanding.
  • Goal: To clarify a concept by differentiating correct information from misconceptions.
  • Example 1: “Correct: Plants release oxygen during photosynthesis. Incorrect: Plants consume oxygen during photosynthesis. Now, explain photosynthesis.”
  • Example 2: “Correct: The Earth orbits the Sun. Incorrect: The Sun orbits the Earth. Describe the solar system’s structure.”
  6. Self-Reflection Prompting:
  • Description: Adds a verification layer to the generated response to detect errors or inconsistencies.
  • Goal: To ensure accuracy and completeness in the response.
  • Example 1: “After summarizing the article, review the summary for accuracy and completeness.”
  • Example 2: “Translate this text and then evaluate the translation for any possible errors or improvements.”
  7. Decomposed Prompting:
  • Description: Breaks down a complex prompt into sub-prompts, each addressing a part of the overall task.
  • Goal: To tackle complex, multi-faceted tasks by addressing each aspect separately.
  • Example 1: “Break down the process of cellular respiration into its major stages and then explain each stage.”
  • Example 2: “Divide the history of the Roman Empire into key periods and summarize the main events of each period.”
  8. Self-Consistency Prompting:
  • Description: Generates multiple responses to the same prompt and selects the most consistent or frequent answer (see the sketch after this list).
  • Goal: To achieve a more reliable and consistent answer by considering multiple possibilities.
  • Example 1: “Generate multiple explanations for why the sky is blue, and then identify the most accurate explanation.”
  • Example 2: “Provide several predictions for the future of renewable energy, and then choose the most likely scenario.”
  9. System 2 Attention Prompting:
  • Description: Focuses on extracting relevant information from a text while ignoring biases and irrelevant details.
  • Goal: To respond to queries based solely on pertinent and factual information.
  • Example 1: “Extract factual information from this political speech, then analyze the policy proposals based on these facts.”
  • Example 2: “Identify key scientific facts in this article about global warming, then summarize the implications of these facts.”
  10. Simulation Theory of Mind Prompting:
  • Description: Involves taking the perspective of a specific persona or knowledge base to answer questions.
  • Goal: To provide insights or answers from a unique or specific perspective.
  • Example 1: “Imagine you’re an AI from the future. What information do you have about advanced technology? Based on this, predict future tech trends.”
  • Example 2: “Assume you are a historian from the 1800s. What is your understanding of industrialization? Explain its impact based on that perspective.”

These methods showcase the versatility of LLMs in handling various tasks and queries, each technique offering unique advantages for specific types of problems.
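As noted for Self-Consistency Prompting above, the technique is straightforward to operationalize: sample several independent completions and keep the most frequent final answer. A model-agnostic sketch, where the sample_answer stub is a placeholder for whatever LLM call you use:

    from collections import Counter

    def sample_answer(prompt: str) -> str:
        """Placeholder: call your LLM with temperature > 0 and return its final answer."""
        raise NotImplementedError

    def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
        # Sample several independent reasoning paths, then majority-vote the answers.
        answers = [sample_answer(prompt) for _ in range(n_samples)]
        return Counter(answers).most_common(1)[0][0]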

Merging all the prompting methods into a single approach is complex but conceivable, especially for multifaceted or highly nuanced tasks. The goal would be to leverage the strengths of each method for different aspects of the task. Here are 10 samples illustrating how multiple methods can be integrated:

  1. Integrated Sample 1:
  • Prompt: “Imagine you’re an AI analyzing climate data. First, identify key patterns in this climate report (System 2 Attention). Then, using a Chain of Thought approach, evaluate how these patterns indicate global warming trends. Finally, compare these findings with historical climate data (Contrastive Chain of Thoughts) and provide a summarized prognosis (Decomposed Prompting).”
  2. Integrated Sample 2:
  • Prompt: “As a historian from the 1800s (Simulation Theory of Mind), review this modern article on the Industrial Revolution. Extract factual data (System 2 Attention), then critically analyze the differences in perspectives (Contrastive Chain of Thoughts). Conclude with a few-shot prompted summary comparing past and present views.”
  3. Integrated Sample 3:
  • Prompt: “To solve this complex math problem, let’s break it down (CoT). Consider alternative methods to solve each step (Self-Consistency). Reflect on each solution’s validity (Self-Reflection). Finally, provide a concise explanation of the solution process (Decomposed Prompting).”
  4. Integrated Sample 4:
  • Prompt: “First, read these contrasting opinions on renewable energy (Contrastive Chain of Thoughts). Analyze their factual accuracy (System 2 Attention) and reasoning (CoT). Then, synthesize a proactive plan to increase renewable energy adoption, considering economic and environmental factors (ProCoT).”
  5. Integrated Sample 5:
  • Prompt: “Imagine you’re a 22nd-century environmentalist (Simulation Theory of Mind). Review these historical documents on deforestation (System 2 Attention), identify key changes over time (Contrastive Chain of Thoughts), and predict future trends (Decomposed Prompting). Summarize your findings with potential solutions (Few-Shot Prompting).”
  6. Integrated Sample 6:
  • Prompt: “As a medical AI, analyze these patient reports (System 2 Attention), identify symptoms (CoT), and diagnose (Decomposed Prompting). Compare your diagnosis with similar historical cases (Contrastive Chain of Thoughts) and suggest a treatment plan (ProCoT).”
  7. Integrated Sample 7:
  • Prompt: “Read this debate on AI ethics (System 2 Attention). Identify the main arguments (CoT), compare them with established ethical standards (Contrastive Chain of Thoughts), and propose a balanced ethical guideline for AI development (ProCoT).”
  8. Integrated Sample 8:
  • Prompt: “Review this economic report as a 20th-century economist (Simulation Theory of Mind), extract key economic indicators (System 2 Attention), compare with current economic data (Contrastive Chain of Thoughts), and predict future economic trends (Decomposed Prompting).”
  9. Integrated Sample 9:
  • Prompt: “Analyze this new technology from a future perspective (Simulation Theory of Mind). Break down its potential impacts (CoT), compare with past technological advancements (Contrastive Chain of Thoughts), and propose future applications (ProCoT).”
  10. Integrated Sample 10:
  • Prompt: “Examine this legal case file (System 2 Attention). Identify key legal precedents (CoT), compare with similar cases (Contrastive Chain of Thoughts), and predict the outcome (Self-Consistency). Finally, draft a judgment summary (Decomposed Prompting).”

In each of these samples, multiple prompting methods are combined to tackle different parts of the task, creating a comprehensive approach that leverages the strengths of each technique. This integrated approach can be particularly useful for complex tasks requiring nuanced analysis, synthesis of different viewpoints, and strategic planning.
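In practice, such combinations are usually run as a chain of separate LLM calls, each stage consuming the previous stage's output. A model-agnostic sketch of Integrated Sample 1's structure, where the llm stub is a placeholder for your completion call:

    def llm(prompt: str) -> str:
        """Placeholder: send the prompt to your model and return the completion."""
        raise NotImplementedError

    def analyze_climate_report(report: str) -> str:
        # Stage 1 (System 2 Attention): keep only the relevant, factual content.
        patterns = llm(f"Identify the key patterns in this climate report:\n{report}")
        # Stage 2 (Chain of Thought): reason step by step over the extracted patterns.
        trends = llm(f"Step by step, evaluate how these patterns indicate warming trends:\n{patterns}")
        # Stage 3 (Contrastive comparison + decomposed summary).
        return llm(f"Compare these findings with historical climate data and summarize a prognosis:\n{trends}")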

NoGAN: Ultrafast Data Synthesizer

  • Introduces NoGAN, a faster alternative to traditional tabular data synthesis.
  • Runs 1000x quicker than GAN, delivering superior results with a new, sophisticated evaluation metric.
  • A significant cost reducer, minimizing cloud/GPU time and training time.
  • Replaces manual fine-tuning parameters with auto-tuning.
  • Now available as open-source software.
  • Real-life case studies: synthesis in <5 seconds (compared to 10 minutes with GAN).
  • Produces higher quality results, confirmed via cross-validation.
  • Fast implementation enables automatic, efficient hyperparameter fine-tuning.
  • Future improvements discussed: speed enhancement, data faithfulness, auto-tuning, Gaussian NoGAN, and broader applications.

https://docs.google.com/presentation/d/1kDlAhS8yh_-Yu19ICxFk0Hxfq3ZXc4iy/mobilepresent?slide=id.p1

Time Series Forecasting with TimeGPT

Nixtla’s TimeGPT is a generative pre-trained forecasting model for time series data. TimeGPT can produce accurate forecasts for new time series without training, using only historical values as inputs. TimeGPT can be used across a plethora of tasks including demand forecasting, anomaly detection, financial forecasting, and more.

The TimeGPT model “reads” time series data much like the way humans read a sentence – from left to right. It looks at windows of past data, which we can think of as “tokens”, and predicts what comes next. This prediction is based on patterns the model identifies in past data and extrapolates into the future.

The API provides an interface to TimeGPT, allowing users to leverage its forecasting capabilities to predict future events. TimeGPT can also be used for other time series-related tasks, such as what-if scenarios, anomaly detection, and more.

https://nixtla.github.io/nixtla/docs/getting-started/getting_started_short.html
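A minimal sketch of a zero-shot forecast via the Python client, based on Nixtla's documented pattern. The client class and argument names have evolved across releases, so verify against the current docs; the API key below is a placeholder:

    import pandas as pd
    from nixtla import NixtlaClient  # client name in recent releases; older ones differed

    client = NixtlaClient(api_key="YOUR_API_KEY")  # placeholder key

    # Long-format input: one timestamp column ("ds") and one target column ("y").
    df = pd.DataFrame({
        "ds": pd.date_range("2023-01-01", periods=36, freq="MS"),
        "y": [float(i) for i in range(36)],
    })

    # Zero-shot forecast: no training step, only historical values as input.
    forecast = client.forecast(df=df, h=12, time_col="ds", target_col="y")
    print(forecast.head())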