AI Coding Assistant Rules for Windsurf and Cursor

These optimized rules will transform how Windsurf and Cursor AI work with your Python backend and Next.js frontend projects. By adding these configurations, you’ll get more accurate, consistent code suggestions that follow best practices and avoid common AI-generated code issues.

How to Implement in Windsurf

  1. Option 1 – File Method:
  • Create a file named .windsurfrules in your project’s root directory
  • Copy and paste the entire code block below into this file
  • Save the file
  2. Option 2 – Settings Method:
  • Open Windsurf AI
  • Navigate to Settings > Set Workspace AI Rules > Edit Rules
  • Paste the entire code block below
  • Save your settings

How to Implement in Cursor

  1. Option 1 – File Method:
  • Create a file named .cursorrules in your project’s root directory
  • Copy and paste the same code block below (it works for both platforms)
  • Save the file
  2. Option 2 – Settings Method:
  • Open Cursor AI
  • Click on your profile picture in the bottom left
  • Select "Settings"
  • Navigate to the "AI" section
  • Find "Custom Instructions" and click "Edit"
  • Paste the entire code block below
  • Click "Save"

After Implementation

  • Restart your AI coding assistant or reload your workspace
  • The AI will now follow these comprehensive rules in all your coding sessions
  • You should immediately notice more relevant, project-specific code suggestions

These rules will significantly improve how your AI coding assistant understands your project requirements, coding standards, and technical preferences. You’ll get more relevant suggestions, fewer hallucinated functions, and code that better integrates with your existing codebase.

# .windsurfrules
# Copy and paste this entire file into your project root or
# Navigate to Settings > Set Workspace AI Rules > Edit Rules

# Core Configuration
CONFIGURATION = {
    "version": "v5",
    "project_type": "web_application",
    "primary_language": "python",
    "frontend_framework": "next.js",
    "code_style": "clean_and_maintainable"
}

# Python Backend Best Practices
BACKEND_RULES = {
    "frameworks": ["FastAPI", "Flask", "Django"],
    "follow_pep8": True,
    "use_type_hints": True,
    "implement_error_handling": True,
    "prefer_async_when_appropriate": True,
    "create_modular_code": True,
    "use_environment_variables": True,
    "write_comprehensive_tests": True,
    "use_uv_for_package_management": True,
    "always_use_latest_library_versions": True,
    "avoid_deprecated_libraries": True
}

# Next.js Frontend Practices
FRONTEND_RULES = {
    "use_typescript": True,
    "implement_react_hooks_properly": True,
    "create_reusable_components": True,
    "optimize_for_performance": True,
    "follow_app_router_conventions": True,
    "implement_responsive_design": True,
    "use_tailwind_for_styling": True
}

# Coding Pattern Preferences
CODING_PATTERNS = {
    "prefer_simple_solutions": True,
    "avoid_code_duplication": True,
    "consider_all_environments": ["dev", "test", "prod"],
    "only_make_requested_changes": True,
    "prioritize_existing_implementation": True,
    "keep_codebase_clean": True,
    "avoid_scripts_in_files": True,
    "refactor_files_over_300_lines": True,
    "mock_data_only_for_tests": True,
    "no_fake_data_in_dev_or_prod": True,
    "never_overwrite_env_without_confirmation": True,
    "max_file_length": 300,
    "database_type": "SQL",
    "vector_search": "Qdrant",
    "separate_databases_per_environment": True
}

# Coding Workflow Preferences
WORKFLOW_PREFERENCES = {
    "focus_on_relevant_code": True,
    "avoid_unrelated_changes": True,
    "write_thorough_tests": True,
    "avoid_unnecessary_architecture_changes": True,
    "consider_code_impact": True,
    "add_explanatory_comments": True,
    "place_tests_in_test_directory": True
}

# Technical Stack
TECH_STACK = {
    "backend": "Python",
    "frontend": "Next.js",
    "database": "SQL",
    "vector_search": "Qdrant",
    "testing_framework": "Python tests",
    "package_manager": "uv",
    "never_use_json_file_storage": True
}

# AI Assistant Behavior
AI_BEHAVIOR = {
    "provide_complete_solutions": True,
    "focus_on_working_code": True,
    "follow_existing_patterns": True,
    "suggest_best_practices": True,
    "be_concise_unless_asked_for_details": True
}

# Things to Avoid
AVOID_THESE = {
    "never_generate_incomplete_code": True,
    "never_invent_nonexistent_functions": True,
    "never_ignore_explicit_requirements": True,
    "never_mix_framework_patterns": True,
    "never_overcomplicate_simple_tasks": True
}

# Apply all rules
WINDSURF_CONFIG = {
    "config": CONFIGURATION,
    "backend": BACKEND_RULES,
    "frontend": FRONTEND_RULES,
    "coding_patterns": CODING_PATTERNS,
    "workflow": WORKFLOW_PREFERENCES,
    "tech_stack": TECH_STACK,
    "ai_behavior": AI_BEHAVIOR,
    "avoid": AVOID_THESE
}

VERIFICATION_RULE: Every time you choose to apply a rule from this file, explicitly state the rule in your output. You can abbreviate the rule description to a single word or phrase.

These rules combine best practices for Python backend and Next.js frontend development with your specific coding patterns, workflow preferences, and technical stack requirements. The configuration instructs Windsurf AI to maintain clean, modular code that follows established patterns while avoiding common pitfalls in AI-assisted development.

Loic Baconnier

Enhancing Document Retrieval with Topic-Based Chunking and RAPTOR

In the evolving landscape of information retrieval, combining topic-based chunking with hierarchical retrieval methods like RAPTOR represents a significant advancement for handling complex, multi-topic documents. This article explores how these techniques work together to create more effective document understanding and retrieval systems.

Topic-Based Chunking: Understanding Document Themes

Topic-based chunking segments text by identifying and grouping content related to specific topics, creating more semantically meaningful chunks than traditional fixed-size approaches. This method is particularly valuable for multi-topic documents where maintaining thematic coherence is essential.

The TopicNodeParser in LlamaIndex provides an implementation of this approach:

  1. It analyzes documents to identify natural topic boundaries
  2. It segments text based on semantic similarity rather than arbitrary token counts
  3. It preserves the contextual relationships between related content

After processing documents with TopicNodeParser, you can extract the main topics from each node using an LLM. This creates a comprehensive topic map of your document collection, which serves as the foundation for more sophisticated retrieval.
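The exact TopicNodeParser API is not reproduced here; as a rough, self-contained illustration of the underlying idea, the sketch below segments sentences into topic chunks by opening a new chunk whenever similarity to the previous sentence drops below a threshold. Jaccard word overlap stands in for the embedding- or LLM-based similarity a real implementation would use, and the threshold value is an arbitrary choice for this toy data:

```python
def jaccard(a: set, b: set) -> float:
    """Word-overlap similarity; a real parser would use embeddings or an LLM."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def topic_chunk(sentences: list, threshold: float = 0.1) -> list:
    """Start a new chunk whenever similarity to the previous sentence drops."""
    chunks = []
    prev_words = set()
    for sent in sentences:
        words = set(sent.lower().split())
        if chunks and jaccard(prev_words, words) >= threshold:
            chunks[-1].append(sent)   # same topic: extend the current chunk
        else:
            chunks.append([sent])     # topic shift: open a new chunk
        prev_words = words
    return chunks

sentences = [
    "Cats are popular pets.",
    "Many cats are playful pets.",
    "Quantum computers use qubits.",
    "Qubits enable quantum speedups.",
]
print(topic_chunk(sentences))  # two chunks, one per topic
```

On this toy input the pet sentences and the quantum sentences land in separate chunks, which is exactly the thematic coherence that fixed-size chunking loses.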

RAPTOR: Hierarchical Retrieval for Complex Documents

RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval) builds on chunked documents by organizing information in a hierarchical tree structure through recursive clustering and summarization. This approach outperforms traditional retrieval methods by preserving document relationships and providing multiple levels of abstraction.
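The recursive build can be sketched as follows. This is a minimal illustration of the idea, not the reference RAPTOR implementation: grouping adjacent nodes stands in for GMM clustering over embeddings, and string concatenation stands in for the LLM summarization step:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    level: int
    children: list = field(default_factory=list)

def summarize(texts: list) -> str:
    """Placeholder: a real RAPTOR build calls an LLM to summarize the cluster."""
    return " / ".join(t[:20] for t in texts)

def build_raptor_tree(chunks: list, fanout: int = 2) -> list:
    """Recursively cluster and summarize until a single root layer remains."""
    layer = [Node(text=c, level=0) for c in chunks]
    level = 1
    while len(layer) > 1:
        next_layer = []
        for i in range(0, len(layer), fanout):
            group = layer[i:i + fanout]  # naive clustering: adjacent groups
            summary = summarize([n.text for n in group])
            next_layer.append(Node(summary, level, group))
        layer = next_layer
        level += 1
    return layer

root = build_raptor_tree(["chunk a", "chunk b", "chunk c", "chunk d"])[0]
print(root.level, len(root.children))
```

Each pass collapses the current layer into summaries of its clusters, so queries can later match either a leaf chunk or a higher-level abstraction of it.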

Choosing the Right RAPTOR Method

RAPTOR offers two primary retrieval methods, each with distinct advantages for different use cases:

Tree Traversal Retrieval navigates the hierarchical structure sequentially, starting from root nodes and moving down through relevant branches. This method is ideal for:

  • Getting comprehensive overviews of multiple documents
  • Understanding the big picture before exploring details
  • Queries requiring progressive exploration from general to specific information
  • Press reviews or reports where logical flow between concepts is important
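Tree traversal can be sketched like this (the word-overlap scorer and dict-based tree are illustrative stand-ins, not RAPTOR's actual API; a real system would score nodes with embedding similarity):

```python
def score(query: str, text: str) -> int:
    """Toy relevance: shared-word count; real systems use embedding similarity."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def tree_traverse(query: str, roots: list, top_b: int = 1) -> list:
    """Walk down from the roots, keeping only the best branches at each level."""
    selected, frontier = [], roots
    while frontier:
        best = sorted(frontier, key=lambda n: score(query, n["text"]),
                      reverse=True)[:top_b]
        selected.extend(n["text"] for n in best)
        frontier = [c for n in best for c in n.get("children", [])]
    return selected

tree = [{"text": "summary of pets and physics", "children": [
    {"text": "cats are friendly pets"},
    {"text": "qubits power quantum machines"},
]}]
print(tree_traverse("quantum qubits", tree))
```

The result moves from the general summary down to the most relevant leaf, which is why this mode suits general-to-specific exploration.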

Collapsed Tree Retrieval flattens the tree structure, evaluating all nodes simultaneously regardless of their position in the hierarchy. This method excels at:

  • Complex multi-topic queries requiring information from various levels
  • Situations needing both summary-level and detailed information simultaneously
  • Multiple-recall scenarios where information is scattered across documents
  • Syndicated press reviews with multiple intersecting topics

Research has shown that the collapsed tree method consistently outperforms traditional top-k retrieval, achieving optimal results when searching for the top 20 nodes containing up to 2,000 tokens. For most multi-document scenarios with diverse topics, the collapsed tree approach is generally superior.
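A minimal sketch of collapsed-tree retrieval under the node and token budgets just mentioned (the scoring function and whitespace token count are crude stand-ins for embedding similarity and a real tokenizer):

```python
def score(query: str, text: str) -> int:
    """Toy relevance: shared-word count; real systems use embedding similarity."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def collapsed_retrieve(query: str, nodes: list,
                       max_nodes: int = 20, max_tokens: int = 2000) -> list:
    """Rank ALL nodes in one flat pool, then fill a token budget greedily."""
    picked, used = [], 0
    ranked = sorted(nodes, key=lambda n: score(query, n), reverse=True)
    for node in ranked[:max_nodes]:
        n_tokens = len(node.split())  # crude stand-in for a tokenizer
        if used + n_tokens > max_tokens:
            break
        picked.append(node)
        used += n_tokens
    return picked

nodes = [
    "high-level summary of the corpus",
    "cats are friendly pets",
    "qubits power quantum machines",
    "quantum summary: qubits and gates",
]
print(collapsed_retrieve("quantum qubits", nodes, max_nodes=2))
```

Because summaries and leaves compete in the same pool, the query can pull back a detailed leaf and a mid-level summary together, which is what gives this mode its edge on multi-topic questions.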

Creating Interactive Topic-Based Summaries

The final piece of an effective document retrieval system is interactive topic-based summarization, which allows users to explore document collections at varying levels of detail.

An interactive topic-based summary:

  • Presents topics hierarchically, showing their development throughout documents
  • Allows users to expand or collapse sections based on interest
  • Provides contextual placement of topics within the overall document structure
  • Uses visual cues like indentation, bullets, or font changes to indicate hierarchy

This approach transforms complex summarization results into comprehensible visual summaries that help users navigate large text collections more effectively.
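One way to sketch such an expandable topic outline in plain text (the data layout and function names here are illustrative, not a particular UI library's API):

```python
def render(topic: dict, depth: int = 0, expanded: set = None) -> list:
    """Render a collapsible topic tree; collapsed nodes hide their children."""
    expanded = expanded or set()
    marker = "-" if topic["name"] in expanded else "+"
    lines = ["  " * depth + f"{marker} {topic['name']}"]
    if topic["name"] in expanded:
        for child in topic.get("children", []):
            lines.extend(render(child, depth + 1, expanded))
    return lines

toc = {"name": "Document", "children": [
    {"name": "Pets", "children": [{"name": "Cats"}]},
    {"name": "Quantum computing"},
]}
print("\n".join(render(toc, expanded={"Document", "Pets"})))
```

Indentation conveys hierarchy and the +/- markers mark collapsed versus expanded sections; a web frontend would map the same tree onto expandable components instead of printed lines.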

Implementing a Complete Pipeline

A comprehensive implementation combines these techniques into a seamless pipeline:

  1. Topic Identification: Use TopicNodeParser to segment documents into coherent topic-based chunks
  2. Topic Extraction: Apply an LLM to identify and name the main topics in each chunk
  3. Hierarchical Organization: Process these topic-based chunks with RAPTOR to create a multi-level representation
  4. Retrieval Optimization: Select the appropriate RAPTOR method based on your specific use case
  5. Interactive Summary: Create an interactive interface that allows users to explore topics at multiple levels of detail

This pipeline ensures that no topics are lost during processing while providing users with both high-level overviews and detailed information when needed.

Conclusion

The combination of topic-based chunking, RAPTOR’s hierarchical retrieval, and interactive summarization represents a powerful approach for handling complex, multi-topic document collections. By preserving the semantic structure of documents while enabling flexible retrieval at multiple levels of abstraction, these techniques significantly enhance our ability to extract meaningful information from large text collections.

As these technologies continue to evolve, we can expect even more sophisticated approaches to document understanding and retrieval that will further transform how we interact with textual information.
