Using AI coding assistants

You can use large language models (LLMs) to accelerate your development with Modular by giving them structured context about Modular Platform’s documentation and code. We provide two mechanisms:

  • llms.txt files for broad documentation access.
  • .cursorrules files for specific coding guidelines.

Supply documentation to LLMs with llms.txt

Modular supports the llms.txt proposed standard, enabling LLMs to access our documentation at inference time. This gives LLMs access to the most up-to-date documentation, resulting in more accurate and context-aware responses.
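For context, a file that follows the llms.txt proposal is plain Markdown: an H1 title, a blockquote summary, and sections of links with short descriptions. The sketch below only illustrates the format; the entries are not the actual contents of Modular's files:

```
# Modular Platform

> Illustrative summary line describing the documentation set.

## Docs

- [Mojo manual](https://docs.modular.com/mojo/manual/): Guide to the Mojo language
- [MAX](https://docs.modular.com/max/): MAX framework documentation
```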

Modular provides several llms.txt files at docs.modular.com, including llms-mojo.txt for Mojo-specific documentation (used in the example below).

Integrate llms.txt with AI-assisted IDEs

You can leverage llms.txt files with IDEs that support tool calling, such as Cursor or Windsurf, to provide context directly within your development environment.

For example, when writing Mojo code, you can reference the llms-mojo.txt file by using @docs.modular.com/llms-mojo.txt in your chat window. Your IDE will then use this documentation to inform its suggestions, completions, and error corrections.
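For instance, in your IDE's chat you might write a prompt like the following (the question and wording are just an example):

```
@docs.modular.com/llms-mojo.txt
How should I structure a Mojo function that runs on the GPU? Follow the
patterns described in the referenced docs.
```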

Enhance LLM guidance with .cursorrules

.cursorrules, also known as project rules, are a powerful way to give LLMs consistent, reusable information. These rules are usually stored in a .cursor/rules directory within your project, so they can be version-controlled and scoped specifically to your codebase.
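As a minimal sketch, assuming Cursor's .mdc project-rule format (a short frontmatter block followed by Markdown guidance), a rule file might look like this; the field values and rule text are illustrative, not Modular's actual rules:

```
---
description: Mojo coding conventions (illustrative example only)
globs: "**/*.mojo"
alwaysApply: false
---

- Prefer `fn` over `def` for performance-critical functions.
- Document public APIs with docstrings.
```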

You can use Modular's .cursorrules to assist with coding tasks when working with Modular-based projects (an example project layout follows the list):

  • general_behavior_rules.mdc: General rules for code creation. Emphasizes simplicity, thorough investigation, using existing solutions, descriptive naming, environment variables for configuration, robust error handling, documentation, assertions, virtual environments, and workspace-relative operations.

  • git.mdc: Outlines best practices for using Git effectively. Includes guidance on code organization, commit strategies, branching models, and collaborative workflows.

  • mojo.mdc: Enforces Mojo coding standards, performance optimizations, and best practices. Aims to ensure efficient and maintainable GPU-accelerated code, with guidance on code organization, memory management, and error handling.
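Placed in a project, these rule files sit under the .cursor/rules directory described above. The layout below is a sketch; the application source path is just an example:

```
my-project/
├── .cursor/
│   └── rules/
│       ├── general_behavior_rules.mdc
│       ├── git.mdc
│       └── mojo.mdc
└── src/
    └── main.mojo
```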