
AI Assistant

The AI Assistant helps you generate, refactor, and explain code, fully offline using local models powered by Ollama.
No cloud calls, no uploads, and your code never leaves your machine.

Use the Ask AI button to open the assistant.

(Screenshot: the AI chat panel)

🚀 Quick Start

  1. Open the AI panel

    • Click the AI Assistant icon.
  2. Pick a model

    • Install models with the ollama or ods command; see Terminal.
    • Use the model picker.
    • File attachment is under development and will be available in a future release.
  3. Ask a question

    • Type your request, or copy a code selection from the editor into the chat, and ask the AI to explain, optimize, refactor, document, or write tests.
  4. Review, apply, and copy

    • Apply suggestions manually or copy blocks into your file. Keep the chat open while you iterate.

Tip

Shift + Enter inserts a newline in the chat (multiline support); Enter sends the message.


🧠 Models (via Ollama)

Own DevStudio talks to the Ollama runtime on your machine. You can run and switch between models such as:

  • Llama 3 (general coding & reasoning)
  • Mistral / Mixtral
  • Phi-3 (small, fast)
  • Qwen / Qwen-Coder
  • Code Llama / other code-tuned variants

Install Ollama and pull models from your OS terminal:

# Manual Install (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Example models
ollama pull llama3:8b
ollama pull gemma3:12b
ollama pull qwen2.5-coder

Then pick the model inside AI Assistant.
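
To confirm the runtime responds and to see what is installed, you can run the following (llama3:8b assumes the pull above):

# List locally installed models
ollama list

# Smoke-test a pulled model with a one-off prompt
ollama run llama3:8b "Say hello in one sentence."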

Remote

A remote (intranet) Ollama URL is also supported, and you can manage multiple Ollama servers; a sketch follows.
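
A minimal sketch of an intranet setup (ollama-server.internal is a placeholder host; 11434 is Ollama's default port):

# On the server: listen on all interfaces instead of localhost only
OLLAMA_HOST=0.0.0.0 ollama serve

# From your workstation: verify the server is reachable and list its models
curl http://ollama-server.internal:11434/api/tags

You can then enter that URL as the Ollama server address in the AI Assistant.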


✨ What the Assistant Can Do

  • Explain code (file, selection, or function)
  • Refactor for readability or performance
  • Generate code (snippets, functions, boilerplate)
  • Document with comments and docstrings
  • Write tests from examples or requirements
  • Summarize large files or diffs
  • Plan changes and produce step-by-step tasks

All prompts run locally against the model you select, so the assistant's capabilities are limited only by the model you choose.


🔒 Privacy & Offline

  • Runs entirely on your machine using Ollama.
  • No telemetry or cloud inference.
  • You choose which files/selection become context.

🧩 Context & Best Practices

  • Write clear, specific prompts.
  • Provide focused context (select only relevant code).
  • Break big tasks into smaller prompts.
  • Prefer model-appropriate sizes (use lighter models for quick edits; larger ones for deep refactors); see the example after this list.
  • If results feel generic, paste examples from your codebase to ground the answer.
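
For example, you might keep one light and one heavier model pulled and switch between them in the model picker (the tags below are common Ollama tags, used here as examples):

# Light model for quick edits and explanations
ollama pull phi3:mini

# Heavier code-tuned model for deep refactors
ollama pull qwen2.5-coder:14b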

🖼️ UI Guide

(Screenshot: the editor with the AI side panel open)

When the panel is collapsed, look for the AI Assistant button in the bottom-right:

(Screenshot: the AI Assistant button)

Click it to reopen the chat instantly.


⌨️ Handy Shortcuts

Action                   Default
Open/close AI panel      No keyboard shortcut; click the button
Send message             Enter
New line                 Shift + Enter
Ask AI about selection   Right-click selection → Ask AI

🛠️ Troubleshooting

  • “Model not found” → Pull it first: ollama pull <model>
  • Slow responses → Try a smaller model (e.g., phi3:mini), or reduce the context size.
  • Ollama not running → Start the service/app, then reopen the AI panel; see the checks below.
  • Context too large → Send only the relevant function or file portion.
  • Known issue → The assistant may occasionally repeat its response; refresh the page to recover.
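
To check whether Ollama is up before reopening the panel (localhost and the ollama systemd unit assume a default Linux install):

# The API replies "Ollama is running" when the service is up
curl http://localhost:11434

# Show models currently loaded in memory
ollama ps

# On Linux, restart the service created by the install script
sudo systemctl restart ollama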

Next Steps

  • Configure models and parameters in Git
  • Configure models and parameters in Terminal