# AI Assistant

The AI Assistant helps you generate, refactor, and explain code, fully offline, using local models powered by Ollama.
No cloud calls, no uploads, and your code never leaves your machine.

Open the panel with the AI Assistant button in the bottom-right corner (see the UI Guide below).
## 🚀 Quick Start

1. **Open the AI panel.** Click the AI Assistant icon.
2. **Pick a model.** Install models with the `ollama` or `ods` command (see the Terminal section), then choose one in the model picker. File attachment is under development and will be available in a future release.
3. **Ask a question.** Type your request, or select code in the editor and copy-paste it into the chat, then ask the AI to explain, optimize, refactor, document, or write tests.
4. **Review, apply & copy.** Apply suggestions manually or copy blocks into your file. Keep the chat open while you iterate.

**Tip:** Shift + Enter inserts a newline in the chat (multiline support); Enter sends the message.
## 🧠 Models (via Ollama)
Own DevStudio talks to the Ollama runtime on your machine. You can run and switch between models such as:
- Llama 3 (general coding & reasoning)
- Mistral / Mixtral
- Phi-3 (small, fast)
- Qwen / Qwen-Coder
- Code Llama / other code-tuned variants
Install Ollama and pull models from your OS terminal:
```bash
# Manual install (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Example models
ollama pull llama3:8b
ollama pull gemma3:12b
ollama pull qwen2.5-coder
```
Then pick the model inside AI Assistant.
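To see what will appear in the picker, you can list the models installed locally with the standard Ollama CLI:

```bash
# Show locally installed models; these are the ones
# the AI Assistant's model picker offers.
ollama list
```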
**Remote:** Intranet Ollama URLs are also supported, and you can manage multiple Ollama servers.
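Before adding a remote server, it helps to confirm it is reachable. A minimal sketch, assuming a hypothetical intranet host `ollama.intranet.local` and Ollama's default port 11434:

```bash
# List the models a remote Ollama server exposes
# (replace the hostname with your own server).
curl http://ollama.intranet.local:11434/api/tags

# On the server itself, Ollama binds to localhost by default;
# to accept intranet connections it must listen on all interfaces:
OLLAMA_HOST=0.0.0.0 ollama serve
```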
## ✨ What the Assistant Can Do
- Explain code (file, selection, or function)
- Refactor for readability or performance
- Generate code (snippets, functions, boilerplate)
- Document with comments and docstrings
- Write tests from examples or requirements
- Summarize large files or diffs
- Plan changes and produce step-by-step tasks
All prompts run locally against the model you select, so you can adapt the assistant to almost any coding workflow.
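Because the assistant uses the same local runtime as the Ollama CLI, you can sanity-check a model's behavior with a one-off request from the terminal (the model name and prompt below are only examples):

```bash
# Ask the model directly, bypassing the IDE, to confirm
# the model itself responds sensibly.
ollama run llama3:8b "Explain what this does: def add(a, b): return a + b"
```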
## 🔒 Privacy & Offline
- Runs entirely on your machine using Ollama.
- No telemetry or cloud inference.
- You choose which files/selection become context.
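If you want to verify the offline claim yourself, one way (assuming a default Ollama setup, which binds to the local loopback interface) is to inspect what the runtime is listening on:

```bash
# Ollama serves on 127.0.0.1:11434 by default, so inference
# traffic never leaves the machine. Inspect the listener:
lsof -iTCP:11434 -sTCP:LISTEN
```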
## 🧩 Context & Best Practices
- Provide the best possible prompt and context.
- Provide focused context (select only relevant code).
- Break big tasks into smaller prompts.
- Prefer model-appropriate sizes (use lighter models for quick edits; larger ones for deep refactors).
- If results feel generic, paste examples from your codebase to ground the answer.
## 🖼️ UI Guide

When the panel is collapsed, look for the AI Assistant button in the bottom-right corner. Click it to reopen the chat instantly.
## ⌨️ Handy Shortcuts
| Action | Default |
|---|---|
| Open/close AI panel | No keyboard shortcut; click the button |
| Send message | Enter |
| New line | Shift + Enter |
| Ask AI about selection | Right-click selection → Ask AI |
## 🛠️ Troubleshooting
- "Model not found" → Pull it first: `ollama pull <model>`
- Slow responses → Try a smaller model (e.g., `phi3:mini`) or reduce the context size.
- Ollama not running → Start the service/app, then reopen the AI panel (see the check below).
- Context too large → Send only the relevant function or file portion.
- Known issue → A response may occasionally repeat itself; refresh the page to recover.
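A quick way to confirm the runtime is up before reopening the panel, using Ollama's standard local endpoint:

```bash
# Prints the Ollama version if the service is running;
# a connection error means it is not.
curl http://localhost:11434/api/version
```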