Llama.LoadModel
Load the model from a file.
| Component | Version | macOS | Windows | Linux | Server | iOS SDK |
|---|---|---|---|---|---|---|
| Llama | 16.1 | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
MBS( "Llama.LoadModel"; LlamaSession; Path { ; Options } )
Parameters
| Parameter | Description | Example | Flags |
|---|---|---|---|
| LlamaSession | The session identifier. | $$session | |
| Path | The native path to the model. | "/Users/cs/gemma-3-1b-it.Q8_0.gguf" | |
| Options | A JSON object with parameters. Optionally you can pass a number to set n_gpu_layers or main_gpu. n_gpu_layers is the number of layers to store in VRAM; a negative value means all layers. | | Optional |
Result
Returns OK or error.
Description
Loads the model from a file, e.g. gemma-3-1b-it.Q8_0.gguf or gpt-oss-20b-Q4_K_M.gguf.
If the model is split into multiple parts, the file names must follow this pattern: <name>-%05d-of-%05d.gguf
Examples
Load a model on macOS:
Set Variable [ $path ; Value: "/Users/cs/gemma-3-1b-it.Q8_0.gguf" ]
Set Variable [ $r ; Value: MBS("Llama.LoadModel"; $$session; $path) ]
If [ MBS("IsError") ]
Show Custom Dialog [ "Failed to load model." ; $r ]
End If
Load a model on Windows:
Set Variable [ $path ; Value: "C:\Users\User\Models\gemma-3-1b-it.Q8_0.gguf" ]
Set Variable [ $r ; Value: MBS("Llama.LoadModel"; $$session; $path) ]
If [ MBS("IsError") ]
Show Custom Dialog [ "Failed to load model." ; $r ]
End If
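Load a model with options (a sketch based on the n_gpu_layers parameter described above; the value -1 offloads all layers to the GPU):

Set Variable [ $path ; Value: "/Users/cs/gemma-3-1b-it.Q8_0.gguf" ]
Set Variable [ $options ; Value: JSONSetElement ( "{}" ; "n_gpu_layers" ; -1 ; JSONNumber ) ]
Set Variable [ $r ; Value: MBS("Llama.LoadModel"; $$session; $path; $options) ]
If [ MBS("IsError") ]
	Show Custom Dialog [ "Failed to load model." ; $r ]
End If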
Created 10th February 2026, last changed 16th February 2026