Llama.Transcript
Queries the transcript.
| Component | Version | macOS | Windows | Linux | Server | iOS SDK |
|---|---|---|---|---|---|---|
| Llama | 16.1 | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
MBS( "Llama.Transcript"; LlamaSession { ; AddAssistant; TemplateName } ) More
Parameters
| Parameter | Description | Example | Flags |
|---|---|---|---|
| LlamaSession | The session identifier. | $$session | |
| AddAssistant | Whether to end the prompt with the token(s) that indicate the start of an assistant message. Pass 1 to enable or 0 to disable. Default is 1. | 1 | Optional |
| TemplateName | The chat template name. Default is "" to pick the default template. | "chatml" | Optional |
Result
Returns text or error.
Description
Queries the transcript. This applies the chat template to the stored messages to produce the transcript as the LLM would read it.
Learn more here:
https://github.com/ggml-org/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template
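To force a specific template instead of the model's default, pass its name as the TemplateName parameter. A sketch, assuming $$session holds a valid session; with a ChatML-style template the turns are typically wrapped in <|im_start|>role … <|im_end|> markers rather than the Gemma-style markers shown in the example result below:

MBS( "Llama.Transcript"; $$session; 1; "chatml" )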
Examples
Query the transcript:
MBS( "Llama.Transcript"; $$session)
Example result:
<start_of_turn>user
Hello<end_of_turn>
<start_of_turn>model
Hello there! How's your day going so far? 😊
Is there anything you'd like to chat about, or anything I can help you with?<end_of_turn>
<start_of_turn>model
Created 10th February 2026, last changed 12th February 2026