Python SDK and CLI for the Promptic platform — tracing, prompt optimization, and experiment management.
```bash
pip install promptic-sdk
```

Install extras to auto-instrument specific providers or agent frameworks:

```bash
# LLM providers
pip install promptic-sdk[openai]      # OpenAI
pip install promptic-sdk[anthropic]   # Anthropic
pip install promptic-sdk[bedrock]     # AWS Bedrock
pip install promptic-sdk[vertexai]    # Google Vertex AI
pip install promptic-sdk[mistralai]   # Mistral

# Agent frameworks
pip install promptic-sdk[langchain]      # LangChain / LangGraph / create_agent / deepagents
pip install promptic-sdk[openai-agents]  # OpenAI Agents SDK
pip install promptic-sdk[claude-agent]   # Claude Agent SDK

pip install promptic-sdk[all]  # Everything above
```

Pydantic AI ships its own OpenTelemetry emitter; enable it with `Agent(..., instrument=True)`. No extras needed.
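For example, a minimal sketch assuming Pydantic AI is installed and `promptic_sdk.init()` has configured the global OpenTelemetry exporter (the model string is illustrative):

```python
import promptic_sdk
from pydantic_ai import Agent

# Configure the OpenTelemetry exporter that receives Pydantic AI's spans
promptic_sdk.init()

# Pydantic AI emits its own OTel spans when instrument=True
agent = Agent("openai:gpt-4o-mini", instrument=True)
result = agent.run_sync("Hello!")
```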
Log in via browser (recommended for local development):
```bash
promptic login
```

This opens your browser for authentication, then auto-selects your workspace. Credentials are saved to `~/.promptic/config.toml`.
For CI/CD or headless environments, use an API key instead:
```bash
promptic configure
# or set the environment variable:
export PROMPTIC_API_KEY="pk_..."
```

Once authenticated, initialize tracing in your application:

```python
import promptic_sdk
from openai import OpenAI

# Initialize tracing (auto-instruments installed LLM libraries)
promptic_sdk.init()

client = OpenAI()

# Tag traces with an AI Component name
with promptic_sdk.ai_component("customer-support-agent"):
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[{"role": "user", "content": "Hello!"}],
    )
```

Use `PrompticClient` to work with the platform API:

```python
from promptic_sdk import PrompticClient
with PrompticClient() as client:
    # List traces
    traces = client.list_traces(limit=10)

    # Get workspace info
    workspace = client.get_workspace()

    # Manage experiments
    experiment = client.create_experiment(
        ai_component_id="comp_...",
        target_model="gpt-4.1-nano",
        task_type="classification",
        initial_prompt="Classify the following text.",
    )

    # Deploy the best prompt
    client.deploy(component_id="comp_...", experiment_id="exp_...")

    # Fetch a deployed prompt at runtime
    prompt = client.get_deployed_prompt("comp_...")
```

`promptic_sdk.init()` sets up OpenTelemetry to export spans to the Promptic platform. It accepts the following parameters:
| Parameter | Description | Default |
|---|---|---|
| `api_key` | Promptic API key (falls back to `PROMPTIC_API_KEY`) | — |
| `endpoint` | Platform URL (falls back to `PROMPTIC_ENDPOINT`) | `https://promptic.eu` |
| `auto_instrument` | Auto-detect and instrument LLM client libraries | `True` |
| `service_name` | OpenTelemetry `service.name` resource attribute | — |
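For example, a minimal sketch passing these parameters explicitly (the keyword-argument form and all values are illustrative):

```python
import promptic_sdk

# Every parameter is optional; unset values fall back to environment variables or defaults
promptic_sdk.init(
    api_key="pk_...",                # or rely on PROMPTIC_API_KEY
    endpoint="https://promptic.eu",  # default platform URL
    auto_instrument=True,            # auto-detect installed LLM libraries
    service_name="support-backend",  # illustrative service.name value
)
```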
Auto-detected instrumentors: OpenAI, Anthropic, Google Generative AI, Vertex AI, Bedrock, Mistral, Cohere, LangChain (with LangGraph / deepagents), OpenAI Agents SDK, Claude Agent SDK. All emit the official OpenTelemetry GenAI semantic conventions (`gen_ai.*`), so traces work uniformly across frameworks.
Since Promptic uses standard OpenTelemetry under the hood, you can add any OTel-compatible instrumentor alongside the auto-detected ones. Just call `promptic_sdk.init()` first, then instrument manually:
```python
import promptic_sdk
from opentelemetry.instrumentation.requests import RequestsInstrumentor
from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor

promptic_sdk.init()

# Add any OpenTelemetry instrumentor — spans will be exported to Promptic
RequestsInstrumentor().instrument()
SQLAlchemyInstrumentor().instrument(engine=engine)
```

This works with any package from the opentelemetry-python-contrib ecosystem (HTTP clients, databases, web frameworks, etc.). All spans are exported to the Promptic platform as long as `init()` has been called.
Use `ai_component()` to tag spans with a component name. The platform links traces to the matching AI Component in your workspace:

```python
with promptic_sdk.ai_component("my-component"):
    # All LLM calls here are tagged
    ...
```

Custom spans are also supported, but most users don't need them. With the right `[extras]` installed, auto-instrumentation already
creates spans for every LLM and tool call. Reach for custom spans only when you have meaningful
non-LLM workflow logic (retrieval, normalization, business rules, control flow) you want
represented in the trace.
When you do need it, wrap your workflow stages in custom OpenTelemetry spans. Auto-instrumented LLM and tool spans automatically nest under whichever custom span is active.
```python
import json

import promptic_sdk
from opentelemetry import trace

promptic_sdk.init()
tracer = trace.get_tracer(__name__)

with promptic_sdk.ai_component("support-agent"):
    with tracer.start_as_current_span("answer_question") as root:
        root.set_attribute("traceloop.span.kind", "workflow")
        root.set_attribute("traceloop.entity.input", json.dumps(user_input))

        with tracer.start_as_current_span("retrieve_context") as span:
            span.set_attribute("traceloop.span.kind", "task")
            span.set_attribute("traceloop.entity.input", json.dumps(query))
            context = retrieve(query)
            span.set_attribute("traceloop.entity.output", json.dumps(context))

        with tracer.start_as_current_span("generate_answer") as span:
            span.set_attribute("traceloop.span.kind", "task")
            # The auto-instrumented LLM call nests under this task span
            answer = llm_call(context)

        root.set_attribute("traceloop.entity.output", json.dumps(answer))
```

Span attribute conventions:

- `traceloop.span.kind="workflow"` — the top-level run
- `traceloop.span.kind="task"` — an internal pipeline stage
- `traceloop.entity.input` / `traceloop.entity.output` — JSON-serialized stage payloads, surfaced in the Promptic UI
For large payloads, log a small preview plus a count rather than the full object — traces are not designed to store data:
```python
span.set_attribute(
    "traceloop.entity.output",
    json.dumps({
        "items": items[:5],
        "item_count": len(items),
        "additional_item_count": max(len(items) - 5, 0),
    }),
)
```

See the Tracing guide for the full pattern.
Both a sync (`PrompticClient`) and async (`AsyncPrompticClient`) client are available. They share the same method signatures and return types.
```python
from promptic_sdk import PrompticClient

with PrompticClient() as client:
    traces = client.list_traces(limit=10)
```

```python
from promptic_sdk import AsyncPrompticClient

async with AsyncPrompticClient() as client:
    traces = await client.list_traces(limit=10)
```

Both clients provide typed methods for the full Promptic REST API:
| Resource | Methods |
|---|---|
| Workspace | `get_workspace` |
| Traces | `list_traces`, `get_trace`, `get_stats` |
| Components | `list_components`, `get_component`, `create_component`, `delete_component` |
| Experiments | `list_experiments`, `get_experiment`, `create_experiment`, `update_experiment`, `delete_experiment`, `start_experiment` |
| Observations | `list_observations`, `create_observations`, `update_observation`, `delete_observation` |
| Evaluators | `list_evaluators`, `create_evaluators`, `update_evaluator`, `delete_evaluator` |
| Iterations | `list_iterations`, `get_iteration`, `get_best_iteration` |
| Deployments | `get_deployment`, `deploy`, `undeploy`, `get_deployed_prompt` |
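A hedged sketch of an end-to-end optimization flow using these methods; the `create_experiment` and `deploy` signatures follow the quickstart above, while the argument passed to `start_experiment` and `get_best_iteration` and the `.id` attribute on returned objects are assumptions:

```python
from promptic_sdk import PrompticClient

with PrompticClient() as client:
    # Create an experiment for an existing AI Component (ids are placeholders)
    experiment = client.create_experiment(
        ai_component_id="comp_...",
        target_model="gpt-4.1-nano",
        task_type="classification",
        initial_prompt="Classify the following text.",
    )

    # Kick off prompt optimization (assumed to take the experiment id)
    client.start_experiment(experiment.id)

    # Once iterations exist, inspect the best one and deploy it
    best = client.get_best_iteration(experiment.id)
    client.deploy(component_id="comp_...", experiment_id=experiment.id)

    # At runtime, fetch whatever prompt is currently deployed for the component
    prompt = client.get_deployed_prompt("comp_...")
```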
The client reads `PROMPTIC_API_KEY` and `PROMPTIC_ENDPOINT` from the environment, or accepts them as constructor arguments.
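For example, a minimal sketch (the key value is a placeholder):

```python
from promptic_sdk import PrompticClient

# Explicit arguments take precedence over environment variables and the config file
client = PrompticClient(api_key="pk_...", endpoint="https://promptic.eu")
```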
The `promptic` CLI mirrors the API client and supports both human-readable tables and `--json` output.

```bash
promptic [command] [subcommand] [options]
```
| Command | Description |
|---|---|
| `promptic login` | Authenticate via browser (device flow) |
| `promptic logout` | Clear saved credentials |
| `promptic configure` | Save API key and endpoint (CI/CD) |
| `promptic workspace list` | List accessible workspaces |
| `promptic workspace select <id>` | Select a workspace |
| `promptic workspace info` | Show workspace info |
| `promptic traces list` | List recent traces |
| `promptic traces get <id>` | Get a trace with spans |
| `promptic traces stats` | Show aggregated tracing stats |
| `promptic components list` | List AI components |
| `promptic components create` | Create a component |
| `promptic components get <id>` | Get component details |
| `promptic components delete <id>` | Delete a component |
| `promptic experiments list` | List experiments |
| `promptic experiments create` | Create an experiment (interactive) |
| `promptic experiments get <id>` | Get experiment details |
| `promptic experiments update <id>` | Update an experiment |
| `promptic experiments delete <id>` | Delete an experiment |
| `promptic experiments start <id>` | Start an experiment |
| `promptic observations list` | List observations for an experiment |
| `promptic observations add` | Add an observation |
| `promptic observations delete <id>` | Delete an observation |
| `promptic evaluators list` | List evaluators for an experiment |
| `promptic evaluators add` | Add an evaluator |
| `promptic evaluators delete <id>` | Delete an evaluator |
| `promptic iterations list` | List iterations for an experiment |
| `promptic iterations get <id>` | Get iteration details |
| `promptic iterations best` | Get the best iteration |
| `promptic deployments status <id>` | Show deployment for a component |
| `promptic deployments deploy` | Deploy an experiment |
| `promptic deployments prompt <id>` | Show the deployed prompt |
| `promptic deployments undeploy <id>` | Remove a deployment |
| `promptic datasets create` | Create a dataset |
| `promptic datasets list` | List datasets |
| `promptic datasets get <id>` | Get dataset details |
| `promptic datasets delete <id>` | Delete a dataset |
| `promptic runs create` | Create a run |
| `promptic runs list` | List runs |
| `promptic runs get <id>` | Get run details |
| `promptic runs delete <id>` | Delete a run |
| `promptic annotations create` | Create an annotation |
| `promptic annotations list` | List annotations |
| `promptic annotations delete <id>` | Delete an annotation |
| `promptic evaluations run` | Run an evaluation |
| `promptic evaluations list` | List evaluations |
| `promptic evaluations get <id>` | Get evaluation details |
All list commands support `--json` for machine-readable output.
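For example, a sketch of scripting against that output; the `jq` filter assumes the JSON is an array of objects with an `id` field, and the trace id is a placeholder:

```bash
# Human-readable table
promptic traces list

# Machine-readable output piped to jq (assumed response shape)
promptic traces list --json | jq '.[].id'

# Inspect a single trace
promptic traces get <trace-id> --json
```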
The SDK and CLI resolve configuration in this order:
1. Explicit arguments (`api_key=`, `endpoint=`)
2. Environment variables (`PROMPTIC_API_KEY`, `PROMPTIC_ENDPOINT`)
3. Config file (`~/.promptic/config.toml`, written by `promptic login` or `promptic configure`)
| Variable | Description | Default |
|---|---|---|
| `PROMPTIC_API_KEY` | API key (for tracing & CI/CD) | — |
| `PROMPTIC_ENDPOINT` | Platform URL | `https://promptic.eu` |
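In a headless or CI environment, for example (the key value is a placeholder):

```bash
export PROMPTIC_API_KEY="pk_..."
export PROMPTIC_ENDPOINT="https://promptic.eu"  # optional; this is the default
promptic traces list --json
```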
Requires Python 3.11+ and `uv`.

```bash
# Install dependencies
uv sync

# Run tests
uv run pytest

# Lint
uv run ruff check .
uv run ruff format .
```

MIT — see LICENSE for details.