Cadenza Python SDK

Async Python client for programmatic access to the Evolutionary Database. Build autonomous research loops, query experiments, and generate LLM-ready context snapshots.

Client Setup

Import and Initialize

Constructor:
from cadenza import CadenzaClient

client = CadenzaClient(
    token="evodb_sk_your_token_here",
    base_url="https://api.myluca.ai",  # optional, this is the default
    user_id="usr_abc123",               # optional
)

Recommended: Context Manager

Use async with to ensure the HTTP client is properly closed.

Context manager pattern:
async with CadenzaClient(token="evodb_sk_...") as client:
    projects = await client.list_projects()
    # client is automatically closed when the block exits

Constructor Parameters

token: str — API token (must start with evodb_sk_)
base_url: str — API server URL (default: https://api.myluca.ai)
user_id: str | None — User ID, sent as the X-User-Id header

Authentication

CadenzaClient.verify_token(token, base_url) — classmethod

Verify a token without creating a client instance. Returns user info and token metadata.

Returns: VerifyTokenResponse
Example:
result = await CadenzaClient.verify_token("evodb_sk_...")
print(result.user_id)      # "usr_abc123"
print(result.token_name)   # "my-token"
print(result.scopes)       # ["read", "write"]

await client.whoami()

Get information about the current token: name, prefix, scopes, and expiration.

Returns: TokenInfo
Example:
info = await client.whoami()
print(info.token_name)    # "my-token"
print(info.expires_at)    # "2026-01-01T00:00:00Z" or None

Projects

await client.list_projects()

List all projects accessible with the current token.

Returns: list[Project]
Example:
projects = await client.list_projects()
for p in projects:
    print(f"{p.name} ({p.slug}) — {p.description}")

Genotype & Islands

await client.get_genotype(project_name)

Get the active genotype (search space definition) including its behavioral dimensions.

project_name: str
Returns: Genotype
Example:
genotype = await client.get_genotype("my-project")
for dim in genotype.behavioral_dimensions:
    print(f"{dim.name}: {dim.description}")

await client.list_islands(project_name)

List all islands (subpopulations) for a project.

project_name: str
Returns: list[IslandSummary]
Example:
islands = await client.list_islands("my-project")
for island in islands:
    print(f"{island.island_name}: {island.phenotype_count} experiments")

await client.list_island_experiments(project_name, island_name, page, page_size)

List experiments within a specific island, with pagination.

project_name: str
island_name: str
page: int = 1
page_size: int = 10
Returns: list[IslandExperiment]
Example:
experiments = await client.list_island_experiments(
    "my-project", "learning-rates", page=1, page_size=5
)
for exp in experiments:
    print(f"{exp.title} — {exp.status}")
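
Because the listing is paginated, a helper can walk every page. This is a sketch that assumes a page shorter than page_size marks the end; the helper name is illustrative, not part of the SDK:

```python
import asyncio

async def iter_island_experiments(client, project_name, island_name, page_size=10):
    """Yield every experiment in an island, fetching one page at a time."""
    page = 1
    while True:
        batch = await client.list_island_experiments(
            project_name, island_name, page=page, page_size=page_size
        )
        for exp in batch:
            yield exp
        if len(batch) < page_size:  # a short (or empty) page is the last one
            return
        page += 1
```

Usage: `async for exp in iter_island_experiments(client, "my-project", "learning-rates"): ...`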

Elites & Sampling

await client.list_elites(project_name)

Get the elite archive — top-performing experiments across all islands.

project_name: str
Returns: list[Elite]
Example:
elites = await client.list_elites("my-project")
for elite in elites:
    print(f"Island: {elite.island_name}, Fitness: {elite.fitness_json}")
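
fitness_json is a free-form dict, so picking a winner means choosing a metric. This sketch assumes a project-specific key such as "accuracy", which is not defined by the SDK:

```python
def best_elite(elites, metric):
    """Return the elite with the highest fitness_json[metric], or None.

    Elites that do not report `metric` are skipped.
    """
    scored = [e for e in elites if metric in e.fitness_json]
    return max(scored, key=lambda e: e.fitness_json[metric], default=None)
```

For example, `best = best_elite(await client.list_elites("my-project"), "accuracy")`.
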

await client.sample(project_name, top_phenotypes) — Key Method

Generate a compact LLM-ready snapshot of the current evolutionary state. This is the primary interface for AI agent consumption.

project_name: str
top_phenotypes: int = 2
Returns: SampleContext
Example:
context = await client.sample("my-project", top_phenotypes=5)

# SampleContext fields:
# context.genotype       — dict: search space + behavioral dimensions
# context.islands        — list[dict]: island summaries
# context.elite_archive  — list[dict]: top experiments with fitness scores

# Serialize for LLM consumption
import json
llm_input = json.dumps(context.model_dump())

Agent tip: Use context.model_dump() to get a JSON-serializable dict. The output is already optimized for token efficiency.
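
One way to turn the snapshot into an LLM call is to render each field as a labeled section. The section headers and function name below are illustrative, not part of the SDK:

```python
import json

def context_to_prompt(ctx: dict, instruction: str) -> str:
    """Format a SampleContext.model_dump() dict as a single prompt string."""
    return "\n\n".join([
        instruction,
        "## Search space\n" + json.dumps(ctx["genotype"], indent=2),
        "## Islands\n" + json.dumps(ctx["islands"], indent=2),
        "## Elite archive\n" + json.dumps(ctx["elite_archive"], indent=2),
    ])
```

Usage: `prompt = context_to_prompt(context.model_dump(), "Propose the next experiment.")`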

Imports

await client.start_wandb_import(entity, api_key, projects, **kwargs)

Start importing W&B projects into the evolutionary database.

entity: str — W&B team or username
api_key: str — W&B API key
projects: list[str] — W&B project names to import
max_behavioral_dimensions: int | None — optional
elite_archive_size: int | None — optional
max_islands: int | None — optional
Returns: ImportListResult
Example:
result = await client.start_wandb_import(
    entity="my-team",
    api_key="wand_abc123...",
    projects=["my-wandb-project"],
    max_behavioral_dimensions=8,
    elite_archive_size=50,
)
for job in result.imports:
    print(f"{job.project}: {job.status}")

await client.list_imports(status_filter)

List import jobs, optionally filtered by status.

status_filter: str | None — "queued" | "running" | "completed" | "failed"
Returns: ImportListResult
Example:
# List all imports
all_imports = await client.list_imports()

# Filter by status
running = await client.list_imports(status_filter="running")
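
Imports run asynchronously, so a caller typically polls until nothing is queued or running. This is a minimal sketch assuming the status values listed above; a production loop should also bound the wait with a timeout:

```python
import asyncio

async def wait_for_imports(client, poll_interval=5.0):
    """Poll until no import job is queued or running, then return the full list."""
    while True:
        running = await client.list_imports(status_filter="running")
        queued = await client.list_imports(status_filter="queued")
        if not running.imports and not queued.imports:
            return await client.list_imports()
        await asyncio.sleep(poll_interval)
```
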

Error Handling

All exceptions inherit from CadenzaError, which has message and status_code attributes.

Exception Hierarchy

CadenzaError — Base exception for all API errors
AuthenticationError — Invalid, expired, or revoked token (401)
NotFoundError — Resource not found (404)
RateLimitError — Rate limit exceeded (429)
Error handling pattern:
from cadenza.sdk.exceptions import (
    AuthenticationError,
    CadenzaError,
    NotFoundError,
    RateLimitError,
)

try:
    elites = await client.list_elites("my-project")
except AuthenticationError:
    # Token expired — re-authenticate
    pass
except NotFoundError:
    # Project doesn't exist
    pass
except RateLimitError:
    # Back off and retry
    pass
except CadenzaError as e:
    # Any other API error
    print(f"Error {e.status_code}: {e.message}")
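
A common response to RateLimitError is exponential backoff with jitter. This generic helper is a sketch: the retry policy, attempt count, and delays are choices, not SDK behavior:

```python
import asyncio
import random

async def with_retries(call, retry_on, max_attempts=5, base_delay=1.0):
    """Await `call()` (a zero-argument coroutine factory), retrying on `retry_on`."""
    for attempt in range(max_attempts):
        try:
            return await call()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # exponential backoff with +/-50% jitter
            await asyncio.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))

# e.g. with the SDK's exception:
# elites = await with_retries(lambda: client.list_elites("my-project"),
#                             retry_on=RateLimitError)
```
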

Models Reference

All models are Pydantic BaseModel classes. Import from cadenza.sdk.models.

Project

project_id: str
name: str
slug: str
description: str | None
created_at: str

Genotype

genotype_id: str
version_no: int
status: str
behavioral_dimensions: list[BehavioralDimension]
created_at: str

BehavioralDimension

name: str
description: str

IslandSummary

island_id: str
island_name: str
island_description: str
phenotype_count: int

IslandExperiment

experiment_id: str
title: str
description: str
status: str
behavior_json: dict | None
provider_source: dict | None

Elite

experiment_id: str
island_name: str | None
behavior_json: dict | None
fitness_json: dict

SampleContext

genotype: dict
islands: list[dict]
elite_archive: list[dict]

TokenInfo

token_name: str
token_prefix: str
scopes: list[str]
created_at: str
expires_at: str | None

ImportJob

import_id: str
project_id: str | None
entity: str
project: str
status: str
created_at: str | None
started_at: str | None
completed_at: str | None
provider: str

ImportListResult

imports: list[ImportJob]
total: int | None

VerifyTokenResponse

user_id: str
token_name: str
token_prefix: str
scopes: list[str]