
Cookbook

Classifies untagged feedback against your portal’s tags and applies the matching tags in batches.

What it does

  1. Fetches your portal’s tags via GET /tags
  2. Asks you to describe what each tag means (1–2 sentences) — this is the classification rubric
  3. Pulls untagged feedback in batches of 100 using cursor pagination (pageSize=100, createdBefore cursor)
  4. Classifies each item against your tag descriptions
  5. Applies tags via POST /feedback/batch-tag, grouped by tag combination

Prerequisites

  • API key exported as APPZI_API_KEY
  • An LLM agent with HTTP access
---
name: appzi-auto-tag
description: Classify untagged Appzi feedback with an LLM and apply tags via the Public API batch endpoint. Uses cursor pagination in batches of 100.
---
# Appzi feedback auto-tagging
Classify untagged feedback in an Appzi portal against the portal's existing tags and apply tags via the Public API batch endpoint.
## Setup
- Base URL: `https://portal.api.appzi.io/api/v1`
- Auth header on every request: `Authorization: Bearer $APPZI_API_KEY`. If the env var is unset, ask once.
- Defaults (the user can override in the prompt): all surveys, all history, up to 10 pages (~1000 items).
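The setup above can be sketched as a small helper. This is a sketch only — `auth_headers` is a hypothetical name, and the base URL and header shape come straight from the bullets above:

```python
import os

BASE_URL = "https://portal.api.appzi.io/api/v1"

def auth_headers() -> dict:
    """Build the Authorization header from the APPZI_API_KEY env var."""
    key = os.environ.get("APPZI_API_KEY")
    if not key:
        # If the env var is unset, the skill asks the user once instead
        raise RuntimeError("APPZI_API_KEY is not set")
    return {"Authorization": f"Bearer {key}"}
```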
## Procedure
### 1. Discover tags and gather descriptions
- `GET /tags` → list the returned tags (name + id) to the user.
- Ask the user to write a **one- or two-sentence description of each tag** — what kind of feedback should receive it. These descriptions are the ONLY classification rubric you have; without them, stop and explain why.
- The user may say "skip" for any tag they do not want applied this run. Record only the in-scope tags.
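A minimal sketch of the discovery step, assuming `GET /tags` returns a JSON list of tag objects with `id` and `name` fields (the exact response shape may differ); `get` is a caller-supplied HTTP helper, hypothetical here:

```python
def list_tags(get):
    """Fetch portal tags and return (id, name) pairs for the user to describe.

    `get` performs GET against the given path and returns parsed JSON.
    Assumes the body is a list of {"id": ..., "name": ...} objects.
    """
    tags = get("/tags")
    return [(t["id"], t["name"]) for t in tags]
```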
### 2. Paginate untagged feedback
- Set `cursor = null` (no `createdBefore` on the first call).
- Loop up to the user's max pages:
  1. `GET /feedback?pageSize=100&untagged=true` plus any optional scope, plus `createdBefore=<cursor>` when `cursor != null`.
  2. If `results` is empty, stop.
  3. Classify each `results[i]` (step 3).
  4. Group by the set of tag ids that apply (step 4).
  5. Send one batch-tag request per group (step 5).
  6. Set `cursor = results[last].createdAt`. If `results.length < 100`, stop.
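The cursor loop above can be sketched as follows; `fetch_page` is a caller-supplied function (hypothetical) that performs the GET with the given query params and returns the parsed JSON body:

```python
def paginate_untagged(fetch_page, max_pages=10, page_size=100):
    """Walk untagged feedback using the createdBefore cursor.

    `fetch_page(params)` performs GET /feedback and returns a dict
    like {"results": [...]}; each result carries a "createdAt" field.
    """
    cursor = None
    pages = []
    for _ in range(max_pages):
        params = {"pageSize": page_size, "untagged": "true"}
        if cursor is not None:
            params["createdBefore"] = cursor
        results = fetch_page(params).get("results", [])
        if not results:
            break
        pages.append(results)
        cursor = results[-1]["createdAt"]  # advance the cursor
        if len(results) < page_size:
            break  # a short page means we reached the end
    return pages
```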
### 3. Classify one feedback
For each feedback item, read `source.origin`, `answers[]` (each answer has a `type` and relevant fields — `text`, `score`, `selected`, `label`, etc.). Skip `custom` answers unless they contain obvious signal.
For each in-scope tag, decide yes/no whether this feedback matches the tag description. Output the set of tag ids that apply — empty set means no tags for this feedback.
Be conservative: if uncertain, do NOT apply the tag. Never invent tags or ids.
### 4. Group by tag-combination
Collect `{ feedbackIds: [...], tagIds: [...] }` groups where every feedback in the group receives the same set of tags. Each group becomes one batch-tag request.
Caps per call: **500 feedbackIds, 20 tagIds**. Split any group that exceeds these.
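One way to sketch the grouping and cap-splitting, assuming classifications arrive as `(feedback_id, tag_id_set)` pairs (the function name is hypothetical). Because tagging is additive, splitting an over-cap tag set across two calls on the same feedbackIds is safe:

```python
from collections import defaultdict

MAX_FEEDBACK_IDS = 500  # per-call cap on feedbackIds
MAX_TAG_IDS = 20        # per-call cap on tagIds

def group_for_batch(classified):
    """Group items by identical tag combination, then split oversized groups.

    `classified` is a list of (feedback_id, frozenset_of_tag_ids);
    items with an empty tag set are skipped (nothing to apply).
    """
    by_combo = defaultdict(list)
    for fid, tags in classified:
        if tags:  # empty set → leave untagged
            by_combo[frozenset(tags)].append(fid)
    batches = []
    for tags, fids in by_combo.items():
        tag_ids = sorted(tags)
        for j in range(0, len(tag_ids), MAX_TAG_IDS):
            for i in range(0, len(fids), MAX_FEEDBACK_IDS):
                batches.append({
                    "feedbackIds": fids[i:i + MAX_FEEDBACK_IDS],
                    "tagIds": tag_ids[j:j + MAX_TAG_IDS],
                })
    return batches
```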
### 5. Batch-tag
```
POST /feedback/batch-tag
{ "feedbackIds": [...], "tagIds": [...] }
```
Response: `{ updated, skipped, errors[] }`. Per-feedback `errors[]` do NOT fail the batch — log and continue.
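Applying one group and surfacing per-feedback errors might look like this sketch; `post` is a caller-supplied function (hypothetical) that POSTs the JSON body and returns the parsed response:

```python
def apply_batch(post, batch):
    """Send one batch-tag request; log per-feedback errors and continue.

    `post(path, body)` returns {"updated": int, "skipped": int, "errors": [...]}.
    Per-feedback errors do NOT fail the batch, so we only log them.
    """
    resp = post("/feedback/batch-tag", batch)
    for err in resp.get("errors", []):
        print(f"batch-tag error: {err}")  # log and continue
    return resp["updated"], resp["skipped"]
```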
### 6. Rate-limit and error handling
- `429 Too Many Requests` — read `Retry-After` (seconds), sleep, retry the same call.
- `401 Unauthorized` — key rejected; stop and tell the user.
- Transient network errors — retry once, then skip the page. Do not lose the cursor.
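The retry policy above as a sketch. For testability, `call` is a caller-supplied function returning `(status, headers, body)`, with status `0` standing in for a transient network error — real code would catch the HTTP client's exception types instead:

```python
import time

def with_retries(call, max_transient_retries=1):
    """Sleep and retry on 429 per Retry-After; retry transient errors once."""
    transient_left = max_transient_retries
    while True:
        status, headers, body = call()
        if status == 429:
            # honor the server-requested pause, then retry the same call
            time.sleep(float(headers.get("Retry-After", "1")))
            continue
        if status == 401:
            raise PermissionError("API key rejected — stop and tell the user")
        if status == 0 and transient_left > 0:
            transient_left -= 1  # one retry for transient network errors
            continue
        return status, body
```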
### 7. Dry-run first
Before applying tags on the first page, print the first five classifications and ask the user to confirm. If the user is unhappy, revisit the tag descriptions in step 1 and rerun.
### 8. Report
At the end, print:
- Pages processed
- Feedback seen / tagged / left untagged (no rule matched)
- Per-tag counts
- Any ids that returned per-feedback errors in batch-tag
## Boundaries
- **Never create, rename, or delete tags.** The Public API is read-only for tags. Direct the user to the Appzi portal (portal.appzi.com) to manage tags.
- **Never apply tags outside the user's declared scope** (respect `surveyId` / `createdAfter`).
- **Never call `DELETE /feedback/{id}/tags/{tagId}`** in this skill — tagging is additive-only.

Export your key:

```
export APPZI_API_KEY=apk_...
```

Then feed the skill to your agent:

  • Claude Code — save to ~/.claude/skills/appzi-auto-tag/SKILL.md, then run /appzi-auto-tag from any project.
  • Claude API / Agents SDK — pass the markdown as the system prompt.
  • OpenAI Assistants / Responses API — pass the markdown as instructions.
  • LangChain / AutoGen / CrewAI — use as the agent’s system prompt.
  • Any other LLM + HTTP tool — paste the markdown as the prompt; grant tool access to https://portal.api.appzi.io/api/v1.

Tags are additive. POST /feedback/{id}/tags and POST /feedback/batch-tag attach without disturbing existing tags. DELETE /feedback/{id}/tags/{tagId} removes one tag. There is no setTags endpoint. All tag operations are idempotent.

Survey questions reflect the last publish. questions[] on a survey is the last-published question set, not what a particular respondent saw. Older feedback may reference a question set that has since been replaced.