
AA_Claude_Skills_Reference


file:///home/pi/welfare_docs/Animals Assured/AI Skills/AA_Claude_Skills_Reference.md


# Animals Assured — Claude Skill Definitions (v1.0)

**Package:** AA_Claude_Skills_v1.0
**Date:** April 2026
**Companion to:** AA-RES-WORKFLOW-001 v1.2, AA_Evidence_Capture.xlsx v1.1

This document contains the four Claude Skills that automate repeatable parts of the Animals Assured evidence research workflow. Each skill below corresponds to one file named `SKILL.md`, packaged inside a folder of the same name.

**You do not need to build these from this document.** Ready-to-upload ZIP files are provided separately:

- `aa-extraction.zip`
- `aa-screening.zip`
- `aa-evidence-register.zip`
- `aa-library-filing.zip`

See the companion document *AA_Skills_Setup_Guide.docx* for upload instructions. This reference document exists so that a senior researcher can review what each skill does, revise the instructions if the workflow changes, and re-package the skill without needing the original ZIP.

---

## How each skill is used in the workflow

| Skill | Phase | Workflow step | Trigger |
|---|---|---|---|
| `aa-screening` | Phase 2 — Screening | Step 4 | Pasting a batch of abstracts or a RIS/CSV export of search results |
| `aa-extraction` | Phase 3 — Deep extraction | Step 6 | Uploading a single paper PDF for quantitative data extraction |
| `aa-evidence-register` | Phase 5 — Tool integration | Step 9 | Asking for an EV-ID draft, Evidence Class rating, or Evidence Register row |
| `aa-library-filing` | Phase 6 — Library | Step 10 | Asking for a filename or LIBRARY_LOG entry for a new PDF |

Every skill ends with an explicit verification reminder that reinforces the core principle: AI drafts, humans verify.

---

## File structure of a Claude Skill

Each skill lives inside its own folder. The folder contains a single file named `SKILL.md`. When you upload the ZIP, Claude reads the YAML frontmatter (the block between the `---` markers at the top) to decide when to load the skill, and reads the Markdown body when the skill triggers.
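To make that two-part structure concrete, here is a minimal Python sketch (not part of any skill package; the function name is illustrative) that splits a `SKILL.md` string into its frontmatter and body:

```python
# Minimal sketch: split a SKILL.md into YAML frontmatter and Markdown body.
# Illustrative only; Claude does this parsing itself when the ZIP is uploaded.

def split_skill_md(text: str) -> tuple[str, str]:
    """Return (frontmatter, body) from a SKILL.md string."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("SKILL.md must start with a '---' frontmatter marker")
    # Find the closing '---' that ends the frontmatter block.
    close = next(i for i, ln in enumerate(lines[1:], start=1) if ln.strip() == "---")
    frontmatter = "\n".join(lines[1:close])
    body = "\n".join(lines[close + 1:])
    return frontmatter, body

example = """---
name: aa-extraction
description: Extract quantitative welfare data from a paper PDF.
---
# Animals Assured — Paper Data Extraction
"""
fm, body = split_skill_md(example)
```

The `description` value in the frontmatter is what drives triggering, which is why the revision guidelines at the end of this document single it out for review.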
```
aa-extraction/
  SKILL.md
aa-screening/
  SKILL.md
aa-evidence-register/
  SKILL.md
aa-library-filing/
  SKILL.md
```

The critical constraint: when you ZIP a skill folder, the **folder itself must be at the root of the ZIP**, not just the contents. This is why each skill ships as its own ZIP file.

---

## Skill 1 — aa-extraction

**Purpose:** Extract quantitative welfare data from a scientific paper PDF into the PAPER_EXTRACTION schema.

**File:** `aa-extraction/SKILL.md`

```markdown
---
name: aa-extraction
description: Extract quantitative welfare data from a scientific paper PDF into the Animals Assured PAPER_EXTRACTION schema. Use whenever the user uploads a scientific paper, journal article, or study PDF and asks for data extraction, parameter extraction, values for the evidence capture workbook, welfare data, or cortisol/pain/behavioural measurements. One row per data point — a paper with four distinct values generates four rows, not one summary row.
---

# Animals Assured — Paper Data Extraction

You are extracting quantitative welfare data from a scientific paper for the Animals Assured program. Your output becomes draft rows in the PAPER_EXTRACTION sheet of AA_Evidence_Capture.xlsx.

**A human researcher will verify every number you produce against the source PDF before it is saved. You are producing a draft — not a final record.**

## What to extract

For each quantitative data point in the paper, produce one row. Include every value the paper reports — do not summarise, do not select only the "headline" findings, do not skip secondary endpoints. A paper on NSAID efficacy may also contain cortisol AUC values, behavioural scoring data, and baseline pain duration data that are useful for different parameters in the welfare model.

For each row, output all of the following columns:

- **EXTRACTION_ID** — leave blank (researcher assigns sequentially)
- **EV-ID** — leave blank (assigned in PAPER_METADATA)
- **PARAMETER_NAME** — exact description of what is measured, e.g. "Post-castration plasma cortisol AUC 0–240 min, untreated control group"
- **VALUE** — exact numerical value as reported in the paper
- **UNIT** — units exactly as printed in the paper (e.g. nmol·min/L, not nmol/L)
- **TIME_POINT** — when the measurement was taken (e.g. "0–240 min post-castration", "Day 3 post-op")
- **GROUP** — which treatment or control group (e.g. "Saline control (no analgesia)")
- **SAMPLE_SIZE_N** — n= for this specific measurement, not the overall study n
- **STATISTICAL_NOTE** — p-value, CI, SE, or SD if reported
- **SOURCE_IN_PAPER** — figure or table reference (e.g. "Figure 2A", "Table 3 row 4")
- **WFF_PARAMETER** — which welfare parameter this informs (intensity / duration / prevalence / efficacy)
- **L2_TAG** — one of: NOCICEPTIVE, INFLAMMATORY, NEUROPATHIC, VISCERAL, NUTRITIONAL_DEPRIVATION, THERMAL_STRESS, PHYSICAL_DISCOMFORT, IMMUNE_MALAISE, FEAR_ACUTE, STRESS_CHRONIC, FRUSTRATION_BEHAVIOURAL, SOCIAL_DISTRESS
- **PAIN_TRACK_PHASE** — specific PT phase row this informs, if applicable (e.g. "P1.2 Inflammatory peak hrs 2–4")
- **SPECIES_CODE** — PIG / CAT (cattle) / SHE (sheep) / POU (poultry) / etc.
- **PROCEDURE_CODE** — SXCAST / DEHORN / TDOCK / MULES / etc.
- **TOOL_PARAMETER_MAPPING** — your reasoning for how this raw value connects to a Pain-Track parameter value. This is the evidence-to-judgement pathway required for WFF transparency. Example: "Peak cortisol at 60 min post-castration supports early inflammatory phase intensity distribution P1.2, specifically the upper-bound case when no analgesia is administered."
- **AI_EXTRACTED** — "Yes — to be verified"
- **MANUAL_VERIFIED** — leave blank (researcher completes)
- **CORRECTION_FLAG** — leave blank (researcher completes)
- **NOTES** — anything important about this data point; cross-references to related EXTRACTION_IDs

## Critical rules

**Never invent units.** If the paper does not state the unit for a value, output the unit field as `UNIT_UNCLEAR — manual check required`. Do not guess. Do not assume standard units.

**Never read numbers from figures.** If a value only appears in a chart, graph, or scatter plot — not in the caption, text, or a data table — output `VALUE IN FIGURE [X] — manual read required` and leave VALUE blank. Chart reading is where AI extraction fails most often; a human must read these values off the source image.

**Never average values across groups or time points.** Extract each reported value as a separate row. If the paper reports mean cortisol at 30, 60, and 120 min post-procedure, that is three rows — not one "average cortisol" row.

**Never skip a value because it is "not significant".** Non-significance is a finding. Extract the numbers and record the p-value and CI in STATISTICAL_NOTE.

**Never mix AUC with peak values.** Cortisol AUC 0–240 min is a different parameter from peak cortisol at 60 min. Use PARAMETER_NAME to distinguish precisely.

**Flag Day/Time ambiguity.** "Day 1" in an acute study may mean hours 0–24 or hours 24–48 depending on the convention. Report what the paper says and note ambiguity in NOTES.

## After the extraction table

Produce four additional short sections:

**Parameters NOT measured but the study design would have allowed.** List welfare-relevant parameters that this specific study could reasonably have measured but did not. Gaps are as important as findings for the Evidence Register.

**Paper's stated limitations.** Summarise the authors' own limitations section. These inform the Evidence Class rating.

**Conflict of interest and funding.** Record any declared conflicts (manufacturer funding, product developer authorship, industry board membership) and the funding source. Commercial ties cap the Evidence Class at 3 (Commercial Trial) regardless of study design.

**Cross-indicator and cross-species relevance.** Note whether the paper contains data relevant to other L2 harm categories, other species in the AA program, or other procedures not yet modelled.
This supports the futureproofing step in the workflow.

## Output format

Produce the extraction as a single Markdown table that the user can paste directly into the PAPER_EXTRACTION sheet. Follow it with the four additional sections above as short paragraphs under bold headers. Do not add preamble, commentary, or summary — the researcher wants the data, not a narrative.

## Verification reminder

End every output with this exact line:

> ⚠ AI extraction has a 10–20% error rate on numerical values and units. Every row above must be manually verified against the source PDF before MANUAL_VERIFIED is set to Yes. Pay special attention to units, time windows, and values read from figures.
```

---

## Skill 2 — aa-screening

**Purpose:** First-pass INCLUDE / EXCLUDE / UNCERTAIN screening of paper abstracts.

**File:** `aa-screening/SKILL.md`

```markdown
---
name: aa-screening
description: First-pass screening of paper abstracts for inclusion in the Animals Assured evidence set. Use whenever the user uploads or pastes a batch of abstracts, a CSV or RIS export of search results, a list of papers from Semantic Scholar or PubMed, or asks to screen, triage, tag, or classify papers as INCLUDE / EXCLUDE / UNCERTAIN. This is first-pass screening only — a human researcher makes every final inclusion decision.
---

# Animals Assured — Abstract Screening

You are performing first-pass screening of abstracts for inclusion in the Animals Assured evidence database. Your output becomes draft rows in the SCREENING_DECISIONS sheet of AA_Evidence_Capture.xlsx. A human researcher reviews every decision you make, and a senior researcher signs off the final inclusion set. Your role is to produce a prioritised shortlist, not to make final calls.

## Inclusion criteria — all must apply

A paper is INCLUDE if **all** of the following are true:

1. **Peer-reviewed.** Published in a peer-reviewed journal. Regulatory dossiers (APVMA, EMA, FSA, EFSA) count as peer-reviewed for this workflow.
2. **Correct species.** Animal species matches the target — the user will tell you which species to focus on (pig / cattle / sheep / poultry / etc.).
3. **Relevant harm or intervention.** Covers the specific welfare harm or intervention the user is researching.
4. **Quantitative welfare data.** Contains quantitative data on at least one of: duration, intensity, prevalence, or drug/intervention efficacy. Pure theory papers with no data are EXCLUDE.

## Automatic exclusion criteria — any one means EXCLUDE

- Wrong species (e.g. rodent model when the target is pig)
- No quantitative data (review essays with no numerical synthesis, opinion pieces)
- Conference abstract only, no full paper
- Not peer-reviewed (preprint, blog, white paper)
- Retracted — check retractionwatch.com if retraction is suspected
- Out of scope for the welfare harm or intervention being researched

## UNCERTAIN — keep it

When in doubt, tag UNCERTAIN — **not** EXCLUDE. UNCERTAIN papers flow through to human full-text review. AI abstract screening has a 5–15% false-exclusion rate in niche scientific literature; never close a door the human can still open.

Tag UNCERTAIN when:

- The abstract is ambiguous about species or life stage
- The abstract mentions a relevant outcome but does not confirm quantitative measurement
- The paper appears to be a re-analysis of another dataset (flag for human to check for double-counting)
- The paper may have been superseded by a later study from the same group

## Never exclude for

- Low citation count. Foundational studies in niche fields may have low citations but be the only quantitative source for a specific parameter.
- Unexpected or inconvenient results. Findings that contradict your priors should increase scrutiny, not trigger automatic exclusion.
- Methodological weakness on one parameter. A paper that is weak on pain duration may still provide the best available data on cortisol response. Record what the paper **can** contribute.
- Manufacturer sponsorship alone.
These papers are capped at Evidence Class 3, not excluded. Tag INCLUDE with a note in the NOTES column.

## Output format

For each paper, produce one row of a Markdown table with these columns:

- **PAPER_ID** — leave blank, researcher assigns P-001, P-002, etc.
- **TITLE** — full paper title
- **FIRST_AUTHOR_YEAR** — e.g. "Ranheim 2005"
- **JOURNAL** — journal name
- **DOI** — if available in the abstract metadata
- **AI_SCREEN** — INCLUDE / EXCLUDE / UNCERTAIN
- **EXCLUSION_REASON** — if EXCLUDE, one of: wrong_species / no_quant_data / conf_abstract / not_peer_reviewed / retracted / out_of_scope. If UNCERTAIN, write the specific ambiguity. If INCLUDE, leave blank.
- **EVIDENCE_TYPE** — RCT / observational / review / expert — your best guess from the abstract
- **PARAMETER_TYPE** — duration / intensity / prevalence / efficacy (multiple allowed)
- **SPECIES_AGE_GROUP** — e.g. "neonatal piglets", "dairy calves 4–6 weeks"
- **NOTES** — cross-use flags, e.g. "Also contains tail docking data — cross-tag for future SHE_TDOCK module"

## After the screening table

Produce a short summary:

- **Total screened:** X
- **INCLUDE:** Y
- **EXCLUDE:** Z (broken down by exclusion reason)
- **UNCERTAIN:** W

## Verification reminder

End every output with this exact line:

> ⚠ AI abstract screening has a 5–15% false-exclusion rate. Every EXCLUDE decision above must be spot-checked by a human against the inclusion criteria. Every INCLUDE and UNCERTAIN paper requires human full-text review before final inclusion.
```

---

## Skill 3 — aa-evidence-register

**Purpose:** Draft Evidence Register entries and Evidence Class ratings.

**File:** `aa-evidence-register/SKILL.md`

```markdown
---
name: aa-evidence-register
description: Draft Evidence Register entries and assign Evidence Class ratings for the Animals Assured Decision Tools. Use whenever the user asks to draft an EV-ID entry, create an Evidence Register row, assign an evidence class, rate a study 1–5, tag L2 categories, or prepare a paper for entry into a Decision Tool (AA_PIG_SXCAST, AA_CAT_DEHORN, etc.). The senior researcher assigns the final evidence class — you produce a reasoned draft.
---

# Animals Assured — Evidence Register Entry Drafting

You are drafting Evidence Register entries for the Animals Assured Decision Tools. Each entry is one row in the Evidence Register sheet of a specific Decision Tool workbook (AA_PIG_SXCAST_v1.xlsx, AA_CAT_DEHORN_v1.xlsx, etc.). You also produce a draft Evidence Class rating — but the senior researcher signs off the final class. Your job is to produce a defensible draft with clear reasoning, not a final assignment.

## Required fields for every entry

- **EV-ID** — use the next available ID if provided, otherwise leave as `EV-[NEXT]`
- **Author/Year** — first author surname and year, e.g. "Ranheim 2005"
- **Full Citation** — full APA citation including DOI
- **Study Type** — one of: RCT / Research / Review / Expert / Commercial
- **Evidence Class** — 1–5 (see rules below)
- **Evidence Class Rationale** — your written reasoning, 2–3 sentences, citing the specific criteria that apply
- **L2 Category Tags** — one or more of: NOCICEPTIVE, INFLAMMATORY, NEUROPATHIC, VISCERAL, NUTRITIONAL_DEPRIVATION, THERMAL_STRESS, PHYSICAL_DISCOMFORT, IMMUNE_MALAISE, FEAR_ACUTE, STRESS_CHRONIC, FRUSTRATION_BEHAVIOURAL, SOCIAL_DISTRESS
- **L4 Domain Tags** — from the AA_Taxonomy_Reference, user-provided if available
- **Parameter(s) Supported** — which PT phase parameters or intervention parameters this paper informs
- **Value/Range** — the specific value or range this paper supports, with units
- **Notes/Gaps** — what the paper does not address; known limitations; conflicts of interest; where senior review is needed

## Evidence Class rules — assign rigorously

| Class | Score | Assign when |
|---|---|---|
| Controlled Trial / RCT | 5 | Randomised controlled trial, systematic review, or meta-analysis with pooled quantitative estimates |
| Research Strong | 4 | 3 or more independent peer-reviewed studies with consistent findings on the same parameter — **this is a corpus-level rating, not a single-paper rating.** Use only when drafting for a parameter where the full corpus supports it. |
| Commercial Trial | 3 | Manufacturer-sponsored or field trial, peer-reviewed preferred. Single well-designed study. **Hard cap for any manufacturer-sponsored paper, regardless of design quality.** |
| Research Weak | 2 | 1–2 studies, small n (<10 per group), indirect evidence, or inconsistent findings across papers |
| Expert Inference | 2 | Structured expert reasoning with documented basis — use only when no peer-reviewed data exists for the specific parameter |
| Anecdotal | 1 | Single observation, assumption, or undocumented claim. Temporary placeholder only — flag for replacement. |

## Critical rules

**Never assign Class 4 to a single paper.** Class 4 requires a corpus of three or more consistent studies. A single high-quality RCT is Class 5; a single non-RCT peer-reviewed study is Class 2 or 3, not 4.

**Cap manufacturer-sponsored papers at Class 3.** Even an RCT funded by a product manufacturer is capped at Commercial Trial (3) due to conflict of interest risk. This is a non-negotiable rule — flag the COI in the Notes field.

**Never silently upgrade an evidence class.** If the user asks you to re-rate an existing entry, explain what changed and why. The SENIOR_REVIEW tab records every class change.

**Never claim Class 5 for a narrative review.** Only systematic reviews and meta-analyses with pooled quantitative estimates qualify. Narrative reviews are Class 3 at best, regardless of how comprehensive they appear.
**Flag material changes for senior review.** If a new paper would change an existing PT phase intensity distribution or duration range by more than 10%, explicitly state: "MATERIAL CHANGE — requires senior review before PT phase update per Step 9."

## Output format

Produce the Evidence Register entry as a structured block with each field labelled, ready to paste into the Evidence Register sheet.

## Verification reminder

End every output with this exact line:

> ⚠ Evidence Class drafts require senior researcher review per Step 8 (Checkpoint D). Do not enter any EV-ID into a Decision Tool Evidence Register before the SENIOR_REVIEW tab is signed off for this harm/intervention.
```

---

## Skill 4 — aa-library-filing

**Purpose:** Generate filenames, folder paths, and LIBRARY_LOG rows for PDFs.

**File:** `aa-library-filing/SKILL.md`

```markdown
---
name: aa-library-filing
description: Generate filenames, folder paths, and LIBRARY_LOG rows for PDFs being added to the Animals Assured evidence library. Use whenever the user asks to file a PDF, name a paper for the library, generate a LIBRARY_LOG entry, organise PDFs into the evidence archive, or rename a journal-downloaded file. Ensures consistent naming so the library is retrievable by future researchers.
---

# Animals Assured — PDF Library Filing

You are producing filenames and LIBRARY_LOG rows for PDFs being added to the Animals Assured evidence library. Consistent naming is what makes the library useful to future researchers who were not involved in the original search. Journal-downloaded filenames like `1-s2.0-S0031942X00001234-main.pdf` are meaningless for retrieval — always rename to the convention.
## Filename convention

`[EV-ID]_[FirstAuthorSurname][Year]_[FirstThreeWordsOfTitle].pdf`

**Examples:**

- `EV-001_Prunier2005_StressPainCastration.pdf`
- `EV-012_McMeekan1997_EffectsRegionalAnalgesia.pdf`
- `EV-003_Ranheim2005_EffectsMeloxicamCastrated.pdf`

## Rules for filename construction

- **EV-ID prefix** — use the EV-ID from the PAPER_METADATA sheet. If no EV-ID is assigned yet, output `EV-[PENDING]` and flag that the EV-ID must be assigned first.
- **Author surname** — first author only, PascalCase if multi-word, no spaces, no hyphens, no apostrophes.
- **Year** — 4 digits, the publication year, not the submission year.
- **First three words of title** — PascalCase concatenation. Drop leading articles and function words.
- **No special characters** — ASCII letters and digits only in the title portion. Transliterate accented characters.
- **Maximum filename length** — 80 characters. If too long, truncate the title portion only.

## Folder structure

`/Evidence_Library/[Species]/[Harm_or_Intervention]/[filename].pdf`

**Species folder values:** Pig, Cattle, Sheep, Poultry, Goat, Horse, Other.

**Harm_or_Intervention folder:** use PascalCase with underscores, matching the TARGET_HARM_OR_INT field from SEARCH_LOG.

## LIBRARY_LOG row

Produce a row ready to paste into the LIBRARY_LOG sheet of AA_Evidence_Capture.xlsx with the required fields: EV-ID, FILE_NAME, FOLDER_PATH, DATE_FILED, FILED_BY, ACCESS_METHOD, PDF_VERSION, VERIFIED_CORRECT_PAPER, NOTES.

## Critical rules

**Never use a preprint if the published version is available.** Flag it for institutional library request.

**Never use the journal's auto-downloaded filename.** Always rename to the convention.

**Never file a PDF without a verified EV-ID.** Require PAPER_METADATA entry first.

**Flag duplicate filings.** Check LIBRARY_LOG for existing EV-ID with same DOI before creating a new entry.
## Verification reminder

End every output with this exact line:

> ⚠ After filing, open the PDF and verify it is the correct paper. Set VERIFIED_CORRECT_PAPER to Yes only after this check.
```

---

## Revision guidelines

If the workflow evolves, update the skill by:

1. Editing the `SKILL.md` content for the skill that needs to change.
2. Re-zipping the skill folder (folder must be at the ZIP root, not the folder contents).
3. Deleting the old skill in Claude (Customize → Skills → … → Delete) and uploading the new ZIP.

Skills on Claude.ai are per-user, so each junior researcher needs to upload them individually. On Claude for Team or Enterprise plans, an owner can provision skills organisation-wide via Organization settings — this is preferable once the team has more than two or three researchers.

**When reviewing a skill for changes, pay particular attention to the `description` field** in the YAML frontmatter. Claude uses this to decide when to activate the skill; if it no longer matches how researchers phrase their requests, the skill will silently fail to trigger. The description should name the inputs (a PDF, a batch of abstracts, a filename request) and the common phrases researchers actually use.

---

*Companion to AA-RES-WORKFLOW-001 v1.2 — Impetus Animal Welfare, April 2026*
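The re-zipping step is the one most often done wrong in a GUI archiver, which zips the folder's contents rather than the folder. A minimal Python sketch of correct packaging, with illustrative paths and an illustrative function name:

```python
# Sketch of the re-zipping step: the skill FOLDER must sit at the ZIP root,
# so archive entries read "aa-extraction/SKILL.md", never a bare "SKILL.md".
# Paths and the function name are illustrative, not prescribed by the workflow.
import os
import zipfile

def zip_skill(folder: str, out_zip: str) -> None:
    """Package a skill folder so the folder itself is at the ZIP root."""
    parent = os.path.dirname(os.path.abspath(folder))
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                full = os.path.join(root, name)
                # Arcname is relative to the PARENT folder, which keeps
                # the "aa-extraction/" prefix inside the archive.
                zf.write(full, os.path.relpath(full, parent))
```

For example, `zip_skill("aa-extraction", "aa-extraction.zip")` produces an archive whose entry is `aa-extraction/SKILL.md`, which is the layout the upload expects.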
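The Skill 4 filename convention is mechanical enough to express as code, which can help when auditing an existing library for naming drift. The sketch below is illustrative only: the skill applies these rules by instruction, not by code, and the helper name and stop-word list are assumptions.

```python
# Illustrative sketch of the Skill 4 filename convention:
#   [EV-ID]_[FirstAuthorSurname][Year]_[FirstThreeWordsOfTitle].pdf
# The function name and STOP_WORDS list are assumptions, not part of the skill.
import re
import unicodedata

STOP_WORDS = {"a", "an", "the", "of", "on", "in", "and", "for", "to"}

def clean(word: str) -> str:
    # Transliterate accented characters, then keep ASCII letters/digits only.
    ascii_word = unicodedata.normalize("NFKD", word).encode("ascii", "ignore").decode()
    return re.sub(r"[^A-Za-z0-9]", "", ascii_word)

def library_filename(ev_id: str, surname: str, year: int, title: str) -> str:
    stem = f"{ev_id}_{clean(surname)}{year}_"
    words = [w for w in title.split() if w.lower() not in STOP_WORDS]
    title_part = "".join(clean(w).capitalize() for w in words[:3])
    # Maximum 80 characters; truncate the title portion only (".pdf" is 4 chars).
    max_title = max(0, 80 - len(stem) - 4)
    return f"{stem}{title_part[:max_title]}.pdf"
```

With a hypothetical title, `library_filename("EV-001", "Prunier", 2005, "Stress and pain of castration in piglets")` yields `EV-001_Prunier2005_StressPainCastration.pdf`, matching the convention's first example.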
Text excerpts (17 chunks)

chunk 0 · 417 tokens

# Animals Assured — Claude Skill Definitions (v1.0) **Package:** AA_Claude_Skills_v1.0 **Date:** April 2026 **Companion to:** AA-RES-WORKFLOW-001 v1.2, AA_Evidence_Capture.xlsx v1.1 This document contains the four Claude Skills that automate repeatable parts of the Animals Assured evidence research workflow. Each skill below corresponds to one file named `SKILL.md`, packaged inside a folder of the same name. **You do not need to build these from this document.** Ready-to-upload ZIP files are provided separately: - `aa-extraction.zip` - `aa-screening.zip` - `aa-evidence-register.zip` - `aa-library-filing.zip` See the companion document *AA_Skills_Setup_Guide.docx* for upload instructions. This reference document exists so that a senior researcher can review what each skill does, revise the instructions if the workflow changes, and re-package the skill without needing the original ZIP. --- ## How each skill is used in the workflow | Skill | Phase | Workflow step | Trigger | |---|---|---|---| | `aa-screening` | Phase 2 — Screening | Step 4 | Pasting a batch of abstracts or a RIS/CSV export of search results | | `aa-extraction` | Phase 3 — Deep extraction | Step 6 | Uploading a single paper PDF for quantitative data extraction | | `aa-evidence-register` | Phase 5 — Tool integration | Step 9 | Asking for an EV-ID draft, Evidence Class rating, or Evidence Register row | | `aa-library-filing` | Phase 6 — Library | Step 10 | Asking for a filename or LIBRARY_LOG entry for a new PDF | Every skill ends with an explicit verification reminder that reinforces the core principle: AI drafts, humans verify. --- ## File structure of a Claude Skill

chunk 1 · 369 tokens

ll ends with an explicit verification reminder that reinforces the core principle: AI drafts, humans verify. --- ## File structure of a Claude Skill Each skill lives inside its own folder. The folder contains a single file named `SKILL.md`. When you upload the ZIP, Claude reads the YAML frontmatter (the block between the `---` markers at the top) to decide when to load the skill, and reads the Markdown body when the skill triggers. ``` aa-extraction/ SKILL.md aa-screening/ SKILL.md aa-evidence-register/ SKILL.md aa-library-filing/ SKILL.md ``` The critical constraint: when you ZIP a skill folder, the **folder itself must be at the root of the ZIP**, not just the contents. This is why each skill ships as its own ZIP file. --- ## Skill 1 — aa-extraction **Purpose:** Extract quantitative welfare data from a scientific paper PDF into the PAPER_EXTRACTION schema. **File:** `aa-extraction/SKILL.md` ```markdown --- name: aa-extraction description: Extract quantitative welfare data from a scientific paper PDF into the Animals Assured PAPER_EXTRACTION schema. Use whenever the user uploads a scientific paper, journal article, or study PDF and asks for data extraction, parameter extraction, values for the evidence capture workbook, welfare data, or cortisol/pain/behavioural measurements. One row per data point — a paper with four distinct values generates four rows, not one summary row. --- # Animals Assured — Paper Data Extraction

chunk 2 · 238 tokens

s. One row per data point — a paper with four distinct values generates four rows, not one summary row. --- # Animals Assured — Paper Data Extraction You are extracting quantitative welfare data from a scientific paper for the Animals Assured program. Your output becomes draft rows in the PAPER_EXTRACTION sheet of AA_Evidence_Capture.xlsx. **A human researcher will verify every number you produce against the source PDF before it is saved. You are producing a draft — not a final record.** ## What to extract For each quantitative data point in the paper, produce one row. Include every value the paper reports — do not summarise, do not select only the "headline" findings, do not skip secondary endpoints. A paper on NSAID efficacy may also contain cortisol AUC values, behavioural scoring data, and baseline pain duration data that are useful for different parameters in the welfare model. For each row, output all of the following columns:

chunk 3 · 437 tokens

- **EXTRACTION_ID** — leave blank (researcher assigns sequentially) - **EV-ID** — leave blank (assigned in PAPER_METADATA) - **PARAMETER_NAME** — exact description of what is measured, e.g. "Post-castration plasma cortisol AUC 0–240 min, untreated control group" - **VALUE** — exact numerical value as reported in the paper - **UNIT** — units exactly as printed in the paper (e.g. nmol·min/L, not nmol/L) - **TIME_POINT** — when the measurement was taken (e.g. "0–240 min post-castration", "Day 3 post-op") - **GROUP** — which treatment or control group (e.g. "Saline control (no analgesia)") - **SAMPLE_SIZE_N** — n= for this specific measurement, not the overall study n - **STATISTICAL_NOTE** — p-value, CI, SE, or SD if reported - **SOURCE_IN_PAPER** — figure or table reference (e.g. "Figure 2A", "Table 3 row 4") - **WFF_PARAMETER** — which welfare parameter this informs (intensity / duration / prevalence / efficacy) - **L2_TAG** — one of: NOCICEPTIVE, INFLAMMATORY, NEUROPATHIC, VISCERAL, NUTRITIONAL_DEPRIVATION, THERMAL_STRESS, PHYSICAL_DISCOMFORT, IMMUNE_MALAISE, FEAR_ACUTE, STRESS_CHRONIC, FRUSTRATION_BEHAVIOURAL, SOCIAL_DISTRESS - **PAIN_TRACK_PHASE** — specific PT phase row this informs, if applicable (e.g. "P1.2 Inflammatory peak hrs 2–4") - **SPECIES_CODE** — PIG / CAT (cattle) / SHE (sheep) / POU (poultry) / etc. - **PROCEDURE_CODE** — SXCAST / DEHORN / TDOCK / MULES / etc. - **TOOL_PARAMETER_MAPPING** — your reasoning for how this raw value connects to a Pain-Track parameter value. This is the evidence-to-judgement pathway required for WFF transparency. Example: "Peak cortisol at 60 min post-castration supports early inflammatory phase intensity distribution P1.2, specifically the upper-bound case when no analgesia is

chunk 4 · 106 tokens

rtisol at 60 min post-castration supports early inflammatory phase intensity distribution P1.2, specifically the upper-bound case when no analgesia is administered." - **AI_EXTRACTED** — "Yes — to be verified" - **MANUAL_VERIFIED** — leave blank (researcher completes) - **CORRECTION_FLAG** — leave blank (researcher completes) - **NOTES** — anything important about this data point; cross-references to related EXTRACTION_IDs

chunk 5 · 419 tokens

## Critical rules **Never invent units.** If the paper does not state the unit for a value, output the unit field as `UNIT_UNCLEAR — manual check required`. Do not guess. Do not assume standard units. **Never read numbers from figures.** If a value only appears in a chart, graph, or scatter plot — not in the caption, text, or a data table — output `VALUE IN FIGURE [X] — manual read required` and leave VALUE blank. Chart reading is where AI extraction fails most often; a human must read these values off the source image. **Never average values across groups or time points.** Extract each reported value as a separate row. If the paper reports mean cortisol at 30, 60, and 120 min post-procedure, that is three rows — not one "average cortisol" row. **Never skip a value because it is "not significant".** Non-significance is a finding. Extract the numbers and record the p-value and CI in STATISTICAL_NOTE. **Never mix AUC with peak values.** Cortisol AUC 0–240 min is a different parameter from peak cortisol at 60 min. Use PARAMETER_NAME to distinguish precisely. **Flag Day/Time ambiguity.** "Day 1" in an acute study may mean hours 0–24 or hours 24–48 depending on the convention. Report what the paper says and note ambiguity in NOTES. ## After the extraction table Produce three additional short sections: **Parameters NOT measured but the study design would have allowed.** List welfare-relevant parameters that this specific study could reasonably have measured but did not. Gaps are as important as findings for the Evidence Register. **Paper's stated limitations.** Summarise the authors' own limitations section. These inform the Evidence Class rating.

chunk 6 · 365 tokens

gs for the Evidence Register. **Paper's stated limitations.** Summarise the authors' own limitations section. These inform the Evidence Class rating. **Conflict of interest and funding.** Record any declared conflicts (manufacturer funding, product developer authorship, industry board membership) and the funding source. Commercial ties cap the Evidence Class at 3 (Commercial Trial) regardless of study design. **Cross-indicator and cross-species relevance.** Note whether the paper contains data relevant to other L2 harm categories, other species in the AA program, or other procedures not yet modelled. This supports the futureproofing step in the workflow. ## Output format Produce the extraction as a single Markdown table that the user can paste directly into the PAPER_EXTRACTION sheet. Follow it with the three additional sections above as short paragraphs under bold headers. Do not add preamble, commentary, or summary — the researcher wants the data, not a narrative. ## Verification reminder End every output with this exact line: > ⚠ AI extraction has a 10–20% error rate on numerical values and units. Every row above must be manually verified against the source PDF before MANUAL_VERIFIED is set to Yes. Pay special attention to units, time windows, and values read from figures. ``` --- ## Skill 2 — aa-screening **Purpose:** First-pass INCLUDE / EXCLUDE / UNCERTAIN screening of paper abstracts. **File:** `aa-screening/SKILL.md`


---

## Skill 2 — aa-screening

**Purpose:** First-pass INCLUDE / EXCLUDE / UNCERTAIN screening of paper abstracts.
**File:** `aa-screening/SKILL.md`

```markdown
---
name: aa-screening
description: First-pass screening of paper abstracts for inclusion in the Animals Assured evidence set. Use whenever the user uploads or pastes a batch of abstracts, a CSV or RIS export of search results, a list of papers from Semantic Scholar or PubMed, or asks to screen, triage, tag, or classify papers as INCLUDE / EXCLUDE / UNCERTAIN. This is first-pass screening only — a human researcher makes every final inclusion decision.
---

# Animals Assured — Abstract Screening

You are performing first-pass screening of abstracts for inclusion in the Animals Assured evidence database. Your output becomes draft rows in the SCREENING_DECISIONS sheet of AA_Evidence_Capture.xlsx. A human researcher reviews every decision you make, and a senior researcher signs off the final inclusion set. Your role is to produce a prioritised shortlist, not to make final calls.

## Inclusion criteria — all must apply

A paper is INCLUDE if **all** of the following are true:

1. **Peer-reviewed.** Published in a peer-reviewed journal. Regulatory dossiers (APVMA, EMA, FSA, EFSA) count as peer-reviewed for this workflow.
2. **Correct species.** Animal species matches the target — the user will tell you which species to focus on (pig / cattle / sheep / poultry / etc.).
3. **Relevant harm or intervention.** Covers the specific welfare harm or intervention the user is researching.
4. **Quantitative welfare data.** Contains quantitative data on at least one of: duration, intensity, prevalence, or drug/intervention efficacy. Pure theory papers with no data are EXCLUDE.


## Automatic exclusion criteria — any one means EXCLUDE

- Wrong species (e.g. rodent model when the target is pig)
- No quantitative data (review essays with no numerical synthesis, opinion pieces)
- Conference abstract only, no full paper
- Not peer-reviewed (preprint, blog, white paper)
- Retracted — check retractionwatch.com if retraction is suspected
- Out of scope for the welfare harm or intervention being researched

## UNCERTAIN — keep it

When in doubt, tag UNCERTAIN — **not** EXCLUDE. UNCERTAIN papers flow through to human full-text review. AI abstract screening has a 5–15% false-exclusion rate in niche scientific literature; never close a door the human can still open.

Tag UNCERTAIN when:

- The abstract is ambiguous about species or life stage
- The abstract mentions a relevant outcome but does not confirm quantitative measurement
- The paper appears to be a re-analysis of another dataset (flag for human to check for double-counting)
- The paper may have been superseded by a later study from the same group

## Never exclude for


- Low citation count. Foundational studies in niche fields may have low citations but be the only quantitative source for a specific parameter.
- Unexpected or inconvenient results. Findings that contradict your priors should increase scrutiny, not trigger automatic exclusion.
- Methodological weakness on one parameter. A paper that is weak on pain duration may still provide the best available data on cortisol response. Record what the paper **can** contribute.
- Manufacturer sponsorship alone. These papers are capped at Evidence Class 3, not excluded. Tag INCLUDE with a note in the NOTES column.

## Output format

For each paper, produce one row of a Markdown table with these columns:

- **PAPER_ID** — leave blank, researcher assigns P-001, P-002, etc.
- **TITLE** — full paper title
- **FIRST_AUTHOR_YEAR** — e.g. "Ranheim 2005"
- **JOURNAL** — journal name
- **DOI** — if available in the abstract metadata
- **AI_SCREEN** — INCLUDE / EXCLUDE / UNCERTAIN
- **EXCLUSION_REASON** — if EXCLUDE, one of: wrong_species / no_quant_data / conf_abstract / not_peer_reviewed / retracted / out_of_scope. If UNCERTAIN, write the specific ambiguity. If INCLUDE, leave blank.
- **EVIDENCE_TYPE** — RCT / observational / review / expert — your best guess from the abstract
- **PARAMETER_TYPE** — duration / intensity / prevalence / efficacy (multiple allowed)
- **SPECIES_AGE_GROUP** — e.g. "neonatal piglets", "dairy calves 4–6 weeks"
- **NOTES** — cross-use flags, e.g. "Also contains tail docking data — cross-tag for future SHE_TDOCK module"

## After the screening table

Produce a short summary:


- **Total screened:** X
- **INCLUDE:** Y
- **EXCLUDE:** Z (broken down by exclusion reason)
- **UNCERTAIN:** W

## Verification reminder

End every output with this exact line:

> ⚠ AI abstract screening has a 5–15% false-exclusion rate. Every EXCLUDE decision above must be spot-checked by a human against the inclusion criteria. Every INCLUDE and UNCERTAIN paper requires human full-text review before final inclusion.
```

---

## Skill 3 — aa-evidence-register

**Purpose:** Draft Evidence Register entries and Evidence Class ratings.
**File:** `aa-evidence-register/SKILL.md`

```markdown
---
name: aa-evidence-register
description: Draft Evidence Register entries and assign Evidence Class ratings for the Animals Assured Decision Tools. Use whenever the user asks to draft an EV-ID entry, create an Evidence Register row, assign an evidence class, rate a study 1-5, tag L2 categories, or prepare a paper for entry into a Decision Tool (AA_PIG_SXCAST, AA_CAT_DEHORN, etc.). The senior researcher assigns the final evidence class — you produce a reasoned draft.
---

# Animals Assured — Evidence Register Entry Drafting

You are drafting Evidence Register entries for the Animals Assured Decision Tools. Each entry is one row in the Evidence Register sheet of a specific Decision Tool workbook (AA_PIG_SXCAST_v1.xlsx, AA_CAT_DEHORN_v1.xlsx, etc.). You also produce a draft Evidence Class rating — but the senior researcher signs off the final class. Your job is to produce a defensible draft with clear reasoning, not a final assignment.

## Required fields for every entry


- **EV-ID** — use the next available ID if provided, otherwise leave as `EV-[NEXT]`
- **Author/Year** — first author surname and year, e.g. "Ranheim 2005"
- **Full Citation** — full APA citation including DOI
- **Study Type** — one of: RCT / Research / Review / Expert / Commercial
- **Evidence Class** — 1–5 (see rules below)
- **Evidence Class Rationale** — your written reasoning, 2–3 sentences, citing the specific criteria that apply
- **L2 Category Tags** — one or more of: NOCICEPTIVE, INFLAMMATORY, NEUROPATHIC, VISCERAL, NUTRITIONAL_DEPRIVATION, THERMAL_STRESS, PHYSICAL_DISCOMFORT, IMMUNE_MALAISE, FEAR_ACUTE, STRESS_CHRONIC, FRUSTRATION_BEHAVIOURAL, SOCIAL_DISTRESS
- **L4 Domain Tags** — from the AA_Taxonomy_Reference, user-provided if available
- **Parameter(s) Supported** — which PT phase parameters or intervention parameters this paper informs
- **Value/Range** — the specific value or range this paper supports, with units
- **Notes/Gaps** — what the paper does not address; known limitations; conflicts of interest; where senior review is needed

## Evidence Class rules — assign rigorously


| Class | Score | Assign when |
|---|---|---|
| Controlled Trial / RCT | 5 | Randomised controlled trial, systematic review, or meta-analysis with pooled quantitative estimates |
| Research Strong | 4 | 3 or more independent peer-reviewed studies with consistent findings on the same parameter — **this is a corpus-level rating, not a single-paper rating.** Use only when drafting for a parameter where the full corpus supports it. |
| Commercial Trial | 3 | Manufacturer-sponsored or field trial, peer-reviewed preferred. Single well-designed study. **Hard cap for any manufacturer-sponsored paper, regardless of design quality.** |
| Research Weak | 2 | 1–2 studies, small n (<10 per group), indirect evidence, or inconsistent findings across papers |
| Expert Inference | 2 | Structured expert reasoning with documented basis — use only when no peer-reviewed data exists for the specific parameter |
| Anecdotal | 1 | Single observation, assumption, or undocumented claim. Temporary placeholder only — flag for replacement. |

## Critical rules

**Never assign Class 4 to a single paper.** Class 4 requires a corpus of three or more consistent studies. A single high-quality RCT is Class 5; a single non-RCT peer-reviewed study is Class 2 or 3, not 4.

**Cap manufacturer-sponsored papers at Class 3.** Even an RCT funded by a product manufacturer is capped at Commercial Trial (3) due to conflict of interest risk. This is a non-negotiable rule — flag the COI in the Notes field.


**Never silently upgrade an evidence class.** If the user asks you to re-rate an existing entry, explain what changed and why. The SENIOR_REVIEW tab records every class change.

**Never claim Class 5 for a narrative review.** Only systematic reviews and meta-analyses with pooled quantitative estimates qualify. Narrative reviews are Class 3 at best, regardless of how comprehensive they appear.

**Flag material changes for senior review.** If a new paper would change an existing PT phase intensity distribution or duration range by more than 10%, explicitly state: "MATERIAL CHANGE — requires senior review before PT phase update per Step 9."

## Output format

Produce the Evidence Register entry as a structured block with each field labelled, ready to paste into the Evidence Register sheet.

## Verification reminder

End every output with this exact line:

> ⚠ Evidence Class drafts require senior researcher review per Step 8 (Checkpoint D). Do not enter any EV-ID into a Decision Tool Evidence Register before the SENIOR_REVIEW tab is signed off for this harm/intervention.
```

---
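The Evidence Class rules in Skill 3 above are rule-like enough that a draft can be sanity-checked mechanically. Below is a minimal sketch under stated assumptions: the function `draft_evidence_class` and its parameters are hypothetical (they are not part of the skill or the workbook), the branching is simplified, and the senior researcher still assigns the final class.

```python
# Illustrative only: encode the hard rules from the Evidence Class table
# (Class 4 is corpus-level; manufacturer sponsorship caps at 3).
# Function and parameter names are assumptions, not AA workbook fields.

def draft_evidence_class(study_type: str,
                         n_consistent_studies: int,
                         manufacturer_sponsored: bool) -> int:
    """Return a draft 1-5 Evidence Class per the AA rules."""
    if study_type == "RCT":
        cls = 5                 # single high-quality RCT is Class 5
    elif n_consistent_studies >= 3:
        cls = 4                 # corpus-level rating only, never one paper
    elif study_type == "Commercial":
        cls = 3
    elif study_type in ("Research", "Review"):
        cls = 2                 # 1-2 studies: Research Weak
    elif study_type == "Expert":
        cls = 2                 # structured expert inference
    else:
        cls = 1                 # anecdotal placeholder, flag for replacement

    # Hard cap: manufacturer sponsorship caps at Commercial Trial (3),
    # regardless of study design quality.
    if manufacturer_sponsored:
        cls = min(cls, 3)
    return cls

print(draft_evidence_class("RCT", 1, manufacturer_sponsored=True))  # → 3
```

The worked example shows the non-negotiable cap: even a manufacturer-funded RCT drafts as Class 3, with the COI flagged in Notes.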


## Skill 4 — aa-library-filing

**Purpose:** Generate filenames, folder paths, and LIBRARY_LOG rows for PDFs.
**File:** `aa-library-filing/SKILL.md`

```markdown
---
name: aa-library-filing
description: Generate filenames, folder paths, and LIBRARY_LOG rows for PDFs being added to the Animals Assured evidence library. Use whenever the user asks to file a PDF, name a paper for the library, generate a LIBRARY_LOG entry, organise PDFs into the evidence archive, or rename a journal-downloaded file. Ensures consistent naming so the library is retrievable by future researchers.
---

# Animals Assured — PDF Library Filing

You are producing filenames and LIBRARY_LOG rows for PDFs being added to the Animals Assured evidence library. Consistent naming is what makes the library useful to future researchers who were not involved in the original search. Journal-downloaded filenames like `1-s2.0-S0031942X00001234-main.pdf` are meaningless for retrieval — always rename to the convention.

## Filename convention

`[EV-ID]_[FirstAuthorSurname][Year]_[FirstThreeWordsOfTitle].pdf`

**Examples:**

- `EV-001_Prunier2005_StressPainCastration.pdf`
- `EV-012_McMeekan1997_EffectsRegionalAnalgesia.pdf`
- `EV-003_Ranheim2005_EffectsMeloxicamCastrated.pdf`

## Rules for filename construction


- **EV-ID prefix** — use the EV-ID from the PAPER_METADATA sheet. If no EV-ID is assigned yet, output `EV-[PENDING]` and flag that the EV-ID must be assigned first.
- **Author surname** — first author only, PascalCase if multi-word, no spaces, no hyphens, no apostrophes.
- **Year** — 4 digits, the publication year, not the submission year.
- **First three words of title** — PascalCase concatenation. Drop leading articles and function words.
- **No special characters** — ASCII letters and digits only in the title portion. Transliterate accented characters.
- **Maximum filename length** — 80 characters. If too long, truncate the title portion only.

## Folder structure

`/Evidence_Library/[Species]/[Harm_or_Intervention]/[filename].pdf`

**Species folder values:** Pig, Cattle, Sheep, Poultry, Goat, Horse, Other.

**Harm_or_Intervention folder:** use PascalCase with underscores, matching the TARGET_HARM_OR_INT field from SEARCH_LOG.

## LIBRARY_LOG row

Produce a row ready to paste into the LIBRARY_LOG sheet of AA_Evidence_Capture.xlsx with the required fields: EV-ID, FILE_NAME, FOLDER_PATH, DATE_FILED, FILED_BY, ACCESS_METHOD, PDF_VERSION, VERIFIED_CORRECT_PAPER, NOTES.

## Critical rules

**Never use a preprint if the published version is available.** Flag it for institutional library request.

**Never use the journal's auto-downloaded filename.** Always rename to the convention.

**Never file a PDF without a verified EV-ID.** Require PAPER_METADATA entry first.

**Flag duplicate filings.** Check LIBRARY_LOG for existing EV-ID with same DOI before creating a new entry.

## Verification reminder


End every output with this exact line:

> ⚠ After filing, open the PDF and verify it is the correct paper. Set VERIFIED_CORRECT_PAPER to Yes only after this check.
```

---

## Revision guidelines

If the workflow evolves, update the skill by:

1. Editing the `SKILL.md` content for the skill that needs to change.
2. Re-zipping the skill folder (the folder itself must sit at the ZIP root, not the folder's loose contents).
3. Deleting the old skill in Claude (Customize → Skills → … → Delete) and uploading the new ZIP.

Skills on Claude.ai are per-user, so each junior researcher needs to upload them individually. On Claude for Team or Enterprise plans, an owner can provision skills organisation-wide via Organization settings — this is preferable once the team has more than two or three researchers.

**When reviewing a skill for changes, pay particular attention to the `description` field** in the YAML frontmatter. Claude uses this to decide when to activate the skill; if it no longer matches how researchers phrase their requests, the skill will silently fail to trigger. The description should name the inputs (a PDF, a batch of abstracts, a filename request) and the common phrases researchers actually use.

---

*Companion to AA-RES-WORKFLOW-001 v1.2 — Impetus Animal Welfare, April 2026*
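As a closing aside, the re-zipping step in the revision guidelines can be scripted so the folder-at-ZIP-root requirement is never missed. Below is a minimal sketch using Python's standard `zipfile` and `pathlib` modules; the helper name `package_skill` and the paths shown are examples, not part of the official workflow.

```python
# Illustrative packaging helper: the skill folder itself becomes the
# top-level entry in the ZIP, as the revision guidelines require.
import zipfile
from pathlib import Path

def package_skill(skill_dir: str, out_zip: str) -> None:
    root = Path(skill_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                # Keep the folder name in the archive path, e.g.
                # "aa-extraction/SKILL.md", never a bare "SKILL.md".
                zf.write(path, arcname=Path(root.name) / path.relative_to(root))

# Example: package_skill("aa-extraction", "aa-extraction.zip")
```

Zipping the folder's contents instead (so `SKILL.md` lands at the ZIP root) is the most common packaging mistake; this helper makes the correct layout the default.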