Config-driven issue labeling
Found a neat GitHub Action that auto-assesses GitHub issues with an AI model and adds clean, standardized labels. I like the approach a lot: prompt files plus a label-to-prompt mapping, so it stays configuration-driven instead of accumulating more custom code.
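I haven't dug into the Action's real config schema, so treat this as a guess at the shape rather than its actual format, but in TypeScript terms a label-to-prompt mapping might look like:

```ts
// Hypothetical label-to-prompt mapping: the paths and label names are
// illustrative, not the Action's real config schema.
type LabelPromptMap = Record<string, string[]>;

const mapping: LabelPromptMap = {
  // label on the issue  ->  prompt file(s) to run against it
  bug: ["prompts/bug-completeness.md"],
  feature: ["prompts/feature-priority.md", "prompts/feature-area.md"],
};

// Resolve which prompts to run for a given issue's labels.
function promptsFor(labels: string[], map: LabelPromptMap): string[] {
  return labels.flatMap((label) => map[label] ?? []);
}
```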
Quick flow: add a trigger label like “request ai review”. The Action collects the issue’s labels, maps them to prompt files, runs one AI call per prompt via GitHub Models, regex-grabs an “AI Assessment” header from each response, and applies ai::-prefixed labels. It can also drop a structured comment.
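The “AI Assessment” grab is easy to picture. A minimal sketch, assuming the model is told to emit a markdown header named “AI Assessment” followed by the labels; the section format below is my assumption, not the Action’s documented contract:

```ts
// Pull the section under an "AI Assessment" header out of a model response.
// The header name comes from the Action's description; the body format
// (one "label: ..." line) is an assumption for illustration.
function extractAssessment(response: string): string | null {
  const match = response.match(/#+\s*AI Assessment\s*\n([\s\S]*?)(?=\n#+\s|$)/i);
  return match ? match[1].trim() : null;
}

const sample = [
  "Some preamble from the model...",
  "## AI Assessment",
  "label: needs-repro",
  "reason: no reproduction steps provided",
].join("\n");

console.log(extractAssessment(sample));
// -> "label: needs-repro\nreason: no reproduction steps provided"
```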
Prompts live as plain files in the repo.
Why I’m into it: faster, consistent triage. Think completeness checks, priority, area labels. It supports multiple prompts per issue, and runs are idempotent since it removes the trigger label afterward, so you can re-run by re-applying it. It’s a TypeScript Action, and the mapping pairs a label such as “bug” with its prompt files (the shape sketched above).
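The apply-and-reset step is the piece that makes re-runs explicit. A rough sketch using standard Octokit issue calls; the owner/repo wiring and the parsed label values are placeholders:

```ts
import { Octokit } from "@octokit/rest";

// Sketch of the "apply results, then reset the trigger" step.
// The ai:: prefix and the trigger label name come from the post;
// everything else here is boilerplate assumption.
async function applyAssessment(
  octokit: Octokit,
  owner: string,
  repo: string,
  issue_number: number,
  labels: string[], // e.g. ["ai::needs-repro"] parsed from the assessment
) {
  await octokit.rest.issues.addLabels({ owner, repo, issue_number, labels });

  // Removing the trigger label is what keeps runs idempotent: the Action
  // only fires again if someone re-applies "request ai review".
  await octokit.rest.issues.removeLabel({
    owner,
    repo,
    issue_number,
    name: "request ai review",
  });
}
```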