A colleague recently published an exploration of a Claude Code skill called Linear Issue Enricher. The skill scores Linear issues for agent-readiness (how ready they are to be read and developed by an agent) and enriches them with codebase context when needed. I’m a LibreChat fan, so my question was whether I could port it to a LibreChat Agent without much pain.
Spoiler: yes. And it’s a good proof of concept for something that will become increasingly common: writing AI skills once, then deploying them across multiple platforms.
This post walks through what I did, what worked, what didn’t, and what I learned.
TL;DR
This is what I did:
- Ported a Claude Code skill to a LibreChat Agent and tested it in about 20 minutes
- The process was surprisingly straightforward: extract files → translate to system prompt → attach reference files
- It didn’t work perfectly on the first try: the agent got lost among the markdown tables, but the fix was quick.
- It’s a solid proof of concept that skills can move across AI platforms with minimal effort
- The pattern can also be useful: develop in Claude Code, deploy to LibreChat for broader team access
The Use Case: The Linear Issue Enricher
First things first: Linear is a project management and issue tracking tool designed for software development teams to plan, track, and organize work efficiently. So an issue is a task. More or less.
What’s the Linear Issue Enricher?
The original skill does something genuinely useful: it takes a Linear issue (often written in vague and/or business language) and transforms it into an agent-ready specification. It does this through several phases:
- Reads the issue from Linear
- Scores it against a readiness rubric (0-12 scale)
- Explores relevant codebases and documents to fill the gaps
- Completes the specification with all the context an AI coding agent would need
- Adds the result back to Linear (This is possible depending on the MCP you have access to; in some cases, you’d need to manually copy and paste the context).
The skill is published in a public GitHub repo. It uses a scoring rubric (a markdown file) with four criteria (Actionable Scope, Technical Mapping, Acceptance Criteria, and Supporting Context) and classifies issues as Agent-Ready, Nearly Ready, Needs Work, or Not Ready. It also includes a complementary template for enriched specifications, another markdown file.
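To give a sense of its shape, here’s a minimal sketch of such a rubric. This is my illustrative reconstruction, not the actual file: the criteria, the 0-3 scale per criterion, and the classification names come from the repo, but the cell descriptions and score bands are assumptions.

```markdown
| Criterion           | 0 (missing)      | 1-2 (partial)          | 3 (complete)              |
|---------------------|------------------|------------------------|---------------------------|
| Actionable Scope    | Vague wish       | Loosely bounded        | Clearly bounded task      |
| Technical Mapping   | No code pointers | Some files/areas named | Exact files and functions |
| Acceptance Criteria | None             | Implicit or incomplete | Explicit and testable     |
| Supporting Context  | None             | Links without summary  | Relevant context inline   |

Total (0-12) maps to a classification, e.g. 10-12 Agent-Ready, 7-9 Nearly
Ready, 4-6 Needs Work, 0-3 Not Ready (illustrative bands, not the repo's).
```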

The Migration Process
Step 1: Extract the Key Files
Take a look at the skill repo on GitHub. I grabbed four essential files:
| File | Purpose |
|-----------------------------|------------------------------------------|
| `README.md` | Overview and context |
| `SKILL.md` | The core workflow (phases, steps, logic) |
| `scoring-rubric.md` | The 4-criteria evaluation framework |
| `enriched-spec-template.md` | The output structure |
Worth mentioning: the SKILL.md file was particularly well organized. It already had a phase-by-phase structure that translated almost directly into a system prompt, which saved me a couple of minutes (I wouldn’t have come up with that sequence myself; I would have asked an LLM to infer it).
Step 2: Translate the Workflow to a System Prompt
The original skill defined 8 phases:
- Phase 0: Parse Input
- Phase 1: Fetch Issue Data
- Phase 2: Load Configuration
- Phase 3: Determine Relevant Repos
- Phase 4: Score Issue Readiness
- Phase 5: Codebase Exploration
- Phase 6: Produce Enriched Spec
- Phase 7: Present & Post
I converted these stages into sections of a system prompt, keeping the logic but adapting it for a conversational agent context. The main changes were (a) asking the user which repos to explore and (b) outputting the spec for manual copy and paste, since write access to Linear isn’t guaranteed if the MCP is read-only.
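As an illustration of the translation, here’s roughly how two of those phases might read as sections of the system prompt. This is a sketch in my own words (including change (a)), not the original skill’s text:

```markdown
## Phase 4: Score Issue Readiness
Score the issue against scoring-rubric.md on the four criteria
(0-3 each, max 12). Report each score, the total, and the
resulting classification before continuing.

## Phase 5: Codebase Exploration
Ask the user which repositories are relevant, then use the
available MCP tools to locate the files, functions, and docs
the issue touches. Note everything you discover.
```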
Step 3: Use File Context for Reference Materials
Here’s where LibreChat’s File Context feature comes in handy. Instead of embedding the full rubric and template in the system prompt, I mimicked the skill structure and uploaded the two files to the LibreChat File Context. The benefits:
- I get a shorter system prompt (~1,200 tokens instead of ~2,500)
- It’s easier to update. We can change the rubric or template without touching the prompt
- It keeps a cleaner, more readable separation (at least for humans) between instructions and reference materials

PHOTO: LibreChat agent configuration showing File Context with the two files attached
The system prompt references these files explicitly:
```markdown
## Reference Files
You have access to two reference files that you MUST use:
- **scoring-rubric.md** — Use this to score issues on the 4 criteria (0-3 each, max 12)
- **enriched-spec-template.md** — Use this exact template structure for your final output
```
Step 4: Connect the MCP Tools
The agent needs:
- Linear MCP — to fetch issues (our instance is read-only, which is fine)
- GitHub MCP / others — to explore codebases
- Documentation MCP / others — to explore relevant docs
Since your Linear MCP might not have write permissions, I added fallback logic in the “Present Results” phase:
```markdown
**Try** to use Linear MCP to add the comment
- **If MCP lacks write permissions**: Say "I couldn't post directly. Here's the spec ready to copy:" and provide it in a clean format
```

How It Went
The Good
The workflow translated cleanly. The phase-based structure of the original skill mapped almost one-to-one onto the system prompt. Claude Code skills seem to be designed to be naturally portable.
It worked. On the second attempt, the agent successfully:
- Fetched a Linear issue
- Scored it against the rubric
- Explored the relevant repo
- Generated an enriched specification with [ENRICHED] markers for discovered context, so that you can validate or remove them
The output was genuinely useful—it identified technical details that weren’t in the original issue and structured everything in a way that another agent (or a human) could act on immediately.
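To illustrate the markers, here’s a made-up fragment; the section name, file path, and details are hypothetical, not taken from my actual run:

```markdown
## Technical Context
<!-- hypothetical example, not real output -->
- The daily load is orchestrated from `pipelines/daily_load.py` [ENRICHED]
- Retries are handled by the scheduler, not by the job itself [ENRICHED]
```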
The Not-So-Good
I think the skill was originally designed with frontend developers in mind. I mainly work with data, and on the first attempt I used a fairly complex issue, so the agent got lost in the tables. There’s a lot of structured content, and the agent occasionally lost track of which table was which. Nothing that couldn’t be fixed in a heartbeat.
The scoring rubric and spec template both contain multiple markdown tables with similar structures, and on the first run the agent was confused about which table to use where. To be honest, I think this was more a mismatch between the issue and the tool than the skill misbehaving (a lack of skills, perhaps?).
The second run, with a clearer, more atomic issue, worked better. Complex issues with ambiguous scope still challenge it—but that’s kind of the point of the tool. If the issue scores low on “Actionable Scope,” the agent correctly identifies that as a gap.

What I Learned
1. Claude Code Skills Are VERY Portable, at least to LibreChat
The structured, phase-based approach translates well to system prompts. If you’re building skills in Claude Code, you’re probably already writing something that can move to other platforms.
2. File Context > Embedding Everything
Using attached files for reference materials is cleaner than cramming everything into the system prompt. It also lets you update the rubric or template without touching the main instructions.
3. Plan for Missing Capabilities
My Linear MCP is read-only, so the “post to Linear” feature doesn’t work automatically. But with a fallback (“here’s the spec, copy and paste it”), the agent is still useful. Design for degraded modes.
4. Complex Tables Need Clear Disambiguation
When your reference materials have multiple similar-looking tables, the model can get confused. Some strategies:
- Short term: explicit disambiguation instructions in the system prompt (see the sketch below)
- Long term: rework your tables and documentation to make them as agent-readable as possible
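For the short-term fix, a couple of explicit lines in the system prompt go a long way. Something along these lines (a sketch, not my exact wording):

```markdown
## Table Disambiguation
- The ONLY scoring table lives in scoring-rubric.md. Never score
  using tables from enriched-spec-template.md.
- Tables in enriched-spec-template.md define the OUTPUT structure
  only: copy their layout, not their content.
```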
5. The Pattern: Develop Once, Deploy Everywhere?
This exercise suggests a workflow that might be useful:
- Develop skills in Claude Code
- Port to LibreChat for broader team access (no CLI needed, just a chat interface)
- Iterate on both as needed
More generally, it suggests that porting skills (or similar artifacts) between platforms should be equally straightforward: the conversion effort is low because the main work is already done when you write the skill for the first time.
Final Thought
This was an interesting exercise. Not because the result is perfect (it isn’t), but because it demonstrates that the AI skill ecosystem is interoperable. A well-structured Claude Code skill can become a LibreChat agent in 20 minutes. That’s useful.

