V2.1

Build at the speed of thought with HUX Skills.

HUX Skills for Claude

HUX Skills are step-by-step guides that teach Claude how to do real design systems work the HUX way. Instead of just answering questions, Skills help Claude complete entire workflows — structure design tokens with the right naming conventions, build Webflow components that follow CEL methodology, generate documentation that matches your team's standards, and work through your actual patterns.

V2.0 adds the CEL Audit Skill. Use it to audit class naming and token architecture against the CEL specification, flag drift, and generate machine-readable component definitions in Markdown, JSON, or YAML.

Use them to turn a brief into a design system architecture, run an audit that ends up as a Notion doc, prep token structures for Figma-to-Webflow handoff, or capture design decisions into your knowledge base. A faster way to go from thought to output — and everything ends up coherent, consistent and ready for your team to use.

WHY SKILLS MATTER NOW

AI has made every team faster. It hasn't made any team smarter. Without a Skill, Claude guesses at your workflow. With a Skill, Claude follows your system — every time, without drift.

Skills turn Claude from a capable assistant into a team member who understands your standards, your structure and your way of working. That's the difference between speed and velocity.


INSTALL THE SKILLS IN CLAUDE

To get started: download the .zip below, then upload each Skill folder to Claude. Once loaded, Claude will use them automatically when you're working.


For more information, read Anthropic's launch blog and check out the Help Centre articles "What are Skills?" and "Using Skills in Claude".

Ready to use HUX Skills

The bundle includes three Skills. Download it above to get all of them, then upload each folder to Claude individually.

NEW
CEL AUDIT

Audits front-end codebases against the CEL (Context · Element · Layer) methodology. Scores class naming, token architecture and data attributes. Produces a compliance report with fixes, then generates component definitions in Markdown, JSON or YAML. Works on Webflow sites, static HTML and any front-end codebase.
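The exact output schema isn't shown here, but to give a sense of what a generated component definition could look like, here is a minimal JSON sketch. Every field name, class name and token in it is illustrative, not the actual CEL Audit output format:

```json
{
  "component": "card",
  "context": "blog",
  "classes": {
    "context": "blog_card",
    "elements": ["blog_card_title", "blog_card_body"],
    "layers": ["u-radius-m", "u-shadow-s"]
  },
  "tokens": {
    "background": "--color--surface-1",
    "spacing": "--space--m"
  },
  "compliance": { "score": 0.92, "issues": [] }
}
```

The same structure maps cleanly to Markdown or YAML, which is why the Skill can emit all three formats from one audit pass.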

UX SKILLS

Step-by-step guides for the full five-phase UX process — Empathise, Define, Ideate, Build and Test. Structures briefs, audits, token handoffs and design decisions into coherent, reusable Notion docs.

KNOWLEDGE CAPTURE

Turns discussions into structured knowledge in Notion. Captures insights, decisions and design rationale from chat, formats them clearly, and files them to the right wiki or database with proper linking. Stops good thinking from disappearing into Slack threads.

INTELLECTUAL PROPERTY, DATA AND AI

Skills are powerful, but they run inside an AI platform — and that means your data passes through infrastructure you don't own. Before using any Skill in a business context, understand what you're sharing and where it goes.

What happens to your data

When you use a Skill in Claude, your prompts, uploaded files and conversation content are processed by Anthropic's servers. On consumer plans (Free, Pro, Max), Anthropic may use your conversations to train future models — this is on by default since September 2025. If you opt in (or don't opt out), your data can be retained for up to five years. If you opt out, retention drops to 30 days.

Commercial plans (Team, Enterprise, API, Bedrock, Vertex AI) are excluded from model training entirely. Your data is not used to improve Claude on these plans.

What this means for your business

If you're using a consumer Claude account for client work, design system architecture, token naming, internal documentation or anything commercially sensitive — that content could end up in Anthropic's training pipeline unless you've explicitly turned it off.

This isn't theoretical. It applies to every prompt, every file upload, every Skill interaction.

Protect yourself: minimum steps

Check your account type. Consumer plans have different data rules to commercial plans. If you're doing client work, a Team or Enterprise plan is the safer choice.

Review your privacy settings. In Claude, go to Settings → Privacy → Privacy Settings and confirm whether "Help improve Claude" is on or off. Turn it off if you don't want your conversations used for training.

Don't share what you can't afford to lose. Avoid pasting API keys, passwords, client credentials, proprietary algorithms or personally identifiable information into any AI tool — including Claude.

Audit your Skills. Third-party Skills may include dependencies, network calls or instructions you haven't reviewed. Read the SKILL.md before installing anything from an unknown source.

Brief your team. If colleagues use personal Claude accounts for work, they may be unknowingly sharing company data with training pipelines. Set a clear policy: no client data in consumer accounts.
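As a reference point for the audit step above: a Skill folder is anchored by a SKILL.md file that opens with YAML frontmatter (name and description) followed by the instructions Claude will execute. That instruction body is what you're reviewing before you install. The contents below are invented for illustration only:

```markdown
---
name: example-skill
description: Illustrative Skill used to show the SKILL.md layout.
---

# Example Skill

When the user asks for an audit, follow these steps...
(Check this section for unexpected network calls, bundled
scripts, or instructions to share data outside your workspace.)
```

If a third-party SKILL.md tells Claude to fetch remote resources or run bundled code, treat that as part of your supply chain and review it accordingly.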

Broader risks to be aware of

Beyond Anthropic's specific policies, using AI tools in professional workflows carries inherent risks worth considering. Prompt injection is a real and evolving threat — malicious content in documents or web pages can manipulate AI behaviour in ways that are difficult to predict. Supply chain exposure through third-party Skills and MCP connectors can introduce unexpected data flows. And the regulatory landscape around AI data processing continues to shift, particularly under frameworks like the EU AI Act and GDPR.

None of this means you shouldn't use Skills. It means you should use them with your eyes open.

Appendix

  1. User guide: Teach Claude your way of working
  2. The complete guide to building skills for Claude (PDF)
  3. Finsweet client-first quick guide

All flow.
No friction.

If you want a team that moves with ease and delivers with intention, HUX builds the system that makes it possible.

I'm here when you want to start.