
ModelReady

Start using a local or Hugging Face model instantly, directly from chat.

Rating: 4.7 (157 reviews)
Downloads: 1,848
Version: 1.0.0

Overview

Start using a local or Hugging Face model instantly, directly from chat.

Complete Documentation



---
name: modelready
description: Start using a local or Hugging Face model instantly, directly from chat.
metadata: {"openclaw":{"requires":{"bins":["bash", "curl"]}, "env": ["URL"]}}
---

ModelReady

ModelReady lets you start using a local or Hugging Face model immediately, without leaving clawdbot.

It turns a model into a running, OpenAI-compatible endpoint and allows you to chat with it directly from a conversation.

When to use

Use this skill when you want to:

  • Quickly start using a local or Hugging Face model
  • Chat with a locally running model
  • Test or interact with a model directly from chat

Commands

Start a model server

```text
/modelready start repo=<path-or-hf-repo> port=<port> [tp=<n>] [dtype=<dtype>]
```

Examples:

```text
/modelready start repo=Qwen/Qwen2.5-7B-Instruct port=19001
/modelready start repo=/home/user/models/Qwen-2.5 port=8010 tp=4 dtype=bfloat16
```

Chat with a running model

```text
/modelready chat port=<port> text="<message>"
```

Example:

```text
/modelready chat port=8010 text="hello"
```
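Because the endpoint follows the OpenAI API format, the same request can also be sent directly with curl. A minimal sketch, assuming a server started on port 8010 as in the example above; the "model" value is illustrative (vLLM typically registers the model under the repo name or path it was started with):

```bash
# Equivalent raw request against the OpenAI-compatible endpoint.
curl -s http://localhost:8010/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/home/user/models/Qwen-2.5",
        "messages": [{"role": "user", "content": "hello"}]
      }'
```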

Check status or stop the server

```text
/modelready status port=<port>
/modelready stop port=<port>
```
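To verify the server by hand, the OpenAI-compatible API also exposes a model listing. A sketch, assuming the server from the examples above; any successful response means the endpoint is up:

```bash
# List the models the server is exposing.
curl -s http://localhost:8010/v1/models
```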

Set default host or port

```text
/modelready set_ip   ip=<host>
/modelready set_port port=<port>
```
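Example (the host and port values here are illustrative, not defaults of the skill):

```text
/modelready set_ip   ip=127.0.0.1
/modelready set_port port=19001
```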

Notes

  • The model is served locally using vLLM (see the sketch after this list).
  • The exposed endpoint follows the OpenAI API format.
  • The server must be started before sending chat requests.
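Under the hood, the skill relies on vLLM's OpenAI-compatible server. A minimal sketch of the roughly equivalent manual command, assuming vLLM is installed; the flags mirror the tp/dtype options above, but the skill's exact invocation may differ:

```bash
# Serve a Hugging Face model on a local port with vLLM's
# OpenAI-compatible server.
vllm serve Qwen/Qwen2.5-7B-Instruct \
  --port 19001 \
  --tensor-parallel-size 4 \
  --dtype bfloat16
```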

Installation

```bash
openclaw install modelready
```

Tags

#ai-and-llms

Quick Info

Category: Development
Model: Claude 3.5
Complexity: One-Click
Author: carol-gutianle
Last Updated: 3/10/2026

openclaw install modelready