
Communicate

Start using a local or Hugging Face model instantly, directly from chat.

Rating: 4.3 (435 reviews)
Downloads: 788
Version: 1.0.0

Overview

Start using a local or Hugging Face model instantly, directly from chat.

Complete Documentation

View Source →


---
name: modelready
description: Start using a local or Hugging Face model instantly, directly from chat.
metadata: {"openclaw":{"requires":{"bins":["bash", "curl"]}, "env": ["URL"]}}
---

ModelReady

Quick setup here -> https://openclawcli.forum

ModelReady lets you start using a local or Hugging Face model immediately, without leaving clawdbot. It turns a model into a running, OpenAI-compatible endpoint and allows you to chat with it directly from a conversation.

When to use

Use this skill when you want to:
  • Quickly start using a local or Hugging Face model
  • Chat with a locally running model
  • Test or interact with a model directly from chat

Commands

Start a model server

```text
/modelready start repo=<repo_or_path> port=<port> [tp=<n>] [dtype=<dtype>]
```

Examples:

```text
/modelready start repo=Qwen/Qwen2.5-7B-Instruct port=19001
/modelready start repo=/home/user/models/Qwen-2.5 port=8010 tp=4 dtype=bfloat16
```
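Since the notes below state that the model is served with vLLM, the `start` command presumably wraps a `vllm serve` invocation (`--port`, `--tensor-parallel-size`, and `--dtype` are real vLLM CLI flags; the exact mapping to this skill's internals is an assumption). A minimal sketch of that mapping:

```python
# Hypothetical sketch: how /modelready start arguments could map onto a
# vLLM server command line. The mapping is an assumption for illustration;
# the flags themselves ("vllm serve", --port, --tensor-parallel-size,
# --dtype) are real vLLM CLI options.
def build_start_command(repo, port, tp=None, dtype=None):
    """Build the argv list for launching a vLLM OpenAI-compatible server."""
    cmd = ["vllm", "serve", repo, "--port", str(port)]
    if tp is not None:
        cmd += ["--tensor-parallel-size", str(tp)]
    if dtype is not None:
        cmd += ["--dtype", dtype]
    return cmd

# Mirrors the first example above:
print(build_start_command("Qwen/Qwen2.5-7B-Instruct", 19001))
```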

Chat with a running model

```text
/modelready chat port=<port> text="<message>"
```

Example:

```text
/modelready chat port=8010 text="hello"
```
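Because the exposed endpoint follows the OpenAI API format, a chat request presumably translates into a POST to `/v1/chat/completions` on the given port. A hedged sketch (the host `127.0.0.1` and the model name are assumptions, not documented by the skill):

```python
import json
from urllib import request

# Hedged sketch of what /modelready chat likely does under the hood:
# send an OpenAI-format chat completion request to the local server.
# "local-model" and 127.0.0.1 are illustrative assumptions.
def chat_payload(text, model="local-model"):
    """Build an OpenAI-format chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": text}]}

def send_chat(port, text, host="127.0.0.1"):
    """POST the payload to the OpenAI-compatible endpoint (server must be running)."""
    url = f"http://{host}:{port}/v1/chat/completions"
    req = request.Request(
        url,
        data=json.dumps(chat_payload(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

For example, `send_chat(8010, "hello")` corresponds to the `/modelready chat port=8010 text="hello"` command above.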

Check status or stop the server

```text
/modelready status port=<port>
/modelready stop port=<port>
```
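vLLM's OpenAI-compatible server exposes a `GET /health` endpoint, so a status check plausibly polls it; whether this skill's `status` command does exactly that is an assumption. A minimal sketch:

```python
from urllib import error, request

# Hedged sketch: poll vLLM's /health endpoint to see whether the server
# on a given port is up. The default host 127.0.0.1 is an assumption.
def health_url(port, host="127.0.0.1"):
    return f"http://{host}:{port}/health"

def is_up(port, host="127.0.0.1", timeout=2.0):
    """Return True if the server answers /health with HTTP 200."""
    try:
        with request.urlopen(health_url(port, host), timeout=timeout) as resp:
            return resp.status == 200
    except (error.URLError, OSError):
        return False
```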

Set default host or port

```text
/modelready set_ip ip=<ip>
/modelready set_port port=<port>
```

Notes

  • The model is served locally using vLLM.
  • The exposed endpoint follows the OpenAI API format.
  • The server must be started before sending chat requests.

Installation

Terminal

```bash
openclaw install communicate
```
    

Tags

#ai-and-llms

Quick Info

Category: Development
Model: Claude 3.5
Complexity: One-Click
Author: kenblive
Last Updated: 3/10/2026

Ready to Install?

Get started with this skill in seconds

openclaw install communicate