From 0b1a25b5976b6153d698911862cf24310d3b119c Mon Sep 17 00:00:00 2001
From: Darren Oberst
Date: Sat, 5 Jul 2025 13:40:19 +0000
Subject: [PATCH] Upload README.md

---
 README.md | 38 +++++++++++++++++++++++++++++++++++---
 1 file changed, 35 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 7b95401..ad7620c 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,35 @@
----
-license: apache-2.0
----
+---
+license: apache-2.0
+inference: false
+base_model: Qwen/Qwen3-4B-Instruct
+base_model_relation: quantized
+tags: [green, llmware-chat, p4, gguf, emerald]
+---
+
+# qwen3-4b-instruct-gguf
+
+**qwen3-4b-instruct-gguf** is a GGUF Q4_K_M int4 quantized version of [Qwen3-4B-Instruct](https://www.huggingface.co/Qwen/Qwen3-4B-Instruct), providing a very fast inference implementation optimized for AI PCs.
+
+This model is from the latest Qwen release series and includes 'thinking' capability, expressed through 'think' tokens.
+
+This model will run on an AI PC with GPU acceleration and 32 GB of memory.
+
+### Model Description
+
+- **Developed by:** Qwen
+- **Model type:** qwen3
+- **Parameters:** 4 billion
+- **Model Parent:** Qwen/Qwen3-4B-Instruct
+- **Language(s) (NLP):** English
+- **License:** Apache 2.0
+- **Uses:** Chat, general-purpose LLM
+- **Quantization:** int4
+
+
+## Model Card Contact
+
+[llmware on github](https://www.github.com/llmware-ai/llmware)
+
+[llmware on hf](https://www.huggingface.co/llmware)
+
+[llmware website](https://www.llmware.ai)