A GGUF-optimized version of Phi-3-Mini-4K-Instruct, designed for efficient low-memory AI inference, supporting quantized and full-precision formats for balanced performance and quality.
Phi-3-Mini-4K-Instruct GGUF is a lightweight, high-performance AI model from Microsoft, packaged in the GGUF format for efficient execution. This version provides both quantized (Q4_K_M) and full-precision (fp16) files, enabling flexible deployment across memory-constrained and high-performance environments while retaining the 4K-token context length for structured reasoning, instruction following, and long-context processing.
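To illustrate the memory trade-off between the two formats, the sketch below estimates the resident weight size of a ~3.8B-parameter model (Phi-3-Mini's published size) at fp16 (16 bits per weight) versus Q4_K_M (roughly 4.85 bits per weight on average). The parameter count and the Q4_K_M bit rate are approximations drawn from outside this listing, not values it states, and the figures ignore KV-cache and runtime overhead.

```python
def weight_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Estimated weight memory in GiB: params * bits -> bytes -> GiB."""
    return n_params * bits_per_weight / 8 / 2**30

N_PARAMS = 3.8e9  # assumed Phi-3-Mini parameter count (not stated in this listing)

fp16_gib = weight_size_gib(N_PARAMS, 16)     # full precision
q4km_gib = weight_size_gib(N_PARAMS, 4.85)   # typical Q4_K_M average bit rate

print(f"fp16   ~ {fp16_gib:.1f} GiB")   # roughly 7 GiB of weights
print(f"Q4_K_M ~ {q4km_gib:.1f} GiB")   # roughly 2 GiB of weights
```

Under these assumptions the quantized file needs less than a third of the fp16 memory, which is why Q4_K_M is the usual choice for low-memory deployment while fp16 is kept for maximum quality.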
License: MIT
Publisher: Microsoft
Task: Text Generation
N.A.
Access: Open
Sector: Sector Agnostic
Date: 12/03/25 06:35:45
© 2026 - Copyright AIKosh. All rights reserved. This portal is developed by National e-Governance Division for AIKosh mission.