Krutrim-1 is a 7.3B parameter multilingual foundation model trained on a 2 trillion token dataset, designed for Indian linguistic and demographic needs. It supports 11 Indic languages and matches or exceeds comparable state-of-the-art models in multilingual tasks.
Krutrim-1 is a transformer-based large language model (LLM) developed by OLA Krutrim Labs, optimized for multilingual Indic benchmarks. It was trained on the largest known Indic-language dataset, addressing the challenges posed by low-resource Indian dialects and linguistic diversity. The model performs strongly on Indic language benchmarks, matching or surpassing models such as Llama 2 in multilingual fluency and generation tasks. It supports a range of NLP applications, including text generation, creative writing, summarization, and translation. Architecturally, Krutrim-1 comprises 32 transformer layers, 48 attention heads, and a 70,400-token vocabulary. Released in January 2024, it is available in both base and instruction-tuned versions. Ethical safeguards have been implemented, but users are advised to maintain human oversight, as the model may reflect biases present in its training data.
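The architecture facts stated above can be collected into a minimal config sketch. The field names below follow common Hugging Face `transformers` naming conventions and are assumptions; only the values come from the description in this entry (hidden size and other unpublished details are deliberately omitted):

```python
# Minimal sketch of Krutrim-1's published architecture facts.
# Field names mirror Hugging Face transformers config conventions
# (an assumption); the values are those stated in the model description.
krutrim1_config = {
    "model_type": "transformer",            # stated: transformer-based LLM
    "num_parameters": 7_300_000_000,        # 7.3B parameters
    "num_hidden_layers": 32,                # 32 transformer layers
    "num_attention_heads": 48,              # 48 attention heads
    "vocab_size": 70_400,                   # vocabulary size of 70,400
    "training_tokens": 2_000_000_000_000,   # 2 trillion token dataset
}

for field, value in krutrim1_config.items():
    print(f"{field}: {value}")
```

Fields such as hidden dimension, context length, and positional-encoding scheme are not given in this entry, so they are left out rather than guessed.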
Krutrim Community License Agreement Version 1.0
Ola Krutrim
Large Language Models
N.A.
Open
Sector Agnostic