Sarvam-1 is a 2-billion-parameter language model built for Indian languages, optimized for token efficiency and trained on high-quality data. Using a custom tokenizer and a 4-trillion-token corpus drawn from diverse sources, it outperforms larger models on several Indic-language benchmarks while remaining computationally efficient, making it well suited to applications such as translation and deployment on edge devices.
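The token-efficiency claim above is typically quantified as tokenizer *fertility* — the average number of tokens produced per word, where lower means a more efficient tokenizer for a given script. The helper below is a minimal sketch of that metric using stand-in tokenizers; it is not Sarvam-1's actual tokenizer or evaluation code.

```python
def fertility(tokenize, texts):
    """Average tokens emitted per whitespace-separated word.

    Lower is better: an efficient tokenizer needs fewer tokens
    to represent each word of a given language or script.
    """
    total_tokens = sum(len(tokenize(t)) for t in texts)
    total_words = sum(len(t.split()) for t in texts)
    return total_tokens / total_words


# Illustrative stand-in tokenizers (NOT Sarvam-1's real tokenizer):
word_tok = str.split                                  # one token per word
char_tok = lambda s: [c for c in s if not c.isspace()]  # one token per character

sample = ["नमस्ते दुनिया", "भारत एक विविध देश है"]
print(fertility(word_tok, sample))  # 1.0 by construction
print(fertility(char_tok, sample))  # much higher: character-level is inefficient
```

A subword tokenizer trained on Indic text would land between these two extremes; comparing fertility across tokenizers on the same corpus is a common way to compare their efficiency.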
Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
sarvamai
Multilingual Language Model
Transformers
Open
Science, Technology and Research
24/02/25 07:45:49
© 2026 - Copyright AIKosh. All rights reserved. This portal is developed by National e-Governance Division for AIKosh mission.