Sarvam-105B is a Mixture-of-Experts (MoE) language model with 10.3B active parameters, designed for strong performance across a wide range of tasks. It is optimized for complex reasoning, with particular strength in agentic tasks, mathematics, and coding.
Read more about the model at https://www.sarvam.ai/blogs/sovereign-models
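Since the listing names Transformers as the framework, the lines below give a minimal sketch of loading and querying the model with the Hugging Face Transformers library. The checkpoint identifier "sarvamai/sarvam-105b" is an assumption for illustration, not a confirmed repository name; substitute the identifier published by Sarvam AI.

# Minimal sketch: load the model and generate a completion with Transformers.
# "sarvamai/sarvam-105b" is a hypothetical checkpoint id used for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarvamai/sarvam-105b"  # assumption: replace with the published id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))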
License: Apache 2.0
Publisher: sarvamai
Model type: Mixture of Experts (MoE) Language Model
Framework: Transformers
Access: Open
Sector: Sector Agnostic