The first pre-trained Hindi model by any academic research lab in India 🇮🇳!
Project Unity is an initiative to address India's linguistic diversity and richness by creating a comprehensive resource covering the country's major languages. We strive for state-of-the-art performance in understanding and generating text in Indian languages. To that end, we train models on monolingual data in India's regional languages. Our first release was the Ganga-1b model, trained on a large dataset of public-domain, web-crawled Hindi text, including news articles, web documents, books, government publications, educational materials, and social media conversations (filtered for quality). The dataset was further curated by native speakers to ensure high quality. Significantly, the Ganga-2-1b model outperforms existing open-source models that support Indian languages, even those with up to 7 billion parameters.
Developed by: Lingo Research Group at IIT Gandhinagar
Model type: Autoregressive Language Model
Language(s): Bilingual (Primary: Hindi [hi], Secondary: English [en])
Ganga-2-1b is an instruction-tuned model trained on a monolingual Hindi language dataset as part of Project Unity. We propose the name Ganga to honor the longest river flowing through the Hindi-speaking region of India 🇮🇳.
Model Architecture and Objective:
Ganga-2-1b is a decoder-only transformer model featuring the following specifications:
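As a decoder-only autoregressive model, Ganga-2-1b generates text one token at a time, each step conditioning on all previously generated tokens. A minimal greedy-decoding sketch of that loop, using a toy vocabulary and a `toy_logits` stand-in for the real transformer forward pass (both are illustrative assumptions, not the actual model):

```python
# Greedy autoregressive decoding over a toy vocabulary.
# `toy_logits` stands in for a decoder-only transformer forward pass.
TOY_VOCAB = ["<eos>", "गंगा", "नदी", "है"]  # illustrative Hindi tokens

def toy_logits(context):
    """Return fake next-token scores: walk through the vocab, then stop."""
    next_id = len(context) + 1
    if next_id >= len(TOY_VOCAB):
        next_id = 0  # emit <eos> once the toy sentence is complete
    return [1.0 if i == next_id else 0.0 for i in range(len(TOY_VOCAB))]

def greedy_decode(max_steps=10):
    context = []  # token ids generated so far
    for _ in range(max_steps):
        scores = toy_logits(context)
        next_id = max(range(len(scores)), key=scores.__getitem__)
        if next_id == 0:  # <eos> ends generation
            break
        context.append(next_id)
    return [TOY_VOCAB[i] for i in context]

print(greedy_decode())  # → ['गंगा', 'नदी', 'है']
```

In practice the same loop runs with the model's real logits and a sampling strategy (top-k, nucleus, etc.) instead of the argmax used here.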
Results:
Tokenizer Results (fertility, i.e., average tokens per word; lower is better):
| Model | Fertility |
|---|---|
| Ganga-2-1b | 1.12 |
| Pragna-1b | 1.58 |
| Bloom-1b1 | 1.27 |
| Bloom-1b7 | 1.27 |
| Gemma-2b | 1.89 |
| Bloom-3b | 1.27 |
| Airavata-7b | 1.69 |
| Sarvam-2b | 1.38 |
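Fertility here is the average number of subword tokens a tokenizer emits per whitespace-separated word; values near 1 mean Hindi words are split less aggressively. A minimal sketch of how the metric is computed (the `tokenize` callable is a placeholder for any real tokenizer; the halving tokenizer below is a toy example):

```python
def fertility(tokenize, texts):
    """Average subword tokens per whitespace-separated word across `texts`."""
    total_tokens = sum(len(tokenize(t)) for t in texts)
    total_words = sum(len(t.split()) for t in texts)
    return total_tokens / total_words

# Toy tokenizer that splits every word into two halves -> fertility of 2.0.
def halving_tokenizer(text):
    pieces = []
    for word in text.split():
        mid = max(1, len(word) // 2)
        pieces.extend([word[:mid], word[mid:]])
    return pieces

print(fertility(halving_tokenizer, ["गंगा नदी है"]))  # 2.0
```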
| Model | Perplexity (Sangraha dataset) |
|---|---|
| Ganga-2-1b | 8.09 |
| Ganga-1b | 15.82 |
| Pragna-1b | 9.37 |
| Bloom-1b1 | 17.49 |
| Bloom-1b7 | 14.28 |
| Gemma-2b | 31.01 |
| Bloom-3b | 12.82 |
| OpenHathi-7B | 25.73 |
| Airavata-7b | 38.24 |
| Sarvam-2b | 10.31 |
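Perplexity is the exponentiated mean negative log-likelihood the model assigns to the evaluation tokens; lower means the model finds the text less surprising. A minimal sketch under that standard definition (the token probabilities below are made-up numbers, not model outputs):

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability over a token sequence."""
    mean_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(mean_nll)

# A model assigning probability 0.5 to every token has perplexity 2.
print(perplexity([0.5, 0.5, 0.5, 0.5]))  # ≈ 2.0
```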
Recommendations ‼️
This model is a research preview undergoing iterative updates and therefore provides only limited safety measures. It may also generate offensive content. Using the model for any illegal, harmful, violent, racist, or sexual purposes is strictly prohibited.
License: Apache 2.0
Contributors: Aamod Thakur, Mayank Singh
Domain: Large Language Models
Framework: Transformers
Access: Open
Sector: Science, Technology and Research
Last updated: 20/08/25 05:43:32
Model size: 1.88 GB