
Param-1-5B

Param-1-5B is a bilingual (English–Hindi) large language model in the Param-1 family. With 5 billion parameters, it extends the capabilities of Param-1-2.9B with enhanced mathematical reasoning and code understanding and generation. The model is pretrained from scratch and designed to serve as a strong foundation for downstream tasks such as mathematical problem solving and code-related understanding and generation.

About Model

  • 5B parameter dense Transformer model
  • Bilingual: English and Hindi
  • Enhanced domains: Math and Code (compared to Param-1-2.9B)
  • Updated dataset mixture and percentage ratios
  • Designed as a pretrained (PT) base model
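Since the card positions Param-1-5B as a pretrained base model and the file list includes custom config/modeling files, loading it will likely follow the standard Hugging Face `transformers` pattern with remote code enabled. The sketch below is an assumption, not official usage: the repository id `bharatgenai/Param-1-5B` and the need for `trust_remote_code=True` are both inferred from this page, so verify them before use.

```python
# Hedged sketch: loading Param-1-5B with Hugging Face transformers.
# The repository id is an assumption; the custom files listed on this page
# (config_parambharatgen.py, modeling_parambharatgen.py) suggest the model
# ships its own code, hence trust_remote_code=True.
MODEL_ID = "bharatgenai/Param-1-5B"  # assumed repository id; verify before use


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for an English or Hindi prompt."""
    # Imports kept inside the function so the sketch can be inspected
    # without the (large) model dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("The capital of India is"))
```

As a base (non-instruct) model, it is best prompted with plain text-completion prefixes rather than chat-style instructions.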


Metadata

  • Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
  • bharatgenai
  • Transformers
  • PyTorch
  • Restricted
  • BharatGen
  • Other
  • 07/05/26 09:52:31
  • 10.42 GB
  • config_parambharatgen.py (1.94 KB)



Activity Overview

  • Downloads: 0
  • File Size: 10.42 GB
  • Views: 8

Tags

  • pretrained

License Control

Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)

Version Control

Version 1 (10.42 GB)
  • admin · 5 day(s) ago
    • config_parambharatgen.py
    • config.json
    • LICENSE
    • modeling_parambharatgen.py
    • pytorch_model.bin
    • README.md
    • special_tokens_map.json
    • tokenizer_config.json
    • tokenizer.json

More Models from BharatGen

Param-1-5B
Param-1-5B is a bilingual (English–Hindi) large language model in the Param-1 family, extending Param-1-2.9B with enhanced mathematical reasoning and code understanding and generation.
pretrained
  • Upvoters: 0
  • Downloads: 0
  • File Size: 10.42 GB
  • Views: 9
Updated 4 day(s) ago

BHARATGEN

Param-1-Instruct
BharatGen introduces an early supervised fine-tuned (SFT) checkpoint of Param-1, a bilingual language model trained from scratch in English and Hindi. With 2.9 billion parameters, this checkpoint builds on the pretraining phase and serves as a foundation for further downstream tasks, safety testing, and customization.
QnA
Instruction-Tuning
Model Fine-Tuning
  • Upvoters: 0
  • Downloads: 0
  • File Size: 5.36 GB
  • Views: 8
Updated 4 day(s) ago

BHARATGEN

BharatGen - Param 1 Indic-Scale Bilingual Foundation Model
Param-1 is a 2.9 billion parameter language model pretrained on English and Hindi, designed for text completion.
Large Language Model
  • Upvoters: 4
  • Downloads: 678
  • File Size: 13.79 GB
  • Views: 19,436
Updated 5 day(s) ago

BHARATGEN

Param2-17B-Thinking
BharatGen presents Param-2-17B-MoE-A2.4B, a large-scale Mixture-of-Experts (MoE) language model designed to deliver high model capacity while retaining the inference efficiency of a much smaller dense model. It uses a Hybrid MoE architecture with 17B total parameters, while activating only 2.4B parameters per token.
Mixture of Experts
Multilingual Text
pretrained
  • Upvoters: 1
  • Downloads: 41
  • File Size: 57.29 GB
  • Views: 1,025
Updated 1 month(s) ago
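The sparsity described in the MoE card above is easy to quantify: with 17B total parameters but only 2.4B activated per token, each forward pass touches a small fraction of the network, which is where the "capacity of a large model, cost of a small one" trade-off comes from. A quick back-of-envelope check:

```python
# Back-of-envelope arithmetic for the Param-2-17B MoE card: what fraction
# of the 17B total parameters is active for any single token?
total_params = 17e9    # total parameters across all experts
active_params = 2.4e9  # parameters activated per token

active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # roughly 14% of the network
```

Per-token compute therefore scales with the 2.4B active parameters, roughly comparable to a 2.4B dense model, while total memory for the weights still scales with the full 17B.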

BHARATGEN

BharatGen Multilingual TTS - Sooktam2
Sooktam-2 is a multilingual Indic Text-to-Speech model by BharatGen supporting 12 languages including Hindi, Marathi, Tamil, Telugu, Bengali, Urdu, Punjabi and Indian English. It enables high-quality speech synthesis with reference-guided voice conditioning, preserving speaker voice, accent and prosody for natural and expressive generation.
multilingual-TTS
Text-to-Speech
Multilingual Speech
Audio Synthesis
sooktam2
  • Upvoters: 0
  • Downloads: 9
  • File Size: 1.25 GB
  • Views: 749
Updated 2 month(s) ago

BHARATGEN

BharatGen - Param-1-7B-MoE Advancing Multilingual GenAI for India
Param-1-7B-MoE is a multilingual large language model developed under the Param-1 family as part of BharatGen – A Suite of Generative AI Technologies for India. With 7 billion parameters and a Mixture of Experts (MoE) architecture, the model is designed to better understand and generate text across English, Hindi, and 14 additional Indian languages. The model is pretrained from scratch with a strong focus on linguistic diversity, cultural context, and large-scale multilingual representation.
safetensors
mixtral
region:us
  • Upvoters: 1
  • Downloads: 79
  • File Size: 0
  • Views: 1,174
Updated 4 month(s) ago

BHARATGEN

BharatGen-AgriParam
Large language model fine-tuned from Param-1-2.9B-Instruct on a high-quality, India-centric agriculture dataset.
Multiturn
QnA
  • Upvoters: 0
  • Downloads: 1
  • File Size: 0
  • Views: 38
Updated 5 month(s) ago

BHARATGEN

BharatGen-FinanceParam
Large language model fine-tuned from Param-1-2.9B-Instruct on a high-quality finance dataset.
Multiturn
QnA
  • Upvoters: 0
  • Downloads: 7
  • File Size: 0
  • Views: 58
Updated 5 month(s) ago

BHARATGEN

BharatGen-LegalParam
Large language model fine-tuned from Param-1-2.9B-Instruct on an exhaustive India-centric legal dataset.
Multiturn
QnA
Summarization
  • Upvoters: 0
  • Downloads: 0
  • File Size: 0
  • Views: 22
Updated 5 month(s) ago

BHARATGEN

BharatGen-AyurParam
Large language model fine-tuned from Param-1-2.9B-Instruct on a high-quality Ayurveda dataset.
Ayurvedic
  • Upvoters: 0
  • Downloads: 1
  • File Size: 0
  • Views: 22
Updated 5 month(s) ago

BHARATGEN