Government Of India

Param2-17B-Thinking

BharatGen presents Param-2-17B-MoE-A2.4B, a large-scale Mixture-of-Experts (MoE) language model designed to deliver high model capacity while retaining the inference efficiency of a much smaller dense model. It uses a Hybrid MoE architecture with 17B total parameters, while activating only 2.4B parameters per token.
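The efficiency claim is easy to sanity-check with the numbers above: activating 2.4B of 17B parameters means each token's forward pass touches only about a seventh of the network. A quick back-of-the-envelope calculation using the figures stated on this card:

```python
# Parameter figures stated on this model card.
total_params = 17e9    # total MoE parameters
active_params = 2.4e9  # parameters activated per token

# Fraction of the network that runs for any single token.
active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # ~14.1%
```

Per-token compute therefore tracks a ~2.4B dense model, while total model capacity stays at 17B.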

About the Model

  • 17B-parameter Mixture-of-Experts (MoE) language model
  • Multilingual: English, Hindi, and 21 additional Indian languages
  • Trained on ~22 trillion tokens across two pretraining phases
  • Uses 64 specialized experts, dynamically activated per token
  • Supports long-context understanding (up to 4096 tokens)
  • Efficient inference: only 2.4B active parameters per token
  • Advanced capabilities: thinking & reasoning, tool calling, mathematics, code generation
  • Designed for diverse downstream applications and further fine-tuning
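The card does not publish router details, so the following is only a minimal sketch of how per-token expert activation typically works in a top-k MoE layer: a gating network scores all 64 experts, only the top-k scores are kept, and just those experts run for that token. The expert count (64) comes from the list above; the hidden size, k, and gating mechanics here are illustrative assumptions, not BharatGen's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 64   # from the model card
TOP_K = 2          # assumed; the card does not state k
HIDDEN = 128       # toy hidden size for illustration

# Toy expert weights and router (gating) weights.
experts = rng.standard_normal((NUM_EXPERTS, HIDDEN, HIDDEN)) * 0.02
router = rng.standard_normal((HIDDEN, NUM_EXPERTS)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                              # (tokens, NUM_EXPERTS)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]   # indices of chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over the selected experts' scores only.
        scores = logits[t, topk[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        for w, e in zip(weights, topk[t]):
            out[t] += w * (x[t] @ experts[e])  # only k of 64 experts compute
    return out

tokens = rng.standard_normal((4, HIDDEN))
y = moe_layer(tokens)
print(y.shape)  # (4, 128)
```

Per token, only TOP_K of the 64 expert matrices are ever multiplied, which is what lets the total parameter count (all experts) greatly exceed the per-token active parameter count.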


Metadata

Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)

  • bharatgenai
  • Transformers
  • PyTorch
  • Open
  • BharatGen
  • Other
  • 13/03/26 11:03:42
  • 57.29 GB

Param2-17B-A2.4B-Thinking (2 directories)
  • __MACOSX/ (1 directory)
  • Param2-17B-A2.4B-Thinking/ (20 files)

Activity Overview

  • Upvoters 1
  • Downloads 29
  • File Size 25.33 GB
  • Views 837

Tags

  • Mixture of Experts
  • Multilingual Text
  • pretrained

License Control

Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)

Version Control

Version 1 (25.33 GB)
  • admin · 1 month ago
    • Param2-17B-A2.4B-Thinking/
      • __MACOSX/
      • Param2-17B-A2.4B-Thinking/

More Models from BharatGen

Param2-17B-Thinking
BharatGen presents Param-2-17B-MoE-A2.4B, a large-scale Mixture-of-Experts (MoE) language model designed to deliver high model capacity while retaining the inference efficiency of a much smaller dense model. It uses a Hybrid MoE architecture with 17B total parameters, while activating only 2.4B parameters per token.
pretrained
Mixture of Experts
Multilingual Text
  • Upvoters 1
  • Downloads 29
  • File Size 57.29 GB
  • Views 837
Updated 1 month ago

BHARATGEN

BharatGen Multilingual TTS - Sooktam2
Sooktam-2 is a multilingual Indic Text-to-Speech model by BharatGen supporting 12 languages including Hindi, Marathi, Tamil, Telugu, Bengali, Urdu, Punjabi and Indian English. It enables high-quality speech synthesis with reference-guided voice conditioning, preserving speaker voice, accent and prosody for natural and expressive generation.
Text-to-Speech
multilingual-TTS
sooktam2
Audio Synthesis
Multilingual Speech
  • Upvoters 0
  • Downloads 9
  • File Size 1.25 GB
  • Views 602
Updated 1 month ago


BharatGen Param-1-7B-MoE: Advancing Multilingual GenAI for India
Param-1-7B-MoE is a multilingual large language model developed under the Param-1 family as part of BharatGen – A Suite of Generative AI Technologies for India. With 7 billion parameters and a Mixture of Experts (MoE) architecture, the model is designed to better understand and generate text across English, Hindi, and 14 additional Indian languages. The model is pretrained from scratch with a strong focus on linguistic diversity, cultural context, and large-scale multilingual representation.
safetensors
mixtral
region:us
  • Upvoters 1
  • Downloads 79
  • File Size 0
  • Views 1,116
Updated 3 months ago


A2TTS-Malayalam Speaker Adaptive TTS (Text-to-Speech)-v0.5
Text-to-speech synthesis model tailored to match a given speaker's voice sample.
SpeechSynthesis
multilingual-TTS
TextToSpeech
  • Upvoters 0
  • Downloads 16
  • File Size 1.62 GB
  • Views 1,623
Updated 9 months ago


A2TTS-Gujarati Speaker Adaptive TTS (Text-to-Speech)-v0.5
Text-to-speech synthesis model tailored to match a given speaker's voice sample.
SpeechSynthesis
TextToSpeech
multilingual-TTS
  • Upvoters 1
  • Downloads 22
  • File Size 1.62 GB
  • Views 3,284
Updated 9 months ago


A2TTS-Telugu Speaker Adaptive TTS (Text-to-Speech)-v0.5
Text-to-speech synthesis model tailored to match a given speaker's voice sample.
SpeechSynthesis
multilingual-TTS
TextToSpeech
  • Upvoters 0
  • Downloads 21
  • File Size 1.62 GB
  • Views 1,878
Updated 9 months ago


A2TTS-Tamil Speaker Adaptive TTS (Text-to-Speech)-v0.5
Text-to-speech synthesis model tailored to match a given speaker's voice sample.
TextToSpeech
multilingual-TTS
SpeechSynthesis
  • Upvoters 0
  • Downloads 49
  • File Size 1.62 GB
  • Views 2,854
Updated 9 months ago


A2TTS-Punjabi Speaker Adaptive TTS (Text-to-Speech)-v0.5
Text-to-speech synthesis model tailored to match a given speaker's voice sample.
TextToSpeech
SpeechSynthesis
multilingual-TTS
  • Upvoters 0
  • Downloads 35
  • File Size 1.62 GB
  • Views 1,526
Updated 9 months ago


A2TTS-Marathi Speaker Adaptive TTS (Text-to-Speech)-v0.5
Text-to-speech synthesis model tailored to match a given speaker's voice sample.
TextToSpeech
SpeechSynthesis
multilingual-TTS
  • Upvoters 2
  • Downloads 22
  • File Size 1.62 GB
  • Views 2,981
Updated 9 months ago


A2TTS-Kannada Speaker Adaptive TTS (Text-to-Speech)-v0.5
Text-to-speech synthesis model tailored to match a given speaker's voice sample.
SpeechSynthesis
multilingual-TTS
TextToSpeech
  • Upvoters 0
  • Downloads 11
  • File Size 1.62 GB
  • Views 1,788
Updated 9 months ago
