Government Of India

BharatGen-FinanceParam

A large language model fine-tuned from Param-1-2.9B-Instruct on a high-quality finance dataset.

About Model

BharatGen introduces FinanceParam, a domain-specialized large language model fine-tuned from Param-1-2.9B-Instruct on a high-quality finance dataset. FinanceParam is designed to deliver accurate, bilingual (English–Hindi) answers on Indian financial topics, including personal finance, taxation, banking, investments, and policy guidance.
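Since the model is distributed for the Transformers library, a minimal usage sketch might look like the following. Note that the repository identifier (`bharatgen/FinanceParam`) and the simple Question/Answer prompt format are assumptions for illustration, not taken from this page; check the model card for the published identifier and prompt template.

```python
# Hedged usage sketch for FinanceParam with the Hugging Face Transformers
# library. The model identifier and prompt format below are ASSUMPTIONS
# for illustration -- consult the model card for the real values.


def build_prompt(question: str) -> str:
    """Wrap a finance question in a simple Q/A instruction format (assumed)."""
    return f"### Question:\n{question}\n\n### Answer:\n"


def generate_answer(question: str, model_id: str = "bharatgen/FinanceParam") -> str:
    """Load the model and generate an answer (model_id is hypothetical)."""
    # Lazy import so the helper above works without the heavy dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers torch

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Questions can be posed in either English or Hindi, matching the bilingual training described above.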


Metadata

Attribution 4.0 International (CC BY 4.0)

BharatGen

Transformers


Open

BharatGen

Commerce, Finance, Banking and Insurance

09/12/25 06:43:45


Activity Overview

  • Downloads: 0
  • Redirect: 7
  • File Size: 0
  • Views: 58

Tags

  • Multiturn
  • QnA

License Control

Attribution 4.0 International (CC BY 4.0)

More Models from BharatGen

Param-1-5B
Param-1-5B is a bilingual (English–Hindi) large language model developed under the Param-1 family. With 5 billion parameters, it extends the capabilities of Param-1-2.9B with enhanced mathematical reasoning and code understanding and generation. The model is pretrained from scratch and designed to serve as a strong foundation for downstream tasks such as mathematical problem solving and code generation.
pretrained
  • Upvoters: 0
  • Downloads: 0
  • File Size: 10.42 GB
  • Views: 9
Updated 4 days ago


Param-1-Instruct
BharatGen introduces an early SFT (Supervised Fine-Tuning) checkpoint of Param-1, a bilingual language model trained from scratch in English and Hindi. With 2.9 billion parameters, this checkpoint builds on the pretraining phase and serves as a foundation for downstream tasks, safety testing, and customization.
QnA
Instruction-Tuning
Model Fine-Tuning
  • Upvoters: 0
  • Downloads: 0
  • File Size: 5.36 GB
  • Views: 8
Updated 4 days ago


BharatGen - Param 1 Indic-Scale Bilingual Foundation Model
Param1 is a 2.9 billion parameter language model pretrained on English and Hindi, designed for text completion.
Large Language Model
  • Upvoters: 4
  • Downloads: 678
  • File Size: 13.79 GB
  • Views: 19,436
Updated 5 days ago


Param2-17B-Thinking
BharatGen presents Param-2-17B-MoE-A2.4B, a large-scale Mixture-of-Experts (MoE) language model designed to deliver high model capacity while retaining the inference efficiency of a much smaller dense model. It uses a Hybrid MoE architecture with 17B total parameters, while activating only 2.4B parameters per token.
Mixture of Experts
Multilingual Text
pretrained
  • Upvoters: 1
  • Downloads: 41
  • File Size: 57.29 GB
  • Views: 1,025
Updated 1 month ago


BharatGen Multilingual TTS - Sooktam2
Sooktam-2 is a multilingual Indic Text-to-Speech model by BharatGen supporting 12 languages including Hindi, Marathi, Tamil, Telugu, Bengali, Urdu, Punjabi and Indian English. It enables high-quality speech synthesis with reference-guided voice conditioning, preserving speaker voice, accent and prosody for natural and expressive generation.
multilingual-TTS
Text-to-Speech
Multilingual Speech
Audio Synthesis
sooktam2
  • Upvoters: 0
  • Downloads: 9
  • File Size: 1.25 GB
  • Views: 749
Updated 2 months ago


BharatGen - Param-1-7B-MoE Advancing Multilingual GenAI for India
Param-1-7B-MoE is a multilingual large language model developed under the Param-1 family as part of BharatGen – A Suite of Generative AI Technologies for India. With 7 billion parameters and a Mixture of Experts (MoE) architecture, the model is designed to better understand and generate text across English, Hindi, and 14 additional Indian languages. The model is pretrained from scratch with a strong focus on linguistic diversity, cultural context, and large-scale multilingual representation.
safetensors
mixtral
region:us
  • Upvoters: 1
  • Downloads: 79
  • File Size: 0
  • Views: 1,174
Updated 4 months ago


BharatGen-AgriParam
Large language model fine-tuned from Param-1-2.9B-Instruct on a high-quality, India-centric agriculture dataset.
Multiturn
QnA
  • Upvoters: 0
  • Downloads: 1
  • File Size: 0
  • Views: 39
Updated 5 months ago


BharatGen-FinanceParam
Large language model fine-tuned from Param-1-2.9B-Instruct on a high-quality finance dataset.
Multiturn
QnA
  • Upvoters: 0
  • Downloads: 7
  • File Size: 0
  • Views: 59
Updated 5 months ago


BharatGen-LegalParam
Large language model fine-tuned from Param-1-2.9B-Instruct on an exhaustive India-centric legal dataset.
Multiturn
QnA
Summarization
  • Upvoters: 0
  • Downloads: 0
  • File Size: 0
  • Views: 22
Updated 5 months ago


BharatGen-AyurParam
Large language model fine-tuned from Param-1-2.9B-Instruct on a high-quality Ayurveda dataset.
Ayurvedic
  • Upvoters: 0
  • Downloads: 1
  • File Size: 0
  • Views: 23
Updated 5 months ago
