RomanSetu is built on a pretrained base LLM (such as Llama 2) that is supervised fine-tuned on instruction-following datasets written in romanized Indian languages. This fine-tuning enables the model to perform a wide range of tasks, such as question answering, summarization, and translation, using Roman-script input and output.
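As an illustration of how such a checkpoint could be used, below is a minimal inference sketch with the Hugging Face transformers library. The repository id, the romanized prompt, and the generation settings are assumptions for illustration only and are not taken from this listing.

```python
# Minimal sketch (not the authors' code) of running inference with a
# RomanSetu-style instruction-tuned model via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id: substitute the actual published checkpoint.
model_id = "path/to/romansetu-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# Example romanized Hindi instruction (hypothetical prompt):
# "Summarize this sentence in one line."
prompt = "Is vaakya ko ek pankti mein saaraansh karo: ..."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a Roman-script response from the instruction-tuned model.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```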
Llama 2 Community License Agreement
Jaavid Aktar Husain, Raj Dabre, Aswanth Kumar, Jay Gala, Thanmay Jayakumar, Ratish Puduppully, Anoop Kunchukuttan
Multilingual Model
N.A.
Open
Sector Agnostic
02/05/25 11:01:05