Government Of India

gemma4-26B-A4B

This model is a fine-tuned version of the Gemma 4 26B architecture. The A4B suffix refers to its Mixture-of-Experts design, which activates only about 4B of the model's ~26B total parameters per token, striking a high-efficiency balance between reasoning capability and conversational fluidity. It sits in the "sweet spot" for complex tasks that require more nuance than 7B models but less compute than ultra-large models.

About the Model

Gemma 4 26B A4B is a state-of-the-art Mixture-of-Experts (MoE) open model developed by Google DeepMind. While it contains 25.2B total parameters, it only activates approximately 3.8B parameters per token during inference. This allows the model to deliver the reasoning and quality of a ~31B dense model while maintaining the speed and low compute requirements of a much smaller system.
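The total-versus-active distinction above can be sketched with simple arithmetic: an MoE model stores every expert but routes each token through only a few of them. The layer sizes below are illustrative assumptions for demonstration, not the published gemma4-26B-A4B configuration.

```python
# Sketch of how Mixture-of-Experts models decouple total from active
# parameters. All sizes here are made-up illustrative numbers, not the
# real gemma4-26B-A4B configuration.

def moe_params(shared: int, expert: int, num_experts: int, top_k: int):
    """Return (total, active) parameter counts for one MoE block.

    shared      -- parameters every token passes through (attention, router)
    expert      -- parameters in a single expert FFN
    num_experts -- experts stored in the checkpoint
    top_k       -- experts the router activates per token
    """
    total = shared + num_experts * expert   # everything on disk / in memory
    active = shared + top_k * expert        # compute actually run per token
    return total, active

# Illustrative block: 50M shared params, 100M per expert,
# 64 experts stored, 2 routed per token.
total, active = moe_params(50_000_000, 100_000_000, 64, 2)
print(f"total={total:,} active={active:,}")
```

This is why checkpoint size (here, 48.10 GB for all experts) tracks total parameters, while inference latency tracks the much smaller active count.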


Metadata

  • License: Apache 2.0
  • Source: Google
  • Task: Image Feature Extraction
  • Library: Transformers
  • Access: Open
  • Domain: Science, Technology and Research
  • Uploaded: 20/04/26 07:24:51
  • Uploaded by: Priya Gupta
  • File Size: 48.10 GB

.gitattributes (1.53 KB)



Activity Overview

  • Downloads: 0
  • File Size: 48.10 GB
  • Views: 8

Tags

  • advanced-reasoning
  • Image
  • Intelligent Document Processing

License Control

Apache 2.0

Version Control

Version 1 (48.10 GB)
  • admin · 11 day(s) ago
    • .gitattributes
    • config.json
    • generation_config.json
    • model-00001-of-00002.safetensors
    • model-00002-of-00002.safetensors
    • model.safetensors.index.json
    • processor_config.json
    • README.md
    • tokenizer_config.json

More Models from DAKSH SOLUTIONS AND SERVICES

gemma4-26B-A4B
Tags: advanced-reasoning, Image, Intelligent Document Processing
  • Upvotes: 0
  • Downloads: 0
  • File Size: 48.10 GB
  • Views: 8
Updated 1 day(s) ago
