This is a SmolLM3 model uploaded with the KerasHub library. It can be used with the JAX, TensorFlow, and PyTorch backends and targets the CausalLM task.
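Below is a minimal usage sketch. It assumes the standard KerasHub workflow for Hub-hosted checkpoints (`keras_hub.models.CausalLM.from_preset` with an `hf://` handle); the repository ID shown is a placeholder, not taken from this card.

```python
import os

# Pick the backend before importing Keras: "jax", "tensorflow", or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras_hub

# Load the full CausalLM task (tokenizer + preprocessor + backbone) from the Hub.
# Replace the placeholder handle with this repository's actual ID.
causal_lm = keras_hub.models.CausalLM.from_preset("hf://<user>/<this-smollm3-repo>")

# Generate text with the causal language modeling head.
print(causal_lm.generate("The capital of France is", max_length=32))
```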

Model config (a construction sketch follows the list):

  • name: smol_lm3_backbone
  • trainable: True
  • dtype: float32 (Keras DTypePolicy)
  • vocabulary_size: 128256
  • hidden_dim: 2048
  • intermediate_dim: 11008
  • num_layers: 36
  • num_attention_heads: 16
  • num_key_value_heads: 4
  • attention_bias: False
  • attention_dropout: 0.0
  • rope_layer_enabled_list: [True, True, True, False] repeated 9 times (36 layers; RoPE disabled in every 4th layer)
  • layer_types: 'full_attention' for all 36 layers
  • mlp_bias: False
  • layer_norm_epsilon: 1e-06
  • max_position_embeddings: 65536
  • rope_theta: 5000000.0
  • partial_rotary_factor: 1.0

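As a hypothetical sketch, the config above can be mapped back to a backbone constructor. This assumes KerasHub exposes a `SmolLM3Backbone` class whose `__init__` arguments mirror these config keys (Keras `get_config()` dumps normally map one-to-one to constructor arguments); treat the class name and signature as assumptions.

```python
import keras_hub

# Rebuild the backbone from the configuration listed above (hypothetical mapping).
backbone = keras_hub.models.SmolLM3Backbone(
    vocabulary_size=128256,
    hidden_dim=2048,
    intermediate_dim=11008,
    num_layers=36,
    num_attention_heads=16,
    num_key_value_heads=4,      # grouped-query attention: 4 KV heads for 16 query heads
    attention_bias=False,
    attention_dropout=0.0,
    # RoPE applied in three of every four layers, disabled in every 4th.
    rope_layer_enabled_list=[True, True, True, False] * 9,
    layer_types=["full_attention"] * 36,
    mlp_bias=False,
    layer_norm_epsilon=1e-6,
    max_position_embeddings=65536,
    rope_theta=5_000_000.0,
    partial_rotary_factor=1.0,
)
```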
This model card has been generated automatically and should be completed by the model author. See the Model Cards documentation for more information.
