LlamaBackbone
class keras_nlp.models.LlamaBackbone(
    vocabulary_size,
    num_layers,
    num_query_heads,
    hidden_dim,
    intermediate_dim,
    num_key_value_heads,
    rope_max_wavelength=10000,
    rope_scaling_factor=1.0,
    layer_norm_epsilon=1e-06,
    dropout=0,
    dtype=None,
    tie_word_embeddings=False,
    **kwargs
)
The Llama Transformer core architecture with hyperparameters.
This network implements a Transformer-based decoder network, Llama, as described in "LLaMA: Open and Efficient Foundation Language Models". It includes the embedding lookups and transformer layers.
The default constructor gives a fully customizable, randomly initialized
Llama model with any number of layers, heads, and embedding
dimensions. To load preset architectures and weights, use the from_preset
constructor.
Arguments

vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer layers.
num_query_heads: int. The number of query attention heads for each transformer.
hidden_dim: int. The size of the transformer encoding and pooling layers.
intermediate_dim: int. The output dimension of the first Dense layer in a three-layer feedforward network for each transformer.
num_key_value_heads: int. The number of key and value attention heads for each transformer.
rope_max_wavelength: int. The maximum angular wavelength of the sine/cosine curves, for rotary embeddings. Defaults to 10000.
rope_scaling_factor: float. The scaling factor for calculation of rotary embedding. Defaults to 1.0.
layer_norm_epsilon: float. Epsilon for the layer normalization layers in the transformer decoder. Defaults to 1e-6.
dropout: float. Dropout probability for the Transformer decoder blocks. Defaults to 0.
dtype: string or keras.mixed_precision.DTypePolicy. The dtype to use for model computations and weights. Note that some computations, such as softmax and layer normalization, will always be done at float32 precision regardless of dtype.
tie_word_embeddings: bool. Whether to tie the input token embedding with the output projection weights. Defaults to False.

Examples
import numpy as np
import keras_nlp

input_data = {
    "token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}

# Pretrained Llama decoder.
model = keras_nlp.models.LlamaBackbone.from_preset("llama2_7b_en")
model(input_data)
# Randomly initialized Llama decoder with custom config.
model = keras_nlp.models.LlamaBackbone(
    vocabulary_size=10,
    hidden_dim=512,
    num_layers=2,
    num_query_heads=32,
    num_key_value_heads=8,
    intermediate_dim=1024,
    layer_norm_epsilon=1e-6,
    dtype="float32",
)
model(input_data)
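Calling the backbone returns the final hidden states of the decoder, with shape (batch_size, sequence_length, hidden_dim). A quick sanity check for the custom model above:

# Final hidden state for each input token.
output = model(input_data)
print(output.shape)  # (1, 12, 512)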
from_preset
method LlamaBackbone.from_preset(preset, load_weights=True, **kwargs)
Instantiate a keras_nlp.models.Backbone from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:

a built-in preset identifier like 'bert_base_en'
a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
a Hugging Face handle like 'hf://user/bert_base_en'
a path to a local preset directory like './bert_base_en'
This constructor can be called in one of two ways. Either from the base class like keras_nlp.models.Backbone.from_preset(), or from a model class like keras_nlp.models.GemmaBackbone.from_preset(). If calling from the base class, the subclass of the returned object will be inferred from the config in the preset directory.

For any Backbone subclass, you can run cls.presets.keys() to list all built-in presets available on the class.
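For example, the Llama presets listed in the table further below can be enumerated with:

keras_nlp.models.LlamaBackbone.presets.keys()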
Arguments

preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local preset directory.
load_weights: bool. If True, the weights will be loaded into the model architecture. If False, the weights will be randomly initialized.

Examples
# Load a Gemma backbone with pre-trained weights.
model = keras_nlp.models.Backbone.from_preset(
    "gemma_2b_en",
)

# Load a Bert backbone with a pre-trained config and random weights.
model = keras_nlp.models.Backbone.from_preset(
    "bert_base_en",
    load_weights=False,
)
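The other preset sources listed above work the same way. A brief sketch; the Kaggle handle and local path here are placeholders rather than published assets:

# Load from a Kaggle Models handle (placeholder handle).
model = keras_nlp.models.Backbone.from_preset(
    "kaggle://user/bert/keras/bert_base_en"
)

# Load from a local preset directory (placeholder path).
model = keras_nlp.models.Backbone.from_preset("./bert_base_en")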
Preset name | Parameters | Description |
---|---|---|
llama2_7b_en | 6.74B | 7 billion parameter, 32-layer, base LLaMA 2 model. |
llama2_7b_en_int8 | 6.74B | 7 billion parameter, 32-layer, base LLaMA 2 model with activation and weights quantized to int8. |
llama2_instruct_7b_en | 6.74B | 7 billion parameter, 32-layer, instruction tuned LLaMA 2 model. |
llama2_instruct_7b_en_int8 | 6.74B | 7 billion parameter, 32-layer, instruction tuned LLaMA 2 model with activation and weights quantized to int8. |
vicuna_1.5_7b_en | 6.74B | 7 billion parameter, 32-layer, instruction tuned Vicuna v1.5 model. |
llama3_8b_en | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model. |
llama3_8b_en_int8 | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model with activation and weights quantized to int8. |
llama3_instruct_8b_en | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model. |
llama3_instruct_8b_en_int8 | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model with activation and weights quantized to int8. |
token_embedding
property keras_nlp.models.LlamaBackbone.token_embedding

A keras.layers.Embedding instance for embedding token ids.

This layer embeds integer token ids to the hidden dim of the model.
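Because this is a standard embedding layer, its weight matrix can be inspected or reused directly. A minimal sketch, assuming the randomly initialized backbone from the example above (vocabulary_size=10, hidden_dim=512):

# The embedding weight matrix has shape (vocabulary_size, hidden_dim).
embedding = model.token_embedding
print(embedding.embeddings.shape)  # (10, 512)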
enable_lora
method LlamaBackbone.enable_lora(rank)

Enable LoRA on the backbone.

Calling this method will freeze all weights on the backbone, while enabling LoRA on the query & value EinsumDense layers of the attention layers.
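Only the low-rank adapter weights remain trainable afterwards, which makes fine-tuning much cheaper. A minimal sketch; the rank value is an arbitrary illustrative choice:

# Attach rank-8 LoRA adapters to the query & value projections;
# all other backbone weights are frozen.
model = keras_nlp.models.LlamaBackbone.from_preset("llama2_7b_en")
model.enable_lora(rank=8)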