LlamaCausalLMPreprocessor class

keras_hub.models.LlamaCausalLMPreprocessor(
    tokenizer, sequence_length=1024, add_start_token=True, add_end_token=True, **kwargs
)
Llama Causal LM preprocessor.

This preprocessing layer is meant for use with keras_hub.models.LlamaCausalLM. By default, it takes in batches of strings and returns outputs in an (x, y, sample_weight) format, where the y label is the next token id in the x sequence.

For use with generation, the layer also exposes two methods, generate_preprocess() and generate_postprocess(). When this preprocessor is attached to a keras_hub.models.LlamaCausalLM instance, these methods will be called implicitly in generate(). They can also be called standalone (e.g. to precompute preprocessing inputs for generation in a separate process).
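As a minimal sketch of standalone use (the preset name comes from the table further down; the "token_ids" and "padding_mask" dict keys are an assumption about the preprocessed format):

# Sketch of standalone generation preprocessing.
preprocessor = keras_hub.models.LlamaCausalLMPreprocessor.from_preset(
    "llama2_7b_en"
)
# Tokenize and pad a prompt; the result is assumed to be a dict with
# "token_ids" and "padding_mask" entries.
prompt = preprocessor.generate_preprocess(["The quick brown fox"])
# Convert token ids (here just the packed prompt) back to strings.
text = preprocessor.generate_postprocess(prompt)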
Arguments

- tokenizer: A keras_hub.models.LlamaTokenizer instance.
- sequence_length: The length of the packed inputs. Defaults to 1024.
- add_start_token: If True, the preprocessor will prepend the tokenizer start token to each input sequence. Default is True.
- add_end_token: If True, the preprocessor will append the tokenizer end token to each input sequence. Default is True.
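Construction without a preset might look like the following sketch; the tokenizer preset name is illustrative, and any keras_hub.models.LlamaTokenizer instance can be passed:

# Build the preprocessor from an explicit tokenizer instead of a preset.
# "llama2_7b_en" is used here only as an illustrative tokenizer preset.
tokenizer = keras_hub.models.LlamaTokenizer.from_preset("llama2_7b_en")
preprocessor = keras_hub.models.LlamaCausalLMPreprocessor(
    tokenizer=tokenizer,
    sequence_length=512,
    add_start_token=True,
    add_end_token=True,
)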
Call arguments

- x: A string, tf.Tensor or list of python strings.
- y: Label data. Should always be None as the layer generates labels.
- sample_weight: Label weights. Should always be None as the layer generates label weights.
- sequence_length: Pass to override the configured sequence_length of the layer.

Examples
# Load the preprocessor from a preset.
preprocessor = keras_hub.models.LlamaCausalLMPreprocessor.from_preset(
"llama_base_en"
)
# Tokenize and pack a single sentence.
sentence = tf.constant("League of legends")
preprocessor(sentence)
# Same output.
preprocessor("League of legends")
# Tokenize a batch of sentences.
sentences = tf.constant(["Taco tuesday", "Fish taco please!"])
preprocessor(sentences)
# Same output.
preprocessor(["Taco tuesday", "Fish taco please!"])
# Map a dataset to preprocess a single sentence.
features = tf.constant(
    [
        "Avatar 2 is amazing!",
        "Well, I am not sure.",
    ]
)
labels = tf.constant([1, 0])
ds = tf.data.Dataset.from_tensor_slices((features, labels))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map a dataset to preprocess unlabeled sentences.
ds = tf.data.Dataset.from_tensor_slices(features)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
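The call-time sequence_length override described under "Call arguments" can be exercised like this (128 is an arbitrary example value):

# Override the packed sequence length for a single call.
preprocessor("League of legends", sequence_length=128)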
from_preset method

LlamaCausalLMPreprocessor.from_preset(
    preset, config_file="preprocessor.json", **kwargs
)
Instantiate a keras_hub.models.Preprocessor from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:

1. a built-in preset identifier like 'bert_base_en'
2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
3. a Hugging Face handle like 'hf://user/bert_base_en'
4. a path to a local preset directory like './bert_base_en'

For any Preprocessor subclass, you can run cls.presets.keys() to list all built-in presets available on the class.

As there are usually multiple preprocessing classes for a given model, this method should be called on a specific subclass like keras_hub.models.BertTextClassifierPreprocessor.from_preset().
Arguments

- preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local preset directory.
Examples
# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.GemmaCausalLMPreprocessor.from_preset(
"gemma_2b_en",
)
# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.BertTextClassifierPreprocessor.from_preset(
"bert_base_en",
)
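For this class in particular, a sketch using one of the Llama presets listed in the table below (assuming the preset weights can be downloaded):

# Load a preprocessor for Llama 2 instruction-tuned generation.
preprocessor = keras_hub.models.LlamaCausalLMPreprocessor.from_preset(
    "llama2_instruct_7b_en",
)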
Preset name | Parameters | Description |
---|---|---|
llama2_7b_en | 6.74B | 7 billion parameter, 32-layer, base LLaMA 2 model. |
llama2_7b_en_int8 | 6.74B | 7 billion parameter, 32-layer, base LLaMA 2 model with activation and weights quantized to int8. |
llama2_instruct_7b_en | 6.74B | 7 billion parameter, 32-layer, instruction tuned LLaMA 2 model. |
llama2_instruct_7b_en_int8 | 6.74B | 7 billion parameter, 32-layer, instruction tuned LLaMA 2 model with activation and weights quantized to int8. |
vicuna_1.5_7b_en | 6.74B | 7 billion parameter, 32-layer, instruction tuned Vicuna v1.5 model. |
llama3_8b_en | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model. |
llama3_8b_en_int8 | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model with activation and weights quantized to int8. |
llama3_instruct_8b_en | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model. |
llama3_instruct_8b_en_int8 | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model with activation and weights quantized to int8. |
tokenizer property

keras_hub.models.LlamaCausalLMPreprocessor.tokenizer

The tokenizer used to tokenize strings.
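A short usage sketch (the preset name is illustrative):

# The attached tokenizer can be called directly to get raw token ids.
preprocessor = keras_hub.models.LlamaCausalLMPreprocessor.from_preset(
    "llama2_7b_en"
)
token_ids = preprocessor.tokenizer("League of legends")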