SentencePieceTokenizer
```python
keras_hub.tokenizers.SentencePieceTokenizer(
    proto=None,
    sequence_length=None,
    dtype="int32",
    add_bos=False,
    add_eos=False,
    **kwargs
)
```
A SentencePiece tokenizer layer.

This layer provides an implementation of SentencePiece tokenization as described in the [SentencePiece paper](https://arxiv.org/abs/1808.06226) and the [SentencePiece package](https://pypi.org/project/sentencepiece/). The tokenization will run entirely within the TensorFlow graph, and can be saved inside a `keras.Model`.
By default, the layer will output a `tf.RaggedTensor` where the last dimension of the output is ragged after whitespace splitting and sub-word tokenizing. If `sequence_length` is set, the layer will output a dense `tf.Tensor` where all inputs have been padded or truncated to `sequence_length`. The output dtype can be controlled via the `dtype` argument, which should be either an integer or string type.
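For instance, a minimal sketch of the two output modes, assuming `"proto.spm"` is a hypothetical path to an already trained SentencePiece model (training is shown under Examples below):

```python
import keras_hub

# Default: one ragged row of token ids per input string.
ragged = keras_hub.tokenizers.SentencePieceTokenizer(proto="proto.spm")

# With `sequence_length`, every row is padded or truncated to length 10,
# producing a dense tensor.
dense = keras_hub.tokenizers.SentencePieceTokenizer(
    proto="proto.spm", sequence_length=10
)
```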
Arguments
- **proto**: Either a `string` path to a SentencePiece proto file, or a `bytes` object with a serialized SentencePiece proto. See the [SentencePiece repository](https://github.com/google/sentencepiece) for more details on the format.
- **sequence_length**: If set, the output will be converted to a dense tensor and padded/trimmed so all outputs are of `sequence_length`.
- **add_bos**: Add a beginning-of-sentence token to the result (see the sketch after this list).
- **add_eos**: Add an end-of-sentence token to the result. The token will be truncated if the output is longer than the specified `sequence_length`.
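A small sketch of the special-token flags, again assuming the hypothetical `"proto.spm"` model from above:

```python
import keras_hub

tokenizer = keras_hub.tokenizers.SentencePieceTokenizer(
    proto="proto.spm",
    add_bos=True,
    add_eos=True,
)
ids = tokenizer("the quick brown fox.")
# `ids` now starts with the beginning-of-sentence id and ends with the
# end-of-sentence id of the trained model.
```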
References

- [Kudo and Richardson, 2018](https://arxiv.org/abs/1808.06226)
Examples
From bytes.
```python
import io

import sentencepiece
import tensorflow as tf

import keras_hub


def train_sentence_piece_bytes(ds, size):
    bytes_io = io.BytesIO()
    sentencepiece.SentencePieceTrainer.train(
        sentence_iterator=ds.as_numpy_iterator(),
        model_writer=bytes_io,
        vocab_size=size,
    )
    return bytes_io.getvalue()


# Train a sentencepiece proto.
ds = tf.data.Dataset.from_tensor_slices(["the quick brown fox."])
proto = train_sentence_piece_bytes(ds, 20)

# Tokenize inputs.
tokenizer = keras_hub.tokenizers.SentencePieceTokenizer(proto=proto)
ds = ds.map(tokenizer)
```
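As a follow-up sketch (not part of the original example), the trained `proto` can also be used to contrast the ragged default with a fixed `sequence_length`:

```python
# Continuing from the example above.
print(tokenizer(["the fox.", "the quick brown fox."]))  # tf.RaggedTensor

padded = keras_hub.tokenizers.SentencePieceTokenizer(
    proto=proto, sequence_length=10
)
print(padded(["the fox.", "the quick brown fox."]))  # dense, shape (2, 10)
```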
From a file.
```python
import sentencepiece
import tensorflow as tf

import keras_hub


def train_sentence_piece_file(ds, path, size):
    with open(path, "wb") as model_file:
        sentencepiece.SentencePieceTrainer.train(
            sentence_iterator=ds.as_numpy_iterator(),
            model_writer=model_file,
            vocab_size=size,
        )


# Train a sentencepiece proto and write it to disk. Note the function
# returns nothing; the model is read back from the file path.
ds = tf.data.Dataset.from_tensor_slices(["the quick brown fox."])
train_sentence_piece_file(ds, "model.spm", 20)

# Tokenize inputs.
tokenizer = keras_hub.tokenizers.SentencePieceTokenizer(proto="model.spm")
ds = ds.map(tokenizer)
```
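A quick round-trip check (a sketch, continuing from the example above) confirms the file-backed model tokenizes and detokenizes as expected:

```python
# Continuing from the example above.
token_ids = tokenizer.tokenize(["the quick brown fox."])
strings = tokenizer.detokenize(token_ids)
print(strings)  # tf.Tensor of decoded strings
```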
tokenize method

```python
SentencePieceTokenizer.tokenize(inputs)
```

Transform input tensors of strings into output tokens.

Arguments

- **inputs**: Input tensor, or dict/list/tuple of input tensors.
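For example (a sketch, reusing the `tokenizer` built in the examples above):

```python
# `tokenizer` as built in the examples above.
token_ids = tokenizer.tokenize(["the quick brown fox."])
# Calling the layer directly, as in `ds.map(tokenizer)`, is equivalent.
```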
detokenize method

```python
SentencePieceTokenizer.detokenize(inputs)
```

Transform tokens back into strings.

Arguments

- **inputs**: Input tensor, or dict/list/tuple of input tensors.
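For example (a sketch, inverting the ids produced above):

```python
# Invert the ids from `tokenize` back into a string tensor.
strings = tokenizer.detokenize(token_ids)
```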
get_vocabulary method

```python
SentencePieceTokenizer.get_vocabulary()
```

Get the tokenizer vocabulary.
vocabulary_size method

```python
SentencePieceTokenizer.vocabulary_size()
```

Get the integer size of the tokenizer vocabulary.
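A sketch of both vocabulary accessors, using the 20-token toy model trained in the examples above:

```python
vocab = tokenizer.get_vocabulary()  # list of subword piece strings
size = tokenizer.vocabulary_size()  # 20 for the toy model above
assert size == len(vocab)
```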
token_to_id method

```python
SentencePieceTokenizer.token_to_id(token)
```

Convert a string token to an integer id.
id_to_token method

```python
SentencePieceTokenizer.id_to_token(id)
```

Convert an integer id to a string token.
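And a round-trip sketch through both lookups (the exact pieces depend on the trained model):

```python
# Pick an arbitrary id from the toy model and round-trip it.
token = tokenizer.id_to_token(5)
assert tokenizer.token_to_id(token) == 5
```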