Code examples

Our code examples are short (fewer than 300 lines of code), focused demonstrations of vertical deep learning workflows.

All of our examples are written as Jupyter notebooks and can be run in one click in Google Colab, a hosted notebook environment that requires no setup and runs in the cloud. Google Colab includes GPU and TPU runtimes.

★ = Good starter example
V3 = Keras 3 example

Computer Vision

Image classification

V3 Image classification from scratch
V3 Simple MNIST convnet
V3 Image classification via fine-tuning with EfficientNet
V3 Image classification with Vision Transformer
V3 Classification using Attention-based Deep Multiple Instance Learning
V3 Image classification with modern MLP models
V3 A mobile-friendly Transformer-based model for image classification
V3 Pneumonia Classification on TPU
V3 Compact Convolutional Transformers
V3 Image classification with ConvMixer
V3 Image classification with EANet (External Attention Transformer)
V3 Involutional neural networks
V3 Image classification with Perceiver
V3 Few-Shot learning with Reptile
V3 Semi-supervised image classification using contrastive pretraining with SimCLR
V3 Image classification with Swin Transformers
V2 Train a Vision Transformer on small datasets
V2 A Vision Transformer without Attention
V3 Image Classification using Global Context Vision Transformer
V3 Image Classification using BigTransfer (BiT)

Image segmentation

V3 Image segmentation with a U-Net-like architecture
V3 Multiclass semantic segmentation using DeepLabV3+
V2 Highly accurate boundaries segmentation using BASNet
V3 Image Segmentation using Composable Fully-Convolutional Networks

Object detection

V2 Object Detection with RetinaNet
V3 Keypoint Detection with Transfer Learning
V3 Object detection with Vision Transformers

3D

V3 3D image classification from CT scans
V3 Monocular depth estimation
V3 3D volumetric rendering with NeRF
V3 Point cloud segmentation with PointNet
V3 Point cloud classification

OCR

V3 OCR model for reading Captchas
V3 Handwriting recognition

Image enhancement

V3 Convolutional autoencoder for image denoising
V3 Low-light image enhancement using MIRNet
V3 Image Super-Resolution using an Efficient Sub-Pixel CNN
V3 Enhanced Deep Residual Networks for single-image super-resolution
V3 Zero-DCE for low-light image enhancement

Data augmentation

V3 CutMix data augmentation for image classification
V3 MixUp augmentation for image classification
V3 RandAugment for Image Classification for Improved Robustness

Image & Text

V3 Image captioning
V2 Natural language image search with a Dual Encoder

Vision models interpretability

V3 Visualizing what convnets learn
V3 Model interpretability with Integrated Gradients
V3 Investigating Vision Transformer representations
V3 Grad-CAM class activation visualization

Image similarity search

V2 Near-duplicate image search
V3 Semantic Image Clustering
V3 Image similarity estimation using a Siamese Network with a contrastive loss
V3 Image similarity estimation using a Siamese Network with a triplet loss
V3 Metric learning for image similarity search
V2 Metric learning for image similarity search using TensorFlow Similarity
V3 Self-supervised contrastive learning with NNCLR

Video

V3 Video Classification with a CNN-RNN Architecture
V3 Next-Frame Video Prediction with Convolutional LSTMs
V3 Video Classification with Transformers
V3 Video Vision Transformer

Performance recipes

V3 Gradient Centralization for Better Training Performance
V3 Learning to tokenize in Vision Transformers
V3 Knowledge Distillation
V3 FixRes: Fixing train-test resolution discrepancy
V3 Class Attention Image Transformers with LayerScale
V3 Augmenting convnets with aggregated attention
V3 Learning to Resize

Other

V2 Semi-supervision and domain adaptation with AdaMatch
V2 Barlow Twins for Contrastive SSL
V2 Consistency training with supervision
V2 Distilling Vision Transformers
V2 Focal Modulation: A replacement for Self-Attention
V2 Using the Forward-Forward Algorithm for Image Classification
V2 Masked image modeling with Autoencoders
V2 Segment Anything Model with 🤗Transformers
V2 Semantic segmentation with SegFormer and Hugging Face Transformers
V2 Self-supervised contrastive learning with SimSiam
V2 Supervised Contrastive Learning
V2 When Recurrence meets Transformers
V2 Efficient Object Detection with YOLOV8 and KerasCV

Natural Language Processing

Text classification

V3 Text classification from scratch
V3 Review Classification using Active Learning
V3 Text Classification using FNet
V2 Large-scale multi-label text classification
V3 Text classification with Transformer
V3 Text classification with Switch Transformer
V2 Text classification using Decision Forests and pretrained embeddings
V3 Using pre-trained word embeddings
V3 Bidirectional LSTM on IMDB
V3 Data Parallel Training with KerasHub and tf.distribute

Machine translation

V3 English-to-Spanish translation with KerasHub
V3 English-to-Spanish translation with a sequence-to-sequence Transformer
V3 Character-level recurrent sequence-to-sequence model

Entailment prediction

V2 Multimodal entailment

Named entity recognition

V3 Named Entity Recognition using Transformers

Sequence-to-sequence

V2 Text Extraction with BERT
V3 Sequence to sequence learning for performing number addition

Text similarity search

V3 Semantic Similarity with KerasHub
V3 Semantic Similarity with BERT
V3 Sentence embeddings using Siamese RoBERTa-networks

Language modeling

V3 End-to-end Masked Language Modeling with BERT
V3 Abstractive Text Summarization with BART
V2 Pretraining BERT with Hugging Face Transformers

Parameter efficient fine-tuning

V3 Parameter-efficient fine-tuning of GPT-2 with LoRA

Other

V2 Training a language model from scratch with 🤗 Transformers and TPUs
V2 MultipleChoice Task with Transfer Learning
V2 Question Answering with Hugging Face Transformers
V2 Abstractive Summarization with Hugging Face Transformers

Structured Data

Structured data classification

V3 Structured data classification with FeatureSpace
V3 FeatureSpace advanced use cases
V3 Imbalanced classification: credit card fraud detection
V3 Structured data classification from scratch
V3 Structured data learning with Wide, Deep, and Cross networks
V2 Classification with Gated Residual and Variable Selection Networks
V2 Classification with TensorFlow Decision Forests
V3 Classification with Neural Decision Forests
V3 Structured data learning with TabTransformer

Recommendation

V3 Collaborative Filtering for Movie Recommendations
V3 A Transformer-based recommendation system

Timeseries

Timeseries classification

V3 Timeseries classification from scratch
V3 Timeseries classification with a Transformer model
V3 Electroencephalogram Signal Classification for action identification
V3 Event classification for payment card fraud detection

Anomaly detection

V3 Timeseries anomaly detection using an Autoencoder

Timeseries forecasting

V3 Traffic forecasting using graph neural networks and LSTM
V3 Timeseries forecasting for weather prediction

Generative Deep Learning

Image generation

V3 Denoising Diffusion Implicit Models
V3 A walk through latent space with Stable Diffusion
V2 DreamBooth
V2 Denoising Diffusion Probabilistic Models
V2 Teach StableDiffusion new concepts via Textual Inversion
V2 Fine-tuning Stable Diffusion
V3 Variational AutoEncoder
V3 GAN overriding Model.train_step
V3 WGAN-GP overriding Model.train_step
V3 Conditional GAN
V3 CycleGAN
V2 Data-efficient GANs with Adaptive Discriminator Augmentation
V3 Deep Dream
V3 GauGAN for conditional image generation
V3 PixelCNN
V2 Face image generation with StyleGAN
V2 Vector-Quantized Variational Autoencoders

Style transfer

V3 Neural style transfer
V2 Neural Style Transfer with AdaIN

Text generation

V3 GPT2 Text Generation with KerasHub
V3 GPT text generation from scratch with KerasHub
V3 Text generation with a miniature GPT
V3 Character-level text generation with LSTM
V2 Text Generation using FNet

Graph generation

V2 Drug Molecule Generation with VAE
V2 WGAN-GP with R-GCN for the generation of small molecular graphs

Other

V2 A walk through latent space with Stable Diffusion 3
V2 Density estimation using Real NVP

Audio Data

Speech recognition

V3 Automatic Speech Recognition with Transformer

Other

V2 Automatic Speech Recognition using CTC
V2 MelGAN-based spectrogram inversion using feature matching
V2 Speaker Recognition
V2 Audio Classification with the STFTSpectrogram layer
V2 English speaker accent recognition using Transfer Learning
V2 Audio Classification with Hugging Face Transformers

Reinforcement Learning

Actor Critic Method
Proximal Policy Optimization
Deep Q-Learning for Atari Breakout
Deep Deterministic Policy Gradient (DDPG)

Graph Data

Graph attention network (GAT) for node classification
Node Classification with Graph Neural Networks
Message-passing neural network (MPNN) for molecular property prediction
Graph representation learning with node2vec

Quick Keras Recipes

Keras usage tips

V3 Parameter-efficient fine-tuning of Gemma with LoRA and QLoRA
V3 Float8 training and inference with a simple Transformer model
V3 Keras debugging tips
V3 Customizing the convolution operation of a Conv2D layer
V3 Trainer pattern
V3 Endpoint layer pattern
V3 Reproducibility in Keras Models
V3 Writing Keras Models With TensorFlow NumPy
V3 Simple custom layer example: Antirectifier
V3 Packaging Keras models for wide distribution using Functional Subclassing

Serving

V3 Serving TensorFlow models with TFServing

ML best practices

V3 Estimating required sample size for model training
V3 Memory-efficient embeddings for recommendation systems
V3 Creating TFRecords

Other

V2 Approximating non-Function Mappings with Mixture Density Networks
V2 Probabilistic Bayesian Neural Networks
V2 Knowledge distillation recipes
V2 Evaluating and exporting scikit-learn metrics in a Keras callback
V2 How to train a Keras model on TFRecord files

Adding a new code example

We welcome new code examples! Here are our rules:

  • They should be shorter than 300 lines of code (comments may be as long as you want).
  • They should demonstrate modern Keras best practices.
  • They should be substantially different in topic from all examples listed above.
  • They should be extensively documented & commented.

New examples are added via Pull Requests to the keras.io repository. Each example must be submitted as a single .py file that follows a specific format; these files are usually generated from Jupyter notebooks. See the tutobooks documentation for more details, and the sketch below for the general shape of such a file.
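
As a rough orientation, here is a minimal, hypothetical sketch of what such a .py file looks like, assuming the tutobook conventions (a header docstring, then Markdown cells written as triple-quoted strings between plain Python code cells). The exact header fields are specified in the tutobooks documentation; the title, author, and dates below are placeholders.

    # Hypothetical tutobook-style example file; see the tutobooks
    # documentation for the authoritative format.
    """
    Title: My new example
    Author: Your Name
    Date created: 2025/01/01
    Last modified: 2025/01/01
    Description: One-line summary of what the example demonstrates.
    Accelerator: GPU
    """

    """
    ## Introduction

    Markdown cells are written as triple-quoted strings between code cells.
    """

    import keras  # code cells are plain Python

    # ...rest of the example: data loading, model definition, training...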

If you would like to convert a Keras 2 example to Keras 3, please open a Pull Request to the keras.io repository.
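
One change such conversions commonly involve, shown here as a minimal sketch rather than a complete conversion recipe, is replacing backend-specific TensorFlow ops with the backend-agnostic keras.ops namespace, so the example runs on TensorFlow, JAX, or PyTorch:

    # Keras 2 style (TensorFlow-only), for comparison:
    #   loss = tf.reduce_mean(tf.square(y_true - y_pred))

    # Keras 3 style: keras.ops dispatches to the active backend.
    from keras import ops

    def mse(y_true, y_pred):
        # ops.square and ops.mean work on tensors from any supported backend
        return ops.mean(ops.square(y_true - y_pred))

The actual changes needed depend on the example being converted.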