ResNetBackbone class
keras_cv.models.ResNetBackbone(
stackwise_filters,
stackwise_blocks,
stackwise_strides,
include_rescaling,
input_shape=(None, None, 3),
input_tensor=None,
block_type="block",
**kwargs
)
Instantiates the ResNet architecture.
Reference
- Deep Residual Learning for Image Recognition (https://arxiv.org/abs/1512.03385)
The difference in ResNetV1 and ResNetV2 rests in the structure of their individual building blocks. In ResNetV2, the batch normalization and ReLU activation precede the convolution layers, as opposed to ResNetV1 where the batch normalization and ReLU activation are applied after the convolution layers.
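To make the ordering difference concrete, the sketch below contrasts the two block styles using plain Keras layers. It is not the KerasCV implementation; the helper names v1_basic_block and v2_basic_block are hypothetical, and both assume the input already has the target number of channels so the identity shortcut matches.

from tensorflow.keras import layers

def v1_basic_block(x, filters):
    # ResNetV1 ordering: convolution -> batch normalization -> ReLU (post-activation)
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    return layers.ReLU()(layers.Add()([shortcut, y]))

def v2_basic_block(x, filters):
    # ResNetV2 ordering: batch normalization -> ReLU -> convolution (pre-activation)
    shortcut = x
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([shortcut, y])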
For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.
Arguments
- stackwise_filters: list of ints, number of filters for each stack in the model.
- stackwise_blocks: list of ints, number of blocks for each stack in the model.
- stackwise_strides: list of ints, stride for each stack in the model.
- include_rescaling: bool, whether to rescale the inputs. If set to True, inputs will be passed through a Rescaling(1/255.0) layer.
- input_shape: optional shape tuple, defaults to (None, None, 3).
- input_tensor: optional Keras tensor (output of layers.Input()) to use as image input for the model.
- block_type: string, one of "basic_block" or "block". The block type to stack; use "basic_block" for ResNet18 and ResNet34.

Examples
import tensorflow as tf
import keras_cv

input_data = tf.ones(shape=(8, 224, 224, 3))

# Pretrained backbone
model = keras_cv.models.ResNetBackbone.from_preset("resnet50_imagenet")
output = model(input_data)

# Randomly initialized backbone with a custom config
model = keras_cv.models.ResNetBackbone(
    stackwise_filters=[64, 128, 256, 512],
    stackwise_blocks=[2, 2, 2, 2],
    stackwise_strides=[1, 2, 2, 2],
    include_rescaling=False,
)
output = model(input_data)
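As a rough sketch of the transfer-learning workflow mentioned above (the input size, number of classes, and the choice to freeze the backbone are placeholders, not a prescribed recipe), a classification head can be stacked on top of a pretrained backbone:

import tensorflow as tf
import keras_cv
from tensorflow.keras import layers

backbone = keras_cv.models.ResNetBackbone.from_preset("resnet50_imagenet")
backbone.trainable = False  # freeze the pretrained weights for feature extraction

inputs = layers.Input(shape=(224, 224, 3))
x = backbone(inputs)  # final feature map produced by the backbone
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)  # 10 classes is a placeholder
classifier = tf.keras.Model(inputs, outputs)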
from_preset method
ResNetBackbone.from_preset()
Instantiate ResNetBackbone model from preset config and weights.
Arguments
- preset: string. Must be one of "resnet18", "resnet34", "resnet50", "resnet101", "resnet152", "resnet50_imagenet".
- load_weights: whether to load pre-trained weights into the model. Defaults to None, which follows whether the preset has pretrained weights available.

Examples
# Load architecture and weights from preset
model = keras_cv.models.ResNetBackbone.from_preset(
    "resnet50_imagenet",
)

# Load randomly initialized model from preset architecture, without weights
model = keras_cv.models.ResNetBackbone.from_preset(
    "resnet50_imagenet",
    load_weights=False,
)
| Preset name | Parameters | Description |
| --- | --- | --- |
| resnet18 | 11.19M | ResNet model with 18 layers where the batch normalization and ReLU activation are applied after the convolution layers (v1 style). |
| resnet34 | 21.30M | ResNet model with 34 layers where the batch normalization and ReLU activation are applied after the convolution layers (v1 style). |
| resnet50 | 23.56M | ResNet model with 50 layers where the batch normalization and ReLU activation are applied after the convolution layers (v1 style). |
| resnet101 | 42.61M | ResNet model with 101 layers where the batch normalization and ReLU activation are applied after the convolution layers (v1 style). |
| resnet152 | 58.30M | ResNet model with 152 layers where the batch normalization and ReLU activation are applied after the convolution layers (v1 style). |
| resnet50_imagenet | 23.56M | ResNet model with 50 layers where the batch normalization and ReLU activation are applied after the convolution layers (v1 style). Trained on Imagenet 2012 classification task. |
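One way to sanity-check a preset against the parameter counts in the table above is to instantiate it without pretrained weights and call the standard Keras count_params() method:

import keras_cv

model = keras_cv.models.ResNetBackbone.from_preset("resnet50", load_weights=False)
print(model.count_params())  # roughly 23.56 million, matching the resnet50 row above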
ResNet18Backbone class
keras_cv.models.ResNet18Backbone(
stackwise_filters,
stackwise_blocks,
stackwise_strides,
include_rescaling,
input_shape=(None, None, 3),
input_tensor=None,
block_type="block",
**kwargs
)
ResNetBackbone (V1) model with 18 layers.
Reference
- Deep Residual Learning for Image Recognition (https://arxiv.org/abs/1512.03385)
The difference in ResNetV1 and ResNetV2 rests in the structure of their individual building blocks. In ResNetV2, the batch normalization and ReLU activation precede the convolution layers, as opposed to ResNetV1 where the batch normalization and ReLU activation are applied after the convolution layers.
For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.
Arguments
- include_rescaling: bool, whether to rescale the inputs. If set to True, inputs will be passed through a Rescaling(1/255.0) layer.
- input_tensor: optional Keras tensor (output of layers.Input()) to use as image input for the model.

Example
import tensorflow as tf
import keras_cv

input_data = tf.ones(shape=(8, 224, 224, 3))

# Randomly initialized backbone
model = keras_cv.models.ResNet18Backbone()
output = model(input_data)
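A ResNet18-style configuration can also be expressed through the base class by passing block_type="basic_block" (the two-convolution block used by ResNet18 and ResNet34). This is a sketch that reuses the stackwise settings from the custom-config example earlier:

import keras_cv

model = keras_cv.models.ResNetBackbone(
    stackwise_filters=[64, 128, 256, 512],
    stackwise_blocks=[2, 2, 2, 2],  # two basic blocks per stack, as in ResNet18
    stackwise_strides=[1, 2, 2, 2],
    include_rescaling=False,
    block_type="basic_block",  # basic (two-conv) blocks, per the block_type argument
)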
ResNet34Backbone class
keras_cv.models.ResNet34Backbone(
stackwise_filters,
stackwise_blocks,
stackwise_strides,
include_rescaling,
input_shape=(None, None, 3),
input_tensor=None,
block_type="block",
**kwargs
)
ResNetBackbone (V1) model with 34 layers.
Reference
- Deep Residual Learning for Image Recognition (https://arxiv.org/abs/1512.03385)
The difference in ResNetV1 and ResNetV2 rests in the structure of their individual building blocks. In ResNetV2, the batch normalization and ReLU activation precede the convolution layers, as opposed to ResNetV1 where the batch normalization and ReLU activation are applied after the convolution layers.
For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.
Arguments
- include_rescaling: bool, whether to rescale the inputs. If set to True, inputs will be passed through a Rescaling(1/255.0) layer.
- input_tensor: optional Keras tensor (output of layers.Input()) to use as image input for the model.

Example
import tensorflow as tf
import keras_cv

input_data = tf.ones(shape=(8, 224, 224, 3))

# Randomly initialized backbone
model = keras_cv.models.ResNet34Backbone()
output = model(input_data)
ResNet50Backbone class
keras_cv.models.ResNet50Backbone(
stackwise_filters,
stackwise_blocks,
stackwise_strides,
include_rescaling,
input_shape=(None, None, 3),
input_tensor=None,
block_type="block",
**kwargs
)
ResNetBackbone (V1) model with 50 layers.
Reference
- Deep Residual Learning for Image Recognition (https://arxiv.org/abs/1512.03385)
The difference in ResNetV1 and ResNetV2 rests in the structure of their individual building blocks. In ResNetV2, the batch normalization and ReLU activation precede the convolution layers, as opposed to ResNetV1 where the batch normalization and ReLU activation are applied after the convolution layers.
For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.
Arguments
- include_rescaling: bool, whether to rescale the inputs. If set to True, inputs will be passed through a Rescaling(1/255.0) layer.
- input_tensor: optional Keras tensor (output of layers.Input()) to use as image input for the model.

Example
import tensorflow as tf
import keras_cv

input_data = tf.ones(shape=(8, 224, 224, 3))

# Randomly initialized backbone
model = keras_cv.models.ResNet50Backbone()
output = model(input_data)
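When a backbone is built with include_rescaling=False (the signature above lists the argument for this alias as well), pixel scaling is left to the caller; a minimal sketch of doing the 1/255 rescaling manually:

import tensorflow as tf
import keras_cv

backbone = keras_cv.models.ResNet50Backbone(include_rescaling=False)
images = tf.random.uniform(shape=(8, 224, 224, 3), maxval=255.0)  # raw 0-255 pixel values
features = backbone(images / 255.0)  # rescale manually, since include_rescaling=False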
ResNet101Backbone class
keras_cv.models.ResNet101Backbone(
stackwise_filters,
stackwise_blocks,
stackwise_strides,
include_rescaling,
input_shape=(None, None, 3),
input_tensor=None,
block_type="block",
**kwargs
)
ResNetBackbone (V1) model with 101 layers.
Reference
- Deep Residual Learning for Image Recognition (https://arxiv.org/abs/1512.03385)
The difference in ResNetV1 and ResNetV2 rests in the structure of their individual building blocks. In ResNetV2, the batch normalization and ReLU activation precede the convolution layers, as opposed to ResNetV1 where the batch normalization and ReLU activation are applied after the convolution layers.
For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.
Arguments
- include_rescaling: bool, whether to rescale the inputs. If set to True, inputs will be passed through a Rescaling(1/255.0) layer.
- input_tensor: optional Keras tensor (output of layers.Input()) to use as image input for the model.

Example
import tensorflow as tf
import keras_cv

input_data = tf.ones(shape=(8, 224, 224, 3))

# Randomly initialized backbone
model = keras_cv.models.ResNet101Backbone()
output = model(input_data)
ResNet152Backbone class
keras_cv.models.ResNet152Backbone(
stackwise_filters,
stackwise_blocks,
stackwise_strides,
include_rescaling,
input_shape=(None, None, 3),
input_tensor=None,
block_type="block",
**kwargs
)
ResNetBackbone (V1) model with 152 layers.
Reference
- Deep Residual Learning for Image Recognition (https://arxiv.org/abs/1512.03385)
The difference in ResNetV1 and ResNetV2 rests in the structure of their individual building blocks. In ResNetV2, the batch normalization and ReLU activation precede the convolution layers, as opposed to ResNetV1 where the batch normalization and ReLU activation are applied after the convolution layers.
For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.
Arguments
- include_rescaling: bool, whether to rescale the inputs. If set to True, inputs will be passed through a Rescaling(1/255.0) layer.
- input_tensor: optional Keras tensor (output of layers.Input()) to use as image input for the model.

Example
import tensorflow as tf
import keras_cv

input_data = tf.ones(shape=(8, 224, 224, 3))

# Randomly initialized backbone
model = keras_cv.models.ResNet152Backbone()
output = model(input_data)