DepthwiseConv1D
keras.layers.DepthwiseConv1D(
kernel_size,
strides=1,
padding="valid",
depth_multiplier=1,
data_format=None,
dilation_rate=1,
activation=None,
use_bias=True,
depthwise_initializer="glorot_uniform",
bias_initializer="zeros",
depthwise_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
depthwise_constraint=None,
bias_constraint=None,
**kwargs
)
1D depthwise convolution layer.
Depthwise convolution is a type of convolution in which each input channel is convolved with a different kernel (called a depthwise kernel). You can understand depthwise convolution as the first step in a depthwise separable convolution.
It is implemented via the following steps:

- Split the input into individual channels.
- Convolve each channel with an individual depthwise kernel with depth_multiplier output channels.
- Concatenate the convolved outputs along the channels axis.

Unlike a regular 1D convolution, depthwise convolution does not mix information across different input channels.

The depth_multiplier argument determines how many filters are applied to one input channel. As such, it controls the number of output channels that are generated per input channel in the depthwise step.
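The three steps above can be sketched in plain Python. This is a minimal, unoptimized reference for "valid" padding only; the function name depthwise_conv1d_ref and the nested-list data layout are illustrative assumptions, not part of the Keras API.

```python
def depthwise_conv1d_ref(x, kernels, strides=1):
    """Reference depthwise 1D convolution with "valid" padding.

    x:       list of `steps` frames, each a list of `channels` floats.
    kernels: kernels[c][m] is a list of `kernel_size` taps -- one kernel
             per (input channel c, depth_multiplier index m).
    Returns a list of `new_steps` frames holding channels * depth_multiplier
    values each, ordered channel-major to match the "concatenate along the
    channels axis" step.
    """
    steps = len(x)
    channels = len(x[0])
    depth_multiplier = len(kernels[0])
    kernel_size = len(kernels[0][0])
    new_steps = (steps - kernel_size) // strides + 1
    out = []
    for t in range(new_steps):
        frame = []
        # Step 1: treat each input channel independently (the "split").
        for c in range(channels):
            # Step 2: convolve channel c with each of its depthwise kernels.
            for m in range(depth_multiplier):
                acc = 0.0
                for k in range(kernel_size):
                    acc += x[t * strides + k][c] * kernels[c][m][k]
                # Step 3: concatenate results along the channels axis.
                frame.append(acc)
        out.append(frame)
    return out
```

Because each output value reads exactly one input channel, zeroing an input channel zeroes only the depth_multiplier output channels derived from it, which is the "no mixing across channels" property described above.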
Arguments

- kernel_size: int, specifying the size of the depthwise convolution window.
- strides: int, specifying the stride length of the convolution. strides > 1 is incompatible with dilation_rate > 1.
- padding: string, either "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input. When padding="same" and strides=1, the output has the same size as the input.
- depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to input_channel * depth_multiplier.
- data_format: string, either "channels_last" or "channels_first". The ordering of the dimensions in the inputs. "channels_last" corresponds to inputs with shape (batch, steps, features) while "channels_first" corresponds to inputs with shape (batch, features, steps). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
- dilation_rate: int, specifying the dilation rate to use for dilated convolution.
- activation: Activation function. If None, no activation is applied.
- use_bias: bool, if True, bias will be added to the output.
- depthwise_initializer: Initializer for the convolution kernel. If None, the default initializer ("glorot_uniform") will be used.
- bias_initializer: Initializer for the bias vector. If None, the default initializer ("zeros") will be used.
- depthwise_regularizer: Optional regularizer for the convolution kernel.
- bias_regularizer: Optional regularizer for the bias vector.
- activity_regularizer: Optional regularizer function for the output.
- depthwise_constraint: Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
- bias_constraint: Optional projection function to be applied to the bias after being updated by an Optimizer.

Input shape
- If data_format="channels_last": A 3D tensor with shape: (batch_shape, steps, channels)
- If data_format="channels_first": A 3D tensor with shape: (batch_shape, channels, steps)
Output shape
- If data_format="channels_last": A 3D tensor with shape: (batch_shape, new_steps, channels * depth_multiplier)
- If data_format="channels_first": A 3D tensor with shape: (batch_shape, channels * depth_multiplier, new_steps)
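The shapes above follow standard convolution arithmetic: new_steps depends on kernel_size, strides, dilation_rate, and padding, while the channel count is always channels * depth_multiplier. A small sketch for the "channels_last" case (the helper conv1d_output_shape is illustrative, not a Keras function):

```python
import math

def conv1d_output_shape(batch, steps, channels, kernel_size,
                        strides=1, padding="valid",
                        dilation_rate=1, depth_multiplier=1):
    """Compute (batch, new_steps, out_channels) for channels_last input."""
    # Dilation spreads the kernel taps apart, enlarging its receptive field.
    effective_kernel = dilation_rate * (kernel_size - 1) + 1
    if padding == "valid":
        new_steps = (steps - effective_kernel) // strides + 1
    elif padding == "same":
        new_steps = math.ceil(steps / strides)
    else:
        raise ValueError(f"Unknown padding: {padding!r}")
    # Depthwise: each input channel yields depth_multiplier output channels.
    return (batch, new_steps, channels * depth_multiplier)
```

For instance, an input of shape (4, 10, 12) with kernel_size=3, strides=2, and depth_multiplier=3 under "valid" padding gives (4, 4, 36), matching the Example section.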
Returns

A 3D tensor representing activation(depthwise_conv1d(inputs, kernel) + bias).
Raises

ValueError: when both strides > 1 and dilation_rate > 1.

Example
>>> import numpy as np
>>> import keras
>>> x = np.random.rand(4, 10, 12)
>>> y = keras.layers.DepthwiseConv1D(
...     kernel_size=3, strides=2, depth_multiplier=3, activation='relu'
... )(x)
>>> print(y.shape)
(4, 4, 36)