
Tensorflow Conv1D

The Tensorflow Conv1D layer is useful for time series analysis. Just like Conv2D for image analysis, Conv1D reduces the number of weights in the model by sharing them across positions in the input. Here is a tutorial that uses this layer.

The details of how Conv1D works with multiple features are not obvious. Here is a minimal example, with 1 batch, 4 time points and 7 features. The 1D convolution has 1 filter and a kernel size of 3.

import tensorflow as tf
import numpy as np

input_shape = (1, 4, 7)
x = np.reshape(np.arange(4.0 * 7.0), input_shape)
model = tf.keras.layers.Conv1D(1, 3, activation='linear', input_shape=input_shape[1:], use_bias=False)

# Run the model once so the layer builds its weights (the filter)
output = model(x)
print('Output shape: {}'.format(output.shape))
print('Filter shape: {}'.format(model.get_weights()[0].shape))

...Output shape: (1, 2, 1)
...Filter shape: (3, 7, 1)

The second dimension of the output is 2 because that is the number of positions a 3-element kernel can occupy in an array of 4 time points (indices (0,1,2) and (1,2,3)). In general, with 'valid' padding the output length is input_length - kernel_size + 1 = 4 - 3 + 1 = 2.

To calculate output[0][0][0], we take rows 1, 2, 3 of the data, which gives a 3×7 tensor, multiply element-wise by the filter, and sum the values. To calculate output[0][1][0], we take rows 2, 3, 4 and repeat.
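As a quick sanity check, here is a minimal sketch (reusing x, model and output from the snippet above) that reproduces output[0][0][0] by hand:

# Reproduce output[0][0][0] by hand (sketch; assumes the snippet above has run)
w = model.get_weights()[0]                    # filter, shape (3, 7, 1)
window = x[0, 0:3, :]                         # rows 1-3 of the data, shape (3, 7)
manual = np.sum(window * w[:, :, 0])          # element-wise multiply, then sum everything
print(np.allclose(manual, output[0, 0, 0]))   # expected: True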

This is potentially confusing, since we have a 2D filter and we are doing a 1D convolution. What makes this a 1D convolution is that we only slide the filter along one dimension (time). Note that each feature has its own kernel weights (a column in the filter). Also, this layer combines all features: in this case, 3×7 inputs map to 1 output value for each filter.
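To see the "one output per filter" behaviour, here is a short sketch (hypothetical model5 name, same x as above) with 5 filters instead of 1:

# Same input, 5 filters instead of 1 (sketch)
model5 = tf.keras.layers.Conv1D(5, 3, activation='linear', use_bias=False)
output5 = model5(x)
print('Output shape: {}'.format(output5.shape))                  # (1, 2, 5)
print('Filter shape: {}'.format(model5.get_weights()[0].shape))  # (3, 7, 5)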

If you do not want the features combined in this layer, use Tensorflow DepthwiseConv1D:

import tensorflow as tf
import numpy as np

# 1 sample, 4 time points, 7 features
input_shape = (1, 4, 7)
x = np.reshape(np.arange(4.0 * 7.0), input_shape)
kernel_size = 3
n_filters = 1  # depth_multiplier: number of filters per input feature

model = tf.keras.layers.DepthwiseConv1D(kernel_size, depth_multiplier=n_filters,
                                        activation='linear', use_bias=False)

output = model(x)
w = model.get_weights()[0]  # the weights of the first (and only) filter
print('Output shape: {}'.format(output.shape))
print('Filter shape: {}'.format(w.shape))

...Output shape: (1, 2, 7)
...Filter shape: (3, 7, 1)

The calculation is similar to Conv1D, except that after the element-wise multiplication each feature is summed separately (over the kernel axis only, not across features). This is why the third dimension of the output has gone from 1 to the number of features.
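The same kind of hand check works here (a sketch reusing x, w and output from the DepthwiseConv1D snippet above):

# Reproduce output[0][0][:] by hand (sketch; assumes the snippet above has run)
window = x[0, 0:3, :]                          # rows 1-3 of the data, shape (3, 7)
manual = np.sum(window * w[:, :, 0], axis=0)   # sum over the kernel axis only, one value per feature
print(np.allclose(manual, output[0, 0, :]))    # expected: True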
