Hi, thanks for this work.
I would like to know the CNN model architecture that you implemented. I think you based it on this post: http://aqibsaeed.github.io/2016-11-04-human-activity-recognition-cnn/
But why did you use Conv2D when the convolution layer should be 1D (temporal)?
Hi, thanks for reaching out. The reason I used Conv2D instead of Conv1D is that I process the data from all three channels at once, treating the combined matrix as an image. This improves the accuracy of activity detection and helps the network distinguish between activities that would look the same if data from only one axis were used.
For example, depending on the orientation of the accelerometer, climbing stairs, descending stairs, and walking can produce identical signals when data from only one or two sensor axes are used. The third axis, however, carries the information about altitude changes and resolves the ambiguity. Hope it helps.
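The shape difference the answer describes can be sketched with a naive 2D convolution in NumPy. Here a window of accelerometer samples is laid out as a (timesteps × 3 axes) matrix; a 2D kernel that spans all three axis columns mixes the channels in one pass, whereas a purely temporal (width-1) kernel treats each axis independently. The window length (90) and kernel sizes are illustrative assumptions, not the values from this repository.

```python
import numpy as np

def conv2d_valid(window, kernel):
    """Naive 'valid'-mode 2D cross-correlation over a single window."""
    H, W = window.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(window[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical window: 90 accelerometer samples, columns = x, y, z axes
window = np.random.randn(90, 3)

# Conv2D-style kernel spanning 10 timesteps AND all 3 axes at once
kernel2d = np.random.randn(10, 3)
print(conv2d_valid(window, kernel2d).shape)  # (81, 1) -- one fused feature per step

# A purely temporal kernel never mixes the axes
kernel1d = np.random.randn(10, 1)
print(conv2d_valid(window, kernel1d).shape)  # (81, 3) -- each axis filtered separately
```

The (81, 1) output of the cross-axis kernel is what lets the network learn features that combine all three channels, which is the intuition behind choosing Conv2D here.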