Float Conv2D, Layernorm, Softmax, Reshape, Div, Relu #27
Conversation
Force-pushed from aada6cd to d6fdb16 (compare)
Force-pushed from dd6c3d3 to 92efdff (compare)
…n PulpOpen
- Float Conv2D, LayerNorm, Div, Reshape, Relu, Softmax basic classes
- Add Float Conv2D, LayerNorm, Div, Reshape, Relu on Generic
- Add Softmax on PulpOpen
Force-pushed from 92efdff to a8dea26 (compare)
DeeployTest/Platforms/Generic/main.c (Outdated)
Some float tests cannot pass with a diff of 1e-5.
For which operator and with which shape?
First batch of comments is here. I didn't go through everything yet.
Deeploy/Targets/Generic/Parsers.py (Outdated)
```python
self.operatorRepresentation['data_in'] = data_in.name
self.operatorRepresentation['data_out'] = data_out.name
self.operatorRepresentation['size'] = np.prod(data_in.shape)
self.operatorRepresentation['lastDimLength'] = data_in.shape[-1]
```
Why do you need to know the last dimension length for ReLU?
Deeploy/Targets/Generic/Parsers.py (Outdated)
```python
self.operatorRepresentation[outputs[idx]] = ctxt.lookup(outputNode.name).name

self.operatorRepresentation['size'] = np.prod(ctxt.lookup(self.operatorRepresentation['input1']).shape)
self.operatorRepresentation['lastDimLength'] = ctxt.lookup(self.operatorRepresentation['input1']).shape[-1]
```
Same question here: why would you need to know the length of the last dimension for a Div?
```python
def alignToContext(self, ctxt: NetworkContext,
                   operatorRepresentation: OperatorRepresentation) -> Tuple[NetworkContext, Dict]:

    data_in = ctxt.lookup(operatorRepresentation['data_in'])
    data_out = ctxt.lookup(operatorRepresentation['data_out'])

    return ctxt, operatorRepresentation, []
```
This method does not do anything, so why do you need it?
```python
class _reluTemplate(NodeTemplate):

    def __init__(self, templateStr):
        super().__init__(templateStr)
```
You don't need this, just do `referenceTemplate = NodeTemplate("""Your template""")`.
```python
class _FloatConvTemplate(NodeTemplate):

    def __init__(self, templateStr):
        super().__init__(templateStr)

    def alignToContext(self, ctxt: NetworkContext,
                       operatorRepresentation: OperatorRepresentation) -> Tuple[NetworkContext, Dict, List[str]]:

        data_in = ctxt.lookup(operatorRepresentation['data_in'])
        data_out = ctxt.lookup(operatorRepresentation['data_out'])

        return ctxt, operatorRepresentation, []
```
Same here, this class is useless.
Force-pushed from 4ead546 to 8942dbd (compare)
1. Delete useless classes in Templates
2. Fix relu dimension bug
Deeploy/Targets/Generic/Parsers.py (Outdated)
```python
self.operatorRepresentation['data_in'] = data_in.name
self.operatorRepresentation['data_out'] = data_out.name
self.operatorRepresentation['size'] = np.prod(data_in.shape)
self.operatorRepresentation['batch'] = data_in.shape[0]
```
Why do we need to know the batch for the ReLU?
```c
void Relu_fp32_fp32(float32_t* input, float32_t* output, int32_t size, int32_t last_dim_length) {

    int32_t batch_size = size / last_dim_length;

    for (int b = 0; b < batch_size; b++) {
        for (int i = 0; i < last_dim_length; i++) {
            output[b * last_dim_length + i] = MAX(input[b * last_dim_length + i], 0.0f);
        }
    }
}
```
ReLU is an element-wise function (unlike softmax or layernorm, which are row-wise). We don't need the `last_dim_length` here :)
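For illustration, a minimal sketch of the element-wise kernel this suggests, assuming the same `float32_t` type and `MAX` macro as the snippet above:

```c
// Sketch: ReLU applied element-wise over the flat buffer; no row or batch
// structure is needed, so last_dim_length drops out entirely.
void Relu_fp32_fp32(float32_t* input, float32_t* output, int32_t size) {
    for (int i = 0; i < size; i++) {
        output[i] = MAX(input[i], 0.0f);
    }
}
```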
Don't forget to align the function signature in the header ;)
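That is, the prototype in the corresponding header would drop the extra parameter too (a sketch, assuming the simplified element-wise kernel above):

```c
// Hypothetical prototype matching the simplified element-wise kernel.
void Relu_fp32_fp32(float32_t* input, float32_t* output, int32_t size);
```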
```python
referenceTemplate = NodeTemplate("""
// Relu (Name: ${nodeName}, Op: ${nodeOp})
SINGLE_CORE Relu_fp${data_in_type.referencedType.typeWidth}_fp${data_out_type.referencedType.typeWidth}(${data_in}, ${data_out}, ${size}, ${batch});
```
Remove the batch here please, we don't need this for relu.
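A sketch of the requested change to the call line in the template above (the `${...}` placeholders come from the existing template; this is an excerpt, not the full template):

```
SINGLE_CORE Relu_fp${data_in_type.referencedType.typeWidth}_fp${data_out_type.referencedType.typeWidth}(${data_in}, ${data_out}, ${size});
```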
```diff
@@ -112,7 +113,8 @@
 'RQIntegerDiv': RQIntegerDivLayer([RQIntegerDivMapper]),
 'MatMul': MatMulLayer([MatMulMapper]),
 'IntegerMean': ReduceMeanLayer([ReduceMeanMapper]),
-'iSoftmax': iSoftmaxLayer([Softmax_int8_Mapper]),
+'iSoftmax': SoftmaxLayer([Softmax_int8_Mapper]),
```
FYI: for class names we use PascalCase and for variable names we use camelCase. This is a very NIT comment, but please try to stick to the style as much as you can 😁
Force-pushed from 8942dbd to 934456a (compare)
Add Float Conv2D, LayerNorm, Div, Softmax, Reshape, Relu on Generic Platform
Changed:
(Note: `nlevel`, `sign` not used for float)
Added:
Deleted:
Issue
PR Merge Checklist
- Rebased on the latest `devel` commit and pointing to `devel`.
- The `CHANGELOG.md` file has been updated.