log_SSTAP.txt
train subset video numbers: 966
unlabeled subset video numbers: 8683
validation subset video numbers: 4728
use 0.1 label ratio for training!!!
training batchsize : 8
unlabel_training batchsize : 24
use Semi !!!
training 1 (epoch 0): tem_loss: 1.389, pem class_loss: 0.693, pem reg_loss: 0.041, consistency_loss: 0.00025, consistency_loss_ema: 0.00000, total_loss: 2.490
training 11 (epoch 0): tem_loss: 1.407, pem class_loss: 0.576, pem reg_loss: 0.034, consistency_loss: 0.00013, consistency_loss_ema: 0.00007, total_loss: 2.328
training 21 (epoch 0): tem_loss: 1.381, pem class_loss: 0.540, pem reg_loss: 0.032, consistency_loss: 0.00012, consistency_loss_ema: 0.00009, total_loss: 2.246
training 31 (epoch 0): tem_loss: 1.355, pem class_loss: 0.513, pem reg_loss: 0.031, consistency_loss: 0.00013, consistency_loss_ema: 0.00011, total_loss: 2.178
training 41 (epoch 0): tem_loss: 1.339, pem class_loss: 0.503, pem reg_loss: 0.031, consistency_loss: 0.00017, consistency_loss_ema: 0.00015, total_loss: 2.148
training 51 (epoch 0): tem_loss: 1.328, pem class_loss: 0.480, pem reg_loss: 0.029, consistency_loss: 0.00025, consistency_loss_ema: 0.00022, total_loss: 2.102
training 61 (epoch 0): tem_loss: 1.322, pem class_loss: 0.461, pem reg_loss: 0.028, consistency_loss: 0.00028, consistency_loss_ema: 0.00026, total_loss: 2.064
training 71 (epoch 0): tem_loss: 1.308, pem class_loss: 0.451, pem reg_loss: 0.027, consistency_loss: 0.00029, consistency_loss_ema: 0.00028, total_loss: 2.032
training 81 (epoch 0): tem_loss: 1.302, pem class_loss: 0.448, pem reg_loss: 0.027, consistency_loss: 0.00030, consistency_loss_ema: 0.00028, total_loss: 2.017
training 91 (epoch 0): tem_loss: 1.307, pem class_loss: 0.443, pem reg_loss: 0.026, consistency_loss: 0.00035, consistency_loss_ema: 0.00034, total_loss: 2.012
training 101 (epoch 0): tem_loss: 1.307, pem class_loss: 0.437, pem reg_loss: 0.026, consistency_loss: 0.00038, consistency_loss_ema: 0.00037, total_loss: 2.004
training 111 (epoch 0): tem_loss: 1.300, pem class_loss: 0.436, pem reg_loss: 0.026, consistency_loss: 0.00037, consistency_loss_ema: 0.00037, total_loss: 1.994
BMN training loss(epoch 0): tem_loss: 1.297, pem class_loss: 0.436, pem reg_loss: 0.026, total_loss: 1.990
BMN val loss(epoch 0): tem_loss: 1.255, pem class_loss: 0.380, pem reg_loss: 0.022, total_loss: 1.854
BMN val_ema loss(epoch 0): tem_loss: 1.252, pem class_loss: 0.394, pem reg_loss: 0.022, total_loss: 1.864
use Semi !!!
training 121 (epoch 1): tem_loss: 1.125, pem class_loss: 0.374, pem reg_loss: 0.023, consistency_loss: 0.00227, consistency_loss_ema: 0.00205, total_loss: 1.732
training 131 (epoch 1): tem_loss: 1.237, pem class_loss: 0.370, pem reg_loss: 0.022, consistency_loss: 0.00204, consistency_loss_ema: 0.00201, total_loss: 1.823
training 141 (epoch 1): tem_loss: 1.194, pem class_loss: 0.400, pem reg_loss: 0.023, consistency_loss: 0.00238, consistency_loss_ema: 0.00244, total_loss: 1.825
training 151 (epoch 1): tem_loss: 1.193, pem class_loss: 0.384, pem reg_loss: 0.022, consistency_loss: 0.00240, consistency_loss_ema: 0.00249, total_loss: 1.796
training 161 (epoch 1): tem_loss: 1.195, pem class_loss: 0.395, pem reg_loss: 0.022, consistency_loss: 0.00259, consistency_loss_ema: 0.00271, total_loss: 1.812
training 171 (epoch 1): tem_loss: 1.204, pem class_loss: 0.391, pem reg_loss: 0.022, consistency_loss: 0.00270, consistency_loss_ema: 0.00277, total_loss: 1.817
training 181 (epoch 1): tem_loss: 1.195, pem class_loss: 0.392, pem reg_loss: 0.022, consistency_loss: 0.00257, consistency_loss_ema: 0.00271, total_loss: 1.809
training 191 (epoch 1): tem_loss: 1.200, pem class_loss: 0.398, pem reg_loss: 0.023, consistency_loss: 0.00263, consistency_loss_ema: 0.00275, total_loss: 1.827
training 201 (epoch 1): tem_loss: 1.201, pem class_loss: 0.406, pem reg_loss: 0.023, consistency_loss: 0.00275, consistency_loss_ema: 0.00286, total_loss: 1.838
training 211 (epoch 1): tem_loss: 1.201, pem class_loss: 0.404, pem reg_loss: 0.023, consistency_loss: 0.00271, consistency_loss_ema: 0.00284, total_loss: 1.835
training 221 (epoch 1): tem_loss: 1.195, pem class_loss: 0.396, pem reg_loss: 0.022, consistency_loss: 0.00279, consistency_loss_ema: 0.00288, total_loss: 1.815
training 231 (epoch 1): tem_loss: 1.193, pem class_loss: 0.391, pem reg_loss: 0.022, consistency_loss: 0.00278, consistency_loss_ema: 0.00289, total_loss: 1.804
BMN training loss(epoch 1): tem_loss: 1.194, pem class_loss: 0.392, pem reg_loss: 0.022, total_loss: 1.807
BMN val loss(epoch 1): tem_loss: 1.211, pem class_loss: 0.373, pem reg_loss: 0.021, total_loss: 1.796
BMN val_ema loss(epoch 1): tem_loss: 1.210, pem class_loss: 0.371, pem reg_loss: 0.021, total_loss: 1.789
use Semi !!!
training 241 (epoch 2): tem_loss: 1.208, pem class_loss: 0.739, pem reg_loss: 0.043, consistency_loss: 0.01210, consistency_loss_ema: 0.01554, total_loss: 2.382
training 251 (epoch 2): tem_loss: 1.109, pem class_loss: 0.402, pem reg_loss: 0.022, consistency_loss: 0.01205, consistency_loss_ema: 0.01191, total_loss: 1.735
training 261 (epoch 2): tem_loss: 1.113, pem class_loss: 0.388, pem reg_loss: 0.023, consistency_loss: 0.01077, consistency_loss_ema: 0.01030, total_loss: 1.726
training 271 (epoch 2): tem_loss: 1.111, pem class_loss: 0.399, pem reg_loss: 0.023, consistency_loss: 0.01127, consistency_loss_ema: 0.01093, total_loss: 1.742
training 281 (epoch 2): tem_loss: 1.126, pem class_loss: 0.396, pem reg_loss: 0.023, consistency_loss: 0.01226, consistency_loss_ema: 0.01221, total_loss: 1.755
training 291 (epoch 2): tem_loss: 1.139, pem class_loss: 0.394, pem reg_loss: 0.023, consistency_loss: 0.01201, consistency_loss_ema: 0.01205, total_loss: 1.758
training 301 (epoch 2): tem_loss: 1.145, pem class_loss: 0.388, pem reg_loss: 0.022, consistency_loss: 0.01181, consistency_loss_ema: 0.01200, total_loss: 1.752
training 311 (epoch 2): tem_loss: 1.140, pem class_loss: 0.381, pem reg_loss: 0.021, consistency_loss: 0.01160, consistency_loss_ema: 0.01204, total_loss: 1.735
training 321 (epoch 2): tem_loss: 1.136, pem class_loss: 0.377, pem reg_loss: 0.021, consistency_loss: 0.01140, consistency_loss_ema: 0.01187, total_loss: 1.724
training 331 (epoch 2): tem_loss: 1.141, pem class_loss: 0.374, pem reg_loss: 0.021, consistency_loss: 0.01128, consistency_loss_ema: 0.01171, total_loss: 1.725
training 341 (epoch 2): tem_loss: 1.136, pem class_loss: 0.368, pem reg_loss: 0.021, consistency_loss: 0.01106, consistency_loss_ema: 0.01164, total_loss: 1.709
training 351 (epoch 2): tem_loss: 1.141, pem class_loss: 0.366, pem reg_loss: 0.021, consistency_loss: 0.01107, consistency_loss_ema: 0.01174, total_loss: 1.713
BMN training loss(epoch 2): tem_loss: 1.140, pem class_loss: 0.369, pem reg_loss: 0.021, total_loss: 1.715
BMN val loss(epoch 2): tem_loss: 1.203, pem class_loss: 0.366, pem reg_loss: 0.020, total_loss: 1.771
BMN val_ema loss(epoch 2): tem_loss: 1.195, pem class_loss: 0.361, pem reg_loss: 0.020, total_loss: 1.756
use Semi !!!
training 361 (epoch 3): tem_loss: 1.195, pem class_loss: 0.459, pem reg_loss: 0.032, consistency_loss: 0.03124, consistency_loss_ema: 0.02197, total_loss: 1.971
training 371 (epoch 3): tem_loss: 1.159, pem class_loss: 0.359, pem reg_loss: 0.021, consistency_loss: 0.02493, consistency_loss_ema: 0.02423, total_loss: 1.723
training 381 (epoch 3): tem_loss: 1.133, pem class_loss: 0.342, pem reg_loss: 0.020, consistency_loss: 0.02280, consistency_loss_ema: 0.02300, total_loss: 1.672
training 391 (epoch 3): tem_loss: 1.119, pem class_loss: 0.354, pem reg_loss: 0.020, consistency_loss: 0.02207, consistency_loss_ema: 0.02248, total_loss: 1.675
training 401 (epoch 3): tem_loss: 1.114, pem class_loss: 0.354, pem reg_loss: 0.020, consistency_loss: 0.02159, consistency_loss_ema: 0.02179, total_loss: 1.669
training 411 (epoch 3): tem_loss: 1.116, pem class_loss: 0.343, pem reg_loss: 0.020, consistency_loss: 0.02180, consistency_loss_ema: 0.02235, total_loss: 1.654
training 421 (epoch 3): tem_loss: 1.114, pem class_loss: 0.344, pem reg_loss: 0.020, consistency_loss: 0.02192, consistency_loss_ema: 0.02264, total_loss: 1.654
training 431 (epoch 3): tem_loss: 1.123, pem class_loss: 0.346, pem reg_loss: 0.020, consistency_loss: 0.02178, consistency_loss_ema: 0.02250, total_loss: 1.668
training 441 (epoch 3): tem_loss: 1.121, pem class_loss: 0.347, pem reg_loss: 0.020, consistency_loss: 0.02116, consistency_loss_ema: 0.02207, total_loss: 1.668
training 451 (epoch 3): tem_loss: 1.110, pem class_loss: 0.341, pem reg_loss: 0.020, consistency_loss: 0.02161, consistency_loss_ema: 0.02221, total_loss: 1.649
training 461 (epoch 3): tem_loss: 1.110, pem class_loss: 0.344, pem reg_loss: 0.020, consistency_loss: 0.02196, consistency_loss_ema: 0.02230, total_loss: 1.653
training 471 (epoch 3): tem_loss: 1.110, pem class_loss: 0.343, pem reg_loss: 0.020, consistency_loss: 0.02237, consistency_loss_ema: 0.02274, total_loss: 1.650
BMN training loss(epoch 3): tem_loss: 1.115, pem class_loss: 0.348, pem reg_loss: 0.020, total_loss: 1.662
BMN val loss(epoch 3): tem_loss: 1.192, pem class_loss: 0.379, pem reg_loss: 0.021, total_loss: 1.778
BMN val_ema loss(epoch 3): tem_loss: 1.190, pem class_loss: 0.360, pem reg_loss: 0.020, total_loss: 1.750
use Semi !!!
training 481 (epoch 4): tem_loss: 1.194, pem class_loss: 0.343, pem reg_loss: 0.017, consistency_loss: 0.03459, consistency_loss_ema: 0.04356, total_loss: 1.703
training 491 (epoch 4): tem_loss: 1.077, pem class_loss: 0.305, pem reg_loss: 0.018, consistency_loss: 0.03743, consistency_loss_ema: 0.03861, total_loss: 1.561
training 501 (epoch 4): tem_loss: 1.062, pem class_loss: 0.313, pem reg_loss: 0.018, consistency_loss: 0.03573, consistency_loss_ema: 0.03636, total_loss: 1.555
training 511 (epoch 4): tem_loss: 1.062, pem class_loss: 0.304, pem reg_loss: 0.018, consistency_loss: 0.03591, consistency_loss_ema: 0.03703, total_loss: 1.542
training 521 (epoch 4): tem_loss: 1.078, pem class_loss: 0.322, pem reg_loss: 0.018, consistency_loss: 0.03565, consistency_loss_ema: 0.03733, total_loss: 1.581
training 531 (epoch 4): tem_loss: 1.073, pem class_loss: 0.332, pem reg_loss: 0.019, consistency_loss: 0.03505, consistency_loss_ema: 0.03703, total_loss: 1.593
training 541 (epoch 4): tem_loss: 1.079, pem class_loss: 0.336, pem reg_loss: 0.019, consistency_loss: 0.03506, consistency_loss_ema: 0.03681, total_loss: 1.605
training 551 (epoch 4): tem_loss: 1.087, pem class_loss: 0.338, pem reg_loss: 0.019, consistency_loss: 0.03429, consistency_loss_ema: 0.03590, total_loss: 1.614
training 561 (epoch 4): tem_loss: 1.087, pem class_loss: 0.338, pem reg_loss: 0.019, consistency_loss: 0.03398, consistency_loss_ema: 0.03563, total_loss: 1.616
training 571 (epoch 4): tem_loss: 1.088, pem class_loss: 0.336, pem reg_loss: 0.019, consistency_loss: 0.03387, consistency_loss_ema: 0.03536, total_loss: 1.615
training 581 (epoch 4): tem_loss: 1.098, pem class_loss: 0.333, pem reg_loss: 0.019, consistency_loss: 0.03434, consistency_loss_ema: 0.03552, total_loss: 1.620
training 591 (epoch 4): tem_loss: 1.100, pem class_loss: 0.335, pem reg_loss: 0.019, consistency_loss: 0.03476, consistency_loss_ema: 0.03563, total_loss: 1.625
BMN training loss(epoch 4): tem_loss: 1.097, pem class_loss: 0.335, pem reg_loss: 0.019, total_loss: 1.620
BMN val loss(epoch 4): tem_loss: 1.193, pem class_loss: 0.361, pem reg_loss: 0.020, total_loss: 1.749
BMN val_ema loss(epoch 4): tem_loss: 1.195, pem class_loss: 0.358, pem reg_loss: 0.020, total_loss: 1.752
use Semi !!!
training 601 (epoch 5): tem_loss: 1.048, pem class_loss: 0.289, pem reg_loss: 0.013, consistency_loss: 0.03908, consistency_loss_ema: 0.04374, total_loss: 1.470
training 611 (epoch 5): tem_loss: 1.063, pem class_loss: 0.325, pem reg_loss: 0.019, consistency_loss: 0.04042, consistency_loss_ema: 0.03862, total_loss: 1.577
training 621 (epoch 5): tem_loss: 1.093, pem class_loss: 0.325, pem reg_loss: 0.019, consistency_loss: 0.03912, consistency_loss_ema: 0.03854, total_loss: 1.605
training 631 (epoch 5): tem_loss: 1.092, pem class_loss: 0.312, pem reg_loss: 0.018, consistency_loss: 0.03828, consistency_loss_ema: 0.03817, total_loss: 1.588
training 641 (epoch 5): tem_loss: 1.093, pem class_loss: 0.310, pem reg_loss: 0.018, consistency_loss: 0.03904, consistency_loss_ema: 0.03778, total_loss: 1.585
training 651 (epoch 5): tem_loss: 1.090, pem class_loss: 0.308, pem reg_loss: 0.018, consistency_loss: 0.03831, consistency_loss_ema: 0.03743, total_loss: 1.576
training 661 (epoch 5): tem_loss: 1.091, pem class_loss: 0.312, pem reg_loss: 0.018, consistency_loss: 0.03765, consistency_loss_ema: 0.03749, total_loss: 1.584
training 671 (epoch 5): tem_loss: 1.089, pem class_loss: 0.311, pem reg_loss: 0.018, consistency_loss: 0.03724, consistency_loss_ema: 0.03784, total_loss: 1.582
training 681 (epoch 5): tem_loss: 1.090, pem class_loss: 0.317, pem reg_loss: 0.018, consistency_loss: 0.03691, consistency_loss_ema: 0.03811, total_loss: 1.590
training 691 (epoch 5): tem_loss: 1.089, pem class_loss: 0.319, pem reg_loss: 0.018, consistency_loss: 0.03711, consistency_loss_ema: 0.03824, total_loss: 1.589
training 701 (epoch 5): tem_loss: 1.089, pem class_loss: 0.322, pem reg_loss: 0.018, consistency_loss: 0.03703, consistency_loss_ema: 0.03856, total_loss: 1.595
training 711 (epoch 5): tem_loss: 1.093, pem class_loss: 0.320, pem reg_loss: 0.018, consistency_loss: 0.03723, consistency_loss_ema: 0.03869, total_loss: 1.595
BMN training loss(epoch 5): tem_loss: 1.092, pem class_loss: 0.319, pem reg_loss: 0.018, total_loss: 1.593
BMN val loss(epoch 5): tem_loss: 1.198, pem class_loss: 0.370, pem reg_loss: 0.020, total_loss: 1.766
BMN val_ema loss(epoch 5): tem_loss: 1.199, pem class_loss: 0.358, pem reg_loss: 0.020, total_loss: 1.752
use Semi !!!
training 721 (epoch 6): tem_loss: 1.104, pem class_loss: 0.397, pem reg_loss: 0.019, consistency_loss: 0.04701, consistency_loss_ema: 0.05393, total_loss: 1.686
training 731 (epoch 6): tem_loss: 1.068, pem class_loss: 0.314, pem reg_loss: 0.017, consistency_loss: 0.03672, consistency_loss_ema: 0.03947, total_loss: 1.555
training 741 (epoch 6): tem_loss: 1.062, pem class_loss: 0.289, pem reg_loss: 0.016, consistency_loss: 0.03660, consistency_loss_ema: 0.04031, total_loss: 1.512
training 751 (epoch 6): tem_loss: 1.070, pem class_loss: 0.300, pem reg_loss: 0.017, consistency_loss: 0.03650, consistency_loss_ema: 0.03967, total_loss: 1.537
training 761 (epoch 6): tem_loss: 1.057, pem class_loss: 0.306, pem reg_loss: 0.017, consistency_loss: 0.03698, consistency_loss_ema: 0.03963, total_loss: 1.535
training 771 (epoch 6): tem_loss: 1.057, pem class_loss: 0.310, pem reg_loss: 0.018, consistency_loss: 0.03707, consistency_loss_ema: 0.03917, total_loss: 1.546
training 781 (epoch 6): tem_loss: 1.056, pem class_loss: 0.310, pem reg_loss: 0.018, consistency_loss: 0.03697, consistency_loss_ema: 0.03855, total_loss: 1.544
training 791 (epoch 6): tem_loss: 1.050, pem class_loss: 0.308, pem reg_loss: 0.018, consistency_loss: 0.03656, consistency_loss_ema: 0.03867, total_loss: 1.535
training 801 (epoch 6): tem_loss: 1.055, pem class_loss: 0.310, pem reg_loss: 0.018, consistency_loss: 0.03726, consistency_loss_ema: 0.03955, total_loss: 1.543
training 811 (epoch 6): tem_loss: 1.056, pem class_loss: 0.306, pem reg_loss: 0.018, consistency_loss: 0.03763, consistency_loss_ema: 0.03976, total_loss: 1.537
training 821 (epoch 6): tem_loss: 1.060, pem class_loss: 0.305, pem reg_loss: 0.018, consistency_loss: 0.03739, consistency_loss_ema: 0.03944, total_loss: 1.542
training 831 (epoch 6): tem_loss: 1.065, pem class_loss: 0.303, pem reg_loss: 0.017, consistency_loss: 0.03682, consistency_loss_ema: 0.03963, total_loss: 1.543
BMN training loss(epoch 6): tem_loss: 1.073, pem class_loss: 0.305, pem reg_loss: 0.017, total_loss: 1.552
BMN val loss(epoch 6): tem_loss: 1.203, pem class_loss: 0.356, pem reg_loss: 0.019, total_loss: 1.751
BMN val_ema loss(epoch 6): tem_loss: 1.197, pem class_loss: 0.363, pem reg_loss: 0.019, total_loss: 1.755
use Semi !!!
training 841 (epoch 7): tem_loss: 1.049, pem class_loss: 0.281, pem reg_loss: 0.019, consistency_loss: 0.07491, consistency_loss_ema: 0.05149, total_loss: 1.521
training 851 (epoch 7): tem_loss: 1.030, pem class_loss: 0.257, pem reg_loss: 0.015, consistency_loss: 0.04571, consistency_loss_ema: 0.04046, total_loss: 1.437
training 861 (epoch 7): tem_loss: 1.043, pem class_loss: 0.289, pem reg_loss: 0.017, consistency_loss: 0.03864, consistency_loss_ema: 0.03738, total_loss: 1.499
training 871 (epoch 7): tem_loss: 1.038, pem class_loss: 0.281, pem reg_loss: 0.016, consistency_loss: 0.03474, consistency_loss_ema: 0.03410, total_loss: 1.480
training 881 (epoch 7): tem_loss: 1.032, pem class_loss: 0.286, pem reg_loss: 0.016, consistency_loss: 0.03291, consistency_loss_ema: 0.03213, total_loss: 1.482
training 891 (epoch 7): tem_loss: 1.032, pem class_loss: 0.286, pem reg_loss: 0.016, consistency_loss: 0.03073, consistency_loss_ema: 0.03115, total_loss: 1.480
training 901 (epoch 7): tem_loss: 1.032, pem class_loss: 0.282, pem reg_loss: 0.016, consistency_loss: 0.03015, consistency_loss_ema: 0.03041, total_loss: 1.474
training 911 (epoch 7): tem_loss: 1.035, pem class_loss: 0.280, pem reg_loss: 0.016, consistency_loss: 0.02970, consistency_loss_ema: 0.02997, total_loss: 1.477
training 921 (epoch 7): tem_loss: 1.039, pem class_loss: 0.275, pem reg_loss: 0.016, consistency_loss: 0.02900, consistency_loss_ema: 0.02968, total_loss: 1.475
training 931 (epoch 7): tem_loss: 1.037, pem class_loss: 0.273, pem reg_loss: 0.016, consistency_loss: 0.02870, consistency_loss_ema: 0.02970, total_loss: 1.469
training 941 (epoch 7): tem_loss: 1.034, pem class_loss: 0.269, pem reg_loss: 0.016, consistency_loss: 0.02845, consistency_loss_ema: 0.02957, total_loss: 1.461
training 951 (epoch 7): tem_loss: 1.036, pem class_loss: 0.271, pem reg_loss: 0.016, consistency_loss: 0.02793, consistency_loss_ema: 0.02939, total_loss: 1.465
BMN training loss(epoch 7): tem_loss: 1.036, pem class_loss: 0.273, pem reg_loss: 0.016, total_loss: 1.467
BMN val loss(epoch 7): tem_loss: 1.197, pem class_loss: 0.371, pem reg_loss: 0.019, total_loss: 1.762
BMN val_ema loss(epoch 7): tem_loss: 1.199, pem class_loss: 0.366, pem reg_loss: 0.019, total_loss: 1.756
use Semi !!!
training 961 (epoch 8): tem_loss: 0.972, pem class_loss: 0.242, pem reg_loss: 0.014, consistency_loss: 0.02581, consistency_loss_ema: 0.02944, total_loss: 1.350
training 971 (epoch 8): tem_loss: 0.995, pem class_loss: 0.240, pem reg_loss: 0.016, consistency_loss: 0.02441, consistency_loss_ema: 0.02783, total_loss: 1.396
training 981 (epoch 8): tem_loss: 1.023, pem class_loss: 0.256, pem reg_loss: 0.016, consistency_loss: 0.02420, consistency_loss_ema: 0.02768, total_loss: 1.438
training 991 (epoch 8): tem_loss: 1.000, pem class_loss: 0.251, pem reg_loss: 0.016, consistency_loss: 0.02585, consistency_loss_ema: 0.02850, total_loss: 1.406
training 1001 (epoch 8): tem_loss: 0.996, pem class_loss: 0.250, pem reg_loss: 0.015, consistency_loss: 0.02580, consistency_loss_ema: 0.02892, total_loss: 1.400
training 1011 (epoch 8): tem_loss: 0.991, pem class_loss: 0.255, pem reg_loss: 0.015, consistency_loss: 0.02594, consistency_loss_ema: 0.02944, total_loss: 1.400
training 1021 (epoch 8): tem_loss: 0.998, pem class_loss: 0.254, pem reg_loss: 0.015, consistency_loss: 0.02636, consistency_loss_ema: 0.02978, total_loss: 1.405
training 1031 (epoch 8): tem_loss: 1.000, pem class_loss: 0.250, pem reg_loss: 0.015, consistency_loss: 0.02686, consistency_loss_ema: 0.02984, total_loss: 1.400
training 1041 (epoch 8): tem_loss: 1.003, pem class_loss: 0.256, pem reg_loss: 0.015, consistency_loss: 0.02715, consistency_loss_ema: 0.02981, total_loss: 1.412
training 1051 (epoch 8): tem_loss: 1.008, pem class_loss: 0.259, pem reg_loss: 0.015, consistency_loss: 0.02781, consistency_loss_ema: 0.02971, total_loss: 1.418
training 1061 (epoch 8): tem_loss: 1.012, pem class_loss: 0.260, pem reg_loss: 0.015, consistency_loss: 0.02778, consistency_loss_ema: 0.02979, total_loss: 1.423
training 1071 (epoch 8): tem_loss: 1.013, pem class_loss: 0.263, pem reg_loss: 0.015, consistency_loss: 0.02785, consistency_loss_ema: 0.02988, total_loss: 1.431
BMN training loss(epoch 8): tem_loss: 1.016, pem class_loss: 0.264, pem reg_loss: 0.015, total_loss: 1.434
BMN val loss(epoch 8): tem_loss: 1.198, pem class_loss: 0.380, pem reg_loss: 0.019, total_loss: 1.770
BMN val_ema loss(epoch 8): tem_loss: 1.198, pem class_loss: 0.379, pem reg_loss: 0.019, total_loss: 1.768
use Semi !!!
training 1081 (epoch 9): tem_loss: 0.932, pem class_loss: 0.195, pem reg_loss: 0.010, consistency_loss: 0.03730, consistency_loss_ema: 0.03202, total_loss: 1.222
training 1091 (epoch 9): tem_loss: 0.983, pem class_loss: 0.255, pem reg_loss: 0.015, consistency_loss: 0.02991, consistency_loss_ema: 0.03024, total_loss: 1.388
training 1101 (epoch 9): tem_loss: 1.006, pem class_loss: 0.268, pem reg_loss: 0.016, consistency_loss: 0.02917, consistency_loss_ema: 0.03073, total_loss: 1.439
training 1111 (epoch 9): tem_loss: 0.999, pem class_loss: 0.258, pem reg_loss: 0.017, consistency_loss: 0.02986, consistency_loss_ema: 0.03122, total_loss: 1.423
training 1121 (epoch 9): tem_loss: 0.995, pem class_loss: 0.258, pem reg_loss: 0.016, consistency_loss: 0.02993, consistency_loss_ema: 0.03120, total_loss: 1.417
training 1131 (epoch 9): tem_loss: 1.001, pem class_loss: 0.255, pem reg_loss: 0.016, consistency_loss: 0.02995, consistency_loss_ema: 0.03135, total_loss: 1.412
training 1141 (epoch 9): tem_loss: 1.005, pem class_loss: 0.259, pem reg_loss: 0.016, consistency_loss: 0.02993, consistency_loss_ema: 0.03191, total_loss: 1.420
training 1151 (epoch 9): tem_loss: 0.998, pem class_loss: 0.256, pem reg_loss: 0.015, consistency_loss: 0.02994, consistency_loss_ema: 0.03174, total_loss: 1.408
training 1161 (epoch 9): tem_loss: 1.002, pem class_loss: 0.256, pem reg_loss: 0.015, consistency_loss: 0.03006, consistency_loss_ema: 0.03185, total_loss: 1.411
training 1171 (epoch 9): tem_loss: 1.002, pem class_loss: 0.254, pem reg_loss: 0.015, consistency_loss: 0.03020, consistency_loss_ema: 0.03192, total_loss: 1.409
training 1181 (epoch 9): tem_loss: 1.003, pem class_loss: 0.253, pem reg_loss: 0.015, consistency_loss: 0.03020, consistency_loss_ema: 0.03195, total_loss: 1.408
training 1191 (epoch 9): tem_loss: 1.004, pem class_loss: 0.252, pem reg_loss: 0.015, consistency_loss: 0.03043, consistency_loss_ema: 0.03226, total_loss: 1.407
BMN training loss(epoch 9): tem_loss: 1.004, pem class_loss: 0.252, pem reg_loss: 0.015, total_loss: 1.406
BMN val loss(epoch 9): tem_loss: 1.200, pem class_loss: 0.398, pem reg_loss: 0.019, total_loss: 1.790
BMN val_ema loss(epoch 9): tem_loss: 1.200, pem class_loss: 0.391, pem reg_loss: 0.019, total_loss: 1.782
unlabel percent: 0.9
eval student model !!
load : ./checkpoint/Semi-base-0.9/BMN_best.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472775
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 65.1337583984643%
AR@1 is 0.32665569724393256
AR@5 is 0.46319758672699846
AR@10 is 0.5390648567119156
AR@100 is 0.7337035513506103
eval teacher model !!
load : ./checkpoint/Semi-base-0.9/BMN_best_ema.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472615
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 64.91984779925957%
AR@1 is 0.3252982311805841
AR@5 is 0.4589469354175236
AR@10 is 0.5341011929247222
AR@100 is 0.7323872206225148
#
train subset video numbers: 1925
unlabeled subset video numbers: 7724
validation subset video numbers: 4728
use 0.2 label ratio for training!!!
training batchsize : 8
unlabel_training batchsize : 24
use Semi !!!
training 1 (epoch 0): tem_loss: 1.379, pem class_loss: 0.693, pem reg_loss: 0.039, consistency_loss: 0.00064, consistency_loss_ema: 0.00000, total_loss: 2.462
training 11 (epoch 0): tem_loss: 1.370, pem class_loss: 0.843, pem reg_loss: 0.054, consistency_loss: 0.00019, consistency_loss_ema: 0.00004, total_loss: 2.755
training 21 (epoch 0): tem_loss: 1.353, pem class_loss: 0.715, pem reg_loss: 0.044, consistency_loss: 0.00015, consistency_loss_ema: 0.00008, total_loss: 2.512
training 31 (epoch 0): tem_loss: 1.358, pem class_loss: 0.654, pem reg_loss: 0.040, consistency_loss: 0.00017, consistency_loss_ema: 0.00013, total_loss: 2.415
training 41 (epoch 0): tem_loss: 1.349, pem class_loss: 0.596, pem reg_loss: 0.037, consistency_loss: 0.00017, consistency_loss_ema: 0.00015, total_loss: 2.310
training 51 (epoch 0): tem_loss: 1.342, pem class_loss: 0.572, pem reg_loss: 0.035, consistency_loss: 0.00017, consistency_loss_ema: 0.00015, total_loss: 2.262
training 61 (epoch 0): tem_loss: 1.335, pem class_loss: 0.536, pem reg_loss: 0.033, consistency_loss: 0.00019, consistency_loss_ema: 0.00017, total_loss: 2.198
training 71 (epoch 0): tem_loss: 1.328, pem class_loss: 0.512, pem reg_loss: 0.031, consistency_loss: 0.00024, consistency_loss_ema: 0.00023, total_loss: 2.152
training 81 (epoch 0): tem_loss: 1.325, pem class_loss: 0.501, pem reg_loss: 0.030, consistency_loss: 0.00026, consistency_loss_ema: 0.00025, total_loss: 2.130
training 91 (epoch 0): tem_loss: 1.319, pem class_loss: 0.492, pem reg_loss: 0.029, consistency_loss: 0.00027, consistency_loss_ema: 0.00026, total_loss: 2.105
training 101 (epoch 0): tem_loss: 1.309, pem class_loss: 0.481, pem reg_loss: 0.029, consistency_loss: 0.00028, consistency_loss_ema: 0.00027, total_loss: 2.077
training 111 (epoch 0): tem_loss: 1.302, pem class_loss: 0.476, pem reg_loss: 0.028, consistency_loss: 0.00028, consistency_loss_ema: 0.00028, total_loss: 2.063
training 121 (epoch 0): tem_loss: 1.296, pem class_loss: 0.471, pem reg_loss: 0.028, consistency_loss: 0.00030, consistency_loss_ema: 0.00029, total_loss: 2.047
training 131 (epoch 0): tem_loss: 1.294, pem class_loss: 0.469, pem reg_loss: 0.028, consistency_loss: 0.00031, consistency_loss_ema: 0.00031, total_loss: 2.041
training 141 (epoch 0): tem_loss: 1.292, pem class_loss: 0.461, pem reg_loss: 0.027, consistency_loss: 0.00033, consistency_loss_ema: 0.00032, total_loss: 2.024
training 151 (epoch 0): tem_loss: 1.288, pem class_loss: 0.457, pem reg_loss: 0.027, consistency_loss: 0.00033, consistency_loss_ema: 0.00032, total_loss: 2.014
training 161 (epoch 0): tem_loss: 1.285, pem class_loss: 0.455, pem reg_loss: 0.026, consistency_loss: 0.00033, consistency_loss_ema: 0.00032, total_loss: 2.004
training 171 (epoch 0): tem_loss: 1.284, pem class_loss: 0.450, pem reg_loss: 0.026, consistency_loss: 0.00033, consistency_loss_ema: 0.00033, total_loss: 1.995
training 181 (epoch 0): tem_loss: 1.283, pem class_loss: 0.448, pem reg_loss: 0.026, consistency_loss: 0.00034, consistency_loss_ema: 0.00033, total_loss: 1.990
training 191 (epoch 0): tem_loss: 1.280, pem class_loss: 0.447, pem reg_loss: 0.026, consistency_loss: 0.00034, consistency_loss_ema: 0.00033, total_loss: 1.985
training 201 (epoch 0): tem_loss: 1.278, pem class_loss: 0.445, pem reg_loss: 0.026, consistency_loss: 0.00034, consistency_loss_ema: 0.00033, total_loss: 1.979
training 211 (epoch 0): tem_loss: 1.274, pem class_loss: 0.442, pem reg_loss: 0.025, consistency_loss: 0.00035, consistency_loss_ema: 0.00034, total_loss: 1.968
training 221 (epoch 0): tem_loss: 1.270, pem class_loss: 0.438, pem reg_loss: 0.025, consistency_loss: 0.00035, consistency_loss_ema: 0.00034, total_loss: 1.960
training 231 (epoch 0): tem_loss: 1.266, pem class_loss: 0.437, pem reg_loss: 0.025, consistency_loss: 0.00036, consistency_loss_ema: 0.00035, total_loss: 1.953
BMN training loss(epoch 0): tem_loss: 1.263, pem class_loss: 0.436, pem reg_loss: 0.025, total_loss: 1.949
BMN val loss(epoch 0): tem_loss: 1.211, pem class_loss: 0.392, pem reg_loss: 0.022, total_loss: 1.822
BMN val_ema loss(epoch 0): tem_loss: 1.199, pem class_loss: 0.383, pem reg_loss: 0.021, total_loss: 1.787
use Semi !!!
training 241 (epoch 1): tem_loss: 1.212, pem class_loss: 0.375, pem reg_loss: 0.020, consistency_loss: 0.00346, consistency_loss_ema: 0.00342, total_loss: 1.786
training 251 (epoch 1): tem_loss: 1.236, pem class_loss: 0.402, pem reg_loss: 0.023, consistency_loss: 0.00430, consistency_loss_ema: 0.00402, total_loss: 1.866
training 261 (epoch 1): tem_loss: 1.184, pem class_loss: 0.390, pem reg_loss: 0.022, consistency_loss: 0.00369, consistency_loss_ema: 0.00366, total_loss: 1.791
training 271 (epoch 1): tem_loss: 1.170, pem class_loss: 0.383, pem reg_loss: 0.022, consistency_loss: 0.00349, consistency_loss_ema: 0.00350, total_loss: 1.769
training 281 (epoch 1): tem_loss: 1.175, pem class_loss: 0.372, pem reg_loss: 0.021, consistency_loss: 0.00337, consistency_loss_ema: 0.00340, total_loss: 1.760
training 291 (epoch 1): tem_loss: 1.187, pem class_loss: 0.378, pem reg_loss: 0.021, consistency_loss: 0.00327, consistency_loss_ema: 0.00331, total_loss: 1.778
training 301 (epoch 1): tem_loss: 1.189, pem class_loss: 0.374, pem reg_loss: 0.021, consistency_loss: 0.00329, consistency_loss_ema: 0.00335, total_loss: 1.774
training 311 (epoch 1): tem_loss: 1.195, pem class_loss: 0.381, pem reg_loss: 0.021, consistency_loss: 0.00322, consistency_loss_ema: 0.00330, total_loss: 1.789
training 321 (epoch 1): tem_loss: 1.194, pem class_loss: 0.393, pem reg_loss: 0.022, consistency_loss: 0.00314, consistency_loss_ema: 0.00324, total_loss: 1.808
training 331 (epoch 1): tem_loss: 1.196, pem class_loss: 0.391, pem reg_loss: 0.022, consistency_loss: 0.00315, consistency_loss_ema: 0.00321, total_loss: 1.807
training 341 (epoch 1): tem_loss: 1.191, pem class_loss: 0.388, pem reg_loss: 0.022, consistency_loss: 0.00310, consistency_loss_ema: 0.00321, total_loss: 1.795
training 351 (epoch 1): tem_loss: 1.189, pem class_loss: 0.385, pem reg_loss: 0.022, consistency_loss: 0.00308, consistency_loss_ema: 0.00319, total_loss: 1.790
training 361 (epoch 1): tem_loss: 1.181, pem class_loss: 0.379, pem reg_loss: 0.021, consistency_loss: 0.00323, consistency_loss_ema: 0.00331, total_loss: 1.773
training 371 (epoch 1): tem_loss: 1.179, pem class_loss: 0.381, pem reg_loss: 0.021, consistency_loss: 0.00323, consistency_loss_ema: 0.00330, total_loss: 1.773
training 381 (epoch 1): tem_loss: 1.174, pem class_loss: 0.378, pem reg_loss: 0.021, consistency_loss: 0.00332, consistency_loss_ema: 0.00335, total_loss: 1.762
training 391 (epoch 1): tem_loss: 1.178, pem class_loss: 0.377, pem reg_loss: 0.021, consistency_loss: 0.00332, consistency_loss_ema: 0.00335, total_loss: 1.765
training 401 (epoch 1): tem_loss: 1.179, pem class_loss: 0.377, pem reg_loss: 0.021, consistency_loss: 0.00329, consistency_loss_ema: 0.00333, total_loss: 1.764
training 411 (epoch 1): tem_loss: 1.183, pem class_loss: 0.380, pem reg_loss: 0.021, consistency_loss: 0.00333, consistency_loss_ema: 0.00339, total_loss: 1.773
training 421 (epoch 1): tem_loss: 1.181, pem class_loss: 0.383, pem reg_loss: 0.021, consistency_loss: 0.00334, consistency_loss_ema: 0.00341, total_loss: 1.775
training 431 (epoch 1): tem_loss: 1.181, pem class_loss: 0.383, pem reg_loss: 0.021, consistency_loss: 0.00335, consistency_loss_ema: 0.00341, total_loss: 1.775
training 441 (epoch 1): tem_loss: 1.182, pem class_loss: 0.385, pem reg_loss: 0.021, consistency_loss: 0.00332, consistency_loss_ema: 0.00339, total_loss: 1.781
training 451 (epoch 1): tem_loss: 1.182, pem class_loss: 0.384, pem reg_loss: 0.021, consistency_loss: 0.00331, consistency_loss_ema: 0.00339, total_loss: 1.780
training 461 (epoch 1): tem_loss: 1.182, pem class_loss: 0.385, pem reg_loss: 0.021, consistency_loss: 0.00328, consistency_loss_ema: 0.00336, total_loss: 1.780
training 471 (epoch 1): tem_loss: 1.181, pem class_loss: 0.383, pem reg_loss: 0.021, consistency_loss: 0.00325, consistency_loss_ema: 0.00333, total_loss: 1.775
BMN training loss(epoch 1): tem_loss: 1.179, pem class_loss: 0.382, pem reg_loss: 0.021, total_loss: 1.772
BMN val loss(epoch 1): tem_loss: 1.186, pem class_loss: 0.359, pem reg_loss: 0.020, total_loss: 1.744
BMN val_ema loss(epoch 1): tem_loss: 1.179, pem class_loss: 0.361, pem reg_loss: 0.020, total_loss: 1.737
use Semi !!!
training 481 (epoch 2): tem_loss: 1.118, pem class_loss: 0.323, pem reg_loss: 0.017, consistency_loss: 0.01001, consistency_loss_ema: 0.00827, total_loss: 1.608
training 491 (epoch 2): tem_loss: 1.080, pem class_loss: 0.291, pem reg_loss: 0.016, consistency_loss: 0.01107, consistency_loss_ema: 0.01140, total_loss: 1.534
training 501 (epoch 2): tem_loss: 1.117, pem class_loss: 0.331, pem reg_loss: 0.018, consistency_loss: 0.00975, consistency_loss_ema: 0.01033, total_loss: 1.632
training 511 (epoch 2): tem_loss: 1.123, pem class_loss: 0.347, pem reg_loss: 0.019, consistency_loss: 0.00888, consistency_loss_ema: 0.00930, total_loss: 1.662
training 521 (epoch 2): tem_loss: 1.144, pem class_loss: 0.360, pem reg_loss: 0.020, consistency_loss: 0.00908, consistency_loss_ema: 0.01031, total_loss: 1.706
training 531 (epoch 2): tem_loss: 1.141, pem class_loss: 0.355, pem reg_loss: 0.020, consistency_loss: 0.00974, consistency_loss_ema: 0.01062, total_loss: 1.692
training 541 (epoch 2): tem_loss: 1.141, pem class_loss: 0.364, pem reg_loss: 0.020, consistency_loss: 0.00973, consistency_loss_ema: 0.01048, total_loss: 1.702
training 551 (epoch 2): tem_loss: 1.138, pem class_loss: 0.363, pem reg_loss: 0.020, consistency_loss: 0.00968, consistency_loss_ema: 0.01041, total_loss: 1.700
training 561 (epoch 2): tem_loss: 1.139, pem class_loss: 0.361, pem reg_loss: 0.020, consistency_loss: 0.01000, consistency_loss_ema: 0.01056, total_loss: 1.698
training 571 (epoch 2): tem_loss: 1.140, pem class_loss: 0.359, pem reg_loss: 0.020, consistency_loss: 0.00999, consistency_loss_ema: 0.01043, total_loss: 1.698
training 581 (epoch 2): tem_loss: 1.138, pem class_loss: 0.358, pem reg_loss: 0.020, consistency_loss: 0.01006, consistency_loss_ema: 0.01033, total_loss: 1.694
training 591 (epoch 2): tem_loss: 1.137, pem class_loss: 0.360, pem reg_loss: 0.020, consistency_loss: 0.01001, consistency_loss_ema: 0.01026, total_loss: 1.695
training 601 (epoch 2): tem_loss: 1.140, pem class_loss: 0.362, pem reg_loss: 0.020, consistency_loss: 0.01023, consistency_loss_ema: 0.01045, total_loss: 1.702
training 611 (epoch 2): tem_loss: 1.140, pem class_loss: 0.366, pem reg_loss: 0.020, consistency_loss: 0.01052, consistency_loss_ema: 0.01063, total_loss: 1.708
training 621 (epoch 2): tem_loss: 1.138, pem class_loss: 0.364, pem reg_loss: 0.020, consistency_loss: 0.01066, consistency_loss_ema: 0.01061, total_loss: 1.705
training 631 (epoch 2): tem_loss: 1.141, pem class_loss: 0.362, pem reg_loss: 0.020, consistency_loss: 0.01065, consistency_loss_ema: 0.01062, total_loss: 1.704
training 641 (epoch 2): tem_loss: 1.144, pem class_loss: 0.364, pem reg_loss: 0.020, consistency_loss: 0.01056, consistency_loss_ema: 0.01056, total_loss: 1.710
training 651 (epoch 2): tem_loss: 1.143, pem class_loss: 0.363, pem reg_loss: 0.020, consistency_loss: 0.01042, consistency_loss_ema: 0.01043, total_loss: 1.708
training 661 (epoch 2): tem_loss: 1.143, pem class_loss: 0.366, pem reg_loss: 0.020, consistency_loss: 0.01035, consistency_loss_ema: 0.01043, total_loss: 1.712
training 671 (epoch 2): tem_loss: 1.142, pem class_loss: 0.366, pem reg_loss: 0.020, consistency_loss: 0.01044, consistency_loss_ema: 0.01048, total_loss: 1.711
training 681 (epoch 2): tem_loss: 1.143, pem class_loss: 0.366, pem reg_loss: 0.020, consistency_loss: 0.01038, consistency_loss_ema: 0.01053, total_loss: 1.711
training 691 (epoch 2): tem_loss: 1.144, pem class_loss: 0.367, pem reg_loss: 0.020, consistency_loss: 0.01037, consistency_loss_ema: 0.01049, total_loss: 1.712
training 701 (epoch 2): tem_loss: 1.142, pem class_loss: 0.365, pem reg_loss: 0.020, consistency_loss: 0.01046, consistency_loss_ema: 0.01056, total_loss: 1.709
training 711 (epoch 2): tem_loss: 1.142, pem class_loss: 0.364, pem reg_loss: 0.020, consistency_loss: 0.01047, consistency_loss_ema: 0.01055, total_loss: 1.706
BMN training loss(epoch 2): tem_loss: 1.141, pem class_loss: 0.362, pem reg_loss: 0.020, total_loss: 1.703
BMN val loss(epoch 2): tem_loss: 1.174, pem class_loss: 0.358, pem reg_loss: 0.020, total_loss: 1.730
BMN val_ema loss(epoch 2): tem_loss: 1.167, pem class_loss: 0.351, pem reg_loss: 0.019, total_loss: 1.709
use Semi !!!
training 721 (epoch 3): tem_loss: 1.203, pem class_loss: 0.294, pem reg_loss: 0.020, consistency_loss: 0.02527, consistency_loss_ema: 0.02820, total_loss: 1.702
training 731 (epoch 3): tem_loss: 1.053, pem class_loss: 0.321, pem reg_loss: 0.018, consistency_loss: 0.02152, consistency_loss_ema: 0.02362, total_loss: 1.553
training 741 (epoch 3): tem_loss: 1.072, pem class_loss: 0.334, pem reg_loss: 0.018, consistency_loss: 0.02003, consistency_loss_ema: 0.02214, total_loss: 1.587
training 751 (epoch 3): tem_loss: 1.074, pem class_loss: 0.325, pem reg_loss: 0.018, consistency_loss: 0.02028, consistency_loss_ema: 0.02171, total_loss: 1.574
training 761 (epoch 3): tem_loss: 1.087, pem class_loss: 0.329, pem reg_loss: 0.017, consistency_loss: 0.01988, consistency_loss_ema: 0.02126, total_loss: 1.590
training 771 (epoch 3): tem_loss: 1.094, pem class_loss: 0.332, pem reg_loss: 0.018, consistency_loss: 0.01989, consistency_loss_ema: 0.02125, total_loss: 1.604
training 781 (epoch 3): tem_loss: 1.096, pem class_loss: 0.336, pem reg_loss: 0.018, consistency_loss: 0.02077, consistency_loss_ema: 0.02171, total_loss: 1.612
training 791 (epoch 3): tem_loss: 1.092, pem class_loss: 0.330, pem reg_loss: 0.018, consistency_loss: 0.02141, consistency_loss_ema: 0.02210, total_loss: 1.598
training 801 (epoch 3): tem_loss: 1.100, pem class_loss: 0.333, pem reg_loss: 0.018, consistency_loss: 0.02168, consistency_loss_ema: 0.02217, total_loss: 1.613
training 811 (epoch 3): tem_loss: 1.104, pem class_loss: 0.338, pem reg_loss: 0.018, consistency_loss: 0.02144, consistency_loss_ema: 0.02180, total_loss: 1.625
training 821 (epoch 3): tem_loss: 1.109, pem class_loss: 0.339, pem reg_loss: 0.018, consistency_loss: 0.02096, consistency_loss_ema: 0.02141, total_loss: 1.631
training 831 (epoch 3): tem_loss: 1.107, pem class_loss: 0.337, pem reg_loss: 0.018, consistency_loss: 0.02059, consistency_loss_ema: 0.02089, total_loss: 1.626
training 841 (epoch 3): tem_loss: 1.106, pem class_loss: 0.337, pem reg_loss: 0.018, consistency_loss: 0.02055, consistency_loss_ema: 0.02092, total_loss: 1.627
training 851 (epoch 3): tem_loss: 1.105, pem class_loss: 0.337, pem reg_loss: 0.018, consistency_loss: 0.02049, consistency_loss_ema: 0.02088, total_loss: 1.626
training 861 (epoch 3): tem_loss: 1.110, pem class_loss: 0.338, pem reg_loss: 0.019, consistency_loss: 0.02017, consistency_loss_ema: 0.02051, total_loss: 1.634
training 871 (epoch 3): tem_loss: 1.112, pem class_loss: 0.343, pem reg_loss: 0.019, consistency_loss: 0.01998, consistency_loss_ema: 0.02034, total_loss: 1.643
training 881 (epoch 3): tem_loss: 1.113, pem class_loss: 0.341, pem reg_loss: 0.019, consistency_loss: 0.02002, consistency_loss_ema: 0.02033, total_loss: 1.640
training 891 (epoch 3): tem_loss: 1.113, pem class_loss: 0.344, pem reg_loss: 0.019, consistency_loss: 0.02001, consistency_loss_ema: 0.02040, total_loss: 1.645
training 901 (epoch 3): tem_loss: 1.114, pem class_loss: 0.347, pem reg_loss: 0.019, consistency_loss: 0.02023, consistency_loss_ema: 0.02054, total_loss: 1.650
training 911 (epoch 3): tem_loss: 1.117, pem class_loss: 0.348, pem reg_loss: 0.019, consistency_loss: 0.02010, consistency_loss_ema: 0.02054, total_loss: 1.655
training 921 (epoch 3): tem_loss: 1.118, pem class_loss: 0.348, pem reg_loss: 0.019, consistency_loss: 0.02024, consistency_loss_ema: 0.02058, total_loss: 1.655
training 931 (epoch 3): tem_loss: 1.119, pem class_loss: 0.351, pem reg_loss: 0.019, consistency_loss: 0.02016, consistency_loss_ema: 0.02061, total_loss: 1.661
training 941 (epoch 3): tem_loss: 1.121, pem class_loss: 0.351, pem reg_loss: 0.019, consistency_loss: 0.02006, consistency_loss_ema: 0.02044, total_loss: 1.663
training 951 (epoch 3): tem_loss: 1.121, pem class_loss: 0.351, pem reg_loss: 0.019, consistency_loss: 0.02000, consistency_loss_ema: 0.02033, total_loss: 1.665
BMN training loss(epoch 3): tem_loss: 1.120, pem class_loss: 0.350, pem reg_loss: 0.019, total_loss: 1.661
BMN val loss(epoch 3): tem_loss: 1.171, pem class_loss: 0.347, pem reg_loss: 0.020, total_loss: 1.713
BMN val_ema loss(epoch 3): tem_loss: 1.168, pem class_loss: 0.354, pem reg_loss: 0.019, total_loss: 1.712
use Semi !!!
training 961 (epoch 4): tem_loss: 1.049, pem class_loss: 0.424, pem reg_loss: 0.022, consistency_loss: 0.04395, consistency_loss_ema: 0.04596, total_loss: 1.696
training 971 (epoch 4): tem_loss: 1.090, pem class_loss: 0.380, pem reg_loss: 0.020, consistency_loss: 0.03241, consistency_loss_ema: 0.03183, total_loss: 1.671
training 981 (epoch 4): tem_loss: 1.126, pem class_loss: 0.383, pem reg_loss: 0.021, consistency_loss: 0.02892, consistency_loss_ema: 0.02966, total_loss: 1.716
training 991 (epoch 4): tem_loss: 1.129, pem class_loss: 0.371, pem reg_loss: 0.020, consistency_loss: 0.02861, consistency_loss_ema: 0.02894, total_loss: 1.703
training 1001 (epoch 4): tem_loss: 1.137, pem class_loss: 0.364, pem reg_loss: 0.020, consistency_loss: 0.02766, consistency_loss_ema: 0.02802, total_loss: 1.701
training 1011 (epoch 4): tem_loss: 1.127, pem class_loss: 0.354, pem reg_loss: 0.019, consistency_loss: 0.02645, consistency_loss_ema: 0.02712, total_loss: 1.673
training 1021 (epoch 4): tem_loss: 1.122, pem class_loss: 0.340, pem reg_loss: 0.018, consistency_loss: 0.02640, consistency_loss_ema: 0.02719, total_loss: 1.645
training 1031 (epoch 4): tem_loss: 1.121, pem class_loss: 0.343, pem reg_loss: 0.018, consistency_loss: 0.02664, consistency_loss_ema: 0.02735, total_loss: 1.648
training 1041 (epoch 4): tem_loss: 1.124, pem class_loss: 0.345, pem reg_loss: 0.019, consistency_loss: 0.02686, consistency_loss_ema: 0.02747, total_loss: 1.655
training 1051 (epoch 4): tem_loss: 1.126, pem class_loss: 0.342, pem reg_loss: 0.018, consistency_loss: 0.02672, consistency_loss_ema: 0.02734, total_loss: 1.652
training 1061 (epoch 4): tem_loss: 1.124, pem class_loss: 0.339, pem reg_loss: 0.018, consistency_loss: 0.02655, consistency_loss_ema: 0.02730, total_loss: 1.644
training 1071 (epoch 4): tem_loss: 1.124, pem class_loss: 0.339, pem reg_loss: 0.018, consistency_loss: 0.02677, consistency_loss_ema: 0.02728, total_loss: 1.644
training 1081 (epoch 4): tem_loss: 1.119, pem class_loss: 0.338, pem reg_loss: 0.018, consistency_loss: 0.02674, consistency_loss_ema: 0.02726, total_loss: 1.636
training 1091 (epoch 4): tem_loss: 1.119, pem class_loss: 0.339, pem reg_loss: 0.018, consistency_loss: 0.02667, consistency_loss_ema: 0.02708, total_loss: 1.637
training 1101 (epoch 4): tem_loss: 1.118, pem class_loss: 0.338, pem reg_loss: 0.018, consistency_loss: 0.02675, consistency_loss_ema: 0.02725, total_loss: 1.637
training 1111 (epoch 4): tem_loss: 1.118, pem class_loss: 0.337, pem reg_loss: 0.018, consistency_loss: 0.02700, consistency_loss_ema: 0.02752, total_loss: 1.635
training 1121 (epoch 4): tem_loss: 1.120, pem class_loss: 0.337, pem reg_loss: 0.018, consistency_loss: 0.02690, consistency_loss_ema: 0.02744, total_loss: 1.638
training 1131 (epoch 4): tem_loss: 1.120, pem class_loss: 0.336, pem reg_loss: 0.018, consistency_loss: 0.02685, consistency_loss_ema: 0.02748, total_loss: 1.636
training 1141 (epoch 4): tem_loss: 1.120, pem class_loss: 0.335, pem reg_loss: 0.018, consistency_loss: 0.02678, consistency_loss_ema: 0.02741, total_loss: 1.634
training 1151 (epoch 4): tem_loss: 1.121, pem class_loss: 0.335, pem reg_loss: 0.018, consistency_loss: 0.02661, consistency_loss_ema: 0.02732, total_loss: 1.635
training 1161 (epoch 4): tem_loss: 1.120, pem class_loss: 0.335, pem reg_loss: 0.018, consistency_loss: 0.02649, consistency_loss_ema: 0.02713, total_loss: 1.634
training 1171 (epoch 4): tem_loss: 1.120, pem class_loss: 0.334, pem reg_loss: 0.018, consistency_loss: 0.02633, consistency_loss_ema: 0.02693, total_loss: 1.633
training 1181 (epoch 4): tem_loss: 1.120, pem class_loss: 0.333, pem reg_loss: 0.018, consistency_loss: 0.02625, consistency_loss_ema: 0.02684, total_loss: 1.630
training 1191 (epoch 4): tem_loss: 1.119, pem class_loss: 0.335, pem reg_loss: 0.018, consistency_loss: 0.02648, consistency_loss_ema: 0.02703, total_loss: 1.632
BMN training loss(epoch 4): tem_loss: 1.119, pem class_loss: 0.333, pem reg_loss: 0.018, total_loss: 1.630
BMN val loss(epoch 4): tem_loss: 1.172, pem class_loss: 0.347, pem reg_loss: 0.019, total_loss: 1.705
BMN val_ema loss(epoch 4): tem_loss: 1.174, pem class_loss: 0.344, pem reg_loss: 0.018, total_loss: 1.700
use Semi !!!
training 1201 (epoch 5): tem_loss: 1.123, pem class_loss: 0.251, pem reg_loss: 0.017, consistency_loss: 0.05248, consistency_loss_ema: 0.03572, total_loss: 1.544
training 1211 (epoch 5): tem_loss: 1.125, pem class_loss: 0.317, pem reg_loss: 0.019, consistency_loss: 0.03433, consistency_loss_ema: 0.03325, total_loss: 1.629
training 1221 (epoch 5): tem_loss: 1.129, pem class_loss: 0.336, pem reg_loss: 0.019, consistency_loss: 0.02998, consistency_loss_ema: 0.03027, total_loss: 1.660
training 1231 (epoch 5): tem_loss: 1.133, pem class_loss: 0.325, pem reg_loss: 0.018, consistency_loss: 0.02893, consistency_loss_ema: 0.02966, total_loss: 1.637
training 1241 (epoch 5): tem_loss: 1.136, pem class_loss: 0.320, pem reg_loss: 0.018, consistency_loss: 0.02842, consistency_loss_ema: 0.02971, total_loss: 1.637
training 1251 (epoch 5): tem_loss: 1.131, pem class_loss: 0.327, pem reg_loss: 0.018, consistency_loss: 0.02848, consistency_loss_ema: 0.03075, total_loss: 1.639
training 1261 (epoch 5): tem_loss: 1.127, pem class_loss: 0.327, pem reg_loss: 0.018, consistency_loss: 0.02860, consistency_loss_ema: 0.03026, total_loss: 1.633
training 1271 (epoch 5): tem_loss: 1.128, pem class_loss: 0.332, pem reg_loss: 0.018, consistency_loss: 0.02878, consistency_loss_ema: 0.02983, total_loss: 1.641
training 1281 (epoch 5): tem_loss: 1.129, pem class_loss: 0.335, pem reg_loss: 0.018, consistency_loss: 0.02853, consistency_loss_ema: 0.02930, total_loss: 1.645
training 1291 (epoch 5): tem_loss: 1.132, pem class_loss: 0.333, pem reg_loss: 0.018, consistency_loss: 0.02821, consistency_loss_ema: 0.02931, total_loss: 1.644
training 1301 (epoch 5): tem_loss: 1.130, pem class_loss: 0.331, pem reg_loss: 0.018, consistency_loss: 0.02824, consistency_loss_ema: 0.02954, total_loss: 1.639
training 1311 (epoch 5): tem_loss: 1.130, pem class_loss: 0.330, pem reg_loss: 0.018, consistency_loss: 0.02865, consistency_loss_ema: 0.03001, total_loss: 1.638
training 1321 (epoch 5): tem_loss: 1.129, pem class_loss: 0.328, pem reg_loss: 0.018, consistency_loss: 0.02892, consistency_loss_ema: 0.03032, total_loss: 1.632
training 1331 (epoch 5): tem_loss: 1.130, pem class_loss: 0.326, pem reg_loss: 0.017, consistency_loss: 0.02905, consistency_loss_ema: 0.03059, total_loss: 1.629
training 1341 (epoch 5): tem_loss: 1.130, pem class_loss: 0.328, pem reg_loss: 0.017, consistency_loss: 0.02909, consistency_loss_ema: 0.03026, total_loss: 1.633
training 1351 (epoch 5): tem_loss: 1.130, pem class_loss: 0.324, pem reg_loss: 0.017, consistency_loss: 0.02893, consistency_loss_ema: 0.03017, total_loss: 1.627
training 1361 (epoch 5): tem_loss: 1.129, pem class_loss: 0.325, pem reg_loss: 0.017, consistency_loss: 0.02888, consistency_loss_ema: 0.03002, total_loss: 1.625
training 1371 (epoch 5): tem_loss: 1.125, pem class_loss: 0.323, pem reg_loss: 0.017, consistency_loss: 0.02885, consistency_loss_ema: 0.03005, total_loss: 1.619
training 1381 (epoch 5): tem_loss: 1.123, pem class_loss: 0.320, pem reg_loss: 0.017, consistency_loss: 0.02883, consistency_loss_ema: 0.02995, total_loss: 1.614
training 1391 (epoch 5): tem_loss: 1.123, pem class_loss: 0.319, pem reg_loss: 0.017, consistency_loss: 0.02896, consistency_loss_ema: 0.03014, total_loss: 1.612
training 1401 (epoch 5): tem_loss: 1.121, pem class_loss: 0.322, pem reg_loss: 0.017, consistency_loss: 0.02887, consistency_loss_ema: 0.03014, total_loss: 1.612
training 1411 (epoch 5): tem_loss: 1.121, pem class_loss: 0.323, pem reg_loss: 0.017, consistency_loss: 0.02893, consistency_loss_ema: 0.03017, total_loss: 1.614
training 1421 (epoch 5): tem_loss: 1.119, pem class_loss: 0.323, pem reg_loss: 0.017, consistency_loss: 0.02901, consistency_loss_ema: 0.03026, total_loss: 1.614
training 1431 (epoch 5): tem_loss: 1.118, pem class_loss: 0.323, pem reg_loss: 0.017, consistency_loss: 0.02928, consistency_loss_ema: 0.03036, total_loss: 1.613
BMN training loss(epoch 5): tem_loss: 1.116, pem class_loss: 0.322, pem reg_loss: 0.017, total_loss: 1.610
BMN val loss(epoch 5): tem_loss: 1.176, pem class_loss: 0.347, pem reg_loss: 0.018, total_loss: 1.706
BMN val_ema loss(epoch 5): tem_loss: 1.177, pem class_loss: 0.345, pem reg_loss: 0.018, total_loss: 1.701
use Semi !!!
training 1441 (epoch 6): tem_loss: 1.168, pem class_loss: 0.243, pem reg_loss: 0.012, consistency_loss: 0.03163, consistency_loss_ema: 0.03086, total_loss: 1.533
training 1451 (epoch 6): tem_loss: 1.076, pem class_loss: 0.310, pem reg_loss: 0.016, consistency_loss: 0.02929, consistency_loss_ema: 0.03013, total_loss: 1.549
training 1461 (epoch 6): tem_loss: 1.093, pem class_loss: 0.318, pem reg_loss: 0.017, consistency_loss: 0.02771, consistency_loss_ema: 0.02843, total_loss: 1.584
training 1471 (epoch 6): tem_loss: 1.097, pem class_loss: 0.305, pem reg_loss: 0.016, consistency_loss: 0.02885, consistency_loss_ema: 0.02978, total_loss: 1.561
training 1481 (epoch 6): tem_loss: 1.102, pem class_loss: 0.305, pem reg_loss: 0.016, consistency_loss: 0.02888, consistency_loss_ema: 0.02982, total_loss: 1.570
training 1491 (epoch 6): tem_loss: 1.094, pem class_loss: 0.300, pem reg_loss: 0.016, consistency_loss: 0.02859, consistency_loss_ema: 0.03002, total_loss: 1.552
training 1501 (epoch 6): tem_loss: 1.090, pem class_loss: 0.298, pem reg_loss: 0.016, consistency_loss: 0.02882, consistency_loss_ema: 0.03017, total_loss: 1.546
training 1511 (epoch 6): tem_loss: 1.090, pem class_loss: 0.300, pem reg_loss: 0.016, consistency_loss: 0.02883, consistency_loss_ema: 0.02992, total_loss: 1.548
training 1521 (epoch 6): tem_loss: 1.101, pem class_loss: 0.310, pem reg_loss: 0.016, consistency_loss: 0.02847, consistency_loss_ema: 0.02993, total_loss: 1.572
training 1531 (epoch 6): tem_loss: 1.107, pem class_loss: 0.311, pem reg_loss: 0.016, consistency_loss: 0.02873, consistency_loss_ema: 0.03000, total_loss: 1.577
training 1541 (epoch 6): tem_loss: 1.107, pem class_loss: 0.314, pem reg_loss: 0.016, consistency_loss: 0.02901, consistency_loss_ema: 0.03010, total_loss: 1.584
training 1551 (epoch 6): tem_loss: 1.107, pem class_loss: 0.314, pem reg_loss: 0.016, consistency_loss: 0.02991, consistency_loss_ema: 0.03014, total_loss: 1.584
training 1561 (epoch 6): tem_loss: 1.112, pem class_loss: 0.315, pem reg_loss: 0.016, consistency_loss: 0.02993, consistency_loss_ema: 0.03020, total_loss: 1.589
training 1571 (epoch 6): tem_loss: 1.113, pem class_loss: 0.312, pem reg_loss: 0.016, consistency_loss: 0.02991, consistency_loss_ema: 0.03033, total_loss: 1.586
training 1581 (epoch 6): tem_loss: 1.110, pem class_loss: 0.311, pem reg_loss: 0.016, consistency_loss: 0.02987, consistency_loss_ema: 0.03035, total_loss: 1.581
training 1591 (epoch 6): tem_loss: 1.105, pem class_loss: 0.311, pem reg_loss: 0.016, consistency_loss: 0.02999, consistency_loss_ema: 0.03029, total_loss: 1.577
training 1601 (epoch 6): tem_loss: 1.103, pem class_loss: 0.311, pem reg_loss: 0.016, consistency_loss: 0.03011, consistency_loss_ema: 0.03036, total_loss: 1.575
training 1611 (epoch 6): tem_loss: 1.108, pem class_loss: 0.310, pem reg_loss: 0.016, consistency_loss: 0.03031, consistency_loss_ema: 0.03042, total_loss: 1.579
training 1621 (epoch 6): tem_loss: 1.106, pem class_loss: 0.311, pem reg_loss: 0.016, consistency_loss: 0.03037, consistency_loss_ema: 0.03053, total_loss: 1.579
training 1631 (epoch 6): tem_loss: 1.110, pem class_loss: 0.314, pem reg_loss: 0.016, consistency_loss: 0.03047, consistency_loss_ema: 0.03076, total_loss: 1.588
training 1641 (epoch 6): tem_loss: 1.111, pem class_loss: 0.314, pem reg_loss: 0.016, consistency_loss: 0.03050, consistency_loss_ema: 0.03094, total_loss: 1.590
training 1651 (epoch 6): tem_loss: 1.112, pem class_loss: 0.315, pem reg_loss: 0.017, consistency_loss: 0.03071, consistency_loss_ema: 0.03117, total_loss: 1.592
training 1661 (epoch 6): tem_loss: 1.112, pem class_loss: 0.316, pem reg_loss: 0.017, consistency_loss: 0.03081, consistency_loss_ema: 0.03140, total_loss: 1.595
training 1671 (epoch 6): tem_loss: 1.112, pem class_loss: 0.315, pem reg_loss: 0.017, consistency_loss: 0.03082, consistency_loss_ema: 0.03147, total_loss: 1.593
BMN training loss(epoch 6): tem_loss: 1.110, pem class_loss: 0.314, pem reg_loss: 0.017, total_loss: 1.590
BMN val loss(epoch 6): tem_loss: 1.178, pem class_loss: 0.363, pem reg_loss: 0.018, total_loss: 1.721
BMN val_ema loss(epoch 6): tem_loss: 1.178, pem class_loss: 0.343, pem reg_loss: 0.018, total_loss: 1.697
use Semi !!!
training 1681 (epoch 7): tem_loss: 1.278, pem class_loss: 0.365, pem reg_loss: 0.017, consistency_loss: 0.02709, consistency_loss_ema: 0.04367, total_loss: 1.809
training 1691 (epoch 7): tem_loss: 1.135, pem class_loss: 0.336, pem reg_loss: 0.017, consistency_loss: 0.02925, consistency_loss_ema: 0.03261, total_loss: 1.644
training 1701 (epoch 7): tem_loss: 1.108, pem class_loss: 0.300, pem reg_loss: 0.015, consistency_loss: 0.02810, consistency_loss_ema: 0.02940, total_loss: 1.560
training 1711 (epoch 7): tem_loss: 1.105, pem class_loss: 0.300, pem reg_loss: 0.015, consistency_loss: 0.02641, consistency_loss_ema: 0.02743, total_loss: 1.560
training 1721 (epoch 7): tem_loss: 1.104, pem class_loss: 0.310, pem reg_loss: 0.016, consistency_loss: 0.02536, consistency_loss_ema: 0.02614, total_loss: 1.570
training 1731 (epoch 7): tem_loss: 1.093, pem class_loss: 0.312, pem reg_loss: 0.016, consistency_loss: 0.02402, consistency_loss_ema: 0.02509, total_loss: 1.564
training 1741 (epoch 7): tem_loss: 1.092, pem class_loss: 0.305, pem reg_loss: 0.015, consistency_loss: 0.02321, consistency_loss_ema: 0.02455, total_loss: 1.553
training 1751 (epoch 7): tem_loss: 1.094, pem class_loss: 0.301, pem reg_loss: 0.015, consistency_loss: 0.02278, consistency_loss_ema: 0.02414, total_loss: 1.546
training 1761 (epoch 7): tem_loss: 1.095, pem class_loss: 0.301, pem reg_loss: 0.015, consistency_loss: 0.02265, consistency_loss_ema: 0.02382, total_loss: 1.548
training 1771 (epoch 7): tem_loss: 1.093, pem class_loss: 0.301, pem reg_loss: 0.015, consistency_loss: 0.02238, consistency_loss_ema: 0.02344, total_loss: 1.547
training 1781 (epoch 7): tem_loss: 1.090, pem class_loss: 0.299, pem reg_loss: 0.015, consistency_loss: 0.02215, consistency_loss_ema: 0.02319, total_loss: 1.542
training 1791 (epoch 7): tem_loss: 1.092, pem class_loss: 0.305, pem reg_loss: 0.016, consistency_loss: 0.02196, consistency_loss_ema: 0.02295, total_loss: 1.554
training 1801 (epoch 7): tem_loss: 1.093, pem class_loss: 0.307, pem reg_loss: 0.016, consistency_loss: 0.02176, consistency_loss_ema: 0.02270, total_loss: 1.557
training 1811 (epoch 7): tem_loss: 1.095, pem class_loss: 0.306, pem reg_loss: 0.016, consistency_loss: 0.02161, consistency_loss_ema: 0.02244, total_loss: 1.557
training 1821 (epoch 7): tem_loss: 1.098, pem class_loss: 0.306, pem reg_loss: 0.016, consistency_loss: 0.02138, consistency_loss_ema: 0.02225, total_loss: 1.561
training 1831 (epoch 7): tem_loss: 1.097, pem class_loss: 0.304, pem reg_loss: 0.016, consistency_loss: 0.02116, consistency_loss_ema: 0.02222, total_loss: 1.556
training 1841 (epoch 7): tem_loss: 1.095, pem class_loss: 0.300, pem reg_loss: 0.015, consistency_loss: 0.02104, consistency_loss_ema: 0.02211, total_loss: 1.549
training 1851 (epoch 7): tem_loss: 1.093, pem class_loss: 0.299, pem reg_loss: 0.015, consistency_loss: 0.02105, consistency_loss_ema: 0.02211, total_loss: 1.545
training 1861 (epoch 7): tem_loss: 1.092, pem class_loss: 0.301, pem reg_loss: 0.015, consistency_loss: 0.02090, consistency_loss_ema: 0.02213, total_loss: 1.547
training 1871 (epoch 7): tem_loss: 1.090, pem class_loss: 0.299, pem reg_loss: 0.015, consistency_loss: 0.02087, consistency_loss_ema: 0.02206, total_loss: 1.542
training 1881 (epoch 7): tem_loss: 1.091, pem class_loss: 0.298, pem reg_loss: 0.015, consistency_loss: 0.02087, consistency_loss_ema: 0.02202, total_loss: 1.542
training 1891 (epoch 7): tem_loss: 1.093, pem class_loss: 0.297, pem reg_loss: 0.015, consistency_loss: 0.02079, consistency_loss_ema: 0.02198, total_loss: 1.543
training 1901 (epoch 7): tem_loss: 1.093, pem class_loss: 0.296, pem reg_loss: 0.015, consistency_loss: 0.02086, consistency_loss_ema: 0.02197, total_loss: 1.542
training 1911 (epoch 7): tem_loss: 1.092, pem class_loss: 0.295, pem reg_loss: 0.015, consistency_loss: 0.02089, consistency_loss_ema: 0.02201, total_loss: 1.539
BMN training loss(epoch 7): tem_loss: 1.092, pem class_loss: 0.293, pem reg_loss: 0.015, total_loss: 1.536
BMN val loss(epoch 7): tem_loss: 1.176, pem class_loss: 0.353, pem reg_loss: 0.017, total_loss: 1.701
BMN val_ema loss(epoch 7): tem_loss: 1.176, pem class_loss: 0.348, pem reg_loss: 0.017, total_loss: 1.697
use Semi !!!
training 1921 (epoch 8): tem_loss: 1.098, pem class_loss: 0.313, pem reg_loss: 0.017, consistency_loss: 0.01867, consistency_loss_ema: 0.02159, total_loss: 1.578
training 1931 (epoch 8): tem_loss: 1.048, pem class_loss: 0.294, pem reg_loss: 0.016, consistency_loss: 0.02198, consistency_loss_ema: 0.02189, total_loss: 1.499
training 1941 (epoch 8): tem_loss: 1.054, pem class_loss: 0.281, pem reg_loss: 0.015, consistency_loss: 0.02097, consistency_loss_ema: 0.02205, total_loss: 1.482
training 1951 (epoch 8): tem_loss: 1.060, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02100, consistency_loss_ema: 0.02208, total_loss: 1.477
training 1961 (epoch 8): tem_loss: 1.064, pem class_loss: 0.277, pem reg_loss: 0.015, consistency_loss: 0.02134, consistency_loss_ema: 0.02225, total_loss: 1.488
training 1971 (epoch 8): tem_loss: 1.071, pem class_loss: 0.280, pem reg_loss: 0.015, consistency_loss: 0.02119, consistency_loss_ema: 0.02226, total_loss: 1.499
training 1981 (epoch 8): tem_loss: 1.066, pem class_loss: 0.278, pem reg_loss: 0.015, consistency_loss: 0.02109, consistency_loss_ema: 0.02230, total_loss: 1.492
training 1991 (epoch 8): tem_loss: 1.069, pem class_loss: 0.279, pem reg_loss: 0.015, consistency_loss: 0.02101, consistency_loss_ema: 0.02248, total_loss: 1.493
training 2001 (epoch 8): tem_loss: 1.068, pem class_loss: 0.276, pem reg_loss: 0.014, consistency_loss: 0.02118, consistency_loss_ema: 0.02278, total_loss: 1.487
training 2011 (epoch 8): tem_loss: 1.072, pem class_loss: 0.277, pem reg_loss: 0.014, consistency_loss: 0.02154, consistency_loss_ema: 0.02298, total_loss: 1.493
training 2021 (epoch 8): tem_loss: 1.073, pem class_loss: 0.276, pem reg_loss: 0.014, consistency_loss: 0.02132, consistency_loss_ema: 0.02304, total_loss: 1.491
training 2031 (epoch 8): tem_loss: 1.073, pem class_loss: 0.276, pem reg_loss: 0.014, consistency_loss: 0.02147, consistency_loss_ema: 0.02307, total_loss: 1.490
training 2041 (epoch 8): tem_loss: 1.077, pem class_loss: 0.278, pem reg_loss: 0.014, consistency_loss: 0.02172, consistency_loss_ema: 0.02304, total_loss: 1.499
training 2051 (epoch 8): tem_loss: 1.079, pem class_loss: 0.278, pem reg_loss: 0.014, consistency_loss: 0.02162, consistency_loss_ema: 0.02297, total_loss: 1.500
training 2061 (epoch 8): tem_loss: 1.077, pem class_loss: 0.278, pem reg_loss: 0.014, consistency_loss: 0.02168, consistency_loss_ema: 0.02293, total_loss: 1.500
training 2071 (epoch 8): tem_loss: 1.077, pem class_loss: 0.277, pem reg_loss: 0.014, consistency_loss: 0.02163, consistency_loss_ema: 0.02293, total_loss: 1.498
training 2081 (epoch 8): tem_loss: 1.076, pem class_loss: 0.276, pem reg_loss: 0.014, consistency_loss: 0.02178, consistency_loss_ema: 0.02307, total_loss: 1.497
training 2091 (epoch 8): tem_loss: 1.078, pem class_loss: 0.277, pem reg_loss: 0.015, consistency_loss: 0.02173, consistency_loss_ema: 0.02307, total_loss: 1.501
training 2101 (epoch 8): tem_loss: 1.078, pem class_loss: 0.279, pem reg_loss: 0.015, consistency_loss: 0.02188, consistency_loss_ema: 0.02309, total_loss: 1.502
training 2111 (epoch 8): tem_loss: 1.079, pem class_loss: 0.280, pem reg_loss: 0.015, consistency_loss: 0.02181, consistency_loss_ema: 0.02313, total_loss: 1.504
training 2121 (epoch 8): tem_loss: 1.079, pem class_loss: 0.281, pem reg_loss: 0.015, consistency_loss: 0.02167, consistency_loss_ema: 0.02316, total_loss: 1.504
training 2131 (epoch 8): tem_loss: 1.078, pem class_loss: 0.279, pem reg_loss: 0.014, consistency_loss: 0.02165, consistency_loss_ema: 0.02315, total_loss: 1.502
training 2141 (epoch 8): tem_loss: 1.081, pem class_loss: 0.279, pem reg_loss: 0.014, consistency_loss: 0.02175, consistency_loss_ema: 0.02321, total_loss: 1.504
training 2151 (epoch 8): tem_loss: 1.081, pem class_loss: 0.279, pem reg_loss: 0.014, consistency_loss: 0.02170, consistency_loss_ema: 0.02325, total_loss: 1.504
BMN training loss(epoch 8): tem_loss: 1.081, pem class_loss: 0.279, pem reg_loss: 0.014, total_loss: 1.505
BMN val loss(epoch 8): tem_loss: 1.176, pem class_loss: 0.355, pem reg_loss: 0.017, total_loss: 1.704
BMN val_ema loss(epoch 8): tem_loss: 1.176, pem class_loss: 0.354, pem reg_loss: 0.017, total_loss: 1.702
use Semi !!!
training 2161 (epoch 9): tem_loss: 1.249, pem class_loss: 0.431, pem reg_loss: 0.015, consistency_loss: 0.02610, consistency_loss_ema: 0.02576, total_loss: 1.834
training 2171 (epoch 9): tem_loss: 1.080, pem class_loss: 0.265, pem reg_loss: 0.012, consistency_loss: 0.02596, consistency_loss_ema: 0.02339, total_loss: 1.469
training 2181 (epoch 9): tem_loss: 1.095, pem class_loss: 0.279, pem reg_loss: 0.014, consistency_loss: 0.02463, consistency_loss_ema: 0.02385, total_loss: 1.516
training 2191 (epoch 9): tem_loss: 1.074, pem class_loss: 0.272, pem reg_loss: 0.014, consistency_loss: 0.02382, consistency_loss_ema: 0.02393, total_loss: 1.483
training 2201 (epoch 9): tem_loss: 1.073, pem class_loss: 0.265, pem reg_loss: 0.013, consistency_loss: 0.02366, consistency_loss_ema: 0.02424, total_loss: 1.471
training 2211 (epoch 9): tem_loss: 1.071, pem class_loss: 0.259, pem reg_loss: 0.014, consistency_loss: 0.02406, consistency_loss_ema: 0.02430, total_loss: 1.465
training 2221 (epoch 9): tem_loss: 1.075, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02374, consistency_loss_ema: 0.02438, total_loss: 1.490
training 2231 (epoch 9): tem_loss: 1.070, pem class_loss: 0.267, pem reg_loss: 0.014, consistency_loss: 0.02396, consistency_loss_ema: 0.02462, total_loss: 1.481
training 2241 (epoch 9): tem_loss: 1.076, pem class_loss: 0.269, pem reg_loss: 0.014, consistency_loss: 0.02395, consistency_loss_ema: 0.02449, total_loss: 1.487
training 2251 (epoch 9): tem_loss: 1.081, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02378, consistency_loss_ema: 0.02440, total_loss: 1.499
training 2261 (epoch 9): tem_loss: 1.077, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02367, consistency_loss_ema: 0.02431, total_loss: 1.492
training 2271 (epoch 9): tem_loss: 1.077, pem class_loss: 0.271, pem reg_loss: 0.014, consistency_loss: 0.02371, consistency_loss_ema: 0.02440, total_loss: 1.489
training 2281 (epoch 9): tem_loss: 1.073, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02368, consistency_loss_ema: 0.02443, total_loss: 1.483
training 2291 (epoch 9): tem_loss: 1.070, pem class_loss: 0.271, pem reg_loss: 0.014, consistency_loss: 0.02349, consistency_loss_ema: 0.02443, total_loss: 1.481
training 2301 (epoch 9): tem_loss: 1.073, pem class_loss: 0.274, pem reg_loss: 0.014, consistency_loss: 0.02372, consistency_loss_ema: 0.02442, total_loss: 1.490
training 2311 (epoch 9): tem_loss: 1.071, pem class_loss: 0.272, pem reg_loss: 0.014, consistency_loss: 0.02366, consistency_loss_ema: 0.02439, total_loss: 1.484
training 2321 (epoch 9): tem_loss: 1.070, pem class_loss: 0.274, pem reg_loss: 0.014, consistency_loss: 0.02393, consistency_loss_ema: 0.02440, total_loss: 1.488
training 2331 (epoch 9): tem_loss: 1.071, pem class_loss: 0.274, pem reg_loss: 0.014, consistency_loss: 0.02404, consistency_loss_ema: 0.02442, total_loss: 1.489
training 2341 (epoch 9): tem_loss: 1.068, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02407, consistency_loss_ema: 0.02445, total_loss: 1.485
training 2351 (epoch 9): tem_loss: 1.069, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02399, consistency_loss_ema: 0.02447, total_loss: 1.486
training 2361 (epoch 9): tem_loss: 1.074, pem class_loss: 0.274, pem reg_loss: 0.014, consistency_loss: 0.02406, consistency_loss_ema: 0.02458, total_loss: 1.490
training 2371 (epoch 9): tem_loss: 1.075, pem class_loss: 0.274, pem reg_loss: 0.014, consistency_loss: 0.02412, consistency_loss_ema: 0.02468, total_loss: 1.492
training 2381 (epoch 9): tem_loss: 1.075, pem class_loss: 0.275, pem reg_loss: 0.014, consistency_loss: 0.02406, consistency_loss_ema: 0.02477, total_loss: 1.494
training 2391 (epoch 9): tem_loss: 1.077, pem class_loss: 0.275, pem reg_loss: 0.014, consistency_loss: 0.02409, consistency_loss_ema: 0.02478, total_loss: 1.496
BMN training loss(epoch 9): tem_loss: 1.076, pem class_loss: 0.275, pem reg_loss: 0.014, total_loss: 1.494
BMN val loss(epoch 9): tem_loss: 1.177, pem class_loss: 0.362, pem reg_loss: 0.017, total_loss: 1.711
BMN val_ema loss(epoch 9): tem_loss: 1.176, pem class_loss: 0.358, pem reg_loss: 0.017, total_loss: 1.707
unlabel percent: 0.8
eval student model !!
load : ./checkpoint/Semi-base-0.8/BMN_checkpoint.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472617
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 66.0927876045523%
AR@1 is 0.33226381461675575
AR@5 is 0.47631975867269993
AR@10 is 0.5507747154805978
AR@100 is 0.7420814479638009
load : ./checkpoint/Semi-base-0.8/BMN_best.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472608
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 65.81098313451254%
AR@1 is 0.33193473193473194
AR@5 is 0.47353626765391466
AR@10 is 0.5491841491841492
AR@100 is 0.7406828465651996
eval teacher model !!
load : ./checkpoint/Semi-base-0.8/BMN_checkpoint_ema.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472707
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 66.14762100644454%
AR@1 is 0.332373508844097
AR@5 is 0.4773892773892775
AR@10 is 0.5522692993281229
AR@100 is 0.743123543123543
load : ./checkpoint/Semi-base-0.8/BMN_best_ema.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472604
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 65.5870423693953%
AR@1 is 0.3298642533936652
AR@5 is 0.47046482928835875
AR@10 is 0.5448786507610037
AR@100 is 0.7394076511723571
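The AR@N and AUC figures above follow the standard ActivityNet-1.3 proposal metric: for each video, take the top-N scored proposals and measure what fraction of ground-truth instances they recover, averaged over the ten tIoU thresholds listed. A minimal sketch of that computation, with illustrative helper names and a simplified any-match rule rather than the official evaluator's code:

```python
# Hedged sketch of the AR@N metric reported above. The greedy
# any-match rule and function names are illustrative assumptions.
def tiou(a, b):
    """Temporal IoU of two [start, end] segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def average_recall_at_n(ground_truth, proposals, n):
    """ground_truth / proposals: dicts video_id -> list of [start, end];
    proposals assumed sorted by score, best first."""
    thresholds = [0.5 + 0.05 * i for i in range(10)]  # 0.5 ... 0.95
    recalls = []
    for t in thresholds:
        matched = total = 0
        for vid, gts in ground_truth.items():
            props = proposals.get(vid, [])[:n]
            total += len(gts)
            matched += sum(any(tiou(g, p) >= t for p in props) for g in gts)
        recalls.append(matched / total if total else 0.0)
    return sum(recalls) / len(recalls)
```

The AUC line is then this quantity integrated over the average number of proposals per video (AN), typically up to 100.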
#
train subset video numbers: 2885
unlabeled subset video numbers: 6764
validation subset video numbers: 4728
use 0.3 label for training!!!
training batchsize : 24
unlabel_training batchsize : 24
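The per-line totals in this log are consistent with the usual BMN loss weighting, where the PEM regression term is scaled by 10 and the two consistency terms enter with a small schedule weight. A minimal sketch of how one logged line could be reassembled, assuming that weighting (the `consistency_weight` value is an illustrative placeholder, not read from the log):

```python
# Sketch of how the printed total_loss could be assembled, assuming the
# standard BMN weighting (reg term scaled by 10); consistency_weight is
# an assumed placeholder, not a value taken from this log.
def bmn_total_loss(tem_loss, pem_class_loss, pem_reg_loss,
                   consistency_loss=0.0, consistency_loss_ema=0.0,
                   consistency_weight=1.0):
    """Combine the per-component losses printed on each training line."""
    return (tem_loss + pem_class_loss + 10.0 * pem_reg_loss
            + consistency_weight * (consistency_loss + consistency_loss_ema))

# Epoch-0 summary of this run (tem 1.234, class 0.390, reg 0.023)
# lands close to the logged total of 1.853.
approx = bmn_total_loss(1.234, 0.390, 0.023)
```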
use Semi !!!
training 1 (epoch 0): tem_loss: 1.403, pem class_loss: 0.693, pem reg_loss: 0.040, consistency_loss: 0.00047, consistency_loss_ema: 0.00000, total_loss: 2.502
training 11 (epoch 0): tem_loss: 1.354, pem class_loss: 0.516, pem reg_loss: 0.035, consistency_loss: 0.00013, consistency_loss_ema: 0.00008, total_loss: 2.218
training 21 (epoch 0): tem_loss: 1.325, pem class_loss: 0.467, pem reg_loss: 0.030, consistency_loss: 0.00014, consistency_loss_ema: 0.00012, total_loss: 2.092
training 31 (epoch 0): tem_loss: 1.308, pem class_loss: 0.447, pem reg_loss: 0.028, consistency_loss: 0.00016, consistency_loss_ema: 0.00016, total_loss: 2.031
training 41 (epoch 0): tem_loss: 1.288, pem class_loss: 0.431, pem reg_loss: 0.026, consistency_loss: 0.00019, consistency_loss_ema: 0.00019, total_loss: 1.979
training 51 (epoch 0): tem_loss: 1.273, pem class_loss: 0.413, pem reg_loss: 0.025, consistency_loss: 0.00022, consistency_loss_ema: 0.00021, total_loss: 1.934
training 61 (epoch 0): tem_loss: 1.272, pem class_loss: 0.409, pem reg_loss: 0.024, consistency_loss: 0.00024, consistency_loss_ema: 0.00024, total_loss: 1.925
training 71 (epoch 0): tem_loss: 1.265, pem class_loss: 0.407, pem reg_loss: 0.024, consistency_loss: 0.00025, consistency_loss_ema: 0.00025, total_loss: 1.913
training 81 (epoch 0): tem_loss: 1.259, pem class_loss: 0.403, pem reg_loss: 0.024, consistency_loss: 0.00026, consistency_loss_ema: 0.00026, total_loss: 1.900
training 91 (epoch 0): tem_loss: 1.251, pem class_loss: 0.400, pem reg_loss: 0.024, consistency_loss: 0.00028, consistency_loss_ema: 0.00028, total_loss: 1.887
training 101 (epoch 0): tem_loss: 1.244, pem class_loss: 0.395, pem reg_loss: 0.023, consistency_loss: 0.00029, consistency_loss_ema: 0.00029, total_loss: 1.872
training 111 (epoch 0): tem_loss: 1.240, pem class_loss: 0.391, pem reg_loss: 0.023, consistency_loss: 0.00029, consistency_loss_ema: 0.00030, total_loss: 1.861
BMN training loss(epoch 0): tem_loss: 1.234, pem class_loss: 0.390, pem reg_loss: 0.023, total_loss: 1.853
BMN val loss(epoch 0): tem_loss: 1.198, pem class_loss: 0.360, pem reg_loss: 0.021, total_loss: 1.766
BMN val_ema loss(epoch 0): tem_loss: 1.190, pem class_loss: 0.364, pem reg_loss: 0.020, total_loss: 1.758
use Semi !!!
training 121 (epoch 1): tem_loss: 1.184, pem class_loss: 0.362, pem reg_loss: 0.022, consistency_loss: 0.00261, consistency_loss_ema: 0.00267, total_loss: 1.768
training 131 (epoch 1): tem_loss: 1.185, pem class_loss: 0.366, pem reg_loss: 0.020, consistency_loss: 0.00237, consistency_loss_ema: 0.00233, total_loss: 1.756
training 141 (epoch 1): tem_loss: 1.165, pem class_loss: 0.348, pem reg_loss: 0.020, consistency_loss: 0.00282, consistency_loss_ema: 0.00276, total_loss: 1.716
training 151 (epoch 1): tem_loss: 1.153, pem class_loss: 0.350, pem reg_loss: 0.020, consistency_loss: 0.00276, consistency_loss_ema: 0.00282, total_loss: 1.704
training 161 (epoch 1): tem_loss: 1.159, pem class_loss: 0.349, pem reg_loss: 0.020, consistency_loss: 0.00269, consistency_loss_ema: 0.00272, total_loss: 1.706
training 171 (epoch 1): tem_loss: 1.158, pem class_loss: 0.351, pem reg_loss: 0.020, consistency_loss: 0.00262, consistency_loss_ema: 0.00265, total_loss: 1.706
training 181 (epoch 1): tem_loss: 1.157, pem class_loss: 0.353, pem reg_loss: 0.020, consistency_loss: 0.00271, consistency_loss_ema: 0.00274, total_loss: 1.709
training 191 (epoch 1): tem_loss: 1.160, pem class_loss: 0.354, pem reg_loss: 0.020, consistency_loss: 0.00274, consistency_loss_ema: 0.00281, total_loss: 1.716
training 201 (epoch 1): tem_loss: 1.162, pem class_loss: 0.358, pem reg_loss: 0.020, consistency_loss: 0.00272, consistency_loss_ema: 0.00282, total_loss: 1.723
training 211 (epoch 1): tem_loss: 1.165, pem class_loss: 0.359, pem reg_loss: 0.020, consistency_loss: 0.00273, consistency_loss_ema: 0.00283, total_loss: 1.727
training 221 (epoch 1): tem_loss: 1.161, pem class_loss: 0.355, pem reg_loss: 0.020, consistency_loss: 0.00279, consistency_loss_ema: 0.00287, total_loss: 1.718
training 231 (epoch 1): tem_loss: 1.159, pem class_loss: 0.355, pem reg_loss: 0.020, consistency_loss: 0.00280, consistency_loss_ema: 0.00290, total_loss: 1.716
BMN training loss(epoch 1): tem_loss: 1.158, pem class_loss: 0.354, pem reg_loss: 0.020, total_loss: 1.713
BMN val loss(epoch 1): tem_loss: 1.187, pem class_loss: 0.346, pem reg_loss: 0.021, total_loss: 1.747
BMN val_ema loss(epoch 1): tem_loss: 1.179, pem class_loss: 0.346, pem reg_loss: 0.019, total_loss: 1.715
use Semi !!!
training 241 (epoch 2): tem_loss: 1.059, pem class_loss: 0.288, pem reg_loss: 0.023, consistency_loss: 0.01336, consistency_loss_ema: 0.01352, total_loss: 1.572
training 251 (epoch 2): tem_loss: 1.102, pem class_loss: 0.361, pem reg_loss: 0.021, consistency_loss: 0.01069, consistency_loss_ema: 0.01150, total_loss: 1.678
training 261 (epoch 2): tem_loss: 1.112, pem class_loss: 0.365, pem reg_loss: 0.021, consistency_loss: 0.01001, consistency_loss_ema: 0.01018, total_loss: 1.689
training 271 (epoch 2): tem_loss: 1.111, pem class_loss: 0.357, pem reg_loss: 0.021, consistency_loss: 0.00986, consistency_loss_ema: 0.01001, total_loss: 1.678
training 281 (epoch 2): tem_loss: 1.113, pem class_loss: 0.353, pem reg_loss: 0.021, consistency_loss: 0.00956, consistency_loss_ema: 0.00973, total_loss: 1.672
training 291 (epoch 2): tem_loss: 1.116, pem class_loss: 0.353, pem reg_loss: 0.021, consistency_loss: 0.00937, consistency_loss_ema: 0.00972, total_loss: 1.674
training 301 (epoch 2): tem_loss: 1.120, pem class_loss: 0.354, pem reg_loss: 0.020, consistency_loss: 0.00911, consistency_loss_ema: 0.00947, total_loss: 1.677
training 311 (epoch 2): tem_loss: 1.123, pem class_loss: 0.352, pem reg_loss: 0.020, consistency_loss: 0.00892, consistency_loss_ema: 0.00931, total_loss: 1.676
training 321 (epoch 2): tem_loss: 1.125, pem class_loss: 0.348, pem reg_loss: 0.020, consistency_loss: 0.00884, consistency_loss_ema: 0.00916, total_loss: 1.671
training 331 (epoch 2): tem_loss: 1.125, pem class_loss: 0.348, pem reg_loss: 0.020, consistency_loss: 0.00873, consistency_loss_ema: 0.00911, total_loss: 1.672
training 341 (epoch 2): tem_loss: 1.126, pem class_loss: 0.348, pem reg_loss: 0.020, consistency_loss: 0.00869, consistency_loss_ema: 0.00906, total_loss: 1.673
training 351 (epoch 2): tem_loss: 1.122, pem class_loss: 0.346, pem reg_loss: 0.020, consistency_loss: 0.00877, consistency_loss_ema: 0.00914, total_loss: 1.665
BMN training loss(epoch 2): tem_loss: 1.122, pem class_loss: 0.344, pem reg_loss: 0.020, total_loss: 1.663
BMN val loss(epoch 2): tem_loss: 1.166, pem class_loss: 0.352, pem reg_loss: 0.019, total_loss: 1.706
BMN val_ema loss(epoch 2): tem_loss: 1.160, pem class_loss: 0.342, pem reg_loss: 0.019, total_loss: 1.690
use Semi !!!
training 361 (epoch 3): tem_loss: 1.095, pem class_loss: 0.309, pem reg_loss: 0.018, consistency_loss: 0.02978, consistency_loss_ema: 0.02353, total_loss: 1.584
training 371 (epoch 3): tem_loss: 1.088, pem class_loss: 0.338, pem reg_loss: 0.018, consistency_loss: 0.02374, consistency_loss_ema: 0.02412, total_loss: 1.604
training 381 (epoch 3): tem_loss: 1.092, pem class_loss: 0.328, pem reg_loss: 0.018, consistency_loss: 0.02118, consistency_loss_ema: 0.02224, total_loss: 1.599
training 391 (epoch 3): tem_loss: 1.092, pem class_loss: 0.324, pem reg_loss: 0.018, consistency_loss: 0.02041, consistency_loss_ema: 0.02114, total_loss: 1.595
training 401 (epoch 3): tem_loss: 1.086, pem class_loss: 0.324, pem reg_loss: 0.018, consistency_loss: 0.01993, consistency_loss_ema: 0.02117, total_loss: 1.592
training 411 (epoch 3): tem_loss: 1.098, pem class_loss: 0.327, pem reg_loss: 0.018, consistency_loss: 0.02042, consistency_loss_ema: 0.02111, total_loss: 1.606
training 421 (epoch 3): tem_loss: 1.099, pem class_loss: 0.327, pem reg_loss: 0.018, consistency_loss: 0.01999, consistency_loss_ema: 0.02070, total_loss: 1.609
training 431 (epoch 3): tem_loss: 1.099, pem class_loss: 0.327, pem reg_loss: 0.018, consistency_loss: 0.01974, consistency_loss_ema: 0.02044, total_loss: 1.610
training 441 (epoch 3): tem_loss: 1.099, pem class_loss: 0.324, pem reg_loss: 0.018, consistency_loss: 0.01948, consistency_loss_ema: 0.02027, total_loss: 1.604
training 451 (epoch 3): tem_loss: 1.104, pem class_loss: 0.329, pem reg_loss: 0.018, consistency_loss: 0.01897, consistency_loss_ema: 0.02002, total_loss: 1.616
training 461 (epoch 3): tem_loss: 1.103, pem class_loss: 0.328, pem reg_loss: 0.018, consistency_loss: 0.01885, consistency_loss_ema: 0.01999, total_loss: 1.613
training 471 (epoch 3): tem_loss: 1.106, pem class_loss: 0.327, pem reg_loss: 0.018, consistency_loss: 0.01871, consistency_loss_ema: 0.01978, total_loss: 1.613
BMN training loss(epoch 3): tem_loss: 1.105, pem class_loss: 0.327, pem reg_loss: 0.018, total_loss: 1.612
BMN val loss(epoch 3): tem_loss: 1.164, pem class_loss: 0.339, pem reg_loss: 0.018, total_loss: 1.685
BMN val_ema loss(epoch 3): tem_loss: 1.161, pem class_loss: 0.339, pem reg_loss: 0.018, total_loss: 1.681
use Semi !!!
training 481 (epoch 4): tem_loss: 0.965, pem class_loss: 0.297, pem reg_loss: 0.015, consistency_loss: 0.03178, consistency_loss_ema: 0.03698, total_loss: 1.412
training 491 (epoch 4): tem_loss: 1.117, pem class_loss: 0.312, pem reg_loss: 0.017, consistency_loss: 0.03128, consistency_loss_ema: 0.03323, total_loss: 1.595
training 501 (epoch 4): tem_loss: 1.111, pem class_loss: 0.313, pem reg_loss: 0.017, consistency_loss: 0.02905, consistency_loss_ema: 0.03153, total_loss: 1.593
training 511 (epoch 4): tem_loss: 1.100, pem class_loss: 0.311, pem reg_loss: 0.017, consistency_loss: 0.02828, consistency_loss_ema: 0.03023, total_loss: 1.580
training 521 (epoch 4): tem_loss: 1.096, pem class_loss: 0.314, pem reg_loss: 0.017, consistency_loss: 0.02758, consistency_loss_ema: 0.02942, total_loss: 1.581
training 531 (epoch 4): tem_loss: 1.093, pem class_loss: 0.318, pem reg_loss: 0.017, consistency_loss: 0.02702, consistency_loss_ema: 0.02892, total_loss: 1.584
training 541 (epoch 4): tem_loss: 1.096, pem class_loss: 0.318, pem reg_loss: 0.017, consistency_loss: 0.02627, consistency_loss_ema: 0.02822, total_loss: 1.585
training 551 (epoch 4): tem_loss: 1.095, pem class_loss: 0.317, pem reg_loss: 0.017, consistency_loss: 0.02587, consistency_loss_ema: 0.02775, total_loss: 1.584
training 561 (epoch 4): tem_loss: 1.095, pem class_loss: 0.315, pem reg_loss: 0.017, consistency_loss: 0.02588, consistency_loss_ema: 0.02790, total_loss: 1.580
training 571 (epoch 4): tem_loss: 1.096, pem class_loss: 0.318, pem reg_loss: 0.017, consistency_loss: 0.02593, consistency_loss_ema: 0.02783, total_loss: 1.586
training 581 (epoch 4): tem_loss: 1.097, pem class_loss: 0.317, pem reg_loss: 0.017, consistency_loss: 0.02581, consistency_loss_ema: 0.02756, total_loss: 1.584
training 591 (epoch 4): tem_loss: 1.101, pem class_loss: 0.317, pem reg_loss: 0.017, consistency_loss: 0.02607, consistency_loss_ema: 0.02747, total_loss: 1.591
BMN training loss(epoch 4): tem_loss: 1.100, pem class_loss: 0.316, pem reg_loss: 0.017, total_loss: 1.589
BMN val loss(epoch 4): tem_loss: 1.170, pem class_loss: 0.337, pem reg_loss: 0.017, total_loss: 1.681
BMN val_ema loss(epoch 4): tem_loss: 1.167, pem class_loss: 0.335, pem reg_loss: 0.018, total_loss: 1.678
use Semi !!!
training 601 (epoch 5): tem_loss: 1.051, pem class_loss: 0.310, pem reg_loss: 0.019, consistency_loss: 0.04011, consistency_loss_ema: 0.03335, total_loss: 1.554
training 611 (epoch 5): tem_loss: 1.076, pem class_loss: 0.300, pem reg_loss: 0.016, consistency_loss: 0.03414, consistency_loss_ema: 0.03223, total_loss: 1.536
training 621 (epoch 5): tem_loss: 1.083, pem class_loss: 0.307, pem reg_loss: 0.017, consistency_loss: 0.03166, consistency_loss_ema: 0.03150, total_loss: 1.556
training 631 (epoch 5): tem_loss: 1.094, pem class_loss: 0.304, pem reg_loss: 0.017, consistency_loss: 0.03109, consistency_loss_ema: 0.03093, total_loss: 1.564
training 641 (epoch 5): tem_loss: 1.094, pem class_loss: 0.297, pem reg_loss: 0.016, consistency_loss: 0.03039, consistency_loss_ema: 0.03075, total_loss: 1.554
training 651 (epoch 5): tem_loss: 1.095, pem class_loss: 0.302, pem reg_loss: 0.017, consistency_loss: 0.03002, consistency_loss_ema: 0.03097, total_loss: 1.563
training 661 (epoch 5): tem_loss: 1.095, pem class_loss: 0.304, pem reg_loss: 0.016, consistency_loss: 0.02980, consistency_loss_ema: 0.03072, total_loss: 1.565
training 671 (epoch 5): tem_loss: 1.092, pem class_loss: 0.304, pem reg_loss: 0.016, consistency_loss: 0.02983, consistency_loss_ema: 0.03112, total_loss: 1.560
training 681 (epoch 5): tem_loss: 1.094, pem class_loss: 0.305, pem reg_loss: 0.017, consistency_loss: 0.02990, consistency_loss_ema: 0.03133, total_loss: 1.564
training 691 (epoch 5): tem_loss: 1.095, pem class_loss: 0.306, pem reg_loss: 0.017, consistency_loss: 0.02977, consistency_loss_ema: 0.03142, total_loss: 1.566
training 701 (epoch 5): tem_loss: 1.096, pem class_loss: 0.307, pem reg_loss: 0.017, consistency_loss: 0.02955, consistency_loss_ema: 0.03146, total_loss: 1.569
training 711 (epoch 5): tem_loss: 1.098, pem class_loss: 0.307, pem reg_loss: 0.017, consistency_loss: 0.02937, consistency_loss_ema: 0.03139, total_loss: 1.570
BMN training loss(epoch 5): tem_loss: 1.100, pem class_loss: 0.306, pem reg_loss: 0.017, total_loss: 1.571
BMN val loss(epoch 5): tem_loss: 1.172, pem class_loss: 0.336, pem reg_loss: 0.018, total_loss: 1.684
BMN val_ema loss(epoch 5): tem_loss: 1.170, pem class_loss: 0.335, pem reg_loss: 0.017, total_loss: 1.678
use Semi !!!
training 721 (epoch 6): tem_loss: 1.029, pem class_loss: 0.324, pem reg_loss: 0.017, consistency_loss: 0.02479, consistency_loss_ema: 0.02147, total_loss: 1.522
training 731 (epoch 6): tem_loss: 1.077, pem class_loss: 0.299, pem reg_loss: 0.017, consistency_loss: 0.02717, consistency_loss_ema: 0.02840, total_loss: 1.545
training 741 (epoch 6): tem_loss: 1.084, pem class_loss: 0.292, pem reg_loss: 0.016, consistency_loss: 0.02868, consistency_loss_ema: 0.03002, total_loss: 1.536
training 751 (epoch 6): tem_loss: 1.080, pem class_loss: 0.293, pem reg_loss: 0.016, consistency_loss: 0.02924, consistency_loss_ema: 0.03004, total_loss: 1.532
training 761 (epoch 6): tem_loss: 1.082, pem class_loss: 0.294, pem reg_loss: 0.016, consistency_loss: 0.02897, consistency_loss_ema: 0.02990, total_loss: 1.536
training 771 (epoch 6): tem_loss: 1.083, pem class_loss: 0.297, pem reg_loss: 0.016, consistency_loss: 0.02898, consistency_loss_ema: 0.03003, total_loss: 1.538
training 781 (epoch 6): tem_loss: 1.085, pem class_loss: 0.293, pem reg_loss: 0.016, consistency_loss: 0.02948, consistency_loss_ema: 0.03052, total_loss: 1.535
training 791 (epoch 6): tem_loss: 1.091, pem class_loss: 0.295, pem reg_loss: 0.016, consistency_loss: 0.02951, consistency_loss_ema: 0.03066, total_loss: 1.544
training 801 (epoch 6): tem_loss: 1.095, pem class_loss: 0.296, pem reg_loss: 0.016, consistency_loss: 0.02953, consistency_loss_ema: 0.03066, total_loss: 1.549
training 811 (epoch 6): tem_loss: 1.096, pem class_loss: 0.294, pem reg_loss: 0.016, consistency_loss: 0.02961, consistency_loss_ema: 0.03095, total_loss: 1.547
training 821 (epoch 6): tem_loss: 1.096, pem class_loss: 0.296, pem reg_loss: 0.016, consistency_loss: 0.02951, consistency_loss_ema: 0.03103, total_loss: 1.550
training 831 (epoch 6): tem_loss: 1.094, pem class_loss: 0.296, pem reg_loss: 0.016, consistency_loss: 0.02943, consistency_loss_ema: 0.03129, total_loss: 1.549
BMN training loss(epoch 6): tem_loss: 1.096, pem class_loss: 0.298, pem reg_loss: 0.016, total_loss: 1.553
BMN val loss(epoch 6): tem_loss: 1.170, pem class_loss: 0.334, pem reg_loss: 0.017, total_loss: 1.678
BMN val_ema loss(epoch 6): tem_loss: 1.172, pem class_loss: 0.335, pem reg_loss: 0.017, total_loss: 1.678
use Semi !!!
training 841 (epoch 7): tem_loss: 1.060, pem class_loss: 0.315, pem reg_loss: 0.020, consistency_loss: 0.03567, consistency_loss_ema: 0.03098, total_loss: 1.574
training 851 (epoch 7): tem_loss: 1.076, pem class_loss: 0.291, pem reg_loss: 0.017, consistency_loss: 0.02615, consistency_loss_ema: 0.02822, total_loss: 1.533
training 861 (epoch 7): tem_loss: 1.073, pem class_loss: 0.284, pem reg_loss: 0.016, consistency_loss: 0.02404, consistency_loss_ema: 0.02701, total_loss: 1.519
training 871 (epoch 7): tem_loss: 1.070, pem class_loss: 0.288, pem reg_loss: 0.016, consistency_loss: 0.02286, consistency_loss_ema: 0.02559, total_loss: 1.518
training 881 (epoch 7): tem_loss: 1.079, pem class_loss: 0.290, pem reg_loss: 0.016, consistency_loss: 0.02252, consistency_loss_ema: 0.02497, total_loss: 1.527
training 891 (epoch 7): tem_loss: 1.072, pem class_loss: 0.285, pem reg_loss: 0.015, consistency_loss: 0.02228, consistency_loss_ema: 0.02468, total_loss: 1.511
training 901 (epoch 7): tem_loss: 1.072, pem class_loss: 0.284, pem reg_loss: 0.015, consistency_loss: 0.02185, consistency_loss_ema: 0.02434, total_loss: 1.511
training 911 (epoch 7): tem_loss: 1.074, pem class_loss: 0.281, pem reg_loss: 0.015, consistency_loss: 0.02176, consistency_loss_ema: 0.02404, total_loss: 1.507
training 921 (epoch 7): tem_loss: 1.075, pem class_loss: 0.283, pem reg_loss: 0.015, consistency_loss: 0.02170, consistency_loss_ema: 0.02396, total_loss: 1.511
training 931 (epoch 7): tem_loss: 1.079, pem class_loss: 0.284, pem reg_loss: 0.015, consistency_loss: 0.02152, consistency_loss_ema: 0.02365, total_loss: 1.515
training 941 (epoch 7): tem_loss: 1.078, pem class_loss: 0.281, pem reg_loss: 0.015, consistency_loss: 0.02150, consistency_loss_ema: 0.02356, total_loss: 1.510
training 951 (epoch 7): tem_loss: 1.078, pem class_loss: 0.280, pem reg_loss: 0.015, consistency_loss: 0.02154, consistency_loss_ema: 0.02355, total_loss: 1.508
BMN training loss(epoch 7): tem_loss: 1.078, pem class_loss: 0.280, pem reg_loss: 0.015, total_loss: 1.506
BMN val loss(epoch 7): tem_loss: 1.169, pem class_loss: 0.341, pem reg_loss: 0.017, total_loss: 1.680
BMN val_ema loss(epoch 7): tem_loss: 1.170, pem class_loss: 0.339, pem reg_loss: 0.017, total_loss: 1.678
use Semi !!!
training 961 (epoch 8): tem_loss: 1.093, pem class_loss: 0.271, pem reg_loss: 0.013, consistency_loss: 0.01702, consistency_loss_ema: 0.02163, total_loss: 1.492
training 971 (epoch 8): tem_loss: 1.060, pem class_loss: 0.277, pem reg_loss: 0.015, consistency_loss: 0.02138, consistency_loss_ema: 0.02247, total_loss: 1.486
training 981 (epoch 8): tem_loss: 1.069, pem class_loss: 0.275, pem reg_loss: 0.014, consistency_loss: 0.02103, consistency_loss_ema: 0.02284, total_loss: 1.486
training 991 (epoch 8): tem_loss: 1.077, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02075, consistency_loss_ema: 0.02327, total_loss: 1.494
training 1001 (epoch 8): tem_loss: 1.073, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02127, consistency_loss_ema: 0.02392, total_loss: 1.485
training 1011 (epoch 8): tem_loss: 1.071, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02161, consistency_loss_ema: 0.02395, total_loss: 1.484
training 1021 (epoch 8): tem_loss: 1.070, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02172, consistency_loss_ema: 0.02382, total_loss: 1.482
training 1031 (epoch 8): tem_loss: 1.075, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02157, consistency_loss_ema: 0.02397, total_loss: 1.486
training 1041 (epoch 8): tem_loss: 1.074, pem class_loss: 0.271, pem reg_loss: 0.014, consistency_loss: 0.02157, consistency_loss_ema: 0.02400, total_loss: 1.487
training 1051 (epoch 8): tem_loss: 1.072, pem class_loss: 0.267, pem reg_loss: 0.014, consistency_loss: 0.02174, consistency_loss_ema: 0.02407, total_loss: 1.480
training 1061 (epoch 8): tem_loss: 1.073, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02187, consistency_loss_ema: 0.02422, total_loss: 1.486
training 1071 (epoch 8): tem_loss: 1.074, pem class_loss: 0.272, pem reg_loss: 0.014, consistency_loss: 0.02202, consistency_loss_ema: 0.02432, total_loss: 1.489
BMN training loss(epoch 8): tem_loss: 1.073, pem class_loss: 0.273, pem reg_loss: 0.014, total_loss: 1.490
BMN val loss(epoch 8): tem_loss: 1.171, pem class_loss: 0.343, pem reg_loss: 0.017, total_loss: 1.683
BMN val_ema loss(epoch 8): tem_loss: 1.169, pem class_loss: 0.342, pem reg_loss: 0.017, total_loss: 1.679
use Semi !!!
training 1081 (epoch 9): tem_loss: 1.052, pem class_loss: 0.245, pem reg_loss: 0.015, consistency_loss: 0.02485, consistency_loss_ema: 0.02282, total_loss: 1.447
training 1091 (epoch 9): tem_loss: 1.070, pem class_loss: 0.266, pem reg_loss: 0.014, consistency_loss: 0.02367, consistency_loss_ema: 0.02492, total_loss: 1.474
training 1101 (epoch 9): tem_loss: 1.068, pem class_loss: 0.267, pem reg_loss: 0.014, consistency_loss: 0.02321, consistency_loss_ema: 0.02510, total_loss: 1.478
training 1111 (epoch 9): tem_loss: 1.067, pem class_loss: 0.264, pem reg_loss: 0.014, consistency_loss: 0.02316, consistency_loss_ema: 0.02529, total_loss: 1.471
training 1121 (epoch 9): tem_loss: 1.069, pem class_loss: 0.267, pem reg_loss: 0.014, consistency_loss: 0.02327, consistency_loss_ema: 0.02559, total_loss: 1.478
training 1131 (epoch 9): tem_loss: 1.068, pem class_loss: 0.268, pem reg_loss: 0.014, consistency_loss: 0.02345, consistency_loss_ema: 0.02545, total_loss: 1.480
training 1141 (epoch 9): tem_loss: 1.069, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02343, consistency_loss_ema: 0.02552, total_loss: 1.483
training 1151 (epoch 9): tem_loss: 1.066, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02346, consistency_loss_ema: 0.02553, total_loss: 1.481
training 1161 (epoch 9): tem_loss: 1.065, pem class_loss: 0.269, pem reg_loss: 0.014, consistency_loss: 0.02336, consistency_loss_ema: 0.02559, total_loss: 1.477
training 1171 (epoch 9): tem_loss: 1.064, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02346, consistency_loss_ema: 0.02568, total_loss: 1.477
training 1181 (epoch 9): tem_loss: 1.066, pem class_loss: 0.269, pem reg_loss: 0.014, consistency_loss: 0.02351, consistency_loss_ema: 0.02577, total_loss: 1.479
training 1191 (epoch 9): tem_loss: 1.068, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02347, consistency_loss_ema: 0.02577, total_loss: 1.481
BMN training loss(epoch 9): tem_loss: 1.068, pem class_loss: 0.269, pem reg_loss: 0.014, total_loss: 1.479
BMN val loss(epoch 9): tem_loss: 1.170, pem class_loss: 0.342, pem reg_loss: 0.017, total_loss: 1.680
BMN val_ema loss(epoch 9): tem_loss: 1.170, pem class_loss: 0.344, pem reg_loss: 0.017, total_loss: 1.683
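The paired student/teacher checkpoints evaluated below (`BMN_checkpoint.pth.tar` vs `BMN_checkpoint_ema.pth.tar`) reflect a mean-teacher setup: after each student optimizer step, the teacher's weights are updated as an exponential moving average of the student's. A minimal sketch under that assumption (the decay values here are illustrative, not taken from this log):

```python
# Mean-teacher EMA update implied by the *_ema checkpoints: the teacher
# tracks a smoothed copy of the student. Decay values are assumptions.
def ema_update(teacher, student, decay=0.999):
    """teacher / student: dicts mapping parameter name -> float value.
    Mutates and returns the teacher dict."""
    for name, s_val in student.items():
        teacher[name] = decay * teacher[name] + (1.0 - decay) * s_val
    return teacher
```

A higher decay keeps the teacher smoother and slower-moving, which is one reason the `val_ema` losses above often sit slightly below the student's `val` losses.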
unlabel percent: 0.7
eval student model !!
load : ./checkpoint/Semi-base-0.7/BMN_checkpoint.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472570
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 66.52904840257783%
AR@1 is 0.333305909776498
AR@5 is 0.48201014671602904
AR@10 is 0.5594954065542301
AR@100 is 0.7460441519265049
load : ./checkpoint/Semi-base-0.7/BMN_best.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472593
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 66.05156999862884%
AR@1 is 0.33105717811600166
AR@5 is 0.47823940765117234
AR@10 is 0.5522007404360345
AR@100 is 0.7419717537364596
eval teacher model !!
load : ./checkpoint/Semi-base-0.7/BMN_checkpoint_ema.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472593
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 66.43150281091457%
AR@1 is 0.3334156040038393
AR@5 is 0.4840257781434253
AR@10 is 0.5598107774578363
AR@100 is 0.7447552447552448
load : ./checkpoint/Semi-base-0.7/BMN_best_ema.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472605
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 66.26360208419034%
AR@1 is 0.3324832030714384
AR@5 is 0.47951460304401483
AR@10 is 0.5562731386260799
AR@100 is 0.7437954202660084
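The evaluation blocks above report AR@k and the "Area Under the AR vs AN curve" for the listed tIoU thresholds. A minimal sketch (not the repo's own evaluation code) of how these quantities relate: recall is averaged over the thresholds [0.5, 0.55, …, 0.95], using the top-k ranked proposals per video; the AUC is then the mean of AR@AN over AN = 1..100, reported as a percentage. The toy segments below are illustrative values, not from the log.

```python
def tiou(seg_a, seg_b):
    """Temporal IoU between two [start, end] segments."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = max(seg_a[1], seg_b[1]) - min(seg_a[0], seg_b[0])
    return inter / union if union > 0 else 0.0

def average_recall(gts, proposals, k, thresholds):
    """Fraction of ground truths matched by a top-k proposal,
    averaged over the tIoU thresholds (this is AR@k)."""
    recalls = []
    for t in thresholds:
        hit = sum(1 for g in gts if any(tiou(g, p) >= t for p in proposals[:k]))
        recalls.append(hit / len(gts))
    return sum(recalls) / len(recalls)

thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
gts = [[0.0, 10.0], [20.0, 30.0]]                      # toy ground truths
proposals = [[0.5, 9.5], [21.0, 29.0], [40.0, 50.0]]   # ranked by score
print(average_recall(gts, proposals, 2, thresholds))   # → 0.8
```

AR@1/AR@5/AR@10/AR@100 in the log are this quantity at k = 1, 5, 10, 100, pooled over all 4728 validation videos.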
#
train subset video numbers: 3867
unlabeled subset video numbers: 5782
validation subset video numbers: 4728
use 0.4 label for training!!!
training batchsize : 24
unlabel_training batchsize : 24
use Semi !!!
training 1 (epoch 0): tem_loss: 1.391, pem class_loss: 0.693, pem reg_loss: 0.041, consistency_loss: 0.00032, consistency_loss_ema: 0.00000, total_loss: 2.491
training 11 (epoch 0): tem_loss: 1.363, pem class_loss: 0.504, pem reg_loss: 0.031, consistency_loss: 0.00012, consistency_loss_ema: 0.00007, total_loss: 2.180
training 21 (epoch 0): tem_loss: 1.323, pem class_loss: 0.437, pem reg_loss: 0.027, consistency_loss: 0.00014, consistency_loss_ema: 0.00011, total_loss: 2.033
training 31 (epoch 0): tem_loss: 1.304, pem class_loss: 0.438, pem reg_loss: 0.026, consistency_loss: 0.00017, consistency_loss_ema: 0.00016, total_loss: 2.004
training 41 (epoch 0): tem_loss: 1.282, pem class_loss: 0.424, pem reg_loss: 0.025, consistency_loss: 0.00019, consistency_loss_ema: 0.00018, total_loss: 1.960
training 51 (epoch 0): tem_loss: 1.273, pem class_loss: 0.416, pem reg_loss: 0.025, consistency_loss: 0.00022, consistency_loss_ema: 0.00022, total_loss: 1.938
training 61 (epoch 0): tem_loss: 1.266, pem class_loss: 0.417, pem reg_loss: 0.025, consistency_loss: 0.00025, consistency_loss_ema: 0.00025, total_loss: 1.930
training 71 (epoch 0): tem_loss: 1.259, pem class_loss: 0.407, pem reg_loss: 0.024, consistency_loss: 0.00026, consistency_loss_ema: 0.00026, total_loss: 1.908
training 81 (epoch 0): tem_loss: 1.253, pem class_loss: 0.400, pem reg_loss: 0.024, consistency_loss: 0.00028, consistency_loss_ema: 0.00028, total_loss: 1.890
training 91 (epoch 0): tem_loss: 1.247, pem class_loss: 0.395, pem reg_loss: 0.023, consistency_loss: 0.00028, consistency_loss_ema: 0.00029, total_loss: 1.875
training 101 (epoch 0): tem_loss: 1.243, pem class_loss: 0.390, pem reg_loss: 0.023, consistency_loss: 0.00029, consistency_loss_ema: 0.00029, total_loss: 1.862
training 111 (epoch 0): tem_loss: 1.238, pem class_loss: 0.388, pem reg_loss: 0.023, consistency_loss: 0.00030, consistency_loss_ema: 0.00031, total_loss: 1.854
training 121 (epoch 0): tem_loss: 1.234, pem class_loss: 0.384, pem reg_loss: 0.022, consistency_loss: 0.00031, consistency_loss_ema: 0.00032, total_loss: 1.842
training 131 (epoch 0): tem_loss: 1.232, pem class_loss: 0.383, pem reg_loss: 0.022, consistency_loss: 0.00032, consistency_loss_ema: 0.00032, total_loss: 1.837
training 141 (epoch 0): tem_loss: 1.232, pem class_loss: 0.383, pem reg_loss: 0.022, consistency_loss: 0.00032, consistency_loss_ema: 0.00033, total_loss: 1.838
training 151 (epoch 0): tem_loss: 1.228, pem class_loss: 0.382, pem reg_loss: 0.022, consistency_loss: 0.00033, consistency_loss_ema: 0.00034, total_loss: 1.831
training 161 (epoch 0): tem_loss: 1.224, pem class_loss: 0.381, pem reg_loss: 0.022, consistency_loss: 0.00033, consistency_loss_ema: 0.00034, total_loss: 1.826
BMN training loss(epoch 0): tem_loss: 1.224, pem class_loss: 0.381, pem reg_loss: 0.022, total_loss: 1.826
BMN val loss(epoch 0): tem_loss: 1.187, pem class_loss: 0.356, pem reg_loss: 0.020, total_loss: 1.748
BMN val_ema loss(epoch 0): tem_loss: 1.187, pem class_loss: 0.355, pem reg_loss: 0.020, total_loss: 1.738
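The per-epoch totals above are consistent with BMN's standard loss weighting, total = tem_loss + pem class_loss + 10 × pem reg_loss (the 10× regression weight is an assumption from the BMN paper, not stated in the log; the ramped consistency terms are too small to recover from the rounded values). A quick check against the epoch 0 summary:

```python
def bmn_total(tem, pem_cls, pem_reg, reg_weight=10.0):
    """Assumed BMN loss combination: reg_weight is hypothetical here."""
    return tem + pem_cls + reg_weight * pem_reg

# Epoch 0 summary values from the log; result ≈ 1.825 vs. the logged 1.826
# (the difference is rounding of the logged inputs).
print(round(bmn_total(1.224, 0.381, 0.022), 3))
```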
use Semi !!!
training 162 (epoch 1): tem_loss: 1.137, pem class_loss: 0.327, pem reg_loss: 0.017, consistency_loss: 0.00222, consistency_loss_ema: 0.00196, total_loss: 1.635
training 172 (epoch 1): tem_loss: 1.126, pem class_loss: 0.346, pem reg_loss: 0.020, consistency_loss: 0.00258, consistency_loss_ema: 0.00271, total_loss: 1.677
training 182 (epoch 1): tem_loss: 1.145, pem class_loss: 0.367, pem reg_loss: 0.021, consistency_loss: 0.00232, consistency_loss_ema: 0.00248, total_loss: 1.723
training 192 (epoch 1): tem_loss: 1.144, pem class_loss: 0.363, pem reg_loss: 0.021, consistency_loss: 0.00248, consistency_loss_ema: 0.00260, total_loss: 1.715
training 202 (epoch 1): tem_loss: 1.153, pem class_loss: 0.356, pem reg_loss: 0.020, consistency_loss: 0.00249, consistency_loss_ema: 0.00263, total_loss: 1.713
training 212 (epoch 1): tem_loss: 1.150, pem class_loss: 0.352, pem reg_loss: 0.020, consistency_loss: 0.00241, consistency_loss_ema: 0.00250, total_loss: 1.702
training 222 (epoch 1): tem_loss: 1.149, pem class_loss: 0.351, pem reg_loss: 0.020, consistency_loss: 0.00241, consistency_loss_ema: 0.00246, total_loss: 1.698
training 232 (epoch 1): tem_loss: 1.149, pem class_loss: 0.348, pem reg_loss: 0.020, consistency_loss: 0.00239, consistency_loss_ema: 0.00242, total_loss: 1.696
training 242 (epoch 1): tem_loss: 1.145, pem class_loss: 0.347, pem reg_loss: 0.020, consistency_loss: 0.00236, consistency_loss_ema: 0.00240, total_loss: 1.691
training 252 (epoch 1): tem_loss: 1.145, pem class_loss: 0.343, pem reg_loss: 0.020, consistency_loss: 0.00240, consistency_loss_ema: 0.00246, total_loss: 1.684
training 262 (epoch 1): tem_loss: 1.146, pem class_loss: 0.343, pem reg_loss: 0.020, consistency_loss: 0.00245, consistency_loss_ema: 0.00254, total_loss: 1.685
training 272 (epoch 1): tem_loss: 1.147, pem class_loss: 0.344, pem reg_loss: 0.020, consistency_loss: 0.00249, consistency_loss_ema: 0.00259, total_loss: 1.688
training 282 (epoch 1): tem_loss: 1.146, pem class_loss: 0.345, pem reg_loss: 0.020, consistency_loss: 0.00251, consistency_loss_ema: 0.00260, total_loss: 1.688
training 292 (epoch 1): tem_loss: 1.145, pem class_loss: 0.344, pem reg_loss: 0.020, consistency_loss: 0.00253, consistency_loss_ema: 0.00263, total_loss: 1.685
training 302 (epoch 1): tem_loss: 1.145, pem class_loss: 0.342, pem reg_loss: 0.020, consistency_loss: 0.00253, consistency_loss_ema: 0.00266, total_loss: 1.683
training 312 (epoch 1): tem_loss: 1.146, pem class_loss: 0.344, pem reg_loss: 0.020, consistency_loss: 0.00258, consistency_loss_ema: 0.00272, total_loss: 1.686
training 322 (epoch 1): tem_loss: 1.144, pem class_loss: 0.345, pem reg_loss: 0.020, consistency_loss: 0.00258, consistency_loss_ema: 0.00270, total_loss: 1.687
BMN training loss(epoch 1): tem_loss: 1.144, pem class_loss: 0.345, pem reg_loss: 0.020, total_loss: 1.687
BMN val loss(epoch 1): tem_loss: 1.168, pem class_loss: 0.345, pem reg_loss: 0.019, total_loss: 1.705
BMN val_ema loss(epoch 1): tem_loss: 1.158, pem class_loss: 0.345, pem reg_loss: 0.019, total_loss: 1.696
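The val_ema lines and the "teacher model" checkpoints come from a mean-teacher setup: the teacher's weights are an exponential moving average of the student's. A minimal sketch of that update, assuming a decay of 0.999 (the actual decay is not recorded in this log):

```python
def ema_update(teacher, student, decay=0.999):
    """teacher <- decay * teacher + (1 - decay) * student, per parameter.
    Dicts of floats stand in for model state_dicts; decay=0.999 is an assumption."""
    return {k: decay * teacher[k] + (1.0 - decay) * student[k] for k in teacher}

teacher = {"w": 1.0}
student = {"w": 0.0}
print(ema_update(teacher, student))  # → {'w': 0.999}
```

Applied after every student optimizer step, this gives the smoother "teacher" whose losses (val_ema) track slightly below the student's in the summaries above.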
use Semi !!!
training 323 (epoch 2): tem_loss: 1.071, pem class_loss: 0.303, pem reg_loss: 0.019, consistency_loss: 0.00863, consistency_loss_ema: 0.01096, total_loss: 1.564
training 333 (epoch 2): tem_loss: 1.105, pem class_loss: 0.341, pem reg_loss: 0.020, consistency_loss: 0.00890, consistency_loss_ema: 0.00926, total_loss: 1.645
training 343 (epoch 2): tem_loss: 1.101, pem class_loss: 0.350, pem reg_loss: 0.021, consistency_loss: 0.00871, consistency_loss_ema: 0.00944, total_loss: 1.658
training 353 (epoch 2): tem_loss: 1.100, pem class_loss: 0.338, pem reg_loss: 0.020, consistency_loss: 0.00860, consistency_loss_ema: 0.00935, total_loss: 1.636
training 363 (epoch 2): tem_loss: 1.101, pem class_loss: 0.336, pem reg_loss: 0.020, consistency_loss: 0.00849, consistency_loss_ema: 0.00910, total_loss: 1.633
training 373 (epoch 2): tem_loss: 1.095, pem class_loss: 0.328, pem reg_loss: 0.019, consistency_loss: 0.00856, consistency_loss_ema: 0.00909, total_loss: 1.615
training 383 (epoch 2): tem_loss: 1.093, pem class_loss: 0.327, pem reg_loss: 0.019, consistency_loss: 0.00837, consistency_loss_ema: 0.00890, total_loss: 1.612
training 393 (epoch 2): tem_loss: 1.101, pem class_loss: 0.327, pem reg_loss: 0.019, consistency_loss: 0.00846, consistency_loss_ema: 0.00888, total_loss: 1.619
training 403 (epoch 2): tem_loss: 1.099, pem class_loss: 0.324, pem reg_loss: 0.019, consistency_loss: 0.00873, consistency_loss_ema: 0.00894, total_loss: 1.614
training 413 (epoch 2): tem_loss: 1.103, pem class_loss: 0.326, pem reg_loss: 0.019, consistency_loss: 0.00897, consistency_loss_ema: 0.00913, total_loss: 1.620
training 423 (epoch 2): tem_loss: 1.105, pem class_loss: 0.327, pem reg_loss: 0.019, consistency_loss: 0.00887, consistency_loss_ema: 0.00905, total_loss: 1.623
training 433 (epoch 2): tem_loss: 1.101, pem class_loss: 0.327, pem reg_loss: 0.019, consistency_loss: 0.00879, consistency_loss_ema: 0.00899, total_loss: 1.617
training 443 (epoch 2): tem_loss: 1.102, pem class_loss: 0.328, pem reg_loss: 0.019, consistency_loss: 0.00888, consistency_loss_ema: 0.00910, total_loss: 1.619
training 453 (epoch 2): tem_loss: 1.104, pem class_loss: 0.328, pem reg_loss: 0.019, consistency_loss: 0.00894, consistency_loss_ema: 0.00928, total_loss: 1.620
training 463 (epoch 2): tem_loss: 1.108, pem class_loss: 0.330, pem reg_loss: 0.019, consistency_loss: 0.00891, consistency_loss_ema: 0.00929, total_loss: 1.627
training 473 (epoch 2): tem_loss: 1.110, pem class_loss: 0.330, pem reg_loss: 0.019, consistency_loss: 0.00887, consistency_loss_ema: 0.00923, total_loss: 1.628
training 483 (epoch 2): tem_loss: 1.111, pem class_loss: 0.331, pem reg_loss: 0.019, consistency_loss: 0.00894, consistency_loss_ema: 0.00927, total_loss: 1.631
BMN training loss(epoch 2): tem_loss: 1.111, pem class_loss: 0.331, pem reg_loss: 0.019, total_loss: 1.631
BMN val loss(epoch 2): tem_loss: 1.154, pem class_loss: 0.343, pem reg_loss: 0.018, total_loss: 1.680
BMN val_ema loss(epoch 2): tem_loss: 1.148, pem class_loss: 0.336, pem reg_loss: 0.018, total_loss: 1.668
use Semi !!!
training 484 (epoch 3): tem_loss: 1.067, pem class_loss: 0.264, pem reg_loss: 0.014, consistency_loss: 0.01724, consistency_loss_ema: 0.02662, total_loss: 1.473
training 494 (epoch 3): tem_loss: 1.047, pem class_loss: 0.290, pem reg_loss: 0.016, consistency_loss: 0.02178, consistency_loss_ema: 0.02403, total_loss: 1.494
training 504 (epoch 3): tem_loss: 1.067, pem class_loss: 0.307, pem reg_loss: 0.017, consistency_loss: 0.02057, consistency_loss_ema: 0.02202, total_loss: 1.544
training 514 (epoch 3): tem_loss: 1.075, pem class_loss: 0.311, pem reg_loss: 0.017, consistency_loss: 0.02031, consistency_loss_ema: 0.02136, total_loss: 1.556
training 524 (epoch 3): tem_loss: 1.076, pem class_loss: 0.310, pem reg_loss: 0.017, consistency_loss: 0.02002, consistency_loss_ema: 0.02088, total_loss: 1.557
training 534 (epoch 3): tem_loss: 1.077, pem class_loss: 0.310, pem reg_loss: 0.017, consistency_loss: 0.01950, consistency_loss_ema: 0.02039, total_loss: 1.557
training 544 (epoch 3): tem_loss: 1.079, pem class_loss: 0.314, pem reg_loss: 0.017, consistency_loss: 0.01896, consistency_loss_ema: 0.01969, total_loss: 1.567
training 554 (epoch 3): tem_loss: 1.084, pem class_loss: 0.317, pem reg_loss: 0.017, consistency_loss: 0.01892, consistency_loss_ema: 0.01955, total_loss: 1.573
training 564 (epoch 3): tem_loss: 1.086, pem class_loss: 0.319, pem reg_loss: 0.017, consistency_loss: 0.01878, consistency_loss_ema: 0.01947, total_loss: 1.579
training 574 (epoch 3): tem_loss: 1.087, pem class_loss: 0.320, pem reg_loss: 0.018, consistency_loss: 0.01885, consistency_loss_ema: 0.01966, total_loss: 1.582
training 584 (epoch 3): tem_loss: 1.092, pem class_loss: 0.320, pem reg_loss: 0.018, consistency_loss: 0.01886, consistency_loss_ema: 0.01979, total_loss: 1.587
training 594 (epoch 3): tem_loss: 1.091, pem class_loss: 0.320, pem reg_loss: 0.018, consistency_loss: 0.01864, consistency_loss_ema: 0.01961, total_loss: 1.586
training 604 (epoch 3): tem_loss: 1.091, pem class_loss: 0.321, pem reg_loss: 0.018, consistency_loss: 0.01871, consistency_loss_ema: 0.01950, total_loss: 1.589
training 614 (epoch 3): tem_loss: 1.093, pem class_loss: 0.320, pem reg_loss: 0.018, consistency_loss: 0.01871, consistency_loss_ema: 0.01943, total_loss: 1.590
training 624 (epoch 3): tem_loss: 1.093, pem class_loss: 0.319, pem reg_loss: 0.018, consistency_loss: 0.01861, consistency_loss_ema: 0.01950, total_loss: 1.589
training 634 (epoch 3): tem_loss: 1.096, pem class_loss: 0.320, pem reg_loss: 0.018, consistency_loss: 0.01848, consistency_loss_ema: 0.01931, total_loss: 1.594
training 644 (epoch 3): tem_loss: 1.097, pem class_loss: 0.319, pem reg_loss: 0.018, consistency_loss: 0.01855, consistency_loss_ema: 0.01932, total_loss: 1.594
BMN training loss(epoch 3): tem_loss: 1.097, pem class_loss: 0.319, pem reg_loss: 0.018, total_loss: 1.594
BMN val loss(epoch 3): tem_loss: 1.150, pem class_loss: 0.339, pem reg_loss: 0.019, total_loss: 1.678
BMN val_ema loss(epoch 3): tem_loss: 1.148, pem class_loss: 0.332, pem reg_loss: 0.018, total_loss: 1.659
use Semi !!!
training 645 (epoch 4): tem_loss: 1.098, pem class_loss: 0.268, pem reg_loss: 0.017, consistency_loss: 0.03531, consistency_loss_ema: 0.02828, total_loss: 1.538
training 655 (epoch 4): tem_loss: 1.068, pem class_loss: 0.302, pem reg_loss: 0.017, consistency_loss: 0.03005, consistency_loss_ema: 0.03004, total_loss: 1.543
training 665 (epoch 4): tem_loss: 1.072, pem class_loss: 0.295, pem reg_loss: 0.017, consistency_loss: 0.02884, consistency_loss_ema: 0.02939, total_loss: 1.538
training 675 (epoch 4): tem_loss: 1.084, pem class_loss: 0.304, pem reg_loss: 0.017, consistency_loss: 0.02790, consistency_loss_ema: 0.02874, total_loss: 1.560
training 685 (epoch 4): tem_loss: 1.080, pem class_loss: 0.302, pem reg_loss: 0.017, consistency_loss: 0.02804, consistency_loss_ema: 0.02923, total_loss: 1.552
training 695 (epoch 4): tem_loss: 1.085, pem class_loss: 0.305, pem reg_loss: 0.017, consistency_loss: 0.02756, consistency_loss_ema: 0.02919, total_loss: 1.560
training 705 (epoch 4): tem_loss: 1.087, pem class_loss: 0.304, pem reg_loss: 0.017, consistency_loss: 0.02724, consistency_loss_ema: 0.02901, total_loss: 1.561
training 715 (epoch 4): tem_loss: 1.088, pem class_loss: 0.305, pem reg_loss: 0.017, consistency_loss: 0.02729, consistency_loss_ema: 0.02920, total_loss: 1.563
training 725 (epoch 4): tem_loss: 1.091, pem class_loss: 0.306, pem reg_loss: 0.017, consistency_loss: 0.02712, consistency_loss_ema: 0.02876, total_loss: 1.567
training 735 (epoch 4): tem_loss: 1.092, pem class_loss: 0.305, pem reg_loss: 0.017, consistency_loss: 0.02721, consistency_loss_ema: 0.02872, total_loss: 1.567
training 745 (epoch 4): tem_loss: 1.094, pem class_loss: 0.308, pem reg_loss: 0.017, consistency_loss: 0.02729, consistency_loss_ema: 0.02869, total_loss: 1.572
training 755 (epoch 4): tem_loss: 1.095, pem class_loss: 0.307, pem reg_loss: 0.017, consistency_loss: 0.02717, consistency_loss_ema: 0.02850, total_loss: 1.572
training 765 (epoch 4): tem_loss: 1.095, pem class_loss: 0.308, pem reg_loss: 0.017, consistency_loss: 0.02685, consistency_loss_ema: 0.02820, total_loss: 1.574
training 775 (epoch 4): tem_loss: 1.096, pem class_loss: 0.311, pem reg_loss: 0.017, consistency_loss: 0.02666, consistency_loss_ema: 0.02801, total_loss: 1.579
training 785 (epoch 4): tem_loss: 1.095, pem class_loss: 0.312, pem reg_loss: 0.017, consistency_loss: 0.02666, consistency_loss_ema: 0.02799, total_loss: 1.579
training 795 (epoch 4): tem_loss: 1.096, pem class_loss: 0.313, pem reg_loss: 0.017, consistency_loss: 0.02656, consistency_loss_ema: 0.02787, total_loss: 1.580
training 805 (epoch 4): tem_loss: 1.097, pem class_loss: 0.311, pem reg_loss: 0.017, consistency_loss: 0.02638, consistency_loss_ema: 0.02773, total_loss: 1.579
BMN training loss(epoch 4): tem_loss: 1.097, pem class_loss: 0.311, pem reg_loss: 0.017, total_loss: 1.579
BMN val loss(epoch 4): tem_loss: 1.154, pem class_loss: 0.337, pem reg_loss: 0.018, total_loss: 1.669
BMN val_ema loss(epoch 4): tem_loss: 1.153, pem class_loss: 0.329, pem reg_loss: 0.017, total_loss: 1.657
use Semi !!!
training 806 (epoch 5): tem_loss: 1.183, pem class_loss: 0.364, pem reg_loss: 0.021, consistency_loss: 0.03205, consistency_loss_ema: 0.03401, total_loss: 1.760
training 816 (epoch 5): tem_loss: 1.107, pem class_loss: 0.312, pem reg_loss: 0.017, consistency_loss: 0.02836, consistency_loss_ema: 0.02788, total_loss: 1.589
training 826 (epoch 5): tem_loss: 1.099, pem class_loss: 0.300, pem reg_loss: 0.016, consistency_loss: 0.02751, consistency_loss_ema: 0.02930, total_loss: 1.559
training 836 (epoch 5): tem_loss: 1.096, pem class_loss: 0.302, pem reg_loss: 0.016, consistency_loss: 0.02736, consistency_loss_ema: 0.02899, total_loss: 1.558
training 846 (epoch 5): tem_loss: 1.096, pem class_loss: 0.308, pem reg_loss: 0.017, consistency_loss: 0.02734, consistency_loss_ema: 0.02907, total_loss: 1.569
training 856 (epoch 5): tem_loss: 1.090, pem class_loss: 0.307, pem reg_loss: 0.016, consistency_loss: 0.02751, consistency_loss_ema: 0.02943, total_loss: 1.562
training 866 (epoch 5): tem_loss: 1.091, pem class_loss: 0.306, pem reg_loss: 0.016, consistency_loss: 0.02783, consistency_loss_ema: 0.03003, total_loss: 1.562
training 876 (epoch 5): tem_loss: 1.089, pem class_loss: 0.302, pem reg_loss: 0.016, consistency_loss: 0.02820, consistency_loss_ema: 0.03063, total_loss: 1.553
training 886 (epoch 5): tem_loss: 1.095, pem class_loss: 0.304, pem reg_loss: 0.016, consistency_loss: 0.02795, consistency_loss_ema: 0.03053, total_loss: 1.562
training 896 (epoch 5): tem_loss: 1.092, pem class_loss: 0.303, pem reg_loss: 0.016, consistency_loss: 0.02796, consistency_loss_ema: 0.03056, total_loss: 1.559
training 906 (epoch 5): tem_loss: 1.092, pem class_loss: 0.300, pem reg_loss: 0.016, consistency_loss: 0.02808, consistency_loss_ema: 0.03060, total_loss: 1.554
training 916 (epoch 5): tem_loss: 1.093, pem class_loss: 0.302, pem reg_loss: 0.016, consistency_loss: 0.02811, consistency_loss_ema: 0.03068, total_loss: 1.558
training 926 (epoch 5): tem_loss: 1.094, pem class_loss: 0.302, pem reg_loss: 0.016, consistency_loss: 0.02825, consistency_loss_ema: 0.03065, total_loss: 1.559
training 936 (epoch 5): tem_loss: 1.097, pem class_loss: 0.303, pem reg_loss: 0.016, consistency_loss: 0.02835, consistency_loss_ema: 0.03062, total_loss: 1.562
training 946 (epoch 5): tem_loss: 1.098, pem class_loss: 0.302, pem reg_loss: 0.016, consistency_loss: 0.02845, consistency_loss_ema: 0.03050, total_loss: 1.561
training 956 (epoch 5): tem_loss: 1.097, pem class_loss: 0.302, pem reg_loss: 0.016, consistency_loss: 0.02831, consistency_loss_ema: 0.03043, total_loss: 1.562
training 966 (epoch 5): tem_loss: 1.096, pem class_loss: 0.301, pem reg_loss: 0.016, consistency_loss: 0.02841, consistency_loss_ema: 0.03035, total_loss: 1.560
BMN training loss(epoch 5): tem_loss: 1.096, pem class_loss: 0.301, pem reg_loss: 0.016, total_loss: 1.560
BMN val loss(epoch 5): tem_loss: 1.159, pem class_loss: 0.336, pem reg_loss: 0.018, total_loss: 1.671
BMN val_ema loss(epoch 5): tem_loss: 1.157, pem class_loss: 0.329, pem reg_loss: 0.017, total_loss: 1.656
use Semi !!!
training 967 (epoch 6): tem_loss: 1.042, pem class_loss: 0.299, pem reg_loss: 0.015, consistency_loss: 0.02652, consistency_loss_ema: 0.03131, total_loss: 1.495
training 977 (epoch 6): tem_loss: 1.068, pem class_loss: 0.280, pem reg_loss: 0.014, consistency_loss: 0.02877, consistency_loss_ema: 0.02939, total_loss: 1.490
training 987 (epoch 6): tem_loss: 1.084, pem class_loss: 0.295, pem reg_loss: 0.015, consistency_loss: 0.02835, consistency_loss_ema: 0.02926, total_loss: 1.528
training 997 (epoch 6): tem_loss: 1.092, pem class_loss: 0.290, pem reg_loss: 0.015, consistency_loss: 0.02847, consistency_loss_ema: 0.03059, total_loss: 1.528
training 1007 (epoch 6): tem_loss: 1.090, pem class_loss: 0.296, pem reg_loss: 0.015, consistency_loss: 0.02930, consistency_loss_ema: 0.03091, total_loss: 1.534
training 1017 (epoch 6): tem_loss: 1.086, pem class_loss: 0.295, pem reg_loss: 0.015, consistency_loss: 0.02912, consistency_loss_ema: 0.03082, total_loss: 1.533
training 1027 (epoch 6): tem_loss: 1.089, pem class_loss: 0.299, pem reg_loss: 0.015, consistency_loss: 0.02877, consistency_loss_ema: 0.03086, total_loss: 1.540
training 1037 (epoch 6): tem_loss: 1.088, pem class_loss: 0.295, pem reg_loss: 0.015, consistency_loss: 0.02917, consistency_loss_ema: 0.03076, total_loss: 1.537
training 1047 (epoch 6): tem_loss: 1.089, pem class_loss: 0.294, pem reg_loss: 0.015, consistency_loss: 0.02882, consistency_loss_ema: 0.03047, total_loss: 1.539
training 1057 (epoch 6): tem_loss: 1.092, pem class_loss: 0.297, pem reg_loss: 0.016, consistency_loss: 0.02910, consistency_loss_ema: 0.03065, total_loss: 1.546
training 1067 (epoch 6): tem_loss: 1.089, pem class_loss: 0.297, pem reg_loss: 0.016, consistency_loss: 0.02919, consistency_loss_ema: 0.03096, total_loss: 1.543
training 1077 (epoch 6): tem_loss: 1.092, pem class_loss: 0.296, pem reg_loss: 0.016, consistency_loss: 0.02922, consistency_loss_ema: 0.03088, total_loss: 1.545
training 1087 (epoch 6): tem_loss: 1.091, pem class_loss: 0.294, pem reg_loss: 0.016, consistency_loss: 0.02903, consistency_loss_ema: 0.03080, total_loss: 1.542
training 1097 (epoch 6): tem_loss: 1.091, pem class_loss: 0.294, pem reg_loss: 0.016, consistency_loss: 0.02892, consistency_loss_ema: 0.03065, total_loss: 1.542
training 1107 (epoch 6): tem_loss: 1.091, pem class_loss: 0.295, pem reg_loss: 0.016, consistency_loss: 0.02894, consistency_loss_ema: 0.03070, total_loss: 1.543
training 1117 (epoch 6): tem_loss: 1.090, pem class_loss: 0.294, pem reg_loss: 0.016, consistency_loss: 0.02891, consistency_loss_ema: 0.03058, total_loss: 1.541
training 1127 (epoch 6): tem_loss: 1.093, pem class_loss: 0.295, pem reg_loss: 0.016, consistency_loss: 0.02879, consistency_loss_ema: 0.03043, total_loss: 1.544
BMN training loss(epoch 6): tem_loss: 1.093, pem class_loss: 0.295, pem reg_loss: 0.016, total_loss: 1.544
BMN val loss(epoch 6): tem_loss: 1.159, pem class_loss: 0.332, pem reg_loss: 0.017, total_loss: 1.657
BMN val_ema loss(epoch 6): tem_loss: 1.159, pem class_loss: 0.328, pem reg_loss: 0.017, total_loss: 1.652
use Semi !!!
training 1128 (epoch 7): tem_loss: 1.065, pem class_loss: 0.253, pem reg_loss: 0.012, consistency_loss: 0.02564, consistency_loss_ema: 0.02862, total_loss: 1.437
training 1138 (epoch 7): tem_loss: 1.093, pem class_loss: 0.286, pem reg_loss: 0.015, consistency_loss: 0.02588, consistency_loss_ema: 0.02792, total_loss: 1.532
training 1148 (epoch 7): tem_loss: 1.080, pem class_loss: 0.283, pem reg_loss: 0.015, consistency_loss: 0.02422, consistency_loss_ema: 0.02550, total_loss: 1.511
training 1158 (epoch 7): tem_loss: 1.075, pem class_loss: 0.271, pem reg_loss: 0.015, consistency_loss: 0.02263, consistency_loss_ema: 0.02444, total_loss: 1.491
training 1168 (epoch 7): tem_loss: 1.078, pem class_loss: 0.279, pem reg_loss: 0.015, consistency_loss: 0.02201, consistency_loss_ema: 0.02363, total_loss: 1.505
training 1178 (epoch 7): tem_loss: 1.080, pem class_loss: 0.279, pem reg_loss: 0.015, consistency_loss: 0.02134, consistency_loss_ema: 0.02305, total_loss: 1.506
training 1188 (epoch 7): tem_loss: 1.082, pem class_loss: 0.282, pem reg_loss: 0.015, consistency_loss: 0.02076, consistency_loss_ema: 0.02237, total_loss: 1.513
training 1198 (epoch 7): tem_loss: 1.083, pem class_loss: 0.284, pem reg_loss: 0.015, consistency_loss: 0.02058, consistency_loss_ema: 0.02202, total_loss: 1.515
training 1208 (epoch 7): tem_loss: 1.082, pem class_loss: 0.282, pem reg_loss: 0.015, consistency_loss: 0.02046, consistency_loss_ema: 0.02186, total_loss: 1.512
training 1218 (epoch 7): tem_loss: 1.081, pem class_loss: 0.282, pem reg_loss: 0.015, consistency_loss: 0.02041, consistency_loss_ema: 0.02178, total_loss: 1.510
training 1228 (epoch 7): tem_loss: 1.080, pem class_loss: 0.282, pem reg_loss: 0.015, consistency_loss: 0.02022, consistency_loss_ema: 0.02167, total_loss: 1.509
training 1238 (epoch 7): tem_loss: 1.080, pem class_loss: 0.281, pem reg_loss: 0.015, consistency_loss: 0.02014, consistency_loss_ema: 0.02162, total_loss: 1.508
training 1248 (epoch 7): tem_loss: 1.080, pem class_loss: 0.278, pem reg_loss: 0.015, consistency_loss: 0.02015, consistency_loss_ema: 0.02170, total_loss: 1.505
training 1258 (epoch 7): tem_loss: 1.080, pem class_loss: 0.277, pem reg_loss: 0.015, consistency_loss: 0.02010, consistency_loss_ema: 0.02166, total_loss: 1.503
training 1268 (epoch 7): tem_loss: 1.078, pem class_loss: 0.276, pem reg_loss: 0.015, consistency_loss: 0.02008, consistency_loss_ema: 0.02169, total_loss: 1.501
training 1278 (epoch 7): tem_loss: 1.077, pem class_loss: 0.276, pem reg_loss: 0.015, consistency_loss: 0.02007, consistency_loss_ema: 0.02161, total_loss: 1.498
training 1288 (epoch 7): tem_loss: 1.078, pem class_loss: 0.274, pem reg_loss: 0.014, consistency_loss: 0.02006, consistency_loss_ema: 0.02170, total_loss: 1.497
BMN training loss(epoch 7): tem_loss: 1.078, pem class_loss: 0.274, pem reg_loss: 0.014, total_loss: 1.497
BMN val loss(epoch 7): tem_loss: 1.158, pem class_loss: 0.338, pem reg_loss: 0.016, total_loss: 1.659
BMN val_ema loss(epoch 7): tem_loss: 1.158, pem class_loss: 0.333, pem reg_loss: 0.016, total_loss: 1.654
use Semi !!!
training 1289 (epoch 8): tem_loss: 0.935, pem class_loss: 0.256, pem reg_loss: 0.013, consistency_loss: 0.01962, consistency_loss_ema: 0.02348, total_loss: 1.323
training 1299 (epoch 8): tem_loss: 1.064, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02084, consistency_loss_ema: 0.02197, total_loss: 1.479
training 1309 (epoch 8): tem_loss: 1.061, pem class_loss: 0.267, pem reg_loss: 0.014, consistency_loss: 0.02026, consistency_loss_ema: 0.02235, total_loss: 1.466
training 1319 (epoch 8): tem_loss: 1.067, pem class_loss: 0.274, pem reg_loss: 0.014, consistency_loss: 0.02047, consistency_loss_ema: 0.02226, total_loss: 1.483
training 1329 (epoch 8): tem_loss: 1.078, pem class_loss: 0.275, pem reg_loss: 0.014, consistency_loss: 0.02039, consistency_loss_ema: 0.02198, total_loss: 1.497
training 1339 (epoch 8): tem_loss: 1.072, pem class_loss: 0.276, pem reg_loss: 0.015, consistency_loss: 0.02007, consistency_loss_ema: 0.02197, total_loss: 1.494
training 1349 (epoch 8): tem_loss: 1.075, pem class_loss: 0.276, pem reg_loss: 0.015, consistency_loss: 0.02017, consistency_loss_ema: 0.02193, total_loss: 1.497
training 1359 (epoch 8): tem_loss: 1.074, pem class_loss: 0.275, pem reg_loss: 0.014, consistency_loss: 0.02025, consistency_loss_ema: 0.02201, total_loss: 1.493
training 1369 (epoch 8): tem_loss: 1.075, pem class_loss: 0.274, pem reg_loss: 0.014, consistency_loss: 0.02043, consistency_loss_ema: 0.02193, total_loss: 1.494
training 1379 (epoch 8): tem_loss: 1.074, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02042, consistency_loss_ema: 0.02219, total_loss: 1.492
training 1389 (epoch 8): tem_loss: 1.072, pem class_loss: 0.270, pem reg_loss: 0.014, consistency_loss: 0.02041, consistency_loss_ema: 0.02231, total_loss: 1.486
training 1399 (epoch 8): tem_loss: 1.074, pem class_loss: 0.272, pem reg_loss: 0.014, consistency_loss: 0.02043, consistency_loss_ema: 0.02237, total_loss: 1.490
training 1409 (epoch 8): tem_loss: 1.072, pem class_loss: 0.272, pem reg_loss: 0.014, consistency_loss: 0.02045, consistency_loss_ema: 0.02243, total_loss: 1.486
training 1419 (epoch 8): tem_loss: 1.071, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02049, consistency_loss_ema: 0.02257, total_loss: 1.487
training 1429 (epoch 8): tem_loss: 1.072, pem class_loss: 0.273, pem reg_loss: 0.014, consistency_loss: 0.02064, consistency_loss_ema: 0.02266, total_loss: 1.486
training 1439 (epoch 8): tem_loss: 1.071, pem class_loss: 0.271, pem reg_loss: 0.014, consistency_loss: 0.02071, consistency_loss_ema: 0.02273, total_loss: 1.484
training 1449 (epoch 8): tem_loss: 1.071, pem class_loss: 0.271, pem reg_loss: 0.014, consistency_loss: 0.02080, consistency_loss_ema: 0.02277, total_loss: 1.485
BMN training loss(epoch 8): tem_loss: 1.071, pem class_loss: 0.271, pem reg_loss: 0.014, total_loss: 1.485
BMN val loss(epoch 8): tem_loss: 1.158, pem class_loss: 0.335, pem reg_loss: 0.016, total_loss: 1.657
BMN val_ema loss(epoch 8): tem_loss: 1.158, pem class_loss: 0.335, pem reg_loss: 0.016, total_loss: 1.656
use Semi !!!
training 1450 (epoch 9): tem_loss: 1.198, pem class_loss: 0.303, pem reg_loss: 0.012, consistency_loss: 0.02099, consistency_loss_ema: 0.02149, total_loss: 1.622
training 1460 (epoch 9): tem_loss: 1.082, pem class_loss: 0.252, pem reg_loss: 0.013, consistency_loss: 0.02141, consistency_loss_ema: 0.02459, total_loss: 1.467
training 1470 (epoch 9): tem_loss: 1.063, pem class_loss: 0.247, pem reg_loss: 0.013, consistency_loss: 0.02164, consistency_loss_ema: 0.02406, total_loss: 1.437
training 1480 (epoch 9): tem_loss: 1.068, pem class_loss: 0.254, pem reg_loss: 0.013, consistency_loss: 0.02179, consistency_loss_ema: 0.02392, total_loss: 1.453
training 1490 (epoch 9): tem_loss: 1.075, pem class_loss: 0.254, pem reg_loss: 0.013, consistency_loss: 0.02178, consistency_loss_ema: 0.02368, total_loss: 1.459
training 1500 (epoch 9): tem_loss: 1.077, pem class_loss: 0.256, pem reg_loss: 0.013, consistency_loss: 0.02184, consistency_loss_ema: 0.02375, total_loss: 1.461
training 1510 (epoch 9): tem_loss: 1.074, pem class_loss: 0.257, pem reg_loss: 0.013, consistency_loss: 0.02187, consistency_loss_ema: 0.02379, total_loss: 1.461
training 1520 (epoch 9): tem_loss: 1.072, pem class_loss: 0.258, pem reg_loss: 0.013, consistency_loss: 0.02196, consistency_loss_ema: 0.02366, total_loss: 1.462
training 1530 (epoch 9): tem_loss: 1.072, pem class_loss: 0.262, pem reg_loss: 0.013, consistency_loss: 0.02192, consistency_loss_ema: 0.02366, total_loss: 1.469
training 1540 (epoch 9): tem_loss: 1.073, pem class_loss: 0.262, pem reg_loss: 0.014, consistency_loss: 0.02184, consistency_loss_ema: 0.02359, total_loss: 1.470
training 1550 (epoch 9): tem_loss: 1.074, pem class_loss: 0.265, pem reg_loss: 0.014, consistency_loss: 0.02181, consistency_loss_ema: 0.02356, total_loss: 1.477
training 1560 (epoch 9): tem_loss: 1.075, pem class_loss: 0.266, pem reg_loss: 0.014, consistency_loss: 0.02185, consistency_loss_ema: 0.02349, total_loss: 1.479
training 1570 (epoch 9): tem_loss: 1.074, pem class_loss: 0.266, pem reg_loss: 0.014, consistency_loss: 0.02189, consistency_loss_ema: 0.02364, total_loss: 1.478
training 1580 (epoch 9): tem_loss: 1.072, pem class_loss: 0.266, pem reg_loss: 0.014, consistency_loss: 0.02185, consistency_loss_ema: 0.02365, total_loss: 1.477
training 1590 (epoch 9): tem_loss: 1.072, pem class_loss: 0.266, pem reg_loss: 0.014, consistency_loss: 0.02181, consistency_loss_ema: 0.02363, total_loss: 1.476
training 1600 (epoch 9): tem_loss: 1.070, pem class_loss: 0.265, pem reg_loss: 0.014, consistency_loss: 0.02182, consistency_loss_ema: 0.02364, total_loss: 1.474
training 1610 (epoch 9): tem_loss: 1.069, pem class_loss: 0.264, pem reg_loss: 0.014, consistency_loss: 0.02190, consistency_loss_ema: 0.02369, total_loss: 1.472
BMN training loss(epoch 9): tem_loss: 1.069, pem class_loss: 0.264, pem reg_loss: 0.014, total_loss: 1.472
BMN val loss(epoch 9): tem_loss: 1.158, pem class_loss: 0.342, pem reg_loss: 0.016, total_loss: 1.664
BMN val_ema loss(epoch 9): tem_loss: 1.158, pem class_loss: 0.340, pem reg_loss: 0.016, total_loss: 1.661
unlabel percent: 0.6
eval student model !!
load : ./checkpoint/Semi-base-0.6/BMN_checkpoint.pth.tar OK !
validation subset video numbers: 4728
Post processing start
Post processing finished
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7293
Number of proposals: 472687
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 67.03940079528313%
AR@1 is 0.33314136843548614
AR@5 is 0.4888660359248595