unique_search_results_31_12_2023.jsonl
{
"bose-etal-2023-detoxifying": {
"abstract": "The expression of opinions, stances, and moral foundations on social media often coincide with toxic, divisive, or inflammatory language that can make constructive discourse across communities difficult. Natural language generation methods could provide a means to reframe or reword such expressions in a way that fosters more civil discourse, yet current Large Language Model (LLM) methods tend towards language that is too generic or formal to seem authentic for social media discussions. We present preliminary work on training LLMs to maintain authenticity while presenting a community{'}s ideas and values in a constructive, non-toxic manner.",
"pages": "9--14",
"doi": "10.18653/v1/2023.sicon-1.2",
"url": "https://aclanthology.org/2023.sicon-1.2",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023)",
"editor": "Chawla, Kushal and\nShi, Weiyan",
"author": "Bose, Ritwik and\nPerera, Ian and\nDorr, Bonnie",
"title": "Detoxifying Online Discourse: A Guided Response Generation Approach for Reducing Toxicity in User-Generated Text",
"ENTRYTYPE": "inproceedings",
"ID": "bose-etal-2023-detoxifying",
"source": "acl.jsonl"
},
"paulissen-wendt-2023-lauri": {
"abstract": "Identifying expressions of human values in textual data is a crucial albeit complicated challenge, not least because ethics are highly variable, often implicit, and transcend circumstance. Opinions, arguments, and the like are generally founded upon more than one guiding principle, which are not necessarily independent. As such, little is known about how to classify and predict moral undertones in natural language sequences. Here, we describe and present a solution to ValueEval, our shared contribution to SemEval 2023 Task 4. Our research design focuses on investigating chain classifier architectures with pretrained contextualized embeddings to detect 20 different human values in written arguments. We show that our best model substantially surpasses the classification performance of the baseline method established in prior work. We discuss limitations to our approach and outline promising directions for future work.",
"pages": "193--198",
"doi": "10.18653/v1/2023.semeval-1.27",
"url": "https://aclanthology.org/2023.semeval-1.27",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
"editor": "Ojha, Atul Kr. and\nDo{\\u{g}}ru{\\\"o}z, A. Seza and\nDa San Martino, Giovanni and\nTayyar Madabushi, Harish and\nKumar, Ritesh and\nSartori, Elisa",
"author": "Paulissen, Spencer and\nWendt, Caroline",
"title": "Lauri Ingman at SemEval-2023 Task 4: A Chain Classifier for Identifying Human Values behind Arguments",
"ENTRYTYPE": "inproceedings",
"ID": "paulissen-wendt-2023-lauri",
"source": "acl.jsonl"
},
"sanchez-rada-etal-2023-sliwc": {
"pages": "617--626",
"url": "https://aclanthology.org/2023.ldk-1.67",
"publisher": "NOVA CLUNL, Portugal",
"address": "Vienna, Austria",
"year": "2023",
"month": "September",
"booktitle": "Proceedings of the 4th Conference on Language, Data and Knowledge",
"editor": "Carvalho, Sara and\nKhan, Anas Fahad and\nAni{\\'c}, Ana Ostro{\\v{s}}ki and\nSpahiu, Blerina and\nGracia, Jorge and\nMcCrae, John P. and\nGromann, Dagmar and\nHeinisch, Barbara and\nSalgado, Ana",
"author": "S{\\'a}nchez-Rada, J. Fernando and\nAraque, Oscar and\nGarc{\\'\\i}a-Grao, Guillermo and\nIglesias, Carlos {\\'A}.",
"title": "SLIWC, Morality, NarrOnt and Senpy Annotations: four vocabularies to fight radicalization",
"ENTRYTYPE": "inproceedings",
"ID": "sanchez-rada-etal-2023-sliwc",
"source": "acl.jsonl"
},
"abercrombie-etal-2023-temporal": {
"abstract": "Much work in natural language processing (NLP) relies on human annotation. The majority of this implicitly assumes that annotator{'}s labels are temporally stable, although the reality is that human judgements are rarely consistent over time. As a subjective annotation task, hate speech labels depend on annotator{'}s emotional and moral reactions to the language used to convey the message. Studies in Cognitive Science reveal a {`}foreign language effect{'}, whereby people take differing moral positions and perceive offensive phrases to be weaker in their second languages. Does this affect annotations as well? We conduct an experiment to investigate the impacts of (1) time and (2) different language conditions (English and German) on measurements of intra-annotator agreement in a hate speech labelling task. While we do not observe the expected lower stability in the different language condition, we find that overall agreement is significantly lower than is implicitly assumed in annotation tasks, which has important implications for dataset reproducibility in NLP.",
"pages": "96--103",
"doi": "10.18653/v1/2023.law-1.10",
"url": "https://aclanthology.org/2023.law-1.10",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)",
"editor": "Prange, Jakob and\nFriedrich, Annemarie",
"author": "Abercrombie, Gavin and\nHovy, Dirk and\nPrabhakaran, Vinodkumar",
"title": "Temporal and Second Language Influence on Intra-Annotator Agreement and Stability in Hate Speech Labelling",
"ENTRYTYPE": "inproceedings",
"ID": "abercrombie-etal-2023-temporal",
"source": "acl.jsonl"
},
"feyen-etal-2023-pragmatic": {
"abstract": "The annotation task we elaborated aims at describing the contextual factors that influence the appearance and interpretation of moral predicates, in newspaper articles on police brutality, in French and in English. The paper provides a brief review of the literature on moral predicates and their relation with context. The paper also describes the elaboration of the corpus and the ontology. Our hypothesis is that the use of moral adjectives and their appearance in context could change depending on the political orientation of the journal. We elaborated an annotation task to investigate the precise contexts discussed in articles on police brutality. The paper concludes by describing the study and the annotation task in details.",
"pages": "146--153",
"doi": "10.18653/v1/2023.law-1.15",
"url": "https://aclanthology.org/2023.law-1.15",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)",
"editor": "Prange, Jakob and\nFriedrich, Annemarie",
"author": "Feyen, Tess and\nMari, Alda and\nPortner, Paul",
"title": "Pragmatic Annotation of Articles Related to Police Brutality",
"ENTRYTYPE": "inproceedings",
"ID": "feyen-etal-2023-pragmatic",
"source": "acl.jsonl"
},
"upadhyaya-etal-2023-toxicity": {
"abstract": "In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet{'}s stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.",
"pages": "4464--4478",
"doi": "10.18653/v1/2023.findings-emnlp.295",
"url": "https://aclanthology.org/2023.findings-emnlp.295",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Findings of the Association for Computational Linguistics: EMNLP 2023",
"editor": "Bouamor, Houda and\nPino, Juan and\nBali, Kalika",
"author": "Upadhyaya, Apoorva and\nFisichella, Marco and\nNejdl, Wolfgang",
"title": "Toxicity, Morality, and Speech Act Guided Stance Detection",
"ENTRYTYPE": "inproceedings",
"ID": "upadhyaya-etal-2023-toxicity",
"source": "acl.jsonl"
},
"vida-etal-2023-values": {
"abstract": "With language technology increasingly affecting individuals{'} lives, many recent works have investigated the ethical aspects of NLP. Among other topics, researchers focused on the notion of morality, investigating, for example, which moral judgements language models make. However, there has been little to no discussion of the terminology and the theories underpinning those efforts and their implications. This lack is highly problematic, as it hides the works{'} underlying assumptions and hinders a thorough and targeted scientific debate of morality in NLP. In this work, we address this research gap by (a) providing an overview of some important ethical concepts stemming from philosophy and (b) systematically surveying the existing literature on moral NLP w.r.t. their philosophical foundation, terminology, and data basis. For instance, we analyse what ethical theory an approach is based on, how this decision is justified, and what implications it entails. Our findings surveying 92 papers show that, for instance, most papers neither provide a clear definition of the terms they use nor adhere to definitions from philosophy. Finally, (c) we give three recommendations for future research in the field. We hope our work will lead to a more informed, careful, and sound discussion of morality in language technology.",
"pages": "5534--5554",
"doi": "10.18653/v1/2023.findings-emnlp.368",
"url": "https://aclanthology.org/2023.findings-emnlp.368",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Findings of the Association for Computational Linguistics: EMNLP 2023",
"editor": "Bouamor, Houda and\nPino, Juan and\nBali, Kalika",
"author": "Vida, Karina and\nSimon, Judith and\nLauscher, Anne",
"title": "Values, Ethics, Morals? On the Use of Moral Concepts in NLP Research",
"ENTRYTYPE": "inproceedings",
"ID": "vida-etal-2023-values",
"source": "acl.jsonl"
},
"rao-etal-2023-makes": {
"abstract": "Moral or ethical judgments rely heavily on the specific contexts in which they occur. Understanding varying shades of defeasible contextualizations (i.e., additional information that strengthens or attenuates the moral acceptability of an action) is critical to accurately represent the subtlety and intricacy of grounded human moral judgment in real-life scenarios. We introduce defeasible moral reasoning: a task to provide grounded contexts that make an action more or less morally acceptable, along with commonsense rationales that justify the reasoning. To elicit high-quality task data, we take an iterative self-distillation approach that starts from a small amount of unstructured seed knowledge from GPT-3 and then alternates between (1) self-distillation from student models; (2) targeted filtering with a critic model trained by human judgment (to boost validity) and NLI (to boost diversity); (3) self-imitation learning (to amplify the desired data quality). This process yields a student model that produces defeasible contexts with improved validity, diversity, and defeasibility. From this model we distill a high-quality dataset, $\\delta$-Rules-of-Thumb, of 1.2M entries of contextualizations and rationales for 115K defeasible moral actions rated highly by human annotators 85.9{\\%} to 99.8{\\%} of the time. Using $\\delta$-RoT we obtain a final student model that wins over all intermediate student models by a notable margin.",
"pages": "12140--12159",
"doi": "10.18653/v1/2023.findings-emnlp.812",
"url": "https://aclanthology.org/2023.findings-emnlp.812",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Findings of the Association for Computational Linguistics: EMNLP 2023",
"editor": "Bouamor, Houda and\nPino, Juan and\nBali, Kalika",
"author": "Rao, Kavel and\nJiang, Liwei and\nPyatkin, Valentina and\nGu, Yuling and\nTandon, Niket and\nDziri, Nouha and\nBrahman, Faeze and\nChoi, Yejin",
"title": "What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations",
"ENTRYTYPE": "inproceedings",
"ID": "rao-etal-2023-makes",
"source": "acl.jsonl"
},
"rao-etal-2023-ethical": {
"abstract": "In this position paper, we argue that instead of morally aligning LLMs to specific set of ethical principles, we should infuse generic ethical reasoning capabilities into them so that they can handle value pluralism at a global scale. When provided with an ethical policy, an LLM should be capable of making decisions that are ethically consistent to the policy. We develop a framework that integrates moral dilemmas with moral principles pertaining to different foramlisms of normative ethics, and at different levels of abstractions. Initial experiments with GPT-x models shows that while GPT-4 is a nearly perfect ethical reasoner, the models still have bias towards the moral values of Western and English speaking societies.",
"pages": "13370--13388",
"doi": "10.18653/v1/2023.findings-emnlp.892",
"url": "https://aclanthology.org/2023.findings-emnlp.892",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Findings of the Association for Computational Linguistics: EMNLP 2023",
"editor": "Bouamor, Houda and\nPino, Juan and\nBali, Kalika",
"author": "Rao, Abhinav and\nKhandelwal, Aditi and\nTanmay, Kumar and\nAgarwal, Utkarsh and\nChoudhury, Monojit",
"title": "Ethical Reasoning over Moral Alignment: A Case and Framework for In-Context Ethical Policies in LLMs",
"ENTRYTYPE": "inproceedings",
"ID": "rao-etal-2023-ethical",
"source": "acl.jsonl"
},
"haemmerl-etal-2023-speaking": {
"abstract": "Pre-trained multilingual language models (PMLMs) are commonly used when dealing with data from multiple languages and cross-lingual transfer. However, PMLMs are trained on varying amounts of data for each language. In practice this means their performance is often much better on English than many other languages. We explore to what extent this also applies to moral norms. Do the models capture moral norms from English and impose them on other languages? Do the models exhibit random and thus potentially harmful beliefs in certain languages? Both these issues could negatively impact cross-lingual transfer and potentially lead to harmful outcomes. In this paper, we (1) apply the MORALDIRECTION framework to multilingual models, comparing results in German, Czech, Arabic, Chinese, and English, (2) analyse model behaviour on filtered parallel subtitles corpora, and (3) apply the models to a Moral Foundations Questionnaire, comparing with human responses from different countries. Our experiments demonstrate that, indeed, PMLMs encode differing moral biases, but these do not necessarily correspond to cultural differences or commonalities in human opinions. We release our code and models.",
"pages": "2137--2156",
"doi": "10.18653/v1/2023.findings-acl.134",
"url": "https://aclanthology.org/2023.findings-acl.134",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Findings of the Association for Computational Linguistics: ACL 2023",
"editor": "Rogers, Anna and\nBoyd-Graber, Jordan and\nOkazaki, Naoaki",
"author": "Haemmerl, Katharina and\nDeiseroth, Bjoern and\nSchramowski, Patrick and\nLibovick{\\'y}, Jind{\\v{r}}ich and\nRothkopf, Constantin and\nFraser, Alexander and\nKersting, Kristian",
"title": "Speaking Multiple Languages Affects the Moral Bias of Language Models",
"ENTRYTYPE": "inproceedings",
"ID": "haemmerl-etal-2023-speaking",
"source": "acl.jsonl"
},
"wang-etal-2023-t2iat": {
"abstract": "*Warning: This paper contains several contents that may be toxic, harmful, or offensive.*In the last few years, text-to-image generative models have gained remarkable success in generating images with unprecedented quality accompanied by a breakthrough of inference speed. Despite their rapid progress, human biases that manifest in the training examples, particularly with regard to common stereotypical biases, like gender and skin tone, still have been found in these generative models. In this work, we seek to measure more complex human biases exist in the task of text-to-image generations. Inspired by the well-known Implicit Association Test (IAT) from social psychology, we propose a novel Text-to-Image Association Test (T2IAT) framework that quantifies the implicit stereotypes between concepts and valence, and those in the images. We replicate the previously documented bias tests on generative models, including morally neutral tests on flowers and insects as well as demographic stereotypical tests on diverse social attributes. The results of these experiments demonstrate the presence of complex stereotypical behaviors in image generations.",
"pages": "2560--2574",
"doi": "10.18653/v1/2023.findings-acl.160",
"url": "https://aclanthology.org/2023.findings-acl.160",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Findings of the Association for Computational Linguistics: ACL 2023",
"editor": "Rogers, Anna and\nBoyd-Graber, Jordan and\nOkazaki, Naoaki",
"author": "Wang, Jialu and\nLiu, Xinyue and\nDi, Zonglin and\nLiu, Yang and\nWang, Xin",
"title": "T2IAT: Measuring Valence and Stereotypical Biases in Text-to-Image Generation",
"ENTRYTYPE": "inproceedings",
"ID": "wang-etal-2023-t2iat",
"source": "acl.jsonl"
},
"sun-etal-2023-decoding": {
"abstract": "Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury. To effectively forecast responses, it is essential to develop measures that leverage the social dynamics and contextual information surrounding individuals, especially in cases where explicit profiles or historical actions of the users are limited (referred to as lurkers). As shown in a previous study, 97{\\%} of all tweets are produced by only the most active 25{\\%} of users. However, existing approaches have limited exploration of how to best process and utilize these important features. To address this gap, we propose a novel framework, named SocialSense, that leverages a large language model to induce a belief-centered graph on top of an existent social network, along with graph-based propagation to capture social dynamics. We hypothesize that the induced graph that bridges the gap between distant users who share similar beliefs allows the model to effectively capture the response patterns. Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings, demonstrating its effectiveness in response forecasting. Moreover, the analysis reveals the framework{'}s capability to effectively handle unseen user and lurker scenarios, further highlighting its robustness and practical applicability.",
"pages": "43--57",
"doi": "10.18653/v1/2023.emnlp-main.4",
"url": "https://aclanthology.org/2023.emnlp-main.4",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
"editor": "Bouamor, Houda and\nPino, Juan and\nBali, Kalika",
"author": "Sun, Chenkai and\nLi, Jinning and\nFung, Yi and\nChan, Hou and\nAbdelzaher, Tarek and\nZhai, ChengXiang and\nJi, Heng",
"title": "Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting",
"ENTRYTYPE": "inproceedings",
"ID": "sun-etal-2023-decoding",
"source": "acl.jsonl"
},
"wu-etal-2023-cross": {
"abstract": "Folk tales are strong cultural and social influences in children{'}s lives, and they are known to teach morals and values. However, existing studies on folk tales are largely limited to European tales. In our study, we compile a large corpus of over 1,900 tales originating from 27 diverse cultures across six continents. Using a range of lexicons and correlation analyses, we examine how human values, morals, and gender biases are expressed in folk tales across cultures. We discover differences between cultures in prevalent values and morals, as well as cross-cultural trends in problematic gender biases. Furthermore, we find trends of reduced value expression when examining public-domain fiction stories, extrinsically validate our analyses against the multicultural Schwartz Survey of Cultural Values and the Global Gender Gap Report, and find traditional gender biases associated with values, morals, and agency. This large-scale cross-cultural study of folk tales paves the way towards future studies on how literature influences and reflects cultural norms.",
"pages": "5113--5125",
"doi": "10.18653/v1/2023.emnlp-main.311",
"url": "https://aclanthology.org/2023.emnlp-main.311",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
"editor": "Bouamor, Houda and\nPino, Juan and\nBali, Kalika",
"author": "Wu, Winston and\nWang, Lu and\nMihalcea, Rada",
"title": "Cross-Cultural Analysis of Human Values, Morals, and Biases in Folk Tales",
"ENTRYTYPE": "inproceedings",
"ID": "wu-etal-2023-cross",
"source": "acl.jsonl"
},
"shen-etal-2023-modeling": {
"abstract": "The most meaningful connections between people are often fostered through expression of shared vulnerability and emotional experiences in personal narratives. We introduce a new task of identifying similarity in personal stories based on empathic resonance, i.e., the extent to which two people empathize with each others{'} experiences, as opposed to raw semantic or lexical similarity, as has predominantly been studied in NLP. Using insights from social psychology, we craft a framework that operationalizes empathic similarity in terms of three key features of stories: main events, emotional trajectories, and overall morals or takeaways. We create EmpathicStories, a dataset of 1,500 personal stories annotated with our empathic similarity features, and 2,000 pairs of stories annotated with empathic similarity scores. Using our dataset, we fine-tune a model to compute empathic similarity of story pairs, and show that this outperforms semantic similarity models on automated correlation and retrieval metrics. Through a user study with 150 participants, we also assess the effect our model has on retrieving stories that users empathize with, compared to naive semantic similarity-based retrieval, and find that participants empathized significantly more with stories retrieved by our model. Our work has strong implications for the use of empathy-aware models to foster human connection and empathy between people.",
"pages": "6237--6252",
"doi": "10.18653/v1/2023.emnlp-main.383",
"url": "https://aclanthology.org/2023.emnlp-main.383",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
"editor": "Bouamor, Houda and\nPino, Juan and\nBali, Kalika",
"author": "Shen, Jocelyn and\nSap, Maarten and\nColon-Hernandez, Pedro and\nPark, Hae and\nBreazeal, Cynthia",
"title": "Modeling Empathic Similarity in Personal Narratives",
"ENTRYTYPE": "inproceedings",
"ID": "shen-etal-2023-modeling",
"source": "acl.jsonl"
},
"wagner-etal-2023-event": {
"abstract": "This work focuses on the spatial dimension of narrative understanding and presents the task of event-location tracking in narrative texts. The task intends to extract the sequence of locations where the narrative is set through its progression. We present several architectures for the task that seeks to model the global structure of the sequence, with varying levels of context awareness. We compare these methods to several baselines, including the use of strong methods applied over narrow contexts. We also develop methods for the generation of location embeddings and show that learning to predict a sequence of continuous embeddings, rather than a string of locations, is advantageous in terms of performance. We focus on the test case of Holocaust survivor testimonies. We argue for the moral and historical importance of studying this dataset in computational means and that it provides a unique case of a large set of narratives with a relatively restricted set of location trajectories. Our results show that models that are aware of the larger context of the narrative can generate more accurate location chains. We further corroborate the effectiveness of our methods by showing similar trends from experiments on an additional domain.",
"pages": "8789--8805",
"doi": "10.18653/v1/2023.emnlp-main.544",
"url": "https://aclanthology.org/2023.emnlp-main.544",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
"editor": "Bouamor, Houda and\nPino, Juan and\nBali, Kalika",
"author": "Wagner, Eitan and\nKeydar, Renana and\nAbend, Omri",
"title": "Event-Location Tracking in Narratives: A Case Study on Holocaust Testimonies",
"ENTRYTYPE": "inproceedings",
"ID": "wagner-etal-2023-event",
"source": "acl.jsonl"
},
"wang-etal-2023-humanoid": {
"abstract": "Just as computational simulations of atoms, molecules and cells have shaped the way we study the sciences, true-to-life simulations of human-like agents can be valuable tools for studying human behavior. We propose Humanoid Agents, a system that guides Generative Agents to behave more like humans by introducing three elements of System 1 processing: Basic needs (e.g. hunger, health and energy), Emotion and Closeness in Relationships. Humanoid Agents are able to use these dynamic elements to adapt their daily activities and conversations with other agents, as supported with empirical experiments. Our system is designed to be extensible to various settings, three of which we demonstrate, as well as to other elements influencing human behavior (e.g. empathy, moral values and cultural background). Our platform also includes a Unity WebGL game interface for visualization and an interactive analytics dashboard to show agent statuses over time. Our platform is available on https://www.humanoidagents.com/ and code is on https://github.com/HumanoidAgents/HumanoidAgents",
"pages": "167--176",
"doi": "10.18653/v1/2023.emnlp-demo.15",
"url": "https://aclanthology.org/2023.emnlp-demo.15",
"publisher": "Association for Computational Linguistics",
"address": "Singapore",
"year": "2023",
"month": "December",
"booktitle": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"editor": "Feng, Yansong and\nLefever, Els",
"author": "Wang, Zhilin and\nChiu, Yu Ying and\nChiu, Yu Cheung",
"title": "Humanoid Agents: Platform for Simulating Human-like Generative Agents",
"ENTRYTYPE": "inproceedings",
"ID": "wang-etal-2023-humanoid",
"source": "acl.jsonl"
},
"wang-etal-2023-zhong": {
"language": "Chinese",
"abstract": "{``}社会道德的历时变迁研究具有重要意义。通过观察语言使用与道德变迁的历时联系,能够帮助描绘社会道德的变化趋势和发展规律、把握社会道德动态、推进道德建设。目前缺少从词汇角度、利用计算手段对大规模历时语料进行系统、全面的社会道德变迁研究。基于此,该文提出道德主题词历时计量模型,通过计量指标对1946-2015共70年的《人民日报》语料进行了历时计算与分析,观察了70年社会道德主题词的使用选择与变化。研究结果发现,道德词汇的历时使用与社会道德之间存在互动关系,反映出70年中国社会道德的历时变革与发展情况。{''}",
"pages": "289--299",
"url": "https://aclanthology.org/2023.ccl-1.26",
"publisher": "Chinese Information Processing Society of China",
"address": "Harbin, China",
"year": "2023",
"month": "August",
"booktitle": "Proceedings of the 22nd Chinese National Conference on Computational Linguistics",
"editor": "Sun, Maosong and\nQin, Bing and\nQiu, Xipeng and\nJiang, Jing and\nHan, Xianpei",
"author": "Wang, Hongrui and\nYu, Dong and\nLiu, Pengyuan and\nCeng, Liying",
"title": "中国社会道德变化模型与发展动因探究------基于70年《人民日报》的计量与分析 (The Model of Moral Change and Motivation in Chinese Society ------The Vocabulary Analysis of the 70-year ''People's Daily'')",
"ENTRYTYPE": "inproceedings",
"ID": "wang-etal-2023-zhong",
"source": "acl.jsonl"
},
"simmons-2023-moral": {
"abstract": "Large Language Models (LLMs) have demonstrated impressive capabilities in generating fluent text, as well as tendencies to reproduce undesirable social biases. This work investigates whether LLMs reproduce the moral biases associated with political groups in the United States, an instance of a broader capability herein termed moral mimicry. This work explores this hypothesis in the GPT-3/3.5 and OPT families of Transformer-based LLMs. Using tools from Moral Foundations Theory, this work shows that these LLMs are indeed moral mimics. When prompted with a liberal or conservative political identity, the models generate text reflecting corresponding moral biases. This study also explores the relationship between moral mimicry and model size, and similarity between human and LLM moral word use.",
"pages": "282--297",
"doi": "10.18653/v1/2023.acl-srw.40",
"url": "https://aclanthology.org/2023.acl-srw.40",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
"editor": "Padmakumar, Vishakh and\nVallejo, Gisela and\nFu, Yao",
"author": "Simmons, Gabriel",
"title": "Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity",
"ENTRYTYPE": "inproceedings",
"ID": "simmons-2023-moral",
"source": "acl.jsonl"
},
"sun-etal-2023-measuring": {
"abstract": "Predicting how a user responds to news events enables important applications such as allowing intelligent agents or content producers to estimate the effect on different communities and revise unreleased messages to prevent unexpected bad outcomes such as social conflict and moral injury. We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona (characterizing an individual or a group) might have upon seeing a news message. Compared to the previous efforts which only predict generic comments to news, the proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response. This enables more accurate and comprehensive inference on the mental state of the persona. Meanwhile, the generated sentiment dimensions make the evaluation and application more reliable. We create the first benchmark dataset, which consists of 13,357 responses to 3,847 news headlines from Twitter. We further evaluate the SOTA neural language models with our dataset. The empirical results suggest that the included persona attributes are helpful for the performance of all response dimensions. Our analysis shows that the best-performing models are capable of predicting responses that are consistent with the personas, and as a byproduct, the task formulation also enables many interesting applications in the analysis of social network groups and their opinions, such as the discovery of extreme opinion groups.",
"pages": "554--562",
"doi": "10.18653/v1/2023.acl-short.48",
"url": "https://aclanthology.org/2023.acl-short.48",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
"editor": "Rogers, Anna and\nBoyd-Graber, Jordan and\nOkazaki, Naoaki",
"author": "Sun, Chenkai and\nLi, Jinning and\nChan, Hou Pong and\nZhai, ChengXiang and\nJi, Heng",
"title": "Measuring the Effect of Influential Messages on Varying Personas",
"ENTRYTYPE": "inproceedings",
"ID": "sun-etal-2023-measuring",
"source": "acl.jsonl"
},
"ramezani-xu-2023-knowledge": {
"abstract": "Moral norms vary across cultures. A recent line of work suggests that English large language models contain human-like moral biases, but these studies typically do not examine moral variation in a diverse cultural setting. We investigate the extent to which monolingual English language models contain knowledge about moral norms in different countries. We consider two levels of analysis: 1) whether language models capture fine-grained moral variation across countries over a variety of topics such as {``}homosexuality{''} and {``}divorce{''}; 2) whether language models capture cultural diversity and shared tendencies in which topics people around the globe tend to diverge or agree on in their moral judgment. We perform our analyses with two public datasets from the World Values Survey (across 55 countries) and PEW global surveys (across 40 countries) on morality. We find that pre-trained English language models predict empirical moral norms across countries worse than the English moral norms reported previously. However, fine-tuning language models on the survey data improves inference across countries at the expense of a less accurate estimate of the English moral norms. We discuss the relevance and challenges of incorporating cultural knowledge into the automated inference of moral norms.",
"pages": "428--446",
"doi": "10.18653/v1/2023.acl-long.26",
"url": "https://aclanthology.org/2023.acl-long.26",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"editor": "Rogers, Anna and\nBoyd-Graber, Jordan and\nOkazaki, Naoaki",
"author": "Ramezani, Aida and\nXu, Yang",
"title": "Knowledge of cultural moral norms in large language models",
"ENTRYTYPE": "inproceedings",
"ID": "ramezani-xu-2023-knowledge",
"source": "acl.jsonl"
},
"sun-etal-2023-moraldial": {
"abstract": "Morality in dialogue systems has raised great attention in research recently. A moral dialogue system aligned with users{'} values could enhance conversation engagement and user connections. In this paper, we propose a framework, MoralDial to train and evaluate moral dialogue systems. In our framework, we first explore the communication mechanisms of morality and resolve expressed morality into three parts, which indicate the roadmap for building a moral dialogue system. Based on that, we design a simple yet effective method: constructing moral discussions between simulated specific users and the dialogue system. The constructed discussions consist of expressing, explaining, revising, and inferring moral views in dialogue exchanges, which makes conversational models learn morality well in a natural manner. Furthermore, we propose a novel evaluation method under the framework. We evaluate the multiple aspects of morality by judging the relation between dialogue responses and human values in discussions, where the multifaceted nature of morality is particularly considered. Automatic and manual experiments demonstrate that our framework is promising to train and evaluate moral dialogue systems.",
"pages": "2213--2230",
"doi": "10.18653/v1/2023.acl-long.123",
"url": "https://aclanthology.org/2023.acl-long.123",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"editor": "Rogers, Anna and\nBoyd-Graber, Jordan and\nOkazaki, Naoaki",
"author": "Sun, Hao and\nZhang, Zhexin and\nMi, Fei and\nWang, Yasheng and\nLiu, Wei and\nCui, Jianwei and\nWang, Bin and\nLiu, Qun and\nHuang, Minlie",
"title": "MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions",
"ENTRYTYPE": "inproceedings",
"ID": "sun-etal-2023-moraldial",
"source": "acl.jsonl"
},
"pyatkin-etal-2023-clarifydelphi": {
"abstract": "Context is everything, even in commonsense moral reasoning. Changing contexts can flip the moral judgment of an action; Lying to a friend is wrong in general, but may be morally acceptable if it is intended to protect their life. We present ClarifyDelphi, an interactive system that learns to ask clarification questions (e.g., why did you lie to your friend?) in order to elicit additional salient contexts of a social or moral situation. We posit that questions whose potential answers lead to \\textit{diverging} moral judgments are the most informative. Thus, we propose a reinforcement learning framework with a defeasibility reward that aims to maximize the divergence between moral judgments of hypothetical answers to a question. Human evaluation demonstrates that our system generates more relevant, informative and defeasible questions compared to competitive baselines. Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition (i.e., the diverse contexts in which moral rules can be bent), and we hope that research in this direction can assist both cognitive and computational investigations of moral judgments.",
"pages": "11253--11271",
"doi": "10.18653/v1/2023.acl-long.630",
"url": "https://aclanthology.org/2023.acl-long.630",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"editor": "Rogers, Anna and\nBoyd-Graber, Jordan and\nOkazaki, Naoaki",
"author": "Pyatkin, Valentina and\nHwang, Jena D. and\nSrikumar, Vivek and\nLu, Ximing and\nJiang, Liwei and\nChoi, Yejin and\nBhagavatula, Chandra",
"title": "ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations",
"ENTRYTYPE": "inproceedings",
"ID": "pyatkin-etal-2023-clarifydelphi",
"source": "acl.jsonl"
},
"liscio-etal-2023-text": {
"abstract": "Moral rhetoric influences our judgement. Although social scientists recognize moral expression as domain specific, there are no systematic methods for analyzing whether a text classifier learns the domain-specific expression of moral language or not. We propose Tomea, a method to compare a supervised classifier{'}s representation of moral rhetoric across domains. Tomea enables quantitative and qualitative comparisons of moral rhetoric via an interpretable exploration of similarities and differences across moral concepts and domains. We apply Tomea on moral narratives in thirty-five thousand tweets from seven domains. We extensively evaluate the method via a crowd study, a series of cross-domain moral classification comparisons, and a qualitative analysis of cross-domain moral expression.",
"pages": "14113--14132",
"doi": "10.18653/v1/2023.acl-long.789",
"url": "https://aclanthology.org/2023.acl-long.789",
"publisher": "Association for Computational Linguistics",
"address": "Toronto, Canada",
"year": "2023",
"month": "July",
"booktitle": "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"editor": "Rogers, Anna and\nBoyd-Graber, Jordan and\nOkazaki, Naoaki",
"author": "Liscio, Enrico and\nAraque, Oscar and\nGatti, Lorenzo and\nConstantinescu, Ionut and\nJonker, Catholijn and\nKalimeri, Kyriaki and\nMurukannaiah, Pradeep Kumar",
"title": "What does a Text Classifier Learn about Morality? An Explainable Method for Cross-Domain Comparison of Moral Rhetoric",
"ENTRYTYPE": "inproceedings",
"ID": "liscio-etal-2023-text",
"source": "acl.jsonl"
},
"zheng-etal-2022-towards": {
"abstract": "Online messaging is dynamic, influential, and highly contextual, and a single post may contain contrasting sentiments towards multiple entities, such as dehumanizing one actor while empathizing with another in the same message. These complexities are important to capture for understanding the systematic abuse voiced within an online community, or for determining whether individuals are advocating for abuse, opposing abuse, or simply reporting abuse. In this work, we describe a formulation of directed social regard (DSR) as a problem of multi-entity aspect-based sentiment analysis (ME-ABSA), which models the degree of intensity of multiple sentiments that are associated with entities described by a text document. Our DSR schema is informed by Bandura{'}s psychosocial theory of moral disengagement and by recent work in ABSA. We present a dataset of over 2,900 posts and sentences, comprising over 24,000 entities annotated for DSR over nine psychosocial dimensions by three annotators. We present a novel transformer-based ME-ABSA model for DSR, achieving favorable preliminary results on this dataset.",
"pages": "203--208",
"doi": "10.18653/v1/2022.woah-1.19",
"url": "https://aclanthology.org/2022.woah-1.19",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, Washington (Hybrid)",
"year": "2022",
"month": "July",
"booktitle": "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
"editor": "Narang, Kanika and\nMostafazadeh Davani, Aida and\nMathias, Lambert and\nVidgen, Bertie and\nTalat, Zeerak",
"author": "Zheng, Joan and\nFriedman, Scott and\nSchmer-galunder, Sonja and\nMagnusson, Ian and\nWheelock, Ruta and\nGottlieb, Jeremy and\nGomez, Diana and\nMiller, Christopher",
"title": "Towards a Multi-Entity Aspect-Based Sentiment Analysis for Characterizing Directed Social Regard in Online Messaging",
"ENTRYTYPE": "inproceedings",
"ID": "zheng-etal-2022-towards",
"source": "acl.jsonl"
},
"fraser-etal-2022-moral": {
"abstract": "In an effort to guarantee that machine learning model outputs conform with human moral values, recent work has begun exploring the possibility of explicitly training models to learn the difference between right and wrong. This is typically done in a bottom-up fashion, by exposing the model to different scenarios, annotated with human moral judgements. One question, however, is whether the trained models actually learn any consistent, higher-level ethical principles from these datasets {--} and if so, what? Here, we probe the Allen AI Delphi model with a set of standardized morality questionnaires, and find that, despite some inconsistencies, Delphi tends to mirror the moral principles associated with the demographic groups involved in the annotation process. We question whether this is desirable and discuss how we might move forward with this knowledge.",
"pages": "26--42",
"doi": "10.18653/v1/2022.trustnlp-1.3",
"url": "https://aclanthology.org/2022.trustnlp-1.3",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, U.S.A.",
"year": "2022",
"month": "July",
"booktitle": "Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)",
"editor": "Verma, Apurv and\nPruksachatkun, Yada and\nChang, Kai-Wei and\nGalstyan, Aram and\nDhamala, Jwala and\nCao, Yang Trista",
"author": "Fraser, Kathleen C. and\nKiritchenko, Svetlana and\nBalkir, Esma",
"title": "Does Moral Code have a Moral Code? Probing Delphi's Moral Philosophy",
"ENTRYTYPE": "inproceedings",
"ID": "fraser-etal-2022-moral",
"source": "acl.jsonl"
},
"roy-etal-2022-towards": {
"abstract": "Data scarcity is a common problem in NLP, especially when the annotation pertains to nuanced socio-linguistic concepts that require specialized knowledge. As a result, few-shot identification of these concepts is desirable. Few-shot in-context learning using pre-trained Large Language Models (LLMs) has been recently applied successfully in many NLP tasks. In this paper, we study few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et al., 2021), using LLMs. Morality frames are a representation framework that provides a holistic view of the moral sentiment expressed in text, identifying the relevant moral foundation (Haidt and Graham, 2007) and at a finer level of granularity, the moral sentiment expressed towards the entities mentioned in the text. Previous studies relied on human annotation to identify morality frames in text which is expensive. In this paper, we propose prompting based approaches using pretrained Large Language Models for identification of morality frames, relying only on few-shot exemplars. We compare our models{'} performance with few-shot RoBERTa and found promising results.",
"pages": "183--196",
"doi": "10.18653/v1/2022.nlpcss-1.20",
"url": "https://aclanthology.org/2022.nlpcss-1.20",
"publisher": "Association for Computational Linguistics",
"address": "Abu Dhabi, UAE",
"year": "2022",
"month": "November",
"booktitle": "Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)",
"editor": "Bamman, David and\nHovy, Dirk and\nJurgens, David and\nKeith, Katherine and\nO'Connor, Brendan and\nVolkova, Svitlana",
"author": "Roy, Shamik and\nNakshatri, Nishanth Sridhar and\nGoldwasser, Dan",
"title": "Towards Few-Shot Identification of Morality Frames using In-Context Learning",
"ENTRYTYPE": "inproceedings",
"ID": "roy-etal-2022-towards",
"source": "acl.jsonl"
},
"talat-etal-2022-machine": {
"abstract": "Ethics is one of the longest standing intellectual endeavors of humanity. In recent years, the fields of AI and NLP have attempted to address issues of harmful outcomes in machine learning systems that are made to interface with humans. One recent approach in this vein is the construction of NLP morality models that can take in arbitrary text and output a moral judgment about the situation described. In this work, we offer a critique of such NLP methods for automating ethical decision-making. Through an audit of recent work on computational approaches for predicting morality, we examine the broader issues that arise from such efforts. We conclude with a discussion of how machine ethics could usefully proceed in NLP, by focusing on current and near-future uses of technology, in a way that centers around transparency, democratic values, and allows for straightforward accountability.",
"pages": "769--779",
"doi": "10.18653/v1/2022.naacl-main.56",
"url": "https://aclanthology.org/2022.naacl-main.56",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, United States",
"year": "2022",
"month": "July",
"booktitle": "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"editor": "Carpuat, Marine and\nde Marneffe, Marie-Catherine and\nMeza Ruiz, Ivan Vladimir",
"author": "Talat, Zeerak and\nBlix, Hagen and\nValvoda, Josef and\nGanesh, Maya Indira and\nCotterell, Ryan and\nWilliams, Adina",
"title": "On the Machine Learning of Ethical Judgments from Natural Language",
"ENTRYTYPE": "inproceedings",
"ID": "talat-etal-2022-machine",
"source": "acl.jsonl"
},
"guan-etal-2022-corpus": {
"abstract": "Teaching morals is one of the most important purposes of storytelling. An essential ability for understanding and writing moral stories is bridging story plots and implied morals. Its challenges mainly lie in: (1) grasping knowledge about abstract concepts in morals, (2) capturing inter-event discourse relations in stories, and (3) aligning value preferences of stories and morals concerning good or bad behavior. In this paper, we propose two understanding tasks and two generation tasks to assess these abilities of machines. We present STORAL, a new dataset of Chinese and English human-written moral stories. We show the difficulty of the proposed tasks by testing various models with automatic and manual evaluation on STORAL. Furthermore, we present a retrieval-augmented algorithm that effectively exploits related concepts or events in training sets as additional guidance to improve performance on these tasks.",
"pages": "5069--5087",
"doi": "10.18653/v1/2022.naacl-main.374",
"url": "https://aclanthology.org/2022.naacl-main.374",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, United States",
"year": "2022",
"month": "July",
"booktitle": "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"editor": "Carpuat, Marine and\nde Marneffe, Marie-Catherine and\nMeza Ruiz, Ivan Vladimir",
"author": "Guan, Jian and\nLiu, Ziqi and\nHuang, Minlie",
"title": "A Corpus for Understanding and Generating Moral Stories",
"ENTRYTYPE": "inproceedings",
"ID": "guan-etal-2022-corpus",
"source": "acl.jsonl"
},
"pacheco-etal-2022-holistic": {
"abstract": "The Covid-19 pandemic has led to infodemic of low quality information leading to poor health decisions. Combating the outcomes of this infodemic is not only a question of identifying false claims, but also reasoning about the decisions individuals make. In this work we propose a holistic analysis framework connecting stance and reason analysis, and fine-grained entity level moral sentiment analysis. We study how to model the dependencies between the different level of analysis and incorporate human insights into the learning process. Experiments show that our framework provides reliable predictions even in the low-supervision settings.",
"pages": "5821--5839",
"doi": "10.18653/v1/2022.naacl-main.427",
"url": "https://aclanthology.org/2022.naacl-main.427",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, United States",
"year": "2022",
"month": "July",
"booktitle": "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"editor": "Carpuat, Marine and\nde Marneffe, Marie-Catherine and\nMeza Ruiz, Ivan Vladimir",
"author": "Pacheco, Maria Leonor and\nIslam, Tunazzina and\nMahajan, Monal and\nShor, Andrey and\nYin, Ming and\nUngar, Lyle and\nGoldwasser, Dan",
"title": "A Holistic Framework for Analyzing the COVID-19 Vaccine Debate",
"ENTRYTYPE": "inproceedings",
"ID": "pacheco-etal-2022-holistic",
"source": "acl.jsonl"
},
"ammanabrolu-etal-2022-aligning": {
"abstract": "We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games{---}environments wherein an agent perceives and interacts with a world through natural language. Such interactive agents are often trained via reinforcement learning to optimize task performance, even when such rewards may lead to agent behaviors that violate societal norms{---}causing harm either to the agent itself or other entities in the environment. Social value alignment refers to creating agents whose behaviors conform to expected moral and social norms for a given context and group of people{---}in our case, it means agents that behave in a manner that is less harmful and more beneficial for themselves and others. We build on the Jiminy Cricket benchmark (Hendrycks et al. 2021), a set of 25 annotated interactive narratives containing thousands of morally salient scenarios covering everything from theft and bodily harm to altruism. We introduce the GALAD (Game-value ALignment through Action Distillation) agent that uses the social commonsense knowledge present in specially trained language models to contextually restrict its action space to only those actions that are aligned with socially beneficial values. An experimental study shows that the GALAD agent makes decisions efficiently enough to improve state-of-the-art task performance by 4{\\%} while reducing the frequency of socially harmful behaviors by 25{\\%} compared to strong contemporary value alignment approaches.",
"pages": "5994--6017",
"doi": "10.18653/v1/2022.naacl-main.439",
"url": "https://aclanthology.org/2022.naacl-main.439",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, United States",
"year": "2022",
"month": "July",
"booktitle": "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"editor": "Carpuat, Marine and\nde Marneffe, Marie-Catherine and\nMeza Ruiz, Ivan Vladimir",
"author": "Ammanabrolu, Prithviraj and\nJiang, Liwei and\nSap, Maarten and\nHajishirzi, Hannaneh and\nChoi, Yejin",
"title": "Aligning to Social Norms and Values in Interactive Narratives",
"ENTRYTYPE": "inproceedings",
"ID": "ammanabrolu-etal-2022-aligning",
"source": "acl.jsonl"
},
"alhassan-etal-2022-bad": {
"abstract": "Natural language processing (NLP) has been shown to perform well in various tasks, such as answering questions, ascertaining natural language inference and anomaly detection. However, there are few NLP-related studies that touch upon the moral context conveyed in text. This paper studies whether state-of-the-art, pre-trained language models are capable of passing moral judgments on posts retrieved from a popular Reddit user board. Reddit is a social discussion website and forum where posts are promoted by users through a voting system. In this work, we construct a dataset that can be used for moral judgement tasks by collecting data from the AITA? (Am I the A*******?) subreddit. To model our task, we harnessed the power of pre-trained language models, including BERT, RoBERTa, RoBERTa-large, ALBERT and Longformer. We then fine-tuned these models and evaluated their ability to predict the correct verdict as judged by users for each post in the datasets. RoBERTa showed relative improvements across the three datasets, exhibiting a rate of 87{\\%} accuracy and a Matthews correlation coefficient (MCC) of 0.76, while the use of the Longformer model slightly improved the performance when used with longer sequences, achieving 87{\\%} accuracy and 0.77 MCC.",
"pages": "267--276",
"url": "https://aclanthology.org/2022.lrec-1.28",
"publisher": "European Language Resources Association",
"address": "Marseille, France",
"year": "2022",
"month": "June",
"booktitle": "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
"editor": "Calzolari, Nicoletta and\nB{\\'e}chet, Fr{\\'e}d{\\'e}ric and\nBlache, Philippe and\nChoukri, Khalid and\nCieri, Christopher and\nDeclerck, Thierry and\nGoggi, Sara and\nIsahara, Hitoshi and\nMaegaard, Bente and\nMariani, Joseph and\nMazo, H{\\'e}l{\\`e}ne and\nOdijk, Jan and\nPiperidis, Stelios",
"author": "Alhassan, Areej and\nZhang, Jinkai and\nSchlegel, Viktor",
"title": "`Am I the Bad One'? Predicting the Moral Judgement of the Crowd Using Pre--trained Language Models",
"ENTRYTYPE": "inproceedings",
"ID": "alhassan-etal-2022-bad",
"source": "acl.jsonl"
},
"ntogramatzis-etal-2022-ellogon": {
"abstract": "In this paper, we present the Ellogon Web Annotation Tool. It is a collaborative, web-based annotation tool built upon the Ellogon infrastructure offering an improved user experience and adaptability to various annotation scenarios by making good use of the latest design practices and web development frameworks. Being in development for many years, this paper describes its current architecture, along with the recent modifications that extend the existing functionalities and the new features that were added. The new version of the tool offers document analytics, annotation inspection and comparison features, a modern UI, and formatted text import (e.g. TEI XML documents, rendered with simple markup). We present two use cases that serve as two examples of different annotation scenarios to demonstrate the new functionalities. An appropriate (user-supplied, XML-based) annotation schema is used for each scenario. The first schema contains the relevant components for representing concepts, moral values, and ideas. The second includes all the necessary elements for annotating argumentative units in a document and their binary relations.",
"pages": "3442--3450",
"url": "https://aclanthology.org/2022.lrec-1.368",
"publisher": "European Language Resources Association",
"address": "Marseille, France",
"year": "2022",
"month": "June",
"booktitle": "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
"editor": "Calzolari, Nicoletta and\nB{\\'e}chet, Fr{\\'e}d{\\'e}ric and\nBlache, Philippe and\nChoukri, Khalid and\nCieri, Christopher and\nDeclerck, Thierry and\nGoggi, Sara and\nIsahara, Hitoshi and\nMaegaard, Bente and\nMariani, Joseph and\nMazo, H{\\'e}l{\\`e}ne and\nOdijk, Jan and\nPiperidis, Stelios",
"author": "Ntogramatzis, Alexandros Fotios and\nGradou, Anna and\nPetasis, Georgios and\nKokol, Marko",
"title": "The Ellogon Web Annotation Tool: Annotating Moral Values and Arguments",
"ENTRYTYPE": "inproceedings",
"ID": "ntogramatzis-etal-2022-ellogon",
"source": "acl.jsonl"
},
"perez-almendros-etal-2022-pre": {
"abstract": "Patronizing and Condescending Language (PCL) is a subtle but harmful type of discourse, yet the task of recognizing PCL remains under-studied by the NLP community. Recognizing PCL is challenging because of its subtle nature, because available datasets are limited in size, and because this task often relies on some form of commonsense knowledge. In this paper, we study to what extent PCL detection models can be improved by pre-training them on other, more established NLP tasks. We find that performance gains are indeed possible in this way, in particular when pre-training on tasks focusing on sentiment, harmful language and commonsense morality. In contrast, for tasks focusing on political speech and social justice, no or only very small improvements were witnessed. These findings improve our understanding of the nature of PCL.",
"pages": "3902--3911",
"url": "https://aclanthology.org/2022.lrec-1.415",
"publisher": "European Language Resources Association",
"address": "Marseille, France",
"year": "2022",
"month": "June",
"booktitle": "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
"editor": "Calzolari, Nicoletta and\nB{\\'e}chet, Fr{\\'e}d{\\'e}ric and\nBlache, Philippe and\nChoukri, Khalid and\nCieri, Christopher and\nDeclerck, Thierry and\nGoggi, Sara and\nIsahara, Hitoshi and\nMaegaard, Bente and\nMariani, Joseph and\nMazo, H{\\'e}l{\\`e}ne and\nOdijk, Jan and\nPiperidis, Stelios",
"author": "Perez Almendros, Carla and\nEspinosa Anke, Luis and\nSchockaert, Steven",
"title": "Pre-Training Language Models for Identifying Patronizing and Condescending Language: An Analysis",
"ENTRYTYPE": "inproceedings",
"ID": "perez-almendros-etal-2022-pre",
"source": "acl.jsonl"
},
"levine-2022-distribution": {
"abstract": "Deontic modals are auxiliary verbs which express some kind of necessity, obligation, or moral recommendation. This paper investigates the collocation and distribution within Jane Austen{'}s six mature novels of the following deontic modals: must, should, ought, and need. We also examine the co-occurrences of these modals with name mentions of the heroines in the six novels, categorizing each occurrence with a category of obligation if applicable. The paper offers a brief explanation of the categories of obligation chosen for this investigation. In order to examine the types of obligations associated with each heroine, we then investigate the distribution of these categories in relation to mentions of each heroine. The patterns observed show a general concurrence with the thematic characterizations of Austen{'}s heroines which are found in literary analysis.",
"pages": "70--74",
"url": "https://aclanthology.org/2022.latechclfl-1.9",
"publisher": "International Conference on Computational Linguistics",
"address": "Gyeongju, Republic of Korea",
"year": "2022",
"month": "October",
"booktitle": "Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
"editor": "Degaetano, Stefania and\nKazantseva, Anna and\nReiter, Nils and\nSzpakowicz, Stan",
"author": "Levine, Lauren",
"title": "The Distribution of Deontic Modals in Jane Austen's Mature Novels",
"ENTRYTYPE": "inproceedings",
"ID": "levine-2022-distribution",
"source": "acl.jsonl"
},
"liu-etal-2022-aligning": {
"abstract": "Although current large-scale generative language models (LMs) can show impressive insights about factual knowledge, they do not exhibit similar success with respect to human values judgements (e.g., whether or not the generations of an LM are moral). Existing methods learn human values either by directly mimicking the behavior of human data, or rigidly constraining the generation space to human-chosen tokens. These methods are inherently limited in that they do not consider the contextual and abstract nature of human values and as a result often fail when dealing with out-of-domain context or sophisticated and abstract human values. This paper proposes SENSEI, a new reinforcement learning based method that can embed human values judgements into each step of language generation. SENSEI deploys an Actor-Critic framework, where the Critic is a reward distributor that simulates the reward assignment procedure of humans, while the Actor guides the generation towards the maximum reward direction. Compared with five existing methods in three human values alignment datasets, SENSEI not only achieves higher alignment performance in terms of both automatic and human evaluations, but also shows improvements on robustness and transfer learning on unseen human values.",
"pages": "241--252",
"doi": "10.18653/v1/2022.findings-naacl.18",
"url": "https://aclanthology.org/2022.findings-naacl.18",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, United States",
"year": "2022",
"month": "July",
"booktitle": "Findings of the Association for Computational Linguistics: NAACL 2022",
"editor": "Carpuat, Marine and\nde Marneffe, Marie-Catherine and\nMeza Ruiz, Ivan Vladimir",
"author": "Liu, Ruibo and\nZhang, Ge and\nFeng, Xinyu and\nVosoughi, Soroush",
"title": "Aligning Generative Language Models with Human Values",
"ENTRYTYPE": "inproceedings",
"ID": "liu-etal-2022-aligning",
"source": "acl.jsonl"
},
"hofmann-etal-2022-modeling": {
"abstract": "The increasing polarization of online political discourse calls for computational tools that automatically detect and monitor ideological divides in social media. We introduce a minimally supervised method that leverages the network structure of online discussion forums, specifically Reddit, to detect polarized concepts. We model polarization along the dimensions of salience and framing, drawing upon insights from moral psychology. Our architecture combines graph neural networks with structured sparsity learning and results in representations for concepts and subreddits that capture temporal ideological dynamics such as right-wing and left-wing radicalization.",
"pages": "536--550",
"doi": "10.18653/v1/2022.findings-naacl.41",
"url": "https://aclanthology.org/2022.findings-naacl.41",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, United States",
"year": "2022",
"month": "July",
"booktitle": "Findings of the Association for Computational Linguistics: NAACL 2022",
"editor": "Carpuat, Marine and\nde Marneffe, Marie-Catherine and\nMeza Ruiz, Ivan Vladimir",
"author": "Hofmann, Valentin and\nDong, Xiaowen and\nPierrehumbert, Janet and\nSchuetze, Hinrich",
"title": "Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity",
"ENTRYTYPE": "inproceedings",
"ID": "hofmann-etal-2022-modeling",
"source": "acl.jsonl"
},
"liscio-etal-2022-cross": {
"abstract": "Moral values influence how we interpret and act upon the information we receive. Identifying human moral values is essential for artificially intelligent agents to co-exist with humans. Recent progress in natural language processing allows the identification of moral values in textual discourse. However, domain-specific moral rhetoric poses challenges for transferring knowledge from one domain to another. We provide the first extensive investigation on the effects of cross-domain classification of moral values from text. We compare a state-of-the-art deep learning model (BERT) in seven domains and four cross-domain settings. We show that a value classifier can generalize and transfer knowledge to novel domains, but it can introduce catastrophic forgetting. We also highlight the typical classification errors in cross-domain value classification and compare the model predictions to the annotators agreement. Our results provide insights to computer and social scientists that seek to identify moral rhetoric specific to a domain of discourse.",
"pages": "2727--2745",
"doi": "10.18653/v1/2022.findings-naacl.209",
"url": "https://aclanthology.org/2022.findings-naacl.209",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, United States",
"year": "2022",
"month": "July",
"booktitle": "Findings of the Association for Computational Linguistics: NAACL 2022",
"editor": "Carpuat, Marine and\nde Marneffe, Marie-Catherine and\nMeza Ruiz, Ivan Vladimir",
"author": "Liscio, Enrico and\nDondera, Alin and\nGeadau, Andrei and\nJonker, Catholijn and\nMurukannaiah, Pradeep",
"title": "Cross-Domain Classification of Moral Values",
"ENTRYTYPE": "inproceedings",
"ID": "liscio-etal-2022-cross",
"source": "acl.jsonl"
},
"lee-goldwasser-2022-towards": {
"abstract": "Large-scale language models have been reducing the gap between machines and humans in understanding the real world, yet understanding an individual{'}s theory of mind and behavior from text is far from being resolved. This research proposes a neural model{---}Subjective Ground Attention{---}that learns subjective grounds of individuals and accounts for their judgments on situations of others posted on social media. Using simple attention modules as well as taking one{'}s previous activities into consideration, we empirically show that our model provides human-readable explanations of an individual{'}s subjective preference in judging social situations. We further qualitatively evaluate the explanations generated by the model and claim that our model learns an individual{'}s subjective orientation towards abstract moral concepts.",
"pages": "1752--1766",
"doi": "10.18653/v1/2022.findings-emnlp.126",
"url": "https://aclanthology.org/2022.findings-emnlp.126",
"publisher": "Association for Computational Linguistics",
"address": "Abu Dhabi, United Arab Emirates",
"year": "2022",
"month": "December",
"booktitle": "Findings of the Association for Computational Linguistics: EMNLP 2022",
"editor": "Goldberg, Yoav and\nKozareva, Zornitsa and\nZhang, Yue",
"author": "Lee, Younghun and\nGoldwasser, Dan",
"title": "Towards Explaining Subjective Ground of Individuals on Social Media",
"ENTRYTYPE": "inproceedings",
"ID": "lee-goldwasser-2022-towards",
"source": "acl.jsonl"
},
"kiehne-etal-2022-contextualizing": {
"abstract": "To comprehensibly contextualize decisions, artificial systems in social situations need a high degree of awareness of the rules of conduct of human behavior. Especially transformer-based language models have recently been shown to exhibit some such awareness. But what if norms in some social setting do not adhere to or even blatantly deviate from the mainstream? In this paper, we introduce a novel mechanism based on deontic logic to allow for a flexible adaptation of individual norms by de-biasing training data sets and a task-reduction to textual entailment. Building on the popular {`}Moral Stories{'} dataset we on the one hand highlight the intrinsic bias of current language models, on the other hand characterize the adaptability of pre-trained models to deviating norms in fine-tuning settings.",
"pages": "4620--4633",
"doi": "10.18653/v1/2022.findings-emnlp.339",
"url": "https://aclanthology.org/2022.findings-emnlp.339",
"publisher": "Association for Computational Linguistics",
"address": "Abu Dhabi, United Arab Emirates",
"year": "2022",
"month": "December",
"booktitle": "Findings of the Association for Computational Linguistics: EMNLP 2022",
"editor": "Goldberg, Yoav and\nKozareva, Zornitsa and\nZhang, Yue",
"author": "Kiehne, Niklas and\nKroll, Hermann and\nBalke, Wolf-Tilo",
"title": "Contextualizing Language Models for Norms Diverging from Social Majority",
"ENTRYTYPE": "inproceedings",
"ID": "kiehne-etal-2022-contextualizing",
"source": "acl.jsonl"
},
"mather-etal-2022-stance": {
"abstract": "We present a generalized paradigm for adaptation of propositional analysis (predicate-argument pairs) to new tasks and domains. We leverage an analogy between stances (belief-driven sentiment) and concerns (topical issues with moral dimensions/endorsements) to produce an explanatory representation. A key contribution is the combination of semi-automatic resource building for extraction of domain-dependent concern types (with 2-4 hours of human labor per domain) and an entirely automatic procedure for extraction of domain-independent moral dimensions and endorsement values. Prudent (automatic) selection of terms from propositional structures for lexical expansion (via semantic similarity) produces new moral dimension lexicons at three levels of granularity beyond a strong baseline lexicon. We develop a ground truth (GT) based on expert annotators and compare our concern detection output to GT, to yield 231{\\%} improvement in recall over baseline, with only a 10{\\%} loss in precision. F1 yields 66{\\%} improvement over baseline and 97.8{\\%} of human performance. Our lexically based approach yields large savings over approaches that employ costly human labor and model building. We provide to the community a newly expanded moral dimension/value lexicon, annotation guidelines, and GT.",
"pages": "3354--3367",
"doi": "10.18653/v1/2022.findings-acl.264",
"url": "https://aclanthology.org/2022.findings-acl.264",
"publisher": "Association for Computational Linguistics",
"address": "Dublin, Ireland",
"year": "2022",
"month": "May",
"booktitle": "Findings of the Association for Computational Linguistics: ACL 2022",
"editor": "Muresan, Smaranda and\nNakov, Preslav and\nVillavicencio, Aline",
"author": "Mather, Brodie and\nDorr, Bonnie and\nDalton, Adam and\nde Beaumont, William and\nRambow, Owen and\nSchmer-Galunder, Sonja",
"title": "From Stance to Concern: Adaptation of Propositional Analysis to New Tasks and Domains",
"ENTRYTYPE": "inproceedings",
"ID": "mather-etal-2022-stance",
"source": "acl.jsonl"
},
"pacheco-etal-2022-hands": {
"abstract": "We recently introduced DRaiL, a declarative neural-symbolic modeling framework designed to support a wide variety of NLP scenarios. In this paper, we enhance DRaiL with an easy to use Python interface, equipped with methods to define, modify and augment DRaiL models interactively, as well as with methods to debug and visualize the predictions made. We demonstrate this interface with a challenging NLP task: predicting sentence and entity level moral sentiment in political tweets.",
"pages": "371--378",
"doi": "10.18653/v1/2022.emnlp-demos.37",
"url": "https://aclanthology.org/2022.emnlp-demos.37",
"publisher": "Association for Computational Linguistics",
"address": "Abu Dhabi, UAE",
"year": "2022",
"month": "December",
"booktitle": "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"editor": "Che, Wanxiang and\nShutova, Ekaterina",
"author": "Pacheco, Maria Leonor and\nRoy, Shamik and\nGoldwasser, Dan",
"title": "Hands-On Interactive Neuro-Symbolic NLP with DRaiL",
"ENTRYTYPE": "inproceedings",
"ID": "pacheco-etal-2022-hands",
"source": "acl.jsonl"
},
"asprino-etal-2022-uncovering": {
"abstract": "Moral values as commonsense norms shape our everyday individual and community behavior. The possibility to extract moral attitude rapidly from natural language is an appealing perspective that would enable a deeper understanding of social interaction dynamics and the individual cognitive and behavioral dimension. In this work we focus on detecting moral content from natural language and we test our methods on a corpus of tweets previously labeled as containing moral values or violations, according to Moral Foundation Theory. We develop and compare two different approaches: (i) a frame-based symbolic value detector based on knowledge graphs and (ii) a zero-shot machine learning model fine-tuned on a task of Natural Language Inference (NLI) and a task of emotion detection. The final outcome from our work consists in two approaches meant to perform without the need for prior training process on a moral value detection task.",
"pages": "33--41",
"doi": "10.18653/v1/2022.deelio-1.4",
"url": "https://aclanthology.org/2022.deelio-1.4",
"publisher": "Association for Computational Linguistics",
"address": "Dublin, Ireland and Online",
"year": "2022",
"month": "May",
"booktitle": "Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures",
"editor": "Agirre, Eneko and\nApidianaki, Marianna and\nVuli{\\'c}, Ivan",
"author": "Asprino, Luigi and\nBulla, Luana and\nDe Giorgis, Stefano and\nGangemi, Aldo and\nMarinucci, Ludovica and\nMongiovi, Misael",
"title": "Uncovering Values: Detecting Latent Moral Content from Natural Language with Explainable and Non-Trained Methods",
"ENTRYTYPE": "inproceedings",
"ID": "asprino-etal-2022-uncovering",
"source": "acl.jsonl"
},
"padia-etal-2022-jointly": {
"abstract": "Moral values as commonsense norms shape our everyday individual and community behavior. The possibility to extract moral attitude rapidly from natural language is an appealing perspective that would enable a deeper understanding of social interaction dynamics and the individual cognitive and behavioral dimension. In this work we focus on detecting moral content from natural language and we test our methods on a corpus of tweets previously labeled as containing moral values or violations, according to Moral Foundation Theory. We develop and compare two different approaches: (i) a frame-based symbolic value detector based on knowledge graphs and (ii) a zero-shot machine learning model fine-tuned on a task of Natural Language Inference (NLI) and a task of emotion detection. The final outcome from our work consists in two approaches meant to perform without the need for prior training process on a moral value detection task.",
"pages": "42--52",
"doi": "10.18653/v1/2022.deelio-1.5",
"url": "https://aclanthology.org/2022.deelio-1.5",
"publisher": "Association for Computational Linguistics",
"address": "Dublin, Ireland and Online",
"year": "2022",
"month": "May",
"booktitle": "Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures",
"editor": "Agirre, Eneko and\nApidianaki, Marianna and\nVuli{\\'c}, Ivan",
"author": "Padia, Ankur and\nFerraro, Francis and\nFinin, Tim",
"title": "Jointly Identifying and Fixing Inconsistent Readings from Information Extraction Systems",
"ENTRYTYPE": "inproceedings",
"ID": "padia-etal-2022-jointly",
"source": "acl.jsonl"
},
"zhao-etal-2022-polarity": {
"abstract": "Most works on computational morality focus on moral polarity recognition, i.e., distinguishing right from wrong. However, a discrete polarity label is not informative enough to reflect morality as it does not contain any degree or intensity information. Existing approaches to compute moral intensity are limited to word-level measurement and heavily rely on human labelling. In this paper, we propose MoralScore, a weakly-supervised framework that can automatically measure moral intensity from text. It only needs moral polarity labels, which are more robust and easier to acquire. Besides, the framework can capture latent moral information not only from words but also from sentence-level semantics which can provide a more comprehensive measurement. To evaluate the performance of our method, we introduce a set of evaluation metrics and conduct extensive experiments. Results show that our method achieves good performance on both automatic and human evaluations.",
"pages": "1250--1262",
"url": "https://aclanthology.org/2022.coling-1.107",
"publisher": "International Committee on Computational Linguistics",
"address": "Gyeongju, Republic of Korea",
"year": "2022",
"month": "October",
"booktitle": "Proceedings of the 29th International Conference on Computational Linguistics",
"editor": "Calzolari, Nicoletta and\nHuang, Chu-Ren and\nKim, Hansaem and\nPustejovsky, James and\nWanner, Leo and\nChoi, Key-Sun and\nRyu, Pum-Mo and\nChen, Hsin-Hsi and\nDonatelli, Lucia and\nJi, Heng and\nKurohashi, Sadao and\nPaggio, Patrizia and\nXue, Nianwen and\nKim, Seokhwan and\nHahm, Younggyun and\nHe, Zhong and\nLee, Tony Kyungil and\nSantus, Enrico and\nBond, Francis and\nNa, Seung-Hoon",
"author": "Zhao, Chunxu and\nLiu, Pengyuan and\nYu, Dong",
"title": "From Polarity to Intensity: Mining Morality from Semantic Space",
"ENTRYTYPE": "inproceedings",
"ID": "zhao-etal-2022-polarity",
"source": "acl.jsonl"
},
"ziems-etal-2022-moral": {
"abstract": "Conversational agents have come increasingly closer to human competence in open-domain dialogue settings; however, such models can reflect insensitive, hurtful, or entirely incoherent viewpoints that erode a user{'}s trust in the moral integrity of the system. Moral deviations are difficult to mitigate because moral judgments are not universal, and there may be multiple competing judgments that apply to a situation simultaneously. In this work, we introduce a new resource, not to authoritatively resolve moral ambiguities, but instead to facilitate systematic understanding of the intuitions, values and moral judgments reflected in the utterances of dialogue systems. The Moral Integrity Corpus, MIC, is such a resource, which captures the moral assumptions of 38k prompt-reply pairs, using 99k distinct Rules of Thumb (RoTs). Each RoT reflects a particular moral conviction that can explain why a chatbot{'}s reply may appear acceptable or problematic. We further organize RoTs with a set of 9 moral and social attributes and benchmark performance for attribute classification. Most importantly, we show that current neural language models can automatically generate new RoTs that reasonably describe previously unseen interactions, but they still struggle with certain scenarios. Our findings suggest that MIC will be a useful resource for understanding and language models{'} implicit moral assumptions and flexibly benchmarking the integrity of conversational agents. To download the data, see \\url{https://github.com/GT-SALT/mic}",
"pages": "3755--3773",
"doi": "10.18653/v1/2022.acl-long.261",
"url": "https://aclanthology.org/2022.acl-long.261",
"publisher": "Association for Computational Linguistics",
"address": "Dublin, Ireland",
"year": "2022",
"month": "May",
"booktitle": "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"editor": "Muresan, Smaranda and\nNakov, Preslav and\nVillavicencio, Aline",
"author": "Ziems, Caleb and\nYu, Jane and\nWang, Yi-Chia and\nHalevy, Alon and\nYang, Diyi",
"title": "The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems",
"ENTRYTYPE": "inproceedings",
"ID": "ziems-etal-2022-moral",
"source": "acl.jsonl"
},
"mohammad-2022-ethics": {
"abstract": "Several high-profile events, such as the mass testing of emotion recognition systems on vulnerable sub-populations and using question answering systems to make moral judgments, have highlighted how technology will often lead to more adverse outcomes for those that are already marginalized. At issue here are not just individual systems and datasets, but also the AI tasks themselves. In this position paper, I make a case for thinking about ethical considerations not just at the level of individual models and datasets, but also at the level of AI tasks. I will present a new form of such an effort, Ethics Sheets for AI Tasks, dedicated to fleshing out the assumptions and ethical considerations hidden in how a task is commonly framed and in the choices we make regarding the data, method, and evaluation. I will also present a template for ethics sheets with 50 ethical considerations, using the task of emotion recognition as a running example. Ethics sheets are a mechanism to engage with and document ethical considerations before building datasets and systems. Similar to survey articles, a small number of carefully created ethics sheets can serve numerous researchers and developers.",
"pages": "8368--8379",
"doi": "10.18653/v1/2022.acl-long.573",
"url": "https://aclanthology.org/2022.acl-long.573",
"publisher": "Association for Computational Linguistics",
"address": "Dublin, Ireland",
"year": "2022",
"month": "May",
"booktitle": "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"editor": "Muresan, Smaranda and\nNakov, Preslav and\nVillavicencio, Aline",
"author": "Mohammad, Saif",
"title": "Ethics Sheets for AI Tasks",
"ENTRYTYPE": "inproceedings",
"ID": "mohammad-2022-ethics",
"source": "acl.jsonl"
},
"alshomary-etal-2022-moral": {
"abstract": "An audience{'}s prior beliefs and morals are strong indicators of how likely they will be affected by a given argument. Utilizing such knowledge can help focus on shared values to bring disagreeing parties towards agreement. In argumentation technology, however, this is barely exploited so far. This paper studies the feasibility of automatically generating morally framed arguments as well as their effect on different audiences. Following the moral foundation theory, we propose a system that effectively generates arguments focusing on different morals. In an in-depth user study, we ask liberals and conservatives to evaluate the impact of these arguments. Our results suggest that, particularly when prior beliefs are challenged, an audience becomes more affected by morally framed arguments.",
"pages": "8782--8797",
"doi": "10.18653/v1/2022.acl-long.601",
"url": "https://aclanthology.org/2022.acl-long.601",
"publisher": "Association for Computational Linguistics",
"address": "Dublin, Ireland",
"year": "2022",
"month": "May",
"booktitle": "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"editor": "Muresan, Smaranda and\nNakov, Preslav and\nVillavicencio, Aline",
"author": "Alshomary, Milad and\nEl Baff, Roxanne and\nGurcke, Timon and\nWachsmuth, Henning",
"title": "The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments",
"ENTRYTYPE": "inproceedings",
"ID": "alshomary-etal-2022-moral",
"source": "acl.jsonl"
},
"roy-goldwasser-2021-analysis": {
"abstract": "The Moral Foundation Theory suggests five moral foundations that can capture the view of a user on a particular issue. It is widely used to identify sentence-level sentiment. In this paper, we study the Moral Foundation Theory in tweets by US politicians on two politically divisive issues - Gun Control and Immigration. We define the nuanced stance of politicians on these two topics by the grades given by related organizations to the politicians. First, we identify moral foundations in tweets from a huge corpus using deep relational learning. Then, qualitative and quantitative evaluations using the corpus show that there is a strong correlation between the moral foundation usage and the politicians{'} nuanced stance on a particular topic. We also found substantial differences in moral foundation usage by different political parties when they address different entities. All of these results indicate the need for more intense research in this area.",
"pages": "1--13",
"doi": "10.18653/v1/2021.socialnlp-1.1",
"url": "https://aclanthology.org/2021.socialnlp-1.1",
"publisher": "Association for Computational Linguistics",
"address": "Online",
"year": "2021",
"month": "June",
"booktitle": "Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media",
"editor": "Ku, Lun-Wei and\nLi, Cheng-Te",
"author": "Roy, Shamik and\nGoldwasser, Dan",
"title": "Analysis of Nuanced Stances and Sentiment Towards Entities of US Politicians through the Lens of Moral Foundation Theory",
"ENTRYTYPE": "inproceedings",
"ID": "roy-goldwasser-2021-analysis",
"source": "acl.jsonl"
},
"zhou-etal-2021-assessing": {
"abstract": "Lab studies in cognition and the psychology of morality have proposed some thematic and linguistic factors that influence moral reasoning. This paper assesses how well the findings of these studies generalize to a large corpus of over 22,000 descriptions of fraught situations posted to a dedicated forum. At this social-media site, users judge whether or not an author is in the wrong with respect to the event that the author described. We find that, consistent with lab studies, there are statistically significant differences in uses of first-person passive voice, as well as first-person agents and patients, between descriptions of situations that receive different blame judgments. These features also aid performance in the task of predicting the eventual collective verdicts.",
"pages": "61--69",
"doi": "10.18653/v1/2021.socialnlp-1.5",
"url": "https://aclanthology.org/2021.socialnlp-1.5",
"publisher": "Association for Computational Linguistics",
"address": "Online",
"year": "2021",
"month": "June",
"booktitle": "Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media",
"editor": "Ku, Lun-Wei and\nLi, Cheng-Te",
"author": "Zhou, Karen and\nSmith, Ana and\nLee, Lillian",
"title": "Assessing Cognitive Linguistic Influences in the Assignment of Blame",
"ENTRYTYPE": "inproceedings",
"ID": "zhou-etal-2021-assessing",
"source": "acl.jsonl"
},
"robertson-etal-2021-covid": {
"abstract": "We present a COVID-19 news dashboard which visualizes sentiment in pandemic news coverage in different languages across Europe. The dashboard shows analyses for positive/neutral/negative sentiment and moral sentiment for news articles across countries and languages. First we extract news articles from news-crawl. Then we use a pre-trained multilingual BERT model for sentiment analysis of news article headlines and a dictionary and word vectors -based method for moral sentiment analysis of news articles. The resulting dashboard gives a unified overview of news events on COVID-19 news overall sentiment, and the region and language of publication from the period starting from the beginning of January 2020 to the end of January 2021.",
"pages": "110--115",
"url": "https://aclanthology.org/2021.hackashop-1.15",
"publisher": "Association for Computational Linguistics",
"address": "Online",
"year": "2021",
"month": "April",
"booktitle": "Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation",
"editor": "Toivonen, Hannu and\nBoggia, Michele",
"author": "Robertson, Frankie and\nLagus, Jarkko and\nKajava, Kaisla",
"title": "A COVID-19 news coverage mood map of Europe",
"ENTRYTYPE": "inproceedings",
"ID": "robertson-etal-2021-covid",
"source": "acl.jsonl"
},
"ramezani-etal-2021-unsupervised-framework": {
"abstract": "Morality plays an important role in social well-being, but people{'}s moral perception is not stable and changes over time. Recent advances in natural language processing have shown that text is an effective medium for informing moral change, but no attempt has been made to quantify the origins of these changes. We present a novel unsupervised framework for tracing textual sources of moral change toward entities through time. We characterize moral change with probabilistic topical distributions and infer the source text that exerts prominent influence on the moral time course. We evaluate our framework on a diverse set of data ranging from social media to news articles. We show that our framework not only captures fine-grained human moral judgments, but also identifies coherent source topics of moral change triggered by historical events. We apply our methodology to analyze the news in the COVID-19 pandemic and demonstrate its utility in identifying sources of moral change in high-impact and real-time social events.",
"pages": "1215--1228",
"doi": "10.18653/v1/2021.findings-emnlp.105",
"url": "https://aclanthology.org/2021.findings-emnlp.105",
"publisher": "Association for Computational Linguistics",
"address": "Punta Cana, Dominican Republic",
"year": "2021",
"month": "November",
"booktitle": "Findings of the Association for Computational Linguistics: EMNLP 2021",
"editor": "Moens, Marie-Francine and\nHuang, Xuanjing and\nSpecia, Lucia and\nYih, Scott Wen-tau",
"author": "Ramezani, Aida and\nZhu, Zining and\nRudzicz, Frank and\nXu, Yang",
"title": "An unsupervised framework for tracing textual sources of moral change",
"ENTRYTYPE": "inproceedings",
"ID": "ramezani-etal-2021-unsupervised-framework",
"source": "acl.jsonl"
},
"emelin-etal-2021-moral": {
"abstract": "In social settings, much of human behavior is governed by unspoken rules of conduct rooted in societal norms. For artificial systems to be fully integrated into social environments, adherence to such norms is a central prerequisite. To investigate whether language generation models can serve as behavioral priors for systems deployed in social settings, we evaluate their ability to generate action descriptions that achieve predefined goals under normative constraints. Moreover, we examine if models can anticipate likely consequences of actions that either observe or violate known norms, or explain why certain actions are preferable by generating relevant norm hypotheses. For this purpose, we introduce Moral Stories, a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning. Finally, we propose decoding strategies that combine multiple expert models to significantly improve the quality of generated actions, consequences, and norms compared to strong baselines.",
"pages": "698--718",
"doi": "10.18653/v1/2021.emnlp-main.54",
"url": "https://aclanthology.org/2021.emnlp-main.54",
"publisher": "Association for Computational Linguistics",
"address": "Online and Punta Cana, Dominican Republic",
"year": "2021",
"month": "November",
"booktitle": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"editor": "Moens, Marie-Francine and\nHuang, Xuanjing and\nSpecia, Lucia and\nYih, Scott Wen-tau",
"author": "Emelin, Denis and\nLe Bras, Ronan and\nHwang, Jena D. and\nForbes, Maxwell and\nChoi, Yejin",
"title": "Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences",
"ENTRYTYPE": "inproceedings",
"ID": "emelin-etal-2021-moral",
"source": "acl.jsonl"
},
"roy-etal-2021-identifying": {
"abstract": "Extracting moral sentiment from text is a vital component in understanding public opinion, social movements, and policy decisions. The Moral Foundation Theory identifies five moral foundations, each associated with a positive and negative polarity. However, moral sentiment is often motivated by its targets, which can correspond to individuals or collective entities. In this paper, we introduce morality frames, a representation framework for organizing moral attitudes directed at different entities, and come up with a novel and high-quality annotated dataset of tweets written by US politicians. Then, we propose a relational learning model to predict moral attitudes towards entities and moral foundations jointly. We do qualitative and quantitative evaluations, showing that moral sentiment towards entities differs highly across political ideologies.",
"pages": "9939--9958",
"doi": "10.18653/v1/2021.emnlp-main.783",
"url": "https://aclanthology.org/2021.emnlp-main.783",
"publisher": "Association for Computational Linguistics",
"address": "Online and Punta Cana, Dominican Republic",
"year": "2021",
"month": "November",
"booktitle": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"editor": "Moens, Marie-Francine and\nHuang, Xuanjing and\nSpecia, Lucia and\nYih, Scott Wen-tau",
"author": "Roy, Shamik and\nPacheco, Maria Leonor and\nGoldwasser, Dan",
"title": "Identifying Morality Frames in Political Tweets using Relational Learning",
"ENTRYTYPE": "inproceedings",
"ID": "roy-etal-2021-identifying",
"source": "acl.jsonl"
},
"peng-etal-2021-zi": {
"language": "Chinese",
"abstract": "随着人工智能的发展,越来越多的研究开始关注人工智能伦理。在NLP领域,道德自动识别作为研究分析文本中的道德的一项重要任务,近年来开始受到研究者的关注。该任务旨在识别文本中的道德片段,其对自然语言处理的道德相关的下游任务如偏见识别消除、判定模型隐形歧视等具有重要意义。与英文相比,目前面向中文的道德识别研究开展缓慢,其主要原因是至今还没有较大型的道德中文数据集为研究提供数据。为解决上述问题,本文在中文语料上进行了中文道德句的标注工作,并初步对识别中文文本道德句进行探索。我们首先构建了国内首个10万级别的中文道德句数据集,然后本文提出了利用流行的几种机器学习方法探究识别中文道德句任务的效果。此外,我们还探索了利用额外知识辅助的方法,对中文道德句的识别任务进行了进一步的探究。",
"pages": "537--548",
"url": "https://aclanthology.org/2021.ccl-1.49",
"publisher": "Chinese Information Processing Society of China",
"address": "Huhhot, China",
"year": "2021",
"month": "August",
"booktitle": "Proceedings of the 20th Chinese National Conference on Computational Linguistics",
"editor": "Li, Sheng and\nSun, Maosong and\nLiu, Yang and\nWu, Hua and\nLiu, Kang and\nChe, Wanxiang and\nHe, Shizhu and\nRao, Gaoqi",
"author": "Peng, Shiya and\nLiu, Chang and\nDeng, Yayue and\nYu, Dong",
"title": "字里行间的道德:中文文本道德句识别研究(Morality Between the Lines: Research on Identification of Chinese Moral Sentence)",
"ENTRYTYPE": "inproceedings",
"ID": "peng-etal-2021-zi",
"source": "acl.jsonl"
},
"mahajan-shaikh-2020-studying": {
"abstract": "We highlight the contribution of emotional and moral language towards information contagion online. We find that retweet count on Twitter is significantly predicted by the use of negative emotions with negative moral language. We find that a tweet is less likely to be retweeted (hence less engagement and less potential for contagion) when it has emotional language expressed as anger along with a specific type of moral language, known as authority-vice. Conversely, when sadness is expressed with authority-vice, the tweet is more likely to be retweeted. Our findings indicate how emotional and moral language can interact in predicting information contagion.",
"pages": "128--130",
"doi": "10.18653/v1/2020.winlp-1.34",
"url": "https://aclanthology.org/2020.winlp-1.34",
"publisher": "Association for Computational Linguistics",
"address": "Seattle, USA",
"year": "2020",
"month": "July",
"booktitle": "Proceedings of the The Fourth Widening Natural Language Processing Workshop",
"editor": "Cunha, Rossana and\nShaikh, Samira and\nVaris, Erika and\nGeorgi, Ryan and\nTsai, Alicia and\nAnastasopoulos, Antonios and\nChandu, Khyathi Raghavi",
"author": "Mahajan, Khyati and\nShaikh, Samira",
"title": "Studying The Effect of Emotional and Moral Language on Information Contagion during the Charlottesville Event",
"ENTRYTYPE": "inproceedings",
"ID": "mahajan-shaikh-2020-studying",
"source": "acl.jsonl"
},
"hulpus-etal-2020-knowledge": {
"abstract": "Operationalizing morality is crucial for understanding multiple aspects of society that have moral values at their core {--} such as riots, mobilizing movements, public debates, etc. Moral Foundations Theory (MFT) has become one of the most adopted theories of morality partly due to its accompanying lexicon, the Moral Foundation Dictionary (MFD), which offers a base for computationally dealing with morality. In this work, we exploit the MFD in a novel direction by investigating how well moral values are captured by KGs. We explore three widely used KGs, and provide concept-level analogues for the MFD. Furthermore, we propose several Personalized PageRank variations in order to score all the concepts and entities in the KGs with respect to their relevance to the different moral values. Our promising results help to progress the operationalization of morality in both NLP and KG communities.",
"pages": "71--80",
"url": "https://aclanthology.org/2020.starsem-1.8",
"publisher": "Association for Computational Linguistics",
"address": "Barcelona, Spain (Online)",
"year": "2020",
"month": "December",
"booktitle": "Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics",
"editor": "Gurevych, Iryna and\nApidianaki, Marianna and\nFaruqui, Manaal",
"author": "Hulpu{\\textcommabelow{s}}, Ioana and\nKobbe, Jonathan and\nStuckenschmidt, Heiner and\nHirst, Graeme",
"title": "Knowledge Graphs meet Moral Values",
"ENTRYTYPE": "inproceedings",
"ID": "hulpus-etal-2020-knowledge",
"source": "acl.jsonl"
},
"shahid-etal-2020-detecting": {
"abstract": "We describe work in progress on detecting and understanding the moral biases of news sources by combining framing theory with natural language processing. First we draw connections between issue-specific frames and moral frames that apply to all issues. Then we analyze the connection between moral frame presence and news source political leaning. We develop and test a simple classification model for detecting the presence of a moral frame, highlighting the need for more sophisticated models. We also discuss some of the annotation and frame detection challenges that can inform future research in this area.",
"pages": "120--125",
"doi": "10.18653/v1/2020.nuse-1.15",
"url": "https://aclanthology.org/2020.nuse-1.15",
"publisher": "Association for Computational Linguistics",
"address": "Online",
"year": "2020",
"month": "July",
"booktitle": "Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events",
"editor": "Bonial, Claire and\nCaselli, Tommaso and\nChaturvedi, Snigdha and\nClark, Elizabeth and\nHuang, Ruihong and\nIyyer, Mohit and\nJaimes, Alejandro and\nJi, Heng and\nMartin, Lara J. and\nMiller, Ben and\nMitamura, Teruko and\nPeng, Nanyun and\nTetreault, Joel",
"author": "Shahid, Usman and\nDi Eugenio, Barbara and\nRojecki, Andrew and\nZheleva, Elena",
"title": "Detecting and understanding moral biases in news",
"ENTRYTYPE": "inproceedings",
"ID": "shahid-etal-2020-detecting",
"source": "acl.jsonl"
},
"van-den-berg-etal-2020-doctor": {
"isbn": "979-10-95546-34-4",
"language": "English",
"abstract": "Entity framing is the selection of aspects of an entity to promote a particular viewpoint towards that entity. We investigate entity framing of political figures through the use of names and titles in German online discourse, enhancing current research in entity framing through titling and naming that concentrates on English only. We collect tweets that mention prominent German politicians and annotate them for stance. We find that the formality of naming in these tweets correlates positively with their stance. This confirms sociolinguistic observations that naming and titling can have a status-indicating function and suggests that this function is dominant in German tweets mentioning political figures. We also find that this status-indicating function is much weaker in tweets from users that are politically left-leaning than in tweets by right-leaning users. This is in line with observations from moral psychology that left-leaning and right-leaning users assign different importance to maintaining social hierarchies.",
"pages": "4924--4932",
"url": "https://aclanthology.org/2020.lrec-1.606",
"publisher": "European Language Resources Association",
"address": "Marseille, France",
"year": "2020",
"month": "May",
"booktitle": "Proceedings of the Twelfth Language Resources and Evaluation Conference",
"editor": "Calzolari, Nicoletta and\nB{\\'e}chet, Fr{\\'e}d{\\'e}ric and\nBlache, Philippe and\nChoukri, Khalid and\nCieri, Christopher and\nDeclerck, Thierry and\nGoggi, Sara and\nIsahara, Hitoshi and\nMaegaard, Bente and\nMariani, Joseph and\nMazo, H{\\'e}l{\\`e}ne and\nMoreno, Asuncion and\nOdijk, Jan and\nPiperidis, Stelios",
"author": "van den Berg, Esther and\nKorfhage, Katharina and\nRuppenhofer, Josef and\nWiegand, Michael and\nMarkert, Katja",
"title": "Doctor Who? Framing Through Names and Titles in German",
"ENTRYTYPE": "inproceedings",
"ID": "van-den-berg-etal-2020-doctor",
"source": "acl.jsonl"
},
"king-morante-2020-must": {
"isbn": "979-10-95546-34-4",
"language": "English",
"abstract": "In this paper we analyze the use of modal verbs in a corpus of texts related to the vaccination debate. Broadly speaking, the vaccination debate centers around whether vaccination is safe, and whether it is morally acceptable to enforce mandatory vaccination. In order to successfully intervene and curb the spread of preventable diseases due to low vaccination rates, health practitioners need to be adequately informed on public perception of the safety and necessity of vaccines. Public perception can relate to the strength of conviction that an individual may have towards a proposition (e.g. {`}one must vaccinate{'} versus {`}one should vaccinate{'}), as well as qualify the type of proposition, be it related to morality ({`}government should not interfere in my personal choice{'}) or related to possibility ({`}too many vaccines at once could hurt my child{'}). Text mining and analysis of modal auxiliaries are economically viable means of gaining insights into these perspectives, particularly on a large scale due to the widespread use of social media and blogs as vehicles of communication.",
"pages": "5730--5738",
"url": "https://aclanthology.org/2020.lrec-1.703",
"publisher": "European Language Resources Association",
"address": "Marseille, France",
"year": "2020",
"month": "May",
"booktitle": "Proceedings of the Twelfth Language Resources and Evaluation Conference",
"editor": "Calzolari, Nicoletta and\nB{\\'e}chet, Fr{\\'e}d{\\'e}ric and\nBlache, Philippe and\nChoukri, Khalid and\nCieri, Christopher and\nDeclerck, Thierry and\nGoggi, Sara and\nIsahara, Hitoshi and\nMaegaard, Bente and\nMariani, Joseph and\nMazo, H{\\'e}l{\\`e}ne and\nMoreno, Asuncion and\nOdijk, Jan and\nPiperidis, Stelios",
"author": "King, Liza and\nMorante, Roser",
"title": "Must Children be Vaccinated or not? Annotating Modal Verbs in the Vaccination Debate",
"ENTRYTYPE": "inproceedings",
"ID": "king-morante-2020-must",
"source": "acl.jsonl"
},
"forbes-etal-2020-social": {
"abstract": "Social norms{---}the unspoken commonsense rules about acceptable social behavior{---}are crucial in understanding the underlying causes and intents of people{'}s actions in narratives. For example, underlying an action such as {``}wanting to call cops on my neighbor{''} are social norms that inform our conduct, such as {``}It is expected that you report crimes.{''} We present SOCIAL CHEMISTRY, a new conceptual formalism to study people{'}s everyday social norms and moral judgments over a rich spectrum of real life situations described in natural language. We introduce SOCIAL-CHEM-101, a large-scale corpus that catalogs 292k rules-of-thumb such as {``}It is rude to run a blender at 5am{''} as the basic conceptual units. Each rule-of-thumb is further broken down with 12 different dimensions of people{'}s judgments, including social judgments of good and bad, moral foundations, expected cultural pressure, and assumed legality, which together amount to over 4.5 million annotations of categorical labels and free-text descriptions. Comprehensive empirical results based on state-of-the-art neural models demonstrate that computational modeling of social norms is a promising research direction. Our model framework, Neural Norm Transformer, learns and generalizes SOCIAL-CHEM-101 to successfully reason about previously unseen situations, generating relevant (and potentially novel) attribute-aware social rules-of-thumb.",
"pages": "653--670",