It Matters How Platforms Label Manipulated Media: 12 Principles Designers Should Follow
By Emily Saltz, Tommy Shane, Victoria Kwan, Claire Leibowicz, Claire Wardle
To label or not to label: When might labels cause more harm than good?
Manipulated photos and videos flood our fragmented, polluted, and increasingly automated information ecosystem, from a synthetically generated “deepfake,” to the far more common problem of older images resurfacing and being shared with a different context. While research is still limited, there is some empirical support to show visuals tend to be both more memorable and more widely shared than text-only posts, heightening their potential to cause real-world harm, at scale — just consider the numerous audiovisual examples in this running list of hoaxes and misleading posts about police brutality protests in the United States. In response, what should social media platforms do? Twitter, Facebook, and others have been working on new ways to identify and label manipulated media.
As the divisive reactions to Twitter’s recent decision to label two of Trump’s misleading and incendiary tweets demonstrate, label designs aren’t applied in a vacuum. In one instance, the platform appended a label under a tweet that contained incorrect information about California’s absentee ballot process, encouraging readers to click through and “Get the facts about mail-in ballots.” In the other instance, Twitter replaced the text of a tweet referring to the shooting of looters with a label that says, “This tweet violates the Twitter Rules about glorifying violence.” (Having determined that keeping the tweet accessible was in the public interest, however, Twitter gave users the option to click through the label to the original text.)
These examples show that the world notices and reacts to the way platforms label content — including President Trump himself, who has directly responded to the labels. With every labeling decision, the ‘fourth wall’ of platform neutrality is swiftly breaking down. Behind it, we can see that every label — whether for text, images, video, or a combination — comes with a set of assumptions that must be independently tested against clear goals and transparently communicated to users. As each platform uses its own terminology, visual language, and interaction design for labels, with application informed respectively by their own detection technologies, internal testing, and theories of harm (sometimes explicit, sometimes ad hoc) — the end result is a largely incoherent landscape of labels leading to unknown societal effects.
At the Partnership on AI and First Draft, we’ve been collaboratively studying how digital platforms might address manipulated media with empirically-tested and responsible design solutions. Though definitions of “manipulated media” vary, we define it as any image or video with content or context edited along the “cheap fake” to “deepfake” spectrum (Data & Society) with the potential to mislead and cause harm. In particular, we’re interrogating the potential risks and benefits of labels: language and visual indicators that notify users about manipulation. What are best practices for digital platforms applying (or not applying) labels to manipulated media in order to reduce mis/disinformation’s harms?
What we’ve found
In order to reduce mis/disinformation’s harm to society, we have compiled the following set of principles for designers at platforms to consider for labeling manipulated media. We also include design ideas for how platforms might explore, test, and adapt these principles for their own contexts. These principles and ideas draw from hundreds of studies and many interviews with industry experts, as well as our own ongoing user-centric research at PAI and First Draft (more to come on this later this year).
Ultimately, labeling is just one way of addressing mis/disinformation: how does labeling compare to other approaches, such as removing or downranking content? Platform interventions are so new that there isn’t yet robust public data on their effects. But researchers and designers have been studying topics of trust, credibility, information design, media perception, and effects for decades, providing a rich starting point of human-centric design principles for manipulated media interventions.
Principles
1. Don’t attract unnecessary attention to the mis/disinformation
Although assessing harm from visual mis/disinformation can be difficult, there are cases where the severity or likelihood of harm is so great — for example, when the manipulated media poses a threat to someone’s physical safety — that it warrants an intervention stronger than a label. Research shows that even brief exposure has a “continued influence effect,” where the memory trace of the initial post cannot simply be erased [1]. Because images and video are especially sticky in memory, due to “the picture superiority effect” [2], the best tactic for reducing belief in such instances of manipulated media may be outright removal or downranking.
Another way of reducing exposure may be the addition of an overlay. This approach is not without risk, however: the extra interaction (clicking to remove the overlay) should not be framed in a way that creates a “curiosity gap” for the user, making it tempting to click to reveal the content. Visual treatments may want to make the posts less noticeable, interesting, and visually salient compared to other content — for example, using grayscale, or reducing the size of the content.
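The graded responses above (removal, downranking, overlay, label) can be sketched as a simple decision function. This is an illustrative sketch only: the numeric thresholds and the severity/likelihood scores are placeholder assumptions, not recommendations from the research cited here.

```python
from enum import Enum


class Intervention(Enum):
    REMOVE = "remove"        # strongest: content taken down
    DOWNRANK = "downrank"    # reduced distribution
    OVERLAY = "overlay"      # click-through cover, reduced salience
    LABEL = "label"          # weakest: contextual label only


def choose_intervention(harm_severity: float, harm_likelihood: float) -> Intervention:
    """Pick an intervention strength from estimated harm.

    Both inputs are assumed to be normalized to [0, 1]; the thresholds
    below are hypothetical values for illustration.
    """
    risk = harm_severity * harm_likelihood
    if risk >= 0.8:  # e.g. a credible threat to physical safety
        return Intervention.REMOVE
    if risk >= 0.5:
        return Intervention.DOWNRANK
    if risk >= 0.3:
        return Intervention.OVERLAY  # pair with grayscale / reduced size
    return Intervention.LABEL
```

In practice a platform’s harm model would be far richer than two scalars, but the shape of the policy — escalating from label to removal as estimated harm grows — follows the principle above.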
2. Make labels noticeable and easy to process
A label can only be effective if it’s noticed in the first place. How well a fact-check works depends on attention and timing: debunks are most effective when noticed simultaneously with the mis/disinformation, and are much less effective if noticed or displayed after exposure. Applying this lesson, labels need to be as or more noticeable than the manipulated media, so as to inform the user’s initial gut reaction — what’s known as “system 1” thinking [1], [2], [3], [4], [5].
From a design perspective, this means starting with accessible graphics and language. An example of this is how The Guardian prominently highlights the age of old articles.
The risk of label prominence, however, is that it could draw the user’s attention to misinformation that they may not have noticed otherwise. One potential way to mitigate that risk would be to highlight the label, e.g. through color or animation, only if the user dwells on or interacts with the content, indicating interest. Indeed, Facebook appears to have embraced this approach by animating their context button only if a user dwells on a post.
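The dwell-triggered highlight described above amounts to a small piece of interaction state: start a timer when the post enters the viewport, cancel it if the user scrolls past, and animate the label only once a dwell threshold is crossed. A minimal sketch, with the 1,500 ms threshold chosen purely for illustration:

```python
from typing import Optional


class DwellHighlighter:
    """Highlight a label only after the user has dwelled on a post.

    The threshold value is a placeholder; a real implementation would
    tune it (and the notion of "visible") against engagement data.
    """

    def __init__(self, dwell_threshold_ms: int = 1500):
        self.dwell_threshold_ms = dwell_threshold_ms
        self._dwell_start: Optional[float] = None

    def on_post_visible(self, now_ms: float) -> None:
        # Start timing when the post first enters the viewport.
        if self._dwell_start is None:
            self._dwell_start = now_ms

    def on_post_hidden(self) -> None:
        # User scrolled past: reset, so no highlight fires.
        self._dwell_start = None

    def should_highlight(self, now_ms: float) -> bool:
        return (
            self._dwell_start is not None
            and now_ms - self._dwell_start >= self.dwell_threshold_ms
        )
```

In a browser this state machine would typically be driven by an Intersection Observer; the point is that the label’s salience is conditioned on demonstrated interest rather than shown to every passerby.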
Another opportunity for making labels easy to process might be to minimize other cues in contrast, such as social media reactions and endorsement. Research has found that exposure to social engagement metrics increases vulnerability to misinformation [6]. In minimizing other cues, platforms may be able to nudge users to focus on the critical task at hand: accurately assessing the credibility of the media.
3. Encourage emotional deliberation and skepticism
In addition to being noticeable and easily understood, a label should encourage a user to evaluate the media at hand. Multiple studies have shown that the more one can be critically and skeptically engaged in assessing content, the more accurately one can judge the information [1], [2], [5], [7]. In general, research indicates people are more likely to trust misleading media due to “lack of reasoning, rather than motivated reasoning” [8]. Thus, deliberately nudging people to engage in reasoning may actually increase accurate recollection of claims [1].
One tactic to encourage deliberation, already employed by platforms like Facebook, Instagram, and Twitter, requires users to engage in an extra interaction like a click before they can see the visual misinformation. This additional friction builds in time for reflection and may help the user to shift into a more skeptical mindset, especially if used alongside prompts that prepare the user to critically assess the information before viewing content [1], [2]. For example, labels could ask people “Is the post written in a style that I expect from a professional news organization?” before reading the content [9].
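The friction tactic above can be sketched as a gate that shows a reflection prompt in place of the media until the user explicitly clicks through. The prompt wording is adapted from the example in the text; the class and method names are hypothetical, not any platform’s actual API.

```python
class FrictionGate:
    """Gate flagged media behind an explicit click-through.

    Until the user acknowledges the prompt, the media itself is not
    rendered; the extra interaction builds in time for reflection.
    """

    PROMPT = (
        "Is this post written in a style you would expect "
        "from a professional news organization?"
    )

    def __init__(self, media_url: str):
        self.media_url = media_url
        self._acknowledged = False

    def acknowledge(self) -> None:
        # Called when the user clicks through the prompt.
        self._acknowledged = True

    def render(self) -> str:
        # Show the skepticism prompt in place of the media until
        # the user has clicked through.
        return self.media_url if self._acknowledged else self.PROMPT
```

The key design choice is that the prompt precedes exposure, so the skeptical frame is active before the user sees the content, rather than arriving as an after-the-fact correction.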
4. Offer flexible access to more information
During deliberation, different people may have different questions about the media. Access to these additional details should be provided, without compromising the readability of the initial label. Platforms should consider how a label can progressively disclose more detail as the user interacts with it, enabling users to follow flexible analysis paths according to their own lines of critical consideration [10].
What kind of detail might people want? This question warrants further design exploration and testing. Some possible details may include the capture and edit trail of media; information on a source and its activity; and general information on media manipulation tactics, ratings, and mis/disinformation. Interventions like Facebook’s context button can provide this information through multiple tabs or links to pages with additional details.
5. Use a consistent labeling system across contexts
On platforms it is crucial to consider not just the effects of a label for a particular piece of content, but across all media encountered — labeled and unlabeled. Recent research suggests that labeling only a subset of fact-checkable content on social media may do more harm than good by increasing users’ beliefs in the accuracy of unlabeled claims. This is because a lack of a label may imply accuracy in cases where the content may be false, but not labeled, known as “the implied truth effect” [11]. The implied truth effect is perhaps the most profound challenge for labeling, as it is impossible for fact-checkers to check all media posted to platforms. Because of this limitation in fact-checking at scale, fact-checked media on platforms will always be a subset, and labeling that subset will always have the potential to boost the perceived credibility of all other unchecked media, regardless of its accuracy.
Platform designs should, therefore, aim to minimize this “implied truth effect” at a cross-platform, ecosystem level. This demands much more exploration and testing across platforms. For example, what might be the effects of labeling all media as “unchecked” by default?
Further, consistent systems of language and iconography across platforms enable users to form a mental model of media that they can extend to accurately judge media across contexts, for example, by understanding that a “manipulated” label means the same thing whether it’s on YouTube, Facebook, or Twitter. Additionally, if a design language is consistent across platforms, it may help users recognize issues more rapidly over time as it becomes more familiar, and thus easier to process and trust (c.f. Principle 2).
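One concrete way to pursue this consistency is a shared vocabulary that maps each internal rating to a single user-facing term and icon, reused everywhere a label appears. The ratings, terms, and icon names below are illustrative placeholders, not an actual cross-platform standard.

```python
# A hypothetical shared label vocabulary: every platform rendering a
# rating draws its term and icon from the same table, so "manipulated"
# looks and reads the same on each surface.
LABEL_VOCABULARY = {
    "manipulated": {"term": "Manipulated media", "icon": "warning-triangle"},
    "missing_context": {"term": "Missing context", "icon": "info-circle"},
    "satire": {"term": "Satire", "icon": "masks"},
    "unchecked": {"term": "Not yet checked", "icon": "question-circle"},
}


def label_for(rating: str) -> str:
    """Return the shared user-facing term for an internal rating."""
    return LABEL_VOCABULARY[rating]["term"]
```

Note the inclusion of an explicit "unchecked" entry: labeling everything by default, including unreviewed media, is one possible response to the implied truth effect discussed above, though its effects would need testing.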
6. Repeat the facts, not the falsehoods
It’s important not just to debunk what’s false, but to ensure that users come away with a clear understanding of what’s accurate. It’s risky to display labels that describe media in terms of the false information without emphasizing what is true. Familiar things seem more true, meaning every repetition of the mis/disinformation claim associated with a visual risks imprinting that false claim deeper in memory — a phenomenon known as “the illusory truth effect” or the “myth-familiarity boost” [1].
Rather than frame the label in terms of what’s inaccurate, platforms should elevate the accurate details that are known. Describe the accurate facts rather than simply rating the media in unspecific terms, while avoiding repeating the falsehoods [1], [2], [3]. Where possible, surface clarifying metadata, such as the accurate location, age, and subject matter of a visual.
7. Use non-confrontational, empathetic language
The language on a manipulated media label should be non-confrontational, so as not to challenge an individual’s identity, intelligence, or worldview. Though the research is mixed, some studies have found that in rare cases, confrontational language risks triggering the “backfire effect,” where an identity-challenging fact-check may further entrench belief in a false claim [13], [14]. Regardless, it helps to adapt and translate the label to the person. As much as possible, meet people where they are by framing the correction in a way that is consistent with the users’ worldview. Highlighting accurate aspects of the media that are consistent with their preexisting opinions and cultural context may make the label easier to process [1], [12].
Design-wise, this has implications for the language of the label and label description. While the specific language should be tested and refined with audiences, language choice does matter. A 2019 study found that when adding a tag to false headlines on social media, the more descriptive “Rated False” tag was more effective at reducing belief in the headline’s accuracy than a tag that said “Disputed” [15]. Additionally, there is an opportunity to build trust by using adaptive language and terminology consistent with other sources and framings of an issue that a user follows and trusts. For example, “one study found that Republicans were far more likely to accept an otherwise identical charge as a ‘carbon offset’ than as a ‘tax,’ whereas the wording has little effect on Democrats or Independents (whose values are not challenged by the word ‘tax;’ Hardisty, Johnson, & Weber, 2010)” [1].
8. Emphasize credible refutation sources that the user trusts
The effectiveness of a label in influencing user beliefs may depend on the source of the correction. To be most effective, a correction should come from people or groups the user knows and trusts [1], [16], rather than from a central authority that the user may distrust. Research indicates that when it comes to the source of a fact-check, people value trustworthiness over expertise; that means a favorite celebrity or YouTube personality might be a more resonant messenger than an unfamiliar fact-checker with deep expertise.
If a platform has access to correction information from multiple sources, the platform should highlight sources the user trusts, for example promoting sources that the user has seen and/or interacted with before. Additionally, platforms could highlight related, accurate articles from publishers the user follows and interacts with, or promote comments from friends that are consistent with the fact-check. This strategy may have the additional benefit of minimizing potential retaliation threats to publishers and fact-checkers — i.e. preventing users from personally harassing sources they distrust.
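Source prioritization as described above reduces, at its simplest, to ordering available correction sources by a per-user trust signal. A minimal sketch, where prior interaction counts stand in for whatever trust signal a platform actually has (follows, engagement history, etc.):

```python
def rank_correction_sources(sources: list[str],
                            interactions: dict[str, int]) -> list[str]:
    """Order fact-check sources so those the user already follows or has
    interacted with come first.

    `interactions` maps a source name to a count of the user's prior
    interactions with it; this is an illustrative stand-in for a real
    trust signal. Python's sort is stable, so unknown sources keep
    their original relative order at the end.
    """
    return sorted(sources, key=lambda s: interactions.get(s, 0), reverse=True)
```

A production system would blend trust with credibility safeguards (so a well-liked but unreliable source isn’t promoted), but the ordering principle is the same: lead with the refutation messenger the user is most likely to believe.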
9. Be transparent about the limitations of the label and provide a way to contest it
Given the difficulty of rating and labeling highly subjective, contextualized user-generated content at scale, a user may have reasonable disagreements with a label’s conclusion about a post. For example, Dr. Safiya Noble, a professor of Information Studies at UCLA and author of the book “Algorithms of Oppression,” recently shared a Black Lives Matter-related post on Instagram that she felt was unfairly flagged as “Partly False Information.”
As a result, platforms should offer swift recourse for contesting labels and providing feedback when users feel labels have been inappropriately applied, similar to existing interactions for reporting harmful posts.
Additionally, platforms should share the reasoning and process behind the label’s application. If an instance of manipulated media was identified and labeled through automation, for example, without context-specific vetting by humans, that should be made clear to users. This also means linking the label to an explanation that not only describes how the media was manipulated, but who made that determination and should be held accountable. Facebook’s labels, for example, will say that the manipulated media was identified by third-party fact-checkers, and clicking the “See Why” button opens up a preview of the actual fact check.
10. Fill in missing alternatives with multiple visual perspectives
Experiments have shown that a debunk is more effective at dislodging incorrect information from a person’s mind when it provides an alternative explanation to take the place of the faulty information [3]. Thus, labels for manipulated media should present alternatives that preempt the user’s inevitable question: what really happened then?
Recall that the “picture superiority effect” means images are more likely to be remembered than words. If possible, fight the misleading visuals with accurate visuals. This can be done by displaying multiple photo and video perspectives in order to visually reinforce the accurate event [5]. Additionally, platforms could explore displaying “related images,” for example by surfacing reverse image search results, to provide visually similar images and videos possibly from the same event.
11. Help users identify and understand specific manipulation tactics
Research shows that people are poor at identifying manipulations in photos and videos: In one study measuring perception of photos doctored in various ways, such as airbrushing and perspective distortion, people could only identify 60% of images as manipulated. They were even worse at telling where exactly a photo was edited, accurately locating the alterations only 45% of the time [18].
Additionally, even side-by-side presentation of manipulated vs. original visuals may not be enough without extra indications of what’s been edited, due to “change blindness,” a phenomenon where people fail to notice major changes to visual stimuli: “It often requires a large number of alternations between the two images before the change can be identified…[which] persists when the original and changed images are shown side by side” [19].
These findings indicate that manipulations need to be highlighted in a way that is easy to process in comparison to any available accurate visuals [18], [19]. Showing manipulations side by side with an unmanipulated original may be useful, especially alongside annotated sections showing where an image or video has been edited, and including explanations of the manipulations to aid future recognition.
In general, platforms should provide specific corrections, emphasizing the factual information in close proximity to where the manipulations occur — for example, highlighting alterations in context of the specific, relevant points of the video [20]. Or, if a video clip has been taken out of context, show how. For example, when an edited video of Bloomberg’s Democratic debate performance was misleadingly cut together with long pauses and cricket audio to indicate that his opponents were left speechless, platforms could have provided access to the original unedited debate clip and describe how and where the video was edited in order to increase user awareness of this tactic.
12. Adapt and measure labels according to the use case
In addition to individual user variances, platforms should consider use case variances: interventions for YouTube, where users may encounter manipulated media through searching specific topics or through the auto-play feature, might look different from an intervention on Instagram for users encountering media through exploring tags, or an intervention on TikTok, where users passively scroll through videos. Before designing, it is critical to understand how the user might land on manipulated media, and what actions the user should take as a result of the intervention.
For example, is the goal to have the user click through to more information about the context of the media? Is it to prevent users from accessing the original media (which Facebook has cited as its own metric for success)? Are you trying to reduce exposure, engagement, or sharing? Are you trying to prompt searches for facts on other platforms? Are you trying to educate the user about manipulation techniques?
Finally, labels do not exist in isolation. Platforms should consider how labels interact with other contexts and credibility indicators around a post (e.g. social cues, source verification). Clarity about the use and goals are crucial to designing meaningful interventions.
These principles are just a starting point
While these principles can serve as a starting point, they are no replacement for continued and rigorous user experience testing across diverse populations, tested in-situ in actual contexts of use. Much of this research was conducted in artificial contexts that are not ecologically valid, and as this is a nascent research area, some principles have been demonstrated more robustly than others.
Given this, there are inevitable tensions and contradictions in the research literature around interventions. Ultimately, every context is different: abstract principles can get you to an informed concept or hypothesis, but the design details of specific implementations will surely offer new insights and opportunities for iteration toward the goals of reducing exposure to and engagement with mis/disinformation.
To move forward in understanding the effects of labeling, platforms must publicly commit to the goals and intended effects of their design choices, and share their findings against those goals so they can be held accountable. For this reason, PAI and First Draft are collaborating with researchers and partners in industry to explore the costs and benefits of labeling in upcoming user experience research on manipulated media labels.
Stay tuned for more insights into best practices around labeling (or not labeling) manipulated media, and reach out to help us thoughtfully address this interdisciplinary wicked problem space together.
Citations
- Swire, Briony, and Ullrich K. H. Ecker. “Misinformation and its correction: Cognitive mechanisms and recommendations for mass communication.” Misinformation and Mass Audiences (2018): 195–211.
- The Legal, Ethical, and Efficacy Dimensions of Managing Synthetic and Manipulated Media. Carnegie Endowment for International Peace, 2019. https://carnegieendowment.org/2019/11/15/legal-ethical-and-efficacy-dimensions-of-managing-synthetic-and-manipulated-media-pub-80439
- Cook, John, and Stephan Lewandowsky. The Debunking Handbook. Nundah, Queensland: Sevloid Art, 2011.
- Infodemic: Half-Truths, Lies, and Critical Information in a Time of Pandemics. Aspen Institute. https://www.aspeninstitute.org/events/infodemic-half-truths-lies-and-critical-information-in-a-time-of-pandemics/
- The News Provenance Project, 2020. https://www.newsprovenanceproject.com/
- Avram, Mihai, et al. “Exposure to Social Engagement Metrics Increases Vulnerability to Misinformation.” arXiv preprint arXiv:2005.04682 (2020).
- Bago, Bence, David G. Rand, and Gordon Pennycook. “Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines.” Journal of Experimental Psychology: General (2020).
- Pennycook, Gordon, and David G. Rand. “Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning.” Cognition 188 (2019): 39–50.
- Lutzke, L., Drummond, C., Slovic, P., & Árvai, J. “Priming critical thinking: Simple interventions limit the influence of fake news about climate change on Facebook.” Global Environmental Change 58 (2019): 101964.
- Karduni, Alireza, et al. “Vulnerable to misinformation? Verifi!” Proceedings of the 24th International Conference on Intelligent User Interfaces. 2019.
- Metzger, Miriam J. “Understanding credibility across disciplinary boundaries.” Proceedings of the 4th Workshop on Information Credibility. 2010.
- Nyhan, Brendan. Misinformation and Fact-checking: Research Findings from Social Science. New America Foundation, 2012.
- Wood, Thomas, and Ethan Porter. “The elusive backfire effect: Mass attitudes’ steadfast factual adherence.” Political Behavior 41.1 (2019): 135–163.
- Nyhan, Brendan, and Jason Reifler. “When corrections fail: The persistence of political misperceptions.” Political Behavior 32.2 (2010): 303–330.
- Clayton, Katherine, et al. “Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media.” Political Behavior (2019): 1–23.
- Badrinathan, Sumitra, Simon Chauchard, and D. J. Flynn. “I Don’t Think That’s True, Bro!”
- Ticks or It Didn’t Happen: Confronting Key Dilemmas in Authenticity Infrastructure for Multimedia. WITNESS, 2019. https://lab.witness.org/ticks-or-it-didnt-happen/
- Nightingale, Sophie J., Kimberley A. Wade, and Derrick G. Watson. “Can people identify original and manipulated photos of real-world scenes?” Cognitive Research: Principles and Implications 2.1 (2017): 30.
- Shen, Cuihua, et al. “Fake images: The effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online.” New Media & Society 21.2 (2019): 438–463.
- Diakopoulos, Nicholas, and Irfan Essa. “Modulating video credibility via visualization of quality evaluations.” Proceedings of the 4th Workshop on Information Credibility. 2010.
Translated from: https://medium.com/swlh/it-matters-how-platforms-label-manipulated-media-here-are-12-principles-designers-should-follow-438b76546078