In a world where information changes rapidly, and its interpretation depends on one's perspective, defining misinformation is hard. The exchange of information involves a certain amount of subjectivity. Everyone agrees that falsehoods are spreading, but not everyone agrees on what those falsehoods are. Fact-checking is now commonplace, but who is checking the fact-checkers? Interpretations of truth are shaped by sociological, economic, political, and ideological contexts.
Never mind the fact that the definition of "information" itself was, and still is, hotly contested.
Add to this the complexities of how information is perceived and acted upon, along with the accelerating progress of science, where what is true one moment may be false the next, and the problem becomes intractable.
Instead of trying to determine what is misinformation or disinformation on behalf of the people using Unpress, which is impossible since we will never have access to perfect information, we take a different approach. Our rating system is designed to crowdsource what people believe to be effective, accurate, and trustworthy information and sources of information, in a way that does not devolve into "like" and "dislike." We balance this with our planned community awareness features, so that perspectives from individual communities do not get drowned out by the opinions of the masses. Finally, the system is structured to engage our objective thinking rather than the subjective impulse to approve of something simply because it agrees with what we already believe. The result is that what is widely agreed to be misinformation may not be removed; instead, its distribution to other Unpress users will be limited.
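To make the approach above concrete, here is a minimal sketch of what a multi-dimensional rating scheme like this could look like. Everything in it is hypothetical: the dimension names, the averaging rule, and the distribution throttle are illustrative assumptions, not Unpress's actual design. The key properties it demonstrates are that ratings are collected per dimension rather than as a single like/dislike, and that low aggregate scores reduce distribution rather than trigger removal.

```python
# Hypothetical sketch only -- dimension names, scoring scale, and the
# throttle formula are illustrative assumptions, not Unpress's design.
from dataclasses import dataclass, field
from statistics import mean

# Ratings are split into dimensions instead of one like/dislike axis.
DIMENSIONS = ("accuracy", "trustworthiness", "usefulness")

@dataclass
class ContentRatings:
    # Each dimension collects individual scores in [0.0, 1.0].
    scores: dict = field(default_factory=lambda: {d: [] for d in DIMENSIONS})

    def rate(self, dimension: str, score: float) -> None:
        """Record one user's score on a single dimension, clamped to [0, 1]."""
        if dimension not in self.scores:
            raise ValueError(f"unknown dimension: {dimension}")
        self.scores[dimension].append(max(0.0, min(1.0, score)))

    def aggregate(self) -> float:
        """Average each dimension first, then average across dimensions,
        so a heavily rated dimension cannot drown out the others."""
        per_dim = [mean(v) for v in self.scores.values() if v]
        return mean(per_dim) if per_dim else 0.5  # neutral default, unrated

    def distribution_factor(self) -> float:
        """Low ratings never remove content here; they only throttle
        how widely it is distributed (floor of 0.25, ceiling of 1.0)."""
        return 0.25 + 0.75 * self.aggregate()
```

Under these assumptions, a piece of content rated 1.0 on accuracy and 0.0 on trustworthiness would aggregate to 0.5 and reach roughly 62% of its baseline distribution, rather than being deleted. The per-dimension averaging is one possible guard against "like/dislike" collapse; a real system would also need defenses against brigading and coordinated manipulation.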
This system, however, is not perfect, nor is it bulletproof. So we have prepared clear guidance in a number of areas, being careful to balance the concerns of free speech with the values of security, privacy, authenticity, and respect.
We remove content when:
it is likely to directly increase the risk of imminent physical harm;
it is likely to directly interfere with the functioning of democratic processes, such as by misleading people about when, where, or how to participate in a civic process;
it is manipulated media that is clearly and significantly deceptive;
it has been credibly shown to be a hoax; or
it violates our Community Standards.
To determine what content falls into these categories, we partner with independent experts who possess the knowledge and expertise to assess whether content is likely to directly contribute to the risk of imminent harm. This includes, for instance, partnering with human rights organizations that have a presence on the ground in a country to determine the truth of a rumor about civil conflict, and partnering with health organizations during the global COVID-19 pandemic.
For all other content, we focus on creating an environment that fosters productive dialogue. We believe in a marketplace of ideas, where the truth will win out. There is a social risk to de-platforming or censoring creators or content that provides alternative perspectives on issues. Those people, and their views, do not disappear. Instead, they go underground, or to fringe platforms like 4Chan, 8Chan, and the dark web, where they grow and spread. We believe it is better to keep them in the light, where productive dialogue around concepts, facts, evidence, and perspectives can prevail.
Finally, we prohibit content and behavior in other areas that often overlap with the spread of misinformation. For example, our Community Standards prohibit platform manipulation, spam, fraud, and coordinated inauthentic behavior.