FAQ

While the reasoning capabilities of LLMs are steadily improving, they still lack the depth and nuance of human reasoning. Additionally, LLMs fall short if the data available for a topic on the internet is biased, for example, when certain viewpoints are under-represented online.

At present, LLMs also suffer from the hallucination problem, meaning they occasionally generate misleading or fabricated results.

Despite these limitations, in some cases, LLMs may generate a valuable set of initial arguments. Humans can then refine these arguments or study the data retrieved to craft more nuanced questions.

AI tools can also assist with several operational tasks, such as:

  • Detecting potential duplicate arguments
  • Offering a second opinion on the relative strength of argument pairs
  • Summarizing the information presented on a topic page
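As an illustration of the first task, below is a minimal sketch of duplicate detection using sentence embeddings. The model choice, similarity threshold, and function name are assumptions made for illustration, not nlite's actual pipeline:

```python
# Minimal sketch: flag likely duplicate arguments by embedding similarity.
# The model and threshold are illustrative assumptions, not nlite's pipeline.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def likely_duplicates(arguments: list[str], threshold: float = 0.85) -> list[tuple[int, int]]:
    """Return index pairs of arguments whose embeddings are highly similar."""
    emb = model.encode(arguments, normalize_embeddings=True)  # unit vectors
    sims = emb @ emb.T  # cosine similarity matrix
    pairs = []
    for i in range(len(arguments)):
        for j in range(i + 1, len(arguments)):
            if sims[i, j] >= threshold:
                pairs.append((i, j))
    return pairs
```

Flagged pairs would then be surfaced for human review rather than merged automatically, since near-identical wording can still carry distinct points.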

On nlite, fact-checking relies mainly on crowdsourced verification—a collaborative effort among users. This decentralized approach is often seen as less prone to bias and more accurate than fact-checking by a single organization, which is why it has grown in popularity across social media platforms [1, 2].

The platform facilitates this process by requiring authors to select a Source Type when submitting an argument or counter. There are two source types to choose from: Self-explanatory and Linked References. The Self-explanatory type covers arguments that rest purely on logical principles and need no external references. In contrast, by selecting Linked References, the submitter acknowledges that (i) certain parts of the argument require external references, and that (ii) they are linking those references to the submission; hence the name. This requirement nudges submitters to check whether any references are needed to support their claims and, if so, to provide them. (Learn more)
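To make the rule concrete, here is a minimal sketch of the Source Type requirement as a data model. The class and field names are hypothetical; nlite's actual schema may differ:

```python
# Hypothetical data model for the Source Type rule; names are illustrative.
from dataclasses import dataclass, field
from enum import Enum

class SourceType(Enum):
    SELF_EXPLANATORY = "self_explanatory"    # pure logical reasoning, no sources needed
    LINKED_REFERENCES = "linked_references"  # claims backed by external links

@dataclass
class Submission:
    text: str
    source_type: SourceType
    references: list[str] = field(default_factory=list)  # URLs

    def validate(self) -> None:
        # Selecting Linked References implies at least one reference is attached.
        if self.source_type is SourceType.LINKED_REFERENCES and not self.references:
            raise ValueError("Linked References submissions must include at least one URL.")
```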

Our answer consists of two parts.

  1. Friend groups: One way nlite is commonly used is to investigate controversial topics within friend groups. While people may have differing opinions in these environments, they often do not intentionally distort data to misrepresent the other side. In such cases, nlite serves as a reliable tool for efficient discussion.
  2. Broader environments: In more public or diverse settings, where bad-faith actors may be present, nlite incorporates safeguards to minimize manipulation. A current area of focus is an algorithm that detects the possible presence of two subgroups of users: one acting in good faith, aiming to rank arguments in the proper order, and another acting in bad faith, aiming to rank arguments in reverse order or randomly (a minimal sketch of this idea follows this list).

    It's important to note that if manipulative behavior is detected, the platform can always publicize it, costing the perpetrators more in reputation than any early advantage they might have gained.
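As a rough illustration of the subgroup-detection idea in the second point, assume each user submits a ranking of a viewpoint's arguments. One could then measure how each user's ranking correlates with the consensus: good-faith users should score near +1, reversers near -1, and random rankers near 0. The sketch below is illustrative, including the cutoff value, and is not nlite's actual algorithm:

```python
# Illustrative sketch (not nlite's actual algorithm): classify users by how
# their submitted ranking correlates with the consensus ranking.
from itertools import combinations

def kendall_tau(rank_a: list[int], rank_b: list[int]) -> float:
    """Kendall rank correlation between two rankings of the same items."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # Pairs ordered the same way in both rankings are concordant.
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0

def classify_users(rankings: dict[str, list[int]], consensus: list[int]) -> dict[str, str]:
    """Label users by agreement with consensus; 0.5 is an arbitrary cutoff."""
    return {
        user: "cooperative" if kendall_tau(ranking, consensus) > 0.5 else "suspect"
        for user, ranking in rankings.items()
    }
```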

We recommend assessing arguments along the following dimensions:

  • Accuracy of data and adequacy of references
  • Logical consistency
  • Clarity of expression
  • Respectful, measured tone

If you’ve reviewed the arguments on both sides and still feel unsure, that often means it’s time to ask new questions to dig deeper into the conversation.

However, if you believe the discussion is already fairly developed, here are a few tips that might help:

  1. Use AI tools to analyze and summarize content. While AI may not be ideal for creating entirely new arguments, it excels at summarizing complex material. To assist with this, the platform has a built-in AI tool. To access it, click the three dots next to the topic title and select AI Post-processing at the bottom of the menu. This opens a new page where you can ask any question about the topic page's content. Responses are currently generated by ChatGPT, which receives the content submitted on the topic page as context (a minimal sketch of this flow appears after this list).
  2. Good decision-making doesn’t require absolute certainty. Total certainty is unrealistic. Strong decision-makers assess the information available to them and use it to make the most informed choice possible at any given moment. True perfection lies in making consistently good-enough decisions.
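For reference, the flow described in the first tip can be approximated with the public OpenAI Python client. The model name, prompt framing, and function name below are assumptions, not nlite's actual settings:

```python
# Minimal sketch of the AI Post-processing flow: the topic page's content is
# sent to ChatGPT as context, and the user's question is answered against it.
# Model name and prompt wording are assumptions, not nlite's actual settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_about_topic(topic_page_text: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer questions using only the topic page below.\n\n"
                        + topic_page_text},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```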

This is a valid concern, as a good argument may well be expressed in various ways by different people. If the platform’s ranking algorithm functions properly, all variants will rise to the top, leading to redundancy among the top items. To address this, the platform includes a mechanism to identify and remove duplicate arguments.

When users click the Evaluate Arguments button under a viewpoint, the platform occasionally shows two selected arguments along with the question: Are the following arguments (essentially) making the same point? Responses to these questions are used to identify and eliminate duplicate arguments. (Learn more)
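One plausible way to aggregate these responses, sketched below with illustrative thresholds: treat a pair as duplicates once enough users answer yes, then group duplicates transitively. This is a sketch of the general technique, not nlite's actual rule:

```python
# Illustrative aggregation of "same point?" responses; the thresholds are
# assumptions. Pairs with enough yes-votes are merged, and duplicate groups
# are formed transitively with union-find.
from collections import defaultdict

def duplicate_groups(votes: dict[tuple[int, int], tuple[int, int]],
                     min_votes: int = 5, min_yes_share: float = 0.8) -> list[set[int]]:
    """votes maps an argument-id pair to (yes_count, no_count)."""
    parent: dict[int, int] = {}

    def find(x: int) -> int:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: int, b: int) -> None:
        parent[find(a)] = find(b)

    for (a, b), (yes, no) in votes.items():
        total = yes + no
        if total >= min_votes and yes / total >= min_yes_share:
            union(a, b)

    groups = defaultdict(set)
    for x in parent:
        groups[find(x)].add(x)
    return [g for g in groups.values() if len(g) > 1]
```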
