FAQ

While the reasoning capabilities of LLMs are steadily improving, they still lack the depth and nuance of human reasoning. Additionally, LLMs fall short when the data available online for a topic is biased, for example, when certain viewpoints are under-represented.

LLMs also suffer from the hallucination problem, meaning they occasionally produce misleading or fabricated results.

Despite these limitations, LLMs may generate a valuable set of initial arguments. Humans can then refine these arguments or study the information retrieved to craft more nuanced questions.

AI tools can also assist with several operational tasks, such as:

  • Detecting potential duplicate arguments
  • Offering a second opinion on the relative strength of submitted arguments
  • Summarizing the information presented on topic pages

Rigorous mathematical results show that the ranking algorithm is actually quite efficient at identifying the top arguments in realistic settings. With a little patience, you'll be surprised by the quality of the results that rise to the top!

For those with a technical background, it may initially appear that ranking \( n \) arguments requires on the order of \( n^2 \) pairwise comparisons. However, it can be shown that the correct order is \( n \log n \). To see why, consider a simpler, deterministic example: finding the largest number in a list of \( n \) numbers using pairwise comparisons. Basic algorithmic analysis shows that this takes \( O(n) \) comparisons. The platform's setting is probabilistic rather than deterministic, because the outcomes of pairwise comparisons are noisy, and this changes the problem in important ways. Even so, it can be shown that the run time for identifying the top arguments increases only to \( O(n \log n) \), not \( O(n^2) \).

For more details, see the section titled "Simple does it: eliciting the Borda rule with naive sampling" in this paper by Lee et al. (Learn more)
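For readers who want a concrete sense of what naive sampling looks like, here is a minimal, hypothetical Python sketch (not the platform's implementation). It draws random pairs of arguments, records noisy win/loss outcomes, and ranks arguments by their empirical win rate, in the spirit of the Borda-style approach referenced above. The function names and the 80% accuracy figure in the toy example are assumptions made for illustration.

```python
import random
from collections import defaultdict

def rank_by_naive_sampling(arguments, noisy_compare, num_samples):
    """Rank arguments using noisy pairwise comparisons.

    `noisy_compare(a, b)` is assumed to return True when `a` beats `b`
    in a single (possibly noisy) comparison. The idea is that with on
    the order of n log n comparisons in total, the empirical win rates
    concentrate enough to surface the top arguments with high probability.
    """
    wins = defaultdict(int)
    trials = defaultdict(int)
    for _ in range(num_samples):
        a, b = random.sample(arguments, 2)          # draw a random pair
        winner = a if noisy_compare(a, b) else b    # noisy outcome
        wins[winner] += 1
        trials[a] += 1
        trials[b] += 1
    # Rank by empirical win rate (a simple Borda-style score).
    return sorted(arguments,
                  key=lambda x: wins[x] / trials[x] if trials[x] else 0.0,
                  reverse=True)

# Toy usage: arguments have hidden quality scores; the better one wins 80% of the time.
if __name__ == "__main__":
    quality = {f"arg{i}": i for i in range(10)}

    def noisy_compare(a, b):
        better = quality[a] > quality[b]
        return better if random.random() < 0.8 else not better

    print(rank_by_naive_sampling(list(quality), noisy_compare, 2000))
```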

On nlite, fact-checking relies mainly on crowdsourced verification—a collaborative effort among users. This decentralized approach is often considered more reliable than fact-checking by a single organization, and it has grown in popularity across social media platforms [1, 2].

That said, the platform facilitates this process by making it mandatory for authors to select a Source Type when submitting an argument or counter. There are two source types to choose from: Self-explanatory and Linked References. The Self-explanatory type refers to arguments that are based purely on logical principles and do not require external references. In contrast, when the Linked References option is selected, the argument submitter acknowledges that (i) certain parts of the argument require external references, and that (ii) they are linking those references to the submission. That's where the name Linked References comes from. This requirement nudges argument submitters to check whether any references are needed to support their claims and, if so, to provide them. (Learn more)

Our answer consists of two parts.

  1. Friend groups: One way nlite is commonly used is to investigate controversial topics within friend groups. While people may have differing opinions in these environments, they often do not intentionally distort data to misrepresent the other side. In such cases, nlite serves as a reliable tool for efficient discussion.
  2. Broader environments: In more public or diverse settings, where bad-faith actors may be present, nlite incorporates safeguards to minimize manipulation. A current area of focus is the development of an algorithm that helps detect the possible presence of two subgroups of users: one with good intentions that aims to rank arguments in the proper order, and another with bad intentions that aims to rank arguments either in reverse order or randomly; a rough sketch of this idea appears after this list.

    It's important to note that if manipulative behavior is detected, the platform can always publicize it, and the resulting reputational damage may well outweigh any early benefits the perpetrators might gain.
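To make this concrete, here is one hypothetical way such a detector could work. This is an illustrative Python sketch of the general idea, not the algorithm under development: it scores each user by how often their pairwise judgments agree with the majority outcome and flags users whose agreement looks reversed or close to random. The thresholds are assumptions chosen for illustration.

```python
from collections import defaultdict

def flag_suspect_users(votes, reverse_threshold=0.3, random_band=(0.4, 0.6)):
    """Flag users whose pairwise judgments look reversed or random.

    `votes` maps each user to a list of (pair, choice) records, where
    `pair` identifies an unordered pair of arguments and `choice` is the
    argument the user preferred. Users who agree with the majority far
    less often than chance may be ranking in reverse; users stuck near
    50% agreement may be answering randomly. Thresholds are illustrative.
    """
    # Majority choice for each pair, across all users.
    tallies = defaultdict(lambda: defaultdict(int))
    for records in votes.values():
        for pair, choice in records:
            tallies[pair][choice] += 1
    majority = {pair: max(counts, key=counts.get) for pair, counts in tallies.items()}

    flagged = {}
    for user, records in votes.items():
        if not records:
            continue
        rate = sum(choice == majority[pair] for pair, choice in records) / len(records)
        if rate <= reverse_threshold:
            flagged[user] = ("possible reverse ranking", rate)
        elif random_band[0] <= rate <= random_band[1]:
            flagged[user] = ("possibly random", rate)
    return flagged
```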

Assess arguments along the following dimensions:

  • Logical consistency
  • Accuracy of references
  • Clarity of expression
  • Respectful tone

If you’ve reviewed the arguments on both sides and still feel unsure, that often means it’s time to ask new questions to dig deeper into the conversation.

However, if you believe the discussion is already fairly mature, here are a few tips that might help:

  1. Use AI tools to analyze and summarize content. While AI may not be ideal for creating entirely new arguments, it excels at summarizing complex material. To facilitate this, the platform has a built-in AI tool. To access it, click the three dots next to the topic title and select AI Post-processing at the bottom of the menu. This opens a new page where you can ask questions about the content of the topic page. Responses are currently generated using ChatGPT, and the content submitted on the topic page is sent to ChatGPT to guide the responses it provides; a sketch of this pattern appears after this list. (Learn more)
  2. Good decision-making doesn’t require absolute certainty. Total certainty is unrealistic. Strong decision-makers assess the information available to them and use it to make the most informed choice possible.
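For the technically curious, the first tip boils down to a standard pattern: the topic page content is placed into the prompt so the model answers with that context. Below is a minimal, hypothetical sketch using the OpenAI Python client; the model name and prompt wording are assumptions, and this is not nlite's actual integration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_about_topic(topic_page_text: str, question: str) -> str:
    """Answer a question using the topic page content as context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Answer using only the topic page content provided."},
            {"role": "user",
             "content": f"Topic page content:\n{topic_page_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```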

This is a valid concern, as a good argument may well be expressed in various ways by different people. If the platform’s ranking algorithm functions properly, all variants will rise to the top, leading to redundancy among the top items. To address this, the platform includes a mechanism to identify and remove duplicate arguments.

When users click the Evaluate Arguments button under a viewpoint, the platform occasionally asks the following question along with two selected arguments: Are the following arguments (essentially) making the same point? Responses to these questions are used to identify and eliminate duplicate arguments. (Learn more)
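As a rough illustration of how such responses could be aggregated, here is a hypothetical Python sketch (not the platform's actual mechanism): pairs that a clear majority of respondents mark as making the same point are treated as links, and the linked arguments are merged with a union-find structure. The vote thresholds are assumptions for illustration.

```python
from collections import defaultdict

class UnionFind:
    """Minimal union-find for grouping duplicate arguments."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def merge_duplicates(responses, threshold=0.7, min_votes=3):
    """Group arguments that users repeatedly mark as making the same point.

    `responses` is a list of (arg_a, arg_b, is_same) tuples collected from
    the duplicate-check prompt. A pair is merged when at least `min_votes`
    users answered and at least `threshold` of them said yes. Both
    thresholds are illustrative, not the platform's actual values.
    """
    yes = defaultdict(int)
    total = defaultdict(int)
    for a, b, is_same in responses:
        key = frozenset((a, b))
        total[key] += 1
        yes[key] += int(is_same)

    uf = UnionFind()
    for key, n in total.items():
        if len(key) == 2 and n >= min_votes and yes[key] / n >= threshold:
            a, b = tuple(key)
            uf.union(a, b)

    groups = defaultdict(set)
    for a, b, _ in responses:
        groups[uf.find(a)].add(a)
        groups[uf.find(b)].add(b)
    return [sorted(members) for members in groups.values()]
```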

Overview