FAQ

Large Language Models (LLMs) are powerful tools for summarizing data on well-studied topics. However, they are not good at reasoning about the new topics that constantly emerge in our societies, which may be under-represented in their training data and for which real-time data can be hard to collect.

Additionally, LLMs fall short if the data available for a topic on the internet is biased, for example, when certain viewpoints are under-represented online.

At present, LLMs also suffer from the hallucination problem, meaning they occasionally generate inaccurate or misleading results.

Despite these limitations, LLMs might be able to generate a valuable set of initial arguments. Humans can use this initial output as a starting point to develop more advanced arguments or study the data retrieved to craft better, more nuanced questions.

AI tools can also assist with certain platform operations, such as:

  • Summarizing the information presented on a topic page
  • Offering a second opinion on the relative strength of argument pairs
  • Detecting potential duplicate arguments

On nlite, fact-checking relies mainly on crowdsourced verification—a collaborative effort among users. This decentralized approach is often seen as less prone to bias and more accurate than fact-checking by a single organization, which is why it has grown in popularity across social media platforms.

The platform facilitates this process by requiring authors to select a Source Type when submitting an argument. There are two source types to choose from: Self-explanatory and Linked References. The Self-explanatory type is for arguments that rest purely on logical reasoning and require no external references. In contrast, by selecting the Linked References option, the submitter acknowledges that (i) certain parts of the argument require external references, and (ii) those references are linked to the submission. That is where the name Linked References comes from.

The requirement above nudges users to check if any references are needed to support their claims and, if so, to provide them. (Learn more)
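The Source Type rule above could be enforced by a validation check along these lines. This is a hypothetical sketch: the field names, type labels, and error messages are assumptions for illustration, not nlite's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical submission model; field names are assumed, not nlite's real schema.
@dataclass
class ArgumentSubmission:
    text: str
    source_type: str                       # "self_explanatory" or "linked_references"
    references: list[str] = field(default_factory=list)

def validate(sub: ArgumentSubmission) -> list[str]:
    """Return a list of validation errors (empty list means the submission passes)."""
    errors = []
    if sub.source_type not in ("self_explanatory", "linked_references"):
        errors.append("unknown source type")
    # The Linked References type promises that supporting references are attached.
    if sub.source_type == "linked_references" and not sub.references:
        errors.append("linked_references requires at least one reference")
    return errors
```

The check makes the nudge concrete: an argument claiming to rely on external sources simply cannot be submitted without attaching them.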

Our answer consists of two parts.

  1. Friend groups: One way nlite is commonly used is to investigate controversial topics within friend groups. While people may have differing opinions in these environments, they often do not intentionally distort data to misrepresent the other side. In such cases, nlite serves as a tool for efficient discussion.
  2. Broader environments: In more public or diverse settings, where bad-faith actors may be present, nlite incorporates safeguards to minimize manipulation. A current area of focus is the development of an algorithm that helps detect the possible presence of two subgroups of users: one acting in good faith, aiming to rank arguments in their proper order, and another acting in bad faith, aiming to rank arguments in reverse order or randomly.

    It's also worth noting that if manipulative behavior is detected, the platform can publicize it, damaging the perpetrators' reputations by more than any early benefit they might have gained.
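One way such agreement-based grouping might work is to measure how each user's ranking correlates with the consensus and partition users by the sign of that correlation. The sketch below uses Kendall's tau for this; nlite's actual algorithm is not public, so the approach and function names here are illustrative assumptions only.

```python
from itertools import combinations

def kendall_tau(ranking_a, ranking_b):
    """Kendall rank correlation between two rankings of the same items.
    +1 means identical order, -1 exactly reversed, near 0 unrelated."""
    pos_a = {item: i for i, item in enumerate(ranking_a)}
    pos_b = {item: i for i, item in enumerate(ranking_b)}
    concordant = discordant = 0
    for x, y in combinations(ranking_a, 2):
        # A pair is concordant if both rankings order it the same way.
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

def split_users(user_rankings, consensus, threshold=0.0):
    """Partition users into those aligned with the consensus ranking
    and those ranking in reverse or essentially at random."""
    aligned, opposed = [], []
    for user, ranking in user_rankings.items():
        tau = kendall_tau(ranking, consensus)
        (aligned if tau > threshold else opposed).append(user)
    return aligned, opposed
```

In practice a real detector would need to handle partial rankings, noise, and strategic users who only occasionally deviate, but the core signal, systematic disagreement with the emerging consensus, is the same.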

This is a valid concern, as a good argument may well be presented with different wordings by different people. If the platform’s ranking algorithm functions properly, all variants will rise to the top, leading to redundancy among the top items. To address this, the platform includes a mechanism to identify and remove duplicate arguments.

When users click the Evaluate Arguments button under a viewpoint, the platform occasionally asks the following question alongside two selected arguments: Are the following arguments (essentially) making the same point? Responses to this question are used to identify and eliminate duplicate arguments. (Learn More)
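One simple way to turn those pairwise answers into duplicate groups is to merge two arguments whenever enough users agree they make the same point, using a union-find structure to propagate the merges. This is a sketch under assumed parameters (vote minimum, agreement threshold), not nlite's actual pipeline.

```python
from collections import defaultdict

class UnionFind:
    """Minimal disjoint-set structure for merging duplicate argument IDs."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def duplicate_groups(votes, min_votes=3, threshold=0.7):
    """votes maps an argument-ID pair to a list of True/False user responses
    ("are these essentially the same point?"). Pairs with enough agreement
    are merged; groups of size > 1 are reported as duplicates."""
    uf = UnionFind()
    for (a, b), responses in votes.items():
        if len(responses) >= min_votes and sum(responses) / len(responses) >= threshold:
            uf.union(a, b)
    groups = defaultdict(list)
    for arg in {x for pair in votes for x in pair}:
        groups[uf.find(arg)].append(arg)
    return [sorted(g) for g in groups.values() if len(g) > 1]
```

Once a group is confirmed, the platform can keep the strongest phrasing and fold the others into it, so the top of the ranking stays free of redundancy.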

If you’ve reviewed the arguments on both sides and still feel unsure, that often means it’s time to ask new questions to deepen your understanding. If you believe the investigation is already fairly developed, here are a few tips that might help:

  1. Remember that good decision-making doesn’t require absolute certainty. Total certainty is unrealistic. Strong decision-makers are not those who wait for perfect clarity, but those who efficiently gather relevant information and make informed choices based on it.
  2. Use AI tools to summarize the discussion. While AI isn’t ideal for generating new logical reasoning, it excels at summarizing complex material. You can use it to distill the arguments presented by real people, which may help clarify the main points and contrasts.

This is a great question! nlite is a valuable tool in the following two situations: (1) You are new to a topic and want to learn about it quickly and efficiently, or (2) You are already deeply knowledgeable in a particular area and wish to share your expertise with others but lack an effective platform to do so. These scenarios are discussed in more detail below.

For learners: If you are new to a topic, nlite significantly increases the efficiency with which you can learn about it, thanks to the brief, to-the-point structure of topic pages and the rigor with which the top arguments are identified. The alternative would be to spend many hours listening to debates or studying lengthy articles, and even then, you wouldn't know whether you had merely learned the insights of one particular group of experts, or whether what you came across were truly the strongest arguments available for each viewpoint.

For experts: If you are already deeply knowledgeable and experienced in a field (we all have our own areas of expertise!) and would like to enlighten society with your knowledge, but lack a platform to do so, nlite gives you a powerful opportunity to share your insights. Note that you don't need to be a well-known public figure to have an impact on nlite. The platform is designed to create an environment where the strength of your arguments determines your influence. Financial, political, and societal backgrounds are not significant factors.

Overview