While the reasoning capabilities of LLMs are steadily improving, they still lack the depth and nuance of human reasoning. Additionally, LLMs fall short when the online data available for a topic is biased, for example when certain viewpoints are under-represented.
LLMs also suffer from the hallucination problem, meaning they occasionally produce inaccurate or misleading results.
Despite these limitations, LLMs might be able to generate a valuable set of initial arguments. Humans can then use this output as a starting point, or study the retrieved data, to craft better, more nuanced arguments or questions.
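As a rough illustration of how such an initial set of arguments might be obtained, the sketch below queries a chat-style LLM for a first draft that a human can then refine. It assumes access to an OpenAI-compatible chat API; the model name, prompt wording, and helper function are illustrative assumptions, not part of any specific system described here.

```python
# A minimal sketch, assuming an OpenAI-compatible chat API is available;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def initial_arguments(topic: str, stance: str, n: int = 5) -> str:
    """Ask the model for a first draft of arguments that a human can refine."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model could be used here
        messages=[
            {"role": "system",
             "content": "You are a debate assistant. List concise, distinct arguments."},
            {"role": "user",
             "content": f"Give {n} initial arguments {stance} the claim: {topic}"},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content


# Example: generate a starting point for a human to critique and extend.
print(initial_arguments("Remote work improves productivity", "supporting"))
```

The output is deliberately treated as a draft: the human reviewer checks each argument for accuracy and bias before building on it.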