The advent of ChatGPT, OpenAI's innovative AI tool, has sparked both intrigue and caution within the scientific research community. Its potential to transform how we search for information is profound, as evidenced by the growing number of AI tools for exploring the scientific literature.
One such tool making waves is Elicit.org. Designed as a research assistant, it employs language models such as GPT-3 to automate elements of researchers’ workflows, refining the literature review process. AI tools such as this are promising for the future of scientific research, but understanding their functions and limitations is crucial when employing them.
Elicit’s intuitive interface makes it easy to ask questions and get results. For instance, asking, “Is high blood pressure associated with severe SARS-CoV-2 infection?” prompts Elicit to retrieve relevant papers and produce an interactive table summarizing key findings from the top four publications. This table makes it simple to compare studies side by side, covering up to fourteen categories such as main findings, study limitations, and question-relevant summaries.
Additionally, Elicit offers features to filter search results by keywords in the abstract, publication date, full-text availability, and study type. The search results can then be exported as a CSV or BibTeX (.bib) file for further review.
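An exported CSV can be processed with standard tools for that further review. Below is a minimal sketch using Python's built-in csv module; the column names ("Title", "Authors", "Year") and the sample rows are hypothetical, since the exact format of an Elicit export may differ.

```python
import csv
import io

# Hypothetical miniature of an Elicit CSV export; real column names may differ.
sample = """Title,Authors,Year
"Hypertension and COVID-19 severity","Doe et al.",2021
"Blood pressure outcomes in older adults","Smith et al.",2019
"""

# Parse each row into a dict keyed by the header columns.
rows = list(csv.DictReader(io.StringIO(sample)))

# Keep only papers published in or after 2020, mirroring Elicit's date filter.
recent = [r for r in rows if int(r["Year"]) >= 2020]
print([r["Title"] for r in recent])
```

In practice you would replace the inline sample with `open("elicit_export.csv")` and adapt the filter to whatever columns your export actually contains.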
Elicit is a creation of Ought, a non-profit machine learning research lab. It sources papers from Semantic Scholar, a comprehensive database hosting over 212 million papers from diverse scientific disciplines. Detailed information about how Elicit works is available from Ought.
The dawn of AI in research is exciting, with Elicit being among the players spearheading this transformative journey. Tools like Elicit promise enhanced efficiency and a reimagined approach to literature reviews. However, it’s crucial to navigate this new landscape carefully, maximizing benefits while being mindful of potential limitations.
~Ansuman Chattopadhyay