Research Rovers: AI Research Assistants for NASA

Help NASA assess emerging capabilities for AI-based research assistants.

$30,000 in prizes
Oct 2023
294 joined

Problem Description

In this challenge, you are asked to provide an AI-based research assistant solution that addresses one or more of the research tasks below.

A NASA Researcher's Mission

NASA has a bold and inspiring mission: to explore the unknown in air and space, innovate for the benefit of humanity, and inspire the world through discovery. These goals require NASA researchers to maintain a continuous understanding of emerging scientific and technological advancements and to leverage those advancements toward the mission.

To do this, they need to understand the existing literature in research domains both within and outside their areas of expertise. In their current workflow, this may involve running multiple queries on public and commercial databases, downloading hundreds of papers, and reviewing the relevance of each paper based on its title, abstract, and other metadata. Synthesizing the results then requires reading, or at least skimming, hundreds of papers while navigating different reporting standards and jargon across disciplines and publications.

A great deal of time and effort is spent reviewing information that is ultimately not relevant or useful. More powerful tools can help researchers make this process more efficient, and potentially unlock new ways to understand the vast and growing body of scientific literature.

Research assistant tasks

NASA has identified the following set of research tasks that an AI-based solution may be able to assist with. You should choose one or more of these tasks to address with your solution.

  1. Identifying seminal papers in a particular domain or domains
  2. Identifying state-of-the-art papers in a specified domain and relevant papers in related domains
  3. Summarizing research results across different publication formats and standards
  4. Identifying relevant search terminology in a particular domain (which may differ from the researcher’s field of expertise, even for equivalent concepts)
  5. Identifying test problems or benchmark datasets in a particular domain or domains
  6. Identifying research gaps and opportunities for new research in a particular domain or domains
  7. Identifying the leading experts and potential collaborators in a particular domain or domains
  8. Interactive compiling of a written report summarizing the research corpus in a particular domain or domains

You are also permitted to provide a solution for a task not listed above, so long as it relates to the broader goal of assisting the NASA workforce with research. Use the list above as a guide to what is likely to be relevant and valuable. Be advised that going outside the provided list entails some risk: there is no guarantee that your solution will be relevant to NASA researchers, and relevance is a critical component of any submission.

Submission Format

Your submission will be a ZIP archive containing the following two components:

  • 5–10 minute video demonstration (video.mp4)
  • 2–4 page writeup (writeup.pdf)

Additionally, you may include optional supplementary materials, like code or visualizations, in support of your submission. Please ensure that the total size of your submission does not exceed 1 GB. When extracted, your submission must have the following top-level file and directory names:

.
├── video.mp4
├── writeup.pdf
└── supplementary/
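
If helpful, here is a minimal Python sketch for packaging a submission in this layout. The file names come from the tree above; the output name submission.zip is illustrative, and the supplementary/ directory is optional:

import zipfile
from pathlib import Path

# Write video.mp4, writeup.pdf, and an optional supplementary/ directory so
# that all three sit at the top level of the extracted archive.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("video.mp4")
    zf.write("writeup.pdf")
    if Path("supplementary").is_dir():
        for path in sorted(Path("supplementary").rglob("*")):
            if path.is_file():
                zf.write(path)  # keeps the supplementary/... relative paths

# The challenge caps total submission size at 1 GB.
assert Path("submission.zip").stat().st_size < 10**9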

Video requirements

Your video should be an MP4 file of 5–10 minutes in length. Videos should be in English and clearly demonstrate how your proposed solution addresses one or more research assistant tasks.

Videos are not expected to have professional-level quality and polish, but should succinctly summarize the main points of your demonstration. Substance matters more than slickness. Screen recording videos with narration are acceptable. Participants are not required to appear in their videos.

Writeup requirements

Your writeup should succinctly summarize the key points of your demonstration, and should meet these requirements:

  • 2–4 pages including figures and tables
  • 8.5 x 11 inch paper size with minimum margin of 1 inch
  • Minimum font size of 11
  • Minimum single-line spacing
  • PDF file format

We suggest including each of the following sections in your writeup to ensure it contains all of the information needed for the judging process.

Relevance

  • Which research assistant task is being addressed and how? If not addressing one of the provided tasks, explain the research task your solution is addressing and why it is relevant.
  • What are the specific pain points or limitations in existing research processes that are being addressed by your solution?

Effectiveness

  • How effective is your solution in addressing the research assistant task?
  • How much does your solution improve workforce productivity and efficacy versus what is currently standard practice? If standard practice is not well-established, feel free to provide what you think is a reasonable definition of standard practice. If possible, provide quantitative estimates of improved effectiveness such as time saved, productivity increases, etc.

Deployability

  • Provide instructions for how competition organizers could reproduce your proposed solution.
  • What are the costs to implement your solution, including but not limited to commercial costs of API usage, data storage, etc.?
  • Provide the names and license types of any datasets used in developing your solution, including free and publicly available data.
  • Provide the names of any third party products or services used in developing your solution.
  • Briefly outline how your proposed solution might be productionized for ongoing use. For these purposes, you do not need to consider specifics of NASA technological infrastructure and can outline a plan that would be feasible for a generic organization following modern software best practices.

Supplementary material (optional)

Participants may optionally provide supplementary material in support of their solution. Supplementary materials are not required for a successful solution, and judges are not required to review any supplementary materials that are submitted. Please ensure that the total size of your submission does not exceed 1 GB.

Here are some examples of supplementary material that may be included:

  • Visualizations
  • Jupyter notebooks
  • Code or links to GitHub repositories
  • Images or figures from the demonstration video
  • Demonstration video transcript

Judging

A panel of NASA experts, including researchers, will review all submissions. Submissions will be scored using the rubric described below, with prizes awarded to the top 4 submissions overall.

Judging rubric

Submissions will be judged using the following rubric of equally weighted criteria. In assigning scores to each submission, judges will consider the questions outlined below for each criterion.

  1. Relevance (25%)

    • Does the solution address one of the research assistant tasks or another task that is relevant to NASA researcher workflows?
    • How clearly is the relevance of the solution communicated in the video and writeup?
  2. Effectiveness (25%)

    • How effectively does the solution address a pain point or limitation in existing workflows?
    • Where possible, does the submission provide quantitative measures of effectiveness (such as productivity increases, time saved, etc.)?
    • Are there gaps or shortcomings in the solution that the participant may not have addressed?
    • Are the solution outputs (recommendations, summarizations, etc.) trustworthy or verifiable?
    • How clearly is the overall effectiveness of the solution communicated in the video and writeup?
  3. Deployability (25%)

    • How easy or difficult would the solution be to implement? To what extent is there a clear and feasible path for deploying the demonstrated solution so that it can be used on an ongoing basis by NASA researchers?
    • Is the feasible path consistent with contemporary industry best practices for software development? Does it make use of or remain compatible with cloud-based services where appropriate?
    • How high are the financial costs (API usage, cloud costs, subscriptions, etc.) to implement the solution?
    • Do the video and writeup clearly show how the solution would be reproduced by a user or developer other than the participant?
    • How clearly are these deployability concerns communicated in the video and writeup?
  4. Novelty (25%)

    • Does the solution employ a novel or unique approach relative to what is currently available to researchers?
    • Is it unique compared with other challenge solutions?
    • Does it make an original contribution to the public discourse about leveraging AI for this task?
    • How clearly is the novelty of the solution communicated in the video and writeup?

Example Solutions

Open-ended challenges like this one provide an opportunity to be creative, but that flexibility to explore many different ideas can also make it daunting to get started.

Below we provide a non-exhaustive list of the types of solutions that might impress the judging panel. Keep in mind that you do not need to provide a solution that follows one of these examples. Feel free to develop a solution not listed here, so long as you ensure it remains relevant to NASA researcher workflows.

  • A customized chatbot that leverages commercially available LLMs to answer questions about a user-provided set of documents (a minimal sketch follows this list).
  • A search tool prototype that uses the NTRS dataset to identify the leading NASA experts in a given field and their areas of expertise.
  • A demonstration of ways to use a pre-trained model that has been fine-tuned to specialize in literature summarization and can ingest new datasets.
  • A recommender system prototype that can suggest research papers or leading experts based on the researcher’s domain of interest, or across domains.
  • A browser plugin that assists with literature summarization by enabling integration with LLM components.
  • A Jupyter notebook demonstrating a research assistant chatbot developed using a combination of original code and a commercially available API.
  • A new feature added to an existing research assistant tool that improves literature summarization capabilities.
  • [or your idea here]
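
As one concrete starting point, here is a minimal sketch of the first idea in the list above: a chatbot that answers questions about a small, user-provided set of documents. It assumes the openai Python client (v1 API) and scikit-learn are installed and an OPENAI_API_KEY environment variable is set; the TF-IDF retrieval step and the model name are placeholder choices, not challenge requirements:

from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def answer(question: str, documents: list[str], top_k: int = 3) -> str:
    # Rank the user's documents by TF-IDF similarity to the question and
    # keep the top few as context (a deliberately simple retriever).
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(documents)
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    context = "\n\n".join(documents[i] for i in scores.argsort()[::-1][:top_k])

    # Ask the LLM to answer using only the retrieved context.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any available chat model
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

A real solution would go further (document chunking, embedding-based retrieval, citations back to source passages), but this is the basic shape of the idea.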

On the other hand, here are some examples of solutions that would not meet our minimum bar for novelty and originality:

  • Demonstration of prompting best practices on the ChatGPT web application or similar.
  • Tutorial-style demonstrations of documented features in existing tools.

Midpoint Submissions

You are permitted to make one midpoint submission, which will allow you to receive initial feedback from judges on your solution. Judges will provide individual feedback on your midpoint submission privately by email. Only one midpoint submission per team or participant will be accepted. We highly encourage you to take advantage of this opportunity.

See the midpoint submissions page for details and to make your submission.

The deadline for midpoint submissions is September 8, 11:59 p.m. UTC.

NASA Technology Ecosystem

NASA researchers use a variety of data sources and tools in their current workflows which, given the diversity of the workforce and personal preferences, are not easily summarized. In general, researchers are able to access free and openly available data on the internet. They use office productivity software such as Microsoft 365, but are not limited to a particular brand or product. Increasingly, the organization favors cloud-based solutions and makes use of hosted cloud environments such as those provided by Amazon, Google, and Microsoft.

As a participant, you are not expected to tailor your solution to the specifics of NASA's existing technology usage, but being aware of some of the data and tools used by NASA researchers may inspire you to come up with your own ideas or build on what already exists.

Data sources

To help get you started, challenge organizers have assembled the following list of publicly available data sources relevant to this challenge, which may be used in developing and demonstrating your own solution. Each of these data sources provides options for bulk downloading of scientific literature metadata or content, which you may then use to explore the problem space, train and test models, and ultimately demonstrate the relevance and efficacy of your solution. Participants are responsible for assembling their own dataset from these data sources.

You are not required to use these data sources as part of your solution. Our hope is simply to point out some possible options. Please also note that since these data sources are not hosted by DrivenData, we cannot guarantee that they will be maintained in a particular state over the duration of the challenge.

  • arXiv: arXiv is a free-to-access online repository where researchers can share and access preprints of scientific papers in various disciplines. Articles are available for bulk download directly from Amazon S3 or by using open source tools. Note that files downloaded from Amazon S3 are on a requester-pays basis.
  • CrossRef: CrossRef is an organization that assigns Digital Object Identifiers (DOIs) to academic content and facilitates reliable linking and citation of scholarly publications. It also provides a free API that allows users to access metadata about research papers registered by member organizations, such as research institutions and publishers.
  • NASA Technical Reports Server (NTRS): NTRS provides access to scientific and technical information created or funded by NASA, including metadata, full-text documents, images, videos, conference papers, journal articles, meeting papers, patents, research reports, and more. The bulk records file on the NTRS website contains metadata for over half a million NTRS documents, with fields for authors, affiliations, abstract, title, and dates, and in many cases links to full content (slides, videos, technical paper PDFs, etc.).
  • OpenAlex: OpenAlex is an open catalog of global research, providing metadata on papers, authors, institutions, concepts, and publishers. A free API is available with a daily limit of 100,000 requests per user, along with a complete database snapshot that is updated each month (see the example request after this list).
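
As one illustration, here is a minimal sketch of pulling highly cited papers on a topic from the free OpenAlex API, a crude first pass at task 1 above. The search string and contact email are placeholders, and sorting by citation count is only a rough proxy for "seminal"; OpenAlex asks callers to include a mailto address to join its polite-request pool:

import requests

response = requests.get(
    "https://api.openalex.org/works",
    params={
        "search": "entry descent and landing guidance",  # placeholder topic
        "sort": "cited_by_count:desc",                   # most-cited first
        "per-page": 10,
        "mailto": "you@example.com",                     # placeholder contact
    },
    timeout=30,
)
response.raise_for_status()

for work in response.json()["results"]:
    print(work["publication_year"], work["cited_by_count"], work["display_name"])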

Analytical tools

Below is a non-exhaustive list of existing research assistant tools used by NASA researchers. This list is intended to give you a sense of the types of tools that are available and may inspire you to come up with related ideas or build on what already exists. You are not required to incorporate these tools as part of your solution.

  • ChatPDF: Online tool that lets you upload a PDF and ask a chatbot about it.
  • Connected Papers: Online tool to visualize research paper relationships.
  • Elicit: Free literature review tool that uses language models like GPT-3.
  • Google Scholar: Free search engine that indexes scholarly literature.
  • Inciteful: Online tool for finding relevant literature using citation network analysis.
  • Papers: A reference management app for collecting and organizing research materials.
  • ProQuest: Collection of databases for academic theses, dissertations, and journals.
  • Research Rabbit: Online citation-based literature mapping tool.
  • ResearchGate: Social network for researchers to share and discover work.
  • Scite.ai: Platform for discovering and evaluating scientific articles.
  • Semantic Scholar: AI-powered research tool for scientific literature.
  • and many more...

Developers of existing research assistant tools are encouraged to participate in this challenge by providing demonstrations of new features for their existing tools.

Third-party Data and Services

You are permitted and even encouraged to use third-party data and services in developing your solution, but keep in mind that you are responsible for ensuring that you have all the necessary rights, licenses, and permissions to use those data and services. As part of your submission, you will need to describe any components of your solution that depend on proprietary systems or commercial products.

Note that datasets with limitations on "commercial use" may not allow for use in a data science challenge, which could make your solution ineligible for prizes. If using data from third-party websites, like subscription-based research archives and publishers, be sure that your use of the data does not violate the terms of your user agreement or access restrictions.

Be sure to review the challenge rules and, when in doubt, ask questions on the user forum.

Good Luck!

Good luck and enjoy this problem! If you have any questions, you can always visit the user forum!