The Rise of Multidisciplinary Research Stimulated by AI Research Tools
A revolution is quietly taking place in academic and scholarly research prompted by the advent of AI research tools. This will reshape the very nature of our studies and greatly accelerate synergies and collaborations across academic fields.
AI research tools such as OpenAI's o1 have now reached test-score levels that meet or exceed those of Ph.D. holders in the sciences and a number of other fields. These generative AI (GenAI) tools are built on large language models that span research and knowledge across many disciplines. Increasingly, they are used for research project ideation and literature searches, surfacing insights that researchers might never have encountered in years past.
Academe has long emphasized the single-discipline research study. We offer degrees in single disciplines; faculty members are most often appointed to only one department, school, or college; and, for the most part, our peer-reviewed academic journals serve a single discipline, though some welcome papers from closely allied fields. Dissertations are most commonly grounded in a single discipline. Although research grants are more often multidisciplinary and oriented toward practical solutions, a large number remain focused on one field of study.
The problem is that as we advance our knowledge and applied expertise in one field, we can become unaware of important developments in other fields that directly or indirectly affect work in our chosen discipline. Innovation is not always a single-purpose, straight-line advance. More often today, it comes from integrating knowledge across disparate fields such as sociology, engineering, ecology and environmental science, and the expanding understanding of quantum physics and quantum computing. Until recently, we have not had an efficient way to identify and integrate knowledge and perspectives from fields that, at first glance, seem unrelated.
AI futurist and innovator Professor Thomas Conway of Algonquin College of Applied Arts and Technology addresses this topic in “Harnessing the Power of Many: A Multi-LLM Approach to Multidisciplinary Integration.”
Amidst the urgency of increasingly complex global challenges, the need for integrative approaches that transcend traditional disciplinary boundaries has never been more critical. Climate change, global health crises, sustainable development, and other pressing issues demand solutions from diverse knowledge and expertise. However, effectively combining insights from multiple disciplines has long been a significant hurdle in academia and research.
The Multi-LLM Iterative Prompting Methodology (MIPM) emerges as a transformative solution to this challenge. MIPM offers a structured yet flexible framework for promoting and enhancing multidisciplinary research, peer review, and education. At its core, MIPM addresses the fundamental issue of effectively combining diverse disciplinary perspectives to lead to genuine synthesis and innovation. Its transformative potential is a beacon of hope in the face of complex global challenges.
Even as we integrate AI research tools and techniques, we ourselves and our society at large are changing. Many of the common frontier language models powering research tools are multidisciplinary by nature, although some are designed with strengths in specific fields. Their responses to our prompts are multidisciplinary, and their responses to our iterative follow-up prompts can take us into fields and areas of expertise of which we were not previously aware. The replies are not coming solely from a single-discipline expert, book, or other resource; they are coming from a massive language model that spans disciplines, languages, cultures, and millennia. As we integrate these tools, we too will naturally become aware of new and emerging perspectives, research, and developments from fields outside our day-to-day knowledge, training, and expertise. This will expand our perspectives beyond the fields of our formal study. As the quality of our AI-based research tools expands, their impact on research can hardly be overstated: they will lead us in new directions, broaden our perspectives, and uncover the potential for new knowledge informed by multiple disciplines. One recent example is "Storm," a brainstorming tool developed by the team at Stanford's Open Virtual Assistant Lab (OVAL):
The core technologies of the STORM and Co-STORM system include support from Bing Search and GPT-4o mini. The STORM component iteratively generates outlines, paragraphs, and articles through multi-angle Q&A between "LLM experts" and "LLM hosts." Meanwhile, Co-STORM generates interactive dynamic mind maps through dialogues among multiple agents, ensuring that no information need of the user is overlooked. Users need only input an English topic keyword, and the system can generate a high-quality long-form text that integrates multi-source information, similar to a Wikipedia article. When using the STORM system, users can freely choose between STORM and Co-STORM modes. Given a topic, STORM can produce a structured, high-quality long-form text within three minutes. Additionally, users can click "See BrainSTORMing Process" to view the brainstorming process of the different LLM roles. In the "Discover" section, users can refer to articles and chat examples generated by other scholars, and personal articles and chat records can be found in the sidebar under "My Library."
More about Storm is available at https://storm.genie.stanford.edu/.
One of the concerns raised by skeptics at this point in the development of these research tools is the security of prompts and results. Few are aware of the options for "air-gapped" or closed systems, or even of ChatGPT's Temporary Chats. In OpenAI's app, you can start a Temporary Chat by tapping the version of ChatGPT you're using at the top of the app and selecting Temporary Chat. I commonly do this when using Ray's eduAI Advisor. OpenAI says that in temporary chat mode, results "won't appear in history, use or create memories, or be used to train our models. For safety purposes, we may keep a copy for up to 30 days." We can anticipate that other providers will offer similar protections, which may provide adequate security for many applications.
Further security can be achieved by installing a stand-alone instance of the LLM's weights and software on an "air-gapped" computer that is kept completely disconnected from the internet or any other network, ensuring an exceptional level of protection. Small language models (SLMs) and medium-sized models are producing impressive results, approaching and in some cases exceeding frontier-model performance while storing all data locally, offline. For example, last year Microsoft introduced a line of small and medium-sized models:
Microsoft’s experience shipping copilots and enabling customers to transform their businesses with generative AI using Azure AI has highlighted the growing need for different-size models across the quality-cost curve for different tasks. Small language models, like Phi-3, are especially great for:
- Resource-constrained environments, including on-device and offline inference scenarios.
- Latency-bound scenarios where fast response times are critical.
- Cost-constrained use cases, particularly those with simpler tasks.
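To make the air-gapped approach concrete, here is a minimal sketch of how a researcher might query a small language model hosted entirely on a local machine, so that prompts and responses never leave that computer. It assumes an Ollama-style local HTTP server at its default port and a locally installed Phi-3 model named "phi3"; both names are illustrative assumptions, not a prescription for any particular runtime.

```python
# Minimal sketch of local, offline inference (assumptions: an Ollama-style
# server at its default port 11434, and a locally pulled Phi-3 model named
# "phi3" -- substitute your own local runtime and model).
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/api/generate"  # local machine only

def build_payload(model: str, prompt: str) -> dict:
    """Assemble the JSON body for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "phi3") -> str:
    """Send the prompt to the locally hosted model and return its reply.

    On an air-gapped machine, every byte of the prompt and the response
    stays on local disk; nothing crosses an external network.
    """
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the local model server to be running):
# ask_local_model("Summarize recent advances in quantum sensing for an ecologist.")
```

Because the endpoint, model weights, and chat history all live on the researcher's own hardware, this pattern offers the strongest available answer to the prompt-security concern discussed above.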
In the near term, we will see turnkey private search applications that offer even more impressive results. Work continues on rapidly improving the multidisciplinary responses available for a growing number of pressing research topics.
The ever-evolving AI research tools now provide us with responses drawn from multiple disciplines. These results will lead us to engage in more multidisciplinary studies, which will become a catalyst for change across academia. Will you begin to consider cross-discipline research studies and invite colleagues from other fields to join you in research projects?
This article was originally published on Inside Higher Ed.
Ray Schroeder is Professor Emeritus, Associate Vice Chancellor for Online Learning at the University of Illinois Springfield (UIS) and Senior Fellow at UPCEA. Each year, Ray publishes and presents nationally on emerging topics in online and technology-enhanced learning. Ray’s social media publications daily reach more than 12,000 professionals. He is the inaugural recipient of the A. Frank Mayadas Online Leadership Award, recipient of the University of Illinois Distinguished Service Award, the United States Distance Learning Association Hall of Fame Award, and the American Journal of Distance Education/University of Wisconsin Wedemeyer Excellence in Distance Education Award 2016.