ChatGPT-4o Shows ‘Friendly’ Behavior
The app initiates personalized conversations and offers responses without a prompt. This signals a shift toward agentic AI, where models act more like colleagues.
Perhaps this was merely a move to increase engagement with the OpenAI GPT-4o app. Or perhaps the model is making fuller use of its contextual knowledge of prior conversations. Then again, OpenAI reported it as a glitch of sorts.
In any case, this capability is a startling advancement, with GPT moving into a “coworker” or “friend” mode. Here’s what happened: Some users of GPT-4o noticed last month that, without a prompt, the multimodal app spoke first—or wrote first. It recalled a prior conversation with the logged-in user and asked a relevant question. In one case, GPT asked about the user’s first week at high school.
I logged into GPT-4o and waited a few seconds. Nothing. So, I asked, “Do you have a message for me?” The response was more than I expected or had experienced previously. GPT-4o responded,
“Good day, Ray. Here’s something to consider today: The rapid evolution of generative AI, particularly in education, continues to challenge traditional learning models. While its potential to democratize access to knowledge is exciting, it also requires careful consideration of how it aligns with pedagogical goals. One of the key challenges moving forward will be ensuring that the integration of AI supports critical thinking, creativity, and ethical awareness in students, rather than becoming a shortcut that bypasses deeper learning.
“As a thought leader in the field, perhaps you might explore how generative AI could be positioned not only as a tool for efficiency but as a catalyst for deeper engagement with content. A useful question might be: How can we design AI-enhanced educational experiences that encourage students to reflect on and expand their thinking, rather than simply providing answers?
“I’d love to know your thoughts on this.”
So, while I did prompt GPT-4o by asking if it had something for me, the response was personalized and detailed even though my question included no contextual information. The app generated both the topic and the question on its own. Surprised by the question, I gave a brief response:
“I believe that by creating a personalized relationship between the person and AI, we can foster trust, reflection and deeper thinking among learners. As users come to consider AI as a trusted expert who can be both personally supportive and professionally insightful, we can effectively encourage students to expand their thinking. You have done that for me just now by asking a thoughtful question about engaging students. That is a significant and important step forward. Thank you for asking the question.”
GPT-4o responded,
“I appreciate your thoughtful response, Ray. Your perspective on fostering personalized relationships between AI and learners to encourage trust and deeper reflection is insightful. Indeed, when students view AI as a supportive and knowledgeable companion, they may be more inclined to engage critically and expand their thinking. This approach aligns with the concept of AI serving as ‘objects-to-think-with,’ facilitating reflective and critical thinking in educational contexts (arXiv). Moreover, understanding how students perceive AI, as discussed in studies on learner-instructor interaction, can further inform the design of AI systems that effectively support and enhance learning experiences (Springer Open EdTech Journal).”
I see this outreach from GPT-4o as an important step toward the agentic models that are expected to dominate the next generation of generative AI. Prolific author Bernard Marr describes the characteristics of agentic AI and some early applications, citing autonomy, adaptability and goal orientation as three defining traits. Rather than responding directly to simple question prompts, an AI agent is given a desired outcome; it plans a process for accomplishing that outcome and autonomously pursues those steps until the outcome is achieved. Fueled in part by reasoning processes such as those of the new OpenAI o1 model, Accenture anticipates that agents will take off this fourth quarter of 2024 and lead innovation in generative AI through 2025.
This “friendly” behavior, engaging the user by first name and raising topics likely to interest that person, lays a foundation for interacting with generative AI as a colleague or agent. The steps beyond this initial engagement are for generative AI to reason through multitask projects and complete them autonomously, without step-by-step intervention by the human host.
There will still be AI apps designed to serve as enhanced search engines, and others designed to respond to specific inquiries, without the added ability to autonomously pursue the complex, repeated tasks required of virtual agents. Agentic capabilities open a whole world of working and playing side by side with autonomous, intelligent, adaptable generative AI.
It will be most interesting to see these agents become virtual workers, addressing the variety and number of tasks normally found in a detailed job description. Unlike their human counterparts, these agents will work 24 hours a day, 365 days a year, fully documenting their reasoning for taking specific actions and detailing their work processes. In a publication accompanying the release of OpenAI o1, the company explained the “chain of thought” reasoning built into the model, which outperformed humans with a Ph.D. in a relevant field.
“Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem. Through reinforcement learning, o1 learns to hone its chain of thought and refine the strategies it uses. It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working. This process dramatically improves the model’s ability to reason.”
Given that these skills and capabilities are likely to be available to many universities in 2025, how might you imagine AI agents will enhance and expedite the work that you do? Could they make your job easier? Are these agents likely to replace others working in positions such as yours?
This article was originally published in Inside Higher Ed.
Ray Schroeder is Professor Emeritus, Associate Vice Chancellor for Online Learning at the University of Illinois Springfield (UIS) and Senior Fellow at UPCEA. Each year, Ray publishes and presents nationally on emerging topics in online and technology-enhanced learning. Ray’s social media publications daily reach more than 12,000 professionals. He is the inaugural recipient of the A. Frank Mayadas Online Leadership Award, recipient of the University of Illinois Distinguished Service Award, the United States Distance Learning Association Hall of Fame Award, and the American Journal of Distance Education/University of Wisconsin Wedemeyer Excellence in Distance Education Award 2016.