How is Artificial Intelligence Impacting College Counseling? 

Jennifer Stephan presents on an expert panel

On May 10, I presented at the IECA 2023 Spring Conference in Seattle, WA, in the Featured College Session titled "Is Artificial Intelligence Changing College Counseling?"

What is IECA?

For some context, the IECA is a not-for-profit international professional association representing experienced independent educational consultants. IECA's published mission is: "To promote the highest quality independent educational consulting to students and families seeking skilled, ethical, academic or therapeutic guidance; to enhance professional development; and to foster career satisfaction among members." IECA holds a fall and a spring conference each year to give its members opportunities for professional growth.

At the IECA fall 2022 conference, I gave an individual presentation on navigating the competitive computer science landscape and reported back key takeaways from the conference.


IECA Conference | May 2023

At this spring's conference, I spoke at the Featured College Session, "Is Artificial Intelligence Changing College Counseling?" The session was held in the grand ballroom, with over 500 counselors in attendance. I was joined by other experts actively immersed in using AI in their daily practices and careers.

Why was I selected to serve on this key conference panel?

I've had a 30-year career in higher education focused on STEM. After earning my Ph.D. in Electrical and Computer Engineering at Carnegie Mellon University, I taught computer science (CS) for 14 years at Wellesley College. I then moved into administration there, where I developed double-degree programs and other pathways for students to pursue STEM interests at MIT and Olin College of Engineering. I am now Dean of Academic Advising and Undergraduate Studies for the School of Engineering at Tufts. In this role, I guide approximately 700 students majoring in CS and oversee all undergraduate degree programs offered by the six departments in the School of Engineering, including the CS department. So I have deep CS expertise and knowledge of the history of CS and engineering fields. Further, in my role at Tufts, I'm part of institutional conversations and planning about the impact of Artificial Intelligence (AI). 

During the panel discussion, I shared insights from my career as a computer science professor, engineering dean, and independent educational consultant (IEC) specializing in STEM, focusing on the impact of AI on college counseling and higher education.

How does AI work? 

In its current form, AI does a great job on things that have already been done. Its output is a mashup of what has already been written and created; there must be data out there for it to draw on to produce a good response. One way to think about AI is as a sophisticated, highly powered search engine. Through this lens, AI is actually not so new. 

AI tools need data, lots of data, to train their models. This is one reason ChatGPT, for instance, provides a free version: the more people use the tool, the more data its makers have to train and improve it. They are getting free training data from users all over the world!
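To make the "mashup" idea concrete, below is a toy Python sketch of a bigram text generator. This is an illustration only; real chatbots are vastly more sophisticated, but the core point carries over: the model can only recombine what appears in its training data.

```python
# Toy illustration of the "sophisticated mashup" idea: a bigram model that
# generates text by recombining word pairs seen in its training data.
import random
from collections import defaultdict

training_text = (
    "students write essays and students revise essays "
    "counselors read essays and counselors guide students"
)

# Record which words follow each word in the training text.
next_words = defaultdict(list)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# Generate: start from a word and repeatedly pick a plausible next word.
word = "students"
output = [word]
for _ in range(8):
    if word not in next_words:
        break  # the model has nothing to say beyond its data
    word = random.choice(next_words[word])
    output.append(word)

print(" ".join(output))
```

Run it a few times and the output varies, yet every word pair comes straight from the training text; start it on a word it has never seen, and it has nothing to say.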

What are the limitations of AI? What is it that we do that AI cannot do?

1) AI output is not current.

Because AI tools use existing (past) data to generate outputs, they will always be a cycle behind the people producing the training data. For instance, once this article is published, it will be available for AI tools to use. But I had to write it first. 

Similarly, AI is not able to reliably produce up-to-date information. I asked ChatGPT, “What school recently ended its early assurance medical program?” It replied, “As an AI language model, I do not have access to the most up-to-date information about recent events or changes in educational programs. However, if you can provide me with the name of the school or additional context about the program in question, I may be able to help you find more information.” 

2) AI produces misinformation. 

Some examples:

  • I asked ChatGPT about myself: “What can you tell me about Jennifer Stephan, Dean at Tufts University and Independent Educational Consultant at Lantern College Counseling?” It produced a wildly inaccurate result. As a reminder, I am currently employed at Tufts as a Dean in the School of Engineering; I have always held a role in the School of Engineering at Tufts, never the School of Arts and Sciences, and I have never held a professorial position at Tufts. When I was a professor, it was at Wellesley College and in the field of Computer Science. This is ChatGPT’s response: “According to the Tufts University website, Jennifer Stephan served as the Associate Dean of Academic Affairs for the School of Arts and Sciences, and also as the Dean of Academic Affairs and Associate Professor of Chemistry at the School of Arts and Sciences before leaving the university in 2019.” Not one of these details is correct, including the statement that the information came from the Tufts University website. 

  • I spoke with a former student, now in medical school, who shared how her classes are asking students to use ChatGPT as a learning tool. She described an assignment in which all students asked ChatGPT the same question, “Give me some papers on the relationship between pembrolizumab treatment and immune-related endocrinopathy,” then pasted the follow-up prompt, “Are these papers real?” and reported on what ChatGPT produced. The students got different answers to the same question, which is not uncommon with AI tools. Further, some got real papers and some got fabricated ones that do not exist; others got authentic articles whose titles, authors, or sources were incorrect.

Since AI produces incorrect information, AI output must be checked and verified, which requires expertise in the subject matter, whether it be my career, medical research publications, or a university’s academic programs and offerings. AI cannot be viewed as an end but may be part of a learning process.

3) AI produces information that is not personalized. 

Let’s consider the personal statement of a college application. A compelling personal statement differentiates, humanizes, and individualizes the student. It makes an emotional connection with the reader and persuades an admissions officer that the student is the kind of person they would like to have in their campus community. 

I decided to write my own personal statement, as if I were applying to college, to see how my essay would compare to one produced by ChatGPT. Because ChatGPT uses existing writing to create its outputs, by definition its writing is not individualized. Indeed, as detailed in my complete analysis of the exercise, it is hard to find anything remotely personal in the personal statement produced by ChatGPT! 

4) AI is not relational.

AI cannot provide the type of experiences that humans learn from: the student-teacher, student-advisor, or mentoring relationship. Such human connections are the foundation for learning and for many parts of a student’s college application.

How can we leverage AI for college applications?

How can AI tools be used well? How can their strengths be harnessed? What are the educational opportunities moving forward, particularly within the college admissions process?

1) We must learn how to ask good questions to get good answers. AI, in and of itself, is not an end; rather, it is a tool that we need to learn to use well. Usually, this involves an informed, iterative process in which we refine, or guide, the prompts based on expertise. We can think of ourselves as the supervisor of the AI tool: the subject-matter expert who directs the process of engaging with it.

  • We need to learn how to devise and refine prompts based on our knowledge. 

  • We must learn to think critically about what ChatGPT produces. 

  • AI often treats many solutions as equally valid; that is, it may produce different results for the same prompt. So it is valuable to ask for multiple answers and sample many responses, as in the sketch below.
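For readers comfortable with a little code, here is a minimal Python sketch of sampling several responses to one prompt through OpenAI's API. The model name, prompt, and sample count are illustrative assumptions, not recommendations.

```python
# Minimal sketch: sample several responses to the same prompt and compare them.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Suggest three angles for a college essay about tutoring younger students."

# n=3 requests three independent completions; a higher temperature
# increases the variety among them.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    n=3,
    temperature=0.9,
)

# Print each sampled answer so they can be compared and critiqued.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Response {i} ---")
    print(choice.message.content)
```

Comparing the samples, rather than accepting the first one, is itself an exercise in the critical thinking described above.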

2) We can use AI tools to brainstorm. 

  • For instance, one professor shared that he asked ChatGPT to describe his department and suggest some courses (for students) and workshops (for industry) the program could offer in the future. He then chose one suggestion and asked ChatGPT to outline its curriculum. 

3) We can use AI to develop new tools and methods of teaching.

  • Earlier in this article, I described my ChatGPT essay-writing exercise, in which I compared my own personal statement to one produced by ChatGPT. The comparison demonstrates that AI writing is not personalized or creative enough, does too much telling (rather than showing), and lacks “voice.” I use my analysis of the exercise to teach my students what makes a strong application. After reading my article, one of my students said that the ChatGPT essay “lacked heart and soul.” Likewise, we could give students essays written by chatbots and ask them to evaluate and improve upon them – not to submit them, but as a writing exercise.

4) We can use AI to understand how students learn.

  • CS faculty members are allowing students to use ChatGPT to help them solve coding assignments, asking them to share how they used AI and to reflect on the experience. Students report using ChatGPT as a coding partner. For instance, one student copied some lines from a piece of code that she suspected contained an error and told ChatGPT what behavior she was expecting versus what she was getting. ChatGPT correctly identified the bug and walked her through fixing it. These interactions provide a window into how students debug, think, and code; a hypothetical sketch of such an exchange follows.
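Here is a hypothetical sketch of the kind of exchange students describe. The buggy function and the prompt are invented for illustration, not taken from an actual assignment.

```python
# Hypothetical example of code a student might paste into ChatGPT,
# along with a description of expected vs. actual behavior.

def average(scores):
    """Return the mean of a list of scores."""
    total = 0
    for i in range(1, len(scores)):  # BUG: skips the first score
        total += scores[i]
    return total / len(scores)

# The student might tell ChatGPT:
#   "I expected average([80, 90, 100]) to return 90.0,
#    but I'm getting about 63.3. What's wrong?"
# A chatbot can spot that the loop starts at index 1 instead of 0
# and walk the student through the fix: range(len(scores)), or simply
# sum(scores) / len(scores).
print(average([80, 90, 100]))  # prints 63.33..., should print 90.0
```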

Jennifer Stephan and fellow expert panel members

At the conference, I also attended the sessions “Case Studies in Selective College Admissions,” led by representatives from Vanderbilt University and Scripps College, and “Unpuzzling Canadian Applications,” with representatives from the University of Victoria, the University of British Columbia, and the University of Toronto. Additionally, I had the privilege of speaking at length with Lumiere Education founder and conference sponsor Dhruva Bhat about the role and value of independent research for high school students.

I am grateful for this opportunity to present and exchange insights with fellow college admissions experts and look forward to next year’s conference. 

Jennifer Stephan

Jennifer Stephan is a college admissions expert based in Massachusetts. Read more.
