AI in admissions: CU Boulder study creates tool to read college essays

Dec. 30—A new AI tool can read college essays — and it has the potential to change the way admissions is run at colleges and universities nationwide.

University of Colorado Boulder researchers developed AI tools that can read college admissions essays and identify the presence of certain character traits without showing bias toward applicants from particular racial, gender or socioeconomic backgrounds.

"It doesn't look for words," Sidney D'Mello, CU Boulder professor and the study co-author, said. "It looks for words in context. It's really looking for meaning. That's the key thing."

The tools, while not yet in use at any higher education institution, have the potential to assist admissions officers. They could check officers' biases and improve efficiency, ensuring no applicant is forgotten.

"There's so much more to a person than what they write in an essay," D'Mello said, adding, "I really think these tools can help surface aspirational, deserving, amazing students who may otherwise get overlooked."

Some universities are starting to consider how AI might play a role in admissions. At CU Boulder, spokesperson Nicole Mueksch said the university "is still very much in an exploratory and fact-finding phase in regard to how AI could be incorporated into the admissions process." Because of this, CU Boulder isn't in a position to discuss it at this time.

To conduct the study, the team of researchers partnered with Common App to assemble a data set of 300,000 students who applied to college in 2008 and 2009. They took a random sample of essays, which human readers then rated for the presence or absence of seven personal qualities, including leadership, mindset and teamwork. The researchers then created an AI model to replicate the human ratings of the essays.
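The study's code and modeling choices aren't described in the article, but the rating-replication step can be sketched in miniature. Everything below is an assumption for illustration: synthetic vectors stand in for the contextual essay representations the researchers allude to, and a simple linear fit stands in for whatever model they actually trained.

```python
# Toy sketch of training a model to replicate human essay ratings.
# All names and choices here are hypothetical, not from the study:
# real work would use contextual embeddings ("words in context"),
# not random vectors, and likely a more sophisticated model.
import numpy as np

rng = np.random.default_rng(0)

n_essays, dim = 500, 32
# Stand-in essay representations (in practice: embeddings per essay).
embeddings = rng.normal(size=(n_essays, dim))
# Simulated human ratings for one trait (e.g. leadership), assumed to
# depend linearly on the representation plus rater noise.
true_weights = rng.normal(size=dim)
human_ratings = embeddings @ true_weights + rng.normal(scale=0.1, size=n_essays)

# Fit by ordinary least squares: learn weights that mimic the raters.
weights, *_ = np.linalg.lstsq(embeddings, human_ratings, rcond=None)
model_ratings = embeddings @ weights

# Agreement between model and human ratings (Pearson correlation).
r = np.corrcoef(human_ratings, model_ratings)[0, 1]
print(round(r, 2))
```

The point of the sketch is the workflow, not the numbers: humans produce the ground-truth ratings, and the model's only job is to reproduce them, which is what makes auditing the model against human judgment possible.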

The study found that the AI-generated ratings of the essays matched the humans' scores well. The AI model was also able to accurately predict whether a student graduated, based on the presence or lack of desired character traits identified in the essay.

"We found we could replicate the human ratings pretty accurately and that these ratings were then predictive of college graduation," Benjamin Lira, a doctoral student at the University of Pennsylvania and the study's first author, said.

The AI tool can sift through large volumes of data while humans supply the data and guide decisions, a key component throughout the study. The tool does not replace admissions officers, D'Mello said. Rather, it offers them more support. People often don't have enough time to read every essay deeply, and they frequently disagree on an essay's merits.

Additionally, the recent Supreme Court rulings against affirmative action mean more emphasis is placed on college essays, D'Mello said. An applicant without an admissions consultant may not produce a beautifully polished essay, but they can still express a lived experience in a deeply passionate way. The AI tool can judge both essays for the same character traits, without giving an advantage to the student with the perfectly written one.

Lira said these types of AI tools could allow admissions officers to be less affected by bias and be more efficient.

"There's a bunch of research showing that humans can be unreliable," Lira said. "So, I think we would suggest that having a model alongside you could help you catch things that maybe you would miss."

Lira said ethical concerns surrounding AI are warranted, and there are a number of examples where deploying AI has been harmful. AI is not inherently good or evil, Lira said; it's simply math that reproduces what it gets from its data and programming. If it's harmful, he said, it's due to biased data.

As long as AI is used carefully and constantly audited for fairness, it can be a beneficial tool in admissions, Lira said. He added that it's unclear when and how colleges might begin to incorporate these types of tools.

"We're hoping our findings can speak to ways we can try to use AI for admissions in particular but any kind of decision-making in general in a way that's aligned to what we want to achieve as a society," Lira said.

Before the tools can be used, D'Mello said, researchers need to do additional work to collect more data sets and make sure the AI tools are robust, accurate and transparent.

"There's much more work to be done and we think it's important to go slow rather than release these things into the wild," D'Mello said.