The generative generation

How Lafayette faculty are tackling the weighty ethical and educational questions raised by the latest wave of artificial intelligence.

Illustrations By Helena Pallarés

Miles Morrison is into rock climbing. He’s also into artificial intelligence. To him, those aren’t disparate interests. He’s drawn to a YouTube video in which a computer-generated voice guides a couple of climbers to perfect their form as they scale a climbing wall. It breaks down every move: Keep your knees from flailing to the left or right. Don’t roll your shoulders too widely. Stay balanced and focus on speed.

Morrison is a Lafayette junior majoring in integrative engineering. He’s been working this summer with Christian López, assistant professor of computer science, on a project involving large language models (LLMs). LLMs understand and generate language; they’re one iteration of AI, and they power the now-ubiquitous ChatGPT.

Their project uses digital flashcards to help people learn Python, a computer language. It’s a thoroughly personalized tutorial: The very sensitive (and very nonhuman) tutor is constantly refining the flashcards so that the user gets more proficient—in this case, more Python-proficient—over time. López describes it as “adaptive complexity.”
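
In spirit, that loop can be imagined in a few lines of Python, the same language the flashcards teach. The sketch below is purely illustrative: the cards, difficulty scores, and update rule are invented here, not taken from López and Morrison’s system, which tracks a learner far more carefully.

# A toy "adaptive complexity" loop for flashcards. The cards, difficulty
# scores, and update rule are invented for illustration; this is not the
# system López and Morrison built.
FLASHCARDS = [
    {"prompt": "What does len('abc') return?", "answer": "3", "difficulty": 1},
    {"prompt": "What does [x * 2 for x in range(3)] produce?", "answer": "[0, 2, 4]", "difficulty": 2},
    {"prompt": "What does sorted({'b': 2, 'a': 1}) return?", "answer": "['a', 'b']", "difficulty": 3},
]

def pick_card(cards, skill):
    """Choose the card whose difficulty sits closest to the learner's current skill."""
    return min(cards, key=lambda card: abs(card["difficulty"] - skill))

def update_skill(skill, correct, step=0.5):
    """Nudge the skill estimate up after a correct answer, down after a miss."""
    return skill + step if correct else max(1.0, skill - step)

skill = 1.0  # every learner starts at the easiest level
for _ in range(5):
    card = pick_card(FLASHCARDS, skill)
    answer = input(card["prompt"] + " ").strip()
    correct = answer == card["answer"]
    print("Correct!" if correct else "Not quite; the answer is " + card["answer"] + ".")
    skill = update_skill(skill, correct)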

By definition, AI is the simulation of human intelligence by computers. As a concept, it’s hardly new. Back in 1950, computer scientist Alan Turing published an article with this line: “I propose to consider the question, ‘Can machines think?’”

Whether machines will ever be, or already are, thinking remains a matter of dispute among scholars. What we do know is that while AI took off several years ago, it now feels like it’s taking over. According to Educause, a nonprofit that focuses on technology in higher education, within 60 days of its introduction in November 2022, ChatGPT grew by 9,900 percent and reached 100 million users. Last fall brought the debut of Meta AI, which promised to enhance connections and conversations on Facebook and Instagram. And in June, Apple announced Apple Intelligence to supercharge how its products perform.

All the AI excitement has some fraught accompaniments. Also this summer, The New York Times looked at health-related questions addressed by AI. When asked, “How many rocks should I eat?” (with no regard for how many rocks an enthusiast like Morrison should climb), AI would sometimes answer: at least one rock a day—for vitamins and minerals.

In a nod to a healthy learning environment, Lafayette faculty passed a motion, effective last spring semester, that class syllabi must include course-specific AI policies. Additionally, a College working group created a 41-page guidebook last year on the topic of generative AI to serve as a resource for faculty and staff. (Find that at provost.lafayette.edu/policies-and-procedures.)

According to López’s syllabus statement for Computers and Society, AI shouldn’t be used to churn out large blocks of text or substitute original ideas. What would be permissible: leaning on the technology to help brainstorm ideas, explore counterarguments, or revise a few sentences.

“If you use an AI tool, you are required to cite your use of it in a footnote, endnote, or other detailed citation, like the way a scientist might describe an instrument they have used in an experiment,” López says on the syllabus. “If you use a language model and do not cite it, it will be considered academic dishonesty.”

López also offers what he calls a good rule of thumb: “You may use AI tools to enhance your learning; you may not use them as an opportunity to cheat yourself of the opportunity to learn.”

For Morrison, López’s research-minded student, AI has a special significance. Morrison is dyslexic, and he frequently uses a text-to-speech application called Speechify, which helps him efficiently process material he would otherwise absorb through slow reading. The challenges he has faced fueled his interest in the summer project. He likes the idea of “building tools that can enhance the learning experience for everyone, especially those with defined setbacks.”

AI tools are a natural fit for Lafayette’s Computer Science Department. López’s newest colleague there is Sofia Serrano, who is joining the department this fall as an assistant professor. Her work focuses on natural language processing (NLP); NLP is a subset of AI, along with other overlapping areas like robotics and computer vision. She’s interested in how NLP models work. That might involve designing methods to explain how they produce a particular piece of text or investigating what kinds of information they pick up—or fail to pick up—from the training data.

The reactions that greeted AI, a mixture of excitement, healthy skepticism, and trepidation, have a certain resonance with those that met earlier innovations like Google Search, Serrano says. But there are big differences. When Google Search draws from the web, it points to the specific sources it found. Nothing requires ChatGPT to make note of its sources, some of which may be inauthentic and some of which may reflect social biases.

When ChatGPT responds to a prompt, it’s basically making a prediction that hinges on the huge chunks of data it was trained on, its analysis of that data, and the search for patterns that presumably point to the right (or most probable) answer. As viewed by a language model, the collection of text sources used to train it is “just one big stew of text.”
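
In miniature, that prediction step might be imagined like this. The candidate words and probabilities below are invented for the example; a real model computes scores like these over tens of thousands of possible tokens, learned from its training data.

# Illustrative only: next-word prediction in miniature. The candidates and
# probabilities are made up; a real LLM derives them from patterns in its
# training data, not from a hand-written table.
prompt = "The cat sat on the"
candidate_scores = {"mat": 0.62, "floor": 0.21, "roof": 0.09, "moon": 0.01}

next_word = max(candidate_scores, key=candidate_scores.get)
print(prompt, next_word)  # -> "The cat sat on the mat"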

“As an educator, I wouldn’t want to see the constant suspicion that student work is being produced by a language model,” Serrano says. “If you have students who are submitting essays generated by ChatGPT, something has already gone wrong in the educational process.” Rethinking assignments is one way to meet the challenge. The deeper response, she says, is “communicating to students what we are trying to have them learn and why we are trying to have them learn it.”

Technology-assisted learning has long been an interest for Tim Laquintano, associate professor of English and director of the College Writing Program. (In 2018, Lafayette magazine published an article about his work.) The impact of artificial intelligence is “one of the most difficult things to assess I’ve ever seen,” he says. “There are incredible amounts of hype around AI, along with the hysteria.”

One of Laquintano’s former students, Mia Powell ’24, who also did an independent study with him, has managed to navigate the hype and hysteria. She was an integrative engineering major and English minor. On the first day of class, she recalls, Laquintano had the students read two versions of a poem, one written by a human and the other by AI. The students were evenly split as they tried to identify the (human or automated) authorship.

Since graduating, Powell, who works for a utility company and who developed a love of poetry at Lafayette, has been putting ChatGPT through its paces in her leisure time. She’s discovered one limitation of the technology. “I’ve still not been successful at getting it to write a poem that doesn’t rhyme,” she says. “I’ve tried a lot of different ways to work around its resistance. There’s only one instance where it somewhat cooperated: The first stanza of the poem didn’t rhyme, and then it went right back to rhyming the rest of the poem.”

Laquintano, who has a long-running project looking at workplace-based writing, says the technology is best suited for tasks that the user is doing over and over. “People are automating mundane, rote stuff, what might be considered the most boring parts of their job. That doesn’t tend to be what college writing is.”

He also teaches AI workshops for faculty and has found a range of attitudes among his colleagues. Some have a basic curiosity about how students are taking to AI, “so they’ll play around with the technology.” Some, particularly those accustomed to data-driven or computational work, are actively engaged with AI for their own research. Some, like López, are designing assignments to give students a chance to experiment with it.

There are also faculty members who don’t yet have a need, or an interest, to apply the technology. Beyond the research and teaching questions, these new applications raise significant ethical concerns: the legal squishiness of scraping large chunks of information off the web; the environmental consequences of all the computing power that underlies the technology; and the models’ inscrutable workings, with the protocols used to train them largely hidden behind “black boxes.”

Laquintano has had his students interview other students about AI. The most interesting finding, he says, was that students might be using AI more as a reading technology than a writing technology. That is, they’re feeding articles into the tool and having it produce summaries.

This may seem more innocuous than having the tool complete writing assignments; in his experience, students resist handing in ChatGPT-produced papers. Still, faculty might be wary of any technology-enabled shortcuts, a wariness that extends to summarizing along with brainstorming, outlining, and editing. “We could say it’s circumventing or short-circuiting reading practices we want students to develop,” Laquintano says. “Or it’s possible that summary at least prepares them to listen to a lecture.”

When it comes to AI as a reading technology, Elaine Reynolds, professor of biology and neuroscience, is an early adopter. A lot of the work in her Aging and Age-Related Disease course steeps students in scientific literature. They can easily get stuck on technical discussion. AI, in her teaching, isn’t just a summary-producing aid for the occasional assignment. It’s a tool to boost reading comprehension.

Students will plunge into scientific readings and inevitably confront something they don’t understand—for example, “PCR” (polymerase chain reaction, a lab technique used, among other things, to detect viruses). AI allows students to work through layers of confusion. Googling, by contrast, might send them down rabbit holes, with link after link that “may or may not contain answers, may or may not have summaries or definitions at the right level,” Reynolds says.

In the end, critical analysis is key—and there’s no AI shortcut to that learning goal. Was the design of the experiment rigorous? Did the methods follow the design? Did the subjects being studied reflect an appropriate demographic? Reynolds expects students to think through those questions. She also expects them to fully document their interactions with AI.

With the emergence of AI in higher education, as she wrote recently in a neuroscience-education journal, a timeless challenge for professors is acquiring new urgency: “How to ensure that students are provided opportunities to develop knowledge and skills, and to think critically about a course’s subject matter.”

In his subject, Walter Wadiak, associate professor of English, melds old and new. A medievalist, he wanted to introduce ChatGPT to Middle English, and the College’s mission statement (as it read at the time) was a tempting text to play with. In the training, he “talked” to ChatGPT about a particular syntactic feature or word that would pop up in Chaucer. The model took the feedback and applied the example to the general writing task. As it churned out a Chaucer-style mission statement, bit by bit, Wadiak watched it “get better over time.”

That experience gave him the idea to present the translation in class. He wanted students to ponder: What does the Middle English version gain or lose in relation to the modern English version? Middle English as an AI offering may “lack the conceptual specificity of modern English,” Wadiak says. Still, “a lot of students talked about how the Middle English version is more embodied, effective, and emotionally rich.”

Wadiak separately asked ChatGPT to help him come up with “something a little tough to translate and relevant to our concerns.” It suggested something, essentially, about itself: a statement about how digital connectivity might diminish human interaction. The students took that ChatGPT-generated statement, used ChatGPT to rework it as a text Chaucer would recognize, and critiqued the product.

“Sometimes I feel like a naïve booster,” Wadiak says of AI. “I understand a little about how it works. But if we as a faculty ignore it, the students will find the least productive ways to use it. So, we can’t ignore it.”

“The power of liberal arts institutions comes not in simply exploring how to use technology but also in interrogating and helping to shape those uses through interdisciplinary lenses,” says Provost Laura McGrane. “Lafayette is well positioned to be a leader in critical conversations about the global implications of these technologies that have an impact on everything from privacy to health care, from civic discourse to what it means to be human.”

Over at Skillman Library, Charlotte Nunes, dean of libraries, is part of a campus-wide committee working to help the College take full advantage of the technology while maintaining, she stresses, “a critical awareness of its risks.” Librarians are always attuned to how information is “generated and circulated,” she says, and how it is used productively and ethically. Such an effort, she adds, builds on Lafayette’s strengths in digital scholarship, which encompasses everything from working with large data sets to creating multimedia projects.

Nunes, working with a colleague, has deployed AI to imagine a bigger footprint for the library: more study rooms, new “maker space” for student projects, more capacious quarters for Special Collections. Machines can’t dream, so far, but they can be dream enablers.

Not long ago, many on campus may not have seen themselves as AI stakeholders. But there are plenty of converts. Caleb Gallemore, associate professor of international affairs, is typical. Students in his methods course have crafted research designs for an array of speculative projects: the impact of women-driven workers’ organizations; how infant mortality rates relate to education levels; what happens to crop prices when farmers are given global phone coverage.

Those design projects all entail surveying the relevant literature. But an interdisciplinary program draws students with different academic orientations, from anthropology to economics. They may be caught in “literature bubbles,” he says, meaning they may not be adept at, for example, identifying the right keywords for a particular topic. Could AI, with its analytical power, lead them to what they need to be reading?

AI is already embedded in the career of Chris Shumeyko ’10, who graduated with a mechanical engineering degree. After earning his Ph.D., he taught in the department for a few years. He’s now a senior associate, focusing on technology, with Booz Allen Hamilton in Pittsburgh. At a recent conference, he drew on his AI expertise to talk about how capturing and analyzing data through sensors and other avenues could transform athletic performance.

He’s also worked at the Army Research Lab, where he shifted his focus from futuristic materials for military vehicles to AI-driven predictive maintenance and logistics. A civilian example would involve packages crisscrossing the country: If something like a monster-size storm looms over one distribution center, how do you handle the complicated supply chain? AI could help anticipate the event and reroute delivery vehicles appropriately.
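
At its simplest, that rerouting logic might be sketched like this. The hub names, routes, and storm forecast below are invented for illustration; a real logistics system would also weigh costs, capacity, and timing that this toy version ignores.

# A toy version of the rerouting idea: if a hub is forecast to be hit by a
# storm, send its packages through the next-best hub instead. Hub names,
# routes, and the storm warning are all invented.
STORM_WARNINGS = {"Memphis"}  # hubs expected to be disrupted

# Each package lists the hubs it could be routed through, in order of preference.
packages = [
    {"id": "PKG-001", "route_options": ["Memphis", "Louisville", "Dallas"]},
    {"id": "PKG-002", "route_options": ["Columbus", "Memphis"]},
]

def choose_hub(route_options, warnings):
    """Pick the most preferred hub that is not under a storm warning."""
    for hub in route_options:
        if hub not in warnings:
            return hub
    return None  # no safe hub available; hold the package

for pkg in packages:
    hub = choose_hub(pkg["route_options"], STORM_WARNINGS)
    print(pkg["id"], "->", hub if hub else "hold at origin")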

Shumeyko, a committed skier and past president of the U.S. Collegiate Ski and Snowboard Association, grew up in upstate New York. He was frustrated by snowless winters. For his Lafayette admissions essay, he wrote about building his own snowmaking machine. Now, many ski resorts use AI to gather climate data, determine when the snow base might require a boost, and then fire up the snowmaking equipment—turning it on and off at ideal moments.
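
A rule-of-thumb version of that snowmaking decision could look like the sketch below. The thresholds and forecast figures are made up for illustration; real resort systems weigh far more signals than these three.

# A toy decision rule in the spirit of the snowmaking example. The thresholds
# and forecast data are invented; this is not any resort's actual system.
def should_make_snow(base_depth_cm, temperature_c, humidity_pct):
    """Fire up the guns only when the base is thin and conditions allow it."""
    cold_enough = temperature_c <= -2   # needs sub-freezing air
    dry_enough = humidity_pct <= 80     # humid air makes poor snow
    base_is_thin = base_depth_cm < 45   # target base depth
    return base_is_thin and cold_enough and dry_enough

hourly_forecast = [
    {"hour": "22:00", "base_depth_cm": 38, "temperature_c": -4, "humidity_pct": 70},
    {"hour": "23:00", "base_depth_cm": 38, "temperature_c": -1, "humidity_pct": 85},
]

for hour in hourly_forecast:
    on = should_make_snow(hour["base_depth_cm"], hour["temperature_c"], hour["humidity_pct"])
    print(hour["hour"], "guns ON" if on else "guns OFF")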

He finds that the perfect example of AI in action: “managing data, extracting what’s important, orchestrating tasks.” As he sees it, AI helps solve complicated problems efficiently. But “helps” is an important qualifier. “We are not creating a decision tool,” he says. “We are creating a decision aid.”