
“Jack the Bulldog” is a chatbot for Georgetown's Admissions Office, designed to answer commonly asked questions about admissions.
Duration: 2 months
Deliverables: Chatbot UI, RASA dialogue system
My Role: Conversation Design, Natural Language Processing
Problem Statement
The Georgetown Undergraduate Admissions Office needs a more efficient way to manage a high volume of global inquiries from prospective students, especially during peak times near application deadlines.

- Limited Accessibility: Time zone differences make it difficult for prospective students in different countries to reach the admissions office during its working hours.
- Delayed Responses: The high volume of emails and phone calls, especially near deadlines, can lead to slower response times, frustrating both the admissions office and prospective students.
- Overwhelmed Staff: The admissions team struggles to manage the constant flow of inquiries, leading to inefficiencies, potential miscommunication, and a less personalized experience for students.
Impact
212
utterances from users
81.1%
overall success rate of the system
4.2
average rating out of 5

What are the key data points that users typically search for?
Data were manually extracted from Georgetown University webpages, including the undergraduate admissions page and pages linked from the main university page. The information from these webpages was written into responses for specific user intents.
Intents:
Topics a user's input can express that have a matching response from the system. We defined 47 intents in total, with an average of about 10 example utterances each.


Examples of the good mood intent and the challenge bot intent
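For illustration, here is a minimal Rasa NLU sketch of two such intents; the intent names follow the caption above, but the example utterances are assumptions invented for this sketch, not the project's actual training data:

```yaml
# Hypothetical NLU training data for two of the 47 intents.
# The utterances below are invented for illustration.
nlu:
- intent: good_mood
  examples: |
    - I'm so excited to apply!
    - today has been a great day
    - I just finished my essays and I feel great
- intent: challenge_bot
  examples: |
    - are you a real dog?
    - I bet you can't answer this
    - do you actually know anything about Georgetown?
```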
Design the dialogue flow
To organize our intents, we created 11 categories based on the topics one would find on a Georgetown University webpage, covering both the application process and the university in general: transfer students, dates, application requirements, admission statistics, international students, financial aid, housing, student life, visits, contact information, and out of domain.
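One plausible way to reflect these categories is as comment-grouped intents in the Rasa domain file; the intent names below are assumptions for illustration, not the project's actual identifiers:

```yaml
# Sketch: intents grouped by category in domain.yml (names invented).
intents:
# dates
- ask_application_deadline
- ask_decision_release_date
# financial aid
- ask_tuition
- ask_scholarships
# out of domain
- out_of_scope
```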

Basic topics of the system

System architecture
Train the system
Entities are specific pieces of information that Rasa can extract from a user's message, and responses are all of the system's outputs. Each response we created was specific to a single user intent.
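As a rough sketch, an entity and an intent-specific response might be declared in domain.yml like this (the entity name, response name, and wording are assumptions, not the project's files):

```yaml
# Sketch of domain.yml fragments (names and wording invented).
entities:
- program          # e.g. "computer science", extracted from the user's message

responses:
  utter_application_requirements:
  - text: "Here is everything you need to apply to Georgetown."
```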
Most of our dialogue management was handled in Stories. For each intent, we created a story that answered the user's question and then asked for confirmation that the question had been answered. We incorporated checkpoints to steer the conversation in the proper direction, depending on whether the user was satisfied with Jack's response and the information presented.
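A minimal sketch of how one such story and its checkpoints could look; the intent, action, and checkpoint names are assumptions made for this example:

```yaml
stories:
- story: answer a housing question
  steps:
  - intent: ask_housing
  - action: utter_housing
  - action: utter_ask_if_answered   # "Did that answer your question?"
  - checkpoint: check_satisfied

- story: user is satisfied
  steps:
  - checkpoint: check_satisfied
  - intent: affirm
  - action: utter_anything_else

- story: user is not satisfied
  steps:
  - checkpoint: check_satisfied
  - intent: deny
  - action: utter_offer_fun_fact    # Jack pivots to a fun fact instead
```

Ending the first story on a checkpoint and starting the follow-up stories from the same checkpoint is what lets the conversation branch on the user's confirmation.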

Integrate the persona
To make the conversation more engaging and establish emotional connections with prospective students, we decided to give the chatbot a distinct personality.

We chose Jack the Bulldog, Georgetown University’s official mascot, as our persona. The system embodies the persona mainly through the use of language.
Jack has a unique way of saying things, and it is incorporated throughout the conversation. For example, he calls the admissions office the “Paw-ffice” and apologizes by saying “I a-paw-logize.”


Jack tells a fun fact about Georgetown when he cannot answer a question, and he is happy to provide more fun facts if the user asks. We also tried to convey his emotions and attitudes throughout the conversation.
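In Rasa terms, this persona language would live directly in the response templates. The response names and exact wording below are assumptions based on the phrases quoted above, not the project's actual responses:

```yaml
# Sketch: persona-flavored responses in domain.yml (wording invented).
responses:
  utter_fallback:
  - text: "I a-paw-logize, I don't know that one! Here's a fun fact about Georgetown instead."
  utter_offer_more_facts:
  - text: "Want another fun fact from the Paw-ffice? Just ask!"
```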

VUI Design
- GU's mascot logo
- Hamburger menu for restarting the session and a Help option
- Two of GU's official colours


Test the design with the university community
To evaluate the system, we collected both objective and subjective data. A total of 20 participants were recruited to interact with the bot. Immediately after the interaction, each participant completed a user experience survey.

Survey Responses on System Performance

Survey Responses on Persona
For the objective evaluation, we collected two sets of data: one measuring the overall success rate of the system (Table 1) and the other measuring the task success rate (Table 2).


Out of 212 utterances, the system responded correctly to 172, a success rate of 81.1%. The success rate dropped to 66.7% when only users' questions were considered.