Capstone Undergraduate Projects

Overview

| Level | Project | Scope | Available |
|-------|---------|-------|-----------|
| 498 | Sports Analytics Dashboard | 3-5 | ✔️ |
| 499 | MaskRCNN for Multi-class Segmentation | 3 | ✔️ |
| 499 | Natural Language → Planning | 2-3 | ✔️ |
| 499 | Interactive Agent-based Simulation | 3-5 | ✔️ |
| 499 | Automatic Map Ingestion for ABM | 3-4 | ✔️ |
| 499 | Modelling Human Behaviour | 2-3 | ✔️ |
| 499 | PDDL Editor for Education | 2-4 | ✔️ |
| 499 | CISC 352 Auto-composition of Digital Assets | 2-3 | ✔️ |
| 499 | CISC 352 Assignment Design | 4-5 | ✔️ |
| 500 | Adding Time to NL → Planning | 1 | |
| 500 | Extracting PDDL from DreamerV2 | 1 | |
| 500 | Dialogue Agent Conversation Alignment | 1 | |
| 500 | Sports Analytics | 1-3 | ✔️ |
| 500 | Powerful Puzzling | 1 | ✔️ |
| 500 | Knitting Verification | 1 | ✔️ |
| 500 | Custom Project | 1 | ✔️ |

Capacity

We won’t be able to fill every project position. The following is a rough measure of how much capacity Prof. Muise and the Mu Lab have remaining for capstone projects:


Application Procedure

To apply for any of the projects, please email Prof. Muise with the following details:

  1. Your name and a bit of background about yourself.

  2. The project name and expression of your interest in the area.

  3. Your Queen’s transcript.

  4. (if available) A CV/Resume

  5. (if available) A link to any software/projects you’ve worked on (e.g., GitHub profile).

Over the coming months, we will reach out to interested students and possibly conduct interviews if there is high demand for a project. If necessary, the interview process will involve a small coding exercise as well as a meeting with Prof. Muise and/or Mu Lab members.


Project Ideas

A short summary of each project is below. You can watch a video presentation that covers most of them here:

Sports Analytics Dashboard

The aim of this project is to create a framework and pipeline to enable research into sports analytics at Queen’s University. Co-advised by Prof. Catherine Pfaff and Prof. Muise, students will work with other researchers focusing on sports analytics, as well as contacts within sports organizations (both Queen’s varsity teams and more broadly). (image source)

MaskRCNN for Multi-class Segmentation

Mask-RCNNs are widely used for segmentation tasks. Pre-trained on the COCO dataset, the reference implementation can be found and run from here. On its own, Mask-RCNN can only segment an object; it does not describe the object in any further detail.

The objective of this project is to use Mask-RCNN with a webcam so that objects are segmented in real time in an environment (e.g., your home, the streets), and then use another approach to interpret each segmented object. For example, Mask-RCNN can segment a chair in your room, and then another model added on top (bonus if Mask-RCNN’s architecture is modified to accommodate this) can specify the qualities of the chair (e.g., size, colour, material). (image source)
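The step between segmentation and attribute interpretation can be sketched as a simple filtering pass. In a real pipeline the scores and labels would come from a pre-trained Mask-RCNN (e.g., torchvision's `maskrcnn_resnet50_fpn`) applied to each webcam frame; here they are plain lists so the hand-off logic is clear, and all values are hypothetical.

```python
# Sketch of the post-processing step that would follow a Mask-RCNN forward
# pass: keep only detections confident enough to hand to a downstream
# attribute model (e.g., a size/colour/material classifier).

def filter_detections(scores, labels, score_threshold=0.7):
    """Return (label, score) pairs whose score clears the threshold."""
    return [(label, score)
            for score, label in zip(scores, labels)
            if score >= score_threshold]

# Hypothetical raw model output for one frame.
scores = [0.95, 0.40, 0.81]
labels = ["chair", "person", "chair"]
print(filter_detections(scores, labels))  # → [('chair', 0.95), ('chair', 0.81)]
```

The threshold is a tunable trade-off: too low and the attribute model wastes time on spurious segments; too high and real objects are dropped.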


Natural Language → Planning

The idea for this project is to extract planning models (descriptions of actions, their preconditions and effects, etc.) from natural language instructions. Examples include recipes and WikiHow instructions.
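As a concrete (and entirely hypothetical) target, a system might turn the recipe step "boil the water in the pot" into a PDDL action like the following; the predicate and type names are illustrative only:

```pddl
; Hypothetical action extracted from "boil the water in the pot".
(:action boil-water
  :parameters (?p - pot ?w - water)
  :precondition (and (in ?w ?p) (on-stove ?p))
  :effect (boiled ?w))
```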

Interactive Agent-based Simulation

With large-scale agent-based models (ABMs) being used to simulate the spread of COVID, having a visual means to interact with the simulation can play a crucial role in understanding how events unfold. This project aims to provide such an interface (text-based) for a pre-existing ABM system. (image source)
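A minimal sketch of what such a text-based front end could look like, with the ABM stubbed out as a toy SIR-style step function (in the real project this loop would wrap the pre-existing simulator's API; all names here are hypothetical):

```python
# Toy SIR agent-based model plus a tiny command loop over it.
import random

def step(state, infect_p=0.3, recover_p=0.1, rng=random.Random(0)):
    """Advance every agent one tick: S may become I, I may become R."""
    any_infected = "I" in state
    new = []
    for s in state:
        if s == "S" and any_infected and rng.random() < infect_p:
            new.append("I")
        elif s == "I" and rng.random() < recover_p:
            new.append("R")
        else:
            new.append(s)
    return new

def repl(state, commands):
    """'step' advances the model; 'stats' reports S/I/R counts."""
    out = []
    for cmd in commands:
        if cmd == "step":
            state = step(state)
        elif cmd == "stats":
            out.append({k: state.count(k) for k in "SIR"})
    return out

print(repl(["I"] + ["S"] * 9, ["stats", "step", "stats"]))
```

An interactive version would read commands from stdin instead of a list, but the separation between simulator state and command handling stays the same.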

Automatic Map Ingestion for ABM

Agent-based modelling typically requires many settings to be configured and tweaked in order to obtain accurate results. This project aims to streamline the process of creating a simulation for the spread of COVID in a particular jurisdiction by ingesting geospatial map data for a town or city and converting it to the proper format for a pre-existing simulator.
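The core of the ingestion step is a format conversion. A sketch, assuming GeoJSON-style point features as input and a flat location table as the simulator's format (both schemas are assumptions; the real simulator's format would differ):

```python
# Convert GeoJSON-style features into a flat location table that an ABM
# simulator might ingest. Schemas on both sides are illustrative.
def ingest(features):
    locations = []
    for f in features:
        props = f["properties"]
        lon, lat = f["geometry"]["coordinates"]
        locations.append({
            "name": props["name"],
            "type": props.get("amenity", "residential"),  # default when untagged
            "lon": lon,
            "lat": lat,
        })
    return locations

features = [
    {"properties": {"name": "Kingston General", "amenity": "hospital"},
     "geometry": {"coordinates": [-76.49, 44.22]}},
    {"properties": {"name": "Elm St House"},
     "geometry": {"coordinates": [-76.50, 44.23]}},
]
print(ingest(features))
```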

Modelling Human Behaviour

This project aims to capture interpretable insight from human users of a system using modern AI techniques. The learned representation will capture the core elements of observed human behaviour in a form detailing how and when the user transitions from one mental state to another. The source of behavioural information will be data retrieved from biomedical devices such as heart rate or skin conductance sensors. The elements of the learned representation, and the mechanics it captures, will all be learned entirely in a data-driven fashion. The research will be conducted on a driving simulation testbed that will allow for mixed human-machine control of the (virtual) vehicle.

This is part of a larger research project, and the scope of the 499 project will be limited to one component of the larger system.
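One simple data-driven way to capture "how and when the user transitions from one mental state to another" is to discretize a sensor stream into coarse states and estimate a transition matrix from the observed sequence. The state names and thresholds below are illustrative, not the project's actual representation:

```python
# Discretize a heart-rate stream into coarse states and estimate
# transition probabilities from the observed sequence.
from collections import Counter

def discretize(hr):
    return "calm" if hr < 80 else "alert" if hr < 110 else "stressed"

def transition_probs(readings):
    states = [discretize(h) for h in readings]
    counts = Counter(zip(states, states[1:]))          # count state pairs
    totals = Counter(s for s, _ in counts.elements())  # outgoing totals
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}

print(transition_probs([70, 75, 95, 120, 115, 85, 72]))
```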


PDDL Editor for Education

The aim of this project is to improve the online editor for planning specifications for the purposes of education. This may include anything from new visualization techniques to novel plugins for interacting with or debugging planning problems. (image source)

CISC 352 Auto-composition of Digital Assets

This project builds on ongoing efforts to create a framework of automated pedagogy for AI educational resources. In particular, the aim is to convert a set of given specifications into a visual representation for further modification. Core components of the visual assets will be produced in the lead-up to this project, and the results will be used in future incarnations of CISC 352.

CISC 352 Assignment Design

As part of the re-design of CISC 352, this project aims to re-imagine what assignments are used within the course. This not only involves putting together a compelling assignment to assess the material, but further involves the application of AI techniques to (1) generate the unique problems for each student; (2) automatically mark the student submissions; and (3) automatically generate meaningful feedback when submissions are incorrect.
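Point (1) above amounts to deterministic, per-student problem generation. A minimal sketch, seeding a PRNG with the student ID so regeneration is reproducible (the blocks-world-style instance format is hypothetical):

```python
# Generate a unique, reproducible problem instance per student by seeding
# the PRNG with the student ID.
import random

def generate_instance(student_id, n_blocks=4):
    rng = random.Random(student_id)        # same ID -> same problem
    blocks = [chr(ord("A") + i) for i in range(n_blocks)]
    goal = blocks[:]
    rng.shuffle(goal)
    return {"blocks": blocks, "goal_stack": goal}

a = generate_instance("20251234")
b = generate_instance("20251234")
print(a == b)   # regenerating for the same student gives the same instance
```

The same seed can then drive the automarker (point 2), since the expected solution is recomputable from the student ID alone.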

Adding Time to NL → Planning

Many domains contain an element of time. This project aims to look at the task of converting natural language instructions to planning domains, with a specific emphasis on problems that have durative actions. A prime example is the setting of recipes, where some actions (e.g., put a pot of water on to boil) can be done concurrently with others (e.g., chopping veggies). (image source)
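In PDDL terms, the target representation is a durative action. A hypothetical example for the recipe setting, where boiling takes time during which other actions (e.g., chopping) can run concurrently; all names and the duration are illustrative:

```pddl
; Hypothetical durative action extracted from "put a pot of water on to boil".
(:durative-action boil-water
  :parameters (?p - pot)
  :duration (= ?duration 10)
  :condition (and (at start (filled ?p))
                  (over all (on-stove ?p)))
  :effect (at end (boiled ?p)))
```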

Extracting PDDL from DreamerV2

Dreamerv2 is a modern deep learning architecture for high-performance reinforcement learning results in the Atari domain. It learns a latent representation of the world for conducting search and simulation, and this project aims to analyze that representation for extracting symbolic (and interpretable) world models. (image source)

Dialogue Agent Conversation Alignment

Dialogue agents are commonly represented as trees of conversation paths that alternate between a human user and the agent response. This project aims to align a candidate conversation over top of a pre-existing dialogue agent representation. Where this cannot be achieved completely, a partial alignment should be made for the longest possible conversation prefix, indicating a gap in the dialogue agent’s ability to handle the candidate conversation. This work will require the application of natural language understanding techniques.
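The alignment itself can be sketched as a walk down the dialogue tree that stops at the first unmatched turn. Here matching is exact string equality for clarity; a real system would use NLU-based similarity instead, and the tree structure is an assumption:

```python
# Align a candidate conversation against a dialogue tree, returning the
# longest matched prefix and whether the whole conversation was covered.
def align(tree, conversation):
    node, aligned = tree, []
    for turn in conversation:
        child = node.get("children", {}).get(turn)
        if child is None:
            return aligned, False   # gap: agent can't handle this turn
        aligned.append(turn)
        node = child
    return aligned, True

tree = {"children": {
    "hi": {"children": {
        "book a flight": {"children": {}},
        "check my order": {"children": {}},
    }}}}
print(align(tree, ["hi", "cancel my order"]))  # → (['hi'], False)
```

The `False` result with a non-empty prefix is exactly the "partial alignment" signal described above: it pinpoints where the agent's coverage runs out.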

Sports Analytics

This open-ended project focuses on advanced sports analytics. This includes vision-based applications to locate and identify players, geometric analysis of games to determine team influence on player behaviour, etc. Interested students are encouraged to propose ideas they would like to pursue. (image source)

Powerful Puzzling

The aim of this project is to highlight, on an image of puzzle pieces, a pair of piece edges that likely fit together. Done in real time, this would allow an interactive puzzle-solving experience in which the system suggests the next move and the human responds by attempting the suggestion. (image source)

Knitting Verification

Knitting charts provide a visual representation of the stitches necessary to produce a desired knitting pattern. These charts are typically accompanied by written natural language instructions. However, there can be discrepancies between the chart and the written instructions, causing errors while crafting. This project explores creating a tool to detect whether there is a discrepancy between the instructions and the knitting chart. This will be done by generating knitting charts from natural language instructions, which can then be aligned with the accompanying knitting chart to determine correctness. Existing work has addressed large-scale manufacturing, [analyzing machine instructions for optimization using transfer planning] and [generating manufacturing instructions from chart images]. However, no work to date has focused on synthesizing knitting charts from natural language instructions alone. (image source)
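The generate-then-align step can be sketched by expanding row instructions ("k2, p1" style) into symbol rows and diffing them against the chart. The instruction grammar here is drastically simplified; real patterns involve many more stitch codes and repeats:

```python
# Expand simplified knit/purl instructions into chart rows and report rows
# where the written instructions and the chart disagree.
import re

def row_from_text(instr):
    row = []
    for code, count in re.findall(r"([kp])(\d*)", instr.replace(" ", "")):
        row += [code.upper()] * int(count or 1)   # "k2" -> ["K", "K"]
    return row

def discrepancies(instructions, chart):
    return [i for i, (instr, chart_row) in enumerate(zip(instructions, chart))
            if row_from_text(instr) != chart_row]

instructions = ["k2, p1", "p3"]
chart = [["K", "K", "P"], ["P", "P", "K"]]   # second row disagrees
print(discrepancies(instructions, chart))    # → [1]
```

Reporting the row index (rather than a boolean) is what makes the tool useful to a crafter: it says where the pattern and chart diverge.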

Custom Project

500-level projects are largely about independent research. If you have a passion for research in the areas the lab specializes in (look at the projects above to get a sense), feel free to [pitch a project] that you might like to pursue.