In the MALLET lab I work on a project improving automated algorithm selection. Traditionally, automated algorithm selection models are trained on problem instance
feature values and performance data from algorithm runs. The project I work on showed that training these automated algorithm selection
models on feature values of the algorithms themselves, in addition to the problem instance features, improves overall performance.
This allows sets of problem instances to be evaluated in less time and with less memory, because the best algorithm is chosen more frequently.
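As a rough illustration of this setup (not the project's actual code; all data, dimensions, and names below are synthetic), one can train a runtime model on concatenated instance and algorithm features, then select the algorithm with the lowest predicted runtime:

```python
import numpy as np

# Synthetic sketch of feature-based algorithm selection; the data and
# feature dimensions are made up, not from the MALLET project.
rng = np.random.default_rng(0)

n_instances, n_algorithms = 60, 4
inst_feats = rng.normal(size=(n_instances, 3))   # problem instance features
algo_feats = rng.normal(size=(n_algorithms, 2))  # algorithm features

# Simulated runtimes with an instance-algorithm interaction plus noise
runtimes = inst_feats @ rng.normal(size=(3, 2)) @ algo_feats.T
runtimes += rng.normal(scale=0.1, size=runtimes.shape)

# One training row per (instance, algorithm) pair: concatenated features
X = np.array([np.concatenate([inst_feats[i], algo_feats[j]])
              for i in range(n_instances)
              for j in range(n_algorithms)])
y = runtimes.ravel()

# Simple least-squares runtime model over the joint feature space
A = np.c_[X, np.ones(len(X))]                # add a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def select_algorithm(instance_features):
    """Return the index of the algorithm with lowest predicted runtime."""
    preds = [np.r_[instance_features, algo_feats[j], 1.0] @ coef
             for j in range(n_algorithms)]
    return int(np.argmin(preds))
```

The key structural idea is that the selector sees both kinds of features in each training row, so it can generalize across algorithms rather than learning one independent model per algorithm.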
An extended abstract on my work was submitted to the AAAI Undergraduate Consortium. Abstract.pdf
A 3-minute lightning talk presented at the 2020 Wyoming Research Scholars Symposium
Research: AI Index
Every year Stanford HAI and its partners release a report tracking and summarizing progress in the field of AI.
I ran a set of computational experiments using the high-performance computing cluster at the University of Wyoming that showed the improvement in Boolean Satisfiability (SAT) solvers over the last five years.
My experiments were used in the report, and I was cited for my contribution on page 72.
I've been an avid runner for nearly a decade, and my brother currently runs for a collegiate cross-country and track program. One day we were discussing the NCAA national meet when
I realized the way cross-country meets are scored leaves room for optimization: the closer a runner is to the median of the field, the greater the positive impact the same percentage improvement in their time has
on their team's score. I set out to model and visualize this relationship in R. Check out the graphic below, made with ggplot, to get a visual sense of why this is the case.
I ultimately created a script that ranks the runners on a team by how much an improvement in their time would matter to the team score.
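The intuition behind the median effect can be sketched with a quick simulation (a Python illustration, not my original R code; the field size and time distribution here are made up). Because scoring is based on places, a fixed percentage improvement matters most where the field is densest, i.e., near the median:

```python
import numpy as np

# Illustrative simulation: in a roughly normal field of finish times,
# the same percentage improvement passes the most runners when you
# start near the median, where runners are packed most densely.
rng = np.random.default_rng(1)
times = np.sort(rng.normal(loc=1600, scale=60, size=400))  # seconds

def places_gained(rank, pct=0.02):
    """How many runners a pct time improvement would pass from a given rank."""
    improved = times[rank] * (1 - pct)
    new_rank = np.searchsorted(times, improved)
    return rank - new_rank

front_gain = places_gained(5)    # near the front of the field
mid_gain = places_gained(200)    # near the median
tail_gain = places_gained(390)   # deep in the back
```

With these illustrative parameters, the 2% improvement gains far more places from mid-pack than from either extreme, which is exactly the relationship the graphic visualizes.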
See the pdf below for the full write up on this project.