# Principal Investigator

My research combines artificial intelligence and machine learning to build robust systems with state-of-the-art performance. I develop techniques to induce models of how algorithms for solving computationally difficult problems behave in practice. Such models allow us to select the best algorithm and choose the best parameter configuration for solving a given problem. I lead the Meta-Algorithmics, Learning and Large-scale Empirical Testing (MALLET) lab and direct the Artificially Intelligent Manufacturing center (AIM) at the University of Wyoming.

#### PhD Students

#### Damir Pulatov

In practice, there exist many algorithms that solve hard problems. Algorithm selection is a field within AI that addresses the issue of selecting the optimal algorithm for a given problem instance. Even though these selection systems have shown promise in many applications, there are still scenarios where they perform poorly. We think this is because all algorithm selection approaches treat the algorithms they choose between as black boxes, which means that potentially useful information is ignored when making a selection. We propose a white-box algorithm selection approach, in which we investigate properties of the software itself to help improve algorithm selection, especially in the areas where it fails.
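
The core idea of per-instance algorithm selection can be sketched in a few lines. The data and the nearest-neighbor rule below are purely illustrative (real systems learn far richer models from instance features and recorded runtimes):

```python
# Minimal sketch of per-instance algorithm selection (hypothetical data):
# each training instance has a feature vector and the observed runtime of
# every algorithm in the portfolio; at prediction time we pick the algorithm
# that was fastest on the nearest training instance (1-NN selection).

import math

# Hypothetical training data: (instance features, {algorithm: runtime in s})
training = [
    ((0.1, 0.9), {"solver_a": 2.0, "solver_b": 9.0}),
    ((0.8, 0.2), {"solver_a": 8.0, "solver_b": 1.5}),
    ((0.2, 0.7), {"solver_a": 3.0, "solver_b": 7.0}),
]

def select_algorithm(features):
    """Return the algorithm that was fastest on the nearest training instance."""
    nearest = min(training, key=lambda row: math.dist(row[0], features))
    runtimes = nearest[1]
    return min(runtimes, key=runtimes.get)

print(select_algorithm((0.15, 0.8)))  # prints "solver_a"
```

A white-box approach would augment the instance features with properties of the solvers themselves (e.g., source-level or runtime characteristics) rather than treating each entry in the portfolio as an opaque label.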

Elevator Pitch

#### Haniye Kashgarani

The algorithm selection problem is to choose the most suitable algorithm from an algorithm portfolio for solving a given problem instance. One approach for improving instance-based algorithm selection techniques is to select not one but multiple solvers from a given portfolio and run them in parallel. We run parallel experiments with different numbers of parallel solvers across multiple nodes of the University of Wyoming's Teton cluster.
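
The parallel-portfolio idea can be sketched as follows: launch every selected solver on the same instance and accept whichever finishes first. The solvers here are stand-ins that just sleep (real experiments run actual solver binaries on cluster nodes):

```python
# A minimal sketch of a parallel portfolio (hypothetical solvers): run
# several solvers on the same instance concurrently and return the first
# result that completes.

import concurrent.futures
import time

def solver_fast(instance):
    time.sleep(0.01)  # stands in for real solving work
    return ("solver_fast", instance)

def solver_slow(instance):
    time.sleep(0.5)
    return ("solver_slow", instance)

def run_portfolio(instance, solvers):
    """Run all solvers in parallel; return the first completed result."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = [pool.submit(s, instance) for s in solvers]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

winner, _ = run_portfolio("instance-1", [solver_slow, solver_fast])
print(winner)  # prints "solver_fast"
```

On a real cluster the slower solvers would be cancelled once a winner finishes, which this sketch omits for brevity.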

Elevator Pitch

#### Mehdi Nourelahi

I am currently investigating the effect of choosing the pivot in Quicksort more wisely. Broadly speaking, I am trying to understand whether choosing the pivot at a given percentile of the list outperforms other common pivot-selection methods.
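
The idea can be sketched as a Quicksort whose pivot is drawn from a chosen percentile of a small sample of the list; the `pivot_percentile` knob here is a hypothetical stand-in for the strategy under study:

```python
# A sketch of Quicksort with the pivot taken at a configurable percentile
# of a three-element sample (percentile 0.5 recovers median-of-three).

def quicksort(items, pivot_percentile=0.5):
    if len(items) <= 1:
        return items
    # Sample first, middle, and last elements; pick the pivot at the
    # requested percentile of that sorted sample.
    sample = sorted([items[0], items[len(items) // 2], items[-1]])
    pivot = sample[int(pivot_percentile * (len(sample) - 1))]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return (quicksort(smaller, pivot_percentile) + equal
            + quicksort(larger, pivot_percentile))

print(quicksort([5, 3, 8, 1, 9, 2]))  # prints [1, 2, 3, 5, 8, 9]
```

Comparing different percentile settings against the usual first-element, random, and median-of-three pivots is exactly the kind of empirical question this project asks.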

Elevator Pitch

#### Master Students

#### Sourin Dey

Graphene is a material with a vast array of promising properties that has enabled the development of next-generation nano-circuits. We are researching how to improve the production of graphene from graphene oxide through laser irradiation, powered by AI. The goal is to find, using Bayesian optimization, the parameter configuration that maximizes the G-to-D ratio (which quantifies the quality of the produced graphene). Currently, I am working on automating the whole process. Next, I will focus on improving the expected-improvement acquisition function of Bayesian optimization, as well as its exploration-exploitation trade-off.
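
The expected-improvement (EI) acquisition mentioned above can be sketched in closed form. The candidate laser settings and surrogate predictions below are made up; in the real setup the mean and uncertainty come from a Gaussian-process surrogate fitted to measured G-to-D ratios:

```python
# Expected improvement (EI) for maximization, given a surrogate's predicted
# mean and standard deviation at a candidate point and the best value so far.

import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mean, std, best_so_far):
    """How much we expect a candidate to beat the incumbent."""
    if std == 0:
        return max(mean - best_so_far, 0.0)
    z = (mean - best_so_far) / std
    return (mean - best_so_far) * normal_cdf(z) + std * normal_pdf(z)

# Hypothetical candidates: (predicted G/D ratio, predicted uncertainty)
candidates = {
    "power=40,speed=2": (1.8, 0.3),
    "power=55,speed=1": (1.6, 0.9),
}
best = 1.7  # best G/D ratio measured so far (made up)
for name, (mu, sigma) in candidates.items():
    print(name, round(expected_improvement(mu, sigma, best), 3))
```

Note that the second candidate has a lower predicted mean but higher EI because of its larger uncertainty; this is the exploration-exploitation trade-off the acquisition function balances.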

Elevator Pitch

#### Undergraduate Students

#### Jesse Ken Evans

The algorithm selection problem is to choose the most suitable algorithm from an algorithm portfolio for solving a given problem instance. One approach for improving instance-based algorithm selection techniques is to select not one but multiple solvers from a given portfolio and run them in parallel. We run parallel experiments with different numbers of parallel solvers across multiple nodes of the University of Wyoming's Teton cluster.

Elevator Pitch

#### Krishna Sai Chemudupati

I am presently working on improving existing algorithm selection systems by studying cases where they do not perform well. Algorithm selection systems are machine learning models that predict the best algorithm for solving a given problem instance, based on previous runtime data for all the algorithms in the pool across all the problem instances. In practice, these machine learning models fail to predict the best algorithm for a few types of problems, to the extent that using a single well-tuned algorithm that works well in general across all problem instances is better. I am looking into such problem types to understand what makes them harder for the machine learning models to predict, and to leverage this information to improve algorithm selection systems.
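
The comparison described above is usually framed with two standard baselines, sketched here on made-up runtimes: the single best solver (SBS, the one algorithm with the best overall performance, used on everything) and the virtual best solver (VBS, a perfect per-instance oracle). A selector is only worthwhile on instances where it lands between the two:

```python
# Single best solver (SBS) vs. virtual best solver (VBS) on a hypothetical
# runtime matrix: runtimes[algorithm][i] is the runtime on instance i.

runtimes = {
    "alg_a": [1.0, 50.0, 2.0, 40.0],
    "alg_b": [30.0, 2.0, 25.0, 3.0],
}
n_instances = len(next(iter(runtimes.values())))

# SBS: the one algorithm that minimizes total runtime across all instances.
sbs_alg = min(runtimes, key=lambda a: sum(runtimes[a]))
sbs_total = sum(runtimes[sbs_alg])

# VBS: the best algorithm chosen separately (perfectly) for each instance.
vbs_total = sum(min(runtimes[a][i] for a in runtimes) for i in range(n_instances))

print(sbs_alg, sbs_total, vbs_total)  # prints "alg_b 60.0 8.0"
```

When a learned selector's total runtime is worse than `sbs_total` on some family of instances, that family is exactly the kind of failure case this project investigates.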

Elevator Pitch

#### Thomas Wise

When comparing the performance of two algorithms, it is possible to accidentally conclude that the wrong algorithm performs better, because run-time is a very noisy measurement. Even when running the same algorithm on the same instance, there can be considerable variance in the observed run-times. This variance can make an algorithm seem worse because it got unlucky, or better because it got lucky. My research is about finding a more robust way to measure the performance of algorithms; more specifically, something that is more consistent but can still be used as a proxy for run-time. A more robust measurement may prevent algorithms from being misranked based on luck.

Elevator Pitch