Honors Theses Archive
Author: Christina MacKenzie
Title: A Survey of Attacks Against the NTRU Public-Key Cryptosystem
Abstract: Public-key systems of cryptography are vital for ensuring the security of private information. The security of the most commonly used public-key cryptosystems today, including RSA and the classic and elliptic-curve variants of Diffie-Hellman, is based on the presumed hardness, in today’s computational setting, of the factorization and discrete logarithm problems. With the advent of quantum computing, however, these systems will no longer be secure; Shor’s quantum algorithm works in polynomial time to solve the factorization and discrete log problems, allowing for messages encrypted with those systems to be efficiently recovered by a third party. In response, the National Institute of Standards and Technology (NIST) has begun to solicit submissions for public-key cryptosystems which will remain secure in the quantum setting.
In this paper, we explore one of the frontrunners in this search, the NTRU Public-Key Cryptosystem. NTRU is most commonly described in terms of polynomial rings, and its security is based on the presumed hardness of lattice problems in both the classical and quantum computational settings. We present several proof-of-concept implementations of the most viable currently known attacks against NTRU for the purpose of comparing and evaluating the merits of each attack. We then use the outcomes of these proof-of-concept implementations alongside descriptions of the optimized implementations to make recommendations for choices of NTRU parameters.
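For readers unfamiliar with the polynomial-ring formulation mentioned above, one common textbook presentation (conventions vary across NTRU versions and parameter sets) works in the ring $R = \mathbb{Z}[x]/(x^N - 1)$ with small moduli $p \ll q$. The private key consists of small polynomials $f, g \in R$, with $f$ invertible modulo both $p$ and $q$; the public key $h$, the encryption $e$ of a small message $m$ with small random $r$, and decryption are then:

\[ h \equiv f_q^{-1} \cdot g \pmod{q}, \qquad e \equiv p \cdot r \cdot h + m \pmod{q}, \]
\[ a \equiv f \cdot e \pmod{q} \ \text{(coefficients centered around } 0\text{)}, \qquad m \equiv f_p^{-1} \cdot a \pmod{p}. \]

Decryption succeeds because $f \cdot e \equiv p \cdot r \cdot g + f \cdot m \pmod{q}$ has small coefficients, so no modular wraparound occurs; recovering $f$ or $m$ from $h$ and $e$ alone amounts to finding a short vector in an associated lattice, which is the source of the hardness assumption.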
Author: Ronan C. Manvelian
Title: Optimization of Molecular Graphs with Deep Kernel Learning
Abstract: Molecular geometry optimization requires evaluation of a given molecule’s potential energy surface (PES). To optimize a molecule is to reconfigure its geometry such that its resulting potential energy matches a ground state energy specified by a local minimum on the molecule’s PES. The ground state energy determines several properties of the molecule in question, such as how its atoms behave at low temperatures. For this reason, chemists seek to determine the ground state energies of molecules of interest, and they have grown increasingly interested in a variety of geometry optimization methods, which have proven invaluable for determining such molecular properties.
There exist several different approaches to geometry optimization, including standard quantum chemical methods as well as machine learning methods involving neural networks. In our case, we utilize a combination of graph neural networks—a class of neural networks whose inputs are graphs rather than standard feature vectors—and Gaussian processes to accurately predict the PES of CH4. More specifically, we employ deep kernel learning to create a surrogate PES used to optimize distinct geometric instances of the molecule. Our project goal is for the combination of these methodologies to outperform existing topological algorithms used for geometry optimization.
Author: Jianxin Wang
Title: Optimal Activation Functions for the Random Feature Regression Model
Abstract: Adversarial robustness in neural networks has attracted a lot of attention. Extensive work has focused on improving adversarial robustness through new training methods, but those methods affect the accuracy of neural networks to some extent. In this paper, we propose designing activation functions to improve both the sensitivity and the accuracy of neural networks. To analyze this in closed form, we make use of the Random Feature Regression (RFR) model. The asymptotic mean squared test error and sensitivity of the RFR have recently been studied. We build on this work and identify in closed form the family of activation functions that minimize a combination of the test error and sensitivity of the RFR under different notions of functional parsimony. In particular, we find several scenarios under which the optimal activation functions are linear, others in which they are saturating (clipped) linear functions, and others in which they can be expressed in terms of Hermite polynomials.
Author: Gina Chun
Title: Polymorphic Modelling of the Potential Energy Surface of Carbon Systems using Graph Neural Networks
Abstract: We present a machine learning method to model the potential energy surface of carbon structures that is both polymorphic and physically plausible. The model is polymorphic in that it models carbon structures regardless of the number of carbon atoms in the molecule. It is physically plausible because it does not depend on the specific Cartesian coordinates of every single atom, making it invariant to translation, rotation, and permutation.
The design of this method starts with a graph representation of every molecule, with the edges holding the distances between an atom’s nearest neighbors so that only the local environment is captured. It also uses a message passing graph neural network (MPGNN), which allows nodes to learn from the data of neighboring nodes. This approach stems from the observation that molecules are composed of structural units that are approximately constant; representing a molecule as a network of local "neighborhoods" thus produces local features that can be aggregated into higher-level features of the entire molecule.
The results show that the MPGNN is indeed learning the potential energy surface and improves consistently as the number of message passing steps increases. A comparison of the MPGNN against a dummy regressor and an untrained MPGNN shows that the model is functioning as it should and is properly learning, but a comparison with the QUIP software shows that the MPGNN model still has room for improvement. The present work demonstrates a less costly approach to modeling the potential energy surface of carbon structures with an MPGNN, considering that current frameworks still have a high computational cost. The findings also show the potential for further improvement of the model and better future performance through additional fine tuning.
Author: David Shen
Title: Giza on Cassandra: Cross Data Center Metadata Replication
Abstract: Cloud storage often involves data replication to data centers located across the country, to provide better availability and protection against losses due to catastrophic data center failures. However, cross-data-center replication introduces issues of consistency, where different data centers may hold different versions of a stored object, and with them the latency required to maintain consistency. Giza is one method of creating a strongly consistent, versioned object store. It was originally built on top of Microsoft Azure Storage, and utilizes Fast Paxos to allow single round-trip-time latencies in the common, contention-free case, and classic Paxos to resolve contention. This paper seeks to implement Giza on top of Cassandra, an open source distributed database, giving us a direct comparison of cross-data-center node replication with and without Giza, and a look into the real-world effects of the tradeoff between round-trip latencies and the larger quorums required by Fast Paxos. This paper will also simplify and clarify details of the implementation of Giza and better explain its unique usage of Fast and classic Paxos.
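As a rough illustration of that quorum tradeoff (sizes follow the usual formulation with majority classic quorums and ceil(3n/4) fast quorums; Giza's exact requirements are as described in the paper):

    # Illustrative quorum sizes: classic Paxos vs. Fast Paxos fast rounds.
    # Assumes majority classic quorums and the common ceil(3n/4) fast quorums.
    import math

    def classic_quorum(n):
        return n // 2 + 1            # any majority of n acceptors

    def fast_quorum(n):
        return math.ceil(3 * n / 4)  # fast rounds need larger quorums

    for n in (3, 5, 7, 9):
        print(f"n={n}: classic={classic_quorum(n)}, fast={fast_quorum(n)}")

The larger fast quorum is the price paid for completing writes in a single round trip when there is no contention.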
Author: Yicheng Shen
Title: Assisting Federated Learning with Vehicular Clouds
Abstract: In recent years, Federated Learning (FL) has emerged as a machine learning technique that avoids the step of centralizing local data from edge devices in a data center. By collecting only the gradients computed from local on-device data, FL enables edge devices to upload fewer bytes during communication and protects users’ privacy. Moreover, FL algorithms are suitable for many use cases, such as recommendation systems, decision making, risk control, and pattern recognition.
This thesis aims to integrate vehicular clouds with FL in order to improve the performance of FL. Smart vehicles nowadays are equipped with more and more computing power. As their onboard computers do not operate at full-speed at all times, we can use their excess computing power to perform training. We present a system named VC-SGD (Vehicular Clouds-Stochastic Gradient Descent) that supports FL by using vehicular clouds and edge devices. Our three-layer architecture consists of a cloud server, some edge servers which are deployed with vehicular clouds, road-side units and base stations, and many workers which are edge devices. In addition, this thesis investigates the data collection problem by simulating real-time location-specific data. We implemented a simulator which utilizes SUMO to simulate vehicle mobility and MXNet to perform machine learning tasks. Finally, we used the simulator to evaluate the performance of VC-SGD.
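As a rough illustration of the kind of aggregation step such a parameter-server architecture performs, here is a minimal sketch assuming plain sample-weighted gradient averaging on a toy linear model (VC-SGD's actual aggregation and scheduling are defined in the thesis):

    # One round of parameter-server style aggregation: each worker computes
    # a gradient on its local data; the server averages the gradients
    # (weighted by local sample count) and takes an SGD step.
    import numpy as np

    def local_gradient(w, X, y):
        # gradient of mean squared error 0.5*||Xw - y||^2 / n, linear model
        return X.T @ (X @ w - y) / len(y)

    def server_round(w, shards, lr=0.1):
        grads = [local_gradient(w, X, y) for X, y in shards]
        sizes = np.array([len(y) for _, y in shards], dtype=float)
        g = np.average(grads, axis=0, weights=sizes)
        return w - lr * g

    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0])
    shards = []
    for _ in range(5):                # five simulated edge devices/vehicles
        X = rng.normal(size=(40, 2))
        shards.append((X, X @ w_true + 0.1 * rng.normal(size=40)))

    w = np.zeros(2)
    for _ in range(100):
        w = server_round(w, shards)
    print(w)                          # approaches w_true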
Author: Darius Russell Kish
Title: Learning to Optimize the Positioning of Methane on Metal Catalysts Using Gaussian Processes
Abstract: Dry reforming of methane is a vital industrial reaction to produce hydrogen from methane, which is used in many downstream reactions. The current method for the reforming of methane uses high pressure steam, leading to an undesirable CO2-producing side reaction that is a major greenhouse gas contributor. Much research has been dedicated to dry reforming on metal catalysts, but current-generation catalysts are prone to deactivation by side products. Physical R&D of catalysts takes approximately two years, so computational methods are vital for high-throughput exploration of new catalysts. The standard method uses Quantum Mechanical (QM) theory, taking into account subatomic interactions; however, QM comes with a large computational cost. Significant time is spent optimizing the orientation of the molecule on the catalyst surface, and improvements to the optimization cycle will greatly improve evaluation throughput.
Using a small dataset of precomputed, optimized systems, we propose and implement a method of transfer learning by using the posterior mean of a Gaussian Process (GP) derived from training data as the prior mean for a Gaussian Process that optimizes the geometry of an unseen system. We implement this method in GPyTorch, which also allows us to fit the gradient of the energy, more canonically recognized as the forces on the atoms. We may then find a geometry of minimum energy by iteratively minimizing the posterior mean of the testing GP, evaluating new steps using QM, and adding the data to the GP for the next iteration. Our main contribution is the framework and code to perform this task, though we show a brief proof of concept for its function.
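The core idea can be sketched in a few lines of numpy, assuming a one-dimensional coordinate, an RBF kernel, and exact GP regression (the thesis's implementation uses GPyTorch and fits forces as well; everything below is an illustrative toy):

    import numpy as np

    def rbf(A, B, ell=1.0):
        # squared-exponential kernel matrix between 1-D point sets A and B
        return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ell**2)

    def gp_posterior_mean(X, y, prior_mean, noise=1e-4):
        # returns a function x -> posterior mean, with explicit prior mean m(x)
        K = rbf(X, X) + noise * np.eye(len(X))
        alpha = np.linalg.solve(K, y - prior_mean(X))
        return lambda Xs: prior_mean(Xs) + rbf(Xs, X) @ alpha

    zero = lambda X: np.zeros(len(X))

    # "Training" system: energies of a known, precomputed system.
    X_train = np.linspace(-2, 2, 15)
    E_train = (X_train - 0.3)**2                # toy energy curve
    m_transfer = gp_posterior_mean(X_train, E_train, zero)

    # Unseen system: its GP uses the transferred posterior mean as prior
    # mean, so few expensive (QM) evaluations are needed near the minimum.
    X_new = np.array([-1.0, 0.5])
    E_new = (X_new - 0.4)**2                    # slightly shifted true surface
    mu = gp_posterior_mean(X_new, E_new, m_transfer)

    grid = np.linspace(-2, 2, 401)
    print(grid[np.argmin(mu(grid))])            # approximate energy minimum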
Author: Rachel Duquette
Title: Automatic Speech Recognition for English-Spanish Code-Switching
Abstract: Advances in machine learning over the last decade have resulted in dramatic improvements in Automatic Speech Recognition (ASR) performance. As Alexa, Siri, and Google Home attest, speech recognition now works remarkably well in monolingual contexts where there is abundant data on the language in question. Recent research has attempted to extend these improvements to code-switching ASR, or multilingual ASR where the language can change mid-utterance. Progress has been made on a number of high-resource language pairs, such as English-Mandarin and English-Hindi. A few low-resource language pairs are also being studied, such as Frisian-Dutch.
My thesis converts a corpus of English-Spanish code switching data into a format that enables speech recognition experiments, incorporates monolingual data to improve the language model, and explores generating synthetic code-switched data using a deep learning machine translation architecture.
Author: Mingzhuo Deng
Title: Parallel Computation of Sovereign Default Models
Abstract: This paper focuses on parallel and efficient computation of macroeconomic models, with emphasis on solving sovereign default models.
To frame the paper's contributions, we first give a brief characterization of computing sovereign default models. The highly nonlinear sovereign default model cannot be solved with fast methods like perturbation. A common approach is to use iterative procedures such as value function iteration. Value function iteration exhibits many characteristic features of large-scale economic computation: expensive iterations and frequent matrix storage and access. These costs make efficient computation of the sovereign default model on a very dense grid especially attractive.
Our motivation is two-fold. First, we demonstrate Julia’s competitive advantage through testing on multiple hardware platforms and languages. The sovereign default model is solved using C CUDA, standard parallel C++, OpenACC, Julia, and Julia with CUDA. The spectrum of implementations provides a comprehensive comparison of the advantages and limits of each approach. Julia and Julia CUDA stand out from the comparison due to their excellent trade-off between high execution speed and a low programming barrier.
Second, we provide an implementation of the sovereign default model based on Julia and Julia CUDA syntax. We describe best practices for using Julia, which can speed up the computation of the sovereign default model by a factor of over 1000 on GPUs. Julia demonstrates an exceptional balance between execution speed and ease of development for macroeconomic models. The choice of programming language and GPU compiler reflects our judgment: at the time of writing, CUDA in Julia is a developing library with an incomplete implementation compared to CUDA in C (Besard et al., 2018). The high-level Julia language and the Julia-style design of its CUDA library streamline an efficient process of testing a model in standard Julia and then parallelizing it in Julia CUDA.
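For readers unfamiliar with value function iteration, a minimal sketch on a dense grid for a toy savings problem follows (not the sovereign default model itself, which adds a default choice and an endogenous bond price schedule); the max over the choice dimension is exactly the embarrassingly parallel work that maps well to GPUs:

    import numpy as np

    beta, r = 0.95, 0.03                    # discount factor, interest rate
    grid = np.linspace(0.1, 4.0, 1000)      # dense asset grid
    income = 1.0

    # consumption for every (current asset, next asset) pair
    c = income + (1 + r) * grid[:, None] - grid[None, :]
    u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -1e10)

    V = np.zeros(len(grid))
    for it in range(10_000):
        # Bellman update: V'(a) = max_{a'} u(c(a, a')) + beta * V(a')
        V_new = np.max(u + beta * V[None, :], axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new
    print(it, V[::250])                     # iterations used, sampled values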
Author: Yinzhe Ma
Title: Evaluating the Empirical Performance of Sandwich Learned Bloom Filter and Adaptive Learned Bloom Filter
Abstract: A Bloom Filter is a set data structure that uses independent hash functions to store a set of elements; when querying elements from the same universe, the Bloom Filter may give false positives but never false negatives. Bloom Filters have therefore been widely used in fields such as weak password detection and safe browsing in Google Chrome. However, such structures face a tradeoff between memory efficiency and accuracy. A recent paper, “The Case for Learned Index Structures” (Kraska, Beutel, Dean, and Polyzotis, 2017), suggested that machine learning models could improve the performance of such index structures; furthermore, publications from Mitzenmacher on Learned Bloom Filters (Mitzenmacher, 2018) and from Dai and Shrivastava on Adaptive Learned Bloom Filters (Dai and Shrivastava, 2019) further explored different variations of the Learned Index Structure, especially from a theoretical standpoint. This project aims to 1) validate these two newly proposed data structures from an empirical perspective, under the scenario of checking whether entries in a synthetic email/URL dataset are malign or benign; 2) explore the implementation methods these two proposed data structures need in order to achieve the expected improvement over a regular Bloom Filter.
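For reference, a minimal Bloom Filter sketch with k salted hashes over a bit array appears below; the learned variants place a trained classifier in front of (or sandwiched around) such a backup filter:

    import hashlib

    class BloomFilter:
        def __init__(self, m, k):
            self.m, self.k = m, k          # m bits, k hash functions
            self.bits = bytearray(m)

        def _indexes(self, item):
            # derive k indexes from salted SHA-256 digests
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item):
            for j in self._indexes(item):
                self.bits[j] = 1

        def __contains__(self, item):
            # may return a false positive, never a false negative
            return all(self.bits[j] for j in self._indexes(item))

    bf = BloomFilter(m=10_000, k=7)
    bf.add("http://malign.example/x")
    print("http://malign.example/x" in bf)  # True
    print("http://benign.example/y" in bf)  # almost surely False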
Author: Yifan Zhang
Title: Byzantine General Problems: Theory and Application in a Dynamic System
Abstract: This survey paper aims to study the theory and applications of Byzantine general problems. In particular, we compare models and methods being used to study Byzantine broadcast problems in dynamic systems, where nodes may join and leave at any time. Byzantine consensus is not only one of the most fundamental problems in distributed algorithms, but also an important building block in cryptography and distributed systems. There exist two main variants of Byzantine consensus problems: Byzantine Broadcast (BB) and Byzantine Agreement (BA). Byzantine broadcast problems further have several variants including Byzantine reliable broadcast (BRB) and Byzantine consistent broadcast (BCB). Byzantine consistent broadcast has relaxed conditions compared to the original Byzantine broadcast protocol, and has significant applications in blockchain systems.
One of the most important techniques for studying Byzantine broadcast problems is digital signatures with a public-key infrastructure (PKI). The signature scheme enjoys ideal unforgeability: each node can sign its own messages, but no node can forge another node's signature. Blockchain systems are extremely theft-prone, so the security protocol, which assumes PKI, is especially important in blockchain models. We look into more advanced signature schemes, such as multi-signatures, which further enhance the resilience of blockchains against corrupt individuals. We also notice that it is important to define join/leave protocols to study Byzantine general problems in a dynamic system. Hence, we compare how such protocols are defined in dynamic system models, and see how they are applied in blockchains.
Author: Qinzi Zhang
Title: Echo-CGC: A Communication-Efficient Byzantine-tolerant Distributed Machine Learning Algorithm in Single-Hop Radio Network
Abstract: Distributed machine learning has become a popular topic in recent years because growing data sizes make it costly and almost impossible to process every computation task on a single device. However, distributed schemes are vulnerable to malicious adversarial attacks. As Blanchard et al. proved, a traditional aggregation function that outputs a linear combination of the inputs cannot tolerate even a single faulty worker.
In the past few years, many fault-tolerant distributed machine learning (DML) algorithms have been proposed, e.g., by Chen et al. and Blanchard et al. However, one limitation of prior algorithms in this domain is their high communication complexity. All the fault-tolerant DML algorithms that we are aware of need to send n d-dimensional vectors from worker nodes to the parameter server in each round, where n is the number of workers and d is the number of dimensions of the feature space (which may be on the order of millions). In a wireless network, power consumption is proportional to the number of bits transmitted. Consequently, it is extremely difficult, if not impossible, to deploy these algorithms on power-limited wireless devices. Motivated by this observation, we aim to reduce the communication complexity of fault-tolerant DML algorithms in the single-hop radio network.
Inspired by the CGC filter developed by Gupta and Vaidya, we propose a gradient descent-based algorithm, Echo-CGC. Our main novelty is a mechanism that utilizes the broadcast properties of the radio network to avoid transmitting the raw gradients (full d-dimensional vectors). In the radio network, each worker is able to overhear previous gradients that were transmitted to the parameter server.
Roughly speaking, in Echo-CGC, if a worker "agrees" with some linear combination of prior gradients, it broadcasts an "echo message" instead of its raw local gradient. The echo message is a vector of coefficients of the linear combination, which is at most n-dimensional. In comparison, the traditional approaches need to send n local gradients in each round, where each gradient is typically a vector in an ultra-high dimensional space (d >> n). Finally, we prove the correctness of Echo-CGC under certain assumptions, and we compute a theoretical ratio of reduced complexity.
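The echo decision can be sketched as a least-squares test (an illustrative sketch only; Echo-CGC's actual agreement condition, tolerances, and aggregation rule are as defined in the thesis):

    import numpy as np

    def echo_or_raw(g_local, overheard, tol=1e-3):
        """Return ('echo', coeffs) if the local gradient is (nearly) a linear
        combination of overheard gradients, else ('raw', g_local)."""
        if overheard:
            G = np.stack(overheard, axis=1)       # d x m matrix of gradients
            coeffs, *_ = np.linalg.lstsq(G, g_local, rcond=None)
            if np.linalg.norm(G @ coeffs - g_local) <= tol * np.linalg.norm(g_local):
                return "echo", coeffs             # m numbers instead of d
        return "raw", g_local

    rng = np.random.default_rng(1)
    d = 1_000_000                                 # huge feature dimension
    g1, g2 = rng.normal(size=d), rng.normal(size=d)
    mine = 0.3 * g1 + 0.7 * g2                    # worker agrees with history
    kind, payload = echo_or_raw(mine, [g1, g2])
    print(kind, payload if kind == "echo" else payload.shape)

In this toy case the worker transmits 2 coefficients rather than a million-entry vector, which is the source of the communication savings.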
Author: Brian Ward
Title: A Validated Parser for Stan
Abstract: Before any code can be interpreted, compiled, or otherwise processed, it must first be parsed. This process of mechanically reading and interpreting code is a well-trodden area of compiler design, and one that has many existing tools and methods. A programmer can specify the syntax of a language succinctly and use a variety of tools, known as parser generators, to create code which implements the translation from that syntax into a tree structure for evaluation or compilation.
Recent developments in the field allow for a way to be certain that this translation is done following the desired specification. Rather than simply trusting a handwritten parser, or trusting those who write parser generators such as yacc, it is now possible to use formal methods to prove that the parsing code is correct with respect to the specification that was used to create it.
This thesis is an exploration of using said methods to produce a verified parser for the probabilistic programming language Stan, a common statistical modeling and computation language. To this end, an existing tool for parser generation and verification, Menhir, was extended to allow error reporting functionality alongside its validated output.
Author: Anran Du
Title: CarML: Distributed Machine Learning in Vehicular Clouds
Abstract: This paper presents CarML, a Distributed Machine Learning (DML) platform built on top of an emerging computing paradigm, vehicular clouds. DML has been developed to handle today’s exponentially growing training data sizes and model complexity by distributing the workload across multiple machines. Although this process usually takes place in a datacenter using specialized computers, we propose a novel medium for DML training. Smart vehicles possess abundant computation and communication resources that are often overlooked and underutilized; CarML aims to release these idle resources and bring distributed machine learning onto the highways.
Our architecture is inspired by the three-layer parameter server model, with the top-layer cloud server storing the dataset and the model, the middle-layer edge servers distributing the dataset and aggregating trained gradients, and the bottom-layer vehicles responsible for training. We discuss our design and technical challenges, namely in the areas of RSU placement, data distribution, communication synchrony, and fault-tolerant training, followed by our preliminary solutions. We develop a general simulator that uses SUMO to simulate vehicle mobility and TensorFlow to perform real training. Our results indicate that CarML can augment existing cluster-based distributed machine learning solutions despite high mobility and uncertainty.
Author: Xuheng (Michael) Duan
Title: Experience of Learning Blockchains
Abstract: Today, blockchain technology is a hot topic and attracts attention worldwide. Blockchain was introduced in 2008 by Satoshi Nakamoto as the secure system underlying Bitcoin. Today, public blockchain projects like Ethereum and Bitcoin enable anyone to process secure peer-to-peer transactions, while private blockchain projects like Hyperledger Fabric[1] restrict the participants and realize secure communication between corporations.
Throughout the past year, I studied the Hyperledger blockchain system and encountered numerous difficulties. Moreover, with so many different sorts of blockchain projects in existence, one would like to understand and compare the performance of different blockchains at scale. Thus, I also studied three different benchmark tools, namely Blockbench[2], Caliper++[3], and Hyperledger Caliper[4], and focused on the latter two. Finally, we used Mininet[5] to emulate a virtual network and tested the performance of Ethereum. Having overcome these struggles, I document my learning outcomes and share my experience with these open-source projects with those who would like to engage with the field of blockchain.
Author: Yingjian (Steven) Wu
Title: Reliable Broadcast in Practical Networks: Algorithm and Evaluation
Abstract: Reliable broadcast is an important primitive that ensures a source can reliably broadcast a message to all the non-faulty nodes in either a synchronous or an asynchronous network. The network may also be failure-prone, meaning packets can be dropped or corrupted, and a faulty server may exhibit behaviors such as sending wrong messages or not sending any message at all. Bracha first proposed reliable broadcast protocols in 1987, and since then different reliable broadcast protocols have been designed to achieve different goals, such as reducing round and bit complexity.
In a practical network, there are several constraints, such as limited bandwidth or high latency. Thus, we aim to design new reliable broadcast protocols that take these practical network constraints into account. More specifically, we use cryptographic hash functions and erasure coding to reduce communication and computation complexity.
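One simple instance of the hashing idea, sketched below: echo-style rounds can carry a constant-size digest of the payload instead of the payload itself (this illustrates the general trick, not the thesis's protocol):

    import hashlib

    def digest(payload: bytes) -> bytes:
        # 32-byte fingerprint replaces the (possibly very large) payload
        return hashlib.sha256(payload).digest()

    payload = b"block of application data" * 10_000   # large message
    echo = digest(payload)                            # what echo rounds carry

    # A receiver that later obtains the payload checks it against the digest.
    assert digest(payload) == echo
    print(len(payload), "->", len(echo), "bytes per echo")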
Finally, we designed a general benchmark framework that can be used to test reliable broadcast algorithms. We evaluated the new algorithms we designed and implemented on this benchmark platform, and they showed superior performance in practical networks.
Author: Juan Mantica
Title: Using Vehicular Clouds for Computation and Storage
Abstract: As data volumes are expected to increase rapidly in upcoming years, researchers have proposed using Vehicular Ad Hoc Networks, or VANETs, as a means to address the issues that accompany Big Data. Newer generations of vehicles will have storage and computation power that may surpass the capabilities of commodity computers. VANETs composed of cars with such capabilities would serve as a cheap and efficient computational resource for reducing the workload of massive and expensive data centers.
This thesis aims to further research the application of VANETs to two important issues that accompany modern computing: content delivery and machine learning. For the former, the thesis explores the use of micro-clouds (a form of vehicular clouds) as a content delivery network in which users can download data through free short-range transmission protocols. For the latter, the thesis explores the training of machine learning models on load-balanced micro-clouds. Finally, these two protocols are tested and simulated on our own simulator and on the state-of-the-art network simulator VEINS.
Author: Shikun Wang
Title: First-Order Conic Parallel Solver for Graph Matching
Abstract: Graph representations of real-world phenomena are ubiquitous, from social and information networks to technological, biological, chemical, and brain networks. Many graph mining tasks require a similarity measure between graphs. Examples include clustering of graphs and anomaly detection, nearest neighbor and similarity search, pattern recognition, and transfer learning. Such tasks find applications in diverse areas including image processing, chemistry, and social network analysis.
Given two graphs, their similarity is a score quantifying their topological differences. It is desirable for such a score to be a metric, i.e., to be non-negative, symmetric, positive-definite, and, crucially, to satisfy the triangle inequality. Metrics have several computational advantages over non-metrics. For example, operations such as nearest-neighbor search, clustering, outlier detection, and diameter computation have known fast algorithms when performed on objects embedded in a metric space. Unfortunately, algorithms to compute several classic distances between graphs do not scale to large graphs; other distances do not satisfy all of the metric properties.
In this work we develop a solver to compute some of the scalable metrics recently proposed by Bento and Ioannidis (2018). Our solver is based on the conic representation of the set of doubly stochastic matrices. This representation allows us to reduce the metric computation to a convex optimization problem with box constraints, which we solve via a first-order interior point method. Our method is parallelized on a GPU and its performance is compared with several existing graph-distance-computation algorithms.
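For intuition, a schematic first-order approach to the underlying optimization is sketched below, using projected gradient steps with a Sinkhorn-style renormalization onto doubly stochastic matrices (the thesis's solver instead uses the conic representation and a first-order interior point method on the GPU):

    import numpy as np

    def sinkhorn(M, iters=50):
        # approximately map a positive matrix to a doubly stochastic one
        M = np.clip(M, 1e-12, None)
        for _ in range(iters):
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
        return M

    def match_distance(A, B, steps=500, lr=1e-3):
        n = A.shape[0]
        P = np.full((n, n), 1.0 / n)              # barycenter start
        for _ in range(steps):
            R = A @ P - P @ B                     # residual
            grad = 2 * (A.T @ R - R @ B.T)
            P = sinkhorn(P - lr * grad)           # step, then re-project
        return np.linalg.norm(A @ P - P @ B)

    rng = np.random.default_rng(2)
    A = rng.integers(0, 2, size=(8, 8)); A = np.triu(A, 1); A = A + A.T
    perm = rng.permutation(8); Pi = np.eye(8)[perm]
    B = Pi @ A @ Pi.T                             # B is A relabeled
    print(match_distance(A, B))                   # small when graphs are isomorphic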
Author: Zezhi Wang
Title: A Distributed System for Reducing Uploaded Data Redundancy in Vehicular Networks
Abstract: For autonomous vehicles, uploading vehicle sensor data is crucial to surrounding awareness and decision making. We develop a system that relies on a peer-to-peer mechanism to obtain information via the vehicular network. Consider a scenario in which each vehicle, equipped with a camera and communication capability, uploads a new image snapshot to the datacenter, and the datacenter combines snapshots to create a map. In a naïve upload scheme, each vehicle uploads its snapshot and the datacenter integrates it into a map. This method is not scalable when many vehicles need to upload, as the communication bandwidth becomes a bottleneck. To address this challenge, we propose a novel distributed system that reduces data redundancy and hence saves bandwidth and workload at the datacenter. The key idea is to use location and orientation information to simplify the design and coordination among vehicles, and to utilize computer vision algorithms to remove redundant information. In this thesis, we outline the design of our system and verify its efficacy through a simulation study.
Author: Zhao Yajie
Title: Fault-Tolerant Causal Memory
Abstract: Distributed shared memory (DSM) is of growing importance in the field of distributed systems. DSM may provide different levels of consistency, depending on its applications. Many DSM systems maintain strong consistency, but in doing so introduce latency that sacrifices scalability. This paper instead focuses on another type of shared memory consistency called causal memory, which provides a weaker level of consistency in exchange for lower latency. In particular, this study provides an emulated causal memory in the client-server model over asynchronous message-passing networks. In this model, clients can read and write to the shared memory in the form of key-value pairs, while servers serve clients’ requests and maintain causal consistency of the shared memory. We propose two algorithms for Byzantine-tolerant causal memory. We also implement the algorithms to study their performance compared to other types of DSM systems.
Author: Daniel Brett
Title: Linkage Attacks and the Fallacy of Anonymity: A Study of Modern Privacy Techniques
Advisor: Howard Straubing
Abstract: In this paper I present a detailed overview and analysis of modern privacy techniques like k-anonymity and ℓ-diversity. This paper then examines two datasets, one with 2015 New York hospital inpatient discharges and one with Yelp restaurant reviews from 2010-2017. I discuss how each of these datasets is susceptible to linkage attacks and whether or not they satisfy modern privacy definitions. Finally, I outline potential mechanisms that could be implemented to improve privacy protections in publicly available datasets.
Author: Minghao Liu
Title: The PCP Theorem and Hardness of Approximation
Advisor: Howard Straubing
Abstract: In computational complexity, there are combinatorial optimization problems that are considered hard (NP-hard) to solve for the optimal solution. An easier alternative is to approximate the optimal solution. Hardness of approximation is a field concerning the complexity of approximating the optimal solution of certain problems to within a constant factor. With the discovery of the PCP Theorem, many widely known and extensively researched problems were proven hard to approximate up to some factor. The relatively recent Unique Games Conjecture opened up more possibilities, and if proven true, would indicate that many approximation algorithms cannot possibly be improved. This talk will introduce the field of hardness of approximation, the motivations behind the results, and how the PCP Theorem, as well as its variations, brought breakthroughs in this field.
Author: Sazan Dauti
Title: FastMock: Automatically Render Realistic Device Screen Mocks using Computer Vision and Machine Learning
Advisor: Robert Signorile
Abstract: Device screen mocks are an essential marketing tool for mobile apps. As the mobile app market grows, so does the need for realistic screen mocks. Currently, the most efficient and effective way to create these mocks is to use advanced photo editing software such as Adobe Photoshop. In order to create these mocks with Photoshop, one must have access to the .psd of a processed image (or create one themselves). This .psd file contains multiple layers and masks, allowing one to replace a layer with the desired screenshot to create the mock. The use of multiple layers in Photoshop makes the mock more realistic by allowing the use of highlights and shadows on the screenshot, thus giving it a more “glassy” effect.
The goal of this thesis is to eliminate the need for Photoshop, or any other photo editor, and automatically generate a realistic device screen mock given only images of the device and the desired screenshot. Computer vision is used to place the screenshot on the device image, as well as to extract features from images. Various machine learning approaches are then investigated to find the optimal solution for simulating highlights and shadows on the screenshot.
Author: Serge Aleshin-Guendel
Title: Examining the Structure of Convolutional Neural Networks
Advisor: Sergio Alvarez
Abstract: Machine learning, in conjunction with large data sets, has seen many success stories in recent years, leading to increased interest in the field. One popular class of machine learning models is deep neural networks, in which stacked layers of “neurons” are used to learn approximate representations of data. One particular model, the Convolutional Neural Network (CNN), is notable in that it has become the standard in most computer vision tasks. However, there is little to no theory surrounding how best to build these CNNs.
The aim of this thesis is two-fold. The first aim is to provide a brief introduction to the field of supervised machine learning, neural networks, and CNNs. The second aim is to explore how to best build CNNs, through an examination of structural properties related to the width, depth, and receptive field of networks.
Author: Zhihe Shen
Title: Using QBF Solvers to Solve Games and Puzzles
Advisor: Howard Straubing
Abstract: There are multiple types of games, such as board games and card games; some are multiplayer games and some are single-player games. Many games, such as two-player games, are hard to solve because the problem of determining whether a given player has a winning strategy is PSPACE-complete. The problem of determining whether a quantified Boolean formula is true (TQBF) is also proven to be PSPACE-complete. Because of the PSPACE-completeness of TQBF, every problem in PSPACE, in particular each of these games, can be encoded as an instance of TQBF. Thus, one way to understand the complexity of a game is to encode it as a quantified Boolean formula.
This thesis aims to investigate the computational complexity of different kinds of games. We choose to work on games played between two agents, for example, simple geography games. Because they are in PSPACE, we convert them into non-clausal quantified Boolean formulas based on the rules of each game. By solving those formulas, we can find a winning strategy for either player. One way to solve these formulas is to use a quantified Boolean formula solver (QBF solver). In this paper, we will use GhostQ to solve the non-clausal quantified Boolean formula.
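As a tiny illustration of such an encoding (a hypothetical two-move game, not one of the thesis's benchmarks): if player 1 picks $x$, player 2 replies $y$, and player 1 then picks $z$, then player 1 has a winning strategy exactly when

\[ \exists x\, \forall y\, \exists z\; W(x, y, z) \]

is true, where $W$ encodes the winning condition on the resulting play; the quantifier alternation mirrors the alternation of moves, which is why alternating games sit naturally in PSPACE.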
Author: Joseph Stroud
Title: Creating a Feature Vector to Identify Similarity between MIDI Files
Advisor: Sergio Alvarez
Abstract: Today there are many large databases of music, whether used for online streaming services or music stores. A similarity measure between pieces of music is extremely useful in tasks such as sorting a database or suggesting music that is similar to a selected piece. The goal of my project is to create a feature vector using only the actual musical content of a piece, ignoring metadata like artist or composer names. This feature vector can be used to measure distance between pieces in a feature space. To analyze the information contained in the feature vector, clustering and classification machine learning tasks were run with the goal of finding pieces that share musical characteristics, whether grouped together into a cluster or classified as the correct genre.
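One simple content-based feature of this general kind is sketched below (a pitch-class histogram compared by cosine distance; note numbers are assumed to be already extracted from the MIDI file, and the thesis's actual feature vector is richer):

    import numpy as np

    def pitch_class_histogram(notes):
        # 12-bin histogram over pitch classes (C, C#, ..., B), L2-normalized
        h = np.bincount(np.asarray(notes) % 12, minlength=12).astype(float)
        return h / (np.linalg.norm(h) or 1.0)

    def cosine_distance(a, b):
        return 1.0 - float(a @ b)

    c_major = [60, 62, 64, 65, 67, 69, 71, 72]     # C major scale
    a_minor = [57, 59, 60, 62, 64, 65, 67, 69]     # same pitch classes
    blues   = [60, 63, 65, 66, 67, 70, 72]

    print(cosine_distance(pitch_class_histogram(c_major),
                          pitch_class_histogram(a_minor)))  # small
    print(cosine_distance(pitch_class_histogram(c_major),
                          pitch_class_histogram(blues)))    # larger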
Author: Ning Lu
Title: A Machine Learning Approach to Automated Trading
Advisor: Sergio Alvarez
Abstract: The financial market is a complex, dynamical system influenced by many factors that are subject to uncertainty. Therefore, it is a difficult task to forecast stock price movements. Machine learning aims to automatically learn and recognize patterns in large data sets. The self-organizing and self-learning characteristics of machine learning algorithms suggest that such algorithms might be effective at tackling the task of predicting stock price fluctuations, and at developing automated trading strategies based on these predictions. Artificial intelligence techniques have been used to forecast market movements, but published approaches do not typically include testing in a real (or simulated) trading environment.
This thesis aims to explore the application of various machine learning algorithms, such as Logistic Regression, Naïve Bayes, Support Vector Machines, and variations of these techniques, to predict the performance of stocks in the S&P 500. Automated trading strategies are then developed based on the best performing models. Finally, these strategies are tested in a simulated market trading environment to obtain out-of-sample validation of predictive performance.
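A minimal sketch of the prediction step on synthetic prices follows (hypothetical data; the thesis works with real S&P 500 data and several model families):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    prices = 100 * np.exp(np.cumsum(0.001 + 0.01 * rng.standard_normal(2000)))
    returns = np.diff(prices) / prices[:-1]

    LAG = 4                                   # previous four days' returns
    X = np.stack([returns[i:i + LAG] for i in range(len(returns) - LAG)])
    y = (returns[LAG:] > 0).astype(int)       # next-day up (1) or down (0)

    split = int(0.8 * len(X))                 # out-of-sample split
    model = LogisticRegression().fit(X[:split], y[:split])
    signal = model.predict(X[split:])         # 1 = buy, 0 = stay out
    # on a pure random walk, accuracy hovers near chance
    print("directional accuracy:", (signal == y[split:]).mean())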
Author: Benjamin Madany
Title: PaintSpace: Exploring the Concept of Painting on a 3D Canvas Using Virtual Reality and 3D Input
Advisor: Robert Signorile
Abstract: 3D technology has seen a wide range of innovations, from 3D graphics and modeling to 3D printing. Among the most recent of these innovations are immersive virtual reality and 3D input. These have allowed for the creation of unique, 3D experiences, and they also present the opportunity for a wide variety of applications whose purposes range from entertainment to educational or medical use. One possibility is an extension of 3D modeling that utilizes these recent technologies to present a 3D canvas to an artist. Applications such as Google’s Tilt Brush explore this concept of drawing in 3D space. As the ability to draw in such space is novel, development of such a tool presents several challenges. This thesis explores the process of building a 3D painting application. I first present the key challenges encountered during development. Then, I detail various solutions and options related to these challenges. Next, I examine the capabilities and state of my application, and finally, I compare it to other available applications.
Author: Ayako Mikami
Advisor: Sergio Alvarez
Abstract: Recent work in deep machine learning has led to more powerful artificial neural network designs, including Recurrent Neural Networks (RNN) that can process input sequences of arbitrary length. We focus on a special kind of RNN known as a Long Short-Term Memory (LSTM) network. LSTM networks have enhanced memory capability, creating the possibility of using them for learning and generating music and language.
This thesis focuses on generating Chinese music and Japanese lyrics using LSTM networks. For Chinese music generation, an existing LSTM implementation is used: char-RNN, written by Andrej Karpathy in the Lua programming language using the Torch deep learning library. I collected a data set of 2,500 Chinese folk songs in abc notation to serve as the LSTM training input. The network learns a probabilistic model of sequences of musical notes from the input data that allows it to generate new songs. To generate Japanese lyrics, I modified Denny Britz’s GRU model into an LSTM network in the Python programming language, using the Theano deep learning library. I collected over 1MB of Japanese pop lyrics as the training data set. For both implementations, I discuss the overall performance, the design of the model, and the adjustments made to improve performance.
Author: Jake St. Germain
Title: Neuroprosthetics: An Investigation into Utilizing EEG Brain Waves to Control a Robotic Arm
Advisor: Robert Signorile
Abstract: Medical advances have created a society where more people are surviving life-threatening injuries, although at a physical cost. Individuals living with disabilities as a result of such injuries are becoming increasingly common, especially ones who are living with limited motion of their extremities, are paralyzed from the head or waist down, or are paraplegic or quadriplegic. Improvements in prosthetic technology have not yet caught up to the medical world, often limiting the options for these individuals to assimilate back into society. This thesis proposes that it is possible to directly connect the brain, through the measurement of EEG waves, to a robotic arm to increase functionality and movement. This union requires measuring the frequency of EEG brain waves and transferring and manipulating this information to control a robotic arm, all in real time. This research utilizes the capabilities of the Emotiv EPOC system to record and parse brain waves, which are then transferred to a Lego NXT robot set that resembles a prosthetic arm. That system simulates a BCI, or Brain-Computer Interface, where the brain is able to communicate with an external source, here the Lego arm.
Author: Dylan Wolff
Title: Mutational Fuzzing to Discover Software Bugs and Vulnerabilities
Advisor: Robert Signorile
Abstract: Recent major vulnerabilities such as the Heartbleed bug have emphasized the importance of properly testing software. For both product testers and hackers, fuzzing has emerged as an important tool used to find these issues. This thesis investigates one branch of the field, mutational fuzzing, in which valid inputs are randomly altered to produce sample files. While the samples produced by this approach are not as efficient as the hand-crafted inputs used in generational fuzzing, a mutational fuzzer can easily be used on multiple applications with no prior knowledge of the input formats and only a small amount of setup. The fuzzer created for this thesis, written in Python, is a multiprocessing, distributed generic file fuzzer, and was used to test the popular media player VLC for bugs.
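The core mutation step is simple; a minimal sketch follows (file names hypothetical; the thesis's fuzzer adds multiprocessing, distribution across machines, and crash monitoring):

    import random

    def mutate(sample: bytes, rate=0.001) -> bytes:
        # flip a random selection of bytes in a valid input file
        data = bytearray(sample)
        for _ in range(max(1, int(len(data) * rate))):
            i = random.randrange(len(data))
            data[i] = random.randrange(256)
        return bytes(data)

    with open("valid.avi", "rb") as f:          # any valid seed input
        seed = f.read()
    for n in range(100):
        with open(f"sample_{n:03}.avi", "wb") as f:
            f.write(mutate(seed))
    # Each sample is then fed to the target (e.g., VLC) under a crash monitor.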
Author: Ali M. Aslam
Title: Learning Distributed Representations of Natural Language with Artificial Neural Networks
Advisor: Sergio Alvarez
Abstract: Methods in natural language processing (NLP) often make use of handcrafted features or simplifying assumptions of language that work well for many tasks and allow for computational tractability. They can, however, fall short in expressing the full semantic and syntactic richness of natural language that is necessary for solving complex problems of language understanding. To address this shortcoming, artificial neural networks (ANN), robust computational models used for machine learning, have recently been applied to NLP in order to learn continuous-valued vector representations that capture some of the linguistic properties of words and even sentences. These representations have subsequently helped achieve state-of-the-art performance in a number of tasks. In this thesis, I first introduce the motivation and basic concepts behind natural language processing and artificial neural networks. I then describe the representations that ANN approaches to NLP produce, and finally, I mention the latest developments at the intersection of these two areas, along with the basic intuitions behind them.
Author: Samuel L. Baxter
Title: An ML Implementation of the Dependently Typed Lambda Calculus
Advisor: Robert Muller
Abstract: As programming languages and the field of computer science develop, the question of program correctness and reliability becomes more prevalent in our field. How can we assure that programs execute as expected, and what errors can we catch before we reach them? Type systems provide a framework to give satisfactory answers to these questions, and more expressive type systems yield more reliable conclusions drawn from static analysis of a program. However, more complex systems are computationally expensive, so the difficulty lies in striking a balance between a type system's effectiveness and its practicality in implementation and in execution.
This thesis discusses the limits of simple type systems as well as the polymorphic lambda calculus that underlies Java's Generics. We provide an OCaml implementation of a dependently typed, higher-order functional language, to display the benefits of this expressive system, which catches errors before runtime that many other systems cannot detect. This aims to be a simple, general ML implementation that does not rely on any language-specific features, so as to highlight the type system in its clearest form. The dependently typed lambda calculus is compared to other systems, with concrete examples and a discussion of its benefits and drawbacks.
Author: Peter J. Casinelli
Title: Evaluating and Implementing Recommender Systems as Web Services Using Apache Mahout
Advisor: Sergio Alvarez
Abstract: In recent years there has been a dramatic increase in the amount of online content. Recommender systems software has emerged to help users navigate through this increased content, often leveraging user-specific data that is collected from users. A recommender system helps a user make decisions by predicting their preferences, during shopping, searching, or simply browsing, based on the user's past preferences as well as the preferences of other users. This thesis explores different recommender system algorithms such as User-User Collaborative and Item-Item Collaborative filtering using the open source library Apache Mahout. We simulate recommendation system environments in order to evaluate the behavior of these collaborative filtering algorithms, with a focus on recommendation quality and time performance. We also consider how recommender systems behave in real world applications. We explore the implementation of a web service that serves as a front end to a recommender system, keeping in mind our evaluation results, as well as ease of access to applications, and the overall user experience.
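A minimal sketch of the user-user approach on a toy ratings matrix follows (cosine similarity with similarity-weighted prediction; Mahout's implementations add neighborhood selection, normalization, and scalable data models):

    import numpy as np

    # rows = users, cols = items, 0 = unrated
    R = np.array([[5, 4, 0, 1],
                  [4, 5, 1, 0],
                  [1, 0, 5, 4],
                  [0, 1, 4, 5]], dtype=float)

    def predict(R, user, item):
        # cosine similarity of `user` to every other user's rating row
        norms = np.linalg.norm(R, axis=1)
        sims = R @ R[user] / (norms * norms[user] + 1e-12)
        sims[user] = 0.0
        rated = R[:, item] > 0                 # only users who rated `item`
        w = sims * rated
        if w.sum() <= 0:
            return R[user][R[user] > 0].mean() # fall back to user's mean
        return float(w @ R[:, item] / w.sum()) # similarity-weighted rating

    print(predict(R, user=0, item=2))   # low: user 0's neighbors dislike item 2
    print(predict(R, user=0, item=1))   # high: close neighbor rated it highly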
Author: Ehsan Dadgar-Kiani
Title: Interactive Mobile Simulation of Classical Electrodynamics using Finite Difference Methods
Advisor: Sergio Alvarez
Abstract: Unlike other branches of physics that can be simplified to lower spatial dimensions to improve understanding for an introductory student, classical electrodynamics is innately a three-dimensional science. This introduces the need for an educational utility to simulate the otherwise complicated phenomena that occur in three dimensions. This thesis aims to address this need through an iPad application that allows users to construct custom simulations and interact with them through touch gestures. The application allows the user to simultaneously explore electromagnetic induction, circuit theory, electrostatic fields, and elementary particle systems. Since the application allows for a high level of customization, a versatile modeling technique was needed to numerically approximate the laws of electrodynamics, Maxwell's equations. The finite difference time domain method is a general method for approximating differential equations by discretizing the space and time domains. Since this application's primary purpose is as an explorative and potentially educational tool rather than a lab utility, it uses techniques that favor temporal efficiency over numerical accuracy. Furthermore, the circuit implementation makes use of elementary graph algorithms such as depth-first search to solve a circuit's equivalent matrix equation.
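For intuition, the finite difference time domain idea in one dimension fits in a few lines (a sketch in normalized units with the magic time step; the application implements the scheme in three dimensions with interactive sources):

    import numpy as np

    N, STEPS = 200, 100
    Ez = np.zeros(N)          # electric field samples
    Hy = np.zeros(N - 1)      # magnetic field, staggered half a cell away

    for t in range(STEPS):
        # leapfrog updates of the 1-D Maxwell curl equations (c*dt/dx = 1)
        Hy += Ez[1:] - Ez[:-1]
        Ez[1:-1] += Hy[1:] - Hy[:-1]
        Ez[N // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source

    print(np.round(Ez[::25], 3))   # two pulses propagating outward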
Author: Steven J. Valeri
Title: Local Sonic Positioning System
Advisor: William Ames
Abstract: Based on recent innovations such as the Nintendo Wii and the Xbox Kinect, gaming companies are looking to improve and drastically alter the way users interact with games. The Kinect uses cameras and image recognition algorithms to represent a user’s body motions as the movement of virtual objects in 3D space. The Wii uses accelerometers and gyroscopes for the same purpose. This thesis project explores the possibility of using a process similar to GPS to locate the position of an object and move it around in a virtual 3D environment. A GPS receiver can calculate its position because it can determine how far away it is from multiple satellites. This thesis proposes that, given three speakers and a microphone, the distance between one of the speakers and the microphone can be calculated from the time it takes sound to travel from the speaker to the microphone. The position of the microphone can then be calculated by finding the distance between the microphone and each of the three speakers. By locating the position of the microphone continuously, the changes in position can be used as input for a video game, resulting in the movement of an object in a virtual 3D space.
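The geometry reduces to simple algebra, sketched below under the simplifying assumption that the microphone lies in the plane of the three speakers and that the times of flight are known; subtracting the squared-distance equations from one another yields a linear system:

    import numpy as np

    SPEED = 343.0                                   # speed of sound, m/s
    speakers = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
    mic_true = np.array([0.7, 1.1])

    # measured times of flight -> distances
    t = np.linalg.norm(speakers - mic_true, axis=1) / SPEED
    d = SPEED * t

    # |x - p_i|^2 = d_i^2; subtracting the first equation from the others
    # cancels |x|^2 and leaves a linear system A x = b
    A = 2 * (speakers[1:] - speakers[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(speakers[1:]**2, axis=1) - np.sum(speakers[0]**2))
    print(np.linalg.solve(A, b))                    # recovers [0.7, 1.1]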
Author: Milo C. Watanabe
Title: privateBook: Encrypting User Data with Attribute-Based Encryption Using Privacy Policies
Advisor: Robert Muller
Abstract: As the internet and cloud services have pervaded our lives in nearly every aspect, one of the biggest issues facing the populace today is finding ways to protect our privacy. An important part of protection for the average user of a service is the privacy policy: essentially the only way today to let a client define how their data is used by the server. But often a client is not given a choice, and sometimes their policy is not even followed. We present here a way of encrypting user data such that users first create a privacy policy, and their data is protected even from the service itself unless their privacy requirements are met by the service.
Author: Andrew Crotty
Title: Low-Overhead Concurrency Control Using State-Based Transaction Scheduling
Advisor: Ed Sciore
Abstract: NewSQL RDBMSs specifically target OLTP applications, attempting to combine the high performance of NoSQL systems with the ACID guarantees of traditional architectures. Of late, these RDBMSs have eschewed the standard design practices of their predecessors in favor of more streamlined and specialized techniques. One such innovation in vogue at present is the serial execution model, in which data is divided into optimally sized partitions that each own a single worker thread for processing. Research suggests that, in current main-memory systems, this approach actually outperforms customary concurrency mechanisms by providing maximal CPU utilization at minimal cost. However, the serial execution model makes stringent assumptions regarding the partitionability of datasets, and many common workloads do not possess such characteristics.
In order to better address these workloads, we present a low-overhead concurrency control model that does not rely on strict expectations about partitionability. Our design uses a novel state-based transaction scheduler to efficiently organize the concurrent execution of non-conflicting operations. We demonstrate that our model achieves optimal CPU utilization for the aforementioned workloads and outperforms other concurrency control strategies.
Author: Helen Jiang
Title: Digital Paintchat: Examining and Expanding Digital Collaborative Tools for Artists
Advisor: William Ames
Abstract: The digital world has revolutionized virtually every aspect of people's lives. Many professional illustrators have begun to use digital tools to simplify their drawing process and make it more efficient. There are many different software programs that artists use, each fitted to meet different needs, such as photo manipulation, painting, or animation. Yet although digital art is constantly evolving and expanding, there is little research on how artists interact with digital media.
Communication is one of the areas in which technology has had the most profound effect. People from anywhere in the world have the ability to contact each other at a moment's notice. This reality has led to new, fruitful collaborations in a variety of fields. Thus far, however, there are no fully-functional artist tools that enable direct communication between artists. My thesis involves the planning and implementation of such a program.
I first conducted a digital arts survey to gather data on how current digital artists interact with the programs they are using, the way they use tools that are common among all digital art software programs, as well as the shortcomings of these tools and digital art in general. The survey was answered by both amateur and professional artists from online art communities, the majority of whom have been using art programs for over four years. Afterwards, I began programming a basic drawing program based on the results of the survey, and added networking capabilities.
Author: Jinho Kim
Title: Automatic Pitch Detection and Shifting of Musical Tones in Real Time
Advisor: Sergio Alvarez
Abstract: Musical notes are acoustic stimuli with specific properties that trigger a psychological perception of pitch. Pitch is directly associated with the fundamental frequency of a sound wave, which is typically the lowest frequency of a periodic waveform. Shifting the perceived pitch of a sound wave is most easily done by changing the playback speed, but this method warps some of the characteristics and changes the time scale. This thesis aims to accurately shift the pitch of musical notes while preserving their other characteristics, and it implements this in real time on an Android device. There are various methods of detecting and shifting pitch, but in the interests of simplicity, accuracy, and speed, a three-step process is used. First, the fundamental pitch of a stable periodic section of the signal is found using the Yin pitch detection algorithm. Second, pitch marks that represent the local peaks of energy are found, each spaced out by roughly one period (the inverse of the fundamental frequency). Lastly, these marks are used in the Pitch Synchronous Overlap and Add (PSOLA) algorithm to generate a new signal with the desired fundamental frequency and similar acoustical characteristics to the original signal.
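A sketch of the detection step using plain autocorrelation (the thesis uses the more robust Yin algorithm) and of the semitone shift ratio:

    import numpy as np

    SR = 44_100                                     # sample rate, Hz

    def detect_f0(x, fmin=50, fmax=1000):
        # pick the autocorrelation peak inside the plausible period range
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        lo, hi = SR // fmax, SR // fmin
        period = lo + np.argmax(ac[lo:hi])
        return SR / period

    x = np.sin(2 * np.pi * 440 * np.arange(2048) / SR)  # A4 test tone
    f0 = detect_f0(x)
    print(round(f0, 1))                                 # ~440 Hz

    # shifting by s semitones multiplies the fundamental by 2**(s/12)
    target = f0 * 2 ** (3 / 12)                         # up a minor third
    print(round(target, 1))                             # ~523 Hz (C5)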
Author: Matt Ricketson
Title: Logical Time Synchronization in Distributed Networks with Volatile Latency
Advisor: Robert Signorile
Abstract: The ability to accurately synchronize logical time between nodes in a network is important for various applications such as data collection, distributed robotics, and gaming. Most existing algorithms for time synchronization, however, are either only applicable in networks with low, consistent latency, or are not distributed in nature. This is suitable for computers perpetually connected to WiFi or wired networks, but for mobile, embedded devices that are connected through volatile-latency networks such as cellular networks and that often communicate in a distributed manner, these algorithms are less than ideal.
This thesis explores a new method for time synchronization in these types of networks, called the "Follower" algorithm, which is shown to be effective at adapting to rapid, inconsistent changes in latency and to be ideal for distributed networks where nodes are disconnected and reconnected constantly. The algorithm is evaluated through extensive virtual network simulations that directly compare its effectiveness against Cristian's time synchronization algorithm, and is also implemented in a physical network of minimalist Arduino-based devices to show its potential benefit for reducing cost and complexity in distributed sensor networks.
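For reference, the Cristian's-algorithm baseline fits in a few lines (a sketch; its assumption of symmetric link delay is precisely what volatile latency breaks):

    import time

    def cristian_offset(request_server_time):
        """Estimate clock offset against a server, assuming symmetric delay."""
        t0 = time.monotonic()
        server_time = request_server_time()     # network round trip
        t1 = time.monotonic()
        rtt = t1 - t0
        # the server's clock read is assumed valid at the round trip midpoint
        return server_time + rtt / 2 - t1

    # Toy stand-in for a remote server whose clock runs 2.5 s ahead.
    fake_server = lambda: time.monotonic() + 2.5
    print(round(cristian_offset(fake_server), 3))   # ~2.5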
Author: Tair Akhmejanov
Title: Computational Complexity and Randomness
Advisor: Howard Straubing
Abstract: The limit of efficient computation is an interesting topic both from the perspective of practical knowledge and pure mathematics. We examine the effect of randomness on efficiency. Does a machine with access to random coin flips provide a means for more efficient algorithms? Empirically, there are a number of problems that are solvable by randomized algorithms for which no equally efficient classical algorithm is known. Despite this observation, there are theoretical results in computational complexity theory suggesting that randomness can always be eliminated without much loss in efficiency. We explore both options by implementing a randomized algorithm, but also working through the theory of derandomization. We then explore methods for decreasing the amount of randomness needed in our algorithms by using expander graphs as “pseudorandom objects.” The ideas developed about expander graphs are then exploited to implement an algorithm solving graph connectivity in logarithmic space.
[NOTE: PDF form of this thesis is not currently available.]
Author: John Bacon
Title: Relative Distributed Inertial Navigation Using Android Technology
Advisor: Ed Sciore
Abstract: Sensors have been a part of computer science and human-computer interaction for many years, with applications ranging from spacecraft navigation systems to robotics and gaming. In gaming specifically, the use of the accelerometer has been popularized by the Nintendo Wii, which uses an accelerometer and a gyroscope to let the Wii controller represent 3D objects, such as a golf club, tennis racket, or sword, in 3D environments. This thesis project investigates the ability to use Android technology, in tandem with the sensors on Android devices, to represent players themselves as objects in a 3D environment. By combining sensor principles with OpenGL ES, this thesis attempts to do so by implementing relative distributed inertial navigation. Inertial navigation involves the accelerometer, the gyroscope, and various filtering techniques. The navigation here is relative and distributed: the environment is an arbitrary 3D environment, players are positioned relative to that environment, and the system is designed for multi-user interaction. The thesis combines techniques from inertial navigation, OpenGL ES, Android, and client-server interaction to implement a reasonable solution, provides research supporting its design and implementation decisions, and ends with conclusions about how to further the technology and possible real-world applications.
[NOTE: PDF form of this thesis is not currently available.]
Author:Brett Beaulieu-Jones
Title:A fusion model of Artificial Neural Networks and Moore-Penrose Pseudo-inverses for Stock Market Forecasting
Advisor:Sergio Alvarez
Abstract: This project proposes and implements a fusion model that combines a machine learning method, Artificial Neural Networks (ANN), with a linear algebra method, Moore-Penrose Pseudo-inverses, for stock market forecasting. The goal of the model is to decide which stocks to buy on a given day, by predicting the stocks expected to change in price by the greatest percentage. The prediction tool is trained with past daily closing prices, and tested with the previous four days' closing prices. It then uses a design by committee approach to combine the results of the ANN and the Moore-Penrose methods. Forecasts are evaluated with three experiments: a three month prediction on the dataset used by a leading academic project in stock market prediction, historical yearly predictions to test long term viability, and a five week live simulation experiment in an attempt to demonstrate real time practical use of the proposed method.
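As a rough illustration of the linear-algebra half of such a fusion model (this sketch is not from the thesis; the function names, window size, and toy prices are illustrative assumptions), a Moore-Penrose pseudo-inverse can fit a least-squares predictor of the next close from the previous four days' closes:

    import numpy as np

    # Hypothetical sketch: fit the next-day closing price from the previous
    # four closes via the Moore-Penrose pseudo-inverse (least squares).
    def pinv_forecaster(closes, window=4):
        X = np.array([closes[i:i + window]
                      for i in range(len(closes) - window)])
        y = np.array(closes[window:])
        X1 = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
        w = np.linalg.pinv(X1) @ y                  # least-squares weights
        return lambda last4: float(np.append(last4, 1.0) @ w)

    closes = [10.0, 10.2, 10.1, 10.4, 10.6, 10.5, 10.9, 11.0, 11.2, 11.1]
    predict = pinv_forecaster(closes)
    print(predict(closes[-4:]))   # forecast for the next trading day

A committee scheme along the abstract's lines would average this forecast with the ANN's output before ranking stocks by predicted percentage change.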
Author:Mike Betten
Title:Near Real Time Human Pose Estimation using Locality Sensitive Hashing
Advisor:Hao Jiang
Abstract: This thesis aims to develop a data set of poses to be used with locality sensitive hashing in human pose estimation, and then to determine the potential for real or near real time use of locality sensitive hashing in human pose estimation with that data set. Background subtraction is used to obtain a silhouette of the human pose from a single frame of video. The pose is then matched against the data set of poses using locality sensitive hashing. Results indicate that a data set can be developed to be used with locality sensitive hashing to match human poses in video and that near real time results are possible. The application of locality sensitive hashing for human pose estimation in video facilitates further discussion of the uses for pose estimation in video as well as the uses of locality sensitive hashing in fast feature matching.
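As a sketch of the indexing idea (not the thesis code; the descriptor dimension, bit count, and data are made up), random-hyperplane locality sensitive hashing buckets similar pose descriptors together, so a query silhouette can be matched without a linear scan:

    import numpy as np

    # Hypothetical sketch of random-hyperplane LSH for near-constant-time
    # matching of pose descriptors.
    rng = np.random.default_rng(0)

    def make_hasher(dim, n_bits=16):
        planes = rng.normal(size=(n_bits, dim))
        return lambda v: tuple((planes @ v > 0).tolist())   # bit signature

    def build_index(descriptors, hasher):
        index = {}
        for i, d in enumerate(descriptors):
            index.setdefault(hasher(d), []).append(i)
        return index

    dim = 64
    poses = [rng.normal(size=dim) for _ in range(1000)]   # stand-in data set
    hasher = make_hasher(dim)
    index = build_index(poses, hasher)
    query = poses[42] + 0.01 * rng.normal(size=dim)       # noisy silhouette
    print(index.get(hasher(query), []))                   # candidate matches

Real systems typically use several such tables to boost recall, since a near neighbor can land in an adjacent bucket of any single table.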
Author:Ben Brown
Title:The Universal Content Management System (UCMS)
Advisor:Bill Ames
Abstract: There are millions of content management systems out there in the world, each fitted with a fairly narrow purpose. These systems can range from very simple ones to very advanced ones, but what they all have in common is that they require a programmer (or even team of programmers) to build them.
What I have attempted to do is create a universal content management system: one which (ideally) allows any kind of feature or set of features to be created with as little dependence as possible on programmers and on other people with any significant understanding of technology. In other words, this system generates whole features, and the specific pages for those features, on its own, with the user specifying only what sort of fields and other content he wants on those pages. This is code that writes code.
In order to make this possible, I have looked at the most general and most common design patterns for content management systems, specifically database design patterns, and incorporated them into this system. Which pattern or set of patterns is used is entirely contingent on what options the user chooses, but the user does not need to know more than basic database relationship concepts in order to use the system effectively (assuming, of course, that he takes some time to familiarize himself with the system and the terminology it uses).
UCMS can generate blogs and simple forums, among other features. At this stage it is powerful enough to serve as a simple-to-moderate content management system in terms of complexity, as there are some general design patterns and issues it does not yet cover. Additionally, it is limited to PHP- and MySQL-based content management systems (the language and database system, respectively, that it has been built on and for).
[NOTE: PDF form of this thesis is not currently available.]
Author:Greg Epstein
Title:Harnessing User Data to Improve Facebook Features
Advisor:Sergio Alvarez
Abstract: The recent explosion of online social networking through sites like Twitter, MySpace, and Facebook has millions of users spending hours a day sorting through information on their friends, coworkers, and other contacts. These networks also house massive amounts of user activity information that is often used for advertising purposes but can be utilized for other activities as well. Facebook, now the most popular in terms of registered users, active users, and page rank, has a sparse offering of built-in filtering and predictive tools, such as "suggesting a friend" or the "Top News" feed filter. These basic tools, however, seem to underutilize the information that Facebook stores on all of its users. This paper explores how to better use available Facebook data to create more useful tools to assist users in sorting through their activities on Facebook.
Author:Ryan Gadsby
Title:Implementations of Robotic Swarm Networks using Low-Cost Hardware
Advisor:Robert Signorile
Abstract: From the surface of Mars to terrestrial car factories, robots play an important role in many aspects of human life. However, the limitations of robotic hardware (power consumption, imprecise movement, and monetary cost chief among them) are often a massive obstacle to many possible applications of this technology. Swarm robotics provides several methods of circumventing these roadblocks. Smaller, more primitive robots typically consume less power and cost less money to develop than larger, more intricate machines, yet by distributing tasks among multiple smaller robots, the same amount of work can be accomplished as if we had designated the task to a single rover. The problem of imprecise movement can be alleviated by having multiple robots interpolate their estimated surroundings with existing data instead of relying on a single machine's interpretation of its environment. A large network of low-cost robots also permits a more cavalier approach to exploration and reconnaissance: whereas even an inexpensive sensor on an expensive machine might be irreplaceable, and therefore limit the possible applications of a given robot, a member of a swarm can disappear without unduly inconveniencing its fellows.
This thesis examines several different swarm network configurations and their performance at tasks analogous to real-world applications of swarm technology. This is achieved using a low-cost modular robotic controller, the LEGO Mindstorms NXT, with a Java Virtual Machine installed on it to provide a hassle-free development platform. The benefits of swarm technology are fully explored in the swarm's implementation, as the members' collective sensory input is used to form a "hive mind" accessible by any given member of the swarm. I focused on the task of searching for an object in the physical environment, and compared the time spent and general effectiveness of the swarm against a single robot equipped with more sensors doing the same task.
Author:T. J. Keemon
Title:3D Reconstruction of Human Motion Through Video
Advisor:Hao Jiang
Abstract: In this thesis, we study computer vision methods to capture 3D human movements in videos. The goal is to extract high-quality movement from general videos with minimal human interaction. With a few mouse clicks on the body joints in each video frame, all the possible 3D body configurations are automatically constructed and their likelihoods are quantified using a prior trained with millions of exemplars from the CMU motion capture database. The 3D body movement is then optimized with dynamic programming over the pose hypotheses, subject to temporal smoothness constraints. The proposed method can be used for unconstrained motion capture, and our experiments show that it is efficient and accurate. We further study two applications of the proposed motion capture method. The first is to animate characters using the motion captured from videos. The second is sports performance analysis: with the 3D movement information, we can measure body part speed, coordination, and various other parameters.
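The dynamic-programming step can be pictured as a Viterbi-style shortest path over per-frame pose hypotheses. The following sketch is an assumed reconstruction, not the thesis implementation, with generic prior costs and a Euclidean smoothness term standing in for the learned terms:

    import numpy as np

    # Hypothetical sketch: choose one 3D pose hypothesis per frame,
    # minimizing prior cost plus a temporal smoothness penalty.
    def best_pose_path(costs, poses, smooth_weight=1.0):
        # costs[t][k]: prior cost of hypothesis k at frame t
        # poses[t][k]: joint-position vector of that hypothesis
        dp = [list(costs[0])]
        back = []
        for t in range(1, len(costs)):
            row, brow = [], []
            for k, pk in enumerate(poses[t]):
                trans = [dp[t - 1][j] + smooth_weight * np.linalg.norm(pk - pj)
                         for j, pj in enumerate(poses[t - 1])]
                j = int(np.argmin(trans))
                row.append(costs[t][k] + trans[j])
                brow.append(j)
            dp.append(row)
            back.append(brow)
        k = int(np.argmin(dp[-1]))
        path = [k]
        for brow in reversed(back):     # trace best predecessors backward
            k = brow[k]
            path.append(k)
        return path[::-1]               # best hypothesis index per frame

    # Example with two frames and two hypotheses per frame:
    costs = [[0.5, 1.0], [0.2, 0.9]]
    poses = [[np.zeros(3), np.ones(3)], [np.zeros(3), np.ones(3)]]
    print(best_pose_path(costs, poses))   # -> [0, 0]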
Author:Dan Szafir
Advisor:Robert Signorile
Abstract: It has long been known that as neurons fire within the brain they produce measurable electrical activity. Electroencephalography (EEG) is the measurement and recording of these electrical signals using sensors arrayed across the scalp. Though there is copious research in using EEG technology in the fields of neuroscience and cognitive psychology, it is only recently that the possibility of utilizing EEG measurements as inputs in the control of computers has emerged. The idea of Brain-Computer Interfaces (BCIs) which allow the control of devices using brain signals evolved from the realm of science fiction to simple devices that currently exist. BCIs naturally present themselves to many extremely useful applications including prosthetic devices, restoring or aiding in communication and hearing, military applications, video gaming and virtual reality, and robotic control, and have the possibility of significantly improving the quality of life of many disabled individuals. However, current BCIs suffer from many problems including inaccuracies, delays between thought, detection, and action, exorbitant costs, and invasive surgeries. The purpose of this research is to examine the Emotiv EPOC© System as a cost-effective gateway to non-invasive portable EEG measurements and utilize it to build a thought-based BCI to control the Parallax Scribbler® robot. This research furthers the analysis of the current pros and cons of EEG technology as it pertains to BCIs and offers a glimpse of the future potential capabilities of BCI systems.
Author:David Emerson
Title:Primality Testing and Sub-Exponential Factorization
Advisor:Howard Straubing
Abstract: This paper discusses the problems of primality testing and large number factorization. The first section is dedicated to a discussion of primality testing algorithms and their importance in real-world applications. Over the course of the discussion, the structures of the primality algorithms are developed rigorously and demonstrated with examples. This section culminates in the presentation and proof of the modern deterministic polynomial-time Agrawal-Kayal-Saxena algorithm for deciding whether a given integer n is prime.
The second section is dedicated to the factorization of large composite numbers. While primality and factorization are mathematically tied in principle, they are very different computationally. This fact is explored, and current high-powered factorization methods, along with the mathematical structures on which they are built, are examined.
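As a hedged illustration of the congruence at the heart of the AKS test (this sketch is not the full algorithm: the prescribed choice of r and the range of witnesses a are omitted, and the helper names are mine), one can check (x + a)^n ≡ x^n + a modulo (x^r - 1, n) with simple polynomial arithmetic:

    # Hypothetical sketch of the AKS congruence check; polynomials are
    # coefficient lists reduced modulo (x^r - 1, n), with r >= 2.
    def polymul_mod(p, q, r, n):
        out = [0] * r
        for i, pi in enumerate(p):
            if pi:
                for j, qj in enumerate(q):
                    out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
        return out

    def polypow_mod(p, e, r, n):
        result = [1] + [0] * (r - 1)        # the constant polynomial 1
        while e:                             # repeated squaring
            if e & 1:
                result = polymul_mod(result, p, r, n)
            p = polymul_mod(p, p, r, n)
            e >>= 1
        return result

    def aks_congruence_holds(n, r, a):
        lhs = polypow_mod([a % n, 1] + [0] * (r - 2), n, r, n)  # (x + a)^n
        rhs = [0] * r
        rhs[n % r] = 1                       # x^n reduced mod x^r - 1
        rhs[0] = (rhs[0] + a) % n
        return lhs == rhs

    # Every prime n satisfies the congruence; a composite n fails it
    # for suitably chosen r and a, which is what the full test exploits.
    print(aks_congruence_holds(7, 5, 2))    # True: 7 is prime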
Author:Shahbano Imran
Title:Videogame Interaction Through Vision-Based Input
Advisor:Hao Jiang
Abstract: The purpose of this project was to build an interface allowing applications and videogames to use non-invasive vision-based input, in place of conventional input devices like a mouse or joystick, for enhanced HCI. The implementation involves template matching and tracking in real-time processing of video, and recognition of gestures through trajectory tracking. The limited pose estimation uses information about the location of the head and hands: regions of interest are first isolated by classifiers such as skin tone, then smaller parts of the frame are processed to recognize and track the different parts of the body in real time. Tracking requires segmenting the subject from the background and matching these segments across consecutive frames. A robust implementation of pose estimation could be used to create interesting vision-based interfaces; a fundamental limitation of current pose estimation algorithms is the compromise between accuracy and real-time processing costs. I focus on restricting variable factors and using controlled conditions to get maximum accuracy for specific parts of the body, such as the hands.
Author:Jason Croft
Title:A Feedback-Based Content Distribution System for Peer-to-Peer Networks
Advisor:Robert Signorile
Abstract: Persistent control and ownership of data in a decentralized network environment is difficult with no central authority regulating control of, or storing, confidential files. We propose a method to control distribution of confidential data in peer-to-peer (P2P) networks through the use of a feedback-based, restrictive content distribution system. Unauthorized distribution is thwarted through a method of self-destructing, one-time-use data. Transmitted data is encrypted and encapsulated within executables, and authenticated to a single user and machine. After a single use, data is destroyed through in-memory compilation of a new executable, which overwrites the original at runtime. Feedback from the executable classifies data transactions as satisfactory or unsatisfactory to compute trust values using a reputation management algorithm. Unsatisfactory transactions, arising from misuse of data (e.g., failed authentication, unauthorized multiple uses, or attempted redistribution of data), lower a peer's trust. We examine and modify the algorithms of two distributed reputation systems: EigenTrust, which attempts to reduce distribution of inauthentic files through the notion of transitive trust, and a Bayesian approach by Buchegger and Boudec based on combining first- and second-hand reputation information. We note the strengths and tradeoffs of each, but show that the Bayesian approach gives the best results for the conditions of our environment.
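To make the EigenTrust half concrete, here is a small power-iteration sketch (an assumed reconstruction of the published algorithm's core, not the thesis code; pre-trusted peers and all-zero rating rows, which the full scheme handles, are omitted):

    import numpy as np

    # Sketch of EigenTrust's core: global trust is the principal left
    # eigenvector of the row-normalized local-trust matrix C, found by
    # power iteration t <- C^T t (transitive trust).
    def eigentrust(local_trust, n_iters=50):
        C = np.maximum(local_trust, 0).astype(float)
        C /= C.sum(axis=1, keepdims=True)          # assumes no all-zero row
        t = np.full(C.shape[0], 1.0 / C.shape[0])  # uniform prior
        for _ in range(n_iters):
            t = C.T @ t
        return t

    local = np.array([[0, 4, 1],
                      [2, 0, 2],
                      [5, 0, 0]])
    print(eigentrust(local))   # larger value = more trusted peer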
[NOTE: PDF form of this thesis is not currently available.]
Author:Peter Sempolinski
Title:Automatic Solutions of Logic Puzzles
Advisor:Howard Straubing
Abstract: The use of computer programs to automatically solve logic puzzles is examined in this work. A typical example of this type of logic puzzle is one in which there are five people, with five different occupations and five different color houses. The task is to use various clues to determine which occupation and which color belongs to each person. The clues to this type of puzzle often are statements such as, “John is not the barber,” or “Joe lives in the blue house.” These puzzles often range widely in complexity with varying numbers of objects to identify and varying numbers of characteristics that need to be identified for each object.
With respect to the theoretical aspects of solving these puzzles automatically, this work proves that the problem of determining, given a logic puzzle, whether or not that logic puzzle has a solution is NP-Complete. This implies, provided that P is not equal to NP, that, for large inputs, automated solvers for these puzzles will not be efficient in all cases.
Having proved this, this work proceeds to seek methods that will work for solving these puzzles efficiently in most cases. To that end, each logic puzzle can be encoded as an instance of boolean satisfiability. Two possible encodings are proposed that both translate logic puzzles into boolean formulas in conjunctive normal form. Using a selection of test puzzles, a group of boolean satisfiability solvers is used to solve these puzzles in both encodings. In most cases, these simple solvers are successful in producing solutions efficiently.
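As an illustration of one such encoding (an assumption for this listing, not the thesis's exact scheme), the "exactly one assignment" structure of these puzzles maps directly to CNF clauses over variables var(p, h) meaning "person p goes with house h":

    from itertools import combinations

    # Hypothetical sketch of a CNF encoding using DIMACS-style integers.
    def var(p, h, n):
        return p * n + h + 1

    def exactly_one_clauses(n):
        clauses = []
        for p in range(n):
            clauses.append([var(p, h, n) for h in range(n)])   # at least one
            for h1, h2 in combinations(range(n), 2):           # at most one
                clauses.append([-var(p, h1, n), -var(p, h2, n)])
        for h in range(n):                   # no two people share a house
            for p1, p2 in combinations(range(n), 2):
                clauses.append([-var(p1, h, n), -var(p2, h, n)])
        return clauses

    # A clue such as "John (person 0) is not in the blue house (house 2)"
    # becomes the unit clause [-var(0, 2, 5)] for a 5-person puzzle.
    print(len(exactly_one_clauses(5)))

Any off-the-shelf SAT solver accepting DIMACS input can then search for a satisfying assignment, which decodes back into a puzzle solution.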
[NOTE: PDF form of this thesis is not currently available.]
Author:Lawrence Chang
Title:
Advisor:David Martin
Abstract: With vast amounts of data in the world, organization becomes a challenge. The success of data-driven web services (IMDb, YouTube, Google Maps, Wikipedia, et cetera) hinges on their ability to present information in an intuitive manner with user-friendly interfaces. One area that lacks such a service is sports statistics. Given the ubiquitous appeal of sports, a solution to this problem can be universally beneficial. Many sites exist that carry statistics for different sports, but all of them have limitations. Since there is very little continuity among sports, their statistics are represented disparately.
There are several problems with this approach. Any time the informational structure needs to change, the entire database and interface must change with it. In addition, there can never be a single interface if there are different schemas for different sports, which makes for an unfriendly user experience.
My system uses a unique schema that is capable of representing statistics from any sport, no matter how unusual. Adding new statistics to a sport to reflect rule changes, or adding a new sport altogether, is seamless. In addition, the web interface is structured by Rails, which changes automatically with the schema.
Challenges included developing a universal sports schema and testing it sufficiently enough to prove its generality. Finding and extracting the data to populate the database also presented difficulties.
Author:Brad Hayes
Advisor:David Martin
Abstract: This thesis investigates replacing the mouse with a more natural means of human-computer interaction, ranging from manipulating WIMP-based applications to developing post-WIMP interfaces and exploring their usefulness. The WIMP (Window, Icon, Menu, Pointer) interface has been the standard paradigm of the personal computing era. Computer software has been optimized for the keyboard-and-mouse input pair, tools that have remained fundamentally stagnant with regard to innovation for decades [1]. Through the construction of a touchscreen with variable levels of contact detection, targeted demo applications not only show the effectiveness of such an input apparatus but introduce the potential for previously unexplored levels of interaction.
The use of direct-contact manipulation provides a more natural interaction than is achieved by the mouse, avoiding the need to abstract crucial concepts such as 'selecting', 'dragging', or 'resizing'. The introduction of vision driven touch-sensitivity allows for the use of physical objects to denote distinct meanings, providing a means to create associations between physical and digital actions. Building upon this concept, gesture support is a logical and practical capability to expect from a 'direct' input device. As such, it is analyzed and implemented as a core component of the device's software.
Common difficulties with software based touchscreens include device mobility, device reliability, and poor interface software implementation. While mobility is not mitigated within this project, reliability and interface/usability design are principally addressed. Challenges addressed during the implementation of the project primarily revolved around physical limitations and performance restrictions, as the quality of algorithm necessary is inversely proportional to the quality of equipment being used.
Author:William Clerico
Title:Web 2.0 and Communication in Nonprofit Organizations: A Case Study and Proof of Concept
Advisor:William Ames
Abstract: The nonprofit sector is a large piece of the U.S. economy with tremendous growth. There are 1.5 million different organizations with revenues of $670 billion annually, and 109 million Americans volunteer each year. With such large amounts of money flowing through these organizations each year, there have been several studies looking at the efficiencies with which they operate.
In 2003, the Harvard Business Review published a comprehensive study of the nonprofit sector, specifically the largest 200,000 organizations which have revenue of more than $25,000 every year. They identified five major areas for improvement:
- Reduce funding costs
- Distribute holdings faster
- Reduce program service costs
- Trim administrative costs
- Improve sector effectiveness
The authors of the article made several suggestions about how to improve these areas, even touching on technology by way of online donation solicitation. However, they also mentioned the difficulty of improving efficiency without significant investments of time and capital. It is also difficult to speak in a general sense about the application of technology when the scope of nonprofit organizations is so broad and so varied.
Author:Daniel J. Scali
Advisor:Sergio Alvarez
Abstract: This thesis examines the application of genetic programming to evolving strategies for playing an iterated version of the Prisoner’s Dilemma game. The study examines the evolution of strategies for a single population of players pitted against a static environment, as well as the co-evolution of strategies for two distinct subpopulations of players competing against one another. The results indicate that the strategies that can be evolved are strongly influenced by the function set provided during the design process. In co-evolutionary runs in particular, the function set shapes the environment of opponents that an individual strategy is evaluated against. Experimental runs that alter the makeup of the function set facilitate a discussion of how different function sets and environments can lead to diverse strategies with varying levels of performance.
Author:Robert Russo
Title:Evaluating Bayesian Networks as a Recommender System Technology for Motion Pictures
Advisor:Sergio Alvarez
Abstract: This thesis applies machine learning techniques to a dataset of movies described by collaborative (social) and content attributes in order to create a mixed recommender system for movies. Bayesian networks, two versions of neural networks, decision trees, and simple rule classifiers are compared. It is determined that Bayesian networks and a 1R classifier based on the single best attribute outperform the remaining techniques. An attempt to contrast recommendation quality for content-only and collaborative-only datasets as compared with a dataset described by both content and collaborative attributes yields inconclusive results. Lack of sufficient content information in the current datasets may be the reason for this.
Author:Sergey Weinstein
Title:3-D Stereoscopic Reconstruction using Structured Light
Advisor:David Martin
Abstract: Much like humans, computers are able to infer 3-D positions of objects using sight alone. Replacing our eyes with electronic cameras and our brain with soul-less, heartless algorithms, it is possible for a computer to reconstruct a 3-D scene from simple photos. In order to understand the relative positions of objects in the world, we can take two photos of the same scene (from different viewpoints) and look at how the positions of objects change from photo to photo. To illustrate this, try extending your arm and holding your thumb in front of your eye. Look at your thumb with only your right eye, then switch eyes. Your thumb seems to jump a great deal between those two viewpoints, whereas the background stays in mostly the same spot. This change of relative positions allows us to understand the 3-D structure of the things we see.
The major difficulty with simulating this on a computer is that it's very hard to know where an object is from photo to photo (e.g., a computer doesn't know what a thumb is). The common way to identify "correspondences" between photos is to look at one image a small area at a time and try to find a similar patch in the other image. This process is very time consuming and can result in inaccurate correspondences.
The strategy taken by this thesis involves using structured light (projecting a checkerboard onto the scene while photographing it). This allows the computer to see the image as a binary collection of pixels rather than a multi-valued gray (or color) image. The computer can then give a unique index to each part of the scene and quickly figure out where things are in both pictures. It can then recover the 3-D structure of the scene and paste the surfaces with the original objects' textures.
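A minimal sketch of the decoding idea (assumed for illustration; the thesis's checkerboard variant differs in detail): project a handful of binary stripe patterns, threshold each photograph, and read a unique code per pixel off the stack:

    import numpy as np

    # Hypothetical sketch of binary structured-light decoding.
    def decode_stripes(binary_images):
        # binary_images: list of HxW boolean arrays, coarsest stripe first
        index = np.zeros(binary_images[0].shape, dtype=int)
        for img in binary_images:
            index = (index << 1) | img.astype(int)   # append one code bit
        return index

    # Pixels that decode to the same index in the left and right photos
    # correspond; their disparity then yields depth by triangulation.
    rng = np.random.default_rng(0)
    imgs = [rng.random((4, 8)) > 0.5 for _ in range(3)]
    print(decode_stripes(imgs))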
Author:Michael Ahern
Title:Coevolving Quidditch Players Using Genetic Programming
Advisor:Sergio Alvarez
Abstract: Ever since the invention of the computer, people have been fascinated by the idea of Artificial Intelligence (AI). Although general-purpose AI remains science fiction, AI and Machine Learning (ML) techniques have been used to develop everything from autonomous Martian rovers to computers that drive cars. Although it is ideal to build physical systems to test algorithms, cost constraints often require initial development to be done in rich game-like simulators. Building upon this line of research, my thesis describes the automatic programming of simulated agents playing a "Quidditch-like" game using genetic programming.
Author:Joel Barciauskas
Title:Extensions to Polyphonic C#
Advisor:Edward Sciore
Abstract: Synchronization mechanisms in concurrent programming have progressed very slowly over the years. Semaphores and monitors are still the most widely used tools for solving these problems. We discuss in this paper a new type of synchronization mechanism called "chords." Previous papers have discussed a few areas in which chords are well-suited to replace semaphores and monitors, including asynchronous method execution and return value retrieval and the reader-writer problem. We will attempt to solve a number of other traditional synchronization problems using chords, and make suggestions as to possible extensions to the rules governing chords that would make them even more useful.
Author:Doyle Hunt
Title:Free Riding in a Peer to Peer Networked Environment
Advisor:Robert Signorile
Abstract: Peer-to-peer networks have become very popular as file sharing and distributed systems have been adopted by everyone from children in elementary school to academic professors. As these networks have grown, the "free riding" problem, which has always plagued society, has surfaced in these decentralized networks. In one study of the Gnutella network, approximately 70% of users consumed resources but provided either undesirable resources or none at all. The main factor that allows this behavior is the lack of an authority to govern the group as a whole. As this problem continues to grow, P2P networks may either destroy themselves or require some form of central governance to prevent rampant shirking. This paper presents the history and development of P2P networks, the inherent problems these networks face, and the solutions that have been developed or suggested to address issues that could otherwise limit the scalability and power of a distributed P2P network. I present a solution of my own alongside solutions by other P2P network providers, and the results that have been achieved.
Author:Andrew Logan
Title:distCVS: A Distributed Peer-to-Peer Versioning File Storage System
Advisor:Robert Signorile, Elizabeth Borowsky
Abstract: The current layout of resources on the internet suffers from the fact that there is generally a single point of failure for data access. Peer-to-peer applications promise to change this by distributing resources, and therefore network routes and load, across the network itself. The problem is that developing and testing such a system is often a hard and tedious chore, especially since the technology is still rapidly evolving. In this paper, I present distCVS, a peer-to-peer system for the storage of versioned data. distCVS is unique because it is built by using the Bamboo peer-to-peer networking framework to pass messages for an unmodified version of the CVS source control system. This style of development not only allows the designer to rapidly assemble a working system, but also lets him or her take advantage of any future improvements made to either Bamboo or CVS.
Author:Edmond J. Murphy
Title:Assorted Attacks on the RSA Cryptographic Algorithm
Advisor:Howard Straubing
Abstract: This thesis concentrates on the vulnerabilities of the RSA cryptographic algorithm when it is not securely implemented. While it has been proven that a brute-force attack on the algorithm is not practical, there remain aspects of the algorithm that require proper use to prevent back-door attacks. The attacks performed in this thesis attempt to exploit both mathematical and inherent timing vulnerabilities of the algorithm. Furthermore, simple practices that prevent these attacks are discussed.
Author:Gregory Pavlov
Title:Intelligent Entities in a Distributed Simulation Environment
Advisor:Robert Signorile
Abstract: In addressing the growing model sizes used in simulation environments, I examine adding machine learning techniques to the entities in a model in an effort to produce such side effects as emergent behavior. Distributing the environment in order to increase the efficiency of the simulation also plays an important role in this thesis. The added intelligence of entities may have some effect upon the speed at which a model can be executed, as additional computation will be required. Taking a model-based approach, I attempt to solve some of the problems of interaction between components in a distributed simulation.
Author:Daniel Shaw
Title:An Eye-Tracking Evaluation of Multicultural Interface Designs
Advisor:Jim Gips
Abstract: This paper examines the impact of a multicultural approach on the usability of web and software interface designs. Through the use of an eye-tracking system, the study compares the ability of American users to navigate traditional American and Japanese websites. The ASL R6 eye-tracking system recorded user search latency and the visual scan path in locating specific items on the American and Japanese pages. Experimental results found statistically significant latency values when searching for left- or right-oriented navigation menus. Among the participants, visual observations of scan paths indicated a strong preference for initial movements toward the left. These results demonstrate the importance of manipulating web layouts and navigation menus for American and Japanese users. This paper further discusses the potential strengths resulting from modifications of interface designs to correspond with such cultural search tendencies, and suggestions for further research.
Author:Darren Yeung
Title:Voice Over Peer-to-Peer Telephony Service
Advisor:Robert Signorile
Abstract: The purpose of this project is to build a telephone service that operates over a peer-to-peer (P2P) network; that is, a service that does not require a centralized server for two or more clients to communicate with one another. Rather, each peer is an independent entity with "equal" power and capabilities in terms of using this application. P2P systems allow networks to sustain far more peers than current client-server models. Each peer has capabilities such as finding users connected to the network, accepting or rejecting calls, storing peers in a phonebook for future reference, serving as an answering machine, and recording messages during a call.
This has the potential to completely change the business, home, and mobile phone industries. Imagine you have a mobile phone or any mobile device that supports Wi-Fi technology. Whenever you are within range of a wireless hotspot, you are able to call any peer connected to the network, even a peer in your phonebook halfway across the country (assuming they are connected to the network). All of this is free, because the group at Sun Microsystems in charge of Project JXTA created a platform enabling P2P communication at no cost to us, the consumers. Using several protocols defined in JXTA, I was able to establish a completely decentralized telephone network with many of the functions available on current mobile phones. I chose JXTA because it provides the basic building blocks for producing peer-to-peer applications.
Author:Derek Carr
Title:Computation of Potentially Visible Set for Occluded Three-Dimensional Environments
Advisor:William Ames
Abstract: This thesis deals with the problem of visibility culling in interactive three-dimensional environments. Included is a discussion of the issues involved in both constructing and rendering three-dimensional environments. When rendering a three-dimensional environment, only a subset of the objects in the environment is visible to the user. We refer to this subset as the Potentially Visible Set (PVS). This thesis presents an algorithm that divides an environment into a network of convex cellular volumes connected by invisible portal regions. A renderer can then utilize this network of cells and portals to compute a PVS, via a depth-first traversal of the scene graph, in real time. Finally, this thesis discusses how a simulation engine might exploit this data structure to provide dynamic collision detection against the scene graph.
Author:Adam Chmielewski
Title:Construction of a Database of Annotated Natural Human Motion and its Application to Benchmarking
Advisor:David Martin
Abstract: Humans possess an uncanny ability to perceive the motion of other human beings. The performance of current algorithms on this task falls far short of human performance. In aiming to close this gap, we have collected video clips of natural human motion (e.g. walking, shopping, sports, etc.) and annotated figure tracks and head locations. We then use the annotated clips as a baseline to evaluate the quality of identification and tracking algorithms such as face detection and figure tracking. To date, we have developed annotation tools, collected and annotated 93 clips of 5 to 30 seconds each, and evaluated two current face detection algorithms.
Author:Matt Veino
Title:Face Based Indexing of Digital Photo Albums
Advisor:David Martin
Abstract: Managing a large collection of digital snapshots -- even for the average amateur digital photographer -- is a chore. Searching is often limited to captions, but writing a descriptive caption for each photo is overly tedious. We aim to simplify the management of a collection of photos by enabling searches on the content of the photographs themselves while minimizing the amount of supervision by the user. We focus on the problem of finding faces in photographs. The user identifies a small number of faces in the photo collection. Once we have these faces, we can run algorithms on other pictures to search through and find faces in the photographs. Once we have locations of faces, we will attempt to recognize the individuals.
Author:Michael Elliot
Title:An Attempt at 4-Legged Robocup
Advisor:Robert Signorile
Abstract: The attempt to design and implement an entire Robocup-ready AIBO proved a challenging and rewarding experience. Due to the large time requirements and time-consuming technical issues that had to be dealt with, my experimentation was cut short. Successes, however, include the creation of a vision system and an omni-directional walking system that could be used to achieve the ultimate goal of playing soccer. These systems are in many ways final products in themselves; typically teams are separated into groups that each complete one sub-module of the dog, giving them freedom to explore many options. In addition, I had no starting point and had to learn the OPEN-R programming environment on my own, so development required significantly more time than it would have with an already-working system and knowledgeable people to oversee it.
Author:Hiroyuki Hayano
Title:Distributed Data Storage: Archiving Over Networks
Advisor:Elizabeth Borowsky
Abstract: Technology today allows for hard drives that are tens of gigabytes in capacity, and the Distributed Data Storage (DDS) system is an attempt to make good use of the small, unused portions of such drives in networked computers. A client of the DDS service can split a substantially large file into small packages and back them up on remote computers. The system can be used for digital archiving of massive data, and this paper serves as a cornerstone for a suggested model of such an application.
Author:Frank Mazzacano
Title:A Reliable Multicast Protocol for a Distributed System
Advisor:Robert Signorile
Abstract: Since the advent of Internet gaming, users have been mostly concerned with a quick connection that allows them to send and receive game state information. However, users also demand that the information they exchange be transmitted reliably, to maintain fairness between all clients. Developers have therefore had to weigh pros and cons in picking a packet transport protocol, and have generally settled on unicast protocols, which are reliable but not as quick as possible. I have chosen to use the less-utilized multicast protocol to send and receive information between clients in a distributed gaming system.
Author:Evan McCarthy
Title:A Three Dimensional Environment Suitable for Distribution over a Multicast Network
Advisor:Robert Signorile
Abstract: The development of three-dimensional environments (3DEs) revolutionized the computer graphics industry, in particular the gaming industry, by bringing computer graphics one step closer to replicating reality. This application is a 3DE that replicates Ignacio Hall and serves as the backdrop for a distributed system in which users compete against each other for resources and objectives. The 3DE must resemble the model and interact with the users in a realistic way.
Author:Jonathan Pearlin
Title:A Distributed Mutual Exclusion Algorithm Using Multicast Communication
Advisor:Robert Signorile
Abstract: The algorithm presented in this paper uses a combination of mutual exclusion techniques in conjunction with synchronization techniques and multicast communication. The goal is an efficient, low-traffic algorithm that provides mutual exclusion to a distributed system with shared resources. Previous work in the field suggests that optimization in distributed mutual exclusion algorithms is measured by the number of messages required to achieve mutual exclusion over shared resources. The proposed algorithm achieves this in a constant number of messages, a result of the use of multicast communication. The algorithm was applied to a distributed system with three-dimensional graphics in order to demonstrate its functionality and practicality in the world of distributed systems.
Author:Sharif Tai
Advisor:James Gips
Abstract: The EagleEyes system allows non-ambulatory, non-verbal people to use a computer, often for the first time. The EagleEyes connect system now allows them to move beyond simply interacting with the computer to connecting with another user anywhere in the world. They can then communicate or play games with another EagleEyes user, opening a previously closed avenue.
Author:Timothy Wake
Title:RNA Secondary Structure Prediction Using Applications of the Partition Function
Advisor:Peter Clote
Abstract: A dynamic programming algorithm is presented for calculating the partition function, and the pairwise base-pairing probabilities, over all secondary structures of a given RNA nucleotide sequence. The algorithm extends the approach McCaskill used for nested secondary structures to the class of structures that includes pseudoknots, using a technique due to Eddy et al.
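For orientation, a toy version of the nested-structure recursion (unit base-pair weights, no pseudoknots, and steric loop-size constraints omitted; this is a didactic sketch, not the thesis algorithm) looks like:

    import math

    # Hypothetical sketch of a McCaskill-style partition function over
    # nested secondary structures: Z[i][j] sums Boltzmann weights of all
    # structures of s[i..j]; here every base pair gets the same weight.
    PAIRS = {"AU", "UA", "CG", "GC", "GU", "UG"}

    def partition_function(s, pair_weight=math.e):
        n = len(s)
        Z = [[1.0] * n for _ in range(n)]
        for span in range(1, n):
            for i in range(n - span):
                j = i + span
                total = Z[i][j - 1]                 # base j left unpaired
                for k in range(i, j):               # base j pairs with k
                    if s[k] + s[j] in PAIRS:
                        left = Z[i][k - 1] if k > i else 1.0
                        inner = Z[k + 1][j - 1] if k + 1 <= j - 1 else 1.0
                        total += left * pair_weight * inner
                Z[i][j] = total
        return Z[0][n - 1]

    print(partition_function("GGGAAACCC"))

Base-pairing probabilities then follow by combining such "inside" values with the corresponding "outside" quantities, which is where the bulk of McCaskill's algorithm lies.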
Author:Tricia Yaw
Title:Fault-Tolerance of E-Commerce Transaction Protocols
Advisor:Elizabeth Borowsky
Abstract: E-commerce began in the early 1990s as the internet grew. One of the primary business forces driving the growth of the internet was the potential for online shopping. To secure users' trust in business transactions and to guard against online scams, Visa and MasterCard teamed up to develop a secure method of transfer: Secure Electronic Transaction (SET) was created to allow safe credit card transactions over the internet. As the late 1990s grew into the dot-com boom, different protocols were developed for varying purposes. DigiCash created Ecash, in the expectation that users would want an anonymous means of purchasing products, and PayPal came into existence to give regular people a means of purchasing items from each other.
Author:Christopher R. Fagiani
Title:An Evaluation of Tracking Methods for Human-Computer Interaction
Advisor:James Gips
Abstract: The current human-computer interaction paradigm, based around the keyboard and mouse, has seen little change since the advent of modern computing. Since many desktop and laptop computers now come with cameras as standard equipment, it is desirable to employ them for next-generation human-computer interaction devices.
Author:Cristopher Stauffer
Title:Real-Time Terrain Rendering and Scene Graph Management
Advisor:Williams Ames
Abstract: The development of computer-aided terrain rendering has over the years been driven by the needs of an array of applications, such as topographical mapping, surgical and medical aids, and simulations and entertainment. The goal of rendering has been redefined over the past decade by the introduction of systems capable of real-time terrain representation.
Author:Michael Tierney
Title:Middleman - Linking Data Sources and Wireless Devices
Advisor:Edward Sciore
Abstract: The combined technological developments of Java and XML have helped to create many advances in portability and customizable presentation of data. My research, the development of the Middleman Server System, focused on combining the portable aspects of XML and Java with the seemingly natural pairing of XML and database connectivity.
Author:John Weicher
Title:Distributed Ray Tracing
Advisor:William Ames
Abstract: Photo-realistic images are generated by a computer through mathematical and geometric calculations based on the physics of the real world, yet are indistinguishable from two-dimensional photographs of a real-life three-dimensional scene. As computers became more powerful, several techniques were developed in the attempt to produce such images. Raytracing is one of those techniques, and is probably one of the most popular 3D image-synthesis techniques in use today. Raytracing is actually a remarkably simple process, provided one has a bit of background understanding first.
Author:Jay Cahill
Title:Error Tolerant Clustering of Gene Microarray Data
Advisor:Peter Clote
Abstract: Gene microarray technology allows for unprecedented, massive production of biological data across multiple experimental conditions and in time series. Computer analysis of this data can help guide biological bench work toward the assignment of gene function and the classification of cells and tissues, and ultimately assist in the diagnosis and treatment of disease. One approach to the analysis of microarray data is the identification of groups of genes with common expression patterns, or "clusters." The author implements an error-tolerant clustering algorithm due to Amir Ben-Dor, Ron Shamir, and Zohar Yakhini. In their original paper, they defined a stochastic error model for microarray data and, based on that model, proved that their algorithm recovers the underlying cluster structure of microarray data with high probability. In this paper, their results are replicated on artificial data. In addition, the author tests the stability of clusterings generated by the algorithm and compares the performance of discretized and non-discretized versions.
Author:Daniel Russo
Title:2-Dimensional Shape Categorization Using Polar Coordinate Representations
Advisor:Peter Kugel
Abstract: This paper describes a method for comparing and categorizing shapes in BMP image files that involves storing the points of shapes as polar coordinates. In doing this, the calculations involved in the comparisons become simpler, and the overall running time of the comparison algorithm is significantly reduced.
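A minimal sketch of the representation (illustrative only; the bin count and distance measure are assumptions, not the paper's exact choices): store each contour as the average centroid distance per angular bin, so comparing two shapes reduces to comparing two short vectors:

    import math

    # Hypothetical sketch: polar signature of a contour (mean distance
    # from the centroid within each angular bin).
    def polar_signature(points, n_bins=36):
        cx = sum(x for x, y in points) / len(points)
        cy = sum(y for x, y in points) / len(points)
        sums, counts = [0.0] * n_bins, [0] * n_bins
        for x, y in points:
            theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
            b = int(theta / (2 * math.pi) * n_bins) % n_bins
            sums[b] += math.hypot(x - cx, y - cy)
            counts[b] += 1
        return [s / c if c else 0.0 for s, c in zip(sums, counts)]

    def signature_distance(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    diamond = [(2, 0), (0, 2), (-2, 0), (0, -2)]
    print(signature_distance(polar_signature(square),
                             polar_signature(diamond)))

A rotation of the shape becomes a circular shift of the signature, which is far cheaper to test than re-comparing raw pixel coordinates.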
Author:Kristen Grauman
Title:Communication via Eye Blinks - Detection and Duration Analysis in Real Time
Advisor:Margrit Betke, James Gips
Abstract: A method for a real-time vision system that automatically detects a user's blinks and accurately measures their durations is introduced. The system is intended to provide an alternate input modality to allow people with severe disabilities to access a computer. Voluntary long blinks trigger mouse clicks, while involuntary short blinks are ignored. The system enables communication using "blink patterns": sequences of long and short blinks which are interpreted as semiotic messages. The location of the eyes is determined automatically through the motion of the user's initial blinks. Subsequently, the eye is tracked by correlation across time, and appearance changes are automatically analyzed in order to classify the eye as either open or closed at each frame. No manual initialization, special lighting, or prior face detection is required. The system has been tested with interactive games and a spelling program. Results demonstrate overall detection accuracy of 95.6% and an average rate of 28 frames per second.
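The per-frame open/closed decision can be sketched with normalized cross-correlation against an open-eye template (a hedged reconstruction; the threshold and helper names are assumptions, and the real system also handles tracking and initialization):

    import numpy as np

    # Hypothetical sketch of correlation-based eye-state classification.
    def ncc(patch, template):
        p = patch - patch.mean()
        t = template - template.mean()
        denom = np.sqrt((p * p).sum() * (t * t).sum())
        return float((p * t).sum() / denom) if denom else 0.0

    def classify_frames(eye_patches, open_template, threshold=0.6):
        # A run of consecutive "closed" frames is one blink; its length
        # divided by the frame rate gives the duration used to separate
        # long voluntary blinks (mouse clicks) from short involuntary ones.
        return ["open" if ncc(p, open_template) > threshold else "closed"
                for p in eye_patches]

    rng = np.random.default_rng(1)
    template = rng.random((8, 8))
    frames = [template + 0.05 * rng.random((8, 8)),   # open eye
              rng.random((8, 8))]                      # unrelated patch
    print(classify_frames(frames, template))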
Author:Mark Beppu
Title:EagleEyes Wireless Mouse
Advisor:William Ames
Abstract: Eagle Eyes is a new technology that allows the user to move the cursor on the screen of a Windows or Macintosh computer by moving his or her eyes or head. Basically, the cursor follows the location that the user is looking at on the screen. In this way, the eyes replace the mouse.
Author:Brendan Wright
Title:Flow-Directed Uncurrying
Advisor:Robert Muller
Abstract: In modern mostly-functional programming languages, all functions are functions of exactly one argument. Multiple arguments can be passed to a function as a tuple or by defining the function so that it returns another function that will accept the next argument. Curry style is very general but, unfortunately, it is very expensive to implement when functions are represented as closure data structures. In this work, we show how to transform curry-style functions to the more efficient tuple-style functions.
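In Python terms (used here only for illustration; the thesis operates on a typed intermediate representation, not Python), the transformation rewrites a chain of single-argument closures as one tuple-accepting function:

    # Hypothetical sketch: curried versus uncurried (tuple-style) forms.
    def add_curried(x):
        def inner(y):          # a closure is allocated at every call
            return x + y
        return inner

    def add_tupled(args):      # one function, no intermediate closure
        x, y = args
        return x + y

    assert add_curried(2)(3) == add_tupled((2, 3))
    # A flow-directed uncurrier rewrites call sites f(a)(b) to f((a, b))
    # only where the analysis proves every application is fully applied.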
[NOTE: PDF form of this thesis is not currently available.]
Author:Aaron Moller
Title:Flow-Sensitive Program Transformations
Advisor:Robert Muller
Abstract: Modern compilers use flow analysis as an aid in carrying out optimizing program transformations. Intermediate languages such as the CIL typed intermediate language or SSA form embed the flow information within the intermediate representation (IR). While this is helpful for checking the integrity of the IR, it complicates optimization, since flow information valid for an unoptimized representation is not, in general, valid for the result of applying an optimization. This thesis investigates which optimizations can be applied without invalidating the flow information. It also shows how to update the flow information for an optimization that does invalidate the flow.
[NOTE: PDF form of this thesis is not currently available.]
Author:Jun Kawai
Title:Gaze Detection Via Self-Organizing Gray-Scale Units
Advisor:Margrit Betke
Abstract: We present a gaze estimation system that detects an eye in a face image and estimates the gaze direction by computing the position of the pupil with respect to the center of the eye. Our system is based on unsupervised learning. It creates a map of self-organized gray-scale image units that collectively describe the eye outline. Our approach is information-conserving.
Author:Joe Jerista and Yun Pang
Title:Hidden Surface Removal
Advisor:William Ames
Abstract: We created several algorithms that reduce the number of calculations required to render a scene in a three-dimensional environment. The core of these algorithms is the ability to "see" what the user can view through the window of the computer screen into the world, perform calculations only on what the user can see, draw it to the screen, and ignore the remainder of the environment.
Author:Canh N. Cao
Title:Computer Generated Holography
Advisor:William Ames
Abstract: In the computer science field, a computer-generated holographic image is computed by numerically simulating the physical phenomena of light diffraction and interference. It is possible for computer software to calculate the phase of light from an object. In my thesis, I implemented a computer program that generates holograms by computing the phase of light from different objects such as points, lines, and wire frames.
Author:Brad Alan and Andrew Gregory
Title:Implementing Beta Unification
Advisor:Robert Muller
Abstract: Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a typing for a given term, if possible. Kfoury and Wells developed an algorithm for computing principal typings for finite-rank intersection types. The algorithm depends on a new notion of unification, what is called beta-unification. We develop the first implementation of the beta-unification algorithm.
View Brad Alan and Andrew Gregory's code.
[NOTE: PDF form of this thesis is not currently available.]