1-BIT CS IN PRESENCE OF PRE-QUANTIZATION AND POST-QUANTIZATION NOISE
Presented by Swatantra Kafle
Advisor: Dr. Pramod Varshney
Related research area(s): Other
Poster number EECS 1-01
In this paper, we consider the problem of sparse signal reconstruction from 1-bit compressed measurements using the generalized approximate message passing (GAMP) algorithm. This work proposes a reconstruction method for a general setting, i.e., when the compressed measurements are corrupted with additive Gaussian noise before quantization and with bit-flip errors after quantization. To the best of our knowledge, this is the first work to analyze the effect of both pre-quantization and post-quantization noise on signal reconstruction from 1-bit quantized measurements. Reconstruction performance can always be improved by making additional assumptions on the sparse signals or by taking side information into account. We believe that the reconstruction performance of this algorithm can therefore serve as a benchmark for future efforts to improve reconstruction from noisy 1-bit compressed measurements.
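As a concrete illustration of the measurement model considered here, the sketch below generates noisy 1-bit measurements; the dimensions, noise level, and flip probability are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 8        # signal length, number of measurements, sparsity (illustrative)
sigma = 0.1                  # pre-quantization Gaussian noise std (assumed)
p_flip = 0.05                # post-quantization bit-flip probability (assumed)

# k-sparse signal and Gaussian measurement matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

# 1-bit measurements: additive Gaussian noise before quantization, bit flips after
y = np.sign(A @ x + sigma * rng.standard_normal(m))
flips = rng.random(m) < p_flip
y[flips] = -y[flips]
# A GAMP-based reconstruction would then estimate x from (A, y) using these two noise models.
```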
A CODEBOOK OF BRIGHTNESS TRANSFER FUNCTIONS FOR IMPROVED TARGET RE‐IDENTIFICATION ACROSS NON‐OVERLAPPING CAMERA VIEWS
Presented by Yu Zheng
Advisor: Dr. Senem Velipasalar
Related research area(s): Unmanned Systems; Intelligent Systems; Security
Poster number EECS 1-02
Target re-identification across non-overlapping camera views is a challenging task due to variations in target appearance, illumination, viewpoint and intrinsic camera parameters. The brightness transfer function (BTF) was introduced for inter-camera color calibration and to improve the performance of target re-identification methods. Several works have built on BTFs, most notably the weighted BTF (WBTF), the cumulative BTF (CBTF) and the mean BTF (MBTF). In this work, we present a novel method to model the appearance variation across different camera views. We propose building a codebook of BTFs composed of the most representative BTFs for a camera pair. We also propose an ordering and trimming criterion to avoid using all possible combinations of codewords for the different color channels. In addition, to obtain a better appearance model, we present a different way to segment a target from the background. Evaluations on the VIPeR, CUHK01 and CAVIAR4REID datasets show that the proposed method outperforms other BTF-based approaches, including the WBTF, CBTF and MBTF. As the results show, the proposed method provides an improved brightness transfer across different camera views, and any target ReID approach incorporating color/brightness histograms can benefit from it.
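A single BTF between two views is commonly estimated by matching cumulative brightness histograms of the same target observed in both cameras; the sketch below shows that standard construction (the codebook in this work is built from many such per-pair BTFs, and all data here is hypothetical).

```python
import numpy as np

def brightness_transfer_function(patch_a, patch_b, bins=256):
    """Estimate a BTF mapping brightness values in camera A to camera B by
    matching normalized cumulative histograms (a standard construction)."""
    ha, _ = np.histogram(patch_a, bins=bins, range=(0, bins))
    hb, _ = np.histogram(patch_b, bins=bins, range=(0, bins))
    ca = np.cumsum(ha) / max(ha.sum(), 1)
    cb = np.cumsum(hb) / max(hb.sum(), 1)
    # For each brightness level in A, find the level in B with the matching CDF value
    return np.searchsorted(cb, ca, side="left").clip(0, bins - 1)

# Hypothetical usage with two observations of the same target in two views
rng = np.random.default_rng(0)
cam_a = rng.integers(0, 256, size=(64, 32))
cam_b = np.clip(cam_a * 0.8 + 20, 0, 255).astype(int)   # darker/offset second view
btf = brightness_transfer_function(cam_a, cam_b)
```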
A Deep Reinforcement Learning-Based Framework for Cloud Resource Allocation
Presented by Ning Liu
Advisor: Dr. Yanzhi Wang
Related research area(s): Intelligent Systems
Poster number EECS 1-03
Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensionality in its state and action spaces, which limits the usefulness of traditional RL techniques.
In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level.
Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system. Moreover, a novel solution framework is necessary to address the even higher dimensionality of the resulting state and action spaces.
A Deep Reinforcement Learning-Based Framework for Content Caching
Presented by Chen Zhong
Advisor: Dr. Cenk Gursoy
Related research area(s): Intelligent Systems
Poster number EECS 1-04
Content caching at the edge nodes is a promising technique to reduce the data traffic in next-generation wireless networks. Inspired by the success of deep reinforcement learning (DRL) in solving complicated control problems, this work presents a DRL-based framework with the Wolpertinger architecture for content caching at the base station. The proposed framework aims to maximize the long-term cache hit rate, and it requires no knowledge of the content popularity distribution. To evaluate the proposed framework, we compare its performance with other caching algorithms, including the Least Recently Used (LRU), Least Frequently Used (LFU), and First-In First-Out (FIFO) caching strategies. Meanwhile, since the Wolpertinger architecture can effectively limit the action space size, we also compare the performance with Deep Q-Network to identify the impact of dropping a portion of the actions. Our results show that the proposed framework achieves an improved short-term cache hit rate and an improved and stable long-term cache hit rate in comparison with the LRU, LFU, and FIFO schemes. Additionally, the performance is shown to be competitive with deep Q-learning, while the proposed framework provides significant savings in runtime.
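The key idea of the Wolpertinger architecture is to let the actor propose a continuous proto-action, retrieve its k nearest valid (discrete) caching actions, and let the critic choose among only those candidates; the sketch below illustrates that selection step with hypothetical embeddings and a placeholder critic (interfaces are assumptions for illustration, not the paper's implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

def wolpertinger_select(proto_action, action_embeddings, q_value, k=5):
    """Map the actor's continuous proto-action to its k nearest discrete caching
    actions and let the critic choose among them, keeping the effective action
    space small."""
    dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
    candidates = np.argsort(dists)[:k]            # k nearest valid actions
    return candidates[int(np.argmax([q_value(a) for a in candidates]))]

# Hypothetical usage: 100 cacheable contents, one embedding per action,
# and a placeholder critic that would normally be a trained Q-network.
action_embeddings = rng.random((100, 16))
proto = rng.random(16)                            # actor output (placeholder)
best_action = wolpertinger_select(proto, action_embeddings, lambda a: rng.random())
```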
An Approach to Classification of Power System Dynamic Events and False Data Injection Using PMU Data
Presented by Rui Ma
Advisor: Dr. Sara Eftekharnejad
Related research area(s): Security
Poster number EECS 1-05
Identification of real-time transient events is essential for power system protection and control. Phasor measurement units (PMUs) provide synchronized voltage phasors, current phasors, and frequency measurements at high resolution. A targeted false data injection attack on PMUs can prompt operators to take wrong actions that eventually jeopardize power system reliability. In this work, a multivariate model is developed to not only verify the correctness of PMU data but also classify dynamic events, by considering the attributes of each PMU data stream and the relationships among them. In addition, a method inspired by text mining is proposed to help efficiently identify each transient event.
Behavioral Hypothesis Testing
Presented by Baocheng Geng
Advisor: Dr. Pramod Varshney
Related research area(s): Intelligent Systems
Poster number EECS 1-06
Traditional sensor networks have found applications in many areas. These networks consist only of machine sensors that forward collected information to a fusion center. However, many mission-oriented control systems require humans as an essential part of the decision-making process. Unlike traditional sensors, which can be programmed to behave exactly as we want, analyzing human decision making requires us to account for the cognitive limitations, uncertainty and unpredictability of human beings. In this work, we model how humans make decisions in a hypothesis testing scenario, both at the individual level and at the population level.
Convolutional Neural Network Based Multi-Scale 3D Object Detection
Presented by Burak Kakillioglu
Advisor: Dr. Senem Velipasalar
Related research area(s): Intelligent Systems
Poster number EECS 1-07
The advent of LiDAR sensors and the increasing availability of 3D depth sensors such as the Microsoft Kinect have spurred research on 3D data for many application areas, including virtual reality, autonomous navigation and surveillance. Moreover, smartphone companies are rapidly moving toward integrating 3D sensors into their devices. As new and reliable techniques have become available, the demand for 3D data processing has increased in many industries, such as transportation, avionics, defense and entertainment. Current state-of-the-art approaches for 2D image classification and detection, which are based on convolutional neural networks (CNNs), have achieved very high accuracy rates on various applications, surpassing human performance. Although there have been successful attempts at object classification and detection in the 3D volumetric domain, the performance of state-of-the-art approaches still lags behind that of their 2D counterparts. In this work, we develop a 3D object detector that detects and localizes objects in 3D point clouds of scene captures, such as those acquired by LiDAR sensors or depth cameras. We first encode the input scene cloud into a 3D occupancy grid. Then we extract multi-scale 3D features of the scene using CNNs and apply sliding anchors on these features, where the anchors act as classifiers for every region at each feature scale. Our algorithm gives promising preliminary detection results on the SUN RGB-D dataset.
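The first step, voxelizing a point cloud into a binary occupancy grid, can be sketched as follows; the grid resolution and scene dimensions are illustrative assumptions.

```python
import numpy as np

def to_occupancy_grid(points, grid_shape=(32, 32, 32)):
    """Encode an (N, 3) point cloud into a binary 3D occupancy grid
    (a minimal sketch of the input encoding step)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = (np.array(grid_shape) - 1) / np.maximum(maxs - mins, 1e-9)
    idx = ((points - mins) * scale).astype(int)     # voxel index of each point
    grid = np.zeros(grid_shape, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1       # mark occupied voxels
    return grid

# Hypothetical usage with a random scene capture (dimensions in meters, illustrative)
rng = np.random.default_rng(0)
cloud = rng.random((5000, 3)) * np.array([10.0, 10.0, 3.0])
occupancy = to_occupancy_grid(cloud)
```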
Distributed Self-Localization and Tracking with an Unknown Number of Targets
Presented by Pranay Sharma
Advisor: Dr. Pramod Varshney
Related research area(s): Unmanned Systems; Intelligent Systems
Poster number EECS 1-08
In this work, we propose an algorithm for sequentially localizing multiple mobile agents, while simultaneously using these agents to track an unknown number of mobile targets. We work under the additional constraint of measurement origin uncertainty, i.e., it is unknown which target a measurement originated from, or if it was caused by a false alarm. For better scalability along with distributed implementation, we use particle-based belief propagation on appropriately designed factor graphs. Since each agent node needs to estimate its own state, as well as the states of all targets, we use suitable “gossip” schemes for communicating relevant information among agents. To the best of our knowledge, this is the first work to consider simultaneous localization and tracking, in a distributed setting, with measurement origin uncertainty.
Beyond the obvious application in surveillance, this problem also has potential applications in autonomous driving (where other manually driven vehicles and pedestrians become targets), indoor localization, deploying unmanned first-responder agents in hazardous environments, biomedical analytics (e.g., tracking cells in multidimensional time-lapse fluorescence microscopy), and crowd counting.
Downlink Analysis in Unmanned Aerial Vehicle (UAV) Assisted Cellular Networks with Clustered Users
Presented by Esma Turgut
Advisor: Dr. Cenk Gursoy
Related research area(s): Unmanned Systems
Poster number EECS 1-09
The use of unmanned aerial vehicles (UAVs) operating as aerial base stations (BSs) has emerged as a promising solution for assisting the ground BSs, especially in scenarios requiring rapid deployment (e.g., crowded hotspots, sporting events, natural disasters). An analytical framework is provided to analyze the signal-to-interference-plus-noise ratio (SINR) coverage probability of UAV-assisted cellular networks with clustered user equipments (UEs). Locations of UAVs and ground BSs are modeled as Poisson point processes (PPPs), and UEs are assumed to be distributed according to a Poisson cluster process (PCP) around the projections of the UAVs on the ground. The complementary cumulative distribution function (CCDF) and probability density function (PDF) of the path losses for both the UAV and ground BS tiers are derived. Association probabilities with each tier are obtained. The SINR coverage probability is derived for the entire network using tools from stochastic geometry. The area spectral efficiency (ASE) of the entire network is determined, and an SINR coverage probability expression for a more general model is presented by considering UAVs located at different heights. We show that the UAV height and the path-loss exponents play important roles in the coverage performance. Coverage probability can be improved with a smaller number of UAVs, while better area spectral efficiency is achieved by employing more UAVs and having UEs more compactly clustered around the UAVs.
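For intuition, the sketch below estimates SINR coverage by Monte Carlo for a heavily simplified single-tier version of this setup (one PPP of BSs, nearest-BS association, Rayleigh fading); the two-tier UAV/ground model, clustered UEs and air-to-ground channels of this work are omitted, and all parameter values are illustrative.

```python
import numpy as np

def coverage_probability(lam=1e-5, alpha=4.0, p_tx=1.0, noise=1e-12,
                         thr_db=0.0, radius=5000.0, trials=500, seed=0):
    """Monte Carlo SINR coverage for a typical user at the origin served by the
    nearest BS of a single-tier PPP with Rayleigh fading (simplified sketch)."""
    rng = np.random.default_rng(seed)
    thr = 10 ** (thr_db / 10)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius ** 2)   # number of BSs in the disk
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))          # distances of uniform points in a disk
        h = rng.exponential(1.0, n)                  # Rayleigh fading power gains
        rx = p_tx * h * r ** (-alpha)                # received powers
        s = np.argmin(r)                             # serve from the nearest BS
        sinr = rx[s] / (noise + rx.sum() - rx[s])
        covered += sinr > thr
    return covered / trials

print(coverage_probability())
```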
Fast and Energy-Aware Resource Provisioning and Task Scheduling for Cloud Systems
Presented by Hongjia Li
Advisor: Dr. Yanzhi Wang
Related research area(s): Other
Poster number EECS 1-10
Cloud computing has become an attractive computing paradigm in recent years, offering on-demand computing resources to users worldwide. Through virtual machine (VM) technologies, cloud service providers (CSPs) can provide users with infrastructure, platform, and software at quite low cost. In this paper, we propose a fast and energy-aware resource provisioning and task scheduling algorithm to achieve low energy cost with reduced computational complexity for CSPs.
In our iterative algorithm, we divide the provisioning and scheduling into multiple steps, which effectively reduces the complexity and minimizes the run time while achieving a reasonable energy cost. Experimental results demonstrate that, compared to the baseline algorithm, the proposed algorithm achieves up to a 79.94% runtime improvement with an acceptable increase in energy cost.
Fusion of Correlated Decisions Using Regular Vine Copulas
Presented by Shan Zhang
Advisor: Dr. Pramod Varshney
Related research area(s): Other
Poster number EECS 1-11
In this paper, we propose a regular vine copula based methodology for the fusion of correlated decisions. The regular vine copula is an extremely flexible and powerful graphical model for characterizing complex dependence among multiple modalities. It expresses a multivariate copula as a cascade of bivariate copulas, the so-called pair copulas. Assuming that the local detectors are single-threshold binary quantizers and taking the complex dependence among sensor decisions into account, we design an optimal fusion rule using a regular vine copula under the Neyman-Pearson framework. In order to reduce the computational complexity resulting from the complex dependence, we propose an efficient and computationally light regular vine copula based optimal fusion algorithm. Numerical experiments are conducted to demonstrate the effectiveness of our approach.
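As an illustration of the pair-copula construction that underlies a regular vine, a three-variable joint density factorizes in the standard D-vine form (stated here only for illustration) as

```latex
f(x_1, x_2, x_3) = f_1(x_1)\, f_2(x_2)\, f_3(x_3)\;
  c_{12}\bigl(F_1(x_1), F_2(x_2)\bigr)\;
  c_{23}\bigl(F_2(x_2), F_3(x_3)\bigr)\;
  c_{13|2}\bigl(F_{1|2}(x_1 \mid x_2), F_{3|2}(x_3 \mid x_2)\bigr),
```

where the f_i and F_i are marginal densities and distribution functions and each c is a bivariate (pair) copula density; the fusion rule in this work builds joint likelihoods of the correlated sensor decisions from factorizations of this kind.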
Hardware Acceleration of Bayesian Neural Networks
Presented by Ruizhe Cai
Advisor: Dr. Yanzhi Wang
Related research area(s): Intelligent Systems
Poster number EECS 1-12
Bayesian Neural Networks (BNNs) have been proposed to address the problem of model uncertainty in training and inference. By introducing weights associated with conditioned probability distributions, BNNs are capable of resolving the overfitting issue commonly seen in conventional neural networks and allow for small-data training through the variational inference process. The frequent use of Gaussian random variables in this process requires a properly optimized Gaussian random number generator (GRNG). The high hardware cost of conventional GRNGs makes the hardware implementation of BNNs challenging. In this paper, we propose VIBNN, an FPGA-based hardware accelerator design for variational inference on BNNs. We explore the design space for the massive number of Gaussian variable sampling tasks in BNNs. Specifically, we introduce two high-performance Gaussian (pseudo) random number generators:
- the RAM-based Linear Feedback Gaussian Random Number Generator (RLF-GRNG), which is inspired by the properties of the binomial distribution and linear feedback logic;
- the Bayesian Neural Network-oriented Wallace Gaussian Random Number Generator.
To achieve high scalability and efficient memory access, we propose a deep pipelined accelerator architecture with fast execution and good hardware utilization. Experimental results demonstrate that the proposed VIBNN implementations on an FPGA can achieve a throughput of 321,543.4 Images/s and an energy efficiency of up to 52,694.8 Images/J while maintaining good accuracy.
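The statistical idea behind the RLF-GRNG is that a sum of many pseudo-random bits is binomially distributed and, by the central limit theorem, approximately Gaussian after centering and scaling; the software sketch below illustrates only that idea (the hardware design uses RAM-based linear-feedback logic rather than a software RNG, and the bit counts here are illustrative).

```python
import numpy as np

def binomial_gaussian_samples(n_samples=10000, n_bits=128, seed=0):
    """Approximate standard Gaussian samples by summing pseudo-random bits:
    a sum of n_bits fair bits is Binomial(n_bits, 0.5), which is close to
    Gaussian after centering and scaling."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(n_samples, n_bits))
    sums = bits.sum(axis=1)
    return (sums - n_bits / 2) / np.sqrt(n_bits / 4)   # mean n/2, variance n/4

samples = binomial_gaussian_samples()
print(samples.mean(), samples.std())                    # ~0 and ~1
```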
Inducing Cascading Failures in Transportation Networks
Presented by Griffin Kearney
Advisor: Dr. Makan Fardad
Related research area(s): Unmanned Systems; Intelligent Systems; Security
Poster number EECS 1-13
In this work we examine the effect of malicious attacks in disrupting optimal routing algorithms for transportation networks. Highway networks, disaster evacuation plans, water supply networks, and (the routing of data packets in) computer networks can all be described with transportation network models. We focus on modeling traffic networks using the cell transmission model, which is a spatiotemporal discretization of the kinematic wave equations. Here, vehicles are modeled as masses and roads as cells, and traffic flow is subject to conservation-of-mass and capacity constraints. At time zero, a resource-constrained malicious agent reduces the capacities of cells so as to maximize the amount of time mass spends in the network. For the resulting set of capacities, the network router then solves a linear program to determine the flow configuration that minimizes the amount of time mass spends in the network. This two-player problem can be written as a max-min problem that can be transformed into an equivalent bilinear maximization problem. Optimization problems with bilinear objectives are non-convex and known to be NP-hard in general. Linearization techniques are applied to the formulation to find solutions. These techniques scale gracefully in network size but may converge to globally non-optimal values. Analyzing fundamental examples shows that attackers with relatively small resource budgets can cause widespread failure in a traffic network.
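Schematically, the attacker-router interaction described above has the following max-min form; the notation is introduced here only for illustration and is not taken from the poster.

```latex
\max_{c \in \mathcal{C}} \;\; \min_{x \in \mathcal{X}(c)} \;\;
  \sum_{t=0}^{T} \sum_{i} x_i(t),
\qquad
\mathcal{C} = \{\, c : 0 \le c \le \bar{c},\;\; \mathbf{1}^{\top}(\bar{c} - c) \le B \,\},
```

where x_i(t) is the mass in cell i at time t, \mathcal{X}(c) is the set of flows satisfying the cell transmission model constraints under capacities c, \bar{c} are the nominal capacities, and B is the attacker's budget; the objective counts the total time mass spends in the network.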
Integrating ARM TrustZone with Android to Protect VoIP Calls
Presented by Amit Ahlawat
Advisor: Dr. Wenliang Du
Related research area(s): Security
Poster number EECS 1-14
An important trend in the use of mobile devices to make phone calls is the use of Voice-over-IP (VoIP) apps, such as Facebook Messenger, Google Hangouts, and Signal. When using a VoIP app, users place implicit trust in the VoIP protocol, the VoIP infrastructure and the mobile device. A compromise of any one of them can lead to privacy loss. In this work, we focus on the trust placed in the mobile device. Focusing on the Android OS, the absolute trust users place in the device is misplaced, as data shows that the number of vulnerabilities discovered in Android has increased from year to year. With such a weak trusted computing base (TCB), mobile devices cannot provide the privacy guarantee needed by users during a VoIP call, as a compromised mobile OS can examine the audio content of the call.
This work designs a solution for secure VoIP calls on Android using a technology called ARM TrustZone. The design leverages the partitioning of device resources enabled by TrustZone and applies it to secure the audio peripherals on a device while a VoIP call is being made. The goal is that, during a VoIP call, a compromised Android OS should be unable to access the audio from the microphone and the speaker. The design aims to provide TrustZone support transparently to VoIP app developers, so that with minimal configuration they can convert an existing VoIP app into a TrustZone-secured version.
Making Computers Emotion Aware
Presented by Danushka Bandara
Advisor: Dr. Senem Velipasalar
Related research area(s): Intelligent Systems
Poster number EECS 1-15
The human brain is the genesis of our emotions. Our work uses the brain as an objective indicator of emotion. Emotion in the brain is associated with the limbic system and the prefrontal cortex. We focus on blood flow activity in the prefrontal cortex to infer a person's emotional state, in particular the positivity or negativity of emotion, also known as valence. The contributions of this work include improving on the current state of the art in valence classification, as well as proposing a novel approach to capturing the spatial nature of fNIRS data for classification. Such improvements in emotion classification can impact fields such as robotics, psychology, education, autonomous vehicles, interactive media and assistive technologies, where an objective awareness of human emotion can give the computer the ability to better adapt to the user.
Microscopic Origin of the Chiroptical Response of Plasmonic Media
Presented by Matthew Davis
Advisor: Dr. Jay Lee
Related research area(s): Health and Well-being; Other
Poster number EECS 1-17
Chiral plasmonic systems are compelling components in a range of impactful technologies, such as actively controlled ultrafast CP modulators, high-efficiency visible-wavelength holograms, and enhanced circular dichroism (CD) spectroscopy. Chiroptical techniques such as CD spectroscopy can be used to identify structural and handedness information of a chiral medium, which is of great importance in the study of pharmaceuticals, physiology, and the origins of life itself. CD spectroscopy plays an important role in both the identification of enantiomorphic compounds and conformational analysis; however, CD responses in natural molecules are weak. The promise of enhanced CD spectroscopy has attracted an intense research effort in the field of chiral plasmonics. Providing orders of magnitude signal enhancement, plasmonic systems seem poised to continue making significant contributions to chiroptical measurements. Understanding the chiroptical properties of plasmonic structures is, therefore, vital. In this work, a Lorentzian coupled-oscillator model is introduced to facilitate a comprehensive study of the chiroptical properties of plasmonic media and to aid in the study and identification of CD responses. The GPBK model is shown to unite several previously reported chiroptical phenomena into a unified theoretical framework, illuminating the origins of parasitic non-CD response types in chiral structures and providing guidance for the thoughtful design of chiroptical measurements.
ONLINE DESIGN OF PRECODERS FOR HIGH DIMENSIONAL SIGNAL DETECTION IN WIRELESS SENSOR NETWORKS
Presented by Prashant Khanduri
Advisor: Dr. Pramod Varshney
Related research area(s): Energy Sources, Conversion, and Conservation; Intelligent Systems
Poster number EECS 1-18
In this work, we present an efficient methodology to design precoders for the distributed detection of unknown high dimensional signals. We consider a wireless sensor network in which several distributed sensors collaborate to perform binary hypothesis testing based on observations of an unknown high dimensional signal corrupted by noise. The sensors collect data over both the temporal and spatial domains. Due to network resource constraints, each sensor performs a linear compression (through precoding) of the observed high dimensional signal at each time instant and forwards the compressed signal to the fusion center (FC). The FC then employs the generalized likelihood ratio test (GLRT) to decide on the presence or absence of the signal. We propose online linear precoding/compression strategies for such sensors collecting data over the spatio-temporal domain, so that the detection performance at the FC is maximized under the network resource constraints. Using the non-centrality parameter and receiver operating characteristic (ROC) curves as performance measures, we show that the proposed precoder design achieves very good detection performance.
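Schematically, the per-sensor compression and the test at the FC can be written as below; the notation is introduced here for illustration and is not taken from the poster.

```latex
\mathcal{H}_0:\; y_k(t) = \Phi_k\, n_k(t), \qquad
\mathcal{H}_1:\; y_k(t) = \Phi_k \bigl( s(t) + n_k(t) \bigr), \quad k = 1, \dots, K,
```

where s(t) is the unknown high dimensional signal, n_k(t) is the noise at sensor k, and \Phi_k is the precoding/compression matrix being designed. The FC forms the GLRT statistic

```latex
\Lambda = \frac{\max_{s} \; p\bigl(\{y_k(t)\} \mid \mathcal{H}_1, s\bigr)}
               {p\bigl(\{y_k(t)\} \mid \mathcal{H}_0\bigr)}
\;\; \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}} \;\; \tau ,
```

and the precoders \Phi_k are chosen so that the detection performance of this test is maximized under the resource constraints.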
Power Control and Mode Selection for VBR Video Streaming in D2D Networks
Presented by Chuang Ye
Advisor: Dr. Mustafa Gursoy
Related research area(s): Other
Poster number EECS 1-19
In this work, we investigate the problem of power control for streaming variable-bit-rate (VBR) videos in a device-to-device (D2D) wireless network. A VBR video traffic model that considers video frame sizes and playout buffers at the mobile users is adopted. A setup with one pair of D2D users (DUs) and one cellular user (CU) is considered, and three modes, namely the cellular mode, dedicated mode and reuse mode, are employed. The mode selection for data delivery is determined, and the transmit powers of the base station (BS) and the device transmitter are optimized with the goal of maximizing the overall transmission rate while ensuring that the VBR video data is delivered to the CU and DU without causing playout buffer underflows or overflows. A low-complexity algorithm is proposed. Through simulations with VBR video traces over fading channels, we demonstrate that video delivery with mode selection and power control achieves better performance than using a single mode throughout the transmission.
Robust Decentralized Learning with Unreliable Agents
Presented by Qunwei Li
Advisor: Dr. Pramod Varshney
Related research area(s): Intelligent Systems
Poster number EECS 1-20
Many machine learning problems can be formulated as consensus optimization problems that can be solved efficiently via a cooperative multi-agent system. However, the agents in the system can be unreliable for a variety of reasons: noise, faults and attacks. The erroneous data they provide can lead the problem-solving process in the wrong direction and degrade the performance of distributed machine learning algorithms. This paper considers the problem of decentralized learning in the presence of unreliable agents. First, we rigorously analyze the effect of erroneous updates (in ADMM-based learning iterations) on the convergence behavior of the multi-agent optimization. We show that the algorithm converges to a neighborhood of the optimal solution and characterize the neighborhood size analytically. Next, we provide guidelines for multi-agent system design to achieve faster problem solving. We also provide necessary conditions on the falsified updates for exact convergence to the optimal solution. Finally, to mitigate the influence of unreliable agents, we propose a robust variant of ADMM and show its resilience to unreliable agents.
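For reference, one common form of decentralized consensus ADMM over a graph with neighborhoods N_i is given below (stated up to constant factors and only for illustration); an unreliable agent is one that shares a corrupted x_j in these updates.

```latex
x_i^{k+1} = \arg\min_{x_i} \; f_i(x_i) + x_i^{\top}\lambda_i^{k}
            + \rho \sum_{j \in \mathcal{N}_i} \Bigl\| x_i - \tfrac{1}{2}\bigl(x_i^{k} + x_j^{k}\bigr) \Bigr\|^2,
\qquad
\lambda_i^{k+1} = \lambda_i^{k} + \rho \sum_{j \in \mathcal{N}_i} \bigl( x_i^{k+1} - x_j^{k+1} \bigr),
```

where f_i is agent i's local objective, x_i its local copy of the decision variable, and \lambda_i its dual variable.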
SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
Presented by Ao Ren
Advisor: Dr. Yanzhi Wang
Related research area(s): Intelligent Systems
Poster number EECS 1-21
With the recent advances in wearable devices and the Internet of Things (IoT), it has become very attractive to implement deep convolutional neural networks (DCNNs) on embedded and portable systems. Stochastic computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has high potential for implementing DCNNs with high scalability and ultra-low hardware footprint. Since multiplications and additions can be calculated using AND gates and multiplexers in SC, significant reductions in power (energy) and hardware footprint can be achieved compared to conventional binary arithmetic implementations. We present the first comprehensive design and optimization framework for SC-based DCNNs (SC-DCNNs), using a bottom-up approach. We first present optimal designs of the function blocks that perform the basic DCNN operations, i.e., inner product, pooling, and activation. Then we propose optimal designs of four types of combinations of basic function blocks, named feature extraction blocks, which are in charge of extracting features from input feature maps. In addition, weight storage methods are proposed and investigated to reduce the area and power (energy) consumption of storing weights. Finally, the whole SC-DCNN implementation is optimized, with feature extraction blocks carefully selected, to minimize area and power (energy) consumption while maintaining a high network accuracy level.
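A minimal software sketch of the gate-level arithmetic mentioned above, in the simpler unipolar format (values in [0, 1]): multiplication is a bitwise AND of two bit-streams and scaled addition is a multiplexer. The bipolar [-1, 1] encoding used in this work follows the same stream-based principle with correspondingly adapted gate-level arithmetic; stream length and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4096                                    # bit-stream length (illustrative)

def to_stream(p):
    """Unipolar SC encoding: value p in [0, 1] -> bit-stream with P(bit=1) = p."""
    return (rng.random(L) < p).astype(np.uint8)

a, b = 0.6, 0.5
sa, sb = to_stream(a), to_stream(b)

prod = sa & sb                              # multiplication with an AND gate
sel = to_stream(0.5)
add = np.where(sel, sa, sb)                 # scaled addition (a + b) / 2 with a MUX

print(prod.mean(), add.mean())              # ~0.30 and ~0.55
```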
Statistical Reinforcement Learning Based Joint Antenna Selection and User Scheduling for Single-Cell Massive MIMO Systems
Presented by Mangqing Guo
Advisor: Dr. M. Cenk Gursoy
Related research area(s): Other
Poster number EECS 1-22
A statistical reinforcement learning based joint antenna selection and user scheduling method to improve the energy efficiency (EE) of a single-cell massive MIMO system is investigated. The uplink and downlink transmissions are considered jointly. We also consider a limit on the number of radio frequency (RF) chains in massive MIMO systems. The original energy-efficiency maximization problem is solved via a two-step approach: first determine the optimal subset of antennas at the base station (BS), and then obtain the optimal subset of users given the selected antennas. We note that the EE initially increases with the number of antennas at the BS and then decreases, and that random antenna selection already performs very close to optimal antenna selection. Hence, we can determine the optimal number of antennas to select at the BS via a typical bisection algorithm. The original problem can then be transformed into a combinatorial optimization problem. Finally, we use statistical reinforcement learning methods to solve this combinatorial optimization problem.
Throughput Analysis with Content Caching and RRH Association in Cloud-Radio Access Networks
Presented by Yang Yang
Advisor: Dr. Mustafa Gursoy
Related research area(s): Energy Sources, Conversion, and Conservation
Poster number EECS 1-23
Cloud-Radio Access Network (C-RAN) is a centralized, cloud computing-based architecture for radio access networks that supports 2G, 3G, 4G and future wireless communication standards. However, due to the introduction of caches at the base stations and the cloud center, how contents are cached and how users are associated with the Remote Radio Heads (RRHs) have a significant influence on the network. In our recent work, local content caching algorithms and RRH association are investigated under C-RAN constraints. In particular, we first propose a pre-ideal user association algorithm to demonstrate the association scenario in the desired best-performance case. Then, a content caching method is presented to explain how the contents should be stored. Effective capacity is used as the metric in the performance comparison, and it is also the objective we maximize in practical RRH association scenarios.
Towards Ultra-High Performance and Energy Efficiency of Deep Learning Systems: An Algorithm-Hardware Co-Optimization Framework
Presented by Caiwen Ding
Advisor: Dr. Yanzhi Wang
Related research area(s): Intelligent Systems; Other
Poster number EECS 1-24
Hardware acceleration of deep learning systems has been extensively investigated in industry and academia. The aim of this paper is to achieve ultra-high energy efficiency and performance for hardware implementations of deep neural networks (DNNs). An algorithm-hardware co-optimization framework is developed, which is applicable to different DNN types, sizes, and application scenarios. The algorithm part adopts general block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. It applies to both fully-connected and convolutional layers, and the effectiveness of the method is backed by a mathematically rigorous proof. The proposed algorithm reduces computational complexity per layer from O($n^2$) to O($n\log n$) and storage complexity from O($n^2$) to O($n$), for both training and inference. The hardware part consists of highly efficient \emph{Field Programmable Gate Array} (FPGA)-based implementations using effective reconfiguration, batch processing, deep pipelining, resource re-use, and hierarchical control. Experimental results demonstrate that the proposed framework achieves at least a 152X speedup and a 71X energy efficiency gain compared with the IBM TrueNorth processor under the same test accuracy. It achieves at least a 31X energy efficiency gain compared with the reference FPGA-based work.
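The complexity reduction comes from the fact that a circulant block can be multiplied with a vector via the FFT; a minimal sketch of that core operation is below (block partitioning and the DNN-specific details are omitted).

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix defined by first column c with vector x
    in O(n log n) via the FFT (circular convolution of c and x)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Check against the explicit O(n^2) product for a small case
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.array([np.roll(c, i) for i in range(n)]).T    # explicit circulant matrix
assert np.allclose(C @ x, circulant_matvec(c, x))
```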
Cache-miss Oblivious Data Shuffling on Hardware Enclaves
Presented by Ju Chen
Advisor: Dr. Yanzhi Tang
Related research area(s): Security
Poster number EECS 1-25
Memory-access based side-channel attacks are real and serious security problems. By exploiting these vulnerabilities, attackers can extract sensitive information such as secret keys and user profiles from programs run on a shared execution platform. Recently discovered attacks such as Meltdown and Spectre even allow a program to read arbitrary information in memory by combining such vulnerabilities with hardware design flaws. This work takes the initiative to mitigate memory-access based side-channel attacks. Our approach is to enforce the obliviousness of the cache misses emitted during a program's execution with respect to sensitive data. The novelty lies in combining the detection of cache misses with externally oblivious algorithms. The evaluation results show that our approach outperforms related works in performance and scalability, without sacrificing security.
Uplink Coverage in Heterogeneous mmWave Cellular Networks with User-Centric Small Cell Deployments
Presented by Xueyuan Wang
Advisor: Dr. Mustafa Gursoy
Related research area(s): Other
Poster number EECS 1-26
Demand for mobile data has been growing rapidly in recent years, resulting in a global bandwidth shortage for wireless service providers. For future cellular networks, including 5G wireless systems, two key techniques for capacity improvement will be network densification and the use of higher frequencies, such as the millimeter wave (mmWave) bands. As yet another trend, heterogeneous cellular wireless networks are being developed to support higher data rates and satisfy the increasing user demand for broadband wireless services, by supporting the coexistence of denser but lower-power small-cell base stations (BSs) with the conventional high-power, low-density large-cell BSs. Motivated by the fact that mmWave communications and user-centric deployments have been attracting growing attention as significant components of next-generation wireless networks, our work focuses on K-tier heterogeneous uplink mmWave cellular networks with UE-centric small-cell deployments. Our work provides insights into setting up next-generation wireless networks and into how base stations should be deployed to obtain high communication quality.