UCLA Adaptive Systems Laboratory


TU Darmstadt (Nov. 7, 2014)
Plenary Lecture
LOEWE Priority Program Cocoon
TU Darmstadt, Germany

Dictionary Learning over Distributed Models

November 7, 2014

Data mining applications rely on the premise that it is possible to benefit immensely from leveraging information from different users. The range of benefits, and the computational cost necessary to analyze the data, depend on how the vast amount of information is mined. It is sometimes advantageous to collect the information from all users at a central location for processing and analysis, and many current implementations rely on this centralized approach. However, the rapid increase in the number of users, coupled with privacy and communication constraints related to transmitting, storing, and analyzing huge amounts of data at remote central locations, has been serving as strong motivation for the development of decentralized solutions to learning and data mining.

In this talk, we examine the distributed dictionary learning problem over a network of N learners. Dictionary learning is a useful procedure by which dependencies among input features can be represented in terms of suitable bases. It has found applications in many machine learning and inference tasks, including image denoising, dimensionality reduction, bi-clustering, feature extraction and classification, and novel document detection. We assume the network is connected, meaning that any two arbitrary agents are either connected directly or by means of a path passing through other agents. We do not require the agents to share their data sets, but only a statistic that is representative of their local information.
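The connectivity assumption above (any two agents linked directly or through intermediate agents) can be checked mechanically from the network's adjacency structure. The sketch below is an illustration, not part of the talk's material: it runs a breadth-first search from one agent over a hypothetical 0/1 adjacency matrix `adj` and reports whether every agent is reachable.

```python
from collections import deque

def is_connected(adj):
    """Return True if every agent is reachable from agent 0
    via paths in the (symmetric) 0/1 adjacency matrix adj."""
    n = len(adj)
    seen = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if adj[i][j] and j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == n
```

For a symmetric adjacency matrix, reachability from any single agent is equivalent to connectivity of the whole network, which is why one BFS suffices.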

Dictionary learning usually alternates between two steps: (i) an inference (sparse coding) step and (ii) a dictionary update step. However, with the increasing complexity of various learning tasks, the size of the learning dictionaries is becoming demanding in terms of memory and computing requirements. It is therefore important to study scenarios where the dictionary need not be available in a single central location but may instead be spread out over multiple locations. This is particularly true in big data scenarios, where multiple large dictionary models may already be available at separate locations and it is not feasible to aggregate all dictionaries in one location due to communication or privacy considerations. This observation motivates us to examine how to learn a dictionary model that is stored over a network of agents, where each agent is in charge of only a portion of the dictionary elements.
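The two-step alternation described above can be sketched in a minimal centralized form (the talk's subject is the distributed variant, where the columns of the dictionary are partitioned across agents; that partitioning is omitted here). This is an illustrative sketch, not the speaker's algorithm: the sparse coding step uses ISTA on an l1-regularized least-squares subproblem, and the dictionary update is a plain gradient step followed by column normalization. All function names and parameters are choices for the example.

```python
import numpy as np

def ista(W, y, lam=0.1, n_iter=50):
    """Step (i), sparse coding: min_x 0.5*||y - W x||^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(W, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(W.shape[1])
    for _ in range(n_iter):
        g = W.T @ (W @ x - y)              # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def dictionary_learning(Y, n_atoms, lam=0.1, mu=0.05, n_epochs=20, seed=0):
    """Alternate (i) sparse coding of each data column and
    (ii) a gradient update of the dictionary W with unit-norm atoms."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = rng.standard_normal((m, n_atoms))
    W /= np.linalg.norm(W, axis=0)          # unit-norm dictionary atoms
    for _ in range(n_epochs):
        # (i) infer sparse codes for all data columns with W fixed
        X = np.column_stack([ista(W, Y[:, t], lam) for t in range(n)])
        # (ii) gradient step on 0.5*||Y - W X||_F^2 with X fixed, then renormalize
        W -= mu * (W @ X - Y) @ X.T / n
        W /= np.maximum(np.linalg.norm(W, axis=0), 1e-12)
    return W, X
```

In the distributed setting discussed in the talk, each agent would hold only a subset of the columns of `W`, so both steps must be carried out cooperatively over the network rather than by the single loop above.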



 
