Research @ DCL Lab

Our work focuses on distributed computing methods for scalable machine learning, i.e., Distributed Computing for Machine Learning (DC4ML). We also apply machine learning techniques to scale distributed computing algorithms, particularly shared-memory concurrent data structures (Machine Learning for Distributed Computing, ML4DC). Check out the specific topics below for more detail.


Federated Learning

Federated Learning is a machine learning approach that allows multiple parties to collaboratively train a shared model without exchanging their raw data.
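As an illustration, the core aggregation step of federated averaging (FedAvg) can be sketched as follows. The parameter vectors and client dataset sizes are invented for the example; this is not the lab's implementation.

```python
# Sketch of FedAvg aggregation: clients train locally and share only model
# parameters, never raw data. The server averages parameters, weighting each
# client by its local dataset size. Values below are illustrative.

def fedavg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# Three clients with different data volumes contribute their local models.
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
global_model = fedavg(params, sizes)  # [3.5, 4.5]
```

The weighting matters: the third client holds half the data, so its parameters pull the global model proportionally harder.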

Distributed Machine Learning

Distributed machine learning (DML) utilizes multiple computers to train a single machine learning model, partitioning the data and computation across workers.
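A minimal sketch of one data-parallel training step, assuming a toy one-parameter linear model and two invented data shards; the plain Python averaging stands in for the all-reduce a real system would perform over the network.

```python
# Data-parallel training sketch: each worker computes a gradient on its own
# shard, gradients are averaged (the "all-reduce"), and every worker applies
# the same update. Model and data are illustrative.

def local_gradient(w, shard):
    # Gradient of mean squared error for the model y = w * x on one shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.05):
    grads = [local_gradient(w, s) for s in shards]
    avg = sum(grads) / len(grads)  # stands in for an all-reduce
    return w - lr * avg

# Two workers each hold half of the data drawn from the line y = 2x.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
# w converges toward the true slope 2.0
```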

Learned Data Structures

Learned data structures use machine learning techniques to improve the performance of queries. Traditional index structures can be replaced or augmented by models that learn the distribution of the stored keys.
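A toy sketch of the idea, assuming a sorted integer array and a hand-rolled linear model that predicts a key's position, followed by a local correction step. This is an illustration only, not the lab's actual designs.

```python
# Toy learned index: fit key -> position with least squares, then correct
# the model's prediction with a short local search in the sorted array.

def fit_linear(keys):
    n = len(keys)
    mean_k = sum(keys) / n
    mean_i = (n - 1) / 2
    cov = sum((k - mean_k) * (i - mean_i) for i, k in enumerate(keys))
    var = sum((k - mean_k) ** 2 for k in keys)
    slope = cov / var
    return slope, mean_i - slope * mean_k

def lookup(keys, key, model):
    slope, intercept = model
    pos = min(max(int(round(slope * key + intercept)), 0), len(keys) - 1)
    # Walk from the predicted position to the true one.
    while pos > 0 and keys[pos] > key:
        pos -= 1
    while pos < len(keys) - 1 and keys[pos] < key:
        pos += 1
    return pos if keys[pos] == key else -1

keys = [2, 4, 6, 8, 10, 12]
model = fit_linear(keys)
pos = lookup(keys, 8, model)  # keys[3] == 8
```

On uniformly distributed keys the model predicts positions exactly, so the correction loop does no work; the interesting research questions arise for skewed distributions, error bounds, and updates.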

Concurrent Data Structures

Concurrent data structures are designed to handle simultaneous access and modification by multiple threads or processes without sacrificing correctness.
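A minimal illustration of the problem being solved, using a coarse-grained lock in Python; research in this area concerns far more scalable fine-grained and lock-free designs, which this sketch does not attempt.

```python
import threading

# Coarse-grained concurrent stack: a single lock guards every operation,
# so concurrent pushes cannot corrupt the underlying list.
class ConcurrentStack:
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def push(self, item):
        with self._lock:
            self._items.append(item)

    def pop(self):
        with self._lock:
            return self._items.pop() if self._items else None

stack = ConcurrentStack()
threads = [
    threading.Thread(target=lambda: [stack.push(i) for i in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All 4000 concurrent pushes are retained.
```

The single lock serializes every operation, which is exactly the scalability bottleneck that fine-grained locking and lock-free algorithms aim to remove.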

Real Time Graph Analytics

Given a dataset representing the entities and the relations among them, performing analysis via a graph representation enables queries such as connectivity, centrality, and community detection; the real-time setting requires keeping these results current as the graph changes.
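As a point of reference, here is a small batch PageRank sketch on an invented three-node graph; real-time graph analytics would instead update such scores incrementally as vertices and edges arrive, rather than recomputing from scratch.

```python
# Batch PageRank over an adjacency-list graph. The graph, damping factor,
# and iteration count are illustrative; the graph has no dangling nodes.
def pagerank(adj, damping=0.85, iters=50):
    nodes = list(adj)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - damping) / len(nodes) for v in nodes}
        for v, outs in adj.items():
            share = rank[v] / len(outs) if outs else 0.0
            for u in outs:
                nxt[u] += damping * share
        rank = nxt
    return rank

adj = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(adj)  # "c" has two in-links and ranks above "b"
```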

Federated Neural Architecture Search

Given a target dataset and a computation budget, neural architecture search automatically discovers a well-suited model architecture; federated neural architecture search carries out this search across clients without centralizing their data.
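A toy view of architecture search as black-box optimization under a fixed evaluation budget, with an invented search space and scoring function; federated NAS would aggregate such evaluations across clients rather than run them centrally.

```python
import random

# Random-search sketch of NAS: sample candidate configurations from a search
# space, score each one under a fixed budget, and keep the best. The search
# space and the stand-in scoring function are made up for the example.
random.seed(0)
search_space = {"layers": [1, 2, 3], "width": [16, 32, 64]}

def score(cfg):
    # Stand-in for validation accuracy: peaks at 2 layers of width 32.
    return -abs(cfg["layers"] - 2) - abs(cfg["width"] - 32) / 16

budget = 20  # number of candidate evaluations we can afford
candidates = [
    {k: random.choice(v) for k, v in search_space.items()}
    for _ in range(budget)
]
best = max(candidates, key=score)
```

Random search is only the simplest baseline; the research questions lie in smarter search strategies and in distributing the evaluation cost across federated clients.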