Research @ DCL Lab

Our work focuses on distributed computing methods applied to scalable machine learning, i.e., Distributed Computing for Machine Learning (DC4ML). We also employ machine learning techniques to scale distributed computing algorithms, particularly shared-memory concurrent data structures: Machine Learning for Distributed Computing (ML4DC). Check out the specific topics for more detail.


Federated Learning

Federated Learning, which emerged as a special case of Distributed Machine Learning, is now an establis...
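To give a flavor of the setting, here is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation rule, on a toy 1-D least-squares problem. The data, model, and hyperparameters are purely illustrative and not drawn from our work.

```python
# Illustrative FedAvg: clients train locally on private data; the
# server only sees model parameters, which it averages weighted by
# each client's dataset size.

def local_update(w, data, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one client's local data."""
    for _ in range(epochs):
        # Gradient of the mean squared error (w - x)^2 over local points.
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(w0, client_data, rounds=20):
    """Server loop: broadcast model, train locally, average weighted."""
    w = w0
    sizes = [len(d) for d in client_data]
    total = sum(sizes)
    for _ in range(rounds):
        local_models = [local_update(w, d) for d in client_data]
        # Aggregate: weighted average, so larger clients count more.
        w = sum(n * wl for n, wl in zip(sizes, local_models)) / total
    return w

clients = [[1.0, 2.0, 3.0], [5.0], [4.0, 4.0]]
w = fed_avg(0.0, clients)
# Converges toward the mean of all client points without the server
# ever touching the raw data.
```

The key point the sketch captures is the privacy boundary: only parameters cross the network, never the clients' raw samples.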

Distributed Machine Learning

Distributed machine learning (DML) utilizes multiple computers to collaboratively train a single ...
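A minimal sketch of the most common DML pattern, synchronous data-parallel training: each worker computes a gradient on its own data shard, gradients are averaged by a collective all-reduce, and every worker applies the identical update. The `all_reduce_mean` function here is a single-process stand-in for the real collective, and all numbers are illustrative.

```python
# Illustrative synchronous data-parallel SGD across simulated workers.

def shard_gradient(w, shard):
    """Mean-squared-error gradient on one worker's data shard."""
    return sum(2 * (w - x) for x in shard) / len(shard)

def all_reduce_mean(values):
    """Stand-in for a collective all-reduce (e.g. over MPI or NCCL)."""
    return sum(values) / len(values)

def train(w, shards, lr=0.1, steps=50):
    for _ in range(steps):
        grads = [shard_gradient(w, s) for s in shards]
        # Every worker applies the same averaged gradient, so all
        # replicas stay in lockstep.
        w -= lr * all_reduce_mean(grads)
    return w

shards = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
w = train(0.0, shards)
```

In a real deployment the same structure holds, but the all-reduce is where communication cost and stragglers enter, which is what much of the systems work in this area targets.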

Learned Data Structures

Learned data structures use machine learning techniques to improve the performance of queries. Tr...
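A minimal sketch of the core idea: a model predicts a key's position in a sorted array, and a search window bounded by the model's maximum training error corrects the prediction. Production designs (e.g. recursive model indexes) use hierarchies of models; this single linear model is illustrative only.

```python
# Illustrative learned index: linear model + error-bounded local search.
import bisect

class LearnedIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Least-squares fit of position i as a linear function of key.
        mean_k = sum(self.keys) / n
        mean_i = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in self.keys)
        self.slope = (sum((k - mean_k) * (i - mean_i)
                          for i, k in enumerate(self.keys)) / var) if var else 0.0
        self.bias = mean_i - self.slope * mean_k
        # Max prediction error bounds the correction search window.
        self.err = max(abs(self._predict(k) - i)
                       for i, k in enumerate(self.keys))

    def _predict(self, key):
        return round(self.slope * key + self.bias)

    def lookup(self, key):
        """Return the key's index in the sorted array, or -1 if absent."""
        p = self._predict(key)
        lo = max(0, p - self.err)
        hi = min(len(self.keys), p + self.err + 1)
        j = bisect.bisect_left(self.keys, key, lo, hi)
        return j if j < len(self.keys) and self.keys[j] == key else -1

idx = LearnedIndex([2, 3, 5, 7, 11, 13, 17, 19])
```

When the key distribution is learnable, the search window is tiny regardless of data size, which is the source of the speedups over classical index structures.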

Concurrent Data Structures

Concurrent data structures are designed to handle simultaneous access and modification by multipl...
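As a baseline illustration, here is a coarse-grained concurrent stack: a single lock serializes every operation. Fine-grained and lock-free designs relax exactly this bottleneck; the class and workload below are illustrative, not a structure from our papers.

```python
# Illustrative coarse-grained concurrent stack: one lock guards all
# state, so concurrent pushes never lose updates.
import threading

class ConcurrentStack:
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def push(self, x):
        with self._lock:   # writers take the single global lock
            self._items.append(x)

    def pop(self):
        with self._lock:   # pops serialize behind the same lock
            return self._items.pop() if self._items else None

stack = ConcurrentStack()
threads = [threading.Thread(target=lambda: [stack.push(i) for i in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the stack: all 4 * 1000 pushes survive under contention.
n = 0
while stack.pop() is not None:
    n += 1
```

The coarse lock makes correctness trivial but throughput collapses under contention, which is precisely the trade-off that scalable concurrent designs attack.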