Abbreviation
ParLearning
City
Orlando
Country
United States
Paper Deadline
Start Date
End Date
Abstract

Scaling up machine-learning (ML), data-mining (DM), and Artificial Intelligence (AI) reasoning algorithms for massive datasets is a major technical challenge in the era of "Big Data". The past ten years have seen the rise of multi-core and GPU-based computing. In parallel and distributed computing, frameworks such as OpenMP, OpenCL, and Spark continue to appear, facilitating the scaling up of ML/DM/AI algorithms through higher levels of abstraction. We invite novel work that advances these three fields through the development of scalable algorithms or computing frameworks. Ideal submissions can be characterized as scaling up X on Y, where potential choices for X and Y are listed below.

Scaling up
recommender systems
gradient descent algorithms
deep learning
sampling/sketching techniques
clustering (agglomerative techniques, graph clustering, clustering heterogeneous data)
classification (SVM and other classifiers)
SVD
probabilistic inference (Bayesian networks)
logical reasoning
graph algorithms and graph mining

On
multi-core architectures/frameworks (OpenMP)
many-core (GPU) architectures/frameworks (OpenCL, OpenACC, CUDA, Intel TBB)
distributed systems/frameworks (GraphLab, MPI, Hadoop, Spark, Storm, Mahout, etc.)