Scaling up machine learning (ML), data mining (DM), and reasoning algorithms from artificial intelligence (AI) to massive datasets is a major technical challenge in the era of "Big Data". The past ten years have seen the rise of multi-core and GPU-based computing. In parallel and distributed computing, frameworks such as OpenMP, OpenCL, and Spark continue to appear, facilitating the scaling up of ML/DM/AI algorithms at higher levels of abstraction. We invite novel work that advances the three fields of ML/DM/AI through the development of scalable algorithms or computing frameworks. Ideal submissions would be characterized as scaling up X on Y, where potential choices for X and Y are listed below.

Scaling up
recommender systems
gradient descent algorithms
deep learning
sampling/sketching techniques
clustering (agglomerative techniques, graph clustering, clustering heterogeneous data)
classification (SVM and other classifiers)
SVD
probabilistic inference (Bayesian networks)
logical reasoning
graph algorithms and graph mining

On
Multi-core architectures/frameworks (OpenMP)
Many-core (GPU) architectures/frameworks (OpenCL, OpenACC, CUDA, Intel TBB)
Distributed systems/frameworks (GraphLab, MPI, Hadoop, Spark, Storm, Mahout, etc.)
Abbreviation
ParLearning
City
Orlando
Country
United States
Paper Deadline
Start Date
End Date
Abstract