As "Big Data" grows bigger at a rapid pace, Machine Learning (ML) techniques have come to play a central role in automatic data processing and analytics across a wide spectrum of application domains. However, lack of well-defined principles in choosing ML algorithms suitable for a given problem remains a major challenge. Today this choice depends primarily upon empirical rules such as the size of training data, number of distinct labels, need for interpretable decision boundaries, and real-time memory constraints. It is often also guided by pragmatic factors such as readily available code and comfort level of the programmers, and empirically determined parameters finely tuned by repeated experiments. For instance, the Netflix prize awarded in 2009 showed that developing highly efficient and scalable machine learning algorithms takes years of trial and error to achieve high prediction accuracy. In this seminar, we will examine the foundations of the next generation of domain agnostic ML techniques which will be able to encapsulate "a priori" knowledge of ML successes across domains, in a deep analytics framework. The goal is to transform the complex alchemy involved in using ML techniques that take years to master into a simple science that can be readily adapted by practitioners across fields.