Scalable Data Mining (CS60021)
Instructor: Sourangshu Bhattacharya
Teaching Assistants: Soumi Das and Bidisha Samanta
Class Schedule: Monday (8:00 – 9:55), Tuesday (12:00 – 12:55)
Classroom: CSE-107
Website: http://cse.iitkgp.ac.in/~sourangshu/coursefiles/cs60021-2019a.html
First Meeting: Tuesday, 16 July 2019, 12:00 pm
Content:
Syllabus:
In this course, we discuss algorithmic techniques as well as software paradigms that allow one to write scalable algorithms for common data mining tasks.
Software paradigms:
Big Data Processing: Motivation and Fundamentals.
Map-reduce framework. Functional programming and Scala.
Programming using the map-reduce paradigm. Case studies: Finding
similar items, PageRank, Matrix factorization.
TensorFlow: Motivation, Computation graphs,
Operations, Tensors, Example programs.
Algorithmic techniques:
Dimensionality reduction: Random
projections, Johnson-Lindenstrauss lemma, JL transforms,
sparse JL-transform.
Finding similar items: Shingles, Minhashing, Locality
Sensitive Hashing families.
Stream processing: Motivation, Sampling, Bloom
filtering, Count-based sketches: FM sketch, AMS
sketch. Hash-based sketches: Count sketch.
Optimization and Machine learning algorithms:
Optimization algorithms: Stochastic gradient descent,
Variance reduction, Momentum algorithms, ADAM.
Algorithms for distributed optimization: Distributed
stochastic gradient descent and related methods. ADMM and
decomposition methods.
Textbooks:
- Mining of Massive Datasets, 2nd edition. Jure Leskovec, Anand Rajaraman, Jeff Ullman. Cambridge University Press. http://www.mmds.org/
- TensorFlow for Machine Intelligence: A Hands-On Introduction to Learning Algorithms. Sam Abrahams et al. Bleeding Edge Press.
- Hadoop: The Definitive Guide. Tom White. O'Reilly Media.
- Recent literature.
Course Material:
- Hadoop MapReduce introduction and programming: slides
- Hadoop System, HDFS: slides slides
- Reference: Hadoop: The Definitive Guide. Tom White. O'Reilly Media. Chapters 2, 3, 5, and 6
- Spark: Programming, Computation Graphs, System slides
- Reference: Spark RDD programming guide: http://spark.apache.org/docs/latest/rdd-programming-guide.html
- Practice Problems for Hadoop and Spark: here
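To make the map-reduce programming model above concrete, here is a minimal pure-Python sketch of the classic word-count job: the mapper emits (word, 1) pairs, a shuffle step groups pairs by key, and the reducer sums the counts per key. It only simulates the Hadoop/Spark execution model in a single process; names such as mapper, reducer, and word_count are illustrative, not part of any framework API.

```python
from collections import defaultdict

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in the input line."""
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    """Reduce phase: sum all counts that were shuffled to this key."""
    return word, sum(counts)

def word_count(lines):
    # Shuffle: group intermediate (key, value) pairs by key.
    groups = defaultdict(list)
    for line in lines:
        for word, count in mapper(line):
            groups[word].append(count)
    # Reduce each group independently (on a cluster, these run in parallel).
    return dict(reducer(w, c) for w, c in groups.items())

if __name__ == "__main__":
    docs = ["big data needs scalable algorithms",
            "scalable data mining needs map reduce"]
    print(word_count(docs))
```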
- TensorFlow: Programming, Computation Graphs: slides
- Comparison with PyTorch, Static vs Dynamic Graphs: slides
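As a quick illustration of the static computation-graph model covered in these slides, the sketch below assumes the TensorFlow 1.x API (current when this course was offered): operations are first added to a graph without being executed, and a session later runs the graph on concrete values, which is the main contrast with PyTorch's dynamic graphs.

```python
import tensorflow as tf  # assumes TensorFlow 1.x (static graphs)

# Build the graph: nothing is computed yet, only op nodes are added.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
W = tf.Variable(tf.ones([3, 1]), name="W")
b = tf.Variable(tf.zeros([1]), name="b")
y = tf.matmul(x, W) + b  # another node in the default graph

# Execute the graph: a session binds the graph to actual values.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # [[6.]]
```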
- Streaming algorithms: Reservoir Sampling, Bloom filters: Slides
- Reference: MMDS Book: chapter 4
- Practice Problems: here
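A minimal sketch of reservoir sampling (Algorithm R) from the streaming slides above: it maintains a uniform sample of k items from a stream of unknown length by replacing a random reservoir entry with probability k/n when the n-th item arrives. The function name and parameters are illustrative.

```python
import random

def reservoir_sample(stream, k):
    """Return a uniform random sample of k items from an iterable of unknown length."""
    reservoir = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            reservoir.append(item)      # fill the reservoir with the first k items
        else:
            j = random.randrange(n)     # uniform in {0, ..., n-1}
            if j < k:                   # happens with probability k/n
                reservoir[j] = item
    return reservoir

if __name__ == "__main__":
    print(reservoir_sample(range(10**6), k=5))
```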
- Streaming algorithms: Count-distinct: Slides
- Streaming algorithms: Frequency Estimation: Slides
- Streaming algorithms: Count Sketch, Moment estimation, AMS sketch, Join size estimation, Range queries. Slides
- References: Some slides from "Data Stream Algorithms" by Andrew McGregor https://people.cs.umass.edu/~mcgregor/CS711S18/index.html
- Much of the material is discussed in "Sketch Techniques for Approximate Query Processing" by Graham Cormode: http://dimacs.rutgers.edu/~graham/pubs/papers/sk.pdf
- Practice Problems: here
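To accompany the frequency-estimation material, here is a small count-min sketch in the spirit of the Cormode survey linked above: each item is hashed into one counter per row, and the estimate is the minimum over rows, so collisions can only inflate the answer. Salting Python's built-in hash is a simplification for illustration rather than the pairwise-independent hash families analysed in the lectures; widths and depths below are arbitrary.

```python
import random

class CountMinSketch:
    """Count-min sketch: depth rows of width counters; estimates never undercount."""
    def __init__(self, width=2000, depth=5, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, item)) % self.width

    def add(self, item, count=1):
        for row, col in self._buckets(item):
            self.table[row][col] += count

    def estimate(self, item):
        # Minimum over rows: collisions only inflate counters, never deflate them.
        return min(self.table[row][col] for row, col in self._buckets(item))

if __name__ == "__main__":
    cms = CountMinSketch()
    for x in ["a"] * 100 + ["b"] * 10 + ["c"]:
        cms.add(x)
    print(cms.estimate("a"), cms.estimate("b"), cms.estimate("c"))
```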
- LSH: Shingles, Minhash, locality sensitive hashing. Slides
- LSH: Generalization to gap LSH, SimHash, Hamming distance, etc. Slides
- LSH: Multi-probe LSH, Learning to hash Slides
- LSH Practice Problems: here. More practice questions on Shingles and Minhash can be found in the MMDS book.
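A minimal sketch of minhash signatures over character shingles, as in the LSH slides above: each of several random hash functions contributes the minimum hash value over a document's shingle set, and the fraction of agreeing signature positions estimates the Jaccard similarity. The universal-hash form h(x) = (a*x + b) mod p is one standard choice; the constants and helper names here are illustrative.

```python
import random

def shingles(text, k=3):
    """Set of k-character shingles of a string."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def make_minhashers(num_hashes=100, prime=2**61 - 1, seed=0):
    rng = random.Random(seed)
    # One (a, b) pair per hash function: h(x) = (a*x + b) mod prime.
    return [(rng.randrange(1, prime), rng.randrange(prime)) for _ in range(num_hashes)], prime

def minhash_signature(shingle_set, hashers, prime):
    return [min((a * hash(s) + b) % prime for s in shingle_set) for a, b in hashers]

def estimated_jaccard(sig1, sig2):
    # Fraction of signature positions where the two documents agree.
    return sum(x == y for x, y in zip(sig1, sig2)) / len(sig1)

if __name__ == "__main__":
    hashers, p = make_minhashers()
    s1, s2 = shingles("scalable data mining"), shingles("scalable data science")
    sig1, sig2 = minhash_signature(s1, hashers, p), minhash_signature(s2, hashers, p)
    true_j = len(s1 & s2) / len(s1 | s2)
    print(f"true Jaccard: {true_j:.2f}, minhash estimate: {estimated_jaccard(sig1, sig2):.2f}")
```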
- Large scale machine learning: Slides
- ML as stochastic optimization.
- Reference: MMDS book, chapter 12
- Convergence proof and Convergence rate for SGD
- Reference: Nesterov, Yu, and J-Ph Vial. "Confidence level solutions for stochastic programming." Automatica 44, no. 6 (2008): 1559-1568.
- Reference: Shalev-Shwartz, Shai, and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014. Chapter 14.
- Accelerated SGD variants: Ruder, Sebastian. "An overview of gradient descent optimization algorithms." arXiv preprint arXiv:1609.04747 (2016).
- Finite sum SGD: SAGA, SVRG:
- Slides from: https://www.cs.ubc.ca/~schmidtm/SVAN16/ lecture 6.
- Johnson, Rie, and Tong Zhang. "Accelerating stochastic gradient descent using predictive variance reduction." In Advances in neural information processing systems, pp. 315-323. 2013.
- Schmidt, Mark, Nicolas Le Roux, and Francis Bach. "Minimizing finite sums with the stochastic average gradient." Mathematical Programming 162, no. 1-2 (2017): 83-112.
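To connect the "ML as stochastic optimization" view above with code, here is a minimal SGD sketch for least-squares linear regression: each step uses the gradient of a single randomly drawn example with a decaying step size, in the spirit of the convergence results cited above. The data, step-size schedule, and hyperparameters are made up for illustration.

```python
import numpy as np

def sgd_least_squares(X, y, epochs=20, eta0=0.1, seed=0):
    """Plain SGD on the objective (1/2n) * sum_i (x_i . w - y_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = eta0 / np.sqrt(t)              # decaying step size
            grad = (X[i] @ w - y[i]) * X[i]      # gradient on a single example
            w -= eta * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = np.array([2.0, -3.0, 0.5])
    X = rng.normal(size=(500, 3))
    y = X @ w_true + 0.01 * rng.normal(size=500)
    print(sgd_least_squares(X, y))   # should be close to w_true
```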
- Distributed optimization: Slides
- ADMM Reference: Boyd, Stephen, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. "Distributed optimization and statistical learning via the alternating direction method of multipliers." Foundations and Trends® in Machine Learning 3, no. 1 (2011): 1-122.
- Distributed SVM: Forero, Pedro A., Alfonso Cano, and Georgios B. Giannakis. "Consensus-based distributed support vector machines." Journal of Machine Learning Research 11, no. May (2010): 1663-1707.
- Weighted Parameter Averaging: Das, Ayan, Raghuveer Chanda, Smriti Agrawal, and Sourangshu Bhattacharya. "Distributed Weighted Parameter Averaging for SVM Training on Big Data." In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence. 2017.
- Optimization Practice Problems: here
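A minimal numpy sketch of the consensus-ADMM pattern from the Boyd et al. monograph above, applied to least squares split across a few worker nodes: each worker solves a local ridge-regularized subproblem, the averaging step plays the role of the master, and the scaled dual variables push the local copies toward agreement. The number of workers, rho, and the synthetic data are illustrative choices, not anything prescribed by the course.

```python
import numpy as np

def consensus_admm_least_squares(blocks, rho=1.0, iters=50):
    """Solve min_x sum_i 0.5*||A_i x - b_i||^2 via consensus ADMM over data blocks."""
    d = blocks[0][0].shape[1]
    m = len(blocks)
    x = [np.zeros(d) for _ in range(m)]   # local copies, one per worker
    u = [np.zeros(d) for _ in range(m)]   # scaled dual variables
    z = np.zeros(d)                       # consensus (global) variable
    for _ in range(iters):
        # Local x-updates (these would run in parallel, one per worker).
        for i, (A, b) in enumerate(blocks):
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(d),
                                   A.T @ b + rho * (z - u[i]))
        # Global averaging, then dual updates enforcing x_i = z.
        z = np.mean([x[i] + u[i] for i in range(m)], axis=0)
        for i in range(m):
            u[i] += x[i] - z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0, 3.0])
    blocks = []
    for _ in range(4):                     # four "workers", each with a data shard
        A = rng.normal(size=(100, 3))
        b = A @ w_true + 0.01 * rng.normal(size=100)
        blocks.append((A, b))
    print(consensus_admm_least_squares(blocks))   # close to w_true
```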
Assignments: