Seminar on Machine Learning and Data Mining
The seminar is available in TUCaN under module number 20-00-0102.
When and Where?
The kick-off meeting will take place on Tuesday, 23.04.2019 at 17:10-18:50 in room S1|10 211 (access via S1|11, Alexanderstraße 4).
Do not miss the kick-off meeting if you want to participate in the seminar!
All further meetings (30.04.-16.07.) will take place on Tuesdays at 17:10-18:50 in room S2|02 E202 (Piloty building).
Content
In this semester's machine learning and data mining seminar, we discuss papers related to Domain Knowledge for Deep Learning.
In case you want to recap the basics of deep learning, here are some resources:
- LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
- Schmidhuber, J. Deep Learning in neural networks: An overview. Neural Networks 61, 85–117 (2015).
- Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).
Organization
The language used in the seminar will be English.
The topics for the talks will be assigned in the kick-off meeting.
Prior knowledge is not strictly necessary, but familiarity with data mining and machine learning will be helpful. Participation is limited to 20 students. If more students want to participate, those with prior knowledge in data mining and knowledge discovery will be preferred. The selection will be made at the kick-off meeting. If there are more qualified candidates than topics, we will select randomly.
The students are expected to give a 20-minute talk on the material they are assigned, followed by feedback, questions, and discussion. Although each topic is typically associated with a single paper, the point of the talk is not to exactly reproduce the entire contents of the paper, but to communicate the key ideas of the methods it introduces. Thus, the content of the talk should exceed the scope of the paper and demonstrate that a thorough understanding of the material was achieved. See also our general advice on giving talks.
For further questions, feel free to send an email to ml-sem@ke.tu-darmstadt.de. No prior registration is needed; however, please send us an email anyway so that we can estimate the number of participants beforehand and have your e-mail address for possible announcements. Also make sure that you are registered in TUCaN.
Talks
The talks are expected to be accompanied by slides. The students will have to send the slides to ml-sem@ke.tu-darmstadt.de one week before the talk. We will use this opportunity to provide early feedback on common problems such as too many slides, too much text on the slides, small font sizes, etc. The talk and the slides should be in English.
There will be two talks in each meeting. As mentioned above, each topic is associated with one paper, but the talk should not reproduce the content of the paper exactly; rather, it should communicate the key ideas of the introduced method.
All papers should be freely available on the internet or in the ULB. Note that some paper sources such as SpringerLink often work only on campus networks (sometimes not even via VPN). If you cannot find a paper, contact us.
Grading
The slides, the presentation, and the question-and-answer part of the talk will influence the overall grade. Furthermore, students are expected to actively participate in the discussions, and this will also be part of the final grade.
To achieve a grade in the 1.x range, the talk needs to go beyond a mere recitation of the given material and include your own ideas, your own experience, or even demos. An exact recitation of the papers will lead to a grade in the 2.x range. A weak presentation and lack of engagement in the discussions may lead to a grade in the 3.x range, or worse. Please also read our guidelines for giving a talk very carefully.
In addition to the grading, we will also give public feedback on the talks immediately after the talks, and we are considering a best presentation award at the end of the seminar.
Topics
Here is a list of topics; each topic consists of two seminar talks (indicated by the bullet list). For each seminar talk, we give 1-3 papers as a starting point. However, note that you are not supposed to reproduce the papers in every detail. For most talks, you should explain the method that is introduced in the paper(s) and show where and how it can be used. Often you will find much better examples or use cases in later publications on these methods. See also our guidelines for giving a talk.
Limitations of Deep Learning (14.5.)
- Malte L.: Sun, Chen, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta, ‘Revisiting Unreasonable Effectiveness of Data in Deep Learning Era’, in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 843–852 <https://doi.org/10.1109/ICCV.2017.97>. Slides.
- Lars E.: Zhang, Chiyuan, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals, ‘Understanding Deep Learning Requires Rethinking Generalization’, 2016 <https://arxiv.org/abs/1611.03530>. Slides.
Posterior Regularization (21.5.)
- NN: Ganchev, Kuzman, João Graça, Jennifer Gillenwater, and Ben Taskar, ‘Posterior Regularization for Structured Latent Variable Models’, Journal of Machine Learning Research, 11 (2010), 2002–49
- Patricia H.: Hu, Zhiting, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing, ‘Harnessing Deep Neural Networks with Logic Rules’, in Proceedings of ACL, 2016 <https://doi.org/10.18653/v1/P16-1228>. Slides.
Knowledge Distillation (28.5.)
- Nicolas D.: Buciluǎ, Cristian, Rich Caruana, and Alexandru Niculescu-Mizil, ‘Model Compression’, in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006, pp. 535–41 <https://doi.org/10.1145/1150402.1150464>
- Alexander M.: Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean, ‘Distilling the Knowledge in a Neural Network’, 2015, pp. 1–9 <https://arxiv.org/abs/1503.02531>
Deep Learning and Decision Trees (4.6.)
- Leon K. (moved to 9.7.): Wu, Mike, Michael C. Hughes, Sonali Parbhoo, Maurizio Zazzi, Volker Roth, and Finale Doshi-Velez, ‘Beyond Sparsity: Tree Regularization of Deep Models for Interpretability’, arXiv preprint arXiv:1711.06178, 2017
- Lukas W.: Yang, Yongxin, Irene Garcia Morillo, and Timothy M. Hospedales, ‘Deep Neural Decision Trees’, CoRR, abs/1806.06988 (2018). Slides.
Logic-based Neural Networks (11.6.)
- Benedikt N.: Towell, Geoffrey G., and Jude W. Shavlik, ‘Knowledge-Based Artificial Neural Networks’, Artificial Intelligence, 70 (1994), 119–65 <https://doi.org/10.1016/0004-3702(94)90105-8>
- Franz K.: Tran, Son N., and Artur S. d'Avila Garcez, ‘Deep Logic Networks: Inserting and Extracting Knowledge from Deep Belief Networks’, IEEE Transactions on Neural Networks and Learning Systems, 2016
Differentiable Logic (18.6.)
- Thomas L.: Cohen, William W., Fan Yang, and Kathryn Mazaitis, ‘TensorLog: Deep Learning Meets Probabilistic DBs’, CoRR, abs/1707.05390 (2017)
- Felix O.: Evans, Richard, and Edward Grefenstette, ‘Learning Explanatory Rules from Noisy Data’, Journal of Artificial Intelligence Research, 61 (2018), 1–64
Graph Convolutional Neural Networks (25.6.)
- Kim B.: Schlichtkrull, Michael, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling, ‘Modeling Relational Data with Graph Convolutional Networks’, in Proceedings of the 15th European Semantic Web Conference (Springer International Publishing, 2018), pp. 593–607 <https://doi.org/10.1007/978-3-319-93417-4_38>
- Ming N.: Liang, Xiaodan, Hao Zhang, Liang Lin, and Eric P. Xing, ‘Symbolic Graph Reasoning Meets Convolutions’, in Advances in Neural Information Processing Systems 31, 2018, pp. 1–11 <http://papers.nips.cc/paper/7456-symbolic-graph-reasoning-meets-convolutions.pdf>
Knowledge Graphs (2.7.)
- Alexander K.: Annervaz, K. M., Somnath Basu Roy Chowdhury, and Ambedkar Dukkipati, ‘Learning beyond Datasets: Knowledge Graph Augmented Neural Networks for Natural Language Processing’, in NAACL, 2018, pp. 313–22 <http://arxiv.org/abs/1802.05930>. Slides.
- Lennard R.: Marino, Kenneth, Ruslan Salakhutdinov, and Abhinav Gupta, ‘The More You Know: Using Knowledge Graphs for Image Classification’, in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 20–28 <https://doi.org/10.1109/CVPR.2017.10>. Slides.
Neural Networks with Memory (9.7.)
- NN: Parisotto, Emilio, and Ruslan Salakhutdinov, ‘Neural Map: Structured Memory for Deep Reinforcement Learning’, 2017, pp. 1–13 <http://arxiv.org/abs/1702.08360>
- Marc U.: Graves, Alex, Greg Wayne, and Ivo Danihelka, ‘Neural Turing Machines’, 2014, pp. 1–26 <http://arxiv.org/abs/1410.5401>
Sum-Product Networks (16.7.)
- Idris N.: Poon, Hoifung, and Pedro M. Domingos, ‘Sum-Product Networks: A New Deep Architecture’, in UAI, 2011, pp. 337–346
- Maximilian O.: Gens, Robert, and Pedro M. Domingos, ‘Discriminative Learning of Sum-Product Networks’, in NIPS, 2012, pp. 3248–3256