Machine Learning-based approaches have become ubiquitous in many areas of society, industry, and academia. Understanding what Machine Learning (ML) provides and reproducing what it infers have become essential prerequisites for adoption. Along these lines, course materials, introductory media, and lecture series of a broad variety, depth, and quality are publicly available. To date and to the best of our knowledge, there is no structured approach to collecting and discussing best practices in teaching Machine Learning. This workshop strives to change this.
With our workshop, we want to start an academic discussion on best practices. We would like to help the community improve existing material and make conceiving new material more effective. We are very happy that this idea was accepted for the ECML PKDD 2020 workshop programme.
Many experts and practitioners who develop Machine Learning models, or the infrastructure around them, are confronted with the opportunity to teach Machine Learning at some point in their career. Under these circumstances, many traditionally rely on gut feeling to design their courses, with PowerPoint or similar technologies as the media of choice.
This workshop targets those who would like to know how teachers from around the globe approach teaching Machine Learning: How deeply do they dive into the matter? What mental models do they use to visualize concepts? What media do others use when teaching ML?
With this workshop, we hope that all participants obtain a better sense of where they stand with their teaching and how they can improve or collaborate with others.
The main goal of this workshop is to motivate and nourish best practices at any stage of the teaching process. To this end, we would like to cover a structured approach to teaching motivated by The Carpentries, or a variation thereof. We believe that the core concepts it contains are helpful for any teaching practitioner.
The central activity of the workshop will be twofold:
a call for papers whereby teaching professionals and beginners alike are asked to describe their method of choice when teaching a given ML topic. We would like to attract mini-articles of at most 4 pages (excluding references and acknowledgements) that present or discuss a teaching activity related to machine learning. For more details, see below.
(potentially parallel) presentations of 5–10 minute lightning talks during the workshop at ECML PKDD 2020, presenting the accepted papers mentioned above. For more details, see below.
We invite interested parties to submit a mini-paper describing a specific piece of teaching related to Machine Learning. These mini-papers are expected to present teaching examples from various aspects of ML. For example:
Each paper is kindly asked to answer at least the following questions (if applicable):
Submissions can be made here until June 26, 2020.
Papers must be written in English and formatted according to the ICML 2019 LaTeX template.
The maximum length of papers is 4 pages (excluding references and acknowledgements) in this format. The program chairs reserve the right to reject any over-length paper without review. Papers that ‘cheat’ the page limit (including, but not limited to, by using smaller-than-specified margins or font sizes) will also be treated as over-length. Note that, for example, negative vertical spaces are likewise not allowed.
Additional materials (e.g., proofs, audio, images, video, data, or source code) can be provided as URLs inside your submitted paper. The reviewers and the program committee reserve the right to judge the paper solely on its 4 pages; looking at any additional material is at the discretion of the reviewers and is not required.
We strive to pursue a double-blind review process. All papers need to be anonymized on a best-effort basis. We strongly encourage authors to also make code and data available anonymously (e.g., in an anonymous git repository or Dropbox folder). Having a (non-anonymous) pre-print online is allowed, but it should not be cited in the submitted paper in order to preserve anonymity. Reviewers will be asked not to search for it.
We will conduct an open, double-blind peer review of all contributions using openreview.net and select contributions based on the reviewers’ feedback. Here are the important dates:
Each submitted paper will be reviewed publicly by at least two experienced machine learning instructors. If you would like to help review papers, please let us know by opening an issue here or contacting us.
To prepare for the workshop, we plan a series of online events and talks before the conference. Please stay tuned to find out more in due course.
| Date | Topic | Speaker(s) |
|------|-------|------------|
| tba | Didactics of Data | Rebecca Fiebrink |
| tba | Experiences with MOOCs | Elisabeth Sulmont |
| tba | Experiences in Lectures | Heide Seibold, Bernd Bischl |
| tba | Experiences in Bootcamps/Compact Courses | Anne Fouillioux, Peter Steinbach |
Participants of the workshop session will be encouraged to provide feedback to their peers. Depending on the room and the number of submissions, we will group the presentations by focus area: vision applications, language applications, general concepts, etc. Each of these working groups is asked to collect general patterns of what works and what doesn’t. After this session, we will compile a report to summarize and publish the findings of this event and to lay the foundation for future activities.
| Time | Session | Who |
|------|---------|-----|
| 09.00 am | Preface, (Parallel) Teaching Example Session | Organizers |
| 09.30 am | How to Give Feedback | Organizers |
| 10.15 am | Teaching Example Presentations | All |
| 10.45 am | Coffee Break | |
| 11.15 am | Teaching Example Presentations | All |
| 01.30 pm | Summary of Teaching Example Presentations | All |
| 02.00 pm | Farewell and Next Steps | Organizers |
The timing of the above is tentative as it will crucially depend on the venue and exact schedule of the conference.
Bernd holds the Chair of Statistical Learning and Data Science at LMU Munich.
Postdoc / Open Science Advocate
Team Lead AI Consultants for Matter Research at Helmholtz-Zentrum Dresden-Rossendorf
Research Fellow at the HTW Dresden in the Department of Artificial Intelligence.