Modelling Online Learning with Computability Theory
22.02.2022, 10:00
– Golm, Haus 9 2.22 and Zoom
Forschungsseminar Statistik
Karen Seidel
Abstract: In machine learning, algorithms generalise from available training data to unseen situations.
The engineering practices used in the respective technologies are far from understood.
Research in inductive inference analyses concrete mathematical models for this complex subject with tools from computability theory.

We investigate models for incremental binary classification, an example of supervised online learning.
Our starting point is a model for human and machine learning suggested by E. M. Gold.

For learning algorithms that use all of the available binary labeled training data to compute the current hypothesis, we observe that the distribution of the training data does not influence learnability.
Moreover, we show that the learning algorithm can be assumed to only change its last hypothesis in case it is inconsistent with the current training data.
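As an illustration of this conservative behaviour, here is a minimal sketch of a Gold-style learner over a toy finite hypothesis class. The class, the names (HYPOTHESES, consistent, conservative_learner) and the data stream are assumptions made for this example only; the model in the talk works over computable hypothesis spaces. The learner keeps its conjecture until the accumulated labeled data contradicts it.

    from typing import Callable, List, Tuple

    Hypothesis = Callable[[int], bool]

    # Toy hypothesis class: "x is a multiple of k" for a few fixed k.
    HYPOTHESES: List[Hypothesis] = [lambda x, k=k: x % k == 0 for k in (1, 2, 3, 4)]

    def consistent(h: Hypothesis, data: List[Tuple[int, bool]]) -> bool:
        """A hypothesis is consistent if it reproduces every observed label."""
        return all(h(x) == label for x, label in data)

    def conservative_learner(stream: List[Tuple[int, bool]]) -> List[int]:
        """Return the sequence of conjectured indices; the current conjecture
        is only revised when it contradicts the data seen so far."""
        data: List[Tuple[int, bool]] = []
        current, guesses = 0, []
        for x, label in stream:
            data.append((x, label))
            if not consistent(HYPOTHESES[current], data):
                # Mind change: switch to the least consistent hypothesis, if any.
                current = next((i for i, h in enumerate(HYPOTHESES)
                                if consistent(h, data)), current)
            guesses.append(current)
        return guesses

    # On labeled data for "multiples of 2" the conjectures converge to index 1:
    print(conservative_learner([(2, True), (3, False), (4, True), (6, True)]))  # [0, 1, 1, 1]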
When approximating the concept to be learned, we obtain a strict hierarchy depending on the error parameter.
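One standard way to make the error parameter precise (following the classical anomaly hierarchy of Case and Smith; the talk's exact definitions may differ) is to require only almost-correct convergence:

    L \in \mathrm{Ex}^{a} \iff \text{some learner } M \text{, on every informant for } L\text{, converges to an index } e \text{ with } |W_e \,\triangle\, L| \le a,

where W_e is the e-th computably enumerable set. Allowing more anomalies is strictly more powerful: \mathrm{Ex}^{0} \subsetneq \mathrm{Ex}^{1} \subsetneq \mathrm{Ex}^{2} \subsetneq \cdots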
We also consider a hypothesis space more suitable for symmetric classification tasks and provide the complete map.

Furthermore, we model more efficient incremental learning algorithms.
These update their hypothesis without direct access to past training data.
We focus on hypothesis-based and state-based algorithms and show that they are equally powerful in many cases.
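The two kinds of learner can be pictured as follows; this is a sketch under assumed interfaces (iterative_update, state_update and the toy conjectures are hypothetical names for this illustration, not the talk's formal machinery). A hypothesis-based learner computes its next conjecture from the previous conjecture and the newest example only, while a state-based learner threads an arbitrary finite state through the rounds instead.

    from typing import Tuple

    Example = Tuple[int, bool]

    # Hypothesis-based ("iterative"): the next conjecture depends only on the
    # previous conjecture and the newest example; conjecture k reads "multiples of k".
    def iterative_update(k: int, example: Example) -> int:
        x, label = example
        if (x % k == 0) != label:   # conjecture contradicted by the new example
            return k + 1            # move to the next candidate
        return k                    # otherwise keep it (no access to past data)

    # State-based: the state may carry more than the conjecture itself; here it
    # additionally counts mind changes, something an iterative learner could only
    # remember by coding it into its hypothesis.
    def state_update(state: Tuple[int, int], example: Example) -> Tuple[int, int]:
        k, changes = state
        new_k = iterative_update(k, example)
        return (new_k, changes + (new_k != k))

    def state_to_hypothesis(state: Tuple[int, int]) -> int:
        return state[0]             # the conjecture is read off the state

    # Example rounds for the target "multiples of 2":
    state = (1, 0)
    for ex in [(2, True), (3, False), (4, True)]:
        state = state_update(state, ex)
    print(state_to_hypothesis(state), state)  # 2 (2, 1)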
An additional requirement arising from cognitive science research is non-U-shapedness, stating that the learning algorithm never abandons a correct hypothesis. We show that forbidding U-shapes restricts both kinds of memory-efficient algorithms, and that the two are no longer equivalent in this setting.
Zoom link on request