
Chapter 1

Introduction

1.1 Motivation

Machine learning (ML) has proven to be highly effective in solving problems related to pattern recognition (Bishop, 2006), where rule-based solutions or symbolic equations are difficult to formulate. Instead of relying on predetermined rules, ML models learn from data, identifying patterns and relationships that can be used to make predictions on new data. However, when the data is high-dimensional, the curse of dimensionality (Bellman, 1957) poses new challenges for ML models. Lower-level individual data dimensions, or local properties, may carry little meaning on their own and need to be combined in more complex ways to reveal higher-level global structures. Detecting these structures is essential for accurately modelling and predicting outcomes in high-dimensional data, as they provide a way to reduce the complexity of the data and extract meaningful insights.

Images are a clear example of such high-dimensional data; even small 32-by-32-pixel images have over 1000 dimensions (i.e. pixel values). Each pixel carries little information on its own; only larger patterns of many pixels are meaningful. ML models that work well on data with fewer dimensions would need an enormous amount of training data to succeed on images, and even then they would likely learn spurious connections that aren't semantically meaningful and won't generalise to unseen data.

Deep learning (DL) models (LeCun et al., 2015), or neural networks, overcome the challenges of high-dimensional data by learning layers or hierarchies of
