II) Too much data
Very quickly I ran into the problem of too much data. The standard clustering techniques (much like today) assumed batch processing of the data. That means: get all your data into a single room (onto a single computer) and think. It was a problem I had met in part earlier during my masters, studying aerial photography, but now I had too much data and couldn't avoid it. So, how to process lots of data without storing it became my new problem. This led to my doctoral thesis, a streaming clustering algorithm.
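To make the contrast concrete: batch clustering holds every point in memory, while a streaming method looks at each point once and throws it away. Here is a minimal sketch of that idea using a sequential (online) k-means update; this is an illustrative toy, not the algorithm from my thesis, and the initial centers and data are made up for the example.

```python
def online_kmeans(stream, centers):
    """Online k-means: each point updates its nearest center and is then
    discarded, so memory stays O(k) no matter how long the stream is."""
    counts = [0] * len(centers)
    for x in stream:
        # find the nearest center (squared Euclidean distance)
        i = min(range(len(centers)),
                key=lambda j: sum((c - v) ** 2 for c, v in zip(centers[j], x)))
        counts[i] += 1
        # shrinking step size 1/n makes the center the running mean
        # of the points assigned to it
        eta = 1.0 / counts[i]
        centers[i] = [c + eta * (v - c) for c, v in zip(centers[i], x)]
    return centers

# two well-separated 1-D clusters arriving interleaved as a stream
stream = [[0.1], [9.9], [0.2], [10.1], [-0.1], [10.0]]
centers = online_kmeans(stream, centers=[[0.0], [10.0]])
```

Each center ends up at the mean of the points routed to it, yet no point is ever stored: the decision is made and the data forgotten.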
It is interesting to pause here and think what too much data means. What is data?
Data is recordings of sensory perceptions. Do you like that definition? Sometimes I find the word 'observations' better for describing a sensory perception; somehow it is less anthropomorphic. The term 'sensory perception' and its sibling term 'receptive field' are terms I first heard from Prof. Shaul Hochstein at Hebrew University. A receptive field is the field that stimulates a neuron: a location in space reflecting photons onto a specific spot on the retina excites a specific neuron; a sound at a particular frequency travels through the cochlea and excites a particular neuron. There is something elegant about defining an abstract receptive field, independent of the type of stimulus. Either way, it helps to separate the external source of the stimulus from the resultant stimulated sensor.
Data is then the recording of a sensor that observes an external event.
Well, storing the data is one problem, but what to do with the data once it is stored is another.
Batch processing, trying to make sense of all the data at the same time, has its logic. Intuitively, we know that you can't understand something out of context. Hence, it is best to reserve judgment until all the facts are gathered: batch processing. Gather all the data and then make a decision.
But what to do when it is not possible to gather all the facts, or, even more likely, not possible to grasp them all, not possible to process all the data?
Too much data.
The premise of my thesis is that learning occurs precisely when we have too much data. I think we gravitate toward the mistaken understanding that we learn things when we are exposed to them: oh, here is something new, let me understand it. But this is fundamentally incorrect. Learning requires a distance metric, and a metric requires measuring new input relative to previous knowledge; more on that later. For now, let me introduce this simple idea: we learn when we have too much data, we learn when we need to forget something, not when we try to remember it.
Too much data forces us to move from memorization to learning. And learning is a beautiful thing.