Introduction to advanced concepts in AI and Machine Learning
I created a set of short videos and blog posts to introduce some advanced ideas in AI and Machine Learning. It is easiest for me to present the ideas in the order I met them, chronologically in my life, but I may revisit them later from a different perspective.
I also noticed that one of the things I am doing is utilising slightly off-centre tools to describe an idea. For example, I employ Kohonen Feature Maps to describe embeddings. I think I gain a couple of things this way: first, it offers a different perspective from the one most people are used to. In addition, well, you will see :-)
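To give a flavour of what I mean, here is a minimal Python sketch of a Kohonen Feature Map (self-organizing map) trained on random colours. This is a toy illustration of my own, not code from the videos; the grid size, learning rate, and neighbourhood schedule are arbitrary choices made for the example.

import numpy as np

# Toy Kohonen Feature Map (self-organizing map): a small 2-D grid of
# weight vectors that gradually organises itself to mirror the input data.
# Grid size, learning rate and decay are illustrative choices only.
rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3            # 10x10 map of 3-D (RGB) weights
weights = rng.random((grid_h, grid_w, dim))

# Grid coordinates of every cell, used for neighbourhood distances.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

n_steps = 2000
data = rng.random((n_steps, dim))          # random colours as training data

for t, x in enumerate(data):
    frac = t / n_steps
    lr = 0.5 * (1.0 - frac)                        # decaying learning rate
    sigma = max(1.0, (grid_h / 2) * (1.0 - frac))  # shrinking neighbourhood

    # Best-matching unit: the cell whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)

    # Gaussian neighbourhood around the winner, measured on the grid.
    grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]

    # Pull the winner and its neighbours toward the input.
    weights += lr * h * (x - weights)

After training, plotting the weights array as a 10x10 image shows the colours arranged smoothly across the grid: nearby cells hold similar vectors, which is the picture I lean on when talking about embeddings.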
I recommend first opening each blog entry (via the links below) and then watching the linked video alongside it.
I hope you enjoy these as much as I enjoyed putting them together,
David
Here are the links:
https://data-information-meaning.blogspot.com/2020/12/memorization-learning-and-classification.html
https://data-information-meaning.blogspot.com/2020/12/12-data-information-and-meaning.html
https://data-information-meaning.blogspot.com/2020/12/13-kmeans-as-means-to-describe-failed.html
https://data-information-meaning.blogspot.com/2020/12/14-kohenen-feature-maps-new-embedded.html
https://data-information-meaning.blogspot.com/2020/12/15-agglomerative-online-clustering.html
https://data-information-meaning.blogspot.com/2021/01/155-multi-scale-defines-learning.html
https://data-information-meaning.blogspot.com/2021/01/16-phase-transitions-measure-of-learning.html
https://data-information-meaning.blogspot.com/2021/01/165-phase-transitions-measure-of.html
=-=-
Here is a fun quote:
https://medium.com/@jcbaillie/beyond-the-symbolic-vs-non-symbolic-ai-debate-96dffce7270c
...
I cannot resist making a small digression here, about a parallel that can be made with recent trends in physics. Until now, all successful physics has been formulated in the framework of differential equations, acting on continuous and differentiable variables. It is what Wigner called the “unreasonable effectiveness of mathematics in the natural sciences”. Recently, more and more researchers are exploring the possibility that Nature is best described at small scale by discrete structures, with evolution formulated as discrete transformations on these structures (see for example the work on Causal Sets, Loop Quantum Gravity, or even the Wolfram Physics project). It is as if, at some point, the unreasonable effectiveness is ending and reality cannot be approximated by models that offer gradients anymore: we have to get down to the raw “machine code” that is inherently discrete, step-wise, compositional and irreducible. One might wonder if these physicists are right (it’s difficult to come up with experiment proposals at that scale, so we might never know), and if some similar dichotomy is not also at play in AI, as discussed above.