
Showing posts from January, 2021

1.65 Phase transitions, a measure of learning

Phase transitions, a measure of learning https://rumble.com/vcj8fk-1.65-phase-transitions-a-measure-of-learning.html Let's compare KMeans to faddc. [figures: KMeans, faddc] That was quite dramatic; how did we get there? [figures: KMeans, faddc] Neat, right? KMeans spreads its representations equally across the entire dataset, minimising the global loss of information. The thing to notice with faddc is that the representation is very stable up to a point; then, at a certain point, there is a dramatic shift in the representation. Here I graph the derivative energy: the change, the difference between the distortion given each 'k'...
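As a minimal sketch (not the author's code, and faddc is not reproduced here), here is one way to plot the distortion-vs-'k' curve and its discrete derivative for KMeans, assuming "distortion" means the within-cluster sum of squared distances (scikit-learn's inertia_) and "derivative energy" means the change in distortion between consecutive values of 'k':

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic stand-in for the dataset in the video (an assumption).
X, _ = make_blobs(n_samples=500, centers=6, cluster_std=1.0, random_state=0)

ks = range(1, 15)
distortion = []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    distortion.append(km.inertia_)  # within-cluster sum of squared distances

# Discrete derivative: change in distortion as 'k' increases by one.
derivative_energy = np.diff(distortion)
for k, d in zip(list(ks)[1:], derivative_energy):
    print(f"k={k:2d}  delta distortion={d:12.2f}")
```

Under this reading, a smooth derivative curve is the KMeans signature, while a sudden spike at some 'k' would be the dramatic shift the post attributes to faddc.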

1.6 Phase transitions, a measure of learning

Phase transitions, a measure of learning https://rumble.com/vcg8gw-1.6-phase-transitions-a-measure-of-learning.html Phase transitions demonstrate a loose coupling. A tight coupling, as in KMeans, mirrors the distortion at each level. A loose coupling enables the higher level to move at a different pace, disjoint from the lower level. This separation between levels indicates that the levels represent different descriptions of the data: they speak different languages. It is important to differentiate between the model (the hierarchy, the grammar) and the content, so in the next series I will do that. The learning is in the model, not the content. The content can be memorized; it is the relationships between the content that are learnt. Here is a slightly different dataset, with at least two apparent scales; a sketch of such a dataset follows below. Let's see what KMeans does as we increase 'k': Neat, right? KMeans spreads its representations equally across the entire dataset, minimising the global lo...
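A sketch of what a dataset with at least two apparent scales might look like (the construction below is my assumption, not the post's actual data): tight sub-clusters nested inside widely separated coarse groups, with KMeans run at increasing 'k' to watch how it spreads its centres:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Large scale: a few widely separated coarse groups.
coarse_centres = rng.uniform(-50, 50, size=(3, 2))
points = []
for c in coarse_centres:
    # Small scale: tight sub-clusters inside each coarse group.
    fine_centres = c + rng.normal(0, 3, size=(4, 2))
    for f in fine_centres:
        points.append(f + rng.normal(0, 0.3, size=(40, 2)))
X = np.vstack(points)

# KMeans at increasing 'k': distortion falls as centres descend
# from the coarse groups into the sub-clusters.
for k in (3, 6, 12):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(f"k={k:2d}  distortion={km.inertia_:12.2f}")
```

At small 'k' the centres track the coarse groups; as 'k' grows they move down into the sub-clusters, which is the multi-scale behaviour the post examines.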

0.0 Introduction to advanced concepts in AI and Machine Learning

Introduction to advanced concepts in AI and Machine Learning I created a set of short videos and blog posts to introduce some advanced ideas in AI and Machine Learning. It is easier for me to think about them as I met them, chronologically in my life, but I may revisit the ideas later from a different perspective. I also noticed that one of the things I am doing is utilising slightly off-centre tools to describe an idea. For example, I employ Kohonen Feature Maps to describe embeddings. I think I gain a couple of things this way: first, it is a different perspective than most people are used to. In addition, well, you will see :-) I recommend first opening the blog entry (as per the links below), then concurrently watching the linked video. Hope you enjoy these as much as I did putting them together, David Here are links: https://data-information-meaning.blogspot.com/2020/12/memorization-learning-and-classification.html https://data-information-meaning.blogspot.com/...

1.55 Multi-scale defines learning

Multi-scale defines learning https://rumble.com/vcexnm-multi-scale-defines-learning-part-1.html https://rumble.com/vcexxw-multi-scale-defines-learning-part-2.html Why have I been so focused on multi-scale data? Because learning can only occur when there are multiple scales. Why is that true? It sounds nonsensical; what is the relationship between the scale of a set of data and learning that data? I need to better define multi-scale systems. A multi-scale system has at least two levels. Each level has elements, yet the elements on each level are different. How different? In what way are they different? Primarily, the descriptive language of the elements is different: the elements on each level describe different attributes. Since each level creates its own descriptive language, two different levels are not able to communicate directly with each other. Although that is a negative definition, I like it. The p...
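To make the two-level picture concrete, here is a toy illustration (my own, not from the post): the lower level describes individual points by their coordinates, while the upper level re-describes whole groups by attributes (centre, population, spread) that do not exist in the lower level's vocabulary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Lower level: raw points, grouped into three clusters. Its
# descriptive language is coordinates of individual elements.
groups = [rng.normal(loc, 0.5, size=(30, 2))
          for loc in ((0, 0), (10, 0), (5, 8))]

# Upper level: each group described in a different vocabulary
# (centre, population, spread) that no single point can express.
for i, g in enumerate(groups):
    centre = g.mean(axis=0)
    print(f"group {i}: centre=({centre[0]:.1f}, {centre[1]:.1f}) "
          f"size={len(g)} spread={g.std():.2f}")
```

The point of the sketch is that "spread" or "size" cannot be stated in the point-level language at all, which is one way to read the claim that the levels cannot communicate directly.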