Recently Yann LeCun complimented the authors of 'A ConvNet for the 2020s' https://mobile.twitter.com/ylecun/status/1481194969830498308?s=20 https://github.com/facebookresearch/ConvNeXt These statements imply that continued improvements in the metrics of success are indicators that learning is improving. Furthermore, says LeCun, common sense reinforces the idea that 'helpful tricks' succeed in increasing the learning that occurs in these models.

But are these models learning? Are they learning better? Or have they merely succeeded at overfitting and scoring better without learning anything new? We took a look at what the model learned, not just how it scored on its own metric.

To this end we created a graph with links between each image and its top-5 classifications, where the weight of each link is proportional to the score the model assigned to that class. Here are the data files: https://github.com/DaliaSmirnov/imagenet_research/blob/main/prediction_df_resnet50.p...
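As a rough illustration of how such a graph can be assembled from a prediction table, here is a minimal sketch. It assumes the pickled file holds a pandas DataFrame with one row per (image, predicted class) pair and columns named 'image', 'class', and 'score', already limited to the top-5 predictions per image; those column names are assumptions for the example, not the actual schema of the linked file.

```python
import pandas as pd
import networkx as nx

# Load the prediction dataframe (a pickled pandas DataFrame).
# Assumed columns: 'image', 'class', 'score' -- adjust to the real file.
pred_df = pd.read_pickle("prediction_df_resnet50.p")

# Build a bipartite graph: image nodes on one side, class nodes on the other.
# Each image is linked to its top-5 predicted classes, and the edge weight is
# the score the model assigned to that class.
G = nx.Graph()
G.add_nodes_from(pred_df["image"].unique(), kind="image")
G.add_nodes_from(pred_df["class"].unique(), kind="class")
G.add_weighted_edges_from(
    pred_df[["image", "class", "score"]].itertuples(index=False, name=None)
)

# Example query: for one class, how many images include it in their top 5,
# and how many other classes share those images with it?
some_class = pred_df["class"].iloc[0]
imgs = list(G.neighbors(some_class))
co_predicted = {c for img in imgs for c in G.neighbors(img) if c != some_class}
print(f"'{some_class}' appears in the top 5 of {len(imgs)} images, "
      f"sharing images with {len(co_predicted)} other classes")
```

A bipartite image-to-class graph like this makes it easy to ask which classes the model repeatedly co-predicts for the same images, which is exactly the kind of structure a single accuracy number hides.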
We went through a very painful process: we lost time, the most expensive resource on the planet. We also lost some of our people, yet another extremely painful process. But let me back up a bit in the story.

When we started iSkoot there were three core tech challenges: creating a virtual audio driver, transporting the media in real time to an IP-PBX, and scaling the solution. One of these things is not like the others. Scaling the solution demanded that we pay attention to the architecture of the first two core-tech issues. Architecture was the keystone of startups back then, and when we said architecture we meant System & Software Architecture.

Today there has been a significant shift in the hi-tech world: systems and software have been replaced by data as a company's core value and keystone, that internal magic that binds all the elements into a whole. At Blue dot we lived this shift. Blue dot started in the age of System Architecture....