I have been looking at the image annotation workflows and running the algorithms. This was heartening news.
If you look at the “crunching” of various models, it does look like a sure win. However, it wasn’t clear to me whether it relates to the on-premise solution or the “cloud”. I had been interacting with someone who mentioned that the number-crunching abilities of his graphics card have improved rapidly.
Based on the pace of its cost decline, AI is in very early days. During the first decade of Moore’s Law, transistor count doubled every year, twice the rate of change seen during decades thereafter. The 10-100x cost declines we are witnessing in both AI training and AI inference suggest that AI is nascent in its development, perhaps with decades of slower but sustained growth ahead.

“AI Training Costs Are Improving at 50x the Speed of Moore’s Law”
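To get a feel for what those rates mean, here is a small illustrative sketch of the compounding arithmetic behind the quote. The 2x-per-year and 2x-per-two-years figures loosely follow the Moore's Law framing above; the time spans are my own assumption for the comparison.

```python
# Illustrative arithmetic only: cumulative improvement from compounding.

def cumulative_factor(rate_per_period: float, periods: float) -> float:
    """Total improvement after compounding rate_per_period over periods."""
    return rate_per_period ** periods

# Early Moore's Law: transistor count doubling every year for a decade.
early_moore = cumulative_factor(2, 10)      # 2^10 = 1024x

# Later Moore's Law: doubling roughly every two years over a decade.
later_moore = cumulative_factor(2, 10 / 2)  # 2^5 = 32x

print(f"Early Moore's Law, 10 years: {early_moore:.0f}x")
print(f"Later Moore's Law, 10 years: {later_moore:.0f}x")
```

The gap between 1024x and 32x over the same decade is why the early-versus-later doubling rate matters so much in the quoted argument.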
I am not sure how relevant Moore’s law is, but the recent launch of the AMD Ryzen 4000 series has surely spiked my interest in the ever-increasing core counts (as well as hardware optimisations). These are really exciting times!