Vision Transformers (ViT) Explained

Vision and language are the two major domains of machine learning: two distinct disciplines, each with its own problems, best practices, and model architectures. At least, that was the case.

The Vision Transformer (ViT)[1] marks the first step toward merging these two fields into a single, unified discipline. For the first time in the history of ML, a single model architecture has come to dominate both language and vision.

This is a companion discussion topic for the original entry at https://www.pinecone.io/learn/vision-transformers/