Abstract: Recently, Masked Autoencoders (MAE) have gained attention for their ability to learn visual representations efficiently through pretext tasks. However, there has been little research ...