Transformer-based models have rapidly spread from text to speech, vision, and other modalities, creating challenges for the development of Neural Processing Units (NPUs): NPUs must now ...
Abstract: Vision-language pre-training models have demonstrated outstanding performance on a wide range of multimodal tasks. Nevertheless, they remain susceptible to multimodal adversarial examples.