Vid2Seq: a pretrained visual language model for describing multi-event videos
Posted by Antoine Yang, Student Researcher, and Arsha Nagrani, Research Scientist, Google Research, Perception team

Videos have become an increasingly important part of our daily lives, spanning fields such as entertainment, education, and communication. Understanding the content of videos, however, is a challenging task, as videos often contain multiple events occurring at different time scales….