Revolutionary AI System Learns Concepts Shared Across Video, Audio, and Text

Researchers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an artificial intelligence (AI) technique that enables machines to learn concepts shared between different modalities such as videos, audio clips, and images. The AI system can learn that a baby crying in a video is related to the spoken word “crying” in an audio clip, for example, and use this knowledge to identify and label actions in a video. The technique performs better than other machine-learning methods at cross-modal retrieval tasks, where data in one format (e.g., video) must be matched with a query in another format (e.g., spoken language). It also allows users to see the reasoning behind the machine’s decision-making. In the future, this technique could potentially be used to help robots learn about the world through perception in a way similar to humans.

A machine-learning model can identify the action in a video clip and label it, without the help of humans.

Humans perceive the world through a combination of different modalities, like vision, hearing, and our understanding of language. Machines, on the other hand, interpret the world through data that algorithms can process.

So, when a machine “sees” a photo, it must encode that photo into data it can use to perform a task like image classification. This process becomes more complicated when inputs come in multiple formats, like videos, audio clips, and images.

“The main challenge here is, how can a machine align those different modalities? As humans, this is easy for us. We see a car and then hear the sound of a car driving by, and we know these are the same thing. But for machine learning, it is not that straightforward,” says Alexander Liu, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author of a paper tackling this problem.


MIT researchers developed a machine-learning technique that learns to represent data in a way that captures concepts shared between visual and audio modalities. Their model can identify where certain action is taking place in a video and label it. Credit: Courtesy of the researchers. Edited by MIT News

Liu and his collaborators developed an artificial intelligence technique that learns to represent data in a way that captures concepts shared between visual and audio modalities. For instance, their method can learn that the action of a baby crying in a video is related to the spoken word “crying” in an audio clip.

Using this knowledge, their machine-learning model can identify where a certain action is taking place in a video and label it.

It performs better than other machine-learning methods at cross-modal retrieval tasks, which involve finding a piece of data, like a video, that matches a user’s query given in another form, like spoken language. Their model also makes it easier for users to see why the machine thinks the video it retrieved matches their query.

This technique could someday be used to help robots learn about concepts in the world through perception, more like the way humans do.

Joining Liu on the paper are CSAIL postdoc SouYoung Jin; grad students Cheng-I Jeff Lai and Andrew Rouditchenko; Aude Oliva, senior research scientist in CSAIL and MIT director of the MIT-IBM Watson AI Lab; and senior author James Glass, senior research scientist and head of the Spoken Language Systems Group in CSAIL. The research will be presented at the Annual Meeting of the Association for Computational Linguistics.

Learning representations

The researchers focus their work on representation learning, which is a form of machine learning that seeks to transform input data to make it easier to perform a task like classification or prediction.

The representation learning model takes raw data, such as videos and their corresponding text captions, and encodes them by extracting features, or observations about objects and actions in the video. Then it maps those data points in a grid, known as an embedding space. The model clusters similar data together as single points in the grid. Each of these data points, or vectors, is represented by an individual word.

For instance, a video clip of a person juggling might be mapped to a vector labeled “juggling.”

The researchers constrain the model so it can only use 1,000 words to label vectors. The model can decide which actions or concepts it wants to encode into a single vector, but it can only use 1,000 vectors. The model chooses the words it thinks best represent the data.
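In effect, the 1,000 words act as a discrete vocabulary of vectors, often called a codebook. The snippet below is a minimal sketch of that idea, not the authors' implementation: the 512-dimensional features, the random codebook, and the `word_i` labels are placeholders standing in for what the trained model would learn.

```python
# Minimal sketch of a discrete codebook: a continuous feature vector is snapped
# to the nearest of 1,000 entries, each of which can carry a word label.
# (Illustrative only; dimensions, codebook, and labels are assumptions.)
import numpy as np

rng = np.random.default_rng(0)

num_codes, dim = 1000, 512                             # 1,000 codebook vectors, assumed 512-dim features
codebook = rng.normal(size=(num_codes, dim))           # stand-in for learned codebook vectors
code_words = [f"word_{i}" for i in range(num_codes)]   # hypothetical word labels

def quantize(feature):
    """Return the index of the nearest codebook vector (L2 distance)."""
    distances = np.linalg.norm(codebook - feature, axis=1)
    return int(np.argmin(distances))

video_feature = rng.normal(size=dim)    # placeholder for an encoded video clip
idx = quantize(video_feature)
print("clip assigned to codebook entry:", code_words[idx])
```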

Rather than encoding data from different modalities onto separate grids, their method employs a shared embedding space where two modalities can be encoded together. This enables the model to learn the relationship between representations from two modalities, like video that shows a person juggling and an audio recording of someone saying “juggling.”

To help the system process data from multiple modalities, they designed an algorithm that guides the machine to encode similar concepts into the same vector.

“If there is a video about pigs, the model might assign the word ‘pig’ to one of the 1,000 vectors. Then if the model hears someone saying the word ‘pig’ in an audio clip, it should still use the same vector to encode that,” Liu explains.
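A rough sketch of what the shared space buys, under assumed conditions rather than the published model (the encoder outputs here are fabricated and simply nudged to be close): because both modalities are quantized against one shared codebook, a pig video and the spoken word “pig” should land on the same entry.

```python
# Hedged sketch: with one shared codebook, paired video and audio embeddings
# that training has aligned will quantize to the same codebook entry.
import numpy as np

rng = np.random.default_rng(1)
num_codes, dim = 1000, 512
shared_codebook = rng.normal(size=(num_codes, dim))    # stand-in for learned codes

def nearest_code(embedding):
    """Index of the closest shared codebook vector."""
    return int(np.argmin(np.linalg.norm(shared_codebook - embedding, axis=1)))

# Placeholder embeddings; in the real system these would come from trained
# video and audio encoders mapping into the same space.
pig_video_emb = rng.normal(size=dim)
pig_audio_emb = pig_video_emb + 0.01 * rng.normal(size=dim)   # assume training aligned the pair

same_code = nearest_code(pig_video_emb) == nearest_code(pig_audio_emb)
print("video and audio assigned to the same codebook entry:", same_code)
```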

A better retriever

They tested the model on cross-modal retrieval tasks using three datasets: a video-text dataset with video clips and text captions, a video-audio dataset with video clips and spoken audio captions, and an image-audio dataset with images and spoken audio captions.

For example, in the video-audio dataset, the model chose 1,000 words to represent the actions in the videos. Then, when the researchers fed it audio queries, the model tried to find the clip that best matched those spoken words.

“Just like a Google search, you type in some text and the machine tries to tell you the most relevant things you are searching for. Only we do this in the vector space,” Liu says.
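A toy illustration of that vector-space search, with invented clip names and embeddings standing in for the real encoders and datasets: candidate videos are ranked by cosine similarity to the embedding of the spoken query, and the top-scoring clip is returned.

```python
# Illustrative cross-modal retrieval sketch (assumed setup, not the evaluation code):
# rank candidate video clips by cosine similarity to a spoken query in the shared space.
import numpy as np

rng = np.random.default_rng(2)
dim = 512
video_embs = rng.normal(size=(5, dim))                    # pretend library of 5 encoded clips
video_names = [f"clip_{i}.mp4" for i in range(5)]         # hypothetical file names
query_emb = video_embs[3] + 0.05 * rng.normal(size=dim)   # audio query embedded near clip_3

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_emb, v) for v in video_embs]
best = int(np.argmax(scores))
print("best match for the spoken query:", video_names[best])
```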

Not only was their technique more likely to find better matches than the models they compared it to, it is also easier to understand.

Because the model could only use 1,000 total words to label vectors, a user can more easily see which words the machine used to conclude that the video and spoken words are similar. This could make the model easier to apply in real-world situations where it is vital that users understand how it makes decisions, Liu says.
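As a hypothetical illustration of that transparency (the word sets below are invented, not model output), the words assigned to a retrieved video and to a spoken query can be compared directly to show what supported the match.

```python
# Hypothetical example: inspect which of the shared vocabulary words
# a video and a spoken query have in common.
video_words = {"person", "juggling", "ball", "outdoor"}   # words assigned to the video
query_words = {"juggling", "person", "speaking"}          # words assigned to the spoken query

shared = video_words & query_words
print("words supporting the match:", sorted(shared))
```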

The model still has some limitations they hope to address in future work. For one, their research focused on data from two modalities at a time, but in the real world humans encounter many data modalities simultaneously, Liu says.

“And we know 1,000 words works on this kind of dataset, but we don’t know if it can be generalized to a real-world problem,” he adds.

Plus, the images and videos in their datasets contained simple objects or straightforward actions; real-world data are much messier. They also want to determine how well their method scales up when there is a wider diversity of inputs.

Reference: “Cross-Modal Discrete Representation Learning” by Alexander H. Liu, SouYoung Jin, Cheng-I Jeff Lai, Andrew Rouditchenko, Aude Oliva and James Glass, 10 June 2021, arXiv:2106.05438.

This research was supported, in part, by the MIT-IBM Watson AI Lab and its member companies, Nexplore and Woodside, and by the MIT Lincoln Laboratory.


Source: https://scitechdaily.com/revolutionary-ai-system-learns-concepts-shared-across-video-audio-and-text/