This AI Paper from NVIDIA and SUTD Singapore Introduces TANGOFLUX and CRPO: Efficient and High-Quality Text-to-Audio Generation with Flow Matching

Text-to-audio generation has transformed how audio content is created, automating processes that traditionally required significant expertise and time. This technology enables the conversion of textual prompts into diverse and expressive audio, streamlining workflows in audio production and creative industries. Bridging textual input with realistic audio outputs has opened possibilities in applications like multimedia storytelling, music, and sound design.

One of the significant challenges in text-to-audio systems is ensuring that generated audio aligns faithfully with textual prompts. Current models often fail to fully capture intricate details, leading to inconsistencies between prompt and output: some outputs omit essential elements or introduce unintended audio artifacts. The lack of standardized methods for optimizing these systems further exacerbates the problem. Unlike language models, text-to-audio systems do not benefit from robust alignment strategies, such as reinforcement learning with human feedback, leaving much room for improvement.

Previous approaches to text-to-audio generation relied heavily on diffusion-based models, such as AudioLDM and Stable Audio Open. While these models deliver decent quality, they come with limitations. Their reliance on extensive denoising steps makes them computationally expensive and time-intensive. Furthermore, many models are trained on proprietary datasets, which limits their accessibility and reproducibility. These constraints hinder their scalability and ability to handle diverse and complex prompts effectively.

To address these challenges, researchers from the Singapore University of Technology and Design (SUTD) and NVIDIA introduced TANGOFLUX, an advanced text-to-audio generation model. This model is designed for efficiency and high-quality output, achieving significant improvements over previous methods. TANGOFLUX utilizes the CLAP-Ranked Preference Optimization (CRPO) framework to iteratively refine audio generation and ensure alignment with textual descriptions. Its compact architecture and innovative training strategies allow it to perform exceptionally well while requiring fewer parameters.

TANGOFLUX integrates advanced methodologies to achieve state-of-the-art results. It employs a hybrid architecture combining Diffusion Transformer (DiT) and Multimodal Diffusion Transformer (MMDiT) blocks, enabling it to handle variable-duration audio generation. Unlike traditional diffusion-based models, which depend on multiple denoising steps, TANGOFLUX uses a flow-matching framework to create a direct and rectified path from noise to output. This rectified flow approach reduces the computational steps required for high-quality audio generation. During training, the system incorporates textual and duration conditioning to ensure precision in capturing input prompts’ nuances and the audio output’s desired length. The CLAP model evaluates the alignment between audio and textual prompts by generating preference pairs and optimizing them iteratively, a process inspired by alignment techniques used in language models.
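The rectified-flow idea described above can be illustrated with a minimal sampler. This is a NumPy sketch, not the TANGOFLUX implementation: `velocity_fn` stands in for the trained DiT/MMDiT velocity network, and the toy closed-form velocity below is a hypothetical stand-in used only to show that a straight (rectified) path needs few integration steps.

```python
import numpy as np

def rectified_flow_sample(velocity_fn, x0, num_steps=8):
    """Integrate a learned velocity field from noise x0 toward data x1
    along a (nearly) straight path: dx/dt = v(x, t), t in [0, 1].
    Because the path is rectified, a handful of Euler steps suffices,
    in contrast to the many denoising steps of diffusion samplers."""
    x = x0.copy()
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)  # one Euler step along the flow
    return x

# Hypothetical stand-in for a trained velocity network: the exact
# velocity of the straight-line path from x0 to a fixed target.
target = np.ones(4)
velocity = lambda x, t: (target - x) / (1.0 - t)

out = rectified_flow_sample(velocity, np.zeros(4), num_steps=8)
# out lands on the target because the integration path is a straight line
```

In the actual model the velocity field is a neural network conditioned on the text prompt and the requested duration; the sketch only captures the ODE-integration step that replaces iterative denoising.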

In terms of performance, TANGOFLUX outshines its predecessors across multiple metrics. It generates 30 seconds of audio in just 3.7 seconds using a single A40 GPU, demonstrating exceptional efficiency. The model achieves a CLAP score of 0.48 and an FD score of 75.1, both indicative of high-quality and text-aligned audio outputs. Compared to Stable Audio Open, which achieves a CLAP score of 0.29, TANGOFLUX significantly improves alignment accuracy. In multi-event scenarios, where prompts include multiple distinct events, TANGOFLUX excels, showcasing its ability to capture intricate details and temporal relationships effectively. The system’s robustness is further highlighted by its ability to maintain performance even with reduced sampling steps, a feature that enhances its practicality in real-time applications.

Human evaluations corroborate these results, with TANGOFLUX scoring the highest in subjective metrics such as overall quality and prompt relevance. Annotators consistently rated its outputs as clearer and more aligned than other models like AudioLDM and Tango 2. The researchers also emphasized the importance of the CRPO framework, which allowed for creating a preference dataset that outperformed alternatives such as BATON and Audio-Alpaca. The model avoided performance degradation typically associated with offline datasets by generating new synthetic data during each training iteration.
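The CRPO pair-construction step described above can be sketched as follows. This is a simplified illustration under stated assumptions: `generate_fn` and `clap_score_fn` are hypothetical stand-ins for the audio generator and the CLAP text-audio similarity scorer, neither of which is reproduced here.

```python
import numpy as np

def make_preference_pair(prompt, generate_fn, clap_score_fn, n_candidates=5):
    """CRPO-style pair construction (sketch): sample several audio
    candidates for a prompt, rank them by CLAP text-audio similarity,
    and keep the best-scoring candidate as 'chosen' and the worst as
    'rejected'. These pairs form the preference dataset regenerated
    at each training iteration."""
    candidates = [generate_fn(prompt) for _ in range(n_candidates)]
    scores = [clap_score_fn(prompt, audio) for audio in candidates]
    chosen = candidates[int(np.argmax(scores))]
    rejected = candidates[int(np.argmin(scores))]
    return chosen, rejected

# Toy stand-ins: "audio" is a random vector, and the scorer is an
# arbitrary deterministic function standing in for CLAP similarity.
rng = np.random.default_rng(0)
toy_generate = lambda prompt: rng.normal(size=8)
toy_score = lambda prompt, audio: float(np.linalg.norm(audio))

chosen, rejected = make_preference_pair("a dog barks twice", toy_generate, toy_score)
```

Regenerating these pairs with the current model at every iteration, rather than reusing a fixed offline dataset, is what the authors credit for avoiding the performance degradation mentioned above.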

The research successfully addresses critical limitations in text-to-audio systems by introducing TANGOFLUX, which combines efficiency with superior performance. Its innovative use of rectified flow and preference optimization sets a benchmark for future advancements in the field. This development enhances the quality and alignment of generated audio and demonstrates scalability, making it a practical solution for widespread adoption. The work of SUTD and NVIDIA represents a significant leap forward in text-to-audio technology, pushing the boundaries of what is achievable in this rapidly evolving domain.


Check out the Paper, Code Repo, and Pre-Trained Model. All credit for this research goes to the researchers of this project.



Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.

