Google DeepMind Just Released PaliGemma 2: A New Family of Open-Weight Vision Language Models (3B, 10B and 28B)

Vision-language models (VLMs) have come a long way, but they still face significant challenges in generalizing effectively across tasks. They often struggle with diverse inputs, such as images at varying resolutions or text prompts that demand nuanced understanding, and balancing computational efficiency with model scalability is no small feat. These challenges make it hard for VLMs to serve users who need adaptable solutions that perform consistently across a wide range of real-world applications, from document recognition to detailed image captioning.

Google DeepMind recently introduced the PaliGemma 2 series, a new family of open-weight vision-language models with parameter sizes of 3 billion (3B), 10 billion (10B), and 28 billion (28B). The models support resolutions of 224×224, 448×448, and 896×896 pixels. The release includes nine pre-trained models covering the different combinations of size and resolution, making them versatile for a variety of use cases. Two additional models, at 3B and 10B parameters with 448×448 resolution, are fine-tuned on the DOCCI dataset of image-text caption pairs. Because the weights are open, the models can serve as a drop-in replacement or upgrade for the original PaliGemma, giving users more flexibility for transfer learning and fine-tuning.
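Since the checkpoints are published as open weights on Hugging Face, they can be loaded with the standard transformers API, much like the original PaliGemma. The following is a minimal sketch, assuming a hub id that follows the size/resolution naming (the `google/paligemma2-3b-pt-224` id and the image URL below are illustrative; substitute the actual checkpoint and image you intend to use):

```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

# Assumed hub id following the size/resolution naming:
# 3B parameters, 224px input resolution, pre-trained ("pt") checkpoint.
model_id = "google/paligemma2-3b-pt-224"

model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image; substitute any RGB image relevant to your task.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# PaliGemma-style task prefix: "caption en" requests an English caption.
inputs = processor(text="caption en", images=image, return_tensors="pt").to(model.device)

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=40)

# Strip the prompt tokens and decode only the newly generated text.
result = processor.decode(
    generated[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(result)
```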

Technical Details

PaliGemma 2 builds on the original PaliGemma by pairing the SigLIP-So400m vision encoder with the Gemma 2 language models. The models are trained in three stages at increasing image resolutions (224px, 448px, and 896px), which yields checkpoints that can be matched to the needs of each task. PaliGemma 2 has been evaluated on more than 30 transfer tasks, including image captioning, visual question answering (VQA), video tasks, and OCR-related tasks such as table structure recognition and molecular structure identification. The different variants excel under different conditions, with larger models and higher resolutions generally performing better; the 28B variant offers the highest performance but requires more computational resources, making it best suited to demanding scenarios where latency is not a major concern.
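Because the pre-trained checkpoints are intended as starting points for transfer, a typical workflow is to fine-tune one on a downstream task. Below is a minimal transfer-learning sketch under the same transformers API assumptions as above; the hub id, prompt prefix, and target text are illustrative placeholders, not the authors' training recipe:

```python
import torch
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-448"  # assumed hub id: 3B model at 448px
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
)
processor = AutoProcessor.from_pretrained(model_id)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(image, prefix, target):
    """One gradient step. `prefix` is the task prompt (e.g. "caption en")
    and `target` is the reference text; the processor's `suffix` argument
    turns the target into labels so the model computes the usual language
    modeling loss on the suffix tokens only."""
    inputs = processor(text=prefix, images=image, suffix=target, return_tensors="pt")
    loss = model(**inputs).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

In practice one would batch examples, move tensors to an accelerator, and possibly freeze the vision encoder or use parameter-efficient methods for the larger variants; the sketch only shows the core loss computation.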

The PaliGemma 2 series is notable for several reasons. First, offering models at different scales and resolutions allows researchers and developers to adapt performance according to their specific needs, computational resources, and desired balance between efficiency and accuracy. Second, the models have shown strong performance across a range of challenging tasks. For instance, PaliGemma 2 has achieved top scores in benchmarks involving text detection, optical music score recognition, and radiography report generation. In the HierText benchmark for OCR, the 896px variant of PaliGemma 2 outperformed previous models in word-level recognition accuracy, showing improvements in both precision and recall. Benchmark results also suggest that increasing model size and resolution generally leads to better performance across diverse tasks, highlighting the effective combination of visual and textual data representation.

Conclusion

Google’s release of PaliGemma 2 represents a meaningful step forward in vision-language models. By providing nine models across three scales with open-weight availability, PaliGemma 2 addresses a wide range of applications and user needs, from resource-constrained scenarios to high-performance research tasks. The versatility of these models and their ability to handle diverse transfer tasks make them valuable tools for both academic and industry applications. As more use cases integrate multimodal inputs, PaliGemma 2 is well-positioned to provide flexible and effective solutions for the future of AI.


Check out the Paper and the Models on Hugging Face. All credit for this research goes to the researchers of this project.



Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable to a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.



