How AI Models Learn to Solve Problems That Humans Can’t

Natural language processing relies on large language models (LLMs) to power applications such as language translation, sentiment analysis, speech recognition, and text summarization. These models are typically aligned using supervised data derived from human feedback, but as they begin to surpass human capabilities, that kind of supervision becomes impossible to provide, and alignment grows harder as model behavior becomes more complex and nuanced. Researchers at Carnegie Mellon University, Peking University, MIT-IBM Watson AI Lab, University of Cambridge, Max Planck Institute for Intelligent Systems, and UMass Amherst have developed the Easy-to-Hard Generalization (E2H) methodology, which tackles alignment on complex tasks without relying on human feedback for those tasks.

Traditional alignment techniques rely heavily on supervised fine-tuning and Reinforcement Learning from Human Feedback (RLHF). This reliance hinders scaling, as collecting high-quality human feedback is labor-intensive and costly. Furthermore, these models struggle to generalize to scenarios beyond their learned behaviors. There is therefore a pressing need for a methodology that can handle complex tasks without exhaustive human supervision.

The proposed solution, Easy-to-Hard Generalization, employs a three-step methodology to achieve scalable task generalization:

  1. Process-Supervised Reward Models (PRMs): Reward models are trained with human supervision on simple, human-level tasks. These trained models then evaluate and guide the AI's problem solving on harder, more complex tasks.
  2. Easy-to-Hard Generalization: The models are gradually exposed to more complex tasks as they train. Predictions and evaluations from the easier tasks are used to guide learning on harder ones.
  3. Iterative Refinement: The models are adjusted based on the feedback provided by the PRMs.
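The three steps above can be sketched as a single loop: a PRM trained only on easy, human-supervised problems scores candidate solutions to harder problems, and the best-scored candidates are kept for refining the policy. The following is a minimal illustrative sketch, not the authors' actual implementation; the function names, the stubbed candidate generator, and the deterministic stand-in for PRM scoring are all hypothetical.

```python
# Hypothetical sketch of E2H-style training: a PRM trained on easy
# tasks reranks candidate solutions to hard tasks. All names and the
# scoring stub are illustrative, not the paper's implementation.
from dataclasses import dataclass
import random


@dataclass
class Problem:
    question: str
    difficulty: str  # "easy" (human-labeled) or "hard" (unlabeled)


def generate_candidates(policy, problem, n=4):
    """Sample n candidate solutions from the policy model (stubbed)."""
    return [f"{policy}-solution-{i}-to-{problem.question}" for i in range(n)]


def prm_score(prm, solution):
    """Process-supervised reward: in the real method, each reasoning
    step is scored and the scores aggregated. A seeded random value
    stands in for that score here."""
    random.seed(hash((prm, solution)) % (2**32))
    return random.random()


def easy_to_hard_round(policy, prm, problems):
    """One refinement round: for every problem, keep the candidate the
    PRM scores highest. The selected pairs would then be used to
    fine-tune the policy (iterative refinement)."""
    selected = []
    for p in problems:
        candidates = generate_candidates(policy, p)
        best = max(candidates, key=lambda s: prm_score(prm, s))
        selected.append((p.question, best))
    return selected


problems = [Problem("2+2", "easy"), Problem("prove theorem X", "hard")]
picked = easy_to_hard_round("policy-7b", "prm-easy-only", problems)
for question, solution in picked:
    print(question, "->", solution)
```

The key design point the sketch tries to capture is that the PRM never needs human labels on the hard problems: its judgment, learned on easy data, is what transfers.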

This iterative refinement process lets AI shift from depending on dense human feedback to requiring far fewer human annotations. Generalization to tasks that deviate from learned behavior becomes smoother. The method thus improves AI performance in situations where direct human supervision is impractical.

Performance comparisons show significant improvements: on the MATH500 benchmark, a 7B process-supervised RL model achieved 34.0% accuracy and a 34B model reached 52.5%, using human supervision only on easy problems. The method also proved effective on the APPS coding benchmark. These results suggest alignment outcomes comparable or superior to RLHF while significantly reducing the need for human-labeled data on complex tasks.

This research addresses the critical challenge of AI alignment beyond human supervision by introducing an innovative, easy-to-hard generalization framework. The proposed method demonstrates promising results in enabling AI systems to tackle increasingly complex tasks while aligning with human values. Notable strengths include its novel approach to scalable alignment, effectiveness across domains such as mathematics and coding, and potential to address limitations of current alignment methods. However, further validation in diverse, real-world scenarios is necessary. Overall, this work marks a significant step toward developing AI systems that can safely and effectively operate without direct human supervision, paving the way for more advanced and aligned AI technologies.


Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.



Afeerah Naseem is a consulting intern at Marktechpost. She is pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is passionate about data science and fascinated by the role of artificial intelligence in solving real-world problems. She loves discovering new technologies and exploring how they can make everyday tasks easier and more efficient.



