August 1, 2024
MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
(Xi Victoria Lin, Akshat Shrivastava, Liang Luo, Srinivasan Iyer, Mike Lewis, Gargi Ghosh, Luke Zettlemoyer, Armen Aghajanyan)
We introduce MoMa, a novel modality-aware mixture-of-experts (MoE) architecture designed for pre-training mixed-modal, early-fusion language models. MoMa processes images and text in arbitrary sequences by dividing expert modules into modality-specific groups. These groups exclusively process designated tokens while employing learned routing within each group to maintain semantically informed adaptivity. Our empirical results reveal substantial pre-training efficiency gains through this modality-specific parameter allocation. Under a 1-trillion-token training budget, the MoMa 1.4B model, featuring 4 text experts and 4 image experts, achieves impressive FLOPs savings: 3.7x overall, with 2.6x for text and 5.2x for image processing compared to a compute-equivalent dense baseline, measured by pre-training loss. This outperforms the standard expert-choice MoE with 8 mixed-modal experts, which achieves 3x overall FLOPs savings (3x for text, 2.8x for image). Combining MoMa with mixture-of-depths (MoD) further improves pre-training FLOPs savings to 4.2x overall (text: 3.4x, image: 5.3x), although this combination hurts performance in causal inference due to increased sensitivity to router accuracy. These results demonstrate MoMa's potential to significantly advance the efficiency of mixed-modal, early-fusion language model pre-training, paving the way for more resource-efficient and capable multimodal AI systems.
MoE applied to a multimodal model. They use expert groups separated by modality and apply Mixture of Depths (https://arxiv.org/abs/2404.02258); training starts with one expert per modality and the model is then expanded via sparse upcycling. They also use expert-choice routing, with adjustments to make it work in the autoregressive setting. Very interesting.
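A minimal sketch of the modality-aware routing idea, not the authors' implementation: tokens are split by a modality mask, each modality has its own expert group, and a simplified expert-choice step (each expert picks its top-c tokens within its group) is applied. Module and parameter names are assumptions, and the paper's causal adjustment to expert choice is omitted.

```python
# Sketch of modality-aware MoE routing (assumed names; simplified expert choice).
import torch
import torch.nn as nn

class ModalityAwareMoE(nn.Module):
    def __init__(self, d_model: int, n_text_experts: int = 4,
                 n_image_experts: int = 4, capacity_factor: float = 1.0):
        super().__init__()
        self.capacity_factor = capacity_factor
        make_expert = lambda: nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model))
        self.text_experts = nn.ModuleList(make_expert() for _ in range(n_text_experts))
        self.image_experts = nn.ModuleList(make_expert() for _ in range(n_image_experts))
        self.text_router = nn.Linear(d_model, n_text_experts)
        self.image_router = nn.Linear(d_model, n_image_experts)

    def _route_group(self, x, router, experts):
        # Expert-choice routing within one modality group: each expert selects
        # its top-c tokens (c = capacity), so expert load is balanced by design.
        n_tokens = x.shape[0]
        if n_tokens == 0:
            return torch.zeros_like(x)
        scores = router(x).softmax(dim=-1)                  # [n_tokens, n_experts]
        capacity = max(1, int(self.capacity_factor * n_tokens / len(experts)))
        out = torch.zeros_like(x)
        for e, expert in enumerate(experts):
            weight, idx = scores[:, e].topk(min(capacity, n_tokens))
            out[idx] += weight.unsqueeze(-1) * expert(x[idx])
        return out

    def forward(self, hidden, is_image):
        # hidden: [n_tokens, d_model]; is_image: [n_tokens] boolean modality mask.
        out = torch.zeros_like(hidden)
        out[~is_image] = self._route_group(hidden[~is_image],
                                           self.text_router, self.text_experts)
        out[is_image] = self._route_group(hidden[is_image],
                                          self.image_router, self.image_experts)
        return out

# Usage: a mixed-modal sequence of 16 tokens, first 10 text, last 6 image.
layer = ModalityAwareMoE(d_model=64)
hidden = torch.randn(16, 64)
is_image = torch.tensor([False] * 10 + [True] * 6)
print(layer(hidden, is_image).shape)  # torch.Size([16, 64])
```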
#vision-language #moe
Large Language Monkeys: Scaling Inference Compute with Repeated Sampling
(Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, Azalia Mirhoseini)
Scaling the amount of compute used to train language models has dramatically improved their capabilities. However, when it comes to inference, we often limit the amount of compute to only one attempt per problem. Here, we explore inference compute as another axis for scaling by increasing the number of generated samples. Across multiple tasks and models, we observe that coverage - the fraction of problems solved by any attempt - scales with the number of samples over four orders of magnitude. In domains like coding and formal proofs, where all answers can be automatically verified, these increases in coverage directly translate into improved performance. When we apply repeated sampling to SWE-bench Lite, the fraction of issues solved with DeepSeek-V2-Coder-Instruct increases from 15.9% with one sample to 56% with 250 samples, outperforming the single-attempt state-of-the-art of 43% which uses more capable frontier models. Moreover, using current API pricing, amplifying the cheaper DeepSeek model with five samples is more cost-effective and solves more issues than paying a premium for one sample from GPT-4o or Claude 3.5 Sonnet. Interestingly, the relationship between coverage and the number of samples is often log-linear and can be modelled with an exponentiated power law, suggesting the existence of inference-time scaling laws. Finally, we find that identifying correct samples out of many generations remains an important direction for future research in domains without automatic verifiers. When solving math word problems from GSM8K and MATH, coverage with Llama-3 models grows to over 95% with 10,000 samples. However, common methods to pick correct solutions from a sample collection, such as majority voting or reward models, plateau beyond several hundred samples and fail to fully scale with the sample budget.
How performance changes as the number of samples increases, which is to say, how performance scales with inference-time compute. With an oracle verifier, you get a log-linear curve. The problem, of course, is how to obtain that verifier in the first place.
For coding and math it seems plausible that such verifiers exist, but the paper itself points out the limitations of approaches like unit tests. For math, I'm not sure how well it would work.
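The coverage metric here is essentially pass@k. A minimal sketch using the standard unbiased estimator (Chen et al., 2021) with made-up numbers rather than the paper's data; the paper additionally fits an exponentiated power law of roughly the form coverage ≈ exp(a·k^b).

```python
# Sketch of coverage (pass@k) under repeated sampling; illustrative counts only.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n generations
    (c of which are correct) solves the problem; unbiased estimator."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

def coverage(correct_counts, n: int, k: int) -> float:
    """Fraction of problems solved by any of k attempts, averaged over problems."""
    return float(np.mean([pass_at_k(n, c, k) for c in correct_counts]))

# Example: 5 problems, 100 samples each; per-problem counts of correct samples.
correct_counts = [0, 1, 3, 10, 60]
for k in (1, 10, 100):
    print(k, round(coverage(correct_counts, n=100, k=k), 3))
```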
#scaling-law
Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2
(Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, János Kramár, Anca Dragan, Rohin Shah and Neel Nanda)
Sparse autoencoders (SAEs) are an unsupervised method for learning a sparse decomposition of a neural network’s latent representations into seemingly interpretable features. Despite recent excitement about their potential, research applications outside of industry are limited by the high cost of training a comprehensive suite of SAEs. In this work, we introduce Gemma Scope, an open suite of JumpReLU SAEs trained on all layers and sub-layers of Gemma 2 2B and 9B and select layers of Gemma 2 27B base models. We evaluate the quality of each SAE on standard metrics and release these results. We hope that by releasing these SAE weights, we can help push forward safety and interpretability research in the community. Weights, a tutorial and an interactive demo can be found at https://huggingface.co/google/gemma-scope.
Google has released sparse autoencoders trained on the Gemma 2 models. They use JumpReLU, which was published not long ago (https://arxiv.org/abs/2407.14435). Storing the activations alone took 20 petabytes, and even so, storing them was cheaper than recomputing them each time. I suppose this means interpretability research also takes serious resources.
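A minimal sketch of a JumpReLU SAE forward pass as described in the JumpReLU paper linked above: the encoder pre-activation passes through unchanged but is zeroed below a learned per-feature threshold. Dimensions and parameter names are illustrative, and the straight-through estimator used to train the threshold and sparsity penalty are omitted.

```python
# Sketch of a JumpReLU sparse autoencoder (assumed names; training tricks omitted).
import torch
import torch.nn as nn

class JumpReLUSAE(nn.Module):
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        self.log_threshold = nn.Parameter(torch.zeros(d_sae))  # theta = exp(...) > 0

    def encode(self, x):
        pre = x @ self.W_enc + self.b_enc
        theta = self.log_threshold.exp()
        # JumpReLU: identity above a learned per-feature threshold, zero below it.
        return pre * (pre > theta)

    def decode(self, f):
        return f @ self.W_dec + self.b_dec

    def forward(self, x):
        f = self.encode(x)            # sparse feature activations
        return self.decode(f), f      # reconstruction and features

# Usage on a batch of residual-stream activations (dimensions are illustrative).
sae = JumpReLUSAE(d_model=2304, d_sae=16384)
acts = torch.randn(8, 2304)
recon, feats = sae(acts)
print(recon.shape, (feats > 0).float().mean().item())  # reconstruction, sparsity
```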
#mechanistic-interpretation