Meet OpenLLaMA: An Open-Source Replica of Meta AI's LLaMA Large Language Model


A new development in large language models has emerged with the release of OpenLLaMA, an open-source reproduction of Meta AI's LLaMA model. The creators of OpenLLaMA have made the permissively licensed model publicly available as a 7B OpenLLaMA model trained on 200 billion tokens. The release includes PyTorch and JAX weights of the pre-trained OpenLLaMA models, evaluation results, and a comparison against the original LLaMA models. This development has significant implications for machine learning, particularly for researchers who require large language models but face challenges accessing proprietary ones.

The creators of OpenLLaMA have shared details on how they trained their models on the RedPajama dataset, a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. They followed the same preprocessing and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between their approach and the original one is the dataset: OpenLLaMA uses the RedPajama dataset rather than the one used by the original LLaMA.
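
For reference, the 7B hyperparameters reported in the original LLaMA paper, which OpenLLaMA states it follows, can be summarized roughly as below. This is an illustrative sketch, not OpenLLaMA's actual configuration file.

```python
# Illustrative summary (not OpenLLaMA's actual config) of the 7B
# hyperparameters reported in the original LLaMA paper.
llama_7b_config = {
    "hidden_dim": 4096,
    "num_layers": 32,
    "num_attention_heads": 32,
    "context_length": 2048,
    "optimizer": "AdamW",        # beta1=0.9, beta2=0.95, weight decay 0.1
    "peak_learning_rate": 3e-4,  # cosine decay schedule
    "warmup_steps": 2000,
    "gradient_clipping": 1.0,
    "tokens_per_batch": 4_000_000,
}
```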

The models were trained on cloud TPU-v4s using EasyLM, a JAX-based training pipeline developed for training and fine-tuning language models. They employed a combination of normal data parallelism and fully sharded data parallelism (also known as ZeRO stage 3) to balance training throughput and memory usage. Overall, their training run achieved a throughput of over 1,900 tokens/second per TPU-v4 chip.
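
As a rough illustration of that parallelism strategy (and not EasyLM's actual code), a JAX sharding sketch might look like the following, assuming eight accelerator devices arranged as a 2-way data-parallel by 4-way fully-sharded grid:

```python
# Minimal sketch of combining plain data parallelism with fully sharded
# data parallelism (ZeRO stage 3 style) in JAX. Assumes 8 accelerator
# devices are available; axis names and shapes are illustrative.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = np.array(jax.devices()).reshape(2, 4)
mesh = Mesh(devices, axis_names=("dp", "fsdp"))

# Each weight matrix is sharded along the 'fsdp' axis, so every chip
# holds only a 1/4 slice of the parameters.
weight = jax.device_put(
    jnp.zeros((4096, 11008)),
    NamedSharding(mesh, P("fsdp", None)),
)

# The global batch is split across both axes, so all 8 chips process
# different examples.
batch = jax.device_put(
    jnp.zeros((64, 2048), dtype=jnp.int32),
    NamedSharding(mesh, P(("dp", "fsdp"), None)),
)
```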

The performance of OpenLLaMA was evaluated on a number of tasks using the lm-evaluation-harness. The results were compared against the original LLaMA model and GPT-J, a 6B-parameter model trained on the Pile dataset by EleutherAI. The evaluation metrics for the original LLaMA model were generated by running it on the same tasks. The results for the LLaMA model differed slightly from those reported in the original LLaMA paper, which may be due to differences in evaluation protocols. Nevertheless, OpenLLaMA exhibited comparable or better performance than the original LLaMA and GPT-J across most tasks, according to the presented results. Although OpenLLaMA was trained on 200 billion tokens instead of the 1 trillion tokens used for the original LLaMA and the 500 billion tokens used for GPT-J, its performance is expected to improve even further once training on 1 trillion tokens is complete.
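
For readers who want to run this kind of evaluation themselves, a minimal sketch using the lm-evaluation-harness Python API might look like the following. The model repository id and task names are illustrative, and the exact API surface varies between harness versions.

```python
# Minimal sketch of a few-shot evaluation with lm-evaluation-harness.
# The repo id below is assumed for illustration; check the OpenLLaMA
# release for the actual checkpoint name.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=openlm-research/open_llama_7b_preview_200bt",
    tasks=["hellaswag", "piqa", "arc_easy"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```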

To encourage feedback and collaboration from the community, the team behind OpenLLaMA has released a preview checkpoint of their weights. These weights are available in two formats: an EasyLM format for use with their EasyLM framework, and a PyTorch format for use with the Hugging Face transformers library. Unlike the original LLaMA model, OpenLLaMA's tokenizer and weights are trained entirely from scratch, so obtaining the original LLaMA tokenizer and weights is no longer necessary. However, it is important to note that OpenLLaMA uses the BOS (beginning of sentence) token (id=1) during training, so this token should be prepended for best performance during few-shot evaluation. The preview checkpoint weights and the EasyLM framework are permissively licensed under the Apache 2.0 license. The team is currently focused on completing the training process on the entire RedPajama dataset to allow for an apples-to-apples comparison between the original LLaMA and OpenLLaMA. Additionally, they are working on training a smaller 3B model for low-resource use cases. The team plans to release more updates soon.
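
A minimal sketch of loading the PyTorch-format weights with the Hugging Face transformers library, with the BOS token (id=1) prepended as the team recommends, might look like this. The repository id is assumed for illustration and may differ from the actual preview checkpoint.

```python
# Minimal sketch: load the PyTorch-format OpenLLaMA weights via
# transformers and make sure the BOS token (id=1) is prepended.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openlm-research/open_llama_7b_preview_200bt"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

prompt = "Q: What is the largest animal?\nA:"
# add_special_tokens=True makes the tokenizer prepend the BOS token,
# which the OpenLLaMA team recommends for few-shot evaluation.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```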


Check out the GitHub link. Don't forget to join our 20k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com



Niharika is a technical consulting intern at Marktechpost. She is a third-year undergraduate, currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in machine learning, data science, and AI, and an avid reader of the latest developments in these fields.

