
Meta Introduced 65-Billion-Parameter Language Model LLaMA

It should help researchers advance their work in this field of AI.

Meta has released LLaMA (Large Language Model Meta AI), "a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI."

It allows researchers who don't have access to large amounts of infrastructure to study these models. According to the creators, training such smaller, more performant foundation models requires less computing power and resources.

"LLaMA works by taking a sequence of words as an input and predicts the next word to recursively generate text. To train our model, we chose text from the 20 languages with the most speakers, focusing on those with Latin and Cyrillic alphabets."

It comes in several sizes – 7B, 13B, 33B, and 65B parameters – with Meta also providing a LLaMA model card that details how it was built. The 33B and 65B variants were trained on 1.4 trillion tokens, the 7B and 13B variants on 1 trillion.

LLaMA is released under a noncommercial license focused on research use cases, granting access to academic researchers, those affiliated with government, civil society, and academic organizations, and industry research laboratories.

As a foundation model, LLaMA is easier to fine-tune and retrain. Like larger systems, it can help with a number of tasks, such as generating creative text, proving mathematical theorems, predicting protein structures, and more.
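As a rough illustration of what fine-tuning a foundation model means in practice, the sketch below continues training a causal language model on a small public text corpus with the Hugging Face Trainer. The model identifier is again a placeholder, and wikitext-2 merely stands in for whatever task-specific data a researcher would actually use.

```python
# Minimal fine-tuning sketch for a causal language model.
# Assumes the `transformers` and `datasets` libraries; MODEL_ID is a
# placeholder for locally available LLaMA weights.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_ID = "path/to/llama-7b"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# wikitext-2 is used purely as an example corpus.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-finetuned",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, a researcher would swap in their own dataset and adjust the batch size and learning rate to fit the available hardware.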

Learn more about LLaMA and its potential here and don't forget to join our 80 Level Talent platform, our Reddit page, and our Telegram channel, follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
