Meta's AI Language Model LLaMA Gets Leaked

The model's weights are currently available via torrent.

Last week, Meta entered the AI language model race with the introduction of LLaMA, a large 65-billion-parameter language model designed to help researchers advance their work in this subfield of AI. Trained on texts from the 20 most widely spoken languages, LLaMA takes a sequence of words as input and predicts the next word, generating text recursively. This enables researchers who don't have access to large amounts of infrastructure to study language models.
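
For readers curious about the mechanics, here is a minimal sketch of that recursive next-word loop in Python. The toy bigram table stands in for LLaMA's actual learned distribution, and the names predict_next and generate are illustrative assumptions, not Meta's code:

```python
import random

# Hypothetical toy next-word table; a stand-in for LLaMA's learned
# distribution, purely for illustration.
BIGRAMS = {
    "the":       {"model": 0.6, "weights": 0.4},
    "model":     {"predicts": 0.7, "generates": 0.3},
    "predicts":  {"the": 1.0},
    "generates": {"text": 1.0},
    "weights":   {"leaked": 1.0},
    "text":      {"<eos>": 1.0},
    "leaked":    {"<eos>": 1.0},
}

def predict_next(context: list[str]) -> str:
    """Stand-in for the language model: sample the next word given the context."""
    choices = BIGRAMS.get(context[-1], {"<eos>": 1.0})
    words, probs = zip(*choices.items())
    return random.choices(words, weights=probs, k=1)[0]

def generate(prompt: list[str], max_new_tokens: int = 8) -> list[str]:
    """Recursively extend the prompt one predicted word at a time."""
    seq = list(prompt)
    for _ in range(max_new_tokens):
        nxt = predict_next(seq)
        if nxt == "<eos>":
            break
        seq.append(nxt)
    return seq

print(" ".join(generate(["the"])))
```

Each iteration feeds the entire sequence generated so far back into the model, which is what makes the process recursive: the output of one step becomes part of the input to the next.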

Although the model was released publicly under a non-commercial license, many people, including industry researchers and well-known AI enthusiasts, reported that they failed to get Meta's approval, with their applications declined for no apparent reason. Luckily for them, there now appears to be a workaround that lets one access LLaMA without having to apply officially.

First noticed by Replit CEO Amjad Masad, the model's weights have been leaked via torrent, with the leaker putting up "a PR to replace the Google Form in the repo with it". The torrent reportedly contains a number of weights that were likely trained with the method described in the LLaMA paper. The alleged leaker describes the GitHub repository's purpose as "a minimal, hackable, and readable example to load LLaMA models and run inference."
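
The article doesn't reproduce the repository's code, but as a rough, hypothetical illustration of what "load LLaMA models and run inference" typically involves in PyTorch, here is a generic sketch. The ToyLM class, the checkpoint path, and the greedy decoding loop are all assumptions for illustration, not the leaked repo's actual API:

```python
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """Tiny stand-in language model (embedding -> linear head); NOT LLaMA."""
    def __init__(self, vocab_size: int = 256, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Predict next-token logits from the last position's embedding.
        return self.head(self.embed(tokens)[:, -1, :])

model = ToyLM()
# Loading leaked weights would look roughly like this (hypothetical path):
# state = torch.load("checkpoint.pth", map_location="cpu")
# model.load_state_dict(state)
model.eval()

tokens = torch.tensor([[1, 2, 3]])  # dummy prompt token ids
with torch.no_grad():
    for _ in range(5):
        logits = model(tokens)
        nxt = logits.argmax(dim=-1, keepdim=True)  # greedy decoding
        tokens = torch.cat([tokens, nxt], dim=1)
print(tokens.tolist())
```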

You can learn more here.
