Announcing Minerva-7B: try it now!
Minerva-7B is the latest addition to the Minerva family of LLMs: the first family of LLMs trained from scratch on Italian data.
Large Language Models (LLMs) are transforming how we interact with technology, but most are designed with English as their primary focus, often leaving other languages underserved. Enter Minerva: a family of LLMs developed from scratch with the Italian language as its primary focus.
Minerva breaks new ground by prioritizing Italian at its core, unlike most models that adapt English-centric foundations to other languages. This allows Minerva to excel in capturing the richness of Italian vocabulary, syntax, and cultural nuances, setting a new standard for Italian-language AI.
Why Minerva Matters
Minerva isn’t just about language; it’s about inclusivity and fairness in AI. By focusing on a non-English language, Minerva addresses global biases in AI research and paves the way for more equitable technology. Built entirely on open data, Minerva also champions transparency, offering researchers and developers a platform for innovation.
Curious to learn more?
- Check out https://nlp.uniroma1.it/minerva
A Note on Responsible Use
Minerva is a research project, and while it demonstrates significant advances, it is not without limitations. The models may occasionally generate incorrect or biased outputs. We encourage users to approach Minerva’s responses critically and to use the models responsibly.
With Minerva, we’re not just building a language model; we’re building a more inclusive AI future for Italian and beyond.