mistral-nemo:latest

2M pulls · updated 10 months ago

A state-of-the-art 12B model with 128k context length, built by Mistral AI in collaboration with NVIDIA.

tools · 12b

994f3b8b7801 · 7.1GB

Architecture: llama · Parameters: 12.2B · Quantization: Q4_0

Template (truncated):
{{- range $i, $_ := .Messages }} {{- if eq .Role "user" }} {{- if and $.Tools (le (len (slice $.Mess

License: Apache License Version 2.0, January 2004

Parameters: { "stop": [ "[INST]", "[/INST]" ] }

Readme

Mistral NeMo is a 12B model built in collaboration with NVIDIA. Mistral NeMo offers a large context window of up to 128k tokens. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. Because it relies on a standard architecture, Mistral NeMo is easy to use and a drop-in replacement for any system using Mistral 7B.
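When the model is served locally through Ollama, it can be queried over Ollama's HTTP API. The sketch below builds a request body for the `/api/generate` endpoint using this model's tag and the `[INST]`/`[/INST]` stop tokens listed in its parameters; the `build_generate_request` helper name is hypothetical, and `num_ctx` is shown because it is the option that would be raised toward the model's 128k-token window (at the cost of memory). This is a minimal sketch, assuming a local Ollama server on the default port, not a definitive client.

```python
import json

def build_generate_request(prompt, num_ctx=8192):
    """Build a JSON body for Ollama's /api/generate endpoint.

    The stop tokens mirror the model's published parameters; num_ctx can
    be raised toward the 128k-token context window the model supports.
    """
    return {
        "model": "mistral-nemo:latest",
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
        "options": {
            "stop": ["[INST]", "[/INST]"],
            "num_ctx": num_ctx,
        },
    }

# Serialize the payload; POSTing it to http://localhost:11434/api/generate
# (e.g. with urllib or requests) would run the model.
body = build_generate_request("Why is the sky blue?")
print(json.dumps(body, indent=2))
```

Equivalently, `ollama run mistral-nemo` from the command line starts an interactive session with the same defaults.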

[Figure: nemo-base-performance.png]

Reference

Blog

Hugging Face