phi4-mini:3.8b

Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and adds long-awaited support for function calling.

tools · 3.8b

78fad5d182a7 · 2.5GB · phi3 architecture · 3.84B parameters · Q4_K_M quantization

License: MIT (Copyright (c) Microsoft Corporation)

Readme

Note: this model requires Ollama 0.5.13 or later.

Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites, with a focus on high-quality, reasoning-dense data. The model belongs to the Phi-4 model family and supports a 128K token context length. It was further enhanced with supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
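As a quick orientation, here is a minimal sketch of chatting with the model through a locally running Ollama server's REST API. The endpoint shown is Ollama's default local address, and the prompt and num_ctx value are illustrative assumptions; Ollama's default context window is much smaller than the model's 128K maximum, so raise num_ctx explicitly when long inputs matter.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

payload = {
    "model": "phi4-mini:3.8b",
    "messages": [
        {"role": "system", "content": "You are a concise math tutor."},
        {"role": "user", "content": "Solve 12x + 7 = 43 and show your steps."},
    ],
    "stream": False,  # return a single JSON object instead of streamed chunks
    # The model supports up to 128K tokens of context; Ollama's default num_ctx
    # is far smaller, so it is raised here as an illustrative example.
    "options": {"num_ctx": 32768},
}

reply = requests.post(OLLAMA_URL, json=payload).json()
print(reply["message"]["content"])
```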

Primary use cases

The model is intended for broad multilingual commercial and research use. It is suited to general purpose AI systems and applications that require:

  • Memory/compute constrained environments
  • Latency bound scenarios
  • Strong reasoning (especially math and logic)

The model is designed to accelerate research on language and multimodal models, and to serve as a building block for generative AI powered features.
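The function calling support mentioned above is exposed through the same REST API's tools field. The sketch below is a minimal example under stated assumptions: the get_weather tool and its JSON schema are hypothetical placeholders, the endpoint is Ollama's default local address, and the exact response shape may vary across Ollama versions.

```python
import json

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

# Hypothetical tool definition in the OpenAI-style schema accepted by Ollama.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool name for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

response = requests.post(OLLAMA_URL, json={
    "model": "phi4-mini:3.8b",
    "messages": [{"role": "user", "content": "What is the weather in Paris right now?"}],
    "tools": tools,
    "stream": False,
})
message = response.json()["message"]

# If the model chose to call a tool, the calls appear under message["tool_calls"];
# each entry names the function and carries its arguments as a JSON object.
for call in message.get("tool_calls", []):
    fn = call["function"]
    print(fn["name"], json.dumps(fn["arguments"]))
```

The caller is still responsible for executing the requested tool and feeding its result back to the model, typically as a follow-up message with role "tool".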

References

Hugging Face

Blog post