GENAIWIKI

Phi-4


Phi-4 is Microsoft Research’s small language model line focused on strong reasoning per parameter for on-device and low-cost cloud scenarios.

Best for: edge and hybrid cloud assistants · Cost tier: Phi
Compared to: Phi-3 Medium · Replaces: Phi-3 Medium

Small LLM · Release: not listed · License: see vendor

Tags: slm · microsoft · edge

Updated 1 day ago · Verified Apr 2026 · Score 78

Decision summary

Why teams reach for it, where it fits, and what to watch for — before you dive into specs.

Why teams choose it

  • Excellent for routing and medium-complexity tasks at low cost.
  • Pair with retrieval for factual enterprise Q&A.

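The "pair with retrieval" pattern above can be sketched in a few lines. This is a toy illustration with hypothetical helper names and a naive keyword-overlap retriever, not an actual Phi-4 API; in practice you would swap in a vector store and the model endpoint of your choice.

```python
# Minimal sketch of pairing a small model with retrieval for factual Q&A.
# All names here are illustrative; substitute a real vector store and a
# real Phi-4 endpoint in production.

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Ground the model by prepending retrieved context to the question."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "The quarterly report is due on the first Monday of each quarter.",
    "Expense claims must be filed within 30 days.",
    "Office hours are 9am to 5pm on weekdays.",
]
prompt = build_prompt("When is the quarterly report due?", docs)
```

The prompt, not the model's parametric memory, then carries the enterprise facts, which is what makes a small model viable for this workload.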
Best use cases

  • Edge and hybrid cloud assistants
  • Latency-sensitive IDE features

Tradeoffs

  • Not a full replacement for GPT-4-class models on the hardest tasks.
  • Model packaging varies; watch for quantization effects on output quality.

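The quantization tradeoff above comes down to precision loss when weights are round-tripped through a lower-precision format. The sketch below uses a simplified symmetric int8 scheme to show the effect; it is not any specific packaging format Phi-4 ships in.

```python
# Toy illustration of why quantized packagings can differ in quality:
# int8 round-tripping loses precision relative to the float weights.
# Simplified per-tensor symmetric quantization, for illustration only.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map floats to int8 range [-127, 127] using a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.50, 0.031]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Small but nonzero per-weight error; across many layers it compounds.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Different quantization schemes (per-channel scales, 4-bit formats, outlier handling) trade this error against size and speed, which is why the same model can behave differently across packagings.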
Technical details

Modalities, benchmarks, and release context.

Modalities

What goes in and what comes out.

Inputs: text
Outputs: text
Capabilities: reasoning, coding, edge
License: see vendor (release date not listed)

Benchmarks snapshot

Structured JSON for reproducible comparisons.

No benchmark data yet — see comparisons for relative performance.



This page is based on publicly available documentation, benchmarks, and real-world usage patterns. Last reviewed for accuracy in April 2026.