GENAIWIKI

Vector search

Search GenAIWiki

Query the full knowledge graph. Results rank by semantic similarity across all six libraries.
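The similarity ranking described here can be sketched in a few lines: embed the query, score it against each stored embedding with cosine similarity, and sort descending. A minimal sketch with toy vectors (the embeddings and entry IDs are invented for illustration):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank(query_vec, docs):
    # docs: list of (entry_id, embedding) pairs; highest similarity first.
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in docs]
    return sorted(scored, key=lambda p: p[1], reverse=True)

docs = [
    ("ollama", [0.9, 0.1, 0.2]),
    ("langchain", [0.2, 0.8, 0.3]),
    ("groq", [0.7, 0.3, 0.1]),
]
query = [1.0, 0.0, 0.1]
print(rank(query, docs)[0][0])  # prints "ollama"
```

In a real deployment the vectors come from an embedding model and the sort is replaced by an approximate nearest-neighbor index, but the scoring step is the same.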

Search results for “large language model text”

Models (7)

Prompts (3)

Tools (10)

OpenAI Playground

Browser-based console for prototyping against OpenAI's widely used frontier model APIs for text, vision, and audio, with parameter controls and prompt iteration backed by strong developer tooling and broad ecosystem adoption across production AI applications.

Best match

Ollama

Local model runtime for running and serving open LLMs on developer machines and private infrastructure, with simple pull/run workflows and API access.

Best match

LangGraph

LangGraph is a library for building stateful, cyclic agent and workflow graphs on top of LangChain—suited to multi-step tools, human-in-the-loop approvals, and durable execution patterns that go beyond linear chains.
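The stateful, cyclic control flow described above can be illustrated in plain Python: nodes read and mutate shared state and name the next node, with a cycle standing in for a revision loop. This is a conceptual sketch only, with invented node names and state shape, not LangGraph's actual API:

```python
# Hand-rolled state machine illustrating cyclic, stateful graph execution.
# Node functions take the state dict and return the name of the next node.

def draft(state):
    state["draft"] = f"answer v{state['revisions'] + 1}"
    return "review"

def review(state):
    state["revisions"] += 1
    # Cycle back for another revision until the budget is spent.
    return "draft" if state["revisions"] < 2 else "done"

NODES = {"draft": draft, "review": review}

def run(state, entry="draft"):
    node = entry
    while node != "done":
        node = NODES[node](state)
    return state

final = run({"revisions": 0})
print(final)  # {'revisions': 2, 'draft': 'answer v2'}
```

LangGraph adds the pieces this sketch lacks: typed state, checkpointing for durable execution, and interrupts for human-in-the-loop approval.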

Hugging Face Transformers

Open-source Python library providing pretrained model architectures, tokenizers, and inference pipelines across NLP, vision, audio, and multimodal tasks, with a unified API for loading models from the Hugging Face Hub.

Hugging Face

Hub for open models, datasets, and Spaces demos, plus Inference Endpoints, Transformers, and enterprise features for teams that train, fine-tune, or serve open-weight and partner models at scale.

Groq

GroqCloud offers very low-latency, high-throughput LLM inference on Groq's LPU (Language Processing Unit) hardware, with OpenAI-compatible APIs for select open and partner models, aimed at interactive and batch production workloads.

LangChain

Application framework for orchestrating LLM workflows, tool calling, retrieval, and agents across multiple providers in Python and TypeScript ecosystems.

DSPy

DSPy is a programming framework for building LM pipelines declaratively—optimizing prompts and few-shot demonstrations with compilers and metrics instead of hand-tuning every string—aimed at researchers and product teams who want systematic prompt improvement tied to eval scores.
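The "optimize instead of hand-tune" idea can be illustrated without DSPy itself: score each candidate few-shot demonstration against an eval set with a metric, and keep the best. A toy sketch in which the LM, the demos, and the metric are all invented stand-ins (not DSPy's API):

```python
def fake_lm(prompt, question):
    # Stand-in for a real LM call: answers correctly only when the
    # prompt contains a worked sentiment example.
    if "sentiment" in prompt.lower():
        return {"great movie": "positive", "awful plot": "negative"}[question]
    return "unknown"

def accuracy(demo, evalset):
    # Metric: fraction of eval questions answered correctly with this demo.
    prompt = f"Example: {demo}\nAnswer the question."
    hits = sum(fake_lm(prompt, q) == gold for q, gold in evalset)
    return hits / len(evalset)

candidates = [
    "Translate 'bonjour' -> 'hello'",
    "Sentiment of 'I loved it' -> positive",
]
evalset = [("great movie", "positive"), ("awful plot", "negative")]

# Compile step: pick the demonstration that maximizes the metric.
best = max(candidates, key=lambda d: accuracy(d, evalset))
print(best)  # the sentiment demo wins; the translation demo scores 0.0
```

DSPy generalizes this loop: its optimizers search over instructions and demonstrations for whole multi-stage pipelines, driven by the same metric-on-evalset signal.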

Together AI

Inference platform for open-source and frontier model APIs with broad model catalog coverage, cost controls, and production endpoints for text and multimodal workloads.

LanceDB

LanceDB is an embedded, serverless-friendly vector database built on the Lance columnar format—optimized for multimodal and large-scale local or object-store–backed retrieval with a small operational footprint for data science and edge-style deployments.

Tutorials (7)

Reducing Hallucinations with Citation Constraints in Academic Research Models

This tutorial outlines methods to reduce hallucinations in academic research models by implementing citation constraints. It targets researchers and developers working on language models for academic purposes. Prerequisites include familiarity with natural language processing and model training.

Best match
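One simple form of citation constraint is a post-hoc filter: reject any generated sentence that lacks a citation marker or cites a source not in the reference set. A minimal sketch, where the `[n]` marker format and the source table are assumptions for illustration:

```python
import re

# Known reference set; any cited ID must resolve to one of these.
SOURCES = {"1": "Smith et al. 2021", "2": "Lee 2023"}
CITE = re.compile(r"\[(\d+)\]")

def check_citations(text):
    # Flag sentences that are uncited or cite an unknown source ID.
    violations = []
    for sentence in filter(None, (s.strip() for s in text.split("."))):
        ids = CITE.findall(sentence)
        if not ids:
            violations.append(("uncited", sentence))
        elif any(i not in SOURCES for i in ids):
            violations.append(("unknown source", sentence))
    return violations

out = "Transformers dominate NLP [1]. They never hallucinate. See also [9]."
print(check_citations(out))
```

A rejected sentence can then be dropped or regenerated, which is the mechanism that suppresses uncited (and thus easily hallucinated) claims.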

Observability: Traces for LLM + Tool Spans

This tutorial covers implementing observability practices to trace interactions between large language models (LLMs) and external tools. Prerequisites include knowledge of observability tooling and LLM architectures.

Best match
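The trace/span idea can be sketched with a toy tracer that records one span per LLM or tool call, so a whole request can later be reconstructed as a trace. Class and field names here are invented for illustration, not any specific observability SDK:

```python
import time

class Tracer:
    def __init__(self):
        self.spans = []

    def span(self, name, kind, fn, *args):
        # Time the wrapped call and record one span per invocation.
        start = time.perf_counter()
        result = fn(*args)
        self.spans.append({
            "name": name,
            "kind": kind,  # "llm" or "tool"
            "duration_s": time.perf_counter() - start,
        })
        return result

tracer = Tracer()
plan = tracer.span("plan", "llm", lambda q: f"search for {q}", "ollama")
hits = tracer.span("web_search", "tool", lambda q: [q.upper()], plan)
print([s["name"] for s in tracer.spans])  # ['plan', 'web_search']
```

Production tracers additionally propagate a trace ID across services and attach attributes such as token counts and tool arguments to each span.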

Canary Prompts for Regression Detection

This tutorial shows how to use canary prompts to detect regressions in language models. Prerequisites include familiarity with regression testing and LLM evaluation metrics.
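The core of the canary technique: replay a fixed set of prompts with known-good answers after every model change and diff the outputs against recorded baselines. A minimal sketch with invented prompts and stand-in model functions:

```python
# Fixed canary prompts with their recorded baseline answers.
CANARIES = {
    "What is 2 + 2?": "4",
    "Capital of France?": "Paris",
}

def detect_regressions(model_fn, baselines):
    # Return every canary prompt whose answer drifted from baseline.
    return [p for p, expected in baselines.items()
            if model_fn(p).strip() != expected]

def old_model(prompt):
    return {"What is 2 + 2?": "4", "Capital of France?": "Paris"}[prompt]

def new_model(prompt):
    # Simulates a regression on arithmetic after a hypothetical fine-tune.
    return "5" if "2 + 2" in prompt else old_model(prompt)

print(detect_regressions(new_model, CANARIES))  # ['What is 2 + 2?']
```

In practice exact string matching is usually replaced by a semantic or rubric-based comparison, since benign paraphrases should not trip the canary.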

SLI/SLO for Generative Endpoints

Establishing Service Level Indicators (SLIs) and Service Level Objectives (SLOs) for generative endpoints is crucial for maintaining quality and reliability. This tutorial outlines how to define and implement SLIs/SLOs effectively.
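A concrete example of one such SLI/SLO pair: measure the fraction of requests that complete within a latency budget over a window, and compare it to the objective. The threshold and target below are illustrative numbers, not recommendations:

```python
SLO_TARGET = 0.95          # objective: >= 95% of requests...
LATENCY_BUDGET_S = 2.0     # ...complete within 2 seconds

def latency_sli(latencies_s):
    # SLI: fraction of requests in the window that met the latency budget.
    ok = sum(1 for t in latencies_s if t <= LATENCY_BUDGET_S)
    return ok / len(latencies_s)

window = [0.8, 1.2, 1.9, 2.5, 0.6, 1.1, 3.2, 0.9, 1.4, 1.0]
sli = latency_sli(window)
print(f"SLI={sli:.2f}, SLO met: {sli >= SLO_TARGET}")  # SLI=0.80, SLO met: False
```

Generative endpoints typically pair this latency SLI with quality-oriented ones, such as the fraction of responses that pass an automated eval, since fast but wrong answers still burn the error budget.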

Multimodal Prompts for Document QA in Legal Settings

Using multimodal prompts can improve document question answering (QA) in legal contexts. Prerequisites include access to relevant legal documents and a model capable of processing multimodal inputs.

Quantization Impact on Retrieval Quality in Healthcare Applications

This tutorial investigates the effects of quantization on retrieval quality in healthcare applications, focusing on the trade-offs between model size and accuracy. Prerequisites include a basic understanding of machine learning models and quantization techniques.
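The trade-off the tutorial studies can be measured directly: round-trip embeddings through int8 quantization and check whether the retrieved results change relative to the full-precision baseline. A toy sketch with invented vectors (real corpora need recall@k over many queries, not a single top-1 check):

```python
def quantize(vec):
    # Symmetric int8 quantization: scale so the largest value maps to 127.
    scale = (max(abs(x) for x in vec) or 1.0) / 127.0
    return [round(x / scale) for x in vec], scale

def dequantize(q, scale):
    return [x * scale for x in q]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def top1(query, docs):
    # docs: list of (name, embedding) pairs; return best-scoring name.
    return max(docs, key=lambda d: dot(query, d[1]))[0]

docs = [("d1", [0.9, 0.1]), ("d2", [0.2, 0.95]), ("d3", [0.6, 0.6])]
query = [1.0, 0.1]

baseline = top1(query, docs)
deq_docs = [(name, dequantize(*quantize(v))) for name, v in docs]
roundtrip = top1(query, deq_docs)
print(baseline == roundtrip)  # agreement on this toy set; real corpora can drift
```

Rounding error grows with dimensionality and with how aggressively the bit width is reduced, which is why healthcare retrieval, where a missed document is costly, warrants measuring this overlap before shipping a quantized index.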

Reducing Hallucinations with Citation Constraints in Academic Research

This tutorial explores how to effectively implement citation constraints to minimize hallucinations in academic research models. Prerequisites include familiarity with natural language processing (NLP) and access to a research dataset.

Comparisons (2)

Glossary (7)