Vector search
Query the full knowledge graph. Results are ranked by semantic similarity across all six libraries.
Search results for “vector database”
Glossary
Vector database
A database or engine optimized for similarity search over embedding vectors, typically with metadata filters and hybrid lexical+vector queries for production RAG.
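As a sketch of what such an engine does, here is a toy hybrid query in plain Python. The corpus, the two-dimensional embeddings, and the term-overlap "lexical" score are all hypothetical stand-ins for a real index and BM25; the point is only the shape of the operation: filter on metadata, then blend a lexical score with a vector score.

```python
import math

# Hypothetical corpus: each item has a made-up embedding and metadata.
DOCS = [
    {"id": "a", "text": "vector database for RAG", "vec": [0.9, 0.1], "lang": "en"},
    {"id": "b", "text": "graph database basics",   "vec": [0.2, 0.8], "lang": "en"},
    {"id": "c", "text": "vector index tuning",     "vec": [0.8, 0.3], "lang": "de"},
]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def lexical_score(query, text):
    # Crude term-overlap stand-in for BM25/keyword scoring.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

def hybrid_search(query, query_vec, filters, alpha=0.5):
    # Metadata filter first, then blend vector and lexical scores.
    hits = [d for d in DOCS if all(d.get(k) == v for k, v in filters.items())]
    scored = [
        (alpha * cosine(query_vec, d["vec"])
         + (1 - alpha) * lexical_score(query, d["text"]), d["id"])
        for d in hits
    ]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

print(hybrid_search("vector database", [1.0, 0.0], {"lang": "en"}))  # → ['a', 'b']
```

Production engines do the same thing with inverted indexes and approximate nearest neighbor structures instead of a linear scan.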
Vector
A vector is a fixed-length array of numbers; embeddings represent meaning as vectors for search and clustering.
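A minimal illustration in Python, using made-up 4-dimensional embeddings (real models emit hundreds or thousands of dimensions): vectors that point in similar directions get a high cosine similarity, which is how "meaning as vectors" becomes searchable.

```python
import math

# Hypothetical 4-dimensional embeddings for three concepts.
cat    = [0.8, 0.6, 0.1, 0.0]
kitten = [0.7, 0.7, 0.2, 0.0]
car    = [0.1, 0.0, 0.9, 0.8]

def cosine(u, v):
    # Cosine similarity: dot product over the product of vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Semantically close concepts score higher than unrelated ones.
print(cosine(cat, kitten) > cosine(cat, car))  # → True
```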
Feature vector
A numerical representation of an object's characteristics used in machine learning.
Graph database
A database specifically designed to store and navigate relationships between data points using graph structures.
Variational autoencoder
A generative model that learns to represent data in a latent space using variational inference.
Visualization tools
Software applications used to create graphical representations of data.
Graph embedding
A technique for transforming graph-structured data into a continuous vector space while preserving its properties.
Support vector regression
An extension of support vector machines that predicts continuous values instead of categories.
Energy-based model
A probabilistic model that associates a scalar energy value with each configuration of variables to model distributions.
Graph attention network
A neural network architecture that employs attention mechanisms to process graph-structured data.
Tools
Milvus
An open-source vector database designed for high-performance similarity search over large-scale vector data. It scales from millions to billions of vectors with low-latency approximate nearest neighbor queries.
Qdrant
Vector database focused on high-performance similarity search with strong payload filtering, hybrid retrieval features, and both open-source and managed cloud options.
Weaviate
Open source vector database with hybrid search, metadata filtering, and flexible deployment options across self-hosted clusters and managed cloud environments.
LanceDB
LanceDB is an embedded, serverless-friendly vector database built on the Lance columnar format—optimized for multimodal and large-scale local or object-store–backed retrieval with a small operational footprint for data science and edge-style deployments.
Pinecone
Managed vector database for semantic search and RAG systems with metadata filtering, namespaces, and cloud-hosted reliability for production retrieval workloads.
Redis Vector
Redis Vector Search extends Redis with vector similarity queries alongside familiar key, JSON, and search capabilities—useful when you already run Redis for caching or features and want co-located embeddings with low-latency hybrid retrieval without adding a separate database cluster.
Supabase Vector
Postgres-based platform with pgvector support, managed database operations, and integrated auth/storage features for building retrieval-enabled full-stack applications.
Chroma
Chroma is an open-source embedding database for storing, managing, and searching embeddings, with an emphasis on developer ergonomics for lightweight and embedded retrieval workloads.
Vercel AI SDK
TypeScript SDK for building AI features in web apps with streaming responses, multi-provider model adapters, and ergonomic server/client integration patterns.
Vertex AI
Google Cloud Vertex AI is a managed platform for training, tuning, and serving models—including Gemini and partner models—with IAM integration, VPC-SC, and data residency options for enterprises that already standardize on Google Cloud for analytics and data lakes.
FAISS
FAISS (Facebook AI Similarity Search) is a library for efficient similarity search and clustering of dense vectors. It offers exact and approximate nearest neighbor indexes, with optional GPU acceleration, and scales to collections of millions to billions of vectors.
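The core operation FAISS accelerates — k-nearest-neighbor search over dense vectors — can be sketched as a brute-force NumPy scan; FAISS replaces this linear pass with optimized exact and approximate index structures. The data here is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
xb = rng.standard_normal((1000, 64)).astype("float32")  # "database" vectors
# Queries: slightly perturbed copies of the first three database rows.
xq = xb[:3] + 0.01 * rng.standard_normal((3, 64)).astype("float32")

def knn_l2(queries, database, k):
    # Squared L2 distance between every query and every database vector,
    # then take the indices of the k smallest per query.
    d2 = ((queries[:, None, :] - database[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]

I = knn_l2(xq, xb, k=4)
print(I[:, 0])  # nearest neighbor of each query is its source row: [0 1 2]
```

An index library like FAISS computes the same ranking without comparing every query to every stored vector.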
Hugging Face Transformers
Open-source library of pretrained transformer models, backed by the Hugging Face Hub for discovering, hosting, and deploying models, datasets, and inference endpoints across NLP, vision, audio, and multimodal tasks.
Tutorials
Pgvector Index Tuning (HNSW vs IVF)
Learn how to tune pgvector indexes with the HNSW and IVFFlat index types for the best recall/latency trade-off. Prerequisites include familiarity with PostgreSQL and vector databases.
Pgvector Index Tuning: HNSW vs IVF for E-commerce Search
This tutorial explores the tuning of Pgvector indexes using HNSW and IVF methods, specifically for optimizing search capabilities in e-commerce platforms. Prerequisites include basic knowledge of PostgreSQL and vector search concepts.
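As a sketch of what these tutorials cover, the pgvector DDL below creates both index types on a hypothetical `items` table (in practice you would pick one per column, not both) and sets the main query-time knobs. The parameter values are illustrative starting points, not recommendations:

```sql
-- Hypothetical table; pgvector stores embeddings in a `vector` column.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE items (
    id bigserial PRIMARY KEY,
    embedding vector(384)
);

-- HNSW: better recall/latency trade-off, slower builds, more memory.
CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops)
    WITH (m = 16, ef_construction = 64);

-- IVFFlat: faster builds and lower memory; tune lists to the row count.
CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);

-- Query-time knobs:
SET hnsw.ef_search = 40;   -- HNSW candidate list size per query
SET ivfflat.probes = 10;   -- IVFFlat lists probed per query
```

Raising `ef_search` or `probes` trades latency for recall, which is the central tuning decision both tutorials walk through.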
Agent Memory: Scratchpad vs Vector Store
This tutorial compares scratchpad memory and vector store memory in AI agents, focusing on their use cases and performance characteristics. Prerequisites include a basic understanding of AI memory architectures.
Understanding Agent Memory: Scratchpad vs Vector Store
Explore the differences between scratchpad and vector store memory in AI agents, and learn how to choose the right approach for your applications.
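A toy Python sketch of the two memory styles these tutorials compare: a scratchpad re-injects everything it holds into each turn's prompt, while a vector store recalls only the most similar entries. The character-frequency "embedding" is a hypothetical stand-in for a real embedding model.

```python
import math

class ScratchpadMemory:
    """Append-only working memory: the agent re-reads everything each turn."""
    def __init__(self):
        self.notes = []
    def write(self, note):
        self.notes.append(note)
    def read(self):
        return "\n".join(self.notes)  # the whole pad goes back into the prompt

class VectorStoreMemory:
    """Long-term memory: store (embedding, text) pairs, recall by similarity."""
    def __init__(self, embed):
        self.embed, self.items = embed, []
    def write(self, text):
        self.items.append((self.embed(text), text))
    def recall(self, query, k=1):
        qv = self.embed(query)
        def cos(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            return dot / (math.hypot(*u) * math.hypot(*v))
        ranked = sorted(self.items, key=lambda it: cos(it[0], qv), reverse=True)
        return [text for _, text in ranked[:k]]

# Hypothetical embedding: letter-frequency vector (real agents use a model).
def toy_embed(text):
    t = text.lower()
    return [t.count(c) + 1e-6 for c in "abcdefghijklmnopqrstuvwxyz"]

mem = VectorStoreMemory(toy_embed)
mem.write("user prefers metric units")
mem.write("user is allergic to peanuts")
print(mem.recall("what allergies does the user have?"))
# → ['user is allergic to peanuts']
```

The trade-off the tutorials discuss falls out of the sketch: the scratchpad is exact but grows with every turn, while the vector store stays bounded per query but can miss or mis-rank memories.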
Graph RAG for Entity-Heavy Domains
Explore the use of Graph Retrieval-Augmented Generation (RAG) for domains with complex entities, requiring knowledge of graph databases and RAG techniques.
Graph RAG for Entity-Heavy Domains: A Practical Guide
This tutorial delves into using Graph RAG (Retrieval-Augmented Generation) techniques for domains rich in entities, such as legal and healthcare sectors. Prerequisites include understanding of RAG and graph database concepts.
Comparisons
FAISS vs Milvus vs Chroma
FAISS is a library for embedding search (GPU-friendly ANN); Milvus is a purpose-built vector database server; Chroma is a lightweight embedded store. Pick library vs server vs embedded based on scale and team skills.
Weaviate vs Qdrant
Weaviate pairs vector search with GraphQL and hybrid retrieval modules; Qdrant emphasizes payload filters and a Rust ANN core with cloud or self-host options. Pick based on API style, hybrid search ergonomics, and ops model.
Chroma vs Milvus
Chroma optimizes developer ergonomics for embedded and lightweight RAG; Milvus targets large-scale distributed vector search. Choose based on corpus size, team ops skills, and whether you need a cluster-scale engine from day one.
Pinecone vs Weaviate vs Qdrant
Three-way vector stack comparison: Pinecone (managed SaaS), Weaviate (self-host/cloud + hybrid), Qdrant (Rust engine, strong filtering). Choose based on ops appetite, hybrid search needs, and cost curve at scale.
Pinecone vs Weaviate
Pinecone is fully managed SaaS with minimal ops; Weaviate offers self-hosted or cloud with hybrid search and GraphQL. Trade off control and hybrid search vs operational simplicity.
Vertex AI vs Amazon Bedrock
Vertex AI is Google Cloud’s managed AI platform for Gemini and partner models with deep GCP integration; Amazon Bedrock exposes Anthropic, Meta, Amazon, and partner models on AWS. The decision is usually cloud estate and data gravity: where your identity, networking, and data already live.
Pinecone vs Qdrant
Pinecone is fully managed SaaS with minimal vector ops; Qdrant offers a Rust performance-focused engine with strong payload filters and hybrid search, self-hosted or via Qdrant Cloud. Choose based on ops appetite, filter complexity, and cost at scale.