Posted On: Sep 28, 2023
HAQM Titan Embeddings is a text embeddings model that converts natural language text, including single words, phrases, or even large documents, into numerical representations that can power use cases such as search, personalization, and clustering based on semantic similarity. Optimized for text retrieval to enable Retrieval Augmented Generation (RAG) use cases, HAQM Titan Embeddings lets you first convert your text data into numerical representations, or vectors, and then use those vectors to accurately search for relevant passages in a vector database, allowing you to make the most of your proprietary data in combination with other foundation models (FMs).
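As a minimal sketch of the retrieval step described above, the snippet below ranks stored passages against a query by cosine similarity. The sample vectors are hypothetical, stand-ins for the embeddings the model would return and that you would normally keep in a vector database.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity measures how closely two embedding vectors point
    # in the same direction, independent of their magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical precomputed passage embeddings (in practice these would be
# the model's 1,536-dimensional vectors stored in a vector database).
passages = {
    "returns policy": np.array([0.12, 0.80, 0.33]),
    "shipping times": np.array([0.75, 0.10, 0.41]),
}
query_vector = np.array([0.10, 0.78, 0.35])  # embedding of the user's question

# Rank passages by semantic similarity to the query and pass the best match
# to a foundation model as grounding context for a RAG prompt.
ranked = sorted(passages.items(),
                key=lambda kv: cosine_similarity(query_vector, kv[1]),
                reverse=True)
print(ranked[0][0])  # most relevant passage
```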
Titan Embeddings supports more than 25 languages, including English, Chinese, and Spanish. You can input up to 8,192 tokens, making it well suited to working with single words, phrases, or entire documents, depending on your use case. The model returns output vectors of 1,536 dimensions, giving it a high degree of accuracy while also optimizing for low-latency, cost-effective results. Because Titan Embeddings is available through HAQM Bedrock's serverless experience, you can easily access it using a single API, without managing any infrastructure.
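Because the model is exposed through the Bedrock runtime API, generating an embedding is a single call. The sketch below uses the AWS SDK for Python (boto3); the model identifier and the response field name are illustrative assumptions and may differ by SDK version and Region.

```python
import json
import boto3

# Bedrock runtime client (assumes credentials and a Region where Bedrock is available).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed_text(text: str) -> list[float]:
    # Send up to 8,192 tokens of input text; the model returns a
    # 1,536-dimensional embedding vector.
    response = client.invoke_model(
        modelId="amazon.titan-embed-text-v1",  # illustrative model identifier
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": text}),
    )
    payload = json.loads(response["body"].read())
    return payload["embedding"]

vector = embed_text("How do I return an item?")
print(len(vector))  # expected: 1536
```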
HAQM Titan Embeddings is available in all AWS Regions where HAQM Bedrock is available, including the US East (N. Virginia) and US West (Oregon) Regions. To get started building generative AI apps with HAQM Titan, see the HAQM Titan web page.