Posted On: Nov 26, 2023
HAQM Redshift enhances Redshift ML to support large language models (LLMs). HAQM Redshift ML enables customers to create, train, and deploy machine learning models using familiar SQL commands. Now, you can leverage pretrained, publicly available LLMs in HAQM SageMaker JumpStart as part of Redshift ML, bringing the power of LLMs to analytics. For example, you can run inference on your product feedback data in HAQM Redshift and use LLMs to summarize feedback, extract entities, and perform sentiment analysis and product feedback classification.
To use this feature, you need to create an endpoint for an LLM in HAQM SageMaker JumpStart. You can use the predefined models available out of the box or train a custom model in HAQM SageMaker JumpStart with your own data, and then use the model endpoint to make remote inferences on your Redshift data using Redshift ML. To use LLM inference, your input and output data types must be SUPER. There are no additional costs for using LLMs with HAQM Redshift ML; refer to the HAQM SageMaker pricing page for more details.
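For illustration, a minimal sketch of this flow in Redshift SQL is shown below. The model, function, endpoint, table, and column names are placeholders, and the exact CREATE MODEL options for the LLM preview may vary, so check the HAQM Redshift documentation for the current syntax.

-- Register a SageMaker JumpStart LLM endpoint as a remote Redshift ML model.
CREATE MODEL product_feedback_llm
FUNCTION summarize_feedback(super)   -- LLM input must be SUPER
RETURNS super                        -- LLM output is returned as SUPER
SAGEMAKER 'jumpstart-llm-endpoint'   -- placeholder: endpoint created in SageMaker JumpStart
IAM_ROLE default
MODEL_TYPE LLM;

-- Invoke the model for remote inference; review_payload is assumed to be a
-- SUPER column holding the prompt/input for the endpoint.
SELECT feedback_id,
       summarize_feedback(review_payload) AS summary
FROM product_feedback;

After the model is created, the generated inference function can be used like any other scalar SQL function in your queries, and the SUPER output can be unnested or cast as needed for downstream analysis.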
The HAQM Redshift ML enhancement for LLM support is now available in preview in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Stockholm), and Asia Pacific (Tokyo) AWS Regions. To get started and learn more, visit the HAQM Redshift Database Developer Guide.