Clarifies which LLMs we support for Obs and Sec throughout AI docs #5629
benironside wants to merge 4 commits into main from
Conversation
Updated the AI Assistant section to clarify the integration with large language model (LLM) providers, emphasizing the use of connectors and providing a more specific reference to the LLM performance matrix for observability. This enhances user understanding of available models and their performance ratings.
Vale Linting Results: 1 suggestion found. 💡 Suggestions (1)
The Vale linter checks documentation changes against the Elastic Docs style guide. To use Vale locally or to report issues, refer to the Elastic style guide for Vale.
Nice additions! How can we also clarify the paths to the right performance pages for the Search solution and the various AI features it includes? For example, is the AI Assistant for Obs/Search still relevant for Search? If not, is there a dedicated performance page for AB models? (There is a page, yes, but what is it relevant for?) cc @leemthompo in case you know more there.
I'm thinking that, in addition to this, we may benefit from an orientation page if this is a regular concern for our users, something like "LLM performance in the Elasticsearch platform", where we could clearly say that different models perform differently depending on which feature/solution they're used for, and then link to the various places. Right now I feel like we have a (too) complex and incomplete web of interlinking, which I'm not sure is fixing the underlying issue.
Also, similar to what is being done for EIS, specifying the (minimum) stack version for the models could be a good idea too.
Lastly, support folks seem to have particular needs/interests here; IMO we should include them in reviewing what we produce around this issue, to make sure that what we deliver matches their expectations too.
Fixes /elastic/docs-content-internal/issue/731
The problem was that we weren't always making it clear that certain LLMs are supported for certain use cases, while others aren't, or don't perform well. In other words, we may have oversold the extent to which users can bring any model.
The purpose of this PR is to make it clear to users who are learning about our AI-powered features that we have tested specific models, that we vouch for them, and that we can vouch only for them.
It:
Generative AI disclosure
To help us ensure compliance with the Elastic open source and documentation guidelines, please answer the following:
Tool(s) and model(s) used: Claude 4.6, Cursor