Elastic has announced the release of Playground, a low-code interface that lets developers build Retrieval-Augmented Generation (RAG) applications on Elasticsearch. Aimed at streamlining development, the interface enables users to A/B test different large language models (LLMs) and tune the retrieval queries that ground responses in proprietary data indexed in Elasticsearch.
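For context, the retrieval half of such a workflow typically pairs lexical and vector search over an existing index. The sketch below is illustrative rather than generated by Playground: it sends a hybrid request to Elasticsearch's `_search` API using the `requests` library, and the index name `products`, the field names `body` and `body_vector`, the local endpoint, and the placeholder query vector are all assumptions made for the example.

```python
import requests

ES_URL = "http://localhost:9200"     # assumed local Elasticsearch cluster
INDEX = "products"                   # hypothetical index holding proprietary data

# In a real RAG pipeline the query vector would come from the same embedding
# model used at ingest time; a short placeholder vector stands in for it here.
query_text = "How do I reset the device to factory settings?"
query_vector = [0.12, -0.03, 0.91]   # placeholder embedding

hybrid_search = {
    "query": {                       # lexical (BM25) side of the hybrid query
        "match": {"body": query_text}
    },
    "knn": {                         # vector side, scored alongside BM25
        "field": "body_vector",
        "query_vector": query_vector,
        "k": 5,
        "num_candidates": 50,
    },
    "size": 5,
    "_source": ["title", "body"],
}

resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=hybrid_search, timeout=10)
resp.raise_for_status()
hits = resp.json()["hits"]["hits"]
context_passages = [hit["_source"]["body"] for hit in hits]
print(f"Retrieved {len(context_passages)} passages to ground the LLM prompt.")
```

Iterating on exactly this kind of query, and on which passages end up in the prompt, is the tuning work Playground is meant to shortcut.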
Matt Riley, Global Vice President and General Manager of Search at Elastic, emphasised the significance of this development. Riley stated, "While prototyping conversational search, the ability to experiment with and rapidly iterate on key components of a RAG workflow is essential to get accurate and hallucination-free responses from LLMs." He added that developers rely on the Elastic Search AI platform, which includes the Elasticsearch vector database, for comprehensive hybrid search capabilities and access to a growing list of LLM providers.
The new Playground interface is designed to consolidate these capabilities into a user-friendly format, thereby removing the complexity traditionally associated with building and refining generative AI experiences. According to Riley, this will "ultimately accelerate time to market for our customers."
One of Playground's noteworthy features is its ability to use transformer models deployed directly within Elasticsearch. It also builds on the Elasticsearch Open Inference API, which integrates models from external inference providers such as Cohere and Azure AI Studio, broadening the range of use cases the interface can serve.
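Concretely, the Open Inference API lets a developer register an external provider as a named inference endpoint that other Elasticsearch features can then reference. The sketch below registers a Cohere text-embedding endpoint over REST; the endpoint name, the `embed-english-v3.0` model ID, and the exact service settings are assumptions that can vary by provider and Elasticsearch version, so treat it as an illustration rather than a definitive recipe.

```python
import os
import requests

ES_URL = "http://localhost:9200"        # assumed local Elasticsearch cluster
ENDPOINT_ID = "cohere-embeddings"       # arbitrary name chosen for this endpoint

# Register a Cohere embedding model behind the Open Inference API.
# The model_id and the API key source are assumptions for the example.
config = {
    "service": "cohere",
    "service_settings": {
        "api_key": os.environ["COHERE_API_KEY"],
        "model_id": "embed-english-v3.0",
    },
}

resp = requests.put(
    f"{ES_URL}/_inference/text_embedding/{ENDPOINT_ID}", json=config, timeout=10
)
resp.raise_for_status()
print(resp.json())

# Once registered, the endpoint can be invoked to embed text, for example when
# indexing documents or building the query vector for a hybrid search.
embed = requests.post(
    f"{ES_URL}/_inference/text_embedding/{ENDPOINT_ID}",
    json={"input": ["How do I reset the device to factory settings?"]},
    timeout=10,
)
print(embed.json())
```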
Playground currently supports chat completion models from OpenAI and Azure OpenAI Service, making it versatile for numerous conversational AI applications. Users interested in exploring this new tool can read more about it on the Elastic blog.
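To illustrate the generation step that Playground lets developers swap between providers, the sketch below passes retrieved passages to an OpenAI chat completion model. It shows the generic RAG prompting pattern rather than code emitted by Playground, and the `gpt-4o` model name and the hard-coded `context_passages` list (which would normally come from a query like the one shown earlier) are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Passages retrieved from Elasticsearch (hard-coded here for the example).
context_passages = [
    "Hold the power button for ten seconds to restore factory settings.",
    "A factory reset erases all locally stored user data.",
]
question = "How do I reset the device to factory settings?"

# Ground the model in the retrieved context and instruct it to stay within it,
# which is the basic pattern for reducing hallucinated answers in RAG.
messages = [
    {
        "role": "system",
        "content": "Answer using only the provided context. If the context is "
                   "insufficient, say you do not know.\n\nContext:\n"
                   + "\n".join(f"- {p}" for p in context_passages),
    },
    {"role": "user", "content": question},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```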