StarTree Looks To Bring Real-Time Data And Analytics To AI | PTechHub


The additions of Model Context Protocol support and vector auto embedding to the StarTree real-time analytics platform will boost the performance of AI agents and power real-time retrieval-augmented generation.

Big data startup StarTree is boosting the capabilities of its real-time analytics software with new functionality that more rapidly provides artificial intelligence applications and agents with data and analytical insights.

Additions to the StarTree platform, including Model Context Protocol support and vector auto embedding, are designed to meet the demands of agentic AI and generative AI applications for low-latency queries and real-time data context awareness.

StarTree also expanded the range of deployment options for its product with a new Bring Your Own Kubernetes plan that gives organizations full control over the StarTree analytics infrastructure within their own Kubernetes environments.

[Related: Meeting The Data Needs Of The AI World: The 2025 CRN Big Data 100]

“There is a need to allow autonomous agents to get the information they need to complete a task,” said Chad Meley, StarTree senior vice president, marketing and developer relations, in an interview with CRN. “They need that context of what’s happening now in many cases like cashflow analysis or the state of an IT system.”

StarTree, founded in 2018 and headquartered in Mountain View, Calif., develops a real-time analytics platform that turns streams of raw data, such as website clickstreams, operational applications and sensor data, into actionable intelligence. The cloud-based system is built on Apache Pinot, the open-source, distributed OLAP database designed for real-time analytics that was originally developed by StarTree’s founders when they worked at LinkedIn.

StarTree’s platform has been adopted by businesses and organizations – including more than 30 percent of the Fortune 1000 – for high-performance data analytics and business intelligence tasks that require low latency for queries and high concurrency (the ability to handle tens of thousands of queries per second).

Like analytics applications, the growing wave of AI and generative AI applications and agents – and the large language models that power them – needs huge volumes of data to operate, and that’s driving demand for more sophisticated technologies up and down the “big data stack.” These workloads also require sub-second query speeds, real-time context awareness, and the ability to support large numbers of autonomous agents working in parallel, according to StarTree.

Making A Connection With AI

StarTree today said it is supporting Model Context Protocol (MCP), a standardized way for AI applications – more specifically the large language models (LLMs) that power them – to connect and interact with external data sources and tools. That support will be available on StarTree Cloud in June.

Peter Corless, StarTree product marketing director, said MCP is essentially an API that connects AI applications and LLMs to other systems. MCP was developed by AI startup Anthropic and announced in November 2024.

The MCP support will allow StarTree to act as a local data source within MCP-based architectures and be accessible to MCP data servers, according to the company. That, in turn, will make it possible for AI agents to dynamically analyze live, structured enterprise data. It also makes it easier to deploy natural language-to-SQL queries.
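To make the mechanism concrete: MCP messages are JSON-RPC 2.0, and a client invokes a server-side tool with a `tools/call` request. The sketch below is illustrative only – the tool name `run_pinot_query` and the SQL are assumptions for the example, not StarTree’s actual implementation – but it shows the shape of the request an agent might send after translating a natural-language question into Pinot SQL.

```python
import json

def build_mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request for MCP's 'tools/call' method,
    which invokes a named tool on an MCP server with its arguments."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example: an agent asking "how many clicks in the last
# five minutes?" could be translated into a Pinot SQL query and sent
# to an (assumed) query tool on a Pinot-backed MCP server.
request = build_mcp_tool_call(
    request_id=1,
    tool_name="run_pinot_query",  # assumed tool name, for illustration
    arguments={
        "sql": "SELECT COUNT(*) FROM clickstream WHERE ts > ago('PT5M')"
    },
)
wire_message = json.dumps(request)
```

The point of the standardized envelope is that any MCP-aware agent can discover and call such a tool without custom integration code for each data source.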

And retrieval-augmented generation (RAG) via MCP will allow AI systems to develop “definitive answers” and real-time insights in response to queries from data stored in StarTree.

StarTree is also adding vector embedding model hosting into its platform, a move the company said will simplify and accelerate vector embedding generation and data ingestion for real-time RAG use cases based on Amazon Bedrock. The addition makes it possible to instantly transform existing data into AI-ready assets via an automated, real-time pipeline, Corless said.
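The underlying pattern is embed-on-ingest: each incoming record is converted to a vector as it arrives, so similarity search always runs against fresh data. Here is a self-contained toy sketch of that pattern – the hash-based `toy_embed` is a stand-in for a real hosted embedding model (such as one reached via Amazon Bedrock), and all names are invented for illustration.

```python
import hashlib
import math

def toy_embed(text, dims=8):
    """Stand-in for a real embedding model: derives a deterministic
    vector from a hash of the text. A production pipeline would call
    a hosted embedding model instead."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dims]]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class RealTimeIndex:
    """Embeds records at ingest time so retrieval always sees the
    latest data -- the essence of a real-time RAG pipeline."""
    def __init__(self):
        self.rows = []  # (text, vector) pairs

    def ingest(self, text):
        self.rows.append((text, toy_embed(text)))

    def top_match(self, query):
        query_vec = toy_embed(query)
        return max(self.rows, key=lambda row: cosine(row[1], query_vec))[0]

index = RealTimeIndex()
index.ingest("order 1234 shipped")
index.ingest("sensor 7 temperature spike")
```

A query arriving a moment after ingestion can immediately retrieve either record, which is what distinguishes this from batch-oriented embedding jobs that refresh on a schedule.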

StarTree also announced the general availability of a new deployment option it calls Bring Your Own Kubernetes that the company says gives organizations full control over StarTree’s analytics infrastructure running within their own Kubernetes environments – either on-premises or in a private cloud.

The new BYOK option is in addition to StarTree’s existing deployment options, including fully managed SaaS and Bring Your Own Cloud.
