Databricks opens up agentic AI pipelines, fills in Postgres-powered Lakebase

Databricks kicked off its user conference this week with a slew of announcements that put (agentic) AI at the center of its strategy, and fleshed out its plans for Postgres following its recent $1 billion purchase of Neon.

On the agentic AI front, it announced a public preview of Lakebase, a “fully managed Postgres database” built for AI and based on the technology it acquired with Neon. Databricks said the approach brings “operational data to the Lakehouse”, arguing that traditional operational databases were not geared to the requirements of AI and that operational and analytical systems needed to converge.
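Because Lakebase presents itself as standard Postgres, existing Postgres drivers and tooling should work against it unchanged. Below is a minimal sketch using Python’s psycopg2 driver; the hostname, database, credentials and table are hypothetical placeholders rather than real Lakebase details.

```python
import psycopg2  # standard Postgres driver; Lakebase speaks the Postgres wire protocol

# Hypothetical connection details -- host, database, user and table are placeholders,
# not actual Lakebase endpoints or schemas.
conn = psycopg2.connect(
    host="my-lakebase-instance.example.cloud",
    port=5432,
    dbname="appdb",
    user="app_user",
    password="********",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    # A typical operational query an application or AI agent might run
    # alongside analytical workloads on the lakehouse.
    cur.execute(
        "SELECT order_id, status FROM orders WHERE status = %s LIMIT 10",
        ("pending",),
    )
    for order_id, status in cur.fetchall():
        print(order_id, status)

conn.close()
```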

It said the project added an operational database layer to its existing Data Intelligence Platform, and would allow developers to build both data applications and AI agents on a single multicloud platform.

Meanwhile, Databricks claimed a “new approach to building AI agents” in the shape of Agent Bricks. It said customers would just need to “provide a high-level description of the agent’s task, and connect your enterprise data.” The technology, released in beta, is optimized for common use cases, including information extraction and knowledge assistance.

It claimed most teams were currently relying on “gut checks” to assess agents. Its platform would instead offer “task specific evaluations and LLM judges to assess quality” of agents, it said.

The vendor has open-sourced its core declarative ETL framework as Apache Spark Declarative Pipelines. It said the move would help engineers build the complex data pipelines needed to get AI agents, and other workloads, into production.
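Declarative here means engineers describe the tables a pipeline should produce and let the framework work out execution order, dependencies and incremental updates. The sketch below is written against the dlt Python API used by Databricks’ existing Delta Live Tables product, as an assumption about what the open-sourced framework looks like; the actual Spark Declarative Pipelines module and decorator names may differ.

```python
# Sketch of a declarative pipeline in the style of Databricks' current
# Delta Live Tables (dlt) Python API -- an assumption; the open-sourced
# Spark Declarative Pipelines interface may use different module names.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw clickstream events ingested as-is")
def raw_events():
    # `spark` is the SparkSession supplied by the pipeline runtime;
    # the source path is a hypothetical example.
    return spark.read.format("json").load("/data/raw/events/")

@dlt.table(comment="Cleaned, deduplicated events ready for downstream agents")
def clean_events():
    return (
        dlt.read("raw_events")
        .where(F.col("user_id").isNotNull())
        .dropDuplicates(["event_id"])
    )
```

The point of the declarative style is that the framework infers that clean_events depends on raw_events and handles scheduling, retries and incremental refresh itself, rather than the engineer wiring those steps together by hand.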

At the same time, it previewed Lakeflow Designer, which it said would let data analysts build pipelines without code, using a drag-and-drop interface, while maintaining enterprise governance and scalability. The tooling promises to ease the burden on data engineering teams struggling to scale up enterprises’ AI projects. It will enter private preview shortly. The underlying Lakeflow unified data engineering capability is now generally available.

The firm also tipped its hat towards business users, as well as developers and data specialists.

Its Unity Catalog gets full support for Apache Iceberg tables, in preview for now, including Apache Iceberg REST APIs. The firm said this eliminates lock-in and unites “the Apache Iceberg and Delta Lake ecosystems with a single approach to governance”.
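In practice, Iceberg REST support means external engines can read Unity Catalog tables through a standard Iceberg catalog client. A minimal sketch with PyIceberg follows; the endpoint path, token and table name are hypothetical placeholders, not taken from Databricks documentation.

```python
from pyiceberg.catalog import load_catalog

# Hypothetical workspace URL, token and table identifier -- placeholders only.
catalog = load_catalog(
    "unity",
    **{
        "type": "rest",
        "uri": "https://my-workspace.example.cloud/iceberg-rest",
        "token": "dapi-example-token",
    },
)

# Load an Iceberg table governed by Unity Catalog and pull it into pandas.
table = catalog.load_table("sales.analytics.orders")
df = table.scan().to_pandas()
print(df.head())
```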

Business users get the ability to define business metrics and KPIs as “first class data assets” through Unity Catalog Metrics. They also get a “curated internal marketplace that surfaces the highest-value data, AI and AI/BI assets, organized by business domain.”

It also unwrapped Databricks One, a “new experience” that gives business users access to its Data Intelligence Platform’s AI and data capabilities without having to grapple with the Databricks technical workspace. Instead, they will get to interact with AI/BI dashboards using natural language via an AI/BI Genie assistant. Or, as Databricks puts it, “anyone can now talk to their data”. The service is in private preview, and will be in beta later this summer.