October 08, 2025
This presentation explores integrating LLMs with Apache Spark and Apache Iceberg as part of a foundational Big-Data-to-AI architecture. In this session we’ll combine Iceberg, Spark, and LLMs into a real-world AI architecture that works with your own data.
We'll build an AI application that lets users query massive datasets and extract insights using natural language. We'll start by understanding the structure and architecture of a large dataset. Then we'll look at options for querying the dataset with Apache Spark and Trino. Finally, we'll use an LLM to query the dataset in natural language. We'll also look at other uses of LLMs within the overall solution and compare how different LLMs perform.
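To make the natural-language querying step concrete, here is a minimal sketch of the idea: embed the table's schema in a prompt and ask an LLM to translate a user's question into Spark SQL. The `home_sales` schema, the example question, and the `ask_llm` stub are all illustrative assumptions, not the talk's actual code; in a real build, `ask_llm` would call a hosted or local model.

```python
# Hypothetical sketch of natural-language-to-SQL over a home-sales table.
# Schema, table name, and ask_llm() are illustrative assumptions.

HOME_SALES_SCHEMA = """
home_sales (
    sale_id BIGINT,
    city STRING,
    state STRING,
    price DECIMAL(12, 2),
    bedrooms INT,
    sale_date DATE
)
"""


def build_prompt(question: str) -> str:
    """Embed the table schema and the user's question in an LLM prompt."""
    return (
        "You are a SQL assistant. Given this Spark SQL table:\n"
        f"{HOME_SALES_SCHEMA}\n"
        "Translate the question into a single Spark SQL query. "
        "Return only the SQL.\n"
        f"Question: {question}"
    )


def ask_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call. Returns a canned answer
    for the demo question so the sketch runs without any model."""
    return (
        "SELECT city, AVG(price) AS avg_price "
        "FROM home_sales "
        "WHERE year(sale_date) = 2024 "
        "GROUP BY city ORDER BY avg_price DESC"
    )


question = "Which cities had the highest average sale price in 2024?"
sql = ask_llm(build_prompt(question))
print(sql)
# The generated SQL would then run via spark.sql(sql), or through Trino.
```

The key design point is that the LLM never sees the data itself, only the schema; the query engine (Spark or Trino) stays responsible for executing the generated SQL against the Iceberg tables.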
We’ll also discuss where event streaming (Kafka and Flink) fits into this architecture. The design is intentionally flexible, giving your dev team the freedom to choose different technologies for processing and querying. I’ll leave you with a concrete example that you can run on your laptop and explore the possibilities. Again, this is a real-world application: the dataset covers home sales for the last 15 years.
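As a hedged illustration of the laptop-scale setup, Spark's SQL shell can be pointed at a local Iceberg catalog with a few configuration flags. The runtime version, catalog name (`local`), and warehouse path below are assumptions for the sketch, not the talk's exact setup.

```shell
# Launch spark-sql with Iceberg support (versions and paths are illustrative).
spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2 \
  --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.local.type=hadoop \
  --conf spark.sql.catalog.local.warehouse="$PWD/iceberg-warehouse"
```

Using a Hadoop-type catalog backed by a local directory keeps the example self-contained; swapping in a REST or Hive catalog changes only these configuration lines, not the queries.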
We will use these technologies: