This project is a JavaFX chat application demonstrating Retrieval-Augmented Generation (RAG). It uses Langchain4J to connect to the Google Gemini model and answer questions based on a private knowledge base of PDF documents.
- Frontend: JavaFX (MaterialFX style).
- LLM: Google Gemini (`gemini` models).
- RAG Components:
  - Embedding Model: Ollama (`embeddinggemma`) on `http://localhost:11434`.
  - Vector Store: ChromaDB on `http://localhost:8000`.
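As a sketch, the components above can be wired together with Langchain4J builders. This is an illustrative configuration, not this project's exact code: the class names come from the `langchain4j-google-ai-gemini`, `langchain4j-ollama`, and `langchain4j-chroma` modules, and the Gemini model name and collection name are assumptions; verify them against the version declared in `pom.xml`.

```java
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;
import dev.langchain4j.model.ollama.OllamaEmbeddingModel;
import dev.langchain4j.store.embedding.chroma.ChromaEmbeddingStore;

public class RagWiring {
    public static void main(String[] args) {
        // Chat model: Google Gemini, keyed via the GEMINI_API_KEY variable
        var chatModel = GoogleAiGeminiChatModel.builder()
                .apiKey(System.getenv("GEMINI_API_KEY"))
                .modelName("gemini-1.5-flash") // illustrative model name
                .build();

        // Embedding model: local Ollama serving embeddinggemma
        var embeddingModel = OllamaEmbeddingModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("embeddinggemma")
                .build();

        // Vector store: local ChromaDB instance
        var embeddingStore = ChromaEmbeddingStore.builder()
                .baseUrl("http://localhost:8000")
                .collectionName("knowledge-base") // illustrative name
                .build();
    }
}
```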
- JDK 21 with JavaFX.
- Maven.
- Google Gemini API Key: must be set in a `.env` file as `GEMINI_API_KEY`.
- ChromaDB and Ollama running locally on their respective default ports.
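One way to bring up the local services, assuming Ollama is installed and Docker is available (the `chromadb/chroma` image and the commands below are a setup sketch, not project-specific instructions):

```shell
# Pull the embedding model and start Ollama (listens on :11434 by default)
ollama pull embeddinggemma
ollama serve

# Start ChromaDB on :8000 via its official Docker image
docker run -p 8000:8000 chromadb/chroma

# Create the .env file in the project root
echo 'GEMINI_API_KEY=your-api-key-here' > .env
```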
- Prepare Knowledge Base: place your PDF files in the directory `src/main/resources/knowledge-base/`.
- Execute:

  ```shell
  # Build the project
  $ mvn clean install

  # Run the application
  $ mvn javafx:run
  ```
The RAG pipeline will automatically load and ingest your PDFs into the vector store upon startup.
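The startup ingestion step could look roughly like the following Langchain4J sketch. The splitter sizes, method, and helper names are illustrative assumptions, not necessarily what this project uses; `ApachePdfBoxDocumentParser` comes from the `langchain4j-document-parser-apache-pdfbox` module.

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.loader.FileSystemDocumentLoader;
import dev.langchain4j.data.document.parser.apache.pdfbox.ApachePdfBoxDocumentParser;
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import java.util.List;

public class KnowledgeBaseIngestor {

    // Illustrative ingestion sketch: load PDFs, split, embed, store.
    static void ingest(EmbeddingModel embeddingModel,
                       EmbeddingStore<TextSegment> embeddingStore) {
        // Load every PDF from the knowledge-base directory
        List<Document> documents = FileSystemDocumentLoader.loadDocuments(
                "src/main/resources/knowledge-base",
                new ApachePdfBoxDocumentParser());

        // Split into overlapping chunks, embed them, and write to the store
        EmbeddingStoreIngestor.builder()
                .documentSplitter(DocumentSplitters.recursive(500, 50)) // sizes: illustrative
                .embeddingModel(embeddingModel)
                .embeddingStore(embeddingStore)
                .build()
                .ingest(documents);
    }
}
```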