
feat(lightspeed): creating a new notebook and upload the documents #2704

Open
its-mitesh-kumar wants to merge 6 commits into redhat-developer:main from its-mitesh-kumar:feat/lightspeed-create-notebooks

Conversation

its-mitesh-kumar (Member) commented Apr 6, 2026

Description

Adds notebook creation and document upload functionality to the Lightspeed plugin. Users can create new notebook sessions, upload documents with file type/size validation and drag-and-drop support, and track upload progress via real-time status polling. The notebook detail view includes a collapsible document sidebar displaying uploaded and in-progress documents. An overwrite confirmation modal handles duplicate file uploads.
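Conceptually, the client-side checks described above can be sketched as a small pure helper that mirrors the constants this PR adds in src/const.ts (10 files, 25 MB, the allowed extension list); the `validateUpload` helper itself and its return shape are illustrative, not the plugin's actual API:

```typescript
// Sketch only: mirrors NOTEBOOK_MAX_FILES, NOTEBOOK_MAX_FILE_SIZE_BYTES and
// the allowed extensions from src/const.ts; validateUpload is hypothetical.
const MAX_FILES = 10;
const MAX_FILE_SIZE_BYTES = 25 * 1024 * 1024; // 25 MB
const ALLOWED_EXTENSIONS = [
  '.txt', '.log', '.md', '.pdf', '.json', '.yaml', '.yml', '.docx', '.odt',
];

type ValidationResult = { ok: true } | { ok: false; reason: string };

function validateUpload(
  fileName: string,
  fileSize: number,
  existingDocumentCount: number,
): ValidationResult {
  const dot = fileName.lastIndexOf('.');
  const ext = dot >= 0 ? fileName.slice(dot).toLowerCase() : '';
  if (!ALLOWED_EXTENSIONS.includes(ext)) {
    return { ok: false, reason: `Unsupported file type: ${ext || '(none)'}` };
  }
  if (fileSize > MAX_FILE_SIZE_BYTES) {
    return { ok: false, reason: 'File exceeds the 25 MB limit' };
  }
  if (existingDocumentCount >= MAX_FILES) {
    return { ok: false, reason: 'Notebook already has the maximum number of documents' };
  }
  return { ok: true };
}
```

The actual modal additionally handles drag-and-drop and duplicate-name overwrite confirmation, which this sketch omits.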

Fixed

UI after changes

Screen.Recording.2026-04-08.at.7.20.14.PM.mov
Screen.Recording.2026-04-08.at.7.23.28.PM.mov

Steps to Test


1. Set up Lightspeed Stack

In the lightspeed-stack repo, open lightspeed-stack.yaml:

  • Comment out the API key
  • Change the URL to http://localhost:8321
  • Run the following command in the terminal:

make run

2. Clone the Llama Stack Distribution

git clone https://github.com/its-mitesh-kumar/llama-stack-distribution
cd llama-stack-distribution

3. Update Distribution Config

Replace the contents of distribution/config.yaml with the project-specific configuration.

#
#
# Copyright Red Hat
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
version: 3
distro_name: developer-lightspeed-lls-0.5.x
apis:
  # - agents
  - inference
  - safety
  - tool_runtime
  - vector_io
  - files
container_image:
external_providers_dir:
providers:
  # agents:
  #   - config:
  #       persistence:
  #         agent_state:
  #           namespace: agents
  #           backend: kv_default
  #         responses:
  #           table_name: responses
  #           backend: sql_default
  #     provider_id: meta-reference
  #     provider_type: inline::meta-reference
  inference:
    - provider_id: ${env.ENABLE_VLLM:+vllm}
      provider_type: remote::vllm
      config:
        base_url: ${env.VLLM_URL:=}
        api_token: ${env.VLLM_API_KEY:=}
        max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
        network:
          tls:
            verify: ${env.VLLM_TLS_VERIFY:=true}
    - provider_id: ${env.ENABLE_OLLAMA:+ollama}
      provider_type: remote::ollama
      config:
        base_url: ${env.OLLAMA_URL:=http://localhost:11434/v1}
    - provider_id: ${env.ENABLE_OPENAI:+openai}
      provider_type: remote::openai
      config:
        api_key: ${env.OPENAI_API_KEY:=}
    - provider_id: ${env.ENABLE_VERTEX_AI:+vertexai}
      provider_type: remote::vertexai
      config:
        project: ${env.VERTEX_AI_PROJECT:=}
        location: ${env.VERTEX_AI_LOCATION:=global}
    - provider_id: sentence-transformers
      provider_type: inline::sentence-transformers
      config: {}
    - provider_id: ${env.ENABLE_SAFETY:+safety-guard}
      provider_type: remote::vllm
      config:
        base_url: ${env.SAFETY_URL:=http://ollama:11434/v1}
        api_token: ${env.SAFETY_API_KEY:=}
  tool_runtime:
    - provider_id: model-context-protocol
      provider_type: remote::model-context-protocol
      config: {}
  vector_io:
    - provider_id: rhdh-docs
      provider_type: inline::faiss
      config:
        persistence:
          namespace: vector_io::faiss
          backend: kv_rag
    - provider_id: notebooks
      provider_type: inline::faiss
      config:
        persistence:
          namespace: vector_io::faiss
          backend: kv_rag
  files:
    - provider_id: localfs
      provider_type: inline::localfs
      config:
        storage_dir: /tmp/llama-stack-files
        metadata_store:
          table_name: files_metadata
          backend: sql_default
  safety:
    - provider_id: ${env.ENABLE_SAFETY:+llama-guard}
      provider_type: inline::llama-guard
      config:
        excluded_categories: []
storage:
  backends:
    kv_default:
      type: kv_sqlite
      db_path: /tmp/kvstore.db
    sql_default:
      type: sql_sqlite
      db_path: /tmp/sql_store.db
    kv_rag:
      type: kv_sqlite
      db_path: /tmp/rag-content/vector_db/rhdh_product_docs/1.9/faiss_store.db
  stores:
    metadata:
      namespace: registry
      backend: kv_default
    inference:
      table_name: inference_store
      backend: sql_default
      max_write_queue_size: 10000
      num_writers: 4
    conversations:
      table_name: openai_conversations
      backend: sql_default
registered_resources:
  models:
    - model_id: sentence-transformers/all-mpnet-base-v2
      metadata:
        embedding_dimension: 768
      model_type: embedding
      provider_id: sentence-transformers
      provider_model_id: sentence-transformers/all-mpnet-base-v2
    - model_id: ${env.SAFETY_MODEL:=llama-guard3:8b}
      provider_id: ${env.ENABLE_SAFETY:+safety-guard}
      provider_model_id: ${env.SAFETY_MODEL:=llama-guard3:8b}
      model_type: llm
      metadata: {}
  tool_groups: []
  vector_stores:
    - vector_store_id: vs_3d4808b2-5f00-4de6-baa3-c86752cf827c # see readme for this value
      embedding_model: sentence-transformers/sentence-transformers/all-mpnet-base-v2
      embedding_dimension: 768
      provider_id: rhdh-docs
  shields:
    - shield_id: llama-guard-shield
      provider_id: ${env.ENABLE_SAFETY:+llama-guard}
      provider_shield_id: safety-guard/${env.SAFETY_MODEL:=llama-guard3:8b}
vector_stores:
  annotation_prompt_params:
    enable_annotations: true
    annotation_instruction_template: >
      When appropriate, cite sources at the end of sentences using doc_url and doc_title format. 
      Citing sources is not always required because citations are handled externally. 
      Never include any citation that is in the form '<| file-id |>'.
  default_provider_id: rhdh-docs
  default_embedding_model:
    provider_id: sentence-transformers
    model_id: sentence-transformers/all-mpnet-base-v2
server:
  auth:
  host: 0.0.0.0
  port: 8321
  quota:
  tls_cafile:
  tls_certfile:
  tls_keyfile:

4. Add Environment File

Create a .env file at the root of the cloned repository with the required environment variables.

# Note: You only need to set the variables you normally would with '-e' flags.
# You do not need to set them all if they will go unused.

# Service Images
LIGHTSPEED_CORE_IMAGE=quay.io/lightspeed-core/lightspeed-stack:dev-20260316-b2f54cf
RAG_CONTENT_IMAGE=quay.io/redhat-ai-dev/rag-content:release-1.9-lls-0.4.3-d0444cd9b57222ec9bdbaa36354337480a2ecf97
OLLAMA_IMAGE=docker.io/ollama/ollama:0.18.2

# Enable Inference Providers
## Set any providers you want enabled to 'true'
## E.g. ENABLE_VLLM=true
## Leave all disabled providers EMPTY
## E.g. ENABLE_OPENAI=
ENABLE_VLLM=true
ENABLE_VERTEX_AI=
ENABLE_OPENAI=
ENABLE_OLLAMA=
ENABLE_SAFETY=

# vLLM Inference Settings
VLLM_URL=xxxxxx
VLLM_API_KEY=xxxxxxx
# vLLM Optional Variables
VLLM_MAX_TOKENS=
VLLM_TLS_VERIFY=

# OpenAI Inference Settings
OPENAI_API_KEY=

# Vertex AI Inference Settings
VERTEX_AI_PROJECT=
VERTEX_AI_LOCATION=
GOOGLE_APPLICATION_CREDENTIALS=

# Ollama Inference Settings
OLLAMA_URL=

# Question Validation Safety Shield Settings
## Ensure VALIDATION_PROVIDER is one of your enabled Inference Providers
## Only required for Llama Stack configs that use the Lightspeed Core provider
## E.g. VALIDATION_PROVIDER=vllm if ENABLE_VLLM=true
VALIDATION_PROVIDER=
VALIDATION_MODEL_NAME=redhataillama-31-8b-instruct

# Llama Guard Settings
SAFETY_MODEL=llama-guard3:8b
SAFETY_URL=http://ollama:11434/v1
## Only required for non-local environments with an API key
SAFETY_API_KEY=

# Other
LLAMA_STACK_LOGGING=

5. Run Llama Distribution

podman run -it --rm --replace --name llama-stack \
  --platform linux/amd64 \
  -p 8321:8321 \
  -v $(pwd)/distribution/config.yaml:/opt/app-root/config.yaml:Z \
  -v $(pwd)/rag-content:/rag-content:Z \
  --env-file .env \
  quay.io/opendatahub/llama-stack:6187785c769f9924276b0fcdafafdee42ec53d6e

6. Start the Plugin

In the rhdh-plugins repo, ensure the following is present in workspaces/lightspeed/app-config.yaml:

lightspeed:
  aiNotebooks:
    enabled: true

Then start the application:

cd workspaces/lightspeed
yarn start:legacy

✔️ Checklist

  • A changeset describing the change and affected packages. (more info)
  • Added or Updated documentation
  • Tests for new functionality and regression tests for bug fixes
  • Screenshots attached (for UI changes)

Signed-off-by: its-mitesh-kumar <itsmiteshkumar98@gmail.com>

rhdh-gh-app bot commented Apr 6, 2026

Important

This PR includes changes that affect public-facing API. Please ensure you are adding/updating documentation for new features or behavior.

Changed Packages

Package Name: @red-hat-developer-hub/backstage-plugin-lightspeed
Package Path: workspaces/lightspeed/plugins/lightspeed
Changeset Bump: minor
Current Version: v1.4.0


Review Summary by Qodo

Implement notebook creation and document upload functionality

✨ Enhancement


Walkthroughs

Description
• Add notebook creation and document upload flow
• Implement document management API methods and hooks
• Add file validation and upload utilities with constraints
• Extend UI with notebook view, sidebar, and upload modal
• Add multilingual translations for notebook features
Diagram
flowchart LR
  A["User Creates Notebook"] -->|createSession| B["NotebooksApiClient"]
  B -->|API Call| C["Backend Session"]
  C -->|SessionResponse| B
  B -->|NotebookSession| D["NotebookView Component"]
  E["User Uploads Document"] -->|uploadDocument| B
  B -->|FormData| F["Backend Upload"]
  F -->|UploadDocumentResponse| B
  B -->|Poll Status| G["Document Status"]
  G -->|DocumentStatus| H["UI Updates"]
  D -->|Display| H


File Changes

1. workspaces/lightspeed/plugins/lightspeed/src/api/NotebooksApiClient.ts ✨ Enhancement +111/-19: Add document management and session creation methods

2. workspaces/lightspeed/plugins/lightspeed/src/api/notebooksApi.ts ✨ Enhancement +23/-1: Extend API interface with document operations

3. workspaces/lightspeed/plugins/lightspeed/src/const.ts ⚙️ Configuration changes +28/-0: Add notebook constraints and file type mappings

4. workspaces/lightspeed/plugins/lightspeed/src/hooks/notebooks/useCreateNotebook.ts ✨ Enhancement +49/-0: Create hook for notebook session creation

5. workspaces/lightspeed/plugins/lightspeed/src/hooks/notebooks/useDocumentStatusPolling.ts ✨ Enhancement +79/-0: Implement polling hook for document processing status

6. workspaces/lightspeed/plugins/lightspeed/src/hooks/notebooks/useUploadDocument.ts ✨ Enhancement +59/-0: Create hook for document upload with file type detection

7. workspaces/lightspeed/plugins/lightspeed/src/utils/notebook-upload-utils.ts ✨ Enhancement +94/-0: Add file validation utilities for uploads

8. workspaces/lightspeed/plugins/lightspeed/src/types.ts ✨ Enhancement +72/-0: Define types for documents and upload responses

9. workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/NotebookView.tsx ✨ Enhancement +373/-0: Create main notebook view with document management

10. workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/AddDocumentModal.tsx ✨ Enhancement +182/-0: Build file upload modal with validation

11. workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/DocumentSidebar.tsx ✨ Enhancement +188/-0: Create sidebar displaying documents and upload status

12. workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/UploadResourceScreen.tsx ✨ Enhancement +79/-0: Add empty state screen for document uploads

13. workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/SidebarCollapseIcon.tsx ✨ Enhancement +61/-0: Define custom SVG icons for sidebar controls

14. workspaces/lightspeed/plugins/lightspeed/src/components/LightSpeedChat.tsx ✨ Enhancement +41/-3: Integrate notebook view and creation functionality

15. workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/NotebooksTab.tsx ✨ Enhancement +8/-1: Add create notebook button to notebooks tab

16. workspaces/lightspeed/plugins/lightspeed/src/translations/ref.ts 📝 Documentation +28/-0: Add English translations for notebook features

17. workspaces/lightspeed/plugins/lightspeed/src/translations/de.ts 📝 Documentation +30/-0: Add German translations for notebook features

18. workspaces/lightspeed/plugins/lightspeed/src/translations/es.ts 📝 Documentation +30/-0: Add Spanish translations for notebook features

19. workspaces/lightspeed/plugins/lightspeed/src/translations/fr.ts 📝 Documentation +30/-0: Add French translations for notebook features

20. workspaces/lightspeed/plugins/lightspeed/src/translations/it.ts 📝 Documentation +30/-0: Add Italian translations for notebook features

21. workspaces/lightspeed/plugins/lightspeed/src/translations/ja.ts 📝 Documentation +30/-0: Add Japanese translations for notebook features


rhdh-qodo-merge bot commented Apr 6, 2026

Code Review by Qodo

🐞 Bugs (1)   📘 Rule violations (0)   📎 Requirement gaps (0)   🎨 UX Issues (0)
🐞 ≡ Correctness (1)



Action required

1. Upload limits/type mismatch 🐞
Description
Notebook upload validation allows .docx/.odt and 25MB files, but the backend only supports
md/txt/pdf/json/yaml/yml/log/url and enforces a 20MB multipart limit; this will cause confusing
client-side acceptance followed by server-side failures or ingestion of binary files as UTF-8 text.
This is user-facing breakage for common formats and makes the displayed error/help text incorrect.
Code

workspaces/lightspeed/plugins/lightspeed/src/const.ts[R36-62]

+export const NOTEBOOK_MAX_FILES = 10;
+export const NOTEBOOK_MAX_FILE_SIZE_BYTES = 25 * 1024 * 1024; // 25 MB
+export const UNTITLED_NOTEBOOK_NAME = 'Untitled Notebook';
+
+export const NOTEBOOK_ALLOWED_EXTENSIONS: Record<string, string[]> = {
+  'text/plain': ['.txt', '.log'],
+  'text/markdown': ['.md'],
+  'application/pdf': ['.pdf'],
+  'application/json': ['.json'],
+  'application/x-yaml': ['.yaml', '.yml'],
+  'application/vnd.openxmlformats-officedocument.wordprocessingml.document': [
+    '.docx',
+  ],
+  'application/vnd.oasis.opendocument.text': ['.odt'],
+};
+
+export const NOTEBOOK_EXTENSION_TO_FILE_TYPE: Record<string, string> = {
+  '.txt': 'txt',
+  '.md': 'md',
+  '.pdf': 'pdf',
+  '.json': 'json',
+  '.yaml': 'yaml',
+  '.yml': 'yaml',
+  '.log': 'log',
+  '.docx': 'txt',
+  '.odt': 'txt',
+};
Evidence
Frontend explicitly whitelists .docx/.odt and sets a 25MB max size, while backend upload middleware
limits to 20MB and backend tests assert docx is unsupported by the notebook file-type validator (and
SupportedFileType enum does not include docx/odt).

workspaces/lightspeed/plugins/lightspeed/src/const.ts[36-62]
workspaces/lightspeed/plugins/lightspeed-backend/src/service/constant.ts[18-40]
workspaces/lightspeed/plugins/lightspeed-backend/src/service/notebooks/documents/documentHelpers.test.ts[110-134]
workspaces/lightspeed/plugins/lightspeed-backend/src/service/constant.ts[36-48]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The frontend notebook uploader allows file types and sizes that the backend will not accept/handle correctly (docx/odt allowed + 25MB limit), leading to failed uploads or garbled document content.

### Issue Context
Backend upload limit is 20MB and backend-supported notebook file types exclude docx/odt.

### Fix Focus Areas
- workspaces/lightspeed/plugins/lightspeed/src/const.ts[36-62]
- workspaces/lightspeed/plugins/lightspeed/src/utils/notebook-upload-utils.ts[17-94]
- workspaces/lightspeed/plugins/lightspeed/src/translations/ref.ts[71-82] (and other locales)

### What to change
- Remove `.docx`/`.odt` from `NOTEBOOK_ALLOWED_EXTENSIONS` and from `NOTEBOOK_EXTENSION_TO_FILE_TYPE` (or, if docx/odt support is desired, add real backend support + extend backend `SupportedFileType` and parsing).
- Change `NOTEBOOK_MAX_FILE_SIZE_BYTES` to match backend (20 * 1024 * 1024), or fetch the limit from config/server and use that consistently.
- Update upload modal info text and `notebook.upload.error.fileTooLarge` translations to match the real enforced limit.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Re-upload stuck on same id 🐞
Description
NotebookView permanently tracks processed documentIds in a ref and skips handling any later
completion results for the same documentId, which breaks re-uploading a file with the same title and
can leave uploads stuck in pending state.
Because backend document_id is deterministically derived from the title, this collision is easy to
trigger (e.g., uploading the same file name twice).
Code

workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/NotebookView.tsx[R142-224]

+  const [uploadingFileNames, setUploadingFileNames] = useState<string[]>([]);
+  const [pendingUploads, setPendingUploads] = useState<PendingUpload[]>([]);
+  const [toastAlerts, setToastAlerts] = useState<Partial<AlertProps>[]>([]);
+  const processedIds = useRef<Set<string>>(new Set());
+
+  const handleOpenUploadModal = () => setIsUploadModalOpen(true);
+  const handleCloseUploadModal = () => setIsUploadModalOpen(false);
+
+  const handleFilesUploading = (files: File[]) => {
+    setUploadingFileNames(prev => [...prev, ...files.map(f => f.name)]);
+  };
+
+  const handleUploadStarted = (info: {
+    fileName: string;
+    documentId: string;
+  }) => {
+    setPendingUploads(prev => [
+      ...prev,
+      { fileName: info.fileName, documentId: info.documentId },
+    ]);
+  };
+
+  const handleUploadFailed = (fileName: string) => {
+    setUploadingFileNames(prev => prev.filter(n => n !== fileName));
+    setToastAlerts(prev => [
+      {
+        key: Date.now() + fileName,
+        title: (t as Function)('notebook.upload.failed', {
+          fileName,
+        }) as string,
+        variant: 'danger',
+      },
+      ...prev,
+    ]);
+  };
+
+  const pollingResults = useDocumentStatusPolling(sessionId, pendingUploads);
+
+  useEffect(() => {
+    const completedOrFailed = pollingResults.filter(
+      r =>
+        (r.status === 'completed' ||
+          r.status === 'failed' ||
+          r.status === 'cancelled') &&
+        !processedIds.current.has(r.documentId),
+    );
+
+    if (completedOrFailed.length === 0) return;
+
+    const idsToRemove = new Set<string>();
+    const namesToRemove = new Set<string>();
+    const newAlerts: Partial<AlertProps>[] = [];
+
+    for (const result of completedOrFailed) {
+      processedIds.current.add(result.documentId);
+      idsToRemove.add(result.documentId);
+      namesToRemove.add(result.fileName);
+
+      if (result.status === 'completed') {
+        newAlerts.push({
+          key: Date.now() + result.documentId,
+          title: (t as Function)('notebook.upload.success', {
+            fileName: result.fileName,
+          }) as string,
+          variant: 'success',
+        });
+      } else {
+        newAlerts.push({
+          key: Date.now() + result.documentId,
+          title: (t as Function)('notebook.upload.failed', {
+            fileName: result.fileName,
+          }) as string,
+          variant: 'danger',
+        });
+      }
+    }
+
+    setPendingUploads(prev => prev.filter(u => !idsToRemove.has(u.documentId)));
+    setUploadingFileNames(prev =>
+      prev.filter(name => !namesToRemove.has(name)),
+    );
+    setToastAlerts(prev => [...newAlerts, ...prev]);
+  }, [pollingResults, t]);
Evidence
NotebookView filters out terminal polling results if documentId is already in processedIds and never
removes ids from that set. Backend generates document_id via sanitizeTitle(newTitle || title), so
repeating the same title yields the same documentId and will be ignored by the effect forever.

workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/NotebookView.tsx[142-224]
workspaces/lightspeed/plugins/lightspeed-backend/src/service/notebooks/notebooksRouters.ts[284-321]
workspaces/lightspeed/plugins/lightspeed-backend/src/service/notebooks/documents/documentHelpers.ts[238-251]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`processedIds` is a lifetime Set that never clears, so any later upload that reuses a previously seen `documentId` will never be removed from `pendingUploads` and will never clear `uploadingFileNames`.

### Issue Context
Backend returns deterministic `document_id` based on `sanitizeTitle(newTitle || title)`, so collisions happen naturally when users re-upload a file with the same name.

### Fix Focus Areas
- workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/NotebookView.tsx[142-224]

### What to change
Implement one of:
- Reset `processedIds.current` when `sessionId` changes.
- When `handleUploadStarted` adds a pending upload, call `processedIds.current.delete(documentId)` so a re-upload can be processed.
- Preferably, stop using a global Set keyed only by `documentId`: add a unique `uploadInstanceId` (timestamp/uuid) to `PendingUpload`, and track processing by that instance id. Keep `documentId` only for polling.
- Ensure any dedupe logic cannot prevent cleanup of `pendingUploads` for a new upload attempt.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
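The per-attempt tracking suggested in the prompt above can be sketched as pure bookkeeping: key in-flight uploads by a unique `uploadInstanceId` so a re-upload that reuses the deterministic `documentId` is never silently skipped. Types and the `takeFinished` helper are illustrative, not the component's actual code:

```typescript
// Sketch of the suggested fix: track processing per upload attempt, keeping
// documentId only for polling. All names here are hypothetical.
type PendingUpload = {
  uploadInstanceId: string; // unique per attempt (e.g. crypto.randomUUID())
  documentId: string;       // deterministic, derived from the title
  fileName: string;
};

type PollResult = { documentId: string; status: string };

// Returns pending uploads whose polling result reached a terminal state and
// that have not been handled yet; dedupe is by attempt, not by documentId.
function takeFinished(
  pending: PendingUpload[],
  results: PollResult[],
  processed: Set<string>, // processed uploadInstanceIds
): PendingUpload[] {
  const terminal = new Set(
    results
      .filter(r => ['completed', 'failed', 'cancelled'].includes(r.status))
      .map(r => r.documentId),
  );
  const finished = pending.filter(
    u => terminal.has(u.documentId) && !processed.has(u.uploadInstanceId),
  );
  for (const u of finished) processed.add(u.uploadInstanceId);
  return finished;
}
```

Because the `processed` set only ever contains instance ids, re-uploading a file with the same name (and thus the same `documentId`) is still cleaned up.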



Remediation recommended

3. NotebookView always has no docs 🐞
Description
LightspeedChat renders NotebookView with documents={[]}, and NotebookView uses documents.length
to render the sidebar list and to compute existingDocumentCount for upload validation; as wired,
the UI cannot reflect actual session documents and will revert to the empty-state after uploads
finish.
This breaks document count limits and prevents the sidebar from ever showing completed documents
unless documents are fetched/passed in.
Code

workspaces/lightspeed/plugins/lightspeed/src/components/LightSpeedChat.tsx[R1256-1261]

+            <NotebookView
+              sessionId={activeNotebook.session_id}
+              notebookName={activeNotebook.name}
+              documents={[]}
+              onClose={handleCloseNotebook}
+            />
Evidence
The parent explicitly passes an empty documents array, while NotebookView relies on the documents
prop for hasDocuments, rendering the DocumentSidebar, and for the existingDocumentCount passed
into AddDocumentModal.

workspaces/lightspeed/plugins/lightspeed/src/components/LightSpeedChat.tsx[1252-1262]
workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/NotebookView.tsx[228-370]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`NotebookView` is currently invoked with `documents={[]}`, but it uses `documents` to render the document list and enforce max file count. With the current wiring, completed uploads will disappear from the UI and validation will ignore existing backend documents.

### Issue Context
The Notebooks API now has `listDocuments(sessionId)`.

### Fix Focus Areas
- workspaces/lightspeed/plugins/lightspeed/src/components/LightSpeedChat.tsx[1252-1262]
- workspaces/lightspeed/plugins/lightspeed/src/components/notebooks/NotebookView.tsx[228-370]
- workspaces/lightspeed/plugins/lightspeed/src/api/NotebooksApiClient.ts[180-186]

### What to change
- Add a documents query (e.g., `useQuery({ queryKey: ['notebooks','documents',sessionId], queryFn: () => notebooksApi.listDocuments(sessionId) })`) and pass the result into `NotebookView`, OR move the fetch into `NotebookView` and remove the `documents` prop.
- After upload completes (polling reaches `completed`), invalidate/refetch the documents query so completed documents appear in the sidebar.
- Use the fetched document count for `existingDocumentCount` to enforce `NOTEBOOK_MAX_FILES` correctly.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
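The wiring change suggested above can be sketched as deriving the sidebar state from documents fetched via `listDocuments(sessionId)` plus in-flight uploads, instead of a hard-coded empty array. Types and the helper name are illustrative:

```typescript
// Sketch only: derive NotebookView's display state from fetched backend
// documents plus in-flight uploads; names here are hypothetical.
type DocumentItem = { documentId: string; fileName: string };

function deriveSidebarState(
  fetched: DocumentItem[],
  uploadingFileNames: string[],
  maxFiles: number,
) {
  return {
    // Show the sidebar if anything is uploaded or uploading.
    hasDocuments: fetched.length > 0 || uploadingFileNames.length > 0,
    // Enforce the max-files limit against real backend documents.
    existingDocumentCount: fetched.length,
    canUploadMore: fetched.length + uploadingFileNames.length < maxFiles,
  };
}
```

In the real component the fetched list would come from a query keyed on the session id and be refetched when polling reports a completed upload, so completed documents stay visible in the sidebar.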



Comment on lines +36 to +62 in workspaces/lightspeed/plugins/lightspeed/src/const.ts
Action required

1. Upload limits/type mismatch 🐞 Bug ≡ Correctness

Member Author

@JslYoon We will be supporting .docx/.odt and 25MB files in the backend too. Please make the required changes.

@its-mitesh-kumar its-mitesh-kumar changed the title [WIP} Feat/lightspeed create notebooks [WIP] Feat/lightspeed create notebooks Apr 6, 2026
Signed-off-by: its-mitesh-kumar <itsmiteshkumar98@gmail.com>
@its-mitesh-kumar its-mitesh-kumar changed the title [WIP] Feat/lightspeed create notebooks feat(lightspeed): creating a new notebook and upload the documents Apr 8, 2026
its-mitesh-kumar (Member Author) commented:

@JslYoon While making the sessions API call, it's giving one document in the response; while calling the document list API, I am getting all three documents. Screen recording for the same:

Screen.Recording.2026-04-08.at.6.59.09.PM.mov


its-mitesh-kumar commented Apr 8, 2026

@aprilma419 Watch the recordings and let me know of any concerns you have with them, apart from the documents count on the notebook list.

Signed-off-by: its-mitesh-kumar <itsmiteshkumar98@gmail.com>

JslYoon commented Apr 8, 2026

@JslYoon While making the sessions API call, it's giving one document in the response; while calling the document list API, I am getting all three documents. Screen recording for the same.

This is not a bug but a known problem: it happens when you reload/restart rhdh-plugins while you upload the file. The metadata isn't updated, but the file is, because llama-stack processes it. So it should be fine.

Signed-off-by: its-mitesh-kumar <itsmiteshkumar98@gmail.com>

sonarqubecloud bot commented Apr 8, 2026

@aprilma419

LGTM!

The only concern I have is whether we need to show a success alert for each source uploaded. Maybe getting rid of the alerts is fine; if a source fails to upload, the inline alert in the modal would indicate it.
