✨ Upload jdtls log to analysis attachments #178
Conversation
In the java provider, there is a `.metadata/.log` file with the full output of jdtls. This can be needed for debugging, so this PR adds it to the analysis task attachments (under the more descriptive name `provider-jdtls.log`). Konveyor-related entries of this log were already part of the provider log, but this PR adds the full content of the jdtls log file.
Fixes: konveyor#177
Signed-off-by: Marek Aufart <[email protected]>
Walkthrough

Added functionality to copy the java provider's `.metadata/.log` (the full jdtls output) to `provider-jdtls.log` and upload it as an analysis task attachment.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Run as Run Function
    participant FS as File System
    participant API as addon.File API
    Run->>FS: Check if Dir/.metadata/.log exists
    alt File exists
        Run->>FS: Open metadata log
        Run->>FS: Create Dir/provider-jdtls.log
        Run->>FS: Copy metadata log contents
        Run->>API: Post file (upload)
        alt Upload successful
            Run->>Run: Attach uploaded file to output
        else Upload fails
            Run->>Run: Return error
        end
    else File not found
        Note over Run: Continue (no-op)
    end
```
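A minimal Go sketch of this flow, assuming the hub addon client exposes `File.Post` and `Attach` as in the snippets quoted later in this review; the stub types and exact signatures below are assumptions for illustration, not the PR's actual code:

```go
package main

import (
	"io"
	"os"
	"path"
)

// Dir is the analysis working directory (defined in cmd/main.go in this repo).
var Dir = "/working/dir"

// Hypothetical stand-ins for the hub addon client so the sketch compiles;
// the real types come from the konveyor addon SDK.
type File struct{ ID uint }

type fileAPI struct{}

func (fileAPI) Post(path string) (*File, error) { return &File{}, nil }

type addonClient struct{ File fileAPI }

func (addonClient) Attach(f *File) {}

var addon addonClient

// uploadJdtlsLog copies the java provider's .metadata/.log to the more
// descriptive provider-jdtls.log and uploads it as an analysis attachment.
func uploadJdtlsLog() (err error) {
	metadataLog := path.Join(Dir, ".metadata", ".log")
	renamedLog := path.Join(Dir, "provider-jdtls.log")
	if _, statErr := os.Stat(metadataLog); os.IsNotExist(statErr) {
		return nil // no jdtls log: nothing to attach (no-op)
	}
	src, err := os.Open(metadataLog)
	if err != nil {
		return
	}
	defer src.Close()
	dst, err := os.Create(renamedLog)
	if err != nil {
		return
	}
	defer dst.Close()
	if _, err = io.Copy(dst, src); err != nil {
		return
	}
	// Flush to disk before uploading (the reviewer's Sync suggestion below).
	if err = dst.Sync(); err != nil {
		return
	}
	f, err := addon.File.Post(renamedLog) // upload under the new name
	if err != nil {
		return
	}
	addon.Attach(f) // attach the uploaded file to the analysis output
	return
}

func main() {
	if err := uploadJdtlsLog(); err != nil {
		os.Exit(1)
	}
}
```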
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (5 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
cmd/analyzer.go (1)
59-85: Consider flushing the destination file before upload.

The implementation correctly copies the metadata log and follows the established upload pattern. However, there is a reliability concern: the destination file remains open when `addon.File.Post` is called on line 79, since the `defer dst.Close()` on line 73 won't execute until the function returns. While the data from `io.Copy` is typically visible to other readers on modern systems, explicitly flushing ensures the log is completely written to disk before upload, preventing potentially incomplete or buffered data. Apply this diff to explicitly sync the file before uploading:
```diff
 	_, err = io.Copy(dst, src)
 	if err != nil {
 		return
 	}
+	err = dst.Sync()
+	if err != nil {
+		return
+	}
 	// Upload the renamed file
 	f, pErr = addon.File.Post(renamedLog)
```
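For reference, `(*os.File).Sync` in the Go standard library commits the file's current contents to stable storage, so this is a cheap guard against uploading partially flushed data.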
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`cmd/analyzer.go` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
cmd/analyzer.go (1)
cmd/main.go (1)
`Dir` (23-23)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-and-upload-for-global-ci
🔇 Additional comments (1)
cmd/analyzer.go (1)
4-4: LGTM: Import addition is correct.

The `io` package is needed for the `io.Copy` operation introduced later in the file.
```go
		}
		addon.Attach(f)
	}
	metadataLog := path.Join(Dir, ".metadata", ".log")
```
This seems like a java-provider-specific concern. The java provider knows about jdtls and these log files and should be responsible for logging them properly. I don't think we should be putting provider-specific logic in the addon.
I agree this file is provider-specific (present for the java provider only). The reason for not putting it into the "standard" tackle2-addon command logging with Reporter is primarily its size (typically 100 MB per application or more).
There are fragments of this file already added to the java provider output (executed via the analyzer); those lines are filtered for konveyor-related entries. However, it turned out to be critical to have the full jdtls log to debug possible failures from user error reports where jdtls failed with something, so we need the full content of that file.
This file is already created and present on the analysis task pod filesystem, so I'd like to find an acceptable way to get it into the Hub files attached to the analysis. Theoretically it could be requested with data on a Task, but that looks risky to me (and we'd need to filter allowed paths anyway so as not to leak e.g. creds).
I'm not sure the provider has a reasonable way to report this file directly. I'm also thinking about extending the filter for .metadata/.log (https://github.com/konveyor/analyzer-lsp/blob/main/external-providers/java-external-provider/pkg/java_external_provider/provider.go#L496-L511), but that also doesn't look like a reliable solution for covering unexpected failures and all the relevant information from the log.
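For context, a hedged sketch of the kind of konveyor-line filter mentioned above; the real filter lives in the linked provider.go, and the function name and match rule here are assumptions:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

// filterKonveyorLines streams a jdtls .metadata/.log and writes only
// konveyor-related entries to out. Illustrative only; not the actual
// implementation in java_external_provider.
func filterKonveyorLines(logPath string, out io.Writer) error {
	f, err := os.Open(logPath)
	if err != nil {
		return err
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	// jdtls log lines can be very long; raise the scanner's line limit.
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(strings.ToLower(line), "konveyor") {
			if _, err := fmt.Fprintln(out, line); err != nil {
				return err
			}
		}
	}
	return scanner.Err()
}

func main() {
	// Example: print konveyor-related lines from the jdtls log to stdout.
	_ = filterKonveyorLines(".metadata/.log", os.Stdout)
}
```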
Signed-off-by: Marek Aufart <[email protected]>
Closing this PR, as it is going to be resolved in a different way. Proposed solution: use the existing addon `Data.Verbosity` field to tell the analyzer (and its providers) the expected log level, which the java-external-provider will then use to decide when the jdtls log should be sent to its stdout. Steps:
cc @dymurray
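A hedged sketch of how that verbosity-driven approach might look on the provider side; `Data.Verbosity` is named above, but the threshold and the surrounding plumbing are assumptions:

```go
package main

import (
	"io"
	"os"
)

// maybeDumpJdtlsLog mirrors the full jdtls .metadata/.log to the provider's
// stdout when the requested verbosity is high enough, so the log lands in
// the provider output instead of a separate attachment. The threshold of 1
// is an assumption for illustration.
func maybeDumpJdtlsLog(verbosity int, metadataLog string) {
	if verbosity < 1 {
		return // default log level: keep jdtls output out of stdout
	}
	f, err := os.Open(metadataLog)
	if err != nil {
		return // log absent or unreadable: nothing to dump
	}
	defer f.Close()
	_, _ = io.Copy(os.Stdout, f)
}

func main() {
	maybeDumpJdtlsLog(2, ".metadata/.log")
}
```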

In java provider, there is a
.metadata/.logfile with full output of jdtls. This might be needed for debugging, adding it to analysis tasks attachments (under more descriptiveprovider-jdtls.logname).There were konveyor-related entires of this log already part of provider log, but this PR adds ful content of the jdtls log file.
Fixes: #177
Summary by CodeRabbit
✏️ Tip: You can customize this high-level summary in your review settings.