19 changes: 19 additions & 0 deletions imap_processing/lo/l2/lo_l2.py
@@ -1307,6 +1307,25 @@ def calculate_flux_corrections(dataset: xr.Dataset, flux_factors: Path) -> xr.Da
"""
logger.info("Applying flux corrections")

bg_logarithmic_stability_factor = 0.04

# Add in the background intensity to ensure that logarithms behave
# properly in the flux corrector when intensities are zero or very low.
Comment on lines +1312 to +1313

@subagonsouth (Contributor) commented on Jan 31, 2026:

This is really confusing to me. I thought that a critical idea in the Lo algorithm is that the backgrounds do not get subtracted. So coming into this function, ena_intensity = signal_intensity + bg_intensity. Then this factor of the background is added on top, which means ena_intensity = signal_intensity + bg_intensity + stability_factor * bg_intensity. That doesn't make sense to me. I know that this is what Nathan wants, but I am really struggling to understand why. To me this seems like it is purely a problem with the poor-quality input data being used for validation, and this fix actually degrades the output product.
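
A minimal sketch of the algebra described above (illustrative numbers only; the variable names are hypothetical and not taken from the module):

import numpy as np

# Hypothetical, illustrative values -- not real L2 data.
signal_intensity = np.array([0.0, 0.5, 2.0])  # true ENA signal
bg_intensity = np.array([1.0, 1.0, 1.0])      # un-subtracted background
stability_factor = 0.04

# Coming into this function (backgrounds are not subtracted):
ena_intensity = signal_intensity + bg_intensity

# After the proposed addition the background is counted 1.04 times:
ena_intensity += stability_factor * bg_intensity
print(ena_intensity)  # [1.04 1.54 3.04] == signal + 1.04 * bg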


A Collaborator replied:

I completely agree with this comment, and that is why I'm hesitant to throw this in there at the last second.

I think this is actually really bad because it adds in a 4% uncertainty but then doesn't remove it later on, so it could be very misleading about what is going on. And this is only done before flux corrections, so if someone is looking at a sputtering/bootstrap map, those wouldn't be corrected. It seems like a recipe for confusion.
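
A small sketch of this concern, assuming the quadrature update shown in the diff below (scalar, illustrative numbers only; the real quantities are per-pixel arrays):

import numpy as np

bg_intensity = 1.0
stat_uncert = 0.1
factor = 0.04

offset = factor * bg_intensity                    # 0.04 added onto ena_intensity
new_uncert = np.sqrt(stat_uncert**2 + offset**2)  # ~0.1077, quadrature inflation
print(offset, new_uncert)

# Nothing later subtracts `offset` back out, so any product built from
# ena_intensity without flux correction (e.g. a sputtering or bootstrap
# map) carries the extra 4%-of-background term as well as the inflated
# uncertainty.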

dataset["ena_intensity"] += (
dataset["bg_intensity"] * bg_logarithmic_stability_factor
)
dataset["bg_intensity"] += dataset["bg_intensity"] * bg_logarithmic_stability_factor

# Commensurately, adjust the uncertainties to account for this addition
dataset["ena_intensity_stat_uncert"] = np.sqrt(
(dataset["ena_intensity_stat_uncert"]) ** 2
+ (dataset["bg_intensity"] * bg_logarithmic_stability_factor) ** 2
)
dataset["bg_intensity_sys_err"] = np.sqrt(
(dataset["bg_intensity_sys_err"]) ** 2
+ (dataset["bg_intensity"] * bg_logarithmic_stability_factor) ** 2
)
# Flux correction
corrector = PowerLawFluxCorrector(flux_factors)
