Hello,
I have a batch of pairs of sequences. Within each pair the sequences have different lengths, so they are padded to a common length. Is there a way to ignore the padded elements when computing the soft-DTW alignment? For example, https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html provides an `ignore_index` option to exclude a particular class index from the cross-entropy loss.
Do you suggest any workaround to compute the DTW loss efficiently in this case? The only option I can think of is processing each pair individually (removing its padding), but that would be too slow.
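For reference, a minimal sketch of the per-pair workaround described above: slice each padded pair down to its true length before running the soft-DTW recursion, so padding never enters the cost matrix. The `soft_dtw` and `batched_soft_dtw_with_lengths` functions below are illustrative names, not part of any library's API; `soft_dtw` is a plain (slow, pure-Python-loop) reference implementation of the standard soft-DTW forward pass, included only to make the sketch self-contained.

```python
import torch

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW forward pass for a single unpadded pair.

    x: (n, d) tensor, y: (m, d) tensor. Reference implementation with
    explicit Python loops; a real batched/CUDA kernel would be much faster.
    """
    n, m = x.size(0), y.size(0)
    # Squared Euclidean pairwise cost matrix, shape (n, m).
    dist = torch.cdist(x.unsqueeze(0), y.unsqueeze(0)).squeeze(0) ** 2
    # DP table with a one-cell border; R[0, 0] = 0, rest start at +inf.
    R = torch.full((n + 1, m + 1), float("inf"))
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            prev = torch.stack([R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]])
            # Soft-min: -gamma * log(sum(exp(-x / gamma))).
            softmin = -gamma * torch.logsumexp(-prev / gamma, dim=0)
            R[i, j] = dist[i - 1, j - 1] + softmin
    return R[n, m]

def batched_soft_dtw_with_lengths(X, Y, len_x, len_y, gamma=1.0):
    """Loss over a padded batch: trim each pair to its true length.

    X: (B, N, d) and Y: (B, M, d) padded batches; len_x, len_y: (B,)
    integer tensors of true lengths. Padded tail elements are simply
    never seen by the DP, so their values do not matter.
    """
    return torch.stack([
        soft_dtw(X[b, : len_x[b]], Y[b, : len_y[b]], gamma)
        for b in range(X.size(0))
    ])
```

This keeps the Python-level loop over the batch that the question hopes to avoid; the point of the sketch is only that trimming by length gives results identical to running on the unpadded pairs, so a batched kernel that accepts per-sample lengths could replace the outer loop without changing the semantics.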