Create config for jepa forecasting finetuning, and fix jepa finetuning #1946
csjfwang wants to merge 48 commits into ecmwf:develop
Conversation
2. Fix issue #1943: JEPA finetuning with 2D-RoPE
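Item 2 references 2D-RoPE. As background, here is a minimal sketch of how 2D rotary position embeddings rotate feature pairs by row- and column-dependent angles; this is purely illustrative (the function `rope_2d` and its channel layout are assumptions), not the repository's actual implementation.

```python
import math

# Illustrative 2D-RoPE sketch (not the repo's code): the first half of the
# channels is rotated by angles derived from the row index, the second half
# by angles derived from the column index. Pairwise rotations preserve norms.
def rope_2d(x, row, col, base=10000.0):
    """x: list of floats, length divisible by 4; returns the rotated vector."""
    d = len(x)
    out = x[:]
    quarter = d // 4
    for axis_pos, offset in ((row, 0), (col, d // 2)):
        for i in range(quarter):
            # Frequency schedule as in standard RoPE, per spatial axis.
            theta = axis_pos / (base ** (2 * i / (d // 2)))
            a, b = x[offset + 2 * i], x[offset + 2 * i + 1]
            out[offset + 2 * i] = a * math.cos(theta) - b * math.sin(theta)
            out[offset + 2 * i + 1] = a * math.sin(theta) + b * math.cos(theta)
    return out
```

Because each channel pair undergoes a plain 2D rotation, the embedding is norm-preserving and position (0, 0) leaves the vector unchanged, which is the usual sanity check for RoPE variants.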
```diff
 "student-teacher": {
   enabled: False,
-  type: LossLatentSSLStudentTeacher,
+  type: Disabled,
```
Why was this change necessary?
Because this part is not filtered by `enabled: False`:
```diff
 enabled: False,
 masking_strategy: "random",
-num_samples: 1,
+num_samples: 0,
```
If it's already `enabled: False`, why do we need `num_samples: 0`?
Since I re-used the function `get_batch_size_from_config()`, and it doesn't filter by `enabled: False`, I will send another fix to avoid `num_samples: 1` still being used.
Oh I see, so in effect this part of the code gets the unfiltered config, and that is the source of the error.
For now set it to 0 samples, but maybe raise an issue!
Thank you! Then I will keep `num_samples: 0` and raise an issue to report the unfiltered config!
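The fix discussed above amounts to dropping `enabled: False` sections before summing sample counts. A hypothetical sketch of such filtering, assuming the config is a nested dict as in this PR (the helper name `filter_enabled` and the batch-size sum are illustrative, not the repo's actual `get_batch_size_from_config` API):

```python
# Hypothetical helper: drop config sub-sections whose 'enabled' flag is False,
# so disabled tasks no longer contribute their num_samples to the batch size.
def filter_enabled(cfg: dict) -> dict:
    return {
        key: value
        for key, value in cfg.items()
        if not (isinstance(value, dict) and value.get("enabled") is False)
    }

cfg = {
    "forecasting": {"enabled": True, "num_samples": 2},
    "student-teacher": {"enabled": False, "num_samples": 1},
}

filtered = filter_enabled(cfg)
batch_size = sum(v.get("num_samples", 0) for v in filtered.values())
print(batch_size)  # 2: the disabled section no longer contributes
```

With such a filter in place, setting `num_samples: 0` on disabled sections would become unnecessary, since they would never reach the batch-size computation at all.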
```diff
 # granted to it by virtue of its status as an intergovernmental organisation
 # nor does it submit to any jurisdiction.
+
+embed_orientation: "channels"
```
Let's remove the model params of the encoder, because they should be taken from the `base_config` anyway.
@sophie-xhonneux I removed the params related to the encoder, can you look again?
Description
Issue Number
Closes #1943
Is this PR a draft? Mark it as draft.
Checklist before asking for review

- `./scripts/actions.sh lint`
- `./scripts/actions.sh unit-test`
- `./scripts/actions.sh integration-test`
- `launch-slurm.py --time 60`