Inconsistent updates for the last few weeks
First off, thank you for building and maintaining this data set!
I found this data set while searching for existing historical archives for DWD ICON and ICON-EU. As far as I can tell, this is the only open archive of ICON-EU forecasts. I work in renewable energy forecasting and was interested in testing the improvements we might find if ICON forecasts were included as an input.
The uploads for the last few weeks have been a bit sparse, with many missing days and cycles. Is it possible that the missing forecast runs will still be processed and uploaded to Hugging Face?
Thank you!
Hi,
Yes, sorry about that. We do think it's the only open archive of ICON-EU forecasts. We've been grabbing the raw files and should be able to backfill them, so we hope to add the missing runs. The machine doing the backups has been down intermittently, which is why the last couple of weeks have been inconsistent. Some of the data from December last year is also available through Open Meteo's open dataset, although it doesn't seem to have all the variables that we archive here.
No worries at all, and thanks again for maintaining this archive!
Is there anything I can do to help? Would it be possible to get access to the raw gribs? I could do the work to process them to zarr and backfill the missing forecasts in this dataset.
The gribs we use are all pulled from this rolling archive https://opendata.dwd.de/weather/nwp/icon/grib/, which keeps a two-day window of the latest data. The issue we're facing at the moment with the processing is bandwidth: it's a fair amount of data to pull and process every init time, and it was eating into the network and compute capacity of our other dataset pipelines! All the scripts we use to download and process are available on the openclimatefix GitHub if you want to have a look - just be warned that you need a decent chunk of RAM and generous internet speeds!
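If it helps to picture the pull side, one init time looks roughly like this - a minimal sketch only, assuming the usual opendata HTML index and bz2-compressed grib2 files. The icon-eu path, variable name, and output layout below are illustrative; the real download logic lives in the scripts on our GitHub.

```python
# Minimal sketch of pulling one variable for one init time from the DWD
# rolling archive. Paths and the variable are illustrative placeholders;
# see the openclimatefix scripts for the real download logic.
import bz2
import re
from pathlib import Path

import requests

BASE = "https://opendata.dwd.de/weather/nwp/icon-eu/grib"  # assumed ICON-EU layout


def download_variable(run: str, variable: str, out_dir: Path) -> None:
    """Download all .grib2.bz2 files for one init hour ('00', '06', ...) and variable."""
    listing_url = f"{BASE}/{run}/{variable}/"
    listing = requests.get(listing_url, timeout=60)
    listing.raise_for_status()
    # The opendata server serves a plain HTML index; scrape the file links.
    filenames = re.findall(r'href="([^"]+\.grib2\.bz2)"', listing.text)
    out_dir.mkdir(parents=True, exist_ok=True)
    for name in filenames:
        resp = requests.get(listing_url + name, timeout=300)
        resp.raise_for_status()
        # Files are bz2-compressed grib2; decompress on the way down.
        (out_dir / name.removesuffix(".bz2")).write_bytes(bz2.decompress(resp.content))


if __name__ == "__main__":
    download_variable("00", "t_2m", Path("gribs/00/t_2m"))
```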
Thank you! I totally understand the network and compute bandwidth issues after dealing with similar pipelines for a number of other weather models.
I poked around the pipeline code at https://github.com/openclimatefix/nwp-consumer. That's how I got the idea that I could maybe help backfill this Hugging Face ICON-EU archive using the same code if I had access to an archive of the raw grib files.
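To be concrete about the processing I'd be offering to do, the grib-to-zarr step I have in mind is roughly the following - a sketch with plain xarray + cfgrib rather than the actual nwp-consumer code path, and with made-up paths and chunking:

```python
# Rough sketch of turning a folder of grib2 files for one init time into a
# zarr store with xarray + cfgrib. This is not the nwp-consumer code path,
# just an illustration; paths and chunk sizes are placeholders.
from pathlib import Path

import xarray as xr


def gribs_to_zarr(grib_dir: Path, zarr_path: Path) -> None:
    datasets = []
    for grib_file in sorted(grib_dir.glob("*.grib2")):
        # cfgrib exposes each grib file as an xarray Dataset.
        datasets.append(xr.open_dataset(grib_file, engine="cfgrib"))
    # Stack the per-step files along the forecast step dimension.
    merged = xr.concat(datasets, dim="step").sortby("step")
    merged = merged.chunk({"step": 1})
    merged.to_zarr(zarr_path, mode="w")


if __name__ == "__main__":
    gribs_to_zarr(Path("gribs/00/t_2m"), Path("icon_eu_00_t2m.zarr"))
```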
We actually started pulling in the grib files from the DWD's opendata service last week, so we should be good going forward. The issue is that we'd ideally have a continuous historical archive of the 0Z cycles from DWD over a long period so we can build new models with a good sample size. Is there any way I can get access to the historical gribs you've collected for days where a forecast is missing from this zarr archive? Maybe we could open up an S3 bucket or something where gribs could be pushed, and then run the processing to fill in the missing zarr datasets here?
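On the mechanics, the push side could be as simple as something like this boto3 sketch - the bucket name and key layout are hypothetical, just to show the shape of it:

```python
# Hypothetical sketch of pushing archived gribs to a shared S3 bucket so the
# zarr backfill could run against them. Bucket name and key layout are made up.
from pathlib import Path

import boto3

BUCKET = "icon-eu-grib-backfill"  # hypothetical bucket name


def push_init_time(grib_dir: Path, init_time: str) -> None:
    s3 = boto3.client("s3")
    for grib_file in sorted(grib_dir.rglob("*.grib2")):
        # Key the objects by init time plus the path relative to the init dir.
        key = f"{init_time}/{grib_file.relative_to(grib_dir)}"
        s3.upload_file(str(grib_file), BUCKET, key)


if __name__ == "__main__":
    push_init_time(Path("gribs/00"), "20230101T00")
```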
I think hosting an S3 bucket with the files might again be quite expensive - but the good news is I've got the rolling archival service working again so it should be there going forward, and I'll start investigating backfilling the missing bits. For reference, the code I'm using for it now is at https://github.com/openclimatefix/dagster-dags/containers/icon - the consumer ended up using too much memory due to the way it is designed (I'm working on remedying that as well, but in the meantime the basic container works well enough).
Oh, that's great to hear! We just got something similar running for processing the realtime gribs to zarr. I was just about to process what was available in the archive to a more time-series focused data set but will hold off for now if there's a chance some of these gaps will get filled in. Thank you!
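For reference, this is roughly how I've been enumerating the gaps on my side - just a sketch that assumes the repo id and a YYYYMMDD_HH stamp somewhere in the store names, neither of which I've confirmed against the actual layout:

```python
# Sketch of listing which 00Z init times are present in the Hugging Face
# dataset so the gaps can be enumerated. The repo id and the assumption that
# filenames contain a YYYYMMDD_00 stamp are mine, not confirmed.
import re
from datetime import date, timedelta

from huggingface_hub import HfApi

REPO_ID = "openclimatefix/dwd-icon-eu"  # assumed repo id


def missing_00z_days(start: date, end: date) -> list[date]:
    files = HfApi().list_repo_files(REPO_ID, repo_type="dataset")
    present = {
        m.group(1)
        for f in files
        if (m := re.search(r"(\d{8})_00", f))  # assumed naming convention
    }
    days, d = [], start
    while d <= end:
        if d.strftime("%Y%m%d") not in present:
            days.append(d)
        d += timedelta(days=1)
    return days


if __name__ == "__main__":
    for day in missing_00z_days(date(2023, 1, 1), date(2023, 3, 1)):
        print(day)
```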