how to download the dataset

#7
by kwon-encored - opened

In the PVNet GitHub repository, under config.yaml, the example NWP file is specified as:

"gs://solar-pv-nowcasting-data/NWP/UK_Met_Office/UKV_intermediate_version_7.zarr"

However, it appears that the actual NWP files are sourced from dwd-icon-eu, which covers dates from 2022-05-07 to 2023-05-08, and the Hugging Face site shows multiple files available for download on any given date, rather than a single Zarr file.

The code seems to require a single Zarr file, but when I navigate to the 2023/05/08 directory, I find several different files available for download, and I'm unsure how to handle the multiple files across dates. Can you please provide guidance on how to proceed?


Link to config.yaml => https://github.com/openclimatefix/PVNet/blob/main/configs.exam

While the code requires one single Zarr file, we have a lot of them… do I have to somehow merge all those files for different dates into one Zarr file?

Open Climate Fix org

Hi! Thanks for reaching out.

We support opening multiple files, so no need to combine them! You can just pass something like this as nwp_zarr_path: "PATH/*.zarr.zip". If you're interested in how that works, the bit that handles that is in ocf_datapipes here: https://github.com/openclimatefix/ocf_datapipes/blob/main/ocf_datapipes/load/nwp/providers/utils.py
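
In case it helps, the idea is roughly the following (a minimal sketch, not the library's exact code; the concat dimension name is an assumption):

```python
# Minimal sketch of the idea (not ocf_datapipes' exact code): xarray can
# open a glob of zipped Zarr stores and concatenate them along the
# forecast init-time dimension. The dimension name is an assumption.
import xarray as xr

ds = xr.open_mfdataset(
    "PATH/*.zarr.zip",       # glob matching all the zipped Zarr archives
    engine="zarr",           # read each match with the Zarr backend
    combine="nested",        # concatenate in the order the files match
    concat_dim="init_time",  # stack forecasts along their init times
)
```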

@AUdaltsova

Thank you so much for your reply!

Just to clarify, my understanding is that I need to first download all the necessary NWPs (e.g., 20230508_00, 20230508_06, 20230508_12, 20230508_18) and place them into a specific folder.
For instance, if I store them in a folder named /home/user/Desktop/all_NWP_datasets_0508/,

then I would set nwp_zarr_path as /home/user/Desktop/all_NWP_datasets_0508/*.zarr.zip

Is that correct?
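
For concreteness, the download step I have in mind looks something like this (the repo id and the per-run file layout here are my guesses from the Hugging Face page):

```python
# Rough sketch of the download step; the repo id and per-run file layout
# are guesses from the Hugging Face page, not confirmed values.
from huggingface_hub import hf_hub_download

for run in ("00", "06", "12", "18"):
    hf_hub_download(
        repo_id="openclimatefix/dwd-icon-eu",
        repo_type="dataset",
        filename=f"data/2023/5/8/20230508_{run}.zarr.zip",  # guessed layout
        local_dir="/home/user/Desktop/all_NWP_datasets_0508",
    )
# Note: local_dir mirrors the repo's folder structure, so the files may
# land in a subfolder -- adjust the glob or move them up afterwards.
```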

Open Climate Fix org

Yes, exactly, that should work!

@AUdaltsova
Sorry to bother you again, but I could use some more advice.

[screenshot: satellite dataset import instructions from the GitHub repository]

The image above is from the GitHub repository instructions on how to import the satellite dataset. Upon further investigation, I discovered that the satellite file (version 4, 2021 HRV) is a massive 2251 GB, which makes it impractical to download to my local computer. However, the only relevant configuration shown in the config.yaml file is the following:

[screenshot: satellite section of config.yaml]

Where should I include the Google authentication information to ensure that this GitHub repository can successfully access and read the satellite file from Google Cloud Storage? I’m a bit confused about how the authentication is supposed to work without it being explicitly specified in the code or the YAML file.

Thank you again for your incredible help!

Open Climate Fix org

No problem! Unfortunately, I'm not very well placed to answer this, but @james-ocf might be able to help?

Open Climate Fix org

The dataset you are trying to open is here

And there is an example on how to open the dataset here

I would have thought it's public, so you don't need any authentication at all, but I could be wrong. What error messages do you get?
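
For what it's worth, if it is public, an anonymous read along these lines ought to work without logging in (untested; the exact Zarr path is elided here):

```python
# If the bucket is public, gcsfs should allow an anonymous read via the
# "anon" token; the actual satellite Zarr path is elided here.
import xarray as xr

ds = xr.open_zarr(
    "gs://...",  # the satellite Zarr path from the dataset page
    storage_options={"token": "anon"},
)
```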

@peterdudfield
Thank you for your response and the extra resources!

(About the Google authentication) So when I run the code without the Google authentication step,

[screenshot: the code cell that opens the satellite data]

It shows the following error:

RefreshError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Status: 404 Response:\nb''", <google.auth.transport.requests._Response object at 0x78032c7e7880>)

But when I uncomment the lines at the top and run the code to log into my Google account, the data becomes available.
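
In case it helps anyone else, the uncommented lines look roughly like this (this is the Colab login flow, which is my setup; on a local machine, running gcloud auth application-default login in a terminal plays the same role):

```python
# Authenticating first resolved the RefreshError. This is the Colab login
# flow; on a local machine, `gcloud auth application-default login` in a
# terminal plays the same role.
from google.colab import auth
auth.authenticate_user()  # opens the Google login prompt

import xarray as xr
ds = xr.open_zarr("gs://...")  # the satellite Zarr path, elided here
```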

@AUdaltsova

Thank you for your kind response once again.

I have another question regarding your research paper titled Forecasting Regional Photovoltaic Power in Great Britain with a Multi-Modal Late Fusion Network
(https://s3.us-east-1.amazonaws.com/climate-change-ai/papers/iclr2024/46/paper.pdf).

In your study, you selected the window resolution of UKV data to be 24x24 pixels (where 1 pixel represents 2 kilometers by 2 kilometers)
and ECMWF data to be 12x12 pixels (where 1 pixel represents 0.05 degrees by 0.05 degrees).

Could you please clarify whether there was any specific rule or regulation that led you to choose the values of 24 and 12, or was this decision made primarily to accommodate the different sizes of regions in the United Kingdom?

Open Climate Fix org

So we are trying to predict GSPs with this paper, i.e. small regions in the UK.
We tried to give the model relevant information, so a grid around each region. On balance, we don't want to provide the model with too much information, as that would mean extra memory, storage, etc.
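
To put rough numbers on it, the two windows cover broadly similar areas around each region (using ~111 km per degree of latitude as an approximation):

```python
# Back-of-the-envelope check of the physical extents implied by the
# window sizes discussed above (~111 km per degree is approximate).
ukv_km = 24 * 2.0              # UKV: 24 px at 2 km/px -> 48 km per side
ecmwf_deg = 12 * 0.05          # ECMWF: 12 px at 0.05 deg/px -> 0.6 deg
ecmwf_km_ns = ecmwf_deg * 111  # ~67 km north-south
print(ukv_km, round(ecmwf_deg, 2), round(ecmwf_km_ns))  # 48.0 0.6 67
```

So both grids give the model a window on the order of 50-70 km around each GSP.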

In PVNet 2.0, I use the ICON-EU dataset; however, running "python save_batches.py" raises:

KeyError: "Values for icon-eu not yet available in ocf-datapipes ['ukv', 'gfs', 'ecmwf', 'ecmwf_india', 'excarta', 'merra2', 'merra2_uk']"

Could you please tell me how to deal with the ICON-EU dataset in PVNet, as the default "UKV_intermediate_version_7.zarr" is not available at the moment?

Open Climate Fix org

Hi @cjwddhfys

Thanks for getting in touch.

You are totally right, we haven't integrated the ICON data from HF into PVNet yet. We did support it when we had one large Zarr of ICON data, but the HF data is laid out differently, and a single large Zarr of it was too big to upload to HF.
I've put this in a GitHub issue https://github.com/openclimatefix/ocf_datapipes/issues/359 and you're very welcome to give it a go.
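
If you do want to give it a go, the shape of the change is roughly to add an ICON-EU opener alongside the existing providers, e.g. (a sketch only; the function and dimension names are assumptions, not the actual ocf_datapipes API):

```python
# Sketch of what an ICON-EU opener might look like, by analogy with the
# existing providers; names here are assumptions, not ocf_datapipes' API.
import xarray as xr

def open_icon_eu(zarr_path: str) -> xr.Dataset:
    """Open one or many ICON-EU Zarr archives as a single dataset."""
    return xr.open_mfdataset(
        zarr_path,               # may be a glob like "PATH/*.zarr.zip"
        engine="zarr",
        combine="nested",
        concat_dim="init_time",
    )

# ...and register it under the "icon-eu" key wherever the KeyError above
# is raised, so save_batches.py can find it.
```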

Thanks
