Datasets:
Data Structure for Late-Fusion Model
@AUdaltsova
Hi, I saw your presentation for "Forecasting regional PV power in Great Britain with a multi-modal late fusion network" and got interested!
Let me explain my current situation.
In your MultiModal Late-Fusion research, you structured your NWP dataset with dimensions like (11,12,12,6) or (11,24,24,6), where the 12x12 or 24x24 grid represents the spatial dimensions defined by x_osgb and y_osgb for UKV and lat/lon for IFS, corresponding to location data. I am attempting to apply your methodology to a different country—Saudi Arabia—but the NWP dataset I am using differs slightly, particularly in the projection and coordinate system.
I would like to follow your data structure of (11,12,12,6), but I’m facing challenges due to the differences in the way spatial data is represented. Instead of using x_osgb/y_osgb (or even lat/lon coordinates), my dataset organizes the weather data into a simple pixel grid format, where each 3 km x 3 km grid cell is labeled with integer values starting from (1,1) at the bottom-left corner of the region of interest. Each grid pixel is assigned corresponding NWP variables, but there’s no direct mapping to actual geographical coordinates like latitude and longitude.
Given this setup, my question revolves around the significance of location data (like x_osgb/y_osgb or lat/lon) as inputs in your machine learning model. Since my pixel values don't provide meaningful spatial information to the model in the way latitude or longitude would, I'm concerned this might affect the model's ability to generalize, especially when it comes to predicting location-sensitive outputs like power generation.
Do you think omitting true geographical data and relying solely on my current NWP pixel grid values could negatively impact the model's performance? Or is it safe to proceed with this simplified representation, assuming the pixel-based structure provides enough spatial context for the machine learning input?
Basically, I was asking: under (11,12,12,6), how important is the (12,12) part to the machine learning model? Can it be any grid unit that is not related to the power output dataset's location?
I apologize for such a lengthy inquiry...
Hi @net-zero-2050 ,
Thanks for your interest in the workshop paper/presentation! It's exciting to hear that you are trying to implement this for Saudi Arabia too. I can have a go at answering your query, and others can chip in if I have missed anything:
What ultimately gets fed into the PVNet model are pixel grids (in the original projections of the input) with accompanying values for the NWP variables/satellite image channels. For example, the 12x12 you mention refers to a pixel grid for one of the NWPs. The location coordinates, like lat/lon or OSGB coordinates, are used to find the centre of a point of interest and then build the pixel grids around that; the coordinates themselves aren't explicitly passed to the model.
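(To make that concrete, here is a minimal sketch of that kind of cropping step, assuming the NWP data is held in an xarray dataset with x_osgb/y_osgb dimension coordinates. This is illustrative only, not the actual ocf_datapipes code, and the function and variable names are made up.)

import xarray as xr

def crop_pixel_grid(ds: xr.Dataset, centre_x: float, centre_y: float, half_width: int = 6):
    """Crop a (2*half_width) x (2*half_width) pixel grid around a point of interest.

    The centre coordinates are only used to locate the nearest pixel; the
    model itself only ever sees the resulting grid of values.
    """
    # Find the index of the pixel closest to the point of interest
    x_idx = int(abs(ds.x_osgb - centre_x).argmin())
    y_idx = int(abs(ds.y_osgb - centre_y).argmin())

    # Take a fixed-size window of pixels around that index (12x12 with half_width=6)
    return ds.isel(
        x_osgb=slice(x_idx - half_width, x_idx + half_width),
        y_osgb=slice(y_idx - half_width, y_idx + half_width),
    )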
In your case you mention you already have NWP data in a pixel grid format for your regions of interest; that should be sufficient and matches what we have as inputs to our model. I am curious how you already have these NWP grids around your regions of interest if you didn't have some coordinate system to work around in the first place, but maybe this has been done manually somehow, or it was already done/available in your data from some other pre-processing?
I hope I have understood your question and this helps! Let us know if anything else needs clarifying, thanks
Thank you so much for your response!
To answer your question, the NWP file we use primarily comes from "HARMONIE-AROME."
Past students in our lab have developed algorithms to create this well-structured dataset from just a few key inputs. I apologize for any confusion earlier!
For clarification: when you say that "coordinates themselves aren't explicitly passed to the model", does this mean that in your study the grid (e.g., 12x12 or 24x24) can represent any coordinate system, whether it's lat/lon, LCC, or OSGB?
Also, where exactly does the study include geolocation information? I was under the impression that in order to connect the NWP data from one location to its corresponding power generation model location, geolocation data would be important.
For example, after training the machine learning model with data from Brixton, if we wanted to test it with a new power generation site in Chelsea (which is nearby), wouldn't the model naturally weigh weather data from Brixton more heavily than, say, Liverpool (throughout the "black-box" during ML)?
Thank you again, cheers!!
does this mean that in your study, the grid (e.g., 12x12 or 24x24) can represent any coordinate system, whether it's lat/lon, LCC, or OSGB?
Yes, in theory. Currently there is a list of supported coordinate systems in our data processing library ocf_datapipes (https://github.com/openclimatefix/ocf_datapipes/blob/main/ocf_datapipes/utils/location.py#L20), but that could be extended to include any system. Ultimately the coordinate systems are just used to get a grid of values; the underlying coordinate systems don't actually matter, and they can be different for different inputs. For the UK regional PVNet model, which predicts solar yield for each Grid Supply Point in the UK (around 316 of them), each sample is a specific GSP, and an encoding of the GSP ID is also included. This allows the model to hopefully learn what specific parts of the gridded input to focus on for each GSP.
However, this works because in the models we have trained we have always trained on the sites/locations we are doing predictions for. So in your question/example (thank you for including that, it clarifies what you were asking about), if the model is being tested on a site not included in the training data and has not been given some sort of encoded information about that site, then it may not perform as well (although in this specific case it may be okay, since Brixton was in your training data and may be close enough to Chelsea to generalise). It would be interesting to test how well these models do on this sort of zero-shot learning task. Another thing to possibly try could be including some sort of extra explicit locational feature for each site (perhaps lat/lon), which may then improve performance. But for the best results, including some training data on the sites you would like to do inference on probably makes most sense, although that obviously comes with its own set of challenges!
Hope that helps! Thanks
Hi @Sukhil-Patel ,
I sincerely thank you so much for your valuable time and explanation! I now have a clear understanding and plan to proceed with using my grid unit.
Just one final question: could you please help me locate where in the code "the satellite and two NWP inputs were appended with an additional channel to hold a learned embedding for the GSP ID"?
No problem. The code for that bit can be found in the PVNet library here; it's done as one of the steps in the forward pass of the model.
@Sukhil-Patel
Hi, I came across this forum and noticed that a similar question was already asked.
I'm a bit confused about how appending the GSP_ID affects the satellite and two NWP inputs for the model to learn the locations of solar power generation.
This is the version I'm looking at for GSP_ID data:
https://www.neso.energy/data-portal/gis-boundaries-gb-grid-supply-points/gsp_-_gnode_-_direct_connect_-_region_lookup_20181031
(though in a different file format, not online).
Specifically, how do:
- UKV: x_osgb, y_osgb
- IFS: lat, lon
- Sat: x_osgb, y_osgb
get embedded and concatenated with the GSP_ID?
My thought process was similar to @kwon-encored , where I assumed that each IFS, UKV, and Sat input (along with solar position) must have some connection or relation to the "PVNet solar generation" locations to enable the model to learn correctly. Could you help explain how the embedding is done for each NWP with the GSP_ID?
Hi @aws-s3-renewable , thanks for your question. As a start, if you haven't already, I would recommend having a read of a workshop paper that OCF produced on this topic; it goes into detail about how the model works and what impact the GSP_ID feature has on the learning process. The paper link is here: https://s3.us-east-1.amazonaws.com/climate-change-ai/papers/iclr2024/46/paper.pdf. Sections 3 and 4 are most relevant and will hopefully answer that question. If they don't, then please do let us know and we can try to clarify, thanks!
Hi @Sukhil-Patel , thank you for your response and the reference!
I wanted to clarify a couple of coding details. Specifically, how were the two NWP inputs "appended with an additional channel to hold a learned embedding for the GSP ID" (pg. 3)? Also, how does the process work when "NWP and satellite data have been encoded into 1D vectors, and they are concatenated along with the calculated solar coordinates and another embedding of the GSP ID" (pg. 3)?
In the code you shared previously (https://github.com/openclimatefix/PVNet/blob/main/pvnet/models/multimodal/multimodal.py#L366), I can see that the GSP is being added to modes = OrderedDict(), but I'm not sure where or how it's embedded and concatenated. Could you help clarify this part?
Thanks so much!
So for the first bit of the workshop paper: "Before being fed into the 3D convolutional layers, the satellite and two NWP inputs were appended with an additional channel to hold a learned embedding for the GSP ID"
Whether this is used in the model is controlled by this parameter, which can be set in the config. If it is set to True, then you can see an example of its use here for the satellite image data. If you go into the definition of that ImageEmbedding class, you can see how the GSP ID is fed through an embedding layer, which would be a matrix with shape (number_of_gsp_ids, number_of_pixels_in_image); that then gets concatenated with the original satellite data before being fed through the convolutional encoders.
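(Not the exact PVNet ImageEmbedding code, just a hedged sketch of the idea: a learned per-GSP vector is reshaped into one extra image channel and concatenated onto the satellite stack before the convolutional encoder. The class name, shapes and sizes below are illustrative.)

import torch
import torch.nn as nn

class ImageEmbeddingSketch(nn.Module):
    """Append one learned per-GSP 'channel' to an image stack.

    A rough sketch of the idea described in the paper, not the exact
    ImageEmbedding implementation in PVNet.
    """

    def __init__(self, num_gsp_ids: int, sequence_len: int, image_size_pixels: int):
        super().__init__()
        self.sequence_len = sequence_len
        self.image_size_pixels = image_size_pixels
        # One learned vector per GSP ID, big enough to fill one channel per timestep
        self.embed = nn.Embedding(num_gsp_ids, sequence_len * image_size_pixels**2)

    def forward(self, images: torch.Tensor, gsp_id: torch.Tensor) -> torch.Tensor:
        # images: (batch, channels, time, height, width); gsp_id: (batch,) of integer IDs
        batch_size = images.shape[0]
        extra_channel = self.embed(gsp_id).reshape(
            batch_size, 1, self.sequence_len, self.image_size_pixels, self.image_size_pixels
        )
        # Concatenate the learned channel onto the existing channels
        return torch.cat([images, extra_channel], dim=1)

# Example: batch of 2 samples, 11 channels, 5 timesteps, 24x24 pixels
embedder = ImageEmbeddingSketch(num_gsp_ids=318, sequence_len=5, image_size_pixels=24)
out = embedder(torch.randn(2, 11, 5, 24, 24), torch.tensor([3, 120]))
print(out.shape)  # torch.Size([2, 12, 5, 24, 24])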
For the second bit
NWP and satellite data have been encoded into 1D vectors, and they are concatenated along with the calculated solar coordinates and another embedding of the GSP ID
- The embedding of the GSP ID happens here on this line; the definition of the embedding is here, so it would be a matrix of shape (318, 16) for lots of our examples, since we have 318 GSP IDs and we use an embedding dim size of 16.
- All the modes (including the GSP ID embedding mode) get added to the modes ordered dict, and that gets sent to an output network such as this one. On this line we can see that the concatenation of all the modalities happens before the forward pass through the output network; the definition of that concatenate-modes function is here (a rough sketch of this fusion step is below).
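(Again, only a rough sketch of the late-fusion step rather than the actual PVNet code: each modality has already been encoded to a 1D vector, the GSP ID goes through its own embedding, everything is concatenated along the feature dimension, and the result is passed to an output MLP. The layer sizes and dictionary keys are placeholders.)

import torch
import torch.nn as nn

# Illustrative sizes: 318 GSPs, 16-dim GSP embedding, 256-dim modality encodings
gsp_embedding = nn.Embedding(num_embeddings=318, embedding_dim=16)
output_network = nn.Sequential(
    nn.Linear(256 + 256 + 256 + 16 + 2, 128),  # sat + 2 NWPs + GSP embedding + 2 solar coords
    nn.LeakyReLU(),
    nn.Linear(128, 128),
    nn.LeakyReLU(),
)

def fuse(modes: dict) -> torch.Tensor:
    """Concatenate all 1D modality vectors (late fusion) and run the output MLP."""
    fused = torch.cat(list(modes.values()), dim=1)
    return output_network(fused)

# Example with random stand-in encodings for a batch of 4 samples
modes = {
    "sat": torch.randn(4, 256),
    "nwp_ukv": torch.randn(4, 256),
    "nwp_ifs": torch.randn(4, 256),
    "gsp_id": gsp_embedding(torch.tensor([0, 17, 200, 317])),
    "solar": torch.randn(4, 2),  # e.g. sun azimuth and elevation
}
out = fuse(modes)  # shape (4, 128)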
Just to note, I think in our current trained models we have only used this second bit of gsp_id encoding, i.e. encoding the GSP ID and concatenating it with the 1D vectors of encoded satellite and NWP data, and have not been using the GSP ID encodings within the satellite and NWP data themselves before those are encoded. I would need to double check why, but my assumption is that from empirical results after experimentation we found this to be sufficient and that it didn't change results significantly.
Hope this helps!
@Sukhil-Patel
I sincerely thank you for your valuable feedback on my question.
I think I am getting to the point!
I understand that your research is using Zarr files for all NWPs (IFS & UKV) and Satellite Image.
When you say "concat satellite image to 1D embedded vector" do you mean you can directly embed zarr file with embedded variable?
Re:post
if add_image_embedding_channel:
    self.sat_embed = ImageEmbedding(
        num_embeddings, self.sat_sequence_len, self.sat_encoder.image_size_pixels
    )
- (From the snippet above) even though the explanation is that the satellite zarr file is embedded with the GSP_ID, I do not see where "GSP_ID" is coming from.
- Also, my GSP_ID file shows name of the region, and centroid lat and lon. Is that same as yours?
- Say we are embedding GSP_ID here with sliced satellite data: are you concatenating only the cropped part of GSP_ID or the entire GSP_ID of the entire UK? (meaning all GSPs vs. the sliced GSP for the ROI)
- How should I decide on the value for num_embeddings?
I apologize for the lengthy inquiry. Thank you so much
hi @aws-s3-renewable ,
I'm not sure what you mean by "directly embed zarr file with embedded variable", but I'll try to give a bit of context that might be helpful, and please let me know if you need any clarification.
So what happens is: when creating batches, we crop the initial .zarr files, be that satellite or NWP data, to the "region of interest" and the timeframe we are interested in, which results in a multidimensional array. The region of interest is usually a square around the point of interest (POI), which I think should be the centroid lat/lon you mention in your second question. Depending on the spatial resolution of the data source, the crop can be any size in pixels, but generally speaking all of the sources will tend to cover more or less the same area, which results in some data being 12x12 and some 24x24; you can find a good illustration of that on page 9 of the paper.
For example, for satellite data with 11 channels, a spatial crop of 24x24 and 5 images of history, you will get an array of shape 5x11x24x24 for each observation. All of this is then stored as torch tensors ( @Sukhil-Patel correct me if I'm wrong) with a batch structure you can find in ocf_datapipes; for example, here it is for sat data. This is what will be passed to the encoder (and the embedding, if requested).
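(A tiny illustration of the shapes described above, assuming the cropped satellite data has already been loaded as a numpy array; this is not the ocf_datapipes batching code itself.)

import numpy as np
import torch

# 5 history steps, 11 channels, 24x24 spatial crop for a single observation
sat_crop = np.random.rand(5, 11, 24, 24).astype(np.float32)

# Stack several observations into a batch and convert to a torch tensor
batch = torch.from_numpy(np.stack([sat_crop, sat_crop]))
print(batch.shape)  # torch.Size([2, 5, 11, 24, 24]) -> (batch, time, channels, height, width)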
Hopefully that could clear it up a little bit, let me know if you have any questions regarding this part!
Now to the rest:
- If you look further down in the same file, in the forward pass, you will see the ID passed to the embedding defined by the code snippet you've provided; specifically, you need this part of the code. It says target_id because we reuse the same base model for other things, but in your case that would be the GSP ID, and here it is in the batch model.
- Unfortunately I don't have a file on hand, but that sounds right!
- It will only add the corresponding GSP to each sample in the batch.
- num_embeddings is the number of entries in your embedding table, so in this case the number of GSPs (for us it is 318); see the sketch below.
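(A minimal sketch of how you might size that embedding table for your own dataset; the region count below is a placeholder, and 16 is the embedding size mentioned above.)

import torch.nn as nn

# One embedding row per region/GSP you train on; 16 is the embedding dim used above
num_embeddings = 50  # placeholder: set to the number of regions in your own dataset
region_embedding = nn.Embedding(num_embeddings=num_embeddings, embedding_dim=16)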
This discussion has been very helpful! Thanks to @aws-s3-renewable for the additional questions :)
Quick question: Above, you mentioned that with 11 channels, a spatial crop of 24x24, and 5 historical images, you’ll get an array of shape 5x11x24x24 for each observation.
However, in the paper, the shape is written as 11x24x24x5
Which dimension order is correct to use for the machine learning code in this case?
Hey,
@kwon-encored
I believe it's the torch.swapaxes(sat_data, 1, 2).float() from the line below:
https://github.com/openclimatefix/PVNet/blob/dfce50ac954af1e758edb0dfbdd67c50083ab100/pvnet/models/multimodal/multimodal.py#L322
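(A quick sketch of what that swap does to the dimension order, assuming the batched satellite tensor arrives as (batch, time, channels, height, width); the shapes are just for illustration.)

import torch

# Assume batched satellite data of shape (batch, time, channels, height, width)
sat_data = torch.randn(2, 5, 11, 24, 24)

# Swapping axes 1 and 2 moves channels in front of time for the convolutional encoder
swapped = torch.swapaxes(sat_data, 1, 2).float()
print(swapped.shape)  # torch.Size([2, 11, 5, 24, 24])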
@Sukhil-Patel
I hope you’re doing well!
Could you please share the links or file paths where each encoder and FC-layer is located?
I'm working on implementing code based on your architecture (see image below), and I'm writing it in a traditional style, something like this (without Classes and forward()):
# VERY SIMPLE EXAMPLE CODE for illustration
model = nn.Sequential(
    # Stack 3x3 Conv2d with 32 filters, followed by ELU, repeated 6 times
    nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, padding=1),
    nn.ELU(),
    nn.Flatten(),
    # Fully connected layers with ELU activations
    nn.Linear(32 * input_height * input_width, 256),  # Replace input_height and input_width
    nn.ELU(),
    nn.Linear(256, 256),
    nn.ELU(),
)
Since your code seems to be mixed with components unrelated to the current paper,
it would be very helpful to get specific links for each encoder and FC layer you used.
Thank you so much for your assistance!
@Sukhil-Patel
just adding on from above question,
Even though we begin with fc, 256, I could not find any information about fc_hidden_features: 256 nor fc_hidden_features: 48 at the end (only 128 exists, which happens later). Where can I find the parts that run (1) fc, 256; (2) fc, 48 with LeakyReLU; and (3) the x6 block with 2 fc, 128?
Hi @aws-s3-renewable ,
We build the model from a config file that we load with hydra. The model config file we are currently using is here. In the config you'll see entries such as "output_network", which show the path inside the pvnet library to the final components of the network. There are also encoders for the NWP and satellite data. These network sub-components are then stitched together into the overall network using this class, which is also defined in the config. The hyperparameters you mention are either defined in the config or are calculated from the parameters in the config if you follow the code through. Also note that the config I've linked is for our latest model, and the hyperparameters might be slightly different from those we used in the paper.
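(For anyone unfamiliar with hydra, here is a hedged, minimal illustration of the mechanism: each config entry names a _target_ class plus its keyword arguments, and hydra instantiates it. The target class and numbers below are stand-ins, not the real PVNet config values.)

from hydra.utils import instantiate
from omegaconf import OmegaConf

# Hypothetical, trimmed-down config entry; the real PVNet config specifies the
# encoders, output network and their hyperparameters in the same way
cfg = OmegaConf.create(
    {"_target_": "torch.nn.Linear", "in_features": 786, "out_features": 128}
)
layer = instantiate(cfg)  # builds torch.nn.Linear(786, 128)
print(layer)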
Hope that helps
@kwon-encored hi! If in doubt, you can always find the expected initial shapes here in the batchkey class. But as @aws-s3-renewable has correctly pointed out, those can then change along the way before going into different elements of the model.
By the way, @Sukhil-Patel and I are holding office hours on October 29th, 12:00 BST if you can make it! We will of course keep answering your questions here, but it might be nice to have a live discussion if you can drop in.
@james-ocf
Hi James, I sincerely thank you for your response! (I apologize for bombarding you with such long questions)
When I tried searching for keywords like fc_hidden_features: 256 or fc_hidden_features: 48, I couldn't find them. Would the link you provided include those layers in between?
In the config it shows that we are using this class for one of the NWP encoders (UKV data). In this specific config we don't specify fc_features, so it defaults to the value of 128, but we do specify out_features=256. So in the UKV encoder the data goes through fully connected layers with 128 and then 256 units. The fact that we squeeze and then expand the number of units here was actually my mistake, although I don't think the network performance is particularly sensitive to these choices.
The output network we use in that configuration is this class. You can see that we specify fc_hidden_units=128, which matches the diagram from the paper. It is the out_features parameter from the class which was used to set the final output layer size of 48. This is calculated in the multimodal class's parent class based on the desired forecast horizon and quantiles, and plugged into the output network later.
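(A hedged back-of-the-envelope illustration of how a final layer size like 48 could arise from the forecast horizon and quantiles; the step count and quantile choices below are assumptions for illustration, not necessarily the exact values used in the paper.)

# Illustrative only: the final layer size comes from the forecast horizon and the quantiles.
# For example, 16 half-hourly forecast steps x 3 quantiles (e.g. 10%, 50%, 90%) -> 48 outputs.
forecast_len = 16            # number of future time steps predicted (assumed)
quantiles = [0.1, 0.5, 0.9]  # assumed quantile set for this example
out_features = forecast_len * len(quantiles)
print(out_features)  # 48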