# V-BeachNet
This repository contains the official PyTorch implementation for the paper "A New Framework for Quantifying Alongshore Variability of Swash Motion Using Fully Convolutional Networks."
V-BeachNet paper:
Salatin, R., Chen, Q., Raubenheimer, B., Elgar, S., Gorrell, L., & Li, X. (2024). A New Framework for Quantifying Alongshore Variability of Swash Motion Using Fully Convolutional Networks. *Coastal Engineering*, 104542. https://doi.org/10.1016/j.coastaleng.2024.104542
## Prerequisites
This code has been tested on a fresh installation of Ubuntu 24.04 with the default Python version and an NVIDIA GPU.
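To confirm that the NVIDIA driver is installed and the GPU is visible before proceeding, you can run the optional sanity check below (`nvidia-smi` ships with the NVIDIA driver):

```bash
# Should list your GPU and the installed driver/CUDA version.
nvidia-smi
```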
Install the Anaconda prerequisites (these packages are also listed in the official Anaconda installation documentation):
```bash
sudo apt update && \
sudo apt install libgl1-mesa-dri libegl1 libglu1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2-data libasound2-plugins libxi6 libxtst6
```
Download Anaconda3:
```bash
curl -O https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh
```
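Optionally, verify the download before running it. This step is a standard precaution rather than part of the original instructions; the expected SHA-256 hash for each installer is published on the Anaconda archive page:

```bash
# Compare the printed hash against the one listed on repo.anaconda.com/archive
sha256sum Anaconda3-2024.06-1-Linux-x86_64.sh
```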
Locate the downloaded file and install it:
```bash
bash Anaconda3-2024.06-1-Linux-x86_64.sh
```
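The installer typically offers to run `conda init` for you at the end. If you skipped that step, a common follow-up (assuming the default install path `~/anaconda3`) is:

```bash
# Register conda in your shell startup file, then reload it.
~/anaconda3/bin/conda init bash
source ~/.bashrc
conda --version   # sanity check
```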
## Steps
Clone this repository and change directory:
```bash
git clone https://huggingface.co/rezasalatin/V-BeachNet.git
cd V-BeachNet
```
Create the virtual environment with the requirements:
```bash
conda env create -f environment.yml
conda activate vbeach
```
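Since the repository is a PyTorch implementation, a quick way to confirm the environment works and that CUDA is visible to PyTorch is the one-liner below (a sanity check, not a required step):

```bash
# Prints the PyTorch version and whether a CUDA-capable GPU is usable.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```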
Visit the "Training_Station" folder and copy your manually segmented (using labelme) dataset to this directory. Open the following file to change any of the variables and save it. Then execute it to train the model:
```bash
./train_video_seg.sh
```
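If you have not yet created the manual segmentations, a typical labelme workflow looks like the sketch below. The commands are standard labelme usage, but the `frames/` and `annotations/` directory names are placeholders; the exact layout the training script expects is set by the variables in `train_video_seg.sh`.

```bash
# Install labelme and annotate extracted video frames interactively.
# "frames/" (input images) and "annotations/" (output JSONs) are hypothetical names.
pip install labelme
labelme frames/ --output annotations/
```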
Access your trained model from the `log/` directory.

Visit the "Testing_Station" folder and copy your data into this directory. Open the following file, change any of the variables as needed (especially the model path, which should point into the `log/` folder), and save it. Then execute it to test the model:

```bash
./test_video_seg.sh
```
Access your segmented data from the `output` directory.
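If the output consists of per-frame segmentation images, you could assemble them into a video for visual inspection with ffmpeg. The frame-name pattern and frame rate below are assumptions; adjust them to match the actual file names in `output`:

```bash
# Stitch numbered frames into an mp4 at 10 fps (pattern and rate are assumptions).
ffmpeg -framerate 10 -i output/frame_%04d.png -pix_fmt yuv420p segmented.mp4
```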