\section{Introduction}\label{sec:intro}
Space provides a useful vantage point for monitoring large-scale trends on the surface of the Earth~\cite{manfreda2018use,albert2017using,yeh2020using}. Accordingly, numerous EO satellite missions have been launched or are being planned. Many EO satellites carry multispectral or hyperspectral sensors that measure the electromagnetic radiation emitted or reflected from the surface; the measurements are then processed to form \emph{data cubes}. These data cubes are valuable inputs to EO applications.
However, two thirds of the surface of the Earth is under cloud cover at any given point in time~\cite{jeppesen2019cloud}. In many EO applications, the clouds occlude the targets of interest and reduce the value of the data. In fact, many weather prediction tasks actually require clear-sky measurements~\cite{liu2020hyperspectral}. Dealing with cloud cover is part-and-parcel of practical EO processing pipelines~\cite{transon2018survey, li2019deep-ieee, paoletti2019deep, mahajan2020cloud, yuan2021review}. Cloud mitigation strategies include segmenting and masking out the portion of the data that is affected by clouds~\cite{griffin2003cloud,gomez-chova2007cloud}, and restoring the cloud-affected regions~\cite{li2019cloud,meraner2020cloud,zi2021thin} as a form of data enhancement. Increasingly, deep learning forms the basis of the cloud mitigation routines~\cite{li2019deep-ieee,castelluccio2015land,sun2020satellite,yang2019cdnet}.
\begin{figure}[t]\centering
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{./figures/intro/rgb_cloudy.pdf}
\caption{Cloudy image (in RGB).}
\end{subfigure}
\hspace{0.5em}
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{./figures/intro/rgb_notcloudy.pdf}
\caption{Non-cloudy image (in RGB).}
\end{subfigure}
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{./figures/intro/b128_patch.pdf}
\caption{Adversarial cube to bias the detector in the cloud-sensitive bands.}
\label{fig:falsecolor}
\end{subfigure}
\hspace{0.5em}
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{./figures/intro/rgb_patch.pdf}
\caption{Adversarial cube blended in the environment in the RGB domain.}
\end{subfigure}
\vspace{-0.5em}
\caption{(Row 1) Cloudy and non-cloudy scenes. (Row 2) Our \emph{adversarial cube} fools the multispectral cloud detector~\cite{giuffrida2020cloudscout} to label the non-cloudy scene as cloudy with high confidence.}
\label{fig:example}
\end{figure}
As the onboard compute capabilities of satellites improve, it has become feasible to conduct cloud mitigation directly on the satellites~\cite{li2018onboard,giuffrida2020cloudscout}. A notable example is CloudScout~\cite{giuffrida2020cloudscout}, which was tailored for the PhiSat-1 mission~\cite{esa-phisat-1} of the European Space Agency (ESA). PhiSat-1 carries the HyperScout-2 imager~\cite{esposito2019in-orbit} and the Eyes of Things compute payload~\cite{deniz2017eyes}. Based on the multispectral measurements, a convolutional neural network (CNN) is executed on board to perform cloud detection, which, in the case of~\cite{giuffrida2020cloudscout}, involves making a binary decision on whether the area under a data cube is \emph{cloudy} or \emph{not cloudy}; see Fig.~\ref{fig:example} (Row 1). To save bandwidth, only \emph{non-cloudy} data cubes are downlinked, while \emph{cloudy} ones are not transmitted to ground~\cite{giuffrida2020cloudscout}.
However, deep neural networks (DNNs) in general and CNNs in particular are vulnerable to adversarial examples, \ie, carefully crafted inputs aimed at fooling the networks into making incorrect predictions~\cite{akhtar2018threat, yuan2019adversarial}. A particular class of adversarial attacks, called physical attacks, inserts adversarial patterns into the environment that, when imaged together with the targeted scene element, can bias DNN inference~\cite{athalye2018synthesizing, brown2017adversarial, eykholt2018robust, sharif2016accessorize, thys2019fooling}. In previous works, the adversarial patterns were typically colour patches optimised by an algorithm and fabricated to conduct the attack.
It is natural to ask if DNNs for EO data are susceptible to adversarial attacks. In this paper, we answer the question in the affirmative by developing a physical adversarial attack against a multispectral cloud detector~\cite{giuffrida2020cloudscout}; see Fig.~\ref{fig:example} (Row 2). Our adversarial pattern is optimised in the multispectral domain (hence is an \emph{adversarial cube}) and can bias the cloud detector to assign a \emph{cloudy} label to a \emph{non-cloudy} scene. Under the mission specification of CloudScout~\cite{giuffrida2020cloudscout}, EO data over the area will not be transmitted to ground.
\vspace{-1em}
\paragraph{Our contributions}
Our specific contributions are:
\begin{enumerate}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item We demonstrate that adversarial cubes can be optimised and physically realised as an array of exterior paints whose multispectral reflectance biases the cloud detector.
\item We propose a novel multi-objective adversarial attack concept, where the adversarial cube is optimised to bias the cloud detector in the cloud sensitive bands, while remaining visually camouflaged in the visible bands.
\item We investigate mitigation strategies against our adversarial attack and propose a simple robustification method.
\end{enumerate}
\vspace{-1em}
\paragraph{Potential positive and negative impacts}
Research into adversarial attacks can be misused for malicious activities. On the other hand, it is vital to highlight the potential of the attacks so as to motivate the development of mitigation strategies. Our contributions above are aimed towards the latter positive impact, particularly \#3 where a defence method is proposed. We are hopeful that our work will lead to adversarially robust DNNs for cloud detection.
\section{Related work}\label{sec:related_work}
Here, we review previous works on dealing with clouds in EO data and adversarial attacks in remote sensing.
\subsection{Cloud detection in EO data}\label{sec:related_hyperspectral}
EO satellites are normally equipped with multispectral or hyperspectral sensors, the main differences between the two being the spectral and spatial resolutions~\cite{madry2017electrooptical,transon2018survey}. Each ``capture'' by a multi/hyperspectral sensor produces a data cube, which consists of two spatial dimensions with as many channels as spectral bands in the sensor.
Since 66-70\% of the surface of the Earth is cloud-covered at any given time~\cite{jeppesen2019cloud,li2018onboard}, dealing with clouds in EO data is essential. Two major goals are:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item Cloud detection, where typically the location and extent of cloud coverage in a data cube are estimated;
\item Cloud removal~\cite{li2019cloud,meraner2020cloud,zi2021thin}, where the values in the spatial locations occluded by clouds are restored.
\end{itemize}
Since our work relates to the former category, the rest of this subsection is devoted to cloud detection.
Cloud detection assigns a \emph{cloud probability} or \emph{cloud mask} to each pixel of a data cube. The former indicates the likelihood of cloudiness at each pixel, while the latter indicates discrete levels of cloudiness at each pixel~\cite{sinergise-cloud-masks}. In the extreme case, a single binary label (\emph{cloudy} or \emph{not cloudy}) is assigned to the whole data cube~\cite{giuffrida2020cloudscout}; our work focusses on this special case of cloud detection.
Cloud detectors use either \emph{hand-crafted features} or \emph{deep features}. The latter category is of particular interest because the methods have shown state-of-the-art performance~\cite{lopezpuigdollers2021benchmarking,liu2021dcnet}. The deep features are extracted from data via a series of hierarchical layers in a DNN, where the highest-level features serve as optimal inputs (in terms of some loss function) to a classifier, enabling discrimination of subtle inter-class variations and high intra-class variations~\cite{li2019deep-ieee}. The majority of cloud detectors that use deep features are based on an extension or variation of Berkeley's fully convolutional network architecture~\cite{long2015fully, shelhamer2017fully}, which was designed for pixel-wise semantic segmentation and demands nontrivial computing resources. For example, \cite{li2019deep} is based on SegNet~\cite{badrinarayanan2017segnet}, while \cite{mohajerani2018cloud, jeppesen2019cloud, yang2019cdnet, lopezpuigdollers2021benchmarking, liu2021dcnet, zhang2021cnn} are based on U-Net~\cite{ronneberger2015u-net}, all of which are not suitable for on-board implementation.
\subsection{On-board processing for cloud detection}
On-board cloud detectors can be traced back to the thresholding-based Hyperion Cloud Cover algorithm~\cite{griffin2003cloud}, which operated on 6 of the hyperspectral bands of the EO-1 satellite. Li \etal's on-board cloud detector~\cite{li2018onboard} is an integrative application of the techniques of decision tree, spectral angle map~\cite{decarvalhojr2000spectral}, adaptive Markov random field~\cite{zhang2011adaptive} and dynamic stochastic resonance~\cite{chouhan2013enhancement}, but no experimental feasibility results were reported. Arguably the first DNN-based on-board cloud detector is CloudScout~\cite{giuffrida2020cloudscout}, which operates on the HyperScout-2 imager~\cite{esposito2019in-orbit} and Eyes of Things compute payload~\cite{deniz2017eyes}. As alluded to above, the DNN assigns a single binary label to the whole input data cube; details of the DNN will be provided in Sec.~\ref{sec:training}.
\subsection{Adversarial attacks in remote sensing}
Adversarial examples can be \emph{digital} or \emph{physical}. Digital attacks apply pixel-level perturbations to legitimate test images, subject to the constraints that these perturbations look like natural occurrences, \eg, electronic noise. Classic white-box attacks such as the FGSM~\cite{goodfellow2015explaining}
have been applied to attacking CNN-based classifiers for RGB images~\cite{xu2021assessing}, multispectral images~\cite{kalin2021automating} and synthetic aperture radar images~\cite{li2021adversarial}. A key observation is the generalisability of attacks from RGB to multispectral images~\cite{ortiz2018integrated, ortiz2018on}. Generative adversarial networks have been used to generate natural-looking hyperspectral adversarial examples~\cite{burnel2021generating}.
Physical attacks, as defined in Sec.~\ref{sec:intro}, need only access to the environment imaged by the victim, whereas digital attacks need access to the victim's test images (\eg, in a memory buffer); in this sense, physical attacks have weaker operational requirements and the associated impact is more concerning. For \emph{aerial/satellite RGB imagery}, physical attacks on a classifier~\cite{czaja2018adversarial}, aircraft detectors~\cite{den2020adversarial, lu2021scale} and a car detector~\cite{du2022physical} have been investigated but only \cite{du2022physical} provided real-world physical test results. For \emph{aerial/satellite multi/hyperspectral imagery}, our work is arguably the first to consider physical adversarial attacks.
\section{Threat model}\label{sec:threat_model}
We first define the threat model that serves as a basis for our proposed adversarial attack.
\begin{description}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item[Attacker's goals] The attacker aims to generate an adversarial cube that can bias a pretrained multispectral cloud detector to label non-cloudy space-based observation of scenes on the surface as cloudy. In addition, the attacker would like to visually camouflage the cube in a specific \textbf{region of attack (ROA)}; see Fig.~\ref{fig:rgb_scenes} for examples. Finally, the cube should be physically realisable.
\begin{figure}[ht]\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/threat_model/hills-roa.pdf}
\caption{Hills.}
\label{fig:hills}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/threat_model/desert-roa.pdf}
\caption{Desert.}
\label{fig:desert}
\end{subfigure}
\vspace{-0.5em}
\caption{Sample regions of attack.}
\label{fig:rgb_scenes}
\end{figure}
\item[Attacker's knowledge] The attacker has full information of the targeted DNN, including architecture and parameter values, \ie, white-box attack. This is a realistic assumption due to the publication of detailed information on the model and training data~\cite{giuffrida2020cloudscout}. Moreover, from a threat mitigation viewpoint, assuming the worst case is useful.
\item[Attacker's strategy] The attacker will optimise the adversarial cube on training data sampled from the same input domain as the cloud detector; the detailed method will be presented in Sec.~\ref{sec:attacking}. The cube will then be fabricated and placed in the environment, including the ROA, although Sec.~\ref{sec:limitations} will describe limitations on real-world evaluation of the proposed attack in our study.
\end{description}
\section{Building the cloud detector}\label{sec:training}
We followed Giuffrida \etal.~\cite{giuffrida2020cloudscout} to build a multispectral cloud detector suitable for satellite deployment.
\subsection{Dataset}\label{sec:cloud_detectors}
We employed the Cloud Mask Catalogue~\cite{francis_alistair_2020_4172871}, which contains cloud masks for 513 Sentinel-2A~\cite{2021sentinel-2} data cubes collected from a variety of geographical regions, each with 13 spectral bands and 20 m ground resolution (1024$\times$1024 pixels). Following Giuffrida \etal., who also used Sentinel-2A data, we applied the Level-1C processed version of the data, \ie, top-of-atmosphere reflectance data cubes. We further spatially divided the data into 2052 (sub)cubes of 512$\times$512 pixels each.
To train the cloud detector model, the data cubes were assigned a binary label (\textit{cloudy} vs.~\textit{not cloudy}) by thresholding the number of cloud pixels in the cloud masks. Following Giuffrida \etal., two thresholds were used: 30\%, leading to dataset version TH30, and 70\%, leading to dataset version TH70 (the rationale will be described later). Each dataset was further divided into training, validation, and testing sets. Table~\ref{tab:cm_dataset} in the supp.~material summarises the datasets.
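For concreteness, the labelling step can be sketched as follows in Python; the helper name and array layout are our own illustrative assumptions rather than part of~\cite{giuffrida2020cloudscout}.

\begin{verbatim}
import numpy as np

def label_cube(cloud_mask, threshold=0.7):
    """Binary cloudy / not-cloudy label for one data cube.

    cloud_mask: (H, W) boolean array, True = cloudy pixel.
    threshold:  cloudy-pixel fraction above which the cube
                is labelled cloudy (0.3 for TH30, 0.7 for TH70).
    """
    cloud_fraction = cloud_mask.mean()
    return int(cloud_fraction >= threshold)   # 1 = cloudy
\end{verbatim}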
\subsection{Model}
We employed the CNN of Giuffrida \etal., which contains four convolutional layers in the feature extraction layers and two fully connected layers in the decision layers (see Fig.~\ref{fig:cnn_model} in the supp.~material for more details). The model takes as input 3 of the 13 bands of Sentinel-2A: band 1 (coastal aerosol), band 2 (blue), and band 8 (NIR). These bands correspond to the cloud-sensitive wavelengths; see Fig.~\ref{fig:falsecolor} for a false colour image in these bands. Using only 3 bands also leads to a smaller CNN ($\le 5$ MB) which allows it to fit on the compute payload of CloudScout~\cite{giuffrida2020cloudscout}.
Calling the detector ``multispectral'' can be inaccurate given that only 3 bands are used. However, in Sec.~\ref{sec:mitigation}, we will investigate adversarial robustness by increasing the input bands and model parameters of Giuffrida \etal.'s model.
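A minimal PyTorch-style sketch of such a detector is given below. It follows the four-convolutional-layer plus two-fully-connected-layer structure described above, but the channel widths, kernel sizes and strides are placeholders; the actual values are those of~\cite{giuffrida2020cloudscout} (see the supp.~material).

\begin{verbatim}
import torch.nn as nn

class CloudDetector(nn.Module):
    """Four convolutional feature extraction layers followed by
    two fully connected decision layers; widths are placeholders."""
    def __init__(self, in_bands=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.decision = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid())   # cloudy confidence

    def forward(self, x):                     # x: (B, bands, H, W)
        return self.decision(self.features(x)).squeeze(-1)
\end{verbatim}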
\subsection{Training}
Following~\cite{giuffrida2020cloudscout}, a two stage training process was applied:
\begin{enumerate}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item Train on TH30 to allow the feature extraction layers to recognise ``cloud shapes''.
\item Then, train on TH70 to fine-tune the decision layers, while freezing the weights in the feature extraction layers.
\end{enumerate}
The two stage training is also to compensate for unbalanced distribution of training samples. Other specifications (\eg, learning rate and decay schedule, loss function) also follow that of Giuffrida \etal.; see~\cite{giuffrida2020cloudscout} for details.
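A sketch of the two-stage schedule is shown below, assuming a model with \texttt{features} and \texttt{decision} submodules as in the sketch above and data loaders yielding (cube, label) pairs; the optimiser, learning rates and single pass per stage are illustrative only.

\begin{verbatim}
import torch

def train_two_stage(model, loader_th30, loader_th70, device="cuda"):
    model = model.to(device)
    bce = torch.nn.BCELoss()

    # Stage 1: learn "cloud shapes" on TH30 (all weights trainable).
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for cubes, labels in loader_th30:
        opt.zero_grad()
        loss = bce(model(cubes.to(device)),
                   labels.float().to(device))
        loss.backward()
        opt.step()

    # Stage 2: fine-tune the decision layers on TH70 with the
    # feature extraction layers frozen.
    for p in model.features.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.decision.parameters(), lr=1e-4)
    for cubes, labels in loader_th70:
        opt.zero_grad()
        loss = bce(model(cubes.to(device)),
                   labels.float().to(device))
        loss.backward()
        opt.step()
\end{verbatim}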
Our trained model has a memory footprint of 4.93 MB (1,292,546 32-bit float weights), and testing accuracy and false positive rate of 95.07\% and 2.46\%, respectively.
\section{Attacking the cloud detector}\label{sec:attacking}
Here, we describe our approach to optimising adversarial cubes to attack multispectral cloud detectors.
\subsection{Adversarial cube design}\label{sec:material_selection}
Digitally, an adversarial cube $\mathbf{P}$ is the tensor
\begin{equation*}
\mathbf{P} =
\begin{pmatrix}
\mathbf{p}_{1,1} & \mathbf{p}_{1,2} & \cdots & \mathbf{p}_{1,N} \\
\mathbf{p}_{2,1} & \mathbf{p}_{2,2} & \cdots & \mathbf{p}_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{p}_{M,1} & \mathbf{p}_{M,2} & \cdots & \mathbf{p}_{M,N}
\end{pmatrix} \in [0,1]^{M \times N \times 13},
\end{equation*}
where $M$ and $N$ (in pixels) are the sizes of the spatial dimensions, and $\mathbf{p}_{i,j} \in [0,1]^{13}$ is the intensity at pixel $(i,j)$ corresponding to the 13 multispectral bands of Sentinel-2A.
Physically, $\mathbf{P}$ is to be realised as an array of exterior paint mixtures (see Fig.~\ref{fig:colour_swatches}) that exhibit the multispectral responses to generate the attack. The real-world size of each pixel of $\mathbf{P}$ depends on the ground resolution of the satellite-borne multispectral imager (more on this in Sec.~\ref{sec:limitations}).
\subsubsection{Material selection and measurement}
To determine the appropriate paint mixtures for $\mathbf{P}$, we first build a library of multispectral responses of exterior paints. Eighty exterior paint swatches (see Fig.~\ref{fig:colour_swatches_real}) were procured and scanned with a Field Spec Pro 3 spectrometer~\cite{asd2008fieldspec3} to measure their reflectance (Fig.~\ref{fig:paint_reflectance}) under uniform illumination. To account for solar illumination when viewed from orbit, the spectral power distribution of sunlight (specifically, the AM1.5 Global Solar Spectrum~\cite{astm2003specification}; Fig.~\ref{fig:solar_spectrum}) was factored into our paint measurements via element-wise multiplication to produce the apparent reflectance; Fig.~\ref{fig:paint_apparent_reflectance}. Lastly, we converted the continuous spectral range of the apparent reflectance of a colour swatch to the 13 Sentinel-2A bands by averaging over the bandwidth of each band; Fig.~\ref{fig:paint_13bands}. The overall result is the matrix
\begin{align}
\mathbf{C} = \left[ \begin{matrix} \mathbf{c}_1, \mathbf{c}_2, \dots, \mathbf{c}_{80} \end{matrix} \right] \in [0,1]^{13 \times 80}
\end{align}
called the \emph{spectral index}, where $\mathbf{c}_q \in [0,1]^{13}$ contains the reflectance of the $q$-th colour swatch over the 13 bands.
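The construction of $\mathbf{C}$ can be sketched as follows; the array layout, wavelength grid and band-range inputs are our assumptions about how the measurements are stored, not a prescribed format.

\begin{verbatim}
import numpy as np

def spectral_index(reflectance, solar, wavelengths, band_ranges):
    """Build C (13 x 80) from paint measurements.

    reflectance: (80, W) swatch reflectance on a wavelength grid.
    solar:       (W,) AM1.5 global solar spectrum, same grid.
    wavelengths: (W,) grid in nm.
    band_ranges: 13 (low, high) wavelength pairs in nm.
    """
    apparent = reflectance * solar            # apparent reflectance
    C = np.zeros((len(band_ranges), reflectance.shape[0]))
    for b, (lo, hi) in enumerate(band_ranges):
        sel = (wavelengths >= lo) & (wavelengths <= hi)
        C[b] = apparent[:, sel].mean(axis=1)  # average per band
    return C
\end{verbatim}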
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\columnwidth]{./figures/methods/colour_swatches_diagram.pdf}
\vspace{-2.0em}
\caption{The adversarial cube (digital size $4 \times 5$ pixels in the example) is to be physically realised as a mixture of exterior paint colours that generate the optimised multispectral responses.}
\label{fig:colour_swatches}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\columnwidth]{./figures/methods/colour_swatches.pdf}
\vspace{-1.5em}
\caption{A subset of our colour swatches (paint samples).}
\label{fig:colour_swatches_real}
\end{figure}
\begin{figure*}[ht]\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{./figures/methods/ybr_reflectance.pdf}
\caption{Reflectance of a colour swatch.}
\label{fig:paint_reflectance}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{./figures/methods/solar_spectrum.pdf}
\caption{AM1.5 Global Solar Spectrum.}
\label{fig:solar_spectrum}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{./figures/methods/ybr_apparent_reflectance.pdf}
\caption{Apparent reflectance of (a).}
\label{fig:paint_apparent_reflectance}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{./figures/methods/ybr_13bands.pdf}
\caption{13 Sentinel-2 bands of (c).}
\label{fig:paint_13bands}
\end{subfigure}
\vspace{-0.5em}
\caption{Process of obtaining the 13 Sentinel-2 spectral bands of a colour swatch.}
\label{fig:spectrometer}
\end{figure*}
\subsubsection{Adversarial cube parametrisation}
We obtain $\mathbf{p}_{i,j}$ as a linear combination of the spectral index
\begin{align}\label{eq:convex}
\mathbf{p}_{i,j} = \mathbf{C}\cdot \sigma(\mathbf{a}_{i,j}),
\end{align}
where $\mathbf{a}_{i,j}$ is the real vector
\begin{align}
\mathbf{a}_{i,j} = \left[ \begin{matrix} a_{i,j,1} & a_{i,j,2} & \dots & a_{i,j,80} \end{matrix} \right]^T \in \mathbb{R}^{80},
\end{align}
and $\sigma$ is the softmax function
\begin{align}
\sigma(\mathbf{a}_{i,j}) = \frac{1}{\sum^{80}_{d=1} e^{a_{i,j,d}}} \left[ \begin{matrix} e^{a_{i,j,1}} & \dots & e^{a_{i,j,80}} \end{matrix} \right]^T.
\end{align}
Effectively, $\mathbf{p}_{i,j}$~\eqref{eq:convex} is a convex combination of the columns of $\mathbf{C}$.
Defining each $\mathbf{p}_{i,j}$ as a linear combination of $\mathbf{C}$ supports the physical realisation of each $\mathbf{p}_{i,j}$ through proportional mixing of the existing paints, as in colour printing~\cite{sharma2017digital}. Restricting the combination to be convex, thereby placing each $\mathbf{p}_{i,j}$ in the convex hull of $\mathbf{C}$, contributes to the sparsity of the coefficients~\cite{caratheodory-theorem}. In Sec.~\ref{sec:opimisation}, we will introduce additional constraints to further enhance physical realisability.
To enable the optimal paint mixtures to be estimated, we collect the coefficients for all $(i,j)$ into the set
\begin{align}
\mathcal{A} = \{ \mathbf{a}_{i,j} \}^{j = 1,\dots,N}_{i=1,\dots,M},
\end{align}
and parametrise the adversarial cube as
\begin{equation*}
\mathbf{P}(\mathcal{A}) =
\begin{pmatrix}
\mathbf{C}\sigma(\mathbf{a}_{1,1}) & \mathbf{C}\sigma(\mathbf{a}_{1,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{1,N}) \\
\mathbf{C}\sigma(\mathbf{a}_{2,1}) & \mathbf{C}\sigma(\mathbf{a}_{2,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{2,N}) \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{C}\sigma(\mathbf{a}_{M,1}) & \mathbf{C}\sigma(\mathbf{a}_{M,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{M,N})
\end{pmatrix},
\end{equation*}
where $\mathbf{p}_{i,j}(\mathcal{A})$ is pixel $(i,j)$ of $\mathbf{P}(\mathcal{A})$. Optimising a cube thus reduces to estimating $\mathcal{A}$.
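This parametrisation translates directly into code; a minimal PyTorch sketch, with $\mathbf{C}$ supplied as a $13 \times 80$ tensor, is:

\begin{verbatim}
import torch

class AdversarialCube(torch.nn.Module):
    """P(A): every pixel is a convex combination (softmax weights)
    of the 80 paint spectra in the spectral index C (13 x 80)."""
    def __init__(self, C, M=100, N=100):
        super().__init__()
        self.register_buffer("C", C)              # (13, 80)
        self.A = torch.nn.Parameter(torch.zeros(M, N, 80))

    def forward(self):
        w = torch.softmax(self.A, dim=-1)         # sigma(a_ij)
        # P[i, j] = C @ sigma(a_ij)  ->  (M, N, 13)
        return torch.einsum("bq,mnq->mnb", self.C, w)
\end{verbatim}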
\subsection{Data collection for cube optimisation}\label{sec:data_collection}
Based on the attacker's goals (Sec.~\ref{sec:threat_model}), we collected Sentinel-2A Level-1C data products~\cite{2021copernicus} over the globe with a distribution of surface types that resembles the Hollstein dataset~\cite{hollstein2016ready-to-use}. The downloaded data cubes were preprocessed following~\cite{francis_alistair_2020_4172871}, including spatial resampling to achieve a ground resolution of 20~m and size $512 \times 512 \times 13$. Sen2Cor~\cite{main-knorn2017sen2cor} was applied to produce probabilistic cloud masks, and a threshold of 0.35 was applied on the probabilities to decide \textit{cloudy} and \textit{not cloudy} pixels. The binary cloud masks were further thresholded with 70\% cloudiness (Sec.~\ref{sec:cloud_detectors}) to yield a single binary label for each data cube. The data cubes were then evaluated with the cloud detector trained in Sec.~\ref{sec:training}. Data cubes labelled \emph{not cloudy} by the detector were separated into training and testing sets
\begin{align}
\mathcal{D} = \{ \mathbf{D}_k \}^{2000}_{k=1}, \;\;\;\; \mathcal{E} = \{ \mathbf{E}_\ell \}^{400}_{\ell=1},
\end{align}
for adversarial cube optimisation and evaluation, respectively. One data cube $\mathbf{T} \in \mathcal{D}$ is chosen as the ROA (Sec.~\ref{sec:threat_model}).
\begin{figure*}[ht]\centering
\includegraphics[width=0.95\linewidth]{./figures/methods/pipeline.pdf}
\vspace{-0.5em}
\caption{Optimisation process for generating adversarial cubes.}
\label{fig:pipeline}
\end{figure*}
\subsection{Optimising adversarial cubes}\label{sec:patch}
We adapted Brown \etal's~\cite{brown2017adversarial} method, originally developed for optimising adversarial patches (visible domain). Fig.~\ref{fig:pipeline} summarises our pipeline for adversarial cube optimisation, with details provided in the rest of this subsection.
\vspace{-1em}
\paragraph{Subcubes}
First, we introduce the subcube notation. Let $b \subseteq \{1,2,\dots,13\}$ index a subset of the Sentinel-2A bands. Using $b$ in the superscript of a data cube, e.g., $\mathbf{P}^{b}$, implies extracting the subcube of $\mathbf{P}$ with the bands indexed by $b$. Of particular interest are the following two band subsets:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item $c = \{1, 2, 8\}$, \ie, the cloud sensitive bands used in~\cite{giuffrida2020cloudscout}.
\item $v = \{2, 3, 4\}$, \ie, the visible bands.
\end{itemize}
\subsubsection{Cube embedding and augmentations}\label{sec:augmentations}
Given the current $\mathcal{A}$, adversarial cube $\mathbf{P}(\mathcal{A})$ is embedded into a training data cube $\mathbf{D}_k$ through several geometric and spectral intensity augmentations that simulate the appearance of the adversarial cube when captured in the field by a satellite. The geometric augmentations include random rotations and positioning to simulate variations in placement of $\mathbf{P}(\mathcal{A})$ in the scene. The spectral intensity augmentations include random additive noise, scaling and corruption to simulate perturbation by ambient lighting.
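A simplified sketch of the embedding-with-augmentation step is shown below; the specific augmentation ranges, noise model and 90-degree rotations are illustrative stand-ins for the augmentations described above.

\begin{verbatim}
import random
import torch

def embed_cube(data_cube, cube, noise_std=0.02,
               scale_range=(0.9, 1.1)):
    """Embed the cube (M, N, 13) into a scene (H, W, 13) with
    random placement/rotation and intensity perturbations."""
    k = random.randint(0, 3)                       # geometric
    cube = torch.rot90(cube, k, dims=(0, 1))
    M, N = cube.shape[:2]
    H, W = data_cube.shape[:2]
    i = random.randint(0, H - M)
    j = random.randint(0, W - N)

    scale = random.uniform(*scale_range)           # spectral
    cube = cube * scale + noise_std * torch.randn_like(cube)
    cube = cube.clamp(0, 1)

    out = data_cube.clone()
    out[i:i + M, j:j + N, :] = cube
    return out
\end{verbatim}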
\subsubsection{Loss function and optimisation}\label{sec:opimisation}
Define $\mathbf{D}_k(\mathcal{A})$ as the training data cube $\mathbf{D}_k$ embedded with $\mathbf{P}(\mathcal{A})$ (with the augmentations described in Sec.~\ref{sec:augmentations}). The data cube is forward propagated through the cloud detector $f$ to estimate the \emph{confidence}
\begin{align}
\hat{y}_k = f(\mathbf{D}^c_k(\mathcal{A}))
\end{align}
of $\mathbf{D}_k(\mathcal{A})$ being in the \emph{cloudy} class. Note that the cloud detector considers only the subcube $\mathbf{D}^c_k(\mathcal{A})$ corresponding to the cloud sensitive bands. Since we aim to bias the detector to assign high $\hat{y}_k$ to $\mathbf{D}_k(\mathcal{A})$, we construct the loss
\begin{align}\label{eq:loss}
\Psi(\mathcal{A},\mathcal{D}) = \sum_k -\log(f(\mathbf{D}^c_k(\mathcal{A}))).
\end{align}
In addition to constraining the spectral intensities in $\mathbf{P}(\mathcal{A})$ to be in the convex hull of $\mathbf{C}$, we also introduce the multispectral non-printability score (NPS)
\begin{align}\label{eq:nps_loss}
\Phi(\mathcal{A}, \mathbf{C}) = \frac{1}{M N} \sum_{i,j} \left( \min_{\mathbf{c} \in \mathbf{C}} \left\| \mathbf{p}_{i,j}(\mathcal{A}) - \mathbf{c}\right\|_2 \right).
\end{align}
Minimising $\Phi$ encourages each $\mathbf{p}_{i,j}(\mathcal{A})$ to be close to (one of) the measurements in $\mathbf{C}$, which sparsifies the coefficients $\sigma(\mathbf{a}_{i,j})$ and helps with the physical realisability of $\mathbf{P}(\mathcal{A})$. The multispectral NPS is an extension of the original NPS for optimising (visible domain) adversarial patches~\cite{sharif2016accessorize}.
To produce an adversarial cube that is ``cloaked'' in the visible domain in the ROA defined by $\mathbf{T}$, we devise the term
\begin{align}\label{eq:cloaking_loss}
\Omega(\mathcal{A}, \mathbf{T}) = \left\| \mathbf{P}^{v}(\mathcal{A}) - \mathbf{T}^v_{M \times N} \right\|_2,
\end{align}
where $\mathbf{T}^v_{M \times N}$ is a randomly cropped subcube of spatial height $M$ and width $N$ in the visible bands $\mathbf{T}^v$ of $\mathbf{T}$.
The overall loss is thus
\begin{equation}
L(\mathcal{A}) = \underbrace{\Psi(\mathcal{A},\mathcal{D})}_{\textrm{cloud sensitive}} + \alpha\cdot \underbrace{\Phi(\mathcal{A}, \mathbf{C})}_{\textrm{multispectral}} + \beta \cdot \underbrace{\Omega(\mathcal{A}, \mathbf{T})}_{\textrm{visible domain}}, \label{eq:overall_loss}
\end{equation}
where weights $\alpha, \beta \ge 0$ control the relative importance of the terms. Notice that the loss incorporates multiple objectives across different parts of the spectrum.
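A compact PyTorch-style sketch of the overall loss~\eqref{eq:overall_loss} is given below. Here \texttt{embed\_fn} stands for the embedding-with-augmentation routine sketched earlier, \texttt{cube\_model} returns $\mathbf{P}(\mathcal{A})$ as an $M \times N \times 13$ tensor, and the 0-based band indices are our convention.

\begin{verbatim}
import torch

def attack_loss(detector, cube_model, batch, C, T_v_patch,
                embed_fn, alpha=5.0, beta=0.05,
                c=(0, 1, 7), v=(1, 2, 3)):
    """Overall loss: Psi (adversarial) + alpha*Phi (NPS)
    + beta*Omega (visible-domain cloaking)."""
    P = cube_model()                               # (M, N, 13)

    # Psi: push embedded cubes towards the cloudy class.
    psi = 0.0
    for D in batch:                                # D: (H, W, 13)
        Dk = embed_fn(D, P)
        x = Dk[..., list(c)].permute(2, 0, 1)[None]
        psi = psi - torch.log(detector(x) + 1e-8).sum()

    # Phi: multispectral non-printability score.
    dists = torch.cdist(P.reshape(-1, 13), C.t())  # (MN, 80)
    phi = dists.min(dim=1).values.mean()

    # Omega: cloaking against an M x N visible-band crop of T.
    omega = torch.norm(P[..., list(v)] - T_v_patch)

    return psi + alpha * phi + beta * omega
\end{verbatim}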
\vspace{-1em}
\paragraph{Optimisation}
Minimising $L$ with respect to $\mathcal{A}$ is achieved using the Adam~\cite{kingma2014adam} stochastic optimisation algorithm. Note that the pre-trained cloud detector $f$ is not updated.
\vspace{-1em}
\paragraph{Parameter settings}
See Sec.~\ref{sec:results}.
\subsection{Limitations on real-world testing}\label{sec:limitations}
While our adversarial cube is optimised to be physically realisable, two major constraints prevent physical testing:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item Lack of precise knowledge of and control over the operation of a real satellite makes it difficult to perform coordinated EO data capture with the adversarial cube.
\item Cube dimensions of about 100$\times$100 pixels are required for effective attacks, which translates to 2 km$\times$2 km = 4 km$^2$ ground size (based on the ground resolution of the data; see Sec.~\ref{sec:data_collection}). This prevents full scale fabrication on an academic budget. However, the size of the cube is well within the realm of possibility, \eg, solar farms and airports can be much larger than $4$ km$^2$~\cite{ong2013land}.
\end{itemize}
We thus focus on evaluating our attack in the digital domain, with real-world testing left as future work.
\section{Measuring effectiveness of attacks}\label{sec:metrics}
Let $\mathbf{P}^\ast = \mathbf{P}(\mathcal{A}^\ast)$ be the adversarial cube optimised by our method (Sec.~\ref{sec:attacking}). Recall from Sec.~\ref{sec:data_collection} that both datasets $\mathcal{D}$ and $\mathcal{E}$ contain \emph{non-cloudy} data cubes. We measure the effectiveness of $\mathbf{P}^\ast$ on the training set $\mathcal{D}$ via two metrics:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item Detection accuracy of the pretrained cloud detector $f$ (Sec.~\ref{sec:training}) on $\mathcal{D}$ embedded with $\mathbf{P}^\ast$, i.e.,
\begin{equation}\label{eq:accuracy}
\text{Accuracy}({\mathcal{D}}) \triangleq
\frac{1}{|\mathcal{D}|}
\sum^{|\mathcal{D}|}_{k=1} \mathbb{I}(f(\mathbf{D}^c_k(\mathcal{A}^\ast)) \le 0.5),
\end{equation}
where the lower the accuracy, the less often $f$ predicted the correct class label (\emph{non-cloudy}, based on confidence threshold $0.5$), hence the more effective the $\mathbf{P}^\ast$.
\item Average confidence of the pretrained cloud detector $f$ (Sec.~\ref{sec:training}) on $\mathcal{D}$ embedded with $\mathbf{P}^\ast$, i.e.,
\begin{equation}\label{eq:average_probability}
\text{Cloudy}({\mathcal{D}}) \triangleq
\frac{1}{|\mathcal{D}|}
\sum^{|\mathcal{D}|}_{k=1} f(\mathbf{D}^c_k(\mathcal{A}^\ast)).
\end{equation}
The higher the average confidence, the more effective the $\mathbf{P}^\ast$.
\end{itemize}
To obtain the effectiveness measures on the testing set $\mathcal{E}$, simply swap $\mathcal{D}$ in the above with $\mathcal{E}$.
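Both metrics can be computed with a few lines of code; the sketch below reuses the hypothetical \texttt{embed\_fn} and \texttt{cube\_model} from the previous section and assumes the detector returns the cloudy confidence.

\begin{verbatim}
import torch

@torch.no_grad()
def attack_metrics(detector, cube_model, dataset, embed_fn,
                   c=(0, 1, 7)):
    """Accuracy and average cloudy confidence over a set of
    originally non-cloudy data cubes embedded with P*."""
    P = cube_model()
    correct, confidence = 0.0, 0.0
    for D in dataset:
        x = embed_fn(D, P)[..., list(c)].permute(2, 0, 1)[None]
        y_hat = detector(x).item()        # cloudy confidence
        correct += float(y_hat <= 0.5)    # still labelled non-cloudy
        confidence += y_hat
    n = len(dataset)
    return correct / n, confidence / n
\end{verbatim}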
\section{Results}\label{sec:results}
We optimised adversarial cubes of size 100$\times$100 pixels on $\mathcal{D}$ (512$\times$512 pixel dimension) under different loss configurations and evaluated them digitally (see Sec.~\ref{sec:limitations} on obstacles to real-world testing). Then, we investigated different cube designs and mitigation strategies for our attack.
\subsection{Ablation tests}\label{sec:ablation}
Based on the data collected, we optimised adversarial cubes under different combinations of loss terms:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item $\Psi$: Adversarial biasing in the cloud-sensitive bands~\eqref{eq:loss}.
\item $\Phi$: Multispectral NPS~\eqref{eq:nps_loss}.
\item $\Omega$-Hills: Cloaking~\eqref{eq:cloaking_loss} with $\mathbf{T}$ as Hills (Fig.~\ref{fig:hills}).
\item $\Omega$-Desert: Cloaking~\eqref{eq:cloaking_loss} with $\mathbf{T}$ as Desert (Fig.~\ref{fig:desert}).
\end{itemize}
The weights in~\eqref{eq:overall_loss} were empirically determined to be $\alpha = 5.0$ and $\beta = 0.05$.
\vspace{-1em}
\paragraph{Convex hull and NPS}
Fig.~\ref{fig:cubes_hull} shows the optimised cubes $\mathbf{P}^\ast$ and their individual spectral intensities $\mathbf{p}^\ast_{i,j}$ in the cloud sensitive bands (false colour) and visible domain. Note that without the convex hull constraints, the intensities (green points) are scattered quite uniformly, which complicates physical realisability of the paint mixtures. The convex hull constraints predictably limit the mixtures to be in the convex hull of $\mathbf{C}$. Carath{\'e}odory's Theorem~\cite{caratheodory-theorem} ensures that each $\mathbf{p}^\ast_{i,j}$ can be obtained by mixing at most 13 exterior paints. In addition, the multispectral NPS term encourages the mixtures to cluster closely around the columns of $\mathbf{C}$ (red points), \ie, close to an existing exterior paint colour.
\vspace{-1em}
\paragraph{Visual camouflage}
Fig.~\ref{fig:cubes_loss_images} illustrates optimised cubes $\mathbf{P}^\ast$ embedded in the ROA Hills and Desert, with and without including the cloaking term~\eqref{eq:cloaking_loss} in the loss function. Evidently the cubes optimised with $\Omega$ are less perceptible.
\vspace{-1em}
\paragraph{Effectiveness of attacks}
Table~\ref{tab:result_loss} shows quantitative results on attack effectiveness (in terms of the metrics in Sec.~\ref{sec:metrics}) on the training $\mathcal{D}$ and testing $\mathcal{E}$ sets---again, recall that these datasets contain only \emph{non-cloudy} data cubes. The results show that the optimised cubes are able to strongly bias the pretrained cloud detector, by lowering the accuracy by at least $63\%$ (1.00 to 0.37) and increasing the cloud confidence by more than $1000\%$ (0.05 to 0.61). The figures also indicate the compromise an attacker would need to make between the effectiveness of the attack, physical realisability and visual imperceptibility of the cube.
\begin{table}[ht]
\setlength\tabcolsep{1pt}
\centering
\begin{tabular}{p{4.0cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}}
\rowcolor{black} & \multicolumn{2}{l |}{\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l}{\textcolor{white}{\textbf{Cloudy}}} \\
\hline
\textbf{Loss functions} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\
\hline
- (no adv.~cubes) & 1.00 & 1.00 & 0.05 & 0.05 \\
$\Psi$ (no convex hull constr.) & 0.04 & 0.03 & 0.95 & 0.95 \\
$\Psi$ & 0.13 & 0.12 & 0.81 & 0.83 \\
$\Psi + \alpha\Phi$ & 0.22 & 0.19 & 0.73 & 0.75 \\
$\Psi + \beta\Omega$-Hills & 0.17 & 0.14 & 0.77 & 0.80 \\
$\Psi + \beta\Omega$-Desert & 0.23 & 0.25 & 0.72 & 0.73 \\
$\Psi + \alpha\Phi + \beta\Omega$-Hills & 0.25 & 0.28 & 0.71 & 0.70 \\
$\Psi + \alpha\Phi + \beta\Omega$-Desert & 0.37 & 0.37 & 0.61 & 0.61 \\
\end{tabular}
\vspace{-0.5em}
\caption{Effectiveness of 100$\times$100 adversarial cubes optimised under different loss configurations (Sec.~\ref{sec:ablation}). Lower accuracy = more effective attack. Higher cloud confidence = more effective attack.}
\label{tab:result_loss}
\end{table}
\begin{figure*}[ht]\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{./figures/results/hull/log_nohull.pdf}
\caption{$L = \Psi$ (without convex hull constraints).}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{./figures/results/hull/log_hull.pdf}
\caption{$L = \Psi$.}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{./figures/results/hull/log+nps_hull.pdf}
\caption{$L = \Psi + \alpha \cdot \Phi$.}
\end{subfigure}
\vspace{-0.5em}
\caption{Effects of convex hull constraints and multispectral NPS on optimised cube $\mathbf{P}^\ast$. The top row shows the cube and individual pixels $\mathbf{p}^\ast_{i,j}$ (green points) in the visible bands $v$, while the bottom row shows the equivalent values in the cloud sensitive bands $c$ (in false colour). In the 3-dimensional plots, the red points indicate the columns of the spectral index $\mathbf{C}$ and black lines its convex hull.}
\label{fig:cubes_hull}
\end{figure*}
\begin{figure}[ht]\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/loss/not_camo_hills.pdf}
\caption{$L = \Psi + \alpha \Phi$.}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/loss/camo_hills.pdf}
\caption{$L = \Psi + \alpha \Phi + \beta \Omega$-$\textrm{Hills}$.}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/loss/not_camo_desert.pdf}
\caption{$L = \Psi + \alpha \Phi$.}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/loss/camo_desert.pdf}
\caption{$L = \Psi + \alpha \Phi + \beta \Omega$-$\textrm{Desert}$.}
\end{subfigure}
\vspace{-0.5em}
\caption{Optimised cubes $\mathbf{P}^\ast$ shown in the visible domain $v$ with and without the cloaking term~\eqref{eq:cloaking_loss}.}
\label{fig:cubes_loss_images}
\end{figure}
\subsection{Different cube configurations}\label{sec:multcube}
Can the physical footprint of the adversarial cube be reduced to facilitate real-world testing? To answer this question, we resize $\mathbf{P}$ to 50$\times$50 pixels and optimise a number of them (4 or 6) instead. We also tested random configurations with low and high proximity amongst the cubes. The training pipeline for the multi-cube setting remains largely the same. Fig.~\ref{fig:cubes_config_images} shows (in visible domain) the optimised resized cubes embedded in a testing data cube.
Quantitative results on the effectiveness of the attacks are given in Table~\ref{tab:result_cubeconfig}. Unfortunately, the results show a significant drop in attack effectiveness when compared against the 100$\times$100 cube on all loss configurations. This suggests that the size and spatial continuity of the adversarial cube are important factors to the attack.
\begin{table}[ht]
\setlength\tabcolsep{1pt}
\centering
\begin{tabular}{p{0.7cm} | p{1.50cm} | p{1.80cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}}
\rowcolor{black} \multicolumn{3}{l |}{\textcolor{white}{\textbf{Cube configurations}}} & \multicolumn{2}{l |}{\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l}{\textcolor{white}{\textbf{Cloudy}}} \\
\hline
\textbf{\#} & \textbf{Size} & \textbf{Proximity} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\
\hline
\multicolumn{3}{l |}{- (no adv.~cubes)} & 1.00 & 1.00 & 0.05 & 0.05 \\
4 & 50$\times$50 & Low & 0.87 & 0.87 & 0.26 & 0.27 \\ %
6 & 50$\times$50 & Low & 0.71 & 0.72 & 0.33 & 0.33 \\
4 & 50$\times$50 & High & 0.63 & 0.62 & 0.42 & 0.44 \\
6 & 50$\times$50 & High & 0.63 & 0.63 & 0.40 & 0.41 \\
\end{tabular}
\vspace{-0.5em}
\caption{Effectiveness of 50$\times$50 adversarial cubes under different cube configurations (Sec.~\ref{sec:multcube}) optimised with loss $L = \Psi + \alpha\Phi$. Lower accuracy = more effective attack. Higher cloud confidence = more effective attack. Compare with single 100$\times$100 adversarial cube results in Table~\ref{tab:result_loss}.}
\label{tab:result_cubeconfig}
\end{table}
\begin{figure}[ht]\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/config/four_random.pdf}
\caption{Four 50$\times$50 cubes (low prox).}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/config/six_random.pdf}
\caption{Six 50$\times$50 cubes (low prox).}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/config/four_fixed.pdf}
\caption{Four 50$\times$50 cubes (high prox).}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{./figures/results/config/six_fixed.pdf}
\caption{Six 50$\times$50 cubes (high prox).}
\end{subfigure}
\vspace{-0.5em}
\caption{Optimised cubes $\mathbf{P}^\ast$ shown in the visible domain $v$ of different cube configurations.}
\label{fig:cubes_config_images}
\end{figure}
\subsection{Mitigation strategies}\label{sec:mitigation}
We investigated several mitigation strategies against our adversarial attack:
\begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt]
\item 13 bands: Increasing the number of input bands of the cloud detector from 3 to 13 (all Sentinel-2A bands);
\item $\sqrt{2}$: Doubling the model size of the cloud detector by increasing the number of filters/kernels in the convolutional layers and activations in the fully connected layers by $\sqrt{2}$;
\item $2\times$ CONV: Doubling the model size of the cloud detector by adding two additional convolutional layers.
\end{itemize}
Table~\ref{tab:result_mitigations} shows that using a ``larger'' detector (in terms of the number of input channels and layers) yielded slightly worse cloud detection accuracy. However, increasing the number of input bands significantly reduced our attack effectiveness, possibly due to the increased difficulty of biasing all 13 channels simultaneously. This argues for using greater satellite-borne compute payloads than that of~\cite{giuffrida2020cloudscout}.
\begin{table}[ht]
\setlength\tabcolsep{1pt}
\centering
\begin{tabular}{p{1.5cm} | p{2.5cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}}
\rowcolor{black} & & \multicolumn{2}{l |} {\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l} {\textcolor{white}{\textbf{Cloudy}}} \\
\hline
\textbf{Detectors} & \textbf{Loss functions} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\
\hline
13 bands & - (no adv.~cubes) & 1.00 & 1.00 & 0.06 & 0.06 \\
& $\Psi + \alpha\Phi$ & 0.94 & 0.96 & 0.15 & 0.14 \\
\hline
$\sqrt{2}$ & - (no adv.~cubes) & 1.00 & 1.00 & 0.08 & 0.08 \\
& $\Psi + \alpha\Phi$ & 0.36 & 0.38 & 0.62 & 0.60 \\
\hline
$2\times$CONV & - (no adv.~cubes) & 1.00 & 1.00 & 0.08 & 0.08 \\
& $\Psi + \alpha\Phi$ & 0.26 & 0.25 & 0.74 & 0.73 \\
\end{tabular}
\vspace{-0.75em}
\caption{Effectiveness of 100$\times$100 adversarial cubes optimised for different cloud detector designs (Sec.~\ref{sec:mitigation}). Lower accuracy = more effective attack. Higher cloud confidence = more effective attack. Compare with single 100$\times$100 adversarial cube results in Table~\ref{tab:result_loss}.}
\label{tab:result_mitigations}
\end{table}
\section{Conclusions and limitations}\label{sec:conclusion}
We proposed a physical adversarial attack against a satellite-borne multispectral cloud detector. Our attack is based on optimising exterior paint mixtures that exhibit the required spectral signatures to bias the cloud detector. Evaluation in the digital domain illustrates the realistic threat of the attack, though the simple mitigation strategy of using all input multispectral bands seems to offer good protection.
As detailed in Sec.~\ref{sec:limitations}, our work is limited to digital evaluation due to several obstacles. Real-world testing of our attack and defence strategies will be left as future work.
\vfill
\section{Usage of existing assets and code release}
The results in this paper were partly produced from ESA remote sensing data, as accessed through the Copernicus Open Access Hub~\cite{2021copernicus}. Source code and/or data used in our paper will be released subject to securing permission.
\vfill
\section*{Acknowledgements}\label{sec:acknowledgement}
Tat-Jun Chin is SmartSat CRC Professorial Chair of Sentient Satellites.
{\small
\bibliographystyle{ieee_fullname}
\section{Limitations and Conclusion}
\label{sec:conclusion}
A major limitation of NeRF-SR{} is that it does not enjoy the nice arbitrary-scale property. It also introduces extra computational cost, although it consumes no more time than training an HR NeRF.
In conclusion, we presented NeRF-SR{}, the first pipeline for HR novel view synthesis with mostly low-resolution inputs, which achieves photorealistic renderings without any external data. Specifically, we exploit the 3D consistency in NeRF from two perspectives: a supersampling strategy that finds corresponding sub-pixel points across multiple views, and a depth-guided refinement that hallucinates details from relevant patches on an HR reference image. Finally, region-sensitive supersampling and generalized NeRF super-resolution may be explored in future work.
\section{Related Work}
\label{sec:related-work}
\noindent\textbf{Novel View Synthesis.}
Novel view synthesis can be categorized into image-based, learning-based, and geometry-based methods. Image-based methods warp and blend relevant patches in the observation frames to generate novel views based on measurements of quality \cite{gortler1996lumigraph, levoy1996light}. Learning-based methods predict blending weights and view-dependent effects via neural networks and/or other hand-crafted heuristics\cite{hedman2018deep, choi2019extreme, riegler2020free, thies2020image}. Deep learning has also facilitated methods that can predict novel views from a single image, but they often require a large amount of data for training\cite{tucker2020single, wiles2020synsin, shih20203d, niklaus20193d, rockwell2021pixelsynth}. Different from image-based and learning-based methods, geometry-based methods first reconstruct a 3D model \cite{schonberger2016structure} and render images from target poses. For example, Aliev \etal\cite{aliev2020neural} assigned multi-resolution features to point clouds and then performed neural rendering, Thies \etal\cite{thies2019deferred} stored neural textures on 3D meshes and then render the novel view with traditional graphics pipeline. Other geometry representations include multi-planes images \cite{zhou2018stereo, mildenhall2019local, flynn2019deepview, srinivasan2019pushing, li2020crowdsampling, li2021mine}, voxel grids \cite{henzler2020learning, penner2017soft, kalantari2016learning}, depth \cite{wiles2020synsin, flynn2019deepview, riegler2020free, riegler2021stable} and layered depth \cite{shih20203d, tulsiani2018layer}. These methods, although producing relatively high-quality results, the discrete representations require abundant data and memory and the rendered resolutions are also limited by the accuracy of reconstructed geometry.
\vspace{2mm}
\noindent\textbf{Neural Radiance Fields.} Implicit neural representation has demonstrated its effectiveness to represent shapes and scenes, which usually leverages multi-layer perceptrons (MLPs) to encode signed distance fields \cite{park2019deepsdf, duan2020curriculum}, occupancy \cite{mescheder2019occupancy, peng2020convolutional, chen2019learning} or volume density \cite{mildenhall2020nerf, niemeyer2020differentiable}. Together with differentiable rendering \cite{kato2018neural, liu2019soft}, these methods can reconstruct both geometry and appearance of objects and scenes \cite{sitzmann2019scene, saito2019pifu, niemeyer2020differentiable, sitzmann2019deepvoxels, liu2020neural}. Among them, Neural Radiance Fields (NeRF) \cite{mildenhall2020nerf} achieved remarkable results for synthesizing novel views of a static scene given a set of posed input images. There are a growing number of NeRF extensions emerged, \eg reconstruction without input camera poses\cite{wang2021nerf, lin2021barf}, modelling non-rigid scenes \cite{pumarola2021d, park2021nerfies, park2021hypernerf, martin2021nerf}, unbounded scenes\cite{zhang2020nerf++} and object categories \cite{yu2021pixelnerf, trevithick2021grf, jang2021codenerf}. Relevant to our work, Mip-NeRF~\cite{barron2021mip} also considers the issue of \textit{resolution} in NeRF. They showed that NeRFs rendered at various resolutions would introduce aliasing artifacts and resolved it by proposing an integrated positional encoding that featurize conical frustums instead of single points. Yet, Mip-NeRF only considers rendering with downsampled resolutions. To our knowledge, no prior work studies how to increase the resolution of NeRF.
\vspace{2mm}
\noindent\textbf{Image Super-Resolution.}
Our work is also related to image super-resolution. Classical approaches in single-image super-resolution (SISR) utilize priors such as image statistics \cite{kim2010single, zontak2011internal} or gradients \cite{sun2008image}. CNN-based methods aim to learn the relationship between HR and LR images in CNN by minimizing the mean-square errors between SR images and ground truths \cite{dong2014learning, wang2015deep, dong2015image}. Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} are also popular in super-resolution which hallucinates high resolution details by adversarial learning \cite{ledig2017photo, menon2020pulse, sajjadi2017enhancenet}. These methods mostly gain knowledge from large-scale datasets or existing HR and LR pairs for training. Besides, these 2D image-based methods, especially GAN-based methods do not take the view consistency into consideration and are sub-optimal for novel view synthesis.
Reference-based image super-resolution (Ref-SR) upscales input images with additional reference high-resolution (HR) images. Existing methods match the correspondences between HR references and LR inputs with patch-match \cite{zhang2019image, zheng2017combining}, feature extraction \cite{xie2020feature, yang2020learning} or attention \cite{yang2020learning}. Although we also aim to learn HR details from given reference images, we work in the 3D geometry perspective and can bring details for all novel views instead of one image.
\section{Introduction}
\label{sec:intro}
Synthesizing photorealistic views from a novel viewpoint given a set of posed images, known as \textit{novel view synthesis}, has been a long-standing problem in the computer vision community and an important technique for VR and AR applications such as navigation and telepresence. Traditional approaches mainly fall into the category of image-based rendering and follow the process of warping and blending source frames into target views \cite{gortler1996lumigraph, levoy1996light}. Image-based rendering methods heavily rely on the quality of input data and only produce reasonable renderings with dense observed views and accurate proxy geometry.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figure/teaser1.pdf}
\caption{NeRF, the state-of-the-art novel view synthesis method, can synthesize photorealistic outputs at the resolution of training images but struggles at higher resolutions as shown in (a), while NeRF-SR{} produces high-quality novel views (b) even with low-resolution inputs.}
\label{fig:teaser}
\end{figure}
Most recently, \textit{neural rendering} has made significant progress on novel view synthesis by leveraging learnable components with 3D geometry context to reconstruct novel views with respect to input images. As the current state-of-the-art method, neural radiance fields (NeRF) \cite{mildenhall2020nerf} have emerged as a promising direction for neural scene representation even on sparse image sets of complex real-world scenes. NeRF uses the weights of multi-layer perceptrons (MLPs) to encode the radiance field and volume density of a scene. Most importantly, the implicit neural representation is continuous, which enables NeRF to take as input any position in the volume at inference time and render images at any arbitrary resolution.
At the same time, a high-resolution 3D scene is essential for many real-world applications, \eg a prerequisite to providing an immersive virtual environment in VR. However, a trained NeRF struggles to generalize directly to resolutions higher than that of the input images and generates blurry views (see \figref{fig:teaser}), which presents an obstacle for real-world scenarios, \eg images collected from the Internet may be low-resolution. To tackle this problem, we present NeRF-SR{}, a technique that extends NeRF and creates high-resolution (HR) novel views with better quality even with low-resolution (LR) inputs. We first observe that there is a sampling gap between the training and testing phases for super-resolving a 3D scene, since the sparse inputs are far from satisfying Nyquist view sampling rates~\cite{mildenhall2019local}. To this end, we derive inspiration from the traditional graphics pipeline and propose a supersampling strategy to better enforce the multi-view consistency embedded in NeRF in a sub-pixel manner, enabling the generation of both SR images and SR depth maps. Second, in the case of limited HR images, such as panoramas and light field imaging systems that have a trade-off between angular and spatial resolutions, we find that directly incorporating them in the NeRF training only improves renderings \textit{near the HR images} by a small margin. Thus, we propose a patch-wise warp-and-refine strategy that utilizes the estimated 3D geometry and propagates the details of the HR reference \textit{all over the scene}. Moreover, the refinement stage is efficient and introduces negligible running time compared with NeRF rendering.
To the best of our knowledge, we are the first to produce visually pleasing results for novel view synthesis under mainly low-resolution inputs. Our method requires only posed multi-view images of the target scene, whose internal statistics we exploit, and does not rely on any external priors. We show that NeRF-SR{} outperforms baselines that require LR-HR pairs for training.
Our contributions are summarized as follows:
\begin{itemize}
\item the first framework that produces decent multi-view super-resolution results with mostly LR input images;
\item a supersampling strategy that exploits the view consistency in images and supervises NeRF in a sub-pixel manner;
\item a refinement network that blends details from any HR reference by finding relevant patches with available depth maps.
\end{itemize}
\section{Experiments}
\label{sec:experiments}
\begin{table*}[htbp]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|ccc|ccc|ccc|ccc}
& \multicolumn{3}{c|}{Blender$\times 2$ ($100 \times 100$)} & \multicolumn{3}{c|}{Blender$\times 4$ ($100 \times 100$)} & \multicolumn{3}{c}{Blender$\times 2$ ($200 \times 200$)} & \multicolumn{3}{c}{Blender$\times 4$ ($200 \times 200$)} \\
Method & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ \\
\hline
NeRF~\cite{mildenhall2020nerf} & $\underline{27.54}$ & $\underline{0.921}$ & $0.100$ & $\underline{25.56}$ & $0.881$ & $0.170$ & $\underline{29.16}$ & $\underline{0.935}$ & $0.077$ & $\underline{27.47}$ & $0.910$ & $0.128$ \\
NeRF-Bi & $26.42$ & $0.909$ & $0.151$ & $24.74$ & $0.868$ & $0.244$ & $28.10$ & $0.926$ & $0.109$ & $26.67$ & $0.900$ & $0.175$ \\
NeRF-Liif & $27.07$ & $0.919$ & $\underline{0.067}$ & $25.36$ & $\underline{0.885}$ & $0.125$ & $28.81$ & $0.934$ & $\underline{0.058}$ & $27.34$ & $\underline{0.912}$ & $0.096$ \\
NeRF-Swin & $26.34$ & $0.913$ & $0.075$ & $24.85$ & $0.881$ & $\underline{0.108}$ & $28.03$ & $0.926$ & $\underline{0.058}$ & $26.78$ & $0.906$ & $\underline{0.086}$ \\
Ours-SS & $\boldsymbol{29.77}$ & $\boldsymbol{0.946}$ & $\boldsymbol{0.045}$ & $\boldsymbol{28.07}$ & $\boldsymbol{0.921}$ & $\boldsymbol{0.071}$ & $\boldsymbol{31.00}$ & $\boldsymbol{0.952}$ & $\boldsymbol{0.038}$ & $\boldsymbol{28.46}$ & $\boldsymbol{0.921}$ & $\boldsymbol{0.076}$
\end{tabular}
}
\caption{Quality metrics for novel view synthesis on blender dataset. We report PSNR/SSIM/LPIPS for scale factors $\times2$ and $\times4$ on two input resolutions ($100 \times 100$ and $200 \times 200$) respectively.
}
\label{table:blender-results}
\end{table*}
\setlength{\tabcolsep}{1.4pt}
In this section, we provide both quantitative and qualitative comparisons to demonstrate the advantages of the proposed NeRF-SR{}. We first show results and analysis of supersampling, and then demonstrate how the refinement network adds more details. Our result with only supersampling is denoted as Ours-SS and our result after patch-based refinement is denoted as Ours-Refine.
\subsection{Dataset and Metrics}
To evaluate our methods, we train and test our model on the following datasets. We evaluate the quality of view synthesis with respect to the ground truth from the same pose using three metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM)~\cite{wang2003multiscale}, and LPIPS~\cite{zhang2018unreasonable}.
\topic{Blender Dataset} The Realistic Synthetic $360^{\circ}$ dataset of \cite{mildenhall2020nerf} (known as the Blender dataset) contains 8 detailed synthetic objects, each with 100 training images taken from virtual cameras arranged on a hemisphere pointed inward. As in NeRF~\cite{mildenhall2020nerf}, for each scene we use the 100 views for training and hold out 200 images for testing.
\topic{LLFF Dataset} LLFF dataset\cite{mildenhall2019local, mildenhall2020nerf} consists of 8 real-world scenes that contain mainly forward-facing images. We train on all the images and report the average metrics on the whole set.
\subsection{Training Details}
For supersampling, we implement all experiments on top of NeRF~\cite{mildenhall2020nerf} using PyTorch. As we train on different image resolutions independently, for a fair comparison we train on the Blender and LLFF datasets for 20 and 30 epochs, respectively, where each epoch is one pass over the whole training set. We use the Adam optimizer (with hyperparameters $\beta_1 = 0.9$, $\beta_2 = 0.999$), a batch size of 2048 rays (for all experimented scales), and a learning rate decayed exponentially from $5\cdot 10^{-4}$ to $5 \cdot 10^{-6}$. Following NeRF, NeRF-SR{} also uses hierarchical sampling with same-sized ``coarse'' and ``fine'' MLPs. The numbers of coarse and fine samples are both set to 64.
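For concreteness, a minimal PyTorch sketch of this optimizer setup is given below; the per-epoch decay factor is our own assumption, chosen so that the learning rate falls from $5\cdot 10^{-4}$ to $5 \cdot 10^{-6}$ over the stated number of epochs, and the model stub is only a placeholder for the actual NeRF MLPs.
\begin{verbatim}
import torch
import torch.nn as nn

num_epochs = 25                      # 25 for Blender, 30 for LLFF
lr_start, lr_end = 5e-4, 5e-6
rays_per_batch = 2048

# Placeholder for the coarse/fine NeRF MLPs; the real networks use
# positional encoding and deeper hidden layers.
model = nn.Sequential(nn.Linear(63, 256), nn.ReLU(), nn.Linear(256, 4))

optimizer = torch.optim.Adam(model.parameters(), lr=lr_start,
                             betas=(0.9, 0.999))
# Exponential decay per epoch so that lr_end is reached after num_epochs.
gamma = (lr_end / lr_start) ** (1.0 / num_epochs)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for epoch in range(num_epochs):
    # ... sample rays_per_batch rays, render, apply the MSE loss,
    #     and call optimizer.step() for every batch of the epoch ...
    scheduler.step()
\end{verbatim}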
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{results/results-final.pdf}
\caption{Qualitative comparison on the Blender dataset when the input images are $200 \times 200$ and upscaled by 2 and 4. Note how NeRF-SR{} recovers correct details through supersampling even with low-resolution inputs, such as \textit{Lego}'s gears, \textit{Hotdog}'s sausage and sauce, and \textit{Mic}'s magnets and shiny brackets. Note that NeRF-SR{} synthesizes consistently over different viewpoints; here we provide two views for \textit{Hotdog}, and videos can be found on our \href{https://cwchenwang.github.io/NeRF-SR}{website}. Please zoom in for a better inspection of the results. }
\label{fig:res-blender}
\end{figure*}
\input{results/llff-results}
\subsection{Comparisons}
Since there is no previous work that deals with super-resolving NeRF, we devise several reasonable baselines for comparison, detailed as follows:
\topic{NeRF} Vanilla NeRF is already capable of synthesizing images at any resolution due to its implicit formulation. Therefore, we train NeRF on LR inputs using the same hyperparameters as in our method and directly render HR images.
\topic{NeRF-Bi} aims to super-resolve a trained LR NeRF. We use the same trained model as in the NeRF baseline, but render LR images directly and upsample them with commonly used bicubic upsampling.
\topic{NeRF-Liif} LIIF~\cite{chen2021learning} achieves state-of-the-art performance on continuous single-image super-resolution. Similar to the NeRF-Bi baseline, we super-resolve LR images using a pretrained LIIF model instead. Note that the training process of LIIF requires LR-HR pairs; therefore, it introduces external data priors.
\topic{NeRF-Swin} SwinIR~\cite{liang2021swinir} is the state-of-the-art method for single-image super-resolution. Like NeRF-Bi and NeRF-Liif, NeRF-Swin performs super-resolution on a LR NeRF with the released SwinIR models under the ``Real-World Image Super-Resolution'' setting, which has a training set of more than 10k LR-HR pairs.
\subsection{Effectiveness of supersampling}
For the Blender dataset, we supersample at two resolutions, $100 \times 100$ and $200 \times 200$, and test scales $\times 2$ and $\times 4$. For the LLFF dataset, the input resolution is $504 \times 378$ and we also upscale by $\times 2$ and $\times 4$. The downscaling of images in the dataset from the original resolution to the training resolution is done with the default Lanczos method in the Pillow package.
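As an illustration, this downscaling step could be implemented as follows (a sketch; the file path in the usage comment is hypothetical):
\begin{verbatim}
from PIL import Image

def downscale(path, scale):
    """Downscale a ground-truth image to the training resolution with
    the Lanczos filter of Pillow, as described above."""
    img = Image.open(path)
    w, h = img.size
    return img.resize((w // scale, h // scale), resample=Image.LANCZOS)

# Example (hypothetical path): downscale("lego/train/r_0.png", scale=4)
# turns an 800 x 800 rendering into a 200 x 200 training image.
\end{verbatim}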
\figref{fig:res-blender} shows qualitative results for all methods on a subset of Blender scenes. Renderings from NeRF-Bi exhibit correct global shapes but lack high-frequency details. Vanilla NeRF produces renderings with more details than NeRF-Bi when the scene is already well reconstructed at the input resolution; however, it is still restricted by the information in the input images. NeRF-Liif can recover some details but lacks sufficient texture. NeRF-SR{} finds sub-pixel-level correspondences through supersampling, which means missing details in the input can be found from other views that lie in the neighboring region in 3D space.
Quantitative results on the Blender dataset are summarized in \tabref{table:blender-results}. NeRF-SR{} outperforms the other baselines in all scenarios. NeRF-Liif and NeRF-Swin achieve the second-best LPIPS, providing good visual quality, but cannot even compete with NeRF in PSNR and SSIM. A possible reason is that the Blender dataset is synthetic and lies in a different domain than the data these models were trained on, resulting in false predictions (see NeRF-Swin on \textit{Lego} and \textit{Hotdog}).
The qualitative and quantitative results for the LLFF dataset are shown in \figref{fig:res-llff} and \tabref{table:llff-results}, respectively. NeRF and NeRF-Bi suffer from blurry outputs. While NeRF-Liif and NeRF-Swin recover some details and achieve satisfying visual quality (comparable LPIPS to Ours-SS) since they are trained on external datasets, they tend to be over-smooth and even predict false color or geometry (see the leaves of \textit{Flower} in \figref{fig:res-llff}). NeRF-SR{} fills in details on these complex scenes and outperforms the other baselines significantly. We therefore conclude that learning-based 2D baselines struggle to perform faithful super-resolution, especially in the multi-view case.
In \secref{subsec:ss}, we mentioned that supervision is performed by comparing the average color of sub-pixels, due to the unknown nature of the degradation process (we call this the ``average kernel''). However, in our experiments, the degradation kernel is actually Lanczos, resulting in asymmetric downscale and upscale operations. We further experiment with the condition that the degradation from high resolution to the input images is also the ``average kernel'' for Blender data at resolution $100 \times 100$. Results show that this symmetric downscale and upscale operation provides better renderings than the asymmetric one: PSNR, SSIM, and LPIPS improve to $30.94$ dB, $0.956$, and $0.023$ for scale $\times 2$ and to $28.28$ dB, $0.925$, and $0.061$ for $\times 4$, respectively. This sensitivity to the degradation process is similar to that exhibited in single-image super-resolution. Detailed renderings can be found in the \href{https://cwchenwang.github.io/NeRF-SR/data/supp.pdf}{supplementary}.
\newcommand{\croplego}[1]{
\makecell{
\includegraphics[trim={255px 190px 95px 160px}, clip, width=0.85in]{#1} \\
\includegraphics[trim={118px 170px 222px 170px}, clip, width=0.85in]{#1}
}
}
\newcommand{\cropmicavg}[1]{
\makecell{
\includegraphics[trim={160px 130px 190px 220px}, clip, width=0.85in]{#1} \\
\includegraphics[trim={155px 280px 195px 70px}, clip, width=0.85in]{#1}
}
}
\subsection{Refinement network}
The LLFF dataset contains real-world pictures that have a much more complex structure than the Blender dataset, and supersampling alone is not enough for photorealistic renderings. We further boost its outputs with the refinement network introduced in \secref{subsec:refine}. We use a fixed number of reference patches ($K = 8$), and the dimensions of the patches are set to $64 \times 64$. At inference time, the input images are divided into non-overlapping patches and stitched together after refinement. Without loss of generality, we set the reference image to the first image in the dataset for all scenes, which is omitted when calculating the metrics. The inference time of the refinement stage is negligible compared to NeRF's volumetric rendering: for example, it takes about 48 seconds for NeRF's MLP to render a $1008 \times 756$ image, and only another 1.3 seconds for the refinement stage on a single 1080Ti.
The quantitative results of refinement can be found in \tabref{table:llff-results}. After refinement, the metrics improve substantially at scale 4. For scale 2, PSNR increases only slightly after refinement; a possible reason is that supersampling already learns a decent high-resolution neural radiance field for small upscale factors, so the refinement only improves subtle details (please refer to the \href{https://cwchenwang.github.io/NeRF-SR/data/supp.pdf}{supplementary} for an example). However, LPIPS still improves, meaning that the visual appearance gets better. This issue does not occur for larger magnifications such as 4, since supersampling recovers far fewer details from low-resolution inputs, making the refinement process necessary.
We show the renderings before and after refinement qualitatively in \figref{fig:res-llff}. It is clear that the refinement network boosts supersampling with texture details and edge sharpness.
\begin{table}[htbp]
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{l|ccc|ccc}
& \multicolumn{3}{c|}{LLFF$\times 2$} & \multicolumn{3}{c}{LLFF$\times 4$} \\
Method & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ \\
\hline
NeRF~\cite{mildenhall2020nerf} & $26.36$ & $0.805$ & $0.225$ & $24.47$ & $0.701$ & $0.388$ \\
NeRF-Bi & $25.50$ & $0.780$ & $0.270$ & $23.90$ & $0.676$ & $0.481$ \\
NeRF-Liif & $\underline{26.81}$ & $\underline{0.823}$ & $\underline{0.145}$ & $\underline{24.76}$ & $\underline{0.723}$ & $0.292$ \\
NeRF-Swin & $25.18$ & $0.793$ & $0.147$ & $23.26$ & $0.685$ & $\underline{0.247}$ \\
Ours-SS & $\boldsymbol{27.31}$ & $\boldsymbol{0.838}$ & $\boldsymbol{0.139}$ & $\boldsymbol{25.13}$ & $\boldsymbol{0.730}$ & $\boldsymbol{0.244}$ \\ \hline
Ours-Refine & $\boldsymbol{27.34}$ & $\boldsymbol{0.842}$ & $\boldsymbol{0.103}$ & $\boldsymbol{25.59}$ & $\boldsymbol{0.759}$ & $\boldsymbol{0.165}$ \\
\end{tabular}
}
\caption{Quality metrics for view synthesis on the LLFF dataset. We report PSNR/SSIM/LPIPS for scale factors $\times2$ and $\times4$ on the input resolution ($504 \times 378$).
}
\label{table:llff-results}
\end{table}
\section{Background}
\label{sec:background}
Neural Radiance Fields (NeRF) \cite{mildenhall2020nerf} encodes a 3D scene as a continuous function which takes as input 3D position $\mathbf{x} = (x, y, z)$ and observed viewing direction $\mathbf{d} = (\boldsymbol{\theta}, \boldsymbol{\phi})$, and predicts the radiance $\mathbf{c}(\mathbf{x}, \mathbf{d}) = (r, g, b)$ and volume density $\sigma(\mathbf{x})$. The color depends both on viewing direction $\mathbf{d}$ and $\mathbf{x}$ to capture view dependent effects, while the density only depends on $\mathbf{x}$ to maintain view consistency. NeRF is typically parametrized by a multilayer perceptron (MLP) $f: (\mathbf{x}, \mathbf{d}) \rightarrow (\mathbf{c}, \sigma)$.
NeRF is an emission-only model (the color of a pixel depends only on the radiance along a ray, with no other lighting factors). Therefore, according to volume rendering \cite{kajiya1984ray}, the color along the camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ that shoots from the camera center $\mathbf{o}$ in direction $\mathbf{d}$ can be calculated as:
\begin{equation}
\mathbf{C}(\mathbf{r}) = \int_{t_n}^{t_f}T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t), \mathbf{d}) \mathrm{d}t
\label{equ:render}
\end{equation}
where
\begin{equation}
T(t) = \mathrm{exp}\Big(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\mathrm{d}s\Big)
\end{equation}
is the accumulated transmittance that indicates the probability that a ray travels from $t_n$ to $t$ without hitting any particle.
NeRF is trained to minimize the mean-squared error (MSE) between the predicted renderings and the corresponding ground-truth color:
\begin{equation}
\mathcal{L}_{\mathrm{MSE}} = \sum_{\mathbf{p} \in \mathcal{P}}\| \hat{\mathbf{C}}(\mathbf{r}_{\mathbf{p}}) - \mathbf{C}(\mathbf{r}_{\mathbf{p}}) \|_2^{2}
\label{equ:mse}
\end{equation}
where $\mathcal{P}$ denotes all pixels of the training images, and $\mathbf{r}_{\mathbf{p}}(t) = \mathbf{o} + t\mathbf{d}_{\mathbf{p}}$ denotes the ray shooting from the camera center through the corner (or center in some variants \cite{barron2021mip}) of a given pixel $\mathbf{p}$. $\hat{\mathbf{C}}(\mathbf{r}_{\mathbf{p}})$ and $\mathbf{C}(\mathbf{r}_{\mathbf{p}})$ are the predicted and ground-truth colors of $\mathbf{p}$, respectively.
In practice, the integral in \eqnref{equ:render} is approximated by numeric quadrature that samples a finite number of points along the ray and computes a weighted sum of radiances according to the estimated per-point transmittance. The sampling in NeRF follows a \textit{coarse-to-fine} mechanism with two MLPs, \ie the coarse network is queried on equally spaced samples, whose outputs are used to sample another group of points for a more accurate estimate, and the fine network is then queried on both groups of samples.
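For illustration, a minimal sketch of this quadrature for a single ray is given below; the treatment of the last interval and the small stabilizing constant follow common NeRF implementations rather than details stated here.
\begin{verbatim}
import torch

def render_ray(sigma, color, t_vals):
    """Numeric quadrature of the volume rendering integral for one ray.
    sigma (N,), color (N, 3) and t_vals (N,) are the densities,
    radiances and depths of the N samples along the ray."""
    deltas = t_vals[1:] - t_vals[:-1]
    deltas = torch.cat([deltas, torch.full((1,), 1e10)])  # last interval
    alpha = 1.0 - torch.exp(-sigma * deltas)              # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)     # transmittance
    trans = torch.cat([torch.ones(1), trans[:-1]])        # exclude sample i
    weights = trans * alpha
    rgb = (weights[:, None] * color).sum(dim=0)           # expected color C(r)
    depth = (weights * t_vals).sum(dim=0)                 # expected depth
    return rgb, depth
\end{verbatim}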
\section{Approach}
\label{sec:approach}
In this section, we introduce the details of NeRF-SR{}. The overall structure is presented in \figref{fig:framework}. The supersampling strategy and patch refinement network will be introduced in \secref{subsec:ss} and \secref{subsec:refine}.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{figure/framework.pdf}
\end{center}
\caption{An overview of the proposed NeRF-SR{}, which includes two components. (a) We adopt a supersampling strategy to produce super-resolution novel views from only low-resolution inputs. (b) Given a high-resolution reference at any viewpoint, from which we utilize the depth map at hand to extract relevant patches, NeRF-SR{} generates more details for synthesized images.}
\label{fig:framework}
\end{figure}
\subsection{Supersampling}
\label{subsec:ss}
NeRF optimizes a 3D radiance field by enforcing multi-view color consistency and samples rays based on the camera poses and pixel locations in the training set. Although NeRF can be rendered at any resolution and retains good performance when the input images satisfy the Nyquist sampling rate, this is impossible in practice. Compared to the infinitely many possible incoming ray directions in space, the sampling is quite sparse given the limited input image observations. NeRF can create plausible novel views because the output resolution is the same as the input one and it relies on the interpolation property of neural networks. However, this becomes a problem when we render an image at a higher resolution than the training images; specifically, there is a gap between the training and testing phases. Suppose a NeRF was trained on images of resolution $\mathrm{H} \times \mathrm{W}$; the most straightforward way to reconstruct a training image at scale factor $s$, \ie an image of resolution $s\mathrm{H} \times s\mathrm{W}$, is to sample a grid of $s^{2}$ rays inside each original pixel. Obviously, not only were the sampled ray directions never seen during training, but each queried pixel also corresponds to a smaller region in 3D space. Regarding this issue, we propose a supersampling strategy that tackles the problem of rendering SR images with NeRF. The intuition behind supersampling is explained as follows and illustrated in \figref{fig:super-sampling}.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{figure/super-sampling.pdf}
\end{center}
\caption{Original NeRF casts a single ray through a pixel (solid line) and computes the MSE loss directly (left), while our method (right) splits a pixel into multiple sub-pixels (dashed lines) and draws a ray for each sub-pixel; the radiances of the sub-pixels are then averaged for the MSE loss. Compared to vanilla NeRF, more 3D points in the scene can be corresponded and constrained with supersampling.}
\label{fig:super-sampling}
\end{figure}
We start from the image formation process. Pixel values are mapped from scene irradiance through a \textit{camera response function} (CRF). For simplicity, we assume a pinhole camera model as in NeRF and treat ISO gain and shutter speed as implicit factors. Let $\mathcal{R}(\mathbf{p})$ denote the set of all possible ray directions for pixel $\mathbf{p}$ of a training image; then:
\begin{equation}
\mathcal{C}(\mathbf{p}) = f(E_{\mathcal{R}(\mathbf{p})})
\end{equation}
where $\mathcal{C}(\mathbf{p})$ indicates the color of $\mathbf{p}$, $f$ is the CRF, and $E$ is the incident irradiance over the area covered by $\mathbf{p}$, which is the integration of radiance over all incoming rays in $\mathcal{R}(\mathbf{p})$. Although ideally the training ray directions should be sampled from $\mathcal{R}(\mathbf{p})$, it is both computationally expensive and challenging for the network to fit this huge amount of data. Therefore, to super-resolve images at scale $s$, we first evenly split each pixel from the training set into an $s \times s$ grid of sub-pixels $\mathcal{S}(\mathbf{p})$. As in NeRF, we do not model the CRF and output the color of each sub-pixel directly using a multi-layer perceptron. During the training stage, ray directions for a pixel $\mathbf{p}$ are sampled from the sub-pixels instead, denoted as $\mathcal{R}^{\prime}(\mathbf{p}) = \{\mathbf{r}_{\mathbf{j}}\:|\: \mathbf{j} \in \mathcal{S}(\mathbf{p}) \} \subset \mathcal{R}(\mathbf{p})$. At the inference stage, an $s\mathrm{H} \times s\mathrm{W}$ image can be obtained by directly rendering and organizing the sub-pixels, erasing the sampling gap between the training and testing phases.
Another concern is how to perform supervision with ground-truth images only at dimension $\mathrm{H} \times \mathrm{W}$. Similar to the blind-SR problem, the degradation process from $s\mathrm{H} \times s\mathrm{W}$ is unknown and may be affected by many factors. Inspired by the graphics pipeline, we tackle this issue by computing the radiance of the sub-pixels in $\mathcal{R}^{\prime}(\mathbf{p})$ using \eqnref{equ:render} and then averaging them for comparison with the color of $\mathbf{p}$. Thus, \eqnref{equ:mse} can be extended as:
\begin{equation}
\mathcal{L}_{\mathrm{MSE}} = \sum_{\mathbf{p} \in \mathcal{P}}\Big\| \frac{1}{|\mathcal{R}^{\prime}(\mathbf{p})|}\sum_{\mathbf{r}^{\prime} \in \mathcal{R}^{\prime}(\mathbf{p})}\hat{\mathbf{C}}(\mathbf{r}^{\prime}) - \mathbf{C}(\mathbf{r}_{\mathbf{p}}) \Big\|_2^{2}
\label{equ:l_mse}
\end{equation}
where $\mathcal{R}^{\prime}(\mathbf{p})$ is the sub-pixel grid for pixel $\mathbf{p}$, $|\mathcal{R}^{\prime}(\mathbf{p})|$ is the number of sub-pixels in $\mathcal{R}^{\prime}(\mathbf{p})$, $\mathbf{r}^{\prime}$ is the ray direction of a single sub-pixel, and $\hat{\mathbf{C}}(\mathbf{r}^{\prime})$ is the color of a sub-pixel predicted by the network. In other words, the LR images are treated as if downsampled from the HR ones by averaging pixel colors in a grid (we call this the ``average'' kernel). This avoids any complex assumptions about the downsampling operation and makes our method robust in various situations.
To summarize, supersampling extends the original NeRF in two aspects: first, it samples ray directions from an $s \times s$ grid of sub-pixels for pixel $\mathbf{p}$ instead of a single ray direction; second, it averages the colors of the sub-pixels for supervision. In computer graphics, supersampling and averaging are often used in the rendering process to handle aliasing. In our work, we show that this fully exploits the cross-view consistency introduced by NeRF at a sub-pixel level, \ie a position can be corresponded across multiple viewpoints. While NeRF only shoots one ray for each pixel and optimizes points along that ray, supersampling constrains more positions in 3D space and better utilizes the multi-view information in the input images. In other words, supersampling directly optimizes a denser radiance field at training time.
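A minimal sketch of the resulting supervision (\eqnref{equ:l_mse}), assuming the sub-pixel rays have already been generated and using our own tensor-shape conventions, is:
\begin{verbatim}
import torch

def supersampled_mse(render_fn, rays, lr_colors, s):
    """rays: (P, s*s, 6) holds origin+direction of the s x s sub-pixel
    rays of each of the P training pixels; lr_colors: (P, 3) are the LR
    ground-truth colors; render_fn maps (M, 6) rays to (M, 3) colors
    (the NeRF forward pass). Shapes and names are assumptions."""
    P = rays.shape[0]
    pred = render_fn(rays.reshape(P * s * s, 6)).reshape(P, s * s, 3)
    pred_avg = pred.mean(dim=1)      # "average kernel" over sub-pixels
    return ((pred_avg - lr_colors) ** 2).sum()
\end{verbatim}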
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{figure/refinement.pdf}
\caption{Our refinement module encodes synthesized patches $\widetilde{P}$ from images produced by supersampling and reference patches $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ from $\mathcal{I}_{\mathrm{REF}}$. The encoded features of $\mathcal{I}_{\mathrm{REF}}$ are maxpooled and concatenated with those of $\widetilde{P}$, which are then decoded to generate the refined patch. In the training phase, $\widetilde{P}$ is sampled from the synthesized SR image at the camera pose of $\mathcal{I}_{\mathrm{REF}}$ and $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ is sampled from adjacent regions. At test time, $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ is obtained via depth warping. (The input and output patches are zoomed for better illustration; zoom in to see the details on the leaves after refinement.)}
\label{fig:refine}
\end{figure*}
\subsection{Patch-Based Refinement}
\label{subsec:refine}
With supersampling, the synthesized images achieve much better visual quality than vanilla NeRF. However, when the images of a scene do not contain enough sub-pixel correspondences, supersampling cannot recover enough details for high-resolution synthesis. Also, there are often only a limited number of high-resolution images whose HR content is available for further improving the results.
Here, we present a patch-based refinement network that recovers high-frequency details and works even in the \textit{extreme} case, \ie when only one HR reference is available, as shown in \figref{fig:refine}. Our system is, however, not limited to one HR reference and can easily be extended to settings with multiple HR images. The core design consideration is how to ``blend'' details of the reference image $\mathcal{I}_{\mathrm{REF}}$ into NeRF-synthesized SR images that already capture the overall structure. We adopt a patch-by-patch refinement strategy that turns an SR patch $\widetilde{P}$ into the refined patch $P$. Besides $\widetilde{P}$, the input should also include HR patches from $\mathcal{I}_{\mathrm{REF}}$ that reveal how the objects or textures in $\widetilde{P}$ appear at high resolution. However, due to occlusion and inaccurate depth estimation, multiple HR patches are required to cover the region in $\widetilde{P}$, and we use $K$ patches $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ for reference. Patches in $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ also cover larger regions than $\widetilde{P}$ and contain less relevant information. The refinement stage aims at local detail enhancement and preserves the view-consistent structure from supersampling using the spatial information of the depth predictions.
We use a U-Net-based convolutional architecture for the refinement network, which has demonstrated its efficacy in several existing novel view synthesis methods \cite{choi2019extreme, riegler2021stable, riegler2020free}. In earlier attempts, we modeled the refinement procedure as an image-to-image translation \cite{isola2017image} and found that channel-wise stacking of $\widetilde{P}$ and $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ was unable to fit the training set well. Therefore, inspired by \cite{choi2019extreme, riegler2020free}, we instead encode each patch separately with an encoder consisting of seven convolutional layers. The decoder of the network takes as input the nearest-neighbor upsampled features of the previous layer, concatenated with both the encoded features of $\widetilde{P}$ and the maxpooled features of $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ at the same spatial resolution. All convolutional layers are followed by a ReLU activation.
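A simplified PyTorch sketch of this design is given below; the channel widths, strides, and single-resolution feature fusion are our assumptions (the actual network fuses features at several resolutions, U-Net style), so it should be read as an illustration of the encode--maxpool--decode structure rather than the released architecture.
\begin{verbatim}
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Seven-layer convolutional encoder (channel widths assumed)."""
    def __init__(self, ch=(3, 32, 64, 64, 128, 128, 256, 256)):
        super().__init__()
        layers = []
        for k in range(7):
            stride = 2 if k % 2 == 1 else 1   # downsample every other layer
            layers += [nn.Conv2d(ch[k], ch[k + 1], 3, stride, 1),
                       nn.ReLU(inplace=True)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class RefineNet(nn.Module):
    """Encode the SR patch and the K reference patches separately,
    maxpool the reference features over K, concatenate and decode."""
    def __init__(self):
        super().__init__()
        self.enc_sr, self.enc_ref = PatchEncoder(), PatchEncoder()
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(512, 128, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(128, 64, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 3, 3, 1, 1))

    def forward(self, sr_patch, ref_patches):
        # sr_patch: (B, 3, 64, 64); ref_patches: (B, K, 3, 64, 64)
        B, K = ref_patches.shape[:2]
        f_sr = self.enc_sr(sr_patch)                      # (B, 256, 8, 8)
        f_ref = self.enc_ref(ref_patches.flatten(0, 1))   # (B*K, 256, 8, 8)
        f_ref = f_ref.view(B, K, *f_ref.shape[1:]).max(dim=1).values
        return self.dec(torch.cat([f_sr, f_ref], dim=1))  # refined patch
\end{verbatim}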
\topic{Training}
The training of the refinement network requires SR and HR patch pairs, which are only available at the camera pose of $\mathcal{I}_{\mathrm{REF}}$. Therefore, $\widetilde{P}$ is randomly sampled from the SR image and $P$ is the patch of $\mathcal{I}_{\mathrm{REF}}$ at the same location. We apply perspective transformations to $\widetilde{P}$ and $P$ since, during testing, the input patches mostly come from different camera poses. Moreover, to account for the inaccuracy of reference patches at test time, we sample $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ within a fixed window around $P$. In order to preserve the spatial structure of $\widetilde{P}$ while improving its quality, our objective function combines a reconstruction loss $\mathcal{L}_{rec}$ and a perceptual loss $\mathcal{L}_{per}$, where
\begin{equation}
\mathcal{L}_{\mathrm{refine}} = \mathcal{L}_{rec} + \mathcal{L}_{per} = \|\widetilde{P} - P \|_1 + \sum_{l}\lambda_{l}\|\phi_{l}(\widetilde{P}) - \phi_{l}(P) \|_1
\end{equation}
$\phi_{l}$ is a set of layers of a pretrained VGG-19 and $\lambda_{l}$ is the reciprocal of the number of neurons in layer $l$. Note that we adopt the $\ell_1$-norm instead of MSE in $\mathcal{L}_{rec}$ because the MSE is already minimized in supersampling and the $\ell_1$-norm sharpens the results.
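A sketch of this objective, assuming a particular choice of VGG-19 feature layers, a recent TorchVision version, and omitting ImageNet normalization for brevity, is:
\begin{verbatim}
import torchvision
import torch.nn as nn

class RefineLoss(nn.Module):
    """L1 reconstruction plus VGG-19 perceptual loss; the feature
    layers (layer_ids) are an assumed choice."""
    def __init__(self, layer_ids=(3, 8, 17, 26)):
        super().__init__()
        weights = torchvision.models.VGG19_Weights.DEFAULT
        self.vgg = torchvision.models.vgg19(weights=weights).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.layer_ids = set(layer_ids)

    def features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, refined, target):
        loss = (refined - target).abs().sum()          # L1 reconstruction
        for fr, ft in zip(self.features(refined), self.features(target)):
            # lambda_l = reciprocal of the number of neurons in layer l
            loss = loss + (fr - ft).abs().sum() / fr.numel()
        return loss
\end{verbatim}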
\topic{Testing} At inference time, given a patch $\widetilde{P}$ on synthesized image $\mathcal{I}_n$, we can find a high-resolution reference patch on reference image $\mathcal{I}_{\mathrm{REF}}$ for each pixel on $\widetilde{P}$:
\begin{equation}
P_{i,j}^{\mathrm{REF}} = K_{\mathrm{REF}}T(K_{n}^{-1}d_{i,j}\widetilde{P}_{i,j})
\label{equ:warp}
\end{equation}
where $i,j$ denotes a location on patch $\widetilde{P}$, $d$ is the estimated depth, $T$ is the transformation between the camera extrinsics of $\mathcal{I}_n$ and $\mathcal{I}_{\mathrm{REF}}$, and $K_{\mathrm{REF}}$ and $K_{n}$ refer to the camera intrinsic matrices of $\mathcal{I}_{\mathrm{REF}}$ and $\mathcal{I}_n$. Thus, \eqnref{equ:warp} computes the 3D world coordinate of $(i,j)$ based on $d_{i,j}$ and the camera parameters, then projects it to a pixel on $\mathcal{I}_{\mathrm{REF}}$ and extracts the corresponding patch at that location (points that fall outside $\mathcal{I}_{\mathrm{REF}}$ are discarded). In summary, to obtain the refined patch $P$, we first sample $K$ patches from $\{P_{i,j}^{\mathrm{REF}}\}$ to construct the set $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ and then feed them together with $\widetilde{P}$ into the network.
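A minimal sketch of this warping (\eqnref{equ:warp}) for a single pixel, using homogeneous coordinates and our own naming conventions, is:
\begin{verbatim}
import numpy as np

def warp_to_reference(i, j, depth, K_n, K_ref, T_n2ref):
    """Map pixel (i, j) of the synthesized view with estimated depth
    to a pixel of the reference image. K_n, K_ref: 3x3 intrinsics;
    T_n2ref: 4x4 transform from synthesized to reference camera."""
    pix = np.array([j, i, 1.0])                 # homogeneous pixel coordinate
    cam = depth * (np.linalg.inv(K_n) @ pix)    # back-project to camera space
    cam_h = np.append(cam, 1.0)
    ref_cam = (T_n2ref @ cam_h)[:3]             # move into the reference frame
    ref_pix = K_ref @ ref_cam
    return ref_pix[:2] / ref_pix[2]             # perspective divide; may fall
                                                # outside I_REF and be discarded
\end{verbatim}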
More details of the refinement network can be found in the \href{https://cwchenwang.github.io/NeRF-SR/data/supp.pdf}{supplementary material}.
The training of NeRF requires correspondences of the input images in 3D space. As long as the HR reference falls in the camera frustum of the input images, it can easily be warped to other views and bring in enough details. Therefore, our refinement network is well suited for any NeRF-compatible dataset.
\section{Introduction}
Machine Learning (ML) applications have recently seen widespread adoption in many critical missions as a way to efficiently deal with large-scale and noisy datasets for which human expertise cannot be used for practical reasons. Although ML-based approaches have achieved impressive results in many data processing tasks, including classification and object recognition, they have been shown to be vulnerable to small adversarial perturbations, and thus tend to misclassify, or fail to recognize, minimally perturbed inputs. Figure~\ref{fig:adversarial-input} illustrates how an adversarial sample can be generated by adding a small perturbation, and as a result be misclassified by a trained Neural Network (NN).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{{adversarial-input}.png}
\caption{By adding an unnoticeable perturbation to an image of ``panda'', an adversarial sample is created, and it is misclassified as ``gibbon'' by the trained network. (Image credit: ~\cite{goodfellow2015})\label{fig:adversarial-input}}
\end{figure}
Adversarial perturbations can be achieved through either \emph{white-box} or \emph{black-box} attacks. In the threat model of \emph{white-box} attacks, an attacker is assumed to have full knowledge of the target NN model, including the model architecture and all relevant hyperparameters. For \emph{black-box} attacks, an attacker has no access to the NN model and its associated parameters; thus, the attacker relies on generating adversarial samples using an NN model at hand (known as the \emph{attacker model}), and then uses these adversarial samples on the target NN model (known as the \emph{victim model}). White-box attacks are considered difficult to launch in real-world scenarios, as it is often not possible for an attacker to have access to full information about the victim model. Thus, in this paper, we focus on \emph{black-box} attacks, which pose practical threats for many ML applications, and evaluate strategies for generating adversarial samples (which can be used to launch black-box attacks) and their transferability to victim models.
{\bf\textit{Transferability}} is the ability of an adversarial sample that is generated by a machine learning attack on a particular machine learning model (i.e., on an attacker model) to be effective against a different, and potentially unknown, machine learning model (i.e., on a victim model). The attacker model refers to the model used to generate the adversarial samples (i.e., malicious inputs that are modified to yield erroneous output while appearing unmodified to a human or an agent), whereas the target model refers to the NN model to which the adversarial samples will be transferred. There is a long literature on the transferability of adversarial samples and the machine learning attacks that generate them; however, these works often analyze transferability from the perspective of a specific network model~\citep{szegedy2014, goodfellow2015, papernot2016, demontis2019}. That is, they have tried to explain why transferability occurs based on the properties of a given target model. Hence, we say that most research has taken a \emph{model-centric} approach. In contrast, we present an {\bf \textit{attack-centric}} approach in this paper. In the \textit{attack-centric} approach, we provide insights on why adversarial samples transfer by analyzing the adversarial samples generated using different machine learning attacks. A particular insight we seek is whether machine learning attacks and the input set have any inherent features that cause or increase the likelihood of adversarial samples transferring effectively to victim models.
In the following, we provide motivation on studying transferability of adversarial samples and exemplify ML-based applications in which they may pose significant security and reliability threats.
\subsection{Motivation for Research on Transferability of Adversarial Samples}
Machine learning has become a driving force for many data-intensive innovative technologies in different domains, including (but not limited to) health care, automotive, finance, security, and predictive analytics, thanks to the widespread availability of data sources and the computational power to process them in a reasonable time. However, machine learning systems may have security weaknesses that can be detrimental (and even life threatening) for many application use cases. To motivate the study of the transferability of adversarial samples, and to demonstrate the feasibility and possible consequences of machine learning attacks, we highlight here some practical security threats that exploit the transferability of adversarial samples.
\cite{thys2019} generated adversarial samples that successfully hide a person from a person-detector camera that relies on a machine learning model. They showed that this kind of attack makes it feasible to maliciously circumvent surveillance systems: intruders can sneak around undetected by holding the adversarial sample/patch, printed on cardboard, in front of their body and aimed towards the surveillance camera.
Another sector that heavily relies on ML approaches, due to the high volume of data being processed, is health care. A particular example of exploiting adversarial samples in this domain is as follows. Dermatologists usually operate under a ``fee-for-service'' revenue model in which physicians get paid for the procedures they perform for a patient. This has led some unethical dermatologists to apply unnecessary procedures to increase their revenue. To avoid fraud of this nature, insurance companies often rely on machine learning models that analyze patient data (e.g., dermatoscopy images) to confirm that suggested procedures are indeed necessary. According to the hypothetical scenario presented by ~\cite{finlayson2018}, an attacker could generate adversarial samples composed of dermatoscopy images such that, when they are analyzed with the machine learning model used by the insurance company (the victim model), it would (incorrectly) report that a suggested procedure is appropriate and necessary for the patient.
For security applications that rely on audio commands (which are processed by an ML-based speech recognition system), an attacker can construct adversarial audio samples to break into the targeted system. Such an attack, if successful, may lead to information leakage, cause denial of service, or execute unauthorized commands. The feasibility of an attack on a speech recognition system was demonstrated by ~\cite{carlini2016}, who generated adversarial audio samples (called obfuscated commands) that were used to attack Google Now's speech recognition system.
~\cite{jia2017} used the Stanford Question Answering Dataset (SQuAD) to test whether text recognition systems can answer questions about paragraphs that contain adversarial sentences inserted by a malicious user. These adversarial samples were automatically generated to mislead the system without changing the correct answers or misleading humans. Their results showed that the accuracy of sixteen published models drops from an average of 75\% F1 score to 36\%, and when the attacker was allowed to add ungrammatical sequences of words, the average accuracy of four of the tested models dropped further to 7\%.
As machine learning approaches find their ways into many application domains, the concerns associated with the reliability and security of systems are getting profound. While covering all application areas is out of scope for this paper, our goal is to motivate the study of transferability of adversarial samples to better understand the mechanisms and factors that influence their effectiveness. Without loss of generality, we focus primarily on image classification as a use case to demonstrate the impact of machine learning attacks and their role on effectiveness of transferability of adversarial samples in this paper (though the findings and insights obtained can be generalized for other use cases).
\section{Related Work}
The study of machine learning attacks and transferability of adversarial samples have gained a momentum, following the widespread use of Deep Neural Networks (DNNs) in many application domains. In the following, we detail the recent studies in this area, and discuss their relevance to our work.
\cite{szegedy2014} studied the transferability of adversarial samples on different models that were trained using MNIST dataset. They focused on examining why DNNs were so vulnerable to images with little perturbation. In particular, they examined non-linearity and overfitting in neural networks as the cause of DNNs vulnerability to adversarial samples. Their experiments and methodology, however, were limited to the NN model characteristics to gain intuition on transferability.
\cite{goodfellow2015} carried out a new study on the transferability of adversarial samples, which was built on the previous study of~\cite{szegedy2014}. In contrast, they argued that the non-linearity of NN models actually helps to reduce vulnerability to adversarial samples, and that the linearity of a model is what makes adversarial samples work. They further suggest that transferability is more likely when the adversarial perturbation or noise is highly aligned with the weight vector of the model. The entire analysis was based on an attack called the Fast Gradient Sign Method (FGSM), which computes the gradient of the loss function once and then finds the minimum step size that generates the adversarial samples.
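For reference, a minimal PyTorch sketch of the fixed-step FGSM variant is shown below; the line search over the step size described above is omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One gradient-sign step of size eps: x' = x + eps * sign(grad_x L).
    `model` returns class logits; pixel values are assumed in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
\end{verbatim}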
Another study on transferability was conducted by~\cite{papernot2016}, in which they aimed at examining how transferability works across traditional machine learning classifiers, such as Support Vector Machines (SVMs), Decision Trees (DT), K-nearest neighbors (KNN), Logistic Regression (LR), and DNNs. Their motivation was to determine whether adversarial samples constitute a threat for a specific type or implementation of machine learning model. In other words, they analyzed whether adversarial samples transfer to any of these models and, if so, which of the classifiers (or models) are more prone to such black-box attacks. They also examined intra-technique and cross-technique transferability across the models, and provided an in-depth explanation of why DNNs and LR were more prone to intra-technique transferability when compared to SVM, DT, and KNN. However, similar to previous studies, their analysis did not consider the possible impact of intrinsic properties of attacks on the transferability of adversarial samples.
\cite{papernot2017} extended their earlier findings by demonstrating how a black-box attack can be launched on hosting DNN without prior knowledge of the model structure nor its training dataset. The attack strategy employed consists of training a local model (i.e., substitute/attacker model) using synthetically generated data by the adversary that was labeled by the targeted DNN. They demonstrated the feasibility of this strategy to launch black-box attacks on machine learning services hosted by Amazon, Google and MetaMind. Similar study was conducted by~\cite{liu2017}, in which they assumed the model and training process, including both training and test datasets are unknown to them before launching the attack.
\cite{demontis2019} presented a comprehensive analysis of transferability for both test-time evasion and training-time poisoning attacks. They showed that there are two main factors contributing to the success of an attack: the intrinsic adversarial vulnerability of the target model, and the complexity of the substitute model used to optimize the attack. They further defined three metrics/factors that impact transferability: i) the size of the input gradient, ii) the alignment of the input gradients of the loss function computed using the target and the substitute (attacker) models, and iii) the variability of the loss landscape.
All these findings and factors, while essential, are restricted to explaining transferability from the model-centric perspective. However, our investigation is not limited to the assessment of models, but extends the analysis to various attack implementations and the adversarial samples they generate, to see whether there are underlying characteristics that increase or decrease the chances of transferability among NN models.
\section{Machine Learning Attacks}
The adversarial perturbations crafted to generate adversarial samples that fool a trained network are referred to as machine learning attacks. The full list of machine learning attacks presented in the literature is extensive; here, we present the subset of attacks analyzed in this work, with a brief description of their characteristics, in Table~\ref{tab:attacks}.
Following the categorization presented by~\cite{rauber2018}, we categorize the attacks used in this paper into two main families: i) gradient-based, and ii) decision-based attacks. Gradient-based attacks try to generate adversarial samples by finding the minimum perturbation through a gradient descent mechanism. Decision-based attacks involve the use of image processing techniques to generate adversarial samples; they are called decision-based because the algorithms rely on comparing the generated adversarial samples with the original output until misclassification occurs.
\begin{longtable}{| p{.25\textwidth} | p{.18\textwidth} | p{.46\textwidth}|}
\hline
Name of Attack & Attack Family & Short Description\\
\hline\hline
Deep Fool Attack & gradient-based & It obtains minimum perturbation by approximating the model classifier with a linear classifier~\citep{moosavi2016}.\vspace{0.1cm} \\
\hline
Additive Noise Attack & decision-based & Adds Gaussian or uniform noise and gradually increases the standard deviation until misprediction occurs~\citep{rauber2018}.\vspace{0.1cm} \\
\hline
Basic Iterative Attack & gradient-based & Applies a gradient with small step size and clips pixel values of intermediate results to ensure that they are in the neighborhood of the original image~\citep{kurakin2017}. \vspace{0.1cm} \\
\hline
Blended Noise Attack & decision-based & Blends the input image with a uniform noise until the image is misclassified.\vspace{0.1cm}\\
\hline
Blur Attack & decision-based & Finds the minimum blur needed to turn an input image into an adversarial sample by linearly increasing the standard deviation of a Gaussian filter. \vspace{0.1cm}\\
\hline
Carlini Wagner Attack & gradient-based & Generates adversarial sample by finding the smallest noise added to an image that will change the classification of the image~\citep{carlini2017}.\vspace{0.1cm}\\
\hline
Contrast Reduction Attack & decision-based & Reduces the contrast of an input image by performing a line-search internally to find minimal adversarial perturbation. \vspace{0.1cm}\\
\hline
Search Contrast Reduction Attack& decision-based & Reduces the contrast of an input image by performing a binary search internally to find minimal adversarial perturbation. \vspace{0.1cm}\\
\hline
Decoupled Direction and Norm (DDN) Attack & gradient-based & Induces misclassifications with low L2-norm, through decoupling the direction and norm of the adversarial perturbation that is added to the image~\citep{rony2019}. The attack compensates for the slowness of Carlini Wagner attack.\vspace{0.1cm}\\
\hline
Fast Gradient Sign Attack & gradient-based & Uses a one-step method that computes the gradient of the loss function with respect to the image once and then tries to find the minimum step size that will generate an adversarial sample~\citep{goodfellow2015}.\\
\hline
Inversion Attack & decision-based & Creates a negative image (i.e., image complement of the original image, in which the light pixels appear dark, and vice versa) by inverting the pixel values~\citep{hosseini2017}.\vspace{0.1cm}\\
\hline
Newton Fool Attack & gradient-based & Finds small adversarial perturbation on an input image by significantly reducing the confidence probability~\citep{jang2017}.\vspace{0.1cm}\\
\hline
Projected Gradient Descent Attack & gradient-based & Attempts to find the perturbation that maximizes the loss of a model (using gradient descent) on an input. It is ensured that the size of the perturbation is kept smaller than specified error by relying on clipping the samples generated~\citep{madry2017}.\vspace{0.1cm}\\
\hline
Salt and Pepper Noise Attack & decision-based & Involves adding salt and pepper noise to an image in each iteration until the image is misclassified, while keeping the perturbation size within the specified epsilon $\epsilon$.\vspace{0.1cm}\\
\hline
Virtual Adversarial Attack & gradient-based & Calculates untargeted adversarial perturbation by performing an approximated second order optimization step on the Kullback–Leibler divergence between the unperturbed predictions and the predictions for the adversarial perturbation~\citep{miyato2015}. \vspace{0.1cm}\\
\hline
Sparse Descent Attack & gradient-based & A version of basic iterative method that minimizes the L1 distance. \vspace{0.1cm}\\
\hline
Spatial Attack & decision-based & Relies on spatially chosen rotations, translations, scaling~\citep{engstrom2019}.\vspace{0.1cm}\\
\hline \hline
\caption{The machine learning attacks used in this work.}
\label{tab:attacks}
\end{longtable}
\section{Methodology}
In the following, we detail the Convolutional Neural Network (CNN) models, infrastructure and tools used in the evaluation, as well as the procedure employed in carrying out the experiments.
\subsection{Infrastructure and Tools}
To build, train, and test the CNNs used in our evaluation, we rely on PyTorch and TorchVision. We also use Foolbox~\citep{rauber2018}, a Python library for generating adversarial samples. It provides reference implementations for many published adversarial attacks, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation. We use Python version 3.7.3 on Jupyter Notebook. We run our experiments on Google Colab, which provides an interactive environment to write and execute Python code. It is similar to a Jupyter notebook, but rather than being installed locally, it is hosted on the cloud. It is heavily customized for data science workloads, as it contains most of the core libraries used in data science and machine learning research. We used this environment to train the neural networks, as it provides large memory capacity and access to GPUs, thereby reducing the training time.
\subsection{CNNs Used in This Study}
Here, we provide the brief description and details of CNNs used in this work. Note that a particular CNN may be in one of two roles, namely it can be either an attacker model (on which the adversarial samples are generated), or a victim model (to which the adversarial samples will be used to attack).
{\bf LeNet:}
It is a simple, yet popular, CNN architecture that was first introduced in 1995 and came to the limelight in 1998 after it demonstrated success in the handwritten digit recognition task~\citep{lecun1998}. The LeNet architecture used in this work is slightly modified to train on the CIFAR-10 dataset (instead of MNIST).
{\bf AlexNet:}
It is an advanced form of the LeNet architecture, with a depth of 8 layers. It showed groundbreaking results in the 2012 ILSVRC competition by reducing the error rate on the ImageNet dataset from 25.8\% to 16.4\%, with about 60 million trainable parameters~\citep{krizhevsky2017}. It also uses optimization techniques such as dropout, its activation function, and Local Response (LR) normalization. Since LR normalization has shown minimal (if any) contribution in practice, it was not included in the AlexNet model trained for this work. Aside from the increase in network depth, another difference between the LeNet and AlexNet models trained in this work is that AlexNet has dropout layers added to it.
{\bf Vgg-11:}
It was introduced by~\cite{simonyan2015} to improve image classification accuracy on the ImageNet dataset. Compared to LeNet and AlexNet, Vgg-11 has an increased network depth and makes use of small ($3 \times 3$) convolutional filters. The architecture secured second place at the ILSVRC 2014 competition after reducing the error rate on the ImageNet dataset down to 7.3\%; hence, the architecture is an improvement over AlexNet. There are different variants of Vgg: Vgg-11, 13, 16, and 19. Only Vgg-11 is used in this paper. In addition to being deeper than the AlexNet architecture, batch normalization is also introduced in the Vgg-11 used in this work.
Table~\ref{tab:cnn-models} summarizes the major features of these three CNN models. We choose these models to evaluate how machine learning attacks and corresponding adversarial samples generated respond to these models.
\begin{longtable}{| p{.08\textwidth} | p{.072\textwidth} | p{.12\textwidth}| p{.109\textwidth} | p{.125\textwidth} | p{.065\textwidth} | p{.12\textwidth} | p{.1\textwidth}|}
\hline
CNN& \# Conv. Layers&\# Inner activation func., type&Output activation func.& \# Pooling Layers, type& \# FC Layers&\# Dropout Layers (\%)&\# BatchNorm Layers \vspace{0.1cm}\\
\hline
LeNet&2&4, RELU &Softmax& 2, maxpool& 3 &None & None \vspace{0.1cm}\\
\hline
AlexNet&5&7, RELU&Softmax &3, maxpool& 3 & 2 (50\%)& None \vspace{0.1cm}\\
\hline
Vgg-11& 8&8, RELU&Softmax&4, maxpool& 3 & 2 (50\%) & 8 \vspace{0.1cm}\\
\hline
\hline
\caption{Features of the CNN models used in this paper.}
\label{tab:cnn-models}
\end{longtable}
\subsection{Data Processing and Training}
{\bf Dataset:} We used the CIFAR-10 dataset~\citep{Krizhevsky2009} for our analysis, since it is arguably one of the most widely used datasets in image processing and computer vision research. It contains 60,000 images, each belonging to one of ten classes. The training dataset contains 45,000 images, the validation dataset has 500 images, and the testing dataset contains 10,000 images. To generate adversarial samples, 500 images are selected from the testing dataset (50 images picked from each class to have a balanced dataset).
\noindent {\bf Preprocessing:} At the very beginning, we applied training transformations, including random rotation, random horizontal flip, random cropping, conversion of the dataset to tensors, and normalization. Likewise, we applied test transformations, converting the dataset to tensors and normalizing it. Random rotation and horizontal flips add variety to the input data, which helps the model learn in a more robust way. It is necessary to convert inputs to tensors because PyTorch works with tensor objects. The three channels are normalized (dividing by 255) to increase learning accuracy. The final step of data pre-processing was forming batches of size 256 and creating data loaders for the training and validation data (the method loads 256 images in each iteration during training and validation). We chose a batch size of 256 as it is large enough to make the training faster.
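A sketch of this preprocessing with TorchVision is shown below; the rotation angle, crop padding, and normalization statistics are our assumptions, since the exact values are not stated.
\begin{verbatim}
import torchvision.transforms as T
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

# Assumed CIFAR-10 channel statistics for the normalization step.
mean, std = (0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)

train_tf = T.Compose([
    T.RandomRotation(10),            # rotation angle is an assumed value
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),     # padding is an assumed value
    T.ToTensor(),                    # scales pixel values to [0, 1]
    T.Normalize(mean, std),
])
test_tf = T.Compose([T.ToTensor(), T.Normalize(mean, std)])

train_set = CIFAR10(root="./data", train=True, download=True,
                    transform=train_tf)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True)
\end{verbatim}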
\noindent {\bf Training:} For training, we first created the network model, which comprises feature extraction, classification, and forward propagation. In each epoch, we calculated the training loss, training accuracy, validation loss, and validation accuracy. For training, we passed the following parameters to the train function: the model, the training iterator, the optimizer (Adam), and the criterion (cross-entropy loss). For validation, we passed the following parameters to the evaluation function: the model, the validation iterator, and the criterion (cross-entropy loss). After completing the training phase, we saved the parameter values of the given model.
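A minimal sketch of one such training epoch is given below (a generic PyTorch loop, not the exact code used):
\begin{verbatim}
import torch

def train_one_epoch(model, loader, optimizer, criterion, device="cpu"):
    """Runs one epoch and returns the average loss and accuracy."""
    model.train()
    total_loss, correct, n = 0.0, 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * labels.size(0)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        n += labels.size(0)
    return total_loss / n, correct / n

# Usage: criterion = torch.nn.CrossEntropyLoss()
#        optimizer = torch.optim.Adam(model.parameters())
\end{verbatim}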
\begin{longtable}{| p{.3\textwidth} | p{.2\textwidth}| p{.2\textwidth} | p{.2\textwidth} |}
\hline
Characteristics & LeNet & AlexNet & Vgg-11 \vspace{0.1cm}\\
\hline
\hline
Epoch number & 25 & 25 & 10 \vspace{0.1cm}\\
\hline
Training loss & 0.953 & 0.631 & 0.244 \vspace{0.1cm}\\
\hline
Validation loss & 0.956 & 0.695 & 0.468 \vspace{0.1cm}\\
\hline
Training accuracy & 66.34\% & 78.34\%& 91.94\% \vspace{0.1cm}\\
\hline
Validation accuracy & 66.70\% & 76.74\%&87.11\% \vspace{0.1cm}\\
\hline
Testing accuracy & 66.64\% &76.03\%& 85.87\% \vspace{0.1cm}\\
\hline
\hline
\caption{Training characteristics for NN models.}
\label{tab:training-characteristics}
\end{longtable}
The final step is the testing stage. To test the trained models, we loaded the saved model parameters, including the trained weights. Then, we checked the testing accuracy of the networks. Table~\ref{tab:training-characteristics} summarizes the training characteristics and reports the training, validation, and testing accuracies obtained.
\subsection{Adversarial Samples Generation}
{\bf Machine learning attacks:} Table~\ref{tab:attacks} detailed the 17 unique machine learning attacks employed in the evaluation. However, for some of the attacks, more than one norm (L1, L2, L-infinity) is used for estimating the error ($\epsilon$), thus increasing the number of unique attacks evaluated to 40. For the sake of brevity, we enumerate the attacks from 1 to 40 (as listed in Table~\ref{tab:attack-enumeration}), and use this enumeration as labels, instead of providing the full name and norm used, when showing the results in the following figures.
\begin{longtable}{| p{.05\textwidth} | p{.3\textwidth}| p{.055\textwidth} || p{.05\textwidth} | p{.3\textwidth}| p{.055\textwidth} | }
\hline
Label & Attack Name & Norm & Label & Attack Name & Norm \\
\hline
\hline
1& Deep Fool Attack& L-inf & 21& BSCR Attack& L2\\
\hline
2& Deep Fool Attack& L2 & 22& BSCR Attack& L-inf\\
\hline
3& Additive Gaussian Noise (AGN) Attack& L2 & 23& Linear Search Contrast Reduction (LSCR) Attack& L1\\
\hline
4& Additive Uniform Noise (AUN) Attack& L2 & 24& LSCR Attack& L2\\
\hline
5& AUN Attack& L-inf & 25& LSCR Attack& L-inf\\
\hline
6& Repeated AGN Attack& L2 & 26& Decoupled Direction and Norm Attack& L2\\
\hline
7& Repeated AUN Attack& L2 & 27& Fast Gradient Sign Attack& L1\\
\hline
8& Repeated AUN Attack& L-inf & 28& Fast Gradient Sign Attack& L2\\
\hline
9& Basic Iterative Attack& L1 & 29& Fast Gradient Sign Attack& L-inf\\
\hline
10& Basic Iterative Attack& L2& 30& Inversion Attack& L1\\
\hline
11& Basic Iterative Attack& L-inf& 31& Inversion Attack& L2\\
\hline
12& Blended Uniform Noise Attack& L1 & 32& Inversion Attack& L-inf\\
\hline
13& Blended Uniform Noise Attack& L2 & 33& Newton Fool Attack& L2\\
\hline
14& Blended Uniform Noise Attack& L-inf & 34& Projected Gradient Descent Attack& L1\\
\hline
15& Blur Attack& L1 & 35& Projected Gradient Descent Attack& L2\\
\hline
16& Blur Attack& L2 & 36& Projected Gradient Descent Attack& L-inf\\
\hline
17& Blur Attack& L-inf & 37& Salt and Pepper Attack& L2\\
\hline
18& Calini Wagner Attack& L2 & 38& Sparse descent Attack& L1\\
\hline
19& Contrast Reduction Attack& L2 & 39& Virtual adversarial Attack& L2\\
\hline
20& Binary Search Contrast Reduction (BSCR) Attack& L1 & 40& Spatial Attack& N/A\\
\hline
\caption{Labels of attacks and norms used to generate adversarial samples.}
\label{tab:attack-enumeration}
\end{longtable}
{\bf Adversarial Sample Formulation:} Given a classification function $f(x)$, the class $C_x$ of an input $x$, a distance $D(x, x^{\prime})$, and an epsilon $\epsilon$ (the maximum allowable perturbation, or error budget), an adversarial sample $x^{\prime}$ can be mathematically expressed as:
\[
f(x)\; = \;C_x \;\land\; f(x^{\prime})\;\neq\;C_x \;\land\; D(x,x^{\prime}) \leq \epsilon.
\]
To craft adversarial samples via Foolbox~\citep{rauber2018}, we need to specify a criterion that defines the impact of the adversarial action (misclassification in our case) and a distance measure that defines the size of a perturbation (i.e., the L1, L2, and/or L-inf norm, which must be less than the specified $\epsilon$). These are then taken into consideration by an attacker model to generate an adversarial sample.
The following equation shows the general distance formula; depending on the value of $p$, the L1, L2, or L-inf norm is obtained.
\[
\|x - \hat{x}\|_p \; = \; \Big(\; \sum_{i=1}^{d} | x_i - \hat{x}_i|^p \;\Big)^{1/p}
\]
We set the value of epsilon to 1.0, since it allows a significant number of adversarial samples to be generated for all the attack methods used. Because it takes a lot of time to generate adversarial samples using the attack algorithms, we used 500 balanced inputs (i.e., 50 images from each of the 10 classes) from the test data.
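A sketch of this generation step, assuming the Foolbox 3.x API, is shown below; the chosen attack class is just one example out of those listed in Table~\ref{tab:attack-enumeration}.
\begin{verbatim}
import foolbox as fb

def generate_adversarial(model, images, labels, eps=1.0):
    """Generate adversarial samples on a trained attacker CNN;
    `images`/`labels` are the 500 balanced test inputs in [0, 1]."""
    fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
    attack = fb.attacks.LinfDeepFoolAttack()   # e.g. Deep Fool with L-inf
    raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=eps)
    return clipped, is_adv                     # perturbed images, success mask
\end{verbatim}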
To demonstrate how well adversarial samples transfer, we use a confusion matrix as a visual guide. In a given confusion matrix, each row represents instances of a predicted class, whereas each column represents instances of the true/actual class to which a given input belongs. The diagonal of the confusion matrix shows the number of inputs of each class that were correctly predicted after an attack is launched. For example, Figure~\ref{fig:confusion-linf} shows the confusion matrix of adversarial samples generated by the Deep Fool attack (with the L-inf norm) on LeNet. It has all zero entries on the diagonal, which means that the inputs (i.e., adversarial samples) were misclassified in all classes. This implies that the attack that generated the adversarial samples is very powerful, since they were all misclassified. On the other hand, Figure~\ref{fig:confusion-l2} shows the confusion matrix of adversarial samples generated by the Additive Gaussian Noise attack (with the L2 norm) on LeNet. In this confusion matrix, however, the diagonal has larger, non-zero entries, which illustrates that the attack used to generate the adversarial samples is less powerful, as many of the samples are correctly classified.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\columnwidth]{{confusion-linf}.png}
\caption{Confusion matrix of adversarial samples generated using Deep Fool attack with L-inf norm on LeNet. \label{fig:confusion-linf}}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\columnwidth]{{confusion-l2}.png}
\caption{Confusion matrix of adversarial samples generated using Additive Gaussian Noise attack with L2 norm on LeNet.\label{fig:confusion-l2}}
\vspace{-0.2cm}
\end{figure}
\subsection{Experimental Procedure}
Here, we describe the procedure for performing the analysis and generating the results shown in the Evaluation. First, the adversarial samples are generated by applying an attack to the original dataset on an attacker model (which can be LeNet, AlexNet, or Vgg-11 in any given scenario). Once the adversarial samples are generated on the attacker model, they are used on the victim models (each of which can again be LeNet, AlexNet, or Vgg-11). Then, statistics on the number of mispredictions, as well as the predicted classes, are collected. We also calculate the Structural Similarity Index Measure (SSIM) between each adversarial sample and the original sample to compare how visually similar they are (the SSIM value ranges from 0 to 1; a higher value indicates more similarity). This measure has been reported in the literature to correlate better with human perception than the Mean Absolute Distance (MAD). Hence, it serves as a metric for estimating how much the perturbed (adversarial) and original images differ visually.
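The core of this procedure can be summarized in the sketch below. It is a simplified outline rather than our exact implementation: the victim models are assumed to be provided as a dictionary, batching and bookkeeping are omitted, and the SSIM call assumes a recent scikit-image release (older versions use the \texttt{multichannel} flag instead of \texttt{channel\_axis}).
\begin{verbatim}
import torch
from skimage.metrics import structural_similarity as ssim

def evaluate_transfer(adv_images, clean_images, true_labels, victim_models):
    """Count mispredictions of adversarial samples on each victim model and
    compute SSIM against the clean originals (simplified sketch)."""
    mispredictions = {}
    for name, victim in victim_models.items():
        with torch.no_grad():
            preds = victim(adv_images).argmax(dim=1)
        mispredictions[name] = int((preds != true_labels).sum())

    # Per-sample SSIM between clean and adversarial images (CHW -> HWC).
    ssim_scores = [
        ssim(c.permute(1, 2, 0).numpy(), a.permute(1, 2, 0).numpy(),
             channel_axis=-1, data_range=1.0)
        for c, a in zip(clean_images, adv_images)
    ]
    return mispredictions, ssim_scores
\end{verbatim}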
\section{Evaluation}
We obtained three kinds of results using adversarial samples generated on the attacker models: i) the number of mispredictions when the adversarial samples are used on the victim models; ii) the classes to which the (mis)predictions belong when the adversarial samples are used on the victim models; and iii) the SSIM values between the original and adversarial samples.
We used these results to assess the effectiveness of the attacks used in generating the adversarial samples. This assessment led us to identify four main factors that contribute most to the transferability of adversarial samples. In the following, we discuss these factors and provide the results that back up our findings on each factor's implications.
\subsection{Factor 1: The attack itself}
We observed that some of the attacks used in generating adversarial samples are simply more powerful than others (regardless of the victim model). That is, the adversarial samples generated by these attacks are easily transferable, hence leading to a high number of mispredictions on the victim model.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{attacks}.png}
\caption{Average number of mispredictions for adversarial samples transferred to LeNet, AlexNet, and Vgg-11. \label{fig:attacks}}
\end{figure}
Figure~\ref{fig:attacks} shows that the attacks with labels 1, 5, 8, 11, 14, 17, 25, 29, 32, 36, and 40 lead to a higher number of mispredictions when their adversarial samples are used on the victim models; hence, those attacks are more powerful. Further, the attacks with labels 11, 29, and 36 yield the highest number of mispredictions (on any victim model). This result shows that the transferability of an adversarial sample depends strongly on the attack that generated it.
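The per-attack averages behind Figure~\ref{fig:attacks} amount to a simple aggregation over the collected statistics. The sketch below illustrates this with pandas on a toy long-format table; the column names and the few rows shown are purely illustrative.
\begin{verbatim}
import pandas as pd

# One row per (attack, victim model) with the misprediction count; in
# practice this table is built from the collected transfer statistics.
df = pd.DataFrame({
    "attack_label":   [11, 11, 29, 29],
    "victim_model":   ["LeNet", "AlexNet", "LeNet", "AlexNet"],
    "mispredictions": [412, 388, 455, 430],
})

# Average number of mispredictions per attack across victim models.
per_attack = (df.groupby("attack_label")["mispredictions"]
                .mean()
                .sort_values(ascending=False))
print(per_attack)
\end{verbatim}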
\subsection{Factor 2: Norm Used in the Attack}
We observed that the same attack yields varying degrees of transferability depending on the norm used to generate the adversarial samples. In general, attacks that use the L-inf norm tend to produce adversarial samples that exhibit a higher number of mispredictions than attacks using the L2 and L1 norms. Figures~\ref{fig:lenet-attacker-distances},~\ref{fig:alexnet-attacker-distances} and \ref{fig:vgg11-attacker-distances} show results for attacks that use different norms when generating adversarial samples. In particular, Figure~\ref{fig:lenet-attacker-distances} shows the average number of mispredictions per attack for adversarial samples generated on LeNet. Among the attacks, Deep Fool, AUN, and RAUN are implemented only with the L-inf and L2 norms, whereas the rest have implementations for the L1, L2, and L-inf norms. Clearly, the adversarial samples generated with the L-inf norm transfer more readily than those generated with the L1 and L2 norms. Likewise, Figures~\ref{fig:alexnet-attacker-distances} and~\ref{fig:vgg11-attacker-distances} show the average number of mispredictions per attack for adversarial samples generated on AlexNet and Vgg-11, respectively. The findings are consistent across the victim models, indicating that the norm used by a given attack has a significant impact on the transferability of its adversarial samples.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{lenet-attacker-distances}.png}
\caption{Average number of mispredictions per attack for adversarial samples generated on LeNet. \label{fig:lenet-attacker-distances}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{alexnet-attacker-distances}.png}
\caption{Average number of mispredictions per attack for adversarial samples generated on AlexNet. \label{fig:alexnet-attacker-distances}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{vgg11-attacker-distances}.png}
\caption{Average number of mispredictions per attack for adversarial samples generated on Vgg-11. \label{fig:vgg11-attacker-distances}}
\end{figure}
While the L-inf norm yields adversarial samples that transfer better than the other norms, it should be noted that the disturbance made to the input sample may also become more pronounced. Comparing the SSIM values of adversarial samples generated using different norms shows that L-inf consistently produces significantly perturbed samples. In Figure~\ref{fig:ssim}, the ranges of SSIM values are labeled as: Excellent = ( 0.75 $\leq$ SSIM $\leq$ 1.0 ), Good = ( 0.55 $\leq$ SSIM $\leq$ 0.74 ), Poor = (0.35 $\leq$ SSIM $\leq$ 0.54), and Bad = (0.00 $\leq$ SSIM $\leq$ 0.34). We observed that many of the adversarial samples generated with the L-inf norm have lower SSIM values, indicating that the perturbations may be perceptible to a human. Therefore, SSIM values can be used to gauge the effectiveness of a given attack: although an attack aims to maximize the number of mispredictions, it should be considered stronger if it yields a high number of mispredictions while also keeping SSIM high. A sketch of the binning used in Figure~\ref{fig:ssim} is given after the figure.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{ssim}.png}
\caption{SSIM values for adversarial samples generated on AlexNet. \label{fig:ssim}}
\end{figure}
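The binning above can be reproduced with a few lines of code; the SSIM values shown are illustrative stand-ins for the scores computed during the evaluation.
\begin{verbatim}
import numpy as np

ssim_scores = np.array([0.92, 0.61, 0.40, 0.12])   # illustrative values

labels = ["Bad", "Poor", "Good", "Excellent"]
edges = [0.0, 0.35, 0.55, 0.75, 1.0 + 1e-9]        # right-open bins
counts = {
    lab: int(np.sum((ssim_scores >= lo) & (ssim_scores < hi)))
    for lab, lo, hi in zip(labels, edges[:-1], edges[1:])
}
print(counts)   # e.g., {'Bad': 1, 'Poor': 1, 'Good': 1, 'Excellent': 1}
\end{verbatim}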
\subsection{Factor 3: Closeness of the Target Model to the Attacker Model}
Not surprisingly, we observed that adversarial samples yield a higher number of mispredictions on the models on which they were generated (i.e., when the attacker and victim models are the same). For example, adversarial samples generated on AlexNet lead to a higher number of mispredictions when these samples are used on AlexNet or on a similar model (e.g., a variation of AlexNet). However, when these adversarial samples are used on other (dissimilar) victim models, they lead to a comparably lower number of mispredictions. These findings are shown in Figures~\ref{fig:lenet-attacker-model},~\ref{fig:alexnet-attacker-model} and \ref{fig:vgg11-attacker-model}.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{lenet-attacker-model}.png}
\caption{Number of mispredictions for adversarial samples that are generated on LeNet.\label{fig:lenet-attacker-model}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{alexnet-attacker-model}.png}
\caption{Number of mispredictions for adversarial samples that are generated on AlexNet. \label{fig:alexnet-attacker-model}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{{vgg11-attacker-model}.png}
\caption{Number of mispredictions for adversarial samples that are generated on Vgg-11.\label{fig:vgg11-attacker-model}}
\end{figure}
The implication of this factor is that if an attacker can generate adversarial samples on a model that is similar to the victim model, then the generated adversarial samples are more likely to transfer effectively. This methodology can be used by industry experts to test how well adversarial samples transfer to their ML models. One way to exploit this observation for security-critical applications is to build multiple ML models that are dissimilar in structure but provide similar prediction accuracy, and then use a majority vote (or a similar scheme) to decide the proper prediction. If a particular attack transfers and is effective on one of the ML models, our analysis suggests that the other (dissimilar) models are likely to be less sensitive to the same attack, providing a way to detect the anomaly and avoid the undesired consequences of adversarial samples; a minimal sketch of such a majority-vote check is given below. Building ML models that differ in structure while yielding similar accuracy is an active research direction, not just for security-related concerns, but also for reliability, power management, performance, and scalability.
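The following sketch illustrates the majority-vote idea over structurally dissimilar models; the disagreement threshold and the decision to flag inputs without a clear majority are assumptions made for illustration.
\begin{verbatim}
import torch

def majority_vote(models, x):
    """Return the majority-vote prediction of several (dissimilar) models
    and a flag marking inputs on which no clear majority exists."""
    with torch.no_grad():
        # Shape: (n_models, batch) of predicted class indices.
        preds = torch.stack([m(x).argmax(dim=1) for m in models])
    voted, _ = preds.mode(dim=0)                  # most frequent class per input
    agreement = (preds == voted).float().mean(dim=0)
    suspicious = agreement <= 0.5                 # no strict majority -> flag
    return voted, suspicious
\end{verbatim}
In a deployment, flagged inputs could be rejected or routed to a slower, more robust pipeline instead of being acted upon directly.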
\subsection{Factor 4: Sensitivity of an Input}
The inherent sensitivity of an input to a particular attack can determine the strength of the resulting adversarial sample and how well it transfers to a victim model. We can summarize our observations about the sensitivity of the inputs used in the attacks as follows.
\begin{enumerate}
\item Some inputs are very sensitive to almost any attack, so the adversarial samples generated from them transfer effectively to the victim models (e.g., the input images with indices 477, 479, 480 and 481 in Figure~\ref{fig:vgg11-misprediction}).
\item Some inputs are insensitive to attacks, so the adversarial samples generated from them are ineffective and do not get mispredicted, regardless of the victim model (e.g., the input images with indices 481, 484, 494 in Figure~\ref{fig:vgg11-misprediction}).
\item Some inputs are sensitive to specific attacks on a particular victim model, meaning the adversarial samples become effective only when they are generated by a particular subset of attacks and transferred to a particular model (but not when used on other models). For example, the input images with indices 465 and 467 in Figure~\ref{fig:vgg11-misprediction} become more sensitive (and thus the corresponding adversarial samples more effective) when they are transferred to the LeNet and AlexNet models, respectively (but not on other models).
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{{vgg11_models_last40_df}.png}
\caption{The number of effective attacks (i.e., attacks yielding an adversarial sample that is mispredicted) for a particular input used on Vgg-11 as the attacker model (zoomed in to the last 40 input images). \label{fig:vgg11-misprediction}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{{lenet_models_total_df}.png}
\caption{The number of effective attacks (i.e., attacks yielding an adversarial sample that is mispredicted) for a particular input used on LeNet as the attacker model. \label{fig:lenet-misprediction-all}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{{alexnet_models_total_df}.png}
\caption{The number of effective attacks (i.e., attacks yielding an adversarial sample that is mispredicted) for a particular input used on AlexNet as the attacker model. \label{fig:alexnet-misprediction-all}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{{vgg11_models_total_df}.png}
\caption{The number of effective attacks (i.e., attacks yielding an adversarial sample that is mispredicted) for a particular input used on Vgg-11 as the attacker model. \label{fig:vgg11-misprediction-all}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{{collective-histogram}.png}
\caption{Histogram summarizing the sensitivity of inputs to attacks. The x-axis indicates the number of effective attacks for a given input (i.e., attacks whose adversarial samples transfer successfully to the victim models regardless of the attacker model), and the y-axis indicates the number of inputs whose adversarial samples (generated by that many attacks) transfer effectively to the victim models. \label{fig:collective-histogram} }
\end{figure}
Figure~\ref{fig:vgg11-misprediction} shows the number of effective attacks used to generate adversarial samples on Vgg-11. For better visibility, only the last 40 input images (out of 500) are shown in Figure~\ref{fig:vgg11-misprediction}, where the x-axis shows the index of the input image and the y-axis shows the number of attacks whose generated adversarial samples are mispredicted on the victim models (see Figure~\ref{fig:vgg11-misprediction-all} for all 500 inputs used on Vgg-11). Since 40 attacks are used to generate adversarial samples, the y-axis value can be at most 40 (which would mean that all of the attacks yielded adversarial samples resulting in mispredictions). The results for the complete set of 500 input images are shown in Figures~\ref{fig:alexnet-misprediction-all} and~\ref{fig:lenet-misprediction-all} for AlexNet and LeNet as the attacker model, respectively.
The implication of this factor is that the inherent characteristics of an input may play a role in how effectively the generated adversarial samples transfer to the victim models. When combined with the strength of an attack, inputs that are sensitive to the given set of attacks (irrespective of the attacker model) may yield more effective adversarial samples than other inputs.
Figure~\ref{fig:collective-histogram} illustrates this phenomenon. Most of the input images are sensitive to roughly 10 of the 40 attacks (regardless of the attacker model used), but relatively few inputs are sensitive to all the attacks (23 input images yield adversarial samples that are mispredicted on all the victim models, regardless of the attacker model and attack used). A sketch of how these per-input counts and the histogram can be derived is given below.
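The sketch assumes the same long-format results table used earlier, with one row per (input, attack, victim model) combination; the column names, toy rows, and the choice to count an attack as effective if it causes a misprediction on any victim model are illustrative simplifications.
\begin{verbatim}
import pandas as pd
import matplotlib.pyplot as plt

# Toy rows; in practice the table holds all 500 inputs x 40 attacks.
df = pd.DataFrame({
    "input_index":  [465, 465, 477, 477],
    "attack_label": [11, 36, 11, 36],
    "mispredicted": [True, False, True, True],
})

# Number of effective attacks per input (an attack is counted as effective
# if it causes a misprediction on at least one victim model).
effective = (df[df["mispredicted"]]
             .groupby("input_index")["attack_label"]
             .nunique())

effective.plot(kind="bar")                       # per-input counts
plt.figure()
effective.plot(kind="hist", bins=range(0, 41))   # sensitivity histogram
plt.show()
\end{verbatim}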
\section{Conclusion}
In its simplest form, \textit{transferability} can be defined as the ability of adversarial samples generated using the attacker model to be mispredicted when transferred to the victim model. We identified that most of the literature on transferability focuses on interpreting and evaluating transferability from the machine learning model perspective alone, which we refer to as the model-centric approach. In this work, we took an alternative path, which we call the attack-centric approach, that focuses on investigating machine learning attacks to interpret and evaluate how adversarial samples transfer to the victim models. For each attacker model, we generated adversarial samples that were then transferred to the three victim models (i.e., LeNet, AlexNet and Vgg-11).
We identified four factors that influence how well an adversarial sample transfers.
Our hope is that these factors serve as useful guidelines for researchers and practitioners in the field to mitigate the adverse impact of black-box attacks and to build more attack-resistant and secure machine learning systems.
\vskip 0.2in