Survey on Context-Aware Pervasive Learning Environments <s> E. Observation 5 <s> In this paper, we describe an interactive guide system for kids in museums. The system uses a sensing board which can rapidly recognize types and locations of multiple objects, and creates an immersive environment by giving users visual and auditory feedback to their manipulations on the board. The purpose of the system is to attract users' interests in a real exhibition by allowing them to interact with the corresponding virtual exhibition on the board, and providing them with its information. We have evaluated the system in a museum. From the interviews and questionnaires, we have confirmed that it is easy for kids to use the system and it can raise their motivation for visiting real exhibitions. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> E. Observation 5 <s> We explore the use of ubiquitous sensing in the home for contextsensitive microlearning. To assess how users would respond to frequent and brief learning interactions tied to context, a sensor-triggered mobile phone application was developed, with foreign language vocabulary as the learning domain. A married couple used the system in a home environment, during the course of everyday activities, for a four-week study period. Built-in and stick-on multi-modal sensors detected the participants' interactions with hundreds of objects, furniture, and appliances. Sensor activations triggered the audio presentation of English and Spanish phrases associated with object use. Phrases were presented on average 57 times an hour; this intense interaction was found to be acceptable even after extended use. Based on interview feedback, we consider design attributes that may have reduced the interruption burden and helped sustain user interest, and which may be applicable to other context-sensitive, always-on systems. <s> BIB002 </s> Survey on Context-Aware Pervasive Learning Environments <s> E. 
Observation 5 <s> In this paper we present the concept and technical architecture of the SciMyst pervasive mobile game. Encouraged by the positive experiences of game deployment at the SciFest2007 science festival, in Joensuu, Finland, we discuss means to use SciMyst in the context of museums with an aim to boost visitor engagement and interaction with the surrounding environment. As the result, we propose an array of novel technologies to be used in the SciMyst framework for museums. <s> BIB003
The roles of the physical environment showed some variation, but in general three different roles were recognisable, albeit not explicitly presented. These roles and their respective frequencies were: context for learning (9), content for learning (7), and system resource (3). It is worth noticing that in one system the environment can have multiple roles; for example, there were five cases where the environment was both context and content for learning. Additionally, two of the reviewed systems, an interactive sensor board for museums BIB001 and an interactive toy set for children, did not utilise the environment, and one paper did not state the role of the environment at all. The environment is a context for learning when learning is situation-based and the system adapts according to the situations and contexts in which the user is present; this is also called contextual or situational learning. The environment provides content for learning when the system utilises information within the environment as a learning resource. Finally, the environment is a system resource when objects within the environment act as triggers for system events (e.g. furniture with embedded sensors which trigger usage events BIB002 ).
IV. DISCUSSION
The evidence presented in Observation 1 suggests that RFID is the most prevalent sensor technology used in pervasive learning environments, in part due to the relatively cheap price of RFID tags (approx. 1€ each in the authors' countries) and readers (150€), compared to the cost of a basic wireless sensor node of at least 300€. RFID readers are already available in some mobile devices as integrated chips, including models by Nokia and Samsung, and we expect RFID to become a mainstream technology in mobile devices within 5 years. This development will enable tagging any object in a pervasive learning environment, thus making the underlying system more aware of the environment.
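The trigger pattern behind the system resource role (and the sensor-triggered microlearning system of BIB002) can be sketched roughly as follows. All names and the tag-to-phrase mapping here are invented for illustration; they do not come from the surveyed systems:

```python
# Hypothetical sketch of the trigger pattern: a tag read on an everyday
# object fires a context-sensitive microlearning prompt, in the spirit of
# the sensor-triggered vocabulary system of BIB002.

VOCAB = {  # object tag -> (English, Spanish) phrase pair
    "tag:fridge": ("the refrigerator", "el refrigerador"),
    "tag:door":   ("the door", "la puerta"),
    "tag:chair":  ("the chair", "la silla"),
}

def on_tag_read(tag_id, vocab=VOCAB):
    """Return the learning prompt to present for a scanned object, if any."""
    pair = vocab.get(tag_id)
    if pair is None:
        return None  # unknown object: no prompt, avoid interrupting the user
    english, spanish = pair
    return f"{english} -> {spanish}"

print(on_tag_read("tag:door"))   # the door -> la puerta
print(on_tag_read("tag:plant"))  # None (untagged object)
```

The point of the lookup-with-fallback design is the interruption burden noted in BIB002: an unrecognised trigger should produce silence rather than a spurious prompt.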
Observation 2 identifies several suitable learning models; however, these require proper validation and comparison. Many of the proposed learning models were not validated, and those that were did not provide reliable results, as the test scenarios were inadequate in terms of the number of test participants and repetitions. It was discouraging to discover that only a handful of papers explicitly discussed learning models, which leads us to believe that the authors of the other papers either did not consider learning models at all or did not include that information. All the learning models followed an informal constructivist approach. Authentic learning was mentioned more than once, suggesting its potential for pervasive learning. Nevertheless, the results of Observation 2 indicate that learning model validations are required in this field before any of the models can be seriously recommended. Observation 3 concentrated on the technical implementations of pervasive learning environments and the roles of mobile devices in them. The use of client-server architectures in most of the systems shows that centralised control is used in preference to a distributed system. The benefits of a centralised approach are ease of installation and maintenance. However, a distributed system consisting of autonomous sensor nodes and one or more coordinating servers would be more fault-tolerant and load-balanced. Fault tolerance is particularly important in large systems which run constantly and have hundreds or thousands of resources. The systems presented in the reviewed papers were quite small, so the absence of distributed control is justified. The popularity of PDA devices (6) as clients over Tablet PCs BIB002 and mobile phones (3) can be explained by screen size, physical dimensions, and processing capabilities. Displays on mobile phones are often too small for viewing information other than text and low-quality images/video.
On the other hand, Tablet PCs have large displays, but they are more difficult to carry around due to their large physical size. PDA devices often have larger displays than mobile phones and are smaller than Tablet PCs. Moreover, PDA devices have enough processing power for handling basic media types, while resources are often more limited on mobile phones. Despite the popularity of PDA devices, mobile phone and PDA technologies have been converging, and a similar trend of convergence is going on between laptops and mobile phones/PDAs. These new devices are called Ultra Mobile PCs (UMPCs); they are smaller than Tablet PCs but bigger than mobile phones or PDAs. In addition to being highly portable, UMPC devices are capable of running a full-scale Windows XP operating system or an equivalent Linux distribution, making them suitable client devices for various software solutions supporting pervasive learning activities. Currently the problems of UMPCs are high price and relatively short battery life; however, we can expect these aspects to improve in the near future. According to Observation 3, there were five types of roles for mobile devices in the reviewed systems: data collection tool, content representation tool, communication tool, navigation tool and notice receiving tool. Since the content representation tool was the only role with a frequency of more than 10, many of the systems merely concentrated on providing context-sensitive content to the user. This indicates that there is work to be done to increase interaction between the environment and the users, as well as among the users. For example, data gathered with a data collection tool can be saved and processed later to continue the learning experience at another location, e.g. at home or in a classroom. As another example, communication with peers can help users establish and strengthen social relationships.
Observation 4 concluded that only a few pervasive learning environments are truly multi-user systems that support communication among users. The lack of voice- and video-based communication was also noted, and we suggest that a reason may be the requirement for other running applications to be closed before using mobile phones' built-in voice call capabilities. Furthermore, creating a new reliable VoIP (Voice over IP) application is not a trivial task. Audio/video-based communication is more personal, instant and effective than forums or chats. If a pervasive learning environment is to be built on a principle of virtual collaboration, instant communication is possibly a good way to implement it. An alternative method is to provide a meeting request tool through which two or more users could meet physically after agreeing on it virtually. This kind of approach was used by BIB003, where two users of the system met physically after one user had sent a help request to the other. In Observation 5, we distinguished three different roles for the physical environment in pervasive learning systems: context for learning, content for learning and system resource; the frequency figures (9, 7 and 3, respectively) indicate that context and content are used most often. Usage of the environment as a system resource would be higher if more systems embedded wireless sensor networking components for sensing different aspects of the environment. The low frequency of the system resource role is related to the lack of interaction with the environment; if the system could closely observe the user's behaviour and the state of the physical environment, it would become more responsive and adaptive. This would in turn encourage users to interact more with the environment by using different objects and observing the consequences on the mobile device or in the physical environment.
A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> INTRODUCTION <s> Practical design and performance solutions for every ad hoc wireless networkAd Hoc Wireless Networks comprise mobile devices that use wireless transmission for communication. They can be set up anywhere and any time because they eliminate the complexities of infrastructure setup and central administration-and they have enormous commercial and military potential. Now, there's a book that addresses every major issue related to their design and performance. Ad Hoc Wireless Networks: Architectures and Protocols presents state-of-the-art techniques and solutions, and supports them with easy-to-understand examples. The book starts off with the fundamentals of wireless networking (wireless PANs, LANs, MANs, WANs, and wireless Internet) and goes on to address such current topics as Wi-Fi networks, optical wireless networks, and hybrid wireless architectures. Coverage includes: Medium access control, routing, multicasting, and transport protocols QoS provisioning, energy management, security, multihop pricing, and much more In-depth discussion of wireless sensor networks and ultra wideband technology More than 200 examples and end-of-chapter problemsAd Hoc Wireless Networks is an invaluable resource for every network engineer, technical manager, and researcher designing or building ad hoc wireless networks. <s> BIB001 </s> A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> INTRODUCTION <s> Nodes in a mobile ad hoc network have limited battery power. If a node is used frequently for transmission or overhearing of data packets, more energy is consumed by that node and after certain amount of time the energy level may not be sufficient for data transmission resulting in link failure. 
In this paper, we have considered two routing protocols-Dynamic Source Routing (DSR) & Minimum Maximum Battery cost Routing (MMBCR) and studied their performances in terms of network lifetime for the same network scenario. Simulations are carried out using NS2. Finally from the simulation results we have concluded that MMBCR gives more network lifetime by selecting route with maximum battery capacity thereby outperforming DSR. General Terms Energy efficiency, MANETS, Routing Protocols. <s> BIB002
An ad hoc network is a multi-hop wireless network consisting of autonomous mobile nodes interconnected over a wireless medium without any fixed infrastructure. Its quick and easy deployment in situations where setting up a fixed-infrastructure network is impractical has increased its potential use in critical application scenarios such as battlefields, emergency disaster relief, and conferences. A mobile ad hoc network (MANET) BIB001 BIB002 is characterised by mobile nodes that are free to move in any direction and can self-configure, self-maintain and self-organise within the network over radio links, without any fixed infrastructure such as base stations, fixed links, routers, or centralised servers. Since no base station or central coordinator exists in the network, every node participating in the communication must also take on the responsibility of a router. Hence, all nodes incorporate a routing mechanism in order to transmit data packets from source to destination. Nodes are operated by batteries of limited capacity, and they all suffer from severe battery consumption, especially when they participate in data communication for various sources and destinations. Uninterrupted data transmission from a particular source to a destination requires continual updating of the path; whenever no path from source to destination can be found, the route discovery process must be invoked, and repeated route discoveries may introduce heavy power consumption.
A number of routing approaches have been proposed to reduce the various types of power consumption in wireless ad hoc networks, which not only prolongs the lifespan of individual nodes but also reduces network partitioning and enhances the performance of the network. A fixed-infrastructure wireless network is a static network whose components must be set up permanently before communication can be established, which involves not only considerable time but also considerable cost. The best example of a fixed-infrastructure network is the Global System for Mobile Communication (GSM), the second-generation mobile cellular system, which is also a wireless network. In GSM, the network architecture comprises several base transceiver stations (BTS), which are clustered and connected to a base station controller (BSC). Several BSCs are connected to a Mobile Switching Centre (MSC). The MSC has access to several databases, including the Visitor Location Register (VLR) and the Home Location Register (HLR), and is also responsible for establishing, managing and clearing connections as well as routing calls to the proper radio cells. Here, even though the nodes are mobile, they are limited to a fixed number of hops when communicating with other nodes. In a MANET, the situation is completely different. The network is temporary, set up for a specific purpose and for a certain period of time, and it is based on multi-hop technology in which data can be transmitted through a number of intermediate nodes from source to destination. The rapidly growing demand for MANETs in recent years has challenged researchers to take up crucial issues such as bandwidth utilisation, limited wireless transmission range, the hidden-terminal and exposed-terminal problems, packet loss due to transmission errors, mobility, sudden route changes, security, and battery constraints.
One of the important challenges of MANETs is the power constraint. Mobile ad hoc networks operate on battery power, and power is consumed mainly in two ways: first, by transmitting data to a desired recipient; second, by a mobile node offering itself as an intermediate forwarding node in the network. The power level of a node is also affected whenever a route is established between two end points. The trade-off between the frequency of route update dissemination and battery power utilisation is one of the major design issues of ad hoc network protocols, because high power consumption increases the battery depletion rate, which in turn reduces the node's lifetime and the network's lifetime and causes network partitioning. Frequent network partitioning degrades performance through an increased number of retransmissions, packet loss, higher end-to-end delay and many other problems. Therefore, various energy-efficient routing protocols have been proposed to increase the lifetime of the nodes as well as the lifetime of the network, so that communication can be carried out without interruption. This article presents and analyses different energy-efficient routing protocols designed for ad hoc wireless networks that are based on the mechanism of the traditional DSR routing protocol. The remainder of the article is organised as follows. Section 2 presents the two subdivisions of ad hoc routing protocols and their basic routing mechanisms; we emphasise the basic working principle of the DSR routing protocol, since all the energy-efficient routing protocols explained here are based on DSR. Section 3 sheds some light on the need for energy-aware routing protocols in MANETs and the different approaches to achieving that goal. Section 4 highlights the related work that has been done to make DSR an efficient energy-aware routing protocol. Finally, Section 5 concludes the article.
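The trade-off between route-update frequency and battery depletion raised earlier in this section can be illustrated with a back-of-the-envelope model. All energy figures and packet rates below are assumed values chosen for illustration, not measurements from any of the surveyed papers:

```python
# Illustrative model: a node's lifetime is its battery capacity divided by
# the average power drawn by data traffic plus routing-control traffic.
# More frequent route updates keep paths fresh but drain the battery sooner.

def node_lifetime_hours(capacity_j, data_pkts_per_s, update_pkts_per_s,
                        e_data_j=0.005, e_ctrl_j=0.002):
    """Hours until a battery of capacity_j joules is depleted (assumed
    per-packet energies e_data_j and e_ctrl_j)."""
    power_w = data_pkts_per_s * e_data_j + update_pkts_per_s * e_ctrl_j
    return capacity_j / power_w / 3600.0

# Same data load, two route-update policies:
lazy = node_lifetime_hours(10_000, data_pkts_per_s=20, update_pkts_per_s=1)
eager = node_lifetime_hours(10_000, data_pkts_per_s=20, update_pkts_per_s=10)
print(f"lazy updates:  {lazy:.1f} h")   # longer node lifetime, staler routes
print(f"eager updates: {eager:.1f} h")  # fresher routes, earlier depletion
```

Even this crude model shows the design tension: the eager policy spends a noticeable fraction of the battery on control traffic, which is exactly the overhead that repeated route discoveries impose in DSR.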
A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> ROUTING PROCESS IN AD HOC NETWORKS <s> An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power. <s> BIB001 </s> A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> ROUTING PROCESS IN AD HOC NETWORKS <s> Practical design and performance solutions for every ad hoc wireless networkAd Hoc Wireless Networks comprise mobile devices that use wireless transmission for communication. 
They can be set up anywhere and any time because they eliminate the complexities of infrastructure setup and central administration-and they have enormous commercial and military potential. Now, there's a book that addresses every major issue related to their design and performance. Ad Hoc Wireless Networks: Architectures and Protocols presents state-of-the-art techniques and solutions, and supports them with easy-to-understand examples. The book starts off with the fundamentals of wireless networking (wireless PANs, LANs, MANs, WANs, and wireless Internet) and goes on to address such current topics as Wi-Fi networks, optical wireless networks, and hybrid wireless architectures. Coverage includes: Medium access control, routing, multicasting, and transport protocols QoS provisioning, energy management, security, multihop pricing, and much more In-depth discussion of wireless sensor networks and ultra wideband technology More than 200 examples and end-of-chapter problemsAd Hoc Wireless Networks is an invaluable resource for every network engineer, technical manager, and researcher designing or building ad hoc wireless networks. <s> BIB002 </s> A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> ROUTING PROCESS IN AD HOC NETWORKS <s> Nodes in a mobile ad hoc network have limited battery power. If a node is used frequently for transmission or overhearing of data packets, more energy is consumed by that node and after certain amount of time the energy level may not be sufficient for data transmission resulting in link failure. In this paper, we have considered two routing protocols-Dynamic Source Routing (DSR) & Minimum Maximum Battery cost Routing (MMBCR) and studied their performances in terms of network lifetime for the same network scenario. Simulations are carried out using NS2. 
Finally from the simulation results we have concluded that MMBCR gives more network lifetime by selecting route with maximum battery capacity thereby outperforming DSR. General Terms Energy efficiency, MANETS, Routing Protocols. <s> BIB003
In a MANET BIB002 BIB003 BIB001 , routing is the process of establishing a route and then forwarding packets from source to destination through intermediate nodes when the destination node is not directly within the range of the sender. Route establishment itself is a two-step process: first, route discovery, which finds the different routes from a given source to the destination; second, route selection, which chooses a particular route among all routes found for that source-destination pair. Traditional protocols and data structures are available to maintain the routes and to select the path with the minimum distance from source to destination, where distance is measured in hop count.
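As a rough sketch of the two selection policies compared in this review, minimum-hop selection (as in traditional DSR) can be contrasted with MMBCR-style max-min residual-battery selection (as described in BIB003 above). The topology and battery values below are invented for illustration:

```python
# Two route-selection policies over the same set of discovered routes.
# A route is a list of node IDs from source "S" to destination "T";
# battery[] holds residual energy for each node (assumed units).

battery = {"S": 100, "A": 90, "B": 15, "C": 70, "D": 60, "E": 80, "T": 100}

routes = [
    ["S", "B", "T"],            # 2 hops, but relay B is nearly drained
    ["S", "A", "C", "T"],       # 3 hops through healthier relays
    ["S", "E", "D", "C", "T"],  # 4 hops
]

def min_hop_route(routes):
    """Traditional DSR-style selection: the route with the fewest hops wins."""
    return min(routes, key=lambda r: len(r) - 1)

def mmbcr_route(routes, battery):
    """MMBCR-style selection: maximise the weakest relay's residual battery."""
    def bottleneck(route):
        return min(battery[n] for n in route[1:-1])  # intermediate nodes only
    return max(routes, key=bottleneck)

print(min_hop_route(routes))         # picks the short route through weak node B
print(mmbcr_route(routes, battery))  # avoids B: weakest relay has 70 units left
```

This captures the conclusion of BIB003 in miniature: the minimum-hop policy repeatedly burdens the weakest node on the shortest path, while the max-min policy trades a hop or two for a longer network lifetime.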
A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> Retinotopic sampling and the Gabor decomposition have a well-established role in computer vision in general as well as in face authentication. The concept of Retinal Vision we introduce aims at complementing these biologically inspired tools with models of higher-order visual process, specifically the Human Saccadic System. We discuss the Saccadic Search strategy, a general purpose attentional mechanism that identifies semantically meaningful structures in images by performing "jumps" (saccades) between relevant locations. Saccade planning relies on a priori knowledge encoded by SVM classifiers. The raw visual input is analysed by means of a log-polar retinotopic sensor, whose receptive fields consist in a vector of modified Gabor filters designed in the log-polar frequency plane. Applicability to complex cognitive tasks is demonstrated by facial landmark detection and authentication experiments over the M2VTS and Extended M2VTS (XM2VTS) databases. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. 
A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> The blood vessel structure of the sclera is unique to each person, and it can be remotely obtained nonintrusively in the visible wavelengths. Therefore, it is well suited for human identification (ID). In this paper, we propose a new concept for human ID: sclera recognition. This is a challenging research problem because images of sclera vessel patterns are often defocused and/or saturated and, most importantly, the vessel structure in the sclera is multilayered and has complex nonlinear deformations. This paper has several contributions. First, we proposed the new approach for human ID: sclera recognition. Second, we developed a new method for sclera segmentation which works for both color and grayscale images. Third, we designed a Gabor wavelet-based sclera pattern enhancement method to emphasize and binarize the sclera vessel patterns. 
Finally, we proposed a line-descriptor-based feature extraction, registration, and matching method that is illumination, scale, orientation, and deformation invariant and can mitigate the multilayered deformation effects and tolerate segmentation error. The experimental results show that sclera recognition is a promising new biometrics for positive human ID. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> This paper introduces a novel face recognition problem domain: the medically altered face for gender transformation. A data set of >1.2 million face images was constructed from wild videos obtained from You Tube of 38 subjects undergoing hormone replacement therapy (HRT) for gender transformation over a period of several months to three years. The HRT achieves gender transformation by severely altering the balance of sex hormones, which causes changes in the physical appearance of the face and body. This paper explores that the impact of face changes due to hormone manipulation and its ability to disguise the face and hence, its ability to effect match rates. Face disguise is achieved organically as hormone manipulation causes pathological changes to the body resulting in a modification of face appearance. This paper analyzes and evaluates face components versus full face algorithms in an attempt to identify regions of the face that are resilient to the HRT process. The experiments reveal that periocular face components using simple texture-based face matchers, local binary patterns, histogram of gradients, and patch-based local binary patterns out performs matching against the full face. Furthermore, the experiments reveal that a fusion of the periocular using one of the simple texture-based approaches (patched-based local binary patterns) out performs two Commercial Off The Shelf Systems full face systems: 1) PittPatt SDK and 2) Cognetic FaceVACs v8.5. 
The evaluated periocular-fused patch-based face matcher outperforms PittPatt SDK v5.2.2 by 76.83% and Cognetic FaceVACS v8.5 by 56.23% for rank-1 accuracy. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and particularly effective in certain scenarios such as extreme occlusions, and illumination variations where traditional face recognition systems are unreliable. In this paper, we first propose a fully automatic, robust and fast graph-cut based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. Our experiments have been conducted over large datasets from the MBGC and AR databases and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy with an F-Measure of 99.4% and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> In this paper, we propose to combine sclera and periocular features for identity verification. The proposal is particularly useful in applications related to face recognition when the face is partially occluded with only periocular region revealed. Due to its relatively new exposition in the literature of biometrics, particular attention will be paid to sclera feature extraction in this work. For periocular feature extraction, structured random projections were adopted to extract compressed vertical and horizontal components of image features. 
The binary sclera features are eventually fused with the periocular features at a score level. Extensive experiments have been performed on UBIRIS v1 session1 and session2 databases to assess the verification performance before and after fusion. Around 5% of equal error rate performance was observed to be enhanced by fusing sclera with periocular features comparing with that before fusion. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer. Second, they do not share particularly sensitivity to any image covariate, which accounts for augmenting the robustness against degraded data. 
Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points for consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. <s> BIB007 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> Face recognition performance degrades significantly under occlusions that occur intentionally or unintentionally due to head gear or hair style. In many incidents captured by surveillance videos, the offenders cover their faces leaving only the periocular region visible. We present an extensive study on periocular region based person identification in video. While, previous techniques have handpicked a single best frame from videos, we formulate, for the first time, periocular region based person identification in video as an image-set classification problem. For thorough analysis, we perform experiments on periocular regions extracted automatically from RGB videos, NIR videos and hyperspectral image cubes. Each image-set is represented by four heterogeneous feature types and classified with six state-of-the-art image-set classification algorithms. We propose a novel two stage inverse Error Weighted Fusion algorithm for feature and classifier score fusion. The proposed two stage fusion is superior to single stage fusion. Comprehensive experiments were performed on four standard datasets, MBGC NIR and visible spectrum (Phillips et al., 2005), CMU Hyperspectral (Denes et al., 2002) and UBIPr (Padole and Proenca, 2012). We obtained average rank-1 recognition rates of 99.8, 98.5, 97.2, and 99.5% respectively which are significantly higher than the existing state of the art. 
Our results demonstrate the feasibility of image-set based periocular biometrics for real world applications. <s> BIB008
Initial studies were focused on feature extraction only (with the periocular region manually extracted), but automatic detection and segmentation have increasingly become research targets in themselves. Some works have applied a full face detector first, such as the Viola-Jones (VJ) detector, e.g. BIB002 or Juefei-Xu and Savvides (2012), but successful extraction of the periocular region in this way relies on an accurate detection of the whole face. Using iris segmentation techniques may not be reliable under challenging conditions either. On the other hand, eye detection can be a decisive pre-processing task to ensure successful segmentation of the iris texture in difficult images, as in the study by . Here, they used correlation filters to detect the eye center over the difficult FOCS database of subjects walking through a portal, achieving a 95% success rate. However, despite this good result in indicating the eye position, accuracy of the iris segmentation algorithms evaluated was between 51% and 90%. Correlation filters were also used for eye detection in BIB004, although after applying the VJ face detector. Table 2 summarizes existing research dealing with the task of locating the eye position directly, without relying on full-face or iris detectors. BIB008 used the VJ detector of face sub-parts. BIB008 also experimented with the CMU hyperspectral database, which has images captured simultaneously at multiple wavelengths. Since the eye is centered in all bands, accuracy can be boosted by collectively detecting the eye over all bands. BIB001 made use of Gabor features for eye detection and face tracking purposes by performing saccades across the image, whereas Bigun (2014, 2015) proposed the use of symmetry filters tuned to detect circular symmetries. The latter has the advantage of not needing training, and detection is possible with a few 1D convolutions due to separability of the detection filters, built from derivatives of a Gaussian.
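The correlation-filter idea behind these eye detectors can be illustrated with a plain matched filter: an eye template is cross-correlated with the image in the frequency domain, and the correlation peak marks the eye centre. The numpy sketch below illustrates this principle only; the cited works rely on trained correlation filters (and symmetry filters), not on a single raw template as assumed here.

```python
import numpy as np

def correlate_2d(image, template):
    """Circular cross-correlation of image with a zero-mean template via FFT."""
    t = template - template.mean()
    f_img = np.fft.rfft2(image)
    f_tpl = np.fft.rfft2(t, s=image.shape)   # zero-pad template to image size
    # corr[k] = sum_m image[k + m] * t[m]  (peak at the best top-left alignment)
    return np.fft.irfft2(f_img * np.conj(f_tpl), s=image.shape)

def detect_eye_center(image, template):
    """Return the (row, col) of the strongest template response."""
    corr = correlate_2d(image, template)
    k0, k1 = np.unravel_index(np.argmax(corr), corr.shape)
    th, tw = template.shape
    # shift from the top-left alignment offset to the template centre
    return (k0 + th // 2) % image.shape[0], (k1 + tw // 2) % image.shape[1]
```

On real imagery the template would be learned from training eyes and the correlation output normalized, but the peak-picking logic is the same.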
BIB005 proposed a Local Eyebrow Active Shape Model (LE-ASM) to detect the eyebrow region directly from a given face image, with eyebrow pixels segmented afterwards using graph-cut based segmentation. ASMs were also used by Juefei-Xu and Savvides (2012) to automatically extract the periocular region, albeit after the application of a VJ full-face detector. Recently, a method was proposed to label seven components of the periocular region (iris, sclera, eyelashes, eyebrows, hair, skin and glasses) by using seven classifiers at the pixel level, with each classifier specialized in one component. Pixel features used for classification included the following texture and shape descriptors: RGB/HSV/YCbCr values, Local Binary Patterns (LBP), entropy and Gabor features. Some works have proposed the extraction of features from the sclera region only, therefore requiring an algorithm to specifically segment this region. For this purpose, BIB006, BIB007 and BIB003 used the HSV/YCbCr color spaces. In these works, however, sclera detection is guided by a prior detection of the iris boundaries.
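As a rough illustration of the colour-space approach to sclera segmentation mentioned above, a candidate mask can be obtained by keeping bright, weakly saturated pixels in HSV space (the sclera being a near-white region). The thresholds below are illustrative assumptions rather than values taken from the cited works, which additionally constrain the result using the previously detected iris boundaries.

```python
import numpy as np

def sclera_candidate_mask(rgb, s_max=0.25, v_min=0.6):
    """Rough sclera candidate mask from an RGB image with values in [0, 1].

    Keeps high-value (bright), low-saturation pixels; s_max and v_min are
    illustrative thresholds that would be tuned per sensor in practice.
    """
    v = rgb.max(axis=-1)                                     # HSV value
    chroma = v - rgb.min(axis=-1)
    s = np.where(v > 0, chroma / np.maximum(v, 1e-8), 0.0)   # HSV saturation
    return (s <= s_max) & (v >= v_min)
```

A real pipeline would post-process this mask (morphological cleanup, connected components near the iris) before extracting sclera features.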
A Survey on Periocular Biometrics Research <s> Recognition using periocular features <s> The periocular region is the part of the face immediately surrounding the eye, and researchers have recently begun to investigate how to use the periocular region for recognition. Understanding how humans recognize faces helped computer vision researchers develop algorithms for face recognition. Likewise, understanding how humans analyze periocular images could benefit researchers developing algorithms for periocular recognition. We presented pairs of periocular images to testers and asked them to determine whether the two images were from the same person or from different people. Our testers correctly determined the relationship between the two images in over 90% of the queries. We asked them to describe what features in the images were helpful to them in making their decisions. We found that eyelashes, tear ducts, shape of the eye, and eyelids were used most frequently in determining whether two images were from the same person. The outer corner of the eye and the shape of the eye were used a higher proportion of the time for incorrect responses than they were for correct responses, suggesting that those two features are not as useful. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Recognition using periocular features <s> Periocular biometrics is the recognition of individuals based on the appearance of the region around the eye. Periocular recognition may be useful in applications where it is difficult to obtain a clear picture of an iris for iris biometrics, or a complete picture of a face for face biometrics. Previous periocular research has used either visible-light (VL) or near-infrared (NIR) light images, but no prior research has directly compared the two illuminations using images with similar resolution. We conducted an experiment in which volunteers were asked to compare pairs of periocular images. 
Some pairs showed images taken in VL, and some showed images taken in NIR light. Participants labeled each pair as belonging to the same person or to different people. Untrained participants with limited viewing times correctly classified VL image pairs with 88% accuracy, and NIR image pairs with 79% accuracy. For comparison, we presented pairs of iris images from the same subjects. In addition, we investigated differences between performance on light and dark eyes and relative helpfulness of various features in the periocular region under different illuminations. We calculated performance of three computer algorithms on the periocular images. Performance for humans and computers was similar. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Recognition using periocular features <s> Automated and accurate biometrics identification using periocular imaging has wide range of applications from human surveillance to improving performance for iris recognition systems, especially under less-constrained imaging environment. Restricted Boltzmann Machine is a generative stochastic neural network that can learn the probability distribution over its set of inputs. As a convolutional version of Restricted Boltzman Machines, CRBM aim to accommodate large image sizes and greatly reduce the computational burden. However in the best of our knowledge, the unsupervised feature learning methods have not been explored in biometrics area except for the face recognition. This paper explores the effectiveness of CRBM model for the periocular recognition. We perform experiments on periocular image database from the largest number of subjects (300 subjects as test subjects) and simultaneously exploit key point features for improving the matching accuracy. The experimental results are presented on publicly available database, the Ubripr database, and suggest effectiveness of RBM feature learning for automated periocular recognition with the large number of subjects. 
The results from the investigation in this paper also suggest that the supervised metric learning can be effectively used to achieve superior performance than the conventional Euclidean distance metric for the periocular identification. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Recognition using periocular features <s> In biometrics research, the periocular region has been regarded as an interesting trade-off between the face and the iris, particularly in unconstrained data acquisition setups. As in other biometric traits, the current challenge is the development of more robust recognition algorithms. Having investigated the suitability of the ‘elastic graph matching’ (EGM) algorithm to handle non-linear distortions in the periocular region because of facial expressions, the authors observed that vertices locations often not correspond to displacements in the biological tissue. Hence, they propose a ‘globally coherent’ variant of EGM (GC-EGM) that avoids sudden local angular movements of vertices while maintains the ability to faithfully model non-linear distortions. Two main adaptations were carried out: (i) a new term for measuring vertices similarity and (ii) a new term in the edges-cost function penalises changes in orientation between the model and test graphs. Experiments were carried out both in synthetic and real data and point for the advantages of the proposed algorithm. Also, the recognition performance when using the EGM and GC-EGM was compared, and statistically significant improvements in the error rates were observed when using the GC-EGM variant. . <s> BIB004
Several feature extraction methods have been proposed for periocular recognition, with a taxonomy shown in Figure 3. Existing features can be classified into: i) global features, which are extracted from the whole image or region of interest (ROI), and ii) local features, which are extracted from a set of discrete points, or key points, only. Table 3 gives an overview in chronological order of existing works for periocular recognition. The most widely used approaches include Local Binary Patterns (LBP) and, to a lesser extent, Histogram of Oriented Gradients (HOG) and Scale-Invariant Feature Transform (SIFT) key points. Over the course of the years, many other descriptors have been proposed. This section provides a brief description of the features used for periocular recognition (Sections 4.1 and 4.2), followed by a review of the works mentioned in Table 3 (Section 4.3), highlighting their most important results or contributions. Due to page limitations, we will omit references to the original works where features were first presented (unless they were originally proposed for periocular recognition in the mentioned reference). We refer to the references indicated for further information about the presented feature extraction techniques. Some preprocessing steps have also been used to cope with the difficulties found in unconstrained scenarios, such as pose correction by Active Appearance Models (AAM) (Juefei-Xu et al., 2011), illumination normalization BIB003, correction of deformations due to expression change by Elastic Graph Matching (EGM) BIB004, or color device-specific calibration . The use of subspace representation methods after feature extraction is also becoming a popular way either to improve performance or to reduce the feature set, as mentioned next in this section. There are also periocular studies with human experts.
BIB001 BIB002 evaluated the ability of (untrained) human observers to compare pairs of periocular images both with VW and NIR illumination, obtaining better results with the VW modality. They also tested three computer experts (LBP, HOG and SIFT), finding that the performance of humans and machines was similar.
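Since LBP is the most widely used descriptor in the surveyed works, its global-feature form can be sketched compactly: each pixel is encoded by thresholding its 8 neighbours against it, and the descriptor is a concatenation of per-patch histograms of these codes. The minimal numpy version below (basic 3×3 LBP without interpolation or uniform-pattern mapping, and an arbitrary 4×4 grid) is a sketch of the idea, not any particular surveyed implementation.

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour LBP code for each interior pixel (no interpolation)."""
    c = image[1:-1, 1:-1]
    neighbours = [image[0:-2, 0:-2], image[0:-2, 1:-1], image[0:-2, 2:],
                  image[1:-1, 2:],   image[2:, 2:],     image[2:, 1:-1],
                  image[2:, 0:-2],   image[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # set bit if the neighbour is at least as bright as the centre pixel
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(image, grid=(4, 4)):
    """Global descriptor: concatenated, normalized 256-bin LBP histograms
    computed over a grid of patches, as commonly done for periocular images."""
    codes = lbp_8(image)
    gh, gw = grid
    H, W = codes.shape
    feats = []
    for i in range(gh):
        for j in range(gw):
            patch = codes[i * H // gh:(i + 1) * H // gh,
                          j * W // gw:(j + 1) * W // gw]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            feats.append(hist / max(patch.size, 1))
    return np.concatenate(feats)
```

Two such descriptors would then be compared with a histogram distance (e.g. chi-square), or fed to the subspace projection methods discussed above.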
A Survey on Periocular Biometrics Research <s> Textural-based features <s> Retinotopic sampling and the Gabor decomposition have a well-established role in computer vision in general as well as in face authentication. The concept of Retinal Vision we introduce aims at complementing these biologically inspired tools with models of higher-order visual process, specifically the Human Saccadic System. We discuss the Saccadic Search strategy, a general purpose attentional mechanism that identifies semantically meaningful structures in images by performing "jumps" (saccades) between relevant locations. Saccade planning relies on a priori knowledge encoded by SVM classifiers. The raw visual input is analysed by means of a log-polar retinotopic sensor, whose receptive fields consist in a vector of modified Gabor filters designed in the log-polar frequency plane. Applicability to complex cognitive tasks is demonstrated by facial landmark detection and authentication experiments over the M2VTS and Extended M2VTS (XM2VTS) databases. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In this paper, we perform a detailed investigation of various features that can be extracted from the periocular region of human faces for biometric identification. The emphasis of this study is to explore the BEST feature extraction approach used in stand-alone mode without any generative or discriminative subspace training. Simple distance measures are used to determine the verification rate (VR) on a very large dataset. Several filter-based techniques and local feature extraction methods are explored in this study, where we show an increase of 15% verification performance at 0.1% false accept rate (FAR) compared to raw pixels with the proposed Local Walsh-Transform Binary Pattern encoding. Additionally, when fusing our best feature extraction method with Kernel Correlation Feature Analysis (KCFA) [36], we were able to obtain VR of 61.2%. 
Our experiments are carried out on the large validation set of the NIST FRGC database [6], which contains facial images from environments with uncontrolled illumination. Verification experiments based on a pure 1–1 similarity matrix of 16028×8014 (~128 million comparisons) carried out on the entire database, where we find that we can achieve a raw VR of 17.0% at 0.1% FAR using our proposed Local Walsh-Transform Binary Pattern approach. This result, while may seem low, is more than the NIST reported baseline VR on the same dataset (12% at 0.1% FAR), when PCA was trained on the entire facial features for recognition [6]. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Given an image from a biometric sensor, it is important for the feature extraction module to extract an original set of features that can be used for identity recognition. This form of feature extraction has been referred to as Type I feature extraction. For some biometric systems, Type I feature extraction is used exclusively. However, a second form of feature extraction does exist and is concerned with optimizing/minimizing the original feature set given by a Type I feature extraction method. This second form of feature extraction has been referred to as Type II feature extraction (feature selection). In this paper, we present a genetic-based Type II feature extraction system, referred to as GEFE (Genetic & Evolutionary Feature Extraction), for optimizing the feature sets returned by Local Binary Pattern Type I feature extraction for periocular biometric recognition. Our results show that not only does GEFE dramatically reduce the number of features needed but the evolved features sets also have higher recognition rates. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In this paper, we will present a novel framework of utilizing periocular region for age invariant face recognition.
To obtain age invariant features, we first perform preprocessing schemes, such as pose correction, illumination and periocular region normalization. And then we apply robust Walsh-Hadamard transform encoded local binary patterns (WLBP) on preprocessed periocular region only. We find the WLBP feature on periocular region maintains consistency of the same individual across ages. Finally, we use unsupervised discriminant projection (UDP) to build subspaces on WLBP featured periocular images and gain 100% rank-1 identification rate and 98% verification rate at 0.1% false accept rate on the entire FG-NET database. Compared to published results, our proposed approach yields the best recognition and identification results. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> We consider the problem of matching highly non-ideal ocular images where the iris information cannot be reliably used. Such images are characterized by non-uniform illumination, motion and de-focus blur, off-axis gaze, and non-linear deformations. To handle these variations, a single feature extraction and matching scheme is not sufficient. Therefore, we propose an information fusion framework where three distinct feature extraction and matching schemes are utilized in order to handle the significant variability in the input ocular images. The Gradient Orientation Histogram (GOH) scheme extracts the global information in the image; the modified Scale Invariant Feature Transform (SIFT) extracts local edge anomalies in the image; and a Probabilistic Deformation Model (PDM) handles nonlinear deformations observed in image pairs. The simple sum rule is used to combine the match scores generated by the three schemes. Experiments on the extremely challenging Face and Ocular Challenge Series (FOCS) database and a subset of the Face Recognition Grand Challenge (FRGC) database confirm the efficacy of the proposed approach to perform ocular recognition. 
<s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing to remove this step for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, or it is even better in certain situations, avoiding the need of accurate detection of the iris region. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> The periocular biometric comes into the spotlight recently due to several advantageous characteristics such as easily available and provision of crucial face information. However, many existing works are dedicated to extracting image features using texture based techniques such as local binary pattern (LBP). In view of the simplicity and effectiveness offered, this paper proposes to investigate into projection-based methods for periocular identity verification. Several well established projection-based methods such as principal component analysis, its variants and linear discriminant analysis will be adopted in our performance evaluation based on a subset of FERET face database. 
Our empirical results show that supervised learning methods significantly outperform those unsupervised learning methods and LBP in terms of equal error rate performance. <s> BIB007 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Iris recognition from at-a-distance face images has high applications in wide range of applications such as remote surveillance and for civilian identification. This paper presents a completely automated joint iris and periocular recognition approach from the face images acquired at-a-distance. Each of the acquired face images are used to detect and segment periocular images which are then employed for the iris segmentation. We employ complex texture descriptors using Leung-Mallik filters which can acquire multiple periocular features for more accurate recognition. Experimental results presented in this paper achieve 8.1% improvement in recognition accuracy over the best performing approach among SIFT, LBP and HoG presented in the literature. The combination of simultaneously segmented iris and periocular images achieves average rank-one recognition accuracy of 84.5%, i.e., an improvement of 52% than those from only using iris features, on independent test images from 131 subjects. In order to ensure the repeatability of the experiments, the CASIA.v4-distance, i.e., a publicly available database was employed and all the 142 subjects/images were considered in this work. <s> BIB008 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eye lids and eye lashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. 
This paper investigates the question of what is the best periocular region for recognition by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions. Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. Rather in NIR images the eye shape is more important than the brow or cheek as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB009 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Iris and Periocular biometrics has proved its effectiveness in accurately verifying the subject of interest. Recent improvements in visible spectrum Iris and Periocular verification have further boosted its application to unconstrained scenarios. However existing visible Iris verification systems suffer from low quality samples because of the limited depth-of-field exhibited by the conventional Iris capture systems. In this work, we propose a robust Iris and Periocular verification scheme in visible spectrum using Light Field Camera (LFC). Since the light field camera can provide multiple focus images in single capture, we are motivated to investigate its applicability for robust Iris and Periocular verification by exploring its all-in-focus property. Further, the use of all-in-focus property will extend the depth-of-focus and overcome the problem of focus that plays a predominant role in robust Iris and Periocular verification. We first collect a new Iris and Periocular biometric database using both light field and conventional camera by simulating real life scenarios.
We then propose a new scheme for feature extraction and classification by exploring the combination of Local Binary Patterns (LBP) and Sparse Reconstruction Classifier (SRC). Extensive experiments are carried out on the newly collected database to bring out the merits and demerits on applicability of light field camera for Iris and Periocular verification. Finally, we also present the results on combining the information from Iris and Periocular biometrics using weighted sum rule. <s> BIB010 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> This work develops a novel face-based matcher composed of a multi-resolution hierarchy of patch-based feature descriptors for periocular recognition - recognition based on the soft tissue surrounding the eye orbit. The novel patch-based framework for periocular recognition is compared against other feature descriptors and a commercial full-face recognition system against a set of four uniquely challenging face corpora. The framework, hierarchical three-patch local binary pattern, is compared against the three-patch local binary pattern and the uniform local binary pattern on the soft tissue area around the eye orbit. Each challenge set was chosen for its particular non-ideal face representations that may be summarized as matching against pose, illumination, expression, aging, and occlusions. The MORPH corpora consists of two mug shot datasets labeled Album 1 and Album 2. The Album 1 corpus is the more challenging of the two due to its incorporation of print photographs (legacy) captured with a variety of cameras from the late 1960s to 1990s. The second challenge dataset is the FRGC still image set. Corpus three, Georgia Tech face database, is a small corpus but one that contains faces under pose, illumination, expression, and eye region occlusions. The final challenge dataset chosen is the Notre Dame Twins database, which is comprised of 100 sets of identical twins and 1 set of triplets. 
The proposed framework reports top periocular performance against each dataset, as measured by rank-1 accuracy: (1) MORPH Album 1, 33.2%; (2) FRGC, 97.51%; (3) Georgia Tech, 92.4%; and (4) Notre Dame Twins, 98.03%. Furthermore, this work shows that the proposed periocular matcher (using only a small section of the face, about the eyes) compares favorably to a commercial full-face matcher. <s> BIB011 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Human identification based on iris biometrics requires high resolution iris images of a cooperative subject. Such images cannot be obtained in non-intrusive applications such as surveillance. However, the full region around the eye, known as the periocular region, can be acquired non-intrusively and used as a biometric. In this paper we investigate the use of periocular region for person identification. Current techniques have focused on choosing a single best frame, mostly manually, for matching. In contrast, we formulate, for the first time, person identification based on periocular regions as an image set classification problem. We generate periocular region image sets from the Multi Biometric Grand Challenge (MBGC) NIR videos. Periocular regions of the right eyes are mirrored and combined with those of the left eyes to form an image set. Each image set contains periocular regions of a single subject. For image-set classification, we use six state-of-the-art techniques and report their comparative recognition and verification performances. Our results show that image sets of periocular regions achieve significantly higher recognition rates than currently reported in the literature for the same database. <s> BIB012 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Visible spectrum iris verification has drawn substantial attention due to the feasibility, convenience and also accepted performance.
This further allows one to perform the iris verification in an unconstrained environment at-a-distance and on the move. The integral part of the visible iris recognition rely on the accurate texture representation algorithm that can effectively capture the uniqueness of the texture even in the challenging conditions like reflection, illumination among others. In this paper, we explore a new scheme for the robust visible iris verification based on Binarized Statistical Image Features (BSIF). The core idea of the BSIF descriptor is to compute the binary code for each pixel by projecting them on the subspace which is learned from natural images using Independent Component Analysis (ICA). Thus, the BSIF is expected to encode the texture features more robustly when compared to contemporary schemes like Local Binary Patterns and its variants. The extensive experiments are carried out on the visible iris dataset captured using both Light field and conventional camera. The proposed feature extraction method is also extended for enhanced periocular recognition. Finally, we also present a comparative analysis with popular state-of-the-art iris recognition scheme. <s> BIB013 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Automated and accurate biometrics identification using periocular imaging has wide range of applications from human surveillance to improving performance for iris recognition systems, especially under less-constrained imaging environment. Restricted Boltzmann Machine is a generative stochastic neural network that can learn the probability distribution over its set of inputs. As a convolutional version of Restricted Boltzman Machines, CRBM aim to accommodate large image sizes and greatly reduce the computational burden. However in the best of our knowledge, the unsupervised feature learning methods have not been explored in biometrics area except for the face recognition. 
This paper explores the effectiveness of CRBM model for the periocular recognition. We perform experiments on periocular image database from the largest number of subjects (300 subjects as test subjects) and simultaneously exploit key point features for improving the matching accuracy. The experimental results are presented on publicly available database, the Ubripr database, and suggest effectiveness of RBM feature learning for automated periocular recognition with the large number of subjects. The results from the investigation in this paper also suggest that the supervised metric learning can be effectively used to achieve superior performance than the conventional Euclidean distance metric for the periocular identification. <s> BIB014 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Recently periocular biometrics has drawn lot of attention of researchers and some efforts have been presented in the literature. In this paper, we propose a novel and robust approach for periocular recognition. In the approach face is detected in still face images which is then aligned and normalized. We utilized entire strip containing both the eyes as periocular region. For feature extraction, we computed the magnitude responses of the image filtered with a filter bank of complex Gabor filters. Feature dimensions are reduced by applying Direct Linear Discriminant Analysis (DLDA). The reduced feature vector is classified using Parzen Probabilistic Neural Network (PPNN). The experimental results demonstrate a promising verification and identification accuracy, also the robustness of the proposed approach is ascertained by providing comprehensive comparison with some of the well known state-of-the-art methods using publicly available face databases; MBGC v2.0, GTDB, IITK and PUT. 
<s> BIB015 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer. Second, they do not share particularly sensitivity to any image covariate, which accounts for augmenting the robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points for consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. 
<s> BIB016 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In this paper, we propose a novel and robust approach for periocular recognition. Specifically, we propose fusion of Local Phase Quantization(LPQ) and Gabor wavelet descriptors to improve recognition performance and achieve robustness. We have utilized publicly available challenging still face images databases; MBGC v2.0, GTDB, PUT and Caltech. In the approach face is detected and normalized using eye centres. The region around left and right eyes, including eyebrow is extracted as left periocular and right periocular. The LPQ descriptor is then applied to extract the phase statistics features computed locally in a rectangular window. The descriptor is invariant to blur and also to uniform illumination changes. We also computed the Gabor magnitude response of the image, which encodes shape information over a broader range of scales. To reduce dimensionality of the operators and to extract discriminative features, we further utilized DLDA (Direct Linear Discriminant Analysis). The experimental analysis demonstrate that combination of LPQ and Gabor scores provides significant improvement in the performance and robustness, than applied individually. <s> BIB017 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> We consider the problem of matching face against iris images using ocular information. In biometrics, face and iris images are typically acquired using sensors operating in visible (VIS) and near-infrared (NIR) spectra, respectively. This presents a challenging problem of matching images corresponding to different biometric modalities, imaging spectra, and spatial resolutions. We propose the usage of ocular traits that are common between face and iris images (viz., iris and ocular region) to perform matching. 
Iris matching is performed using a commercial software, while ocular regions are matched using three different techniques: Local Binary Patterns (LBP), Normalized Gradient Correlation (NGC), and Joint Dictionary-based Sparse Representation (JDSR). Experimental results on a database containing 1358 images of 704 subjects indicate that ocular region can provide better performance than iris biometric under a challenging cross-modality matching scenario. <s> BIB018 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Partially constrained human recognition through periocular region has emerged as a new paradigm in biometric security. This article proposes Phase Intensive Global Pattern (PIGP): a novel global feature based on variation of intensity of a pixel-neighbours with respect to different phases. The feature thus extracted is claimed to be rotation invariant and hence useful to identify human from images with face-tilt. The performance of proposed feature is experimented on UBIRISv2 database, which is a very large standard dataset with unconstrained periocular images captured under visible spectrum. The proposed work has been compared with Circular Local Binary Pattern (CLBP), and Walsh Transform, and experimentally found to yield higher accuracy, though with increased computation complexity and increased size of the feature vector. <s> BIB019 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> This paper introduces a novel face recognition problem domain: the medically altered face for gender transformation. A data set of >1.2 million face images was constructed from wild videos obtained from You Tube of 38 subjects undergoing hormone replacement therapy (HRT) for gender transformation over a period of several months to three years. The HRT achieves gender transformation by severely altering the balance of sex hormones, which causes changes in the physical appearance of the face and body. 
This paper explores that the impact of face changes due to hormone manipulation and its ability to disguise the face and hence, its ability to effect match rates. Face disguise is achieved organically as hormone manipulation causes pathological changes to the body resulting in a modification of face appearance. This paper analyzes and evaluates face components versus full face algorithms in an attempt to identify regions of the face that are resilient to the HRT process. The experiments reveal that periocular face components using simple texture-based face matchers, local binary patterns, histogram of gradients, and patch-based local binary patterns out performs matching against the full face. Furthermore, the experiments reveal that a fusion of the periocular using one of the simple texture-based approaches (patched-based local binary patterns) out performs two Commercial Off The Shelf Systems full face systems: 1) PittPatt SDK and 2) Cognetic FaceVACs v8.5. The evaluated periocular-fused patch-based face matcher outperforms PittPatt SDK v5.2.2 by 76.83% and Cognetic FaceVACS v8.5 by 56.23% for rank-1 accuracy. <s> BIB020 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In this paper, we propose to combine sclera and periocular features for identity verification. The proposal is particularly useful in applications related to face recognition when the face is partially occluded with only periocular region revealed. Due to its relatively new exposition in the literature of biometrics, particular attention will be paid to sclera feature extraction in this work. For periocular feature extraction, structured random projections were adopted to extract compressed vertical and horizontal components of image features. The binary sclera features are eventually fused with the periocular features at a score level. 
Extensive experiments have been performed on UBIRIS v1 session1 and session2 databases to assess the verification performance before and after fusion. Around 5% of equal error rate performance was observed to be enhanced by fusing sclera with periocular features comparing with that before fusion. <s> BIB021 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Announcement of an iris and periocular dataset, with 10 different mobile setups.Mobile biometric recognition approach based on iris and periocular information.Improvements from a sensor-specific color calibration technique are reported.Biometric recognition feasibility over mobile cross-sensor setups is shown.Preferable mobile setups are pointed out. In recent years, the usage of mobile devices has increased substantially, as have their capabilities and applications. Extending biometric technologies to these gadgets is desirable because it would facilitate biometric recognition almost anytime, anywhere, and by anyone. The present study focuses on biometric recognition in mobile environments using iris and periocular information as the main traits. Our study makes three main contributions, as follows. (1) We demonstrate the utility of an iris and periocular dataset, which contains images acquired with 10 different mobile setups and the corresponding iris segmentation data. This dataset allows us to evaluate iris segmentation and recognition methods, as well as periocular recognition techniques. (2) We report the outcomes of device-specific calibration techniques that compensate for the different color perceptions inherent in each setup. (3) We propose the application of well-known iris and periocular recognition strategies based on classical encoding and matching techniques, as well as demonstrating how they can be combined to overcome the issues associated with mobile environments. 
<s> BIB022 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Face recognition performance degrades significantly under occlusions that occur intentionally or unintentionally due to head gear or hair style. In many incidents captured by surveillance videos, the offenders cover their faces leaving only the periocular region visible. We present an extensive study on periocular region based person identification in video. While, previous techniques have handpicked a single best frame from videos, we formulate, for the first time, periocular region based person identification in video as an image-set classification problem. For thorough analysis, we perform experiments on periocular regions extracted automatically from RGB videos, NIR videos and hyperspectral image cubes. Each image-set is represented by four heterogeneous feature types and classified with six state-of-the-art image-set classification algorithms. We propose a novel two stage inverse Error Weighted Fusion algorithm for feature and classifier score fusion. The proposed two stage fusion is superior to single stage fusion. Comprehensive experiments were performed on four standard datasets, MBGC NIR and visible spectrum (Phillips et al., 2005), CMU Hyperspectral (Denes et al., 2002) and UBIPr (Padole and Proenca, 2012). We obtained average rank-1 recognition rates of 99.8, 98.5, 97.2, and 99.5% respectively which are significantly higher than the existing state of the art. Our results demonstrate the feasibility of image-set based periocular biometrics for real world applications. <s> BIB023
BGM: Bayesian Graphical Models were used by . They adapted an iris matcher based on correlation filters applied to non-overlapping image patches. Patches of gallery and probe images are cross-correlated, and the output is used to feed a Bayesian graphical model (BGM) trained to handle non-linear deformations and occlusions between images. BGM were also used by BIB009 and BIB005 , although called PDM or Probabilistic Deformation Models in those works. BSIF: Binarized Statistical Image Features (BIB013, BIB010) computes a binary code for each pixel by linearly projecting image patches onto a subspace whose basis vectors are learnt from natural images using Independent Component Analysis (ICA). Since it is trained on natural images, BSIF is expected to encode texture features more robustly than other methods that also produce binary codes, such as LBP. CRBM: Convolutional Restricted Boltzmann Machines are a convolutional version of the Restricted Boltzmann Machines previously used in handwriting recognition, image classification, and face verification. CRBM, proposed for periocular recognition by BIB014 , is a generative stochastic neural network that learns a probability distribution over a set of inputs generated by filters which capture edge orientations and spatial connections between image patches. DCT: The Discrete Cosine Transform (Juefei-Xu et al., 2010) expresses data points as a sum of cosine functions oscillating at different frequencies (which in 2D correspond to horizontal and vertical frequencies). The 2D-DCT is computed in image blocks of size N × N (with N=3,5,7...), and the N² coefficients are assigned as features to the center pixel of the block. DWT: The Discrete Wavelet Transform was used by Juefei-Xu et al. (2010) and BIB015 with the Haar wavelet, which, in 2D, yields an approximation of image details in three orientations: horizontal, vertical and diagonal.
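The per-pixel binary encoding underlying BSIF can be sketched as follows. This is an illustrative simplification: real BSIF uses filters learnt from natural images via ICA, whereas here the caller supplies the filters (random surrogates in the usage example).

```python
import numpy as np

def bsif_codes(image, filters):
    """Assign each pixel an n-bit binary code by projecting its local
    patch onto each filter and thresholding the response at zero."""
    n_bits, k, _ = filters.shape
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    h, w = image.shape
    codes = np.zeros((h, w), dtype=np.int32)
    for b, f in enumerate(filters):
        resp = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                # linear projection of the k x k patch centred at (i, j)
                resp[i, j] = np.sum(padded[i:i + k, j:j + k] * f)
        # one bit per filter: 1 where the response is positive
        codes |= (resp > 0).astype(np.int32) << b
    return codes

# Usage with random surrogate filters (real BSIF learns them with ICA)
rng = np.random.default_rng(0)
img = rng.random((8, 8))
filts = rng.standard_normal((4, 3, 3))   # 4 filters -> 4-bit codes
codes = bsif_codes(img, filts)
hist = np.bincount(codes.ravel(), minlength=2 ** 4)  # 16-bin descriptor
```

The histogram of codes over a region then serves as the texture descriptor, in the same spirit as the LBP histograms described below.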
Force Field Transform (Juefei-Xu et al., 2010) employs an analogy to gravitational force. Each pixel exerts a 'force' on its neighbors that is inversely proportional to the distance between them, weighted by the pixel value. The net force at one point is the aggregate of the forces exerted by all pixels in its 5 × 5 neighborhood. Gabor filters are texture filters selective in frequency and orientation. A set of different frequencies and orientations is usually employed. For example, BIB001 and BIB006 BIB016 employed five frequencies and six orientations equally spaced in the log-polar frequency plane, achieving full coverage of the spectrum. BIB002 employed one frequency and four orientations, BIB017 employed one frequency and one orientation only, and BIB015 employed five frequencies and six orientations. Lastly, Cao and Schmid (2014) used two frequencies and eight orientations, with Gabor responses further encoded by LBP operators (below). GIST perceptual descriptors BIB022 consist of five perceptual dimensions related to scene description, correlated with the second-order statistics and spatial arrangement of structured image components: naturalness, which quantizes the vertical and horizontal edge distribution; openness, presence or lack of reference points; roughness, size of the largest prominent object; expansion, depth of the space gradient; and ruggedness, which quantizes the contour orientations that deviate from the horizontal. HOG: Histogram of Oriented Gradients. In HOG, the gradient orientation and magnitude are computed at each pixel. A histogram of orientations is then built, with each bin accumulating the corresponding gradient magnitudes. In PHOG, or Pyramid of Histogram of Oriented Gradients, instead of using image patches, HOG is first extracted from the whole image. The image is then split up several times like a quad-tree, and each sub-image gets its own HOG. JDSR: Joint Dictionary-based Sparse Representation (BIB018)
computes a compact dictionary using a set of training images. A new image is then represented as a sparse linear combination of the dictionary elements. A similar approach is SRC, or Sparse Representation Classification (BIB010), in which an image is represented as a sparse linear combination of training images plus sparse errors due to perturbations. Images can be in original raw form or represented in any feature space. The features used included Eigenfaces, Laplacianfaces, Randomfaces, Fisherfaces, and downsampled versions of the raw image. BIB010 also tested BSIF and LBP features. Laws masks were used by BIB002 . Five 1D masks capturing the shapes of level, edge, spot, wave and ripple were employed. In 2D, the masks are convolved with the image in all possible pairwise combinations, producing 25 local features. LBP: Local Binary Patterns were first introduced for texture classification, since they can identify spots, line ends, edges, corners and other patterns. For each pixel p, a 3 × 3 neighborhood is considered. Every neighbor p i (i=1...8) is assigned a binary value of 1 if p i > p, or 0 otherwise. The binary values are then concatenated into an 8-bit binary number, and its decimal equivalent is assigned to characterize the texture at p, leading to 2^8=256 possible labels. The LBP values of all pixels within a given patch are then quantized into an 8-bin histogram. LBP is one of the most popular periocular matching techniques in the literature (Table 3) , with many variants proposed. One is Uniform LBP or ULBP , used to reduce the length of the feature vector and achieve rotation invariance. A LBP is called uniform if it contains at most two bitwise transitions from 0 to 1 or vice-versa. A separate label is used for each uniform pattern, and all non-uniform patterns are labeled with a single label, yielding 59 different labels instead of the 256 of regular LBP.
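A minimal sketch of the basic LBP operator described above (8 neighbours, 3 × 3 window; skipping the one-pixel image border is one common convention assumed here):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each of the 8 neighbours contributes a '1' bit when
    it is greater than the centre pixel, giving codes in 0..255."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    c = g[1:-1, 1:-1]                        # centre pixels (borders skipped)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]   # shifted neighbour plane
        codes |= (nb > c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Histogram of LBP codes over a patch (the per-patch descriptor)."""
    return np.bincount(lbp_image(gray).ravel(), minlength=bins)
```

On a perfectly flat patch every neighbour comparison fails, so all pixels receive code 0 and the histogram concentrates in its first bin.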
The neighborhood can also be made larger to allow multi-resolution representations of the local texture pattern, leading to a circle of radius R, also called Circular LBP or CLBP BIB019 . To avoid a large number of binary values as R increases, only neighbors separated by a certain angular distance may be chosen. In Three-Patch LBP or TPLBP/3PLBP BIB011 BIB020 , pixel p is compared with the central pixels of two (non-adjacent) patches situated across a circle of radius R. Application of 3PLBP to multiple image scales across a Gaussian pyramid leads to the Hierarchical Three-Patch LBP or H3PLBP BIB011 . Further extension to two circles R1 and R2 results in Four-Patch LBP or FPLBP , involving four patches instead of three in the comparison. The use of subspace representation methods applied to LBPs is also very popular for reducing the feature set or improving performance, for example: BIB003 , BIB004 BIB007 , BIB012 BIB023 and BIB014 . Other works have proposed to apply LBP on top of other feature extraction methods, for example Juefei-Xu et al. (2010); Juefei-Xu and Savvides (2012), BIB019 or Cao and Schmid (2014) . LMF: Leung-Mallik filters are a set of filters constructed from Gaussians, Gaussian derivatives and Laplacians of Gaussian at different orientations and scales. In the experiments by BIB008 , filter responses from an image training set were clustered by k-means to construct a texton dictionary. The clusters (textons) producing the lowest EER were then used to classify test images. LoG: The Laplacian of Gaussian filter is an edge detector, used by BIB002 for periocular recognition. LPQ: Local Phase Quantization extracts phase statistics of local patches using selective frequency filters in the Fourier domain. The phases of the four low-frequency coefficients are quantized into four bins. NGC: Normalized Gradient Correlation BIB018 computes, in the Fourier domain, the normalized correlation between the gradients of two images in pair-wise patches.
PIGP: Phase Intensive Global Pattern (BIB019) computes the intensity variation of pixel neighborhoods with respect to different phases by convolution with a bank of 3 × 3 filters. The filters have a 'U' shape when seen in 3D, with different rotations corresponding to the different phases. Four different angles between 0 and 3π/4, in steps of π/4, were considered. SRP: Structured Random Projections BIB021 encode horizontal and vertical directional features by means of 1D horizontal and vertical binary vectors (projection elements). Each element has a single run of contiguous '1' values, with the location of the '1's randomly determined. The number k of projection elements and the length l of contiguous '1's are fixed experimentally, with k=10 and l=3,6,...,150 tested. Walsh masks are convolution filters which contain only +1 and -1 values, thus capturing the binary characteristics of an image in terms of contrast. N different 1D filters of N elements are produced (N=3,5,7...) and combined in all possible pairs, yielding N² 2D filters. Walsh masks were used by BIB002 , Juefei-Xu and Savvides (2012) and BIB019 to compute the Walsh-Hadamard Transform based LBPs (WLBP), which consist of extracting LBPs from the input image after it has been filtered with the Walsh masks.
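The SRP projection elements can be generated exactly as described: k binary vectors, each with a single run of l contiguous ones at a random position. The projection step shown here is a plain dot product against each element, which is an assumption for illustration:

```python
import random

def make_srp_elements(n, k, l, seed=0):
    """k binary projection vectors of length n, each holding a single
    run of l contiguous '1's at a randomly chosen position (cf. SRP)."""
    rng = random.Random(seed)
    elements = []
    for _ in range(k):
        v = [0] * n
        start = rng.randrange(n - l + 1)   # random location of the run
        v[start:start + l] = [1] * l
        elements.append(v)
    return elements

def srp_project(row, elements):
    """Project one image row onto every element (dot products)."""
    return [sum(r * e for r, e in zip(row, el)) for el in elements]
```

Applying the elements along rows and columns yields the horizontal and vertical directional features mentioned above.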
A Survey on Periocular Biometrics Research <s> Shape-based features <s> A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, the recognition performance is severely affected when encountering non-ideal images caused by motion blur, poor contrast, various expressions, or illumination artifacts. In this paper, we investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extracted various shape-based features from the eyebrow images and compared three different classification methods: Minimum Distance Classifier (MD), Linear Discriminant Analysis Classifier (LDA) and Support Vector Machine Classifier (SVM). The methods were tested on images from two publicly available facial image databases: The Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database. Obtained recognition rates of 90% using the MBGC database and 75% using the FRGC database as well as gender classification recognition rates of 96% and 97% for each database respectively, suggests the shape-based eyebrow features maybe be used for biometric recognition and soft biometric classification. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Shape-based features <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. 
The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer. Second, they do not share particularly sensitivity to any image covariate, which accounts for augmenting the robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points for consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Shape-based features <s> Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and particularly effective in certain scenarios such as extreme occlusions, and illumination variations where traditional face recognition systems are unreliable. In this paper, we first propose a fully automatic, robust and fast graph-cut based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. 
Our experiments have been conducted over large datasets from the MBGC and AR databases and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy with an F-Measure of 99.4% and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database. <s> BIB003
Eyelids shape descriptors BIB002 extract several properties from the polynomial encoding each eyelid, including: the accumulated curvature at point i (out of t), defined as Σ_{j=1}^{i} ∂²y_j/∂x²; the shape context, represented by the histogram h_i of (x_i − x_j, y_i − y_j) at each point (x_i, y_i), ∀ j ≠ i; and the Elliptical Fourier Descriptors (EFD) parameterizing the y_i coordinates of the eyelids. Proenca (2014) also applied LBP to the eyelids region only. Eyebrow shape was studied by Dong and Woodard (2011) and BIB003 . BIB001 encoded rectangularity, eccentricity, isoperimetric quotient, area percentage of different sub-regions, and critical points (comprising the right/left-most points, the highest point and the centroid). BIB003 proposed the use of shape context histograms encoding the distribution of eyebrow points relative to a given (reference) point, and Procrustes analysis representing the eyebrow shape asymmetry.
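The shape-context idea above, a histogram of the positions of all other points relative to a reference point, can be sketched as follows. Note this uses a simple Cartesian grid as an illustrative assumption; shape context implementations commonly use log-polar bins instead:

```python
import numpy as np

def shape_context_histogram(points, ref_index, bins=4, r_max=None):
    """Histogram of the positions of all points relative to one reference
    point, on a bins x bins grid (a simplified Cartesian variant of the
    log-polar histograms used in classic shape context)."""
    pts = np.asarray(points, dtype=float)
    ref = pts[ref_index]
    rel = np.delete(pts, ref_index, axis=0) - ref    # (x_i - x_j, y_i - y_j)
    if r_max is None:
        r_max = np.abs(rel).max() or 1.0
    # map relative coordinates from [-r_max, r_max] to bin indices
    idx = np.clip(((rel + r_max) / (2 * r_max) * bins).astype(int),
                  0, bins - 1)
    hist = np.zeros((bins, bins), dtype=int)
    for ix, iy in idx:
        hist[iy, ix] += 1
    return hist
```

Computing one such histogram per contour point, and comparing histograms across images, gives a matching score for the eyelid or eyebrow shape.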
A Survey on Periocular Biometrics Research <s> Color-based features <s> We evaluate the utility of the periocular region appearance cues for biometric identification. Even though periocular region is considered to be a highly discriminative part of a face, its utility as an independent modality or as a soft biometric is still an open ended question. It is our goal to establish a performance metric for the periocular region features so that their potential use in conjunction with iris or face can be evaluated. In this approach, we employ the local appearance based feature representation, where the image is divided into spatially salient patches, and histograms of texture and color are computed for each patch. The images are matched by computing the distance between the corresponding feature representations using various distance metrics. We report recognition results on images captured in the visible and near-infrared (NIR) spectrum. For the color periocular region data consisting of about 410 subjects and the NIR images of 85 subjects, we obtain the Rank-1 recognition rate of 91% and 87% respectively. Furthermore, we also demonstrate that recognition performance of the periocular region images is comparable to that of face. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Color-based features <s> This paper investigates the effectiveness of local appearance features such as Local Binary Patterns, Histograms of Oriented Gradient, Discrete Cosine Transform, and Local Color Histograms extracted from periocular region images for soft classification on gender and ethnicity. These features are classified by Artificial Neural Network or Support Vector Machine. Experiments are performed on visible and near-IR spectrum images derived from FRGC and MBGC datasets. For 4232 FRGC images of 404 subjects, we obtain baseline gender and ethnicity classifications of 97.3% and 94%. 
For 350 MBGC images of 60 subjects, we obtain baseline gender and ethnicity results of 90% and 89%. <s> BIB002
LCH: Local Color Histograms from image patches were used by BIB001 . They experimented with the RGB and HSV spaces and their sub-spaces, finding that the RG (red-green) color space outperformed the others, with a 4 × 4 histogram giving better results than coarser or finer resolutions. Each 4 × 4 histogram thus provides a 16-element feature vector per patch. LCH were also used by BIB002 for gender and ethnicity classification using periocular data (Section 7).
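A sketch of such a local colour histogram in the RG space. That the 4 × 4 histogram is taken jointly over the R and G channels (as here, with 8-bit values assumed) is an interpretation for illustration, since the text only states its resolution:

```python
import numpy as np

def rg_histogram(patch_rgb, bins=4):
    """Joint bins x bins histogram over the R and G channels of a patch
    (8-bit values assumed), flattened to a 16-element vector for bins=4."""
    r = np.asarray(patch_rgb)[..., 0].ravel().astype(float)
    g = np.asarray(patch_rgb)[..., 1].ravel().astype(float)
    # quantize each channel into `bins` levels
    ri = np.clip((r / 256.0 * bins).astype(int), 0, bins - 1)
    gi = np.clip((g / 256.0 * bins).astype(int), 0, bins - 1)
    hist = np.zeros((bins, bins), dtype=int)
    for a, b in zip(ri, gi):
        hist[a, b] += 1
    return hist.ravel()   # 16 elements for bins=4
```

Concatenating the per-patch vectors over the grid of patches yields the full periocular colour descriptor.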
A Survey on Periocular Biometrics Research <s> Local features <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eye lids and eye lashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. This paper investigates the question of what is the best periocular region for recognition by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions. Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. Rather in NIR images the eye shape is more important than the brow or cheek as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Local features <s> We concentrate on utilization of facial periocular region for biometric identification. Although this region has superior discriminative characteristics, as compared to mouth and nose, it has not been frequently used as an independent modality for personal identification. We employ a featurebased representation, where the associated periocular image is divided into left and right sides, and descriptor vectors are extracted from these using popular feature extraction algorithms SIFT, SURF, BRISK, ORB, and LBP. We also concatenate descriptor vectors. Utilizing FLANN and Brute Force matchers, we report recognition rates and ROC. 
For the periocular region image data, obtained from widely used FERET database consisting of 865 subjects, we obtain Rank-1 recognition rate of 96.8% for full frontal and different facial expressions in same session cases. We include a summary of existing methods, and show that the proposed method produces lower/comparable error rates with respect to the current state of the art. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Local features <s> We present a new system for biometric recognition using periocular images. The feature extraction method employed describes neighborhoods around key points by projection onto harmonic functions which estimates the presence of a series of various symmetric curve families around such key points. The isocurves of such functions are highly symmetric w.r.t. The key points and the estimated coefficients have well defined geometric interpretations. The descriptors used are referred to as Symmetry Assessment by Feature Expansion (SAFE). Extraction is done across a set of discrete points of the image, uniformly distributed in a rectangular-shaped grid positioned in the eye centre. Experiments are done with two databases of iris data, one acquired with a close-up iris camera, and another in visible light with a webcam. The two databases have been annotated manually, meaning that the radius and centre of the pupil and sclera circles are available, which are used as input for the experiments. Results show that this new system has a performance comparable with other periocular recognition approaches. We particularly carry out comparative experiments with another periocular system based on Gabor features extracted from the same set of grid points, with the fusion of the two systems resulting in an improved performance. We also evaluate an iris texture matcher, providing fusion results with the periocular systems as well. 
<s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Local features <s> Partially constrained human recognition through periocular region has emerged as a new paradigm in biometric security. This article proposes Phase Intensive Global Pattern (PIGP): a novel global feature based on variation of intensity of a pixel-neighbours with respect to different phases. The feature thus extracted is claimed to be rotation invariant and hence useful to identify human from images with face-tilt. The performance of proposed feature is experimented on UBIRISv2 database, which is a very large standard dataset with unconstrained periocular images captured under visible spectrum. The proposed work has been compared with Circular Local Binary Pattern (CLBP), and Walsh Transform, and experimentally found to yield higher accuracy, though with increased computation complexity and increased size of the feature vector. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Local features <s> Abstract The article proposes a novel multi-scale local feature based on the periocular recognition technique which is capable of extracting high-dimensional subtle features existent in the iris region as well as low-dimensional gross features in the periphery skin region of the iris. A set of filter banks of different scales is employed to exploit the phase-intensive patterns in visible spectrum periocular image of a subject captured from a distance in partial non-cooperative scenario. The proposed technique is verified with experiments on near-infrared illumination databases like BATH and CASIA-IrisV3-Lamp. Experiments have been further extended to images from visible spectrum ocular databases like UBIRISv2 and low-resolution eye regions extracted from FERETv4 face database to establish that the proposed feature performs comparably better than existing local features. 
To assess the robustness of the proposed approach, the low-resolution visible spectrum images of the mentioned databases are converted to grayscale images. The proposed approach yields unique patterns from these grayscale images. The ability to find coarse-to-fine features at multiple scales and different phases accounts for the improved robustness of the proposed approach. <s> BIB005
In local approaches, a sparse set of characteristic points (called key points) is detected first. Local features encode properties of the neighborhood around key points only, leading to local key point descriptors. Since the number of detected key points is not necessarily the same in each image, the resulting feature vector may not be of constant length. Therefore, the matching algorithm has to compare each key point of one image against all key points of the other image to find a pair match, thus increasing the computation time. The output of the matching function is typically the number of matched points, although a distance measure between pairs may also be returned. To achieve scale invariance, key points are usually detected at different scales. Different key point detection algorithms exist, with some of the feature extraction methods of this section also having their own key point extraction method. For example, detection of key points with the SIFT feature extractor relies on a difference of Gaussians (DoG) function in the scale space, whereas detection with SURF is based on the Hessian matrix, but relies on integral images to speed up computations. Newer algorithms such as BRISK and ORB claim to provide an even faster alternative to the SIFT or SURF key point extraction methods. BIB002 employs one key point extraction method (SURF), and then computes the SIFT, SURF, BRISK and ORB descriptors from these key points. Other periocular works, like BIB002 and BIB003, extract key point descriptors at selected sampling points in the center of image patches only, resembling the grid-like analysis of global approaches (Figure 1, right) but using local features. This way, no key point detection is carried out, and the obtained feature vector is of fixed size. The following local descriptors have been proposed in the literature for periocular recognition.
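Before turning to the individual descriptors, the brute-force pairing step just described can be sketched with a toy example; the 128-dimensional descriptors, the distance threshold, and the helper name below are illustrative assumptions, not taken from any of the cited systems:

```python
import numpy as np

def match_keypoints(desc_a, desc_b, max_dist=0.1):
    """Brute-force matching: every key point descriptor of image A is
    compared against all descriptors of image B; the returned score is
    the number of matched points."""
    matches = 0
    for d in desc_a:
        # distance from this descriptor to every descriptor of image B
        dists = np.linalg.norm(desc_b - d, axis=1)
        if dists.min() < max_dist:   # nearest neighbor close enough?
            matches += 1
    return matches

# Toy data: image A has 5 key points, image B has 7 (3 of them shared),
# so the two images yield feature vectors of different lengths.
rng = np.random.default_rng(0)
desc_a = rng.random((5, 128))
desc_b = np.vstack([desc_a[:3], rng.random((4, 128))])
print(match_keypoints(desc_a, desc_b))   # the 3 shared key points match
```

Note the quadratic cost: each of the A descriptors is compared against all B descriptors, which is exactly why local approaches are slower to match than fixed-length global representations.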
BRISK: Binary Robust Invariant Scalable Keypoints. The descriptor is composed of a binary string built by concatenating the results of simple brightness comparison tests. BRISK applies a sampling pattern of N=60 locations equally spaced on circles concentric with the key point. The origin of the sampling pattern is rotated according to the gradient angle around the key point to achieve rotation invariance. The intensities of all possible short-distance pixel pairs p_i and p_j of the sampling pattern are then compared, assigning a binary value of 1 if p_i > p_j, and 0 otherwise. The resulting feature vector at each key point has 512 bits. BRISK is employed for periocular recognition by BIB002.

ORB: Oriented FAST and Rotated BRIEF. It is based on the FAST corner detector and the visual descriptor BRIEF (Binary Robust Independent Elementary Features). As in BRISK, BRIEF also uses binary tests between pixels. Pixel pairs are considered from an image patch of size S × S. The original BRIEF deals poorly with rotation, so ORB steers the descriptor according to the dominant rotation of the key point (obtained from the first-order moments). The parameters employed in ORB are S = 31 and a vector length of 256 bits per key point. ORB was used for periocular recognition by BIB002.

PILP: Phase Intensive Local Pattern. It was used by BIB005, following the work in BIB004, where the authors presented PIGP (Phase Intensive Global Pattern). PILP uses a filter bank similar to that of PIGP, but uses it for key point extraction rather than for feature encoding. The size of the filters varies from 3 × 3 to 9 × 9 to cope with scale variations. This way, key points are the local extrema among the pixels in their own window and the windows in the neighboring phases. Feature extraction is then done by computing a gradient orientation histogram in the neighborhood of each key point, in a similar way to the SIFT descriptor, described below.
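The brightness-comparison idea shared by BRISK and ORB can be sketched as follows; the tiny patch and the three hand-picked pixel pairs are illustrative placeholders, not the actual BRISK sampling pattern of 60 locations:

```python
import numpy as np

def binary_descriptor(patch, pairs):
    """Concatenate simple brightness tests over pixel pairs:
    bit = 1 if intensity(p_i) > intensity(p_j), else 0."""
    return np.array([1 if patch[pi] > patch[pj] else 0
                     for pi, pj in pairs], dtype=np.uint8)

def hamming(d1, d2):
    """Binary descriptors are matched with the Hamming distance."""
    return int(np.count_nonzero(d1 != d2))

# Toy 5x5 patch around a key point, plus three illustrative pixel pairs
patch = np.arange(25).reshape(5, 5)
pairs = [((0, 0), (4, 4)), ((2, 2), (1, 1)), ((0, 4), (4, 0))]
desc = binary_descriptor(patch, pairs)
print(desc.tolist(), hamming(desc, 1 - desc))
```

Because the descriptor is a bit string, matching reduces to XOR-and-popcount operations, which is what makes BRISK and ORB markedly faster to match than the floating-point SIFT and SURF descriptors.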
SAFE: Symmetry Assessment by Feature Expansion. It BIB003 describes neighborhoods around key points by projection onto harmonic functions which estimate the presence of various symmetric curve families. The iso-curves of such functions are highly symmetric w.r.t. the key points, and the estimated coefficients have well-defined geometric interpretations. The detected patterns resemble shapes such as parabolas, circles, spirals, etc. Detection is done in concentric circular bands of different radii around key points, with the radii log-equidistantly sampled. The extracted features therefore quantify the presence of pattern families in annular rings around each key point.

SIFT: Scale Invariant Feature Transformation. Together with LBP, SIFT is the most popular matching technique employed in the literature (Table 3). SIFT encodes local orientation via histograms of gradients around key points. The dominant orientation of a key point is first obtained as the peak of the gradient orientation histogram in a 16 × 16 window. The key point feature vector of dimension 4 × 4 × 8 = 128 is then obtained by computing 8-bin gradient orientation histograms (relative to the dominant orientation, to achieve rotation invariance) in 4 × 4 sub-regions around the key point. m-SIFT (modified SIFT) is a SIFT matcher where additional constraints are imposed on the angle and distance of matched key points BIB001.

SURF: Speeded Up Robust Features. It was aimed at providing a detector and feature extractor faster than SIFT and other local feature algorithms. Feature extraction is done over a 4 × 4 sub-region around the key point (relative to the dominant orientation) using Haar wavelet responses. SURF is employed for periocular recognition by Juefei-Xu et al. (2010), BIB002 and BIB005.
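The 8-bin gradient orientation histogram at the heart of the SIFT descriptor can be sketched as below; the patch content and size are illustrative, and a full SIFT descriptor would concatenate 16 such histograms from the 4 × 4 sub-regions:

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Histogram of gradient orientations, weighted by gradient
    magnitude, as computed inside one SIFT sub-region."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # orientation in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=bins,
                           range=(0, 2 * np.pi), weights=mag)
    return hist

# A horizontal intensity ramp: all gradient energy has orientation 0,
# so it falls entirely into the first of the 8 bins.
patch = np.tile(np.arange(16, dtype=float), (16, 1))
h = orientation_histogram(patch)
print(int(np.argmax(h)))   # dominant orientation bin
```

In the actual descriptor, the orientations would additionally be rotated by the key point's dominant orientation before binning, which is what gives SIFT its rotation invariance.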
A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Retinotopic sampling and the Gabor decomposition have a well-established role in computer vision in general as well as in face authentication. The concept of Retinal Vision we introduce aims at complementing these biologically inspired tools with models of higher-order visual processes, specifically the Human Saccadic System. We discuss the Saccadic Search strategy, a general purpose attentional mechanism that identifies semantically meaningful structures in images by performing "jumps" (saccades) between relevant locations. Saccade planning relies on a priori knowledge encoded by SVM classifiers. The raw visual input is analysed by means of a log-polar retinotopic sensor, whose receptive fields consist of a vector of modified Gabor filters designed in the log-polar frequency plane. Applicability to complex cognitive tasks is demonstrated by facial landmark detection and authentication experiments over the M2VTS and Extended M2VTS (XM2VTS) databases. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> A fundamental challenge in face recognition lies in determining which facial characteristics are important in the identification of faces. Several studies have indicated the significance of certain facial features in this regard, particularly internal ones such as the eyes and mouth. Surprisingly, however, one rather prominent facial feature has received little attention in this domain: the eyebrows. Past work has examined the role of eyebrows in emotional expression and nonverbal communication, as well as in facial aesthetics and sexual dimorphism. However, it has not been made clear whether the eyebrows play an important role in the identification of faces. Here, we report experimental results which suggest that for face recognition the eyebrows may be at least as influential as the eyes.
Specifically, we find that the absence of eyebrows in familiar faces leads to a very large and significant disruption in recognition performance. In fact, a significantly greater decrement in face recognition is observed... <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Periocular biometric refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric does not require high user cooperation and close capture distance, unlike other ocular biometrics (e.g., iris, retina, and sclera). We study the feasibility of using periocular images of an individual as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators, resulting in a feature set that can be used for matching. The effect of fusing these feature sets is also studied. The experimental results show a 77% rank-1 recognition accuracy using 958 images captured from 30 different subjects. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We evaluate the utility of the periocular region appearance cues for biometric identification. Even though the periocular region is considered to be a highly discriminative part of a face, its utility as an independent modality or as a soft biometric is still an open-ended question. It is our goal to establish a performance metric for the periocular region features so that their potential use in conjunction with iris or face can be evaluated. In this approach, we employ the local appearance based feature representation, where the image is divided into spatially salient patches, and histograms of texture and color are computed for each patch. The images are matched by computing the distance between the corresponding feature representations using various distance metrics.
We report recognition results on images captured in the visible and near-infrared (NIR) spectrum. For the color periocular region data consisting of about 410 subjects and the NIR images of 85 subjects, we obtain Rank-1 recognition rates of 91% and 87%, respectively. Furthermore, we also demonstrate that the recognition performance of the periocular region images is comparable to that of face. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Given an image from a biometric sensor, it is important for the feature extraction module to extract an original set of features that can be used for identity recognition. This form of feature extraction has been referred to as Type I feature extraction. For some biometric systems, Type I feature extraction is used exclusively. However, a second form of feature extraction does exist and is concerned with optimizing/minimizing the original feature set given by a Type I feature extraction method. This second form of feature extraction has been referred to as Type II feature extraction (feature selection). In this paper, we present a genetic-based Type II feature extraction system, referred to as GEFE (Genetic & Evolutionary Feature Extraction), for optimizing the feature sets returned by Local Binary Pattern Type I feature extraction for periocular biometric recognition. Our results show that not only does GEFE dramatically reduce the number of features needed, but the evolved feature sets also have higher recognition rates. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper, we perform a detailed investigation of various features that can be extracted from the periocular region of human faces for biometric identification. The emphasis of this study is to explore the BEST feature extraction approach used in stand-alone mode without any generative or discriminative subspace training.
Simple distance measures are used to determine the verification rate (VR) on a very large dataset. Several filter-based techniques and local feature extraction methods are explored in this study, where we show an increase of 15% in verification performance at 0.1% false accept rate (FAR) compared to raw pixels with the proposed Local Walsh-Transform Binary Pattern encoding. Additionally, when fusing our best feature extraction method with Kernel Correlation Feature Analysis (KCFA) [36], we were able to obtain a VR of 61.2%. Our experiments are carried out on the large validation set of the NIST FRGC database [6], which contains facial images from environments with uncontrolled illumination. Verification experiments are based on a pure 1–1 similarity matrix of 16028×8014 (~128 million comparisons) carried out on the entire database, where we find that we can achieve a raw VR of 17.0% at 0.1% FAR using our proposed Local Walsh-Transform Binary Pattern approach. This result, while it may seem low, is more than the NIST-reported baseline VR on the same dataset (12% at 0.1% FAR), when PCA was trained on the entire facial features for recognition [6]. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The performance of iris recognition is affected if the iris is captured at a distance. Further, images captured in the visible spectrum are more susceptible to noise than those captured in the near-infrared spectrum. This research proposes periocular biometrics as an alternative to iris recognition if the iris images are captured at a distance. We propose a novel algorithm to recognize periocular images in the visible spectrum and study the effect of capture distance on the performance of periocular biometrics. The performance of the algorithm is evaluated on more than 11,000 images of the UBIRIS v2 database.
The results show promise towards using the periocular region for recognition when the information is not sufficient for iris recognition. <s> BIB007 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators, resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB008 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper, we will present a novel framework of utilizing the periocular region for age invariant face recognition.
To obtain age invariant features, we first perform preprocessing schemes, such as pose correction, illumination and periocular region normalization. Then we apply robust Walsh-Hadamard transform encoded local binary patterns (WLBP) to the preprocessed periocular region only. We find that the WLBP feature on the periocular region maintains consistency of the same individual across ages. Finally, we use unsupervised discriminant projection (UDP) to build subspaces on WLBP-featured periocular images and achieve a 100% rank-1 identification rate and a 98% verification rate at 0.1% false accept rate on the entire FG-NET database. Compared to published results, our proposed approach yields the best recognition and identification results. <s> BIB009 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, recognition performance is severely affected when encountering non-ideal images caused by motion blur, poor contrast, various expressions, or illumination artifacts. In this paper, we investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extracted various shape-based features from the eyebrow images and compared three different classification methods: Minimum Distance Classifier (MD), Linear Discriminant Analysis Classifier (LDA) and Support Vector Machine Classifier (SVM). The methods were tested on images from two publicly available facial image databases: the Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database.
Obtained recognition rates of 90% using the MBGC database and 75% using the FRGC database, as well as gender classification rates of 96% and 97% for each database respectively, suggest that the shape-based eyebrow features may be used for biometric recognition and soft biometric classification. <s> BIB010 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Iris recognition from at-a-distance face images has a wide range of applications, such as remote surveillance and civilian identification. This paper presents a completely automated joint iris and periocular recognition approach from face images acquired at a distance. Each of the acquired face images is used to detect and segment periocular images, which are then employed for the iris segmentation. We employ complex texture descriptors using Leung-Malik filters which can acquire multiple periocular features for more accurate recognition. Experimental results presented in this paper achieve 8.1% improvement in recognition accuracy over the best performing approach among SIFT, LBP and HoG presented in the literature. The combination of simultaneously segmented iris and periocular images achieves an average rank-one recognition accuracy of 84.5%, i.e., an improvement of 52% over using only iris features, on independent test images from 131 subjects. In order to ensure the repeatability of the experiments, the CASIA.v4-distance, a publicly available database, was employed and all the 142 subjects/images were considered in this work. <s> BIB011 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The periocular biometric has recently come into the spotlight due to several advantageous characteristics, such as easy availability and the provision of crucial face information.
However, many existing works are dedicated to extracting image features using texture-based techniques such as local binary patterns (LBP). In view of the simplicity and effectiveness offered, this paper proposes to investigate projection-based methods for periocular identity verification. Several well-established projection-based methods, such as principal component analysis, its variants, and linear discriminant analysis, will be adopted in our performance evaluation based on a subset of the FERET face database. Our empirical results show that supervised learning methods significantly outperform unsupervised learning methods and LBP in terms of equal error rate performance. <s> BIB012 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, or it is even better in certain situations, avoiding the need of accurate detection of the iris region.
<s> BIB013 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper we propose a novel multimodal biometric approach using iris and periocular biometrics to improve the performance of iris recognition in the case of non-ideal iris images. Though iris recognition has the highest accuracy among all available biometrics, noise at the image acquisition stage degrades the recognition accuracy. The periocular region can act as a supporting biometric in case the iris is obstructed by noise. The periocular region is the part of the face immediately surrounding the eye. The approach is based on the fusion of features of the iris and the periocular region, and has shown significant improvement in the performance of iris recognition. The evaluation was done on a test database created from the images of the UBIRIS v2 and CASIA Iris Interval databases. We achieved identification accuracy of up to 96% on the test database. <s> BIB014 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> As biometrics has evolved, the iris has remained a preferred trait because its uniqueness, lifetime stability and regular shape contribute to good segmentation and recognition performance. However, commercially deployed systems are characterized by strong acquisition constraints based on active subject cooperation, which is not always achievable or even reasonable for extensive deployment in everyday scenarios. Research on new techniques has been focused on lowering these constraints without significantly impacting performance while increasing system usability, and new approaches have rapidly emerged.
Here we propose a novel fusion of different recognition approaches and describe how it can contribute to more reliable noncooperative iris recognition by compensating for degraded images captured in less constrained acquisition setups and protocols under visible wavelengths and varying lighting conditions. The proposed method was tested at the NICE.II (Noisy Iris Challenge Evaluation - Part 2) contest, and its performance was corroborated by a third-place finish. <s> BIB015 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Among the available biometric traits such as face, iris and fingerprint, there is active research being carried out in the direction of unconstrained biometrics. Periocular recognition has proved its effectiveness and is regarded as complementary to iris recognition. The main objectives of this paper are three-fold: 1) to announce the availability of a periocular dataset, which has variability in terms of scale change (due to camera-subject distance), pose variation and non-uniform illumination; 2) to investigate the performance of periocular recognition methods in the presence of various degradation factors; 3) to propose a new initialization strategy for the definition of the periocular region-of-interest (ROI), based on the geometric mean of eye corners. Our experiments confirm that performance can be consistently improved by this initialization method, when compared to the classical technique.
The novel patch-based framework for periocular recognition is compared against other feature descriptors and a commercial full-face recognition system against a set of four uniquely challenging face corpora. The framework, hierarchical three-patch local binary pattern, is compared against the three-patch local binary pattern and the uniform local binary pattern on the soft tissue area around the eye orbit. Each challenge set was chosen for its particular non-ideal face representations that may be summarized as matching against pose, illumination, expression, aging, and occlusions. The MORPH corpora consists of two mug shot datasets labeled Album 1 and Album 2. The Album 1 corpus is the more challenging of the two due to its incorporation of print photographs (legacy) captured with a variety of cameras from the late 1960s to 1990s. The second challenge dataset is the FRGC still image set. Corpus three, Georgia Tech face database, is a small corpus but one that contains faces under pose, illumination, expression, and eye region occlusions. The final challenge dataset chosen is the Notre Dame Twins database, which is comprised of 100 sets of identical twins and 1 set of triplets. The proposed framework reports top periocular performance against each dataset, as measured by rank-1 accuracy: (1) MORPH Album 1, 33.2%; (2) FRGC, 97.51%; (3) Georgia Tech, 92.4%; and (4) Notre Dame Twins, 98.03%. Furthermore, this work shows that the proposed periocular matcher (using only a small section of the face, about the eyes) compares favorably to a commercial full-face matcher. <s> BIB017 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Human identification based on iris biometrics requires high resolution iris images of a cooperative subject. Such images cannot be obtained in non-intrusive applications such as surveillance. 
However, the full region around the eye, known as the periocular region, can be acquired non-intrusively and used as a biometric. In this paper we investigate the use of the periocular region for person identification. Current techniques have focused on choosing a single best frame, mostly manually, for matching. In contrast, we formulate, for the first time, person identification based on periocular regions as an image set classification problem. We generate periocular region image sets from the Multi Biometric Grand Challenge (MBGC) NIR videos. Periocular regions of the right eyes are mirrored and combined with those of the left eyes to form an image set. Each image set contains periocular regions of a single subject. For image set classification, we use six state-of-the-art techniques and report their comparative recognition and verification performances. Our results show that image sets of periocular regions achieve significantly higher recognition rates than currently reported in the literature for the same database. <s> BIB018 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eyelids and eyelashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. This paper investigates the question of what is the best periocular region for recognition by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions.
Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. Rather, in NIR images the eye shape is more important than the brow or cheek, as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB019 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Iris and Periocular biometrics have proved their effectiveness in accurately verifying the subject of interest. Recent improvements in visible spectrum Iris and Periocular verification have further boosted their application to unconstrained scenarios. However, existing visible Iris verification systems suffer from low-quality samples because of the limited depth-of-field exhibited by conventional Iris capture systems. In this work, we propose a robust Iris and Periocular verification scheme in the visible spectrum using a Light Field Camera (LFC). Since the light field camera can provide multiple focus images in a single capture, we are motivated to investigate its applicability for robust Iris and Periocular verification by exploring its all-in-focus property. Further, the use of the all-in-focus property will extend the depth-of-focus and overcome the problem of focus that plays a predominant role in robust Iris and Periocular verification. We first collect a new Iris and Periocular biometric database using both light field and conventional cameras by simulating real-life scenarios. We then propose a new scheme for feature extraction and classification by exploring the combination of Local Binary Patterns (LBP) and the Sparse Reconstruction Classifier (SRC). Extensive experiments are carried out on the newly collected database to bring out the merits and demerits of the applicability of the light field camera for Iris and Periocular verification.
Finally, we also present results on combining the information from Iris and Periocular biometrics using a weighted sum rule. <s> BIB020 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We concentrate on the utilization of the facial periocular region for biometric identification. Although this region has superior discriminative characteristics, as compared to the mouth and nose, it has not been frequently used as an independent modality for personal identification. We employ a feature-based representation, where the associated periocular image is divided into left and right sides, and descriptor vectors are extracted from these using the popular feature extraction algorithms SIFT, SURF, BRISK, ORB, and LBP. We also concatenate descriptor vectors. Utilizing FLANN and Brute Force matchers, we report recognition rates and ROC curves. For the periocular region image data, obtained from the widely used FERET database consisting of 865 subjects, we obtain a Rank-1 recognition rate of 96.8% for full frontal and different facial expressions in same-session cases. We include a summary of existing methods, and show that the proposed method produces lower/comparable error rates with respect to the current state of the art. <s> BIB021 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Automated and accurate biometric identification using periocular imaging has a wide range of applications, from human surveillance to improving the performance of iris recognition systems, especially under less-constrained imaging environments. The Restricted Boltzmann Machine is a generative stochastic neural network that can learn the probability distribution over its set of inputs. As a convolutional version of Restricted Boltzmann Machines, CRBMs aim to accommodate large image sizes and greatly reduce the computational burden.
However, to the best of our knowledge, unsupervised feature learning methods have not been explored in the biometrics area except for face recognition. This paper explores the effectiveness of the CRBM model for periocular recognition. We perform experiments on a periocular image database with the largest number of subjects (300 test subjects) and simultaneously exploit key-point features to improve matching accuracy. The experimental results are presented on a publicly available database, the UBIPr database, and suggest the effectiveness of RBM feature learning for automated periocular recognition with a large number of subjects. The results from the investigation in this paper also suggest that supervised metric learning can be effectively used to achieve superior performance over the conventional Euclidean distance metric for periocular identification. <s> BIB022 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> This paper introduces a novel face recognition problem domain: the medically altered face for gender transformation. A data set of >1.2 million face images was constructed from wild videos obtained from YouTube of 38 subjects undergoing hormone replacement therapy (HRT) for gender transformation over a period of several months to three years. The HRT achieves gender transformation by severely altering the balance of sex hormones, which causes changes in the physical appearance of the face and body. This paper explores the impact of face changes due to hormone manipulation and its ability to disguise the face and, hence, to affect match rates. Face disguise is achieved organically, as hormone manipulation causes pathological changes to the body resulting in a modification of face appearance. This paper analyzes and evaluates face components versus full-face algorithms in an attempt to identify regions of the face that are resilient to the HRT process.
The experiments reveal that periocular face components using simple texture-based face matchers (local binary patterns, histogram of gradients, and patch-based local binary patterns) outperform matching against the full face. Furthermore, the experiments reveal that a fusion of the periocular region using one of the simple texture-based approaches (patch-based local binary patterns) outperforms two Commercial Off The Shelf (COTS) full-face systems: 1) PittPatt SDK and 2) Cognetic FaceVACs v8.5. The evaluated periocular-fused patch-based face matcher outperforms PittPatt SDK v5.2.2 by 76.83% and Cognetic FaceVACS v8.5 by 56.23% for rank-1 accuracy. <s> BIB023 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of the eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer.
Second, they do not share particular sensitivity to any image covariate, which accounts for augmented robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points to consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. <s> BIB024 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Iris recognition has become an important technology in our society. Visual patterns of the human iris provide rich texture information for personal identification. However, it is greatly challenging to match intra-class iris images with large variations in unconstrained environments because of noise, illumination variation, heterogeneity and so on. To track current state-of-the-art algorithms in iris recognition, we organized the first ICB* Competition on Iris Recognition in 2013 (or ICIR2013 shortly). In this competition, 8 participants from 6 countries submitted 13 algorithms in total. All the algorithms were trained on a public database (e.g. CASIA-Iris-Thousand [3]) and evaluated on an unpublished database. The testing results in terms of False Non-match Rate (FNMR) when the False Match Rate (FMR) is 0.0001 are taken to rank the submitted algorithms. <s> BIB025 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We present a new system for biometric recognition using periocular images. The feature extraction method employed describes neighborhoods around key points by projection onto harmonic functions, which estimates the presence of a series of various symmetric curve families around such key points.
The isocurves of such functions are highly symmetric w.r.t. the key points, and the estimated coefficients have well-defined geometric interpretations. The descriptors used are referred to as Symmetry Assessment by Feature Expansion (SAFE). Extraction is done across a set of discrete points of the image, uniformly distributed in a rectangular-shaped grid positioned at the eye centre. Experiments are done with two databases of iris data, one acquired with a close-up iris camera, and another in visible light with a webcam. The two databases have been annotated manually, meaning that the radius and centre of the pupil and sclera circles are available, which are used as input for the experiments. Results show that this new system has a performance comparable with other periocular recognition approaches. We particularly carry out comparative experiments with another periocular system based on Gabor features extracted from the same set of grid points, with the fusion of the two systems resulting in an improved performance. We also evaluate an iris texture matcher, providing fusion results with the periocular systems as well. <s> BIB026 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper, we propose a novel and robust approach for periocular recognition. Specifically, we propose the fusion of Local Phase Quantization (LPQ) and Gabor wavelet descriptors to improve recognition performance and achieve robustness. We have utilized publicly available challenging still face image databases: MBGC v2.0, GTDB, PUT and Caltech. In the approach, the face is detected and normalized using eye centres. The region around the left and right eyes, including the eyebrow, is extracted as the left periocular and right periocular. The LPQ descriptor is then applied to extract the phase statistics features computed locally in a rectangular window. The descriptor is invariant to blur and also to uniform illumination changes.
We also computed the Gabor magnitude response of the image, which encodes shape information over a broader range of scales. To reduce the dimensionality of the operators and to extract discriminative features, we further utilized DLDA (Direct Linear Discriminant Analysis). The experimental analysis demonstrates that the combination of LPQ and Gabor scores provides a significant improvement in performance and robustness over either descriptor applied individually. <s> BIB027 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Recently, periocular biometrics has drawn a lot of attention from researchers and some efforts have been presented in the literature. In this paper, we propose a novel and robust approach for periocular recognition. In the approach, the face is detected in still face images and is then aligned and normalized. We utilized the entire strip containing both eyes as the periocular region. For feature extraction, we computed the magnitude responses of the image filtered with a filter bank of complex Gabor filters. Feature dimensions are reduced by applying Direct Linear Discriminant Analysis (DLDA). The reduced feature vector is classified using a Parzen Probabilistic Neural Network (PPNN). The experimental results demonstrate a promising verification and identification accuracy; the robustness of the proposed approach is also ascertained by providing a comprehensive comparison with some of the well-known state-of-the-art methods using publicly available face databases: MBGC v2.0, GTDB, IITK and PUT. <s> BIB028 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Partially constrained human recognition through the periocular region has emerged as a new paradigm in biometric security. This article proposes Phase Intensive Global Pattern (PIGP): a novel global feature based on the variation of intensity of pixel-neighbours with respect to different phases.
The feature thus extracted is claimed to be rotation invariant and hence useful to identify humans from images with face tilt. The performance of the proposed feature is evaluated on the UBIRISv2 database, which is a very large standard dataset with unconstrained periocular images captured under the visible spectrum. The proposed work has been compared with Circular Local Binary Pattern (CLBP) and the Walsh Transform, and experimentally found to yield higher accuracy, though with increased computational complexity and an increased size of the feature vector. <s> BIB029 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper, we propose to combine sclera and periocular features for identity verification. The proposal is particularly useful in applications related to face recognition when the face is partially occluded with only the periocular region revealed. Due to its relatively recent exposition in the biometrics literature, particular attention will be paid to sclera feature extraction in this work. For periocular feature extraction, structured random projections were adopted to extract compressed vertical and horizontal components of image features. The binary sclera features are eventually fused with the periocular features at score level. Extensive experiments have been performed on the UBIRIS v1 session1 and session2 databases to assess the verification performance before and after fusion. An improvement of around 5% in equal error rate was observed when fusing sclera with periocular features, compared with the performance before fusion.
<s> BIB030 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and particularly effective in certain scenarios such as extreme occlusions and illumination variations, where traditional face recognition systems are unreliable. In this paper, we first propose a fully automatic, robust and fast graph-cut-based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. Our experiments have been conducted over large datasets from the MBGC and AR databases and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy with an F-Measure of 99.4% and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database.
<s> BIB032 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We consider the problem of matching face against iris images using ocular information. In biometrics, face and iris images are typically acquired using sensors operating in visible (VIS) and near-infrared (NIR) spectra, respectively. This presents a challenging problem of matching images corresponding to different biometric modalities, imaging spectra, and spatial resolutions. We propose the usage of ocular traits that are common between face and iris images (viz., iris and ocular region) to perform matching. Iris matching is performed using commercial software, while ocular regions are matched using three different techniques: Local Binary Patterns (LBP), Normalized Gradient Correlation (NGC), and Joint Dictionary-based Sparse Representation (JDSR). Experimental results on a database containing 1358 images of 704 subjects indicate that the ocular region can provide better performance than the iris biometric under a challenging cross-modality matching scenario. <s> BIB033 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Visible spectrum iris verification has drawn substantial attention due to its feasibility, convenience and acceptable performance. This further allows one to perform iris verification in an unconstrained environment, at a distance and on the move. The integral part of visible iris recognition relies on an accurate texture representation algorithm that can effectively capture the uniqueness of the texture even under challenging conditions like reflection and illumination, among others. In this paper, we explore a new scheme for robust visible iris verification based on Binarized Statistical Image Features (BSIF).
The core idea of the BSIF descriptor is to compute a binary code for each pixel by projecting it onto a subspace which is learned from natural images using Independent Component Analysis (ICA). Thus, BSIF is expected to encode texture features more robustly when compared to contemporary schemes like Local Binary Patterns and its variants. Extensive experiments are carried out on the visible iris dataset captured using both light field and conventional cameras. The proposed feature extraction method is also extended for enhanced periocular recognition. Finally, we also present a comparative analysis with a popular state-of-the-art iris recognition scheme. <s> BIB034 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Face recognition performance degrades significantly under occlusions that occur intentionally or unintentionally due to head gear or hair style. In many incidents captured by surveillance videos, the offenders cover their faces leaving only the periocular region visible. We present an extensive study on periocular region based person identification in video. While previous techniques have handpicked a single best frame from videos, we formulate, for the first time, periocular region based person identification in video as an image-set classification problem. For thorough analysis, we perform experiments on periocular regions extracted automatically from RGB videos, NIR videos and hyperspectral image cubes. Each image-set is represented by four heterogeneous feature types and classified with six state-of-the-art image-set classification algorithms. We propose a novel two-stage inverse Error Weighted Fusion algorithm for feature and classifier score fusion. The proposed two-stage fusion is superior to single-stage fusion.
Comprehensive experiments were performed on four standard datasets: MBGC NIR and visible spectrum (Phillips et al., 2005), CMU Hyperspectral (Denes et al., 2002) and UBIPr (Padole and Proenca, 2012). We obtained average rank-1 recognition rates of 99.8, 98.5, 97.2, and 99.5%, respectively, which are significantly higher than the existing state of the art. Our results demonstrate the feasibility of image-set based periocular biometrics for real world applications. <s> BIB035 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The article proposes a novel multi-scale local feature based periocular recognition technique which is capable of extracting high-dimensional subtle features existing in the iris region as well as low-dimensional gross features in the periphery skin region of the iris. A set of filter banks of different scales is employed to exploit the phase-intensive patterns in visible spectrum periocular images of a subject captured from a distance in a partially non-cooperative scenario. The proposed technique is verified with experiments on near-infrared illumination databases like BATH and CASIA-IrisV3-Lamp. Experiments have been further extended to images from visible spectrum ocular databases like UBIRISv2 and low-resolution eye regions extracted from the FERETv4 face database to establish that the proposed feature performs comparably better than existing local features. To test the robustness of the proposed approach, the low-resolution visible spectrum images of the mentioned databases are converted to grayscale images. The proposed approach yields unique patterns from these grayscale images. The ability to find coarse-to-fine features at multiple scales and different phases accounts for the improved robustness of the proposed approach.
<s> BIB036 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Announcement of an iris and periocular dataset, with 10 different mobile setups. Mobile biometric recognition approach based on iris and periocular information. Improvements from a sensor-specific color calibration technique are reported. Biometric recognition feasibility over mobile cross-sensor setups is shown. Preferable mobile setups are pointed out. In recent years, the usage of mobile devices has increased substantially, as have their capabilities and applications. Extending biometric technologies to these gadgets is desirable because it would facilitate biometric recognition almost anytime, anywhere, and by anyone. The present study focuses on biometric recognition in mobile environments using iris and periocular information as the main traits. Our study makes three main contributions, as follows. (1) We demonstrate the utility of an iris and periocular dataset, which contains images acquired with 10 different mobile setups and the corresponding iris segmentation data. This dataset allows us to evaluate iris segmentation and recognition methods, as well as periocular recognition techniques. (2) We report the outcomes of device-specific calibration techniques that compensate for the different color perceptions inherent in each setup. (3) We propose the application of well-known iris and periocular recognition strategies based on classical encoding and matching techniques, as well as demonstrating how they can be combined to overcome the issues associated with mobile environments. <s> BIB037
Periocular recognition started to gain popularity after the studies by BIB003 BIB008 . Some pioneering works can be traced back to 2002 BIB001 , although the authors did not yet call the local eye area 'periocular'. The approach by BIB008 combined global and local features, concretely LBP, HOG and SIFT. The reported performance of that study was fairly good, setting the framework for the use of the periocular modality. Many works have followed this approach as inspiration, with LBPs and their variations being particularly extensive in the literature Woodard et al., 2010a,b; BIB011 BIB017 BIB021 . The studies of (Woodard et al., 2010a,b) were the first to use NIR data (MBGC portal video), although they selected usable (higher-quality) frames, which mostly appear in the earlier part of the video, where the scale change is not substantial. also presented experiments over NIR portal data from the more difficult FOCS database, but with a different descriptor (BGM). BIB017 also evaluated the impact of covariates such as pose, expression, template aging, glasses and eyelid occlusion. Some works have also employed other features in addition to LBPs BIB011 BIB021 . BIB004 employed LCH (RG color histograms), reporting the best accuracy up to that date on the FRGC database of VW images. BIB011 proposed Leung-Mallik filters (LMF) as texture descriptors over the CASIA v4 Distance database of NIR images. BIB021 evaluated LBP, SIFT, and other local descriptors including SURF, BRISK and ORB over the FERET database. The use of subspace representation methods applied to raw pixels or LBP features is also becoming a popular way either to improve performance or to reduce the feature set BIB005 BIB012 BIB018 BIB022 BIB035 . LBP has also been used in other works analyzing, for example, the impact of plastic surgery or gender transformation BIB023 on periocular recognition (see Section 7).
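Since LBP variants recur throughout the works above, a minimal sketch of the typical patch-wise LBP pipeline may help fix ideas. This is a simplified illustration in plain NumPy (basic 3x3 LBP, grid partitioning, chi-square matching), not the implementation of any cited paper; grid size and distance metric are illustrative choices.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel (3x3 neighbourhood)."""
    c = gray[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from the top-left corner.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = gray[1 + dy: gray.shape[0] - 1 + dy, 1 + dx: gray.shape[1] - 1 + dx]
        # Set one bit per neighbour that is >= the centre pixel.
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(gray, grid=(4, 4)):
    """Concatenated, L1-normalised LBP histograms over a grid of patches."""
    codes = lbp_image(gray)
    h, w = codes.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def chi2_distance(f1, f2, eps=1e-10):
    """Chi-square distance, a common choice for comparing LBP histograms."""
    return 0.5 * np.sum((f1 - f2) ** 2 / (f1 + f2 + eps))
```

Subspace methods, as mentioned above, would then be applied on top of such concatenated histograms to reduce their dimensionality before matching.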
Inspired by BIB003 , BIB006 extended the experiments with additional global and local features to a significantly larger set of the FRGC database with less ideal images (thus the lower accuracy w.r.t. previous studies): WLBP, Laws Masks, DCT, DWT, Force Field transform, SURF, Gabor filters and LoG filters. They later addressed the problem of aging degradation on periocular recognition using the FG-NET database BIB009 , reported to be an issue even at small time lapses BIB008 . To obtain age-invariant features, they first applied preprocessing schemes such as pose correction by Active Appearance Models (AAM) and illumination and periocular region normalization. In a later work, Juefei-Xu and Savvides (2012) also applied WLBPs to study periocular recognition with data from a pan-tilt-zoom (PTZ) camera. As in the study above, they employed different schemes to correct illumination and pose variations. The mentioned work by BIB001 with Gabor filters served as inspiration for BIB013 BIB024 to carry out periocular experiments with several iris databases in NIR and VW, as well as a comparison with the iris modality (Section 6). A variation of this algorithm was fused with the SIFT descriptor, obtaining a leading position in the First ICB Competition on Iris Recognition, ICIR2013 BIB025 . They later proposed a matcher based on Symmetry Assessment by Feature Expansion (SAFE) descriptors BIB026 , which describe neighborhoods around key points by estimating the presence of various symmetric curve families. Gabor filters were also used by BIB027 in their work presenting Local Phase Quantization (LPQ) as a descriptor for periocular recognition. BIB028 also employed Gabor features over four different VW databases, with features reduced by Direct Linear Discriminant Analysis (DLDA) and further classified by a Parzen Probabilistic Neural Network (PPNN). BIB007 evaluated CLBP and GIST descriptors.
They used the UBIRIS v2 database of uncontrolled VW iris images, which includes a number of perturbations intentionally introduced (see Section 2). A number of subsequent works have also made use of UBIRIS v2 BIB014 BIB015 BIB029 BIB024 BIB036 . BIB014 used UBIRIS v2 in their comparison of iris and periocular modalities (Section 6), obtaining better results than BIB007 using just LBPs, although over a smaller set of images. Santos and Hoyle (2012) used LBPs and SIFT, as in BIB003 , in their study combining iris and periocular modalities (Section 6). BIB029 proposed global PIGP features, outperforming the Rank-1 performance of any previous study using UBIRIS v2. They later proposed local PILP features BIB036 , reporting the best Rank-1 periocular performance to date with UBIRIS v2. Proenca (2014) studied the fusion of iris and periocular biometrics (Section 6). Periocular features were extracted from the eyelid region only, consisting of the fusion of LBPs and eyelid shape descriptors. In a subsequent study, they proposed a method to label seven components of the periocular region (see Section 3) with the purpose of demonstrating that regions such as hair or glasses should be avoided, since they are unreliable for recognition (Section 5). They also proposed to use the center of mass of the cornea as the reference point to define the periocular ROI, rather than the pupil center, which is much more sensitive to changes in gaze. Finally, BIB030 used the first version of UBIRIS in their study presenting directional projections or Structured Random Projections (SRP) as periocular features. Other shape features have also been proposed, such as eyebrow shape features, with surprisingly accurate results as a stand-alone trait. Indeed, eyebrows have been used by forensic analysts for years to aid in facial recognition BIB031 , and have been suggested to be the most salient and stable features in a face BIB002 . BIB010 studied several geometrical shape properties over the MGBC/FRGC databases.
They also used the extracted eyebrow features for gender classification (see Section 7). BIB031 proposed an eyebrow shape-based identification system, together with an eyebrow segmentation technique (presented in Section 3). BIB016 presented the first periocular database in the VW range specifically acquired for periocular research (UBIPr). They also proposed to compute the ROI w.r.t. the midpoint of the eye corners (instead of the pupil center), which is less sensitive to gaze variations, leading to a significant improvement (EER from ∼30% to ∼20%). Subsequent studies have managed to improve performance over the UBIPr database using a variety of features BIB019 BIB022 . The UBIPr database is also used by BIB035 in their extensive study evaluating data in the VW (UBIPr, MBGC), NIR (MBGC) and multi-spectral (CMU-H database) ranges, with the reported Rank-1 results being the best published performance to date for the four databases employed. A new database of challenging periocular images in the VW range (CSIP) was presented recently by BIB037 , the first publicly available one captured with smartphones. The paper proposed a device-specific calibration method to compensate for the chromatic disparity that results from the variability of camera sensors and lenses used by different mobile phones. They also compared and fused the periocular and iris modalities (Section 6). Another database captured specifically for cross-spectral periocular research (IMP) was also recently presented by BIB032 , containing data in VW, NIR and night modalities. To match cross-spectral images, they proposed neural networks (NN) to learn the variability caused by different spectra, with several variations of LBP and HOG tested as features. Cross-spectral recognition was also addressed by BIB033 using a proprietary database of NIR and VW images.
Finally, BIB020 and BIB034 presented a database in the VW range acquired with a new type of camera, a Light Field Camera (LFC), which provides multiple images at different focus in a single capture. The LFC overcomes one important disadvantage of sensors in the VW range, which is guaranteeing a well-focused image. Unfortunately, the database has not been made available. Individuals were also acquired with a conventional digital camera, with superior performance observed with the LFC. New periocular features were also presented in the two studies. BIB020 proposed Sparse Representation Classification (SRC), previously used in face recognition. BIB034 proposed Binarized Statistical Image Features (BSIF) for periocular recognition, further utilized as features of the SRC method described. Both BIB020 and BIB034 tested the fusion of iris and periocular modalities as well (Section 6).
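The iris-periocular score fusion evaluated in several of these works is, in its simplest form, a weighted sum of normalised matcher scores. A hedged sketch of that generic scheme follows; the min-max normalisation and the equal default weight are illustrative choices, not those of any specific cited paper.

```python
import numpy as np

def minmax_normalise(scores):
    """Map raw matcher scores to [0, 1]; heterogeneous matchers produce
    scores on different ranges, so normalisation is needed before summing."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def weighted_sum_fusion(iris_scores, peri_scores, w_iris=0.5):
    """Score-level fusion: weighted sum of normalised iris and periocular scores."""
    si = minmax_normalise(iris_scores)
    sp = minmax_normalise(peri_scores)
    return w_iris * si + (1.0 - w_iris) * sp
```

In practice the weight would be tuned on a development set, and the normalisation statistics fixed from training scores rather than recomputed per batch as done here for brevity.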
A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> The principle that underlies the recognition of persons by their iris patterns is the failure of a test of statistical independence on texture phase structure as encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm/sup 2/ over the iris, enabling real-time decisions about personal identity with extremely high confidence. Algorithms first described by the author in 1993 have now been tested in several independent field trials and are becoming widely licensed. This presentation reviews how the algorithms work and presents the results of 9.1 million comparisons among different eye images acquired in trials in Britain, the USA, Korea, and Japan. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> We evaluate the utility of the periocular region appearance cues for biometric identification. 
Even though the periocular region is considered to be a highly discriminative part of the face, its utility as an independent modality or as a soft biometric is still an open-ended question. It is our goal to establish a performance metric for the periocular region features so that their potential use in conjunction with iris or face can be evaluated. In this approach, we employ a local appearance based feature representation, where the image is divided into spatially salient patches, and histograms of texture and color are computed for each patch. The images are matched by computing the distance between the corresponding feature representations using various distance metrics. We report recognition results on images captured in the visible and near-infrared (NIR) spectrum. For the color periocular region data consisting of about 410 subjects and the NIR images of 85 subjects, we obtain Rank-1 recognition rates of 91% and 87%, respectively. Furthermore, we also demonstrate that the recognition performance of the periocular region images is comparable to that of face. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators, resulting in a feature set for representing and matching this region.
A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> Periocular biometrics is the recognition of individuals based on the appearance of the region around the eye. Periocular recognition may be useful in applications where it is difficult to obtain a clear picture of an iris for iris biometrics, or a complete picture of a face for face biometrics. Previous periocular research has used either visible-light (VL) or near-infrared (NIR) light images, but no prior research has directly compared the two illuminations using images with similar resolution. We conducted an experiment in which volunteers were asked to compare pairs of periocular images. Some pairs showed images taken in VL, and some showed images taken in NIR light. Participants labeled each pair as belonging to the same person or to different people. Untrained participants with limited viewing times correctly classified VL image pairs with 88% accuracy, and NIR image pairs with 79% accuracy. For comparison, we presented pairs of iris images from the same subjects. 
In addition, we investigated differences between performance on light and dark eyes and relative helpfulness of various features in the periocular region under different illuminations. We calculated performance of three computer algorithms on the periocular images. Performance for humans and computers was similar. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> The periocular biometric comes into the spotlight recently due to several advantageous characteristics such as easily available and provision of crucial face information. However, many existing works are dedicated to extracting image features using texture based techniques such as local binary pattern (LBP). In view of the simplicity and effectiveness offered, this paper proposes to investigate into projection-based methods for periocular identity verification. Several well established projection-based methods such as principal component analysis, its variants and linear discriminant analysis will be adopted in our performance evaluation based on a subset of FERET face database. Our empirical results show that supervised learning methods significantly outperform those unsupervised learning methods and LBP in terms of equal error rate performance. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eye lids and eye lashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. This paper investigates the question of what is the best periocular region for recognition by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. 
To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions. Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. Rather in NIR images the eye shape is more important than the brow or cheek as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB007
Most periocular algorithms work in a holistic way, defining a ROI around the eye (usually a rectangle) which is fully used for feature extraction. Such a holistic approach implies that components not relevant for identity recognition, such as hair or glasses, might erroneously bias the process. It can also be the case that a feature is not equally discriminative in all parts of the periocular region. The study by BIB005 identified which ocular elements humans find more useful for periocular recognition. With NIR images, eyelashes, tear ducts, eye shape and eyelids were identified as the most useful, while skin was the least useful. For VW data, however, blood vessels and skin were reported as more helpful than eye shape and eyelashes. Similar studies have been carried out with automatic algorithms BIB007 , with results consistent with the human study, despite using several machine algorithms based on different features, and different databases. With NIR images, regions around the iris (including the inner tear duct and lower eyelash) were the most useful, while cheek and skin texture were the least important. With VW images, on the other hand, the skin texture surrounding the eye was found to be very important, with the eyebrow/brow region (when present) also favored in the visible range. This is in line with the assumption largely accepted in the literature that the iris texture is more suited to NIR illumination BIB001 , whereas the periocular modality is best for VW illumination BIB005 BIB003 . This seems to be explained by the fact that NIR illumination reveals the details of the iris texture, while the skin reflects most of the light, appearing over-illuminated (see for example 'BioSec' or other NIR iris examples in Figure 2); on the other hand, the skin texture is clearly visible in the VW range, but only irises with moderate levels of pigmentation image reasonably well in this range BIB002 .
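The patch-based texture encoding used by several of the works above (dividing the periocular ROI into blocks and computing a texture histogram per block, as in BIB003) can be sketched in plain Python. This is an illustrative toy implementation; the function names and the basic 8-neighbour LBP variant are choices of this sketch, not any cited author's code.

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c).

    Each neighbour contributes one bit: 1 if it is >= the center pixel.
    """
    center = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
                  img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= center)

def patch_histograms(img, patch=8):
    """Divide the ROI into patch x patch blocks; one normalized
    256-bin LBP histogram per block (border pixels are skipped)."""
    h, w = len(img), len(img[0])
    feats = []
    for pr in range(0, h - patch + 1, patch):
        for pc in range(0, w - patch + 1, patch):
            hist = [0] * 256
            for r in range(max(pr, 1), min(pr + patch, h - 1)):
                for c in range(max(pc, 1), min(pc + patch, w - 1)):
                    hist[lbp_code(img, r, c)] += 1
            total = sum(hist) or 1
            feats.append([v / total for v in hist])
    return feats

def chi_square(f1, f2, eps=1e-10):
    """Chi-square distance between two lists of patch histograms."""
    return sum((a - b) ** 2 / (a + b + eps)
               for h1, h2 in zip(f1, f2) for a, b in zip(h1, h2))
```

Matching two periocular images then reduces to comparing their concatenated patch histograms, the chi-square distance being a common choice with LBP histograms.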
BIB004 carried out experiments by masking parts of the periocular area over VW images of the FRGC database. They found that inclusion of the eyebrows is beneficial for identification performance, with differences in Rank-1 accuracy of 8-19%, depending on the machine expert. Similarly, they observed that occluding ocular information (iris and sclera) deteriorates performance, with reductions in Rank-1 accuracy of up to 41%. Along the same lines, BIB006 focused on the inclusion of a significant part of the cheek region over VW images of the FERET database, finding that it does not contain significant discriminative information while it increases the image size. Including the eyebrows and the ocular region was also found to be beneficial in this study, corroborating the results of BIB004 . Recently, a method was proposed to label seven components of the periocular region: iris, sclera, eyelashes, eyebrows, hair, skin and glasses. The usefulness of such segmentation is demonstrated by avoiding hair and glasses in the feature encoding and matching stages, obtaining performance improvements by fusion of LBP, HOG and SIFT features BIB004 over the UBIRIS v2 database of VW images (EER reduced from 12.8% to 9.5%).
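Performance in these studies is typically reported as Rank-1 accuracy or Equal Error Rate (EER), the operating point where the false acceptance rate (FAR) equals the false rejection rate (FRR). A minimal sketch of how an EER figure such as the 12.8% or 9.5% above is obtained from genuine and impostor score lists follows; this is illustrative code assuming similarity scores where higher means more alike, not taken from any cited system.

```python
def eer(genuine, impostor):
    """Approximate Equal Error Rate from similarity score lists.

    Sweeps every observed score as an acceptance threshold and returns
    the mean of FAR and FRR at the threshold where they are closest.
    """
    best_diff, eer_val = 2.0, 1.0
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuine rejected
        if abs(far - frr) < best_diff:
            best_diff, eer_val = abs(far - frr), (far + frr) / 2
    return eer_val
```

For perfectly separable score distributions the EER is 0; overlapping genuine and impostor scores push it toward 0.5 (chance level for a verification decision).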
A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> The principle that underlies the recognition of persons by their iris patterns is the failure of a test of statistical independence on texture phase structure as encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm/sup 2/ over the iris, enabling real-time decisions about personal identity with extremely high confidence. Algorithms first described by the author in 1993 have now been tested in several independent field trials and are becoming widely licensed. This presentation reviews how the algorithms work and presents the results of 9.1 million comparisons among different eye images acquired in trials in Britain, the USA, Korea, and Japan. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> The term periocular refers to the facial region in the immediate vicinity of the eye. 
Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> We consider the problem of matching highly non-ideal ocular images where the iris information cannot be reliably used. Such images are characterized by non-uniform illumination, motion and de-focus blur, off-axis gaze, and non-linear deformations. To handle these variations, a single feature extraction and matching scheme is not sufficient. Therefore, we propose an information fusion framework where three distinct feature extraction and matching schemes are utilized in order to handle the significant variability in the input ocular images. 
The Gradient Orientation Histogram (GOH) scheme extracts the global information in the image; the modified Scale Invariant Feature Transform (SIFT) extracts local edge anomalies in the image; and a Probabilistic Deformation Model (PDM) handles nonlinear deformations observed in image pairs. The simple sum rule is used to combine the match scores generated by the three schemes. Experiments on the extremely challenging Face and Ocular Challenge Series (FOCS) database and a subset of the Face Recognition Grand Challenge (FRGC) database confirm the efficacy of the proposed approach to perform ocular recognition. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> Iris recognition from at-a-distance face images has high applications in wide range of applications such as remote surveillance and for civilian identification. This paper presents a completely automated joint iris and periocular recognition approach from the face images acquired at-a-distance. Each of the acquired face images are used to detect and segment periocular images which are then employed for the iris segmentation. We employ complex texture descriptors using Leung-Mallik filters which can acquire multiple periocular features for more accurate recognition. Experimental results presented in this paper achieve 8.1% improvement in recognition accuracy over the best performing approach among SIFT, LBP and HoG presented in the literature. The combination of simultaneously segmented iris and periocular images achieves average rank-one recognition accuracy of 84.5%, i.e., an improvement of 52% than those from only using iris features, on independent test images from 131 subjects. In order to ensure the repeatability of the experiments, the CASIA.v4-distance, i.e., a publicly available database was employed and all the 142 subjects/images were considered in this work. 
<s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> As biometrics has evolved, the iris has remained a preferred trait because its uniqueness, lifetime stability and regular shape contribute to good segmentation and recognition performance. However, commercially deployed systems are characterized by strong acquisition constraints based on active subject cooperation, which is not always achievable or even reasonable for extensive deployment in everyday scenarios. Research on new techniques has been focused on lowering these constraints without significantly impacting performance while increasing system usability, and new approaches have rapidly emerged. Here we propose a novel fusion of different recognition approaches and describe how it can contribute to more reliable noncooperative iris recognition by compensating for degraded images captured in less constrained acquisition setups and protocols under visible wavelengths and varying lighting conditions. The proposed method was tested at the NICE.II (Noisy Iris Challenge Evaluation - Part 2) contest, and its performance was corroborated by a third-place finish. <s> BIB006
Periocular biometrics has rapidly evolved to compete with face and iris recognition. The periocular region appears in both face and iris images, so comparison and/or fusion with these modalities has also been proposed. This section gives an overview of these works, with a summary provided in Table 4 . Under difficult conditions, such as acquisition portals BIB004 , distant acquisition BIB005 , smartphones , webcams or digital cameras , the periocular modality is shown to be clearly superior to the iris modality, mostly due to the small size of the iris or the use of visible illumination. Visible illumination is predominant in relaxed or uncooperative setups due to the impossibility of using NIR illumination. Iris texture is more suited to the NIR spectrum, since this type of lighting reveals the details of the iris texture BIB001 , while the skin reflects most of the light, appearing over-illuminated. On the other hand, the skin texture is clearly visible in the VW range, but only irises with moderate levels of pigmentation image reasonably well in this range BIB002 . Nevertheless, despite the poor performance shown by the iris in the visible spectrum, fusion with periocular is shown to improve the performance in many cases as well BIB006 . Similar trends are observed with face: under difficult conditions, such as blur or downsampling, the periocular modality performs considerably better. This is also the case with partial face occlusion, where the performance of full-face matchers is severely degraded BIB003 .
A Survey on Periocular Biometrics Research <s> Iris Modality <s> The principle that underlies the recognition of persons by their iris patterns is the failure of a test of statistical independence on texture phase structure as encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm/sup 2/ over the iris, enabling real-time decisions about personal identity with extremely high confidence. Algorithms first described by the author in 1993 have now been tested in several independent field trials and are becoming widely licensed. This presentation reviews how the algorithms work and presents the results of 9.1 million comparisons among different eye images acquired in trials in Britain, the USA, Korea, and Japan. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> We evaluate the utility of the periocular region appearance cues for biometric identification. Even though periocular region is considered to be a highly discriminative part of a face, its utility as an independent modality or as a soft biometric is still an open ended question. It is our goal to establish a performance metric for the periocular region features so that their potential use in conjunction with iris or face can be evaluated. In this approach, we employ the local appearance based feature representation, where the image is divided into spatially salient patches, and histograms of texture and color are computed for each patch. The images are matched by computing the distance between the corresponding feature representations using various distance metrics. We report recognition results on images captured in the visible and near-infrared (NIR) spectrum. 
For the color periocular region data consisting of about 410 subjects and the NIR images of 85 subjects, we obtain the Rank-1 recognition rate of 91% and 87% respectively. Furthermore, we also demonstrate that recognition performance of the periocular region images is comparable to that of face. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> We consider the problem of matching highly non-ideal ocular images where the iris information cannot be reliably used. 
Such images are characterized by non-uniform illumination, motion and de-focus blur, off-axis gaze, and non-linear deformations. To handle these variations, a single feature extraction and matching scheme is not sufficient. Therefore, we propose an information fusion framework where three distinct feature extraction and matching schemes are utilized in order to handle the significant variability in the input ocular images. The Gradient Orientation Histogram (GOH) scheme extracts the global information in the image; the modified Scale Invariant Feature Transform (SIFT) extracts local edge anomalies in the image; and a Probabilistic Deformation Model (PDM) handles nonlinear deformations observed in image pairs. The simple sum rule is used to combine the match scores generated by the three schemes. Experiments on the extremely challenging Face and Ocular Challenge Series (FOCS) database and a subset of the Face Recognition Grand Challenge (FRGC) database confirm the efficacy of the proposed approach to perform ocular recognition. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> As biometrics has evolved, the iris has remained a preferred trait because its uniqueness, lifetime stability and regular shape contribute to good segmentation and recognition performance. However, commercially deployed systems are characterized by strong acquisition constraints based on active subject cooperation, which is not always achievable or even reasonable for extensive deployment in everyday scenarios. Research on new techniques has been focused on lowering these constraints without significantly impacting performance while increasing system usability, and new approaches have rapidly emerged. 
Here we propose a novel fusion of different recognition approaches and describe how it can contribute to more reliable noncooperative iris recognition by compensating for degraded images captured in less constrained acquisition setups and protocols under visible wavelengths and varying lighting conditions. The proposed method was tested at the NICE.II (Noisy Iris Challenge Evaluation - Part 2) contest, and its performance was corroborated by a third-place finish. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> In this paper we proposed a novel multimodal biometric approach using iris and periocular biometrics to improve the performance of iris recognition in case of non-ideal iris images. Though iris recognition has the highest accuracy among all the available biometrics, still the noises at the image acquisition stage degrade the recognition accuracy. The periocular region can act as a supporting biometric, in case the iris is obstructed by several noises. The periocular region is the part of the face immediately surrounding the eye. The approach is based on fusion of features of iris and periocular region. The approach has shown significant improvement in the performance of iris recognition. The evaluation was done on a test database created from the images of UBIRIS V2 and CASIA iris interval database. We achieved identification accuracy upto 96 % on the test database. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> Iris recognition from at-a-distance face images has high applications in wide range of applications such as remote surveillance and for civilian identification. This paper presents a completely automated joint iris and periocular recognition approach from the face images acquired at-a-distance. Each of the acquired face images are used to detect and segment periocular images which are then employed for the iris segmentation. 
We employ complex texture descriptors using Leung-Mallik filters which can acquire multiple periocular features for more accurate recognition. Experimental results presented in this paper achieve 8.1% improvement in recognition accuracy over the best performing approach among SIFT, LBP and HoG presented in the literature. The combination of simultaneously segmented iris and periocular images achieves average rank-one recognition accuracy of 84.5%, i.e., an improvement of 52% than those from only using iris features, on independent test images from 131 subjects. In order to ensure the repeatability of the experiments, the CASIA.v4-distance, i.e., a publicly available database was employed and all the 142 subjects/images were considered in this work. <s> BIB007 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> Periocular biometrics is the recognition of individuals based on the appearance of the region around the eye. Periocular recognition may be useful in applications where it is difficult to obtain a clear picture of an iris for iris biometrics, or a complete picture of a face for face biometrics. Previous periocular research has used either visible-light (VL) or near-infrared (NIR) light images, but no prior research has directly compared the two illuminations using images with similar resolution. We conducted an experiment in which volunteers were asked to compare pairs of periocular images. Some pairs showed images taken in VL, and some showed images taken in NIR light. Participants labeled each pair as belonging to the same person or to different people. Untrained participants with limited viewing times correctly classified VL image pairs with 88% accuracy, and NIR image pairs with 79% accuracy. For comparison, we presented pairs of iris images from the same subjects. 
In addition, we investigated differences between performance on light and dark eyes and relative helpfulness of various features in the periocular region under different illuminations. We calculated performance of three computer algorithms on the periocular images. Performance for humans and computers was similar. <s> BIB008 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> Iris and Periocular biometrics has proved its effectiveness in accurately verifying the subject of interest. Recent improvements in visible spectrum Iris and Periocular verification have further boosted its application to unconstrained scenarios. However existing visible Iris verification systems suffer from low quality samples because of the limited depth-of-field exhibited by the conventional Iris capture systems. In this work, we propose a robust Iris and Periocular erification scheme in visible spectrum using Light Field Camera (LFC). Since the light field camera can provide multiple focus images in single capture, we are motivated to investigate its applicability for robust Iris and Periocular verification by exploring its all-in-focus property. Further, the use of all-in-focus property will extend the depth-of-focus and overcome the problem of focus that plays a predominant role in robust Iris and Periocular verification. We first collect a new Iris and Periocular biometric database using both light field and conventional camera by simulating real life scenarios. We then propose a new scheme for feature extraction and classification by exploring the combination of Local Binary Patterns (LBP) and Sparse Reconstruction Classifier (SRC). Extensive experiments are carried out on the newly collected database to bring out the merits and demerits on applicability of light field camera for Iris and Periocular verification. Finally, we also present the results on combining the information from Iris and Periocular biometrics using weighted sum rule. 
<s> BIB009 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> Visible spectrum iris verification has drawn substantial attention due to the feasibility, convenience and also accepted per-formance. This further allows one to perform the iris verification in an unconstrained environment at-a-distance and on the move. The integral part of the visible iris recognition rely on the accurate texture representation algorithm that can effectively capture the uniqueness of the texture even in the challenging conditions like reflection, illumination among others. In this paper, we explore a new scheme for the robust visible iris verification based on Binarized Statistical Image Features (BSIF). The core idea of the BSIF descriptor is to compute the binary code for each pixel by projecting them on the subspace which is learned from natural images using Independent Component Analysis (ICA). Thus, the BSIF is expected to encode the texture features more robustly when compared to contemporary schemes like Local Binary Patterns and its variants. The extensive experiments are carried out on the visible iris dataset captured using both Light field and conventional camera. The proposed feature extraction method is also extended for enhanced periocular recognition. Finally, we also present a comparative analysis with popular state-of-the-art iris recognition scheme. <s> BIB010 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> We present a new system for biometric recognition using periocular images. The feature extraction method employed describes neighborhoods around key points by projection onto harmonic functions which estimates the presence of a series of various symmetric curve families around such key points. The isocurves of such functions are highly symmetric w.r.t. The key points and the estimated coefficients have well defined geometric interpretations. 
The descriptors used are referred to as Symmetry Assessment by Feature Expansion (SAFE). Extraction is done across a set of discrete points of the image, uniformly distributed in a rectangular-shaped grid positioned in the eye centre. Experiments are done with two databases of iris data, one acquired with a close-up iris camera, and another in visible light with a webcam. The two databases have been annotated manually, meaning that the radius and centre of the pupil and sclera circles are available, which are used as input for the experiments. Results show that this new system has a performance comparable with other periocular recognition approaches. We particularly carry out comparative experiments with another periocular system based on Gabor features extracted from the same set of grid points, with the fusion of the two systems resulting in an improved performance. We also evaluate an iris texture matcher, providing fusion results with the periocular systems as well. <s> BIB011 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. 
Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer. Second, they do not share particularly sensitivity to any image covariate, which accounts for augmenting the robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points for consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. <s> BIB012 </s> A Survey on Periocular Biometrics Research <s> Iris Modality <s> Announcement of an iris and periocular dataset, with 10 different mobile setups.Mobile biometric recognition approach based on iris and periocular information.Improvements from a sensor-specific color calibration technique are reported.Biometric recognition feasibility over mobile cross-sensor setups is shown.Preferable mobile setups are pointed out. In recent years, the usage of mobile devices has increased substantially, as have their capabilities and applications. Extending biometric technologies to these gadgets is desirable because it would facilitate biometric recognition almost anytime, anywhere, and by anyone. The present study focuses on biometric recognition in mobile environments using iris and periocular information as the main traits. Our study makes three main contributions, as follows. (1) We demonstrate the utility of an iris and periocular dataset, which contains images acquired with 10 different mobile setups and the corresponding iris segmentation data. 
This dataset allows us to evaluate iris segmentation and recognition methods, as well as periocular recognition techniques. (2) We report the outcomes of device-specific calibration techniques that compensate for the different color perceptions inherent in each setup. (3) We propose the application of well-known iris and periocular recognition strategies based on classical encoding and matching techniques, as well as demonstrating how they can be combined to overcome the issues associated with mobile environments. <s> BIB013
Woodard et al. (2010a) evaluated NIR portal videos of the MBGC database. The periocular modality showed considerable superiority, with the performance further improved by the fusion, demonstrating the benefits of fusing periocular and iris information in non-ideal conditions. and BIB004 also used NIR portal data from the FOCS database. Despite using different feature extraction methods, they also concluded that the periocular modality is considerably superior to the iris modality on such difficult data. BIB005 utilized VW images from the UBIRIS v2 database, which has several perturbations deliberately introduced. As with the above studies on NIR data, combining periocular and iris features improved the overall performance over difficult VW data too.

Table 4. Overview of existing works on comparison and fusion of the periocular modality with other biometric modalities. The acronyms of this table are fully defined in the text or in the referenced papers. Features with best accuracy are those giving the best fusion results. If no fusion results are available, they indicate the best features of each individual modality. The following acronyms are not defined elsewhere: 'w-sum'='weighted sum', 'LR'='logistic regression', 'NN'='Neural Networks', 'TERELM'='Total Error Rate Minimization', 'LG'='Log-Gabor'. Fragment of the table (periocular / face accuracy):
FRGC - blur (kernel=7 pix, σ=1.5): 77.86% / 31.09%
FRGC - downsampling (40%): 97.76% / 70.40%
FRGC - uncontrolled lighting: 11.17% / 12.18%
BIB003 HOG, LBP, SIFT vs. FaceVACS - FRGC (1704 VW images): 87.32% / 99.77%
FRGC - partial face: ∼84% / 39.55% / 46.49%

BIB006 used a virtual database, with VW periocular data from UBIRIS v2 and NIR iris data from CASIA Interval. Fusion was carried out at the feature level, with vectors from the two modalities pooled together. They also tested a simple mean fusion rule at the score level, which resulted in a smaller performance improvement.
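The two fusion levels tested by BIB006 can be sketched as follows (vector dimensions and scores are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical feature vectors from two modalities for the same sample.
periocular_feat = np.random.rand(128)
iris_feat = np.random.rand(64)

def feature_level_fusion(f1, f2):
    """Pool two per-modality vectors into one signature,
    normalizing each first so neither dominates the distance."""
    f1 = f1 / (np.linalg.norm(f1) + 1e-12)
    f2 = f2 / (np.linalg.norm(f2) + 1e-12)
    return np.concatenate([f1, f2])

def score_level_mean_fusion(scores):
    """Simple mean rule over the matchers' similarity scores."""
    return float(np.mean(scores))

fused_vector = feature_level_fusion(periocular_feat, iris_feat)
fused_score = score_level_mean_fusion([0.82, 0.67])
```

Feature-level fusion produces a single, longer template matched once; score-level fusion keeps the matchers independent and only combines their outputs.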
BIB007 used at-a-distance images from the CASIA v4 Distance database, with a considerable performance improvement w.r.t. the individual modalities. BIB009 used a VW Light Field Camera (LFC), which provides multiple images at different focus depths in a single capture. Individuals were also acquired with a conventional digital camera. Superior performance with the LFC was observed for both modalities, and it was reinforced even further by the fusion. The same databases were used in a later study by BIB010 , obtaining even better performance. BIB013 used their new CSIP database, acquired with four different mobile phones in 10 different setups. Using a sensor-specific color correction technique, they achieved a cross-sensor periocular EER of 15.5%. Despite the poor performance of Gabor wavelets applied to the iris modality (34.4% EER), they achieved a 14.5% EER with the fusion of the two modalities. evaluated their Gabor-based periocular system and a set of four iris matchers. They used five different databases, three in the NIR and two in the VW range, observing that the performance of the iris matchers was, in general, much better than that of the periocular matcher with NIR data, and the opposite with VW data. This is in line with the literature, which indicates that the iris modality is more suited to NIR illumination BIB001 , whereas the periocular modality is best suited to VW illumination BIB008 BIB002 . With regard to the fusion, despite the poor performance of the iris matchers with VW data, their fusion with the periocular system resulted in important performance improvements. This is remarkable given the adverse acquisition conditions and the low resolution of the VW databases used. They further extended the study with their SAFE matcher BIB011 and a SIFT matcher. Here, the availability of more experts made it possible to obtain performance improvements through fusion on NIR databases as well, something not observed in their previous studies.
BIB012 proposed the fusion of an iris matcher based on multi-lobe differential filters (MLDF) with a periocular expert that parameterizes the shape of the eyelids, over VW data from the FRGC and UBIRIS v2 databases, with an average EER improvement of 20%.
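Since most results in this section are reported as EER, a minimal sketch of how an EER could be computed from genuine and impostor score sets (toy scores, not taken from any cited study):

```python
import numpy as np

def compute_eer(genuine_scores, impostor_scores):
    """Equal Error Rate: operating point where the false rejection
    rate (FRR) equals the false acceptance rate (FAR).
    Convention here: higher score = better match."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = float("inf"), 1.0
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # genuine pairs rejected
        far = np.mean(impostor_scores >= t)  # impostor pairs accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return float(eer)

# Toy score distributions with some overlap.
genuine = np.array([0.9, 0.8, 0.7, 0.6])
impostor = np.array([0.65, 0.4, 0.3, 0.2])
eer = compute_eer(genuine, impostor)
```

Sweeping only the observed scores as thresholds is enough here, since FRR and FAR can change only at those values.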
A Survey on Periocular Biometrics Research <s> Face Modality <s> The human periocular region is known to be one of the most discriminative regions of a face image, and recent studies have indicated its potential as a biometric trait. However, the bulk of the previous work concerning the periocular region consists of feasibility studies that report recognition results on controlled data, and lacks rigorous performance evaluation, thus leaving various open questions regarding the effectiveness of periocular region as a biometrie modality. In this paper we present a performance evaluation of a local periocular texture based recognition approach. Specifically, the paper investigates the effect of input image quality on recognition performance, the uniqueness of texture between different color channels, and texture information present in different color channels. Recognition results of periocular texture features are compared to those of full face texture features and suggest that periocular texture features are robust to varying image quality. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Face Modality <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. 
A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Face Modality <s> This paper introduces a novel face recognition problem domain: the medically altered face for gender transformation. A data set of >1.2 million face images was constructed from wild videos obtained from You Tube of 38 subjects undergoing hormone replacement therapy (HRT) for gender transformation over a period of several months to three years. The HRT achieves gender transformation by severely altering the balance of sex hormones, which causes changes in the physical appearance of the face and body. This paper explores that the impact of face changes due to hormone manipulation and its ability to disguise the face and hence, its ability to effect match rates. Face disguise is achieved organically as hormone manipulation causes pathological changes to the body resulting in a modification of face appearance. This paper analyzes and evaluates face components versus full face algorithms in an attempt to identify regions of the face that are resilient to the HRT process. 
The experiments reveal that periocular face components using simple texture-based face matchers, local binary patterns, histogram of gradients, and patch-based local binary patterns out performs matching against the full face. Furthermore, the experiments reveal that a fusion of the periocular using one of the simple texture-based approaches (patched-based local binary patterns) out performs two Commercial Off The Shelf Systems full face systems: 1) PittPatt SDK and 2) Cognetic FaceVACs v8.5. The evaluated periocular-fused patch-based face matcher outperforms PittPatt SDK v5.2.2 by 76.83% and Cognetic FaceVACS v8.5 by 56.23% for rank-1 accuracy. <s> BIB003
Smeraldi and Bigün (2002) presented a face recognition expert based on Gabor filters applied to each facial landmark (eyes and mouth), with a different classifier employed at each landmark. Face authentication was performed by fusing the three classifiers' outputs. In this way, the face expert is really a fusion of two eye (periocular) experts and one mouth expert. BIB001 used LBP on the FRGC database, extracted both from the periocular region and from the full face. Beyond the best accuracy obtained (first sub-row in Table 4 ), the interest lies in the impact of input image quality: at extreme values of blur or down-sampling, periocular recognition performed significantly better than face recognition. On the other hand, both face and periocular performance under uncontrolled lighting were very poor, indicating that LBPs are not well suited to this scenario. Another study of the effect of non-ideal conditions was carried out by BIB002 . They masked the face region below the nose to simulate partial face occlusion, showing that face performance is severely degraded in the presence of occlusion, whereas the periocular modality is much more robust. Jillela and Ross (2012) studied the problem of matching face images before and after plastic surgery. The rank-one recognition performance reported for the fusion of periocular and face matchers (Rank-1: 87.4%) was the highest accuracy observed in the literature on the utilized database up to the publication of the study. As full-face matchers, they used two COTS systems: PittPatt and VeriLook. BIB003 extracted features from different regions of the face (periocular, nose, mouth) and from the full face to study the impact of face changes due to gender transformation. They found that the periocular region greatly outperformed the other face components (nose, mouth) and the full face.
They also observed (not reported in Table 4 ) that their periocular approach outperformed two full-face Commercial Off-The-Shelf (COTS) systems: PittPatt (by 76.83% in Rank-1 accuracy) and Cognetic FaceVACs (by 56.23%).
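As a rough illustration of the texture descriptors used in several of these studies, a region-based LBP histogram could be computed as follows (a basic 8-neighbour variant with a coarse spatial grid; the cited works use more elaborate configurations):

```python
import numpy as np

def lbp_histogram(img, grid=(2, 2)):
    """Basic 8-neighbour LBP codes pooled into per-cell histograms,
    a simplified version of region-based LBP periocular descriptors."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]  # interior pixels (centers)
    # Eight neighbours, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    # Concatenate per-cell histograms to keep a coarse spatial layout.
    gy, gx = grid
    hists = []
    for i in range(gy):
        for j in range(gx):
            cell = code[i * code.shape[0] // gy:(i + 1) * code.shape[0] // gy,
                        j * code.shape[1] // gx:(j + 1) * code.shape[1] // gx]
            h, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(h / max(cell.size, 1))
    return np.concatenate(hists)

feat = lbp_histogram(np.arange(64, dtype=np.uint8).reshape(8, 8))
```

Two such histograms would then be compared with a histogram distance (e.g. chi-square), with smaller distance indicating a more likely match.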
A Survey on Periocular Biometrics Research <s> Soft-biometrics, gender transformation and plastic surgery analysis <s> With periocular biometrics gaining attention recently, the goal of this paper is to investigate the effectiveness of local appearance features extracted from the periocular region images for soft biometrie classification. We extract gender and ethnicity information from the periocular region images using grayscale pixel intensities and periocular texture computed by Local Binary Patterns as our features and a SVM classifier. Results are presented on the visible spectrum periocular images obtained from the FRGC face dataset. For 4232 periocular images of 404 subjects, we obtain a baseline gender and ethnicity classification accuracy of 93% and 91%, respectively, using 5-fold cross validation. Furthermore, we show that fusion of the soft biometrie information obtained from our classification approach with the texture based periocular recognition approach results in an overall performance improvement. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Soft-biometrics, gender transformation and plastic surgery analysis <s> The periocular region, the region of the face surrounding the eyes, has gained increasing attention in biometrics in recent years. This region of the face is of particular interest when trying to identify a person whose face is partially occluded. We propose the novel idea of applying the information obtained from the periocular region to identify the gender of a person, which is a type of soft biometrie recognition. We gradually narrow the region of interest of the face to explore the feasibility of using smaller, eye-centered regions for building a robust gender classifier around the periocular region alone. Our experimental results show that at least an 85% classification rate is still obtainable using only the periocular region with a database of 936 low resolution images collected from the web. 
<s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Soft-biometrics, gender transformation and plastic surgery analysis <s> A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, the recognition performance is severely affected when encountering non-ideal images caused by motion blur, poor contrast, various expressions, or illumination artifacts. In this paper, we investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extracted various shape-based features from the eyebrow images and compared three different classification methods: Minimum Distance Classifier (MD), Linear Discriminant Analysis Classifier (LDA) and Support Vector Machine Classifier (SVM). The methods were tested on images from two publicly available facial image databases: The Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database. Obtained recognition rates of 90% using the MBGC database and 75% using the FRGC database as well as gender classification recognition rates of 96% and 97% for each database respectively, suggests the shape-based eyebrow features maybe be used for biometric recognition and soft biometric classification. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Soft-biometrics, gender transformation and plastic surgery analysis <s> This paper investigates the effectiveness of local appearance features such as Local Binary Patterns, Histograms of Oriented Gradient, Discrete Cosine Transform, and Local Color Histograms extracted from periocular region images for soft classification on gender and ethnicity. These features are classified by Artificial Neural Network or Support Vector Machine. Experiments are performed on visible and near-IR spectrum images derived from FRGC and MBGC datasets. 
For 4232 FRGC images of 404 subjects, we obtain baseline gender and ethnicity classifications of 97.3% and 94%. For 350 MBGC images of 60 subjects, we obtain baseline gender and ethnicity results of 90% and 89%. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Soft-biometrics, gender transformation and plastic surgery analysis <s> In recent years, the research over emerging trends of biometric has grabbed a lot of attention. Periocular biometric is one such field. Researchers have made attempts to extract computationally intensive local features from high quality periocular images. In contrast, this paper proposes a novel approach of extracting global features from periocular region of poor quality grayscale images for gender classification. Global gender features are extracted using independent component analysis and are evaluated using conventional neural network techniques, and further their performance is compared. All relevant experiments are held on periocular region cropped from FERET face database. The results exhibit promising classification accuracy establishing the fact that the approach can work in fusion with existing facial gender classification systems to help in improving its accuracy. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Soft-biometrics, gender transformation and plastic surgery analysis <s> The task of successfully matching face images obtained before and after plastic surgery is a challenging problem. The degree to which a face is altered depends on the type and number of plastic surgeries performed, and it is difficult to model such variations. Existing approaches use learning based methods that are either computationally expensive or rely on a set of training images. In this work, a fusion approach is proposed that combines information from the face and ocular regions to enhance recognition performance in the identification mode. 
The proposed approach provides the highest reported recognition performance on a publicly accessible plastic surgery database, with a rank-one accuracy of 87.4%. Compared to existing approaches, the proposed approach is not learning based and reduces computational requirements. Furthermore, a systematic study of the matching accuracies corresponding to various types of surgeries is presented. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Soft-biometrics, gender transformation and plastic surgery analysis <s> This paper introduces a novel face recognition problem domain: the medically altered face for gender transformation. A data set of >1.2 million face images was constructed from wild videos obtained from You Tube of 38 subjects undergoing hormone replacement therapy (HRT) for gender transformation over a period of several months to three years. The HRT achieves gender transformation by severely altering the balance of sex hormones, which causes changes in the physical appearance of the face and body. This paper explores that the impact of face changes due to hormone manipulation and its ability to disguise the face and hence, its ability to effect match rates. Face disguise is achieved organically as hormone manipulation causes pathological changes to the body resulting in a modification of face appearance. This paper analyzes and evaluates face components versus full face algorithms in an attempt to identify regions of the face that are resilient to the HRT process. The experiments reveal that periocular face components using simple texture-based face matchers, local binary patterns, histogram of gradients, and patch-based local binary patterns out performs matching against the full face. 
Furthermore, the experiments reveal that a fusion of the periocular using one of the simple texture-based approaches (patched-based local binary patterns) out performs two Commercial Off The Shelf Systems full face systems: 1) PittPatt SDK and 2) Cognetic FaceVACs v8.5. The evaluated periocular-fused patch-based face matcher outperforms PittPatt SDK v5.2.2 by 76.83% and Cognetic FaceVACS v8.5 by 56.23% for rank-1 accuracy. <s> BIB007
Besides the task of personal recognition, a number of other tasks have also been proposed using features from the periocular region, as shown in Table 5 . Soft-biometrics refers to the classification of an individual into broad categories such as gender, ethnicity, age, height, weight, hair color, etc. While these cannot be used to uniquely identify a subject, they can reduce the search space or provide additional information to boost recognition performance. Due to the popularity of facial recognition, face images have frequently been used to obtain both gender and ethnicity information with high accuracy (>96%; for a summary see BIB004 ). Recently, it has also been suggested that periocular features can be used for soft-biometrics classification BIB005 BIB004 BIB001 BIB002 . With accuracies comparable to those obtained using the entire face, this indicates the effectiveness of the periocular region by itself for soft-biometrics purposes. BIB002 addressed gender classification using a database of 936 low-resolution images collected from the web (Flickr), reporting an 85% classification accuracy. BIB004 studied gender and ethnicity classification over the FRGC and MBGC databases, with an accuracy of 89% or higher in both classification tasks. In a previous paper, they also showed that fusion of the soft-biometrics information with texture features from the periocular region can improve recognition performance BIB001 . BIB005 studied the problem of gender classification with images from the FERET database, reporting a classification accuracy of 90%. An interesting study by BIB003 made use of shape features from the eyebrow region only, with very good results over the MBGC and FRGC databases, comprising NIR and VW data respectively (96% and 97% gender classification rates). Other studies deal with the effect of plastic surgery or gender transformation on recognition performance, as presented in Section 6.3 (see Figure 4 as well).
BIB007 studied the impact of gender transformation via Hormone Replacement Therapy (HRT), which causes gradual changes in the physical appearance of the face and body over the course of the treatment. They built a database of >1.2 million face images from YouTube videos, with data from 38 subjects undergoing HRT over periods of several months to three years, and observed that the accuracy of the periocular region greatly outperformed other face components (nose, mouth) and the full face. Moreover, face matchers began to fail after only a few months of HRT treatment. BIB006 studied the matching of face images before and after plastic surgery. The work proposed a fusion recognition approach that combines face and periocular information, outperforming previous studies where only full-face matchers were used.
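A minimal sketch of soft-biometric classification from periocular feature vectors. The studies above train an SVM on LBP/HOG-style features; here a dependency-free nearest-centroid classifier and synthetic two-class features stand in purely for illustration:

```python
import numpy as np

class NearestCentroid:
    """Tiny stand-in for the SVM classifiers used in the cited studies:
    each class is represented by the mean of its training vectors,
    and a query is assigned to the nearest class centroid."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[np.argmin(d, axis=1)]

# Synthetic features: two well-separated clusters standing in for two
# soft-biometric classes (e.g. gender labels 0 and 1).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
                     rng.normal(1.0, 0.1, (20, 8))])
y_train = np.array([0] * 20 + [1] * 20)

clf = NearestCentroid().fit(X_train, y_train)
pred = clf.predict(np.array([[0.05] * 8, [0.95] * 8]))
```

In a real pipeline, `X_train` would hold periocular texture descriptors (e.g. the LBP histograms above) and cross-validation would be used to report the classification rate.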
A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> As biometric technology is increasingly deployed, it will be common to replace parts of operational systems with newer designs. The cost and inconvenience of reacquiring enrolled users when a new vendor solution is incorporated makes this approach difficult and many applications will require to deal with information from different sources regularly. These interoperability problems can dramatically affect the performance of biometric systems and thus, they need to be overcome. Here, we describe and evaluate the ATVS-UAM fusion approach submitted to the quality-based evaluation of the 2007 BioSecure Multimodal Evaluation Campaign, whose aim was to compare fusion algorithms when biometric signals were generated using several biometric devices in mismatched conditions. Quality measures from the raw biometric data are available to allow system adjustment to changing quality conditions due to device changes. This system adjustment is referred to as quality-based conditional processing. The proposed fusion approach is based on linear logistic regression, in which fused scores tend to be log-likelihood-ratios. This allows the easy and efficient combination of matching scores from different devices assuming low dependence among modalities. In our system, quality information is used to switch between different system modules depending on the data source (the sensor in our case) and to reject channels with low quality data during the fusion. We compare our fusion approach to a set of rule-based fusion schemes over normalized scores. Results show that the proposed approach outperforms all the rule-based fusion schemes. We also show that with the quality-based channel rejection scheme, an overall improvement of 25% in the equal error rate is obtained. 
<s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> Among the available biometric traits such as face, iris and fingerprint, there is an active research being carried out in the direction of unconstrained biometrics. Periocular recognition has proved its effectiveness and is regarded as complementary to iris recognition. The main objectives of this paper are three-fold: 1) to announce the availability of periocular dataset, which has a variability in terms of scale change (due to camera-subject distance), pose variation and non-uniform illumination; 2) to investigate the performance of periocular recognition methods with the presence of various degradation factors; 3) propose a new initialization strategy for the definition of the periocular region-of-interest (ROI), based on the geometric mean of eye corners. Our experiments confirm that performance can be consistently improved by this initialization method, when compared to the classical technique. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> Periocular biometrics is the recognition of individuals based on the appearance of the region around the eye. Periocular recognition may be useful in applications where it is difficult to obtain a clear picture of an iris for iris biometrics, or a complete picture of a face for face biometrics. Previous periocular research has used either visible-light (VL) or near-infrared (NIR) light images, but no prior research has directly compared the two illuminations using images with similar resolution. We conducted an experiment in which volunteers were asked to compare pairs of periocular images. Some pairs showed images taken in VL, and some showed images taken in NIR light. Participants labeled each pair as belonging to the same person or to different people. 
Untrained participants with limited viewing times correctly classified VL image pairs with 88% accuracy, and NIR image pairs with 79% accuracy. For comparison, we presented pairs of iris images from the same subjects. In addition, we investigated differences between performance on light and dark eyes and relative helpfulness of various features in the periocular region under different illuminations. We calculated performance of three computer algorithms on the periocular images. Performance for humans and computers was similar. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> Iris and Periocular biometrics has proved its effectiveness in accurately verifying the subject of interest. Recent improvements in visible spectrum Iris and Periocular verification have further boosted its application to unconstrained scenarios. However existing visible Iris verification systems suffer from low quality samples because of the limited depth-of-field exhibited by the conventional Iris capture systems. In this work, we propose a robust Iris and Periocular erification scheme in visible spectrum using Light Field Camera (LFC). Since the light field camera can provide multiple focus images in single capture, we are motivated to investigate its applicability for robust Iris and Periocular verification by exploring its all-in-focus property. Further, the use of all-in-focus property will extend the depth-of-focus and overcome the problem of focus that plays a predominant role in robust Iris and Periocular verification. We first collect a new Iris and Periocular biometric database using both light field and conventional camera by simulating real life scenarios. We then propose a new scheme for feature extraction and classification by exploring the combination of Local Binary Patterns (LBP) and Sparse Reconstruction Classifier (SRC). 
Extensive experiments are carried out on the newly collected database to bring out the merits and demerits on applicability of light field camera for Iris and Periocular verification. Finally, we also present the results on combining the information from Iris and Periocular biometrics using weighted sum rule. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eye lids and eye lashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. This paper investigates the question of what is the best periocular region for recognition by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions. Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. Rather in NIR images the eye shape is more important than the brow or cheek as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> Visible spectrum iris verification has drawn substantial attention due to the feasibility, convenience and also accepted per-formance. This further allows one to perform the iris verification in an unconstrained environment at-a-distance and on the move. 
The integral part of the visible iris recognition rely on the accurate texture representation algorithm that can effectively capture the uniqueness of the texture even in the challenging conditions like reflection, illumination among others. In this paper, we explore a new scheme for the robust visible iris verification based on Binarized Statistical Image Features (BSIF). The core idea of the BSIF descriptor is to compute the binary code for each pixel by projecting them on the subspace which is learned from natural images using Independent Component Analysis (ICA). Thus, the BSIF is expected to encode the texture features more robustly when compared to contemporary schemes like Local Binary Patterns and its variants. The extensive experiments are carried out on the visible iris dataset captured using both Light field and conventional camera. The proposed feature extraction method is also extended for enhanced periocular recognition. Finally, we also present a comparative analysis with popular state-of-the-art iris recognition scheme. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> In biometrics research, the periocular region has been regarded as an interesting trade-off between the face and the iris, particularly in unconstrained data acquisition setups. As in other biometric traits, the current challenge is the development of more robust recognition algorithms. Having investigated the suitability of the ‘elastic graph matching’ (EGM) algorithm to handle non-linear distortions in the periocular region because of facial expressions, the authors observed that vertices locations often not correspond to displacements in the biological tissue. Hence, they propose a ‘globally coherent’ variant of EGM (GC-EGM) that avoids sudden local angular movements of vertices while maintains the ability to faithfully model non-linear distortions. 
Two main adaptations were carried out: (i) a new term for measuring vertices similarity and (ii) a new term in the edges-cost function penalises changes in orientation between the model and test graphs. Experiments were carried out both in synthetic and real data and point for the advantages of the proposed algorithm. Also, the recognition performance when using the EGM and GC-EGM was compared, and statistically significant improvements in the error rates were observed when using the GC-EGM variant. . <s> BIB007 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> Automated and accurate biometrics identification using periocular imaging has wide range of applications from human surveillance to improving performance for iris recognition systems, especially under less-constrained imaging environment. Restricted Boltzmann Machine is a generative stochastic neural network that can learn the probability distribution over its set of inputs. As a convolutional version of Restricted Boltzman Machines, CRBM aim to accommodate large image sizes and greatly reduce the computational burden. However in the best of our knowledge, the unsupervised feature learning methods have not been explored in biometrics area except for the face recognition. This paper explores the effectiveness of CRBM model for the periocular recognition. We perform experiments on periocular image database from the largest number of subjects (300 subjects as test subjects) and simultaneously exploit key point features for improving the matching accuracy. The experimental results are presented on publicly available database, the Ubripr database, and suggest effectiveness of RBM feature learning for automated periocular recognition with the large number of subjects. 
The results from the investigation in this paper also suggest that supervised metric learning can be effectively used to achieve performance superior to the conventional Euclidean distance metric for periocular identification. <s> BIB008 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> We consider the problem of matching face against iris images using ocular information. In biometrics, face and iris images are typically acquired using sensors operating in the visible (VIS) and near-infrared (NIR) spectra, respectively. This presents a challenging problem of matching images corresponding to different biometric modalities, imaging spectra, and spatial resolutions. We propose the usage of ocular traits that are common between face and iris images (viz., iris and ocular region) to perform matching. Iris matching is performed using a commercial software, while ocular regions are matched using three different techniques: Local Binary Patterns (LBP), Normalized Gradient Correlation (NGC), and Joint Dictionary-based Sparse Representation (JDSR). Experimental results on a database containing 1358 images of 704 subjects indicate that the ocular region can provide better performance than the iris biometric under a challenging cross-modality matching scenario. <s> BIB009 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> Partial face recognition has been a problem of interest for more than a decade. Most previous publications on partial face recognition assume intra-spectral matching. Matching Short Wave Infrared (SWIR), Middle Wave Infrared (MWIR) or Near Infrared (NIR) images of a partial face to a gallery of color images is a much more challenging task. The photometric properties of images in these four spectral bands are highly distinct. Because of limited space, and also sufficient interest in this biometric, in this paper we present results of cross-spectral matching applied to periocular regions.
Equipped with a well-developed automatic recognition algorithm for heterogeneous faces, we demonstrate that the algorithm can be tuned and applied to periocular regions for positive cross-spectral matching of SWIR, MWIR and NIR periocular regions to visible periocular regions at short (1.5 m) and long (50 and 106 m) standoff distances. Our numerical analysis demonstrates the results of the matching. To the best of our knowledge, the performance evaluation presented in this paper is the first of its kind. <s> BIB010 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> This paper introduces the challenge of cross-spectral periocular matching. The proposed algorithm utilizes neural networks to learn the variability caused by two different spectra. Two neural networks are first trained on each spectrum individually and then combined such that, by using the cross-spectral training data, they jointly learn the cross-spectral variability. To evaluate the performance, a cross-spectral periocular database is prepared that contains images pertaining to the visible night vision and near-infrared spectra. The proposed combined neural network architecture, on the cross-spectral database, shows improved performance compared to existing feature descriptors and cross-domain algorithms. <s> BIB011 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> A literature review of ocular modalities such as iris and periocular is presented. Information fusion approaches that combine ocular modalities with other modalities are reviewed. Future research directions are presented on sensing technologies, algorithms, and fusion approaches. Biometrics, an integral component of Identity Science, is widely used in several large-scale country-wide projects to provide a meaningful way of recognizing individuals.
Among existing modalities, ocular biometric traits such as iris, periocular, retina, and eye movement have received significant attention in the recent past. Iris recognition is used in the Unique Identification Authority of India's Aadhaar Program and the United Arab Emirates' border security programs, whereas periocular recognition is used to augment the performance of face or iris recognition when only the ocular region is present in the image. This paper reviews the research progression in these modalities. The paper discusses existing algorithms and the limitations of each of the biometric traits, as well as information fusion approaches that combine ocular modalities with other modalities. We also propose a path forward to advance the research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (vi) encouraging benchmarking standards and open-source software development. <s> BIB012 </s> A Survey on Periocular Biometrics Research <s> Conclusions and future work <s> Face recognition performance degrades significantly under occlusions that occur intentionally or unintentionally due to head gear or hair style. In many incidents captured by surveillance videos, the offenders cover their faces, leaving only the periocular region visible. We present an extensive study on periocular region based person identification in video. While previous techniques have handpicked a single best frame from videos, we formulate, for the first time, periocular region based person identification in video as an image-set classification problem. For thorough analysis, we perform experiments on periocular regions extracted automatically from RGB videos, NIR videos and hyperspectral image cubes.
Each image-set is represented by four heterogeneous feature types and classified with six state-of-the-art image-set classification algorithms. We propose a novel two-stage inverse Error Weighted Fusion algorithm for feature and classifier score fusion. The proposed two-stage fusion is superior to single-stage fusion. Comprehensive experiments were performed on four standard datasets, MBGC NIR and visible spectrum (Phillips et al., 2005), CMU Hyperspectral (Denes et al., 2002) and UBIPr (Padole and Proenca, 2012). We obtained average rank-1 recognition rates of 99.8, 98.5, 97.2, and 99.5%, respectively, which are significantly higher than the existing state of the art. Our results demonstrate the feasibility of image-set based periocular biometrics for real-world applications. <s> BIB013
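The exact formulation of the two-stage inverse Error Weighted Fusion is given in BIB013; a minimal sketch of the underlying idea, in which each feature type and then each classifier is weighted by the inverse of its validation error before the scores are combined, could look like the following. All function names and the error values in the test are illustrative, not taken from the paper.

```python
import numpy as np

def inverse_error_weights(error_rates, eps=1e-9):
    """Weights proportional to 1/error, normalized to sum to one:
    accurate sources dominate the fused score."""
    w = 1.0 / (np.asarray(error_rates, dtype=float) + eps)
    return w / w.sum()

def two_stage_fusion(scores, feature_errors, classifier_errors):
    """scores[c][f]: score vector (one entry per gallery class) produced by
    classifier c on feature type f. Stage 1 fuses the feature types within
    each classifier; stage 2 fuses the per-classifier results."""
    stage1 = []
    for c, per_feature in enumerate(scores):
        wf = inverse_error_weights(feature_errors[c])
        stage1.append(np.tensordot(wf, np.stack(per_feature), axes=1))
    wc = inverse_error_weights(classifier_errors)
    return np.tensordot(wc, np.stack(stage1), axes=1)
```

The identity is then assigned to the gallery class with the highest fused score; a near-zero-error source receives almost all of the weight, which is the intended behaviour of inverse-error weighting.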
Table 5. Overview of existing works on soft-biometrics, gender transformation and plastic surgery analysis using periocular features. The acronyms of this table are fully defined in the text or in the referenced papers. The following acronym is not defined elsewhere: 'SVM' = 'Support Vector Machines'.
Periocular recognition has emerged as a promising trait for unconstrained biometrics after demands for increased robustness of face or iris systems, showing a surprisingly high discrimination ability. The fast-growing uptake of face technologies in social networks and smartphones, as well as the widespread use of surveillance cameras, arguably increases the interest in periocular biometrics. The periocular region has been shown to be more tolerant to variability in expression and occlusion, and to have a greater capability of matching partial faces. It also finds applicability in other areas such as forensic analysis (crime scene images where perpetrators intentionally mask part of their faces). In such situations, identifying a suspect when only the periocular region is visible is one of the toughest real-world challenges in biometrics. Even in this difficult case, the periocular region can aid in the reconstruction of the whole face. This paper reviews the state of the art in periocular biometrics research. Our target is to provide a comprehensive coverage of the existing literature, giving an insight into the most relevant issues and challenges. We start by presenting existing databases utilized in periocular research. Acquisition setups comprise digital cameras, webcams, video cameras, smartphones, or close-up iris sensors. A small number of databases contain video data of subjects walking through an acquisition portal, or in hallways or atria. There are databases for particular problems too, such as aging, plastic surgery effects, gender transformation effects, expression changes, or cross-spectral matching.
However, the use of databases acquired with personal devices such as smartphones or tablets is limited, with recognition accuracy still some steps behind. The same can be said about surveillance cameras (Juefei-Xu and Savvides, 2012). New sensors are being proposed, such as Light Field Cameras, which capture multiple images at different focal planes in a single capture BIB004 BIB006, guaranteeing a well-focused image. Since the periocular modality requires less constrained acquisition than other ocular or face modalities, it is likely that the research community will move towards exploring ocular recognition at a distance and on the move in more detail as compared to previous studies BIB012. Automatic detection and/or segmentation of the periocular region has been increasingly addressed as well, avoiding the need to segment the iris or detect the full face first (Table 2). Recently, the use of eye corners as reference points to define the periocular ROI has been suggested, instead of the eye center, since eye corners are less sensitive to gaze variations and also appear in closed eyes BIB002 BIB007 BIB008. We further review the features employed for periocular recognition, a topic which comprises the majority of works in the literature. They can be classified into global and local approaches (Figure 3). Some works have also addressed the task of assessing whether some regions of the periocular area are more useful than others for recognition purposes. This has been done both by asking humans BIB003 and by using several machine algorithms BIB005, with humans and machines agreeing on the usefulness of the different parts. Automatic segmentation of periocular parts can aid in avoiding those which are not useful, as well as other elements such as hair or glasses that can also deteriorate the recognition performance, as shown in the first work presenting an algorithm to segment components of the periocular region.
Since the periocular area appears in face and iris images, comparison and fusion with these modalities has also been proposed, with a review of related works also given (Table 4). Fusion of multiple modalities using ocular data is a promising path forward that is receiving increasing attention BIB012 due to unconstrained environments where switching between available modalities may be necessary BIB001. Soft-biometrics is another area where the periocular modality has found applicability, with periocular features showing accuracies comparable to those obtained by using the entire face for the tasks of gender and ethnicity classification (Table 5). The periocular modality is also shown to aid or outperform face matchers in cases of plastic surgery or gender transformation. Other issues that are receiving increasing attention are cross-modality BIB009, cross-spectral (Cao and Schmid, 2014) BIB010 BIB011, hyperspectral BIB013 and cross-sensor matching. The periocular modality also has the potential to allow ocular recognition at large stand-off distances (Cao and Schmid, 2014), with applications in surveillance. Samples captured with different sensors are to be matched if, for example, people are allowed to use their own smartphones, images come from surveillance cameras, or new or improved sensors have to co-exist with existing ones (cross-sensor), not to mention when the sensors work in different spectral ranges (cross-spectral). Iris images are traditionally acquired in the NIR spectrum, whereas face images are normally captured with VW sensors. Exchange of biometric information between different law enforcement agencies worldwide also poses similar problems. These are examples of some scenarios where, if biometrics is extensively deployed, data acquired from heterogeneous sources will have to co-exist BIB001. These issues are of high interest in new scenarios arising from the widespread use of biometric technologies and the availability of multiple sensors and vendor solutions.
Another important direction, therefore, is to enable heterogeneous periocular data to work together BIB012.
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> The paper presents results of measurements and simulations concerning the application of the European GSM system in high-speed trains travelling at up to 500 km/h. The aim is to answer the question to what extent GSM (performance specified up to 250 km/h) can cope with the high velocities which are demanded for future railways. Measurements along railway lines have shown that a railway mobile radio channel results in better performance (Rice channel) than standard mobile radio channels (Rayleigh or weak Rice channel, see GSM-Recs). BER and block error rate of GSM traffic channels up to 500 km/h are simulated. Comparison of the results at 250 km/h and 500 km/h shows that the GSM high velocity problem can be solved either by increasing the SNR by about 2 dB or by increasing the Rice parameter c by about 6 dB (numerical values for profile RA = rural area; railway channel with c = 6 dB against standard channel with c = 0 dB), i.e. the BER at 500 km/h (railway channel) is not worse than the BER at 250 km/h (standard channel). A simple example shows that the benefit in the transmission of telegrams consisting of blocks of decoded bits can be much higher. The desired channel performance, i.e. a strong direct path (high Rice parameter), can be achieved by careful radio coverage planning along the railway line. This means a GSM standard receiver is sufficient to cope with the GSM high velocity problem and no additional means are needed. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> This paper examines the railway environment from the point of view of the provision of 2nd generation voice services. It examines the radio environment including fading, Doppler, transients, and penetration loss into carriages, as well as special situations such as cuttings and tunnels.
The paper reports on the operation of the IS-95 and GSM voice services at high speeds, assuming both trackside base stations and non-trackside base stations. The paper draws conclusions on the different types of environment encountered for both conventional and high-speed rail lines, and the effect these factors have on the overall link budget. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> The paper analyzes the special characteristics of GSM mobile communications in systems where the mobile stations move at speeds up to 500 km/h. One of the principal propagation problems of the train track's special environment is the speed of the MS. Another problem is the short loss of communications during the handover process. The propagation environment of the train is very special. Typically, a high-speed track is full of cuttings, tunnels, bridges, etc.; for this reason, the planning of the coverage must be done very carefully. The facilities and capabilities of GSM technology would be very interesting for railway applications, but GSM-R technology has not been commercially proven for high-speed applications, so a great effort is required to adapt GSM technology to these new applications. Although the changes in the BTS and terminal are very small, the planning and network design is completely different from that of commercial networks. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> With the development of high-speed railways and the public's growing demand for data traffic, much more attention is being paid to providing high-data-rate and highly reliable services under high-mobility circumstances. Due to its higher data rate and lower system latency, Long-Term Evolution (LTE) has been chosen as the next-generation evolution of the railway mobile communication system by the International Union of Railways.
However, there are still many problems to be solved in the high-mobility applications of LTE, especially the higher handover failure probability, which seriously degrades the reliability of railway communication. This article proposes an optimized handover scheme, in which coordinated multipoint transmission technology and a dual vehicle station coordination mechanism are applied to improve the traditional hard handover performance of LTE. The scheme enables the high-speed train to receive signals from both adjacent base stations and obtain diversity gain when it moves through the overlapping areas, so it improves the quality of the received signal and provides reliable communication between the train and ground eNodeBs. Numerical analysis and simulation results show that the proposed scheme can significantly decrease the outage probability during handover and guarantee the reliability of train-to-ground communication. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Traffic telematics applications are currently under intense research and development for making transportation safer, more efficient, and more environmentally friendly. Reliable traffic telematics applications and services require vehicle-to-vehicle wireless communications that can provide robust connectivity, typically at data rates between 1 and 10 Mb/s. The development of such VTV communications systems and standards requires, in turn, accurate models for the VTV propagation channel. A key characteristic of VTV channels is their temporal variability and inherent non-stationarity, which has major impact on data packet transmission reliability and latency. This article provides an overview of existing VTV channel measurement campaigns in a variety of important environments, and the channel characteristics (such as delay spreads and Doppler spreads) therein.
We also describe the most commonly used channel modeling approaches for VTV channels: statistical as well as geometry-based channel models have been developed based on measurements and intuitive insights. Extensive references are provided. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> We present a survey of approaches for providing broadband Internet access to trains. We examine some of the barriers that hinder the use of broadband Internet on trains and then discuss some of the opportunities for broadband deployment to trains. This survey considers some of the basic concepts for providing broadband Internet access and then reviews associated network architectures. The review of network architectures shows that we can subdivide networks for providing broadband Internet access to trains into the train-based network, the access network (for connecting the train to the service provider(s)), and the aggregation network (for collecting user packets generated in the access network for transmission to the Internet). Furthermore, our review shows that the current trend is to provide Internet access to passengers on trains using IEEE 802.11; however, a clear method for connecting trains to the global Internet has yet to emerge. A summary of implementation efforts in Europe and North America serves to highlight some of the schemes that have been used thus far to connect trains to the Internet. We conclude by discussing some of the models developed, from a technical perspective, for testing the viability of deploying Internet access to trains. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> This paper proposes a communication system using Wi-Fi (IEEE 802.11g) to link the Internet and high-speed rail systems traveling at around 300 km/h.
In order to adapt Wi-Fi for high-speed mobile communication, we optimized its coverage on a rail track with a directional antenna developed for this purpose, which has a communication range of around 500 m at 10 mW. With the antenna, however, a mobile entity had to switch between antennas (a layer 2 handover (L2HO)) every 6 to 7 seconds. Furthermore, Mobile IP handovers (a layer 3 handover (L3HO)) had to be appropriately controlled to avoid a simultaneous handover of layers 2 and 3, which results in a fatal communication disruption. Therefore, the system designed in this paper separated the L3HO from the L2HO. As a result, a maximum of 25 Mbps with an average of 16 Mbps for the UDP throughput and an average L2HO time of 110 ms were realized while travelling at 270 km/h. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Distributed antenna technology, as one of the important next-generation wireless communication technologies, has aroused extensive attention. The technology has been applied in high-speed movement environments. Due to the high-density coverage of distributed antennas, almost anywhere in the area has line-of-sight (LOS) to at least one fixed antenna. However, it may correspondingly result in smaller overlap between adjacent cells and a higher probability of handover failure in high-speed movement scenarios. In order to solve these problems, this paper proposes a novel handover scheme based on on-vehicle dual antennas for the high-speed railway distributed antenna system (DAS). On-vehicle antennas, which collaborate with each other, are mounted on the top of the high-speed train (one in the front end and the other in the rear end). The proposed scheme utilizes distributed transceivers and centralized processing technology. The numerical analysis results show that the novel scheme can pre-trigger handover appropriately, guarantee a higher handover success rate, and increase the system throughput by around 50%.
In addition, the scheme is feasible and easy to implement. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Due to the rapid development of high-speed railways all over the world, it is very promising to deliver a public broadband wireless network to passengers aboard high-speed trains. However, deploying a conventional cellular network along the railway will lead to low coverage efficiency and radio resource waste, where a large part of the service coverage area may not have any user traffic at all. A model for analyzing the coverage efficiency of the conventional network and the Radio over Fiber (RoF) network is presented in this paper. Based on certain parameters of the Chinese high-speed railway scenario, simulation results demonstrate that the coverage efficiency of the RoF network improves greatly compared with the conventional network. In addition, the number of Remote Antenna Units (RAUs) mounted along the railway may be less than the number of antennas installed on the roof of the high-speed train due to initial infrastructure cost. The optimal match of the antennas and RAUs is proposed to maximize the coverage efficiency in this paper. The conclusions provide observations to guide RoF cell planning for high-speed railways. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> This tutorial paper provides a comprehensive overview of the recent developments in broadband wireless communications for high-speed trains. Starting with the introduction of the two-hop network structure, radio-over-fiber (RoF) based cell planning is described in detail. Moreover, based on the analysis of differences between conventional cellular systems and the one for high-speed trains, promising techniques are recommended to improve the performance of handover, which is one of the main challenges in high-speed train communications.
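The handover schemes surveyed here (e.g. BIB004, BIB008) aim to improve on the standard LTE hard-handover decision. A minimal sketch of that baseline trigger, an A3-style event with hysteresis and time-to-trigger, helps illustrate why high speed is problematic: the neighbor-better condition must hold for several consecutive measurements, and at 350 km/h the train may leave the cell-overlap region before the timer expires. The parameter values below are illustrative, not taken from any of the cited papers.

```python
def a3_handover_trigger(serving_rsrp, neighbor_rsrp,
                        hysteresis_db=3.0, ttt_samples=4):
    """Return the measurement index at which an LTE A3-style handover would
    fire: the neighbor cell must exceed the serving cell by `hysteresis_db`
    for `ttt_samples` consecutive measurements (the time-to-trigger).
    Returns None if the condition never holds long enough."""
    streak = 0
    for k, (s, n) in enumerate(zip(serving_rsrp, neighbor_rsrp)):
        streak = streak + 1 if n > s + hysteresis_db else 0
        if streak >= ttt_samples:
            return k
    return None
```

Shortening the measurement series (i.e. crossing the overlap region faster) makes the trigger fail entirely, which is the high-speed handover failure mode the surveyed dual-antenna and CoMP-based schemes try to avoid.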
Finally, in order to combat the fast fading caused by the high mobility, robust algorithms are needed in physical-layer signal processing, including synchronization, channel estimation, modulation/demodulation, and so on. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> In this paper, we propose a new concept called the mobile Femtocell (MFemtocell) network, which can be considered as a practical implementation of mobile relays (more precisely, moving networks). MFemtocells can be deployed in moving vehicles, such as trains, buses, or private cars, to provide enhanced user throughput, extended coverage, and a reduction in signaling overhead and dropped calls. We investigate the spectral efficiency of cellular systems with MFemtocell deployment and two resource partitioning schemes. Simulation results demonstrate that with the deployment of MFemtocells, the spectral efficiency and average user throughput can be significantly increased while the signaling overhead is reduced. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> With the deployment of high speed train (HST) systems increasing worldwide and their popularity with travelers growing, providing broadband wireless communications (BWC) in HSTs is becoming crucial. In this paper, a tutorial is presented on recent research into BWC provision for HSTs. The basic HST BWC network architecture is described. Two potential cellular architectures, microcells and distributed antenna system (DAS) based cells, are introduced. In particular, the DAS is discussed in conjunction with radio over fiber (RoF) technology for BWC for HSTs. The technical challenges in providing DAS-based BWC for HSTs, such as handoff and RoF, are discussed and outlined. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I.
INTRODUCTION <s> 3GPP has completed a study on coordinated multipoint transmission and reception techniques to facilitate cooperative communications across multiple transmission and reception points (e.g., cells) for the LTE-Advanced system. In CoMP operation, multiple points coordinate with each other in such a way that the transmission signals from/to other points do not incur serious interference or can even be exploited as a meaningful signal. The goal of the study is to evaluate the potential performance benefits of CoMP techniques and the implementation aspects, including the complexity of the standards support for CoMP. This article discusses some of the deployment scenarios in which CoMP techniques will likely be most beneficial and provides an overview of CoMP schemes that might be supported in LTE-Advanced given the modern silicon/DSP technologies and backhaul designs available today. In addition, practical implementation and operational challenges are discussed. We also assess the performance benefits of CoMP in these deployment scenarios with traffic varying from low to high load. <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> The recent advent of high-speed trains introduces new mobility patterns in wireless environments. LTE-A (Long Term Evolution of 3GPP - Advanced) networks have largely tackled the Doppler effect problem in the physical layer and are able to keep wireless service with 100 Mbps throughput within a cell at speeds up to 350 km/h. Yet the much more frequent handovers across cells greatly increase the possibility of service interruptions, and the problem is prominent for multimedia communications that demand both high throughput and continuous connections. In this paper, we present a novel LTE-based solution to support high-throughput and continuous multimedia services for high-speed train passengers.
Our solution is based on a Cell Array that smartly organizes the cells along a railway, together with a femtocell service that aggregates traffic demands within individual train cabins. Given that the movement direction and speed of a high-speed train are generally known, our Cell Array effectively predicts the upcoming LTE cells in service, and enables a seamless handover that will not interrupt multimedia streams. To accommodate the extreme channel variations, we further propose a scheduling and resource allocation mechanism to maximize the service rate based on periodic signal quality changes. Our simulation under diverse network and railway/train configurations demonstrates that the proposed solution achieves much lower handover latency and higher data throughput, as compared to existing solutions. It is also robust to network and traffic dynamics, thus enabling uninterrupted, high-quality multimedia services for passengers on high-speed trains. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Provision of high-data-rate services on trains has attracted great attention recently. In this paper, the issues of time division duplex (TDD) DAS, including coverage of remote antenna units (RAUs), the echo channel effect, and system deployment cost, were analyzed. The timing drift problem that arises while solving the echo channel effect with the moving cell concept in DAS was also depicted in detail. Furthermore, the frequency response, propagation model, and time dispersion parameters of the RoF-DAS channel are analyzed with simulation. This paper provides an analyzed RoF-DAS channel profile for the high-speed railway communication system for future research. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I.
INTRODUCTION <s> Long Term Evolution (LTE) is considered to be the natural evolution of the current Global System for Mobile Communications-Railways (GSM-R) in high-speed railway environments, not only for its technical advantages and increased performance, but also due to the current evolution of public communication systems. In railway environments, mission-critical services, operation assistance services, and passenger services must be supported by reliable mobile communication systems. Reliability and availability are key concerns for railway operators and, as a consequence, they are usually conservative adopters of information and communication technologies (ICT). This paper describes the feasibility of LTE as a successor to GSM-R for new railway mobile communication systems. We identify key features of LTE as a technology and analyze its ability to support both the migration of current railway services and the provisioning of potential future ones. We describe the key challenges in addressing specific requirements for railway communication services, including the provisioning of voice service in LTE networks, handover performance, multicast multimedia transmission, and the provisioning of group communications service and railway emergency calls. <s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Global System for Mobile Communications Railways (GSM-R) has been the de facto standard for wireless communications in the field of High Speed Railway (HSR). However, due to the increasing requirements associated with HSR, Long Term Evolution for Railways (LTE-R) has been presented as the succeeding wireless communication system. In this paper, a complete performance analysis of LTE for HSR is carried out, giving results for both the Physical (PHY) and Medium Access Control (MAC) layers in order to identify appropriate Quality of Service (QoS) requirements.
An adequate Rician channel model with a time-varying Doppler frequency offset has been integrated in a Downlink LTE simulator using the Wireless Mobile SIMulator (WM-SIM) platform. The effect of Inter-Carrier Interference (ICI) degrades the Bit Error Rate (BER), whereas the reduced coherence time renders channel state information obsolete, reducing the performance of adaptive modulation and coding (AMC). For this reason, open loop Multiple Input Multiple Output (MIMO) techniques are attractive. Results illustrate the benefits of LTE in the HSR scenario. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> When a train speeds up to 350 km/h, it is challenging to provide continuous wireless coverage due to a number of critical issues, e.g. frequent handover and drop-offs. To address this problem, this paper proposes a novel handover scheme based on dual antennas and a Mobile Relay Station (MRS) for the High Speed Railway (HSR) Distributed Antenna System (DAS). The scheme enables the dual antennas controlled by the MRS to receive signals from multiple Remote Antenna Units (RAUs), thus obtaining diversity gain when the train moves within one logic cell. While the train runs through the edge of a logic cell, a dual-antenna handover scheme is adopted to enhance the handover performance. Thereby, the proposal improves the quality of the received signal and provides reliable communication for the train-to-ground network. The numerical analysis and simulation results show the proposed handover scheme can reduce handover frequency dramatically and provide seamless access for HSR compared to the standard LTE handover scheme. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries.
However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed. <s> BIB019 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> High-speed railway (HSR) brings convenience to peoples' lives and is generally considered as one of the most sustainable developments for ground transportation. One of the important parts of HSR construction is the signaling system, which is also called the “operation control system,” where wireless communications play a key role in the transmission of train control data. We discuss in detail the main differences in scientific research for wireless communications between the HSR operation scenarios and the conventional public land mobile scenarios. The latest research progress in wireless channel modeling in viaducts, cuttings, and tunnels scenarios are discussed. The characteristics of nonstationary channel and the line-of-sight (LOS) sparse and LOS multiple-input-multiple-output channels, which are the typical channels in HSR scenarios, are analyzed. 
Some novel concepts such as composite transportation and key challenging techniques such as train-to-train communication, vacuum maglev train techniques, the security for HSR, and the fifth-generation wireless communications related techniques for future HSR development for safer, more comfortable, and more secure HSR operation are also discussed. <s> BIB020 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> High-speed railways (HSRs) have been widely introduced to meet the increasing demand for passenger rail travel. While HSR provides more and more convenience to people, its huge cost has laid a heavy burden on government finances. Reducing the cost of HSR has been necessary and urgent. Optimizing the arrangement of base stations (BSs) by improving prediction of the communication link is one of the most effective methods, which could reduce the number of BSs to a reasonable number. However, it requires a carefully developed propagation model, which has been largely neglected before in the research on the HSR. In this paper, we propose a standardized path loss/shadow fading model for HSR channels based on an extensive measurement campaign in 4594 HSR cells. The measurements are conducted using a practically deployed and operative GSM-Railway (GSM-R) system to reflect the real conditions of the HSR channels. The proposed model is validated by the measurements conducted in a different operative HSR line. Finally, a heuristic method to design the BS separation distance is proposed, and it is found that using an improved propagation model can theoretically save around 2/5 of the cost of the BSs. <s> BIB021 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I.
INTRODUCTION <s> In this paper, the bit error rate (BER) performance of spatial modulation (SM) systems under a novel 3-D vehicle-to-vehicle (V2V) multiple-input multiple-output (MIMO) channel model is investigated both theoretically and by simulations. The impact of vehicle traffic density, Doppler effect, and 3-D and 2-D V2V MIMO channel models on the BER performance are thoroughly investigated. Simulation results show that the performance of SM is mainly affected by the spatial correlation of the underlying channel model. Compared with other MIMO technologies, the SM system can offer a better tradeoff between spectral efficiency and system complexity. <s> BIB022
HIGH-MOBILITY scenarios, e.g., high-speed train (HST) and vehicle-to-vehicle (V2V) scenarios, are expected to be typical scenarios for the fifth generation (5G) wireless communication systems BIB019 . Unlike V2V communication channels, which have been thoroughly investigated in the literature BIB005 - BIB022 , a comprehensive study of HST communication channels is still missing. With the rapid development of HSTs, an increasing volume of wireless communication data is required to be transferred to train passengers. HST users demand high network capacity and reliable communication services regardless of their locations or speeds. To satisfy these demands, HST wireless communication systems have to overcome many challenges resulting from the high speed of the train, which can easily exceed 250 km/h, such as fast handover, fast travel through diverse scenarios, and large Doppler spreads BIB002 , BIB006 , besides some challenges inherited from conventional trains, such as high penetration losses, limited visibility in tunnels, and the harsh electromagnetic environment BIB020 . Since 1998, the Global System for Mobile Communication Railway (GSM-R) has been widely adopted as the European standard for train communications and control. However, GSM-R can only provide a data rate of up to 200 kbps BIB001 , besides the fact that it is mainly used for train control rather than providing communications for train passengers BIB003 . Therefore, GSM-R cannot meet the requirements for future high-speed data transmission, and the International Union of Railways has recommended that GSM-R be replaced by Long-Term Evolution-Railway (LTE-R) BIB016 - BIB017 , which is a broadband railway wireless communication system based on LTE-Advanced (LTE-A) . Nevertheless, both systems still adopt the conventional cellular architecture, where mobile stations (MSs) inside trains communicate directly with outdoor base stations (BSs).
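The magnitude of the Doppler spreads faced at such speeds follows directly from the textbook relation f_d = v·f_c/c. A quick sketch in Python; the carrier frequencies below (the 930 MHz GSM-R band and a 2.35 GHz LTE band) are illustrative assumptions, not figures taken from the surveyed campaigns:

```python
# Maximum Doppler shift f_d = v * f_c / c for a mobile moving at speed v
# directly toward the transmitter at carrier frequency f_c.
C = 3e8  # speed of light (m/s)

def max_doppler_shift(speed_kmh: float, carrier_hz: float) -> float:
    """Return the maximum Doppler shift in Hz."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * carrier_hz / C

# Illustrative carrier frequencies only (assumed, not from the survey).
for f_c in (930e6, 2.35e9):
    fd = max_doppler_shift(350, f_c)
    print(f"f_c = {f_c / 1e9:.2f} GHz, v = 350 km/h -> f_d = {fd:.0f} Hz")
```

At 350 km/h the shift approaches 300 Hz in the GSM-R band and about 760 Hz at 2.35 GHz, which is why large Doppler spreads appear among the main challenges listed above.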
Such an architecture leads to spotty coverage and high penetration losses of wireless signals traveling through the metal carriages of HSTs. In addition, the received signals at on-board MSs will experience fast-changing channels, resulting in high signaling overhead and a high probability of dropped calls and handover failures BIB007 . The aforementioned problems can be mitigated by deploying other cellular architectures, such as distributed antenna system (DAS) BIB012 - BIB008 , coordinated multipoint (CoMP) BIB013 , , mobile relay station (MRS) BIB010 - BIB014 (or mobile femtocell BIB019 , BIB011 , ) technologies, or a combination of these architectures, e.g., DAS with MRS BIB018 or CoMP with MRS BIB004 . In a DAS, distributed antenna elements are connected to a BS via wires or fibers (radio over fiber (RoF)) , BIB009 to provide considerable gains in coverage and capacity in comparison with the conventional cellular architecture. The spatially separated antenna elements can be used to transmit the same signal at different locations to provide spatial diversity against fading. Combined with spatial diversity, frequency reuse in the DAS is an effective technique to increase system capacity. The enhancement in spectral efficiency of DASs in comparison with conventional systems was presented in BIB012 . In BIB015 , the authors analyzed the deployment of DAS in HST communication systems and some of the resulting problems, such as the coverage of the remote antenna units (RAUs) and the echo channel effect. In CoMP systems, the transmissions of neighboring BSs are coordinated in the downlink, while the received signals in the uplink are jointly processed. This reduces the inter-cell interference and improves the cell-edge throughput. CoMP systems also provide enhanced channel capacity by exploiting the statistically independent properties of the channels resulting from the wide spatial separation of antenna elements.
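The spatial diversity that widely separated antenna elements provide against fading can be illustrated with a small Monte Carlo sketch: under i.i.d. Rayleigh fading, selecting among n branches only fails when every branch fades at once, so the outage probability falls roughly as the single-branch outage probability raised to the n-th power. The threshold and trial count below are assumptions for illustration only:

```python
import random

def outage_prob(n_branches: int, threshold: float,
                trials: int = 100_000, seed: int = 2) -> float:
    """Selection-combining outage over i.i.d. Rayleigh branches: branch
    powers are exponentially distributed (unit mean); an outage occurs only
    when all branches fall below the normalized threshold."""
    rng = random.Random(seed)
    outages = sum(
        all(rng.expovariate(1.0) < threshold for _ in range(n_branches))
        for _ in range(trials)
    )
    return outages / trials

# Illustrative threshold of 0.1 (i.e., 10 dB below the mean branch power).
for n in (1, 2, 4):
    print(f"{n} branch(es): outage = {outage_prob(n, 0.1):.5f}")
```

With one branch the simulated outage sits near 1 − e^(−0.1) ≈ 0.095; each added independent branch multiplies it by roughly that factor again, which is the diversity gain exploited by the DAS.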
Adopting mobile femtocell architecture in HST communication systems can be realized by deploying dedicated MRSs on the surface of the train to extend the coverage of the outdoor BS into train carriages. As a result, we will have two channels: an outdoor channel between the BS and MRS, and an indoor one between the MRS and an MS of a train passenger, as illustrated in Fig. 1 . In this case, the BS will mainly communicate with the MRS at high data rates instead of communicating with large numbers of MSs directly. An MRS and its associated MSs within a train carriage are all viewed as a single unit to the BS, while the MSs will see the relevant MRS as a regular BS. It follows that an MRS can perform a group handover on behalf of all its associated MSs, which can greatly reduce the frequent handover burden of the HST system . Since the complexity of radio resource allocation (i.e., transmit power, data rates, scheduling, power and frequency allocation, and antenna selection) in a BS is related to the number of active users BIB012 , the radio resource management complexity in one BS will be reduced significantly when dealing with a "group of users" rather than individuals. This promising MRS technology has been adopted by the IMT-Advanced (IMT-A) and WINNER II channel models. Moreover, the transmitter (Tx) and receiver (Rx) of an HST wireless communication system encounter different channel conditions due to the differences in the surrounding geographical environments. The HST environment can be generally classified into the following main scenarios: open space, viaduct, cutting, hilly terrain, tunnels, and stations. Considering some unique setups of the aforementioned scenarios and some other special HST scenarios, the HST environment can be further classified into 12 scenarios . HSTs can operate across one or more of these scenarios during their travel.
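The group-handover benefit of an MRS noted above can be made concrete with a toy count: every cell-boundary crossing costs one handover procedure per on-board MS without an MRS, but only a single group procedure with one. The route length, cell size, and passenger count below are invented for illustration:

```python
def handover_procedures(route_km: float, cell_diameter_km: float,
                        num_ms: int, use_mrs: bool) -> int:
    """Count handover procedures along a route: each cell-boundary crossing
    costs one procedure per on-board MS, or a single group handover when an
    MRS acts on behalf of all its associated MSs."""
    crossings = int(route_km / cell_diameter_km)
    return crossings * (1 if use_mrs else num_ms)

# Illustrative (assumed) numbers: 300 km route, 3 km cells, 500 active MSs.
print(handover_procedures(300, 3, 500, use_mrs=False))  # per-MS signaling
print(handover_procedures(300, 3, 500, use_mrs=True))   # group handovers
```

With these assumed numbers, 500 passengers trigger 50 000 individual procedures over the route, versus 100 group handovers when the MRS acts for the whole carriage.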
The propagation characteristics change significantly with the change of environments and the distance between the Tx and Rx, even in the same terrain. Scenarios have a close relationship with channel modeling and measurements. Most standard channel models in the literature, like UMTS , COST 2100 , and IMT-2000 , failed to introduce any of the HST scenarios. The moving networks scenario in the WINNER II channel model and rural macro-cell (RMa) scenario in the IMT-A channel model have only considered a rural environment for HSTs, while neglecting other HST scenarios. The aforementioned propagation scenarios will be introduced and explained in detail in Section II. The features of HST channels, e.g., non-stationarity and large Doppler shift, significantly differ from those of low-mobility mobile cellular communication channels. Therefore, many measurement campaigns have been conducted in the literature to understand the underlying physical phenomena in HST propagation environments. Accurate channel models that are able to mimic key characteristics of wireless channels play an important role in designing and testing HST communication systems. Realistic and reliable large-scale fading channel models, i.e., path loss (PL) and shadow fading (SF) models, are indispensable for efficient and trustworthy network deployment and optimization. Small-scale fading channel models are crucial in physical layer design in order to develop and test different transmission schemes, such as diversity of transmission/reception, error correction coding, interleaving, and equalization algorithms. Inaccurate channel models may lead to over-optimistic or over-pessimistic performance evaluation results that will result in misjudgments in product development. Moreover, inaccurate channel models may lead to inaccurate link budgets that will result in huge errors in the estimated maximum distance between adjacent BSs.
Consequently, this will cause poor coverage and increased dropped calls due to failed handovers between BSs when the distance is overestimated, and unnecessarily overlapped coverage areas with unjustified installation and maintenance costs for the extra installed BSs when the distance is underestimated BIB021 . In the literature, several large-scale and small-scale fading HST channel models have been proposed. This article will focus on the recent advances in HST channel measurements and modeling and their future challenges. The rest of this paper is organized as follows. In Section II, an overview of HST channel measurements is provided. The state-of-the-art of HST channel models is presented in Section III. Future research directions in HST channel measurements and models are outlined in Section IV. Finally, concluding remarks are highlighted in Section V.
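The sensitivity of the link budget to the path loss model can be sketched by inverting the standard log-distance model PL(d) = PL(d0) + 10·n·log10(d/d0) for the largest distance whose budget still closes. All values below (transmit power, receiver sensitivity, reference loss, exponent, shadow-fading margin) are assumptions for illustration, not figures from the surveyed measurements:

```python
import math

def max_bs_distance_m(tx_pwr_dbm: float, rx_sens_dbm: float, n: float,
                      pl0_db: float = 30.0, d0_m: float = 1.0,
                      sf_margin_db: float = 10.0) -> float:
    """Invert the log-distance path loss model
    PL(d) = PL(d0) + 10*n*log10(d/d0) for the largest link distance whose
    budget still closes, reserving a shadow-fading margin."""
    allowed_pl_db = tx_pwr_dbm - rx_sens_dbm - sf_margin_db
    return d0_m * 10 ** ((allowed_pl_db - pl0_db) / (10 * n))

# Illustrative (assumed) budget: 43 dBm BS power, -100 dBm sensitivity.
for n in (3.0, 3.5):
    d = max_bs_distance_m(43, -100, n)
    print(f"n = {n}: max BS-to-train distance = {d:.0f} m")
```

Under these assumed numbers, lowering the exponent from 3.5 to 3.0 roughly triples the predicted maximum distance, exactly the kind of discrepancy behind the coverage holes or redundant BSs discussed above.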
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> We present and analyse the results of wideband radio channel measurements performed in tunnels. Both a high speed train tunnel and a smaller test tunnel have been investigated with both antennas and leaky feeders as fixed radiators. The results show typical features of the tunnel radio channel with typically low delay spread combined to significant slow fading of the LOS signal due to interferences. The delay spread may increase substantially during the fading dips. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper covers some of the work carried out in the planning of the global system for mobile communication for railway (GSM-R) of the tunnels on the new high-speed trains in Spain. Solutions based on distributed antenna systems have been tested by installing several 900-MHz transmitters inside and outside of a 4000-m tunnel and measuring the propagation in different conditions. The measurements have been used to model the effects of tunnel propagation, including curves, trains passing from the outside to the inside, and the effect of two trains passing inside the tunnel. All cases have been tested by comparing solutions using isofrequency and multifrequency distributed transmitters inside the tunnel. The improvements of signal-to-noise ratio and the reduction of the blocking effects of two trains passing have demonstrated the advantages of using isofrequency distributed antenna systems in tunnels. Finally, a complete propagation model combining both modal analysis and ray tracing has been applied to predict the propagation loss inside and outside these tunnels, and results have been compared with the measurements. The model has proven to be very useful for radio planning in new railway networks. 
<s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper focuses on propagation path loss modeling in viaduct and plain scenarios of the High-speed Railway. The data used for modeling comes from measurement on Zhengzhou-Xi'an passenger dedicated line with the maximum moving speed of 340Km/h. Based on the measurement data, tuned Free-space path loss models in these two scenarios are proposed. The performance of the tuned models is compared with that of the Hata model. The evaluation of the models is in terms of mean error, root mean square error and standard deviation of the residuals between the models and measurement. The simulation results and related analysis show better performance of the proposed tuned models compared with the conventional Hata model. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper presents the results of path loss measurements in "Zhengzhou-Xi'an" high-speed railway environment at 930 MHz band. A transmitter directional antenna height of 20~30 meters above the rail surface and a receiver omni-directional antenna height of 3.5 meters were used on the high-speed viaducts height of 10~30 meters above the ground. An automatic acquisition system was utilized in the measurements. The model makes distinctions among different terrain. The results of measurements provide practical values for path loss exponent and standard deviation of shadowing affected by the viaduct factor in suburban, open area, mountain area and urban propagation regions where the high-speed trains travel. Based on the measurement data, the empirical path loss model was developed, which could be used for predicting the path loss for the future railway communication systems, and provide the facilities for network optimization. 
<s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> A high performance wireless network is essential for the railway communication and control systems. Research on the fading characteristics in railway environment is of great importance for the design of the railway wireless network. In this paper, measurements are taken in railway terrain cuttings area using track side base stations of the GSM-R network. The fitted path loss model, shadow fading, and dynamic range of the small scale fading are obtained and compared to the results of viaduct scenario. The propagation environment of the terrain cuttings turns out to be worse than the viaduct area. The path loss exponent is found to be 4.3. The shadow loss can be reasonably described by a log-normal distribution. It is also found that the bridges over the cuttings can cause extra loss of about 5 dB. The dynamic range of the small scale fading is from 27 dB to 40 dB with a mean value of about 33 dB. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Near-ground channel characterization is an important issue in most military applications of wireless sensor networks. However, the channel at the ground level lacks characterization. In this paper, we present a path loss model for three near-ground scenarios. The path loss values for each scenario were captured through extensive measurements, and then a least-square linear regression was performed. This indicates that the log-distance-based model is still suitable for path loss modeling in near-ground scenarios, and the prediction accuracy of the two-slope model is superior to that of the one-slope model. The validity of the proposed model was further verified by comparisons between the predicted and measured far-field path losses.
Finally, compared to the generic models, the proposed model is more effective for the path loss prediction in near-ground scenarios. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> LTE is expected to substitute GSM as the basis technology for railway communications. Recently, special attention has been deserved to HST as this particular environment (mainly due to the high speed condition) can severely impact wireless systems performance. Although several channel models have been derived during the few last years, most of them are not accurate enough as they are not supported by measurement campaigns. In this paper, the main requirements for HST environments are analyzed and a flexible, cost-affordable, and easily-scalable software and hardware architecture for a test bed suitable for assessing LTE at high speeds is proposed. 
<s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> The high-speed railway propagation channel has significant effect on the design and performance analysis of wireless railway control systems. An important feature of the high-speed railway communications is the usage of directional transmitting antennas, due to which the receiver may experience strong attenuation of the line-of-sight (LOS) path under the base station (BS). This leads to a drop, and strong variations, of the signal strength under the BS. While the physical origin of the signal variations is different from conventional shadowing, it can be described by similar statistical methods. However, the effect has been largely neglected in the literature. In this paper we first define the region of the bottom of the BS, and then present a simple shadow fading model based on the measurements performed in high-speed railways at 930 MHz. It is found that the bottom area of the BS has a range of 400 m – 800 m; the standard deviation of the shadowing also follows a Gaussian distribution; the double exponential model fits the autocovariance of the shadow fading very well. We find that the directivity of the transmitting antenna leads to a higher standard deviation of shadowing and a smaller decorrelation distance under the BS compared to the region away from the BS. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> High-speed railway (HSR), as an important deployment scenario for both the present and the future mobile wideband radio communication systems, has attracted more and more attention all over the world with the rapid increasing demand of the high data rate communication service on traveling. 
For the purpose of capturing the wideband channel characteristics of HSR, a channel measurement campaign was conducted at the center frequency of 2.35 GHz with 50 MHz bandwidth in the U-shape cutting scenario of Zhengzhou--Xian HSR line in China. Based on the field measured data, we analyze the small scale characteristics in detail, which mainly include path number, root mean square delay spread (rms DS), and Doppler shift. It is found that the distribution of the path number is well fitted by a Gamma distribution. The statistics of rms DS in the U-shape cutting scenario are larger than the results in other scenarios of HSR. In addition, an increasing tendency of rms DS against the transmitter-to-receiver distance is observed and can be modeled by a linear function. Finally, the Doppler frequency shift is verified and meets the theoretical value. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper focuses on the fading characteristics of wireless channel on High-Speed Railway (HSR) in hilly terrain scenario. Due to the rapid speed, the fading characteristics of HSR channel are highly correlated with time or Transmit-Receive distance and have their own special property. To investigate the fading characteristics, the measurement is conducted on the Guangzhou-Shenzhen passenger-dedicated line in China with the speed of 295 km/h in the data-collection area at 2.4 GHz. From the measured data, the amplitude of each path is estimated by using the Subspace-Alternating Generalized Expectation-Maximization (SAGE) algorithm along with other parameters of channel impulse responses. Then the fading parameters, including path loss, shadow fading, and K-factor, are analysed. With the numerical results in the measurement and analysis, the fading characteristics have been revealed and modelled.
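The root mean square delay spread analyzed in such measurements is the power-weighted second central moment of the power delay profile, sqrt(E[τ²] − E[τ]²). A minimal computation over an invented three-tap profile (illustrative values, not measured data):

```python
import math

def rms_delay_spread(delays_ns, powers_linear):
    """RMS delay spread sqrt(E[tau^2] - E[tau]^2), with expectations
    weighted by the power delay profile (powers in linear scale)."""
    p_total = sum(powers_linear)
    mean = sum(t * p for t, p in zip(delays_ns, powers_linear)) / p_total
    mean_sq = sum(t * t * p for t, p in zip(delays_ns, powers_linear)) / p_total
    return math.sqrt(mean_sq - mean * mean)

# Illustrative 3-tap power delay profile (delays in ns, linear powers).
taps_delay = [0.0, 100.0, 300.0]
taps_power = [1.0, 0.5, 0.1]
print(f"rms DS = {rms_delay_spread(taps_delay, taps_power):.1f} ns")
```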
This work is expected to promote HSR communication system design and improvement. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Compared with the leaky feeder, distributed antenna systems (DAS) are treated as a more economic and promising solution to support radio coverage in tunnels. Based on the measurements performed in realistic subway tunnels in Madrid at 2.4 GHz, a statistical model for the propagation in tunnels is presented. Two groups of measurements (conducted in subway tunnels and railway tunnels, respectively) are employed to validate the model. The results in this paper could be helpful for network planning and interference analysis in the design of DAS in tunnels. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper presents the nonisotropic scattering characteristic of the mobile radio channel in an alternant tree-blocked viaduct scenario on high-speed railway (HSR) by real field measuring at 2.35 GHz. An angle of arrival (AOA) probability density function (PDF) is proposed for the nonuniform AOA at the mobile caused by stochastically distributed scatterers. Two Von Mises angular distributions with broad applicability are used to represent the line of sight (LOS) component and part of the scattering component in the AOA model. Based on such a PDF, statistical characteristics of the Ricean K-factor and AOA of the scattering component are modeled in LOS and obstructed line of sight (OLOS) cases, respectively. The results may give a meaningful and accurate channel model and could be utilized in HSR viaduct scenario evaluation. <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II.
HST CHANNEL MEASUREMENTS <s> A semiempirical multiple-input multiple-output (MIMO) channel model is proposed for high-speed railway (HSR) viaduct scenarios. The proposed MIMO model is based on the combination of realistic single-input single-output (SISO) channel measurement results and a theoretical geometry-based stochastic model (GBSM). Temporal fading characteristics involving the K-factor and Doppler power spectral density (PSD) are derived from the wideband measurement under an obstructed viaduct on Zhengzhou-Xi’an HSR in China. The GBSM composed of a one-ring model and an elliptical model is employed to describe the entire propagation environment. Environment-related parameters in the GBSM are determined by the measured temporal fading properties. A close agreement is achieved between the model results and measured data. Finally, a deterministic simulation model is established to perform the analysis of the space-time correlation function, the space-Doppler PSD, and the channel capacity for the measured scenario. This model is more realistic and particularly beneficial for the performance evaluation of MIMO systems in HSR environments. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper presents results for delay and Doppler spread characterization in high-speed railway (HSR) hilly scenario. To investigate the propagation characteristics in this specific terrain, a measurement campaign is conducted along the “Guangzhou-Shenzhen” HSR in China. A wideband channel sounder with 40 MHz bandwidth is used to collect raw data at 2.4 GHz band. The delay spread and Doppler frequency features are analyzed based on measured data. It is found that there are abundant multipath components (MPCs) in this scenario. We present the relationship between the delay spreads and the transceiver distances.
The measured route can be divided into four areas with different delay and Doppler characteristics. Finally, a tapped delay line (TDL) model is proposed to parameterize the channel responses in the HSR hilly environment, which is supposed to provide criteria for evaluation of the radio interface and development of wireless communication systems. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Train stations are one of the most common structures along a high-speed railway. They can block the line of sight (LOS), generate multiple reflected and scattered waves, and aggravate the fading behavior; however, these effects have been rarely investigated. This paper presents a group of 930-MHz measurements conducted on train stations of high-speed railways in China. The whole process of a train passing stations has been measured with two typical types of stations. The results indicate that, when the station is far from the transmitter (Tx), the semi-closed station (in which the awnings cover both the platforms and the rails) influences the propagation much more seriously than the open station (in which the awnings only cover the platforms supporting a clear free space over the tracks). When the station is near the Tx, the fact of whether the train keeps the LOS and stays inside the station determines the propagation for both types of stations. All the propagation characteristics, including extra propagation loss, shadow fading, small-scale fading, level crossing rate (LCR), average fade duration (AFD), and fading depth (FD), have been measured and computed for the first time. Specific findings of propagation characteristics in the train station scenario are provided. Afterward, by filling the gap of the train station scenario, a table is made to establish the comprehensive understanding of main scenarios in the high-speed railway.
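A tapped delay line model of the kind proposed above represents the channel as a few discrete taps with fixed delays, average powers, and randomly fading gains. A minimal sketch with Rayleigh-faded taps; the tap delays and powers below are invented for illustration, not the measured hilly-scenario parameters:

```python
import math
import random

def tdl_impulse_response(delays_s, powers_db, fs_hz, rng):
    """One realization of a tapped-delay-line channel: each tap is placed at
    the nearest sample of a discrete delay line and given a complex Gaussian
    (Rayleigh-amplitude) gain whose variance matches the tap's average power."""
    n = round(max(delays_s) * fs_hz) + 1
    h = [0j] * n
    for delay, p_db in zip(delays_s, powers_db):
        sigma = math.sqrt(10 ** (p_db / 10) / 2)  # per-dimension std
        h[round(delay * fs_hz)] += complex(rng.gauss(0, sigma),
                                           rng.gauss(0, sigma))
    return h

# Illustrative taps: delays 0 / 0.2 / 0.6 us, powers 0 / -6 / -12 dB,
# discretized at a 10 MHz sampling rate (all values assumed).
h = tdl_impulse_response([0.0, 0.2e-6, 0.6e-6], [0.0, -6.0, -12.0],
                         10e6, random.Random(0))
print([f"{abs(tap):.2f}" for tap in h])
```

Convolving a transmit sequence with such a realization reproduces the frequency-selective fading that the measured delay spreads imply.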
Furthermore, comparisons of the propagation characteristics between the train station scenario and ten standard scenarios are made to emphasize the significance of the modeling exclusively for the train station scenario. Finally, rules of the influence of four conditions are quantitatively revealed. The measured results and quantitative analysis are significant for leading the simulation and design of signaling and train control communications systems toward the reality. <s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Train stations are one of the largest and most unavoidable obstructions for electromagnetic wave propagation on a high-speed railway. They can bring about severe extra propagation loss, and therefore, lead to poor coverage or handover failure. However, their influence has been rarely investigated before. Based on rich experimental results of 930 MHz measurements conducted on train stations of high-speed railway in China, this paper proposes two empirical models for the extra propagation loss owing to train stations for the first time. The extra loss depends on four conditions: the distance between the transmitter (Tx) and the train station, the type of the train station, the track carrying the train, and the propagation mechanism zones. Hence, the models are established for every case of all the combinations of these four conditions. The validation shows that the proposed models accurately predict the extra propagation loss and support an effective way to involve the influence of the train station in the simulation and design of the signaling and train control communications systems. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Bridges that cross a railway's right-of-way are one of the most common obstacles for wave propagation along a highspeed railway. 
They can lead to poor coverage or handover failure but have been rarely investigated before. To describe the influence of this nonnegligible structure on propagation, measurements have been taken at 930 MHz along a real high-speed railway in China. Based on different mechanisms, the entire propagation process is presented by four zones in the case of an independent crossing bridge (ICB) and two zones in the case of groups of crossing bridges. First, all the propagation characteristics, including extra propagation loss, shadow fading, small-scale fading, and fading depth, have been measured and extracted. The results are shown in a complete table for accurate statistical modeling. Then, two empirical models, i.e., ICB and crossing bridges group (CBG), are first established to describe the extra loss owing to the crossing bridges. The proposed models improve on the state-of-the-art models for this problem, achieving a root mean square error (RMSE) of 3.0 and 3.7 dB, respectively. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> In this paper, a recently conducted measurement campaign for high-speed-train (HST) channels is introduced, where the downlink signals of an in-service Universal Mobile Terrestrial System (UMTS) deployed along an HST railway between Beijing and Shanghai were acquired. The channel impulse responses (CIRs) are extracted from the data received in the common pilot channels (CPICHs). Within 1318 km, 144 base stations (BSs) were detected. Multipath components (MPCs) estimated from the CIRs are clustered and associated across the time slots. The results show that, limited by the sounding bandwidth of 3.84 MHz, most of the channels contain a single line-of-sight (LoS) cluster, and the rest consists of several LoS clusters due to distributed antennas, leaking cable, or neighboring BSs sharing the same CPICH. 
A new geometry-based random-cluster model is established for the clusters' behavior in delay and Doppler domains. Different from conventional models, the time-evolving behaviors of clusters are characterized by random geometrical parameters, i.e., the relative position of BS to railway, and the train speed. The distributions of these parameters, and the per-cluster path loss, shadowing, delay, and Doppler spreads, are extracted from the measurement data. <s> BIB019
Special attention has been given to HST channel measurements in recent years. Due to the high speed of the train and the hostile HST environments, conducting accurate channel measurements for HST communication systems is challenging and needs to address particular hardware and software requirements, e.g., robustness, scalability, hardware redundancy, and traceability BIB008 . Many measurement campaigns - BIB019 for different HST environments have been presented in the literature. Here, we briefly review and classify the important measurements for HST communications according to the scenarios, cellular architecture, measurement setup parameters (i.e., antenna configuration, carrier frequency, and bandwidth), and measured channel statistics, as shown in Table I. 1) In the open space scenario, also called the plain scenario, the Rx is moving at a very high speed in a rural area where the BS antenna is much higher than the surroundings BIB009 . This environment focuses on large cells and continuous coverage, where the link between the fixed Tx and the moving Rx normally has a dominant line-of-sight (LoS) component. However, after a certain distance, called the breakpoint distance, the impact of the sparse scatterers will be noticed at the Rx in the form of non-LoS (NLoS) components. As a result, the slopes of the PL and the Ricean K-factor change noticeably at the breakpoint, leading to a dual-slope PL model. It has been shown that there is a strong link between the breakpoint distance and the antenna height. For a certain site, as the antenna height decreases, the breakpoint moves closer to the Tx. This is because a bigger Fresnel zone is intercepted by the ground, usually covered by vegetation, when the antenna height is lower. Furthermore, due to the influences of different environments, slight variations in the breakpoint distance can be noticed in different scenarios.
Therefore, it can be concluded that the breakpoint distance is mainly determined by the antenna height while being only slightly affected by the environment BIB006 . Based on the geographic nature and the distribution/height of the surrounding scatterers, the open scenarios can be further classified into rural, urban, and suburban scenarios, as illustrated in Fig. 2 . 2) The viaduct scenario is very common for HSTs BIB003 - BIB013 . The main purpose of viaducts is to ensure the smoothness of the rail and the high speed of the train. In this scenario, the radio reflection, scattering, and diffraction caused by nearby scatterers, e.g., trees and buildings, can also be reduced significantly. The viaduct height and relative BS height have a great influence on the received signal. Because of the relatively high altitude of the viaduct in comparison with the surrounding terrain, the LoS component is dominant in this scenario. However, the sparse scatterers in the environment around the viaduct will still influence the received signal at the Rx BIB004 . Based on the relative altitude between the scatterers and the viaduct, this scenario can be further classified into high viaduct and low viaduct scenarios. In the former, most scatterers located within 50 m of the viaduct are lower than the surface of the viaduct, and therefore their impact on the propagation characteristics is negligible. In the low viaduct scenario BIB013 , BIB014 , some of the nearby scatterers are higher than the surface of the viaduct and consequently introduce rich reflection and scattering components that may result in severe shadow fading and/or extra path loss. 3) The cutting scenario is another common scenario for HST wireless communications BIB005 - BIB010 . It represents an environment where the HST passes through a U-shaped geographical cut between hills.
Cuttings are widely used in HST construction to ensure the smoothness of the rail and to help maintain a high train speed when passing through hills. The propagation of radio waves in this scenario is significantly affected by the steep walls on both sides. The LoS component can be observed along the route of the HST in this scenario. Here, we can distinguish two cutting scenarios: a deep cutting, if the receive antenna mounted on top of the train is lower than the upper eave of the cutting, and a low cutting, if the upper eave is lower than the top of the receive antenna. 4) In the hilly terrain scenario BIB011 , BIB015 , the surrounding environment is densely scattered with objects distributed irregularly and non-uniformly. With high-altitude transmit antennas and low-altitude obstacles, the LoS component is observable and can be detected along the entire railway. However, multipath components scattered/reflected from the surrounding obstacles will cause serious constructive or destructive effects on the received signal and therefore influence the channel's fading characteristics. 5) The tunnel scenario represents an environment where the HST passes through tunnels BIB001 , BIB002 with lengths ranging from hundreds of meters to several kilometers. The length, size, and shape of the tunnels and the encountered waveguide phenomena have a significant impact on the communication channel. Because of the long confined space, the bounding walls of the tunnel, and the poor smoothness of the interior walls, the propagation characteristics of signals in tunnels are quite different from those in other scenarios. To overcome the problem of the limited visibility encountered in tunnels and to design an optimal wireless communication network, leaky feeders and distributed antenna systems (DAS) are often deployed. However, as HSTs may require long tunnels, the leaky feeder solution is very expensive, especially at high operating frequencies, and its maintenance is considerably complex BIB007 .
As a result, DAS is more practical BIB012 . It can provide considerable gains in coverage and capacity, and provide spatial diversity against fading by using antenna elements at different locations. It also has advantages in future applications, such as a longer distance between repeaters and easier maintenance once the line is in operation. 6) The station scenario represents the railway facility where HSTs stop regularly to load/unload passengers BIB016 , BIB017 . HST stations can be classified according to their size or architecture. Based on the size of the station, which reflects the estimated communication traffic, the station scenario can be categorized into small-to-medium-sized stations, large stations, and marshalling stations. From the architecture perspective, which affects the propagation characteristics inside the station, three HST station scenarios can be recognized, i.e., open station, semi-closed station, and closed station BIB017 , as illustrated in Fig. 2 . Table II briefly summarizes the descriptions and key parameters of the aforementioned scenarios. These scenarios are the most commonly encountered ones in HST environments. However, recent measurement campaigns have shed some light on other special HST scenarios such as crossing bridges BIB018 . Besides the previous "individual" scenarios, HSTs may encounter more than one scenario (the so-called combination scenario) in one cell. Two combination scenarios are reported in the literature. The first one is a combination of tunnel and viaduct, where viaducts are usually used as transitions between tunnels in mountain environments. The frequent transition between tunnels and viaducts will increase the severity of fading at the transition points, causing a drop in the communication quality. The second combination is between cutting scenarios, i.e., deep and low cuttings, and the rural scenario.
The frequent and fast transitions between these scenarios can degrade the quality of the communication link and make signal prediction quite challenging.
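The dual-slope behavior around the breakpoint described above can be sketched as a piecewise log-distance PL model. The following is a minimal illustration, not a model fitted to any of the surveyed campaigns; the antenna heights, carrier frequency, and exponents are assumed values, and the breakpoint is approximated by the classical two-ray expression d_bp ≈ 4*h_t*h_r/λ:

```python
import numpy as np

def dual_slope_path_loss(d, d_bp, pl_bp, n1, n2):
    """Dual-slope log-distance path loss in dB.

    d      : distance(s) from the Tx in metres
    d_bp   : breakpoint distance in metres
    pl_bp  : path loss at the breakpoint in dB
    n1, n2 : PL exponents before / after the breakpoint (typically n2 > n1)
    """
    d = np.asarray(d, dtype=float)
    before = pl_bp + 10.0 * n1 * np.log10(d / d_bp)
    after = pl_bp + 10.0 * n2 * np.log10(d / d_bp)
    return np.where(d < d_bp, before, after)

# Illustrative breakpoint from the two-ray flat-earth approximation:
# lowering the Tx antenna height h_t shrinks d_bp, i.e., the breakpoint
# moves closer to the Tx, matching the measurement observations above.
c, f = 3e8, 930e6              # assumed GSM-R-band carrier frequency
h_t, h_r = 25.0, 3.5           # assumed Tx/Rx antenna heights (m)
d_bp = 4.0 * h_t * h_r / (c / f)
pl = dual_slope_path_loss([200.0, d_bp, 3000.0], d_bp, 90.0, 2.0, 4.0)
```

The model is continuous at the breakpoint by construction; in practice `d_bp`, `pl_bp`, `n1`, and `n2` would be estimated from measured PL samples rather than assumed.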
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> In this paper, the statistical channel properties and channel modeling of indoor part for high-speed train communication system is presented based on wideband channel measurement at 2.35GHz. Two configurations of base station (BS) antennas, the omni-directional ceiling antenna and the planar antenna on the wall, are used in the measurement, in order to compare different channel characteristics and facilitate the future wideband system deployment. Channel properties, such as Path Loss (PL), Delay Spread (DS) and Ricean K-factor, are analyzed and modeled. The empirical log-distance PL models are derived. It is found that PL with planar antenna at BS is 10dB bigger than that with omni-directional antenna. The latter is even smaller than the PL of the free space. The distributions of DS under these two configurations are both well fitted with the lognormal distribution, and the mean values of them are similar. However, K-factors in decibel are quite different, although both follow well with the normal distribution. The mean values of K-factor with the omni-directional antenna and the planar antenna at BS are 10.41 dB and 4.09 dB, respectively. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper focuses on propagation path loss modeling in viaduct and plain scenarios of the High-speed Railway. The data used for modeling comes from measurement on Zhengzhou-Xi'an passenger dedicated line with the maximum moving speed of 340Km/h. Based on the measurement data, tuned Free-space path loss models in these two scenarios are proposed. The performance of the tuned models is compared with that of the Hata model. The evaluation of the models is in terms of mean error, root mean square error and standard deviation of the residuals between the models and measurement. 
The simulation results and related analysis show better performance of the proposed tuned models compared with the conventional Hata model. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> A network with high quality of service (QoS) is required for railway wireless communication and control systems. Research on radio-wave propagation in railway environment has great significance for the design and optimization of the railway wireless network. In this paper, measurements are taken in railway viaduct area using track side base stations of the GSM-R network. Comparison between the measured path loss values and the estimated values by a few prediction models shows a large deviation. Thus a new path loss prediction model for viaduct area is derived from statistical analysis of the measurement results in this paper. The novel proposed model has proven to be accurate for the planning of the railway wireless network. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper presents the results of path loss measurements in "Zhengzhou-Xi'an" high-speed railway environment at 930 MHz band. A transmitter directional antenna height of 20~30 meters above the rail surface and a receiver omni-directional antenna height of 3.5 meters were used on the high-speed viaducts height of 10~30 meters above the ground. An automatic acquisition system was utilized in the measurements. The model makes distinctions among different terrain. The results of measurements provide practical values for path loss exponent and standard deviation of shadowing affected by the viaduct factor in suburban, open area, mountain area and urban propagation regions where the high-speed trains travel. 
Based on the measurement data, the empirical path loss model was developed, which could be used for predicting the path loss for the future railway communication systems, and provide the facilities for network optimization. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> As a very important parameter in link budget and channel modeling, the Ricean K factor in the viaduct and cutting scenarios along the high speed railway is estimated by using a moment-based estimator. The practical measurement is taken in the train at a speed of more than 250 km/h. The measured distributions are compared with the Ricean distributions and it's seen that the estimation of Kis accurate. Channel conditions of the two special scenarios are analyzed based on the measurement and estimation results. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> A high performance wireless network is essential for for the railway communication and control systems. Research on the fading characteristics in railway environment is of great importance for the design of the railway wireless network. In this paper, measurements are taken in railway terrain cuttings area using track side base stations of the GSM-R network. The fitted path loss model, shadow fading, and dynamaic range of the small scale fading are obtained and compared to the results of viaduct scenario. The propagation environment of the terrain cuttings turns out to be worse than the viaduct area. The path loss exponent is found to be 4.3. The shadow loss can be reasonably described by a log-normal distribution. It is also found that the bridges over the cuttings can cause extra loss of about 5 dB. The dynamaic range of the small scale fading is from 27 dB to 40 dB with a mean value of about 33 dB. 
<s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Presented is the statistical analysis of radio wave propagation in a high-speed railway cutting scenario, derived from 930 MHz measurements taken along the ‘Zhengzhou-Xi'an’ high-speed railway of China. The crown width and bottom width of the cutting are well-covered by the proposed models. The Ricean distribution offers a good fit and the K-factor is found to be lognormal, with a mean value of 1.88 dB and standard deviation of 3.29 dB. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Narrow band measurements at 930.2 MHz are carried out in two kinds of viaduct scenarios on the Zhengzhou-Xi'an high-speed railway at the speed of 300 km/h. The first-order and second-order statistics of the measured data, such as level crossing rate (LCR) and average fade duration (AFD), are compared with theoretical values of Rayleigh, Rice and Nakagami models. An emulation system is set up in the State Key Lab of Rail Traffic Control and Safety using a Propsim C8Radio Channel Emulator. Two new models based on WINNER II D2a channel model are proposed for viaduct scenarios according to the emulation results. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper presents a novel and practical study on the position-based radio propagation channel for High-Speed Railway by performing extensive measurements at 2.35 GHz in China. The specification on the path loss model is developed. In particular, small scale fading properties such as K-factor, Doppler frequency feature and time delay spread are parameterized, which show dynamic variances depending on the train location and the transceiver separation. 
Finally, the statistical position-based channel models are firstly established to characterize the High-Speed Railway channel, which significantly promotes the evaluation and verification of wireless communications in relative scenarios. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper presents a set of 930 MHz measurements conducted along the “Zhengzhou-Xi'an” high-speed rail of China, to characterize short-term fading behavior of the rail viaduct scenario. Three measurement cases covering viaducts with different heights are reported. The analysis results include fade depth (FD), Ricean distribution fit and K-factor modeling, level crossing rates (LCR), and average fade duration (AFD). A small value of fade depth, around 15 dB, is observed. The Ricean distribution offers good fit in this line-of-sight (LOS) propagation scenario, and the K-factor estimated using moment-based method is modeled as a piecewise function, whose break point equals to the reference distance. It is found that the viaduct height H greatly affects the severity of fading and the feature parameters. The results are applicable to the design of high-speed rail communication systems and the modeling of the rail viaduct fading channels. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper proposes a distance-dependent Ricean K-factor model for a line-of-sight (LOS) viaduct scenario in the high-speed rail (HSR) of China. Extensive narrowband measurements conducted at 930 MHz are utilized. The propagation environment can be categorized into two cases: moderate suburban and dense suburban. The estimated K-factors are modeled as a piecewise-linear function of distance. The statistical fluctuations of K-factors are well considered by introducing the standard deviation to the expression. 
A detailed comparison between the piecewise-linear K-factor model and that of other literature validates the proposed model. Our results will be useful in the modeling of HSR viaduct channels and the performance analysis such as channel capacity and throughput for HSR wireless communication systems. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> In this paper, based on the measured data at Wuhan-Guangzhou high speed railway, the statistics characteristics of high speed mobile fading channel including long-term and short-term fading in this environment are presented. The measurement campaigns were conducted at GSM-R downlink band. The experimental data was analyzed to provide path loss model and short-term fading statistics including probability density functions (PDF) of signal amplitude, delay statistics, number of paths and path power statistics. From these statistics, it is shown that the path loss index at this railway area is n=2∼5 with standard deviation ranging from 3 to 6dB; the test PDF of the short-term fading in such channels approximately fit Nakagami distributions better; the cumulative density functions of average delay and root mean square delay demonstrated that the mean and root mean square delay are not larger than respectively 1.37µs and 1.69µs and maximum delays less than 6.7µs occur most frequently. At last, the PDFs of paths and PDFs of relative amplitude of paths with different threshold levels were computed and drawn. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> The paper describes the measurement campaigns for the broadband channel properties under the high- speed condition, which have been carried out on Zhengzhou to Xi'an (ZX) High-Speed Railway and Beijing to Tianjin (BT) High-Speed Railway. 
WCDMA with the bandwidth of 3.84MHz is employed as the excitation signal that is transmitted from the base station along the railway and received by the TSMQ by ROHDE & SCHWARZ inside the train. Different scenarios including plain, U-shape cutting, station and hilly terrain are chosen in the measurements and the parameters about the channel multipath properties are extracted, analyzed and briefly reported here. These results are informative for the system designers in the future wireless communication of High-Speed Railway. <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> An efficient channel sounding method using cellular communication systems is proposed for high-speed railway (HSR) propagation environments. This channel measurement technique can be used conveniently to characterize different HSR scenarios, which can significantly improve the measurement efficiency. Based on downlink signals of wideband code division multiple access (WCDMA) and the long term evolution (LTE), principles and methodologies of HSR channel sounding are presented. Using the WCDMA signal, a measurement campaign is conducted in real-world HSR scenarios and statistical characterizations are provided using a radio network analyzer. Due to the limits of the radio network analyzer, afterwards, a software defined radio (SDR)-based channel data recorder is developed allowing users to collect the signals from different wireless cellular systems. Especially, the estimation accuracies are validated in lab by the faded signals emitted from a vector signal generator. The results show that the channel data recorder provides a particularly good match to the configured fading channels. Therefore, this measurement method can be employed to investigate the HSR channel, and to establish the channel models under the various HSR scenarios. 
<s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> The high-speed railway propagation channel has significant effect on the design and performance analysis of wireless railway control systems. An important feature of the high-speed railway communications is the usage of directional transmitting antennas, due to which the receiver may experience strong attenuation of the line-of-sight (LOS) path under the base station (BS). This leads to a drop, and strong variations, of the signal strength under the BS. While the physical origin of the signal variations is different from conventional shadowing, it can be described by similar statistical methods. However, the effect has been largely neglected in the literature. 
In this paper we first define the region of the bottom of the BS, and then present a simple shadow fading model based on the measurements performed in high-speed railways at 930 MHz. It is found that the bottom area of the BS has a range of 400 m – 800 m; the standard deviation of the shadowing also follows a Gaussian distribution; the double exponential model fits the autocovariance of the shadow fading very well. We find that the directivity of the transmitting antenna leads to a higher standard deviation of shadowing and a smaller decorrelation distance under the BS compared to the region away from the BS. <s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper focuses on the fading characteristics of wireless channel on High-Speed Railway (HSR) in hilly terrain scenario. Due to the rapid speed, the fading characteristics of HSR channel are highly correlated with time or Transmit-Receive distance and have their own special property. To investigate the fading characteristics, the measurement is conducted on the Guangzhou-Shenzhen passenger-dedicated line in China with the speed of 295 km/h in the data-collection area at 2.4 GHz. From the measured data, the amplitude of each path is estimated by using the Subspace-Alternating Generalized Expectation-Maximization (SAGE) algorithm along with other parameters of channel impulse responses. Then the fading parameters, including path loss, shadow fading, and K-factor, are analysed. With the numerical results in the measurement and analysis, the fading characteristics have been revealed and modelled. It is supposed that this work has a promotion for HSR communication system design and improvement. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Cuttings are widely used in high-speed railway (HSR) transportation to ensure the flatness of rails. 
The special structure of cuttings results in rich reflection and scattering, and creates dense multipath components. This paper presents a series of measurements of the propagation channel at 930 MHz conducted along the “Zhengzhou-Xi'an” HSR of China, to characterize the small-scale fading behavior of rail-cutting scenarios as a function of the geometry of cuttings, including crown width and bottom width. Raw data are collected in six cuttings (five cuttings are used for developing the model, while the other one is used for validation) in rural and suburban environments. We propose a set of effective methods to statistically model the spatial/temporal variations – including fade depth (FD), level crossing rate (LCR), average fade duration (AFD), and Ricean K-factor – as a function of the structural parameters of cuttings. Akaike's Information Criterion (AIC)-based evaluation indicates that the Ricean distribution is the best to describe small-scale fading. In addition, the rich multipath and directionality of the transmitting antennas lead to a non-monotonous dependence of the K-factor on the distance between transmitter and receiver. The autocovariance function of the deviation of the extracted K-factors from the proposed model is presented and the coherence length is investigated. Our results show that even though a cutting is a scenario with severe fading, a “wide” cutting (i.e., with both wide crown and bottom widths) is conducive to the reduction of the severity of fading. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> For the design and performance evaluation of broadband wireless communication systems in High-Speed Railway (HSR) environments, it is of crucial importance to have accurate and realistic propagation channel model.
Based on real measurement data in U-Shape Groove (USG) scenarios at 2.35 GHz on Zhengzhou-Xi'an (ZX) HSR in China, the channel fading characteristics such as path loss, shadowing, K factor, time dispersivity and Doppler effects are specialized. These technical guidelines will promote the development of the wireless communication system under HSR. <s> BIB019 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> With the rapid development of high speed railway (HSR), propagation characteristics of channels in HSR scenarios are therefore in urgent demand. We conducted numerous single input single output (SISO) measurements at 2.6 GHz with a bandwidth of 20 MHz along the Harbin-Dalian passenger dedicated railway line. Here, first analytical results in hilly terrains are provided. A double-slope path loss model fits measured data well and shadow fading is extracted to be log-normal distributed. Statistical results of small-scale fading are presented and compared in near regions and far regions relative to the transmitter, including the mean excess delay, root-mean-square (RMS) delay spread and the number of paths. Meanwhile, the delay Doppler spectrum is given out and verified. Finally, tapped-delay-line (TDL) channel model is established in detail based on the measured data. It is supposed that these results and models have a promotion for the further evaluation, simulation and design of the wireless communication system in HSR. <s> BIB020 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Based on wideband radio channel measurements with a bandwidth of up to 50 MHz at 2.35 GHz in a Ushape cutting environment, we analyze the Ricean K-factor for high-speed railway communications. 
Three types of the K-factor, consisting of narrowband, wideband and delay K-factor, are extracted according to the measured channel responses by using the channel partitioning and combining method. Due to the rich reflecting and scattering components in the U-shape cutting scenario, the K-factor dramatically changes with the frequency. A distance-based statistical narrowband K-factor model covering the frequency variability is proposed. The channel bandwidth dependent property of the wideband K-factor is observed and then a bandwidth-based statistical wideband K-factor model is developed. Moreover, it is found that the K-factor just exists at the beginning of the delay bins in the deep U-shape cutting scenario. These results are provided for use in system design and channel modeling of high-speed railway communications. <s> BIB021
Channel statistics are essential for the analysis and design of a communication system. Most HST measurement campaigns have concentrated on large-scale fading statistics, i.e., path loss (PL) and shadowing. The measurement campaign presented in studied the PL in HST channels when the Tx and Rx were located inside the same HST carriage and when they were located in different carriages. The measured results showed that the waves traveling inside the same train carriage dominate over the ones reflected from scatterers outside the HST, owing to the high penetration loss of wireless signals traveling through the metal body of the carriages. Conversely, when the communication devices are located in different carriages, the waves reflected from outer scatterers dominate over the waves traveling inside the train carriages because of the high insulation between the carriages. In BIB001 , the PL of indoor wideband HST channels was also investigated using two different indoor Tx configurations, i.e., an omni-directional antenna mounted on the ceiling of the HST and a planar antenna mounted on the wall of the carriage. Measurements showed that the channel between the Tx planar antenna and Rx can suffer 10 dB greater loss than the one between the Tx omni-directional antenna and Rx. The aforementioned results from both measurement campaigns are very useful for the design of HSTs and measurement scenarios. However, more measurements for indoor scenarios in HSTs are needed before these observations can be considered conclusive. PLs of HST channels in open space and hilly terrain scenarios were reported in , , BIB016 and BIB017 , BIB020 . Measurement data reported in both hilly terrain scenarios showed a breakpoint in the estimated PLs. A dominant and strong LoS component can be easily observed before the breakpoint, while the impact of scatterers starts and grows beyond the breakpoint distance.
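A least-squares fit of the log-distance PL model used throughout these campaigns can be sketched as follows; the exponent n = 3.5 and the 3 dB shadowing deviation are illustrative values, not parameters taken from the cited measurements.

```python
import numpy as np

def fit_log_distance_pl(d, pl_db, d0=1.0):
    """Least-squares fit of the log-distance path loss model
    PL(d) = PL(d0) + 10*n*log10(d/d0); returns (PL(d0), n)."""
    x = 10.0 * np.log10(np.asarray(d) / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(pl_db), rcond=None)
    return pl0, n

# Synthetic example: true exponent n = 3.5 plus log-normal shadowing
rng = np.random.default_rng(0)
d = np.linspace(50, 2000, 400)                         # Tx-Rx distances in m
pl = 40.0 + 35.0 * np.log10(d) + rng.normal(0, 3, d.size)  # sigma = 3 dB
pl0, n = fit_log_distance_pl(d, pl)
print(f"fitted exponent n = {n:.2f}")
```

The same fit applied piecewise before and after a breakpoint gives the dual-slope models reported for the hilly terrain scenarios.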
The breakpoint distance depends on the clearance of the first Fresnel zone and can be calculated from the Tx and Rx antenna heights and the wavelength of the transmitted signal . Accordingly, different breakpoint distances were reported in the aforementioned hilly terrain measurements, i.e., 778 m in BIB017 and 500 m in BIB020 . Since the viaduct is a common HST scenario, the PL of HST viaduct channels has been thoroughly studied in the literature, e.g., BIB002 , BIB003 , BIB004 , BIB009 , BIB010 , . Most of these measurements highlighted the impact of the height of the viaduct and the relative height of the BS on the estimated PL. Two main observations can be drawn from the aforementioned viaduct measurements. First, the higher the BS antenna, the smaller the PL exponent for a given viaduct height. Second, the viaduct reduces the severity of the channel fading; in other words, the higher the viaduct, the less severe the fading. Both observations are physically meaningful, since raising the BS and the viaduct above the surrounding obstacles leads to a clearer LoS path and reduces the impact of nearby scatterers on the received signal. The measurements of HST channels in cutting scenarios reported in BIB006 , BIB007 , BIB018 , BIB019 have demonstrated the impact of the cutting structure, i.e., the depth and the widths of the top and bottom of the cutting, on the estimated PLs. A shallow (low) cutting will lead to a strong LoS condition, while a deep cutting will lead to a large PL exponent due to reflections from the cutting's slopes. A comparison between the PL of cutting and viaduct scenarios was carried out in BIB006 . It suggested that propagation conditions in cutting scenarios can be worse than those in viaduct ones because of the reflected and scattered components caused by the slopes of the cutting.
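The breakpoint distance tied to the clearance of the first Fresnel zone is commonly approximated by the two-ray expression d_bp = 4·h_Tx·h_Rx/λ; a minimal sketch, with illustrative (not measured) antenna heights:

```python
def breakpoint_distance(h_tx, h_rx, f_c):
    """First-Fresnel-zone breakpoint distance d_bp = 4*h_tx*h_rx/lambda
    of the classical two-ray model (heights in m, carrier f_c in Hz)."""
    wavelength = 3e8 / f_c
    return 4.0 * h_tx * h_rx / wavelength

# Illustrative values only, not the exact geometry of the cited campaigns:
# a 30 m rail-side BS antenna and a 3 m train-roof antenna at 930 MHz
d_bp = breakpoint_distance(30.0, 3.0, 930e6)
print(f"breakpoint distance = {d_bp:.0f} m")
```

This makes explicit why campaigns with different BS heights and carrier frequencies report different breakpoint distances.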
It is important to note that such a conclusion is highly dependent on the dimensions of the studied viaduct and cutting, as the impact of those dimensions on the estimated PL was highlighted earlier. Shadowing, on the other hand, has generally been modeled as log-normally distributed in different HST scenarios. Various channel statistics studied in HST channel measurement campaigns are shown in Table I . The Ricean K-factor is a very important parameter in link budget analysis and channel modeling. Therefore, many papers presented the estimation of K-factors in different scenarios, e.g., open space , viaduct BIB002 , BIB005 , BIB011 , BIB009 - , cutting BIB006 - BIB019 , and hilly terrain BIB017 . The previous discussions of the dominance of the LoS component, the breakpoint distance, and the impact of the viaduct and cutting structure are also related to the K-factor. For example, showed that a higher viaduct leads to a higher K-factor. It also showed that lower viaducts leave more surrounding scatterers, which increases the severity of the fading and causes considerable fluctuation of the K values. Moreover, the measurement in showed that, while the K-factor is a linear function of distance, the slopes of the K values are different before and after the breakpoint. Similar comprehensive studies on K-factors of HST channels, but in cutting scenarios, were reported in BIB018 , BIB021 . The analysis showed that wide cuttings increase the possibility of dominant LoS components, which leads to higher K-values. Distance-dependent linear K models for different cutting dimensions before and after the breakpoint distance were proposed in BIB018 . In BIB008 , BIB010 , , BIB018 , the spatial/temporal variations, e.g., fade depth (FD), level-crossing rate (LCR), and average fade duration (AFD), were investigated.
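A common way to estimate the K-factor from measured envelopes, including in several of the cited campaigns, is the moment-based method built on the mean and variance of the received power; the sketch below applies it to synthetic Ricean samples rather than the cited data.

```python
import numpy as np

def moment_based_k(envelope):
    """Moment-based Ricean K-factor estimate from the mean and
    variance of the received power G = r^2 (Greenstein-style)."""
    g = np.asarray(envelope) ** 2
    ga, gv = g.mean(), g.var()
    los = np.sqrt(max(ga**2 - gv, 0.0))   # estimated LoS power
    return los / (ga - los)               # K = LoS power / scattered power

# Illustrative check against synthetic Ricean samples with K = 5 (linear)
rng = np.random.default_rng(1)
k_true, sigma = 5.0, 1.0
nu = np.sqrt(2 * sigma**2 * k_true)       # LoS amplitude for the target K
r = np.abs(nu + sigma * rng.normal(size=200_000)
              + 1j * sigma * rng.normal(size=200_000))
k_hat = moment_based_k(r)
print(f"estimated K = {k_hat:.2f} (true {k_true:.2f})")
```

Fitting such estimates against distance yields the piecewise-linear K models discussed above.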
FD is a measure of the variation of the channel energy about its local mean due to small-scale fading and is calculated from the difference between the 1% and 50% signal levels. Measurements in viaduct scenarios have shown that FD is independent of the viaduct's height but is affected by the number and closeness of surrounding scatterers that are higher than the viaduct BIB008 , . LCR is defined as the expected rate at which the received signal crosses a specified level in a positive-going or negative-going direction, while AFD is defined as the average period of time for which the received signal stays below this specified level, i.e., the threshold. LCR and AFD were statistically modeled as functions of the structural parameters of the viaduct and cutting scenarios in , BIB018 . The results showed that the severity of fading in viaduct scenarios is greatly reduced compared with that in open space . Power delay profiles (PDPs) were investigated in , , BIB012 , BIB013 , BIB014 . In BIB015 , a measurement was carried out in a tunnel scenario and the signal propagation characteristics at the breakpoint were discussed. The stationarity interval of HST channels, defined as the maximum time duration over which the channel satisfies the wide sense stationary (WSS) condition, was investigated in based on measurements. It showed that conventional channel models offered stationarity intervals much larger than the actual measured ones. In , the non-stationarity of an HST channel in a cutting scenario was investigated using a metric called the non-stationarity index, defined as the distance between the auto-correlation of a real time-variant transfer function and the auto-correlation of this transfer function under the WSS assumption. The reported measurement data showed that the non-stationarity index increases when the Doppler frequency shift varies rapidly.
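The LCR and AFD definitions above translate directly into sample-based estimates from a measured (or simulated) envelope; the sinusoid below is only a deterministic sanity check, not measurement data.

```python
import numpy as np

def lcr_afd(envelope, threshold, fs):
    """Estimate level crossing rate (positive-going crossings per second)
    and average fade duration (mean time below threshold per fade)."""
    below = np.asarray(envelope) < threshold
    # positive-going crossing: sample below threshold followed by one at/above it
    up_crossings = np.count_nonzero(below[:-1] & ~below[1:])
    duration = len(envelope) / fs
    lcr = up_crossings / duration
    afd = below.sum() / fs / up_crossings if up_crossings else np.inf
    return lcr, afd

# Sanity check on a 5 Hz sinusoid: 5 upward crossings/s of the -0.5 level,
# and the signal stays below -0.5 for 1/3 of each 0.2 s period
fs = 1000
t = np.arange(0, 1, 1 / fs)
s = np.sin(2 * np.pi * 5 * t)
lcr, afd = lcr_afd(s, -0.5, fs)
print(f"LCR = {lcr:.1f} /s, AFD = {afd * 1e3:.1f} ms")
```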
In the future, more channel statistics, especially those related to small-scale fading parameters, need to be investigated through further measurement campaigns.
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> We present and analyse the results of wideband radio channel measurements performed in tunnels. Both a high speed train tunnel and a smaller test tunnel have been investigated with both antennas and leaky feeders as fixed radiators. The results show typical features of the tunnel radio channel with typically low delay spread combined to significant slow fading of the LOS signal due to interferences. The delay spread may increase substantially during the fading dips. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper covers some of the work carried out in the planning of the global system for mobile communication for railway (GSM-R) of the tunnels on the new high-speed trains in Spain. Solutions based on distributed antenna systems have been tested by installing several 900-MHz transmitters inside and outside of a 4000-m tunnel and measuring the propagation in different conditions. The measurements have been used to model the effects of tunnel propagation, including curves, trains passing from the outside to the inside, and the effect of two trains passing inside the tunnel. All cases have been tested by comparing solutions using isofrequency and multifrequency distributed transmitters inside the tunnel. The improvements of signal-to-noise ratio and the reduction of the blocking effects of two trains passing have demonstrated the advantages of using isofrequency distributed antenna systems in tunnels. Finally, a complete propagation model combining both modal analysis and ray tracing has been applied to predict the propagation loss inside and outside these tunnels, and results have been compared with the measurements. The model has proven to be very useful for radio planning in new railway networks. 
<s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper presents an empirical path loss model derived from the 930MHz measurements along "Zhengzhou-Xi'an" high-speed railway in China. All the measurements were taken on the viaduct with the height of 23 meters above the ground surface. It applies to distances and base station antenna effective heights h not well-covered by existing models. The Least Squares Method (LS) is utilized in the curve fitting. The path loss exponent n determined by the slope of the linear fitting curve is statistically modeled, considering the base station antenna effective height h. Based on the log-normal shadowing model, a novel path loss model was developed. The proposed path loss model applies to high-speed railway viaduct scenarios, with base antenna heights from 15 to 30 m, base-to-train distances from 0.5 to 4 km. Compared with Hata and Winner II models, it raises path loss prediction accuracy by 3~10 dB and reduces the standard deviation by 1~3 dB. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper focuses on the shadow fading characteristic in viaduct scenario of the High-speed Railway. Measurement is done on Beijing to Shanghai High-speed Railway. Based on the measurement data, distribution and autocorrelation of the shadow fading are researched. Statistical values of the shadow fading standard deviation and the decorrelation distance are analyzed. It is shown that the lognormal distribution suits most groups of the measurement data well. Evaluation of the exponential shadow fading autocorrelation model and the double exponential shadow fading autocorrelation model are made in terms of mean error, standard deviation and correlation coefficient of the residuals between the models and the measurement data.
Simulation results show better performance of the double exponential model compared with the exponential model. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> Narrow band measurements at 930.2 MHz are carried out in two kinds of viaduct scenarios on the Zhengzhou-Xi'an high-speed railway at the speed of 300 km/h. The first-order and second-order statistics of the measured data, such as level crossing rate (LCR) and average fade duration (AFD), are compared with theoretical values of Rayleigh, Rice and Nakagami models. An emulation system is set up in the State Key Lab of Rail Traffic Control and Safety using a Propsim C8 Radio Channel Emulator. Two new models based on WINNER II D2a channel model are proposed for viaduct scenarios according to the emulation results. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> A high performance wireless network is essential for the railway communication and control systems. Research on the fading characteristics in railway environment is of great importance for the design of the railway wireless network. In this paper, measurements are taken in railway terrain cuttings area using track side base stations of the GSM-R network. The fitted path loss model, shadow fading, and dynamic range of the small scale fading are obtained and compared to the results of viaduct scenario. The propagation environment of the terrain cuttings turns out to be worse than the viaduct area. The path loss exponent is found to be 4.3. The shadow loss can be reasonably described by a log-normal distribution. It is also found that the bridges over the cuttings can cause extra loss of about 5 dB. The dynamic range of the small scale fading is from 27 dB to 40 dB with a mean value of about 33 dB.
<s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper proposes a distance-dependent Ricean K-factor model for a line-of-sight (LOS) viaduct scenario in the high-speed rail (HSR) of China. Extensive narrowband measurements conducted at 930 MHz are utilized. The propagation environment can be categorized into two cases: moderate suburban and dense suburban. The estimated K-factors are modeled as a piecewise-linear function of distance. The statistical fluctuations of K-factors are well considered by introducing the standard deviation to the expression. A detailed comparison between the piecewise-linear K-factor model and that of other literature validates the proposed model. Our results will be useful in the modeling of HSR viaduct channels and the performance analysis such as channel capacity and throughput for HSR wireless communication systems. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper presents a set of 930 MHz measurements conducted along the “Zhengzhou-Xi'an” high-speed rail of China, to characterize short-term fading behavior of the rail viaduct scenario. Three measurement cases covering viaducts with different heights are reported. The analysis results include fade depth (FD), Ricean distribution fit and K-factor modeling, level crossing rates (LCR), and average fade duration (AFD). A small value of fade depth, around 15 dB, is observed. The Ricean distribution offers good fit in this line-of-sight (LOS) propagation scenario, and the K-factor estimated using moment-based method is modeled as a piecewise function, whose break point equals to the reference distance. It is found that the viaduct height H greatly affects the severity of fading and the feature parameters. 
The results are applicable to the design of high-speed rail communication systems and the modeling of the rail viaduct fading channels. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> In this paper, based on the measured data at Wuhan-Guangzhou high speed railway, the statistics characteristics of high speed mobile fading channel including long-term and short-term fading in this environment are presented. The measurement campaigns were conducted at GSM-R downlink band. The experimental data was analyzed to provide path loss model and short-term fading statistics including probability density functions (PDF) of signal amplitude, delay statistics, number of paths and path power statistics. From these statistics, it is shown that the path loss index at this railway area is n=2∼5 with standard deviation ranging from 3 to 6dB; the test PDF of the short-term fading in such channels approximately fit Nakagami distributions better; the cumulative density functions of average delay and root mean square delay demonstrated that the mean and root mean square delay are not larger than respectively 1.37µs and 1.69µs and maximum delays less than 6.7µs occur most frequently. At last, the PDFs of paths and PDFs of relative amplitude of paths with different threshold levels were computed and drawn. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper presents a novel and practical study on the position-based radio propagation channel for High-Speed Railway by performing extensive measurements at 2.35 GHz in China. The specification on the path loss model is developed. In particular, small scale fading properties such as K-factor, Doppler frequency feature and time delay spread are parameterized, which show dynamic variances depending on the train location and the transceiver separation. 
Finally, the statistical position-based channel models are firstly established to characterize the High-Speed Railway channel, which significantly promotes the evaluation and verification of wireless communications in relative scenarios. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> The paper describes the measurement campaigns for the broadband channel properties under the high- speed condition, which have been carried out on Zhengzhou to Xi'an (ZX) High-Speed Railway and Beijing to Tianjin (BT) High-Speed Railway. WCDMA with the bandwidth of 3.84MHz is employed as the excitation signal that is transmitted from the base station along the railway and received by the TSMQ by ROHDE & SCHWARZ inside the train. Different scenarios including plain, U-shape cutting, station and hilly terrain are chosen in the measurements and the parameters about the channel multipath properties are extracted, analyzed and briefly reported here. These results are informative for the system designers in the future wireless communication of High-Speed Railway. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> An efficient channel sounding method using cellular communication systems is proposed for high-speed railway (HSR) propagation environments. This channel measurement technique can be used conveniently to characterize different HSR scenarios, which can significantly improve the measurement efficiency. Based on downlink signals of wideband code division multiple access (WCDMA) and the long term evolution (LTE), principles and methodologies of HSR channel sounding are presented. Using the WCDMA signal, a measurement campaign is conducted in real-world HSR scenarios and statistical characterizations are provided using a radio network analyzer. 
Due to the limits of the radio network analyzer, afterwards, a software defined radio (SDR)-based channel data recorder is developed allowing users to collect the signals from different wireless cellular systems. Especially, the estimation accuracies are validated in lab by the faded signals emitted from a vector signal generator. The results show that the channel data recorder provides a particularly good match to the configured fading channels. Therefore, this measurement method can be employed to investigate the HSR channel, and to establish the channel models under the various HSR scenarios. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> The validity of the maximum capacity criterion applied to realize high-rank line-of-sight (LoS) multiple-input multiple-output (MIMO) channels is investigated for high speed railway scenarios. Performance is evaluated by ergodic capacity. Numerical results demonstrate that by simply adjusting antenna spacing according to the maximum capacity criterion, significant capacity gains are achievable. We find relatively low sensitivity of the system to displacements from the optimal point and angle in relatively short range. Thus, we present two proposals to reconfigure antenna arrays so as to maximize LoS MIMO capacity in the high speed railway scenarios <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> The high-speed railway propagation channel has significant effect on the design and performance analysis of wireless railway control systems. An important feature of the high-speed railway communications is the usage of directional transmitting antennas, due to which the receiver may experience strong attenuation of the line-of-sight (LOS) path under the base station (BS). 
This leads to a drop, and strong variations, of the signal strength under the BS. While the physical origin of the signal variations is different from conventional shadowing, it can be described by similar statistical methods. However, the effect has been largely neglected in the literature. In this paper we first define the region of the bottom of the BS, and then present a simple shadow fading model based on the measurements performed in high-speed railways at 930 MHz. It is found that the bottom area of the BS has a range of 400 m – 800 m; the standard deviation of the shadowing also follows a Gaussian distribution; the double exponential model fits the autocovariance of the shadow fading very well. We find that the directivity of the transmitting antenna leads to a higher standard deviation of shadowing and a smaller decorrelation distance under the BS compared to the region away from the BS. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> Cuttings are widely used in high-speed railway (HSR) transportation to ensure the flatness of rails. The special structure of cuttings results in rich reflection and scattering, and creates dense multipath components. This paper presents a series of measurements of the propagation channel at 930 MHz conducted along the “Zhengzhou-Xi'an” HSR of China, to characterize the small-scale fading behavior of rail-cutting scenarios as a function of the geometry of cuttings, including crown width and bottom width. Raw data are collected in six cuttings (five cuttings are used for developing the model, while the other one is used for validation) in rural and suburban environments. We propose a set of effective methods to statistically model the spatial/temporal variations – including fade depth (FD), level crossing rate (LCR), average fade duration (AFD), and Ricean ${K}$ -factor – as a function of the structural parameters of cuttings. 
Akaike's Information Criterion (AIC)-based evaluation indicates that the Ricean distribution is the best to describe small-scale fading. In addition, the rich multipath and directionality of the transmitting antennas lead to a non-monotonous dependence of the K-factor on the distance between transmitter and receiver. The autocovariance function of the deviation of the extracted K-factors from the proposed model is presented and the coherence length is investigated. Our results show that even though a cutting is a scenario with severe fading, a “wide” cutting (i.e., with both wide crown and bottom widths) is conducive to the reduction of the severity of fading. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> In this paper, empirical results characterizing the joint statistical properties of the shadow fading (SF), the root-mean-square (rms) delay spread (DS), and the Ricean K-factor are presented. Measurement data from high-speed railway in viaduct scenario have been analyzed. It is found that a log-normal distribution accurately fits the distribution function of all the investigated parameters. The spatial autocorrelation function of SF, rms DS, and Ricean K-factor can be modeled with an exponential decay function. However, the spatial autocorrelation functions of all three variables are better characterized by a composite of double exponential decaying functions. A positive cross correlation is found between the SF and the Ricean K-factor, while both parameters are negatively correlated with rms DS. All essential parameters required for the implementation of a simulation model considering the joint statistical properties of SF, rms DS, and the Ricean K-factor are provided. <s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C.
Measurement's Setup Parameters <s> For the design and performance evaluation of broadband wireless communication systems in High-Speed Railway (HSR) environments, it is of crucial importance to have accurate and realistic propagation channel model. Based on real measurement data in U-Shape Groove (USG) scenarios at 2.35 GHz on Zhengzhou-Xi'an (ZX) HSR in China, the channel fading characteristics such as path loss, shadowing, K factor, time dispersivity and Doppler effects are specialized. These technical guidelines will promote the development of the wireless communication system under HSR. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper focuses on the fading characteristics of wireless channel on High-Speed Railway (HSR) in hilly terrain scenario. Due to the rapid speed, the fading characteristics of HSR channel are highly correlated with time or Transmit-Receive distance and have their own special property. To investigate the fading characteristics, the measurement is conducted on the Guangzhou-Shenzhen passenger-dedicated line in China with the speed of 295 km/h in the data-collection area at 2.4 GHz. From the measured data, the amplitude of each path is estimated by using the Subspace-Alternating Generalized Expectation-Maximization (SAGE) algorithm along with other parameters of channel impulse responses. Then the fading parameters, including path loss, shadow fading, and K-factor, are analysed. With the numerical results in the measurement and analysis, the fading characteristics have been revealed and modelled. It is supposed that this work has a promotion for HSR communication system design and improvement. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. 
Measurement's Setup Parameters <s> In this paper, a recently conducted measurement campaign for high-speed-train (HST) channels is introduced, where the downlink signals of an in-service Universal Mobile Terrestrial System (UMTS) deployed along an HST railway between Beijing and Shanghai were acquired. The channel impulse responses (CIRs) are extracted from the data received in the common pilot channels (CPICHs). Within 1318 km, 144 base stations (BSs) were detected. Multipath components (MPCs) estimated from the CIRs are clustered and associated across the time slots. The results show that, limited by the sounding bandwidth of 3.84 MHz, most of the channels contain a single line-of-sight (LoS) cluster, and the rest consists of several LoS clusters due to distributed antennas, leaking cable, or neighboring BSs sharing the same CPICH. A new geometry-based random-cluster model is established for the clusters' behavior in delay and Doppler domains. Different from conventional models, the time-evolving behaviors of clusters are characterized by random geometrical parameters, i.e., the relative position of BS to railway, and the train speed. The distributions of these parameters, and the per-cluster path loss, shadowing, delay, and Doppler spreads, are extracted from the measurement data. <s> BIB019
Carrier Frequency and Bandwidth: Most of the measurement campaigns in the literature were conducted at the carrier frequency of 930 MHz in GSM-R systems BIB014 - BIB007 , BIB003 - BIB004 , BIB005 , BIB008 , , BIB006 - BIB015 , BIB002 , BIB009 , . Correspondingly, all of the aforementioned measurements were for narrowband channels with a bandwidth of 200 kHz. Wideband channel measurements with higher bandwidths, i.e., 10-100 MHz, and higher carrier frequencies, i.e., 2.1-5.2 GHz, were reported in - , BIB010 , BIB016 , BIB017 - BIB018 , BIB001 , BIB011 - BIB019 . Antenna Configuration: The majority of HST measurement campaigns so far have focused on single-input single-output (SISO) systems - , BIB014 - BIB004 , BIB005 - , BIB006 - BIB018 , BIB001 , BIB002 , BIB009 , , BIB011 , BIB012 . Multiple-input multiple-output (MIMO) systems, where multiple antennas are employed at both ends, are essential for providing the higher capacity required by future high-speed data transmissions BIB013 . Channel measurement, particularly MIMO channel measurement at high moving speeds, remains a challenging task. So far, only a few measurement campaigns were conducted using multiple antennas at either the Rx, i.e., single-input multiple-output (SIMO) systems , , or the Tx, i.e., multiple-input single-output (MISO) systems . Hence, HST MIMO wideband channel measurement campaigns with higher carrier frequencies and larger bandwidths than those of GSM-R are needed for future HST communication system developments.
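Part of the difficulty of measuring HST channels at high speeds comes from the large maximum Doppler shift f_d = v·f_c/c, which grows with the higher carrier frequencies discussed above; a quick illustration at an assumed train speed of 350 km/h:

```python
def max_doppler_shift(speed_kmh, f_c_hz):
    """Maximum Doppler shift f_d = v * f_c / c for a terminal at speed v."""
    v = speed_kmh / 3.6            # km/h -> m/s
    return v * f_c_hz / 3e8

# 350 km/h is an illustrative HST speed: compare a 930 MHz GSM-R carrier
# with a 2.35 GHz wideband carrier
fd_gsmr = max_doppler_shift(350, 930e6)
fd_wide = max_doppler_shift(350, 2.35e9)
print(f"f_d at 930 MHz: {fd_gsmr:.0f} Hz; at 2.35 GHz: {fd_wide:.0f} Hz")
```

The more than doubled Doppler shift at the higher carrier tightens the sampling and synchronization requirements of a wideband MIMO sounder accordingly.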
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey

III. HST CHANNEL MODELS
HST channel models in the literature can be categorized as large-scale fading models , BIB001 , , BIB002 , BIB003 - BIB004 , BIB007 , , BIB005 , BIB006 , BIB010 , BIB011 and small-scale fading models , , , BIB008 - BIB009 . The state of the art of HST channel models has not been surveyed yet. Therefore, we first categorize the PL models in Table III. In Table IV, the most important HST small-scale fading channel models are briefly reviewed and classified according to the modeling approach, scenario, stationarity, antenna configuration, frequency selectivity (FS), scatterer region, and cellular architecture.
A. Large-Scale Fading Models
PL estimation is essential for wireless link budget computation and wireless network planning. PL and shadow fading (SF) channel models for various HST scenarios have been developed based on measurement results reported in the open literature BIB002 - BIB001 . These PL models are typically expressed as

PL(d) = A + 10 n log10(d) + X_σ (in dB)

where d is the distance between the Tx and Rx in meters (m), n is the PL exponent, A is the intercept, and X_σ denotes the SF. The SF follows a log-normal distribution, whose standard deviation for each model is given in Table III .
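As a minimal illustration of the log-distance form above, the following sketch evaluates the deterministic PL term and optionally draws a log-normal SF sample. The parameter values (n, A, and the SF standard deviation) are placeholders for illustration, not figures from Table III:

```python
import math
import random

def path_loss_db(d_m, n, A, sf_sigma_db=0.0, rng=None):
    """Log-distance path loss PL(d) = A + 10*n*log10(d) in dB, with an
    optional log-normal shadow fading term (Gaussian in the dB domain)."""
    pl = A + 10.0 * n * math.log10(d_m)
    if rng is not None and sf_sigma_db > 0.0:
        pl += rng.gauss(0.0, sf_sigma_db)
    return pl

# Placeholder parameters, not values from Table III.
print(path_loss_db(1000.0, n=3.5, A=30.0))  # deterministic part: 135.0 dB
rng = random.Random(0)
print(path_loss_db(1000.0, 3.5, 30.0, sf_sigma_db=4.0, rng=rng))  # with SF
```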
B. Cellular Architectures and Scenarios
As mentioned earlier, adopting the conventional cellular architecture in HST wireless communication systems may lead to several problems in providing reliable and fast communication to HST passengers. Therefore, other cellular architectures, such as DAS, CoMP, and MRS, need to be considered. In the literature, most of the proposed channel models have considered the conventional architecture, where fixed BSs installed along the track provide wireless coverage to HST passengers inside the carriages , BIB005 , BIB003 , BIB001 - BIB006 , BIB007 . With the MRS solution, there are two channels: an outdoor channel between the BS and the MRS, and an indoor channel between the MRS and the train passengers. The properties of radio channels inside the carriages resemble those of indoor environments, and hence they can be modeled using existing indoor channel models BIB004 . Therefore, , , BIB008 - , BIB009 have focused on modeling the outdoor channel, which is the more challenging one because of the high velocity of the Rx. HST scenarios were presented in detail earlier in Section II. While most of these scenarios are encountered only in railway environments, the open space scenario is similar to the rural or urban scenarios found in conventional V2I or V2V communication systems. Therefore, most of the current HST channel models for the open space scenario have been developed from V2I and V2V channel models by taking into account the effect of the high velocity of the Rx on the channel parameters , , , BIB005 , BIB003 , BIB006 - , BIB007 . Channel models for the tunnel, cutting, and viaduct scenarios were studied in BIB002 , BIB010 , and BIB009 , respectively. In summary, more HST channel models that consider other cellular architectures, such as DAS, are needed in the future. In addition, more HST scenarios should be considered when proposing future HST channel models.
C. Modeling Approaches of HST Small-Scale Fading Models
In terms of modeling approaches, the current HST channel models in the literature, presented in Table IV , can be classified as deterministic BIB002 - BIB001 and stochastic channel models. The latter can be further classified into geometry-based stochastic models (GBSMs) , , BIB003 - and non-geometrical stochastic models (NGSMs) BIB004 , BIB005 , as illustrated in Fig. 3 .
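As a generic illustration of the stochastic modeling approach (not of any specific model in Table IV), the sketch below generates narrowband fading samples with a simple sum-of-sinusoids simulator. The number of sinusoids and the maximum Doppler frequency are assumed values; f_max = 300 Hz is representative of a 930 MHz carrier at around 350 km/h:

```python
import cmath
import math
import random

def sos_fading_sample(t, f_max, thetas, phases):
    """One complex narrowband fading sample at time t from a sum of
    len(thetas) equal-power sinusoids with maximum Doppler f_max (Hz),
    random arrival angles thetas and initial phases (both in radians)."""
    s = sum(cmath.exp(1j * (2.0 * math.pi * f_max * math.cos(th) * t + ph))
            for th, ph in zip(thetas, phases))
    return s / math.sqrt(len(thetas))  # normalize to unit average power

rng = random.Random(42)
N = 32  # number of sinusoids (assumed; more sinusoids -> better statistics)
thetas = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

# Fading envelope at 0.1 ms spacing with f_max = 300 Hz.
envelope = [abs(sos_fading_sample(k * 1e-4, 300.0, thetas, phases))
            for k in range(5)]
print(envelope)
```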
1) Deterministic Channel Models:
<s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> How to provide reliable, cost-effective wireless services for high-speed railway (HSR) users attracts increasing attention due to the fast deployment of HSRs worldwide. A key issue is to develop reasonably accurate and mathematically tractable models for HSR wireless communication channels. Finite-state Markov chains (FSMCs) have been extensively investigated to describe wireless channels. However, different from traditional wireless communication channels, HSR communication channels have the unique features such as very high speed, deterministic mobility pattern and frequent handoff events, which are not described by the existing FSMC models. In this paper, based on the Winner II physical layer channel model parameters, we propose a novel FSMC channel model for HSR communication systems, considering the path loss, fast fading and shadowing with high mobility. Extensive simulation results are given, which validate the accuracy of the proposed FSMC channel model. The model is not only ready for performance analysis, protocol design and optimization for HSR communication systems, but also provides an effective tool for faster HSR communication network simulation. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> Based on the realistic channel measurement on High-Speed Railway (HSR) in viaduct scenarios at 2.35 GHz, the dynamic evolution of multipath components is investigated from the birth-death process point of view. Due to the distinction in the amount of resolvable multipath signals, the channel is divided into five segments and can be completely parameterized by several sets of statistical parameters associated with the type of environment and scenario. 
Then the four-state Markov chain, describing the birth-death number variation of the detected propagation waves, is employed to specialize the temporal stochastic properties. Furthermore, the steady probabilities and transition probabilities are provided which will facilitate the development and evaluation of wireless communication systems under HSR. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> This paper presents a novel and practical study on the position-based radio propagation channel for High-Speed Railway by performing extensive measurements at 2.35 GHz in China. The specification on the path loss model is developed. In particular, small scale fading properties such as K-factor, Doppler frequency feature and time delay spread are parameterized, which show dynamic variances depending on the train location and the transceiver separation. Finally, the statistical position-based channel models are firstly established to characterize the High-Speed Railway channel, which significantly promotes the evaluation and verification of wireless communications in relative scenarios. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> In a realistic high-speed railway environment, the track, terrain, vegetation, cuttings, barriers, pylons, buildings, and crossing bridges are the main sources of reflection, diffraction, and scattering. Moreover, the radiation pattern and the polarization of the transmitting and receiving antennas considerably influence the propagation. This paper presents a deterministic modeling approach covering all the effects in a realistic highspeed railway environment for the first time. The antenna influence and the mechanisms of transmission, scattering, and reflection are evaluated by developing a 3D ray-optical tool. 
The diffraction loss is obtained by the multi-edge diffraction models using raster databases. This approach compensates the limitation of the existent empirical and stochastic models used for the high-speed railway, and promotes the deterministic modeling towards to the realistic environment. Therefore, it allows a detailed and realistic evaluation and verification of the train control communications systems. <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> Semi-deterministic modeling with low data resolution requirement and low computation time is always of interest. By conjunctively utilizing the extended Hata model and the Deygout model, this letter presents a hybrid model for viaduct and cutting scenarios of high-speed railway. The proposed model achieves higher accuracy than empirical and statistical models, but uses totally free sources. It can be easily implemented for the network planning, and therefore, it meets the demand for fast development of high-speed railway. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> A field channel measurement is carried out in highspeed railways (HSRs) along the “Luoyang South-Lintong East” line of China, and the finite-state Markov channel (FSMC) modeling is exploited to characterize the small-scale fading channels. The large-scale path loss can be predicted relatively precisely since the line-of-sight (LOS) propagation component dominates the wireless channel in HSR, while the small-scale fadings will be a key to the future wireless network for HSR. Hence, this paper proposes a first-order FSMC modeling to describe the fast small-scale fadings in two typical HSR scenarios, i.e., viaduct and terrain cutting. Firstly, the sliding window method is used to remove the large-scale effect of the field data. 
Then the Rayleigh, Rician and Nakagami distributions are respectively tested to fit the envelope of small-scale fadings, and the results show that Rician distribution can effectively capture the statistical property of HSR channels. Then, a first-order FSMC is proposed based on the Rician distribution. Finally, the experimental results reveal that four-state FSMC modeling provides an effective way to reflect the dynamic nature of the fast fadings in HSR. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> High-speed railway (HSR) brings convenience to peoples' lives and is generally considered as one of the most sustainable developments for ground transportation. One of the important parts of HSR construction is the signaling system, which is also called the “operation control system,” where wireless communications play a key role in the transmission of train control data. We discuss in detail the main differences in scientific research for wireless communications between the HSR operation scenarios and the conventional public land mobile scenarios. The latest research progress in wireless channel modeling in viaducts, cuttings, and tunnels scenarios are discussed. The characteristics of nonstationary channel and the line-of-sight (LOS) sparse and LOS multiple-input-multiple-output channels, which are the typical channels in HSR scenarios, are analyzed. Some novel concepts such as composite transportation and key challenging techniques such as train-to-train communication, vacuum maglev train techniques, the security for HSR, and the fifth-generation wireless communications related techniques for future HSR development for safer, more comfortable, and more secure HSR operation are also discussed. 
<s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> The recent development of high-speed trains (HSTs) introduce new challenges to wireless communication systems for HSTs. For demonstrating the feasibility of these systems, accurate channel models which can mimic key characteristics of HST wireless channels are essential. In this paper, we focus on HST channel models for the tunnel scenario, which is different from other HST channel environments, such as rural area and viaducts. Considering unique characteristics of tunnel channel environments, we extend the existing multi-mode waveguide tunnel channel model to be time dependent, obtain the channel impulse responses, and then further investigate certain key tunnel channel characteristics such as temporal autocorrelation function (ACF) and power spectrum density (PSD). The impact of time on ACFs and PSDs, and the impact of frequency on the received power are revealed via numerical results. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> This paper proposes a generic non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed generic model can be applied on the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios. A good agreement between the statistical properties of the proposed generic model and those of relevant measurement data from the aforementioned scenarios demonstrates the utility of the proposed channel model. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> High-speed railways (HSRs) have been widely introduced to meet the increasing demand for passenger rail travel. 
While it provides more and more conveniences to people, the huge cost of the HSR has laid big burden on the government finance. Reducing the cost of HSR has been necessary and urgent. Optimizing arrangement of base stations (BS) by improving prediction of the communication link is one of the most effective methods, which could reduce the number of BSs to a reasonable number. However, it requires a carefully developed propagation model, which has been largely neglected before in the research on the HSR. In this paper, we propose a standardized path loss/shadow fading model for HSR channels based on an extensive measurement campaign in 4594 HSR cells. The measurements are conducted using a practically deployed and operative GSM-Railway (GSM-R) system to reflect the real conditions of the HSR channels. The proposed model is validated by the measurements conducted in a different operative HSR line. Finally, a heuristic method to design the BS separation distance is proposed, and it is found that using an improved propagation model can theoretically save around 2/5 cost of the BSs. <s> BIB019 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> The high-speed railway (HSR) propagation channel has a significant impact on the design and performance analysis of wireless railway control systems. This paper derives a stochastic model for the HSR wireless channel at 930 MHz. The model is based on a large number of measurements in 100 cells using a practically deployed and operative communication system. We use the Akaike information criterion to select the distribution of the parameter distributions, including the variations from cell to cell. The model incorporates the impact of directional base station (BS) antennas, includes several previously investigated HSR deployment scenarios as special cases, and is parameterized for practical HSR cell sizes, which can be several kilometers. 
The proposed model provides a consistent prediction of the propagation in HSR environments and allows a straightforward and time-saving implementation for simulation. <s> BIB020
Deterministic channel models are usually based on a detailed description of the specific propagation environment and antenna configuration. The amplitudes, phases, and delays of the propagated waves are obtained using intensive simulations that incorporate details of the propagation environment, e.g., roads, buildings, trees, and houses. Therefore, deterministic models are physically meaningful and potentially accurate. Geometry-based deterministic models (GBDMs) based on the ray-tracing method were proposed in BIB002 - BIB001 to model HST propagation channels in different HST scenarios. In BIB001 , a three-dimensional (3D) ray-tracing approach for wave propagation modeling in HST tunnels was presented. The proposed model results in a complex channel impulse response that incorporates channel information, e.g., the waveguide effect observed in tunnels and the impact of another train passing in the opposite direction on the Doppler shift and time delay. The authors in BIB013 adopted a similar approach to model HST channels in various scenarios. Both BIB001 and BIB013 used measurement results to verify the proposed channel models. Another HST channel model based on a 3D ray-tracing approach was presented in BIB002 to analyze channel characteristics, e.g., the frequency selectivity (FS) and time variance (Doppler spread). The objects, e.g., trees, buildings, or barriers, on both sides of the railway track were modeled using rectangular boxes whose dimensions were statistically generated. Since the propagation characteristics of electromagnetic (EM) waves in tunnels are significantly different from those in other HST environments, a multi-mode waveguide channel model was proposed in BIB017 . The proposed model, a hybrid of the geometrical optical model and the waveguide model, can characterize wave propagation in both the near and far regions of the source.
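In a ray-tracing GBDM, each traced ray contributes a delayed, Doppler-shifted complex exponential to the channel impulse response. The sketch below shows how such ray contributions are combined into a discrete-delay impulse response; the ray list, carrier frequency, and delay resolution are illustrative assumptions, not parameters from any of the cited models.

```python
import numpy as np

C = 3e8  # propagation speed (m/s)

def ray_traced_cir(rays, fc, t, tau_grid):
    """Bin ray contributions into a complex channel impulse response h(t, tau).

    Each ray is a tuple (amplitude, path_length_m, doppler_hz); its phase
    follows from the carrier frequency fc and its delay, plus the Doppler
    phase accumulated up to observation time t.
    """
    h = np.zeros(len(tau_grid), dtype=complex)
    dt = tau_grid[1] - tau_grid[0]
    for amp, length, f_dop in rays:
        tau = length / C
        phase = -2 * np.pi * fc * tau + 2 * np.pi * f_dop * t
        idx = int(round((tau - tau_grid[0]) / dt))
        if 0 <= idx < len(tau_grid):
            h[idx] += amp * np.exp(1j * phase)
    return h

# Illustrative rays: a LoS path plus two reflections (amplitude, metres, Hz)
rays = [(1.0, 300.0, 500.0), (0.3, 420.0, -320.0), (0.2, 510.0, 150.0)]
tau_grid = np.arange(0.0, 3e-6, 25e-9)  # 25 ns delay bins
h = ray_traced_cir(rays, fc=2.35e9, t=0.0, tau_grid=tau_grid)
```

A full ray-tracing tool computes the ray list itself from the environment geometry (reflection, diffraction, scattering); the summation step above is the same regardless of how the rays were found.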
However, the aforementioned model did not discuss the far LoS (FLOS) phenomenon observed inside tunnels BIB004 or provide a mechanism to determine the breakpoint between different propagation regions in tunnels BIB016 . A GBDM based on random propagation graphs was proposed in BIB005 to characterize time-variant HST channels in open space scenarios. Similar to the ray-tracing method, a propagation graph can predict channel impulse responses through a thorough search of the propagation paths that connect the Tx and Rx. This modeling approach takes into account the geometry of the simulated environment, e.g., the distribution, mobility, and visibility of the scatterers. Despite their high accuracy, GBDMs require detailed descriptions of the propagation environment and extensive computational resources. To avoid the high complexity of implementing GBDMs while maintaining sufficient accuracy, semi-deterministic models for HST viaduct and cutting scenarios were proposed in BIB014 . However, the proposed models only considered large-scale fading and neglected the effect of small-scale fading parameters on the received signal. 2) GBSMs: In GBSMs, the impulse responses of HST channels are characterized by the law of wave propagation applied to specific Tx, Rx, and scatterer geometries that are predefined in a stochastic fashion according to certain probability distributions. Different types of GBSMs differ mainly in the assumed scatterer distributions. Based on the positions of the effective scatterers, GBSMs can be further classified into regular-shaped GBSMs (RS-GBSMs), such as one-ring BIB006 , two-ring, and ellipse models BIB007 - , and irregular-shaped GBSMs (IS-GBSMs) , , BIB008 .
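The semi-deterministic idea of BIB014 combines an empirical path loss baseline with a deterministic diffraction term. A minimal sketch of this pattern is given below, using the COST-231 extended Hata formula and a single knife edge evaluated with the ITU-R P.526 approximation; the Deygout method applies such knife-edge losses recursively to the dominant edges, and all parameter values here are illustrative, not the calibrated model of BIB014.

```python
import numpy as np

def cost231_hata_pl(d_km, f_mhz, h_base=30.0, h_mobile=3.0, c_m=0.0):
    """COST-231 extended-Hata median path loss in dB (small/medium-city a(hm))."""
    a_hm = (1.1 * np.log10(f_mhz) - 0.7) * h_mobile - (1.56 * np.log10(f_mhz) - 0.8)
    return (46.3 + 33.9 * np.log10(f_mhz) - 13.82 * np.log10(h_base) - a_hm
            + (44.9 - 6.55 * np.log10(h_base)) * np.log10(d_km) + c_m)

def knife_edge_loss(v):
    """Single knife-edge diffraction loss (dB), ITU-R P.526 approximation."""
    if v <= -0.78:
        return 0.0
    return 6.9 + 20 * np.log10(np.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

# Hybrid prediction: empirical baseline plus the loss of one dominant edge
# (Deygout recurses knife_edge_loss over sub-paths; one edge shown here).
pl = cost231_hata_pl(d_km=1.0, f_mhz=1800.0) + knife_edge_loss(v=1.2)
```

The appeal of this hybrid is practical: the empirical term needs no terrain data, while the diffraction term captures the scenario-specific obstruction (e.g., a cutting slope) from a coarse geometry description.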
RS-GBSMs assume that all effective scatterers are placed on regular shapes; therefore, different RS-GBSMs adopt different shapes of scatterer distributions, e.g., one-ring, two-ring, and ellipses for two-dimensional (2D) models, and one-sphere, two-sphere, and elliptic-cylinders for 3D ones. RS-GBSMs often result in closed-form solutions or at least mathematically tractable formulas. The general principle of designing RS-GBSMs involves the following steps. First, a geometrical model is adopted assuming that scatterers are located on regular shapes. Then, a stochastic reference model with an infinite number of scatterers is developed based on the adopted geometrical model. However, the reference model cannot be used for simulations, and therefore a corresponding simulation model with a finite number of effective scatterers is needed. The parameters of the simulation model are computed using proper parameter computation methods, e.g., the extended method of exact Doppler spread (EMEDS), the modified method of equal area (MMEA) BIB003 , or the Lp-norm method (LPNM) . In BIB006 , a one-ring RS-GBSM was proposed to model HST channels in open space scenarios. The scatterers were assumed to be distributed on a ring around the MS, and different PDFs of the scatterers were analyzed. Considering the narrowband GSM-R system for HST communications, a 3D one-sphere RS-GBSM was proposed in BIB008 for open space scenarios. The proposed model used the von Mises distribution to describe the azimuth angles, and the space-time (ST) cross-correlation function (CCF) was derived. However, both of the aforementioned models assumed that the HST channel satisfies the WSS condition, which has been proved incorrect by measurements . To fill this gap, non-stationary RS-GBSMs were proposed in BIB007 - BIB018 for wideband MIMO HST channels considering the deployment of an MRS on the top of the train. Fig.
4 illustrates the proposed RS-GBSMs, which consist of multiple confocal ellipses with single-bounced rays and the LoS component. The model was first introduced in BIB007 , BIB009 , where the distance between the Tx and Rx was considered time-varying to capture the non-stationarity of the HST channel. The model was then further developed in by considering other time-varying model parameters, i.e., angles of departure (AoDs) and angles of arrival (AoAs). By adopting some key scenario-specific channel parameters, this model was further extended in BIB018 to be applicable to the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios BIB019 , and hence is the first generic HST channel model. To demonstrate its applicability, the proposed generic non-stationary HST channel model was verified by measurements in terms of the stationary time for the open space scenario and the autocorrelation function (ACF), LCR, and stationary distance for the viaduct and cutting scenarios BIB018 . IS-GBSMs place effective scatterers with predefined properties at random locations following certain statistical distributions, usually obtained or approximated from measurements BIB020 . Unlike in RS-GBSMs, the random locations of the scatterers do not form regular shapes; the signal contribution of each effective scatterer is determined from a greatly simplified ray-tracing method, and the contributions are summed up to obtain the complex impulse response. IS-GBSMs for HST channels were introduced for the RMa scenario in the WINNER II channel model and the moving networks scenario in the IMT-A channel model . In both cases, the train speed can be up to 350 km/h and the MRS technology is employed. In BIB008 , an IS-GBSM was proposed for HST channels in cutting scenarios, assuming the scatterers to be uniformly distributed on the surfaces of the two slopes of the cutting.
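Both RS-GBSMs and IS-GBSMs ultimately synthesize the fading gain as a finite sum over scatterer contributions. A minimal sum-of-sinusoids sketch for a one-ring RS-GBSM simulation model is given below, with uniformly spaced AoAs and random phases; these are simple illustrative parameter choices, whereas methods such as EMEDS, MMEA, or the LPNM compute the simulation-model parameters in a principled way.

```python
import numpy as np

def sos_fading(t, n_scat=32, f_max=486.0, gamma=0.0, seed=1):
    """Sum-of-sinusoids simulation of a Rayleigh-like fading gain (one-ring).

    Scatterer AoAs alpha_n are spaced uniformly around the Rx; phases psi_n
    are i.i.d. uniform.  f_max is the maximum Doppler shift (486 Hz roughly
    corresponds to 350 km/h at a 1.5 GHz carrier); gamma is the direction
    of Rx motion.  All defaults are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = np.arange(1, n_scat + 1)
    alpha = 2 * np.pi * (n - 0.5) / n_scat        # uniformly spaced AoAs
    psi = rng.uniform(0, 2 * np.pi, n_scat)       # random initial phases
    f_d = f_max * np.cos(alpha - gamma)           # per-scatterer Doppler
    t = np.asarray(t, dtype=float)[:, None]
    return np.sum(np.exp(1j * (2 * np.pi * f_d * t + psi)), axis=1) / np.sqrt(n_scat)

t = np.linspace(0, 0.1, 1000, endpoint=False)
h = sos_fading(t)    # complex fading process with unit average power
```

As the number of scatterers grows, the statistics of such a simulation model approach those of the reference model, e.g., approximately the Clarke ACF J0(2*pi*f_max*tau) under isotropic scattering.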
However, the aforementioned channel models neglected the non-stationarity of HST channels and assumed that the WSS condition still applies. Moreover, GBSMs are too complex for upper-layer protocol design and performance analysis, where less complex channel models are preferred. 3) NGSMs: NGSMs characterize the physical parameters of an HST propagation channel in a completely stochastic manner by providing their underlying probability distribution functions without assuming an underlying geometry. An NGSM based on finite-state Markov chains for HST wireless communication channels was proposed in BIB010 . The proposed model is able to capture the characteristics of time-varying HST wireless channels by using Markov chains to track the channel state variation over different received signal-to-noise ratio (SNR) intervals. However, the model has not been verified by real-field measurements and thus deserves further investigation. The authors in BIB011 followed a similar approach to model the dynamic evolution of multipath components, i.e., the birth-death process, using a four-state Markov chain. The four proposed states are: no birth/death, births only, deaths only, and both births and deaths. The transition matrix of the birth-death process was calculated based on the measurements presented in BIB012 . Based on measurements of HST channels in viaduct and cutting scenarios, a finite-state Markov channel model was also proposed in BIB015 . Simulation results showed that the Ricean distribution can well characterize the measured amplitude of the small-scale fading in both HST scenarios and that an NGSM can effectively capture the dynamic nature of the fast fading in HST channels.
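The finite-state Markov modeling described above can be sketched in a few lines: given a transition matrix over channel states (e.g., SNR intervals or birth-death states), the channel evolves as a Markov chain whose empirical state occupancy approaches the steady-state probabilities. The 4-state transition matrix below is a placeholder, not the measured HST statistics of BIB010 , BIB011 , or BIB015 .

```python
import numpy as np

def simulate_fsmc(P, n_steps, s0=0, seed=7):
    """Simulate a finite-state Markov channel: P[i, j] = Pr(next=j | now=i)."""
    rng = np.random.default_rng(seed)
    states = np.empty(n_steps, dtype=int)
    s = s0
    for k in range(n_steps):
        states[k] = s
        s = rng.choice(len(P), p=P[s])
    return states

# Illustrative 4-state transition matrix (e.g., four SNR intervals);
# entries are placeholders rather than fitted HST parameters.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.05, 0.85, 0.10, 0.00],
              [0.00, 0.10, 0.85, 0.05],
              [0.00, 0.00, 0.10, 0.90]])
states = simulate_fsmc(P, n_steps=5000)
occupancy = np.bincount(states, minlength=4) / len(states)  # empirical steady state
```

This tractability is precisely why NGSMs are attractive for upper-layer protocol design: the whole channel dynamics reduce to one small matrix that can be embedded in network simulators.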
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> In this paper, a channel modeling method based on random-propagation-graph is elaborated, validated, and applied to characterizing time-variant channels observed in typical environments for high-speed railway wireless communications. The advantage of the proposed method is that the frequency-tempo-spatial channel coefficients, as well as the multi-dimensional channel impulse responses in delay, Doppler frequency, direction of arrival (i.e. azimuth and elevation of arrival) and direction of departure are calculated analytically for specific environments. The validation of the proposed method is performed by comparing the statistics of two large-scale parameters obtained with those described in the well-established standards. Finally, stochastic geometry-based models in the same format as the well-known spatial channel model enhanced (SCME) are generated by using the proposed method for the high-speed scenarios in the rural, urban, and suburban environments. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> How to provide reliable, cost-effective wireless services for high-speed railway (HSR) users attracts increasing attention due to the fast deployment of HSRs worldwide. 
A key issue is to develop reasonably accurate and mathematically tractable models for HSR wireless communication channels. Finite-state Markov chains (FSMCs) have been extensively investigated to describe wireless channels. However, different from traditional wireless communication channels, HSR communication channels have the unique features such as very high speed, deterministic mobility pattern and frequent handoff events, which are not described by the existing FSMC models. In this paper, based on the Winner II physical layer channel model parameters, we propose a novel FSMC channel model for HSR communication systems, considering the path loss, fast fading and shadowing with high mobility. Extensive simulation results are given, which validate the accuracy of the proposed FSMC channel model. The model is not only ready for performance analysis, protocol design and optimization for HSR communication systems, but also provides an effective tool for faster HSR communication network simulation. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> This paper proposes a non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed model has the ability to investigate the non-stationarity of HST environment caused by the high speed movement of the receiver. Based on the proposed model, the space-time-frequency (STF) correlation function (CF) and STF local scattering function (LSF) are derived for different taps. Numerical results show the non-stationarity of the proposed channel model. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. 
Nonstationarity of HST Channels <s> In a realistic high-speed railway environment, the track, terrain, vegetation, cuttings, barriers, pylons, buildings, and crossing bridges are the main sources of reflection, diffraction, and scattering. Moreover, the radiation pattern and the polarization of the transmitting and receiving antennas considerably influence the propagation. This paper presents a deterministic modeling approach covering all the effects in a realistic highspeed railway environment for the first time. The antenna influence and the mechanisms of transmission, scattering, and reflection are evaluated by developing a 3D ray-optical tool. The diffraction loss is obtained by the multi-edge diffraction models using raster databases. This approach compensates the limitation of the existent empirical and stochastic models used for the high-speed railway, and promotes the deterministic modeling towards to the realistic environment. Therefore, it allows a detailed and realistic evaluation and verification of the train control communications systems. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> This paper proposes a generic non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed generic model can be applied on the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios. A good agreement between the statistical properties of the proposed generic model and those of relevant measurement data from the aforementioned scenarios demonstrates the utility of the proposed channel model. <s> BIB006
Measurements in the literature have demonstrated that HST channels are non-stationary, since the stationarity condition, measured by the stationary interval, holds only for a very short period of time in comparison with other types of channels, e.g., V2I and V2V channels . This is mainly caused by the very high speed of the train and the rapid changes in the surrounding environment. Although the non-stationarity of HST channels has been implicitly considered in GBDMs BIB002 - BIB005 , BIB001 , these models are mainly site-specific and cannot be easily generalized to a variety of scenarios. The non-stationarity of HST channels has been considered in the NGSM proposed in BIB003 by implementing a birth-death process to simulate the appearance and disappearance of scatterers, and in the RS-GBSMs in BIB004 - BIB006 by providing time-variant functions of several channel model parameters, i.e., angular parameters, Doppler frequency, Ricean K-factor, and the distance between the Tx and Rx. However, only the model in BIB006 was verified by real-field measurements, and therefore more comprehensive investigations are required to validate the accuracy of these models. Future non-stationary channel models should consider more time-variant model parameters, such as cluster powers and delays, and investigate the effect of the drift of scatterers into different delay taps on the non-stationarity of HST channels and the resulting correlation between these taps.
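As an illustration of the time-variant parameters such models track, the sketch below computes the time-varying Tx-Rx distance, LoS angle, and Doppler shift for a train passing a trackside BS on a straight track; the geometry, speed, and carrier frequency are illustrative assumptions, not values from the cited models.

```python
import numpy as np

def los_time_variant_params(t, v=100.0, d0=500.0, d_min=50.0, fc=2.35e9):
    """Time-variant LoS distance, angle, and Doppler for a train passing a BS.

    The train moves at speed v (m/s) along a straight track; the BS sits
    d_min metres from the track and the train starts d0 metres before the
    point of closest approach.  All values are illustrative.
    """
    c = 3e8
    x = -d0 + v * np.asarray(t, dtype=float)   # along-track position
    dist = np.hypot(x, d_min)                  # time-varying Tx-Rx distance
    aoa = np.arctan2(d_min, -x)                # time-varying LoS angle at Rx
    f_d = (fc * v / c) * np.cos(aoa)           # time-varying Doppler shift
    return dist, aoa, f_d

t = np.linspace(0, 10, 1001)                   # 10 s pass-by
dist, aoa, f_d = los_time_variant_params(t)
```

The Doppler shift traces the familiar S-curve of a pass-by: maximal positive on approach, zero at the point of closest approach, maximal negative on departure; such deterministic drifts are exactly what WSS models cannot represent.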
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Statistical Properties of HST Channels <s> The geometry-based stochastic channel models are proposed in this paper for the terrain cutting, suburb, and urban scenarios in high-speed railway. First, the special scenarios in high-speed railway are described. And the channel models based on the geometry scenarios are introduced. Some channel parameters are based on measurement data. Then, the space-time correlation functions in analytical form are obtained in suburb and urban scenarios. Finally, the space correlation characteristics in three scenarios are compared. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Statistical Properties of HST Channels <s> This paper proposes a generic non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed generic model can be applied on the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios. A good agreement between the statistical properties of the proposed generic model and those of relevant measurement data from the aforementioned scenarios demonstrates the utility of the proposed channel model. <s> BIB002
Investigating the statistical properties of HST channels is essential for understanding and analyzing HST communication systems. In Table I , several channel statistics obtained from measurements were presented. However, most of the proposed HST channel models in the literature fail to provide the corresponding theoretical analysis. In BIB001 , the ST CCF was derived based on the proposed stationary narrowband HST channel model. In , a novel theoretical framework that characterizes non-stationary mobile fading channels in terms of their system functions and correlation functions was proposed. Based on this framework, different time-variant statistical properties of the RS-GBSMs in , BIB002 were derived, i.e., time-variant space CCFs, time-variant ACFs, time-variant space-Doppler (SD) power spectrum densities (PSDs), local scattering functions (LSFs) , and LCRs BIB002 . It is highly desirable to investigate the statistical properties of other HST channel models and to further develop the aforementioned theoretical framework to include more statistical properties.
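To make the notion of a time-variant ACF concrete, the following toy Python sketch builds a narrowband sum-of-sinusoids channel whose dominant Doppler shift drifts as the train passes the base station, and estimates r(t, Δt) by averaging over a short sliding window. The geometry, path count, and function names (`hst_channel`, `local_acf`) are hypothetical choices made only for illustration, not the models derived in the cited works.

```python
import cmath
import math

def hst_channel(t, v=100.0, fc=2.6e9, d_min=30.0, x0=-500.0, n_paths=16):
    """Narrowband sum-of-sinusoids channel sample at time t (seconds).

    The Doppler shift of the dominant component drifts as the train passes
    the base station, which makes the process non-stationary. Geometry and
    path count are illustrative, not taken from any cited model.
    """
    c = 3e8
    f_max = v * fc / c                    # maximum Doppler frequency (Hz)
    x = x0 + v * t                        # train position along the track
    cos_aoa = x / math.hypot(x, d_min)    # time-variant angle of arrival
    h = cmath.exp(2j * math.pi * f_max * cos_aoa * t)
    for k in range(n_paths):              # fixed diffuse scatterers
        theta = 2.0 * math.pi * (k + 0.5) / n_paths
        h += 0.2 * cmath.exp(2j * math.pi * (f_max * math.cos(theta) * t
                                             + k / n_paths))
    return h

def local_acf(t_center, dt, win=0.05, fs=2000.0):
    """Estimate the time-variant ACF r(t, dt) by averaging over a short window."""
    n = int(win * fs)
    lag = int(dt * fs)
    samples = [hst_channel(t_center + i / fs) for i in range(n + lag)]
    acc = sum(samples[i] * samples[i + lag].conjugate() for i in range(n))
    return acc / n
```

Evaluating `local_acf` at different values of `t_center` shows the lag profile changing with time, which is exactly what distinguishes a non-stationary channel from a wide-sense stationary one.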
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> Many channel models for MIMO systems have appeared in the literature. However, with the exception of a few recent results, they are largely focussed on two dimensional (2D) propagation, i.e., propagation in the horizontal plane, and the impact of elevation angle is not considered. The assumption of 2D propagation breaks down when in some propagation environments the elevation angle distribution is significant. Consequently, the estimation of ergodic capacity assuming a 2D channel coefficient alone can lead to erroneous results. In this paper, for cross polarized channels, we define a composite channel model and channel coefficient that takes into account both 2D and 3D propagation. Using this composite channel coefficient we assess the ergodic channel capacity and discuss its sensitivity to a variety of different azimuth and elevation power distributions and other system parameters. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> For the design of an OFDM train communications system it is essential to characterise and consider the channel parameters. The transmission channel of a high-speed train scenario is frequency selective as well as time variant. 
Thus, delay spread and Doppler spread are investigated as crucial parameters for the OFDM system performance. Using a ray-tracing tool realistic impulse responses of the transmission channels are simulated. The investigated system includes three base stations operating in common frequency mode along a railway track and one mobile station situated on a high-speed train. For the mobile station different antenna patterns are included in the simulation model. The results are compared and assessed with respect to delay spread, Doppler spread and receive power. When using directional antennas a distinct reduction in Doppler spread is achieved. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> Recently there have been proposals to extend MIMO processing to the elevation dimension in addition to the azimuth direction. To accurately assess the promised gains of these "3D-MIMO" techniques, a channel model is needed that accurately accounts for the elevation angles of the rays. In addition it would be desirable for the 3D channel model to be a simple extension of an already defined 2D channel model to allow for ease of implementation and to assist the 3GPP standardization effort in the 3D MIMO area. In this paper we propose an extension of the ITU 2D channel model to 3D by adding a distance dependent elevation spread based on observations from ray tracing. Through system-level simulations we observe that the behavior of 3D MIMO is greatly impacted by the modeling of the 3D channel. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> Recently there have been many proposals to generate a complete channel model that covers wide range of carrier frequencies and take into consideration different aspects of channel propagation statistics. Many of these models focus on two dimensional propagation, i.e. 
propagation in the azimuth plane only. The assumption of 2D propagation may lead to inaccurate estimation of channel capacity and system level performance. In addition, few studies have focused on the propagation characteristics in the 800 MHz band. In this paper a complete 3D channel model is generated and examined through 3D ray tracer tool. The paper proposes detailed channel related parameters for urban macro and micro-cell environments at carrier frequencies of 800 MHz and 2.6 GHz. The paper analyzes the channel in terms of best-fit normal parameters for large scale parameters, path loss models, cross-correlation of large scale parameters, and de-correlation distance for both line-of-sight and none line-of-sight conditions. The paper uses the generated statistics to extend the current 2D 3GPP/ITU channel model to 3D model and compare the propagation statistics generated by this model with the ray tracer predictions. <s> BIB005
Apart from the GBDMs that use 3D ray-tracing tools to model HST channels BIB003 - BIB001 , HST channel models have generally been proposed under the assumption that propagation waves travel in two dimensions, and they therefore ignore the impact of the elevation angle on channel statistics. In reality, radio waves propagate in three dimensions and scatterers are dispersed in elevation, i.e., the vertical plane, as well as in azimuth, i.e., the horizontal plane. Recently, the 3GPP developed a 3D channel model for urban microcell and urban macrocell scenarios following the framework of the WINNER II channel model . The proposed 3D 3GPP channel model introduced the zenith AoD and zenith AoA, which are modeled by inverse Laplacian functions . 3D extensions of the SCM and the WINNER II/WINNER+ channel models were proposed in BIB002 and , respectively, and an extension of the IMT-A channel model to the elevation plane was proposed in BIB004 , BIB005 . However, none of the aforementioned channel models considered any of the HST scenarios. Thus, 3D channel measurements and models are necessary, especially when the HST is close to the BS, where considering elevation angles can capture the impact of the waves reflected from the ground on the received signal.
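As a minimal, hypothetical sketch of how zenith angles with a Laplacian characteristic could be drawn (loosely inspired by, but not reproducing, the zenith-angle modeling of the 3GPP 3D model), one can invert the Laplace CDF; the mean and scale values below are illustrative placeholders, not entries from the 3GPP parameter tables.

```python
import math
import random

def sample_zenith_laplacian(mean_deg=90.0, scale_deg=10.0):
    """Draw one zenith angle (degrees) from a Laplace distribution via
    inverse-CDF sampling, clipped to the physical range [0, 180]."""
    u = random.random()
    while u == 0.0:          # avoid log(0) at the distribution tail
        u = random.random()
    u -= 0.5                 # u is now uniform on (-0.5, 0.5)
    # Inverse CDF of the Laplace(mean, scale) distribution
    theta = mean_deg - scale_deg * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return min(max(theta, 0.0), 180.0)
```

Drawing one such angle per ray, in addition to the azimuth angle, is what turns a 2D geometry-based model into a 3D one.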
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> The demand of broadband high-mobility communication increased dramatically with the rapid development of high speed railway system. A beamforming platform through changing transmitter antenna direction base on real time receiver Global Positioning System (GPS) information was proposed to improve communication quality. Experiments were carried out over Taiwan High Speed Railway (THSR) train to analysis the tracking capability and path loss model. The results show that the received signal strength indicator (RSSI), carrier to interference plus noise ratio (CINR) and throughput were improved. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> While mobile broadband performance measured from moving vehicles in metropolitan areas has drawn significant attentions in recent studies, similar investigations have not been conducted for regional areas. Compared to metropolitan cities, regional suburbs are often serviced by wireless technologies with significantly lower data rates and less dense deployments. Conversely, vehicle speeds are usually much higher in the regional areas. In this paper, we seek to provide some insights to user experience of mobile broadband in terms of TCP throughput when travelling in a regional train. We find that (1) using a single broadband provider may lead to a large number of blackouts, which could be reduced drastically by simultaneously subscribing to multiple providers (provider blackouts are not highly correlated), (2) the choice of train route may have a more significant effect on broadband experience than the time-of-day of a particular trip, and (3) the speed of the train itself has no deterministic effect on TCP throughput. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. 
System Performance <s> The impact of antenna array geometry on MIMO (Multiple-input Multiple-output) system in high speed railway scenario is investigated in this paper. The capacity of different antenna arrays and the effect of ULA (uniform linear array) azimuthal orientation on capacity are studied with a double-directional channel model including antenna effects and Doppler shift. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> Due to frequent handovers in broadband wireless communications in high-speed rail, communication interruption during handover could seriously degrade the experiences of passengers on the train. Aiming to reduce the interruption time, this paper proposes a seamless handover scheme based on a dual-layer and dual-link system architecture, where a Train Relay Station is employed to execute handover for all users in a train and two antennas are mounted at the front and rear of a train. In the proposed scheme, the front antenna executes handover while the rear antenna is still communicating with BS, so that the communication can keep non-interruptive throughout the handover. Moreover, bi-casting is adopted to eliminate the data forwarding delay between the serving BS and target BS. A complete handover protocol is designed and the performance of the proposed scheme is analyzed. It can be seen from analytical results that the handover failure probability decreases as cell overlap increases and the communication interruption probability decreases with the decrease of train handover location and the increase of cell overlap. The simulation results show that in the proposed scheme, the communication interruption probability is smaller than 1% when the handover location is properly selected and the system throughput is not affected by handover. 
In conclusion, both theoretical and simulation results show that the proposed scheme can efficiently perform seamless handover for high-speed rail with low implementation overhead. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> The recent advent of high speed trains introduces new mobility patterns in wireless environments. The LTE-A (Long Term Evolution of 3GPP - Advanced) networks have largely tackled the Doppler effect problem in the physical layer and are able to keep wireless service with 100Mpbs throughput within a cell in speeds up to 350 km/h. Yet the much more frequent handovers across cells greatly increases the possibility of service interruptions, and the problem is prominent for multimedia communications that demand both high-throughput and continuous connections. In this paper, we present a novel LTE-based solution to support high throughput and continuous multimedia services for high speed train passengers. Our solution is based on a Cell Array that smartly organizes the cells along a railway, together with a femto cell service that aggregates traffic demands within individual train cabins. Given that the movement direction and speed of a high-speed train are generally known, our Cell Array effectively predicts the upcoming LTE cells in service, and enables a seamless handover that will not interrupt multimedia streams. To accommodate the extreme channel variations, we further propose a scheduling and resource allocation mechanism to maximize the service rate based on periodical signal quality changes. Our simulation under diverse network and railway/train configurations demonstrates that the proposed solution achieves much lower handover latency and higher data throughput, as compared to existing solutions. It also well resists to network and traffic dynamics, thus enabling uninterrupted quality multimedia services for passengers in high speed trains. 
<s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> A new time-domain transmit beamforming algorithm is proposed for cancelling inter-channel-interference (ICI) due to Doppler frequency shift under high speed train communication scenario. Simulation results show that by employing the algorithm a high speed train communication system is capable of providing continuous 100Mbps data rate for passengers at a speed of 450km/h. This would guarantee continuous data-intensive services for today's high speed train passengers. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> With the deployment of high speed train (HST) systems increasing worldwide and their popularity with travelers growing, providing broadband wireless communications (BWC) in HSTs is becoming crucial. In this paper, a tutorial is presented on recent research into BWC provision for HSTs. The basic HST BWC network architecture is described. Two potential cellular architectures, microcells and distributed antenna systems (DASs) based cells, are introduced. In particular, the DAS is discussed in conjunction with radio over fiber (RoF) technology for BWC for HSTs. The technical challenges in providing DAS-based BWC for HSTs, such as handoff and RoF are discussed and outlined. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> In high speed train (HST) system, real-time multimedia entertainments are very important applications in which a data stream often contains packets with different quality of service requirements. For example, video stream encoded with scalability contains the base layer packets with high quality (HQ) bit error rate (BER) requirement and enhancement layers' packets with low quality (LQ) BER requirement. 
When a conventional allocation approach, which only considers one BER constraint for one data stream, is applied to orthogonal frequency division multiple access (OFDMA) systems, the BER constraint will be the strictest one among multiple requirements from different types of packets, which leads to inefficient allocation when each data stream has multiple BER requirements. This paper aims to develop novel resource allocation approach by considering multiple BER requirements for different types of packets in one data stream. In order to not only simplify the resource allocation, but also to compensate for the channel estimation error caused by Doppler shift in the HST environment, a proper number of contiguous subcarriers are grouped into chunks and spectrum is allocated chunk by chunk. Simulation results show that the developed resource allocation scheme outperforms the conventional scheme, particularly when the BER ratio of HQ packets to LQ packets is larger than one. Furthermore, in order to reduce the complexity of resource allocation further, an empirical allocation scheme is proposed to allocate better chunks to HQ packets. It is shown that the performance of the empirical allocation scheme is quite close to that of the optimal scheme. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> In this paper, we propose a reliable downlink (DL) transmission scheme exploiting both location- and speed- related information in a high-speed railway scenario, which relies on a multi-functional antenna array (MFAA) combining space-time block coding (STBC) with adaptive receive beamforming techniques. Firstly, the state-of-the-art STBC and adaptive beamforming techniques are reviewed and analyzed in the context of both block-fading and time-varying channels. 
Then we propose to employ an antenna array on board of a high-speed train to form two beams for receiving the STBC signals from the DL transmit antennas in order to improve the reliability of the system. It is demonstrated that in the context of combined schemes, receive beamforming is more beneficial than transmit beamforming under high-speed railway linear topology to achieve low bit error rate (BER). Hence it is more attractive to employ receive beamforming antennas on the top of the train. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> In this paper, the bit error rate (BER) performance of a new multiple-input-multiple-output technique, named spatial modulation (SM), is studied under a novel non-stationary wideband high-speed train (HST) channel model in different scenarios. Time-varying parameters obtained from measurement results are used to configure the channel model to make all results more realistic. A novel statistic property called the stationary interval in terms of the space-time correlation function is proposed to describe the channel model’s time-varying behavior. The accurate theoretical BER expression of SM systems is derived under the time-varying wideband HST channel model with the non-ideal channel estimation assumption. The simulation results demonstrate that the BER performance of SM systems shows a time-varying behavior due to the non-stationary property of the employed HST channel model. The system performance can maintain a relative stationary status within the specified stationary interval. It can also be observed that the BER performance of SM systems under the HST channel model is mainly affected by the correlation between sub-channels, inter-symbol-interference, Doppler shift, and channel estimation errors. <s> BIB010
Investigating the performance of HST communication systems is the basis for system design and network planning. In BIB004 , HST communication system performance was investigated using data throughput to evaluate a seamless dual-link handover scheme. Another handover scheme was proposed in BIB005 , and the system performance was evaluated by tracking the changes of throughput and signal-to-interference-plus-noise ratio (SINR) over time. The changes of SINR with HST velocity were investigated in BIB006 to evaluate a transmit beamforming algorithm proposed for canceling inter-channel interference (ICI) in HST communication systems. The performance of an HST communication system implementing a beamforming technique was also evaluated in BIB001 using measured throughput, SINR, and received signal strength indicator levels. The deployment of DASs in HST communication systems was evaluated in BIB007 using spectrum efficiency as a system performance metric. In BIB008 , BER was used to evaluate a proposed radio resource allocation scheme for orthogonal frequency-division multiple access (OFDMA) HST systems. The BER performance of an HST communication system was also investigated in BIB009 , where combined beamforming and Alamouti downlink transmission schemes were proposed. The mobile broadband performance experienced on regional HSTs was investigated in BIB002 by monitoring the fluctuation of system throughput caused by the varying distance between the BS and the HST, multi-path fading, and co-channel interference. A temporal proportional fair power allocation scheme for HST wireless communication systems was proposed in . The proposed scheme was designed to achieve a trade-off between power efficiency and fairness over time. HST channel capacity was analyzed in BIB003 to study the impact of different antenna array configurations on MIMO HST communication systems. 
In BIB010 , the BER performance of spatial modulation (SM) systems was studied using a proposed non-stationary HST MIMO channel model under different HST scenarios. It was shown that the correlation between sub-channels, inter-symbol interference, Doppler shift, and channel estimation errors are the main factors affecting the BER performance of SM systems under the HST channel model. More comprehensive system performance analysis that evaluates other schemes and considers additional performance indicators, e.g., capacity and quality of service (QoS), is required in the future.
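As a small illustration of the kind of link-level BER evaluation surveyed above, the following Monte-Carlo sketch computes the BER of plain BPSK over a flat Ricean channel with a given K-factor. It assumes perfect channel knowledge at the receiver, and the function name and parameter values are hypothetical rather than taken from any cited study.

```python
import math
import random

def ber_bpsk_ricean(snr_db, k_factor=5.0, n_bits=200_000):
    """Monte-Carlo BER of BPSK over a flat Ricean fading channel.

    The channel has unit average power: a fixed LoS part of power
    K/(K+1) plus a Rayleigh part of power 1/(K+1).
    """
    snr = 10.0 ** (snr_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * snr))        # per-dimension noise std
    los = math.sqrt(k_factor / (k_factor + 1.0))
    nlos = math.sqrt(1.0 / (2.0 * (k_factor + 1.0)))
    errors = 0
    for _ in range(n_bits):
        bit = random.choice((-1.0, 1.0))
        h = complex(los + nlos * random.gauss(0.0, 1.0),
                    nlos * random.gauss(0.0, 1.0))
        r = h * bit + complex(random.gauss(0.0, sigma),
                              random.gauss(0.0, sigma))
        # Coherent detection: rotate by the channel phase, then threshold
        if (r * h.conjugate()).real * bit < 0.0:
            errors += 1
    return errors / n_bits
```

Sweeping `snr_db` (and, for HST studies, the K-factor or Doppler-induced estimation error) reproduces the kind of BER curves used in the works discussed above.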
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> Most designers know that yellow text presented against a blue background reads clearly and easily, but how many can explain why, and what really are the best ways to help others and ourselves clearly see key patterns in a bunch of data? When we use software, access a website, or view business or scientific graphics, our understanding is greatly enhanced or impeded by the way the information is presented. ::: ::: This book explores the art and science of why we see objects the way we do. Based on the science of perception and vision, the author presents the key principles at work for a wide range of applications--resulting in visualization of improved clarity, utility, and persuasiveness. The book offers practical guidelines that can be applied by anyone: interaction designers, graphic designers of all kinds (including web designers), data miners, and financial analysts. ::: ::: ::: ::: Complete update of the recognized source in industry, research, and academic for applicable guidance on information visualizing. ::: ::: Includes the latest research and state of the art information on multimedia presentation. ::: ::: More than 160 explicit design guidelines based on vision science. ::: ::: A new final chapter that explains the process of visual thinking and how visualizations help us to think about problems. ::: ::: Packed with over 400 informative full color illustrations, which are key to understanding of the subject. ::: ::: Table of Contents ::: ::: ::: Chapter 1. Foundations for an Applied Science of Data Visualization ::: ::: Chapter 2. The Environment, Optics, Resolution, and the Display ::: ::: Chapter 3. Lightness, Brightness, Contrast and Constancy ::: ::: Chapter 4. Color ::: ::: Chapter 5. Visual Salience and Finding Information ::: ::: Chapter 6. Static and Moving Patterns ::: ::: Chapter 7. Space Perception ::: ::: Chapter 8. 
Visual Objects and Data Objects ::: ::: Chapter 9. Images, Narrative, and Gestures for Explanation ::: ::: Chapter 10. Interacting with Visualizations ::: ::: Chapter 11. Visual Thinking Processes <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> We discuss techniques for the visualization of medical volume data dedicated for their clinical use. We describe the need for rapid dynamic interaction facilities with such visualizations and discuss emphasis techniques in more detail. Another crucial aspect of medical visualization is the integration of 2d and 3d visualizations. In order to organize this discussion, we introduce 6 "Golden" rules for medical visualizations. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> One of the most important goals in volume rendering is to be able to visually separate and selectively enable specific objects of interest contained in a single volumetric data set, which can be approached by using explicit segmentation information. We show how segmented data sets can be rendered interactively on current consumer graphics hardware with high image quality and pixel-resolution filtering of object boundaries. In order to enhance object perception, we employ different levels of object distinction. First, each object can be assigned an individual transfer function, multiple of which can be applied in a single rendering pass. Second, different rendering modes such as direct volume rendering, iso-surfacing, and non-photorealistic techniques can be selected for each object. A minimal number of rendering passes is achieved by processing sets of objects that share the same rendering mode in a single pass. Third, local compositing modes such as alpha blending and MIP can be selected for each object in addition to a single global mode, thus enabling high-quality two-level volume rendering on GPUs. 
<s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> The need to improve medical diagnosis and reduce invasive surgery is dependent upon seeing into a living human system. The use of diverse types of medical imaging and endoscopic instruments has provided significant breakthroughs, but not without limiting the surgeon's natural, intuitive and direct 3D perception into the human body. This paper presents a method for the use of augmented reality (AR) for the convergence of improved perception of 3D medical imaging data (mimesis) in context to the patient's own anatomy (in-situ) incorporating the physician's intuitive multi- sensory interaction and integrating direct manipulation with endoscopic instruments. Transparency of the video images recorded by the color cameras of a video see-through, stereoscopic head- mounted-display (HMD) is adjusted according to the position and line of sight of the observer, the shape of the patient's skin and the location of the instrument. The modified video image of the real scene is then blended with the previously rendered virtual anatomy. The effectiveness has been demonstrated in a series of experiments at the Chirurgische Klinik in Munich, Germany with cadaver and in-vivo studies. The results can be applied for designing medical AR training and educational applications. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> Insight into the dynamics of blood-flow considerably improves the understanding of the complex cardiovascular system and its pathologies. Advances in MRI technology enable acquisition of 4D blood-flow data, providing quantitative blood-flow velocities over time. The currently typical slice-by-slice analysis requires a full mental reconstruction of the unsteady blood-flow field, which is a tedious and highly challenging task, even for skilled physicians. 
We endeavor to alleviate this task by means of comprehensive visualization and interaction techniques. In this paper we present a framework for pre-clinical cardiovascular research, providing tools to both interactively explore the 4D blood-flow data and depict the essential blood-flow characteristics. The framework encompasses a variety of visualization styles, comprising illustrative techniques as well as improved methods from the established field of flow visualization. Each of the incorporated styles, including exploded planar reformats, flow-direction highlights, and arrow-trails, locally captures the blood-flow dynamics and may be initiated by an interactively probed vessel cross-section. Additionally, we present the results of an evaluation with domain experts, measuring the value of each of the visualization styles and related rendering parameters. <s> BIB005 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. <s> BIB006 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. 
How we "see” details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics. <s> BIB007 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> Line drawing techniques are important methods to illustrate shapes. Existing feature line methods, e.g., suggestive contours, apparent ridges, or photic extremum lines, solely determine salient regions and illustrate them with separate lines. Hatching methods convey the shape by drawing a wealth of lines on the whole surface. Both approaches are often not sufficient for a faithful visualization of organic surface models, e.g., in biology or medicine. In this paper, we present a novel object-space line drawing algorithm that conveys the shape of such surface models in real-time. Our approach employs contour- and feature-based illustrative streamlines to convey surface shape (ConFIS). For every triangle, precise streamlines are calculated on the surface with a given curvature vector field. Salient regions are detected by determining maxima and minima of a scalar field. Compared with existing feature lines and hatching methods, ConFIS uses the advantages of both categories in an effective and flexible manner. We demonstrate this with different anatomical and artificial surface models. In addition, we conducted a qualitative evaluation of our technique to compare our results with exemplary feature line and hatching methods. 
<s> BIB008
The purpose of medical-image-data visualization is to support "the inspection, analysis and interpretation of patient data" and, more specifically, to enable "physicians to explore patient data rapidly and accurately with minimal cognitive effort" [ABK * 15]. Medical image data, such as CT and MRI, are physical measurements which exhibit noise and inhomogeneities. The anatomical structures represented in the image data have organic shapes and may be quite complex (e. g., highly curved, branching). The spatial relations between anatomical surfaces are often intricate as well, which makes medical visualization problems unique. Although slice-based 2D visualizations dominate in radiological diagnosis, there are many tasks, such as treatment planning and dealing with complex fractures, in which 3D visualizations are employed (see BIB002 for a discussion of medical 2D and 3D visualizations). A large variety of medical visualization techniques are available . These include basic surface and volume rendering techniques, tagged volume rendering to enable the selective emphasis of relevant objects, and smart visibility techniques BIB005 to reveal important structures that may otherwise be occluded. Illustrative visualization techniques may be used to represent surface details faithfully BIB008 . They may be combined with surface and volume rendering techniques BIB003 , display additional elements or details , and generally facilitate the use of abstraction [RBGV08] . Special techniques were developed to clearly display elongated branching structures such as vasculature [JQD * 08, KOCC14]. The rendering of fiber tracts extracted from Diffusion Tensor Imaging has developed into its own research direction, and a lot of research has been devoted to displaying blood flow [LGV * 16, vPBB * 10]. 
The above-mentioned techniques require users of visualization systems to adjust several parameters, such as color, texture, or transparency, to effectively represent tissue properties. Moreover, the final appearance depends on preprocessing (e. g., noise removal, vesselness filtering) and postprocessing (e. g., mesh smoothing or simplification). Consequently, the variety of methods, the resulting broad range of parameters, and the large number of possible parameter values (not to mention the impressive number of possible combinations) can be overwhelming for developers who want to create 3D visualizations for specific medical tasks. Perception guidance. In general, visualization design decisions may benefit from visual perception research. For example, there is an extensive literature on contrast and shape perception, on the effectiveness of depth cues, on attentional guidance for goal-directed emphasis of important structures, and on other low-level (i. e., using very simple visual information such as edges, contrast, color, and motion), bottom-up (i. e., data-driven) processes that explain why some objects in a larger scene may be immediately recognized without special effort. Moreover, it is clear that shading, shadows, and surface texture contribute to the perception of 3D shape from images. While this basic research is an essential background for designing medical visualizations, it is by far not sufficient. Research in visual perception is (for good reasons) often focused on simple geometries and simple layouts with a few objects, and interaction is usually not taken into account. Thus, the results cannot be easily generalized to complex visualizations of irregular anatomical structures that are interactively explored by experts who know the particular anatomical region well. Both Healey and Enns BIB007 and Ware BIB001 provide comprehensive summaries of visual perception research and its consequences for information visualization. 
Similarly, Bartz and colleagues discussed perception research and its consequences for computer graphics as well as virtual and augmented reality. Likewise, Thompson and colleagues BIB006 discuss visual perception at length, with a focus on its applications to computer graphics. Pouli and colleagues have examined image statistics and their relationship to both perception and computer graphics. This survey extends these other reviews, in particular by adding an explicit focus on medical visualization. Thus, we discuss perceptual experiments that take realistic medical visualization scenarios into account, and we discuss the details of designing evaluation experiments in order to help the reader design experiments for concrete medical visualization problems. Medical Tasks. In order to place this survey into an application-relevant context, it is necessary to consider the general functions that medical visualizations serve. In clinical practice, physicians analyze medical image data in a very goal-directed manner, based on knowledge of clinical symptoms and previous examinations. They also use these images and derived visualizations to communicate with colleagues. Finally, they sometimes, albeit much less often, freely explore medical image data without a clear hypothesis. There are a number of general tasks for which 3D medical visualizations are used. They provide an overview when there is a rare anatomical variant or complex fracture. They are used for treatment planning; for example, making decisions about resectability (can a tumor be resected at all?), the extent of surgery, and access paths. For these tasks, faithful representations of local regions, including vasculature, are required. The display of fiber tracts is essential for neurosurgery planning. Physicians are interested in local shape variations, for example in order to assess bones and possible rheumatic modifications [ZCBM14] or to assess the malignancy of a tumor. 
Possible infiltrations, such as the specific relation between a tumor and its surrounding structures, are also often essential. The investigation of anatomical details for selecting an implant has a similar level of complexity. These tasks require a thorough understanding of the relevant structures, including their appearance and shape, which makes it essential to take perceptual findings into account. Scope and Organization. This state-of-the-art report (STAR) focuses on medical visualization techniques that display one dataset. Multimodal visualization, comparative visualization of datasets over time, and special data, such as functional MRI or perfusion data, are not considered here, since there are very few perception-based studies for them. Blood flow and fiber tract visualization are considered, since there are a number of perceptually motivated techniques. Medical augmented reality is also not considered, although perception-based research is highly relevant there (see, e. g., Bichlmeier et al. BIB004 ). Furthermore, we restrict ourselves to true 3D visualizations and do not discuss projections, such as colon flattening or curved planar reformation [KFW*02]. This decision is motivated by the unique advantages and problems of 3D visualizations (e. g., occlusion). Furthermore, glyph-based medical visualization is not considered here, as it augments the anatomical 3D structures with artificial shapes. Moreover, we do not discuss the influence of display types, such as stereo monitors [BHS*14]. The remainder of this STAR is structured as follows. In Sect. 2, we provide the basic findings of visual perception research that are relevant for medical visualization, with a particular focus on depth and shape perception. In Sect. 3, we introduce a number of perceptually motivated 3D medical visualization techniques, including volume rendering, vascular visualization, blood flow, and fiber tract visualization. In Sect. 
4, we discuss general issues in experimental design, with a focus on evaluating (medical) visualization techniques. This should not only help the reader to understand existing studies but also provide guidance for designing new studies (and ensure that the results are valid). In Sect. 5, we return to a selection of the techniques described in Sect. 3 in order to discuss how they were evaluated with respect to perceptual effectiveness. Since there is clearly a need for future research, we discuss a research agenda in Sect. 6.
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> The haloed line effect is a technique where when a line in three-dimensional space passes in front of another line, a gap is produced in the projection of the more distant line. The gap is produced as if an opaque halo surrounded the closer line. This method for approximate hidden-line-elimination is advantageous because explicit surface equations are not necessary. The relative depth of lines, axes, curves and lettering is easily perceived. This technique is especially suitable for the display of finite element grids, three-dimensional contour maps and ruled surfaces. When the lines or curves on a surface are closer than the gap size, the gaps produced close up to produce a complete hidden-line-elimination. A simple but efficient implementation is described which can be used in the rendering of a variety of three-dimensional situations. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> This study investigates human performance when using semitransparent tools in interactive 3D computer graphics environments. The article briefly reviews techniques for presenting depth information and examples of applying semitransparency in computer interface design. We hypothesize that when the user moves a semitransparent surface in a 3D environment, the “partial-occlusion” effect introduced through semitransparency acts as an effective cue in target localization—an essential component in many 3D interaction tasks. This hypothesis was tested in an experiment in which subjects were asked to capture dynamic targets (virtual fish) with two versions of a 3D box cursor, one with and one without semitransparent surfaces. 
Results showed that the partial-occlusion effect through semitransparency significantly improved users' performance in terms of trial completion time, error rate, and error magnitude in both monoscopic and stereoscopic displays. Subjective evaluations supported the conclusions drawn from performance measures. The experimental results and their implications are discussed, with emphasis on the relative, discrete nature of the partial-occlusion effect and on interactions between different depth cues. The article concludes with proposals of a few future research issues and applications of semitransparency in human-computer interaction. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> Volumetric data commonly has high depth complexity which makes it difficult to judge spatial relationships accurately. There are many different ways to enhance depth perception, such as shading, contours, and shadows. Artists and illustrators frequently employ halos for this purpose. In this technique, regions surrounding the edges of certain structures are darkened or brightened which makes it easier to judge occlusion. Based on this concept, we present a flexible method for enhancing and highlighting structures of interest using GPU-based direct volume rendering. Our approach uses an interactively defined halo transfer function to classify structures of interest based on data value, direction, and position. A feature-preserving spreading algorithm is applied to distribute seed values to neighboring locations, generating a controllably smooth field of halo intensities. These halo intensities are then mapped to colors and opacities using a halo profile function. Our method can be used to annotate features at interactive frame rates. 
<s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> We present a technique for the illustrative rendering of 3D line data at interactive frame rates. We create depth-dependent halos around lines to emphasize tight line bundles while less structured lines are de-emphasized. Moreover, the depth-dependent halos combined with depth cueing via line width attenuation increase depth perception, extending techniques from sparse line rendering to the illustrative visualization of dense line data. We demonstrate how the technique can be used, in particular, for illustrating DTI fiber tracts but also show examples from gas and fluid flow simulations and mathematics as well as describe how the technique extends to point data. We report on an informal evaluation of the illustrative DTI fiber tract visualizations with domain experts in neurosurgery and tractography who commented positively about the results and suggested a number of directions for future work. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. <s> BIB005
The study of depth perception is a core research area in visual perception, with studies dating back to the late 1800s. It is clear that the speed and accuracy with which 3D scenes are perceived depend on depth cues [RHFL10]. Classes of Depth Cues. Monoscopic depth cues can be seen with a single eye. Shadows, perspective projection, partial occlusion, and shading are essential monoscopic depth cues. Motion parallax is one of the main motion-based, monoscopic depth cues. It exploits the image changes that occur when a 3D object or scene moves relative to the observer. There are a number of other motion-based cues (e. g., the kinetic depth effect), all of which are collected under the term shape-from-motion. Stereoscopic depth cues employ the fact that the two eyes have slightly different views of the world. The two primary stereoscopic cues are binocular disparity (i. e., the difference in the location of an object in the two retinal images) and convergence (i. e., the angular deviation of the two eyes from straight ahead required to fixate on an object). In addition to categorizing depth cues based on how many eyes they use (monoscopic versus stereoscopic), one can categorize them based on the class of information they use. In general, there are motion-based cues, surface-texture cues, and illumination-based cues. This last category is often referred to as shape-from-shading [Hor70, BCD*12] and follows the "Dark is Deep" paradigm. That is, the darkness of a small patch of a 2D image is directly related to the depth of that area in the 3D scene [TM83, Ram88, LB00]. Depth Cues in Stylization. In a photograph of the real world, a large number of monoscopic depth cues work together to provide explicit, metric information about the 3D layout of the scene, including information specifying that the input is a 2D image of a 3D scene. Careful attention to as many of these cues as possible allows us to synthesize photorealistic images. 
Using a subset of the cues still provides an effective way of clearly specifying the 3D structure of a scene without requiring full photorealism. Indeed, artists selectively use various image cues to create a stylized version of a scene. Naturally, computer graphics researchers have adopted and adapted the artists' stylized depth techniques. For example, the distance to a point on an object can be explicitly encoded by adapting line widths, by adapting the parameters of hatching techniques, or by indicating layering through halos BIB001 BIB003 BIB004 . Fig. 1 shows how some of these illustrative depth cues are used in medical visualization. The depth cues used here are based on real-world phenomena: silhouettes arising from grazing lighting (Fig. 1, left) and shadows from a camera-mounted light source (Fig. 1, right). Both of these cues are known to work acceptably well in humans and are also used in computer vision models of shape-from-shading [BCD*12]. Beyond the effect of individual depth cues, there are a number of studies that examine the interaction between cues. For example, Zhai and colleagues BIB002 found that stereo projection and semitransparent volume cursors reinforced each other and enabled faster and more accurate selection of objects compared to monoscopic rendering and opaque volume cursors. For more on depth perception research, the reader is directed to the overview books by Thompson and colleagues BIB005 and by Goldstein .
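Two of the stylized cues mentioned above, line-width attenuation with depth and "dark is deep" shading, reduce to simple monotone mappings from depth to stroke attributes. The following is a minimal sketch; the attenuation constants are illustrative assumptions, not values from the surveyed papers.

```python
# Minimal sketch: two monoscopic depth cues applied to line primitives,
# assuming normalized camera-space depth in [0, 1] (0 = nearest).
# Attenuation factors (0.7, 0.6) are illustrative, not empirically derived.

def depth_cued_stroke(depth: float,
                      base_width: float = 2.0,
                      base_luminance: float = 1.0) -> tuple[float, float]:
    """Return (line_width, luminance) for a stroke at the given depth.

    - Line-width attenuation: nearer strokes are drawn thicker.
    - "Dark is deep": nearer strokes are drawn brighter.
    """
    if not 0.0 <= depth <= 1.0:
        raise ValueError("depth must be normalized to [0, 1]")
    width = base_width * (1.0 - 0.7 * depth)          # thin out distant lines
    luminance = base_luminance * (1.0 - 0.6 * depth)  # darken distant lines
    return width, luminance

# A near stroke comes out thicker and brighter than a far one.
near = depth_cued_stroke(0.1)
far = depth_cued_stroke(0.9)
assert near[0] > far[0] and near[1] > far[1]
```

In a renderer, such a mapping would be evaluated per line segment (or per fragment in a shader); halos can be layered on top by drawing a widened background-colored copy of each nearer stroke first.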
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> 1. The striate cortex was studied in lightly anaesthetized macaque and spider monkeys by recording extracellularly from single units and stimulating the retinas with spots or patterns of light. Most cells can be categorized as simple, complex, or hypercomplex, with response properties very similar to those previously described in the cat. On the average, however, receptive fields are smaller, and there is a greater sensitivity to changes in stimulus orientation. A small proportion of the cells are colour coded.2. Evidence is presented for at least two independent systems of columns extending vertically from surface to white matter. Columns of the first type contain cells with common receptive-field orientations. They are similar to the orientation columns described in the cat, but are probably smaller in cross-sectional area. In the second system cells are aggregated into columns according to eye preference. The ocular dominance columns are larger than the orientation columns, and the two sets of boundaries seem to be independent.3. There is a tendency for cells to be grouped according to symmetry of responses to movement; in some regions the cells respond equally well to the two opposite directions of movement of a line, but other regions contain a mixture of cells favouring one direction and cells favouring the other.4. A horizontal organization corresponding to the cortical layering can also be discerned. The upper layers (II and the upper two-thirds of III) contain complex and hypercomplex cells, but simple cells are virtually absent. The cells are mostly binocularly driven. Simple cells are found deep in layer III, and in IV A and IV B. In layer IV B they form a large proportion of the population, whereas complex cells are rare. 
In layers IV A and IV B one finds units lacking orientation specificity; it is not clear whether these are cell bodies or axons of geniculate cells. In layer IV most cells are driven by one eye only; this layer consists of a mosaic with cells of some regions responding to one eye only, those of other regions responding to the other eye. Layers V and VI contain mostly complex and hypercomplex cells, binocularly driven.5. The cortex is seen as a system organized vertically and horizontally in entirely different ways. In the vertical system (in which cells lying along a vertical line in the cortex have common features) stimulus dimensions such as retinal position, line orientation, ocular dominance, and perhaps directionality of movement, are mapped in sets of superimposed but independent mosaics. The horizontal system segregates cells in layers by hierarchical orders, the lowest orders (simple cells monocularly driven) located in and near layer IV, the higher orders in the upper and lower layers. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> SUMMARY In natural vision, information overspecifies the relative distances between objects and their layout in three dimensions. Directed perception applies (Cutting, 1986), rather than direct or indirect perception, because any single source of information (or cue) might be adequate to reveal relative depth (or local depth order), but many are present and useful to observers. Such overspecification presents the theoretical problem of how perceivers use this multiplicity of information to arrive at a unitary appreciation of distance between objects in the environment. This article examines three models of directed perception: selection, in which only one source of information is used; addition, in which all sources are used in simple combination; and multiplication, in which interactions among sources can occur. 
To establish perceptual overspecification, we created stimuli with four possible sources of monocular spatial information, using all combinations of the presence or absence of relative size, height in the projection plane, occlusion, and motion parallax. Visual stimuli were computer generated and consisted of three untextured parallel planes arranged in depth. Three tasks were used: one of magnitude estimation of exocentric distance within a stimulus, one of dissimilarity judgment in how a pair of stimuli revealed depth, and one of choice judgment within a pair as to which one revealed depth best. Grouped and individual results of the one direct and two indirect scaling tasks suggest that perceivers use these sources of information in an additive fashion. That is, one source (or cue) is generally substitutable for another, and the more sources that are present, the more depth is revealed. This pattern of results suggests independent use of information by four separate, functional subsystems within the visual system, here called minimodules. Evidence for and advantages of mmimodularity are discussed. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Three experiments were conducted to test Hoffman and Richards's (1984) hypothesis that, for purposes of visual recognition, the human visual system divides three-dimensional shapes into parts at negative minima of curvature. In the first two experiments, subjects observed a simulated object (surface of revolution) rotating about a vertical axis, followed by a display of four alternative parts. They were asked to select a part that was from the object. Two of the four parts were divided at negative minima of curvature and two at positive maxima. When both a minima part and a maxima part from the object were presented on each trial (experiment 1), most of the correct responses were minima parts (101 versus 55). 
When only one part from the object—either a minima part or a maxima part—was shown on each trial (experiment 2), accuracy on trials with correct minima parts and correct maxima parts did not differ significantly. However, some subjects indicated that they reversed figure and ground, thereby changing ... <s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> There are many applications that can benefit from the simultaneous display of multiple layers of data. The objective in these cases is to render the layered surfaces in a such way that the outer structures can be seen and seen through at the same time. The paper focuses on the particular application of radiation therapy treatment planning, in which physicians need to understand the three dimensional distribution of radiation dose in the context of patient anatomy. We describe a promising technique for communicating the shape and position of the transparent skin surface while at the same time minimally occluding underlying isointensity dose surfaces and anatomical objects: adding a sparse, opaque texture comprised of a small set of carefully chosen lines. We explain the perceptual motivation for explicitly drawing ridge and valley curves on a transparent surface, describe straightforward mathematical techniques for detecting and rendering these lines, and propose a small number of reasonably effective methods for selectively emphasizing the most perceptually relevant lines in the display. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Transparency can be a useful device for simultaneously depicting multiple superimposed layers of information in a single image. 
However, in computer-generated pictures-as in photographs and in directly viewed actual objects-it can often be difficult to adequately perceive the three-dimensional shape of a layered transparent surface or its relative depth distance from underlying structures. Inspired by artists' use of line to show shape, we have explored methods for automatically defining a distributed set of opaque surface markings that intend to portray the three-dimensional shape and relative depth of a smoothly curving layered transparent surface in an intuitively meaningful (and minimally occluding) way. This paper describes the perceptual motivation, artistic inspiration and practical implementation of an algorithm for "texturing" a transparent surface with uniformly distributed opaque short strokes, locally oriented in the direction of greatest normal curvature, and of length proportional to the magnitude of the surface curvature in the stroke direction. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. <s> BIB005 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Line drawings produced by contours traced on a surface can produce a vivid impression of the surface shape. The stability of this perception is notable considering that the information provided by the surface contours is quite ambiguous. We have studied the stability of line drawing perception from psychophysical and computational standpoints. For a given family of simple line drawings, human observers could perceive the drawings as depicting either an elliptic (egg-shaped) or hyperbolic (saddle-shaped) smooth surface patch. Rotation of the image along the line of sight and change in aspect ratio of the line drawing could bias the observer toward either interpretation. 
The results were modeled by a simple Bayesian observer that computes the probability to choose either interpretation given the information in the image and prior preferences. The model’s decision rule is noncommitting: for a given input image its responses are still probabilistic, reflecting variability in the modeled observers’ judgements. A good fit to the data was obtained when three observer assumptions were introduced: a preference for convex surfaces, a preference for surface contours aligned with the principal lines of curvature, and a preference for a surface orientation consistent with an object viewed from above. We discuss how these assumptions might reflect regularities of the visual world. © 1998 Elsevier Science Ltd. All rights reserved. <s> BIB006 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Li and Zaidi (Li, A., and Zaidi, Q. (2000) Vision Research, 40, 217–242) showed that the veridical perception of the 3-dimensional (3D) shape of a corrugated surface from texture cues is entirely dependent on the visibility of critical patterns of oriented energy. These patterns are created by perspective projection of surface markings oriented along lines of maximum 3D curvature. In images missing these orientation modulations, observers confused concavities with convexities, and leftward slants with rightward slants. In this paper, it is shown that these results were a direct consequence of the physical information conveyed by different oriented components of the texture pattern. For texture patterns consisting of single gratings of arbitrary spatial frequency and orientation, equations are derived from perspective geometry that describe the local spatial frequency and orientation for any slant at any height above and below eye level. 
The analysis shows that only gratings oriented within a few degrees of the axis of maximum curvature exhibit distinct patterns of orientation modulations for convex, concave, and leftward and rightward slanted portions of a corrugated surface. All other gratings exhibit patterns of frequency and orientation modulations that are distinct for curvatures on the one hand and slants on the other, but that are nearly identical for curvatures of different sign, and nearly identical for slants of different direction. The perceived shape of surfaces was measured in a 5AFC paradigm (concave, convex, leftward slant, rightward slant, and flat-frontoparallel). Observers perceived all five shapes correctly only for gratings oriented within a few degrees of the axis of maximum curvature. For all other oriented gratings, observers could distinguish curvatures from slants, but could not distinguish signs of curvature or directions of slant. These results demonstrate that human observers utilize the shape information provided by texture components along both critical and non-critical orientations. © 2001 Elsevier Science Ltd. All rights reserved. <s> BIB007 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Under typical viewing conditions, we find it easy to distinguish between different materials, such as metal, plastic, and paper. Recognizing materials from their surface reflectance properties (such as lightness and gloss) is a nontrivial accomplishment because of confounding effects of illumination. However, if subjects have tacit knowledge of the statistics of illumination encountered in the real world, then it is possible to reject unlikely image interpretations, and thus to estimate surface reflectance even when the precise illumination is unknown. A surface reflectance matching task was used to measure the accuracy of human surface reflectance estimation. 
The results of the matching task demonstrate that subjects can match surface reflectance properties reliably and accurately in the absence of context, as long as the illumination is realistic. Matching performance declines when the illumination statistics are not representative of the real world. Together these findings suggest that subjects do use stored assumptions about the statistics of real-world illumination to estimate surface reflectance. Systematic manipulations of pixel and wavelet properties of illuminations reveal that the visual system’s assumptions about illumination are of intermediate complexity (e.g., presence of edges and bright light sources), rather than of high complexity (e.g., presence of recognizable objects in the environment). <s> BIB008 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> This paper presents a shading model for volumetric data which enhances the perception of surfaces within the volume. The model incorporates uniform diffuse illumination, which arrives equally from all directions at each surface point in the volume. This illumination is attenuated by occlusions in the local vicinity of the surface point, resulting in shadows in depressions and crevices. Experiments by other authors have shown that perception of a surface is superior under uniform diffuse lighting, compared to illumination from point source lighting. <s> BIB009 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> We describe the results of two comprehensive controlled observer experiments intended to yield insight into the following question: If we could design the ideal texture pattern to apply to an arbitrary smoothly curving surface in order to enable its 3D shape to be most accurately and effectively perceived, what would the characteristics of that texture pattern be? 
We begin by reviewing the results of our initial study in this series, which were presented at the 2003 IEEE Symposium on Information Visualization, and offer an expanded analysis of those findings. We continue by presenting the results of a follow-on study in which we sought to more specifically investigate the separate and combined influences on shape perception of particular texture components, with the goal of obtaining a clearer view of their potential information carrying capacities. In each study, we investigated the observers' ability to identify the intrinsic shape category of a surface patch (elliptical, hyperbolic, cylindrical, or flat) and its extrinsic surface orientation (convex, concave, both, or neither). In our first study, we compared performance under eight different texture type conditions, plus two projection conditions (perspective or orthographic) and two viewing conditions (head-on or oblique). We found that: 1) shape perception was better facilitated, in general, by the bidirectional "principal direction grid" pattern than by any of the seven other patterns tested; 2) shape type classification accuracy remained high under the orthographic projection condition for some texture types when the viewpoint was oblique; 3) perspective projection was required for accurate surface orientation classification; and 4) shape classification accuracy was higher when the surface patches were oriented at a (generic) oblique angle to the line of sight than when they were oriented (in a nongeneric pose) to face the viewpoint straight on. In our second study, we compared performance under eight new texture type conditions, redesigned to facilitate gathering insight into the cumulative effects of specific individual directional components in a wider variety of multidirectional texture patterns. 
We found that shape classification accuracy was equivalently good under a variety of test patterns that included components following either the first or first and second principal directions, in addition to other directions, suggesting that a principal direction grid texture is not the only possible "best option" for enhancing shape representation. <s> BIB010 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. <s> BIB011 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. 
It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. <s> BIB012 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. For this, we provide the differential geometry behind these concepts and adapt this mathematical field to the discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as basis for the feature line methods. We provide the reader with all knowledge to re-implement every feature line method. Furthermore, we summarize the methods and suggest a guideline for which kind of surface which feature line algorithm is best suited. Our work is motivated by, but not restricted to, medical and biological surface models. <s> BIB013
The visual perception of 3D shapes is quite complex, in part due to the loss of information when the 3D object is projected to a 2D (retinal) image. Since the pattern of light on the retina is affected by an intricate interaction between the illumination and the geometry, orientation, and texture of the object, the same pattern of light sensations on the retina could have been caused by different 3D shapes. Thus, visual shape perception is inherently ambiguous. The ambiguity of diffusely shaded images, which is called bas-relief ambiguity, cannot be resolved by any change in lighting [BKY97] . Despite this ambiguity, shape-from-shading is believed to be evolutionarily one of the earliest depth mechanisms and is very effective [ZTCS99] . The visual system relies on past experience and on several assumptions to resolve the ambiguities. For example, surfaces tend to be perceived as convex [CSD * 09]. These assumptions are not always appropriate, and can cause incorrect perception of surface category and local orientation . Moreover, the most frequently used model of the human visual system assumes a single light source which is above and to the right . This assumption has significant consequences for many perceptual phenomena beyond shape perception. There is, however, some evidence that the human visual system may in fact inherently assume a number of (locally independent) light sources (see, e. g., [GKB * 99]). Moreover, the visual system is remarkably insensitive to illumination inconsistencies under certain conditions . There is also evidence that the correct perception of material properties requires more realistic lighting conditions, such as multiple light sources BIB008 . The perception of 3D shapes occurs at different spatial scales. At least two levels need to be distinguished: a local scale, where the shape of individual objects is assessed, and a global scale, where spatial relations, including depth relations and proximity of objects, are assessed.
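Such a multi-scale representation can be sketched as a linear (Gaussian) scale space, where each level is the image convolved with a Gaussian of increasing standard deviation. The following minimal NumPy sketch is our own illustration; function names and parameter choices are not taken from the cited work.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1D Gaussian kernel, truncated at ~3 sigma by default."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def scale_space(image, sigmas):
    """Linear scale space: each level is the input convolved with a Gaussian
    of increasing standard deviation (separable row/column convolution)."""
    levels = []
    for s in sigmas:
        k = gaussian_kernel1d(s)
        blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
        blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
        levels.append(blurred)
    return levels

# Coarser levels progressively flatten an impulse while preserving its mass:
img = np.zeros((65, 65))
img[32, 32] = 1.0
levels = scale_space(img, [1.0, 2.0])
```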
Indeed, there is considerable evidence that the human visual system represents the entire scene in a linear scale space, with a large number of scales, where each scale is a copy of the scene which has been convolved by a Gaussian kernel (and subsequent scales increase the size of the kernel; for more, see ). Thus, research on the influence of depth cues should take different scales into account. Shape-From-Shading. The changes in brightness along a surface can provide shape information. Depending on the illumination model, shadow areas represent strong discontinuities in brightness (for point light sources) or smooth transitions, such as soft shadows (area light sources). For complex anatomical surfaces, such as the brain with its many creases, advanced shadow generation using diffuse lighting improves depth perception BIB009 . The influence of the illumination model on perception was recently studied [HBM * 14]. Shape-From-Texture. Most surfaces are textured. This can be seen as a violation of the assumption that neighboring parts of a surface affect light in the same way, and it poses a problem for both edge-detection-based segmentation and shape-from-shading techniques. Texture can, however, provide information about shape. Although a considerable amount of information exists about the large-scale structure of images , most of the information about textures is implicit (such as the structure of the Fourier transform of an image). One of the earliest examinations of texture is from Gibson . The most influential model of texture structure comes from Julesz and Caelli , which models texture elements as Gabor patches (a sinusoid convolved with a 2D Gaussian). Interestingly, Gabor patches bear a strong resemblance to the receptive field structure of human vision. Texture is particularly useful in determining the local curvature of a surface BIB012 .
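The Gabor-patch texture element just described, a sinusoidal carrier windowed by a 2D Gaussian envelope, can be generated as follows; the parameter names and default values are our own illustration.

```python
import numpy as np

def gabor_patch(size=64, sigma=8.0, freq=0.1, theta=0.0, phase=0.0):
    """Texture element: a sinusoid windowed by a 2D Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)          # carrier axis, rotated by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # 2D Gaussian window
    carrier = np.cos(2 * np.pi * freq * xr + phase)     # sinusoidal grating
    return envelope * carrier

patch = gabor_patch()
```

Varying `freq` and `theta` produces the oriented, band-limited elements used to model early visual receptive fields.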
For example, surface textures that represent principal curvature directions (PCDs) improve shape perception: observers tend to interpret lines on a surface as curvature directions BIB006 . In visualization, texture has been used to represent essential properties of shape. Lines on a surface may help the viewer to separate it into meaningful substructures. If shapes are familiar, viewers look for features that enable such a separation. Interrante and colleagues have shown that a certain type of line, frequently used by illustrators, supports this separation BIB003 BIB004 . These lines are called valley lines and represent regions of a curved surface where the curvature along the PCD has a local minimum (i. e., the locations where the surface is flattest). These regions are heavily affected by occlusion of surrounding structures and are thus drawn with dark colors. If there are not enough features that can be displayed with valley lines, ridge lines may be added, representing regions with a local maximum of the curvature along the PCD (i. e., the regions where the surface curvature is highest; see BIB013 for mathematical descriptions of these lines and algorithms to compute them). Such a sparse representation of a surface may be useful in displaying an outer surface in a multi-layer visualization (e. g., to display an organ surface and a deep-seated tumor as well as surrounding risk structures). This is a promising alternative to a semi-transparent display, where ordinal depth cues, such as occlusion and shading, are hardly recognizable for a transparent surface BIB004 . There is some debate about whether texture cues can be interpreted correctly when a 3D model is displayed in orthographic projection (a typical situation in medical visualization). Li and Zaidi found that "the surface must be viewed with a noticeable amount of perspective projection" BIB007 .
Kim and colleagues BIB010 , however, found that curvature-directed lines convey shape even with orthographic projection. Using only ridge lines may be "uninformative" if most of them are almost aligned with the viewing direction. Thus, a combination of ridge and valley lines yields better performance BIB011 . Shape-From-Silhouettes. Most physiological studies on the neural basis of early visual processing show that one of the first steps in the visual cortex is to extract edges BIB001 . Edges are critical for segmenting an object from its background, and as such they are important for both human vision and for visualization. The explicit display of silhouettes [IFH * 03], as the boundary between an object and the background, supports object recognition. The display of silhouettes is particularly effective in low-contrast regions with a high density of objects. In medical visualization, this gives rise to the incorporation of edge detection and boundary emphasis techniques [KWTM03] . Combining Cues. Depth and shape perception benefit from combining several depth cues that tend to reinforce each other instead of being just redundant BIB002 BIB005 . As an example, the combination of silhouettes and surface textures is effective . However, combining cues does not always improve perception and may even hamper it, as in the case of various feature lines .
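One common way to extract silhouettes on a triangle mesh is to collect the edges shared by a front-facing and a back-facing triangle with respect to the view direction. The sketch below is our own minimal illustration of this idea, not the specific algorithm of [IFH * 03]:

```python
import numpy as np

def silhouette_edges(vertices, faces, view_dir):
    """Return mesh edges shared by a front-facing and a back-facing triangle
    with respect to an orthographic view direction."""
    v = np.asarray(vertices, float)
    facing = {}
    edge_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        n = np.cross(v[b] - v[a], v[c] - v[a])   # (unnormalized) face normal
        facing[fi] = np.dot(n, view_dir) < 0     # True: triangle faces the viewer
        for e in [(a, b), (b, c), (c, a)]:
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]

# A "tent" of two triangles; seen from the side (+x direction), the ridge
# edge (1, 2) separates a front-facing from a back-facing triangle:
verts = [(0, 0, 0), (1, 0, 1), (1, 1, 1), (2, 0, 0)]
faces = [(0, 1, 2), (1, 3, 2)]
edges = silhouette_edges(verts, faces, view_dir=(1, 0, 0))
```

Boundary edges (adjacent to a single face) are excluded here; a full implementation would typically report them as part of the outline as well.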
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> The sources of visual information that must be present to correctly interpret spatial relations in images, the relative importance of different visual information sources with regard to metric judgments of spatial relations in images, and the ways that the task in which the images are used affect the visual information's usefulness are discussed. Cue theory, which states that the visual system computes the distances of objects in the environment based on information from the posture of the eyes and from the patterns of light projected onto the retinas by the environment, is presented. Three experiments in which the influence of pictorial cues on perceived spatial relations in computer-generated images was assessed are discussed. Each experiment examined the accuracy with which subjects matched the position, orientation, and size of a test object with a standard by interactively translating, rotating, and scaling the test object. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> Accurately and automatically conveying the structure of a volume model is a problem which has not been fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create images which may match the appearance of translucent materials in nature but may not embody important structural details. Transfer function approaches allow flexible design of the volume appearance but generally require substantial hand-tuning for each new data set in order to be effective. We introduce the volume illustration approach, combining the familiarity of a physics-based illumination model with the ability to enhance important features using non-photorealistic rendering techniques.
Since the features to be enhanced are defined on the basis of local volume characteristics rather than volume sample values, the application of volume illustration techniques requires less manual tuning than the design of a good transfer function. Volume illustration provides a flexible unified framework for enhancing the structural perception of volume models through the amplification of features and the addition of illumination effects. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> Lighting has a crucial impact on the appearance of 3D objects and on the ability of an image to communicate information about a 3D scene to a human observer. This paper presents a new automatic lighting design approach for comprehensible rendering of 3D objects. Given a geometric model of a 3D object or scene, the material properties of the surfaces in the model, and the desired viewing parameters, our approach automatically determines the values of various lighting parameters by optimizing a perception-based image quality objective function. This objective function is designed to quantify the extent to which an image of a 3D scene succeeds in communicating scene information, such as the 3D shapes of the objects, fine geometric details, and the spatial relationships between the objects. Our results demonstrate that the proposed approach is an effective lighting design tool, suitable for users without expertise or knowledge in visual perception or in lighting design. <s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> In this paper, we present a user study in which we have investigated the influence of seven state-of-the-art volumetric illumination models on the spatial perception of volume rendered images. 
Within the study, we have compared gradient-based shading with half angle slicing, directional occlusion shading, multidirectional occlusion shading, shadow volume propagation, spherical harmonic lighting as well as dynamic ambient occlusion. To evaluate these models, users had to solve three tasks relying on correct depth as well as size perception. Our motivation for these three tasks was to find relations between the used illumination model, user accuracy and the elapsed time. In an additional task, users had to subjectively judge the output of the tested models. After first reviewing the models and their features, we will introduce the individual tasks and discuss their results. We discovered statistically significant differences in the testing performance of the techniques. Based on these findings, we have analyzed the models and extracted those features which are possibly relevant for the improved spatial comprehension in a relational task. We believe that a combination of these distinctive features could pave the way for a novel illumination model, which would be optimized based on our findings. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> Visualizing complex volume data usually renders selected parts of the volume semitransparently to see inner structures of the volume or provide a context. This presents a challenge for volume rendering methods to produce images with unambiguous depth-ordering perception. Existing methods use visual cues such as halos and shadows to enhance depth perception. Along with other limitations, these methods introduce redundant information and require additional overhead. This paper presents a new approach to enhancing depth-ordering perception of volume rendered images without using additional visual cues. 
We set up an energy function based on quantitative perception models to measure the quality of the images in terms of the effectiveness of depth-ordering and transparency perception as well as the faithfulness of the information revealed. Guided by the function, we use a conjugate gradient method to iteratively and judiciously enhance the results. Our method can complement existing systems for enhancing volume rendering results. The experimental results demonstrate the usefulness and effectiveness of our approach. <s> BIB005
Corcoran and colleagues distinguish between techniques which support depth perception (see Fig. 2) and those which support shape perception (see Fig. 3). The shape perception techniques that are based on shading are shown in a separate diagram (see Fig. 4). In the following, we will discuss these techniques in greater detail. Ebert and Rheingans BIB002 showed that the weighting used when blending colors toward the background does not need to be linear; exponential functions can be employed. In their application scenarios, the background color is often blue. This is inspired by artists who use blue backgrounds to depict an aerial perspective. Svakhine et al. enhance depth perception for large- and small-scale features by employing color-based techniques which also mimic the effects of the aerial perspective. To give the user more control over how features are emphasized, Svakhine et al. introduce a depth filtering function, which allows depth enhancement to be constrained to a subset of the overall depth range. Illumination-Based Techniques. The second group of physics-based techniques focuses on illumination. These techniques exploit the peculiarities of light transport and the fact that the human visual system has evolved to interpret the effects resulting from the underlying physics. Thus, shadowing, shading and other effects play an important role in this group of techniques. Volume rendering with advanced illumination-based techniques was recently introduced in commercial medical diagnosis software (e. g. SIEMENS syngo.via Frontier) and is referred to as cinematic rendering. In addition to lighting effects, light source placement affects shape perception considerably. While lighting design for polygonal surface rendering was studied in depth (see BIB003 for a seminal contribution), it recently attracted interest in (medical) volume visualization [TLD * 12, ZWM13, ZCBM14]. Aerial perspective could also be considered an illumination-based technique, as it is based on the attenuation of light.
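The depth-based color blending toward a bluish background, with a non-linear weight and an optional depth band in the spirit of Svakhine et al.'s depth filtering, can be sketched as follows; all constants are illustrative choices of ours, not values taken from the cited papers.

```python
import numpy as np

def depth_cue(color, depth, background=(0.55, 0.65, 1.0), k=1.5,
              near=0.0, far=1.0):
    """Blend a fragment color toward a bluish background; the blend weight
    grows exponentially with depth inside the [near, far] band."""
    d = np.clip((np.asarray(depth, float) - near) / (far - near), 0.0, 1.0)
    w = 1.0 - np.exp(-k * d)   # 0 at the near plane, approaches 1 with depth
    return (1.0 - w) * np.asarray(color, float) + w * np.asarray(background, float)

red = np.array([1.0, 0.0, 0.0])
near_px = depth_cue(red, 0.0)   # unchanged at the near plane
far_px = depth_cue(red, 1.0)    # shifted toward the blue background
```

Restricting `near`/`far` to a sub-range of the scene confines the depth enhancement to the structures of interest, as described above.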
Due to the striking similarity to chromadepth, however, we have classified it as a chromadepth-based technique. In the following, we will briefly discuss other illumination-based techniques as they are often applied in 3D medical visualization. While most of the techniques follow the widespread gradient-based illumination model proposed by Levoy , a large number of illumination models that consider shadowing, ambient occlusion, and halos have recently been proposed. Shadowing Effects. Due to the importance of shadowing effects in depth perception BIB001 , shadows are often taken into account in perceptually-motivated volume rendering. Because of the computational complexity of these lighting effects, algorithms are often constrained to single scattering and to the use of a point or a directional light. To optimize the required computations, several approaches have been proposed in the area of medical visualization. Lighting goodness assesses the quality of lighting basically by analyzing differences between an unilluminated image and an illuminated one. Lighting similarity measures whether a light source is highly representative, which is desired for the placement of several light sources to ensure that they complement each other well. Finally, light stability refers to the differences that result when a light source position slightly changes. Ideally, the depiction of an object's shape is robust against small positional changes. Tao and colleagues employ this metric to optimally place an initial light source and then to add additional sources as long as they improve the recognizability of shapes (according to their metric of shape perception). In their perceptual experiment, participants were asked to compare pairs of images with respect to recognizability of surface details. In most cases, images where the light sources were optimized were rated as better than images with randomly placed light sources.
The new metric turned out to be superior to optimizations based on other metrics (e. g., from Gumhold and colleagues ). These results also hold for medical volume data. The following setup for light sources has been shown to be perceptually effective:
• a key light with high intensity at the top left of the scene,
• an auxiliary fill light placed in front of the scene, and
• a back light that emphasizes the silhouettes.
The back light should be blue and the other light sources should be white. The key light should have the highest intensity and the back light should have the lowest. One drawback of this configuration is that some thin structures may be overexposed BIB005 . One possible remedy is to use a global tone mapping. This configuration was used in a case study on the analysis of rheumatoid changes. The lighting configuration was perceptually evaluated and discussed with respect to a specific diagnostic task, namely the detection of small erosions from rheumatoid arthritis [ZCBM14]. Zheng and colleagues compared local and global illumination and found that local illumination depicts excessive detail, whereas global illumination leads to a softer appearance, resulting in a lower rate of false positives. With this type of lighting and global illumination, the number of diagnostic errors decreased considerably and participants were twice as fast. It is also important to mention that the participants (who were all physicians) wanted to see both the globally illuminated data and the locally illuminated data. In addition to surface orientation and category assessment tasks, Zheng and colleagues employed lighting-specific metrics BIB005 to measure the degree to which, under different lighting conditions, the luminance histogram was nearly equalized and the degree to which edges (based on an edge detector) were very salient. Perceptual Benefits.
Several studies have been conducted to investigate the effects of advanced volume illumination techniques on depth and shape perception. Lindemann and Ropinski BIB004 have compared seven state-of-the-art volumetric illumination techniques with respect to depth and size perception as well as to subjective preference. They presented participants with volume-rendered images generated using different illumination models and asked the participants to perform depth, size, and beauty-judgment tasks. The results indicate that global illumination improves the perceptual qualities of volume-rendered images. In particular, directional occlusion shading [SPH * 09] improved depth perception significantly. Interestingly, participants nonetheless had a subjective preference for the simple gradient-based shading technique. Šoltészová and colleagues investigated the influence of shadow chromaticity through depth testing and found that shadow chromaticity influenced the perceptual qualities of volume-rendered images. In another work by Šoltészová and colleagues [STPV12], shape perception for complex slanted shapes, such as they occur in anatomy, was analyzed. Like previous authors, they found a systematic error in estimating surface slant. They also discovered that upwards-pointing normals are underestimated less than downwards-pointing normals. This finding enabled them to automatically adjust the shading scheme to correct for these errors. In a follow-up experiment, they showed that shape orientation was indeed more precisely perceived after the correction. More recently, Diaz and colleagues [DRN * 16] investigated the influence of global volume illumination techniques in desktop-based VR systems and found a positive effect on depth perception.
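The key/fill/back light configuration described above can be sketched as a simple shading function. The directions, intensities, and the blue back-light color below are illustrative values of ours, chosen only to follow the stated guidelines (key light brightest and at the top left, back light weakest and blue, emphasizing silhouettes via a rim term):

```python
import numpy as np

def _unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def three_light_shade(normal, view):
    """Key (bright, top left) + fill (dim, frontal) + blue back/rim light."""
    n, v = _unit(normal), _unit(view)
    white, blue = np.array([1.0, 1.0, 1.0]), np.array([0.3, 0.4, 1.0])
    key = 1.0 * white * max(0.0, np.dot(n, _unit((-1.0, 1.0, 1.0))))
    fill = 0.4 * white * max(0.0, np.dot(n, v))
    rim = 0.25 * blue * (1.0 - abs(np.dot(n, v)))  # strongest at silhouettes
    return key + fill + rim

view = (0.0, 0.0, 1.0)
facing = three_light_shade((0.0, 0.0, 1.0), view)   # rim term vanishes
grazing = three_light_shade((1.0, 0.0, 0.0), view)  # rim light dominates
```

Because the rim term peaks where the normal is orthogonal to the view direction, silhouette regions acquire a subtle bluish glow, mirroring the role of the back light in the evaluated setup.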
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> A novel stereoscopic depth encoding/decoding process has been developed which considerably simplifies the creation and presentation of stereoscopic images in a wide range of display media. The patented chromostereoscopic process is unique because the encoding of depth information is accomplished in a single image. The depth encoded image can be viewed with the unaided eye as a normal two dimensional image. The image attains the appearance of depth, however, when viewed by means of the inexpensive and compact depth decoding passive optical system. The process is compatible with photographic, printed, video, slide projected, computer graphic, and laser generated color images. The range of perceived depth in a given image can be selected by the viewer through the use of "tunable depth" decoding optics, allowing infinite and smooth tuning from exaggerated normal depth through zero depth to exaggerated inverse depth. The process is insensitive to the head position of the viewer. Depth encoding is accomplished by mapping the desired perceived depth of an image component into spectral color. Depth decoding is performed by an optical system which shifts the spatial positions of the colors in the image to create left and right views. The process is particularly well suited to the creation of stereoscopic laser shows. Other applications are also being pursued. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We propose a new rendering technique that produces 3-D images with enhanced visual comprehensibility. Shape features can be readily understood if certain geometric properties are enhanced. To achieve this, we develop drawing algorithms for discontinuities, edges, contour lines, and curved hatching. 
All of them are realized with 2-D image processing operations instead of line tracking processes, so that they can be efficiently combined with conventional surface rendering algorithms. Data about the geometric properties of the surfaces are preserved as Geometric Buffers (G-buffers). Each G-buffer contains one geometric property such as the depth or the normal vector of each pixel. By using G-buffers as intermediate results, artificial enhancement processes are separated from geometric processes (projection and hidden surface removal) and physical processes (shading and texture mapping), and performed as postprocesses. This permits a user to rapidly examine various combinations of enhancement techniques without excessive recomputation, and easily obtain the most comprehensible image. Our method can be widely applied for various purposes. Several of these, edge enhancement, line drawing illustrations, topographical maps, medical imaging, and surface analysis, are presented in this paper. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Authors and Editors. Acknowledgements. Introduction. PART I. BASICS. Generalized Steps. Studio Basics. Archival Considerations. Light on Form. PART II. RENDERING TECHNIQUES. Line and Ink. Pencil. Carbon Dust. Watercolor and Wash. Gouache and Acrylics. Airbrush. Murals and Dioramas. Model Building. Introduction to Computer Graphics. From 2-D to 3-D. PART III. SUBJECT MATTER. Illustrating Molecules. Illustrating Earth Sciences. Illustrating Astronomy. Illustrating Plants. Illustrating Fossils. Illustrating Invertebrates. Illustrating Fishes. Illustrating Amphibians and Reptiles. Illustrating Birds. Illustrating Mammals. Illustrating Animals in Their Habitats. Illustrating Humans and Their Artifacts. Illustrating Medical Subjects. PART IV. BEYOND BASICS. Using the Microscope. Charts and Diagrams. Cartography for the Scientific Illustrator. Copy Photography.
The Printing Process. PART V. THE BUSINESS OF SCIENTIFIC ILLUSTRATION. Copyright. Contracts. Operating a Freelance Business. Index of Illustrators. Index. About the Editors. <s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> There are many applications that can benefit from the simultaneous display of multiple layers of data. The objective in these cases is to render the layered surfaces in a such way that the outer structures can be seen and seen through at the same time. The paper focuses on the particular application of radiation therapy treatment planning, in which physicians need to understand the three dimensional distribution of radiation dose in the context of patient anatomy. We describe a promising technique for communicating the shape and position of the transparent skin surface while at the same time minimally occluding underlying isointensity dose surfaces and anatomical objects: adding a sparse, opaque texture comprised of a small set of carefully chosen lines. We explain the perceptual motivation for explicitly drawing ridge and valley curves on a transparent surface, describe straightforward mathematical techniques for detecting and rendering these lines, and propose a small number of reasonably effective methods for selectively emphasizing the most perceptually relevant lines in the display. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Transparency can be a useful device for simultaneously depicting multiple superimposed layers of information in a single image. However, in computer-generated pictures-as in photographs and in directly viewed actual objects-it can often be difficult to adequately perceive the three-dimensional shape of a layered transparent surface or its relative depth distance from underlying structures. 
Inspired by artists' use of line to show shape, we have explored methods for automatically defining a distributed set of opaque surface markings that intend to portray the three-dimensional shape and relative depth of a smoothly curving layered transparent surface in an intuitively meaningful (and minimally occluding) way. This paper describes the perceptual motivation, artistic inspiration and practical implementation of an algorithm for "texturing" a transparent surface with uniformly distributed opaque short strokes, locally oriented in the direction of greatest normal curvature, and of length proportional to the magnitude of the surface curvature in the stroke direction. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. <s> BIB005 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding "halos" that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow. 
<s> BIB006 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Transparency can be a useful device for depicting multiple overlapping surfaces in a single image. The challenge is to render the transparent surfaces in such a way that their 3D shape can be readily understood and their depth distance from underlying structures clearly perceived. This paper describes our investigations into the use of sparsely-distributed discrete, opaque texture as an artistic device for more explicitly indicating the relative depth of a transparent surface and for communicating the essential features of its 3D shape in an intuitively meaningful and minimally occluding way. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. We describe the perceptual motivation and artistic inspiration for defining a stroke texture that is locally oriented in the direction of greatest normal curvature (and in which individual strokes are of a length proportional to the magnitude of the curvature in the direction they indicate), and we discuss two alternative methods for applying this texture to isointensity surfaces defined in a volume. We propose an experimental paradigm for objectively measuring observers' ability to judge the shape and depth of a layered transparent surface, in the course of a task which is relevant to the needs of radiotherapy treatment planning, and use this paradigm to evaluate the practical effectiveness of our approach through a controlled observer experiment based on images generated from actual clinical data. 
<s> BIB007 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We discuss volume line integral convolution (LIC) techniques for effectively visualizing 3D flow, including using visibility-impeding halos and efficient asymmetric filter kernels. Specifically, we suggest techniques for selectively emphasizing critical regions of interest in a flow; facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines; efficiently incorporating an indication of orientation into a flow representation; and conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations. <s> BIB008 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Accurately and automatically conveying the structure of a volume model is a problem which has not been fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create images which may match the appearance of translucent materials in nature but may not embody important structural details. Transfer function approaches allow flexible design of the volume appearance but generally require substantial hand-tuning for each new data set in order to be effective. We introduce the volume illustration approach, combining the familiarity of a physics-based illumination model with the ability to enhance important features using non-photorealistic rendering techniques. Since the features to be enhanced are defined on the basis of local volume characteristics rather than volume sample values, the application of volume illustration techniques requires less manual tuning than the design of a good transfer function. 
Volume illustration provides a flexible unified framework for enhancing the structural perception of volume models through the amplification of features and the addition of illumination effects. <s> BIB009 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Li and Zaidi (Li, A., and Zaidi, Q. (2000) Vision Research, 40, 217–242) showed that the veridical perception of the 3-dimensional (3D) shape of a corrugated surface from texture cues is entirely dependent on the visibility of critical patterns of oriented energy. These patterns are created by perspective projection of surface markings oriented along lines of maximum 3D curvature. In images missing these orientation modulations, observers confused concavities with convexities, and leftward slants with rightward slants. In this paper, it is shown that these results were a direct consequence of the physical information conveyed by different oriented components of the texture pattern. For texture patterns consisting of single gratings of arbitrary spatial frequency and orientation, equations are derived from perspective geometry that describe the local spatial frequency and orientation for any slant at any height above and below eye level. The analysis shows that only gratings oriented within a few degrees of the axis of maximum curvature exhibit distinct patterns of orientation modulations for convex, concave, and leftward and rightward slanted portions of a corrugated surface. All other gratings exhibit patterns of frequency and orientation modulations that are distinct for curvatures on the one hand and slants on the other, but that are nearly identical for curvatures of different sign, and nearly identical for slants of different direction. The perceived shape of surfaces was measured in a 5AFC paradigm (concave, convex, leftward slant, rightward slant, and flat-frontoparallel). 
Observers perceived all five shapes correctly only for gratings oriented within a few degrees of the axis of maximum curvature. For all other oriented gratings, observers could distinguish curvatures from slants, but could not distinguish signs of curvature or directions of slant. These results demonstrate that human observers utilize the shape information provided by texture components along both critical and non-critical orientations. © 2001 Elsevier Science Ltd. All rights reserved. <s> BIB010 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We propose a simple and effective method for detecting view-and scale-independent ridge-valley lines defined via first- and second-order curvature derivatives on shapes approximated by dense triangle meshes. A high-quality estimation of high-order surface derivatives is achieved by combining multi-level implicit surface fitting and finite difference approximations. We demonstrate that the ridges and valleys are geometrically and perceptually salient surface features, and, therefore, can be potentially used for shape recognition, coding, and quality evaluation purposes. <s> BIB011 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. 
We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. <s> BIB012 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We describe the results of two comprehensive controlled observer experiments intended to yield insight into the following question: If we could design the ideal texture pattern to apply to an arbitrary smoothly curving surface in order to enable its 3D shape to be most accurately and effectively perceived, what would the characteristics of that texture pattern be? We begin by reviewing the results of our initial study in this series, which were presented at the 2003 IEEE Symposium on Information Visualization, and offer an expanded analysis of those findings. We continue by presenting the results of a follow-on study in which we sought to more specifically investigate the separate and combined influences on shape perception of particular texture components, with the goal of obtaining a clearer view of their potential information carrying capacities. In each study, we investigated the observers' ability to identify the intrinsic shape category of a surface patch (elliptical, hyperbolic, cylindrical, or flat) and its extrinsic surface orientation (convex, concave, both, or neither). In our first study, we compared performance under eight different texture type conditions, plus two projection conditions (perspective or orthographic) and two viewing conditions (head-on or oblique). 
We found that: 1) shape perception was better facilitated, in general, by the bidirectional "principal direction grid" pattern than by any of the seven other patterns tested; 2) shape type classification accuracy remained high under the orthographic projection condition for some texture types when the viewpoint was oblique; 3) perspective projection was required for accurate surface orientation classification; and 4) shape classification accuracy was higher when the surface patches were oriented at a (generic) oblique angle to the line of sight than when they were oriented (in a nongeneric pose) to face the viewpoint straight on. In our second study, we compared performance under eight new texture type conditions, redesigned to facilitate gathering insight into the cumulative effects of specific individual directional components in a wider variety of multidirectional texture patterns. We found that shape classification accuracy was equivalently good under a variety of test patterns that included components following either the first or first and second principal directions, in addition to other directions, suggesting that a principal direction grid texture is not the only possible "best option" for enhancing shape representation. <s> BIB013 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We introduce a flexible combination of volume, surface, and line rendering. We employ object-based edge detection because this allows a flexible parametrization of the generated lines. Our techniques were developed mainly for medical applications using segmented patient-individual volume datasets. In addition, we present an evaluation of the generated visualizations with 8 medical professionals and 25 laypersons. Integration of lines in conventional rendering turned out to be appropriate. 
<s> BIB014 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Three-dimensional shape can be drawn using a variety of feature lines, but none of the current definitions alone seem to capture all visually-relevant lines. We introduce a new definition of feature lines based on two perceptual observations. First, human perception is sensitive to the variation of shading, and since shape perception is little affected by lighting and reflectance modification, we should focus on normal variation. Second, view-dependent lines better convey smooth surfaces. From this we define view-dependent curvature as the variation of the surface normal with respect to a viewing screen plane, and apparent ridges as the loci of points that maximize a view-dependent curvature. We present a formal definition of apparent ridges and an algorithm to render line drawings of 3D meshes. We show that our apparent ridges encompass or enhance aspects of several other feature lines. <s> BIB015 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We present a psychophysical experiment to determine the effectiveness of perceptual shape cues for rigidly moving objects in an interactive, highly dynamic task. We use standard non-photorealistic (NPR) techniques to carefully separate and study shape cues common to many rendering systems. Our experiment is simple to implement, engaging and intuitive for participants, and sensitive enough to detect significant differences between individual shape cues. We demonstrate our experimental design with a user study. In that study, participants are shown 16 moving objects, 4 of which are designated targets, rendered in different shape-from-X styles. Participants select targets projected onto a touch-sensitive table. We find that simple Lambertian shading offers the best shape cue in our user study, followed by contours and, lastly, texturing. 
Further results indicate that multiple shape cues should be used with care, as these may not behave additively. <s> BIB016 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. For this, we provide the differential geometry behind these concepts and adapt this mathematical field to the discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as basis for the feature line methods. We provide the reader with all knowledge to re-implement every feature line method. Furthermore, we summarize the methods and suggest a guideline for which kind of surface which feature line algorithm is best suited. Our work is motivated by, but not restricted to, medical and biological surface models. <s> BIB017
Illustrative techniques do not aim to mimic the real world, but instead borrow from art and illustrations BIB003 . This class of techniques often helps to guide the viewer's attention in a goal-directed manner, emphasizing important aspects and suppressing or omitting other aspects. Selected examples, such as boundary emphasis, toon shading, feature lines, and texturing, will be discussed in detail below.

Boundary Emphasis. Boundary emphasis, usually in the form of a contour, has shown much promise in enhancing volume rendering, presumably since silhouettes play a central role in object recognition. Early methods evaluated (only) the angle between the surface normal n and the view vector v, emphasizing regions where the dot product of these vectors was close to zero. Unfortunately, the width of the contour cannot be controlled in this technique. Kindlmann and colleagues [KWTM03] solved this by analyzing the normal curvature in the viewing direction and then using this value to regulate contour thickness. While this method produces perceptually meaningful renditions, it requires curvature values (second-order derivatives). A more computationally effective solution was introduced by Bruckner and Gröller. Although this latter method is not accurate, since curvature is only approximated by the change of normal directions, it is sufficient for creating expressive visualizations from volume data.

Toon Shading. Many forms of medical image data, such as CT, MRI and PET, have no inherent color. Thus, color may be used to enhance shape perception. A widespread strategy is to map the surface direction (approximated as normalized gradients in direct volume rendering) to a cool-to-warm color scale. This illustrative-rendering technique was introduced by Gooch and colleagues and is also used in medical visualization [JQD * 08].
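The boundary-emphasis rule above (emphasizing pixels where n · v is small, with a curvature-controlled threshold in the spirit of Kindlmann and colleagues [KWTM03]) can be sketched as follows. This is a minimal Python sketch; the function name and parameter values are illustrative, and the published method operates on gradient fields of the volume rather than a precomputed scalar input.

```python
import math

def contour_factor(n_dot_v, kappa_v, thickness=1.5):
    """Curvature-controlled contour test, after Kindlmann et al. [KWTM03].

    n_dot_v   : dot product of unit surface normal and unit view vector
    kappa_v   : normal curvature in the viewing direction
    thickness : desired contour width T in object space

    Returns True where a contour pixel should be drawn.  The threshold
    sqrt(T*k*(2 - T*k)) shrinks where curvature is low, so that nearly
    flat regions seen edge-on do not produce arbitrarily thick contours.
    """
    tk = max(0.0, min(1.0, thickness * kappa_v))
    return abs(n_dot_v) <= math.sqrt(tk * (2.0 - tk))
```

Applied per pixel (or per sample along a viewing ray), this keeps the projected contour width roughly constant, in contrast to the simple |n · v| ≈ 0 test, whose contours fatten wherever the surface bends away slowly.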
In RGB space, the cool color uses a non-zero blue component, while the warm color is based on yellow and uses the red and green components:

c = ((1 + L · N) / 2) K_warm + (1 − (1 + L · N) / 2) K_cool ,

with L being the light vector, N being the surface normal or normalized gradient, K_cool = (0, 0, T_cool), and K_warm = (T_warm, T_warm, 0).

Chromadepth. The selection of blue as background color in distance color blending [ER00] is very consistent with perceptual considerations, since the light-sensitive cells that respond to blue colors primarily have a slow response time. Furthermore, the lens of the eye refracts colored light of different wavelengths at different angles. Thus, the refraction of blue-wavelength light at the eye's lens can result in an offset of the retinal image, which makes these objects seem to be further away than, for instance, red objects. Thus, the blue background naturally supports the focus on the foreground, which is typically rendered in red. This effect, called chromadepth, is employed for stereo perception (with diffraction grating glasses). It can also be used, however, for depth perception without glasses BIB001 , if the depth value is mapped to the rainbow color scale (red is proximal, blue is distal). Due to these benefits, chromadepth-based techniques have also been applied in medical visualization [RSH06, BGP * 11, SPV11]. One central application of chromadepth is to improve the depiction of shadows. The realistic simulation of shadows darkens the affected regions so strongly that there is often next to no contrast, effectively hiding any information present there. Šoltészová and colleagues noticed that illustrators often do not mix the object color with black, as shadowing algorithms do. Instead, they prefer to mix the original color with blue such that shadowed regions have a luminance and a color contrast. They suggested mapping shadowiness, via an appropriate transfer function, to a blueish color and to opacity.
The specific color scale is derived from the perceptually motivated CIELAB color space, where Euclidean distances roughly correspond to our perception of color differences. With this shadow transfer function, they effectively compensate for the lower luminance range in the shadow region, and thus reveal more details by avoiding black concavities. This is an inspiring idea, as it mixes a depth cue from real-world perception (shadow) with an artificial depth cue (since the color assignment clearly deviates from physical illumination). The method was applied to a variety of medical datasets, including CT and ultrasound data. Fig. 6 illustrates the difference between chromadepth and conventional shadows. This kind of shadow generation is similar to illustrative cool-warm color shading.

Halo Effects. Halos can be thought of as the opposite of shadows: shadows arise when occluding structures decrease the amount of illumination received by adjacent objects, while halos are rim-like structures that shine on adjacent objects. Since halo effects are designed to support depth perception, the foreground features are usually emphasized with a bright surrounding halo BIB009 . The background object is made less prominent by making its surroundings more opaque or darker. When the halo color is dark, halos closely resemble shadowing effects. This well-known artistic technique was first applied in visualization in the context of flow visualization BIB006 BIB008 . There, the halo effect was computed per voxel by adding halo influences in the neighborhood. Fig. 7 shows an example where halos are applied to medical volume rendering.

Feature Lines. As mentioned above, object outlines and boundary emphasis techniques can improve space perception. In addition to the outer object boundaries, a variety of lines exist to represent discontinuities in visibility, surface normal, curvature, and illumination.
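The glasses-free chromadepth mapping discussed above (red is proximal, blue is distal) can be sketched as follows. This is a minimal sketch using an HSV rainbow ramp; the cited systems use their own specific color scales, so the exact ramp here is an assumption for illustration.

```python
import colorsys

def chromadepth_color(depth, near, far):
    """Map depth to a red-to-blue rainbow hue (chromadepth).

    Proximal points are red (hue 0), distal points blue (hue 2/3), so
    that the refraction-based effect at the eye's lens orders them
    consistently with their actual depth.  Returns (r, g, b) in [0, 1].
    """
    t = (depth - near) / (far - near)
    t = max(0.0, min(1.0, t))          # clamp to the near/far range
    return colorsys.hsv_to_rgb(2.0 / 3.0 * t, 1.0, 1.0)
```

With diffraction grating glasses the same ramp yields a stereo effect; without glasses it still acts as an artificial depth cue, as noted above.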
Generally, two classes of feature lines exist:
• view-independent lines and
• view-dependent lines.

View-independent features are solely influenced by the shape of an object, and as such they are the same for different vantage points. These include crease lines based on "large" dihedral angles between adjacent faces and the previously mentioned ridge and valley lines (e. g., BIB004 BIB011 ) that are derived from second-order features (curvature) and are very sensitive to noise. In contrast, view-dependent feature lines take the view direction (and sometimes the illumination) into account. Among the view-dependent feature lines, suggestive contours [DFRS03] and apparent ridges BIB015 have been frequently used in medical visualization. Suggestive contours [DFRS03] characterize regions in a surface that would be silhouette regions if the viewpoint of the camera were to change slightly. Thus, they provide continuity during interactive exploration. Apparent ridges BIB015 are view-dependent versions of the static ridge-and-valley line concept: they extend the definition of ridges with a view-dependent curvature term. In interactive exploration, apparent ridges thus adapt to the viewing direction and slide over a surface instead of being constant. In contrast to suggestive contours, apparent ridges also include lines in convex regions. Both suggestive contours and apparent ridges have a relevance threshold that can be adjusted for drawing or suppressing lines. Ridge-and-valley lines, however, are subject to noise BIB017 , "seem to exaggerate curvature" [CGL * 08], make features "look overly sharp" BIB015 or "like surface markings" [DFRS03], and, due to being locked to the surfaces, they are easily occluded by the very features they represent. Only for mathematically ideal shapes with unrealistically sharp features (3D models of implants are a typical medical example) can static ridge-and-valley lines be equivalent to view-dependent concepts, such as apparent ridges.
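For surface meshes, the view-dependent lines above build on a simple per-frame test: an edge belongs to the object-space silhouette if it is shared by a front-facing and a back-facing triangle with respect to the current viewpoint. A minimal sketch (function names are illustrative):

```python
def face_normal(a, b, c):
    """Unnormalized normal of triangle (a, b, c) via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def silhouette_edges(vertices, faces, view_dir):
    """Edges shared by a front- and a back-facing triangle.

    vertices : list of 3D points; faces: index triples with consistent
    winding; view_dir: vector toward the viewer.  Returns a set of
    sorted vertex-index pairs.  Because the result depends on view_dir,
    it must be recomputed per viewpoint -- i.e., it is view-dependent.
    """
    facing, edge_faces = {}, {}
    for fi, (i, j, k) in enumerate(faces):
        n = face_normal(vertices[i], vertices[j], vertices[k])
        facing[fi] = sum(n[d] * view_dir[d] for d in range(3)) > 0.0
        for e in ((i, j), (j, k), (k, i)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    return {e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]}
```

Suggestive contours and apparent ridges refine this idea with curvature terms, so that lines also appear where the surface would become a silhouette under a small viewpoint change.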
For organic shapes, in particular models obtained from medical scans, a large amount of smoothing needs to be applied to avoid problems with view-independent lines. For specific recommendations on which view-dependent or -independent line concept should be used, we refer the reader to the survey of Lawonn and Preim BIB017 . Silhouettes [IFH * 03]-which are view-dependent lines on the surface of an object-were employed along with surface and volume rendering BIB014 to display context objects in a sparse manner to support attention to the focus objects (see Fig. 9 ). Corcoran and colleagues adjusted two-level volume rendering to incorporate object-space silhouettes and suggestive contours. Overall, shape perception was improved with both feature line techniques. By far the most comprehensive evaluation of the perceptual effectiveness of feature lines was performed by Cole and colleagues [CSD * 09], who conducted an experiment with 275,000 gauge-figure measurements using Amazon's Mechanical Turk. They investigated all major feature lines (including apparent ridges and suggestive contours) and compared them with shaded images and illustrations performed by an artist. Among the twelve models used in the study were four (partially) complex anatomical structures (including the cervical bone and a vertebra) and two less complex models (a tooth and a femur bone).

Figure 9 : The focus objects-a liver tumor and the vascular trees of the liver-are displayed as colored, opaque objects. The liver surface is a near-focus structure rendered transparently but also colored. Other organs and skeletal structures are rendered with silhouettes. In the right image, skeletal structures are also rendered as strongly transparent shaded surfaces (from BIB014 ).

The major results of that study are:
• There are statistically significant differences between almost all pairs of feature line techniques.
• All feature line techniques were less effective than shading (for all 12 models).
• Shape perception was poor for the anatomical models with any type of feature line (even with ridge-and-valley lines, where the mean deviation was 35°, compared to 24° with shading).

As a consequence, the sole use of feature lines for displaying single anatomical structures is perceptually not recommended.

Hatching. Shape representation using feature lines can yield images that are too sparse when the shapes have only a few landmarks, as is the case for the liver and the kidney. When an appropriate surface parameterization exists, hatching textures may improve shape perception. The strokes of such a hatching texture are more regularly distributed over a surface. The strokes are fully opaque, whereas the remaining elements of the texture are fully transparent. Obtaining an adequate surface representation is challenging, especially if surface models are derived from (noisy) medical-image data. Usually, mesh smoothing must be performed. The perceptual benefit of hatching strokes is influenced by the degree to which they "follow the shape," especially for organic (curved) shapes BIB010 . One of the earliest applications of this principle comes from Saito and Takahashi BIB002 , who applied regular hatching lines (latitude and longitude lines) to curved surfaces. Hatching has been shown to improve shape perception when it is used in combination with conventional shading with a local illumination model. It was also successfully used (based on experiments) for multi-layer medical visualizations BIB004 . It is unclear how well hatching works in isolation, as this has rarely been investigated. Hatching, like feature lines, may be stylized (i. e., parameters may be mapped to line style, width, brightness, or even color hue). This can be used to discriminate objects (e. g., by different hues) or to encode depth explicitly.
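Such an explicit depth encoding via stroke parameters can be sketched as follows. The mapping and the parameter ranges are hypothetical, chosen only for illustration; actual systems tune them per dataset.

```python
def depth_stylization(depth, near, far, w_near=3.0, w_far=0.5):
    """Encode depth in line style: distal strokes get thinner and dimmer.

    depth, near, far : scene depths; w_near/w_far are hypothetical
    stroke widths (in pixels) at the near and far planes.
    Returns (width, brightness) for a stroke at the given depth.
    """
    t = max(0.0, min(1.0, (depth - near) / (far - near)))
    width = w_near + t * (w_far - w_near)   # linear width falloff
    brightness = 1.0 - 0.7 * t              # distal strokes are dimmer
    return width, brightness
```

Mapping hue instead of (or in addition to) width and brightness gives the object-discrimination variant mentioned above.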
So far, there has been no experimental comparison of feature lines, hatching, and shaded surfaces for anatomical surface models with respect to shape perception. Currently, one may suppose that the joint use of shading and appropriate hatching yields better performance than shading alone or feature lines alone. How a joint use of feature lines and shading would perform is also not known. The only comparison of feature lines, hatching, and shading that we are aware of was performed for moving objects with very simple shapes that do not resemble anatomy BIB016 . Interrante and colleagues BIB005 discussed another strategy that is more concretely rooted in perceptual research: they created strokes that indicate the local curvature of the surface. For this purpose, they computed the two principal curvature directions (PCDs) and their respective scalar values. This computation results in two vector fields: a vector field representing vectors with maximum curvature and a second field with orthogonal vectors representing minimum curvature. The actual placement of the strokes is essential to the successful use of curvature-based hatching. The strokes provide essential shape cues in regions where there is considerable curvature. In flat regions, maximum curvature directions are unreliable, and therefore no hatching strokes should be generated there. Thresholding is thus necessary to avoid perceptual problems. Sweet and Ware BIB012 examined the perceptual effectiveness of parallel lines on surfaces in all three directions separately and compared it with a regular grid composed of parallel lines in two directions. In their large study, the average angular deviation was 20 degrees for surfaces that had only shading information. All types of line-based enhancements improved accuracy scores. The best results were achieved with a regular grid texture (angular deviation was reduced to 12 degrees).
The regular grid texture even produced significantly better performance than overlays with horizontal and vertical lines. Fig. 10 depicts three of the six viewing conditions.

Hatching Textures for Nested Anatomical Surfaces. Hatching textures are particularly useful for multilayered visualizations, especially when they are used to depict the outer shape such that the display of the inner shapes is only minimally occluded. Thus, instead of a semi-transparent outer surface, a small set of opaque strokes, indicating the surface location and its curvature, represents the outer surface. Interrante and colleagues applied this strategy to medical surface models (e. g., to indicate the dose distribution of simulated radiation treatment planning in anatomical models). In their first system, they used a hatching texture created from ridge-and-valley lines BIB004 . Unfortunately, not all dose distributions could be conveyed with these sparse feature lines. More evenly spaced curvature-directed hatching lines better revealed the outer surface BIB007 . In a series of experiments, they showed that hatching textures with lines that follow the PCDs conveyed the local orientation of smooth curved surfaces with convex and concave regions better than Phong shading BIB005 BIB007 BIB013 . Fig. 11 shows an example of the stimuli from these experiments.

Hatching Medical 3D Visualizations. Hatching techniques in medical visualization may be adapted to the specific anatomical objects. The display of muscles, for example, benefits from hatching textures representing their fiber structures [DCLK03,TPB * 08]. Elongated structures, such as vasculature and long bones, are hatched orthogonally to their local centerline (following the tradition of medical illustrations). These papers discuss generating high-quality surface and volume textures, but do not perform any perceptual experiments or evaluations.
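Curvature-directed hatching requires the two principal curvature directions. As a minimal sketch, assuming a height field z = f(x, y) and the small-slope approximation in which the shape operator reduces to the Hessian, the PCDs follow from a 2 × 2 eigen-decomposition:

```python
import math

def principal_directions(fxx, fxy, fyy):
    """Principal curvatures and directions of a height field z = f(x, y),
    under the small-slope approximation where the shape operator is the
    Hessian [[fxx, fxy], [fxy, fyy]].

    Returns ((k1, d1), (k2, d2)) with |k1| >= |k2|; d1 is the direction
    of maximum curvature, along which hatching strokes would be placed.
    """
    # Eigenvalues of a symmetric 2x2 matrix.
    mean = 0.5 * (fxx + fyy)
    det = fxx * fyy - fxy * fxy
    disc = math.sqrt(max(0.0, mean * mean - det))
    k_a, k_b = mean + disc, mean - disc

    def eigvec(k):
        # Solve (H - k I) v = 0; the second matrix row gives (k-fyy, fxy).
        if abs(fxy) > 1e-12:
            v = (k - fyy, fxy)
        else:  # diagonal Hessian: axis-aligned eigenvectors
            v = (1.0, 0.0) if abs(fxx - k) < abs(fyy - k) else (0.0, 1.0)
        norm = math.hypot(*v)
        return (v[0] / norm, v[1] / norm)

    pairs = sorted([(k_a, eigvec(k_a)), (k_b, eigvec(k_b))],
                   key=lambda p: -abs(p[0]))
    return pairs[0], pairs[1]
```

For triangle meshes derived from medical scans, curvature must instead be estimated discretely on the surface (see the survey of Lawonn and Preim BIB017 ), and strokes should be suppressed in near-flat regions, where the maximum-curvature direction is unreliable, as noted above.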
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Vascular Visualization <s> Transparency can be a useful device for depicting multiple overlapping surfaces in a single image. The challenge is to render the transparent surfaces in such a way that their 3D shape can be readily understood and their depth distance from underlying structures clearly perceived. This paper describes our investigations into the use of sparsely-distributed discrete, opaque texture as an artistic device for more explicitly indicating the relative depth of a transparent surface and for communicating the essential features of its 3D shape in an intuitively meaningful and minimally occluding way. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. We describe the perceptual motivation and artistic inspiration for defining a stroke texture that is locally oriented in the direction of greatest normal curvature (and in which individual strokes are of a length proportional to the magnitude of the curvature in the direction they indicate), and we discuss two alternative methods for applying this texture to isointensity surfaces defined in a volume. We propose an experimental paradigm for objectively measuring observers' ability to judge the shape and depth of a layered transparent surface, in the course of a task which is relevant to the needs of radiotherapy treatment planning, and use this paradigm to evaluate the practical effectiveness of our approach through a controlled observer experiment based on images generated from actual clinical data. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Vascular Visualization <s> A large variety of techniques has been developed to visualize vascular structures. 
These techniques differ in the necessary preprocessing effort, in the computational effort to create the visualizations, in the accuracy with respect to the underlying image data and in the visual quality of the result. In this overview, we compare 3D visualization methods and discuss their applicability for diagnosis, therapy planning and educational purposes. We consider direct volume rendering as well as surface rendering. <s> BIB002
Many different 3D vessel visualization techniques have been developed to support treatment planning. One family of vessel visualization techniques employs direct volume rendering and uses a transfer function to emphasize vascular structures [JQD * 08, KGNP12]. While most of these techniques serve to enhance preoperatively acquired images, the technique by [WSzBD * 14] is aimed at incorporating depth cues for improving interventional images of vascular structures. A second family of techniques reconstructs a surface mesh of vascular structures with explicit, implicit, or parametric visualization techniques (see, e. g., the survey of Preim and Oeltze BIB002 ). In the present survey, we do not consider the different geometric approaches, but assume that a smooth and accurate surface mesh is available. We do, however, discuss different ways of displaying this surface mesh (e. g., with illustrative methods).

Figure 11 : The inner surface represents a tumor and the outer surface an isosurface resulting from the dose simulation in radiation treatment planning. Both are shown together in order to assess whether the tumor is likely to be completely destroyed. The outer surface is rendered as a strongly transparent isosurface enhanced with curvature-directed strokes (from BIB001 , © IEEE, reprinted with permission).

Vascular visualization has the same requirements as other 3D visualizations as well as a few new ones [RHP * 06] (this is particularly true when the visualizations will be used for treatment planning):
• the spatial distance between vessel segments is essential (e. g., indications of when one segment occludes another);
• the discrimination of vascular systems is needed, since vessel segments can belong to the arterial or the venous system;
• the spatial distance between lesions (e. g., tumors) and vessel segments is essential, especially if the vessel segments exhibit a larger diameter; and
• during treatment planning, the exploration of vascular trees should be possible.

During surgery, on the other hand, static images are desired in order to better reveal the important information at a glance. The visualization techniques described in the following are driven by these requirements. All of them are illustrative. Since vascular structures are particularly complex shapes, it comes as no surprise that the basic, perceptually motivated techniques (recall Sect. 2), such as chromadepth shading, distance color blending, toon shading, and halos, are used [RSH06, JQD * 08]. The effect of distance color blending (with blue as distant color) is shown in Fig. 12.
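Distance color blending itself reduces to a per-fragment interpolation toward the distal hue. A minimal sketch with illustrative parameter values (the blend strength and the blue distal color are assumptions for demonstration):

```python
def distance_color_blend(color, depth, near, far,
                         distal=(0.0, 0.0, 1.0), strength=0.6):
    """Distance color blending: shift distal structures toward blue.

    color    : (r, g, b) triple in [0, 1] of the shaded fragment
    depth    : fragment depth, clamped to the [near, far] range
    distal   : hue mixed in with increasing depth (blue, per [ER00])
    strength : fraction of the distal hue mixed in at the far plane
    """
    t = max(0.0, min(1.0, (depth - near) / (far - near))) * strength
    return tuple((1.0 - t) * c + t * d for c, d in zip(color, distal))
```

Because only a fraction of the distal hue is mixed in, proximal vessel segments keep their original (typically reddish) color while distal segments recede toward blue, consistent with the perceptual considerations discussed for chromadepth.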
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Blood Flow Visualization <s> Currently, most researchers in visualization pay very little attention to vision science. The exception is when the effective use of color is the subject. Little research in flow visualization includes a discussion of the related perceptual theory. Nor does it include an evaluation of the effectiveness of the display techniques that are generated. This is so, despite Laidlaw's paper showing that such an evaluation is relatively straightforward. Of course, it's not always necessary to relate visualization research to perceptual theory. If the purpose of the research is to increase the efficiency of an algorithm, then the proper test is one of efficiency, not of perceptual validity. But when a new representation of data is the subject of research, addressing how perceptually effective it is - either by means of a straightforward empirical comparison with existing methods or analytically, relating the new mapping to perceptual theory - should be a matter of course. A strong interdisciplinary approach, including the disciplines of perception, design, and computer science, will produce better science and better design in that empirically and theoretically validated visual display techniques will result. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Blood Flow Visualization <s> Cerebral aneurysms are a vascular dilatation induced by a pathological change of the vessel wall and often require treatment to avoid rupture. Therefore, to estimate the risk of rupture, to gain a deeper understanding of aneurysm genesis, and to plan an actual intervention, the surface morphology and the internal blood flow characteristics are of main interest. Visual exploration is primarily used to understand such complex and variable types of data.
Since the blood flow data is strongly influenced by the surrounding vessel morphology, both have to be visually combined to efficiently support visual exploration. Since the flow is spatially embedded in the surrounding aneurysm surface, occlusion problems have to be tackled. Thereby, a meaningful visual reduction of the aneurysm surface that still provides morphological hints is necessary. We accomplish this by applying an adapted illustrative rendering style to the aneurysm surface. Our contribution lies in the combination and adaption of several rendering styles, which allow us to reduce the problem of occlusion and avoid most of the disadvantages of the traditional semi-transparent surface rendering, like ambiguities in perception of spatial relationships. In interviews with domain experts, we derived visual requirements. Later, we conducted an initial survey with 40 participants (13 of them medical experts), which led to further improvements of our approach. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Blood Flow Visualization <s> The investigation of hemodynamic information for the assessment of cardiovascular diseases (CVDs) has gained importance in recent years. Improved flow measuring modalities and computational fluid dynamics (CFD) simulations yield reliable blood flow information. For a visual exploration of the flow information, domain experts are used to investigating the flow information combined with its enclosed vessel anatomy. Since the flow is spatially embedded in the surrounding vessel surface, occlusion problems have to be resolved. A visual reduction of the vessel surface that still provides important anatomical features is required. We accomplish this by applying an adaptive surface visualization inspired by the suggestive contour measure. Furthermore, an illustration is employed to highlight the animated pathlines and to emphasize nearby surface regions.
Our approach combines several visualization techniques to improve the perception of surface shape and depth. Thereby, we ensure appropriate visibility of the embedded flow information, which can be depicted with established or advanced flow visualization techniques. We apply our approach to cerebral aneurysms and aortas with simulated and measured blood flow. In an informal user feedback session with nine domain experts, we confirmed the advantages of our approach compared with existing methods, e. g., semi-transparent surface rendering. Additionally, we assessed the applicability and usefulness of the pathline animation with highlighting of nearby surface regions. <s> BIB003
For any kind of blood flow visualization, measured or simulated, it is essential that the patterns of the flow can be studied along with the morphology of the surrounding vessels. Changes, such as narrowings or dilatations of vascular structures, cause vortices or helical flow patterns. These subtle changes may be true representations of the patient's state, but they might also be due to artifacts. Three-dimensional flow is often represented with streamlines (which may be illuminated) or, in case of unsteady blood flow, with pathlines (see [VPvP * 14] for a survey). Color is used to convey the velocity magnitude and thus cannot be used to enhance shape and depth perception (e. g., with toon shading or distance color blending). Due to the complexity of the underlying information, perceptually-motivated blood flow visualization techniques primarily employ illustrative concepts. The simultaneous visualization of vascular structures and embedded flow is an instance of a multi-layered visualization problem. Ghosted views. Ghosted views are a type of smart visibility technique. Often, the region where flow is represented defines a 3D mesh (e. g., a hull), and the transparency of the vessel is adjusted such that the flow becomes visible. Regions of the vessel surface that do not occlude flow are rendered opaque. Gasteiger and colleagues BIB002 developed such a ghosted view technique, where the transparency is adjusted in a view-dependent manner so that vessel contours are clearly visible. Ghosted views can also be combined with feature lines that indicate where a pathology starts and which vessels drain and feed the pathologic dilatation. Moreover, an optional depth enhancement has been introduced with a fog simulation and a simple approximate shadow representation. This gives rise to three possible visualizations: simple global transparency adjustment, ghosting, and ghosting with additional depth enhancements (see Fig. 17 ).
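The core of view-dependent transparency in ghosted views can be sketched as follows. This is a hypothetical Fresnel-style formulation in Python; the parameter names, defaults, and falloff exponent are our assumptions, not Gasteiger and colleagues' exact model:

```python
def ghosted_opacity(normal, view_dir, occludes_flow,
                    alpha_min=0.1, alpha_max=0.9, falloff=2.0):
    """View-dependent opacity of a vessel surface fragment.

    normal, view_dir: unit vectors in view space
    occludes_flow:    True if embedded flow lies behind this fragment

    Fragments facing the viewer (n.v close to 1) become nearly transparent
    so the flow shows through; fragments near the contour (n.v close to 0)
    stay nearly opaque, keeping the vessel silhouette clearly visible.
    """
    if not occludes_flow:
        return alpha_max  # regions that hide no flow are rendered opaque
    n_dot_v = abs(sum(n * v for n, v in zip(normal, view_dir)))
    return alpha_min + (alpha_max - alpha_min) * (1.0 - n_dot_v) ** falloff

# A fragment facing the camera is ghosted; a silhouette fragment is not.
facing = ghosted_opacity((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), occludes_flow=True)
contour = ghosted_opacity((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), occludes_flow=True)
```

In a real renderer this term would be evaluated per fragment in a shader; the sketch only illustrates how the view angle drives the ghosting.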
While Gasteiger and colleagues BIB002 only assessed the subjective preference of the techniques, a full perception-based study of this technique has been performed [BGCP11] and will be described in Sect. 5.3. The combination of blood flow and vascular structures was later refined and adapted to animating time-dependent flow BIB003 . Illustrative techniques were developed to provide simplified abstract flow representations [BMGS13, vPBB * 10], motivated by artist-created flow illustrations. Occluding contours emphasize major arteries and their branchings if drawn over a strongly transparent surface. Illustrative arrow glyphs were employed to display aggregated flow (using clustering). Long arrow glyphs are beneficial for the perception of the flow direction BIB001 .
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Projection and Illumination of Stream Tubes <s> This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding "halos" that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Projection and Illumination of Stream Tubes <s> We present a threads and halos representation for interactive volume rendering of vector-field structure and describe a number of additional components that combine to create effective visualizations of multivalued 3D scientific data. After filtering linear structures, such as flow lines, into a volume representation, we use a multilayer volume rendering approach to simultaneously display this derived volume along with other data values. We demonstrate the utility of threads and halos in clarifying depth relationships within dense renderings and we present results from two scientific applications: visualization of second-order tensor valued magnetic resonance imaging (MRI) data and simulated 3D fluid flow data. 
In both application areas, the interactivity of the visualizations proved to be important to the domain scientists. Finally, we describe a PC-based implementation of our framework along with domain specific transfer functions, including an exploratory data culling tool, that enable fast data exploration. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Projection and Illumination of Stream Tubes <s> Many rendering algorithms can be understood as numerical solvers for the light-transport equation. Local illumination is probably the most widely implemented rendering algorithm: it is simple, fast, and encoded in 3D graphics hardware. It is not, however, derived as a solution to the light-transport equation. We show that the light-transport equation can be re-interpreted to produce local illumination by using vector-valued light and matrix-valued reflectance. This result fills an important gap in the theory of rendering. Using this framework, local and global illumination result from merely changing the values of parameters in the governing equation, permitting the equation and its algorithmic implementation to remain fixed. <s> BIB003
Weigle and Banks created artificial datasets resembling fiber tracts visualized with stream tubes. To investigate shape perception (local scale), they varied the projection (orthographic vs. perspective) and the illumination model (local vs. global), the latter having been introduced by Beason and Banks BIB003 . The illumination model includes shadow generation and multiple reflections that can be precomputed and thus be used in interactive settings. To fully exploit the perceptual potential of global illumination, several light sources need to be placed carefully (serving as key lights and fill lights). Overall, they found that global illumination and perspective projection improved the assessment of depth with highly significant results and a moderate effect size. Global illumination improved depth perception for both orthographic and perspective projection. Thus, the effects of realistic perspective and illumination are cumulative. In addition to, or instead of, using (local) illumination of the tubular fiber tract structures, researchers have also investigated the use of graphical techniques (i. e., illustrative visualization techniques) that have an effect similar to global illumination (recall Sect. 3.1.1), but which can be computed more rapidly. In particular, Wenger and colleagues BIB002 employed tube halos and motivated the use of halos with their improvement of perception, as shown in the previously mentioned work on flow visualization BIB001 . Generally, this use of similar visualization approaches illustrates the close connection of fiber tract visualization to that of other types of dense line data (e. g., streamlines) extracted from flow simulations. As an alternative, Klein and colleagues [KRH * 06] removed tube shading entirely and, instead, applied distance-encoded contours and tube shadows to improve the spatial perception, freeing up the tube surface for the visualization of additional data properties.
No studies have been conducted to evaluate the perceptual benefits of either of these visualization techniques.
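The depth-discontinuity halos mentioned above can be approximated in screen space. The following is a rough, hypothetical sketch of the idea, not the cited volume-based implementations:

```python
def halo_mask(depth_buffer, x, y, radius=2, threshold=0.02):
    """Decide whether the fragment at pixel (x, y) lies inside a halo.

    depth_buffer: 2D list of normalized depths (0 = near, 1 = far)

    A fragment falls into a halo if some pixel in its neighborhood is
    closer to the viewer by more than `threshold`: a depth discontinuity
    separates two tubes, and the farther tube is locally suppressed,
    which visually detaches the nearer tube from its background.
    """
    d = depth_buffer[y][x]
    rows, cols = len(depth_buffer), len(depth_buffer[0])
    for j in range(max(0, y - radius), min(rows, y + radius + 1)):
        for i in range(max(0, x - radius), min(cols, x + radius + 1)):
            if depth_buffer[j][i] < d - threshold:
                return True  # a clearly nearer structure casts a halo here
    return False

# A 7x7 depth buffer with one near tube fragment in the center:
buf = [[0.5] * 7 for _ in range(7)]
buf[3][3] = 0.1
```

Pixels adjacent to the near fragment are masked (drawn as halo), while pixels far from any discontinuity, and the near fragment itself, are drawn normally.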
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Common Direct Tasks <s> A specific form for the internal representation of local surface orientation is proposed, which is similar to Gibson's (1950) "amount and direction of slant". Slant amount is usually quantified by the angle σ between the surface normal and the line of sight (0°≦σ≦90°). Slant direction corresponds to the direction of the gradient of distance from the viewer to the surface, and may be defined by the image direction τ to which the surface normal would project (0°≦τ≦360°). Since the direction of slant is specified by the tilt of the projected surface normal, it is referred to as surface tilt (Stevens, 1979; Marr, 1982). The two degrees of freedom of orientation are therefore quantified by slant, an angle measured perpendicular to the image plane, and tilt, an angle measured in the image plane. The slant-tilt form provides several computational advantages relative to some other proposals and is consistent with various psychological phenomena. Slant might be encoded by various means, e.g. by the cosine of the angle, by the tangent, or linearly by the angle itself. Experimental results are reported that suggest that slant is encoded by an internal parameter that varies linearly with slant angle, with resolution of roughly one part in 100. Thus we propose that surface orientation is encoded in human vision by two quantities, one varying linearly with slant angle, the other varying linearly with tilt angle.
We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Common Direct Tasks <s> Recovering 3D shape from shading is an ill-posed problem that the visual system can solve only by making use of additional information such as the position of the light source. Previous research has shown that people tend to assume light is above and slightly to the left of the object [Sun and Perona 1998]. We present a study to investigate whether the visual system also assumes the angle between the light direction and the viewing direction. We conducted a shape perception experiment in which subjects estimated surface orientation on smooth, virtual 3D shapes displayed monocularly using local Lambertian shading without cast shadows. We varied the angle between the viewing direction and the light direction within a range +/- 66 deg (above/below), and subjects indicated local surface orientation by rotating a gauge figure to appear normal to the surface [Koenderink et al. 1992]. Observer settings were more accurate and precise when the light was positioned above rather than below the viewpoint. Additionally, errors were minimized when the angle between the light direction and the viewing direction was 20--30 deg. 
Measurements of surface slant and tilt error support this result. These findings confirm the light-from-above prior and provide evidence that the angle between the viewing direction and the light direction is assumed to be 20--30 deg above the viewpoint. <s> BIB003
In addition to actual manipulation tasks, common tasks include some form of verbal report, a forced choice among a short list of items, or a rating along a fixed scale (such as a Likert scale). Rating is usually done on a Likert scale, which uses (mostly) an odd number of possibilities (5, 7, or 9) and anchors the two ends of the scale, usually with opposing terms. The most common Likert scale is a 7-point scale with 1 meaning "strongly agree" and 7 meaning "strongly disagree". Typical quantitative tasks from perception research may be adapted to medical applications. Table 1 summarizes important tasks and specific measures with a focus on shape and depth perception. Orientation matching tasks are rather complex and require a more detailed discussion. The most common orientation matching task asks participants to place gauge figures (disks centered around an orthogonal line) at selected positions of a surface. Participants are asked to manipulate the orientation of each gauge figure so that its base plane is tangent to the surface and thus the orthogonal line matches the normal vector at that point of the surface. The curved surface is thus probed at different positions. Gauge figure tasks were pioneered by Stevens BIB001 and are widely used to assess the influence of visualization techniques on shape perception (e. g., [BGCP11,CSD * 09,KVDK92,SPV11]). Cole and colleagues [CSD * 09], for example, used a repeated measures shape task to determine which technique provided better shape perception as well as to measure how certain the participants were. Cole and colleagues also pioneered gauge string tasks, where a number of gauges (15 in their case) were placed on a horizontal line to analyze shape perception in a local region in-depth and to correlate the results with differential geometric properties, such as the occurrence of inflection points. Placing gauge figures is not easy, in particular because the gauge occludes parts of the surface.
O'Shea and colleagues BIB003 have discussed guidelines for gauge figure tasks, suggesting that gauge figures should be • drawn in red, • drawn with a small line width to reduce occlusions, • initially oriented randomly, and • shown in perspective projection. Moreover, it is useful to present the gauge figure in its current orientation enlarged at the boundary of the image (where the currently interesting part of the surface is not occluded) BIB002 . It is also important that the gauge figure does not interact with the surface (including occlusion effects!), since this would give direct feedback as to the correct location of the surface. Participants need to practice placing gauge figures and should be shown correct and bad placements [CSD * 09].
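Analyzing gauge figure placements reduces to comparing orientations. A small sketch of the usual measures, namely Stevens' slant/tilt decomposition and the angular error between the true surface normal and the adjusted gauge normal (unit vectors in view coordinates are assumed; the function names are illustrative):

```python
import math

def slant_tilt(normal):
    """Decompose a unit normal (view coordinates, z toward the viewer)
    into slant (angle to the line of sight, 0-90 degrees) and tilt
    (image-plane direction of the projected normal, 0-360 degrees)."""
    nx, ny, nz = normal
    slant = math.degrees(math.acos(max(-1.0, min(1.0, nz))))
    tilt = math.degrees(math.atan2(ny, nx)) % 360.0
    return slant, tilt

def angular_error(true_normal, gauge_normal):
    """Angle in degrees between the true surface normal and the normal
    of the participant's gauge figure: the usual per-probe error measure."""
    dot = sum(a * b for a, b in zip(true_normal, gauge_normal))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# A patch facing the viewer has zero slant; a gauge rotated 90 degrees
# away from the true normal yields a 90-degree error.
```

Per-probe errors of this kind are what studies such as [CSD * 09] aggregate and correlate with the visualization condition.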
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Eye Tracking-Based Research <s> This article presents a method for automating rendering parameter selection to simplify tedious user interaction and improve the usability of visualization systems. Our approach acquires the important/interesting regions of a dataset through simple user interaction with an eye tracker. Based on this importance information, we automatically compute reasonable rendering parameters using a set of heuristic rules, which are adapted from visualization experience and psychophysical experiments. A user study has been conducted to evaluate these rendering parameters, and while the parameter selections for a specific visualization result are subjective, our approach provides good preliminary results for general users while allowing additional control adjustment. Furthermore, our system improves the interactivity of a visualization system by significantly reducing the required amount of parameter selections and providing good initial rendering parameters for newly acquired datasets of similar types. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Eye Tracking-Based Research <s> This meta-analysis integrates 296 effect sizes reported in eye-tracking research on expertise differences in the comprehension of visualizations. Three theories were evaluated: Ericsson and Kintsch’s (Psychol Rev 102:211–245, 1995) theory of long-term working memory, Haider and Frensch’s (J Exp Psychol Learn Mem Cognit 25:172–190, 1999) information-reduction hypothesis, and the holistic model of image perception of Kundel et al. (Radiology 242:396–402, 2007). Eye movement and performance data were cumulated from 819 experts, 187 intermediates, and 893 novices. 
In support of the evaluated theories, experts, when compared with non-experts, had shorter fixation durations, more fixations on task-relevant areas, and fewer fixations on task-redundant areas; experts also had longer saccades and shorter times to first fixate relevant information, owing to superiority in parafoveal processing and selective attention allocation. Eye movements, reaction time, and performance accuracy were moderated by characteristics of visualization (dynamics, realism, dimensionality, modality, and text annotation), task (complexity, time-on-task, and task control), and domain (sports, medicine, transportation, other). These findings are discussed in terms of their implications for theories of visual expertise in professional domains and their significance for the design of learning environments. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Eye Tracking-Based Research <s> Eye tracking can be a suitable evaluation method for determining which regions and objects of a stimulus a human viewer perceived. Analysts can use eye tracking as a complement to other evaluation methods for a more holistic assessment of novel visualization techniques beyond time and error measures. Up to now, most stimuli in eye tracking are either static stimuli or videos. Since interaction is an integral part of visualization, an evaluation should include interaction. In this paper, we present an extensive literature review on evaluation methods for interactive visualizations. Based on the literature review we propose ideas for analyzing eye movement data from interactive stimuli. This requires looking critically at challenges induced by interactive stimuli. The first step is to collect data using different study methods. In our case, we look at using eye tracking, interaction logs, and thinking-aloud protocols. In addition, this requires a thorough synchronization of the mentioned study methods. 
To analyze the collected data, new analysis techniques have to be developed. We investigate existing approaches and how we can adapt them to new data types, and sketch ideas of what new approaches could look like. <s> BIB003
The use of eye tracking has become quite popular for evaluating user interfaces, web sites, and 2D visualizations BIB003 . Modern eye trackers can deliver precise and reliable results about foveal vision (i. e., the regions of a 2D image observed in high resolution). Eye motion evaluation focuses on scan paths and fixation regions in 2D screen coordinates. The disadvantages of eye tracking include that eye movements are often unintentional, that we may fail to recognize an object even if we have looked at it for a long time, and that eye motion is only weakly correlated with cognitive processes. Eye motion also does not indicate at which distance (e. g., which layer of a semitransparent 3D model) a person is focusing. Furthermore, peripheral vision cannot be detected with eye tracking [KDX * 12]. Eye tracking has been used in the visualization of medical image data, in particular to analyze how physicians inspect X-ray images in, for example, mammography data (see, e. g., [BHKS13, Kru00]). With respect to 3D medical visualization, Burgert and colleagues [BOJ * 07] investigated 3D renderings of the neck anatomy with enlarged lymph nodes. Experienced participants had significantly fewer saccadic movements and looked longer at the relevant regions, while novices tended to look around more. Eye tracking has also been used to automatically adjust volume rendering parameters such that gaze-determined regions of interest are highlighted BIB001 . The central and by far most reliable result, however, is that novices and experts have different eye-motion behavior. In a meta-study of eye motion when looking at visualizations, Gegenfurtner and colleagues BIB002 found that when a large enough number of participants is used, experts show shorter fixation durations and have longer saccades. They also have more fixations in the relevant areas and take less time before the first fixation on relevant information.
This is the same pattern found in many non-medical tasks, such as chess or driving [BYW * 11], and seems to reflect the degree of expertise in the relevant task. Lu and colleagues BIB001 provide a comprehensive overview of eye tracking-based research in visualization.
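The two eye-movement measures highlighted by the meta-analysis, fixation duration and saccade amplitude, are easy to compute from a recorded scan path. A minimal sketch; the (x, y, duration) fixation format is an assumption, and real eye-tracking pipelines first detect fixations from raw gaze samples:

```python
import math

def gaze_summary(fixations):
    """Summarize a scan path given as (x, y, duration_ms) fixations.

    Returns mean fixation duration (ms) and mean saccade amplitude
    (distance between consecutive fixation centers, in pixels), the
    measures in which experts and novices differ most reliably:
    experts tend toward shorter fixations and longer saccades.
    """
    mean_duration = sum(f[2] for f in fixations) / len(fixations)
    amplitudes = [math.hypot(b[0] - a[0], b[1] - a[1])
                  for a, b in zip(fixations, fixations[1:])]
    mean_amplitude = sum(amplitudes) / len(amplitudes) if amplitudes else 0.0
    return mean_duration, mean_amplitude

# Two fixations 5 px apart, lasting 200 ms and 300 ms:
dur, amp = gaze_summary([(0, 0, 200), (3, 4, 300)])
```

Comparing these two numbers between participant groups is the essence of the expert/novice analyses cited above.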
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Combined visualization of instruments and anatomical data. <s> Cable conduit installation equipped with L-and T-connecting members which are provided with releasable means for sliding on and interengagement with the installation conduit and/or means for sliding on and underlapping or overlapping with the conduit covering. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Combined visualization of instruments and anatomical data. <s> Currently, most researchers in visualization pay very little attention to vision science. The exception is when the effective use of color is the subject. Little research in flow visualization includes a discussion of the related perceptual theory. Nor does it include an evaluation of effectiveness of the display techniques that are generated. This is so, despite Laidlaw's paper showing that such an evaluation is relatively straightforward. Of course, it's not always necessary to relate visualization research to perceptual theory. If the purpose of the research is to increase the efficiency of an algorithm, then the proper test is one of efficiency, not of perceptual validity. But when a new representation of data is the subject of research, addressing how perceptually effective it is - either by means of a straightforward empirical comparison with existing methods or analytically, relating the new mapping to perceptual theory - should be a matter of course. A strong interdisciplinary approach, including the disciplines of perception, design, and computer science will produce better science and better design in that empirically and theoretically validated visual display techniques will result. <s> BIB002
Often, medical image data is visualized along with instruments, such as biopsy needles, stents, electrodes, and implants of all kinds. The precise location of instruments relative to anatomical structures needs to be conveyed. We know of no perception-based studies that compare different visualization techniques for such problems. Perception-guided visualization of blood flow. Compared to the large variety of blood flow visualizations [VPvP * 14], only a few techniques are perceptually-motivated and only one was evaluated in a quantitative study. This evaluation relates to the nested visualization problem of displaying vascular structures and embedded flow. There is relevant research on flow perception (e. g., how to convey flow direction and orientation effectively BIB002 ) which can be used for guidance. Designing perceptually effective blood flow visualizations is particularly challenging for unsteady flow and has to consider motion perception as well. Exaggerated shading. One perceptually-motivated technique for displaying shape is exaggerated shading (ES), where subtle local changes of the geometry are performed to enhance features [RBD06, ZCF * 10]. The deliberate emphasis of surface features may be beneficial for educational applications. Multimodal medical visualization based on combined scanners, such as PET/CT and PET/MRI scanners, is increasingly important. The visualization challenge is to fuse these images in a visualization such that the essential information from both datasets is visible and the overall visualization conveys the shape and depth information correctly. Many multimodal visualization techniques have been developed, but there is no empirical, quantitative comparison between them. The role of reflection. Certain established depth and shape cues have not been considered in medical visualization so far. For instance, specular reflections, which may also occur on some body organs, reveal a lot of information on spatial relations BIB001 .
The effect of specular reflection is worth investigating (e. g., for virtual colonoscopy, where a procedure is simulated that includes real-life wetness and reflections). Patient-doctor communication. Medical visualizations, in particular perceptually-motivated, illustrative visualizations, have a great potential for patient-doctor communication and for interdisciplinary discussions (e. g., in a tumor board). In both settings, users include those who are not familiar with slice-based visualizations and benefit from visualizations that emphasize important features and abstract from unnecessary details. Only very few papers mention these use cases and even fewer assess whether medical visualization techniques are indeed useful for such use cases. Perceptual consequences of interaction. In this survey article, we discussed the influence of enhanced visualization techniques on shape and depth perception. Ultimately, an enhanced understanding of the spatial relations is desired. Advanced interaction techniques, such as cutting, (selective) clipping, lens-based exploration, and virtual resection, contribute to this spatial understanding of 3D medical visualizations. It remains to be investigated how variants of these interaction techniques and combinations with the visualization techniques influence spatial understanding. More clinically relevant studies. The most important goal of medical visualization is to support diagnostic and treatment decisions in clinical practice, where 3D visualization techniques are incorporated in complex software assistants. To understand the consequences of decisions relating to visualization techniques, experiments with clinically-used software assistants (or very similar research prototypes) are required. Clinical decision situations, such as tumor board meetings, should be simulated to investigate, for example, whether the assessment of tumor infiltration changes as a consequence of advanced light source placement or global illumination.
More studies are needed that focus on specific clinical tasks with medical experts as participants. Such studies can reveal the influence of improved perception on cognitive processes, such as the selection of a treatment option. The ultimate goal is to understand whether the use of advanced visualization techniques matters for clinical decisions. Explore relations to other areas. Medical visualization has some special requirements based on the peculiarities of medical image data and the complex anatomical shapes to be depicted, often along with instruments or simulation results. There are, however, similarities to other areas, such as the visualization of plants and animals, which also exhibit organic shapes, as well as molecular visualization. Thus, an analysis of visualization techniques developed in these areas may inspire future medical visualization development.
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information.
<s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case.
We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). 
We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. 
Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We propose a simple and straightforward way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. We then propose specific non-parametric schemes for both spatial- and channel-wise weighting that boost the effect of highly active spatial responses and at the same time regulate burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state-of-the-art for approaches based on pre-trained networks. We also provide an easy-to-use, open source implementation that reproduces our results. 
<s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB010
CONTENT-BASED image retrieval (CBIR) has been a longstanding research topic in the computer vision community. The study of CBIR began in earnest in the early 1990s, when images were indexed by visual cues such as texture and color, and a myriad of algorithms and image retrieval systems were proposed. A straightforward strategy is to extract global descriptors; this idea dominated the image retrieval community in the 1990s and early 2000s. Yet a well-known problem is that global signatures may fail to be invariant to image changes such as illumination, translation, occlusion and truncation. Such variations compromise retrieval accuracy and limit the application scope of global descriptors. This problem has given rise to local feature based image retrieval. The focus of this survey is instance-level image retrieval. In this task, given a query image depicting a particular object/scene/architecture, the aim is to retrieve images containing the same object/scene/architecture, possibly captured under different views or illumination, or with occlusions. Instance retrieval departs from class retrieval in that the latter aims at retrieving images of the same class as the query. In the following, if not specified, we use "image retrieval" and "instance retrieval" interchangeably. The milestones of instance retrieval in the past years are presented in Fig. 1, in which the eras of the SIFT-based and CNN-based methods are highlighted. The majority of traditional methods can be considered to end in 2000, when Smeulders et al. presented a comprehensive survey of CBIR "at the end of the early years". Three years later (2003), the Bag-of-Words (BoW) model was introduced to the image retrieval community BIB001, and in 2004 it was applied to image classification BIB002, both relying on the SIFT descriptor BIB003. The retrieval community has since witnessed the prominence of the BoW model for over a decade, during which many improvements were proposed.
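To make the BoW idea concrete, the following toy sketch quantizes local descriptors to visual words and accumulates a fixed-length histogram. The random arrays are stand-ins: real systems would use SIFT descriptors and a codebook learned by (approximate) k-means on a large descriptor pool, so all sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 128-dim "SIFT-like" local descriptors for one image,
# and a small codebook (in practice trained with k-means on millions
# of descriptors from a separate image collection).
descriptors = rng.normal(size=(300, 128))   # local features of one image
codebook = rng.normal(size=(50, 128))       # 50 visual words

def bow_histogram(desc, words):
    """Quantize each local descriptor to its nearest visual word and
    accumulate an L2-normalized word-frequency histogram."""
    # Pairwise squared Euclidean distances, shape (n_desc, n_words).
    d2 = ((desc[:, None, :] - words[None, :, :]) ** 2).sum(-1)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(words)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

h = bow_histogram(descriptors, codebook)
print(h.shape)  # one fixed-length vector per image, regardless of descriptor count
```

Because every image maps to a vector of the codebook size, classic text-retrieval machinery (tf-idf weighting, inverted files) applies directly to these histograms.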
In 2012, Krizhevsky et al. BIB004 achieved state-of-the-art recognition accuracy in ILSVRC 2012 with AlexNet, exceeding the previous best results by a large margin. Since then, research focus has begun to shift to deep learning based methods BIB005, BIB006, BIB007, BIB008, especially the convolutional neural network (CNN). The SIFT-based methods mostly rely on the BoW model. BoW was originally proposed for modeling documents, because text is naturally parsed into words; it builds a word histogram for a document by accumulating word responses into a global vector. In the image domain, the introduction of the scale-invariant feature transform (SIFT) BIB003 made the BoW model feasible BIB001. SIFT originally comprises both a detector and a descriptor, though the two are now often used in isolation; in this survey, if not specified, SIFT refers to the 128-dim descriptor, a common practice in the community. With a pre-trained codebook (vocabulary), local features are quantized to visual words. An image can thus be represented in a form similar to a document, and classic weighting and indexing schemes can be leveraged. In recent years, the popularity of SIFT-based models has been overtaken by the convolutional neural network, a hierarchical structure that has been shown to outperform hand-crafted features in many vision tasks. In retrieval, competitive performance compared to the BoW models has been reported, even with short CNN vectors BIB008, BIB009, BIB010. The CNN-based retrieval models usually compute compact representations and employ the Euclidean distance or approximate nearest neighbor (ANN) search methods for retrieval. Current literature may directly employ pre-trained CNN models or perform fine-tuning for specific retrieval tasks. A majority of these methods feed the image into the network only once to obtain the descriptor. Others are based on patches which are passed through the network multiple times, in a manner similar to SIFT; we classify them as hybrid methods in this survey.
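The compact-descriptor retrieval stage described above can be sketched minimally. The random matrices below stand in for fixed-length global CNN descriptors (which a real system would obtain from one forward pass per image, e.g., pooled convolutional activations); only the ranking step is shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for fixed-length global CNN descriptors; dimensions are
# illustrative assumptions (real descriptors might be 256-4096-dim).
database = rng.normal(size=(1000, 512))   # one row per database image
query = rng.normal(size=(512,))

def l2n(x, axis=-1):
    """L2-normalize along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

db = l2n(database)
q = l2n(query)

# With L2-normalized vectors, Euclidean ranking equals cosine ranking,
# so a single matrix-vector product scores the whole database.
scores = db @ q
ranking = np.argsort(-scores)
print(ranking[:5])  # indices of the top-5 nearest database images
```

At larger scale, the exhaustive product would be replaced by an ANN index (e.g., product quantization or graph-based search) over the same normalized vectors.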
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality.
To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data.
But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. 
A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. 
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> This paper considers a family of metrics to compare images based on their local descriptors. It encompasses the VLAD descriptor and matching techniques such as Hamming Embedding. Making the bridge between these approaches leads us to propose a match kernel that takes the best of existing techniques by combining an aggregation procedure with a selective match kernel. Finally, the representation underpinning this kernel is approximated, providing a large scale image search both precise and scalable, as shown by our experiments on several benchmarks. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions.
The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. 
The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. 
The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Recent works show that image comparison based on local descriptors is corrupted by visual bursts, which tend to dominate the image similarity. The existing strategies, like power-law normalization, improve the results by discounting the contribution of visual bursts to the image similarity. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. 
When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. 
In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB017
According to the different visual representations, this survey categorizes the retrieval literature into two broad types: SIFT-based and CNN-based. The SIFT-based methods are further organized into three classes: using large, medium-sized or small codebooks. We note that the codebook size is closely related to the choice of encoding methods. The CNN-based methods are categorized into using pre-trained or fine-tuned CNN models, as well as hybrid methods. Their similarities and differences are summarized in Table 1 . The SIFT-based methods were predominantly studied before 2012 BIB007 (good works also appear in recent years BIB009 , BIB013 ). This line of methods usually uses one type of detector, e.g., Hessian-Affine, and one type of descriptor, e.g., SIFT. Encoding maps a local feature into a vector. Based on the size of the codebook used during encoding, we classify SIFT-based methods into three categories as below. Sivic and Zisserman BIB001 proposed Video Google in 2003, marking the beginning of the BoW model. Then, the hierarchical k-means and approximate k-means were proposed by Stewénius and Nistér and Philbin et al. BIB002 , respectively, marking the use of large codebooks in retrieval. In 2008, Jégou et al. BIB003 proposed Hamming Embedding, a milestone in using medium-sized codebooks. Then, compact visual representations for retrieval were proposed by Perronnin et al. BIB005 and Jégou et al. BIB006 in 2010. Although SIFT-based methods were still moving forward, CNN-based methods began to gradually take over, following the pioneering work of Krizhevsky et al. BIB007 . In 2014, Razavian et al. BIB010 proposed a hybrid method extracting multiple CNN features from an image. Babenko et al. BIB011 were the first to fine-tune a CNN model for generic instance retrieval. Both BIB014 , BIB015 employ the column features from pre-trained CNN models, and BIB015 inspires later state-of-the-art methods.
These milestones are the representative works of the categorization scheme in this survey. For SIFT-based methods, hand-crafted local invariant features are extracted, and different encoding and indexing strategies are leveraged according to the codebook size. For CNN-based methods, pre-trained models, fine-tuned models and hybrid methods are the primary types; fixed-length compact vectors are usually produced, combined with approximate nearest neighbor (ANN) methods. The SIFT-based methods fall into three classes. Using small codebooks: the visual words are fewer than several thousand, and compact vectors are generated BIB005 , BIB006 before dimension reduction and coding. Using medium-sized codebooks: given the sparsity of BoW and the low discriminative ability of visual words, the inverted index and binary signatures are used BIB003 ; the trade-off between accuracy and efficiency is a major influencing factor BIB008 . Using large codebooks: given the sparse BoW histograms and the high discriminative ability of visual words, the inverted index and memory-friendly signatures are used ; approximate methods are used in codebook generation and encoding , BIB002 . The CNN-based methods extract features using CNN models and usually build compact (fixed-length) representations. There are three classes. Hybrid methods: image patches are fed into the CNN multiple times for feature extraction BIB010 , and encoding and indexing are similar to SIFT-based methods BIB012 . Using pre-trained CNN models: features are extracted in a single pass using a CNN pre-trained on a large-scale dataset such as ImageNet BIB004 , with compact encoding/pooling techniques BIB014 , BIB015 . Using fine-tuned CNN models: the CNN model (e.g., pre-trained on ImageNet) is fine-tuned on a training set in which the images share similar distributions with the target database BIB011 .
CNN features can be extracted in an end-to-end manner through a single pass to the CNN model. The visual representations exhibit improved discriminative ability BIB016 , BIB017 .
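The pooling step used to turn pre-trained CNN activations into a compact global descriptor can be made concrete with a short sketch. The following NumPy snippet is an illustrative simplification, not any particular paper's implementation: the function name `global_descriptor` and the toy feature map are assumptions, and "mac"/"spoc" here denote channel-wise max-pooling and sum-pooling in the spirit of MAC- and SPoC-style descriptors.

```python
import numpy as np

def global_descriptor(fmap, pooling="mac"):
    """Pool a C x H x W conv feature map into one C-dim global descriptor.

    "mac": max activation per channel; "spoc": sum of activations per
    channel. The result is L2-normalized so images can be compared with
    a dot product (cosine similarity).
    """
    cols = fmap.reshape(fmap.shape[0], -1)  # C x (H*W) "column" features
    if pooling == "mac":
        d = cols.max(axis=1)
    elif pooling == "spoc":
        d = cols.sum(axis=1)
    else:
        raise ValueError("unknown pooling: %s" % pooling)
    return d / (np.linalg.norm(d) + 1e-12)

# Toy feature map: 4 channels on a 3x3 spatial grid.
fmap = np.arange(36, dtype=np.float64).reshape(4, 3, 3)
mac = global_descriptor(fmap, "mac")
spoc = global_descriptor(fmap, "spoc")
assert mac.shape == (4,) and spoc.shape == (4,)
```

In a real system `fmap` would be the activation tensor of a convolutional layer; the descriptor length equals the number of channels, independent of image size.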
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pipeline <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pipeline <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to "visual words" selected from a discrete vocabulary. This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space.
We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pipeline <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pipeline <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. 
In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB004
The pipeline of SIFT-based retrieval is introduced in Fig. 2 . Local Feature Extraction. Suppose we have a gallery $\mathcal{G}$ consisting of $N$ images. Given a feature detector, we extract local descriptors from the regions around the sparse interest points or dense patches. We denote the local descriptors of $D$ detected regions in an image as $\{f_i\}_{i=1}^{D}$, $f_i \in \mathbb{R}^p$. Codebook Training. SIFT-based methods train a codebook offline. Each visual word in the codebook lies in the center of a subspace, called the "Voronoi cell". A larger codebook corresponds to a finer partitioning, resulting in more discriminative visual words, and vice versa. Suppose that a pool of local descriptors $\mathcal{F} = \{f_i\}_{i=1}^{M}$ is computed from an unlabeled training set. The baseline approach, i.e., k-means, partitions the $M$ points into $K$ clusters; the $K$ visual words thus constitute a codebook of size $K$. Feature Encoding. A local descriptor $f_i \in \mathbb{R}^p$ is mapped into a feature embedding $g_i \in \mathbb{R}^l$ through the feature encoding process, $f_i \rightarrow g_i$. When k-means clustering is used, $f_i$ can be encoded according to its distances to the visual words. For large codebooks, hard , BIB001 and soft quantization BIB002 are good choices. In the former, the resulting embedding $g_i$ has only one non-zero entry; in the latter, $f_i$ can be quantized to a small number of visual words. A global signature is produced after a sum-pooling of all the embeddings of local features. For medium-sized codebooks, additional binary signatures can be generated to preserve the original information. When using small codebooks, popular encoding schemes include the vector of locally aggregated descriptors (VLAD) BIB003 , the Fisher vector (FV) BIB004 , etc.
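The small-codebook encoding step can be sketched in a few lines. Below is a minimal NumPy version of VLAD under the notation above (descriptors are $D \times p$, the codebook is $K \times p$); it uses hard quantization and plain L2 normalization, omitting refinements such as power-law (signed square-root) normalization and PCA used in practice. The function name `vlad_encode` is illustrative.

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """VLAD: hard-assign each local descriptor (row of a D x p matrix) to
    its nearest visual word (row of a K x p codebook), sum the residuals
    per word, and L2-normalize the flattened K*p vector."""
    # Squared distances between all descriptors and all visual words.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)              # hard quantization
    K, p = codebook.shape
    v = np.zeros((K, p))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            v[k] = (members - codebook[k]).sum(axis=0)  # residual sum
    v = v.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

# Toy example: 50 eight-dim "local descriptors", codebook of K=4 words.
rng = np.random.default_rng(0)
local_feats = rng.normal(size=(50, 8))
words = rng.normal(size=(4, 8))
vlad = vlad_encode(local_feats, words)
assert vlad.shape == (32,)
```

Replacing the residual sum per word with a one-hot count per word recovers the BoW histogram, which is why VLAD is often described as a residual-enriched BoW.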
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. 
The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. Mikolajczyk and Schmid (June 2003) recently evaluated a variety of approaches and identified the SIFT [D. G. Lowe, 1999] algorithm as being the most resistant to common image deformations. This paper examines (and improves upon) the local image descriptor used by SIFT. Like SIFT, our descriptors encode the salient aspects of the image gradient in the feature point's neighborhood; however, instead of using SIFT's smoothed weighted histograms, we apply principal components analysis (PCA) to the normalized gradient patch. Our experiments demonstrate that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation. We also present results showing that using these descriptors in an image retrieval application results in increased accuracy and faster matching. 
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Setp. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. 
This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora.
<s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In this survey, we give an overview of invariant interest point detectors, how they evolvd over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. 
<s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In state-of-the-art image retrieval systems, an image is represented by a bag of visual words obtained by quantizing high-dimensional local image descriptors, and scalable schemes inspired by text retrieval are then applied for large scale image indexing and retrieval. Bag-of-words representations, however: 1) reduce the discriminative power of image features due to feature quantization; and 2) ignore geometric relationships among visual words. Exploiting such geometric constraints, by estimating a 2D affine transformation between a query image and each candidate image, has been shown to greatly improve retrieval precision but at high computational cost. In this paper we present a novel scheme where image features are bundled into local groups. Each group of bundled features becomes much more discriminative than a single feature, and within each group simple and robust geometric constraints can be efficiently enforced. Experiments in Web image search, with a database of more than one million images, show that our scheme achieves a 49% improvement in average precision over the baseline bag-of-words approach. Retrieval performance is comparable to existing full geometric verification approaches while being much less computationally expensive. When combined with full geometric verification we achieve a 77% precision improvement over the baseline bag-of-words approach, and a 24% improvement over full geometric verification alone. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). 
In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> Many visual search and matching systems represent images using sparse sets of "visual words": descriptors that have been quantized by assignment to the best-matching symbol in a discrete vocabulary. Errors in this quantization procedure propagate throughout the rest of the system, either harming performance or requiring correction using additional storage or processing. This paper aims to reduce these quantization errors at source, by learning a projection from descriptor space to a new Euclidean space in which standard clustering techniques are more likely to assign matching descriptors to the same cluster, and nonmatching descriptors to different clusters. ::: ::: To achieve this, we learn a non-linear transformation model by minimizing a novel margin-based cost function, which aims to separate matching descriptors from two classes of non-matching descriptors. Training data is generated automatically by leveraging geometric consistency. Scalable, stochastic gradient methods are used for the optimization. 
For the case of particular object retrieval, we demonstrate impressive gains in performance on a ground truth dataset: our learnt 32-D descriptor without spatial re-ranking outperforms a baseline method using 128-D SIFT descriptors with spatial re-ranking. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> We describe a scalable approach to 3D smooth object retrieval which searches for and localizes all the occurrences of a user outlined object in a dataset of images in real time. The approach is illustrated on sculptures. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.
<s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The bag-of-features(BOF) image representation [7] is popular in largescale image retrieval. With BOF, the memory to store the inverted index file and the search complexity are both approximately linearly increased with the number of images. To address the retrieval efficiency and the memory constraint problem, besides some improvement work based on BOF, there come alternative approaches which aggregate local descriptors in one image into a single vector using Fisher Vector [6] or Vector of Local Aggregated Descriptor (VLAD) [1]. 
It has been shown in [1] that with as few as 16 bytes to represent an image, the retrieval performance is still comparable to that of the BOF representation. In this paper, we illustrate that Fisher Vector, VLAD and BOF can be uniformly derived in two steps: i Encoding – separately map each local descriptor into a code, and ii Pooling – aggregate all codes from one image into a single vector. Motivated by the success of these two-step approaches, we propose to use sparse coding(SC) framework to aggregate local feature for image retrieval. SC framework is firstly introduced by [10] for the task of image classification. It is a classical two-step approach: Step 1: Encoding. Each local descriptor x from an image is encoded into an N-dimensional vector u = [u1,u2, ...,uN ] by fitting a linear model with sparsity (L1) constraint: <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> This paper proposes a pooling strategy for local descriptors to produce a vector representation that is orientation-invariant yet implicitly incorporates the relative angles between features measured by their dominant orientation. This pooling is associated with a similarity metric that ensures that all the features have undergone a comparable rotation. This approach is especially effective when combined with dense oriented features, in contrast to existing methods that either rely on oriented features extracted on key points or on non-oriented dense features. The interest of our approach in a retrieval scenario is demonstrated on popular benchmarks comprising up to 1 million database images. <s> BIB017 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. 
Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB018 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal. First, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity. Second, it is shown that descriptor dimensionality reduction can also be formulated as a convex optimisation problem, using Mahalanobis matrix nuclear norm regularisation. Both formulations are based on discriminative large margin learning constraints. 
As the third contribution, we evaluate the performance of the compressed descriptors, obtained from the learnt real-valued descriptors by binarisation. Finally, we propose an extension of our learning formulations to a weakly supervised case, which allows us to learn the descriptors from unannotated image collections. It is demonstrated that the new learning methods improve over the state of the art in descriptor learning on the annotated local patches data set of Brown et al. and unannotated photo collections of Philbin et al. . <s> BIB019 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> This paper focuses on the image retrieval task. We propose the use of dense feature points computed on several color channels to improve the retrieval system. To validate our approach, an evaluation of various SIFT extraction strategies is performed. Detected SIFT are compared with dense SIFT. Dense color descriptors: C-SIFT and T-SIFT are then utilized. A comparison between standard and rotation invariant features is further achieved. Finally, several encoding strategies are studied: Bag of Visual Words (BOW), Fisher vectors, and vector of locally aggregated descriptors (VLAD). The presented approaches are evaluated on several datasets and we show a large improvement over the baseline. <s> BIB020 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> We consider a pipeline for image classification or search based on coding approaches like bag of words or Fisher vectors. In this context, the most common approach is to extract the image patches regularly in a dense manner on several scales. This paper proposes and evaluates alternative choices to extract patches densely. Beyond simple strategies derived from regular interest region detectors, we propose approaches based on superpixels, edges, and a bank of Zernike filters used as detectors. 
The different approaches are evaluated on recent image retrieval and fine-grained classification benchmarks. Our results show that the regular dense detector is outperformed by other methods in most situations, leading us to improve the state-of-the-art in comparable setups on standard retrieval and fine-grained benchmarks. As a byproduct of our study, we show that existing methods for blob and superpixel extraction achieve high accuracy if the patches are extracted along the edges and not around the detected regions. <s> BIB021
Local invariant features aim at accurate matching of local structures between images BIB007 . SIFT-based methods usually share a similar feature extraction step composed of a feature detector and a descriptor. Local Detector. Interest point detectors aim to reliably localize a set of stable local regions under various imaging conditions. In the retrieval community, finding affine-covariant regions has been preferred. Such detectors are called "covariant" because the shapes of the detected regions change with affine transformations, so that the region content (descriptors) can be invariant. This kind of detector differs from keypoint-centric detectors such as the Hessian detector , and from those focusing on scale-invariant regions such as the difference of Gaussians (DoG) BIB012 detector.

[Figure caption: A general pipeline of SIFT- and CNN-based retrieval models. Features are computed from hand-crafted detectors for SIFT, and from densely applied filters or image patches for CNN. In both methods, under small codebooks, encoding/pooling is employed to produce compact vectors. In SIFT-based methods, the inverted index is necessary under large/medium-sized codebooks. The CNN features can also be computed in an end-to-end way using fine-tuned CNN models.]

Elliptical regions which are adapted to the local intensity patterns are produced by affine detectors. This ensures that the same local structure is covered under deformations caused by viewpoint variances, a problem often encountered in instance retrieval. In the milestone work BIB001 , the Maximally Stable Extremal Region (MSER) detector BIB002 and the affine-extended Harris-Laplace detector are employed, both of which are affine-invariant region detectors. MSER is used in several later works , BIB009 . Starting from BIB006 , the Hessian-affine detector has been widely adopted in retrieval.
It has been shown to be superior to the difference of Gaussians detector BIB008 , BIB018 , due to its advantage in reliably detecting local structures under large viewpoint changes. To fix the orientation ambiguity of these affine-covariant regions, the gravity assumption is made BIB010 . This practice, which dismisses the orientation estimation, is employed by later works BIB015 , BIB019 and demonstrates consistent improvement on architecture datasets where the objects are usually upright. Other non-affine detectors have also been tested in retrieval, such as the Laplacian of Gaussian (LoG) and Harris detectors used in BIB016 . For objects with smooth surfaces BIB013 , few interest points can be detected, so the object boundaries are good candidates for local description. On the other hand, some works employ dense region detectors. In a comparison between densely sampled image patches and detected patches, Sicre et al. BIB020 report the superiority of the former. To recover the rotation invariance of dense sampling, the dominant angle of patches is estimated in BIB017 . A comprehensive comparison of various dense sampling strategies, interest point detectors, and those in between can be found in BIB021 . Local Descriptor. Given a set of detected regions, descriptors encode the local content. SIFT BIB012 has been used as the default descriptor. The 128-dim vector has been shown to outperform competing descriptors in matching accuracy BIB004 . In an extension, PCA-SIFT BIB003 reduces the dimension from 128 to 36 to speed up the matching process, at the cost of more time in feature computation and some loss of distinctiveness. Another improvement is RootSIFT BIB015 , calculated in two steps: 1) $\ell_1$-normalize the SIFT descriptor, and 2) take the square root of each element. RootSIFT is now used as a routine in SIFT-based retrieval. Apart from SIFT, SURF BIB005 is also widely used. It combines the Hessian-Laplace detector with a local descriptor built from local gradient histograms.
The integral image is used for acceleration. SURF has a matching accuracy comparable to SIFT and is faster to compute. See for comparisons between SIFT, PCA-SIFT, and SURF. To further accelerate matching, binary descriptors BIB014 replace the Euclidean distance with the Hamming distance during matching. Apart from hand-crafted descriptors, some works also propose learning schemes to improve the discriminative ability of local descriptors. For example, Philbin et al. BIB011 propose a non-linear transformation so that the projected SIFT descriptor yields smaller distances for true matches. Simonyan et al. BIB019 improve this process by learning both the pooling region and a linear descriptor projection.
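The two-step RootSIFT transform described above is simple to sketch. The following NumPy snippet is a minimal illustration (the function name `root_sift` is ours, not from any cited library); since SIFT descriptors are non-negative, the resulting vectors come out $\ell_2$-normalized automatically:

```python
import numpy as np

def root_sift(descriptors, eps=1e-12):
    """RootSIFT: 1) L1-normalize each descriptor (rows of an N x 128
    matrix), 2) take the element-wise square root. Euclidean distance
    between RootSIFT vectors then corresponds to the Hellinger kernel
    on the original SIFT histograms."""
    d = np.asarray(descriptors, dtype=np.float64)
    l1 = np.abs(d).sum(axis=1, keepdims=True) + eps  # eps guards all-zero rows
    return np.sqrt(d / l1)

# SIFT descriptors are non-negative, so RootSIFT vectors have unit L2 norm.
d = np.random.rand(5, 128)
r = root_sift(d)
print(np.allclose(np.linalg.norm(r, axis=1), 1.0))  # prints True
```

In practice this transform is applied once after descriptor extraction and before quantization, at negligible cost.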
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Small Codebooks <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Small Codebooks <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. 
<s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Small Codebooks <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Small Codebooks <s> This paper addresses the construction of a short-vector (128D) image representation for large-scale image and particular object retrieval. In particular, the method of joint dimensionality reduction of multiple vocabularies is considered. We study a variety of vocabulary generation techniques: different k-means initializations, different descriptor transformations, different measurement regions for descriptor extraction. 
Our extensive evaluation shows that different combinations of vocabularies, each partitioning the descriptor space in a different yet complementary manner, results in a significant performance improvement, which exceeds the state-of-the-art. <s> BIB004
A small codebook contains a few thousand visual words or fewer, so the computational complexity of codebook generation and encoding is moderate. Representative works include BoW BIB001 , VLAD BIB002 and FV BIB003 . We mainly discuss VLAD and FV, and refer readers to BIB004 for a comprehensive evaluation of BoW compact vectors.
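To make the encoding step concrete, here is a minimal NumPy sketch of VLAD aggregation in the spirit of BIB002 : residuals of local descriptors to their nearest codebook centers are summed, then power- and $\ell_2$-normalized. This is a simplified illustration under common conventions, not the reference implementation:

```python
import numpy as np

def vlad(descriptors, centers):
    """Minimal VLAD encoding.

    descriptors: (N, D) local features of one image
    centers:     (K, D) codebook obtained by k-means
    returns:     (K * D,) normalized VLAD vector
    """
    K, D = centers.shape
    # Hard-assign each descriptor to its nearest center.
    dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    v = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            v[k] = (members - centers[k]).sum(axis=0)  # residual sum
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))  # power (signed square-root) normalization
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

With K = 64 or 256 centers and D = 128 SIFT dimensions, the resulting vector is K x 128-dimensional and is usually compressed by PCA before indexing.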
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> Semantic hashing[1] seeks compact binary codes of data-points so that the Hamming distance between codewords correlates with semantic similarity. In this paper, we show that the problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP hard. By relaxing the original problem, we obtain a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. 
By utilizing recent results on convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel data-point. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state-of-the art. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. 
Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. 
Source code and models to reproduce the experiments in the paper is made publicly available. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. <s> BIB006
Clustering complexity depends heavily on the codebook size. In works based on VLAD BIB003 or FV BIB004 , the codebook sizes are typically small, e.g., 256. For VLAD, flat k-means is employed for codebook generation. For FV, a Gaussian mixture model (GMM), i.e., $u_\lambda(x) = \sum_{i=1}^{K} w_i u_i(x)$, where $K$ is the number of Gaussian mixtures, is trained using maximum likelihood estimation. The GMM describes the feature space with a mixture of $K$ Gaussian distributions and can be denoted as $\lambda = \{w_i, \mu_i, \Sigma_i\}, i = 1, \ldots, K$, where $w_i$, $\mu_i$ and $\Sigma_i$ represent the mixture weight, the mean vector and the covariance matrix of Gaussian $u_i$, respectively. Approximate methods are critical when assigning data to a large number of clusters. In the retrieval community, two representative works are hierarchical k-means (HKM) and approximate k-means (AKM) BIB001 , as illustrated in Figs. 1 and 3. Proposed in 2006, HKM applies standard k-means to the training features hierarchically. It first partitions the points into a few clusters (e.g., $k \ll K$) and then recursively partitions each cluster into further clusters. In every recursion, each point is assigned to one of the $k$ clusters, and the depth of the cluster tree is $O(\log K)$, where $K$ is the target cluster number. The computational cost of HKM is therefore $O(kM \log K)$, where $M$ is the number of training samples. This is much smaller than the complexity $O(MK)$ of flat k-means when $K$ is large (a large codebook). The other milestone in large codebook generation is AKM BIB001 . This method indexes the $K$ cluster centers using a forest of random k-d trees, so that the assignment step can be performed efficiently with ANN search. In AKM, the cost of assignment can be written as $O(K \log K + vM \log K) = O(vM \log K)$, where $v$ is the number of nearest cluster candidates to be accessed in the k-d trees. The computational complexity of AKM is thus on par with that of HKM and is significantly smaller than that of flat k-means when $K$ is large.
Experiments show that AKM is superior to HKM BIB001 due to its lower quantization error (see Section 3.4.2). In most AKM-based methods, the default choice for ANN search is FLANN BIB006 .
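The complexity argument for HKM can be illustrated with a toy two-level vocabulary tree: quantizing a descriptor costs two small nearest-center searches, $O(2k)$, instead of a flat search over all $K = k^2$ leaves. The sketch below is a didactic NumPy toy (function names are ours), not the actual HKM or AKM implementation:

```python
import numpy as np

def build_two_level_tree(data, k, iters=10, seed=0):
    """Toy HKM with depth 2: k-means on the data, then k-means again
    inside each cluster, giving up to K = k * k leaf centers reachable
    with 2k distance computations per query instead of k * k."""
    rng = np.random.default_rng(seed)

    def kmeans(x, k):
        c = x[rng.choice(len(x), size=k, replace=len(x) < k)]
        for _ in range(iters):
            a = ((x[:, None, :] - c[None, :, :]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if (a == j).any():
                    c[j] = x[a == j].mean(0)
        return c, a

    roots, assign = kmeans(data, k)
    leaves = []
    for j in range(k):
        sub = data[assign == j]
        # Guard against (unlikely) empty clusters on toy data.
        leaves.append(kmeans(sub, k)[0] if len(sub) else roots[j][None, :])
    return roots, leaves

def tree_assign(x, roots, leaves):
    """Greedy root-to-leaf quantization of one descriptor: O(2k)."""
    j = int(((roots - x) ** 2).sum(-1).argmin())
    i = int(((leaves[j] - x) ** 2).sum(-1).argmin())
    return j, i  # (branch, leaf) identifies the visual word
```

The greedy descent is also the source of HKM's higher quantization error: once a descriptor commits to the wrong branch at the root, the correct leaf is unreachable, which is what AKM's k-d tree search over all $K$ centers avoids.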
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. 
We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. 
We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC utilizes the locality constraints to project each descriptor into its local-coordinate system, and the projected coordinates are integrated by max pooling to generate the final representation. With linear classifier, the proposed approach performs remarkably better than the traditional nonlinear SPM, achieving state-of-the-art performance on several benchmarks. Compared with the sparse coding strategy [22], the objective function used by LLC has an analytical solution. In addition, the paper proposes a fast approximated LLC method by first performing a K-nearest-neighbor search and then solving a constrained least square fitting problem, bearing computational complexity of O(M + K2). Hence even with very large codebooks, our system can still process multiple frames per second. This efficiency significantly adds to the practical values of LLC for real applications. 
<s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtrac- tion and the negative evidence, i.e., a visual word that is mutually miss- ing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality re- duction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Bag-of-Words lies at a heart of modern object category recognition systems. 
After descriptors are extracted from images, they are expressed as vectors representing visual word content, referred to as mid-level features. In this paper, we review a number of techniques for generating mid-level features, including two variants of Soft Assignment, Locality-constrained Linear Coding, and Sparse Coding. We also isolate the underlying properties that affect their performance. Moreover, we investigate various pooling methods that aggregate mid-level features into vectors representing images. Average pooling, Max-pooling, and a family of likelihood inspired pooling strategies are scrutinised. We demonstrate how both coding schemes and pooling methods interact with each other. We generalise the investigated pooling methods to account for the descriptor interdependence and introduce an intuitive concept of improved pooling. We also propose a coding-related improvement to increase its speed. Lastly, state-of-the-art performance in classification is demonstrated on Caltech101, Flower17, and ImageCLEF11 datasets. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the art compact descriptor introduced by Jegou et al. for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance, second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. 
These two methods set a new state-of-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and localization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors). <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Recent works on image retrieval have proposed to index images by compact representations encoding powerful local descriptors, such as the closely related VLAD and Fisher vector. By combining such a representation with a suitable coding technique, it is possible to encode an image in a few dozen bytes while achieving excellent retrieval results. This paper revisits some assumptions proposed in this context regarding the handling of "visual burstiness", and shows that ad-hoc choices are implicitly done which are not desirable. Focusing on VLAD without loss of generality, we propose to modify several steps of the original design. Albeit simple, these modifications significantly improve VLAD and make it compare favorably against the state of the art. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Wining method of Fine-grain image classification challenge 2013.Late combination of two indexing and classification strategies.Good practices for fine grain image classification.Key features: descriptors filtering, spatial coordinates coding, active learning. This paper describes the joint submission of Inria and Xerox to their joint participation to the FGCOMP'2013 challenge. Although the proposed system follows most of the standard Fisher classification pipeline, we describe a few key features and good practices that significantly improve the accuracy when specifically considering fine-grain classification tasks. 
In particular, we consider the late fusion of two systems both based on Fisher vectors, but for which we choose drastically design choices that make them very complementary. Moreover, we propose a simple yet effective filtering strategy, which significantly boosts the performance for several class domains. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Image search systems based on local descriptors typically achieve orientation invariance by aligning the patches on their dominant orientations. Albeit successful, this choice introduces too much invariance because it does not guarantee that the patches are rotated consistently. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> State-of-the-art patch-based image representations involve a pooling operation that aggregates statistics computed from local descriptors. Standard pooling operations include sum- and max-pooling. 
Sum-pooling lacks discriminability because the resulting representation is strongly influenced by frequent yet often uninformative descriptors, but only weakly influenced by rare yet potentially highly-informative ones. Max-pooling equalizes the influence of frequent and rare descriptors but is only applicable to representations that rely on count statistics, such as the bag-of-visual-words (BOV) and its soft- and sparse-coding extensions. We propose a novel pooling mechanism that achieves the same effect as max-pooling but is applicable beyond the BOV and especially to the state-of-the-art Fisher Vector -- hence the name Generalized Max Pooling (GMP). It involves equalizing the similarity between each patch and the pooled representation, which is shown to be equivalent to re-weighting the per-patch statistics. We show on five public image classification benchmarks that the proposed GMP can lead to significant performance gains with respect to heuristic alternatives. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The bag-of-words (BoW) model treats images as sets of local descriptors and represents them by visual word histograms. The Fisher vector (FV) representation extends BoW, by considering the first and second order statistics of local descriptors. In both representations local descriptors are assumed to be identically and independently distributed (iid), which is a poor assumption from a modeling perspective. It has been experimentally observed that the performance of BoW and FV representations can be improved by employing discounting transformations such as power normalization. In this paper, we introduce non-iid models by treating the model parameters as latent variables which are integrated out, rendering all local regions dependent. Using the Fisher kernel principle we encode an image by the gradient of the data log-likelihood w.r.t. the model hyper-parameters.
Our models naturally generate discounting effects in the representations, suggesting that such transformations have proven successful because they closely correspond to the representations obtained for non-iid models. To enable tractable computation, we rely on variational free-energy bounds to learn the hyper-parameters and to compute approximate Fisher kernels. Our experimental evaluation results validate that our models lead to performance improvements comparable to using power normalization, as employed in state-of-the-art feature aggregation methods. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> This paper revisits the vector of locally aggregated descriptors (VLAD), which aggregates the residuals of local descriptors to their cluster centers. Since VLAD usually adopts a small-size codebook, the clusters are coarse and residuals not discriminative. To address this problem, this paper proposes to generate a number of residual codebooks descended from the original clusters. After quantizing local descriptors with these codebooks, we pool the resulting secondary residuals as well as the primary ones to obtain the fine residuals. We show that, with two-step aggregation, the fine-residual VLAD has the same dimension as the original. Experiments on two image search benchmarks confirm the improved discriminative power of our method: we observe consistent superiority to the baseline and competitive performance to the state-of-the-arts. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Visual search and image retrieval underpin numerous applications; however, the task is still challenging predominantly due to the variability of object appearance and the ever-increasing size of the databases, often exceeding billions of images.
Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large scale datasets, Holidays1M and Oxford1M, SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while CNN-based RVD-W achieve a mAP of 63.5 and 44.8 percent, all yielding superior performance to the state-of-the-art. <s> BIB017
Due to the small codebook size, relatively complex and information-preserving encoding techniques can be applied. We mainly describe FV, VLAD and their improvements in this section. With a pre-trained GMM model, FV describes the averaged first- and second-order differences between local features and the GMM centers. Its dimension is 2pK, where p is the dimension of the local descriptors and K is the codebook size of the GMM. FV usually undergoes power normalization BIB002 , BIB003 to suppress the burstiness problem (to be described in Section 3.4.3). In this step, each component of FV undergoes a non-linear transformation parameterized by α, x_i := sign(x_i)|x_i|^α. Then ℓ2 normalization is employed. Later, FV is improved from different aspects. For example, Koniusz et al. BIB008 augment each descriptor with its spatial coordinates and associated tunable weights. In BIB011 , larger codebooks (up to 4,096) are generated and demonstrate superior classification accuracy to smaller codebooks, at the cost of computational efficiency. To correct the assumption that local regions are identically and independently distributed (iid), Cinbis et al. BIB015 propose non-iid models that discount the burstiness effect and yield improvement over power normalization. The VLAD encoding scheme proposed by Jégou et al. BIB004 can be thought of as a simplified version of FV. It quantizes a local feature to its nearest visual word in the codebook and records the difference between them. Exact nearest neighbor search is affordable here because of the small codebook size. The residual vectors are then aggregated by sum pooling followed by normalizations. The dimension of VLAD is pK. Comparisons of some important encoding techniques are presented in , BIB006 . Again, the improvement of VLAD comes from multiple aspects.
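The baseline VLAD pipeline described above, together with the power normalization x_i := sign(x_i)|x_i|^α and ℓ2 normalization shared with FV, can be sketched as follows. This is a minimal illustration on toy data; a real system would extract SIFT descriptors and train the k-means codebook offline.

```python
import numpy as np

def vlad_encode(descriptors, centroids, alpha=0.5):
    """Minimal VLAD sketch: hard assignment, residual sum pooling,
    power normalization, then l2 normalization.

    descriptors: (D, p) local features; centroids: (K, p) codebook.
    Returns a pK-dimensional image representation.
    """
    K, p = centroids.shape
    # Hard-assign each descriptor to its nearest visual word.
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    # Aggregate residuals per cluster (sum pooling).
    v = np.zeros((K, p))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            v[k] = (members - centroids[k]).sum(axis=0)
    v = v.ravel()
    # Power normalization to suppress burstiness, then l2 normalization.
    v = np.sign(v) * np.abs(v) ** alpha
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

Setting alpha=1 disables the power normalization, which recovers the plain sum-pooled VLAD; the FV case differs in that soft (GMM posterior) assignment and second-order statistics would be accumulated instead.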
In BIB007 , Jégou and Chum suggest the usage of PCA and whitening (denoted as PCA_w in Table 5 ) to de-correlate visual word co-occurrences, and the training of multiple codebooks to reduce quantization loss. In BIB009 , Arandjelović et al. extend VLAD in three aspects: 1) normalizing the residual sum within each coarse cluster, called intra-normalization, 2) vocabulary adaptation to address the dataset transfer problem, and 3) multi-VLAD for small object discovery. Concurrent to BIB009 , Delhumeau et al. BIB010 propose to normalize each residual vector instead of the residual sums; they also advocate local PCA within each Voronoi cell, which, unlike BIB006 , performs no dimension reduction. A recent work BIB017 employs soft assignment and empirically learns optimal weights for each rank to improve over hard quantization. Note that some general techniques benefit various embedding methods, such as VLAD, FV, BoW, locality-constrained linear coding (LLC) BIB005 and monomial embeddings. To improve the discriminative ability of embeddings, Tolias et al. BIB012 propose an orientation-covariant embedding that encodes the dominant orientation of the SIFT regions jointly with the SIFT descriptor. It achieves a covariance property similar to weak geometric consistency (WGC) BIB001 by using geometric cues within regions of interest, so that matching points with similar dominant orientations are up-weighted and vice versa. The triangulation embedding BIB013 only considers the direction instead of the magnitude of the input vectors. Jégou et al. BIB013 also present a democratic aggregation that limits the interference between the mapped vectors. Bearing a similar idea to democratic aggregation, Murray and Perronnin BIB014 propose generalized max pooling (GMP), optimized by equalizing the similarity between the pooled vector and each coding representation. The computational complexity of BoW, VLAD and FV is similar.
We neglect the offline training and SIFT extraction steps. During visual word assignment, each feature computes its distance (or soft assignment coefficient) to all the visual words (or Gaussians) for VLAD (or FV), so this step has a complexity of O(pK). In the other steps, complexity does not exceed O(pK). Considering the sum-pooling of the embeddings, the encoding process has an overall complexity of O(pKD), where D is the number of features in an image. Triangulation embedding BIB013 , a variant of VLAD, has a similar complexity. The complexity of multi-VLAD BIB009 is O(pKD), too, but it has a more costly matching process. Hierarchical VLAD BIB016 has a complexity of O(pKK'D), where K' is the size of the secondary codebook. In the aggregation stage, both GMP BIB014 and democratic aggregation BIB013 have high complexity. The complexity of GMP is O(P^2K), where P is the dimension of the feature embedding, while the computational cost of democratic aggregation comes from the Sinkhorn algorithm.
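The GMP idea mentioned above, equalizing the similarity between the pooled vector and each per-patch embedding, can be illustrated as a ridge-regularized least-squares problem: find phi minimizing ||Phi @ phi - 1||^2 + lam * ||phi||^2, where Phi stacks the patch embeddings. This is a sketch of that formulation only; the regularization weight lam and the direct solver are illustrative choices, not the tuned settings of BIB014.

```python
import numpy as np

def generalized_max_pooling(Phi, lam=1.0):
    """GMP sketch: pooled vector whose dot product with every patch
    embedding is pushed toward 1 (ridge-regularized least squares).

    Phi: (N, P) matrix of per-patch embeddings. Returns a P-dim vector.
    """
    N, P = Phi.shape
    ones = np.ones(N)
    # Normal equations of min_phi ||Phi @ phi - 1||^2 + lam * ||phi||^2
    A = Phi.T @ Phi + lam * np.eye(P)
    return np.linalg.solve(A, Phi.T @ ones)
```

With lam -> 0 and well-conditioned Phi, each patch contributes equally to the pooled similarity regardless of how frequently its pattern occurs, which is the max-pooling-like equalization effect; sum pooling corresponds instead to simply averaging the rows of Phi.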
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R^d, the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> Semantic hashing [1] seeks compact binary codes of data-points so that the Hamming distance between codewords correlates with semantic similarity. In this paper, we show that the problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP hard. By relaxing the original problem, we obtain a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. By utilizing recent results on convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel data-point. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state-of-the-art. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images.
One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The Euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases.
We show and explicitly exploit relations between i) mean subtraction and the negative evidence, i.e., a visual word that is mutually missing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality reduction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper.
All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> This paper deals with content-based large-scale image retrieval using the state-of-the-art framework of VLAD and Product Quantization proposed by Jégou as a starting point. Demonstrating an excellent accuracy-efficiency trade-off, this framework has attracted increased attention from the community and numerous extensions have been proposed. In this work, we make an in-depth analysis of the framework that aims at increasing our understanding of its different processing steps and boosting its overall performance. Our analysis involves the evaluation of numerous extensions (both existing and novel) as well as the study of the effects of several unexplored parameters. We specifically focus on: a) employing more efficient and discriminative local features; b) improving the quality of the aggregated representation; and c) optimizing the indexing scheme. Our thorough experimental evaluation provides new insights into extensions that consistently contribute, and others that do not, to performance improvement, and sheds light onto the effects of previously unexplored parameters of the framework. As a result, we develop an enhanced framework that significantly outperforms the previous best reported accuracy results on standard benchmarks and is more efficient.
<s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> This paper introduces a group testing framework for detecting large similarities between high-dimensional vectors, such as descriptors used in state-of-the-art description of multimedia documents.At the crossroad of multimedia information retrieval and signal processing, we produce a set of group representations that jointly encode several vectors into a single one, in the spirit of group testing approaches. By comparing a query vector to several of these intermediate representations, we screen the large values taken by the similarities between the query and all the vectors, at a fraction of the cost of exhaustive similarity calculation. Unlike concurrent indexing methods that suffer from the curse of dimensionality, our method exploits the properties of high-dimensional spaces. It therefore complements other strategies for approximate nearest neighbor search. Our preliminary experiments demonstrate the potential of group testing for searching large databases of multimedia objects represented by vectors. We obtain a large improvement in terms of the theoretical complexity, at the cost of a small or negligible decrease of accuracy.We hope that this preliminary work will pave the way to subsequent works for multimedia retrieval with limited resources. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> We consider the image retrieval problem of finding the images in a dataset that are most similar to a query image. Our goal is to reduce the number of vector operations and memory for performing a search without sacrificing accuracy of the returned images. We adopt a group testing formulation and design the decoding architecture using either dictionary learning or eigendecomposition. 
The latter is a plausible option for small-to-medium sized problems with high-dimensional global image descriptors, whereas dictionary learning is applicable in large-scale scenarios. We evaluate our approach for global descriptors obtained from both SIFT and CNN features. Experiments with standard image search benchmarks, including the Yahoo100M dataset comprising 100 million images, show that our method gives comparable (and sometimes superior) accuracy compared to exhaustive search while requiring only 10% of the vector operations and memory. Moreover, for the same search complexity, our method gives significantly better accuracy compared to approaches based on dimensionality reduction or locality sensitive hashing. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics. 
<s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> We study an indexing architecture to store and search in a database of high-dimensional vectors from the perspective of statistical signal processing and decision theory. This architecture is composed of several memory units, each of which summarizes a fraction of the database by a single representative vector. The potential similarity of the query to one of the vectors stored in the memory unit is gauged by a simple correlation with the memory unit's representative vector. This representative optimizes the test of the following hypothesis: the query is independent from any vector in the memory unit versus the query is a simple perturbation of one of the stored vectors. Compared to exhaustive search, our approach finds the most similar database vectors significantly faster without a noticeable reduction in search quality. Interestingly, the reduction of complexity is provably better in high-dimensional spaces. We empirically demonstrate its practical interest in a large-scale image search scenario with off-the-shelf state-of-the-art descriptors. <s> BIB011
Due to the high dimensionality of the VLAD/FV embeddings, efficient compression and ANN search methods have been employed BIB004 , BIB006 . For example, principal component analysis (PCA) is usually adopted for dimension reduction, and it is shown that retrieval accuracy even increases after PCA BIB005 . For hashing-based ANN methods, Perronnin et al. BIB003 use standard binary encoding techniques such as locality sensitive hashing BIB001 and spectral hashing BIB002 . Nevertheless, when tested on the SIFT and GIST feature datasets, spectral hashing is shown to be outperformed by Product Quantization (PQ) BIB004 . Among quantization-based ANN methods, PQ is also demonstrated to be better than other popular ANN methods such as FLANN BIB006 . A detailed discussion of VLAD and PQ can be found in BIB007 . PQ has since been improved in a number of works. In , Douze et al. propose to re-order the cluster centroids so that adjacent centroids have small Hamming distances. This method is compatible with Hamming distance based ANN search, which offers significant speedup for PQ. We refer readers to BIB010 for a survey of ANN approaches. We also mention an emerging ANN technique, i.e., group testing BIB008 , BIB009 , BIB011 . In a nutshell, the database is decomposed into groups, each represented by a group vector. Comparisons between the query and group vectors reveal how likely a group is to contain a true match. Since group vectors are much fewer than the database vectors, search time is reduced. Iscen et al. BIB009 propose to directly find the best group vectors summarizing the database without explicitly forming the groups, which reduces the memory consumption.
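To make the PQ idea concrete, the following toy sketch splits vectors into m subvectors, learns a small k-means codebook per subspace, and computes asymmetric distances between an uncompressed query and the compressed codes via per-subspace lookup tables. Parameter values and helper names here are illustrative, not the tuned settings of BIB004.

```python
import numpy as np

def pq_train(X, m=4, k=16, iters=10, seed=0):
    """Toy product quantizer: one k-means codebook per subspace."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    d = p // m
    codebooks = []
    for j in range(m):
        sub = X[:, j*d:(j+1)*d]
        cent = sub[rng.choice(n, k, replace=False)]  # random init
        for _ in range(iters):
            assign = ((sub[:, None] - cent[None]) ** 2).sum(-1).argmin(1)
            for c in range(k):
                pts = sub[assign == c]
                if len(pts):
                    cent[c] = pts.mean(0)
        codebooks.append(cent.copy())
    return codebooks

def pq_encode(X, codebooks):
    """Assign each subvector to its nearest sub-codebook centroid."""
    d = codebooks[0].shape[1]
    codes = []
    for j, cb in enumerate(codebooks):
        sub = X[:, j*d:(j+1)*d]
        codes.append(((sub[:, None] - cb[None]) ** 2).sum(-1).argmin(1))
    return np.stack(codes, axis=1)  # (n, m) integer codes

def pq_asymmetric_distance(q, codes, codebooks):
    """Approximate squared distance from an uncompressed query q to every
    encoded database vector, using one (k,) lookup table per subspace."""
    d = codebooks[0].shape[1]
    tables = [((q[j*d:(j+1)*d] - cb) ** 2).sum(-1)
              for j, cb in enumerate(codebooks)]
    return sum(tables[j][codes[:, j]] for j in range(len(codebooks)))
```

The key efficiency property is visible in the last function: after m small table computations, scoring each database vector costs only m table lookups and additions, independent of the original dimension p.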
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Large Codebooks <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Large Codebooks <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets.
At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB002
A large codebook may contain 1 million , BIB001 visual words or more BIB002 , . Some major steps undergo important changes compared with using small codebooks.
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to "visual words" selected from a discrete vocabulary. This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems.
The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n^2 ~ n^3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scale up the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors.
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets.
At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> Bag-of-words models are among the most widely used and successful representations in multimedia retrieval. However, the quantization error which is introduced when mapping keypoints to visual words is one of the main drawbacks of the bag-of-words model. Although some techniques, such as soft-assignment to bags [23] and query expansion [27], have been introduced to deal with the problem, the performance gain is always at the cost of longer query response time, which makes them difficult to apply to large-scale multimedia retrieval applications. In this paper, we propose a simple "constrained keypoint quantization" method which can effectively reduce the overall quantization error of the bag-of-words representation and greatly improve the retrieval efficiency at the same time. The central idea of the proposed quantization method is that if a keypoint is far away from all visual words, we simply remove it. At first glance, this simple strategy seems naive and dangerous. However, we show that the proposed method has a solid theoretical background. Our experimental results on three widely used datasets for near duplicate image and video retrieval confirm that by removing a large amount of keypoints which have high quantization error, we obtain comparable or even better retrieval performance while dramatically boosting retrieval efficiency. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. 
Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words. The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method. <s> BIB007
Feature encoding is interleaved with codebook clustering, because ANN search is critical in both components. The ANN techniques underlying classic methods such as AKM and HKM can be used in both the clustering and encoding steps. Under a large codebook, the key trade-off is between quantization error and computational complexity. In the encoding step, information-preserving encoding methods such as FV BIB004 and sparse coding BIB003 are mostly infeasible due to their computational complexity. It therefore remains a challenging problem to reduce the quantization error while keeping the quantization process efficient. Among the ANN methods, the earliest solution is to quantize a local feature along a hierarchical tree structure . Quantized tree nodes at different levels are assigned different weights. However, due to the highly imbalanced tree structure, this method is outperformed by the k-d tree based quantization method BIB001 : one visual word is assigned to each local feature, using a k-d tree built from the codebook for fast ANN search. As an improvement to this hard quantization scheme, Philbin et al. BIB002 propose soft quantization, which quantizes a feature into several nearest visual words. The weight of each assigned visual word relates negatively to its distance from the feature as $\exp(-\frac{d^2}{2\sigma^2})$, where $d$ is the distance between the descriptor and the cluster center and $\sigma$ is a bandwidth parameter. While soft quantization is based on the Euclidean distance, Mikulik et al. BIB005 propose to find relevant visual words for each visual word through an unsupervised set of matching features. Built on a probabilistic model, these alternative words tend to contain descriptors of matching features. To reduce the memory cost of soft quantization BIB002 and the number of query visual words, Cai et al. BIB006 suggest that when a local feature is far away from even the nearest visual word, this feature can be discarded without a performance drop.
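To make the soft-assignment idea concrete, here is a minimal NumPy sketch of Gaussian-weighted soft quantization in the spirit of Philbin et al. BIB002; the codebook, the number of assigned words `k`, and the bandwidth `sigma` below are illustrative placeholders rather than settings from the original paper:

```python
import numpy as np

def soft_quantize(descriptor, codebook, k=3, sigma=0.1):
    """Soft-assign a local descriptor to its k nearest visual words.

    Weights follow exp(-d^2 / (2*sigma^2)); `sigma` is a tunable
    bandwidth (illustrative value). Returns a list of
    (word_index, normalized_weight) pairs.
    """
    dists = np.linalg.norm(codebook - descriptor, axis=1)  # distance to every center
    nearest = np.argsort(dists)[:k]                        # k closest visual words
    weights = np.exp(-dists[nearest] ** 2 / (2 * sigma ** 2))
    weights /= weights.sum()                               # normalize to sum to 1
    return list(zip(nearest.tolist(), weights.tolist()))
```

Hard quantization is recovered by setting `k=1`; larger `k` trades increased index storage for lower quantization error, the cost noted in BIB002.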
To further accelerate quantization, scalar quantization BIB007 suggests that local features be quantized without an explicitly trained codebook. A floating-point vector is binarized, and the first tens of dimensions of the resulting binary vector are directly converted to a decimal number that serves as the visual word. To counter the large quantization error and low recall of this scheme, scalar quantization uses bit-flipping to generate hundreds of visual words for a local feature.
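A rough sketch of the scalar-quantization idea in BIB007 (not the authors' exact binarization rule): threshold each dimension to get a bit-vector, read the leading bits as the visual word, and enumerate nearby words by flipping bits. The zero threshold, bit count, and MSB-first bit order here are all assumptions made for illustration:

```python
from itertools import combinations

def scalar_quantize(vec, threshold=0.0, code_bits=8):
    """Binarize a float vector by thresholding, then read the first
    `code_bits` bits as a decimal code word (illustrative sketch)."""
    bits = [1 if v > threshold else 0 for v in vec]
    word = int("".join(map(str, bits[:code_bits])), 2)
    return word, bits

def neighbor_words(word, code_bits=8, max_flips=1):
    """Enumerate nearby code words by flipping up to `max_flips` bits,
    mimicking how bit-flipping recovers recall after quantization."""
    words = []
    for r in range(1, max_flips + 1):
        for pos in combinations(range(code_bits), r):
            w = word
            for p in pos:
                w ^= 1 << (code_bits - 1 - p)  # flip bit p (MSB first, assumed order)
            words.append(w)
    return words
```

With `max_flips` around 2-3 on a few tens of bits, a single feature already expands into hundreds of candidate visual words, matching the recall-recovery strategy described above.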
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> Burstiness, a phenomenon initially observed in text retrieval, is the property that a given visual element appears more times in an image than a statistically independent model would predict. In the context of image search, burstiness corrupts the visual similarity measure, i.e., the scores used to rank the images. In this paper, we propose a strategy to handle visual bursts for bag-of-features based image search systems. Experimental results on three reference datasets show that our method significantly and consistently outperforms the state of the art. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> There has been recent progress on the problem of recognizing specific objects in very large datasets. The most common approach has been based on the bag-of-words (BOW) method, in which local image features are clustered into visual words. This can provide significant savings in memory compared to storing and matching each feature independently. In this paper we take an additional step to reducing memory requirements by selecting only a small subset of the training features to use for recognition. This is based on the observation that many local features are unreliable or represent irrelevant clutter. We are able to select “useful” features, which are both robust and distinctive, by an unsupervised preprocessing step that identifies correctly matching features among the training images. We demonstrate that this selection approach allows an average of 4% of the original features per image to provide matching performance that is as accurate as the full set. In addition, we employ a graph to represent the matching relationships between images. Doing so enables us to effectively augment the feature set for each image through merging of useful features of neighboring images. 
We demonstrate adjacent and 2-adjacent augmentation, both of which give a substantial boost in performance. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> Detecting logos in photos is challenging. A reason is that logos locally resemble patterns frequently seen in random images. We propose to learn a statistical model for the distribution of incorrect detections output by an image matching algorithm. It results in a novel scoring criterion in which the weight of correlated keypoint matches is reduced, penalizing irrelevant logo detections. In experiments on two very different logo retrieval benchmarks, our approach largely improves over the standard matching criterion as well as other state-of-the-art approaches. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. 
<s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> The Inverse Document Frequency (IDF) is prevalently utilized in the Bag-of-Words based image search. The basic idea is to assign less weight to terms with high frequency, and vice versa. However, the estimation of visual word frequency is coarse and heuristic. Therefore, the effectiveness of the conventional IDF routine is marginal, and far from optimal. To tackle this problem, this paper introduces a novel IDF expression by the use of Lp-norm pooling technique. Carefully designed, the proposed IDF takes into account the term frequency, document frequency, the complexity of images, as well as the codebook information. Optimizing the IDF function towards optimal balancing between TF and pIDF weights yields the so-called Lp-norm IDF (pIDF). We show that the conventional IDF is a special case of our generalized version, and two novel IDFs, i.e. the average IDF and the max IDF, can also be derived from our formula. Further, by counting for the term-frequency in each image, the proposed Lp-norm IDF helps to alleviate the visual word burstiness phenomenon. Our method is evaluated through extensive experiments on three benchmark datasets (Oxford 5K, Paris 6K and Flickr 1M). We report a performance improvement of as large as 27.1% over the baseline approach. Moreover, since the Lp-norm IDF is computed offline, no extra computation or memory cost is introduced to the system at all. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. 
They violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval and geometric verification. The retrieval is based on robust detection of repeated image structures and a suitable modification of weights in the bag-of-visual-word model. We also demonstrate that the explicit detection of repeated patterns is beneficial for robust visual word matching for geometric verification. Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline as well as the more recently proposed burstiness weighting and Fisher vector encoding. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> This paper deals with a novel concept of an exponential IDF in the BM25 formulation and compares the search accuracy with that of the BM25 with the original IDF in a content-based video retrieval (CBVR) task. Our video retrieval method is based on a bag of keypoints (local visual features) and the exponential IDF estimates the keypoint importance weights more accurately than the original IDF. The exponential IDF is capable of suppressing the keypoints from frequently occurring background objects in videos, and we found that this effect is essential for achieving improved search accuracy in CBVR. Our proposed method is especially designed to tackle instance video search, one of the CBVR tasks, and we demonstrate its effectiveness in significantly enhancing the instance search accuracy using the TRECVID2012 video retrieval dataset. 
<s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> Recent works show that image comparison based on local descriptors is corrupted by visual bursts, which tend to dominate the image similarity. The existing strategies, like power-law normalization, improve the results by discounting the contribution of visual bursts to the image similarity. <s> BIB008
TF-IDF. The visual words in codebook $C$ are typically assigned specific weights, called the term frequency and inverse document frequency (TF-IDF), which are integrated with the BoW encoding. TF is defined as $\mathrm{TF}(c_i, j) = o_i^j$, where $o_i^j$ is the number of occurrences of a visual word $c_i$ within an image $j$; TF is thus a local weight. IDF, on the other hand, determines the contribution of a given visual word through global statistics. The classic IDF weight of visual word $c_i$ is calculated as $\mathrm{IDF}(c_i) = \log \frac{N}{n_i}$, where $N$ is the number of gallery images, and $n_i$ is the number of images in which word $c_i$ appears. The TF-IDF weight for visual word $c_i$ in image $j$ is $w(c_i, j) = \mathrm{TF}(c_i, j) \cdot \mathrm{IDF}(c_i)$. Improvements. A major problem associated with visual word weighting is burstiness BIB001 . It refers to the phenomenon whereby repetitive structures appear in an image; such bursts tend to dominate the image similarity. Jégou et al. BIB001 propose several TF variants to deal with burstiness; an effective strategy consists in applying a square-root operation to TF. Instead of grouping features with the same word index, Revaud et al. BIB003 propose detecting keypoint groups that frequently occur in irrelevant images and down-weighting them in the scoring function. While the above two methods detect bursty groups after quantization, Shi et al. BIB008 propose detecting them at the descriptor stage: the detected bursty descriptors undergo average pooling before being fed into the BoW architecture. On the IDF side, Zheng et al. BIB005 propose the $L_p$-norm IDF to tackle burstiness, and Murata et al. BIB007 design the exponential IDF, which is later incorporated into the BM25 formula. While most works try to suppress burstiness, Torii et al. BIB006 view it as a distinguishing feature for many places and design a new similarity measurement following burstiness detection. Another feature weighting strategy is feature augmentation on the database side BIB004 , BIB002 .
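The TF-IDF weighting above can be computed directly from BoW representations; the toy data layout here (image id mapped to a list of visual-word indices) is assumed purely for illustration:

```python
import math
from collections import Counter

def tf_idf_weights(images):
    """Compute classic TF-IDF weights for bag-of-words image histograms.

    `images` maps image id -> list of visual-word indices.
    Returns (idf, weights) where weights[j][i] = TF(i,j) * IDF(i).
    A burstiness-discounting variant (cf. BIB001) would replace the
    raw `tf_i` below with math.sqrt(tf_i).
    """
    n_images = len(images)
    df = Counter()                    # document frequency n_i per word
    tfs = {}
    for j, words in images.items():
        tf = Counter(words)           # occurrences o_i^j per word
        tfs[j] = tf
        df.update(tf.keys())          # each word counted once per image
    idf = {i: math.log(n_images / n_i) for i, n_i in df.items()}
    weights = {j: {i: tf_i * idf[i] for i, tf_i in tf.items()}
               for j, tf in tfs.items()}
    return idf, weights
```

Note how a word occurring in every gallery image (IDF = 0) contributes nothing to the similarity, which is exactly the intended down-weighting of frequent words.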
Both methods construct an image graph offline, with edges indicating whether two images share the same object. In BIB002 , only features that pass geometric verification are preserved, which reduces the memory cost; the base image is then augmented with all the visual words of its connected images. This method is improved in BIB004 by adding only those visual words that are estimated to be visible in the augmented image, so that noisy visual words are excluded.
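A toy sketch of this database-side augmentation, assuming images are represented as visual-word sets and the offline matching graph is given; the spatial visibility filtering of BIB004 is omitted here:

```python
def augment_features(image_words, graph):
    """Database-side feature augmentation (sketch of the idea in BIB002):
    merge each image's visual-word set with the words of its graph
    neighbors, i.e., images verified to contain the same object.

    `image_words` maps image id -> set of visual words;
    `graph` maps image id -> list of neighbor image ids.
    """
    augmented = {}
    for img, words in image_words.items():
        merged = set(words)
        for nb in graph.get(img, []):
            merged |= set(image_words[nb])  # inherit the neighbor's words
        augmented[img] = merged
    return augmented
```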
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n2 ~ n3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. 
This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> In this paper we address the problem of image retrieval from millions of database images. We improve the vocabulary tree based approach by introducing contextual weighting of local features in both descriptor and spatial domains. Specifically, we propose to incorporate efficient statistics of neighbor descriptors both on the vocabulary tree and in the image spatial domain into the retrieval. These contextual cues substantially enhance the discriminative power of individual local features with very small computational overhead. We have conducted extensive experiments on benchmark datasets, i.e., the UKbench, Holidays, and our new Mobile dataset, which show that our method reaches state-of-the-art performance with much less computation. Furthermore, the proposed method demonstrates excellent scalability in terms of both retrieval accuracy and efficiency on large-scale experiments using 1.26 million images from the ImageNet database as distractors. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> A new data structure for efficient similarity search in very large datasets of high-dimensional vectors is introduced. This structure, called the inverted multi-index, generalizes the inverted index idea by replacing the standard quantization within inverted indices with product quantization.
For very similar retrieval complexity and preprocessing time, inverted multi-indices achieve a much denser subdivision of the search space compared to inverted indices, while retaining their memory efficiency. Our experiments with large datasets of SIFT and GIST vectors demonstrate that because of the denser subdivision, inverted multi-indices are able to return much shorter candidate lists with higher recall. Augmented with a suitable reranking procedure, multi-indices were able to improve the speed of approximate nearest neighbor search on the dataset of 1 billion SIFT vectors by an order of magnitude compared to the best previously published systems, while achieving better recall and incurring only a few percent of memory overhead. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI.
Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB005
The inverted index is designed to enable efficient storage and retrieval, and is usually used under large/medium-sized codebooks. Its structure is illustrated in Fig. 4 . The inverted index is a one-dimensional structure in which each entry corresponds to a visual word in the codebook. An inverted list is attached to each word entry, and the items stored in each inverted list are called indexed features or postings. The inverted index takes advantage of the sparse nature of the visual word histogram under a large codebook. In the literature, new retrieval methods are typically required to be compatible with the inverted index. In the baseline , BIB001 , the image ID and term frequency (TF) are stored in a posting. When other information is integrated, it should be small in size. For example, in BIB003 , quantized metadata, such as the descriptor contextual weight, descriptor density, mean relative log scale and mean orientation difference, are stored in each posting. Similarly, quantized spatial information such as the orientation can also be stored , BIB002 . In co-indexing , the inverted index is enlarged with globally consistent neighbors, while semantically isolated images are deleted to reduce memory consumption. In BIB004 , the original one-dimensional inverted index is expanded into a two-dimensional structure for ANN search, which learns a codebook for each SIFT sub-vector. It is later applied to instance retrieval in BIB005 to fuse local color and SIFT descriptors.
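A minimal sketch of the baseline inverted index described above, storing (image ID, TF) postings and scoring a query by traversing only the lists of its visual words; the data layout is illustrative:

```python
from collections import defaultdict, Counter

def build_inverted_index(images):
    """Build a toy inverted index: one entry per visual word, each
    holding a posting list of (image_id, term_frequency) pairs,
    as in the BoW baseline. `images` maps image id -> word list."""
    index = defaultdict(list)
    for img_id, words in images.items():
        for word, tf in Counter(words).items():
            index[word].append((img_id, tf))
    return index

def query(index, query_words):
    """Score gallery images by traversing only the postings of the
    query's visual words -- the sparsity that makes the index fast."""
    scores = Counter()
    for word, q_tf in Counter(query_words).items():
        for img_id, tf in index.get(word, []):
            scores[img_id] += q_tf * tf   # unnormalized dot-product score
    return scores.most_common()
```

Extra per-posting metadata (binary signatures, orientation, scale) would simply be appended to each `(img_id, tf)` tuple, which is why such payloads must stay small.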
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to "visual words" selected from a discrete vocabulary. This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems.
The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. A comparison with the state of the art shows the interest of our approach when high accuracy is needed. ::: ::: Experiments performed on three reference datasets and a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. 
Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. ::: ::: We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets. At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> Bag-of-words models are among the most widely used and successful representations in multimedia retrieval. However, the quantization error which is introduced when mapping keypoints to visual words is one of the main drawbacks of the bag-of-words model. Although some techniques, such as soft-assignment to bags [23] and query expansion [27], have been introduced to deal with the problem, the performance gain is always at the cost of longer query response time, which makes them difficult to apply to large-scale multimedia retrieval applications. In this paper, we propose a simple "constrained keypoint quantization" method which can effectively reduce the overall quantization error of the bag-of-words representation and greatly improve the retrieval efficiency at the same time. The central idea of the proposed quantization method is that if a keypoint is far away from all visual words, we simply remove it. 
At first glance, this simple strategy seems naive and dangerous. However, we show that the proposed method has a solid theoretical background. Our experimental results on three widely used datasets for near duplicate image and video retrieval confirm that by removing a large amount of keypoints which have high quantization error, we obtain comparable or even better retrieval performance while dramatically boosting retrieval efficiency. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> This paper considers a family of metrics to compare images based on their local descriptors. It encompasses the VLAD descriptor and matching techniques such as Hamming Embedding. Making the bridge between these approaches leads us to propose a match kernel that takes the best of existing techniques by combining an aggregation procedure with a selective match kernel. Finally, the representation underpinning this kernel is approximated, providing a large scale image search both precise and scalable, as shown by our experiments on several benchmarks. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. 
While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> Feature fusion has been proven effective [35, 36] in image search. Typically, it is assumed that the to-be-fused heterogeneous features work well by themselves for the query. However, in a more realistic situation, one does not know in advance whether a feature is effective or not for a given query. As a result, it is of great importance to identify feature effectiveness in a query-adaptive manner. <s> BIB008
Given the relatively small computational cost compared with large codebooks (Section 3.4.1), flat k-means can be adopted for codebook generation BIB006 , BIB003 . It is also shown in BIB007 , BIB008 that clustering with AKM BIB001 yields very competitive retrieval accuracy. For quantization, nearest neighbor search can be used to find the nearest visual word in the codebook, and in practice even ANN algorithms with strict (high-accuracy) settings produce competitive retrieval results. Compared with the extensive study of quantization under large codebooks (Section 3.4.2) BIB002 , BIB004 , BIB005 , relatively few works focus on the quantization problem under a medium-sized codebook.
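As a minimal illustration of this pipeline (not the exact setup of any cited work, and using plain Lloyd iterations rather than the approximate AKM variant), the sketch below builds a flat k-means codebook and quantizes descriptors by exact nearest-neighbor search; the descriptor data and codebook size are placeholders.

```python
import numpy as np

def build_codebook(descriptors, k, iters=10, seed=0):
    """Flat k-means: each row of the returned array is a visual-word centroid."""
    rng = np.random.default_rng(seed)
    centroids = descriptors[rng.choice(len(descriptors), size=k, replace=False)].copy()
    for _ in range(iters):
        # hard-assign every descriptor to its nearest centroid (exact NN)
        dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):  # keep the old centroid if a cell goes empty
                centroids[j] = members.mean(axis=0)
    return centroids

def quantize(descriptors, codebook):
    """Map each descriptor to the index of its nearest visual word."""
    dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)
```

For a medium-sized codebook (tens of thousands of words), the exact search here remains affordable, which is precisely why quantization is less of a bottleneck than under large codebooks.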
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. 
We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. A comparison with the state of the art shows the interest of our approach when high accuracy is needed. Experiments performed on three reference datasets and a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper introduces a video copy detection system which efficiently matches individual frames and then verifies their spatio-temporal consistency. The approach for matching frames relies on a recent local feature indexing method, which is at the same time robust to significant video transformations and efficient in terms of memory usage and computation time. We match either keyframes or uniformly sampled frames. To further improve the results, a verification step robustly estimates a spatio-temporal model between the query video and the potentially corresponding video segments. Experimental results evaluate the different parameters of our system and measure the trade-off between accuracy and efficiency. We show that our system obtains excellent results for the TRECVID 2008 copy detection task. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper introduces the contextual dissimilarity measure, which significantly improves the accuracy of bag-of-features-based image search.
Our measure takes into account the local distribution of the vectors and iteratively estimates distance update terms in the spirit of Sinkhorn's scaling algorithm, thereby modifying the neighborhood structure. Experimental results show that our approach gives significantly better results than a standard distance and outperforms the state of the art in terms of accuracy on the Nistér-Stewénius and Lola data sets. This paper also evaluates the impact of a large number of parameters, including the number of descriptors, the clustering method, the visual vocabulary size, and the distance measure. The optimal parameter choice is shown to be quite context-dependent. In particular, using a large number of descriptors is interesting only when using our dissimilarity measure. We have also evaluated two novel variants: multiple assignment and rank aggregation. They are shown to further improve accuracy at the cost of higher memory usage and lower efficiency. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper proposes an asymmetric Hamming Embedding scheme for large scale image search based on local descriptors. The comparison of two descriptors relies on a vector-to-binary code comparison, which limits the quantization error associated with the query compared with the original Hamming Embedding method. The approach is used in combination with an inverted file structure that offers high efficiency, comparable to that of a regular bag-of-features retrieval system. The comparison is performed on two popular datasets. Our method consistently improves the search quality over the symmetric version. The trade-off between memory usage and precision is evaluated, showing that the method is especially useful for short binary signatures.
<s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> In this paper, we propose a novel image classification framework based on patch matching. More precisely, we adapt the Hamming Embedding technique, first introduced for image search to improve the bag-of-words representation. This matching technique allows the fast comparison of descriptors based on their binary signatures, which refines the matching rule based on visual words and thereby limits the quantization error. Then, in order to allow the use of efficient and suitable linear kernel-based SVM classification, we propose a mapping method to cast the scores output by the Hamming Embedding matching technique into a proper similarity space. Comparative experiments of our proposed approach and other existing encoding methods on two challenging datasets PASCAL VOC 2007 and Caltech-256, report the interest of the proposed scheme, which outperforms all methods based on patch matching and even provide competitive results compared with the state-of-the-art coding techniques. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. 
In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words. The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper considers a family of metrics to compare images based on their local descriptors. It encompasses the VLAD descriptor and matching techniques such as Hamming Embedding. Making the bridge between these approaches leads us to propose a match kernel that takes the best of existing techniques by combining an aggregation procedure with a selective match kernel. Finally, the representation underpinning this kernel is approximated, providing a large scale image search both precise and scalable, as shown by our experiments on several benchmarks. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> Many recent object retrieval systems rely on local features for describing an image. The similarity between a pair of images is measured by aggregating the similarity between their corresponding local features. 
In this paper we present a probabilistic framework for modeling the feature to feature similarity measure. We then derive a query adaptive distance which is appropriate for global similarity evaluation. Furthermore, we propose a function to score the individual contributions into an image to image similarity within the probabilistic framework. Experimental results show that our method improves the retrieval accuracy significantly and consistently. Moreover, our result compares favorably to the state-of-the-art. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper proposes a query expansion technique for image search that is faster and more precise than the existing ones. An enriched representation of the query is obtained by exploiting the binary representation offered by the Hamming Embedding image matching approach: The initial local descriptors are refined by aggregating those of the database, while new descriptors are produced from the images that are deemed relevant. The technique has two computational advantages over other query expansion techniques. First, the size of the enriched representation is comparable to that of the initial query. Second, the technique is effective even without using any geometry, in which case searching a database comprising 105k images typically takes 80 ms on a desktop machine. Overall, our technique significantly outperforms the visual query expansion state of the art on popular benchmarks. It is also the first query expansion technique shown effective on the UKB benchmark, which has few relevant images per query. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. 
More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB011
The discriminative ability of visual words in medium-sized codebooks lies in between that of small and large codebooks, so it is important to compensate for the information loss incurred during quantization. To this end, a milestone work, Hamming embedding (HE), has been dominantly employed. Proposed by Jégou et al. BIB001 , HE greatly improves the discriminative ability of visual words under medium-sized codebooks. HE first maps a SIFT descriptor $f \in \mathbb{R}^p$ from the $p$-dimensional space to a $p_b$-dimensional space, $x = P f$, where $P \in \mathbb{R}^{p_b \times p}$ is a projection matrix and $x$ is the resulting low-dimensional vector. Matrix $P$ is obtained by creating a matrix of random Gaussian values, applying a QR factorization to it, and taking the first $p_b$ rows of the resulting orthogonal matrix. To binarize $x$, Jégou et al. propose to compute the median vector $\overline{x}_i = (\overline{x}_{1,i}, \ldots, \overline{x}_{p_b,i})$ of the projected descriptors falling in each Voronoi cell $c_i$. Given a descriptor $f$ and its projected vector $x$, HE computes its visual word $c_t$, and the HE binary vector is computed as $b_j(x) = 1$ if $x_j > \overline{x}_{j,t}$ and $b_j(x) = 0$ otherwise, where $b(x) = (b_1(x), \ldots, b_{p_b}(x))$ is the resulting HE vector of dimension $p_b$. The binary signature $b(x)$ serves as a secondary check for feature matching: a pair of local features is a true match when two criteria are satisfied, 1) identical visual words and 2) a small Hamming distance between their HE signatures. The extension of HE BIB002 estimates the matching strength between features $f_1$ and $f_2$, decreasing with the Hamming distance through an exponential function $w(f_1, f_2) = \exp\left(-H(b(x_1), b(x_2))^2 / \gamma^2\right)$, where $b(x_1)$ and $b(x_2)$ are the HE binary vectors of $f_1$ and $f_2$, respectively, $H(\cdot, \cdot)$ computes the Hamming distance between two binary vectors, and $\gamma$ is a weighting parameter. As shown in Fig. 6 , HE BIB001 and its weighted version BIB002 improved accuracy considerably in 2008 and 2010, respectively. Applications of HE include video copy detection BIB003 , image classification BIB006 , and re-ranking BIB010 .
For example, in image classification, patch matching similarity is efficiently estimated by HE, which is integrated into a linear kernel-based SVM BIB006 . In image re-ranking, Tolias et al. BIB010 use lower HE thresholds to find strict correspondences which resemble those found by RANSAC, and the resulting image subset is more likely to contain true positives for query reformulation. Improvements over HE have been observed in a number of works, especially from the view of the match kernel BIB008 . To reduce the information loss on the query side, Jain et al. BIB005 propose a vector-to-binary distance comparison. It exploits the vector-to-hyperplane distance while retaining the efficiency of the inverted index. Further, Qin et al. BIB009 design a higher-order match kernel within a probabilistic framework and adaptively normalize the local feature distances by the distance distribution of false matches. This method is similar in spirit to BIB004 , in which the word-word distance, instead of the feature-feature distance BIB009 , is normalized according to the neighborhood distribution of each visual word. While the average distance between a word and its neighbors is regularized to be almost constant in BIB004 , the idea of democratizing the contribution of individual embeddings has later been employed in BIB011 . In BIB008 , Tolias et al. show that VLAD and HE share similar natures and propose a new match kernel which trades off between local feature aggregation and feature-to-feature matching, using a matching function similar to BIB009 . They also demonstrate that using more bits (e.g., 128) in HE is superior to the original 64-bit scheme at the cost of decreased efficiency. Even more bits (256) are used in BIB007 , but this method may be prone to relatively low recall.
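The core HE machinery of BIB001 and its weighted extension can be sketched as follows; this is a simplified illustration under assumed parameter values (the Hamming threshold and Gaussian parameter below are illustrative, not those of the papers), and the per-cell median is passed in as a precomputed vector.

```python
import numpy as np

def he_projection(p, p_b, seed=0):
    """P: the first p_b rows of an orthogonal matrix obtained by
    QR-factorizing a random Gaussian matrix."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    return q[:p_b]

def he_signature(f, P, cell_median):
    """Project a descriptor and binarize it against the median vector
    of its Voronoi cell (one median per projected dimension)."""
    x = P @ f
    return (x > cell_median).astype(np.uint8)

def he_match_weight(b1, b2, ht=24, gamma=16.0):
    """Weighted-HE style score: reject pairs beyond a Hamming threshold,
    otherwise decay with a Gaussian of the Hamming distance."""
    h = int(np.count_nonzero(b1 != b2))
    if h > ht:
        return 0.0
    return float(np.exp(-h * h / (gamma * gamma)))
```

In a full system, two features would contribute to the image score only if they also share the same visual word; the binary signature acts as the secondary filter inside each inverted-index posting list.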
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> The aim of salient feature detection is to find distinctive local events in images. Salient features are generally determined from the local differential structure of images. They focus on the shape-saliency of the local neighborhood. The majority of these detectors are luminance-based, which has the disadvantage that the distinctiveness of the local color information is completely ignored in determining salient image features. To fully exploit the possibilities of salient point detection in color images, color distinctiveness should be taken into account in addition to shape distinctiveness. In this paper, color distinctiveness is explicitly incorporated into the design of saliency detection. The algorithm, called color saliency boosting, is based on an analysis of the statistics of color image derivatives. Color saliency boosting is designed as a generic method easily adaptable to existing feature detectors. Results show that substantial improvements in information content are acquired by targeting color salient features. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labeled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent ";topics"; using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature here applied to a bag of visual words representation for each image, and subsequently, training a multiway classifier on the topic distribution vector for each image. 
We compare this approach to that of representing each image by a bag of visual words vector directly and training a multiway classifier on these vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbor or SVM). We achieve superior classification performance to recent publications that have used a bag of visual word representation, in all cases, using the authors' own data sets and testing protocols. We also investigate the gain in adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. 
From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> This paper investigates the use of color information when used within a state-of-the-art large scale image search system. We introduce a simple yet effective and efficient color signature generation procedure. It is used either to produce global or local descriptors. As a global descriptor, it outperforms several state-of-the-art color description methods, in particular the bag-of-words method based on color SIFT. As a local descriptor, our signature is used jointly with SIFT descriptors (no color) to provide complementary information. This significantly improves the recognition rate, outperforming the state of the art on two image search benchmarks. We provide an open source package of our signature (http://www.kooaba.com/en/learnmore/labs/). <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. 
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> Visual reranking has been widely deployed to refine the quality of conventional content-based image retrieval engines. The current trend lies in employing a crowd of retrieved results stemming from multiple feature modalities to boost the overall performance of visual reranking. However, a major challenge pertaining to current reranking methods is how to take full advantage of the complementary property of distinct feature modalities. Given a query image and one feature modality, a regular visual reranking framework treats the top-ranked images as pseudo positive instances which are inevitably noisy, difficult to reveal this complementary property, and thus lead to inferior ranking performance. This paper proposes a novel image reranking approach by introducing a Co-Regularized Multi-Graph Learning (Co-RMGL) framework, in which the intra-graph and inter-graph constraints are simultaneously imposed to encode affinities in a single graph and consistency across different graphs. Moreover, weakly supervised learning driven by image attributes is performed to denoise the pseudo-labeled instances, thereby highlighting the unique strength of individual feature modality.
Meanwhile, such learning can yield a few anchors in graphs that vitally enable the alignment and fusion of multiple graphs. As a result, an edge weight matrix learned from the fused graph automatically gives the ordering to the initially retrieved results. We evaluate our approach on four benchmark image retrieval datasets, demonstrating a significant performance gain over the state-of-the-arts. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. 
<s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> This paper aims for generic instance search from one example where the instance can be an arbitrary 3D object like shoes, not just near-planar and one-sided instances like buildings and logos. Firstly, we evaluate state-of-the-art instance search methods on this problem. We observe that what works for buildings loses its generality on shoes. Secondly, we propose to use automatically learned category-specific attributes to address the large appearance variations present in generic instance search. On the problem of searching among instances from the same category as the query, the category-specific attributes outperform existing approaches by a large margin. On a shoe dataset containing 6624 shoe images recorded from all viewing angles, we improve the performance from 36.73 to 56.56 using category-specific attributes. Thirdly, we extend our methods to search objects without restricting to the specifically known category. We show the combination of category-level information and the category-specific attributes is superior to combining category-level information with low-level features such as Fisher vector. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> Feature fusion has been proven effective [35, 36] in image search. Typically, it is assumed that the to-be-fused heterogeneous features work well by themselves for the query. However, in a more realistic situation, one does not know in advance whether a feature is effective or not for a given query. As a result, it is of great importance to identify feature effectiveness in a query-adaptive manner. <s> BIB009
Local-Local Fusion. A problem with the SIFT feature is that it provides only a local gradient description; other discriminative information encoded in an image is left unexploited. In Fig. 5B , a pair of false matches cannot be rejected by HE due to their similarity in the SIFT space, but the fusion of other local (or regional) features may correct this problem. A good choice for local-local fusion is to couple SIFT with color descriptors. Using color-SIFT descriptors can partially address the trade-off between invariance and discriminative ability. Descriptors such as HSV-SIFT BIB002 , HueSIFT BIB001 , and OpponentSIFT BIB003 have been evaluated on several recognition benchmarks BIB003 . Both HSV-SIFT and HueSIFT are scale-invariant and shift-invariant. OpponentSIFT describes all the channels in the opponent color space using the SIFT descriptor and is largely robust to light color changes. In BIB003 , OpponentSIFT is recommended when no prior knowledge about the datasets is available. In more recent works, binary color signatures are stored in the inverted index BIB007 , BIB004 . Despite the good retrieval accuracy on some datasets, a potential problem is that intense illumination variation may compromise the effectiveness of color. Local-Global Fusion. Local and global features describe images from different aspects and can be complementary. In Fig. 5C , when local (and regional) cues are not enough to reject a false match pair, it is effective to further incorporate visual information from a larger context scale. Early and late fusion are two possible ways. In early fusion, the image neighborhood relationship mined by global features such as FC8 in AlexNet BIB005 is fused in the SIFT-based inverted index . In late fusion, Zhang et al. build an offline graph for each type of feature, which is subsequently fused during the online query. In an improvement of , Deng et al.
BIB006 add weakly supervised anchors to aid graph fusion. Both works operate at the rank level. For score-level fusion, automatically learned category-specific attributes are combined with pre-trained category-level information BIB008 . Zheng et al. BIB009 propose query-adaptive late fusion, extracting a number of features (local or global, good or bad) and weighting them in a query-adaptive manner.
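A generic score-level late fusion can be sketched as below. Note that this is a simplified stand-in: the query-adaptive weights of BIB009 are derived from the shape of each feature's sorted score curve, whereas this version takes the per-feature weights as given.

```python
import numpy as np

def late_fuse(score_lists, weights):
    """Min-max normalize each feature's database scores, then weighted-sum.

    score_lists: one array of similarity scores per feature (same length,
                 one entry per database image).
    weights:     one weight per feature (query-adaptive in BIB009; fixed here).
    """
    fused = np.zeros(len(score_lists[0]), dtype=float)
    for scores, w in zip(score_lists, weights):
        s = np.asarray(scores, dtype=float)
        span = s.max() - s.min()
        # guard against a degenerate feature whose scores are all equal
        norm = (s - s.min()) / span if span > 0 else np.zeros_like(s)
        fused += w * norm
    return fused
```

Ranking the database by the fused score combines the evidence of all features; a feature judged ineffective for a given query simply receives a small weight, so it cannot dominate the final ordering.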
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In this paper, we draw an analogy between image retrieval and text retrieval and propose a visual phrase-based approach to retrieve images containing desired objects. 
The visual phrase is defined as a pair of adjacent local image patches and is constructed using data mining. We devise methods on how to construct visual phrases from images and how to encode the visual phrase for indexing and retrieval. Our experiments demonstrate that visual phrase-based retrieval approach can be very efficient and can be 20% more effective than its visual word-based counterpart. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind rele- vance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases. 
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> A visual word lexicon can be constructed by clustering primitive visual features, and a visual object can be described by a set of visual words. Such a "bag-of-words" representation has led to many significant results in various vision tasks including object recognition and categorization. However, in practice, the clustering of primitive visual features tends to result in synonymous visual words that over-represent visual patterns, as well as polysemous visual words that bring large uncertainties and ambiguities in the representation. 
This paper aims at generating a higher-level lexicon, i.e. visual phrase lexicon, where a visual phrase is a meaningful spatially co-occurrent pattern of visual words. This higher-level lexicon is much less ambiguous than the lower-level one. The contributions of this paper include: (1) a fast and principled solution to the discovery of significant spatial co-occurrent patterns using frequent itemset mining; (2) a pattern summarization method that deals with the compositional uncertainties in visual phrases; and (3) a top-down refinement scheme of the visual word lexicon by feeding back discovered phrases to tune the similarity measure through metric learning. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Known object recognition is the task of recognizing specific objects, such as cereal boxes or soda cans. Millions of such objects exist, and finding a computationally feasible method for recognition can be difficult. Ideally, the computational costs should scale with the complexity of the testing image, and not the size of the object database. To accomplish this goal we propose a method for detection and recognition based on triplets of feature descriptors. Each feature is given a label based on a modified K-means clustering algorithm. Object matching is then done by inverse lookup within a table of possible triplets. The ambiguity of the matches is further reduced by having each triplet vote on its proposed object center. For planar objects, the proposed object centers should cluster at a single point. In general, assuming orthographic projection, the proposed centers will lie along a line. If enough triplets are in agreement on a specific object’s center, the object is labeled as detected. Our algorithm has been evaluated on a new database with 118 training objects and various testing scenarios. 
<s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> There has been recent progress on the problem of recognizing specific objects in very large datasets. The most common approach has been based on the bag-of-words (BOW) method, in which local image features are clustered into visual words. This can provide significant savings in memory compared to storing and matching each feature independently. In this paper we take an additional step to reducing memory requirements by selecting only a small subset of the training features to use for recognition. This is based on the observation that many local features are unreliable or represent irrelevant clutter. 
We are able to select “useful” features, which are both robust and distinctive, by an unsupervised preprocessing step that identifies correctly matching features among the training images. We demonstrate that this selection approach allows an average of 4% of the original features per image to provide matching performance that is as accurate as the full set. In addition, we employ a graph to represent the matching relationships between images. Doing so enables us to effectively augment the feature set for each image through merging of useful features of neighboring images. We demonstrate adjacent and 2-adjacent augmentation, both of which give a substantial boost in performance. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> The Bag-of-visual Words (BoW) image representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the words in texts. However, massive experiments show that the commonly used visual words are not as expressive as the text words, which is not desirable because it hinders their effectiveness in various applications. In this paper, Descriptive Visual Words (DVWs) and Descriptive Visual Phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to the frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, novel descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs from classic visual words for various applications. 
In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain scenes or objects are identified as the DVWs and DVPs. Experiments show that the DVWs and DVPs are compact and descriptive, thus are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including image retrieval, image re-ranking, and object recognition. The DVW and DVP combination outperforms the classic visual words by 19.5% and 80% in image retrieval and object recognition tasks, respectively. The DVW and DVP based image re-ranking algorithm: DWPRank outperforms the state-of-the-art VisualRank by 12.4% in accuracy and about 11 times faster in efficiency. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> We describe an algorithm for similar-image search which is designed to be efficient for extremely large collections of images. For each query, a small response set is selected by a fast prefilter, after which a more accurate ranker may be applied to each image in the response set. We consider a class of prefilters comprising disjunctions of conjunctions (“ORs of ANDs”) of Boolean features. AND filters can be implemented efficiently using skipped inverted files, a key component of Web-scale text search engines. These structures permit search in time proportional to the response set size. The prefilters are learned from training examples, and refined at query time to produce an approximately bounded response set. We cast prefiltering as an optimization problem: for each test query, select the OR-of-AND filter which maximizes training-set recall for an adjustable bound on response set size. This may be efficiently implemented by selecting from a large pool of candidate conjunctions of Boolean features using a linear program relaxation. 
Tests on object class recognition show that this relatively simple filter is nevertheless powerful enough to capture some semantic information. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In state-of-the-art image retrieval systems, an image is represented by a bag of visual words obtained by quantizing high-dimensional local image descriptors, and scalable schemes inspired by text retrieval are then applied for large scale image indexing and retrieval. Bag-of-words representations, however: 1) reduce the discriminative power of image features due to feature quantization; and 2) ignore geometric relationships among visual words. Exploiting such geometric constraints, by estimating a 2D affine transformation between a query image and each candidate image, has been shown to greatly improve retrieval precision but at high computational cost. In this paper we present a novel scheme where image features are bundled into local groups. Each group of bundled features becomes much more discriminative than a single feature, and within each group simple and robust geometric constraints can be efficiently enforced. Experiments in Web image search, with a database of more than one million images, show that our scheme achieves a 49% improvement in average precision over the baseline bag-of-words approach. Retrieval performance is comparable to existing full geometric verification approaches while being much less computationally expensive. When combined with full geometric verification we achieve a 77% precision improvement over the baseline bag-of-words approach, and a 24% improvement over full geometric verification alone. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Given a large-scale collection of images our aim is to efficiently associate images which contain the same entity, for example a building or object, and to discover the significant entities. 
To achieve this, we introduce the Geometric Latent Dirichlet Allocation (gLDA) model for unsupervised discovery of particular objects in unordered image collections. This explicitly represents images as mixtures of particular objects or facades, and builds rich latent topic models which incorporate the identity and locations of visual words specific to the topic in a geometrically consistent way. Applying standard inference techniques to this model enables images likely to contain the same object to be probabilistically grouped and ranked. ::: ::: Additionally, to reduce the computational cost of applying the gLDA model to large datasets, we propose a scalable method that first computes a matching graph over all the images in a dataset. This matching graph connects images that contain the same object, and rough image groups can be mined from this graph using standard clustering techniques. The gLDA model can then be applied to generate a more nuanced representation of the data. We also discuss how "hub images" (images representative of an object or landmark) can easily be extracted from our matching graph representation. ::: ::: We evaluate our techniques on the publicly available Oxford buildings dataset (5K images) and show examples of automatically mined objects. The methods are evaluated quantitatively on this dataset using a ground truth labeling for a number of Oxford landmarks. To demonstrate the scalability of the matching graph method, we show qualitative results on two larger datasets of images taken of the Statue of Liberty (37K images) and Rome (1M+ images). <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Most effective particular object and image retrieval approaches are based on the bag-of-words (BoW) model. All state-of-the-art retrieval results have been achieved by methods that include a query expansion that brings a significant boost in performance. 
We introduce three extensions to automatic query expansion: (i) a method capable of preventing tf-idf failure caused by the presence of sets of correlated features (confusers), (ii) an improved spatial verification and re-ranking step that incrementally builds a statistical model of the query object and (iii) we learn relevant spatial context to boost retrieval performance. The three improvements of query expansion were evaluated on standard Paris and Oxford datasets according to a standard protocol, and state-of-the-art results were achieved. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In this paper we introduce visual phrases, complex visual composites like “a person riding a horse”. Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. We introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases. We show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. We argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. We describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. We show this decoding procedure outperforms the state of the art. Finally, we show that decoding a combination of phrasal and object detectors produces real improvements in detector results. 
<s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Accurate matching of local features plays an essential role in visual object search. Instead of matching individual features separately, using the spatial context, e.g., bundling a group of co-located features into a visual phrase, has shown to enable more discriminative matching. Despite previous work, it remains a challenging problem to extract appropriate spatial context for matching. We propose a randomized approach to deriving visual phrase, in the form of spatial random partition. 
By averaging the matching scores over multiple randomized visual phrases, our approach offers three benefits: 1) the aggregation of the matching scores over a collection of visual phrases of varying sizes and shapes provides robust local matching; 2) object localization is achieved by simple thresholding on the voting map, which is more efficient than subimage search; 3) our algorithm lends itself to easy parallelization and also allows a flexible trade-off between accuracy and speed by adjusting the number of partition times. Both theoretical studies and experimental comparisons with the state-of-the-art methods validate the advantages of our approach. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> One fundamental problem in object retrieval with the bag-of-visual words (BoW) model is its lack of spatial information. Although various approaches are proposed to incorporate spatial constraints into the BoW model, most of them are either too strict or too loose so that they are only effective in limited cases. We propose a new spatially-constrained similarity measure (SCSM) to handle object rotation, scaling, view point change and appearance deformation. The similarity measure can be efficiently calculated by a voting-based method using inverted files. Object retrieval and localization are then simultaneously achieved without post-processing. Furthermore, we introduce a novel and robust re-ranking method with the k-nearest neighbors of the query for automatically refining the initial search results. Extensive performance evaluations on six public datasets show that SCSM significantly outperforms other spatial models, while k-NN re-ranking outperforms most state-of-the-art approaches using query expansion. 
<s> BIB017 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Exploiting local feature shape has made geometry indexing possible, but at a high cost of index space, while a sequential spatial verification and re-ranking stage is still indispensable for large scale image retrieval. In this work we investigate an accelerated approach for the latter problem. We develop a simple spatial matching model inspired by Hough voting in the transformation space, where votes arise from single feature correspondences. Using a histogram pyramid, we effectively compute pair-wise affinities of correspondences without ever enumerating all pairs. Our Hough pyramid matching algorithm is linear in the number of correspondences and allows for multiple matching surfaces or non-rigid objects under one-to-one mapping. We achieve re-ranking one order of magnitude more images at the same query time with superior performance compared to state of the art methods, while requiring the same index space. We show that soft assignment is compatible with this matching scheme, preserving one-to-one mapping and further increasing performance. <s> BIB018 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Hough voting in a geometric transformation space allows us to realize spatial verification, but remains sensitive to feature detection errors because of the inflexible quantization of single feature correspondences. To handle this problem, we propose a new method, called adaptive dither voting, for robust spatial verification. For each correspondence, instead of hard-mapping it to a single transformation, the method augments its description by using multiple dithered transformations that are deterministically generated by the other correspondences. 
The method reduces the probability of losing correspondences during transformation quantization, and provides high robustness as regards mismatches by imposing three geometric constraints on the dithering process. We also propose exploiting the non-uniformity of a Hough histogram as the spatial similarity to handle multiple matching surfaces. Extensive experiments conducted on four datasets show the superiority of our method. The method outperforms its state-of-the-art counterparts in both accuracy and scalability, especially when it comes to the retrieval of small, rotated objects. <s> BIB019 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Spatial verification is a key step in boosting the performance of object-based image retrieval. It serves to eliminate unreliable correspondences between salient points in a given pair of images, and is typically performed by analyzing the consistency of spatial transformations between the image regions involved in individual correspondences. In this paper, we consider the pairwise geometric relations between correspondences and propose a strategy to incorporate these relations at significantly reduced computational cost, which makes it suitable for large-scale object retrieval. In addition, we combine the information on geometric relations from both the individual correspondences and pairs of correspondences to further improve the verification accuracy. Experimental results on three reference datasets show that the proposed approach results in a substantial performance improvement compared to the existing methods, without making concessions regarding computational efficiency. <s> BIB020 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Spatial verification is a crucial part of every image retrieval system, as it accounts for the fact that geometric feature configurations are typically ignored by the Bag-of-Words representation. 
Since spatial verification quickly becomes the bottleneck of the retrieval process, runtime efficiency is extremely important. At the same time, spatial verification should be able to reliably distinguish between related and unrelated images. While methods based on RANSAC’s hypothesize-and-verify framework achieve high accuracy, they are not particularly efficient. Conversely, verification approaches based on Hough voting are extremely efficient but not as accurate. In this paper, we develop a novel spatial verification approach that uses an efficient voting scheme to identify promising transformation hypotheses that are subsequently verified and refined. Through comprehensive experiments, we show that our method is able to achieve a verification accuracy similar to state-of-the-art hypothesize-and-verify approaches while providing faster runtimes than state-of-the-art voting-based methods. <s> BIB021 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In object recognition, the Bag-of-Words model assumes: i) extraction of local descriptors from images, ii) embedding the descriptors by a coder to a given visual vocabulary space which results in mid-level features, iii) extracting statistics from mid-level features with a pooling operator that aggregates occurrences of visual words in images into signatures, which we refer to as First-order Occurrence Pooling. This paper investigates higher-order pooling that aggregates over co-occurrences of visual words. We derive Bag-of-Words with Higher-order Occurrence Pooling based on linearisation of Minor Polynomial Kernel, and extend this model to work with various pooling operators. This approach is then effectively used for fusion of various descriptor types. Moreover, we introduce Higher-order Occurrence Pooling performed directly on local image descriptors as well as a novel pooling operator that reduces the correlation in the image signatures. 
Finally, First-, Second-, and Third-order Occurrence Pooling are evaluated given various coders and pooling operators on several widely used benchmarks. The proposed methods are compared to other approaches such as Fisher Vector Encoding and demonstrate improved results. <s> BIB022
A frequent concern with the BoW model is the lack of geometric constraints among local features. Geometric verification can be used as a critical pre-processing step in various scenarios, such as query expansion BIB003 , BIB013 , feature selection BIB008 , database-side feature augmentation BIB015 , BIB008 , large-scale object mining BIB012 , etc. The most well-known method for global spatial verification is RANSAC BIB004 . It repeatedly estimates an affine transformation from sampled correspondences and verifies each hypothesis by the number of inliers that fit it. RANSAC is effective in re-ranking a subset of top-ranked images but has efficiency problems. As a result, how to efficiently and accurately incorporate spatial cues in the SIFT-based framework has been extensively studied. A good choice is to discover the spatial context among local features. For example, visual phrases BIB002 , BIB009 , BIB014 , BIB016 are generated from individual visual words to provide a stricter matching criterion. Visual word co-occurrences in the entire image are estimated BIB010 and aggregated BIB022 , while in BIB011 , BIB005 , BIB006 visual word clusters within local neighborhoods are discovered. Visual phrases can also be constructed from adjacent image patches BIB002 , random spatial partitioning BIB016 , and localized stable regions BIB011 such as MSER BIB001 . Another strategy uses voting to check geometric consistency. In the voting space, a bin with a larger value is more likely to represent the true transformation. An important work is weak geometrical consistency (WGC) BIB007 , which focuses on the differences in scale and orientation between matched features. The space of differences is quantized into bins, and Hough voting is used to locate the subset of correspondences with similar scale or orientation differences. Many later works can be viewed as extensions of WGC. For example, the method of Zhang et al. can be viewed as WGC using x, y offsets instead of scale and orientation.
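The RANSAC-style verification described above can be illustrated with a minimal sketch (not the exact implementation of BIB004): repeatedly fit an affine transformation to three randomly sampled correspondences, then score the hypothesis by counting inliers; the function names and thresholds here are illustrative assumptions.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = u, v
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p.reshape(2, 3)

def ransac_verify(corr, n_iter=500, thresh=5.0, seed=0):
    """Spatial verification sketch: best inlier count and affine transform
    over random 3-correspondence samples (hypothesize-and-verify)."""
    rng = np.random.default_rng(seed)
    src = np.array([c[0] for c in corr], dtype=float)
    dst = np.array([c[1] for c in corr], dtype=float)
    best_inliers, best_T = 0, None
    for _ in range(n_iter):
        idx = rng.choice(len(corr), size=3, replace=False)
        T = estimate_affine(src[idx], dst[idx])
        proj = src @ T[:, :2].T + T[:, 2]  # apply hypothesis to all points
        inliers = int(np.sum(np.linalg.norm(proj - dst, axis=1) < thresh))
        if inliers > best_inliers:
            best_inliers, best_T = inliers, T
    return best_inliers, best_T
```

The inlier count can then serve as the re-ranking score for each candidate image; the repeated per-hypothesis fitting is exactly where the efficiency problem noted above comes from.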
This method is invariant to object translations, but may be sensitive to scale and rotation changes due to the rigid coordinate quantization. To regain scale and rotation invariance, Shen et al. BIB017 quantize the angle and scale of the query region after applying several transformations. A drawback of BIB017 is that query time and memory cost are both increased. To enable efficient voting and alleviate quantization artifacts, Hough pyramid matching (HPM) BIB018 distributes the matches over a hierarchical partition of the transformation space. HPM trades off flexibility against accuracy and is very efficient. Quantization artifacts can also be reduced by allowing a single correspondence to vote for multiple bins BIB019 . HPM and BIB019 are much faster than RANSAC and can be viewed as extending the weak geometric consistency of BIB007 , proposed along with Hamming embedding, with rotation and scale invariance. In BIB020 , a rough global estimate of orientation and scale changes is made by voting, which is used to verify the transformation obtained from the matched features. A recent method BIB021 combines the advantages of hypothesize-and-verify methods such as RANSAC BIB004 and voting-based methods BIB018 , BIB019 , BIB020 . Possible hypotheses are identified by voting and later verified and refined. This method inherits efficiency from voting and supports query expansion since it outputs an explicit transformation and a set of inliers.
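The WGC-style voting idea can be sketched as follows: each matched feature pair votes for its orientation difference and log-scale ratio, and the image score is the strength of the dominant bin, so geometrically consistent matches reinforce one another while random matches scatter. This is a simplified illustration of the scheme in BIB007, not its exact implementation; the bin counts and input format are assumptions.

```python
import numpy as np

def wgc_score(matches, n_angle_bins=8, n_scale_bins=8):
    """Weak geometric consistency sketch: Hough voting over the
    (orientation difference, log-scale ratio) space of matched features.
    Each match is ((theta_q, sigma_q), (theta_d, sigma_d))."""
    votes = np.zeros((n_angle_bins, n_scale_bins))
    for (theta_q, sigma_q), (theta_d, sigma_d) in matches:
        d_theta = (theta_d - theta_q) % (2 * np.pi)   # orientation difference
        d_scale = np.log2(sigma_d / sigma_q)          # log-scale ratio
        a = int(d_theta / (2 * np.pi) * n_angle_bins) % n_angle_bins
        s = int(np.clip(d_scale + n_scale_bins / 2, 0, n_scale_bins - 1))
        votes[a, s] += 1
    # the dominant bin approximates the size of the consistent match subset
    return votes.max()
```

A set of matches under one global rotation and scaling all falls into a single bin and scores highly, whereas spurious matches spread their votes; the multi-bin voting of BIB019 and the hierarchical partition of HPM BIB018 soften the hard binning used here.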
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind rele- vance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. 
We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> Most effective particular object and image retrieval approaches are based on the bag-of-words (BoW) model. All state-of-the-art retrieval results have been achieved by methods that include a query expansion that brings a significant boost in performance. We introduce three extensions to automatic query expansion: (i) a method capable of preventing tf-idf failure caused by the presence of sets of correlated features (confusers), (ii) an improved spatial verification and re-ranking step that incrementally builds a statistical model of the query object and (iii) we learn relevant spatial context to boost retrieval performance. The three improvements of query expansion were evaluated on standard Paris and Oxford datasets according to a standard protocol, and state-of-the-art results were achieved. 
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> This paper introduces a simple yet effective method to improve visual word based image retrieval. Our method is based on an analysis of the k-reciprocal nearest neighbor structure in the image space. At query time the information obtained from this process is used to treat different parts of the ranked retrieval list with different distance measures. This leads effectively to a re-ranking of retrieved images. As we will show, this has two benefits: first, using different similarity measures for different parts of the ranked list allows for compensation of the “curse of dimensionality”. Second, it allows for dealing with the uneven distribution of images in the data space. Dealing with both challenges has very beneficial effect on retrieval accuracy. Furthermore, a major part of the process happens offline, so it does not affect speed at retrieval time. Finally, the method operates on the bag-of-words level only, thus it could be combined with any additional measures on e.g. either descriptor level or feature geometry making room for further improvement. We evaluate our approach on common object retrieval benchmarks and demonstrate a significant improvement over standard bag-of-words retrieval. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. 
We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> One fundamental problem in object retrieval with the bag-of-visual words (BoW) model is its lack of spatial information. Although various approaches are proposed to incorporate spatial constraints into the BoW model, most of them are either too strict or too loose so that they are only effective in limited cases. We propose a new spatially-constrained similarity measure (SCSM) to handle object rotation, scaling, view point change and appearance deformation. The similarity measure can be efficiently calculated by a voting-based method using inverted files. Object retrieval and localization are then simultaneously achieved without post-processing. Furthermore, we introduce a novel and robust re-ranking method with the k-nearest neighbors of the query for automatically refining the initial search results. 
Extensive performance evaluations on six public datasets show that SCSM significantly outperforms other spatial models, while k-NN re-ranking outperforms most state-of-the-art approaches using query expansion. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> This paper proposes a query expansion technique for image search that is faster and more precise than the existing ones. An enriched representation of the query is obtained by exploiting the binary representation offered by the Hamming Embedding image matching approach: The initial local descriptors are refined by aggregating those of the database, while new descriptors are produced from the images that are deemed relevant. The technique has two computational advantages over other query expansion techniques. First, the size of the enriched representation is comparable to that of the initial query. Second, the technique is effective even without using any geometry, in which case searching a database comprising 105k images typically takes 80 ms on a desktop machine. Overall, our technique significantly outperforms the visual query expansion state of the art on popular benchmarks. It is also the first query expansion technique shown effective on the UKB benchmark, which has few relevant images per query. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. 
This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. 
Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB010
As a post-processing step, query expansion (QE) significantly improves retrieval accuracy. In a nutshell, a number of top-ranked images from the original rank list are used to issue a new query, which in turn produces a new rank list. QE allows additional discriminative features to be added to the original query, thus improving recall. In instance retrieval, Chum et al. BIB001 are the first to exploit this idea. They propose average query expansion (AQE), which averages the features of the top-ranked images to form the new query. Usually, spatial verification BIB002 is employed for re-ranking and for obtaining the ROIs, from which the local features undergo average pooling. AQE is used by many later works BIB008 , BIB009 , BIB010 as a standard tool. The recursive AQE and the scale-band recursive QE are effective improvements but incur higher computational cost BIB001 . Four years later, Chum et al. BIB003 improve QE from the perspectives of learning background confusers, expanding the query region and incremental spatial verification. In BIB005 , a linear SVM is trained online using the top-ranked and bottom-ranked images as positive and negative training samples, respectively; the learned weight vector is used to compute the average query. Other important extensions include "hello neighbor" based on reciprocal neighbors BIB004 , QE with rank-based weighting BIB006 , Hamming QE BIB007 (see Section 3.5), etc.
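The core of AQE is only a few operations: run the initial search, average the query with the descriptors of its top-k results, and re-query. A minimal sketch with cosine similarity on L2-normalised global descriptors (the spatial-verification step used in BIB001 to filter the top-k is omitted here for brevity):

```python
import numpy as np

def average_query_expansion(query, index, top_k=5):
    """AQE sketch: average the L2-normalised descriptors of the top-k
    initial results with the query, then re-issue the expanded query."""
    def l2n(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)
    index = l2n(np.asarray(index, float))
    q = l2n(np.asarray(query, float))
    sims = index @ q                          # initial search (cosine similarity)
    top = np.argsort(-sims)[:top_k]
    q_exp = l2n(q + index[top].sum(axis=0))   # averaged query, up to a scale factor
    return np.argsort(-(index @ q_exp))       # new rank list
```

Because the expanded query pools features from several verified views of the object, images missed by the original query (e.g. under viewpoint change) can be recalled.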
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> We propose a novel hashing scheme for image retrieval, clustering and automatic object discovery. Unlike commonly used bag-of-words approaches, the spatial extent of image features is exploited in our method. The geometric information is used both to construct repeatable hash keys and to increase the discriminability of the description. Each hash key combines visual appearance (visual words) with semi-local geometric information. Compared with the state-of-the-art min-hash, the proposed method has both higher recall (probability of collision for hashes on the same object) and lower false positive rates (random collisions). The advantages of geometric min-hashing approach are most pronounced in the presence of viewpoint and scale change, significant occlusion or small physical overlap of the viewing fields. We demonstrate the power of the proposed method on small object discovery in a large unordered collection of images and on a large scale image clustering problem. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> We propose a scalable logo recognition approach that extends the common bag-of-words model and incorporates local geometry in the indexing process. Given a query image and a large logo database, the goal is to recognize the logo contained in the query, if any. We locally group features in triples using multi-scale Delaunay triangulation and represent triangles by signatures capturing both visual appearance and local geometry. Each class is represented by the union of such signatures over all instances in the class. We see large scale recognition as a sub-linear search problem where signatures of the query image are looked up in an inverted index structure of the class models. We evaluate our approach on a large-scale logo recognition dataset with more than four thousand classes. 
<s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> Detecting logos in photos is challenging. A reason is that logos locally resemble patterns frequently seen in random images. We propose to learn a statistical model for the distribution of incorrect detections output by an image matching algorithm. It results in a novel scoring criterion in which the weight of correlated keypoint matches is reduced, penalizing irrelevant logo detections. In experiments on two very different logo retrieval benchmarks, our approach largely improves over the standard matching criterion as well as other state-of-the-art approaches. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> We present a scalable logo recognition technique based on feature bundling. Individual local features are aggregated with features from their spatial neighborhood into bundles. These bundles carry more information about the image content than single visual words. The recognition of logos in novel images is then performed by querying a database of reference images. We further propose a novel WGC-constrained RANSAC and a technique that boosts recall for object retrieval by synthesizing images from original query or reference images. We demonstrate the benefits of these techniques for both small object retrieval and logo recognition. Our logo recognition system clearly outperforms the current state-of-the-art with a recall of 83% at a precision of 99%. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> Visual object retrieval aims at retrieving, from a collection of images, all those in which a given query object appears. It is inherently asymmetric: the query object is mostly included in the database image, while the converse is not necessarily true. 
However, existing approaches mostly compare the images with symmetrical measures, without considering the different roles of query and database. This paper first measures the extent of asymmetry on large-scale public datasets reflecting this task. Considering the standard bag-of-words representation, we then propose new asymmetrical dissimilarities accounting for the different inlier ratios associated with query and database images. These asymmetrical measures depend on the query, yet they are compatible with an inverted file structure, without noticeably impacting search efficiency. Our experiments show the benefit of our approach, and show that the visual object retrieval task is better treated asymmetrically, in the spirit of state-of-the-art text retrieval. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> Retrieving objects from large image collections is challenging due to the so-called background-interference, i.e., matching between query object and reference images is usually confused by cluttered background, especially when objects are small. In this paper, we propose an object retrieval technique addressing this problem by partitioning the images. Specifically, several object proposals are partitioned from the images by jointly optimizing their objectness and coverage. The proposal set with maximum objectness score and minimum redundancy is obtained. Therefore, the interference of cluttered background is greatly reduced. Next, the objects are retrieved based on the partitioned proposals, separately and independently to the background. Our method is featured by the fine partitioning, which not only removes interferences from background, but also significantly reduces the number of objects to index. In this way, the effectiveness and efficiency are both achieved, which better suits big data retrieval.
Subsequently, feature coding on partitioned objects generates much meaningful representation, and object level connectivity also introduces novel clues into the reranking. Extensive experiments on three popular object retrieval benchmark datasets (Oxford Buildings, Paris, Holiday) show the effectiveness of our method in retrieving small objects out of big data. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> Query expansion is a popular method to improve the quality of image retrieval with both conventional and CNN representations. It has been so far limited to global image similarity. This work focuses on diffusion, a mechanism that captures the image manifold in the feature space. The diffusion is carried out on descriptors of overlapping image regions rather than on a global image descriptor like in previous approaches. An efficient off-line stage allows optional reduction in the number of stored regions. In the on-line stage, the proposed handling of unseen queries in the indexing stage removes additional computation to adjust the precomputed data. We perform diffusion through a sparse linear system solver, yielding practical query times well below one second. Experimentally, we observe a significant boost in performance of image retrieval with compact CNN descriptors on standard benchmarks, especially when the query object covers only a small part of the image. Small objects have been a common failure case of CNN-based retrieval. <s> BIB007
Retrieving objects that cover only a small portion of an image is challenging due to 1) the few detected local features and 2) the large amount of background noise. The Instance Search task in the TRECVID campaign and logo retrieval are two important venues/applications of this task. Generally speaking, both can be tackled with similar pipelines. For keypoint-based methods, the spatial context among local features is important to discriminate target objects from others, especially for rigid objects. Examples include BIB001 , BIB002 , BIB004 . Other effective strategies include burstiness handling BIB003 (discussed in Section 3.4.3), accounting for the different inlier ratios between the query and target objects BIB005 , etc. A second class of methods uses effective region proposals BIB006 or multi-scale image patches BIB007 as object region candidates. In BIB007 , a recent state-of-the-art method, a regional diffusion mechanism based on neighborhood graphs is proposed to further improve the recall of small objects.
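The region-candidate idea can be sketched simply: represent each database image by descriptors of several candidate regions (proposals or multi-scale patches) and score the image by its best-matching region, so a small object is not drowned out by background in a single global descriptor. The region descriptors and scoring rule below are our own illustrative assumptions, not the exact formulation of BIB006 or BIB007 .

```python
import numpy as np

def search_by_regions(query_desc, image_region_descs):
    """Rank images by the cosine similarity of their best region
    candidate to the query descriptor (max-pooling over regions)."""
    def l2n(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)
    q = l2n(np.asarray(query_desc, float))
    scores = [float((l2n(np.asarray(r, float)) @ q).max())   # best region per image
              for r in image_region_descs]
    return np.argsort(scores)[::-1]                          # images ranked by peak region
```

With a global descriptor, an image dominated by clutter would score poorly even if one region contains the query object; max-pooling over regions recovers it.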
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. 
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark Krizhevsky et al. [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting.
Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. 
Source code and models to reproduce the experiments in the paper is made publicly available. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). 
This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. 
By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Deeper neural networks are more difficult to train. 
We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward units activation of the trained network, at a certain layer of the network, is used as a generic representation of an input image for a task with relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. 
This paper introduces and investigates several factors affecting the transferability of such representations. It includes parameters for training of the source ConvNet such as its architecture, distribution of the training data, etc. and also the parameters of feature extraction such as layer of the trained ConvNet, dimensionality reduction, etc. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered based on their similarity to the source task such that a correlation between the performance of tasks and their similarity to the source task w.r.t. the proposed factors is observed. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin. <s> BIB013
Popular CNN Architectures. Several CNN models serve as good choices for extracting features, including AlexNet BIB002 , VGGNet BIB004 , GoogLeNet BIB009 and ResNet BIB011 , which are listed in Table 2 . Briefly, a CNN can be viewed as a set of non-linear functions and is composed of a number of layers such as convolution, pooling, non-linearities, etc. A CNN has a hierarchical structure: from bottom to top layers, the image undergoes convolution with filters, and the receptive field of these image filters increases. Filters in the same layer have the same size but different parameters. AlexNet BIB002 was the earliest of these networks to be proposed; it has five convolutional layers and three fully connected (FC) layers. It has 96 filters of size 11 × 11 × 3 in the first layer and 256 filters of size 3 × 3 × 192 in the 5th layer. Zeiler et al. BIB003 observe that the filters are sensitive to certain visual patterns and that these patterns evolve from low-level bars in bottom layers to high-level objects in top layers. For low-level and simple visual stimuli, the CNN filters act as the detectors in the local hand-crafted features, but for high-level and complex stimuli, the CNN filters have distinct characteristics that depart from SIFT-like detectors. AlexNet has been shown to be outperformed by newer networks such as VGGNet, which has the largest number of parameters. GoogLeNet and ResNet won the ILSVRC 2014 and 2015 challenges, respectively, showing that CNNs are more effective with more layers. A full review of these networks is beyond the scope of this paper, and we refer readers to BIB002 , BIB005 , BIB004 for details. Datasets for Pre-Training. Several large-scale recognition datasets are used for CNN pre-training. Among them, the ImageNet dataset BIB001 is most commonly used. It contains 1.2 million images of 1,000 semantic classes and is usually thought of as being generic.
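To make the layer shapes above concrete, the following sketch computes the spatial size of the activation maps with the standard convolution output-size formula, using AlexNet-style conv1 hyper-parameters (227 × 227 input, 11 × 11 filters, stride 4, followed by 3 × 3 max pooling with stride 2; these values are illustrative, not taken from the survey text itself):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial size of a conv/pool output: floor((size - kernel + 2*pad) / stride) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

# AlexNet-style conv1: 227x227x3 input, 96 filters of size 11x11x3, stride 4
conv1 = conv_out(227, kernel=11, stride=4)   # -> 55
# 3x3 max pooling with stride 2 follows
pool1 = conv_out(conv1, kernel=3, stride=2)  # -> 27

print(conv1, pool1)  # 55 27
```

The resulting 27 × 27 maps, one per filter, are exactly the activation maps from which local descriptors are later taken.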
Another data source for pre-training is the Places-205 dataset BIB006 , which is twice as large as ImageNet but has five times fewer classes. It is a scene-centric dataset depicting various indoor and outdoor scenes. A hybrid dataset combining the Places-205 and ImageNet datasets has also been used for pre-training BIB006 . The resulting HybridNet is evaluated in BIB009 , BIB011 , BIB012 for instance retrieval. The Transfer Issue. Comprehensive evaluations of various CNNs on instance retrieval have been conducted in several recent works BIB012 , , BIB013 . The transfer effect is of primary concern: instance retrieval, as a target task, is considered in BIB012 to lie farthest from the source task, i.e., ImageNet classification. These studies reveal some critical insights into the transfer process. First, during model transfer, features extracted from different layers exhibit different retrieval performance. Experiments confirm that the top layers may exhibit lower generalization ability than the layers below them. For example, for AlexNet pre-trained on ImageNet, it is shown that FC6, FC7, and FC8 are in descending order of retrieval accuracy BIB012 . It is also shown in BIB010 , BIB013 that the pool5 feature of AlexNet and VGGNet is even superior to FC6 when proper encoding techniques are employed. Second, the source training set is relevant to retrieval accuracy on different datasets. For example, Azizpour et al. BIB012 report that HybridNet yields the best performance on Holidays after PCA. They also observe that AlexNet pre-trained on ImageNet is superior to PlacesNet and HybridNet on the Ukbench dataset, which contains common objects instead of architectures or scenes. So the similarity of the source and target tasks plays a critical role in instance retrieval when using a pre-trained CNN model.
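The pooling step mentioned above can be sketched in a few lines: a conv-layer activation tensor is collapsed over its spatial dimensions by max or average pooling into a single global descriptor, L2-normalized, and compared under Euclidean distance. The shapes and random tensors below are toy stand-ins for real pool5/conv5 activations, not outputs of an actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def global_descriptor(feature_maps, pooling="max"):
    """Pool an activation tensor (H x W x C) into a C-dim global
    descriptor, then L2-normalize it for Euclidean comparison."""
    pool = feature_maps.max(axis=(0, 1)) if pooling == "max" \
        else feature_maps.mean(axis=(0, 1))
    return pool / (np.linalg.norm(pool) + 1e-12)

# Toy stand-ins for conv5 activations of a query and a database image (13 x 13 x 256)
query_maps = rng.random((13, 13, 256))
db_maps = rng.random((13, 13, 256))

q = global_descriptor(query_maps, "max")
d = global_descriptor(db_maps, "avg")
dist = np.linalg.norm(q - d)  # smaller distance = more similar image
print(q.shape, dist >= 0.0)
```

Concatenating such pooled descriptors from several layers, as studied above, amounts to stacking the outputs of `global_descriptor` for each layer before normalization.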
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Many visual search and matching systems represent images using sparse sets of "visual words": descriptors that have been quantized by assignment to the best-matching symbol in a discrete vocabulary. Errors in this quantization procedure propagate throughout the rest of the system, either harming performance or requiring correction using additional storage or processing. 
This paper aims to reduce these quantization errors at source, by learning a projection from descriptor space to a new Euclidean space in which standard clustering techniques are more likely to assign matching descriptors to the same cluster, and nonmatching descriptors to different clusters. To achieve this, we learn a non-linear transformation model by minimizing a novel margin-based cost function, which aims to separate matching descriptors from two classes of non-matching descriptors. Training data is generated automatically by leveraging geometric consistency. Scalable, stochastic gradient methods are used for the optimization. For the case of particular object retrieval, we demonstrate impressive gains in performance on a ground truth dataset: our learnt 32-D descriptor without spatial re-ranking outperforms a baseline method using 128-D SIFT descriptors with spatial re-ranking. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Over the last decade, the availability of public image repositories and recognition benchmarks has enabled rapid progress in visual object category and instance detection. Today we are witnessing the birth of a new generation of sensing technologies capable of providing high quality synchronized videos of both color and depth, the RGB-D (Kinect-style) camera. With its advanced sensing capabilities and the potential for mass adoption, this technology represents an opportunity to dramatically increase robotic object recognition, manipulation, navigation, and interaction capabilities. In this paper, we introduce a large-scale, hierarchical multi-view object dataset collected using an RGB-D camera. The dataset contains 300 objects organized into 51 categories and has been made publicly available to the research community so as to enable rapid progress based on this promising technology.
This paper describes the dataset collection procedure and introduces techniques for RGB-D based object recognition and detection, demonstrating that combining color and depth information substantially improves quality of results. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13.
We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. 
<s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. 
In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Latest results indicate that features learned via convolutional neural networks outperform previous descriptors on classification tasks by a large margin. It has been shown that these networks still work well when they are applied to datasets or recognition tasks different from those they were trained on. However, descriptors like SIFT are not only used in recognition but also for many correspondence problems that rely on descriptor matching. In this paper we compare features from various layers of convolutional neural nets to standard SIFT descriptors. We consider a network that was trained on ImageNet and another one that was trained without supervision. Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching. This paper has been merged with arXiv:1406.6909 <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. 
Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal. First, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity. Second, it is shown that descriptor dimensionality reduction can also be formulated as a convex optimisation problem, using Mahalanobis matrix nuclear norm regularisation. Both formulations are based on discriminative large margin learning constraints. As the third contribution, we evaluate the performance of the compressed descriptors, obtained from the learnt real-valued descriptors by binarisation. Finally, we propose an extension of our learning formulations to a weakly supervised case, which allows us to learn the descriptors from unannotated image collections. It is demonstrated that the new learning methods improve over the state of the art in descriptor learning on the annotated local patches data set of Brown et al. and unannotated photo collections of Philbin et al. . 
<s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. 
This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. 
But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> In this paper we present an efficient and accurate method to aggregate a set of Deep Convolutional Neural Network (CNN) responses, extracted from a set of image windows. CNN features are usually computed on the whole frame or with a dense multi scale approach. There is evidence that using multiple windows yields a better image representation nonetheless it is still not clear how windows should be sampled and how CNN responses should be aggregated. Instead of sampling the image densely in scale and space we show that selecting a few hundred windows is enough to obtain an effective image signature. We show how to use Fisher Vectors and PCA to obtain a short and highly descriptive signature that can be used effectively for image retrieval. We test our method on two relevant computer vision tasks: image retrieval and image tagging. We report state-of-the art results for both tasks on three standard datasets. 
<s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks. <s> BIB017 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design. 
<s> BIB018 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel "RomePatches" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval. <s> BIB019 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> This paper considers the task of image search using the Bag-of-Words (BoW) model. In this model, the precision of visual matching plays a critical role. Conventionally, local cues of a keypoint, e.g., SIFT, are employed. However, such strategy does not consider the contextual evidences of a keypoint, a problem which would lead to the prevalence of false matches. To address this problem and enable accurate visual matching, this paper proposes to integrate discriminative cues from multiple contextual levels, i.e., local, regional, and global, via probabilistic analysis. "True match" is defined as a pair of keypoints corresponding to the same scene location on all three levels (Fig. 1). Specifically, the Convolutional Neural Network (CNN) is employed to extract features from regional and global patches. We show that CNN feature is complementary to SIFT due to its semantic awareness and compares favorably to several other descriptors such as GIST, HSV, etc. 
To reduce memory usage, we propose to index CNN features outside the inverted file, communicated by memory-efficient pointers. Experiments on three benchmark datasets demonstrate that our method greatly promotes the search accuracy when CNN feature is integrated. We show that our method is efficient in terms of time cost compared with the BoW baseline, and yields competitive accuracy with the state-of-the-arts. <s> BIB020 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin. <s> BIB021 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image- and region-wise representations pooled from an object detection CNN such as Faster R-CNN. 
We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results. <s> BIB022 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB023 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. 
The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB024 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each one of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our Deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need of retraining. <s> BIB025 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. 
In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. <s> BIB026
FC Descriptors. The most straightforward idea is to extract the descriptor from the fully-connected layer of the network BIB005, BIB006, BIB020, e.g., the 4,096-dim FC6 or FC7 descriptor in AlexNet. The FC descriptor is generated after layers of convolutions with the input image, has a global receptive field, and thus can be viewed as a global feature. It yields fair retrieval accuracy under Euclidean distance and can be improved with power normalization BIB001.

Intermediate Local Features. Many recent retrieval methods BIB012, BIB013, BIB021 focus on local descriptors in the intermediate layers. In these methods, lower-level convolutional filters (kernels) are used to detect local visual patterns. Viewed as local detectors, these filters have a smaller receptive field and are densely applied over the entire image. Compared with the global FC feature, local detectors are more robust to image transformations such as truncation and occlusion, in ways that are similar to the local invariant detectors (Section 3.2). Local descriptors are tightly coupled with these intermediate local detectors, i.e., they are the responses of the input image to these convolution operations. In other words, after the convolutions, the resulting activation maps can be viewed as a feature ensemble, which is called the "column feature" in this survey. For example, in AlexNet BIB004 there are n = 96 detectors (convolutional filters) in the 1st convolutional layer. These filters produce n = 96 heat maps of size 27 × 27 (after max pooling). Each pixel in the maps has a receptive field of 19 × 19 and records the response of the image w.r.t. the corresponding filter BIB012, BIB013, BIB021. The column feature is therefore of size 1 × 1 × 96 (Fig. 2) and can be viewed as a description of a certain patch in the original image. Each dimension of this descriptor denotes the level of activation of the corresponding detector and resembles the SIFT descriptor to some extent.
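As a minimal illustration of the idea, the numpy sketch below reshapes an activation volume into its set of column features; the 96 × 27 × 27 shape follows the AlexNet conv1 example above, and the random volume is only a stand-in for real network activations:

```python
import numpy as np

def column_features(activations):
    """Turn a C x H x W activation volume into H*W column features.

    Each column feature is the C-dim vector of filter responses at one
    spatial position, describing the image patch covered by that
    position's receptive field (e.g., C=96, H=W=27 for AlexNet conv1
    after max pooling).
    """
    c, h, w = activations.shape
    return activations.reshape(c, h * w).T  # shape (H*W, C)

# Stand-in for a real conv-layer output.
maps = np.random.rand(96, 27, 27)
cols = column_features(maps)
# 27*27 = 729 local descriptors, each 96-dim.
```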
The column feature first appears in BIB007, where Razavian et al. max-pool over regularly partitioned windows on the feature maps and then concatenate the pooled values across all filter responses, yielding column-like features. In BIB014, column features from multiple layers of the network are concatenated, forming the "hypercolumn" feature.

In hybrid methods, the feature extraction process consists of patch detection and description steps. For the first step, the literature has seen three major types of region detectors. The first is grid image patches. For example, in BIB008, a two-scale sliding window strategy is employed to generate patches. In BIB005, the dataset images are first cropped and rotated, and then divided into patches of different scales, the union of which covers the whole image. The second type is invariant keypoint/region detectors. For instance, difference-of-Gaussian feature points are used in ; the MSER region detector is leveraged in BIB009. Third, region proposals also provide useful information on the locations of potential objects. Mopuri et al. BIB015 employ selective search to generate image patches, while EdgeBox BIB010 is used in BIB016. In BIB022, the region proposal network (RPN) BIB026 is applied to locate potential objects in an image. The use of CNN as a region descriptor is validated in BIB009, which shows that CNN is superior to SIFT in image matching except on blurred images. Given the image patches, hybrid CNN methods usually employ the FC or pooled intermediate CNN features. Examples using the FC descriptors include BIB005, BIB008, BIB015, BIB017. In these works, the 4,096-dim FC features are extracted from multi-scale image regions BIB005, BIB008, BIB017 or object proposals BIB015. On the other hand, Razavian et al. BIB007 also use the intermediate descriptors after max-pooling as region descriptors.
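For the grid-patch detectors, a simplified two-scale sliding-window sketch is given below; the patch sizes and stride fraction are illustrative choices, not values taken from any of the cited papers:

```python
import numpy as np

def grid_patches(image, sizes=(128, 64), stride_frac=0.5):
    """Crop patches on a regular grid at several scales.

    A simplified version of the multi-scale sliding windows used by
    hybrid methods; each scale slides a square window with 50% overlap.
    """
    h, w = image.shape[:2]
    patches = []
    for s in sizes:
        step = max(1, int(s * stride_frac))
        for y in range(0, h - s + 1, step):
            for x in range(0, w - s + 1, step):
                patches.append(image[y:y + s, x:x + s])
    return patches

img = np.zeros((256, 256, 3))
ps = grid_patches(img)
# 3x3 = 9 patches at scale 128 plus 7x7 = 49 at scale 64.
```

Each patch would then be fed to the CNN to obtain its FC or pooled intermediate descriptor.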
Dataset            # Images    # Classes   Content
BIB023             163,671     713         Landmark
Tokyo TM BIB024    112,623     n.a.        Landmark
MV RGB-D BIB003    250,000     300         Household object
Product BIB018     101,945×2   n.a.        Furniture

The above methods use pre-trained models for patch feature extraction. Building on the hand-crafted detectors, patch descriptors can also be learned through a CNN in either a supervised BIB019 or an unsupervised manner, which improves over the previous works on SIFT descriptor learning BIB011, BIB002. Yi et al. BIB025 further propose an end-to-end learning method integrating the region detector, orientation estimator and feature descriptor in a single pipeline.
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. 
In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtrac- tion and the negative evidence, i.e., a visual word that is mutually miss- ing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality re- duction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. 
<s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> Deep Convolutional Neural Networks (DCNN) have established a remarkable performance benchmark in the field of image classification, displacing classical approaches based on hand-tailored aggregations of local descriptors. Yet DCNNs impose high computational burdens both at training and at testing time, and training them requires collecting and annotating large amounts of training data. Supervised adaptation methods have been proposed in the literature that partially re-learn a transferred DCNN structure from a new target dataset. 
Yet these require expensive bounding-box annotations and are still computationally expensive to learn. In this paper, we address these shortcomings of DCNN adaptation schemes by proposing a hybrid approach that combines conventional, unsupervised aggregators such as Bag-of-Words (BoW), with the DCNN pipeline by treating the output of intermediate layers as densely extracted local descriptors. We test a variant of our approach that uses only intermediate DCNN layers on the standard PASCAL VOC 2007 dataset and show performance significantly higher than the standard BoW model and comparable to Fisher vector aggregation but with a feature that is 150 times smaller. A second variant of our approach that includes the fully connected DCNN layers significantly outperforms Fisher vector schemes and performs comparably to DCNN approaches adapted to Pascal VOC 2007, yet at only a small fraction of the training and testing cost. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. 
As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It has also been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregation approaches developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptors. In this paper we investigate possible ways to aggregate local deep features to produce compact global descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides arguably the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> We propose a simple and straightforward way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. 
We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. We then propose specific non-parametric schemes for both spatial- and channel-wise weighting that boost the effect of highly active spatial responses and at the same time regulate burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state-of-the-art for approaches based on pre-trained networks. We also provide an easy-to-use, open source implementation that reproduces our results. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. 
<s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of CNN using the bag of words aggregation scheme (BoW). Assigning each local array of activations in a convolutional layer to a visual word produces an assignment map, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling at a subset of the challenging TRECVid INS benchmark. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin. 
<s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite the astonishing performance, deep features extracted from low-level neurons are still below satisfaction, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying highlevel context and improving the descriptive power of lowlevel and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatial-weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets. <s> BIB013
When column features are extracted, an image is represented by a set of descriptors. To aggregate these descriptors into a global representation, two strategies are currently adopted: encoding and direct pooling (Fig. 2).

Encoding. A set of column features resembles a set of SIFT features, so standard encoding schemes can be directly employed. The most commonly used methods are VLAD BIB001 and FV BIB002; a brief review of VLAD and FV can be found in Section 3.3.2. A milestone work is BIB005, in which the column features are encoded into VLAD for the first time. This idea was later extended to CNN model fine-tuning BIB010. The BoW encoding can also be leveraged, as is the case in BIB006: the column features within each layer are aggregated into a BoW vector which is then concatenated across layers. An exception to these fixed-length representations is BIB011, in which the column features are quantized with a codebook of size 25k and an inverted index is employed for efficiency.

Pooling. A major difference between the CNN column feature and SIFT is that the former has an explicit meaning in each dimension, i.e., the response of a particular region of the input image to a filter. Therefore, apart from the encoding schemes mentioned above, direct pooling techniques can produce discriminative features as well. A milestone work in this direction is the maximum activations of convolutions (MAC) descriptor proposed by Tolias et al. BIB007. Without distorting or cropping images, MAC computes a global descriptor in a single forward pass. Specifically, MAC takes the maximum value of each intermediate feature map and concatenates these values within a convolutional layer. In its multi-region version, the integral image and an approximate maximum operator are used for fast computation. The regional MAC descriptors are subsequently sum-pooled along with a series of normalization and PCA-whitening operations BIB003.
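The single-region MAC computation reduces to a per-channel spatial maximum followed by L2 normalization. A minimal numpy sketch (the 512 × 37 × 50 shape is an arbitrary stand-in for a VGG last-convolutional-layer output, and the random volume replaces real activations):

```python
import numpy as np

def mac(activations):
    """Maximum activations of convolutions (MAC): take the spatial
    maximum of each of the C feature maps in a C x H x W volume,
    concatenate the C maxima, and L2-normalize the result."""
    v = activations.reshape(activations.shape[0], -1).max(axis=1)
    return v / (np.linalg.norm(v) + 1e-12)

feat = mac(np.random.rand(512, 37, 50))  # one 512-dim global descriptor
```

Regional variants compute the same per-channel maximum over sub-windows of the maps and sum-pool the resulting regional vectors.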
We also note in this survey that several other works BIB004, BIB012, BIB008 employ ideas similar to BIB007, applying max or average pooling on the intermediate feature maps, and that Razavian et al. BIB004 are the first to do so. It has been observed that the last convolutional layer (e.g., pool5 in VGGNet) after pooling usually yields accuracy superior to the FC descriptors and the other convolutional layers BIB012. Apart from direct feature pooling, it is also beneficial to assign specific weights to the feature maps within each layer before pooling. In BIB008, Babenko et al. inject the prior knowledge that objects tend to be located toward image centers by imposing a 2-D Gaussian mask on the feature maps before sum pooling. Xie et al. BIB013 improve the MAC representation BIB007 by propagating high-level semantics and spatial context to low-level neurons, improving the descriptive ability of these bottom-layer activations. With a more general weighting strategy, Kalantidis et al. BIB009 perform both feature-map-wise and channel-wise weighting, which highlights highly active spatial responses while reducing burstiness effects.
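The center-prior weighting of Babenko et al. amounts to sum pooling under a 2-D Gaussian mask centered on the feature maps. The sketch below is a minimal version of that idea; the sigma_frac parameter is an illustrative choice introduced here, not a value from the paper:

```python
import numpy as np

def gaussian_weighted_sum_pool(activations, sigma_frac=0.3):
    """Sum-pool a C x H x W volume under a 2-D Gaussian mask centered
    on the map, encoding the prior that objects tend to sit near the
    image center, then L2-normalize the pooled vector."""
    c, h, w = activations.shape
    ys = np.arange(h) - (h - 1) / 2.0
    xs = np.arange(w) - (w - 1) / 2.0
    sy, sx = sigma_frac * h, sigma_frac * w
    mask = np.exp(-(ys[:, None] ** 2) / (2 * sy ** 2)
                  - (xs[None, :] ** 2) / (2 * sx ** 2))
    v = (activations * mask).reshape(c, -1).sum(axis=1)
    return v / (np.linalg.norm(v) + 1e-12)

feat = gaussian_weighted_sum_pool(np.random.rand(512, 37, 37))
```

Because the mask decays away from the center, an activation at the map center contributes more to the pooled descriptor than an identical activation at a corner.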
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. 
WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. 
We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> Over the last decade, the availability of public image repositories and recognition benchmarks has enabled rapid progress in visual object category and instance detection. Today we are witnessing the birth of a new generation of sensing technologies capable of providing high quality synchronized videos of both color and depth, the RGB-D (Kinect-style) camera. With its advanced sensing capabilities and the potential for mass adoption, this technology represents an opportunity to dramatically increase robotic object recognition, manipulation, navigation, and interaction capabilities. In this paper, we introduce a large-scale, hierarchical multi-view object dataset collected using an RGB-D camera. The dataset contains 300 objects organized into 51 categories and has been made publicly available to the research community so as to enable rapid progress based on this promising technology. This paper describes the dataset collection procedure and introduces techniques for RGB-D based object recognition and detection, demonstrating that combining color and depth information substantially improves quality of results. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtrac- tion and the negative evidence, i.e., a visual word that is mutually miss- ing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. 
Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality re- duction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. 
We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. 
In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. 
<s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image- and region-wise representations pooled from an object detection CNN such as Faster R-CNN. We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results. <s> BIB011
The nature of the datasets used in fine-tuning is key to learning discriminative CNN features. ImageNet BIB003 only provides images with class labels, so a pre-trained CNN model is competent at discriminating images of different object/scene classes, but may be less effective at telling apart images that fall into the same class (e.g., architecture) yet depict different instances (e.g., "Eiffel Tower" and "Notre-Dame"). It is therefore important to fine-tune the CNN model on task-oriented datasets. The datasets used for fine-tuning in recent years are shown in Table 3 ; buildings and common objects are the focus. The milestone work on fine-tuning is BIB006 . It collects the Landmarks dataset in a semi-automated manner: automated searching for popular landmarks in the Yandex search engine, followed by a manual estimation of the proportion of relevant images among the top ranks. This dataset contains 672 classes of various architectures, and the fine-tuned network produces superior features on landmark-related datasets such as Oxford5k BIB001 and Holidays BIB002 , but decreased performance on Ukbench, where common objects are presented. Babenko et al. BIB006 have also fine-tuned CNNs on the Multi-view RGB-D dataset BIB004 , which contains turntable views of 300 household objects, in order to improve performance on Ukbench. The Landmarks dataset is later used by Gordo et al. BIB008 for fine-tuning, after an automatic cleaning approach based on SIFT matching. In BIB009 , Radenović et al. employ retrieval and Structure-from-Motion methods to build 3D landmark models so that images depicting the same architecture can be grouped. Using this labeled dataset, the learned linear discriminative projections (denoted as Lw in Table 5 ) outperform the previous whitening technique BIB005 . Another dataset, called Tokyo Time Machine, is collected using Google Street View Time Machine, which provides images depicting the same places over time BIB010 . 
While most of the above datasets focus on landmarks, Bell et al. BIB007 build a Product dataset consisting of furniture, developing a crowdsourced pipeline to draw connections between in-situ objects and the corresponding products. It is also feasible to fine-tune on the query sets suggested in BIB011 , but this method may not adapt well to new query types.
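The 3D-model-based grouping of BIB009 ultimately serves to mine training examples: hard positives (same landmark but dissimilar descriptors) and hard negatives (a different landmark with a small CNN-descriptor distance). The routine below is a toy sketch of such mining, using plain class labels as a stand-in for the SfM-based grouping; the function and variable names are illustrative, not from the cited implementations.

```python
import numpy as np

def mine_pairs(desc, labels):
    """Toy hard-example mining on L2-normalized descriptors.

    Hard positive: the least similar image of the same class (class labels
    stand in here for the SfM/3D-model grouping used in the paper).
    Hard negative: the most similar image of a different class, i.e. the
    one with the smallest CNN-descriptor distance.
    """
    desc = desc / np.linalg.norm(desc, axis=1, keepdims=True)
    labels = np.asarray(labels)
    sim = desc @ desc.T                      # cosine similarity matrix
    triplets = []
    for i in range(len(labels)):
        same = labels == labels[i]
        same[i] = False                      # an image is not its own positive
        diff = labels != labels[i]
        if not same.any() or not diff.any():
            continue
        pos = int(np.argmin(np.where(same, sim[i], np.inf)))   # hardest positive
        neg = int(np.argmax(np.where(diff, sim[i], -np.inf)))  # hardest negative
        triplets.append((i, pos, neg))
    return triplets
```

Each returned triplet (anchor, positive, negative) can then feed a siamese or triplet-loss network of the kind discussed in the next subsection.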
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design. 
<s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. 
We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. 
The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB006
The CNN architectures used in fine-tuning mainly fall into two types: classification-based networks and verification-based networks. The classification-based network is trained to classify architectures into pre-defined categories. Since there is usually no class overlap between the training set and the query images, the learned embedding (e.g., FC6 or FC7 in AlexNet) is used for Euclidean-distance-based retrieval. This train/test strategy is employed in BIB001 , in which the last FC layer is modified to have 672 nodes corresponding to the number of classes in the Landmark dataset. The verification-based network uses either a siamese architecture with a pairwise loss or a triplet loss, and has been more widely employed for fine-tuning. A standard siamese network based on AlexNet and the contrastive loss is employed in BIB002 . In BIB004 , Radenović et al. propose to replace the FC layers with a MAC layer BIB003 . Moreover, with the 3D architecture models built in BIB004 , training pairs can be mined: positive image pairs are selected based on the number of co-observed 3D points (matched SIFT features), while hard negatives are defined as those with small distances in their CNN descriptors. These image pairs are fed into the siamese network, and the contrastive loss is calculated from the ℓ2-normalized MAC features. In a concurrent work to BIB004 , Gordo et al. BIB005 fine-tune a triplet-loss network and a region proposal network on the Landmark dataset BIB001 . The superiority of BIB005 lies in its localization ability, which excludes the background from feature learning and extraction. In both works, the fine-tuned models exhibit state-of-the-art accuracy on landmark retrieval datasets including Oxford5k, Paris6k and Holidays, and also good generalization ability on Ukbench ( Table 5 ). In BIB006 , a VLAD-like layer, amenable to training via back-propagation, is plugged into the network after the last convolutional layer. 
Meanwhile, a new triplet loss is designed to make use of the weakly supervised Google Street View Time Machine data.
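To make the pairwise training described above concrete, the following sketch combines per-channel spatial max-pooling into a MAC descriptor, ℓ2 normalization, and a contrastive loss on a descriptor pair. This is a minimal illustration, not the cited implementations; the margin value is arbitrary.

```python
import numpy as np

def mac(feature_map):
    """MAC descriptor: per-channel spatial max over a (C, H, W)
    convolutional activation map, followed by L2 normalization."""
    v = feature_map.reshape(feature_map.shape[0], -1).max(axis=1)
    return v / np.linalg.norm(v)

def contrastive_loss(a, b, is_match, margin=0.7):
    """Contrastive loss on two L2-normalized descriptors: matching pairs
    are pulled together; non-matching pairs are pushed at least `margin`
    apart (the margin value here is arbitrary)."""
    d = np.linalg.norm(a - b)
    return 0.5 * d ** 2 if is_match else 0.5 * max(0.0, margin - d) ** 2
```

In the siamese setting, both branches share weights, so the gradient of this loss flows back through the same MAC layer and convolutional stack for each image of the pair.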
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. 
We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. 
This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. 
In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> In the well-known Bag-of-Words model, local features, such as the SIFT descriptor, are extracted and quantized into visual words. Then, an index is created to reduce computational burden. However, local clues serve as low-level representations that can not represent high-level semantic concepts. Recently, the success of deep features extracted from convolutional neural networks(CNN) has shown promising results toward bridging the semantic gap. 
Inspired by this, we attempt to introduce deep features into inverted index based image retrieval and thus propose the DeepIndex framework. Moreover, considering the compensation of different deep features, we incorporate multiple deep features from different fully connected layers, resulting in the multiple DeepIndex. We find the optimal integration of one midlevel deep feature and one high-level deep feature, from two different CNN architectures separately. This can be treated as an attempt to further reduce the semantic gap. Extensive experiments on three benchmark datasets demonstrate that, the proposed DeepIndex method is competitive with the state-of-the-art on Holidays(85:65% mAP), Paris(81:24% mAP), and UKB(3:76 score). In addition, our method is efficient in terms of both memory and time cost. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> Fisher Vectors (FV) and Convolutional Neural Networks (CNN) are two image classification pipelines with different strengths. While CNNs have shown superior accuracy on a number of classification tasks, FV classifiers are typically less costly to train and evaluate. 
We propose a hybrid architecture that combines their strengths: the first unsupervised layers rely on the FV while the subsequent fully-connected supervised layers are trained with back-propagation. We show experimentally that this hybrid architecture significantly outperforms standard FV systems without incurring the high cost that comes with CNNs. We also derive competitive mid-level features from our architecture that are readily applicable to other class sets and even to new tasks. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of CNN using the bag of words aggregation scheme (BoW). Assigning each local array of activations in a convolutional layer to a visual word produces an assignment map, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling at a subset of the challenging TRECVid INS benchmark. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> The Convolutional Neural Networks (CNNs) have achieved breakthroughs on several image retrieval benchmarks. Most previous works re-formulate CNNs as global feature extractors used for linear scan. This paper proposes a Multi-layer Orderless Fusion (MOF) approach to integrate the activations of CNN in the Bag-of-Words (BoW) framework. Specifically, through only one forward pass in the network, we extract multi-layer CNN activations of local patches. 
Activations from each layer are aggregated in one BoW model, and several BoW models are combined with late fusion. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method. <s> BIB011
The encoding/indexing procedure of hybrid methods resembles that of SIFT-based retrieval, e.g., VLAD/FV encoding under a small codebook or the inverted index under a large codebook. VLAD/FV encoding schemes such as BIB003 , BIB006 follow the standard practice established for SIFT features BIB001 , BIB002 , so we do not detail them here. On the other hand, several works exploit the inverted index on patch-based CNN features BIB010 , BIB007 , BIB011 ; again, standard techniques from SIFT-based methods such as HE are employed BIB011 . Apart from the above-mentioned strategies, we notice that several works BIB004 , BIB005 , BIB008 extract several region descriptors per image to perform many-to-many matching, called "spatial search" BIB004 . This method improves the translation and scale invariance of the retrieval system but may encounter efficiency problems. A reverse strategy to applying encoding on top of CNN activations is to build a CNN structure (mainly consisting of FC layers) on top of SIFT-based representations such as FV. By training a classification model on natural images, the intermediate FC layer can be used for retrieval BIB009 .
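The shared machinery is easy to see in code: the VLAD step applied to patch-level CNN descriptors is exactly the residual aggregation used with SIFT. Below is a minimal sketch under illustrative names, omitting refinements such as intra-normalization and PCA whitening.

```python
import numpy as np

def vlad(descriptors, centroids):
    """VLAD: assign each local descriptor (SIFT or patch-level CNN) to its
    nearest codebook centroid, accumulate the residuals per centroid, and
    L2-normalize the concatenated result."""
    k, d = centroids.shape
    # distance from every descriptor to every centroid
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    enc = np.zeros((k, d))
    for vec, c in zip(descriptors, assign):
        enc[c] += vec - centroids[c]          # accumulate residuals
    enc = enc.ravel()
    norm = np.linalg.norm(enc)
    return enc / norm if norm > 0 else enc
```

The output is a fixed-length k*d vector regardless of how many local descriptors the image produced, which is what makes the representation directly comparable across images.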
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. 
We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). 
This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. 
We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. 
The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. 
Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB009
In this survey, we categorize the current literature into six fine-grained classes. The differences and some representative works of the six categories are summarized in Tables 1 and 5 . Our observations are as follows. First, the hybrid method can be viewed as a transition zone from SIFT- to CNN-based methods. It resembles the SIFT-based methods in all aspects except that it extracts CNN features as the local descriptor. Since the network is accessed multiple times during patch feature extraction, the efficiency of the feature extraction step may be compromised. Second, the single-pass CNN methods tend to combine the individual steps of the SIFT-based and hybrid methods. In Table 5 , the "pre-trained single-pass" category integrates the feature detection and description steps; in the "fine-tuned single-pass" methods, the image-level descriptor is usually extracted in an end-to-end mode, so that no separate encoding process is needed. In BIB009 , a "PCA" layer is integrated for discriminative dimension reduction, making a further step towards end-to-end feature learning. Third, fixed-length representations are gaining more popularity due to efficiency considerations. Such representations can be obtained by aggregating local descriptors (SIFT or CNN) BIB006 , BIB001 , BIB003 , BIB004 , by direct pooling BIB007 , BIB008 , or by end-to-end feature computation BIB005 , BIB009 . Usually, dimension reduction methods such as PCA can be employed on top of the fixed-length representations, and ANN search methods such as PQ BIB001 or hashing BIB002 can be used for fast retrieval.
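The fixed-length pipeline sketched above — aggregate local descriptors into one vector, reduce dimensions with PCA, then rank by similarity — can be illustrated as follows. This is a minimal, illustrative sketch with made-up dimensions and random data, not the implementation of any specific cited method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_pool(local_desc):
    """Aggregate an (n, d) matrix of local descriptors into one L2-normalized d-dim vector."""
    v = local_desc.sum(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def fit_pca(X, out_dim):
    """Learn a PCA projection from a matrix X of pooled vectors, shape (m, d)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:out_dim]          # top principal directions

def embed(local_desc, mean, proj):
    """Pooled descriptor -> compact fixed-length image representation."""
    z = proj @ (sum_pool(local_desc) - mean)
    return z / (np.linalg.norm(z) + 1e-12)

# Toy setup: 20 "images", each with 100 local 128-dim descriptors (SIFT-sized).
pooled = np.stack([sum_pool(rng.standard_normal((100, 128))) for _ in range(20)])
mean, proj = fit_pca(pooled, out_dim=16)

query = rng.standard_normal((100, 128))
q = embed(query, mean, proj)
db = np.stack([embed(rng.standard_normal((100, 128)), mean, proj) for _ in range(20)])

scores = db @ q                        # cosine similarity on unit vectors
ranking = np.argsort(-scores)          # ranked list of database indices
```

In practice, the sum pooling here stands in for richer aggregators such as VLAD or Fisher vectors, and the brute-force dot product would be replaced by PQ or hashing for large databases.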
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R d , the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> Semantic hashing[1] seeks compact binary codes of data-points so that the Hamming distance between codewords correlates with semantic similarity. In this paper, we show that the problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP hard. By relaxing the original problem, we obtain a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. By utilizing recent results on convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel data-point. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state-of-the art. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> SIFT-like local feature descriptors are ubiquitously employed in computer vision applications such as content-based retrieval, video analysis, copy detection, object recognition, photo tourism, and 3D reconstruction. 
Feature descriptors can be designed to be invariant to certain classes of photometric and geometric transformations, in particular, affine and intensity scale transformations. However, real transformations that an image can undergo can only be approximately modeled in this way, and thus most descriptors are only approximately invariant in practice. Second, descriptors are usually high dimensional (e.g., SIFT is represented as a 128-dimensional vector). In large-scale retrieval and matching problems, this can pose challenges in storing and retrieving descriptor data. We map the descriptor vectors into the Hamming space in which the Hamming metric is used to compare the resulting representations. This way, we reduce the size of the descriptors by representing them as short binary strings and learn descriptor invariance from examples. We show extensive experimental validation, demonstrating the advantage of the proposed approach. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> Approximate nearest neighbor search is an efficient strategy for large-scale image retrieval. Encouraged by the recent advances in convolutional neural networks (CNNs), we propose an effective deep learning framework to generate binary hash codes for fast image retrieval. Our idea is that when the data labels are available, binary codes can be learned by employing a hidden layer for representing the latent concepts that dominate the class labels. The utilization of the CNN also allows for learning image representations. Unlike other supervised methods that require pair-wised inputs for binary code learning, our method learns hash codes and image representations in a point-wised manner, making it suitable for large-scale datasets. Experimental results show that our method outperforms several state-of-the-art hashing algorithms on the CIFAR-10 and MNIST datasets. 
We further demonstrate its scalability and efficacy on a large-scale dataset of 1 million clothing images. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> With the rapid growth of web images, hashing has received increasing interests in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multi-level semantic structure of images associated with multiple labels have not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in term of ranking evaluation metrics when tested on multi-label image datasets. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. 
In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics. <s> BIB006
Hashing is a major solution to the approximate nearest neighbor problem. It can be categorized into locality-sensitive hashing (LSH) BIB001 and learning to hash. LSH is data-independent and is usually outperformed by learning to hash, a data-dependent hashing approach. For learning to hash, a recent survey BIB006 categorizes it into quantization and pairwise similarity preserving. The quantization methods are briefly discussed in Section 3.3.2. Among the pairwise similarity preserving methods, popular hand-crafted examples include spectral hashing BIB002 , LDA hashing BIB003 , etc. Recently, hashing has seen a major shift from hand-crafted to supervised hashing with deep neural networks. These methods take the original image as input and produce a learned feature before binarization BIB004 , BIB005 . Most of these methods, however, focus on class-level image retrieval, a task different from the instance retrieval discussed in this survey. For instance retrieval, when adequate training data can be collected, e.g., for architecture or pedestrians, deep hashing methods may be of critical importance.
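As a concrete illustration of the data-independent branch, below is a minimal sketch of the classic random-hyperplane LSH scheme: each bit records the sign of a projection, and the Hamming distance between codes approximates the angular distance between the original vectors. The dimensions, bit count, and data are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_lsh(dim, n_bits):
    """One random hyperplane per bit."""
    return rng.standard_normal((n_bits, dim))

def hash_vec(planes, v):
    """Binary code: sign of each projection."""
    return (planes @ v > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

planes = make_lsh(dim=128, n_bits=64)
q = rng.standard_normal(128)
near = q + 0.05 * rng.standard_normal(128)   # slightly perturbed copy of q
far = rng.standard_normal(128)               # unrelated vector

d_near = hamming(hash_vec(planes, q), hash_vec(planes, near))
d_far = hamming(hash_vec(planes, q), hash_vec(planes, far))
# similar vectors agree on far more bits than dissimilar ones
```

Learning-to-hash methods replace the random hyperplanes with projections fitted to the data (or, in the deep variants, with a network trained end-to-end before binarization).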
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. 
WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. 
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. ::: ::: We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets. At the same time, retrieval with the proposed similarity function is faster than the reference method. 
<s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> We propose a simple and straightforward way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. We then propose specific non-parametric schemes for both spatial- and channel-wise weighting that boost the effect of highly active spatial responses and at the same time regulate burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state-of-the-art for approaches based on pre-trained networks. We also provide an easy-to-use, open source implementation that reproduces our results. 
<s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It has also been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregation approaches developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptors. In this paper we investigate possible ways to aggregate local deep features to produce compact global descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides arguably the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. 
The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image- and region-wise representations pooled from an object detection CNN such as Faster R-CNN. We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results. <s> BIB010
Five popular instance retrieval datasets are used in this survey. Statistics of these datasets are summarized in Table 4 . Holidays BIB002 is collected by Jégou et al. from personal holiday albums, so most of the images are of various scene types. The database has 1,491 images composed of 500 groups of similar images. Each image group has one query, totaling 500 query images. Most SIFT-based methods employ the original images, except BIB004 , BIB005 , which manually rotate the images into upright orientation. Many recent CNN-based methods BIB007 , BIB009 , BIB008 also use the rotated version of Holidays. In Table 5 , results on both versions of Holidays are shown (separated by "/"). Rotating the images usually brings a 2-3 percent mAP improvement. Ukbench consists of 10,200 images of various content, such as objects, scenes, and CD covers. All the images are divided into 2,550 groups. Each group has four images depicting the same object/scene under various angles, illuminations, translations, etc. Each image in this dataset is taken as the query in turn, so there are 10,200 queries. Oxford5k BIB001 is collected by crawling images from Flickr using the names of 11 different landmarks in Oxford. A total of 5,062 images form the image database. The dataset defines five queries for each landmark by hand-drawn bounding boxes, so 55 query Regions of Interest (ROIs) exist in total. Each database image is assigned one of four labels: good, OK, junk, or bad. The first two labels denote true matches to the query ROIs, while "bad" denotes distractors. In junk images, less than 25 percent of the object is visible, or it undergoes severe occlusion or distortion, so these images have zero impact on retrieval accuracy. Flickr100k BIB003 contains 99,782 high-resolution images crawled from Flickr's 145 most popular tags. In the literature, this dataset is typically added to Oxford5k to test the scalability of retrieval algorithms.
Paris6k BIB003 consists of 6,412 images crawled from Flickr using the names of 11 specific Paris landmarks. Each landmark has five queries, so there are again 55 queries with bounding boxes. The database images are annotated with the same four labels as Oxford5k. Two major evaluation protocols exist for Oxford5k and Paris6k. For SIFT-based methods, the cropped query regions are usually used. For CNN-based methods, some employ the full-sized query images BIB006 , BIB009 ; others follow the standard cropping protocol, either by cropping the ROI and feeding it into the CNN BIB007 or by extracting CNN features from the full image and selecting those falling within the ROI BIB010 . Using the full image may lead to mAP improvement. Both protocols appear in Table 5 .
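The good/OK/junk/bad labeling above translates into the standard evaluation: junk images are removed from the ranked list before computing average precision for each query, and mAP is the mean of AP over all queries (55 for Oxford5k and Paris6k). A minimal sketch of this protocol, with made-up labels and a made-up ranking for illustration:

```python
def average_precision(ranked_ids, positives, junk):
    """AP for one query: junk images are skipped, good/OK are positives."""
    positives, junk = set(positives), set(junk)
    hits, precision_sum, rank = 0, 0.0, 0
    for img_id in ranked_ids:
        if img_id in junk:           # junk has zero impact on accuracy
            continue
        rank += 1
        if img_id in positives:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(len(positives), 1)

# Toy ranked list for one query; "j1" is a junk image, "x" is a distractor.
ranking = ["a", "j1", "b", "x", "c"]
ap = average_precision(ranking, positives={"a", "b", "c"}, junk={"j1"})
# After junk removal the effective ranks are a->1, b->2, x->3, c->4,
# so AP = (1/1 + 2/2 + 3/4) / 3
```

mAP is then simply the mean of such AP values across all query ROIs.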
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. 
WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. 
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. A comparison with the state of the art shows the interest of our approach when high accuracy is needed. ::: ::: Experiments performed on three reference datasets and a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> Burstiness, a phenomenon initially observed in text retrieval, is the property that a given visual element appears more times in an image than a statistically independent model would predict. In the context of image search, burstiness corrupts the visual similarity measure, i.e., the scores used to rank the images. In this paper, we propose a strategy to handle visual bursts for bag-of-features based image search systems. 
Experimental results on three reference datasets show that our method significantly and consistently outperforms the state of the art. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets. At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups.
In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> In this paper we address the problem of image retrieval from millions of database images. We improve the vocabulary tree based approach by introducing contextual weighting of local features in both descriptor and spatial domains. Specifically, we propose to incorporate efficient statistics of neighbor descriptors both on the vocabulary tree and in the image spatial domain into the retrieval. These contextual cues substantially enhance the discriminative power of individual local features with very small computational overhead. We have conducted extensive experiments on benchmark datasets, i.e., the UKbench, Holidays, and our new Mobile dataset, which show that our method reaches state-of-the-art performance with much less computation. Furthermore, the proposed method demonstrates excellent scalability in terms of both retrieval accuracy and efficiency on large-scale experiments using 1.26 million images from the ImageNet database as distractors. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> This paper proposes an asymmetric Hamming Embedding scheme for large scale image search based on local descriptors. The comparison of two descriptors relies on a vector-to-binary code comparison, which limits the quantization error associated with the query compared with the original Hamming Embedding method. The approach is used in combination with an inverted file structure that offers high efficiency, comparable to that of a regular bag-of-features retrieval system.
The comparison is performed on two popular datasets. Our method consistently improves the search quality over the symmetric version. The trade-off between memory usage and precision is evaluated, showing that the method is especially useful for short binary signatures. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> One fundamental problem in object retrieval with the bag-of-visual words (BoW) model is its lack of spatial information. Although various approaches are proposed to incorporate spatial constraints into the BoW model, most of them are either too strict or too loose so that they are only effective in limited cases. We propose a new spatially-constrained similarity measure (SCSM) to handle object rotation, scaling, view point change and appearance deformation.
The similarity measure can be efficiently calculated by a voting-based method using inverted files. Object retrieval and localization are then simultaneously achieved without post-processing. Furthermore, we introduce a novel and robust re-ranking method with the k-nearest neighbors of the query for automatically refining the initial search results. Extensive performance evaluations on six public datasets show that SCSM significantly outperforms other spatial models, while k-NN re-ranking outperforms most state-of-the-art approaches using query expansion. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. 
Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> Most of the current image indexing systems for retrieval view a database as a set of individual images. It limits the flexibility of the retrieval framework to conduct sophisticated cross-image analysis, resulting in higher memory consumption and sub-optimal retrieval accuracy. To conquer this issue, we propose cross indexing with grouplets, where the core idea is to view the database images as a set of grouplets, each of which is defined as a group of highly relevant images. Because a grouplet groups similar images together, the number of grouplets is smaller than the number of images, thus naturally leading to less memory cost. Moreover, the definition of a grouplet could be based on customized relations, allowing for seamless integration of advanced image features and data mining techniques like the deep convolutional neural network (DCNN) in off-line indexing . To validate the proposed framework, we construct three different types of grouplets , which are respectively based on local similarity , regional relation, and global semantic modeling. Extensive experiments on public benchmark datasets demonstrate the efficiency and superior performance of our approach. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> This paper considers the task of image search using the Bag-of-Words (BoW) model. In this model, the precision of visual matching plays a critical role. Conventionally, local cues of a keypoint, e.g., SIFT, are employed. However, such strategy does not consider the contextual evidences of a keypoint, a problem which would lead to the prevalence of false matches. 
To address this problem and enable accurate visual matching, this paper proposes to integrate discriminative cues from multiple contextual levels, i.e., local, regional, and global, via probabilistic analysis. "True match" is defined as a pair of keypoints corresponding to the same scene location on all three levels (Fig. 1). Specifically, the Convolutional Neural Network (CNN) is employed to extract features from regional and global patches. We show that CNN feature is complementary to SIFT due to its semantic awareness and compares favorably to several other descriptors such as GIST, HSV, etc. To reduce memory usage, we propose to index CNN features outside the inverted file, communicated by memory-efficient pointers. Experiments on three benchmark datasets demonstrate that our method greatly promotes the search accuracy when CNN feature is integrated. We show that our method is efficient in terms of time cost compared with the BoW baseline, and yields competitive accuracy with the state-of-the-arts. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. 
The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> Most image instance retrieval pipelines are based on comparison of vectors known as global image descriptors between a query image and the database images. Due to their success in large scale image classification, representations extracted from Convolutional Neural Networks (CNN) are quickly gaining ground on Fisher Vectors (FVs) as state-of-the-art global descriptors for image instance retrieval. While CNN-based descriptors are generally remarked for good retrieval performance at lower bitrates, they nevertheless present a number of drawbacks including the lack of robustness to common object transformations such as rotations compared with their interest point based FV counterparts. In this paper, we propose a method for computing invariant global descriptors from CNNs. Our method implements a recently proposed mathematical theory for invariance in a sensory cortex modeled as a feedforward neural network. The resulting global descriptors can be made invariant to multiple arbitrary transformation groups while retaining good discriminativeness. Based on a thorough empirical evaluation using several publicly available datasets, we show that our method is able to significantly and consistently improve retrieval results every time a new type of invariance is incorporated. We also show that our method which has few parameters is not prone to overfitting: improvements generalize well across datasets with different properties with regard to invariances.
Finally, we show that our descriptors are able to compare favourably to other state-of-the-art compact descriptors at similar bitrates, exceeding the highest retrieval results reported in the literature on some datasets. A dedicated dimensionality reduction step (quantization or hashing) may be able to further improve the competitiveness of the descriptors. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB017
We present the improvement in retrieval accuracy over the past ten years in Fig. 6 and the numbers of some representative methods in Table 5 . The results are computed using codebooks trained on independent datasets BIB002 . We can clearly observe that the field of instance retrieval has constantly been improving. The baseline approach (HKM) proposed over ten years ago only yields a retrieval accuracy of 59.7 percent, 2.85, 44.3 percent, 26.6 percent, and 46.5 percent on Holidays, Ukbench, Oxford5k, Oxford5k+Flickr100k, and Paris6k, respectively. Starting from the baseline approaches , BIB001 , methods using large codebooks improve steadily as more discriminative codebooks BIB006 , spatial constraints , BIB008 , and complementary descriptors , BIB013 are introduced. For medium-sized codebooks, the most significant accuracy advance was witnessed in the years 2008-2010 with the introduction of Hamming Embedding BIB002 , BIB004 and its improvements BIB005 , BIB004 , BIB009 . From then on, major improvements have come from the strength of feature fusion BIB012 , BIB013 , BIB014 with the color and CNN features, especially on the Holidays and Ukbench datasets. On the other hand, CNN-based retrieval models have quickly demonstrated their strengths in instance retrieval. In 2012, when AlexNet BIB010 was introduced, the performance of the off-the-shelf FC features was still far from satisfactory compared with the SIFT models of the same period. For example, the FC descriptor of AlexNet pre-trained on ImageNet yields 64.2 percent, 3.42, and 43.3 percent in mAP, N-S score, and mAP on the Holidays, Ukbench, and Oxford5k datasets, respectively. These numbers are lower than BIB008 by 13.85 percent and 0.14 on Holidays and Ukbench, respectively, and lower than BIB011 by 31.9 percent on Oxford5k.
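The accuracy numbers quoted throughout this section mix two metrics: mean average precision (mAP, used for Holidays, Oxford5k, and Paris6k) and the N-S score (used for Ukbench, where every query has exactly four ground-truth matches). A minimal sketch of both metrics (function names are illustrative, not from any cited codebase):

```python
import numpy as np

def average_precision(ranked_relevance, num_relevant):
    """AP for one query: ranked_relevance[i] is True if the i-th ranked
    database image is relevant; num_relevant is the ground-truth count."""
    hits, precision_sum = 0, 0.0
    for rank, is_rel in enumerate(ranked_relevance, start=1):
        if is_rel:
            hits += 1
            precision_sum += hits / rank  # precision at this recall point
    return precision_sum / num_relevant

def mean_average_precision(queries):
    """mAP over a list of (ranked_relevance, num_relevant) pairs."""
    return float(np.mean([average_precision(r, n) for r, n in queries]))

def ns_score(top4_hits):
    """Ukbench N-S score: average number of relevant images among the
    top-4 results of each query (perfect retrieval scores 4.0)."""
    return float(np.mean(top4_hits))
```

For example, a query whose two relevant images are ranked 1st and 3rd scores AP = (1/1 + 2/3)/2 ≈ 0.83.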
However, with the advance in CNN architectures and fine-tuning strategies, the performance of the CNN-based methods is improving fast, being competitive on the Holidays and Ukbench datasets BIB015 , BIB016 , and slightly lower on Oxford5k but with much smaller memory cost BIB017 .
[Table: dataset statistics (images / queries, content). Ukbench: 10,200 / 10,200, common objects; Paris6k BIB003 : 6,412 / 55, buildings; Oxford5k BIB001 : 5,062 / 55, buildings; Flickr100k BIB003 : 99,782 / -, distractors from Flickr's popular tags.]
[Fig. 6 caption: For each year, the best accuracy of each category is reported. For the compact representations, results of 128-bit vectors are preferentially selected. The purple star denotes the results produced by 2,048-dim vectors BIB015 , the best performance among the fine-tuned CNN methods. Methods marked with a pink asterisk use rotated images on Holidays, full-sized queries on Oxford5k, or spatial verification and QE on Oxford5k (see Table 5 ). "+100k" denotes the addition of Flickr100k to Oxford5k; "pw." denotes power-law normalization BIB007 ; "MP" denotes max pooling; "SP" denotes sum pooling. * and parentheses indicate results obtained with post-processing steps such as spatial verification or QE; x indicates numbers estimated from the curves; y indicates numbers reported by our implementation. For Holidays, results using the rotated images are presented after "/"; for Oxford5k (+100k) and Paris6k, results using the full-sized queries are shown after "/". \ indicates that the full query image is fed into the network, but only the features whose centers fall into the query region of interest are aggregated. Note that in many fixed-length representations, ANN algorithms such as PQ are not used to report the results, but ANN can be readily applied after PCA during indexing.]
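The "MP" and "SP" abbreviations in the Fig. 6 legend stand for max and sum pooling of convolutional activations into a fixed-length global descriptor, the usual recipe for off-the-shelf CNN retrieval features. A minimal numpy sketch (the toy activation map merely stands in for a pre-trained network's last convolutional layer, and `pooled_descriptor` is an illustrative name):

```python
import numpy as np

def pooled_descriptor(feature_map, mode="sum"):
    """Aggregate a C x H x W convolutional activation map into a C-dim
    global descriptor by sum pooling ("SP") or max pooling ("MP"),
    followed by L2 normalization for cosine-similarity search."""
    c = feature_map.shape[0]
    flat = feature_map.reshape(c, -1)          # C x (H*W)
    v = flat.sum(axis=1) if mode == "sum" else flat.max(axis=1)
    return v / (np.linalg.norm(v) + 1e-12)

# toy activation map standing in for a real network's last conv output
fmap = np.arange(24, dtype=float).reshape(2, 3, 4)
d_sp = pooled_descriptor(fmap, "sum")   # sum-pooled ("SP") descriptor
d_mp = pooled_descriptor(fmap, "max")   # max-pooled ("MP") descriptor
```

Either pooled vector can then be reduced by PCA and compared with a dot product, which is how the fixed-length CNN representations in Fig. 6 are evaluated.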
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. 
The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. 
More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. 
We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> Recent works show that image comparison based on local descriptors is corrupted by visual bursts, which tend to dominate the image similarity. The existing strategies, like power-law normalization, improve the results by discounting the contribution of visual bursts to the image similarity. 
<s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. 
It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> Visual search and image retrieval underpin numerous applications, however the task is still challenging predominantly due to the variability of object appearance and ever increasing size of the databases, often exceeding billions of images. Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. 
Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large scale datasets, Holidays1M and Oxford1M, SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while CNN-based RVD-W achieve a mAP of 63.5 and 44.8 percent, all yielding superior performance to the state-of-the-art. <s> BIB011
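The residual aggregation shared by RVD-W, Fisher Vectors, and VLAD can be illustrated with VLAD, the simplest member of the family. A toy sketch (2-D descriptors and two centroids; real pipelines add PCA, whitening, and power-law normalization):

```python
import numpy as np

def vlad(descriptors, centroids):
    """Assign each local descriptor to its nearest centroid, accumulate
    the residuals per centroid, and L2-normalize the flattened result
    into a single fixed-length image vector of size k * d."""
    k, d = centroids.shape
    agg = np.zeros((k, d))
    for x in descriptors:
        i = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        agg[i] += x - centroids[i]  # residual encodes where x sits in the cell
    v = agg.ravel()
    return v / (np.linalg.norm(v) + 1e-12)
```

With a codebook of, say, k = 64 centroids and 128-dim SIFT, this yields an 8,192-dim vector that is typically PCA-reduced before indexing.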
The retrieval accuracy of the different categories on different datasets can be viewed in Fig. 6 , Tables 5 and 6 . From these results, we arrive at three observations. First, among the SIFT-based methods, those with medium-sized codebooks BIB001 , BIB003 , BIB007 usually lead to superior (or competitive) performance, while those based on small codebooks (compact representations) BIB002 , BIB004 , BIB011 exhibit inferior accuracy. On the one hand, the visual words in the medium-sized codebooks lead to relatively high matching recall due to the large Voronoi cells, and the further integration of HE methods largely improves the discriminative ability, achieving a desirable trade-off between matching recall and precision. On the other hand, although the visual words in small codebooks have the highest matching recall, their discriminative ability is not significantly improved due to the aggregation procedure and the small dimensionality, so their performance can be compromised. Second, among the CNN-based categories, the fine-tuned category BIB005 , BIB009 , BIB010 is advantageous in specific tasks (such as landmark/scene retrieval) whose data distribution is similar to that of the training set. While this observation is within expectation, we find it interesting that the fine-tuned model proposed in BIB009 yields very competitive performance on generic retrieval (such as Ukbench), whose data distribution is distinct from that of the training set. In fact, Babenko et al. BIB005 show that the CNN features fine-tuned on Landmarks compromise the accuracy on Ukbench. The generalization ability of BIB009 could be attributed to the effective training of the region proposal network. In comparison, the pre-trained models may exhibit high accuracy on Ukbench but only yield moderate performance on landmarks. Similarly, the hybrid methods have fair performance on all the tasks, though they may still encounter efficiency problems BIB006 , BIB008 .
Third, comparing all six categories, the "CNN fine-tuned" and "SIFT mid voc." categories have the best overall accuracy, while the "SIFT small voc." category has relatively low accuracy.
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. 
A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind relevance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. 
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Most effective particular object and image retrieval approaches are based on the bag-of-words (BoW) model. All state-of-the-art retrieval results have been achieved by methods that include a query expansion that brings a significant boost in performance. We introduce three extensions to automatic query expansion: (i) a method capable of preventing tf-idf failure caused by the presence of sets of correlated features (confusers), (ii) an improved spatial verification and re-ranking step that incrementally builds a statistical model of the query object and (iii) we learn relevant spatial context to boost retrieval performance. The three improvements of query expansion were evaluated on standard Paris and Oxford datasets according to a standard protocol, and state-of-the-art results were achieved. 
<s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtrac- tion and the negative evidence, i.e., a visual word that is mutually miss- ing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality re- duction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. 
We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Exploiting local feature shape has made geometry indexing possible, but at a high cost of index space, while a sequential spatial verification and re-ranking stage is still indispensable for large scale image retrieval. In this work we investigate an accelerated approach for the latter problem. We develop a simple spatial matching model inspired by Hough voting in the transformation space, where votes arise from single feature correspondences. Using a histogram pyramid, we effectively compute pair-wise affinities of correspondences without ever enumerating all pairs. Our Hough pyramid matching algorithm is linear in the number of correspondences and allows for multiple matching surfaces or non-rigid objects under one-to-one mapping. We achieve re-ranking one order of magnitude more images at the same query time with superior performance compared to state of the art methods, while requiring the same index space. We show that soft assignment is compatible with this matching scheme, preserving one-to-one mapping and further increasing performance. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. 
We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. 
In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. 
The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. 
This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. 
Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Hough voting in a geometric transformation space allows us to realize spatial verification, but remains sensitive to feature detection errors because of the inflexible quantization of single feature correspondences. To handle this problem, we propose a new method, called adaptive dither voting, for robust spatial verification. For each correspondence, instead of hard-mapping it to a single transformation, the method augments its description by using multiple dithered transformations that are deterministically generated by the other correspondences. 
The method reduces the probability of losing correspondences during transformation quantization, and provides high robustness as regards mismatches by imposing three geometric constraints on the dithering process. We also propose exploiting the non-uniformity of a Hough histogram as the spatial similarity to handle multiple matching surfaces. Extensive experiments conducted on four datasets show the superiority of our method. The method outperforms its state-of-the-art counterparts in both accuracy and scalability, especially when it comes to the retrieval of small, rotated objects. <s> BIB017 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. 
<s> BIB018 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB019 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin. 
<s> BIB020 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Spatial verification is a crucial part of every image retrieval system, as it accounts for the fact that geometric feature configurations are typically ignored by the Bag-of-Words representation. Since spatial verification quickly becomes the bottleneck of the retrieval process, runtime efficiency is extremely important. At the same time, spatial verification should be able to reliably distinguish between related and unrelated images. While methods based on RANSAC’s hypothesize-and-verify framework achieve high accuracy, they are not particularly efficient. Conversely, verification approaches based on Hough voting are extremely efficient but not as accurate. In this paper, we develop a novel spatial verification approach that uses an efficient voting scheme to identify promising transformation hypotheses that are subsequently verified and refined. Through comprehensive experiments, we show that our method is able to achieve a verification accuracy similar to state-of-the-art hypothesize-and-verify approaches while providing faster runtimes than state-of-the-art voting-based methods. <s> BIB021
Feature Computation Time. For the SIFT-based methods, the dominating step is local feature extraction. Usually, it takes 1-2 s for a CPU to extract the Hessian-Affine region based SIFT descriptors for a 640×480 image, depending on the complexity (texture) of the image. For the CNN-based methods, it takes 0.082 and 0.347 s for a single forward pass of a 224×224 and a 1,024×768 image through VGG16 on a TitanX card, respectively. It is reported in BIB018 that four images (with a largest side of 724 pixels) can be processed in one second. The encoding (VLAD or FV) time of the pre-trained column features is very fast. For the CNN hybrid methods, extracting CNN features from tens of regions may take seconds. Overall, the CNN pre-trained and fine-tuned models are efficient in feature computation using GPUs. Note that high efficiency can also be achieved when GPUs are used for SIFT extraction. Retrieval Time. The efficiency of nearest neighbor search is high for "SIFT large voc.", "SIFT small voc.", "CNN pre-trained" and "CNN fine-tuned", because the inverted lists are short for a properly trained large codebook, and because the latter three produce compact representations that can be accelerated by ANN search methods like PQ BIB004 . Efficiency for the medium-sized codebook is low because the inverted lists contain more postings than those of a large codebook, and the filtering effect of HE methods can only correct this problem to some extent. The retrieval complexity of hybrid methods, as mentioned in Section 4.3, may suffer from the expensive many-to-many matching strategy BIB009 , BIB010 , BIB013 . Training Time. Training a large or medium-sized codebook usually takes several hours with AKM or HKM. Using small codebooks reduces the codebook training time. For the fine-tuned model, Gordo et al. BIB018 report using five days on a K40 GPU for the triplet-loss model. 
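The PQ-accelerated search mentioned under Retrieval Time can be sketched in a few lines. This is a toy illustration of asymmetric distance computation (ADC) in the spirit of BIB004: the random "codebooks" below stand in for per-subspace k-means centroids, and the query is set equal to a database item purely as a sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 128, 8, 16            # vector dim, subquantizers, centroids per subspace
ds = D // M                     # subvector dimension

# Stand-in codebooks; a real PQ index learns these by k-means per subspace.
codebooks = rng.standard_normal((M, K, ds))

def encode(x):
    """Quantize each subvector to its nearest centroid index (M small codes)."""
    codes = np.empty(M, dtype=np.uint8)
    for m in range(M):
        sub = x[m * ds:(m + 1) * ds]
        codes[m] = np.argmin(((codebooks[m] - sub) ** 2).sum(axis=1))
    return codes

def adc_search(q, db_codes):
    """Asymmetric distance: the query stays raw, the database is stored as codes.
    Precompute an M x K lookup table; each distance is then M table lookups."""
    table = np.empty((M, K))
    for m in range(M):
        sub = q[m * ds:(m + 1) * ds]
        table[m] = ((codebooks[m] - sub) ** 2).sum(axis=1)
    return table[np.arange(M), db_codes].sum(axis=1)  # (N,) approx. squared dists

db = rng.standard_normal((100, D))
db_codes = np.stack([encode(x) for x in db])
q = db[42]                       # sanity check: query equals database item 42
dists = adc_search(q, db_codes)
```

Each database vector occupies only M small codes (here 8 bytes), which is why compact representations combined with PQ keep both retrieval time and memory low.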
It may take less time for the siamese BIB019 or the classification models BIB011 , but training should still take much longer than SIFT codebook generation. Therefore, in terms of training, the methods using direct pooling BIB014 , BIB020 or small codebooks BIB003 , BIB015 are more time-efficient. Memory Cost. Table 5 and Fig. 8 show that the SIFT methods with large codebooks and the compact representations are both efficient in memory cost. Moreover, the compact representations can be compressed into compact codes BIB006 using PQ or other competing quantization/hashing methods, so their memory consumption can be further reduced. In comparison, the methods using medium-sized codebooks are the most memory-consuming because the binary signatures must be stored in the inverted index. The hybrid methods have mixed memory cost: the many-to-many strategy requires storing a number of region descriptors per image BIB009 , BIB013 , while some others employ efficient encoding methods BIB012 , BIB016 . Spatial Verification and Query Expansion. Spatial verification, which provides refined rank lists, is often used in conjunction with QE. The RANSAC verification proposed in BIB001 has a complexity of O(z^2), where z is the number of matched features, so this method is computationally expensive. The ADV approach BIB017 is less expensive, with O(z log z) complexity, due to its ability to avoid unrelated Hough votes. The most efficient methods are BIB008 , BIB021 , which have O(z) complexity; BIB021 further outputs the transformation and inliers for QE. From the perspective of query expansion, since new queries are issued, search efficiency is compromised. For example, AQE BIB002 almost doubles the search time due to the additional query. In BIB007 , BIB005 , the proposed improvements only add marginal cost compared to performing another search, so their complexity is similar to that of basic QE methods.
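Average query expansion BIB002 itself is simple: the top-ranked (ideally spatially verified) results are averaged with the query and the mean is reissued. A minimal sketch on toy L2-normalized global descriptors (the clustered data, scales, and top=10 cut-off are illustrative choices, not values from any cited system):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # toy descriptor dimension

def l2n(v):
    """L2-normalize along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def search(query, db):
    """Rank database items by Euclidean distance to the normalized query."""
    return np.argsort(((db - l2n(query)) ** 2).sum(axis=1))

def average_qe(query, db, top=10):
    """AQE: average the query with its top-ranked results and search again.
    In a full system only spatially verified results would be averaged."""
    ranks = search(query, db)
    expanded = (l2n(query) + db[ranks[:top]].sum(axis=0)) / (top + 1)
    return search(expanded, db)

# Toy descriptors: items 0-19 depict the query scene, 20-99 are distractors.
center = l2n(rng.standard_normal(d))
relevant = l2n(center + 0.02 * rng.standard_normal((20, d)))
distractors = l2n(rng.standard_normal((80, d)))
db = np.vstack([relevant, distractors])
query = center + 0.05 * rng.standard_normal(d)
ranks = average_qe(query, db)
```

The second search is what roughly doubles the query time noted above: the expanded query is processed through exactly the same pipeline as the original one.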
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Important Parameters <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Important Parameters <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. 
WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Important Parameters <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Important Parameters <s> We propose a novel approach for instance-level image retrieval. 
It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB004
We summarize the impact of codebook size on SIFT methods using large/medium-sized codebooks, and the impact of dimensionality on compact representations, including SIFT small codebooks and CNN-based methods. Codebook Size. The mAP results on Oxford5k are shown in Fig. 9 , where methods using large/medium-sized codebooks are compared. Two observations can be made. First, mAP usually increases with the codebook size but may saturate once the codebook is large enough. This is because a larger codebook improves matching precision, but if it is too large, matching recall drops, leading to saturated or even compromised performance BIB001 . Second, methods using medium-sized codebooks exhibit more stable performance as the codebook size changes. This can be attributed to HE BIB002 , which contributes more for a smaller codebook, compensating for the lower baseline performance. Dimensionality. The impact of dimensionality on compact vectors is presented in Fig. 7 . Our first finding is that retrieval accuracy usually remains stable at larger dimensions and drops quickly when the dimensionality falls below 256 or 128. Our second finding favors the methods based on region proposals BIB004 , BIB003 . These methods demonstrate very competitive performance under various feature lengths, probably due to their superior ability in object localization.
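To make the precision/recall trade-off behind the codebook-size observation concrete, the sketch below quantizes toy 128-D descriptors against k-means codebooks of two sizes and measures how often two noisy copies of the same local feature fall into the same visual word. This is an illustrative toy, not the pipeline of the surveyed papers; the data, codebook sizes, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "SIFT-like" data: 128-D local descriptors from one image, plus noisy
# copies standing in for the matching descriptors of a second image.
desc_a = rng.normal(size=(50, 128))
desc_b = desc_a + rng.normal(scale=0.1, size=(50, 128))

def build_codebook(descriptors, k, iters=10):
    """Plain k-means; the k centroids are the visual words of the codebook."""
    centroids = descriptors[rng.choice(len(descriptors), size=k, replace=False)].copy()
    for _ in range(iters):
        dist = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members) > 0:
                centroids[j] = members.mean(axis=0)
    return centroids

def quantize(descriptors, codebook):
    """Assign each descriptor to its nearest visual word (hard assignment)."""
    dist = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    return dist.argmin(axis=1)

train = np.vstack([desc_a, desc_b])
for k in (4, 32):
    codebook = build_codebook(train, k)
    words_a = quantize(desc_a, codebook)
    words_b = quantize(desc_b, codebook)
    # Matching recall: fraction of true correspondences mapped to the same word.
    # A larger codebook quantizes more finely (higher matching precision) but
    # risks splitting true matches across words (lower matching recall).
    recall = float((words_a == words_b).mean())
    print(f"k={k:>2}  matching recall={recall:.2f}")
```

Hamming Embedding adds a binary signature on top of such a quantizer, which is why it compensates most for small codebooks, where word-level matching alone is coarse.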
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Generic Instance Retrieval <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Generic Instance Retrieval <s> Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Generic Instance Retrieval <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. 
In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Generic Instance Retrieval <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB004
A critical direction is to make the search engine applicable to generic search purposes. Towards this goal, two important issues should be addressed. First, large-scale instance-level datasets need to be introduced. While several instance datasets have been released, as shown in Table 3 , these datasets usually contain a particular type of instance, such as landmarks or indoor objects. Although the RPN structure used by Gordo et al. BIB003 has proven competitive on Ukbench in addition to the building datasets, it remains unknown whether training CNNs on more generic datasets will bring further improvement. Therefore, the community is in great need of large-scale instance-level datasets, or of efficient methods for generating such datasets in either a supervised or unsupervised manner. Second, designing new CNN architectures and learning methods is important for fully exploiting the training data. Previous works employ standard classification BIB001 , pairwise-loss BIB004 , or triplet-loss BIB003 , BIB002 CNN models for fine-tuning. The introduction of Faster R-CNN to instance retrieval is a promising starting point towards more accurate object localization BIB003 . Moreover, transfer learning methods are also important when adopting a fine-tuned model in another retrieval task.
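As a concrete illustration of the triplet-loss objective mentioned above, the sketch below computes a hinge-style triplet ranking loss on L2-normalized embeddings with NumPy. The margin value and toy embeddings are assumptions for illustration; the surveyed works optimize this loss end-to-end inside the network.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Hinge-style triplet ranking loss: push the anchor-positive distance
    below the anchor-negative distance by at least `margin`."""
    def l2n(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = l2n(anchor), l2n(positive), l2n(negative)
    d_ap = np.sum((a - p) ** 2, axis=-1)  # squared distance to the positive
    d_an = np.sum((a - n) ** 2, axis=-1)  # squared distance to the negative
    return np.maximum(0.0, margin + d_ap - d_an)

rng = np.random.default_rng(0)
anchor = rng.normal(size=(4, 128))                    # query-image embeddings
positive = anchor + 0.05 * rng.normal(size=(4, 128))  # same instance, perturbed
negative = rng.normal(size=(4, 128))                  # unrelated instances

loss = triplet_loss(anchor, positive, negative)
# A degenerate triplet (positive == negative == anchor) yields exactly the margin:
assert np.allclose(triplet_loss(anchor, anchor, anchor), 0.1)
```

Hard-negative selection, i.e., choosing negatives with small d_an so the hinge stays active, is what makes such objectives informative in practice; easy triplets contribute zero loss and zero gradient.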
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> Detecting logos in photos is challenging. A reason is that logos locally resemble patterns frequently seen in random images. We propose to learn a statistical model for the distribution of incorrect detections output by an image matching algorithm. It results in a novel scoring criterion in which the weight of correlated keypoint matches is reduced, penalizing irrelevant logo detections. In experiments on two very different logo retrieval benchmarks, our approach largely improves over the standard matching criterion as well as other state-of-the-art approaches. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> This paper contributes a new high quality dataset for person re-identification, named "Market-1501". Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiment, we show that the proposed descriptor yields competitive accuracy on VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset. 
<s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> Vehicle, as a significant object class in urban surveillance, attracts massive focuses in computer vision field, such as detection, tracking, and classification. Among them, vehicle re-identification (Re-Id) is an important yet frontier topic, which not only faces the challenges of enormous intra-class and subtle inter-class differences of vehicles in multicameras, but also suffers from the complicated environments in urban surveillance scenarios. Besides, the existing vehicle related datasets all neglect the requirements of vehicle Re-Id: 1) massive vehicles captured in real-world traffic environment; and 2) applicable recurrence rate to give cross-camera vehicle search for vehicle Re-Id. To facilitate vehicle Re-Id research, we propose a large-scale benchmark dataset for vehicle Re-Id in the real-world urban surveillance scenario, named “VeRi”. It contains over 40,000 bounding boxes of 619 vehicles captured by 20 cameras in unconstrained traffic scene. Moreover, each vehicle is captured by 2∼18 cameras in different viewpoints, illuminations, and resolutions to provide high recurrence rate for vehicle Re-Id. Finally, we evaluate six competitive vehicle Re-Id methods on VeRi and propose a baseline which combines the color, texture, and highlevel semantic information extracted by deep neural network. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. 
The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> We address the problem of large-scale visual place recognition for situations where the scene undergoes a major change in appearance, for example, due to illumination (day/night), change of seasons, aging, or structural modifications over time such as buildings being built or destroyed. Such situations represent a major challenge for current large-scale place recognition methods. This work has the following three principal contributions. First, we demonstrate that matching across large changes in the scene appearance becomes much easier when both the query image and the database image depict the scene from approximately the same viewpoint. Second, based on this observation, we develop a new place recognition approach that combines (i) an efficient synthesis of novel views with (ii) a compact indexable image representation. 
Third, we introduce a new challenging dataset of 1,125 camera-phone query images of Tokyo that contain major changes in illumination (day, sunset, night) as well as structural changes in the scene. We demonstrate that the proposed approach significantly outperforms other large-scale place recognition techniques on this challenging data. <s> BIB005
At the other end of the spectrum, there is also increasing interest in specialized instance retrieval. Examples include place retrieval BIB005 , pedestrian retrieval BIB002 , vehicle retrieval BIB003 , logo retrieval BIB001 , etc. Images in these tasks come with specific prior knowledge that can be exploited. For example, in pedestrian retrieval, a recurrent neural network (RNN) can be employed to pool the body-part or patch descriptors. In vehicle retrieval, the view information can be inferred during feature learning, and the license plate can also provide critical information when captured at a short distance. Meanwhile, the process of training data collection can be further explored. For example, training images of different places can be collected via Google Street View BIB004 , and vehicle images can be accessed through either surveillance videos or internet images. Exploring new learning strategies on these specialized datasets and studying the transfer effect would be interesting. Finally, compact vectors or short codes will also become important in realistic retrieval settings.
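On the last point, the appeal of short codes in realistic settings can be sketched with sign-binarized embeddings compared under Hamming distance. The binarization scheme and toy data below are assumptions chosen for simplicity; production systems would typically use learned hashing or product quantization instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database of real-valued image embeddings (e.g., pooled CNN features).
db = rng.normal(size=(5000, 64))
query = db[123].copy()  # exact copy of item 123, so its code distance is 0

def to_code(x):
    """Sign binarization: one bit per dimension, packed into bytes.
    Each 64-D float vector becomes an 8-byte code."""
    return np.packbits((x > 0).astype(np.uint8), axis=-1)

db_codes = to_code(db)   # 5000 x 8 bytes instead of 5000 x 64 floats
q_code = to_code(query)

# Hamming distance = popcount(XOR); vectorized with a 256-entry lookup table.
popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)
dists = popcount[np.bitwise_xor(db_codes, q_code)].sum(axis=1)

best = int(dists.argmin())  # item 123 is retrieved at Hamming distance 0
```

The 32x memory reduction (8 bytes versus 64 float32 values per vector) and the XOR-based comparison are what make short codes attractive when databases grow to millions of images.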
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> We report that human walks performed in outdoor settings of tens of kilometers resemble a truncated form of Levy walks commonly observed in animals such as monkeys, birds and jackals. Our study is based on about one thousand hours of GPS traces involving 44 volunteers in various outdoor settings including two different college campuses, a metropolitan area, a theme park and a state fair. This paper shows that many statistical features of human walks follow truncated power-law, showing evidence of scale-freedom and do not conform to the central limit theorem. These traits are similar to those of Levy walks. It is conjectured that the truncation, which makes the mobility deviate from pure Levy walks, comes from geographical constraints including walk boundary, physical obstructions and traffic. None of commonly used mobility models for mobile networks captures these properties. Based on these findings, we construct a simple Levy walk mobility model which is versatile enough in emulating diverse statistical patterns of human walks observed in our traces. The model is also used to recreate similar power-law inter-contact time distributions observed in previous human mobility studies. Our network simulation indicates that the Levy walk features are important in characterizing the performance of mobile network routing performance. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of user-driven footprints (i.e., "checkins"). 
Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. We find that: (i) LSS users follow the “Levy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. 
The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. 
In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years, resulting in large amounts of geographical and social data. Such LBSN data provide an unprecedented opportunity to study the human movement from their socio-spatial behavior, in order to improve location-based applications like location recommendation. As users can check-in at new places, traditional work on location prediction that relies on mining a user's historical moving trajectories fails as it is not designed for the cold-start problem of recommending new check-ins. While previous work on LBSNs attempting to utilize a user's social connections for location recommendation observed limited help from social network information. In this work, we propose to address the cold-start location recommendation problem by capturing the correlations between social networks and geographical distance on LBSNs with a geo-social correlation model. The experimental results on a real-world LBSN dataset demonstrate that our approach properly models the geo-social correlations of a user's cold-start check-ins and significantly improves the location recommendation performance. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Location recommendation plays an essential role in helping people find places they are likely to enjoy. 
Though some recent research has studied how to recommend locations with the presence of social network and geographical information, few of them addressed the cold-start problem, specifically, recommending locations for new users. Because the visits to locations are often shared on social networks, rich semantics (e.g., tweets) that reveal a person's interests can be leveraged to tackle this challenge. A typical way is to feed them into traditional explicit-feedback content-aware recommendation methods (e.g., LibFM). As a user's negative preferences are not explicitly observable in most human mobility data, these methods need to draw negative samples for better learning performance. However, prior studies have empirically shown that sampling-based methods don't perform as well as a method that considers all unvisited locations as negative but assigns them a lower confidence. To this end, we propose an Implicit-feedback based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and steer clear of negative sampling. For efficient parameter learning, we develop a scalable optimization algorithm, scaling linearly with the data size and the feature size. Furthermore, we offer a good explanation to ICCF, such that the semantic content is actually used to refine user similarity based on mobility. Finally, we evaluate ICCF with a large-scale LBSN dataset where users have profiles and text content. The results show that ICCF outperforms LibFM of the best configuration, and that user profiles and text content are not only effective at improving recommendation but also helpful for coping with the cold-start problem. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> While on the go, people are using their phones as a personal concierge discovering what is around and deciding what to do.
Mobile phone has become a recommendation terminal customized for individuals, capable of recommending activities and simplifying the accomplishment of related tasks. In this article, we conduct usage mining on the check-in data, with summarized statistics identifying the local recommendation challenges of huge solution space, sparse available data, and complicated user intent, and with observations that motivate a hierarchical, contextual, and sequential solution. We present a point-of-interest (POI) category-transition-based approach, with a goal of estimating the visiting probability of a series of successive POIs conditioned on current user context and sensor context. A mobile local recommendation demo application is deployed. The objective and subjective evaluations validate the effectiveness in providing mobile users both accurate recommendation and favorable user experience. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> As location-based social networks (LBSNs) rapidly grow, it is a timely topic to study how to recommend users with interesting locations, known as points-of-interest (POIs). Most existing POI recommendation techniques only employ the check-in data of users in LBSNs to learn their preferences on POIs by assuming a user's check-in frequency to a POI explicitly reflects the level of her preference on the POI. However, in reality users usually visit POIs only once, so the users' check-ins may not be sufficient to derive their preferences using their check-in frequencies only. Actually, the preferences of users are exactly implied in their opinions in text-based tips commenting on POIs. In this paper, we propose an opinion-based POI recommendation framework called ORec to take full advantage of the user opinions on POIs expressed as tips.
In ORec, there are two main challenges: (i) detecting the polarities of tips (positive, neutral or negative), and (ii) integrating them with check-in data including social links between users and geographical information of POIs. To address these two challenges, (1) we develop a supervised aspect-dependent approach to detect the polarity of a tip, and (2) we devise a method to fuse tip polarities with social links and geographical information into a unified POI recommendation framework. Finally, we conduct a comprehensive performance evaluation for ORec using two large-scale real data sets collected from Foursquare and Yelp. Experimental results show that ORec achieves significantly superior polarity detection and POI recommendation accuracy compared to other state-of-the-art polarity detection and POI recommendation techniques. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Given the abundance of online information available to mobile users, particularly tourists and weekend travelers, recommender systems that effectively filter this information and suggest interesting participatory opportunities will become increasingly important. Previous work has explored recommending interesting locations; however, users would also benefit from recommendations for activities in which to participate at those locations along with suitable times and days. Thus, systems that provide collaborative recommendations involving multiple dimensions such as location, activities and time would enhance the overall experience of users.The relationship among these dimensions can be modeled by higher-order matrices called tensors which are then solved by tensor factorization. However, these tensors can be extremely sparse. 
In this paper, we present a system and an approach for performing multi-dimensional collaborative recommendations for Who (User), What (Activity), When (Time) and Where (Location), using tensor factorization on sparse user-generated data. We formulate an objective function which simultaneously factorizes coupled tensors and matrices constructed from heterogeneous data sources. We evaluate our system and approach on large-scale real world data sets consisting of 588,000 Flickr photos collected from three major metro regions in USA. We compare our approach with several state-of-the-art baselines and demonstrate that it outperforms all of them. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> With the recent surge of location based social networks (LBSNs), activity data of millions of users has become attainable. This data contains not only spatial and temporal stamps of user activity, but also its semantic information. LBSNs can help to understand mobile users’ spatial temporal activity preference (STAP), which can enable a wide range of ubiquitous applications, such as personalized context-aware location recommendation and group-oriented advertisement. However, modeling such user-specific STAP needs to tackle high-dimensional data, i.e., user-location-time-activity quadruples, which is complicated and usually suffers from a data sparsity problem. In order to address this problem, we propose a STAP model. It first models the spatial and temporal activity preference separately, and then uses a principle way to combine them for preference inference. In order to characterize the impact of spatial features on user activity preference, we propose the notion of personal functional region and related parameters to model and infer user spatial activity preference. 
In order to model the user temporal activity preference with sparse user activity data in LBSNs, we propose to exploit the temporal activity similarity among different users and apply nonnegative tensor factorization to collaboratively infer temporal activity preference. Finally, we put forward a context-aware fusion framework to combine the spatial and temporal activity preference models for preference inference. We evaluate our proposed approach on three real-world datasets collected from New York and Tokyo, and show that our STAP model consistently outperforms the baseline approaches in various settings. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important way of helping users discover interesting locations to increase their engagement with location-based services. Although human movement exhibits sequential patterns in LBSNs, most current studies on spatial item recommendations do not consider the sequential influence of locations. Leveraging sequential patterns in spatial item recommendation is, however, very challenging, considering 1) users' check-in data in LBSNs has a low sampling rate in both space and time, which renders existing prediction techniques on GPS trajectories ineffective; 2) the prediction space is extremely large, with millions of distinct locations as the next prediction target, which impedes the application of classical Markov chain models; and 3) there is no existing framework that unifies users' personal interests and the sequential influence in a principled manner. In light of the above challenges, we propose a sequential personalized spatial item recommendation framework (SPORE) which introduces a novel latent variable topic-region to model and fuse sequential influence with personal interests in the latent and exponential space. 
The advantages of modeling the sequential effect at the topic-region level include a significantly reduced prediction space, an effective alleviation of data sparsity and a direct expression of the semantic meaning of users' spatial activities. Furthermore, we design an asymmetric Locality Sensitive Hashing (ALSH) technique to speed up the online top-k recommendation process by extending the traditional LSH. We evaluate the performance of SPORE on two real datasets and one large-scale synthetic dataset. The results demonstrate a significant improvement in SPORE's ability to recommend spatial items, in terms of both effectiveness and efficiency, compared with the state-of-the-art methods. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Point-of-Interest (POI) recommendation has become an important means to help people discover attractive and interesting places, especially when users travel out of town. However, the extreme sparsity of a user-POI matrix creates a severe challenge. To cope with this challenge, we propose a unified probabilistic generative model, the Topic-Region Model (TRM), to simultaneously discover the semantic, temporal, and spatial patterns of users’ check-in activities, and to model their joint effect on users’ decision making for selection of POIs to visit. To demonstrate the applicability and flexibility of TRM, we investigate how it supports two recommendation scenarios in a unified way, that is, hometown recommendation and out-of-town recommendation. TRM effectively overcomes data sparsity by the complementarity and mutual enhancement of the diverse information associated with users’ check-in activities (e.g., check-in content, time, and location) in the processes of discovering heterogeneous patterns and producing recommendations. 
To support real-time POI recommendations, we further extend the TRM model to an online learning model, TRM-Online, to track changing user interests and speed up the model training. In addition, based on the learned model, we propose a clustering-based branch and bound algorithm (CBB) to prune the POI search space and facilitate fast retrieval of the top-k recommendations. We conduct extensive experiments to evaluate the performance of our proposals on two real-world datasets, including recommendation effectiveness, overcoming the cold-start problem, recommendation efficiency, and model-training efficiency. The experimental results demonstrate the superiority of our TRM models, especially TRM-Online, compared with state-of-the-art competitive methods, by making more effective and efficient mobile recommendations. In addition, we study the importance of each type of pattern in the two recommendation scenarios, respectively, and find that exploiting temporal patterns is most important for the hometown recommendation scenario, while the semantic patterns play a dominant role in improving the recommendation effectiveness for out-of-town users. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time.
To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation models by about 20% in Precision@5 and Recall@5. <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Social community detection is a growing field of interest in the area of social network applications, and many approaches have been developed, including graph partitioning, latent space model, block model and spectral clustering. Most existing work purely focuses on network structure information which is, however, often sparse, noisy and lack of interpretability. To improve the accuracy and interpretability of community discovery, we propose to infer users' social communities by incorporating their spatiotemporal data and semantic information. Technically, we propose a unified probabilistic generative model, User-Community-Geo-Topic (UCGT), to simulate the generative process of communities as a result of network proximities, spatiotemporal co-occurrences and semantic similarity. With a well-designed multi-component model structure and a parallel inference implementation to leverage the power of multicores and clusters, our UCGT model is expressive while remaining efficient and scalable to growing large-scale geo-social networking data.
We deploy UCGT to two application scenarios of user behavior predictions: check-in prediction and social interaction prediction. Extensive experiments on two large-scale geo-social networking datasets show that UCGT achieves better performance than existing state-of-the-art comparison methods. <s> BIB014
Location-based social networks (LBSNs) such as Foursquare, Facebook Places, and Yelp have become popular owing to the explosive growth of smartphones. By June 2016, Foursquare had collected more than 8 billion check-ins and mapped more than 65 million place shapes for businesses around the world; over 55 million people worldwide use its service each month BIB003 . LBSNs collect users' check-in information, including the geographical coordinates (latitude and longitude) of visited locations and users' tips about those locations. LBSNs also allow users to make friends and share information. Figure 1 illustrates a typical LBSN, showing the interactions (e.g., check-in activities) between users and POIs, and the interactions (friendships) among users. To improve user experience in LBSNs, point-of-interest (POI) recommendation has been proposed, which suggests new places for users to visit by mining their check-in records and social relationships. POI recommendation is one of the most important tasks in LBSNs, helping users discover new interesting locations. It typically mines users' check-in records, venue information such as categories, and users' social relationships to recommend a list of POIs that users are most likely to visit in the future. POI recommendation not only strengthens user engagement with LBSN service providers, but also gives advertising agencies an effective way to deliver advertisements to potential consumers. For example, users can explore nearby restaurants and downtown shopping malls in Foursquare, while merchants can make themselves easy to find through POI recommendation. Owing to this convenience for users and the business opportunities for merchants, POI recommendation has attracted intensive attention, and a number of POI recommendation systems have been proposed recently BIB006 BIB007 BIB011 BIB012 BIB008 BIB013 .
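The check-in records described above are the basic input to a POI recommender. As a minimal sketch (with toy data; all user and POI names are illustrative, not from any real dataset), check-ins can be aggregated into the sparse user-POI frequency matrix that most of the surveyed methods consume as implicit feedback:

```python
from collections import defaultdict

# Toy check-in records: (user, POI, latitude, longitude, timestamp).
checkins = [
    ("u1", "cafe_a",   40.7128, -74.0060, "2016-06-01T09:00"),
    ("u1", "museum_b", 40.7794, -73.9632, "2016-06-01T14:00"),
    ("u2", "cafe_a",   40.7128, -74.0060, "2016-06-02T10:00"),
]

def build_checkin_matrix(records):
    """Aggregate raw check-ins into user -> {POI: visit count}.

    This frequency matrix is the typical implicit-feedback input
    to POI recommendation models.
    """
    matrix = defaultdict(lambda: defaultdict(int))
    for user, poi, _lat, _lon, _ts in records:
        matrix[user][poi] += 1
    return {u: dict(pois) for u, pois in matrix.items()}

print(build_checkin_matrix(checkins))
# → {'u1': {'cafe_a': 1, 'museum_b': 1}, 'u2': {'cafe_a': 1}}
```

Unlike explicit movie ratings, these counts carry no negative signal: an unvisited POI may be disliked or simply unknown, which is one reason the matrix is extremely sparse.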
POI recommendation is a branch of recommender systems, which suggests borrowing ideas from conventional tasks such as movie recommendation and applying conventional techniques such as collaborative filtering. However, the fact that a location connects the physical world with online networking services poses new challenges for traditional recommendation techniques. We summarize the main challenges as follows. 1. Physical constraints: Check-in activity is limited by physical constraints, in contrast to shopping on Amazon or watching movies on Netflix. For one thing, users in LBSNs check in at geographically constrained areas; for another, shops provide services only during limited hours. These physical constraints make check-in activity in LBSNs exhibit significant spatial and temporal properties BIB009 BIB002 BIB004 BIB005 BIB001 BIB010 BIB014 .
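A common way to fold the physical constraints above into a conventional technique is to damp collaborative-filtering scores by a distance-decay term, since users rarely check in far from their activity center. The sketch below is illustrative only (toy data, hypothetical names, a simple power-law decay commonly assumed in the POI literature), not any specific surveyed model:

```python
import math

# Toy user-POI check-in counts and POI coordinates (illustrative values).
checkin_counts = {
    "u1": {"cafe_a": 3, "park_b": 1},
    "u2": {"cafe_a": 2, "bar_c": 4},
    "u3": {"park_b": 2, "bar_c": 1},
}
poi_coords = {
    "cafe_a": (40.71, -74.00),
    "park_b": (40.73, -73.99),
    "bar_c":  (40.72, -74.01),
}

def cosine_sim(a, b):
    """Cosine similarity between two users' check-in count vectors."""
    common = set(a) & set(b)
    num = sum(a[p] * b[p] for p in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def distance_km(p, q):
    """Rough planar distance in km; a real system would use haversine."""
    (la1, lo1), (la2, lo2) = poi_coords[p], poi_coords[q]
    return 111.0 * math.hypot(la1 - la2, lo1 - lo2)

def recommend(user, home_poi, alpha=1.0, k=2):
    """Score unvisited POIs by user-based CF, damped by a power-law
    distance decay from the user's activity center (home_poi)."""
    scores = {}
    for other, pois in checkin_counts.items():
        if other == user:
            continue
        sim = cosine_sim(checkin_counts[user], pois)
        for poi, cnt in pois.items():
            if poi in checkin_counts[user]:
                continue  # only recommend new places
            decay = (1.0 + distance_km(home_poi, poi)) ** (-alpha)
            scores[poi] = scores.get(poi, 0.0) + sim * cnt * decay
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1", "cafe_a"))
# → ['bar_c']
```

The decay exponent `alpha` controls how sharply distant candidates are penalized; temporal constraints (opening hours, time-of-day preferences) can be folded in the same way with a time-dependent weight.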
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Internet-based recommender systems have traditionally employed collaborative filtering techniques to deliver relevant "digital" results to users. In the mobile Internet however, recommendations typically involve "physical" entities (e.g., restaurants), requiring additional user effort for fulfillment. Thus, in addition to the inherent requirements of high scalability and low latency, we must also take into account a "convenience" metric in making recommendations. In this paper, we propose an enhanced collaborative filtering solution that uses location as a key criterion for generating recommendations. We frame the discussion in the context of our "restaurant recommender" system, and describe preliminary results that indicate the utility of such an approach. We conclude with a look at open issues in this space, and motivate a future discussion on the business impact and implications of mining the data in such systems. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The advance of location-acquisition technologies enables people to record their location histories with spatio-temporal datasets, which imply the correlation between geographical regions. This correlation indicates the relationship between locations in the space of human behavior, and can enable many valuable services, such as sales promotion and location recommendation. In this paper, by taking into account a user's travel experience and the sequentiality locations have been visited, we propose an approach to mine the correlation between locations from a large number of users' location histories. 
We conducted a personalized location recommendation system using the location correlation, and evaluated this system with a large-scale real-world GPS dataset. As a result, our method outperforms the related work using the Pearson correlation. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Online Social Networks (OSNs) are increasingly becoming one of the key media of communication over the Internet. The potential of these services as the basis to gather statistics and exploit information about user behavior is appealing and, as a consequence, the number of applications developed for these purposes has been soaring. At the same time, users are now willing to share information about their location, allowing for the study of the role of geographic distance in social ties. ::: ::: In this paper we present a graph analysis based approach to study social networks with geographic information and new metrics to characterize how geographic distance affects social structure. We apply our analysis to four large-scale OSN datasets: our results show that there is a vast portion of users with short-distance links and that clusters of friends are often geographically close. In addition, we demonstrate that different social networking services exhibit different geo-social properties: OSNs based mainly on location-advertising largely foster local ties and clusters, while services used mainly for news and content sharing present more connections and clusters on longer distances. The results of this work can be exploited to improve many classes of systems and a potential vast number of applications, as we illustrate by means of some practical examples. 
<s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Link prediction systems have been largely adopted to recommend new friends in online social networks using data about social interactions. With the soaring adoption of location-based social services it becomes possible to take advantage of an additional source of information: the places people visit. In this paper we study the problem of designing a link prediction system for online location-based social networks. 
We have gathered extensive data about one of these services, Gowalla, with periodic snapshots to capture its temporal evolution. We study the link prediction space, finding that about 30% of new links are added among "place-friends", i.e., among users who visit the same places. We show how this prediction space can be made 15 times smaller, while still 66% of future connections can be discovered. Thus, we define new prediction features based on the properties of the places visited by users which are able to discriminate potential future links among them. Building on these findings, we describe a supervised learning framework which exploits these prediction features to predict new links among friends-of-friends and place-friends. Our evaluation shows how the inclusion of information about places and related user activity offers high link prediction performance. These results open new directions for real-world link recommendation systems on location-based social networks. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The advance of GPS-enabled devices allows people to record their location histories with GPS traces, which imply human behaviors and preferences related to travel. In this article, we perform two types of travel recommendations by mining multiple users' GPS traces. The first is a generic one that recommends a user with top interesting locations and travel sequences in a given geospatial region. The second is a personalized recommendation that provides an individual with locations matching her travel preferences. To achieve the first recommendation, we model multiple users' location histories with a tree-based hierarchical graph (TBHG). Based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based model to infer the interest level of a location and a user's travel experience (knowledge). 
In the personalized recommendation, we first understand the correlation between locations, and then incorporate this correlation into a collaborative filtering (CF)-based model, which predicts a user's interests in an unvisited location based on her locations histories and that of others. We evaluated our system based on a real-world GPS trace dataset collected by 107 users over a period of one year. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, we achieved a better performance in recommending travel sequences beyond baselines like rank-by-count. Regarding the personalized recommendation, our approach is more effective than the weighted Slope One algorithm with a slightly additional computation, and is more efficient than the Pearson correlation-based CF model with the similar effectiveness. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interests (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban express ways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach/leave a region and where people come from and leave for) as words. 
As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user’s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user’s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user’s check-in behavior. In particular, our model captures the property of user’s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user’s check-in behavior. 
The experimental results on a real world LBSN demonstrate that our approach properly models user’s checkins and shows how social and historical ties can help location prediction. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> In this paper, we propose a new model to integrate additional data, which is obtained from geospatial resources other than original data set in order to improve Location/Activity recommendations. The data set that is used in this work is a GPS trajectory of some users, which is gathered over 2 years. In order to have more accurate predictions and recommendations, we present a model that injects additional information to the main data set and we aim to apply a mathematical method on the merged data. On the merged data set, singular value decomposition technique is applied to extract latent relations. Several tests have been conducted, and the results of our proposed method are compared with a similar work for the same data set. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Mobile location-based services are thriving, providing an unprecedented opportunity to collect fine grained spatio-temporal data about the places users visit. This multi-dimensional source of data offers new possibilities to tackle established research problems on human mobility, but it also opens avenues for the development of novel mobile applications and services. In this work we study the problem of predicting the next venue a mobile user will visit, by exploring the predictive power offered by different facets of user behavior. We first analyze about 35 million check-ins made by about 1 million Foursquare users in over 5 million venues across the globe, spanning a period of five months. 
We then propose a set of features that aim to capture the factors that may drive users' movements. Our features exploit information on transitions between types of places, mobility flows between venues, and spatio-temporal characteristics of user check-in patterns. We further extend our study combining all individual features in two supervised learning models, based on linear regression and M5 model trees, resulting in a higher overall prediction accuracy. We find that the supervised methodology based on the combination of multiple features offers the highest levels of prediction accuracy: M5 model trees are able to rank in the top fifty venues one in two user check-ins, amongst thousands of candidate items in the prediction list. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The widespread use of location-based social networks (LBSNs) has enabled the opportunities for better location-based services through Point-of-Interest (POI) recommendation. Indeed, the problem of POI recommendation is to provide personalized recommendations of places of interest. Unlike traditional recommendation tasks, POI recommendation is personalized, location-aware, and context-dependent. In light of this difference, this paper proposes a topic and location-aware POI recommender system by exploiting associated textual and context information. Specifically, we first exploit an aggregated latent Dirichlet allocation (LDA) model to learn the interest topics of users and to infer the interest POIs by mining textual information associated with POIs. Then, a Topic and Location-aware probabilistic matrix factorization (TL-PMF) method is proposed for POI recommendation. A unique perspective of TL-PMF is to consider both the extent to which a user interest matches the POI in terms of topic distribution and the word-of-mouth opinions of the POIs.
Finally, experiments on real-world LBSNs data show that the proposed recommendation method outperforms state-of-the-art probabilistic latent factor models with a significant margin. Also, we have studied the impact of personalized interest topics and word-of-mouth opinions on POI recommendations. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Location-based social networks (LBSNs) are one kind of online social networks offering geographic services and have been attracting much attention in recent years. LBSNs usually have complex structures, involving heterogeneous nodes and links. Many recommendation services in LBSNs (e.g., friend and location recommendation) can be cast as link prediction problems (e.g., social link and location link prediction). Traditional link prediction researches on LBSNs mostly focus on predicting either social links or location links, assuming the prediction tasks of different types of links to be independent. However, in many real-world LBSNs, the prediction tasks for social links and location links are strongly correlated and mutually influential. Another key challenge in link prediction on LBSNs is the data sparsity problem (i.e., "new network" problem), which can be encountered when LBSNs branch into new geographic areas or social groups. Actually, nowadays, many users are involved in multiple networks simultaneously and users who just join one LBSN may have been using other LBSNs for a long time. In this paper, we study the problem of predicting multiple types of links simultaneously for a new LBSN across partially aligned LBSNs and propose a novel method TRAIL (TRAnsfer heterogeneous lInks across LBSNs). TRAIL can accumulate information for locations from online posts and extract heterogeneous features for both social links and location links. TRAIL can predict multiple types of links simultaneously.
In addition, TRAIL can transfer information from other aligned networks to the new network to solve the problem of lacking information. Extensive experiments conducted on two real-world aligned LBSNs show that TRAIL can achieve very good performance and substantially outperform the baseline methods. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach. 
<s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The availability of user check-in data in large volume from the rapid growing location-based social networks (LBSNs) enables a number of important location-aware services. Point-of-interest (POI) recommendation is one of such services, which is to recommend POIs that users have not visited before. It has been observed that: (i) users tend to visit nearby places, and (ii) users tend to visit different places in different time slots, and in the same time slot, users tend to periodically visit the same places. For example, users usually visit a restaurant during lunch hours, and visit a pub at night. In this paper, we focus on the problem of time-aware POI recommendation, which aims at recommending a list of POIs for a user to visit at a given time. To exploit both geographical and temporal influences in time aware POI recommendation, we propose the Geographical-Temporal influences Aware Graph (GTAG) to model check-in records, geographical influence and temporal influence. For effective and efficient recommendation based on GTAG, we develop a preference propagation algorithm named Breadth first Preference Propagation (BPP). The algorithm follows a relaxed breath-first search strategy, and returns recommendation results within at most 6 propagation steps. Our experimental results on two real-world datasets show that the proposed graph-based approach outperforms state-of-the-art POI recommendation methods substantially. <s> BIB014 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Social media provides valuable resources to analyze user behaviors and capture user preferences. 
This article focuses on analyzing user behaviors in social media systems and designing a latent class statistical mixture model, named temporal context-aware mixture model (TCAM), to account for the intentions and preferences behind user behaviors. Based on the observation that the behaviors of a user in social media systems are generally influenced by intrinsic interest as well as the temporal context (e.g., the public's attention at that time), TCAM simultaneously models the topics related to users' intrinsic interests and the topics related to temporal context and then combines the influences from the two factors to model user behaviors in a unified way. Considering that users' interests are not always stable and may change over time, we extend TCAM to a dynamic temporal context-aware mixture model (DTCAM) to capture users' changing interests. To alleviate the problem of data sparsity, we exploit the social and temporal correlation information by integrating a social-temporal regularization framework into the DTCAM model. To further improve the performance of our proposed models (TCAM and DTCAM), an item-weighting scheme is proposed to enable them to favor items that better represent topics related to user interests and topics related to temporal context, respectively. Based on our proposed models, we design a temporal context-aware recommender system (TCARS). To speed up the process of producing the top-k recommendations from large-scale social media data, we develop an efficient query-processing technique to support TCARS. Extensive experiments have been conducted to evaluate the performance of our models on four real-world datasets crawled from different social media sites. The experimental results demonstrate the superiority of our models, compared with the state-of-the-art competitor methods, by modeling user behaviors more precisely and making more effective and efficient recommendations. 
<s> BIB015 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> A point of interest (POI) is a specific location that people may find useful or interesting. Examples include restaurants, stores, attractions, and hotels. With recent proliferation of location-based social networks (LBSNs), numerous users are gathered to share information on various POIs and to interact with each other. POI recommendation is then a crucial issue because it not only helps users to explore potential places but also gives LBSN providers a chance to post POI advertisements. As we utilize a heterogeneous information network to represent a LBSN in this work, POI recommendation is remodeled as a link prediction problem, which is significant in the field of social network analysis. Moreover, we propose to utilize the meta-path-based approach to extract implicit (but potentially useful) relationships between a user and a POI. Then, the extracted topological features are used to construct a prediction model with appropriate data classification techniques. In our experimental studies, the Yelp dataset is utilized as our testbed for performance evaluation purposes. Results of the experiments show that our prediction model is of good prediction quality in practical applications. <s> BIB016 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Mobility prediction enables appealing proactive experiences for location-aware services and offers essential intelligence to business and governments. Recent studies suggest that human mobility is highly regular and predictable. Additionally, social conformity theory indicates that people's movements are influenced by others. 
However, existing approaches for location prediction fail to organically combine both the regularity and conformity of human mobility in a unified model, and lack the capacity to incorporate heterogeneous mobility datasets to boost prediction performance. To address these challenges, in this paper we propose a hybrid predictive model integrating both the regularity and conformity of human mobility as well as their mutual reinforcement. In addition, we further elevate the predictive power of our model by learning location profiles from heterogeneous mobility datasets based on a gravity model. We evaluate the proposed model using several city-scale mobility datasets including location check-ins, GPS trajectories of taxis, and public transit data. The experimental results validate that our model significantly outperforms state-of-the-art approaches for mobility prediction in terms of multiple metrics such as accuracy and percentile rank. The results also suggest that the predictability of human mobility is time-varying, e.g., the overall predictability is higher on workdays than holidays while predicting users' unvisited locations is more challenging for workdays than holidays. <s> BIB017 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The problem of point of interest (POI) recommendation is to provide personalized recommendations of places, such as restaurants and movie theaters. The increasing prevalence of mobile devices and of location based social networks (LBSNs) poses significant new opportunities as well as challenges, which we address. The decision process for a user to choose a POI is complex and can be influenced by numerous factors, such as personal preferences, geographical considerations, and user mobility behaviors. This is further complicated by the connection LBSNs and mobile devices. 
While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors. Meanwhile, although latent factor models have been proved effective and are thus widely used for recommendations, adopting them to POI recommendations requires delicate consideration of the unique characteristics of LBSNs. To this end, in this paper, we propose a general geographical probabilistic factor model (Geo-PFM) framework which strategically takes various factors into consideration. Specifically, this framework allows capturing the geographical influences on a user’s check-in behavior. Also, user mobility behaviors can be effectively leveraged in the recommendation model. Moreover, based on our Geo-PFM framework, we further develop a Poisson Geo-PFM which provides a more rigorous probabilistic generative process for the entire model and is effective in modeling the skewed user check-in count data as implicit feedback for better POI recommendations. Finally, extensive experimental results on three real-world LBSN datasets (which differ in terms of user mobility, POI geographical distribution, implicit response data skewness, and user-POI observation sparsity), show that the proposed recommendation methods outperform state-of-the-art latent factor models by a significant margin. <s> BIB018 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> While on the go, people are using their phones as a personal concierge discovering what is around and deciding what to do. Mobile phone has become a recommendation terminal customized for individuals—capable of recommending activities and simplifying the accomplishment of related tasks.
In this article, we conduct usage mining on the check-in data, with summarized statistics identifying the local recommendation challenges of huge solution space, sparse available data, and complicated user intent, and discovered observations to motivate the hierarchical, contextual, and sequential solution. We present a point-of-interest (POI) category-transition-based approach, with a goal of estimating the visiting probability of a series of successive POIs conditioned on current user context and sensor context. A mobile local recommendation demo application is deployed. The objective and subjective evaluations validate the effectiveness in providing mobile users both accurate recommendation and favorable user experience. <s> BIB019 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> With the rapid development of mobile devices, global positioning system (GPS) and Web 2.0 technologies, location-based social networks (LBSNs) have attracted millions of users to share rich information, such as experiences and tips. Point-of-Interest (POI) recommender system plays an important role in LBSNs since it can help users explore attractive locations as well as help social network service providers design location-aware advertisements for Point-of-Interest. In this paper, we present a brief survey over the task of Point-of-Interest recommendation in LBSNs and discuss some research directions for Point-of-Interest recommendation. We first describe the unique characteristics of Point-of-Interest recommendation, which distinguish Point-of-Interest recommendation approaches from traditional recommendation approaches.
Then, according to what type of additional information are integrated with check-in data by POI recommendation algorithms, we classify POI recommendation algorithms into four categories: pure check-in data based POI recommendation approaches, geographical influence enhanced POI recommendation approaches, social influence enhanced POI recommendation approaches and temporal influence enhanced POI recommendation approaches. Finally, we discuss future research directions for Point-of-Interest recommendation. <s> BIB020
Facebook, location is a new type of object, which yields new relations between locations BIB007 and between users and locations BIB008 BIB009 BIB015 . In addition, location sharing activities alter relations between users, since people are apt to make new friends with geographical neighbors BIB003 BIB005 . 3. Heterogeneous information: LBSNs consist of different kinds of information, including not only check-in records, the geographical information of locations, and venue descriptions, but also users' social relations and media information (e.g., user comments and tweets). This heterogeneous information depicts user activity from a variety of perspectives BIB016 BIB017 BIB012 , inspiring POI recommendation systems of different kinds BIB018 BIB013 BIB011 BIB010 BIB019 BIB014 . A large body of research has been carried out to address this significant but challenging problem of POI recommendation. Ye et al. BIB004 first propose POI recommendation for LBSNs such as Foursquare and Gowalla. Since then, more than 50 papers on the problem have been published in top conferences and journals, including SIGKDD, SIGIR, IJCAI, AAAI, WWW, CIKM, ICDM, RecSys, TIST, TKDE, and so on. Table 1 shows the statistics on the literature. Some studies similar to POI recommendation, such as restaurant recommendation systems BIB001 or location recommendation from GPS trajectories BIB002 BIB006 BIB009 , are based on other types of data and are beyond our scope. In this survey, we focus on POI recommendation for LBSNs. We surpass the latest survey BIB020 in this field in both depth and scope: 1) Yu et al. BIB020 categorize POI recommendation only according to the influential factors, while we present taxonomies from three perspectives. 2) We incorporate more studies, especially systems established on joint models and some recently published papers. 3) We show the trends and new directions in this field. We follow the scheme shown in Fig.
2 to reveal academic progress in the area of POI recommendation. We categorize the POI recommendation systems in three aspects: influential factors, methodology, and task. Table 1 Statistics on the literature (publication counts per conference venue and year, 2010-2016, for AAAI, IJCAI, ICDE, ICDM, WWW, KDD, SIGIR, SIGSPATIAL, and CIKM) More specifically, we discuss four types of influential factors: geographical influence, social influence, temporal influence, and content indications. In addition, we categorize the methodologies for POI recommendation as fused models and joint models. Moreover, we categorize POI recommendation systems as general POI recommendation and successive POI recommendation according to the subtle difference in task, i.e., whether the recommendation is inclined toward the recent check-in. The remainder of this paper is organized as follows. Section 2 presents the problem definition. Section 3 demonstrates the influential factors for POI recommendation. Next, Sections 4 and 5 present the POI recommendation systems categorized by methodology and task, respectively. Then, Section 6 introduces data sources and metrics for system performance evaluation. Further, Section 7 points out the trends and new directions in the POI recommendation area. Finally, Section 8 concludes the paper.
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> The website wheresgeorge.com invites its users to enter the serial numbers of their US dollar bills and track them across America and beyond. Why? “For fun and because it had not been done yet”, they say. But the dataset accumulated since December 1998 has provided the ideal raw material to test the mathematical laws underlying human travel, and that has important implications for the epidemiology of infectious diseases. Analysis of the trajectories of over half a million dollar bills shows that human dispersal is described by a ‘two-parameter continuous-time random walk’ model: our travel habits conform to a type of random proliferation known as ‘superdiffusion’. And with that much established, it should soon be possible to develop a new class of models to account for the spread of human disease. The dynamic spatial redistribution of individuals is a key driving force of various spatiotemporal phenomena on geographical scales. It can synchronize populations of interacting species, stabilize them, and diversify gene pools1,2,3. Human travel, for example, is responsible for the geographical spread of human infectious disease4,5,6,7,8,9. In the light of increasing international trade, intensified human mobility and the imminent threat of an influenza A epidemic10, the knowledge of dynamical and statistical properties of human travel is of fundamental importance. Despite its crucial role, a quantitative assessment of these properties on geographical scales remains elusive, and the assumption that humans disperse diffusively still prevails in models. Here we report on a solid and quantitative assessment of human travelling statistics by analysing the circulation of bank notes in the United States. Using a comprehensive data set of over a million individual displacements, we find that dispersal is anomalous in two ways. 
First, the distribution of travelling distances decays as a power law, indicating that trajectories of bank notes are reminiscent of scale-free random walks known as Levy flights. Second, the probability of remaining in a small, spatially confined region for a time T is dominated by algebraically long tails that attenuate the superdiffusive spread. We show that human travelling behaviour can be described mathematically on many spatiotemporal scales by a two-parameter continuous-time random walk model to a surprising accuracy, and conclude that human travel on geographical scales is an ambivalent and effectively superdiffusive process. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Despite their importance for urban planning, traffic forecasting and the spread of biological and mobile viruses, our understanding of the basic laws governing human motion remains limited owing to the lack of tools to monitor the time-resolved location of individuals. Here we study the trajectory of 100,000 anonymized mobile phone users whose position is tracked for a six-month period. We find that, in contrast with the random trajectories predicted by the prevailing Lévy flight and random walk models, human trajectories show a high degree of temporal and spatial regularity, each individual being characterized by a time-independent characteristic travel distance and a significant probability to return to a few highly frequented locations. After correcting for differences in travel distances and the inherent anisotropy of each trajectory, the individual travel patterns collapse into a single spatial probability distribution, indicating that, despite the diversity of their travel history, humans follow simple reproducible patterns. 
This inherent similarity in travel patterns could impact all phenomena driven by human mobility, from epidemic prevention to emergency response, urban planning and agent-based modelling. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> We report that human walks performed in outdoor settings of tens of kilometers resemble a truncated form of Levy walks commonly observed in animals such as monkeys, birds and jackals. Our study is based on about one thousand hours of GPS traces involving 44 volunteers in various outdoor settings including two different college campuses, a metropolitan area, a theme park and a state fair. This paper shows that many statistical features of human walks follow truncated power-law, showing evidence of scale-freedom and do not conform to the central limit theorem. These traits are similar to those of Levy walks. It is conjectured that the truncation, which makes the mobility deviate from pure Levy walks, comes from geographical constraints including walk boundary, physical obstructions and traffic. None of commonly used mobility models for mobile networks captures these properties. Based on these findings, we construct a simple Levy walk mobility model which is versatile enough in emulating diverse statistical patterns of human walks observed in our traces. The model is also used to recreate similar power-law inter-contact time distributions observed in previous human mobility studies. Our network simulation indicates that the Levy walk features are important in characterizing the performance of mobile network routing performance. 
<s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. 
We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user. In this introductory chapter we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured way, the chapters included in this handbook and to help the reader navigate the extremely rich and detailed content that the handbook offers. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. 
Short-ranged travel is periodic both spatially and temporally and not effected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). 
Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, \eg visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially. 
<s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> With the rapidly growing location-based social networks (LBSNs), personalized geo-social recommendation becomes an important feature for LBSNs. Personalized geo-social recommendation not only helps users explore new places but also makes LBSNs more prevalent to users. In LBSNs, aside from user preference and social influence, geographical influence has also been intensively exploited in the process of location recommendation based on the fact that geographical proximity significantly affects users' check-in behaviors. Although geographical influence on users should be personalized, current studies only model the geographical influence on all users' check-in behaviors in a universal way. In this paper, we propose a new framework called iGSLR to exploit personalized social and geographical influence on location recommendation. iGSLR uses a kernel density estimation approach to personalize the geographical influence on users' check-in behaviors as individual distributions rather than a universal distribution for all users. Furthermore, user preference, social influence, and personalized geographical influence are integrated into a unified geo-social recommendation framework. We conduct a comprehensive performance evaluation for iGSLR using two large-scale real data sets collected from Foursquare and Gowalla which are two of the most popular LBSNs. Experimental results show that iGSLR provides significantly superior location recommendation compared to other state-of-the-art geo-social recommendation techniques. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Point-of-Interest POI recommendation is a significant service for location-based social networks LBSNs. It recommends new places such as clubs, restaurants, and coffee bars to users. 
Whether recommended locations meet users' interests depends on three factors: user preference, social influence, and geographical influence. Hence extracting the information from users' check-in records is the key to POI recommendation in LBSNs. Capturing user preference and social influence is relatively easy since it is analogical to the methods in a movie recommender system. However, it is a new topic to capture geographical influence. Previous studies indicate that check-in locations disperse around several centers and we are able to employ Gaussian distribution based models to approximate users' check-in behaviors. Yet centers discovering methods are dissatisfactory. In this paper, we propose two models--Gaussian mixture model GMM and genetic algorithm based Gaussian mixture model GA-GMM to capture geographical influence. More specifically, we exploit GMM to automatically learn users' activity centers; further we utilize GA-GMM to improve GMM by eliminating outliers. Experimental results on a real-world LBSN dataset show that GMM beats several popular geographical capturing models in terms of POI recommendation, while GA-GMM excludes the effect of outliers and enhances GMM. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. 
Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. 
We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach. <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. 
First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques. <s> BIB014
Geographical influence is an important factor that distinguishes POI recommendation from traditional item recommendation, because check-in behavior depends on locations' geographical features. Analysis of users' check-in data shows that a user acts in geographically constrained areas and prefers to visit POIs near those already checked in. Several studies BIB008 BIB012 BIB013 BIB005 BIB009 BIB010 BIB014 BIB011 attempt to employ the geographical influence to improve POI recommendation systems. In particular, three representative models, i.e., the power law distribution model, the Gaussian distribution model, and the kernel density estimation model, are proposed to capture the geographical influence in POI recommendation. Fig. 5 Power law distribution pattern BIB005 In BIB005 , Ye et al. employ a power law distribution model to capture the geographical influence. The power law distribution pattern has been observed in human mobility such as withdrawal activities at ATMs and travel between cities BIB001 BIB002 BIB003 . Ye et al. discover a similar pattern in users' check-in activity in LBSNs BIB004 BIB005 . Figure 5 demonstrates the co-occurrence probability of two POIs as a function of the distance between them. Because of the power law distribution in Figure 5 , we are able to model the geographical influence as follows. The co-occurrence probability y of two POIs checked in by the same user can be formulated as y = a · x^b, (1) where x denotes the distance between the two POIs, and a and b are parameters of the power-law distribution. Here, a and b should be learned from the observed check-in data, depicting the geographical feature of the check-in activity. A standard way to learn the parameters a and b is to transform Eq. (1) into a linear equation via a logarithmic operation, log y = log a + b · log x, and learn them by fitting a linear regression.
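Since taking logarithms of the power law gives log y = log a + b log x, the two parameters can be recovered by an ordinary least-squares fit in log-log space. The following is a minimal sketch of that fitting step; the function name and the synthetic data are illustrative, not taken from the surveyed papers.

```python
import numpy as np

def fit_power_law(distances, cooccur_probs):
    """Fit y = a * x^b by least squares in log-log space.

    Taking logarithms gives log y = log a + b * log x, so a linear fit
    of log y against log x yields b as the slope and log a as the
    intercept.
    """
    slope, intercept = np.polyfit(np.log(distances), np.log(cooccur_probs), deg=1)
    return np.exp(intercept), slope  # a, b

# Synthetic check: data drawn from y = 0.5 * x^(-0.8) is recovered.
x = np.linspace(1.0, 100.0, 200)
y = 0.5 * x ** -0.8
a, b = fit_power_law(x, y)
```

In practice the (x, y) pairs are tabulated from observed check-ins, e.g. by binning pairwise distances of POIs visited by the same user and computing the empirical co-occurrence frequency per bin.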
On the basis of the geographical influence model depicted through the power law distribution, new POIs can be suggested according to the following formula. Given a past checked-in POI set L_i, the probability of visiting POI l_j for user u_i is formulated as $$Pr[l_j | L_i] = \prod_{l_y \in L_i} Pr[d(l_j, l_y)], \qquad (2)$$ where d(l_j, l_y) denotes the distance between POI l_j and POI l_y, and Pr[d(l_j, l_y)] is computed by the power-law distribution in Eq. (1). In BIB004 BIB005 , Ye et al. leverage the power law distribution to model the geographical influence and combine it with collaborative filtering techniques BIB006 to recommend POIs. In addition, Yuan et al. BIB009 also adopt the power law distribution model, but learn the parameters using a Bayesian rule instead. Fig. 6 Check-in distribution in multi-centers BIB007 The second type of model for the geographical influence is the family of Gaussian distribution based methods. Cho et al. BIB007 observe that users in LBSNs usually act around some activity centers, e.g., home and office, as shown in Fig. 6 . Further, Cheng et al. BIB008 propose a Multi-center Gaussian Model (MGM) to capture the geographical influence for POI recommendation. Given the multi-center set C_u, the probability of visiting POI l by user u is defined by $$P(l | C_u) = \sum_{c_u \in C_u} P(l \in c_u) \cdot \frac{f_{c_u}^{\alpha}}{\sum_{c \in C_u} f_c^{\alpha}} \cdot N(l | \mu_{c_u}, \Sigma_{c_u}), \qquad (3)$$ where P(l \in c_u) is the probability of the POI l belonging to the center c_u, the term f_{c_u}^{\alpha} / \sum_{c \in C_u} f_c^{\alpha} denotes the normalized effect of the check-in frequency on the center c_u with parameter α maintaining the frequency aversion property, and N(l | \mu_{c_u}, \Sigma_{c_u}) is the probability density function of a Gaussian distribution with mean \mu_{c_u} and covariance matrix \Sigma_{c_u}. Specifically, the MGM employs a greedy clustering algorithm on the check-in data to find the user activity centers, which may result in an unbalanced assignment of POIs to different activity centers. Hence, Zhao et al. BIB011 propose a genetic-based Gaussian mixture model to capture the geographical influence, which outperforms the MGM in POI recommendation. Fig. 7 Distributions of personal check-in locations BIB010 The third type of geographical model is the kernel density estimation (KDE) model.
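The multi-center scoring of the MGM can be sketched as follows. This is a simplified illustration, not Cheng et al.'s implementation: the membership probability P(l in c_u) is folded into the normalized frequency weight, the centers and covariances are hard-coded rather than found by their greedy clustering, and all names are my own.

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of a 2-D Gaussian N(x | mean, cov)."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ inv @ d)

def mgm_score(loc, centers, freqs, covs, alpha=0.5):
    """Mixture over activity centers: frequency weights with
    aversion exponent alpha times a Gaussian density per center."""
    w = np.asarray(freqs, float) ** alpha
    w = w / w.sum()  # normalized check-in frequency effect
    return sum(w[k] * gaussian_pdf(loc, centers[k], covs[k])
               for k in range(len(centers)))

# Two assumed activity centers (e.g., home and office) with unit covariance
centers = [np.array([0.0, 0.0]), np.array([10.0, 10.0])]
covs = [np.eye(2), np.eye(2)]
freqs = np.array([30.0, 10.0])  # check-in counts per center
near = mgm_score([0.5, 0.5], centers, freqs, covs)   # close to a center
far = mgm_score([50.0, 50.0], centers, freqs, covs)  # far from both
```

A POI near an activity center receives a much higher score than one far from every center, which is the intended effect of the model.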
In order to mine personalized geographical influence, Zhang et al. BIB010 argue that the geographical influence on each individual user should be personalized rather than modeled through a common distribution, e.g., the power law distribution BIB005 or the MGM BIB008 . As shown in Fig. 7 , it is hard to model different users with the same distribution. To this end, they leverage kernel density estimation to model the geographical influence via a personalized distance distribution for each user. Specifically, the kernel density estimation model consists of two steps: distance sample collection and distance distribution estimation. The distance sample collection step generates a sample X_u for a user by computing the distance between every pair of locations visited by the user. Then, the distance distribution can be estimated through the probability density function f over distance d, $$f(d) = \frac{1}{|X_u|\,\sigma} \sum_{x \in X_u} K\!\left(\frac{d - x}{\sigma}\right), \qquad (4)$$ where K(·) is a standard normal kernel and σ is a smoothing parameter, called the bandwidth. Denote L_u = {l_1, l_2, . . . , l_n} as the visited locations of user u. The probability of user u visiting a new POI l_j given the checked-in POI set L_u is defined as $$P(l_j | L_u) = \frac{1}{n} \sum_{i=1}^{n} f(d_{ij}), \qquad (5)$$ where d_ij is the distance between l_i and l_j, and f(·) is the distance distribution function in Eq. (4).
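The two KDE steps above — pairwise distance sampling and Gaussian-kernel density estimation — can be sketched as below. This is an illustrative sketch with a fixed bandwidth (Zhang et al. use an adaptive one) and toy coordinates; the function names are my own.

```python
import numpy as np
from itertools import combinations

def distance_samples(locations):
    """Step 1: sample X_u of pairwise distances among visited POIs."""
    return np.array([np.linalg.norm(np.subtract(p, q))
                     for p, q in combinations(locations, 2)])

def f_hat(d, samples, sigma):
    """Step 2: Gaussian-kernel density estimate f(d), bandwidth sigma."""
    z = (d - samples) / sigma
    return np.mean(np.exp(-0.5 * z ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def visit_prob(candidate, visited, sigma=1.0):
    """P(l_j | L_u): average of f over distances to visited POIs."""
    samples = distance_samples(visited)
    dists = [np.linalg.norm(np.subtract(candidate, p)) for p in visited]
    return float(np.mean([f_hat(d, samples, sigma) for d in dists]))

# Toy visited locations clustered near the origin
visited = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
near = visit_prob((0.5, 0.5), visited)   # candidate inside the cluster
far = visit_prob((50.0, 50.0), visited)  # candidate far away
```

Because the estimated distance distribution is learned per user, a candidate POI whose distances to the user's history resemble the user's typical travel distances scores high, while a remote POI scores near zero.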
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Recommender Systems based on Collaborative Filtering suggest to users items they might like. However due to data sparsity of the input ratings matrix, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network and to estimate a trust weight that can be used in place of the similarity weight. An empirical evaluation on Epinions.com dataset shows that Recommender Systems that make use of trust information are the most effective in term of accuracy while preserving a good coverage. This is especially evident on users who provided few ratings. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system. 
<s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Data sparsity, scalability and prediction quality have been recognized as the three most crucial challenges that every collaborative filtering algorithm or recommender system confronts. Many existing approaches to recommender systems can neither handle very large datasets nor easily deal with users who have made very few ratings or even none at all. Moreover, traditional recommender systems assume that all the users are independent and identically distributed; this assumption ignores the social interactions or connections among users. In view of the exponential growth of information generated by online social networks, social network analysis is becoming important for many Web applications. Following the intuition that a person's social network will affect personal behaviors on the Web, this paper proposes a factor analysis approach based on probabilistic matrix factorization to solve the data sparsity and poor prediction accuracy problems by employing both users' social network information and rating records. The complexity analysis indicates that our approach can be applied to very large datasets since it scales linearly with the number of observations, while the experimental results shows that our method performs much better than the state-of-the-art approaches, especially in the circumstance that users have made few or no ratings. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. 
Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. 
Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Recommender systems are becoming tools of choice to select the online information relevant to a given user. Collaborative filtering is the most popular approach to building recommender systems and has been successfully employed in many applications. With the advent of online social networks, the social network based approach to recommendation has emerged. This approach assumes a social network among users and makes recommendations for a user based on the ratings of the users that have direct or indirect social relations with the given user. As one of their major benefits, social network based approaches have been shown to reduce the problems with cold start users. In this paper, we explore a model-based approach for recommendation in social networks, employing matrix factorization techniques. Advancing previous work, we incorporate the mechanism of trust propagation into the model. Trust propagation has been shown to be a crucial phenomenon in the social sciences, in social network analysis and in trust-based recommendation. We have conducted experiments on two real life data sets, the public domain Epinions.com dataset and a much larger dataset that we have recently crawled from Flixster.com. Our experiments demonstrate that modeling trust propagation leads to a substantial increase in recommendation accuracy, in particular for cold start users. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Although Recommender Systems have been comprehensively analyzed in the past decade, the study of social-based recommender systems just started. 
In this paper, aiming at providing a general method for improving recommender systems by incorporating social network information, we propose a matrix factorization framework with social regularization. The contributions of this paper are four-fold: (1) We elaborate how social network information can benefit recommender systems; (2) We interpret the differences between social-based recommender systems and trust-aware recommender systems; (3) We coin the term Social Regularization to represent the social constraints on recommender systems, and we systematically illustrate how to design a matrix factorization objective function with social regularization; and (4) The proposed method is quite general, which can be easily extended to incorporate other contextual information, like social tags, etc. The empirical analysis on two large datasets demonstrates that our approaches outperform other state-of-the-art methods. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. 
In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user’s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user’s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user’s check-in behavior. In particular, our model captures the property of user’s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user’s check-in behavior. 
The experimental results on a real world LBSN demonstrate that our approach properly models user’s checkins and shows how social and historical ties can help location prediction. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Although online recommendation systems such as recommendation of movies or music have been systematically studied in the past decade, location recommendation in Location Based Social Networks (LBSNs) is not well investigated yet. In LBSNs, users can check in and leave tips commenting on a venue. These two heterogeneous data sources both describe users' preference of venues. 
However, in current research work, only users' check-in behavior is considered in users' location preference model, users' tips on venues are seldom investigated yet. Moreover, while existing work mainly considers social influence in recommendation, we argue that considering venue similarity can further improve the recommendation performance. In this research, we ameliorate location recommendation by enhancing not only the user location preference model but also recommendation algorithm. First, we propose a hybrid user location preference model by combining the preference extracted from check-ins and text-based tips which are processed using sentiment analysis techniques. Second, we develop a location based social matrix factorization algorithm that takes both user social influence and venue similarity influence into account in location recommendation. Using two datasets extracted from the location based social networks Foursquare, experiment results demonstrate that the proposed hybrid preference model can better characterize user preference by maintaining the preference consistency, and the proposed algorithm outperforms the state-of-the-art methods. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Providing location recommendations becomes an important feature for location-based social networks (LBSNs), since it helps users explore new places and makes LBSNs more prevalent to users. In LBSNs, geographical influence and social influence have been intensively used in location recommendations based on the facts that geographical proximity of locations significantly affects users' check-in behaviors and social friends often have common interests. Although human movement exhibits sequential patterns, most current studies on location recommendations do not consider any sequential influence of locations on users' check-in behaviors. 
In this paper, we propose a new approach called LORE to exploit sequential influence on location recommendations. First, LORE incrementally mines sequential patterns from location sequences and represents the sequential patterns as a dynamic Location-Location Transition Graph (L2TG). LORE then predicts the probability of a user visiting a location by Additive Markov Chain (AMC) with L2TG. Finally, LORE fuses sequential influence with geographical influence and social influence into a unified recommendation framework; in particular the geographical influence is modeled as two-dimensional check-in probability distributions rather than one-dimensional distance probability distributions in existing works. We conduct a comprehensive performance evaluation for LORE using two large-scale real data sets collected from Foursquare and Gowalla. Experimental results show that LORE achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. 
The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques. <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> The emergence of Location-based Social Network (LBSN) services provides a wonderful opportunity to build personalized Point-of-Interest (POI) recommender systems. Although a personalized POI recommender system can significantly facilitate users' outdoor activities, it faces many challenging problems, such as the hardness to model user's POI decision making process and the difficulty to address data sparsity and user/location cold-start problem. 
To cope with these challenges, we define three types of friends (i.e., social friends, location friends, and neighboring friends) in LBSN, and develop a two-step framework to leverage the information of friends to improve POI recommendation accuracy and address cold-start problem. Specifically, we first propose to learn a set of potential locations that each individual's friends have checked-in before and this individual is most interested in. Then we incorporate three types of check-ins (i.e., observed check-ins, potential check-ins and other unobserved check-ins) into matrix factorization model using two different loss functions (i.e., the square error based loss and the ranking error based loss). To evaluate the proposed model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results demonstrate the effectiveness of our methods. <s> BIB014
Inspired by the assumption that friends in LBSNs share more common interests than non-friends, social influence has been explored to enhance POI recommendation BIB008 BIB009 BIB010 BIB014 BIB005 BIB011 BIB013 BIB012 . In fact, employing social influence to enhance recommendation has already been explored in traditional recommendation systems, both in memory-based methods BIB004 BIB001 and in model-based methods BIB006 BIB003 BIB007 , and researchers have borrowed these ideas for POI recommendation. In the following, we demonstrate representative research capturing social influence in two aspects: memory-based and model-based. Ye et al. BIB005 propose a memory-based model, the friend-based collaborative filtering (FCF) approach, for POI recommendation. The FCF model constrains user-based collaborative filtering to find top similar users among friends rather than among all users of the LBSN. Hence, the preference r_ij of user u_i at POI l_j is calculated as $$r_{ij} = \frac{\sum_{u_k \in F_i} w_{ik} \cdot c_{kj}}{\sum_{u_k \in F_i} w_{ik}},$$ where F_i is the set of friends with top-n similarity, w_ik is the similarity weight between u_i and u_k, and c_kj is the check-in value of user u_k on POI l_j. FCF enhances efficiency by reducing the computation cost of finding top similar users. However, it overlooks non-friends who share many common check-ins with the target user. Experimental results show that FCF brings very limited improvements over user-based POI recommendation in terms of precision. Cheng et al. BIB008 apply probabilistic matrix factorization with social regularization (PMFSR) BIB007 to POI recommendation, which integrates social influence into PMF BIB002 . Denote by U and L the sets of users and POIs, respectively.
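The FCF score — a similarity-weighted average over a user's top similar friends — can be sketched with plain dictionaries. This is a toy illustration, not Ye et al.'s implementation; the data layout (tuple-keyed dicts) and all names are my own.

```python
def fcf_score(target, poi, friends, weights, checkins):
    """Friend-based CF: weighted average of top-similar friends'
    check-in values on the POI, normalized by the weight sum."""
    num = sum(weights[(target, f)] * checkins.get((f, poi), 0.0)
              for f in friends[target])
    den = sum(weights[(target, f)] for f in friends[target])
    return num / den if den else 0.0

# Toy data: u1 has two top-similar friends with known similarities
friends = {"u1": ["u2", "u3"]}
weights = {("u1", "u2"): 0.8, ("u1", "u3"): 0.2}
checkins = {("u2", "p1"): 1.0, ("u3", "p1"): 0.0}  # u3 never visited p1
score = fcf_score("u1", "p1", friends, weights, checkins)
```

Since only the friend set F_i is scanned, the candidate-neighbor search is far cheaper than over all users, which is exactly the efficiency gain noted above.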
PMFSR learns the latent features of users and POIs by minimizing the following objective function, $$\arg\min_{U, L} \sum_{u_i \in U} \sum_{l_j \in L} I_{ij} \big( g(c_{ij}) - U_i^{T} L_j \big)^2 + \beta \sum_{u_i \in U} \sum_{u_f \in F_i} sim(i, f)\, \lVert U_i - U_f \rVert^2 + \lambda_U \lVert U \rVert_F^2 + \lambda_L \lVert L \rVert_F^2,$$ where U_i, U_f, and L_j are the latent features of user u_i, user u_f, and POI l_j respectively, I_ij is an indicator denoting whether user u_i has checked in at POI l_j, F_i is the set of user u_i's friends, sim(i, f) denotes the social weight between u_i and u_f, g(·) is the sigmoid function mapping the check-in frequency value c_ij into the range [0,1], β controls the strength of the social regularization, and λ_U and λ_L are regularization coefficients. In this framework, social influence is incorporated through social constraints that keep the latent features of friends close in the latent subspace. Due to its validity, Yang et al. BIB011 also employ the same framework in their sentiment-aware POI recommendation. Fig. 8 The significance of social influence on POI recommendation BIB010 Although social influence improves traditional recommendation systems significantly BIB006 BIB003 BIB007 , it yields only limited improvements for POI recommendation BIB008 BIB010 BIB005 . Figure 8 shows the limited improvement achieved from social influence in BIB010 . This can be explained as follows. Users in LBSNs make friends online without any limitation; in contrast, check-in activity requires physical interaction between users and POIs. Hence, friends in LBSNs may share common interests but may not visit common locations. For instance, friends who favour Italian food but live in different cities will visit their own local Italian restaurants. This phenomenon differs from online movie and music recommendation scenarios such as Netflix and Spotify.
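The PMFSR objective — squared error on observed check-ins, a social term pulling friends' latent vectors together, and Frobenius regularization — can be evaluated with a short sketch. This only computes the loss on random toy data (no gradient-descent training), and the hyperparameter values and names are my own assumptions.

```python
import numpy as np

def pmfsr_loss(U, L, C, I, friends, sim, beta=0.1, lam=0.01):
    """Objective of PMF with social regularization (sketch):
    error on observed entries + social closeness term + L2 penalty."""
    def g(x):  # sigmoid mapping check-in frequency into [0, 1]
        return 1.0 / (1.0 + np.exp(-x))
    err = np.sum(I * (g(C) - U @ L.T) ** 2)
    social = sum(sim[(i, f)] * np.sum((U[i] - U[f]) ** 2)
                 for i in friends for f in friends[i])
    reg = lam * (np.sum(U ** 2) + np.sum(L ** 2))
    return err + beta * social + reg

# Toy problem: 3 users, 4 POIs, 2 latent dimensions
rng = np.random.default_rng(0)
U = rng.normal(size=(3, 2))
L = rng.normal(size=(4, 2))
C = rng.integers(1, 5, size=(3, 4)).astype(float)   # check-in frequencies
I = (rng.random((3, 4)) < 0.5).astype(float)        # observed indicator
friends = {0: [1], 1: [0]}
sim = {(0, 1): 0.9, (1, 0): 0.9}
with_social = pmfsr_loss(U, L, C, I, friends, sim, beta=1.0)
no_social = pmfsr_loss(U, L, C, I, friends, sim, beta=0.0)
```

With beta > 0 the loss additionally penalizes distance between friends' latent vectors, so minimizing it drags friends toward each other in the latent subspace, which is the social constraint described above.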
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. 
On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Real-world relational data are seldom stationary, yet traditional collaborative filtering algorithms generally rely on this assumption. Motivated by our sales prediction problem, we propose a factor-based algorithm that is able to take time into account. By introducing additional factors for time, we formalize this problem as a tensor factorization with a special constraint on the time dimension. Further, we provide a fully Bayesian treatment to avoid tuning parameters and achieve automatic model complexity control. To learn the model we develop an efficient sampling procedure that is capable of analyzing large-scale data sets. 
This new algorithm, called Bayesian Probabilistic Tensor Factorization (BPTF), is evaluated on several real-world problems including sales prediction and movie recommendation. Empirical results demonstrate the superiority of our temporal model. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not effected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> The exponential growth of Web service makes building high-quality service-oriented applications an urgent and crucial research problem. User-side QoS evaluations of Web services are critical for selecting the optimal Web service from a set of functionally equivalent service candidates. 
Since QoS performance of Web services is highly related to the service status and network environments which are variable against time, service invocations are required at different instances during a long time interval for making accurate Web service QoS evaluation. However, invoking a huge number of Web services from user-side for quality evaluation purpose is time-consuming, resource-consuming, and sometimes even impractical (e.g., service invocations are charged by service providers). To address this critical challenge, this paper proposes a Web service QoS prediction framework, called WSPred, to provide time-aware personalized QoS value prediction service for different service users. WSPred requires no additional invocation of Web services. Based on the past Web service usage experience from different service users, WSPred builds feature models and employs these models to make personalized QoS prediction for different users. The extensive experimental results show the effectiveness and efficiency of WSPred. Moreover, we publicly release our real-world time-aware Web service QoS dataset for future research, which makes our experiments verifiable and reproducible. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. 
In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. 
Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, e.g., visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, previous efforts mainly consider the "check-ins" in a whole and omit their temporal relation. They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction.
To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Providing location recommendations becomes an important feature for location-based social networks (LBSNs), since it helps users explore new places and makes LBSNs more prevalent to users. In LBSNs, geographical influence and social influence have been intensively used in location recommendations based on the facts that geographical proximity of locations significantly affects users' check-in behaviors and social friends often have common interests. Although human movement exhibits sequential patterns, most current studies on location recommendations do not consider any sequential influence of locations on users' check-in behaviors. In this paper, we propose a new approach called LORE to exploit sequential influence on location recommendations. First, LORE incrementally mines sequential patterns from location sequences and represents the sequential patterns as a dynamic Location-Location Transition Graph (L2TG). LORE then predicts the probability of a user visiting a location by Additive Markov Chain (AMC) with L2TG. 
Finally, LORE fuses sequential influence with geographical influence and social influence into a unified recommendation framework; in particular the geographical influence is modeled as two-dimensional check-in probability distributions rather than one-dimensional distance probability distributions in existing works. We conduct a comprehensive performance evaluation for LORE using two large-scale real data sets collected from Foursquare and Gowalla. Experimental results show that LORE achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. 
<s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> In location-based social networks (LBSNs), time significantly affects users’ check-in behaviors, for example, people usually visit different places at different times of weekdays and weekends, e.g., restaurants at noon on weekdays and bars at midnight on weekends. Current studies use the temporal influence to recommend locations through dividing users’ check-in locations into time slots based on their check-in time and learning their preferences to locations in each time slot separately. Unfortunately, these studies generally suffer from two major limitations: (1) the loss of time information because of dividing a day into time slots and (2) the lack of temporal influence correlations due to modeling users’ preferences to locations for each time slot separately. In this paper, we propose a probabilistic framework called TICRec that utilizes temporal influence correlations (TIC) of both weekdays and weekends for time-aware location recommendations. TICRec not only recommends locations to users, but it also suggests when a user should visit a recommended location. In TICRec, we estimate a time probability density of a user visiting a new location without splitting the continuous time into discrete time slots to avoid the time information loss. To leverage the TIC, TICRec considers both user-based TIC (i.e., different users’ check-in behaviors to the same location at different times ) and location-based TIC (i.e., the same user's check-in behaviors to different locations at different times ). Finally, we conduct a comprehensive performance evaluation for TICRec using two real data sets collected from Foursquare and Gowalla. Experimental results show that TICRec achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques with temporal influence. 
<s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation models by about 20% in Precision@5 and Recall@5. <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> In this paper, we address the problem of personalized next Point-of-interest (POI) recommendation which has become an important and very challenging task in location-based social networks (LBSNs), but not well studied yet.
With the conjecture that, under different contextual scenario, human exhibits distinct mobility patterns, we attempt here to jointly model the next POI recommendation under the influence of user's latent behavior pattern. We propose to adopt a third-rank tensor to model the successive check-in behaviors. By incorporating softmax function to fuse the personalized Markov chain with latent pattern, we furnish a Bayesian Personalized Ranking (BPR) approach and derive the optimization criterion accordingly. Expectation Maximization (EM) is then used to estimate the model parameters. Extensive experiments on two large-scale LBSNs datasets demonstrate the significant improvements of our model over several state-of-the-art methods. <s> BIB014 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. 
Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin. <s> BIB015
Temporal influence is of vital importance for POI recommendation because physical constraints on check-in activity result in specific patterns. Temporal influence in a POI recommendation system manifests in three aspects: periodicity, consecutiveness, and non-uniformness. Users' check-in behaviors in LBSNs exhibit periodic patterns. For instance, users regularly check in at restaurants at noon and at nightclubs at night; likewise, they visit places around the office on weekdays and spend time in shopping malls on weekends. Figure 9 shows the periodic pattern over a day and over a week, respectively. Check-in activity follows this kind of periodic pattern, with users visiting the same or similar POIs in the same time slots. This observation has inspired research exploiting the periodic pattern for POI recommendation BIB004 BIB007 BIB008 BIB012 . Consecutiveness appears in check-in sequences, especially in successive check-ins, which are usually correlated. For instance, users may have fun in a nightclub after dinner in a restaurant. Such a frequent check-in pattern implies that the nightclub and the restaurant are geographically adjacent and related in terms of venue function. Data analysis on Foursquare and Gowalla in BIB013 explores the spatial and temporal properties of successive check-ins in Fig. 10 , namely, the complementary cumulative distribution function (CCDF) of intervals and distances between successive check-ins. Many successive check-ins are highly correlated: over 40% (Foursquare) and 60% (Gowalla) of successive check-ins happen within 4 hours of the last check-in, and about 90% of successive check-ins in both datasets happen within 32 kilometers (half an hour's driving distance). Researchers exploit Markov chains to model this sequential pattern BIB009 BIB011 BIB014 BIB010 .
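As a minimal sketch of this sequential modeling idea (a plain first-order Markov chain estimated from toy check-in sequences, not the factorized personalized variants of BIB009 BIB011 ; all POI names here are hypothetical), transition probabilities between POIs can be obtained by counting successive check-ins:

```python
from collections import defaultdict

def transition_probs(checkin_sequences):
    """Estimate a first-order Markov transition matrix over POIs.

    checkin_sequences: list of per-user POI-id sequences, ordered by time.
    Returns {poi: {next_poi: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for seq in checkin_sequences:
        for prev, nxt in zip(seq, seq[1:]):   # successive check-in pairs
            counts[prev][nxt] += 1
    probs = {}
    for prev, nexts in counts.items():
        total = sum(nexts.values())
        probs[prev] = {poi: c / total for poi, c in nexts.items()}
    return probs

# Toy sequences: restaurant -> nightclub is a frequent successive pattern.
seqs = [["restaurant", "nightclub"],
        ["restaurant", "nightclub"],
        ["restaurant", "mall"]]
p = transition_probs(seqs)
# p["restaurant"]["nightclub"] == 2/3
```

A personalized variant in the spirit of FPMC would keep one such transition structure per user and factorize the resulting transition cube to cope with sparsity.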
The studies in BIB009 BIB011 assume that two POIs checked in within a short time span are highly correlated, and employ the factorized personalized Markov chain (FPMC) model BIB002 to recommend successive POIs. Zhang et al. BIB010 propose an additive Markov model to learn the transition probability between two successive check-ins. Zhao et al. BIB013 exploit a latent factorization model to capture consecutiveness, which is mathematically similar to the FPMC model.

Fig. 11 Demonstration of non-uniformness BIB006

The non-uniformness feature depicts the variance of a user's check-in preference at different hours of a day, different days of a week, or different months of a year BIB007 . As shown in Fig. 11 , the study in BIB007 demonstrates an example of a random user's aggregated check-in activities over the user's top five most visited POIs. A user's check-in preference changes at different hours of a day: the most frequently checked-in POI differs across hours. Similar temporal characteristics also appear across the months of a year and the days of a week. This non-uniformness feature can be explained by users' daily life customs: 1) A user may check in at POIs around home in the morning, visit places around the office during the day, and have fun in bars at night. 2) A user may visit more locations around home or the office on weekdays, while checking in more at shopping malls or vacation places on weekends. 3) In different months, a user may have different tastes for food and entertainment; for instance, a user may visit ice cream shops in summer but hot pot restaurants in winter.
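The non-uniformness observation can be sketched with a few lines of code: aggregating one user's check-ins by hour-of-day slot shows how the most frequently visited POI changes across the day (the check-ins below are hypothetical toy data, not from any of the cited datasets):

```python
from collections import Counter, defaultdict

def top_poi_per_slot(checkins, n_slots=24):
    """Aggregate one user's check-ins by hour-of-day slot and return the
    most frequently visited POI in each slot (non-uniformness sketch)."""
    by_slot = defaultdict(Counter)
    for poi, hour in checkins:            # (poi_id, hour in 0..23)
        by_slot[hour % n_slots][poi] += 1
    return {slot: ctr.most_common(1)[0][0] for slot, ctr in by_slot.items()}

# Hypothetical user: cafe in the morning, office at noon, bar at night.
checkins = [("cafe", 8), ("cafe", 9), ("office", 12),
            ("office", 13), ("bar", 22), ("bar", 23)]
tops = top_poi_per_slot(checkins)
# tops[8] == "cafe", tops[12] == "office", tops[22] == "bar"
```

Time-slot-based models such as BIB007 BIB008 build on exactly this kind of per-slot aggregation, learning a separate or smoothed preference per slot instead of a single global one.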
Although temporal features have been modeled to enhance other recommendation tasks, e.g., movie recommendation BIB001 BIB003 and web service recommendation BIB005 , the distinct temporal characteristics mentioned above make those earlier temporal models unsatisfactory for POI recommendation. For example, the work in BIB001 mines temporal patterns in the Netflix data and incorporates temporal influence into a matrix factorization model to capture long-term trends in user preference. The studies in BIB003 BIB005 model preference variance using tensor factorization models. Since these previously proposed temporal models do not fit the POI recommendation scenario, a variety of systems have been proposed to enhance POI recommendation performance BIB009 BIB007 BIB015 BIB008 BIB013 .
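To illustrate the general tensor-factorization idea behind such temporal models (a CP-style decomposition with random toy factors; this is a sketch of the shared idea, not the exact BPTF or WSPred formulations of BIB003 BIB005 ), a (user, time slot, POI) preference tensor can be scored and ranked as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_slots, n_pois, k = 5, 24, 10, 4

# CP-style latent factors for a (user, time slot, POI) preference tensor.
# In a real system these would be learned from check-in data.
U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
T = rng.normal(scale=0.1, size=(n_slots, k))   # time-slot factors
L = rng.normal(scale=0.1, size=(n_pois, k))    # POI factors

def predict(u, t, i):
    """Predicted preference of user u for POI i at time slot t."""
    return float(np.sum(U[u] * T[t] * L[i]))

def recommend(u, t, top_n=3):
    """Rank all POIs for user u at time slot t and return the top ones."""
    ranked = sorted(range(n_pois), key=lambda i: predict(u, t, i), reverse=True)
    return ranked[:top_n]
```

Because the time-slot factor enters the score multiplicatively, the same user can receive different top-ranked POIs at different hours, which is exactly the behavior a purely user-item factorization cannot express.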
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> Although online recommendation systems such as recommendation of movies or music have been systematically studied in the past decade, location recommendation in Location Based Social Networks (LBSNs) is not well investigated yet. In LBSNs, users can check in and leave tips commenting on a venue. These two heterogeneous data sources both describe users' preference of venues. However, in current research work, only users' check-in behavior is considered in users' location preference model, users' tips on venues are seldom investigated yet. Moreover, while existing work mainly considers social influence in recommendation, we argue that considering venue similarity can further improve the recommendation performance. In this research, we ameliorate location recommendation by enhancing not only the user location preference model but also recommendation algorithm. First, we propose a hybrid user location preference model by combining the preference extracted from check-ins and text-based tips which are processed using sentiment analysis techniques. Second, we develop a location based social matrix factorization algorithm that takes both user social influence and venue similarity influence into account in location recommendation. Using two datasets extracted from the location based social networks Foursquare, experiment results demonstrate that the proposed hybrid preference model can better characterize user preference by maintaining the preference consistency, and the proposed algorithm outperforms the state-of-the-art methods. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> In this paper, we address the problem of recommending Point-of-Interests (POIs) to users in a location-based social network. 
To the best of our knowledge, we are the first to propose the ST (Social Topic) model capturing both the social and topic aspects of user check-ins. We conduct experiments on real life data sets from Foursquare and Yelp. We evaluate the effectiveness of ST by evaluating the accuracy of top-k POI recommendation. The experimental results show that ST achieves better performance than the state-of-the-art models in the areas of social network-based recommender systems, and exploits the power of the location-based social network that has never been utilized before. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge to the traditional collaborative filtering-based recommender systems. The problem becomes even more challenging when people travel to a new city where they have no activity information. In this article, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants and shopping malls) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item cooccurrence patterns and exploiting item contents. 
The online recommendation part takes a querying user along with a querying city as input, and automatically combines the learned interest of the querying user and the local preference of the querying city to produce the top-k recommendations. To speed up the online process, a scalable query processing technique is developed by extending both the Threshold Algorithm (TA) and TA-approximation algorithm. We evaluate the performance of our recommender system on two real datasets, that is, DoubanEvent and Foursquare, and one large-scale synthetic dataset. The results show the superiority of LCARS in recommending spatial items for users, especially when traveling to new cities, in terms of both effectiveness and efficiency. Besides, the experimental analysis results also demonstrate the excellent interpretability of LCARS. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> The rapid urban expansion has greatly extended the physical boundary of users' living area and developed a large number of POIs (points of interest). POI recommendation is a task that facilitates users' urban exploration and helps them filter uninteresting POIs for decision making. While existing work of POI recommendation on location-based social networks (LBSNs) discovers the spatial, temporal, and social patterns of user check-in behavior, the use of content information has not been systematically studied. The various types of content information available on LBSNs could be related to different aspects of a user's check-in action, providing a unique opportunity for POI recommendation. In this work, we study the content information on LB-SNs w.r.t. POI properties, user interests, and sentiment indications. We model the three types of information under a unified POI recommendation framework with the consideration of their relationship to check-in actions. 
The experimental results exhibit the significance of content information in explaining user behavior, and demonstrate its power to improve POI recommendation performance on LBSNs. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> Location recommendation plays an essential role in helping people find places they are likely to enjoy. Though some recent research has studied how to recommend locations with the presence of social network and geographical information, few of them addressed the cold-start problem, specifically, recommending locations for new users. Because the visits to locations are often shared on social networks, rich semantics (e.g., tweets) that reveal a person's interests can be leveraged to tackle this challenge. A typical way is to feed them into traditional explicit-feedback content-aware recommendation methods (e.g., LibFM). As a user's negative preferences are not explicitly observable in most human mobility data, these methods need draw negative samples for better learning performance. However, prior studies have empirically shown that sampling-based methods don't perform as well as a method that considers all unvisited locations as negative but assigns them a lower confidence. To this end, we propose an Implicit-feedback based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and steer clear of negative sampling. For efficient parameter learning, we develop a scalable optimization algorithm, scaling linearly with the data size and the feature size. Furthermore, we offer a good explanation to ICCF, such that the semantic content is actually used to refine user similarity based on mobility. Finally, we evaluate ICCF with a large-scale LBSN dataset where users have profiles and text content. 
The results show that ICCF outperforms LibFM of the best configuration, and that user profiles and text content are not only effective at improving recommendation but also helpful for coping with the cold-start problem. <s> BIB005
In LBSNs, users generate content for POIs, including tips, ratings, and photos. Although content does not accompany every check-in record, the available content, especially user comments, can be used to enhance POI recommendation BIB004 BIB002 BIB005 BIB001 BIB003 , because comments provide explicit information beyond the check-in behavior itself, e.g., the user's preference for a location. For instance, a check-in at an Italian restaurant does not necessarily mean the user likes this restaurant; perhaps the user just likes Italian food rather than this particular restaurant, or even dislikes its taste. Compared with the check-in activity, comments usually express explicit preference, which complements the check-in behavior. As a result, comments can be used to better understand users' check-in behavior and improve POI recommendation BIB004 BIB002 BIB001 . The research in BIB001 is the first and most representative work exploiting comments to strengthen POI recommendation. Yang et al. BIB001 propose a sentiment-enhanced location recommendation method, which utilizes user comments to adjust the check-in preference estimation. As shown in Fig. 12 , raw tips in LBSNs are collected and analyzed with a natural language processing pipeline: language detection, sentence splitting, POS identification, sentiment scoring with SentiWordNet, and noun phrase chunking. Each comment is then given a sentiment score, from which a preference score of a user at a POI is generated. Figure 12 also shows how an example comment is handled: it is transformed into several noun phrases such as "Reasonable price", "Good place", and "Long waiting time", assigned a sentiment score of 0.3, and mapped to a preference measure of 5.
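A simplified sketch of this sentiment-to-preference pipeline is shown below, using a tiny hypothetical lexicon in place of SentiWordNet and a plain linear score-to-rating mapping rather than the paper's exact scheme (the lexicon entries and the mapping are illustrative assumptions, not values from BIB001 ):

```python
# Hypothetical mini-lexicon standing in for SentiWordNet polarity scores.
LEXICON = {"reasonable": 0.4, "good": 0.6, "long": -0.3, "bad": -0.7}

def sentiment_score(tip):
    """Average lexicon polarity over the words of a tip, in [-1, 1]."""
    hits = [LEXICON[w] for w in tip.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def to_preference(score, levels=5):
    """Linearly map a sentiment score in [-1, 1] to a 1..levels rating."""
    return round((score + 1) / 2 * (levels - 1)) + 1

tip = "Reasonable price, good place, long waiting time"
s = sentiment_score(tip)   # (0.4 + 0.6 - 0.3) / 3, roughly 0.23
pref = to_preference(s)    # maps to a mid-scale preference of 3
```

A production pipeline would add the language detection, sentence splitting, POS tagging, and noun-phrase chunking stages described above before scoring; the point here is only the final step of turning a per-comment sentiment value into a preference measure that can be fused with check-in frequency.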
Moreover, by combining the preference measure from sentiment analysis with the check-in frequency, the proposed model in BIB001 generates a modified rating Ĉ_{i,j} measuring the preference of user u_i at POI l_j . Accordingly, the traditional matrix factorization method can be employed to recommend POIs through the following objective,

arg min_{U,L} Σ_{i,j} (Ĉ_{i,j} − U_i^T L_j)^2 + α‖U‖_F^2 + β‖L‖_F^2,

where U_i and L_j are latent features of user u_i and POI l_j respectively, Ĉ_{i,j} is the combined rating value, and α and β are regularization coefficients.
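A minimal sketch of optimizing such a regularized matrix factorization objective by stochastic gradient descent is given below (the ratings are hypothetical toy values standing in for the combined sentiment/frequency ratings; this is a generic SGD sketch, not the exact training procedure of BIB001 ):

```python
import numpy as np

def train_mf(ratings, n_users, n_pois, k=8, alpha=0.02, beta=0.02,
             lr=0.01, epochs=500, seed=0):
    """SGD for: min Σ (Ĉ_ij − U_i·L_j)² + α‖U‖² + β‖L‖² over observed ratings.

    ratings: list of (user_idx, poi_idx, combined_rating) triples.
    """
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, k))
    L = rng.normal(scale=0.1, size=(n_pois, k))
    for _ in range(epochs):
        for i, j, c in ratings:
            err = c - U[i] @ L[j]                 # residual on combined rating
            U[i] += lr * (err * L[j] - alpha * U[i])
            L[j] += lr * (err * U[i] - beta * L[j])
    return U, L

# Toy combined ratings Ĉ (as would come from sentiment + check-in frequency).
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0)]
U, L = train_mf(ratings, n_users=2, n_pois=2)
# After training, U[0] @ L[0] should move close to the 5.0 rating.
```

Unobserved user-POI pairs are then scored as U_i·L_j and the highest-scoring unvisited POIs are recommended.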
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user. In this introductory chapter we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured way, the chapters included in this handbook and to help the reader navigate the extremely rich and detailed content that the handbook offers. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. 
Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of user-driven footprints (i.e., "checkins"). Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. 
We find that: (i) LSS users follow the “Levy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Due to the prevalence of personalization and information filtering applications, modeling users' interests on the Web has become increasingly important during the past few years. In this paper, aiming at providing accurate personalized Web site recommendations for Web users, we propose a novel probabilistic factor model based on dimensionality reduction techniques. We also extend the proposed method to collective probabilistic factor modeling, which further improves model performance by incorporating heterogeneous data sources. The proposed method is general, and can be applied to not only Web site recommendations, but also a wide range of Web applications, including behavioral targeting, sponsored search, etc. The experimental analysis on Web site recommendation shows that our method outperforms other traditional recommendation approaches. Moreover, the complexity analysis indicates that our approach can be applied to very large datasets since it scales linearly with the number of observations. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. 
Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. 
To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques. <s> BIB007
The fused model usually establishes a model for each influential factor and combines their recommended results with suggestions from the collaborative filtering model BIB002 that captures user preference on POIs. Since social influence provides limited improvement for POI recommendation and user comments are usually missing from check-ins, geographical influence and temporal influence constitute the two most important factors for POI recommendation. Hence, a typical fused model BIB006 BIB003 BIB007 recommends POIs by combining traditional collaborative filtering methods with influential factors, especially geographical influence or temporal influence.

In BIB006, Cheng et al. employ probabilistic matrix factorization (PMF) BIB001 and the probabilistic factor model (PFM) BIB005 to learn user preference for recommending POIs. Suppose the number of users is m and the number of POIs is n, and let U_i and L_j denote the latent features of user u_i and POI l_j. The PMF-based method assumes a Gaussian distribution on the observed check-in data and Gaussian priors on the user latent feature matrix U and the POI latent feature matrix L. Then, the objective function to learn the model is as follows,

min_{U,L} (1/2) Σ_{i=1}^{m} Σ_{j=1}^{n} I_{ij} (g(c_{ij}) − g(U_i^T L_j))^2 + (λ_U/2) ||U||_F^2 + (λ_L/2) ||L||_F^2,

where g(x) = 1/(1 + e^{−x}) is the logistic function and c_{ij} is the check-in frequency of user u_i at POI l_j. I_{ij} is the indicator function recording the check-in state of u_i at l_j; namely, I_{ij} equals one when the i-th user has checked in at the j-th POI, and zero otherwise. After learning the user and POI latent features, the preference score of u_i over l_j is measured by the score function

F_{ij} = σ(U_i^T L_j),

where σ is the sigmoid function. In addition, the geographical influence can be modeled through the MGM, shown in Eq. (3) of Sect. 3.1. Then, a fused model is proposed to combine the user preference learned from Eq. (10) and the geographical influence modeled in Eq. (3).
The proposed model determines the probability P_{ul} of a user u visiting a location l via the product of the preference score estimation and the probability of the user visiting that place in terms of geographical influence,

P_{ul} = P(F_{ul}) · P(l | C_u),

where P(l | C_u) is calculated via the MGM and P(F_{ul}) encodes a user's preference on a location.
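The fusion above can be sketched numerically as follows. This is an assumed toy form (isotropic 2-D Gaussians with a shared bandwidth, hypothetical center weights and latent vectors), not the authors' implementation:

```python
import numpy as np

# Fused score: P(u visits l) ~ sigmoid(U_u . L_l) * P(l | C_u),
# where P(l | C_u) is a multi-center Gaussian mixture over the user's
# activity centers (a simplified stand-in for the MGM).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mgm_prob(loc, centers, weights, sigma=1.0):
    # one isotropic 2-D Gaussian per center; weights sum to 1
    d2 = np.sum((centers - loc) ** 2, axis=1)
    dens = np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return float(np.dot(weights, dens))

def fused_score(u_vec, l_vec, loc, centers, weights, sigma=1.0):
    return sigmoid(u_vec @ l_vec) * mgm_prob(loc, centers, weights, sigma)

centers = np.array([[0.0, 0.0], [10.0, 10.0]])   # user's two activity centers
weights = np.array([0.7, 0.3])
u_vec = np.array([0.5, 1.0])
l_vec = np.array([1.0, 0.5])
near = fused_score(u_vec, l_vec, np.array([0.5, 0.5]), centers, weights)
far = fused_score(u_vec, l_vec, np.array([5.0, 5.0]), centers, weights)
```

With identical latent preference, a POI close to one of the user's activity centers (`near`) receives a higher fused score than a distant one (`far`), which is exactly the effect the geographical term contributes.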
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Representative Work for MF-based Joint Model <s> Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Representative Work for MF-based Joint Model <s> Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. 
Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance. <s> BIB002
In this section, we report two representative works on the MF-based joint model, which incorporate the temporal effect and the geographical effect into a matrix factorization framework, respectively. In BIB001, Gao et al. propose a Location Recommendation framework with Temporal effects (LRT), which incorporates temporal influence into a matrix factorization model. The LRT model rests on two assumptions about the temporal effect: 1) non-uniformness, users' check-in preferences change at different hours of the day; 2) consecutiveness, users' check-in preferences are similar in consecutive time slots. To model non-uniformness, LRT separates a day into T slots and defines time-dependent user latent features U_t ∈ R^{m×d}, where m is the number of users, d is the latent feature dimension, and t ∈ [1, T] indexes time slots. Suppose that C_t ∈ R^{m×n} denotes the matrix of check-in frequencies at temporal state t, and U and L denote the latent feature matrices for users and POIs, respectively. Using non-negative matrix factorization to model the POI recommendation system, the time-dependent objective function is as follows,

min_{U_t ≥ 0, L ≥ 0} Σ_{t=1}^{T} ||Y_t ⊙ (C_t − U_t L^T)||_F^2 + α Σ_{t=1}^{T} ||U_t||_F^2 + β ||L||_F^2,

where Y_t is the corresponding indicator matrix, ⊙ denotes the element-wise product, and α and β are the regularizations. Furthermore, the temporal consecutiveness inspires minimizing the following term,

Σ_{t=2}^{T} Σ_{i=1}^{m} φ_i(t, t−1) ||U_t(i,:) − U_{t−1}(i,:)||^2,

where φ_i(t, t−1) ∈ [0, 1] is defined as a temporal coefficient that measures user preference similarity between temporal states t and t−1. The temporal coefficient can be calculated via the cosine similarity of a user's check-ins at states t and t−1. Representing Eq. (14) in matrix form, we get

Σ_{t=2}^{T} Tr((U_t − U_{t−1})^T Σ_t (U_t − U_{t−1})),

where Σ_t ∈ R^{m×m} is the diagonal temporal coefficient matrix over the m users. Combining the two minimization targets, the objective function of the LRT model is obtained as follows,

min_{U_t ≥ 0, L ≥ 0} Σ_{t=1}^{T} ||Y_t ⊙ (C_t − U_t L^T)||_F^2 + λ Σ_{t=2}^{T} Tr((U_t − U_{t−1})^T Σ_t (U_t − U_{t−1})) + α Σ_{t=1}^{T} ||U_t||_F^2 + β ||L||_F^2,

where λ is a non-negative parameter to control the temporal regularization. User and location latent representations can be learned by solving the above optimization problem.
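The combined LRT objective can be evaluated with a few lines of code. The sketch below (an assumed illustration, not the authors' solver; all names are hypothetical) only computes the objective value for given factors, which is the quantity an alternating solver would minimize:

```python
import numpy as np

# LRT objective:
#   sum_t ||Y_t * (C_t - U_t L^T)||_F^2 + alpha * sum_t ||U_t||_F^2
#   + beta * ||L||_F^2
#   + lam * sum_{t>1} Tr((U_t - U_{t-1})^T Sigma_t (U_t - U_{t-1}))
def lrt_objective(C, Y, U, L, Sigma, alpha=0.1, beta=0.1, lam=1.0):
    T = len(C)
    obj = beta * np.linalg.norm(L) ** 2          # Frobenius norm by default
    for t in range(T):
        resid = Y[t] * (C[t] - U[t] @ L.T)       # masked reconstruction error
        obj += np.linalg.norm(resid) ** 2 + alpha * np.linalg.norm(U[t]) ** 2
        if t > 0:                                # temporal smoothness term
            diff = U[t] - U[t - 1]
            obj += lam * np.trace(diff.T @ Sigma[t] @ diff)
    return obj
```

Note that when the time-dependent user factors U_t are identical across slots, the temporal term vanishes regardless of λ, matching the consecutiveness assumption.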
Then, the user check-in preference Ĉ_t(i, j) at each temporal state can be estimated by the product of the user latent feature and the location latent feature, U_t(i,:) L(j,:)^T. Recommending POIs for users is to find the POIs with the highest values of Ĉ(i, j). To aggregate the contributions of the different temporal states, Ĉ(i, j) is estimated through

Ĉ(i, j) = f(Ĉ_1(i, j), Ĉ_2(i, j), …, Ĉ_T(i, j)),

where f(·) is an aggregation function, e.g., sum, mean, maximum, or a voting operation.

In BIB002, Lian et al. propose the GeoMF model to incorporate geographical influence into a weighted regularized matrix factorization model (WRMF) [22, 43]. WRMF is a popular model for the one-class collaborative filtering problem, which learns from implicit feedback for recommendation. GeoMF treats user check-ins as implicit feedback and leverages a 0/1 rating matrix to represent them. Furthermore, GeoMF employs an augmented factorization to recover the rating matrix, as shown in Fig. 13.

Fig. 13 Demonstration of GeoMF model BIB002

Each entry in the rating matrix is the combination of two interactions: the user and POI latent features, and the users' activity area representations with the POIs' influence area representations. Suppose there are m users and n POIs. The latent feature dimension is d for the user and POI representations, and l for the users' activity area and POIs' influence area representations. Then the estimated rating matrix can be formulated as

R̂ = P Q^T + X Y^T,

where R̂ ∈ R^{m×n} is the estimated rating matrix, and P ∈ R^{m×d} and Q ∈ R^{n×d} are the user and POI latent matrices, respectively. In addition, X ∈ R^{m×l} and Y ∈ R^{n×l} are the users' activity area and POIs' influence area representation matrices, respectively. Define W as the weight matrix for the binary ratings, whose entry w_ui is set as

w_ui = 1 + α(c_ui) if c_ui > 0, and w_ui = 1 otherwise,

where c_ui is user u's check-in frequency at POI l_i and α(c_ui) > 0 is a monotonically increasing function with respect to c_ui.
Following the scheme of the WRMF model, the objective function of GeoMF is formulated as

arg min_{P, Q, X ≥ 0} ||W ⊙ (R − P Q^T − X Y^T)||_F^2 + γ (||P||_F^2 + ||Q||_F^2) + λ ||X||_1,

where Y is the POIs' influence area matrix generated from a Gaussian kernel function, P, Q, and X are the parameters to be learned, and γ and λ are regularization parameters. After learning the latent features from Eq. (20), the proposed model estimates the check-in possibility according to Eq. (18) and then recommends the POIs with the highest values for each user.
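The augmented prediction and the weighted objective can be sketched directly from these formulas. This is an assumed illustration of the model form (not the GeoMF release code; dimensions and regularization values are hypothetical):

```python
import numpy as np

# GeoMF-style augmented prediction: R_hat = P Q^T + X Y^T,
# with X >= 0 (users' activity areas) and Y fixed, generated from a
# Gaussian kernel over grid cells (POIs' influence areas).
def geomf_predict(P, Q, X, Y):
    return P @ Q.T + X @ Y.T

def geomf_loss(R, W, P, Q, X, Y, gamma=0.1, lam=0.1):
    resid = W * (R - geomf_predict(P, Q, X, Y))      # weighted residual
    return (np.linalg.norm(resid) ** 2
            + gamma * (np.linalg.norm(P) ** 2 + np.linalg.norm(Q) ** 2)
            + lam * np.abs(X).sum())                 # l1 keeps activity areas sparse
```

The l1 penalty on X reflects the intuition that each user is active in only a few geographical grid cells, so most entries of the activity area vector should be zero.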
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. 
This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, previous efforts mainly consider the "check-ins" as a whole and omit their temporal relation.
They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. 
We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task which tries to regard the POI a user currently visits as his POI-related query and recommend new POIs the user has not visited before. While carefully designed methods are proposed to solve this problem, they ignore the essence of the task which involves retrieval and recommendation problem simultaneously and fail to employ the social relations or temporal information adequately to improve the results. In order to solve this problem, we propose a new model called location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model which leverages weighted approximately ranked pairwise (WARP) loss for achieving better top-n ranking results, just as the new successive POI recommendation task needs. We conducted some comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method. 
<s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> In this paper, we address the problem of personalized next Point-of-interest (POI) recommendation which has become an important and very challenging task in location-based social networks (LBSNs), but not well studied yet. With the conjecture that, under different contextual scenario, human exhibits distinct mobility patterns, we attempt here to jointly model the next POI recommendation under the influence of user's latent behavior pattern. We propose to adopt a third-rank tensor to model the successive check-in behaviors. By incorporating softmax function to fuse the personalized Markov chain with latent pattern, we furnish a Bayesian Personalized Ranking (BPR) approach and derive the optimization criterion accordingly. Expectation Maximization (EM) is then used to estimate the model parameters. Extensive experiments on two large-scale LBSNs datasets demonstrate the significant improvements of our model over several state-of-the-art methods. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. 
To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation models by about 20% in Precision@5 and Recall@5. <s> BIB007
In terms of whether the recommendation is biased toward the most recent check-in, we categorize the POI recommendation task into general POI recommendation and successive POI recommendation. General POI recommendation in LBSNs was first proposed in BIB001; it recommends the top-N POIs for users, similar to the movie recommendation task in the Netflix competition. Further studies observe that two successive check-ins are significantly correlated with high probability, as shown in Fig. 10. Bao et al. BIB002 employ the most recent check-in's information to recommend POIs in an online scenario. Moreover, Cheng et al. BIB003 propose successive POI recommendation, which provides recommendations sensitive to the user's most recent check-in. Namely, successive POI recommendation does not recommend a general list of POIs but a list sensitive to the user's recent check-ins. Because successive POI recommendation takes advantage of the recent check-in information, it strikingly improves system performance on the recall metric. Hence, several studies BIB004 BIB006 BIB005 BIB007 address this specific POI recommendation task.
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. 
In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. 
Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. 
The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance. 
<s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> The problem of point of interest (POI) recommendation is to provide personalized recommendations of places of interest, such as restaurants, for mobile users. Due to its complexity and its connection to location-based social networks (LBSNs), the decision process of a user choosing a POI is complex and can be influenced by various factors, such as user preferences, geographical influences, and user mobility behaviors. While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors. To this end, in this paper, we propose a novel geographical probabilistic factor analysis framework which strategically takes various factors into consideration. Specifically, this framework allows us to capture the geographical influences on a user's check-in behavior. Also, the user mobility behaviors can be effectively exploited in the recommendation model. Moreover, the recommendation model can effectively make use of user check-in count data as implicit user feedback for modeling user preferences. Finally, experimental results on real-world LBSNs data show that the proposed recommendation method outperforms state-of-the-art latent factor models by a significant margin. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge.
In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> Given the abundance of online information available to mobile users, particularly tourists and weekend travelers, recommender systems that effectively filter this information and suggest interesting participatory opportunities will become increasingly important. Previous work has explored recommending interesting locations; however, users would also benefit from recommendations for activities in which to participate at those locations along with suitable times and days. Thus, systems that provide collaborative recommendations involving multiple dimensions such as location, activities and time would enhance the overall experience of users.The relationship among these dimensions can be modeled by higher-order matrices called tensors which are then solved by tensor factorization. However, these tensors can be extremely sparse. 
In this paper, we present a system and an approach for performing multi-dimensional collaborative recommendations for Who (User), What (Activity), When (Time) and Where (Location), using tensor factorization on sparse user-generated data. We formulate an objective function which simultaneously factorizes coupled tensors and matrices constructed from heterogeneous data sources. We evaluate our system and approach on large-scale real world data sets consisting of 588,000 Flickr photos collected from three major metro regions in USA. We compare our approach with several state-of-the-art baselines and demonstrate that it outperforms all of them. <s> BIB008
The general POI recommendation task recommends the top-N POIs for each user, analogous to the movie recommendation task in the Netflix competition. Researchers have proposed a variety of models that incorporate different influential factors, e.g., geographical influence and temporal influence, to fulfill this task BIB005 BIB007 BIB006 BIB003 . In the following, we report a recent representative model for this task. In BIB007 , Li et al. propose the ranking-based geographical factorization method (Rank-GeoFM), which employs the WARP loss to learn the recommended POI list. The check-in probability is assumed to be affected by two aspects, user preference and geographical influence, which are modeled by the interaction between the user and the target POI and the interaction between the user and the neighboring POIs of the target POI, respectively. Further, a weight utility function is introduced to measure each neighbor's contribution to the geographical influence. For a neighbor l' of the target POI l, the weight is set as w_{l,l'} = (0.5 + d(l, l'))^{-1}, where d(l, l') denotes the distance between POIs l and l'. In practice, w_{l,l'} may be normalized by dividing it by the sum of all the neighbor weights. Further, given user u and POI l, we use u_u^{(1)} and u_u^{(2)} to denote the user latent features for user preference and geographical influence, respectively, and l_l to denote the POI latent feature. Then, the recommendation score y_{ul} could be formulated as y_{ul} = u_u^{(1)} · l_l + Σ_{l' ∈ N_k(l)} w_{l,l'} (u_u^{(2)} · l_{l'}), where the operator (·) denotes the inner product, and N_k(l) denotes the k-nearest neighbors of POI l. After defining the recommendation score function, Rank-GeoFM employs the WARP loss to learn the model. A user's preference ranking is summarized as follows: the higher the check-in frequency, the more the POI is preferred by the user. In other words, for user u, POI l would be ranked higher than l' if f_{ul} > f_{ul'}, where f_{ul} denotes the check-in frequency of user u at POI l.
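The recommendation score described above, i.e., user preference plus a weighted geographical influence from the target POI's k nearest neighbors, can be sketched as follows. The matrix names, shapes, and the distance lookup are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rank_geofm_score(U1, U2, L, u, l, neighbors, dist):
    # y_ul = U1[u] . L[l]                      (user preference)
    #      + sum_{l' in N_k(l)} w_{l,l'} * (U2[u] . L[l'])  (geo influence)
    # with w_{l,l'} = (0.5 + d(l, l'))^-1, normalized to sum to one.
    w = np.array([1.0 / (0.5 + dist[(l, n)]) for n in neighbors])
    w = w / w.sum()
    pref = U1[u] @ L[l]
    geo = sum(wi * (U2[u] @ L[n]) for wi, n in zip(w, neighbors))
    return pref + geo
```

Here `U1` and `U2` hold the preference and geographical latent vectors per user, `L` the POI latent vectors, and `dist` maps a (POI, neighbor) pair to their distance.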
Given a user u and a checked-in POI l, modeling the rank order is equivalent to minimizing the following ranking incompatibility, Incomp(y_{ul}, ε) = Σ_{l' ∈ L} I(f_{ul} > f_{ul'}) · I(y_{ul} < y_{ul'} + ε), where U and L denote the user set and the POI set respectively, ε is the error tolerance hyperparameter, and I(·) denotes the indicator function. By accumulating the incompatibility over all check-ins in the set D, we get the objective function of Rank-GeoFM, min Σ_{(u,l) ∈ D} E(Incomp(y_{ul}, ε)), where E(·) is a function that converts the ranking incompatibility into a loss value, typically E(r) = Σ_{i=1}^{r} 1/i. We denote L_u^C as the candidate POIs that user u has not visited in the POI set L. After learning the model parameters by minimizing the objective function above, the check-in possibility of user u over a candidate POI l ∈ L_u^C could be estimated by the recommendation score y_{ul}. Then, the POI recommendation task is achieved by ranking the candidate POIs and selecting the top-N POIs with the highest estimated possibility values for each user.
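The two WARP-style ingredients of this learning procedure, counting ranking violations and converting the count into a loss via E(r) = Σ_{i=1}^{r} 1/i, can be sketched as follows; the container types are illustrative assumptions:

```python
def warp_loss_weight(rank):
    # E(r) = sum_{i=1..r} 1/i converts a ranking incompatibility
    # (number of violating POIs) into a loss value.
    return sum(1.0 / i for i in range(1, rank + 1))

def incompatibility(scores, freq, l, eps=0.3):
    # Incomp(y_ul, eps): count POIs l' that the user visits less often
    # than l (f_ul > f_ul') yet are scored too close or higher
    # (y_ul < y_ul' + eps).  `scores` and `freq` map POI -> value.
    return sum(
        1 for lp in scores
        if freq.get(l, 0) > freq.get(lp, 0) and scores[l] < scores[lp] + eps
    )
```

In training, each sampled check-in (u, l) contributes `warp_loss_weight(incompatibility(...))` to the objective, so check-ins that are ranked badly receive larger gradient weight.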
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Successive POI Recommendation <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. 
<s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Successive POI Recommendation <s> The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Successive POI Recommendation <s> In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task which tries to regard the POI a user currently visits as his POI-related query and recommend new POIs the user has not visited before. While carefully designed methods are proposed to solve this problem, they ignore the essence of the task which involves retrieval and recommendation problem simultaneously and fail to employ the social relations or temporal information adequately to improve the results. 
In order to solve this problem, we propose a new model called location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model which leverages weighted approximately ranked pairwise (WARP) loss for achieving better top-n ranking results, just as the new successive POI recommendation task needs. We conducted some comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Successive POI Recommendation <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. 
Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms the state-of-the-art successive POI recommendation models by about 20% in Precision@5 and Recall@5. <s> BIB004
Successive POI recommendation, a natural extension of general POI recommendation, has been proposed recently and has attracted great research interest BIB001 BIB002 BIB003 BIB004 . Different from general POI recommendation, which focuses only on estimating users' preferences on POIs, successive POI recommendation provides satisfactory recommendations promptly based on a user's most recent check-in location, which requires not only modeling user preferences but also accurately analyzing the correlations between POIs. In the following, we report a recent representative model for this task. In BIB004 , Zhao et al. propose the STELLAR system, which aims to provide time-aware successive POI recommendations. The system ranks the POIs via a score function f : U × L × T × L → R, which maps a four-tuple to a real value. Here, U, L, and T denote the set of users, the set of POIs, and the set of time ids, respectively. The score function f(u, l_q, t, l_c), which represents the "successive check-in possibility", is defined for user u over a candidate POI l_c at time stamp t, given the user's last check-in POI l_q as the query.
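A toy sketch of such a four-tuple score function is given below, scoring the candidate POI through its pairwise interactions with the user, the time slot, and the query POI. For brevity it uses a single latent vector per entity; STELLAR itself learns separate latent representations for the user-POI, POI-time, and POI-POI interactions:

```python
import numpy as np

def stellar_style_score(u_vec, t_vec, lq_vec, lc_vec):
    # f(u, l_q, t, l_c) as a sum of pairwise latent interactions:
    # user-candidate, time-candidate, and query POI-candidate.
    return u_vec @ lc_vec + t_vec @ lc_vec + lq_vec @ lc_vec
```

Ranking all candidate POIs by this score and taking the top N yields the successive recommendation list for the given (user, query POI, time) context.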
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not effected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. 
To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user’s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user’s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user’s check-in behavior. 
In particular, our model captures the property of user’s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user’s check-in behavior. The experimental results on a real world LBSN demonstrate that our approach properly models user’s checkins and shows how social and historical ties can help location prediction. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. 
In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations. <s> BIB005
Gowalla, Brightkite, and Foursquare are famous benchmark datasets available for evaluating a POI recommendation model. In this subsection, we briefly introduce these datasets and describe their statistics, shown in Table 2 .

Table 2. Statistics of the benchmark datasets.
Brightkite BIB001 : 4,491,143 check-ins from 58,228 users
Gowalla 1 BIB001 : 6,442,890 check-ins from 196,591 users
Gowalla 2 BIB002 : 4,128,714 check-ins from 53,944 users
Foursquare 1 BIB003 : 2,073,740 check-ins from 18,107 users
Foursquare 2 BIB004 : 1,385,223 check-ins from 11,326 users
Foursquare 3 BIB005 : 325,606 check-ins from 80,606 users
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Metrics <s> We address the problems of 1/ assessing the confidence of the standard point estimates, precision, recall and F-score, and 2/ comparing the results, in terms of precision, recall and F-score, obtained using two different methods. To do so, we use a probabilistic setting which allows us to obtain posterior distributions on these performance indicators, rather than point estimates. This framework is applied to the case where different methods are run on different datasets from the same source, as well as the standard situation where competing results are obtained on the same data. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Metrics <s> Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a curve dominates in ROC space if and only if it dominates in PR space. A corollary is the notion of an achievable PR curve, which has properties much like the convex hull in ROC space; we show an efficient algorithm for computing this curve. Finally, we also note differences in the two types of curves are significant for algorithm design. For example, in PR space it is incorrect to linearly interpolate between points. Furthermore, algorithms that optimize the area under the ROC curve are not guaranteed to optimize the area under the PR curve. 
<s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Metrics <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exist strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of the FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in the tradeoff between recommendation effectiveness and computational overhead. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Metrics <s> The problem of point of interest (POI) recommendation is to provide personalized recommendations of places of interest, such as restaurants, for mobile users. Due to its complexity and its connection to location-based social networks (LBSNs), the decision process of a user choosing a POI is complex and can be influenced by various factors, such as user preferences, geographical influences, and user mobility behaviors. While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors.
To this end, in this paper, we propose a novel geographical probabilistic factor analysis framework which strategically takes various factors into consideration. Specifically, this framework allows us to capture the geographical influences on a user's check-in behavior. Also, the user mobility behaviors can be effectively exploited in the recommendation model. Moreover, the recommendation model can effectively make use of user check-in count data as implicit user feedback for modeling user preferences. Finally, experimental results on real-world LBSNs data show that the proposed recommendation method outperforms state-of-the-art latent factor models by a significant margin. <s> BIB004
Most POI recommendation systems utilize the metrics of precision and recall, which are two general metrics for evaluating model performance in information retrieval BIB002 BIB001 . To see the balance between precision and recall, the F-score is also introduced in some work. Since the absolute precision and recall values are low for POI recommendation, some studies BIB004 BIB003 introduce a relative metric, which measures the model's performance relative to random selection. The precision and recall in a top-N recommendation system are denoted as P@N and R@N, respectively. P@N measures the ratio of recovered POIs to the N recommended POIs, and R@N measures the ratio of recovered POIs to the set of POIs in the testing data. For each user u ∈ U, let L_u^T denote the set of correspondingly visited POIs in the test data, and L_u^R denote the set of recommended POIs. Then, P@N and R@N are formulated as P@N = (1/|U|) Σ_{u ∈ U} |L_u^T ∩ L_u^R| / N and R@N = (1/|U|) Σ_{u ∈ U} |L_u^T ∩ L_u^R| / |L_u^T|. Further, the F-score is the harmonic mean of precision and recall, defined as F = (2 · P@N · R@N) / (P@N + R@N). In order to better compare the results, a relative metric is introduced. The relative precision@N and recall@N are denoted as r-P@N and r-R@N, respectively. Let L_u^C denote the candidate POIs for each user u, namely the POIs the user has not checked in; then the expected precision and recall of a random recommendation system are |L_u^T| / |L_u^C| and N / |L_u^C|, respectively. Then, the relative precision@N and recall@N are defined as r-P@N = (|L_u^C| / |L_u^T|) · P@N and r-R@N = (|L_u^C| / N) · R@N.
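The P@N, R@N, and F-score computations can be sketched as follows, averaging the per-user precision and recall over all users with test check-ins:

```python
def precision_recall_at_n(recommended, ground_truth, n):
    # P@N and R@N averaged over users.
    # recommended: {user: ranked POI list}
    # ground_truth: {user: set of POIs visited in the test data}
    p = r = 0.0
    users = 0
    for u, truth in ground_truth.items():
        if not truth:
            continue
        hits = len(set(recommended.get(u, [])[:n]) & truth)
        p += hits / n
        r += hits / len(truth)
        users += 1
    return p / users, r / users

def f_score(p, r):
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r) if (p + r) else 0.0
```

For instance, recommending five POIs of which two appear among a user's three test POIs gives P@5 = 2/5 and R@5 = 2/3 for that user.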
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> In ranking with the pairwise classification approach, the loss associated to a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval where we prefer ranked lists with high precision on the top of the list. We propose to optimize a larger class of loss functions for ranking, based on an ordered weighted average (OWA) (Yager, 1988) of the classification losses. Convex OWA aggregation operators range from the max to the mean depending on their weights, and can be used to focus on the top ranked elements as they give more weight to the largest losses. 
When aggregating hinge losses, the optimization problem is similar to the SVM for interdependent output spaces. Moreover, we show that OWA aggregates of margin-based classification losses have good generalization properties. Experiments on the Letor 3.0 benchmark dataset for information retrieval validate our approach. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced "sibling" precision metric, where our method also obtains excellent results. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> This tutorial is concerned with a comprehensive introduction to the research area of learning to rank for information retrieval. 
In the first part of the tutorial, we will introduce three major approaches to learning to rank, i.e., the pointwise, pairwise, and listwise approaches, analyze the relationship between the loss functions used in these approaches and the widely-used IR evaluation measures, evaluate the performance of these approaches on the LETOR benchmark datasets, and demonstrate how to use these approaches to solve real ranking applications. In the second part of the tutorial, we will discuss some advanced topics regarding learning to rank, such as relational ranking, diverse ranking, semi-supervised ranking, transfer ranking, query-dependent ranking, and training data preprocessing. In the third part, we will briefly mention the recent advances on statistical learning theory for ranking, which explain the generalization ability and statistical consistency of different ranking methods. In the last part, we will conclude the tutorial and show several future research directions. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. 
In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. 
Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Retrieval tasks typically require a ranking of items given a query. Collaborative filtering tasks, on the other hand, learn to model user's preferences over items. In this paper we study the joint problem of recommending items to a user with respect to a given query, which is a surprisingly common task. This setup differs from the standard collaborative filtering one in that we are given a query x user x item tensor for training instead of the more traditional user x item matrix. Compared to document retrieval we do have a query, but we may or may not have content features (we will consider both cases) and we can also take account of the user's profile. We introduce a factorized model for this new task that optimizes the top-ranked items returned for the given query and user. We report empirical results where it outperforms several baselines. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. 
Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Personalized recommendation systems are used in a wide variety of applications such as electronic commerce, social networks, web search, and more. Collaborative filtering approaches to recommendation systems typically assume that the rating matrix (e.g., movie ratings by viewers) is low-rank. In this paper, we examine an alternative approach in which the rating matrix is locally low-rank. Concretely, we assume that the rating matrix is low-rank within certain neighborhoods of the metric space defined by (user, item) pairs. We combine a recent approach for local low-rank approximation based on the Frobenius norm with a general empirical risk minimization for ranking losses. Our experiments indicate that the combination of a mixture of local low-rank matrices each of which was trained to minimize a ranking loss outperforms many of the currently used state-of-the-art recommendation systems. Moreover, our method is easy to parallelize, making it a viable approach for large scale real-world rank-based recommendation systems. 
<s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> The rapid growth of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly.
In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> The rapid urban expansion has greatly extended the physical boundary of users' living area and developed a large number of POIs (points of interest). POI recommendation is a task that facilitates users' urban exploration and helps them filter uninteresting POIs for decision making. While existing work of POI recommendation on location-based social networks (LBSNs) discovers the spatial, temporal, and social patterns of user check-in behavior, the use of content information has not been systematically studied. The various types of content information available on LBSNs could be related to different aspects of a user's check-in action, providing a unique opportunity for POI recommendation. In this work, we study the content information on LB-SNs w.r.t. POI properties, user interests, and sentiment indications. We model the three types of information under a unified POI recommendation framework with the consideration of their relationship to check-in actions. The experimental results exhibit the significance of content information in explaining user behavior, and demonstrate its power to improve POI recommendation performance on LBSNs. 
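The rank-aware weighting at the heart of WARP-style objectives such as the one used by Rank-GeoFM can be illustrated with a minimal sketch. This is not code from the cited papers: the scores, the margin of 1.0, and the function names are hypothetical, while the weight Φ(r) = Σ_{i=1..r} 1/i is the standard WARP choice that penalizes positives ranked far down the list.

```python
def rank_weight(r):
    # WARP-style weight Phi(r) = sum_{i=1..r} 1/i: the lower a positive
    # item is ranked, the larger the loss weight it receives
    return sum(1.0 / i for i in range(1, r + 1))

def estimate_rank(score_pos, neg_scores, margin=1.0):
    # count margin violations among sampled negatives; the count serves
    # as a sampled estimate of the positive item's rank
    return sum(1 for s in neg_scores if s + margin > score_pos)

# toy example: one positive POI scored 2.0 against four negatives
pos_score = 2.0
neg_scores = [0.5, 1.5, 2.2, 0.1]
r = estimate_rank(pos_score, neg_scores)
print(r, rank_weight(r))  # → 2 1.5
```

In a full model, the rank estimate would come from sampling negatives until a violation is found, and the resulting weight would scale the pairwise gradient update.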
<s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different times. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation models by about 20% in Precision@5 and Recall@5. <s> BIB013
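The pairwise tensor interactions described in the STELLAR abstract above can be sketched roughly as follows. This is a loose reconstruction from the abstract, not the authors' code: the latent dimensionality, the random toy factors, and the exact decomposition into user-POI, POI-time, and previous-POI-POI terms are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                       # latent dimension (assumed)
n_users, n_pois, n_times = 5, 10, 4

# separate latent blocks for each pairwise interaction (user-POI,
# POI-time, POI-POI), mirroring the fine-grained modeling in STELLAR
U  = rng.normal(size=(n_users, D))
L1 = rng.normal(size=(n_pois, D))   # POI factors vs. user
T  = rng.normal(size=(n_times, D))
L2 = rng.normal(size=(n_pois, D))   # POI factors vs. time
P  = rng.normal(size=(n_pois, D))   # previous-POI factors
L3 = rng.normal(size=(n_pois, D))   # POI factors vs. previous POI

def score(u, t, prev_poi, poi):
    # sum of the three pairwise interactions
    return U[u] @ L1[poi] + T[t] @ L2[poi] + P[prev_poi] @ L3[poi]

# rank all candidate POIs for user 0 at time slot 2, after visiting POI 3
scores = [score(0, 2, 3, poi) for poi in range(n_pois)]
print(int(np.argmax(scores)))
```

In the actual model these factors would be learned with a BPR-style pairwise ranking loss over observed successive check-ins rather than drawn at random.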
Several ranking-based models BIB010 BIB011 BIB013 have recently been proposed for POI recommendation. Most previous methods attempt to estimate the user's check-in probability over POIs BIB005 BIB008 BIB012 . However, for the POI recommendation task we do not really care about the predicted check-in probability value, but about the preference order. Prior work has shown that it is better for a recommendation task to learn the order rather than the real value BIB006 BIB009 BIB002 BIB003 BIB007 . The Bayesian personalized ranking (BPR) loss BIB006 and the weighted approximate-rank pairwise (WARP) loss BIB002 BIB003 are two popular criteria for learning the ranking order. The authors of BIB010 BIB013 leverage the BPR loss to learn their models, and Li et al. BIB011 use the WARP loss. Existing work on ranking-based models has demonstrated their advantage in performance. Learning to rank, an important technique in information retrieval BIB001 BIB004 , may therefore be applied more widely to POI recommendation to improve performance in the future.
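As a concrete illustration of the BPR criterion discussed above, the following minimal sketch optimizes a matrix factorization model with the BPR pairwise objective using bootstrap-sampled SGD, in the spirit of BIB006 . The toy check-in data, hyper-parameters, and variable names are illustrative assumptions, not values from any cited paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_pois, D = 20, 50, 8
lr, reg = 0.05, 0.01

# toy implicit feedback: each user has a handful of visited POIs
visited = {u: set(int(p) for p in rng.choice(n_pois, size=5, replace=False))
           for u in range(n_users)}
U = rng.normal(scale=0.1, size=(n_users, D))
V = rng.normal(scale=0.1, size=(n_pois, D))

def bpr_step(u, i, j):
    # one SGD ascent step on ln sigma(x_ui - x_uj) with L2 regularization
    x = U[u] @ (V[i] - V[j])
    g = 1.0 / (1.0 + np.exp(x))      # sigma(-x), the gradient scale
    du = g * (V[i] - V[j]) - reg * U[u]
    di = g * U[u] - reg * V[i]
    dj = -g * U[u] - reg * V[j]
    U[u] += lr * du
    V[i] += lr * di
    V[j] += lr * dj

for _ in range(20000):
    u = int(rng.integers(n_users))
    i = int(rng.choice(list(visited[u])))      # positive (visited) POI
    j = int(rng.integers(n_pois))
    while j in visited[u]:                     # sampled negative POI
        j = int(rng.integers(n_pois))
    bpr_step(u, i, j)

# visited POIs should now tend to outscore unvisited ones
pos = np.mean([U[0] @ V[i] for i in visited[0]])
neg = np.mean([U[0] @ V[j] for j in range(n_pois) if j not in visited[0]])
print(bool(pos > neg))
```

Each step samples a (user, visited POI, unvisited POI) triple and widens the score gap between the positive and negative items, which is exactly the order-learning behavior the survey text contrasts with pointwise probability estimation.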
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Errors reported in 1946 by aircraft pilots using pulsed radar altimeters over Antarctic ice, coupled with results of radio-wave propagation studies in both polar areas (1946-1955), led to measurements of the electrical characteristics of thick ice at high and ultra-high frequencies. These measurements produced information relative to dielectric constants, loss factors, scattering, and interface reflection data that subsequently permitted successful radio-wave penetration measurements in continental ice to depths of several hundred feet in both the Antarctic and the Arctic (1958-1960). Results indicated clearly that low-flying pilots relying on pulsed 440-Mc altimeters in poor visibility over thick ice can be fatally misled by errors inherent in these instruments. The paper presents recent data obtained by the Signal Corps pertinent to radio-wave transparency of thick ice and snow. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Radio interferometry is a technique for measuring in-situ electrical properties and for detecting subsurface changes in electrical properties of geologic regions with very low electrical conductivity. Ice-covered terrestrial regions and the lunar surface are typical environments where this method can be applied. The field strengths about a transmitting antenna placed on the surface of such an environment exhibit interference maxima and minima which are characteristic of the subsurface electrical properties. This paper (Part I) examines the theoretical wave nature of the electromagnetic fields about various types of dipole sources placed on the surface of a low-loss dielectric half-space and two-layer earth. Approximate expressions for the fields have been found using both normal mode analysis and the saddle-point method of integration.
The solutions yield a number of important results for the radio interferometry depth‐sounding method. The half‐space solutions show that the interface modifies the directio... <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> In dry materials, physical factors control all electrical properties. The addition of a polar liquid solvent such as water or alcohol adds a host of solvent‐rock chemical interactions. These chemical interactions range from oxidation‐reduction corrosion, cation exchange, and clay‐organic processes at frequencies below 1 Hz to diffusion‐limited relaxation around colloidal particles at frequencies up to 100 MHz. Most mixing formulas are based upon physical mixing of noninteracting materials, and they fail when chemical processes appear. If the specific chemical processes are identifiable, combined physical and chemical mixing formulas must be used. The simplest systems to model are noninteracting physical mixtures of solvents with pure silica sand. The most complicated systems are mixtures of solvents with chemically surface‐reactive materials like clays and zeolites. <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract Productive interpretations of ground penetrating radar surveys require an accurate understanding of electromagnetic wave radiation, propagation, and scattering in geological materials as well as accurate knowledge of the reflection characteristics of various target anomalies embedded in such materials. GPR responses and survey profiles are often interpreted on the basis of theoretical estimates and numerical simulation models of electromagnetic wave propagation in simplified representations of ground materials and by using idealized target contrasts and geometries. 
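A classic example of the purely physical mixing formulas discussed above is the complex refractive index method (CRIM), in which the square root of the mixture permittivity is the volume-weighted average of the component square roots. The volume fractions and permittivity values in this sketch are illustrative assumptions, not data from the cited work, and the formula deliberately ignores the solvent-rock chemical interactions the abstract warns about.

```python
import math

def crim_permittivity(fractions_eps):
    # CRIM: sqrt(eps_mix) = sum_i f_i * sqrt(eps_i), a purely physical
    # mixing law for non-interacting components
    s = sum(f * math.sqrt(eps) for f, eps in fractions_eps)
    return s ** 2

# moist sand: solid matrix (eps ~ 4.7), water (eps ~ 81), air (eps = 1)
mix = [(0.60, 4.7), (0.15, 81.0), (0.25, 1.0)]
eps = crim_permittivity(mix)
v = 0.3 / math.sqrt(eps)     # wave speed in m/ns, with c = 0.3 m/ns
print(round(eps, 2), round(v, 3))  # → 8.41 0.103
```

Even a modest 15% water fraction roughly doubles the bulk permittivity of dry sand, which is why GPR velocity is so sensitive to moisture.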
Alternatively, field experiments performed under controlled test conditions can also be effective in demonstrating GPR system performance capabilities and in providing quantitative measurements in realistic geologic formations. Experimental research at the University of Rome “La Sapienza” and at the Italian National Research Council was initiated to develop a basic understanding of the radiation and scattering characteristics of VHF pulse-mode GPR signals in earth materials and in air, with emphasis on antenna ground coupling and target backscatter responses. The results of the experimental measurements conducted in air provided baseline information on the GPR system and target reflections under lossless propagation conditions. Target response measurements at various burial depths provided a systematic database from which target responses, propagation parameters of the medium, and relevant data processing techniques were evaluated to gain useful insights into their interpretation. Other more advanced experimental tests are planned for the future. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Existing 1D and 2D models are used to simulate ground penetrating radar (GPR) field surveys conducted in a stratified limestone terrain. The 1D model gave good agreement in a simple layered section, accounting for multiple reflections, velocity variations and attenuation. The 2D F-K model used gave a good representation of the patterns observed due to edge diffraction from a fracture in limestone, although the model could not account for the attenuation caused by irregular blocks filling the fracture. <s> BIB005 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Generally, in order to detect shallow archaeological features, such as tombs, cavities, walls, etc., ground penetrating radar (GPR) data are acquired along parallel profiles.
In some cases, the data collected using the GPR method are difficult to interpret owing to a low signal-to-noise (S/N) ratio. These signals can be generated by several factors that significantly influence the radar profiles. To enhance the interpretation of radar sections, three-dimensional data acquisition, radar signal processing and time-slice representation are used. The archaeological site investigated as a test case was the Sabine Necropolis (700–300 BC) at Colle del Forno (Montelibretti, Roma), believed to contain unexplored underground dromos chamber tombs. The measurements were carried out along parallel profiles in a test area, using a SIR System 10 (GSSI) equipped with different antennas operating at 100, 300 and 500 MHz. The spatial interval used during the survey was 20 cm. To enhance the S/N ratio, a band-pass filter and subtraction of an average trace were applied to the field data; furthermore, the two-dimensional migration technique was applied to all collected profiles in order to remove diffraction effects. A time-slice representation technique was adopted to obtain a planimetric correlation between anomalous bodies at different depths. The results indicate that the three-dimensional data acquisition, processing and the time-slice representation can help determine the location, depth and shapes of buried features. <s> BIB006 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Subsurface georadar is a high-resolution technique based on the propagation of high-frequency radio waves. Modeling radio waves in a realistic medium requires the simulation of the complete wavefield and the correct description of the petrophysical properties, such as conductivity and dielectric relaxation. Here, the theory is developed for 2-D transverse magnetic (TM) waves, with a different relaxation function associated with each principal permittivity and conductivity component.
In this way, the wave characteristics (e.g., wavefront and attenuation) are anisotropic and have a general frequency dependence. These characteristics are investigated through a plane-wave analysis that gives the expressions of measurable quantities such as the quality factor and the energy velocity. The numerical solution for arbitrary heterogeneous media is obtained by a grid method that uses a time-splitting algorithm to circumvent the stiffness of the differential equations. The modeling correctly reproduces the amplitude and the wavefront shape predicted by the plane-wave analysis for homogeneous media, confirming, in this way, both the theoretical analysis and the numerical algorithm. Finally, the modeling is applied to the evaluation of the electromagnetic response of contaminant pools in a sand aquifer. The results indicate the degree of resolution (radar frequency) necessary to identify the pools and the differences between the anisotropic and isotropic radargrams versus the source-receiver distance. <s> BIB007 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract The ground-penetrating radar (GPR) is a good candidate for the exploration of the Martian subsurface because it is smaller and lighter than seismic instruments, and, due to the lack of water in the Martian rocks, has great penetration capability. The modelling of the GPR signal response has been performed by computing the dielectric properties of each simulated layer as a linear function of porosity, known values of the solids, and the nature of the material filling the voids (water ice, carbon dioxide ice, gas, liquid water). The synthetic response was computed by reflecting ray-tracing at various peak frequencies. The complex results show that reflections are due to variations in mineralogy, porosity and pore-filling material.
The reflectors produced by the reflection of the electromagnetic waves provide a picture of the geometries of the subsurface layers and give clues on the nature of the rocks. Permafrost and liquid water can be investigated; in particular, their seasonal changes can be analysed by means of repeated profiles. The use of the GPR would be a major breakthrough in the reconstruction of the past geological history of the planet. <s> BIB008 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> In monostatic ground penetrating radar (GPR) the interface profile can be estimated from echo amplitudes and times of delay (TOD) using a layer stripping inversion algorithm. The authors' aim is to establish a reliable processing sequence for layer stripping inversion by estimating echo TODs in a way that takes into account the layers' lateral continuity, and by tracking the corresponding interfaces. The authors first propose an algorithm for multitarget tracking, and then they describe the application of detection/tracking to 1 ns pulse monostatic GPR. The system is used to estimate the layer thicknesses of asphalt and concrete in pavement profiling. Detection/tracking shows a better capability of recognizing the lateral continuity of near-surface interfaces with respect to algorithms that employ only local detection of echoes. <s> BIB009 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> A 2.5-D and 3-D multi-fold GPR survey was carried out in the Archaeological Park of Aquileia (northern Italy). The primary objective of the study was the identification of targets of potential archaeological interest in an area designated by local archaeological authorities. The second geophysical objective was to test 2-D and 3-D multi-fold methods and to study localised targets of unknown shape and dimensions in hostile soil conditions.
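The core arithmetic behind layer-stripping pavement profiling, converting echo times of delay into layer thicknesses via v = c/√εr, can be sketched as follows. The delays and relative permittivities are illustrative values, not measurements from the cited study, and the function names are hypothetical.

```python
import math

C = 0.3  # free-space speed of light in m/ns

def layer_thicknesses(delays_ns, rel_permittivities):
    # Layer stripping: each echo's two-way delay, measured relative to the
    # previous interface, is converted to thickness using that layer's
    # velocity v = C / sqrt(eps_r)
    thicknesses = []
    prev = 0.0
    for t, eps in zip(delays_ns, rel_permittivities):
        v = C / math.sqrt(eps)
        thicknesses.append(v * (t - prev) / 2.0)  # divide by 2: two-way time
        prev = t
    return thicknesses

# asphalt (eps ~ 6) over concrete (eps ~ 9), echoes at 2.0 ns and 6.0 ns
print([round(h, 3) for h in layer_thicknesses([2.0, 6.0], [6.0, 9.0])])
# → [0.122, 0.2]
```

The tracking step in the cited work improves exactly the inputs of this computation: more reliable TOD picks along laterally continuous interfaces yield more stable thickness profiles.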
Several portions of the acquisition grid were processed in common offset (CO), common shot (CSG) and common mid point (CMP) geometry. An 8×8 m area was studied with orthogonal CMPs thus achieving a 3-D subsurface coverage with azimuthal range limited to two normal components. Coherent noise components were identified in the pre-stack domain and removed by means of FK filtering of CMP records. Stack velocities were obtained from conventional velocity analysis and azimuthal velocity analysis of 3-D pre-stack gathers. Two major discontinuities were identified in the area of study. The deeper one most probably coincides with the paleosol at the base of the layer associated with activities of man in the area in the last 2500 years. This interpretation is in agreement with the results obtained from nearby cores and excavations. The shallow discontinuity is observed in a part of the investigated area and it shows local interruptions with a linear distribution on the grid. Such interruptions may correspond to buried targets of archaeological interest. The prominent enhancement of the subsurface images obtained by means of multi-fold techniques, compared with the relatively poor quality of the conventional single-fold georadar sections, indicates that multi-fold methods are well suited for the application to high resolution studies in archaeology. <s> BIB010 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Ground penetrating radar (GPR) is a relatively new geophysical technique. The last decade has seen major advances and there is an overall sense of the technology reaching a level of maturity. The history of GPR is intertwined with the diverse applications of the technique. GPR has the most extensive set of applications of any geophysical technique. As a result, the spatial scales of applications and the diversity of instrument configurations are extensive. 
Both the value and the limitations of the method are better understood in the global user community. The goal of this paper is to provide a brief history of the method, a discussion of current trends and give a sense of future developments. <s> BIB011 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Reconstruction of shallow stratigraphy of unconsolidated sediments is a topic of primary interest in several environmental, hydrological, geotechnical, and engineering applications. The identification of porous layers and the assessment of their saturation, the characterization of sediments, the identification of bedrock and the analysis of shallow layering are some examples of topics of primary interest in near-surface applications. Recent ground-penetrating radar (GPR) research demonstrates the excellent results that can be attained in the study of shallow stratigraphy. Complex stratigraphic structures, involving cross-stratification, conflicting dips, and rapid lateral and vertical particle-size variations pose a challenge to the application of single-fold (constant offset) GPR methods. The objectives of the present work are imaging and resolution enhancement of GPR multifold records from shallow, unconsolidated sediments. The study is based, in particular, on prestack processing and imaging of data from alluvial plain sites in northern Italy, which are characterized by different stratigraphic and sedimentological conditions. Figure 1 shows the location map of the survey. We show the results obtained on a fluvial terrace of the Isonzo River that are characterized by a complete alluvial sequence including a range of sediments (gravel to clayey loam) and a range of stratigraphic structures (depositional and erosional).
The water table and vadose zone are in the GPR and resistivity depth range and affect the response of the geophysical techniques, particularly the lateral and vertical resistivity and GPR velocity variations. Figure 1. Map and aerial picture of the study area. The red rectangle shows the location of the 20 × 12 m study area. The site is close to the riverbank, where the different stratigraphic units identified by the geophysical survey were sampled. A Mala Geoscience GPR system was equipped with shielded 250-MHz antennae for the study. Single-fold methods were used in reconnaissance surveys at all test sites. We successively performed … <s> BIB012 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Road pavement performances are of great importance for driving comfort and safety. Monitoring and rehabilitation activities are always extremely strategic and crucial. The points of strength of advanced non-destructive techniques for road pavement monitoring essentially are: (1) reliability, (2) significance in the space domain, (3) efficiency and (4) quickness. One of the most relevant and widely used technologies is the Ground Penetrating Radar. In the field of pavement analysis, its most frequent applications are the evaluation of layer thicknesses and the detection of voids. Recent experimental results also highlight the capability of radar to identify the causes of road damage. Empirical relationships between the physical and mechanical characteristics of the materials and electromagnetic parameters have been observed and established, and analytical functions were proposed. The most promising and interesting evidence regards the prediction of water content. It is crucially important because water intrusion in the sub-grade is one of the most important causes of loss of mechanical properties.
The empirical relationships have shown a conservative and comparable trend for different materials, status conditions and radar frequencies, but variable amplitudes. General mathematical laws could be very useful to analyze the radar scans correctly and in a more comprehensive framework. A stochastically based correction of the semi-empirical approach is proposed here to correlate the geophysical characteristics of the pavement's materials (sub-grade) to the parameters of the empirical model. The average grain dimension, grading, the specific surface area of the grains (which is related to the hygroscopic potential) and the dielectric characteristics of the dry material are primarily taken into consideration. The impact of this geophysical and stochastical model on non-destructive measurements and on pavement management is high and it is discussed here. <s> BIB013 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> We study the impedance parameters and the energy transmitted and received by a pair of antennas working in a nonhomogeneous background. The focus is on stepped frequency ground penetrating radar (SF-GPR) prospecting. In particular, we propose a reconfiguration of the GPR system versus frequency that accounts for the background scenario, and we show that the reconfiguration can improve the frequency behavior of the antennas significantly. Tests performed on two bow-tie antennas will be shown. <s> BIB014 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Design of a Balun for a Bow-Tie Antenna in Reconfigurable Ground Penetrating Radar Systems. R. Persico, Istituto per i Beni Archeologici e Monumentali, Consiglio Nazionale delle Ricerche, Via Monteroni, Campus Universitario, Lecce 73100, Italy; N. Romano, Dipartimento di Ingegneria dell'Informazione, Seconda Università degli Studi di Napoli, Via Roma 29, Aversa 81031, Italy; F. Soldovieri, Istituto per il Rilevamento Elettromagnetico dell'Ambiente, Consiglio Nazionale delle Ricerche, Via Diocleziano 328, Napoli 80124, Italy. Abstract: This paper deals with the design of a reconfigurable antenna that resembles a monolithic UWB bow-tie antenna for Ground Penetrating Radar (GPR) applications. In particular, the attention is focussed on the design of the balun system able to work in the frequency band 0.3–1 GHz; the effectiveness of the design is shown by examining the behaviour of the scattering parameters <s> BIB015 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract We analyze the dispersive characteristics of the electromagnetic guided waves (at georadar frequency) to infer the electrical properties of materials that constitute a layered waveguide. WARR (Wide Angle Reflection and Refraction) georadar acquisitions could be carried out in TE (Transverse Electric) configuration to collect the full wavefield at different offsets from the source. The dispersive curves of TE modes are obtained by transforming the space-time acquisition into the frequency-wavenumber domain (f-k spectrum); the relative maxima in the f-k spectrum for each frequency represent the different propagation modes. We adopt both global and local inversion algorithms for minimizing the misfit function between computed and theoretical curves in order to obtain a 1D model of the layered subsoil (thicknesses and electrical permittivity). We perform a multimodal and multilayer inversion of the dispersive events. The results of two field cases will be discussed; the first one refers to the propagation in a confined waveguide (layered subsoil) and the other in a leaky waveguide (snow cover on a glacier).
<s> BIB016 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> The Romano-British site of Barcombe in East Sussex, England, has suffered heavy postdepositional attrition through reuse of the building materials and the effects of ploughing. A detailed GPR survey of the site was carried out in 2001, with results, achieved by standard radar data processing, published in 2002. The current paper reexamines the GPR data using a microwave tomography approach, based on a linear inverse scattering model, and a 3D visualization that improves the definition of the villa plan and allows the possibility of detecting earlier prehistoric remains to be reexamined. <s> BIB017 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract Corrosion associated with reinforcing bars is the most significant contributor to bridge deficiencies. The corrosion is usually caused by moisture and chloride ion exposure. The reinforcing bars are attacked by corrosion and yield expansive corrosion products. These oxidation products occupy a larger volume than the original intact steel, and internal expansive stresses lead to cracking and debonding. There are some conventional inspection methods for the detection of reinforcing bar corrosion, but they can be invasive and destructive, often laborious, require lane closure, and are difficult or unreliable for any quantification of corrosion. For these reasons, bridge engineers increasingly prefer to use the ground penetrating radar (GPR) technique. In this work a novel numerical approach for three-dimensional tracking and mapping of cracks in bridges is proposed. The work starts from some interesting results based on the use of the 3D imaging technique in order to improve the potentiality of the GPR to detect voids, cracks or buried objects.
The numerical approach has been tested on data acquired on a bridge by using a pulse GPR system specifically designed for bridge deck and pavement inspection. The equipment integrates two arrays of Ultra Wide Band ground-coupled antennas, having a main working frequency of 2 GHz. The two arrays use antennas arranged with different polarizations. The cracks, often associated with moisture increase and higher values of the dielectric constant, produce a non-negligible increase of the signal amplitude. Following this, the algorithm, organized in preprocessing, processing and postprocessing stages, analyzes the signal by comparing the value of the amplitude all over the domain of the radar scan. <s> BIB018
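The amplitude-comparison stage described above can be illustrated with a toy sketch. The thresholding rule, the synthetic B-scan, and all numeric values below are hypothetical stand-ins for illustration, not the published algorithm:

```python
import random
import statistics

def amplitude_anomalies(scan, k=3.0):
    """Flag samples whose absolute amplitude exceeds the scan-wide mean
    by more than k standard deviations (a simplified stand-in for the
    amplitude-comparison stage, not the published algorithm)."""
    env = [abs(s) for trace in scan for s in trace]
    mu = statistics.mean(env)
    sigma = statistics.pstdev(env)
    thr = mu + k * sigma
    return [[abs(s) > thr for s in trace] for trace in scan]

# Synthetic B-scan: Gaussian background noise plus one strong,
# localized reflection (e.g. a moisture-related amplitude rise).
random.seed(0)
scan = [[random.gauss(0.0, 0.1) for _ in range(256)] for _ in range(64)]
for i in range(30, 33):
    for j in range(100, 105):
        scan[i][j] += 2.0

mask = amplitude_anomalies(scan)
print(sum(map(sum, mask)))  # number of flagged samples
```

A real processing chain would operate trace-by-trace and include the pre- and postprocessing stages mentioned above; the global threshold here only conveys the idea of locating amplitude anomalies across the scan domain.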
MANY efforts from several scientific disciplines have been devoted over the years to identifying an effective technique capable of interpreting the hidden response of the ground reliably, and different methods of inspection have been developed accordingly. No single answer to this issue has prevailed, since a fair number of techniques have proved overall suited for this purpose. In this framework, ground-penetrating radar (GPR) is nowadays considered one of the most powerful geophysical nondestructive tools, and has gained considerable interest among scientists and engineers thanks to the wide range of expertise and applications that can be covered. GPR is intrinsically a technology oriented toward applications, whose structure and electronics are relatively variable according to the target characteristics. Basically, structures and changes in material properties can be detected by GPR through the use of electromagnetic (EM) fields, which penetrate lossy dielectric materials to a depth limited by absorption. The method is based on the scattering and/or reflection of EM waves at changes in impedance BIB011 . The recognition of the signal is relatively easy, as the return signal is shaped very similarly to the emitted signal. The depth, shape, and EM scattering properties of the reflecting object affect the time delay, as well as the differences in phase, frequency, and amplitude. Going through the history of GPR technology and its use worldwide, one of the first applications can be traced back to the first half of the twentieth century and dealt with the use of radio wave propagation above and along the surface of the Earth BIB011 . The first documented application was later performed by El Said , who attempted to identify the water table depth in the Egyptian desert by knowing the distance between the receiver and the transmitter and measuring the time delay of the received signal.
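The time-to-depth conversion underlying such measurements can be sketched as follows; the permittivity and travel-time values are illustrative, and a low-loss medium is assumed:

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def wave_velocity(eps_r):
    """EM wave velocity (m/s) in a low-loss dielectric of relative permittivity eps_r."""
    return C / math.sqrt(eps_r)

def reflector_depth(two_way_time_ns, eps_r):
    """Depth (m) of a reflector from the measured two-way travel time (ns)."""
    return wave_velocity(eps_r) * (two_way_time_ns * 1e-9) / 2.0

# Illustrative values: dry sand (eps_r ~ 4), reflection arriving after 20 ns
print(round(reflector_depth(20.0, 4.0), 2))  # 1.5 (m)
```

The division by two accounts for the down-and-back travel path; in practice the permittivity itself must be estimated, e.g. from multi-offset measurements or calibration targets.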
Over time, this technology witnessed great development within several different fields of application spanning from demining BIB001 - to lunar explorations BIB002 - , and including glaciology , archaeology , geology , BIB003 , and, of course, civil engineering . This paper aims at reviewing the state of the art on the use of GPR in Italy, from the beginning up to the most recent applications. The Italian case is worth examining in depth as one comprehensive large-scale study case, wherein the complexity of territorial, naturalistic, historical, cultural, and socioeconomic features has effectively met the flexibility and high potential offered by the GPR technology. First, the heterogeneity of its territory offers direct applications for GPR in a large range of fields, including geology, seismology, hydraulics and glaciology. Besides, the highest number of cultural UNESCO World Heritage sites [17] has generated a high sensitivity toward heritage monitoring and the use of maintenance technologies, which has progressively grown. In addition, with the Italian road network being one of the densest worldwide in relation to the territory available , economic investments are increasingly being addressed toward effective maintenance and rehabilitation policies by means of highly efficient survey technologies. These main features, along with other specific Italian peculiarities that will be analyzed in this review, have acted both as an impulse for spreading this technology within the national market and overseas, and as a challenge for improving its performance through high-quality scientific contributions, making Italy one of the most active and fruitful countries in the field. From a strictly scientific perspective, the earliest Italian documented contribution falls slightly behind other countries such as the United States, Canada, and the United Kingdom (see Fig.
1), since the first works on GPR authored by Italian researchers were released in 1995, and dealt with, respectively, the analysis of signal propagation BIB004 and geological issues BIB005 . Besides, the first Italian GPR applications for archaeological purposes date back to the same period BIB006 , . In the following years, GPR-based research started to embrace more fields of application from different disciplines: geological investigations , the use of numerical simulation of the GPR signal for retrieving material responses BIB007 and analyzing the GPR applicability in planetary explorations BIB008 , together with the automatic detection of multilayered structures BIB009 , demonstrate the growing interest of the Italian research community in GPR technologies and methodologies, which is nowadays aligned with the highest research production standards worldwide. As shown by Fig. 2 , by 2015 Italy is among the first four countries publishing in the area of GPR. A great contribution by Italian research to the world GPR community came from the civil engineering area, wherein considerable efforts have been devoted to the use of GPR in transport infrastructures since the first years of the noughties , and all over the last decade BIB013 - BIB018 . Lastly, it is worth mentioning the Italian contribution to the enhancement of the processing techniques of the GPR signal BIB012 - BIB016 , as well as to the development of innovative and performing hardware configurations . As for the latter, it is worth noting some important innovations, such as the development of a reconfigurable GPR system BIB014 capable of modifying the EM parameters in real time to reach higher performance BIB015 , BIB017 , and the introduction of systems equipped with antenna arrays capable of performing multi-offset measurements in real time , BIB010 . The state of the art of GPR activities in Italy is discussed in Section II according to the field of application.
The selection process of the papers analyzed in this section has been made according to the following criteria: 1) the number of citations collected in relation to the year of publication, as retrieved from the most recognized international scientific citation indexing services, and 2) scientific relevance, intended as the contribution brought to the international scientific community in terms of developments and novelties introduced. As for this latter point, it is worth pointing out that it reflects the scientific judgment of the authors. Finally, Section III deals with conclusions and future perspectives on the applicability of GPR in Italy.
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Subsurface georadar is a high-resolution technique based on the propagation of high-frequency radio waves. Modeling radio waves in a realistic medium requires the simulation of the complete wavefield and the correct description of the petrophysical properties, such as conductivity and dielectric relaxation. Here, the theory is developed for 2-D transverse magnetic (TM) waves, with a different relaxation function associated to each principal permittivity and conductivity component. In this way, the wave characteristics (e.g., wavefront and attenuation) are anisotropic and have a general frequency dependence. These characteristics are investigated through a plane-wave analysis that gives the expressions of measurable quantities such as the quality factor and the energy velocity. The numerical solution for arbitrary heterogeneous media is obtained by a grid method that uses a time-splitting algorithm to circumvent the stiffness of the differential equations. The modeling correctly reproduces the amplitude and the wavefront shape predicted by the plane-wave analysis for homogeneous media, confirming, in this way, both the theoretical analysis and the numerical algorithm. Finally, the modeling is applied to the evaluation of the electromagnetic response of contaminant pools in a sand aquifer. The results indicate the degree of resolution (radar frequency) necessary to identify the pools and the differences between the anisotropic and isotropic radargrams versus the source-receiver distance. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract Feasibility and potential of tomography by Ground Penetrating Radar are investigated through experiments on laboratory models. 
The aim is the development of radar tomography procedures for inspection of structures like walls or pillars in historical buildings. Two different approaches are explored to satisfy high-resolution requirements. The first approach improves the results of classical traveltime (TT) and amplitude tomography (AT) on thin straight or curved rays through a progressive reduction of the null space of the problem. TT is a quantitative tool based on the thin ray assumption that allows a good tradeoff between robustness and resolution. AT is as robust as TT, but its results have only qualitative contents, since the energy transferred to the medium is basically unknown and the scattering effects are not taken into account. In the second approach, GPR is considered as a diffracting source, so that migration (MIG) and diffraction tomography (DT) are applied to overcome the geometrical optic approximations. While DT is in principle the best tool to invert the scattered field and to achieve the maximum resolution, MIG can be a more robust solution that requires less preprocessing of the data. All these advantages and drawbacks of the different approaches are discussed with some examples on synthetic and real data. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Ground penetrating radar (GPR) is a nondestructive measurement technique, which uses electromagnetic waves to locate targets or interfaces buried within a visually opaque substance or Earth material. GPR is also termed ground probing, surface penetrating (SPR), or subsurface radar. A GPR transmits a regular sequence of low-power packets of electromagnetic energy into the material or ground, and receives and detects the weak reflected signal from the buried target. The buried target can be a conductor, a dielectric, or combinations of both. 
There are now a number of commercially available systems, and the technique is gradually developing in scope and capability. GPR has also been used successfully to provide forensic information in the course of criminal investigations, detect buried mines, survey roads, detect utilities, measure geophysical strata, and in other applications. Keywords: ground penetrating radar; ground probing radar; surface penetrating radar; subsurface radar; electromagnetic waves <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract The radar technology used to investigate civil buildings derives from that used to investigate the ground, known as Georadar. It is spreading rapidly among the non-destructive investigation methodologies in the field of structural engineering. It is based on sending electromagnetic waves of very short wavelength and recording the time of arrival and the amplitude of any signals reflected at the interface between materials with different dielectric constants. The aim of this paper is to present the operating methodologies and the results achieved by the application of radar methodologies to map utilities, and for applications to civil buildings with special regard to the determination of the internal morphology, to the search for inhomogeneities and defects, and to the location of the steel reinforcements. Specifically, the system used, made up of one apparatus for field acquisition and another for delayed processing, seems to be able to provide good planimetric and three-dimensional restitution with regard to location and placement. In this paper, special attention has been paid to the processing of the acquired data and to the interpretation of experimental tests conducted on a civil building.
<s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> The identification of vadose zone flow parameters and solute travel time from the surface to the water table are key issues for the assessment of groundwater vulnerability. In this paper we use the results of time-lapse monitoring of the vadose zone in a UK consolidated sandstone aquifer using cross-hole zero-offset radar to assess and calibrate models of water flow in the vadose zone. The site under investigation is characterized by a layered structure, with permeable medium sandstone intercalated by finer, less permeable, laminated sandstone. Information on this structure is available from borehole geophysical (gamma-ray) logs. Monthly cross-hole radar monitoring was performed from August 1999 to February 2001, and shows small changes of moisture content over time and fairly large spatial variability with depth. One-dimensional Richards’ equation modeling of the infiltration process was performed under spatially heterogeneous, steady state conditions. Both layer structure and Richards’ equation parameters were simulated using a nested Monte Carlo approach, constrained via geostatistical analysis on the gamma-ray logs and on a priori information regarding the possible range of hydraulic parameters. The results of the Monte Carlo analysis show that, in order to match the radar-derived moisture content profiles, it is necessary to take into account the vertical scale of measurements, with an averaging window size of the order of the antenna length and the Fresnel zone width. Flow parameters cannot be uniquely identified, showing that the system is over parameterized with respect to the information content of the (nearly stationary) radar profiles. Estimates of travel time of water across the vadose zone are derived from the simulation results. 
<s> BIB005 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract Current approaches to the reconstruction of the geometry of fluvial sediments of Quaternary alluvial plains and the characterization of their internal architecture are strongly dependent on core data (1-D). Accurate 2-D and 3-D reconstructions and maps of the subsurface are needed in hydrostratigraphy, hydrogeology and geotechnical studies. The present study aims to: 1) improve current methods for geophysical imaging of the subsurface by means of VES, ERGI and GPR data, and calibration with geomorphological and geological reconstructions, 2) optimize the horizontal and vertical resolution of subsurface imaging in order to resolve sedimentary heterogeneity, and 3) check the reliability/uncertainty of the results (maps and architectural reconstructions) by comparison with exposed analogues. The method was applied to shallow (0 to 15 m) aquifers of the fluvial plain of southern Lombardy (Northern Italy). At two sites we studied fluvial sediments of meandering systems of the Last Glacial Maximum and post-glacial historical age. These sediments comprise juxtaposed and superimposed gravel–sand units with fining-upward sequences (channel-bar depositional elements), which are separated by thin and laterally discontinuous silty and sandy clay units (overbank and flood plain deposits). The sedimentary architecture has been studied at different scales in the two areas. At the scale of the depositional system, we reconstructed the subsurface over an area of 4 km 2 to a depth of 18 m (study site 1). Reconstructed sequences based on 10 boreholes and water-well stratigraphic logs were integrated with the interpretation of 10 vertical electrical soundings (VES) with Schlumberger arrays and 1570 m long dipole–dipole electrical resistivity ground imaging profiles (ERGI). 
In unsaturated sediments, vertical and horizontal transitions between gravel–sand units and fine-grained sediments could be mapped respectively at the meter- to decameter scale after calibration of the VES with borehole data. Similar information could be obtained in waterlogged sediments, in which the largest units could be portrayed and the lateral continuity of major hydrostratigraphic units could be assessed. Maps of apparent resistivity were combined with sand-to-clay ratio maps obtained from stratigraphic data, which substantially increased their quality. ERGI profiles added substantial information about the horizontal transitions between fine- and coarse-grained units. At the scale of depositional elements (channel-bar systems) we studied quarry exposures, over an area of about 4000 m 2 , down to 8 m below ground level (study site 2). In this case, facies analysis was performed on progressing quarry faces and integrated with a network of 165 m long ERGI profiles and 1100 m long ground-penetrating radar (GPR) profiles. Channel boundaries and accretion surfaces of point bars were resolved by both GPR and ERGI, which permitted 3-D mapping of these surfaces. Comparison between the results obtained for the two study sites demonstrates that integration of sedimentological data with geophysical imaging (ERGI and VES) enables the identification of stratigraphic units at the scale of depositional elements. Moreover, fining-upward trends and other internal features of the deposits, such as the transitions from coarse to fine-grained sediments within channel-bar complexes, could be resolved. Hence, the combination of sedimentological and geophysical methods provides a more accurate 3-D reconstruction of hydrostratigraphically significant sedimentary units compared to reconstructions based solely on borehole/point data. 
<s> BIB006 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> In recent years, innovative strategies such as inverse-scattering or data fusion have been suggested for the processing of GPR datasets in complex scenarios. In this framework, high-resolution concrete inspections are a challenge regarding the treatment of radar data because of the size of the datasets and the complex structures involved. In addition, the achievable depth of inspection is in many cases restricted to unacceptable limits because of the material properties of concrete and the “masking effect” of the upper layers of rebar. Thus, the application of innovative approaches to high-resolution concrete data seems to suggest itself. In this framework, this work deals with the processing of a high-resolution dataset acquired on a concrete retaining wall via an inverse scattering technique. In particular, we show how the adoption of a strategy based on signal processing techniques and an inverse scattering approach is able to provide the mapping of the two layers of rebar. <s> BIB007 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> The knowledge of moisture content changes in shallow soil layers has important environmental implications and is fundamental in fields of application such as soil science. In fact, the exchange of energy and water with the atmosphere, the mechanisms of flood generation as well as the infiltration of water and contaminants into the subsurface are primarily controlled by the presence of water in the pores of shallow soils. At the same time, the estimation of moisture content in the shallow subsurface is a difficult task. Direct measurements of water content require the recovery of soil samples for laboratory analyses: sampling is invasive and often destructive.
In addition, these data are generally insufficient to yield a good spatial coverage for basin-scale investigations. In-situ assessment of soil-moisture contents, possibly at the scale of interest for distributed catchment-scale models, is therefore necessary. The goal of this paper is to assess the information contained in surface-to-surface GPR surveys for moisture content estimation under dynamic conditions. GPR data are compared against and integrated with TDR (Time Domain Reflectometry) data. TDR and surface-to-surface GPR data act at different spatial scales and in two different frequency ranges. TDR, in particular, is widely used to estimate soil water content, e.g. converting bulk dielectric constant into volumetric water content values. GPR used in surface-to-surface configuration has been used increasingly to quickly image soil moisture content over large areas. Direct GPR wave velocity is measured in the ground. However, in the presence of shallow and thin low-velocity soil layers, such as the one generated by an infiltrating water front, dispersive, guided GPR waves are generated and the direct ground wave is not identifiable as a simple arrival. Under such conditions, the dispersion relation of guided waves can be estimated from field data and then inverted to obtain the properties of the guiding layers. In this paper, we analyze the GPR and TDR data collected at an experimental site of the University of Turin, during a controlled infiltration experiment. <s> BIB008 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> The possibility of material characterization through the GPR measurements, taking into account the integration with the ultrasonic technique, has been studied and possible relationships between the permittivity of materials and their bulk density are discussed. We present here two different approaches.
The first one describes an attempt to correlate the mechanical strength of concrete (as well the ultrasonic velocity) with the permittivity of the material. A series of samples of concrete, characterized by different material properties, were used for georadar and ultrasonic measures, seeking correlations among experimental data. The second approach illustrates the comparison between GPR and ultrasonic techniques to detect anomalies within the concrete. A 3D tomography was performed with ultrasonic and GPR measures on a laboratory model and the data obtained are here compared. <s> BIB009 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Ground Penetrating Radar (GPR) can assist decision making in a number of fields by enhancing our knowledge of subsurface features. Non-destructive investigations and controls of civil structures are improving day by day, however the scientific literature reports only a few documented cases of GPR applications to the detection of voids and discontinuities in hydraulic defense structures such as river embankments and levee systems. We applied GPR to the monitoring of river levees for detecting animal burrows, which may trigger levee failures by piping. The manageability and the non-invasiveness of GPR have resulted to be particularly suitable for this application. First because GPR is an extensive investigation method that enables one to rapidly cover a wide area, locating voids that are difficult and costly to locate using other intrusive methods. Second, GPR returns detailed information about the possible presence of voids and discontinuities within river embankments. We document a series of successful GPR applications to detect animal burrows in river levees. 
<s> BIB010 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> CPTI11 updates and improves the 2004 version of CPTI with respect to background information and structure. It is based on updated macroseismic (DBMI11; Locati et al., 2011) and instrumental databases; it contains records of foreshocks and aftershocks; for some offshore events, macroseismic earthquake parameters have been determined by means of the method by Bakun and Wentworth (1997); when both macroseismic and instrumental parameters are available, the two determinations and a default one are provided (in this case, the epicentre is selected according to expert judgement, while Mw is obtained as a weighted mean); for some events, whose macroseismic data are poor, no macroseismic parameters have been determined. CPTI11 does not include the results of some methodological developments performed in the frame of the EC project “SHARE”. It does not consider the information background provided by: Molin et al. (2008); Camassi et al. (2011); recent studies on individual earthquakes; ECOS 2009 (Faeh et al., 2011) and SisFrance, 2010, yet, which will be considered in the next version. The area covered by CPTI11 is slightly reduced with respect to the one of CPTI04 (Fig. 1). The catalogue is composed of two sections: the main one (1000-2006) and the “Etna” earthquakes, for which a specific calibration is used for determining earthquake parameters. Appendix 4 supplies the list of the events which were included in CPTI04 but not in CPTI11 and the relevant explanation. <s> BIB011 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> We present the results of GPR surveys performed to identify the foundation plinths of 12 buildings of a school, whose presence is uncertain since the structural drawings were not available. 
Their effective characterization is an essential element within a study aimed at assessing the seismic vulnerability of the buildings, which are non-seismically designed structures, located in an area classified as a seismic zone after their construction. Through GPR profiles acquired by two 250 MHz antennas, both in reflection mode and in a WARR configuration, the actual geometry and depth of the building plinths were successfully identified, limiting the number of invasive tests necessary to validate the GPR data interpretation, thus enabling the choice of the most suitable sites that would not alter the serviceability of the structure. The collected data were also critically analysed with reference to local environmental noise that, if causing reflections superimposed on those of the subsoil, could undermine the success of the investigation. Due to the homogeneity of the ground, the processing and results relative to each pair of profiles carried out for all of these buildings is very similar, so the results concerning only two of them are reported. <s> BIB012 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract In this work, three different techniques, namely time domain reflectometry (TDR), ground penetrating radar (GPR) and electrical resistivity tomography (ERT) were experimentally tested for water leak detection in underground pipes. Each technique was employed in three experimental conditions (one laboratory or two field experiments), thus covering a limited but significant set of possible practical scenarios. Results show that each of these techniques may represent a useful alternative/addition to the others. Starting from considerations on the obtained experimental results, a thorough analysis on the advantages and drawbacks of the possible adoption of these techniques for leak detection in underground pipes is provided. 
<s> BIB013 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract Recent flood events in Northern Italy (particularly in the Veneto Region) have brought river embankments into the focus of public attention. Many of these embankments are more than 100 years old and have been repeatedly repaired, so that detailed information on their current structure is generally missing. The monitoring of these structures is currently based, for the most part, on visual inspection and localized measurements of the embankment material parameters. However, this monitoring is generally insufficient to ensure an adequate safety level against floods. For these reasons there is an increasing demand for fast and accurate investigation methods, such as geophysical techniques. These techniques can provide detailed information on the subsurface structures, are non-invasive, cost-effective, and faster than traditional methods. However, they need verification in order to provide reliable results, particularly in complex and reworked man-made structures such as embankments. In this paper we present a case study in which three different geophysical techniques have been applied: electrical resistivity tomography (ERT), frequency domain electromagnetic induction (FDEM) and Ground Penetrating Radar (GPR). Two test sites have been selected, both located in the Province of Venice (NE Italy) where the Tagliamento River has large embankments. The results obtained with these techniques have been calibrated against evidence resolving from geotechnical investigations. The pros and cons of each technique, as well as their relative merit at identifying the specific features of the embankments in this area, are highlighted. The results demonstrate that geophysical techniques can provide very valuable information for embankment characterization, provided that the data interpretation is constrained via direct evidence, albeit limited in space. 
<s> BIB014 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Volumetric water content evaluation in structures, substructures, soils, and subsurface in general is a crucial issue in a wide range of applications. The main weaknesses of subsurface moisture sensing techniques are usually related both to the lack of cost-effectiveness of measurements, and to unsuitable support scales with respect to the extension of the surface to be investigated. In this regard, ground-penetrating radar (GPR) is an increasingly used non-destructive tool specifically suited for characterization and imaging. Several GPR techniques have been developed for different application purposes. Moisture evaluation in concrete is important for diagnosing structures at early stages of deterioration, as water contributes to the transfer of degrading and corrosive agents e.g., chloride. Traditionally, research efforts have been focused on the processing of GPR signal in time domain, although more recent studies are being increasingly addressed towards frequency domain analysis, providing additional information on moisture content in concrete. Concerning the evaluation of subsurface soil water content, different models ranging from empirical to theoretical are used for converting permittivity values into moisture. In this regard, two main GPR approaches are commonly employed for permittivity evaluation in time-domain measurements, namely, the ground wave method and the reflection method. Furthermore, the use of borehole transmission measurements, traditional off-ground methods, and of an inverse modelling approach allowing for a full waveform inversion of radar signals have been developed in the past decade. More recently, a self-consistent approach based on the Rayleigh scattering theory has also allowed the direct evaluation of moisture content from frequency spectra analysis. <s> BIB015
The use of GPR technology in structural engineering is nowadays well established and wide ranging. Notable applications include the location of reinforcing bars and metallic conduits, the assessment of concrete lining thicknesses, the investigation of highly wet spots in bearing structures, the detection of voids and cracks, the assessment of rebar sizes, and the three-dimensional (3-D) reconstruction of detailed structural elements BIB003 . When considering the GPR-related Italian contribution in this area, the influence of the nature of the Italian territory must be mentioned. Indeed, statistics from seismic databases have identified Italy as the most seismically active country of the Mediterranean Area BIB011 . Accordingly, it is not surprising that one of the main focuses of the Italian research community in this field is the seismic evaluation of structural elements, in terms of both prevention and damage diagnostics. Concerning seismic prevention, Barrile and Pucinotti BIB004 carried out a thorough study mainly focused on the two-dimensional (2-D) and 3-D reconstruction of structural elements in a reinforced concrete structure. To this purpose, a ground-coupled pulsed GPR system with a 1600 MHz central frequency antenna was employed on a number of beams and columns of the structure. Concerning the reconstruction of punctual structural elements, the work by De Domenico et al. BIB012 , focused on the exact location of the foundation plinths, is also worth mentioning. In the study by Valle et al. BIB002 , the authors compared two different approaches to improve the resolution of radar surveys, using real and synthetic data on structural elements such as walls and pillars. First, travel-time and amplitude tomography methods were applied; then, migration and diffraction tomographies were performed.
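The rebar and target localization applications summarized above commonly exploit the diffraction hyperbola that a point-like or cylindrical reflector traces in a B-scan: fitting its shape yields both the target position and the wave velocity (hence the permittivity) of the host material. The following minimal sketch illustrates this principle under idealized conditions; it is not the algorithm of any of the cited works, and the geometry, velocity, and noise values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 0.2998  # speed of light in free space, m/ns

def hyperbola(x, x0, d, v):
    """Two-way travel time (ns) for a point reflector at (x0, d) and antenna at x (m)."""
    return 2.0 * np.sqrt(d**2 + (x - x0)**2) / v

# Synthetic picks: rebar at x0 = 1.0 m, depth 0.08 m, concrete velocity ~0.10 m/ns
x = np.linspace(0.7, 1.3, 25)
rng = np.random.default_rng(0)
t_obs = hyperbola(x, 1.0, 0.08, 0.10) + rng.normal(0.0, 0.01, x.size)

# Least-squares fit of the hyperbola parameters (position, depth, velocity)
p, _ = curve_fit(hyperbola, x, t_obs, p0=[0.9, 0.05, 0.12])
x0_est, d_est, v_est = p
eps_r = (C / v_est) ** 2  # relative permittivity implied by the fitted velocity
```

Fitting many such hyperbolas along a scan line is, in essence, how rebar maps and cover-depth estimates are built from B-scan data.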
According to the results achieved, the authors were able to single out the advantages and drawbacks of the proposed approaches. With similar purposes, Soldovieri et al. BIB007 used a frequency-domain inverse scattering approach based on a linear model of the EM scattering. The goal of this study was to overcome the issue of the relationship between the wavelength and the dimension of the scatterer. The authors analytically assessed the capability of the linear inverse model in terms of scatterer imaging, by identifying the optimal frequency step and the diffraction tomography arguments. The research was supplemented by several reconstructed scenarios related to synthetic and experimental data for the simulation of real environmental conditions. With regard to damage assessment, a similar approach specifically focused on crack characterization was later applied by Bavusi et al. in the town of L'Aquila, Italy. Using tomographic techniques, the authors were able to reconstruct different lines of reinforcing steel bars and defects within a number of structural elements damaged by the tragic seismic event that struck the territory of L'Aquila in April 2009. A further crucial issue affecting a considerable number of buildings, structures, and infrastructures in Italy is related to their aging. It is worth recalling that a great part of the Italian highway network was built in the 1970s [48] , following the economic growth of the early 1960s. This has resulted in a considerable number of concrete structures that are nowadays approaching 50 years of service, thereby requiring important maintenance and rehabilitation activities. In addition, the economic prosperity of the 1960s generated a marked rise in the level of urbanization, often out of any control or regulation, with the total number of buildings, mainly made of concrete, rising from 10.7 million in 1951 to 19.7 million in 1991.
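The migration and tomographic imaging approaches discussed in the preceding paragraphs share a common underlying idea: collapsing the diffraction signatures recorded along the scan line back onto the scatterers that produced them. A basic diffraction-summation (Kirchhoff-type) migration, shown here on a synthetic B-scan, is a minimal sketch of that principle rather than the specific inversion schemes of the cited studies; all geometry and medium parameters are illustrative.

```python
import numpy as np

# Geometry and medium (units: m and ns); all values are illustrative
v = 0.10                              # wave velocity, m/ns (eps_r ~ 9)
dt = 0.05                             # time sampling, ns
nt = 500
xs = np.linspace(0.0, 2.0, 41)        # antenna positions along the scan line
t_axis = np.arange(nt) * dt

# Synthetic B-scan: one point scatterer at (1.0 m, 0.3 m depth)
x0, z0 = 1.0, 0.3
bscan = np.zeros((xs.size, nt))
for i, x in enumerate(xs):
    t_hit = 2.0 * np.hypot(z0, x - x0) / v
    bscan[i] = np.exp(-((t_axis - t_hit) / 0.2) ** 2)   # Gaussian wavelet

# Diffraction summation: for every image pixel, stack the traces along the
# diffraction hyperbola that a scatterer at that pixel would have produced
zs = np.linspace(0.05, 0.6, 56)       # depth axis, 0.01 m step
image = np.zeros((xs.size, zs.size))
for ix, x_img in enumerate(xs):
    for iz, z_img in enumerate(zs):
        t = 2.0 * np.hypot(z_img, xs - x_img) / v
        k = np.clip(np.round(t / dt).astype(int), 0, nt - 1)
        image[ix, iz] = bscan[np.arange(xs.size), k].sum()

# The migrated energy focuses at the true scatterer position (1.0 m, 0.3 m)
ix_pk, iz_pk = np.unravel_index(image.argmax(), image.shape)
```

The same stacking principle, with proper amplitude weighting and velocity models, underlies the practical imaging of rebar lines and crack-related defects.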
This has implied a present-day general need for effective and efficient concrete inspections in Italy, toward which several GPR-based research activities have in turn been oriented. To this purpose, Capizzi et al. BIB009 evaluated the capability of GPR in assessing the strength of concrete and reconstructing buried objects, in comparison with ultrasound (US) techniques. A polyvinyl chloride (PVC) pipe was positioned inside a concrete sample. By means of GPR tomography techniques, the authors were able to reliably and efficiently reconstruct the cavity in the sample. The strength characterization of the concrete, mainly consisting in the correlation between permittivity and compression strength, was left to further investigation. 2) Hydraulics: With regard to the application of GPR in hydraulic engineering, remarkable international efforts have been devoted to a wide range of research works and case studies, spanning from basic research investigations up to the management and protection of water resources in major civil engineering works. Among these, it is worth citing the reconstruction of sewer lines, the location of underground storage tanks, the mapping of water tables, and the evaluation of moisture in various soil types and construction materials, at several scales of investigation and using different GPR systems and signal processing techniques , BIB015 . From a hydrological perspective, the Italian territory is known to be extremely peculiar. A first point of uniqueness is the high percentage of potable water withdrawn from aquifers, which amounts to 85.6% of the total available [52] . This poses serious issues related both to the quality control of water for the safety of users' health, and to the reduction of the ratio between the water leaked during conveyance and the amount of water withdrawn.
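Whether assessing concrete layers or moist soils, much of the GPR interpretation summarized in this section rests on the elementary relation between two-way travel time t, layer thickness d, and relative permittivity eps_r: the wave velocity is v = c/sqrt(eps_r), so d = c*t/(2*sqrt(eps_r)). The following is a minimal sketch with illustrative numbers, not taken from the cited experiments.

```python
C = 0.2998  # speed of light in free space, m/ns

def thickness_from_twt(t_ns, eps_r):
    """Layer thickness (m) from two-way travel time (ns) and relative permittivity."""
    return C * t_ns / (2.0 * eps_r ** 0.5)

def permittivity_from_twt(t_ns, d_m):
    """Relative permittivity of a layer of known thickness, from its two-way travel time."""
    return (C * t_ns / (2.0 * d_m)) ** 2

# Example: a 0.25 m concrete slab with eps_r = 7 returns its bottom echo after
# t = 2 * d * sqrt(eps_r) / c, i.e. roughly 4.4 ns
t = 2.0 * 0.25 * 7 ** 0.5 / C
```

The second function is the basis of calibration against cores or known-depth reflectors: once the thickness is known at one point, the retrieved permittivity can be carried along the profile.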
In this framework, several research activities focused on the application of GPR for characterizing aquifers and detecting leakage in water pipes have been developed. Beserzio et al. BIB006 successfully reconstructed the geometry and architecture of the fluvial stratigraphy of the Quaternary Po Plain, Italy. With the purpose of improving current imaging methodologies, the authors compared the results obtained from several nondestructive testing (NDT) methods, namely, vertical electrical sounding (VES), electrical resistivity ground imaging (ERGI), and GPR. Strengths and limits of these techniques are discussed therein, and the potential of their integration for an accurate 3-D reconstruction of sedimentary units is also shown. In contrast, Carcione BIB001 addressed the characterization of aquifers using a simulation approach. The author proposed a theoretical model capable of reproducing the behavior of radio waves in realistic media, by simulating reflection, refraction, and diffraction phenomena, in addition to the relaxation mechanisms and the anisotropic properties of the investigated medium. This method was successfully applied for preliminarily assessing the saturation of a porous medium, as well as for evaluating the contamination of a sand aquifer. The infiltration process in the portion of the subsurface located above an aquifer, i.e., the vadose zone, was instead analyzed by Cassiani et al. BIB008 , by comparing data from both GPR and time domain reflectometry (TDR) measurements performed over a test site. Different central frequencies of investigation were employed. The results confirmed the reliability of GPR in detecting the variation of moisture in a progressively saturated medium. With regard to applications focused on leak detection in underground pipes, it is worthwhile mentioning the study performed by Cataldo et al.
BIB013 , wherein the potential of different geophysical methods suited for this purpose was evaluated. To this aim, TDR, GPR, and electrical resistivity tomography (ERT) were applied, both in the laboratory and in the field, to water pipes with different leak conditions. The GPR device was equipped with a double set of antennas, with central frequencies of 200 and 600 MHz. GPR and TDR were found to be reliable tools for detecting water leakage spots. Nevertheless, the authors reported the misleading impact of some potentially buried objects on the GPR signal. A further peculiarity of the Italian territory consists in the close relationship between its hydrogeological complexity and the capillary character of the transport network, which makes the management of water-retaining structures a crucial issue. Several studies have therefore investigated the potential of GPR in assessing the status of river embankments. Di Prinzio et al. BIB010 analyzed the reliability of GPR in detecting the presence of voids and discontinuities in levees and river embankments, whose detection effectively represents a comprehensive strategy for localizing early-stage damage. To this purpose, surveys along several kilometers of two embankments situated near the Italian town of Bologna were carried out using a GPR unit with a low central frequency of investigation, i.e., 250 MHz. The authors were able to clearly identify void spots, generally consisting of animal burrows, despite the interference of several factors affecting the quality of the collected data, such as the dependence on earlier weather conditions or the presence of vegetation over the unmaintained embankments. Moreover, it was also highlighted how the choice of a suitable, unique central frequency of investigation may represent a critical issue, especially when evaluating targets at different depths. Such a topic was indeed tackled by Perri et al.
BIB014 by comparing data collected on the embankments of the Tagliamento River, near Venice, Italy, using a 600 MHz central frequency GPR system together with other geophysical tools. GPR herein proved to be a relatively useful nondestructive technology, capable of supporting maintenance operations in major hydraulic engineering works. As far as the evaluation of soil moisture is concerned, Strobbia and Cassani tackled the topic of moisture mapping in shallow and thin low-velocity soil layers. By implementing an inverse multilayer GPR waveguide model, they endeavored to infer both the wave velocity within the medium and the layer thicknesses using a stochastic approach. A similar statistical approach was employed in further studies, wherein low-frequency GPR systems were used to reconstruct water content profiles in soils by performing cross-borehole zero-offset profiles (ZOPs) , BIB005 .
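In a cross-borehole ZOP, transmitter and receiver are lowered in step, so each depth level yields a straight-ray velocity estimate from which permittivity, and hence water content, can be derived. The sketch below illustrates this chain under a straight-ray assumption, converting permittivity to volumetric water content with the widely used Topp et al. (1980) empirical relation, which is not necessarily the petrophysical model adopted in the cited studies; the borehole separation and travel-time picks are hypothetical.

```python
import numpy as np

C = 0.2998  # speed of light in free space, m/ns

def topp_moisture(eps_r):
    """Topp et al. (1980) empirical relation: relative permittivity ->
    volumetric water content (valid roughly up to eps_r ~ 40)."""
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3

# Zero-offset profile: boreholes 3 m apart, one first-arrival pick per depth level
separation = 3.0                              # m
depths = np.array([0.5, 1.0, 1.5, 2.0])       # m
t_picks = np.array([32.0, 38.0, 44.0, 50.0])  # ns (hypothetical picks)

v = separation / t_picks       # straight-ray velocity at each level, m/ns
eps_r = (C / v) ** 2           # relative permittivity profile
theta = topp_moisture(eps_r)   # volumetric water content profile
```

Here increasing travel times with depth translate into a moisture profile that grows toward the water table, which is the kind of vadose-zone behavior the cited GPR/TDR comparisons were designed to capture.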
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The reasons for damage to railroad tracks often lie in the subgrade. At present investigations of tracks are carried out selectively and schematically by drilling and digging (every 100 m). By using the GPR it is possible to give a comprehensive assessment concerning the condition of the complete profile of the track <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Monostatic ground penetrating radar (GPR) has proven to be a useful technique in pavement profiling. In road and highway pavements, layer thickness and permittivity of asphalt and concrete can be estimated by using an inverse scattering approach. Layer-stripping inversion refers to the iterative estimation of layer properties from amplitude and time of delay (TOD) of echoes after their detection. This method is attractive for real-time implementation, in that accuracy is improved by reducing false alarms. To make layer stripping useful, a multitarget detection/tracking (D/T) algorithm is proposed. It exploits the lateral continuity of echoes arising from a multilayered medium. Interface D/T means that both detection and tracking are employed simultaneously (not sequentially). For each scan, both detection of the target and tracking of the corresponding TOD of the backscattered echoes are based on the evaluated a posteriori probability density. The TOD is then estimated by using the maximum a posteriori (MAP) or the minimum mean square error (MMSE) criterion. The statistical properties of a scan are related to those of the neighboring ones by assuming, for the interface, a first-order Markov model. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B.
Transport Infrastructures <s> railroad track substructure condition on a continuous top-of-rail nondestructive basis. In this study, 1 GHz radar data were acquired between concrete and wood ties as well as from the ballast shoulders beyond the ends of the ties, and with multiple antenna orientations and polarizations. Automatic processing of the data was developed to quickly generate hard copy sections of radar images and for input into railroad track performance monitoring software such as ORIM. Substructure conditions were observed such as thickness of the ballast and sub ballast layers, variations in layer thickness along the track, pockets of water trapped in the ballast, and soft subgrade from high water content. In addition, locations and depths of subsurface drainage pipes, trenches, and utilities were quickly and continuously mapped. GPR data were acquired and processed from a hirail vehicle moving continuously at 10 miles per hour with radar resolution of a few inches horizontally and a fraction of an inch vertically to depths of more than six feet. The largest errors resulted from the positioning system used to locate the antennas along and across the track. Automatic modeling to determine density and water content is being developed but the uneven and rough (at radar wavelengths) air-ballast interface is a major problem in modeling the data. <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Road pavement performances are of great importance for driving comfort and safety. Monitoring and rehabilitation activities are always extremely strategic and crucial. The points of strength of advanced non-destructive techniques for road pavement monitoring essentially are: (1) reliability, (2) significance in the space domain, (3) efficiency and (4) quickness. One of the most relevant and widely used technologies is the Ground Penetrating Radar.
In the field of pavement analysis its most frequent applications are the evaluation of layers thickness and voids detection. Recent experimental results put also in light the capability of Radar to identify the causes of road damages. Empirical relationships between physical and mechanical characteristics of the materials and electromagnetic parameters have been seen, established and analytical functions were proposed. Most promising and interesting evidences regard the prediction of water content. It is crucially important because water intrusion in sub-grade is one of the most important causes of loss of mechanical properties. The empirical relationships have shown a conservative and comparable trend for different materials, status conditions and radar frequencies, but variable amplitudes. General mathematical laws could be very useful to analyze the Radar scans correctly and in a more comprehensive framework. A stochastically based correction of semi-empirical approach is here proposed to correlate the geophysical characteristics of the pavement's materials (sub-grade) to the parameters of the empirical model. Average dimension of grains, grading, specific surface area of grains (that is related to the hygroscopic potential) and dielectric characteristics of the dry material are primarily taken into consideration. The impact of this geophysical and stochastical model on non-destructive measurements and on the pavement management is high and it is here discussed. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The possibility to estimate accurately the subsurface electric properties from ground-penetrating radar (GPR) signals using inverse modeling is obstructed by the appropriateness of the forward model describing the GPR subsurface system. In this paper, we improved the recently developed approach of Lambot et al.
whose success relies on a stepped-frequency continuous-wave (SFCW) radar combined with an off-ground monostatic transverse electromagnetic horn antenna. This radar configuration enables realistic and efficient forward modeling. We included in the initial model: 1) the multiple reflections occurring between the antenna and the soil surface using a positive feedback loop in the antenna block diagram and 2) the frequency dependence of the electric properties using a local linear approximation of the Debye model. The model was validated in laboratory conditions on a tank filled with a two-layered sand subject to different water contents. Results showed remarkable agreement between the measured and modeled Green's functions. Model inversion for the dielectric permittivity further demonstrated the accuracy of the method. Inversion for the electric conductivity led to less satisfactory results. However, a sensitivity analysis demonstrated the good stability properties of the inverse solution and put forward the necessity to reduce the remaining clutter by a factor 10. This may partly be achieved through a better characterization of the antenna transfer functions and by performing measurements in an environment without close extraneous scatterers. <s> BIB005 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Ground penetrating radar (GPR) signal processing is a nondestructive technique, currently performed by many agencies involved in road management and particularly promising for soil characteristics interpretation. The focus of this paper is to assess the reliability of an optimal signal processing algorithm for pavement inspection. Preliminary detection and subsequent classification of pavement damages, based on an automatic GPR analysis, have been performed and experimentally validated. 
A threshold analysis of the error is carried out to detect possible damages and check if they can be predicted, while a second threshold analysis determines the nature of the damage. An optimum detection procedure is performed. It implements the classical Neyman-Pearson radar test. All the settings needed by the procedure have been estimated from training sets of experimental measures. The overall performance has been evaluated by looking at the usual receiver's operating characteristic. The results show that a reasonable performance has been achieved by exploiting the spatial correlation properties of the received signal, obtained from an appropriate analysis of GPR images. The proposed system shows that automatic evaluation of subgrade soil characteristics by GPR-based signal analysis and processing can be considered reliable in a number of experimental cases. <s> BIB006 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Abstract The safety and operability of road networks is, in part, dependent on the quality of the pavement. It is known that pavements suffer from many different structural problems which can lead to damage to the pavement surface. To minimize the effect of these problems programmed policies for pavement management are required. Additionally a given local anomaly on the road surface can affect the safety of the road to various degrees according to the category of the road, so it is possible to set up different programmes of repair according to the different standards of road. Programmed policies for pavement management are required because of the wide structural damage which occurs to pavements during their normal operating life. This has consequences for the safety and operability of road networks. During the last decade, road networks suffered from great structural damage. 
The damage occurs for different reasons, such as the increasing traffic or the lack of means for routine maintenance. Many forms of damage, originating in the bottom layers are invisible until the pavement cracks. They depend on the infiltration of water and the presence of cohesive soil greatly reduces the bearing capacity of the sub-asphalt layers and underlying soils. On the basis of an in-depth literature review, an experimental survey with Ground Penetrating Radar (GPR) was carried out to calibrate the geophysical parameters and to validate the reliability of an indirect diagnostic method of pavement damage. The experiments were set on a pavement under which water was injected over a period of several hours. GPR travel time data were used to estimate the dielectric constant and the water content in the unbound aggregate layer, the variations in water content with time and particular areas where rate of infiltration decreases. A new methodology has been proposed to extract the hydraulic permittivity fields in sub-asphalt structural layers and soils from the moisture maps observed with GPR. It is effective at diagnosing the presence of clay or cohesive soil that compromises the bearing capacity of sub-base and induces damage. <s> BIB007 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The SAFE-RAIL system and the relevant processing unit are hereafter described as a success case for the solution of an electromagnetic inverse problem in real-time applications. The SAFE-RAIL system on-board processing unit is conceived for providing functionalities for real-time exploitation of raw data deriving from microwave sensing action through innovative GPR equipment. 
In particular, the main objective, as per European STREP project SAFE-RAIL statements, is focused on automatic interpretation of microwave sensed data relevant to rail-track subsurface, aiming at characterizing the ballast and sub-ballast layer properties with consequent extraction in real-time of geophysical parameters. A neural network based approach has been exploited as an efficient way for solving the inverse problem through a "learning-by-examples" approach. The capability of the SAFE-RAIL system in matching real-time performance requirements has been investigated. System operability and cost-effective implementation issues have also been deeply addressed. <s> BIB008 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Ground-penetrating radar (GPR) is a rapidly developing field that has seen tremendous progress over the past 15 years. The development of GPR spans aspects of geophysical science, technology, and a wide range of scientific and engineering applications. It is the breadth of applications that has made GPR such a valuable tool in the geophysical consulting and geotechnical engineering industries, has lead to its rapid development, and inspired new areas of research in academia. The topic of GPR has gone from not even being mentioned in geophysical texts ten years ago to being the focus of hundreds of research papers and special issues of journals dedicated to the topic. The explosion of primary literature devoted to GPR technology, theory and applications, has lead to a strong demand for an up-to-date synthesis and overview of this rapidly developing field. Because there are specifics in the utilization of GPR for different applications, a review of the current state of development of the applications along with the fundamental theory is required. 
This book will provide sufficient detail to allow both practitioners and newcomers to the area of GPR to use it as a handbook and primary research reference. *Review of GPR theory and applications by leaders in the field *Up-to-date information and references *Effective handbook and primary research reference for both experienced practitioners and newcomers <s> BIB009 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Abstract The evaluation of the water content of unsaturated soil is important for many applications, such as environmental engineering, agriculture and soil science. This study is applied to pavement engineering, but the proposed approach can be utilized in other applications as well. There are various techniques currently available which measure the soil moisture content and some of these techniques are non-intrusive. Herein, a new methodology is proposed that avoids several disadvantages of existing techniques. In this study, ground-coupled Ground Penetrating Radar (GPR) techniques are used to non-destructively monitor the volumetric water content. The signal is processed in the frequency domain; this method is based on Rayleigh scattering according to the Fresnel theory. The scattering produces a non-linear frequency modulation of the electromagnetic signal, where the modulation is a function of the water content. To test the proposed method, five different types of soil were wetted in laboratory under controlled conditions and the samples were analyzed using GPR. The GPR data were processed in the frequency domain, demonstrating a correlation between the shift of the frequency spectrum of the radar signal and the moisture content. The techniques also demonstrate the potential for detecting clay content in soils. 
This frequency domain approach gives an innovative method that can be applied for an accurate and non-invasive estimation of the water content of soils – particularly, in sub-asphalt aggregate layers – and assessing the bearing capacity and efficacy of the pavement drainage layers. The main benefit of this method is that no preventive calibration is needed. <s> BIB010 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Nowadays, severe meteorological events are always more frequent all over the world. This causes a strong impact on the environment such as numerous landslides, especially in rural areas. Rural roads are exposed to an increased risk for geotechnical instability. In the meantime, financial resources for maintenance are certainly decreased due to the international crisis and other different domestic factors. In this context, the best allocation of funds becomes a priority: efficiency and effectiveness of plans and actions are crucially requested. For this purpose, the correct localisation of geotechnically instable domains is strategic. In this paper, the use of Ground-Penetrating Radar (GPR) for geotechnical inspection of pavement and sub-pavement layers is proposed. A three-step protocol has been calibrated and validated to allocate efficiently and effectively the maintenance funds. In the first step, the instability is localised through an inspection at traffic speed using a 1-GHz GPR horn launched antenn... <s> BIB011 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> A recent approach relates the shift of the frequency peak of the Ground Penetrating Radar (GPR) spectrum with the increasing of the moisture content in the soil. The weakness characterizing this approach is represented by the needs of high resolution signals, whereas GPR spectra are affected by low resolution.
The novelty introduced by this work is twofold. First, we evidence that clay content information is present in the location where the maximum amplitude of the GPR spectra occurs. Then, we propose three super resolution methods, namely parabolic, triangular, and sinc-based interpolators, to further refine the location of the frequency peak. In fact, it is really important to be able to find this location quite precisely, to obtain accurate estimates of clay content. We show that the peak location can be found best through sinc-interpolation in the frequency domain of the measured data. Our experimental results confirm the effectiveness of the proposed approach to resolve a frequency shift in the GPR spectrum, even for a small amount of clay. <s> BIB012 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The electric properties of multiphase aggregate mixtures are evaluated for a given mineralogic composition at frequencies between 300 kHz and 3 GHz. Two measurement techniques are employed: a coaxial transmission line and a monostatic stepped-frequency ground-penetrating radar (GPR). The effect of increasing water content is analyzed in several sand clay mixtures. For the end-member case of maximum clay (25% in weight) and increasing water content, investigations are compared between the two measurement techniques. The electrical properties of materials are influenced by the amount of water, but clay affects the frequency dependency of soils showing distinctive features regardless of the mineralogy. The microwave attenuation, expressed by the quality factor Q, is partly dependent on frequency and on the water content. The performance of one empirical and one volumetric mixing model is evaluated to assess the capability of indirectly retrieving the volumetric water content for a known mixture.
The results are encouraging for applications in the field of pavement engineering with the aim of clay detection. The models used show similar behaviors, but measured data are better modeled using third order polynomial equations. <s> BIB013 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> In this paper, the correlation between the dielectric and the strength properties of unbound materials is analyzed, considering that mechanical characteristics of soil depend on particle interactions and assuming that dielectric properties of materials are related to bulk density. The work investigates this topic using ground-penetrating radar (GPR) techniques. In particular, two ground-coupled GPR are used in laboratory and in field experiments to infer the bearing ratio of soil in runway safety areas (RSA). The procedure is validated through CBR tests and in situ measurements using the light falling weight deflectometer (LFWD). A promising empirical relationship between the relative electric permittivity and the resilient modulus of soils is found. The comparison between measured and predicted data shows a reliable prediction of Young's modulus, laying the foundation for inferring mechanical properties of unbound materials through GPR measurements. <s> BIB014 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Nowadays, financial resources for maintenance have certainly decreased in many fields of application due to the Global Economic Crisis. In this context, the need for high performing inspections in pavement engineering has become a priority, and the use of non-destructive techniques has increased. In that respect, ground-penetrating radar (GPR) is proving to be as one of the most promising tools for retrieving both physical and geometrical properties of pavements. 
In this study, an off-ground GPR system, 1-GHz centre frequency of investigation, was used for surveying a large-scale rural road network. Data processing was aimed to accurately identify the geometry of pavement layer interfaces. Results showed the high effectiveness and efficiency of such GPR system and procedure. The high productivity, approximately 160 km/day, along with the capability to identify mismatches in layers arrangement, even in case of undisclosed defects, demonstrated the importance of such technique in road inspections. <s> BIB015 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> It is well known that road safety issues are closely dependent on both pavement structural damages and surface unevenness, whose occurrence is often related to ineffective pavement asset management. The evaluation of road pavement operability is traditionally carried out through distress identification manuals on the basis of standardized comprehensive indexes, as a result of visual inspections or measurements, wherein the failure causes can be partially detected. In this regard, ground-penetrating radar (GPR) has proven to be over the past decades an effective and efficient technique to enable better management of pavement assets and better diagnosis of the causes of pavement failures. In this study, one of the main causes (i.e. subgrade failures) of surface damage is analyzed through finite-difference time-domain (FDTD) simulation of the GPR signal. The GprMax 2D numerical simulator for GPR is used on three different types of flexible pavement to retrieve the numerical solution of Maxwell's equations in the time domain. Results show the high potential of GPR in detecting the causes of such damage. <s> BIB016 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. 
Transport Infrastructures <s> Ground-penetrating radar (GPR) is a wide ranging non-destructive tool used in many fields of application including effective pavement engineering surveys. Despite the high potential and the consolidated results obtained over the past decades, pavement distress manuals based on visual inspections are still widely used, so that only the effects and not the causes of faults are generally considered. In such context, simulation can represent an effective solution for supporting engineers and decision-makers in understanding the deep responses of both revealed and unrevealed damages. In this study, the use of FDTD simulation of the GPR signal is analyzed by simulating three different types of flexible pavement at two different center frequencies of investigation commonly used for road surveys. Comparisons with the undisturbed modelled pavement sections are carried out showing promising agreements with theoretical expectations, and good chances for detecting the shape of damages are demonstrated. <s> BIB017 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Over the last few years ground-penetrating radar (GPR) has proved to be an effective instrument for pavement applications spanning from physical to geometrical inspections of roads. In this paper, the new challenge of inferring mechanical properties of road pavements and materials from their dielectric characteristics was investigated. A pulsed GPR system with ground-coupled antennas, 600 MHz and 1600 MHz center frequencies of investigation, was used over a 4 m×30 m test site with a flexible pavement structure. A spacing of 0.40 m between the GPR acquisition tracks was considered both longitudinally and transversely in order to configure a square regular grid mesh of 836 nodes. Accordingly, the Young's modulus of elasticity was measured on each grid node using light falling weight deflectometer (LFWD). 
Therefore, a semi-empirical model for predicting strength properties of pavement was developed by comparing the observed elastic modulus and the electromagnetic response of substructure on each grid node. A good agreement between observed and modeled values was found, thereby showing great promises for large-scale mechanical inspections of pavements using GPR. <s> BIB018 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> In order to evaluate the level of ballast fouling for Portugal aggregates and the influence of antenna frequency on its measurement several laboratory tests were performed on different materials. Initially the clean granitic ballast was tested in different water content conditions, from dry to soak in order to see the influence of water on the dielectric characteristics. The fouling of the ballast was reproduced in laboratory through mixing the ballast with soil, mainly fine particles, in order to simulate the fouling existing in several old lines in Portugal, where the ballast was placed over the soil without any sub ballast layer. Five different fouling levels were reproduced and tested in laboratory, with different water contents, four for each fouling level. Tests were performed with five Ground Penetrating Radar (GPR) antennas with different frequencies, three ground coupled antennas of 400 MHz, 500 MHz and 900 MHz, and two horn antennas of 1000 MHz and 1800 MHz. In situ test pits were than used to validate the values of the dielectric constants obtained in laboratory. The main results obtained are presented in this paper together with troubleshooting associated to measurement on fouling ballast. This study is of interest for COST Action TU 1208. <s> BIB019 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. 
Transport Infrastructures <s> The characterization of shallow soil moisture spatial variability at the large scale is a crucial issue in many research studies and fields of application ranging from agriculture and geology to civil and environmental engineering. In this framework, this work contributes to the research in the area of pavement engineering for preventing damages and planning effective management. High spatial variations of subsurface water content can lead to unexpected damage of the load-bearing layers; accordingly, both safety and operability of roads become lower, thereby affecting an increase in expected accidents. A pulsed ground-penetrating radar system with ground-coupled antennas, i.e., 600-MHz and 1600-MHz center frequencies of investigation, was used to collect data in a 16 m × 16 m study site in the Po Valley area in northern Italy. Two ground-penetrating radar techniques were employed to nondestructively retrieve the subsurface moisture spatial profile. The first technique is based on the evaluation of the dielectric permittivity from the attenuation of signal amplitudes. Therefore, dielectrics were converted into moisture values using soil-specific coefficients from Topp's relationship. Ground-penetrating-radar-derived values of soil moisture were then compared with measurements from eight capacitance probes. The second technique is based on the Rayleigh scattering of the signal from the Fresnel theory, wherein the shifts of the peaks of frequency spectra are assumed comprehensive indicators for characterizing the spatial variability of moisture. Both ground-penetrating radar methods have shown great promise for mapping the spatial variability of soil moisture at the large scale. <s> BIB020 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B.
Transport Infrastructures <s> Clay content is one of the primary causes of pavement damages, such as subgrade failures, cracks, and pavement rutting, thereby playing a crucial role in road safety issues as an indirect cause of accidents. In this paper, several ground-penetrating radar methods and analysis techniques were used to nondestructively investigate the electromagnetic behaviour of sub-asphalt compacted clayey layers and subgrade soils in unsaturated conditions. Typical road materials employed for load-bearing layers construction, classified as A1, A2, and A3 by the American Association of State Highway and Transportation Officials soil classification system, were used for the laboratory tests. Clay-free and clay-rich soil samples were manufactured and adequately compacted in electrically and hydraulically isolated formworks. The samples were tested at different moisture conditions from dry to saturated. Measurements were carried out for each water content using a vector network analyser spanning the 1 GHz–3 GHz frequency range, and a pulsed radar system with ground-coupled antennas, with 500-MHz centre frequency. Different theoretically based methods were used for data processing. Promising insights are shown to single out the influence of clay in load-bearing layers and subgrade soils, and its impact on their electromagnetic response at variable moisture conditions. <s> BIB021
This Section reviews the major uses of GPR in Italy in transport engineering by fields of application, namely, roads, railways, and airports. A further subsection will be devoted to critical transport infrastructures, such as bridges and tunnels, whose strategic importance deserves a separate discussion. 1) Roads: According to Saarenketo , GPR road applications can be broadly divided into four main categories, namely: 1) surveys needed in designing new roads; 2) surveys carried out for the rehabilitation design of existing roads; 3) quality control or quality assurance surveys in road projects; and 4) surveys carried out for pavement management systems. Worldwide, there is a remarkable number of works dealing with the application of GPR in roads and streets . In Italy, it is worth noting that most of the freight and passenger transport takes place on the road. The results of national inquiries depict a broadly extended road network, with increasing traffic volumes , . Such peculiarities have favored the use of GPR especially in roads more than in other transport infrastructures (see Fig. 3 ), whereby a considerable number of applications can be found for subgrade soils as well as for unbound and bound pavement layers. The Italian GPR-related research focused on the assessment of the physical properties of subgrade soils and load-bearing layers has been very fruitful since the early noughties, when Benedetto and Benedetto presented a semiempirical approach for the evaluation of the relative dielectric permittivity of subgrade soils, based on a Gauss function which takes into account the relative dielectric permittivity of both the dry and the saturated material, as well as its particle size properties. One multi-frequency GPR system with ground-coupled antennas, 600 and 1600 MHz central frequency of investigation, was used for the laboratory tests on two soil types, which in turn were oven-dried and progressively wetted at several known water contents up to saturation.
It was observed that a mono-granular soil tends to change its relative dielectric permittivity more rapidly than a soil with a heterogeneous particle size distribution, since a faster change from a viscous to a free water status may occur. This approach was later deepened by Benedetto BIB004 , who compared the results, in terms of ε r , achieved by testing four types of soils, with empirical and theoretical models. Starting from 2005, considerable efforts have been devoted to the GPR-based evaluation of water content in subgrade soils and unbound pavement layers. Fiori et al. investigated the relationship between the relative dielectric permittivity of soils and their volumetric water content. The effective permittivity of the soil was here derived as a function of the water content by using the effective medium approximation (EMA) technique after modeling the porous medium as a multi-indicator structure with spherical elements of variable radii R. The derived formula was tested against controlled laboratory experiments and it was shown that the approximated relationship behaves quite well over a broad range of water contents θ, with an R-squared value R² = 0.98. More recently, Benedetto and Pensa BIB007 have carried out a GPR-based experimental survey for calibrating a number of geophysical parameters and validating the reliability of an indirect diagnostic method for the detection of pavement damages. Water was here injected within a flexible pavement structure over a period of several hours. The dielectric constant and the water content in the unbound aggregate layer were estimated by the GPR travel time data, as well as the variations of water content in time and the critical areas with low rates of water infiltration. Such an approach has proved to be effective at diagnosing the presence of clay and the cohesive nature of certain soils that may compromise the bearing capacity of load-bearing layers and induce structural damage.
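Several of the works above convert GPR-derived permittivity into volumetric water content. The following minimal sketch uses the widely cited Topp et al. (1980) polynomial with its generic coefficients, not the soil-specific coefficients calibrated by the cited authors, together with the standard travel-time estimate of permittivity; the numbers in the demo are illustrative.

```python
# Illustrative permittivity-to-moisture conversion (generic Topp et al., 1980 coefficients).
def topp_water_content(eps_r: float) -> float:
    """Volumetric water content (m^3/m^3) from relative permittivity."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

C_M_PER_NS = 0.2998  # free-space EM wave speed in m/ns

def eps_from_twt(two_way_ns: float, thickness_m: float) -> float:
    """Relative permittivity of a layer of known thickness from the
    two-way travel time of the reflection at its bottom interface."""
    return (C_M_PER_NS * two_way_ns / (2.0 * thickness_m)) ** 2

if __name__ == "__main__":
    eps = eps_from_twt(two_way_ns=4.0, thickness_m=0.20)  # ~9.0
    print(eps, topp_water_content(eps))
```

The conversion is monotonic over the permittivity range of typical moist soils, which is why a permittivity map can be read directly as a moisture map.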
A step beyond the common practices established in Italy and worldwide for moisture sensing with GPR in typical subgrade soils was taken in 2010 by Benedetto BIB010 , who processed the GPR signal in the frequency domain on the basis of the Rayleigh scattering principles, according to the Fresnel theory. The main assumption relies on the fact that in unsaturated soils the water droplets are capable of scattering EM waves , so that an extra shift of the central frequency of the wave spectra can be added to the one mainly related to the medium properties , . In line with this, several relationships were provided between the shift of the peak and the water content for different types of soil under controlled laboratory conditions. The approach has been validated at the whole range of investigation scales BIB011 - BIB020 , providing good results and promising applicability. Much more recently, the Italian contribution on the GPR use for preventing structural damages in load-bearing layers has been focused on the possibility of detecting and quantifying clay content BIB012 . It is well-known that clay presence is closely related to moisture, due to its considerable swelling properties [73] , and it is thereby capable of exerting significant effects on the stability of the soil behavior under loading. In this regard, Tosti et al. have employed different GPR methods and techniques to nondestructively investigate the clay content in sub-asphalt soil samples compacted in a laboratory environment. The experimental layout provided for the use of three types of soil with progressively increasing percentage of bentonite clay, and two different GPR instruments were used for the EM measurements at each step of clay content. In particular, a ground-coupled pulsed radar system, 500 MHz central frequency, and a vector network analyzer (VNA) spanning the 1-3 GHz frequency range were here employed.
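The core of the Rayleigh-scattering method, and of the super-resolution peak picking proposed in BIB012, is locating the peak of a trace's frequency spectrum with sub-bin precision. The sketch below uses parabolic interpolation, one of the three interpolators considered in BIB012 (sinc interpolation performed best there); the sampling rate and test tone are illustrative assumptions.

```python
import numpy as np

def spectral_peak_hz(trace, dt_s):
    """Locate the dominant spectral peak of a GPR trace with sub-bin
    precision via parabolic interpolation of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(trace))
    k = int(np.argmax(mag))
    delta = 0.0
    if 0 < k < mag.size - 1:
        a, b, c = mag[k - 1], mag[k], mag[k + 1]
        delta = 0.5 * (a - c) / (a - 2.0 * b + c)  # vertex of the fitted parabola
    df = 1.0 / (len(trace) * dt_s)                 # frequency-bin spacing in Hz
    return (k + delta) * df

if __name__ == "__main__":
    fs, n, f0 = 1.0e9, 512, 203.7e6                # illustrative sampling and tone
    t = np.arange(n) / fs
    trace = np.sin(2 * np.pi * f0 * t) * np.hanning(n)  # windowed test tone
    print(spectral_peak_hz(trace, 1.0 / fs))       # close to f0
```

Comparing the peak estimated on a wet or clay-rich sample with that of a reference trace gives the frequency shift exploited by the method.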
The signals collected were processed using the Rayleigh scattering method, the full-wave inversion technique BIB005 , and the time-domain signal picking technique. Overall, promising results were achieved for the detection of clay. The electrical behavior of clayey soil samples was also investigated at a smaller scale by Patriarca et al. BIB013 using two measurement techniques, namely, a coaxial transmission line and a monostatic stepped frequency GPR. The effect of growing water contents was analyzed for several sand-clay mixtures. The results from the two measurement techniques were compared for the end-member case of maximum clay, namely, 25% in weight, with water contents growing progressively up to saturation. The high impact of water on the electrical properties of materials was confirmed, and the frequency dependence of the soils investigated was also proved to be sensitive to the presence of clay, showing distinctive features regardless of the soil mineralogy. Such results were confirmed by a similar experiment conducted by Tosti et al. BIB021 with different GPR systems. Within the Italian contribution in the GPR-based research focused on the bound structure of a road pavement, an inverse scattering approach for pavement profiling was presented by Spagnolini and Rampa BIB002 , who determined layer thickness and permittivity of the asphalt. Benedetto et al. BIB006 proposed a study for assessing the reliability of an optimal signal processing algorithm for pavement inspections. Basically, the analyses were carried out as a function of two thresholds, with the first one set for taking into account the error of detecting possible damages and checking their predictability, and the second one for determining the nature of the damage. An optimum detection procedure implementing the classical Neyman-Pearson radar test was performed.
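The two quantities at the core of pavement profiling, the permittivity of the top layer and the thickness of each layer, follow from standard GPR relations: the surface-reflection (metal-plate) method and the two-way travel time. The sketch below is textbook practice, not the specific inverse-scattering algorithm of the cited work, and all numbers are illustrative.

```python
C_M_PER_NS = 0.2998  # free-space EM wave speed, m/ns

def surface_permittivity(a_surface: float, a_plate: float) -> float:
    """Top-layer relative permittivity from the ratio between the surface
    reflection amplitude and a metal-plate (perfect reflector) amplitude."""
    r = a_surface / a_plate
    return ((1.0 + r) / (1.0 - r)) ** 2

def layer_thickness_m(two_way_ns: float, eps_r: float) -> float:
    """Layer thickness from the two-way travel time inside the layer."""
    return C_M_PER_NS * two_way_ns / (2.0 * eps_r ** 0.5)

if __name__ == "__main__":
    eps = surface_permittivity(1.0, 3.0)      # amplitude ratio 1/3 -> eps_r = 4
    print(eps, layer_thickness_m(2.0, eps))   # thickness of a 2-ns-thick layer
```

Applied layer by layer down the trace, these two relations yield the thickness and permittivity profile that profiling algorithms refine.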
A reasonable performance has been achieved by exploiting the spatial correlation properties of the signal received, as a result of a proper analysis of the GPR images. In Tosti et al. BIB015 an off-ground GPR system, 1 GHz central frequency of investigation, was employed for a large-scale investigation along an extra-urban road network. Homogeneous pavement sections were singled out according to a comprehensive checklist of elements of practical use for GPR end users. Useful advice on the system setup and calibration procedures is also given by the authors. The GPR system showed a very high productivity and a good effectiveness in detecting several causes of pavement damages. Tosti and Umiliaco BIB016 and Benedetto et al. BIB017 investigated the possibility of simulating different types of pavement damages. The authors performed finite-difference time-domain (FDTD) simulations of the GPR signal on three different types of flexible pavement using two central frequencies of investigation, i.e., 600 and 1600 MHz, commonly employed in road surveys. Regular- and irregular-shaped faults within hot-mix asphalt (HMA) layers and at the base-subbase interface, as well as potholes on the surface were here simulated by the gprMax2D numerical simulator BIB018 . Much more recently, Tosti et al. BIB017 proposed a promising semiempirical amplitude-based model for inferring the mechanical properties of road pavements and materials from their dielectric characteristics. For calibrating the model, the authors employed ground-truth data arising from the use of a light falling weight deflectometer (LFWD). 2) Railways: GPR applications in railway engineering have experienced a huge advancement especially since the 90s. Overall, they can be divided into three main categories, namely: 1) ballast surveys; 2) geotechnical investigations; and 3) structural quality assurance of new nonballasted rail track beds .
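The FDTD simulations of the GPR signal discussed above for road pavements rest on the Yee leapfrog updates of Maxwell's equations. The following self-contained 1-D sketch is illustrative only (the cited works use the 2-D gprMax simulator); the grid size, source waveform, and the permittivity step standing in for a layer interface are all assumptions.

```python
import numpy as np

EPS0, MU0 = 8.854e-12, 4e-7 * np.pi   # vacuum permittivity and permeability
C0 = 3.0e8                            # free-space wave speed, m/s

def fdtd_1d(eps_r=None, nz=400, nt=900, dz=0.005):
    """Minimal 1-D FDTD (Yee) sketch: a Gaussian pulse launched into a
    layered permittivity profile; returns the field at the source cell."""
    dt = dz / (2.0 * C0)              # Courant number 0.5 -> stable scheme
    eps = np.ones(nz) if eps_r is None else np.asarray(eps_r, float)
    ez, hy = np.zeros(nz), np.zeros(nz)
    rec = np.zeros(nt)
    for n in range(nt):
        hy[:-1] += (ez[1:] - ez[:-1]) * dt / (MU0 * dz)         # H update
        ez[1:] += (hy[1:] - hy[:-1]) * dt / (EPS0 * eps[1:] * dz)  # E update
        ez[50] += np.exp(-((n - 60) / 10.0) ** 2)               # soft Gaussian source
        rec[n] = ez[50]
    return rec

if __name__ == "__main__":
    profile = np.ones(400)
    profile[250:] = 4.0                     # a buried interface (eps_r 1 -> 4)
    echo = fdtd_1d(profile) - fdtd_1d()     # difference isolates the reflection
    print(abs(echo).max())
```

Subtracting the homogeneous run leaves only the reflection generated at the permittivity step, which is how simulated damage signatures are isolated.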
To the best of our knowledge, there were no significant GPR-related contributions worldwide concerning railway applications until 1994, when Göbel et al. BIB001 carried out some experimental tests to measure the ballast thickness, locate mudholes and ballast pockets, and define the soil boundaries of the subgrade. In addition, Saarenketo argued that GPR was tested in some Finnish railways in the mid-80s, although the results were not very encouraging due to a difficult data collection process and several processing problems. GPR then started to become an acknowledged technology among railway engineers from the mid-90s , BIB003 . According to literature statistics, Italy holds a total of 16 742 km of rail network, being 11 931 km electrified and 4811 km not electrified . National statistics point out how the railway transport in Italy can be considered secondary to road transport, and to other European countries' railway network. This is one of the reasons why GPR applications in Italy in this field can count on a lower number of contributions, which in turn have started later than in the rest of Europe, especially if compared to North European countries. The Italian contribution in this area can be traced back to 1999 when IDS Ingegneria dei Sistemi carried out some pilot tests along an Italian high-speed railway track BIB008 . According to the results achieved, the same company developed an array of multi-frequency antennas wherewith it was possible to single out several damaging occurrences. With similar purposes, Caorsi et al. BIB008 developed a railway ballast inspection system capable of extracting relevant geophysical parameters in real time. A neural-network-based method was exploited herein for solving the EM inverse problem through a "learning-by-examples" approach.
More recently, research efforts have been devoted to the EM characterization of the ballast material, mostly to analyze its response in case of fouling, whose occurrence leads to a drastic loss of performance. In line with this, Fontul et al. BIB019 have verified the dielectric values of the railway ballast used in Portuguese railways under controlled laboratory conditions through a multi-frequency GPR test, with the main goal of improving the GPR interpretation of the health conditions of railways. Additionally, GPR measurements and some test pits were performed in situ for validating the dielectric permittivity values of a clean ballast achieved preliminarily in laboratory environment. 3) Airports: The international scenario of literature publications on the use of GPR in airfield environments can count on a lower number of contributions than in other fields of application. Several possible applications to this purpose can be broadly mentioned, namely: 1) the locating of voids and moisture trails in concrete runways and taxiways; 2) the locating of post-tensioning cables in concrete elements, such as garages or bridges within the airports; 3) the detection of voids and delamination of concrete roofs; 4) the reconstruction of cable, conduit, and rebar geometry in concrete pavements; 5) the location of buried utilities and their leaks; and 6) quality control and quality assurance surveys BIB009 . As of 2014, 45 international airports can be counted in Italy, which serve more than 150 million passengers moving from, to, and within its territory by plane . The maintenance of airfields, and especially of runways and taxiways, is an issue increasingly perceived by airport administrations, in terms of both social and economic impacts. Many of the main international airports are providing technologies capable of effectively and reliably predicting the evolution of damage in runway and taxiway pavements.
Despite this potential, research activities in this field are still limited. Benedetto and Tosti BIB014 have addressed the GPR-based characterization of the strength and deformation properties of the unpaved natural soils, which constitute the so-called runway safety areas (RSA). To this purpose, deflectometric and GPR tests were carried out in both laboratory and field environments, at the Roma Urbe Airport, Rome, Italy. The GPR device employed here was a ground-coupled 600 and 1600 MHz pulsed system, whereas information about the strength parameters was gathered in the field by using an LFWD, and by performing California bearing ratio (CBR) tests in a laboratory environment. The authors first related the dielectric permittivity values of the soil investigated to its bulk density. The Young's elasticity modulus was then predicted by implementing a semiempirical model, based on theoretical arguments and validated using ground-truth data. Relatively good results were achieved, although the authors suggest the need for widening the range of surveyed materials and investigating the soil behavior under different known moisture conditions.
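The calibration logic shared by the semi-empirical strength models above, fitting deflectometer moduli against GPR-derived permittivity over a set of measurement nodes, can be illustrated with a simple least-squares fit. The power-law functional form and all numbers below are assumptions for illustration, not the published models.

```python
import numpy as np

def calibrate_power_law(eps_r, e_modulus):
    """Fit E = a * eps_r**b by linear least squares in log space
    (an assumed functional form standing in for the published models)."""
    b, log_a = np.polyfit(np.log(eps_r), np.log(e_modulus), 1)
    return np.exp(log_a), b

def predict_modulus(eps_r, a, b):
    """Predict the elastic modulus from GPR-derived permittivity."""
    return a * np.asarray(eps_r, float) ** b

if __name__ == "__main__":
    # Synthetic "grid-node" pairs standing in for GPR permittivities vs LFWD moduli.
    eps = np.linspace(4.0, 16.0, 25)
    e_meas = 12.0 * eps ** 1.8
    a, b = calibrate_power_law(eps, e_meas)
    print(a, b)
```

Once calibrated on a few ground-truth nodes, such a relation lets GPR alone map mechanical properties over a large area.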
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 4) Bridges and Tunnels: <s> On request of the Italian National Electrical Agency, the Company IDROGEO carried out a G.P.R. survey inside an old water-supply tunnel 14 km long belonging to an hydroelectric power plant located in the North East of Italy. The aim of the survey was the geo-structural investigation of the rock formations surrounding the tunnel with particular interest in the mapping of cavities and fractures associated to the water occurrences and circulation. A detailed investigation was also requested to detect the presence of voids at the concrete-rock interface. The tunnel crosses different rock formations belonging to the Alpine sequence with the presence of evaporitic formations affected by strong tectonic deformations. More than 7,000 meters of G.P.R. profiles were recorded by using a GSSI SIR 10 equipped with 100 and 500 MHz antennas with simultaneous data recording on two channels. The survey at 500 MHz was aimed at the precise determination of the concrete thickness and at the detection of the voids at the concrete-rock interface, whereas the use of 100 MHz transducers permitted the detection of larger unconformities and cavities up to a distance of 15-20 metres. The identified structural elements were divided into 5 groups: lack of contact and delaminations at the concrete-rock interface; geostructural elements; open fractures; voids and unconformities; honeycomb alterations. The survey also permitted the location of some old artifacts whose position and nature were uncertain.
<s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 4) Bridges and Tunnels: <s> An integrated interpretation was made of data from ground penetrating radar (GPR), seismic refraction and seismic transmission tomography, collected inside the catchment tunnels of a potable water source in central Italy. Rock fracturing and obsolescence of the concrete lining in a tunnel led to a landslide that caused structural instability in the catchment work structures. To assess the stability of the rock close to the landslide, geophysical surveys were preferred to boreholes and geotechnical tests in order to avoid water pollution and the risk of further landslides. The interpretation of integrated data from seismic tomography and 200 MHz antenna GPR resulted in an evaluation of some of the elastic characteristics and the detection of discontinuities in the rock. Note also that an analysis of the back-scattered energy was required for the GPR data interpretation. The integration of seismic refraction data and 450 MHz antenna allowed us to identify the loosened zone around the tunnel and the extent of the mass involved in the cave-in, while GPR data from 225 MHz were used to evaluate the quality of contact between concrete lining and massive rock. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 4) Bridges and Tunnels: <s> Corrosion associated with reinforcing bars is the most significant contributor to bridge deficiencies. The corrosion is usually caused by moisture and chloride ion exposure. The reinforcing bars are attacked by corrosion and yield expansive corrosion products. These oxidation products occupy a larger volume than the original intact steel and internal expansive stresses lead to cracking and debonding.
There are some conventional inspection methods for the detection of reinforcing bar corrosion, but they can be invasive and destructive, often laborious, require lane closure, and are difficult or unreliable for any quantification of corrosion. For these reasons, bridge engineers generally prefer to use the ground penetrating radar (GPR) technique. In this work a novel numerical approach for three dimensional tracking and mapping of cracks in the bridge is proposed. The work starts from some interesting results based on the use of the 3D imaging technique in order to improve the potentiality of the GPR to detect voids, cracks or buried objects. The numerical approach has been tested on data acquired on a bridge by using a pulse GPR system specifically designed for bridge deck and pavement inspection. The equipment integrates two arrays of Ultra Wide Band ground coupled antennas, having a main working frequency of 2 GHz. The two arrays are using antennas arranged with a different polarization. The cracks, often associated with moisture increase and higher values of the dielectric constant, produce a non-negligible increase of the signal amplitude. Following this, the algorithm, organized in preprocessing, processing and postprocessing stages, analyzes the signal by comparing the value of the amplitude all over the domain of the radar scan. <s> BIB003
Lowering the risk related to structural stability issues of transport lifelines, such as bridges and tunnels, is an important task for avoiding failures that may cause a loss of functionality and compromise the whole transportation network. In this framework, GPR can play an important role in monitoring and assessing these infrastructures, due to its minimum interference with traffic whilst measuring and testing. The potential of GPR in bridge engineering was evaluated in and BIB003 . Aiming at developing a highly reliable algorithm for the 3-D tracking of cracks within the HMA layers of a bridge deck, the authors employed a bridge-dedicated GPR system consisting of two arrays of ground-coupled antennas with a central frequency of 2 GHz, to survey several bridges in the district of Rieti, Italy. The signals collected were processed and amplified, and a 3-D matrix of signal amplitude values was then realized. Therefore, an amplitude threshold was calibrated to localize the paths of the cracks, by comparing the evidence of cracks in the field with the radar reflections and by assuming higher amplitude values related to cracks. Still concerning bridge applications, it is worth mentioning the work developed by Pucinotti and Tripodo in 2009 on a case-study bridge situated in the district of Reggio Calabria, Italy. To this intent, different technologies were used. In particular, laser scanner technology (LST) and GPR were combined to reconstruct the surface and inner morphology of the structure, respectively. GPR showed a good reliability and efficiency in determining the geometry of steel reinforcements, the lack of homogeneity, and the major damage. With respect to tunnel engineering, Cardarelli et al. BIB002 made use of GPR in an integrated approach, to assess the health state of a tunnel used for potable water, which caved in due to a landslide.
GPR and seismic surveys were carried out by exploiting a twin tunnel located around 15 m from the target. Three antennas were employed in a bistatic configuration, with a central frequency spanning from 200 to 450 MHz. The lower frequencies were especially useful in retrieving the number and the location of discontinuities, thereby indicating collapsed zones. The integration of GPR, seismic, and tomographic analyses made it possible to minimize data uncertainty and to infer useful information about the tunnel structural stability. In line with the purposes of the former work, Piccolo and Zanelli BIB001 reconstructed the state of the geostructure surrounding the lining of a tunnel designed for potable water conduction in the North-East of Italy. The authors surveyed 7 km of tunnel lining using a pulsed GPR system with 100 and 500 MHz central frequency antennas. The higher frequency allowed monitoring the lining thickness all over the scan length, whereas the lower one allowed detecting deeper inhomogeneities, e.g., cavities and cracks, up to a distance of 20 m.
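The amplitude-threshold step of the 3-D crack-tracking approach described above for bridge decks can be sketched as follows; the mean-plus-k-sigma threshold is an assumption standing in for the field-calibrated value used by the authors, and the planted "crack" is synthetic.

```python
import numpy as np

def crack_voxels(volume, k=5.0):
    """Flag voxels of a 3-D amplitude matrix whose absolute amplitude
    exceeds mean + k*std (a simplified, assumed threshold rule)."""
    a = np.abs(volume)
    return a > a.mean() + k * a.std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.normal(0.0, 1.0, (20, 20, 20))   # background reflections (synthetic)
    vol[5, :, 10] = 20.0                        # a planted linear "crack"
    mask = crack_voxels(vol)
    print(mask[5, :, 10].all(), int(mask.sum()))
```

Connected high-amplitude voxels surviving the threshold are then linked across slices to trace the crack path in three dimensions.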
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> C. Underground Utilities <s> Ground penetrating radar (GPR) is one of the most suitable technological solutions for timely detection of damage and leakage from pipelines, an issue of extreme importance both environmentally and from an economic perspective. However, for GPR to be effective, there is the need of designing appropriate imaging strategies such to provide reliable information. In this paper, we address the problem of imaging leaking pipes from single-fold, multi-receiver GPR data by means of a novel microwave tomographic method based on a 2D "distorted" scattering model which incorporates the available knowledge on the investigated scenario (i.e., pipe position and size). In order to properly design the features of the approach and test its capabilities in controlled but realistic conditions, we exploit an advanced, full-wave, 2.5D Finite-Difference Time-Domain forward modeling solver capable of accurately simulating real-world GPR scenarios in electromagnetically dispersive materials. By means of this latter approach, we show that the imaging procedure is reliable, allows us to detect the presence of a leakage already in its first stages of development, is robust against uncertainties and provides information which cannot be inferred from raw-data radargrams or "conventional" tomographic methods based on a half-space background. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> C. Underground Utilities <s> The realization of network infrastructures with lower environmental impact and the tendency to use digging technologies less invasive in terms of time and space of road occupation and restoration play a key-role in the development of communication networks. 
The "low impact mini-trench" technique (addressed in ITU L.83 recommendation) requires that non-destructive mapping of buried services enhances its productivity to match the improvements of new digging equipment. Therefore, the development of a fully automated and real-time 3D GPR processing system plays a key-role in overall optical network deployment profitability. We propose a novel processing scheme whose goal is the automated processing and detection of buried targets, that can be applied in real-time to 3D GPR array system (16 antennas, 900 MHz central frequency). After the standard pre-processing steps, the antenna records are continuously focused during acquisition, by the mean of Kirchhoff depth-migration algorithm, to build pre-stack reflection angle gathers G(x, θ; v) at nv different velocities. The analysis of pre-stack reflection angle gathers plays a key-role in automated detection: by the mean of correlation estimate computed for all the nv reflection angle gathers, targets are identified and the best local propagation velocities are recovered. The data redundancy of 3D GPR acquisitions highly improves the proposed automatic detection reliability. The proposed approach allows to process 3D GPR data and automatically detect buried utilities in real-time on a laptop computer, without the need of skilled interpreters and without specific high performance hardware. More than 100 Km of acquired data prove the feasibility of the proposed approach. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> C. Underground Utilities <s> This paper describes a project entitled 'ORFEUS', supported by the European Commission's 7th Framework development programme. Horizontal directional drilling (HDD) offers significant benefits for urban environments by minimising the disruption caused by street works. 
Use of the technique demands an accurate knowledge of underground utility assets and other obstructions in the drill path. This project is aimed at improving the results of a previous project developed under the 6th Framework programme; specifically it addresses some issues that were formerly unresolved, in order to produce a commercially viable product. In fact, ORFEUS activities concern the research of the optimum antenna configuration, the design of an angular position sensor and a communication module, as well as the identification/validation of the most effective bore-head GPR data processing algorithms. The final system is expected to offer the operator information directly from the drilling head, in real time, allowing objects to be avoided; this is a unique feature that will enhance safety and efficiency, reduce risk, reduce the environmental impact (e.g. damage to natural habitats, less CO2 emissions) and lead to positive economic benefits in terms of cost and time savings for the operator, manufacturers and wider supply chain. <s> BIB003
Underground utility networks typically carry telecommunication or electric cables, natural gas, potable water, and wastewater, but they can also take the form of underground oil pipelines or tunnel networks . Since the 90s, several studies have been carried out worldwide on the detection and identification of underground utilities. In Italy, a general lack of regulations and technical protocols in this matter has led to a chaotic and uncontrolled use of the subsurface for the location of utilities. Consequently, it is not uncommon for roadworks to be slowed down by damage to unexpected utility pipes. A first national regulatory impulse dates back to 1999, when the Italian Ministry of Public Works promulgated a directive encouraging public administrations to adopt an urban plan for the management of underground utilities. To this aim, several large Italian municipalities, such as Milan and Venice, have already drafted their own urban plans. This has given an important impulse to national research in the field of GPR for detecting and classifying underground utilities. In this framework, among the first initiatives were the European co-funded projects GIGA and ORFEUS [98] , BIB003 . Their main objectives included the design and manufacture of an improved, user-friendly GPR capable of providing highly detailed information for no-dig installation of gas pipelines by means of horizontal directional drilling (HDD). The possibility of gathering and interpreting the data in real time plays a crucial role in the optimization of costs and time. In this sense, different studies proposing new integrated approaches for the processing of 3-D GPR data have been developed , BIB002 . These approaches make use of typical seismic algorithms and build prestack reflection gathers by depth migration at different propagation velocities.
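The velocity-scan idea behind these automated detection schemes can be illustrated with a toy example: a point diffractor is imaged at several trial velocities, and the velocity that best stacks the energy along the diffraction hyperbola is retained. This is a simplified stand-in for the prestack angle-gather correlation analysis of BIB002, not the cited authors' actual algorithm; all geometry and velocity values below are illustrative assumptions:

```python
import numpy as np

def hyperbola_twt(x, x0, depth, v):
    """Two-way travel time (ns) of a point diffractor at (x0, depth),
    for antenna positions x (m) and velocity v (m/ns)."""
    return 2.0 * np.sqrt(depth ** 2 + (x - x0) ** 2) / v

def velocity_scan(data, x, dt, x0, depth, velocities):
    """Stack |amplitude| along the diffraction hyperbola for each trial
    velocity; the best-focusing velocity maximizes the stacked energy."""
    scores = []
    for v in velocities:
        idx = np.round(hyperbola_twt(x, x0, depth, v) / dt).astype(int)
        ok = idx < data.shape[0]          # keep samples inside the record
        scores.append(np.abs(data[idx[ok], np.nonzero(ok)[0]]).sum())
    return velocities[int(np.argmax(scores))]

# Synthetic radargram: a single diffractor, true velocity 0.1 m/ns
dt = 0.1                                  # time sampling, ns
x = np.arange(0.0, 10.0, 0.1)             # antenna positions, m
data = np.zeros((1100, x.size))
t_true = hyperbola_twt(x, 5.0, 1.5, 0.1)
data[np.round(t_true / dt).astype(int), np.arange(x.size)] = 1.0

v_best = velocity_scan(data, x, dt, 5.0, 1.5, np.arange(0.06, 0.16, 0.01))
print(round(float(v_best), 2))            # recovers the true velocity, 0.1 m/ns
```

As in the cited work, recovering the local propagation velocity and detecting the target are two sides of the same focusing criterion.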
Such an operation allows both the position of scatterers and the propagation velocity of the EM wave to be estimated with high reliability. From a different perspective, the noninvasive and efficient detection of underground utilities can also play a crucial environmental role. Indeed, as the cost of energy and water resources keeps rising, early-stage location of leaks in underground pipes can avoid economic and environmental waste. In this framework, Crocco et al. BIB001 proposed a tomographic approach for detecting leaking pipes. The authors obtained detailed information about leaking metallic pipes by employing a "distorted" wave scattering model and generating synthetic GPR data with a 2.5-D FDTD forward modeling solver.
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Existing 1D and 2D models are used to simulate ground penetrating radar (GPR) field surveys conducted in a stratified limestone terrain. The 1D model gave good agreement in a simple layered section, accounting for multiple reflections, velocity variations and attenuation. The 2D F-K model used gave a good representation of the patterns observed due to edge diffraction from a fracture in limestone, although the model could not account for the attenuation caused by irregular blocks filling the fracture. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Abstract This work studies a methodology starting from georadar data that allows a semiquantitative evaluation of massive rock quality. The method is based on the concept that in good quality rock, most of the energy is transmitted, while in low quality rock, the energy is backscattered from fractures, strata joints, cavities, etc. When the energy loss due to spherical divergence and attenuation can be recovered by applying a constant spherical/exponential gain, the resulting energy function observed in the georadar section depends only on the backscattered energy. In such cases, it can be assumed that the amount of energy is an index of rock quality. Radar section interpretation is usually based on the reconstruction of reflected high-energy organized events. Thus, no consideration is given to backscattered not-organized energy produced by microfractures that greatly influences the geotechnical characteristics of the rock mass. In order to take into consideration all the backscattered energy, we propose a method based on the calculation of the average energy relative to a portion of predefined rock. The method allows a synthetic representation of the energy distributed throughout the section. 
The energy is computed as the sum of the square of amplitude of samplings contained inside cells of appropriate dimensions. The resultant section gives a synthetic and immediate mapping of rock quality. The consistency of the method has been tested by comparing georadar data acquired in travertine and limestone quarries, with seismic tomography and images of actual geological sections. The comparison highlights how effectively the energy calculated inside the cells give synthetic representation of the quality of rocks; this can result in maps where the high-energy values correspond to rock of poor quality and the low energy values correspond to a good quality region. The results obtained in this way can, in this case, be partly superimposed onto those of seismic tomography. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Ground penetrating radar (GPR) is a nondestructive measurement technique, which uses electromagnetic waves to locate targets or interfaces buried within a visually opaque substance or Earth material. GPR is also termed ground probing, surface penetrating (SPR), or subsurface radar. A GPR transmits a regular sequence of low-power packets of electromagnetic energy into the material or ground, and receives and detects the weak reflected signal from the buried target. The buried target can be a conductor, a dielectric, or combinations of both. There are now a number of commercially available equipments, and the technique is gradually developing in scope and capability. GPR has also been used successfully to provide forensic information in the course of criminal investigations, detect buried mines, survey roads, detect utilities, measure geophysical strata, and in other applications. 
Keywords: ground penetrating radar; ground probing radar; surface penetrating radar; subsurface radar; electromagnetic waves <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> The joint application of electromagnetic techniques for near-surface exploration is a useful tool for soil pollution monitoring and can also contribute towards describing the spatial distribution of pollutants. The results of a geophysical field survey that was carried out for characterizing the heavy metal and waste disposal soil pollution phenomena in the industrial area of Val Basento (Basilicata region, Southern Italy) are presented here. First, topsoil magnetic susceptibility measurements have been carried out for defining the spatial distribution of superficial pollution phenomena in the investigated area. Second, detailed and integrated measurements based on a high-resolution magnetic mapping and ground probing radar (GPR) profiling have been applied to investigate the subsurface in two industrial areas located in more polluted sites that were identified during the first phase. Our monitoring strategy discloses the way to rapidly define the zone characterized by high pollution levels deriving from chemical industries and traffic emissions and to obtain the way information about the presence of local buried sources of contamination. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Abstract A ground penetrating radar (GPR) survey was conducted across the Quaternary intramountain graben of the Norcia basin (Italy) in an effort to locate an active fault zone and to investigate the shallow geological structures. Measurements over an exposed (trenched) fault identify a radar signature consisting of hyperbolic diffractions in correspondence with the main fault's position.
The migrated profile shows a good spatial correlation with the known fault at a control site. The average wave velocity of radar impulses in the ground was obtained by comparing with the field scans of real traces two synthetic signals, one at the hanging wall and one at the footwall of the fault. This analysis made possible to estimate the thickness of the sedimentary layers involved in the fault mechanism and the stratigraphic throw of the fault itself. The combined use of GPR across the probable northern continuation of the fault, with the information obtained with the study in the exposed fault, was used to select the location of new trench excavations. The GPR, being a relatively easy, non-invasive and high-resolution technique, can thus be used in palaeoseismological investigations, particularly for a preliminary investigation where the geological context is poorly defined. <s> BIB005 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Three-dimensional assessment and modelling of fractured rock slopes is a challenging task. The reliability of the fracture network definition is of paramount importance for several engineering and geotechnical applications, and so far, different approaches have been proposed to improve the assessment procedure. A thorough knowledge of the actual fracture system is necessary to construct an accurate geometrical model of the rock mass and to determine block size distribution within the rock body. This paper describes the integration of diverse techniques used to define the rock mass fracture pattern, focusing on the most important fracture features, which are joint orientation, spacing, and persistence. A case study in the north of Italy was selected in order to show the potential of an integrated approach where surface and subsurface investigations are coupled. 
The rock surface was analysed by means of both standard geological mapping and terrestrial laser scanning. Ground penetrating radar surveys were conducted to image and map the discontinuity planes inside the rock mass and to estimate fracture persistence. The results obtained from the various investigation methodologies were employed to construct a model of the rock mass. This approach may lead to a better understanding of fracture network features, usually observed only on the rock surface. A careful analysis of block size distribution in a rock body can be of valuable help in several engineering and risk mitigation applications. <s> BIB006 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Abstract In this work we report a GPR study across a tectonic discontinuity in Central Italy. The surveyed area is located in the Castelluccio depression, a tectonic basin in the Central Apennines, close to the western border of the Mt. Vettore. Its West flank is characterised by a set of W-dipping normal faults, considered active and capable of generating strong earthquakes (M w = 6.5, Galli et al., 2008 ). A secondary fault strand, already studied with paleo-seismological analysis ( Galadini and Galli, 2003 ), has been observed in the Quaternary deposits of the Prate Pala alluvial fan. We first defined the survey site using the data available in literature and referring to topographic and geological maps, evaluating also additional methodologies, such as orthophoto interpretation, geomorphologic analysis and integrating all the information in a GIS environment. In addition, we made extensive use of GPR modelling, reproducing the geometric characteristics of the inferred fault area and interpreting the synthetic profiles to recognise local geophysical indications of faulting on the radargrams. 
Finally, we performed a GPR survey employing antennas with different frequencies, to record both 2D Common Offset profiles and Common Mid Point (CMP) gathers for a more accurate velocity estimation of the investigated deposits. In this paper we focus on the evaluation of the most appropriated processing techniques and on data interpretation. Moreover we compare real and synthetic data, which allow us to better highlight some characteristic geophysical signatures of a shallow fault zone. <s> BIB007 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> High-frequency electromagnetic (EM) surveys have shown to be valuable techniques in the study of soil water content due to the strong dependence of soil dielectric permittivity with moisture content. This quantity can be determined by analyzing the average value of the early-time instantaneous amplitude of ground-penetrating radar (GPR) traces. We demonstrate the reliability of this approach to evaluate the shallow soil water content variations from standard fixed-offset GPR data by simulating the data over different likely EM soil conditions. A linear dipole model that uses a thin-wire approximation is assumed for the transmitting and receiving antennas. The homogenous half-space model is used to calculate the waveform instantaneous amplitude values averaged over different time windows. We analyzed their correlation with the soil surface dielectric parameters, and we found a clear inverse linear dependence on the permittivity values. Moreover, we evaluated how different kinds of noise affect this correlation, and we determined the influence of the electrical conductivity on the trace attributes. 
Finally, through a two-layered medium, we estimated the effect on the GPR signal of a shallow reflector, we analyzed how its presence can carry out inaccuracies in the soil surface dielectric permittivity estimation, and we determined the best time window to minimize these errors. <s> BIB008
Over the last decades, GPR has been used in a huge number of documented applications in the geological and environmental fields. Since a large part of the Italian territory is classified as seismically active, geological hazard analysis BIB003 is one of the most important topics tackled in this field. In such a framework, the discipline of palaeoseismology [103] can play a crucial role, since it exploits signs of ancient earthquakes, through stratigraphic analysis, to evaluate the geological hazard of a given territory. The first recognized Italian research activity on geological issues using GPR was carried out in the 90s by Pettinelli et al. BIB001 , with the aim of verifying the capability of one-dimensional (1-D) and 2-D EM models in reconstructing structural and stratigraphic soil features. The study gathers and cross-matches information coming from road cuts, scarp faces, and GPR surveys collected over a simple stratified limestone sequence, located near the city of Rovereto, in the South-Eastern Italian Alps. Both the 1-D and 2-D models proved to be in good agreement with the collected data, although the 1-D model showed more difficulty in predicting diffraction points, such as the fractures in the limestone. Orlando BIB002 , on the contrary, focused on the detection of low-quality rock areas, under the hypothesis that rock volumes containing fractures and cavities backscatter the energy transmitted by a GPR system. The author employed a GPR system equipped with a 200 MHz central frequency antenna in different geological contexts in the central part of the Apennine mountain chain. The results showed that GPR is effective in reconstructing the quality of the geological layers, depending on the central frequency employed. In the same research area, the study by Longoni et al. BIB006 is also worth mentioning. Concerning palaeoseismology, Pauselli et al.
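The cell-energy index proposed by Orlando BIB002, the sum of squared amplitudes inside rectangular cells of a gain-corrected radargram, can be sketched as follows; the section size, cell size, and synthetic "fractured" zone below are illustrative assumptions:

```python
import numpy as np

def energy_map(radargram, cell_rows, cell_cols):
    """Backscattered energy (sum of squared amplitudes) inside rectangular
    cells of a gain-corrected radargram; high values flag fractured,
    poor-quality rock volumes."""
    n_t, n_x = radargram.shape
    rows, cols = n_t // cell_rows, n_x // cell_cols
    trimmed = radargram[:rows * cell_rows, :cols * cell_cols]
    cells = trimmed.reshape(rows, cell_rows, cols, cell_cols)
    return (cells ** 2).sum(axis=(1, 3))

# Synthetic section: weak background plus a strongly scattering zone
rng = np.random.default_rng(0)
section = 0.1 * rng.standard_normal((512, 200))
section[100:200, 50:100] += rng.standard_normal((100, 50))  # "fractured" zone

emap = energy_map(section, 64, 50)   # 8 x 4 grid of cells
r, c = np.unravel_index(emap.argmax(), emap.shape)
print(int(r), int(c))                # hottest cell falls inside the fractured zone
```

The resulting map gives the synthetic, immediate representation of rock quality described in the cited work: high-energy cells correspond to poor-quality rock, low-energy cells to good-quality rock.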
BIB005 adopted GPR techniques coupled with trench works to obtain direct and detailed information about palaeoseismic structures near the town of Norcia, Italy. To this purpose, a ground-coupled GPR system was equipped with two antennas, with 100 and 300 MHz central frequencies of investigation, and employed over two areas located along the Norcia fault. GPR proved to be a very effective complement to former trench works. The authors found GPR highly applicable to better planning of trenching sites and to enhancing the geological information collected in neighboring areas. Similar goals and methodologies were adopted, with reliable outcomes, in BIB007 . More insights about the application of GPR in palaeoseismology can be found in . Alongside its abundant geological activity, Italy also has a strong agricultural tradition that has left, as heritage, more than 1.6 million agricultural establishments [109] spread over the whole territory of the country. In such a framework, it is clear that agricultural water management and soil water conservation play a crucial role, with GPR being a primary tool for water content sensing, owing to the influence exerted by water on the dielectric properties of soils. Di Matteo et al. BIB008 addressed the topic of relating shallow soil water content and surface dielectric parameters by performing numerical simulations. The authors showed a strong correlation, in particular an inverse linear dependence, between the dielectric constant and the average envelope amplitude of the early portion of the GPR signal. In a country like Italy, where the agricultural tradition and the need for food quality assurance meet the difficulties of managing industrial expansion and its related toxic waste, a further critical issue is how to ensure direct, rapid, and noninvasive detection of soil pollution. In this field, Chianese et al. BIB004 developed a GPR-related case study.
With the purpose of characterizing the territory in terms of soil pollution levels, the authors made use of geophysical surveys performed in the industrial area of Val Basento, in the Region of Basilicata, in Southern Italy. In more detail, different magnetic devices were employed to measure the magnetic susceptibility and the gradient of the magnetic field, whereas a GPR system equipped with 200 and 400 MHz nominal frequency antennas was used to evaluate the subsurface EM behavior of the areas with higher magnetic susceptibility values. In general, the integrated use of magnetic and EM methods allowed the detection and characterization of buried pollutant objects and highly attenuating zones, probably related to polluted soils.
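The early-time attribute used by Di Matteo et al. BIB008, the average envelope amplitude of the first portion of each trace, can be sketched as below. The toy direct-wave traces and the 1/sqrt(eps_r) amplitude scaling are illustrative assumptions, meant only to reproduce the inverse trend with permittivity reported in the cited work:

```python
import numpy as np

def analytic_envelope(trace):
    """Instantaneous amplitude |analytic signal| via an FFT-based Hilbert
    transform (no SciPy required)."""
    n = trace.size
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

def early_time_amplitude(trace, dt_ns, window_ns):
    """Average envelope amplitude over the first `window_ns` of the trace."""
    n_win = int(window_ns / dt_ns)
    return analytic_envelope(trace)[:n_win].mean()

# Toy direct-wave traces whose amplitude shrinks as permittivity grows
t = np.arange(0.0, 20.0, 0.05)  # ns
vals = []
for eps_r in (4.0, 9.0, 16.0):
    amp = 1.0 / np.sqrt(eps_r)  # assumed amplitude scaling, for illustration
    trace = amp * np.exp(-(t - 2.0) ** 2) * np.cos(2 * np.pi * 0.5 * t)
    vals.append(early_time_amplitude(trace, 0.05, 5.0))
    print(eps_r, round(vals[-1], 4))  # attribute decreases with permittivity
```

In practice, the choice of the averaging window is critical, as BIB008 shows for the case of a shallow reflector contaminating the early-time signal.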
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> Forward modeling of ground penetration radar is developed using exact ray‐tracing techniques. Structural boundaries for a ground model are incorporated via a discrete grid with interfaces described by splines, polynomials, and in the case of special structures such as circular objects, the boundaries are given in terms of their functional formula. In the synthetic radargram method, the waveform contributions of many different wave types are computed. Using a finely digitized antenna directional response function, the radar crosssection of buried targets and the effective area of the receiving antenna can be statistically modeled. Attenuation along the raypaths is also monitored. The forward models are used: “1” as a learning tool to avoid pitfalls in radargram interpretation, (2) to understand radar signatures measured across various engineering structures, and (3) to predict the response of cultural structures buried beneath important archaeological sites in Japan. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> Subsurface georadar is a high-resolution technique based on the propagation of high-frequency radio waves. Modeling radio waves in a realistic medium requires the simulation of the complete wavefield and the correct description of the petrophysical properties, such as conductivity and dielectric relaxation. Here, the theory is developed for 2-D transverse magnetic (TM) waves, with a different relaxation function associated to each principal permittivity and conductivity component. In this way, the wave characteristics (e.g., wavefront and attenuation) are anisotropic and have a general frequency dependence. These characteristics are investigated through a plane-wave analysis that gives the expressions of measurable quantities such as the quality factor and the energy velocity. 
The numerical solution for arbitrary heterogeneous media is obtained by a grid method that uses a time-splitting algorithm to circumvent the stiffness of the differential equations. The modeling correctly reproduces the amplitude and the wavefront shape predicted by the plane-wave analysis for homogeneous media, confirming, in this way, both the theoretical analysis and the numerical algorithm. Finally, the modeling is applied to the evaluation of the electromagnetic response of contaminant pools in a sand aquifer. The results indicate the degree of resolution (radar frequency) necessary to identify the pools and the differences between the anisotropic and isotropic radargrams versus the source-receiver distance. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> A 2.5-D and 3-D multi-fold GPR survey was carried out in the Archaeological Park of Aquileia (northern Italy). The primary objective of the study was the identification of targets of potential archaeological interest in an area designated by local archaeological authorities. The second geophysical objective was to test 2-D and 3-D multi-fold methods and to study localised targets of unknown shape and dimensions in hostile soil conditions. Several portions of the acquisition grid were processed in common offset (CO), common shot (CSG) and common mid point (CMP) geometry. An 8×8 m area was studied with orthogonal CMPs thus achieving a 3-D subsurface coverage with azimuthal range limited to two normal components. Coherent noise components were identified in the pre-stack domain and removed by means of FK filtering of CMP records. Stack velocities were obtained from conventional velocity analysis and azimuthal velocity analysis of 3-D pre-stack gathers. Two major discontinuities were identified in the area of study. 
The deeper one most probably coincides with the paleosol at the base of the layer associated with activities of man in the area in the last 2500 years. This interpretation is in agreement with the results obtained from nearby cores and excavations. The shallow discontinuity is observed in a part of the investigated area and it shows local interruptions with a linear distribution on the grid. Such interruptions may correspond to buried targets of archaeological interest. The prominent enhancement of the subsurface images obtained by means of multi-fold techniques, compared with the relatively poor quality of the conventional single-fold georadar sections, indicates that multi-fold methods are well suited for the application to high resolution studies in archaeology. <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> Abstract This paper deals with the application of two different processing methods of the georadar data aimed at improving the results in the case of bad quality data. The georadar data are referred to two areas located in the Axum archaeological park (Ethiopia) and were acquired prior to the reinstallation of the returned Stele from Italy to the Ethiopian Government. In the area the schist formation is covered by an outcropping sandy silt formation about 6–8 m thick. The archaeological excavations, performed before the georadar data acquisition, revealed that tombs and catacombs were dug into the superficial layer. Because the complexity of the georadar data interpretation based on standard data processing, some of the collected measured data are also processed by an innovative microwave tomographic approach which permits to achieve clearer diagnostic results with respect to the classic radaristic techniques in 2D and 3D representation. 
We take into account the data acquired for the East stele 2 with 100 MHz antenna and in the parking area of the archaeological park with 200 MHz antenna. The data were acquired on profiles 1 m apart. Comparing the data processed with the two different approaches, we obtained an improvement of the vertical resolution and of the quality of image on time slices using the tomographic approach compared to the results obtained with the classic radar one. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> Abstract A fast and efficient subsurface radar imaging procedure, based on a multi-channel cart system, has been developed and tested within the framework of a large-scale archaeological investigation project in northern Italy. The tested cart comprises 14 closely-spaced dipoles, rotated by 45° with respect to the dragging direction, and allows unidirectional scanning operations. Using this approach, an area of approximately 75 000m 2 was surveyed daytime via recording of a dense grid of about 490km of radar profiles. Geo-referencing of the scanning trajectories was achieved operating a separate on-board differential Global Positioning System in real-time kinematic mode. In this configuration the final positioning error of the radar sweeps was less than 0.05m. The large amount of collected data, of the order of tens of GBytes, was processed, using an open-source software package, on a workstation-based environment. A set of specific codes was developed to fully automate the data processing and the image generation procedure. Critical steps during code development were the integration of positioning and radar data, the referencing of the single radar sweeps and the correction for changes in the spectral amplitude of the different channels. 
The processed data volume displays high signal coherency and reveals several well-defined reflectors, clearly visible both on vertical profiles and horizontal time slices. The plan of the Roman settlement could be revealed in detail proving the potential of the tested approach for assisting high-resolution archaeological investigations of large areas. <s> BIB005
GPR has earned wide recognition in the archaeological community over the past decades. From the 70s until now, burial tombs, historic buried chambers and graves, campsites, and pit abodes have been detected through GPR methods . The interpretation of the collected GPR data has often been supported by simulations based on 2-D or 3-D models BIB001 . Statistically, Italy is the country with the highest number of UNESCO "World Heritage" sites [114] . Nevertheless, between 2001 and 2011, the funds for culture allocated by the Ministry for Cultural Heritage and Activities suffered a reduction of about 20%. In such a framework, GPR can play a key role thanks to its well-known nondestructive and cost-effective features. Therefore, a lively research community focused on developing or improving methodologies for archaeological GPR surveys is not surprising. Pipan et al. BIB003 performed a study addressing the tasks of locating buried targets in archaeological areas and of testing 2-D and 3-D multifold (MF) methods for characterizing the shape and dimensions of unknown objects. To this purpose, the authors thoroughly surveyed an area situated in the archaeological park of Aquileia, in Northern Italy, by means of common midpoint (CMP) analyses over a wide offset range. A 3-D MF data acquisition was finally performed, yielding an increase in the signal-to-noise ratio. Such a methodology led to the indication of potential archaeological targets buried below the surface. Similar goals were pursued in the work of Basile et al. , who employed GPR methods to characterize in detail the shallow high-attenuation layers of an urban area presumably containing buried archaeological structures, located near the town of Lecce, in Southern Italy.
While GPR did not yield reliable information about the position of historical walls made of the same calcarenite, due to the weak EM contrast, it showed good performance in detecting and reconstructing the shape and size of a barrel-vault cavity, which was later confirmed by excavations. Negri and Leucci applied GPR methods combined with ERT to assess the possible presence of voids and cavities in the subsurface of the Temple of Apollo in Hierapolis, in the Lycus Valley, Western Turkey. 3-D GPR imaging allowed the detection of artifacts located beneath the Temple of Apollo, while 2-D ERT imaging enabled the verification of an active fault, as suggested by former geological, geomorphological, and palaeoseismic studies. With regard to the same archaeological site of Hierapolis, in Turkey, a similar integrated geophysical approach was adopted by Nuzzo et al. . Nevertheless, to the best of the authors' knowledge, the first example in the Italian literature of a multimethod geophysical approach applied to archaeological surveys dates back to 1999, when Sambuelli et al. carried out integrated geophysical inspections of a Roman archaeological site near the town of Biella, in Northern Italy. Orlando and Soldovieri BIB004 proposed two different processing methods for providing reliable interpretation of poor-quality datasets. Such methods were applied to the archaeological case of the relocation of a Stele in the Ethiopian archaeological park of Axum, and consisted of a classic processing scheme and a microwave tomographic approach. By comparing these two approaches, the authors were able to improve the quality of the information coming from the 100 and 200 MHz GPR antennas. Difficulties in GPR data interpretation can also be overcome by making use of EM numerical simulation, as proposed in BIB002 and BIB001 . In general, one of the main issues affecting archaeological GPR surveys is the need for high-resolution data.
This can be a critical point with regard to the optimization of time and costs. Francese et al. BIB005 proposed a possible solution to this problem by using a multichannel GPR system mounted on a cart and equipped with 14 antennas with a central frequency of 400 MHz. This setup allowed the authors to survey in half a day a considerably wide area (around 75,000 m²) located in Northern Italy, in the archaeological site of "Le pozze," where the remains of a Roman village were known to be buried. The data were then processed, with particular regard to data georeferencing by GPS. In the end, the boundaries of the buried structures were detected, thereby allowing a comprehensive map of the whole archaeological site to be drawn.
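At the core of all such interpretations is the conversion of two-way travel times into depths through the relative permittivity of the medium. A minimal sketch of this standard relation follows (function names are illustrative, not taken from any cited toolchain):

```python
import math

C = 0.2998  # free-space EM wave speed, m/ns


def wave_velocity(eps_r: float) -> float:
    """Propagation velocity (m/ns) in a low-loss medium of relative permittivity eps_r."""
    return C / math.sqrt(eps_r)


def reflector_depth(twt_ns: float, eps_r: float) -> float:
    """Depth (m) of a reflector from its two-way travel time (ns)."""
    return wave_velocity(eps_r) * twt_ns / 2.0
```

For instance, a hyperbola apex at 20 ns in a medium with eps_r = 9 corresponds to a reflector at roughly 1 m depth; misjudging the permittivity propagates directly into the depth estimate, which is why CMP-style velocity analyses such as those above matter.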
F. Glaciology
It is known from the literature that dry snow and ice are the geological media providing the best wave-propagation performance for GPR pulses with frequencies above approximately 1 MHz. Indeed, these media show a very low attenuation rate (low conductivity) for such pulses and an absence of relaxation processes (negligible imaginary part of the permittivity). Penetration depths nowadays reach the order of kilometers BIB003 . Besides, GPR is considered an effective tool for evaluating the glacier subsurface, due to the mostly horizontal and continuous configuration of its layers, which provides reflection patterns of easy interpretation BIB003 . With respect to the Italian territory, in 1993 almost 1400 glaciers were counted on the Alpine arch, for an overall area of interest of about 608 km² . The Alpine glaciers are mostly classified as temperate glaciers, which involves a high seasonal variability in terms of snowmelt runoff; this, in turn, guarantees water provision in dry and warm seasons. Therefore, important research contributions in the field of glaciological applications of GPR can be found in the literature. A good level of knowledge about the physical properties of glaciers, such as depth, density, and structural configuration, has proved helpful not only in public safety (e.g., avalanche prediction, see Section II-G.2), but also in environmental applications (e.g., climate change monitoring), energy supply (e.g., hydropower production), and agricultural issues (availability of water sources for irrigation). As far as the density of the media is concerned, it is worth mentioning the study carried out by Godio , who used different GPR systems with antenna central frequencies spanning from 500 to 1500 MHz on three different Alpine sites. The author employed the collected data to test the main theoretical relationships between the dielectric properties and the density of dry snow.
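The dielectric–density relationships referred to here can be sketched compactly. The snippet below uses one widely cited Kovacs-type empirical relation, eps_r = (1 + a·rho)², with rho in g/cm³ and coefficient a = 0.845; the coefficient value and the function names are assumptions for illustration, not the formulas of any specific study cited above. A reflection-coefficient step of the kind used in amplitude analyses is also included:

```python
import math

C = 0.2998  # free-space EM wave speed, m/ns


def permittivity_from_velocity(v: float) -> float:
    """Relative permittivity from the wave speed v (m/ns) in a low-loss medium."""
    return (C / v) ** 2


def dry_snow_density(v: float, a: float = 0.845) -> float:
    """Dry-snow density (g/cm^3) from wave speed, inverting the Kovacs-type
    empirical relation eps_r = (1 + a * rho)^2 (coefficient a is an assumption)."""
    return (math.sqrt(permittivity_from_velocity(v)) - 1.0) / a


def layer_permittivities(eps_top: float, refl_coeffs: list) -> list:
    """Propagate permittivity downward through a layer stack from the series of
    interface reflection coefficients R = (sqrt(e1) - sqrt(e2)) / (sqrt(e1) + sqrt(e2))."""
    eps = [eps_top]
    for r in refl_coeffs:
        s = math.sqrt(eps[-1]) * (1.0 - r) / (1.0 + r)
        eps.append(s * s)
    return eps
```

Different empirical coefficients appear in the literature, so in practice a is best calibrated against snow-pit or TDR measurements, as done in the studies discussed here.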
The results show a good predictive capability of GPR in mapping the vertical density profile of dry snow, whereas the author argues that further work is needed to resolve the micro-structure of the snow. In addition, Previati et al. BIB004 faced the same issue with similar purposes using a different approach. Indeed, a combined use of GPR and TDR was tested in this case to evaluate some physical properties of the snowpack. In the survey site, namely "Cime Bianche," close to the Ventina glacier, Italy, a pulsed GPR system with a central frequency of 900 MHz was employed together with a TDR, which was helpful in calibrating the radar measurements. The results showed an accurate assessment of the snow depth, whereas statistical and geostatistical analyses demonstrated the need for high-density data collection, which highlighted the low applicability of traditional methods. More recently, Forte et al. BIB005 have focused their efforts on a reflection amplitude analysis with the aim of recognizing the nature of the subsurface layers (snow, firn, or ice) with GPR. The proposed method was developed on the basis of synthetic data, and then tested in the field, over the Glacier of Mt. Canin (South-Eastern Alps), by employing a GPR system equipped with a 250 MHz shielded antenna. The authors reliably assessed the dielectric properties of the layers, which were related to their densities.

The disposition of the Ottawa Treaty gave a concrete impulse to research and industrial activities for developing more effective technologies in landmine detection BIB002 . In such a framework, GPR has been found to be an effective tool in reducing the false alarm ratio (FAR) affecting the most commonly employed devices, such as EM induction metal detectors (MDs). Therefore, more than merely detecting the targets, the GPR technology can accomplish the crucial task of classifying them by interpreting their EM response BIB003 .
Italy plays an important role in such a scenario, also due to the presence of the Joint Research Centre (JRC) in Ispra, in the district of Varese, Northern Italy. Here, a test site for unexploded ordnance (UXO) detection was arranged for the validation of a handheld system developed in the context of the Handheld Operational Demining System (HOPE) project, promoted by the European Union in the late 90s. The HOPE system was among the first to bring a multisensor approach to demining, involving EM and magnetic sensors BIB001 .
G. Demining and Public Safety
On the basis of this contribution, several Italian works can be found in the literature concerning the development of multisensor systems for detecting and characterizing landmines in humanitarian activities. Alli et al. BIB002 reported on the data processing approach for the GPR array involved in an integrated system called DEMAND, which also included an MD. The aim of the study was to reduce the FAR of the MD device by integrating the GPR data. To this purpose, a vehicle-mounted, densely sampled ultra-wide-band (UWB) array was developed, which allowed 3-D imaging of the subsurface and evaluation of the full polarization matrix. The data processing scheme was worked out on the basis of field tests performed at a test site in Sarajevo, Bosnia-Herzegovina, on a selection of antipersonnel and antitank mines. The authors were able to provide a high level of characterization of the mine-like targets. As a result, the GPR application yielded a 30% reduction of the FAR with respect to the use of the MD alone. Balsi et al. BIB004 adopted an inverse approach, based on the results of FDTD simulations, to return a more accurate detection of UXOs and landmines. GPR tests were performed by using a bistatic ground-coupled system with a 1 GHz central frequency antenna on a 1.3 m × 3.5 m × 0.5 m sand-filled box with several mine-like objects buried beneath its surface. The collected data were then compared with those achieved by performing FDTD simulations with the gprMax software . The results highlighted good agreement between real and synthetic data, thereby proving the reliability of GPR in detecting buried metallic and nonmetallic targets. On the other hand, the authors noted the need for a multisensor analysis to achieve an effective method for classifying the detected objects.
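The decision-level idea behind such MD–GPR integration can be sketched very simply: keep only metal-detector alarms that are spatially corroborated by a GPR detection, discarding the rest as likely clutter. This is an illustrative toy fusion rule, not the actual DEMAND processing chain; all names are hypothetical:

```python
def fuse_detections(md_hits, gpr_hits, max_dist=0.15):
    """Keep only metal-detector alarms (x, y) that have a GPR detection within
    max_dist (m); uncorroborated MD alarms are treated as likely false alarms."""
    confirmed = []
    for mx, my in md_hits:
        if any((mx - gx) ** 2 + (my - gy) ** 2 <= max_dist ** 2
               for gx, gy in gpr_hits):
            confirmed.append((mx, my))
    return confirmed
```

Real systems replace the hard distance threshold with probabilistic or feature-level fusion, but even this crude rule illustrates how a second sensor can trade a small detection risk for a large FAR reduction.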
A more recent attempt to realize a hardware and software system capable of automatically detecting and recognizing unexploded ordnance was conducted by Nuzzo et al. BIB005 . The authors presented the system architecture and the first results of laboratory and on-site tests of a densely sampled GPR array, within the research project TIRAMISU. The antenna array has a multichannel configuration yielding an approximate survey width of 1.3 m in a single pass, and a fast 3-D reconstruction of the surveyed volume. A real-time processing algorithm was developed, and a number of tests were performed on canonical targets, such as metal pipes. At the current state of the study, encouraging results have been achieved for the first device prototype, although the integration of GPR and MD still needs to be improved. 2) Forensics and Public Safety: Police agencies and rescue operators frequently need to carry out surveys for locating bodies hidden or buried underneath surfaces in a quick and noninvasive manner. Common applications of GPR in this field are the location of graves, the recognition of human remains, and the detection of marks of former excavations BIB001 . GPR can also be used for locating movements behind walls and detecting natural or artificial tunnels in the subsurface, for security and rescue purposes. It is also well known that the Italian territory is characterized by widespread mountainous areas whose hydrogeological setting is particularly unstable. To give an idea of the dimension of this issue, it is worth mentioning that around 70% of Italian municipalities are affected by landslide activities. This fact assumes particular importance considering that, after the Second World War, the Italian territory underwent wide urban and infrastructural expansion, even in unstable areas . From a public safety perspective, this implies two main issues.
First, in case of a landslide, there may be a need to detect people buried under debris very rapidly, with the highest possible accuracy. Second, since the Italian territory hosts several of the most frequented skiing resorts in Europe, the risk of persons being buried by avalanches is extremely serious. This pushes the authorities to seek ever more rapid and effective technologies capable of locating bodies in time. In this framework, it is evident that GPR can detect nonmetallic objects more effectively than other NDT methods, which are typically sensitive to the magnetic field. A review of GPR applications for the detection of victims buried or trapped under snow or debris can be found in Crocco and Ferrara BIB006 . Another worthwhile Italian contribution in the forensic field is the multiarray tomographic approach proposed by Soldovieri et al. BIB003 for through-wall imaging (TWI) using GPR. TWI exploits EM waves at microwave frequencies for detecting hidden bodies. Such an application can be useful in rescue as well as in law enforcement and antiterrorism operations. The authors presented an approach for a multiarray configuration consisting of a combination of the data collected by each single array. Both direct and inverse processes were run on synthetic data representing known targets hidden within a room of fixed geometry. The approach showed good reliability in detecting and localizing hidden objects and their complex geometry.
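In collapsed-structure scenarios, survivor localization typically relies on the slow Doppler/phase modulation that breathing imposes on an otherwise static radar return. A minimal sketch of that signal-processing step follows; the function name, parameters, and synthetic data are illustrative assumptions, not any cited system's pipeline:

```python
import numpy as np


def dominant_breathing_rate(phase, fs):
    """Estimate the dominant periodicity (Hz) of a slow radar phase signal,
    as a proxy for respiration-induced Doppler modulation of a static return."""
    x = np.asarray(phase) - np.mean(phase)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs > 0.1) & (freqs < 1.0)  # plausible human breathing band
    return freqs[band][np.argmax(spec[band])]


# synthetic example: a survivor breathing at 0.3 Hz, observed for 60 s at 10 Hz
t = np.arange(0.0, 60.0, 0.1)
rng = np.random.default_rng(0)
phase = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)
```

The 0.1–1.0 Hz band restriction is the key design choice: it rejects both DC drift and faster clutter, at the cost of missing atypical respiration rates.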
Selective survey on spaces of closed subgroups of topological groups

S(G) for compact G
The following two lemmas from are the basic technical tools in this area. The continuity is easy, but to prove the openness we need Lemma 1.2.

Lemma 1.2. Let G be a compact group and X ∈ S(G). Then the following subsets form a base of neighbourhoods of X in S(G): where U is a neighbourhood of the identity of G, N is a closed normal subgroup such that G/N is a Lie group, and x_1, . . . , x_n are arbitrary elements of X, n ∈ N.

In particular, if G is a compact Lie group, then Lemma 1.2 states that there is a neighbourhood N of X such that each subgroup Y ∈ N is conjugate to some subgroup of X. The key role in the proof of Lemma 1.2 is played by the Montgomery-Yang theorem on tubes BIB001 , see also [11, Theorem 5.4 from Chapter 2]. We recall that the cellularity (or Souslin number) c(X) of a topological space X is the supremum of cardinalities of disjoint families of open subsets of X. A topological space X is called dyadic if X is a continuous image of some Cantor cube {0, 1}^κ. The weight w(X) of a topological space X is the minimal cardinality of an open base of X.

Theorem 1.4 . For every compact group G, we have c(

Theorem 1.5 . Let a group G be either profinite or compact and Abelian. If

An Abelian group G is called Artinian if every increasing chain of subgroups of G is finite; every such group is isomorphic to the direct sum ⊕_{p∈F} C_{p^∞} ⊕ K, where F is a finite set of primes and K is a finite subgroup. An Abelian group G is called minimax if G has a finitely generated subgroup N such that G/N is Artinian.

Theorem 1.7 . For a compact Abelian group G, the space S(G) has an isolated point if and only if the dual group G^∧ is minimax.

S(G) for LCA G

The space S(R) is homeomorphic to the segment [0, 1]. By , S(R^2) is homeomorphic to the sphere S^4. For n ≥ 3, S(R^n) is not a topological manifold and its structure is far from being understood, see BIB002 .
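The Chabauty topology on S(G) is used throughout without being restated in this excerpt; for orientation, its standard definition (background, not taken from this text) can be written as follows:

```latex
% Chabauty topology on S(G), for G locally compact: subbasic open sets are,
% for K \subseteq G compact and U \subseteq G open,
\mathcal{O}_K = \{\, X \in S(G) : X \cap K = \emptyset \,\},
\qquad
\mathcal{O}'_U = \{\, X \in S(G) : X \cap U \neq \emptyset \,\}.
% With this topology S(G) is compact; for compact G it agrees with the
% topology induced by the Hausdorff distance on closed subsets.
```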
From Chabauty to local method.
A topological group G is called topologically simple if each closed normal subgroup of G is either G or {e}. Every topologically simple LCA-group is discrete, and either G = {e} or G is isomorphic to C_p. Following the algebraic tradition, we say that a group G is locally nilpotent (solvable) if every finitely generated subgroup is nilpotent (solvable). In [18, Problem 1.76], V. Platonov asked whether there exists a non-Abelian topologically simple locally compact locally nilpotent group. Now we sketch the negative answer to this question for locally solvable groups, obtained in . Let G be a locally compact locally solvable group. We take g ∈ G \ {e}, choose a compact neighbourhood U of g, and denote by F the family of all topologically finitely generated subgroups of G containing g. We may assume that G is not topologically finitely generated, so F is directed by the inclusion ⊂. For each F ∈ F , we choose A_F, B_F ∈ S(F) such that B_F ⊂ A_F, A_F and B_F are normal in F, A_F ∩ U ≠ ∅, B_F ∩ U = ∅, and A_F/B_F is Abelian. Since S(G) is compact, we can choose two subnets (A_α)_{α∈I}, (B_α)_{α∈I} of the nets (A_F)_{F∈F}, (B_F)_{F∈F} which converge to A, B ∈ S(G). Then A and B are normal in G and A/B is Abelian. Moreover, g ∉ B and A ∩ U ≠ ∅. If A ≠ G then A is a proper normal subgroup of G; otherwise G/B is Abelian. In BIB001 , the Chabauty topology was defined on some systems of closed subgroups of a locally compact group G. A system A of closed subgroups of G is called subnormal if
• A contains {e} and G;
• A is linearly ordered by the inclusion ⊂;
• for any subset M of A, the closure of the union of the members of M belongs to A, and the intersection of the members of M belongs to A;
• whenever A and B comprise a jump in A (i.e., B ⊂ A and no members of A lie between B and A), B is a normal subgroup of A.
If the subgroups A, B form a jump, then A/B is called a factor of G. The system is called normal if each A ∈ A is normal in G. A group G is called an RN-group if G has a normal system with Abelian factors.
Among the local theorems from BIB001 , one can find the following: if every topologically finitely generated subgroup of a locally compact group G is an RN-group then G is an RN-group. In particular, every locally compact locally solvable group is an RN-group. In 1941, see [21, pp. 78-83] , A.I. Mal'tsev obtained local theorems for discrete groups as applications of the following general local theorem: if every finitely generated subsystem of an algebraic system A satisfies some property P, which can be defined by some quasi-universal second order formula, then A satisfies P. In , Mal'tsev's local theorem was generalized to topological algebraic systems. The role played by the model-theoretical Compactness Theorem in Mal'tsev's arguments is taken over by a convergence of closed subsets. A net (F_α)_{α∈I} of closed subsets of a topological space X S-converges to a closed subset F if • for every x ∈ F and every neighbourhood U of x, there exists β ∈ I such that F_α ∩ U ≠ ∅ for each α > β; • for every y ∈ X \ F, there exist a neighbourhood V of y and γ ∈ I such that F_α ∩ V = ∅ for each α > γ. Every net of closed subsets of an arbitrary (!) topological space has an S-convergent subnet. If X is a Hausdorff locally compact space then S-convergence coincides with convergence in the Chabauty topology. 1.7 Spaces of marked groups. Let F_k be the free group of rank k with the free generators x_1, . . . , x_k and let G_k denote the set of all normal subgroups of F_k. In the metric form, the Chabauty topology on G_k was introduced in BIB002 in reply to Gromov's idea of topologization of some sets of groups . Let G be a group generated by g_1, . . . , g_k. The mapping x_i −→ g_i can be extended to a homomorphism f : F_k −→ G. With the correspondence G −→ ker f, G_k is called the space of marked k-generated groups. 
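A toy illustration of S-convergence (a sketch, using the standard fact that the closed subgroups of the real line are {0}, αZ with α > 0, and R itself):

```latex
% S-limits in the space of closed subgroups of (\mathbb{R},+):
n\mathbb{Z} \;\xrightarrow{\,S\,}\; \{0\} \quad (n\to\infty):
\ \text{the neighbourhood } \bigl(y-\tfrac{|y|}{2},\, y+\tfrac{|y|}{2}\bigr)
\text{ of } y\neq 0 \text{ misses } n\mathbb{Z} \text{ once } n > 2|y|;
\qquad
\tfrac{1}{n}\mathbb{Z} \;\xrightarrow{\,S\,}\; \mathbb{R} \quad (n\to\infty):
\ \text{every interval } (x-\varepsilon,\, x+\varepsilon)
\text{ meets } \tfrac{1}{n}\mathbb{Z} \text{ once } \tfrac{1}{n} < \varepsilon.
```

Both bullets of the definition are checked directly: 0 lies in every nZ, while each nonzero y is eventually avoided; dually, (1/n)Z eventually meets every nonempty open interval, and the second bullet is vacuous since the limit is all of R.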
A number of papers developing BIB002 are directed at understanding how large, in the topological sense, well-known classes of finitely generated groups are, and how a given k-generated group is placed in G_k; see BIB003 . Among the applications of G_k, we mention the construction of topologizable Tarski Monsters in BIB004 .
Selective survey on spaces of closed subgroups of topological groups <s> Segment topologies. <s> Metric spaces; Coarse spaces; Growth and amenability; Translation algebras; Coarse algebraic topology; Coarse negative curvature; Limits of metric spaces; Rigidity; Asymptotic dimension; Groupoids and coarse geometry; Coarse embeddability; Bibliography. <s> BIB001 </s> Selective survey on spaces of closed subgroups of topological groups <s> Segment topologies. <s> In this paper we define some ballean structure on the power set of a group and, in particular, we study the subballean with support the lattice of all its subgroups. If $G$ is a group, we denote by $L(G)$ the family of all subgroups of $G$. For two groups $G$ and $H$, we relate their algebraic structure via the ballean structure of $L(G)$ and $L(H)$. <s> BIB002
Let G be a topological group, let P_G be the family of all subsets of G, and let [G]^{<ω} be the family of all finite subsets of G. Each pair A, B of subfamilies of P_G closed under finite unions defines a segment topology on L(G), with a base consisting of the segments. These topologies are studied in in the following three cases. For each H ∈ L(G), let Σ(H) be some family of open subsets of G such that the following conditions are satisfied: • if U, V ∈ Σ(H) then U ∩ V contains some W ∈ Σ(H); • for every U ∈ Σ(H), there exists V ∈ Σ(H) such that U ∈ Σ(K) for each K ∈ L(G) with K ⊆ V; • ⋂_{U∈Σ(H)} U = H for each H ∈ L(G). Then the family {X ∈ L(G) : X ⊆ U}, U ∈ Σ(H), H ∈ L(G), is a base for the Σ-topology on L(G). Let τ denote the topology of G and let P_τ be the family of all subsets of τ. We assume that, for each H ∈ L(G), Θ(H) is some subset of P_τ such that the following conditions are satisfied: • for every α, β ∈ Θ(H), there is γ ∈ Θ(H) such that α < γ, β < γ (here α < β means that, for every U ∈ α, there exists V ∈ β such that V ⊆ U); • for every α ∈ Θ(H), there exists β ∈ Θ(H) such that if K ∈ L(G) and K ∩ V ≠ ∅ for each V ∈ β, then α < γ for some γ ∈ Θ(K); • for each x ∈ H and every neighbourhood V of x, there exists α ∈ Θ(H) such that x ∈ U and U ⊆ V for some U ∈ α. Then the family {X ∈ L(G) : X ∩ U ≠ ∅ for each U ∈ α}, where α ∈ Θ(H), H ∈ L(G), is a base for the Θ-topology on L(G). The upper bound of the Σ- and Θ-topologies is called the (Σ, Θ)-topology. A net (H_α)_{α∈I} converges in the (Σ, Θ)-topology to H ∈ L(G) if and only if • for any U ∈ Σ(H), there exists β ∈ I such that H_α ⊆ U for each α > β; • for any β ∈ Θ(H), there exists γ ∈ I such that H_α ∩ V ≠ ∅ for every V ∈ β and each α > γ. In , one can find characterizations of the groups G with compact and with discrete L(G) in some concrete (Σ, Θ)-topologies. 3.6. Hyperballeans of groups. Let G be a discrete group. The sets Fg, g ∈ G, F ∈ [G]^{<ω}, form the family of balls of the finitary coarse structure on G. For coarse structures and balleans see BIB001 and [50] . 
The finitary coarse structure on G induces a coarse structure on L(G) in which {X ∈ L(G) : X ⊆ FA and A ⊆ FX}, F ∈ [G]^{<ω}, is the family of balls centered at A ∈ L(G). The set L(G) endowed with this structure is called the hyperballean of G. Hyperballeans of groups, carefully studied in BIB002 , can be considered as asymptotic counterparts of Bourbaki uniformities.
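For a concrete feel for these balls (a sketch under the definitions above, writing G = Z additively so that a ball centered at A consists of the subgroups X with X ⊆ F + A and A ⊆ F + X):

```latex
% Finite vs infinite distance in the hyperballean of \mathbb{Z}:
2\mathbb{Z} \subseteq \{0,1,2\} + 3\mathbb{Z} = \mathbb{Z},
\qquad
3\mathbb{Z} \subseteq \{0,1\} + 2\mathbb{Z} = \mathbb{Z},
% so 2Z and 3Z lie in a common ball of finite radius, whereas
\mathbb{Z} \not\subseteq F + \{0\} = F
\quad \text{for every finite } F \subseteq \mathbb{Z}.
```

Thus 2Z and 3Z lie at finite distance from each other, while {0} and Z do not: the hyperballean of Z is unbounded.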
Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> A new catalyst component and its use with an organoaluminum compound, which component is a brown solid of high surface area and large pore volume comprising beta titanium trichloride and a small amount of an organic electron pair donor compound. This solid when used in conjunction with an organoaluminum compound to polymerize alpha-olefins produces product polymer at substantially increased rates and yields compared to present commercial, purple titanium trichloride while coproducing reduced amounts of low-molecular-weight and, particularly, amorphous polymer. Combinations of this new catalyst component and an organoaluminum compound can be further improved in their catalytic properties by addition of small amounts of modifiers, alone and in combination. Such combinations with or without modifiers show good sensitivity to hydrogen used as a molecular weight controlling agent. The combinations are useful for slurry, bulk and vapor phase polymerization of alpha-olefins such as propylene. <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> Introduction. The concept of isotopy, recently introduced(') by A. A. Albert in connection with the theory of linear non-associative algebras, appears to have its value in the theory of quasigroups. Conversely, the author has been able to use quasigroups(2) in the study of linear non-associative algebras. The present paper is primarily intended as an illustration of the usefulness of isotopy in quasigroup-theory and as groundwork for a later paper on algebras, but is bounded by neither of these aspects. The first two sections are devoted to the basic definitions of quasigroup and isotopy, along with some elementary remarks and two fundamental theorems due to Albert. Then there is initiated a study of special types of quasigroup, beginning with quasigroups with the inverse property (I. P. quasigroups). 
A system Q of elements a, b, * * * is called an I. P. quasigroup if it possesses a single-valued binary operation ab and there exist two one-to-one reversible mappings L and R of Q on itself such that <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> A waterproof pressure sensitive adhesive laminate is provided in which a flexible plastics backing sheet is coated with a bituminous adhesive composition containing a minor proportion of rubber or thermoplastic polymer. The backing sheet is reinforced with a mesh or a woven or non-woven fabric which is embedded in the sheet and provides substantial resistance to stretching. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> A compression device for exerting pressure on an arm, shoulder, and/or trunk of a patient in need thereof (for example, a patient with hyperalgia or recovering from surgery in which the lymphatic system is affected), including an arm compression hose, a shoulder part for exerting pressure on the shoulder and trunk area, and a band-shaped fastening means for positioning the shoulder part and exerting pressure on the shoulder part. The arm compression hose exerts a pressure that decreases from a maximum pressure at the wrist or hand to a minimum pressure near the shoulder end of the arm, where the minimum pressure is approximately 70% of the maximum pressure. One or more lining pockets can be constructed on the inner lining of the compression device, where each lining pocket can hold one or more compression pads to increase tissue pressure in one or more body areas in need thereof. The compression pads each can have a shape that approximately conforms to the shape of the body part to which it is applied. The shoulder part can also have a shape that approximately conforms to the contour of the shoulder/trunk area to which it is applied. 
In addition, compression pants can be prepared with lining pockets for receiving compression pads. In one embodiment, compression pants include one or more donut-shaped pads or equivalents thereof that are placed in one or more lining pockets, each of which surrounds one or more osteoma openings. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> for all a, x in 21. I t is clear that associative algebras are alternative. The most famous examples of alternative algebras which are not associative are the so-called Cayley-Dickson algebras of order 8 over $. Let S be an algebra of order 2 over % which is either a separable quadratic field over 5 or the direct sum 5 ©3There is one automorphism z—>z of S (over %) which is not the identity automorphism. The associative algebra O = 3~\~S with elements <s> BIB005 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> The Basic Framework. The Structure of Groups. Lie Groups. Representation of Groups--Principal Ideas. Representation of Groups--Developments. Group Theory in Quantum Mechanical Calculations. Crystallographic Space Groups. The Role of Lie Algebras. The Relationships Between Lie Groups and Lie Algebras Explored. The Three-Dimensional Rotation Groups. The Structure of Semi-Simple Lie Algebras. Representations of Semi-Simple Lie Algebras. Symmetry Schemes for the Elementary Particles. Appendices. References. Subject Index. <s> BIB006 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> This paper deals with the origins and early history of loop theory, summarizing the period from the 1920s through the 1960s. <s> BIB007 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> The aim of this paper is to offer an overview of the most important applications of Jordan structures inside mathematics and also to physics, up-dated references being included. 
For a more detailed treatment of this topic see - especially - the recent book Iordanescu [364w], where suggestions for further developments are given through many open problems, comments and remarks pointed out throughout the text. Nowadays, mathematics becomes more and more nonassociative, and my prediction is that in a few years nonassociativity will govern mathematics and applied sciences. Keywords: Jordan algebra, Jordan triple system, Jordan pair, JB-, JB*-, JBW-, JBW*-, JH*-algebra, Ricatti equation, Riemann space, symmetric space, R-space, octonion plane, projective plane, Barbilian space, Tzitzeica equation, quantum group, B\"acklund-Darboux transformation, Hopf algebra, Yang-Baxter equation, KP equation, Sato Grassmann manifold, genetic algebra, random quadratic form. <s> BIB008 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> In this paper, we study left ideals, left primary and weakly left primary ideals in LA-rings. Some characterizations of left primary and weakly left primary ideals are obtained. Moreover, we investigate relationships between left primary and weakly left primary ideals in LA-rings. Finally, we obtain necessary and sufficient conditions for a weakly left primary ideal to be a left primary ideal in LA-rings. <s> BIB009 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> The aim of this paper is to characterize left almost rings by congruences. We show that each homomorphism of left almost rings defines a congruence relation on left almost rings. We then discuss quotient left almost rings. At the end we prove analogues of the isomorphism theorems for left almost rings. <s> BIB010 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> Introduction to Lie Algebras and Representation Theory.
<s> BIB011
One of the endlessly alluring aspects of mathematics is that its thorniest paradoxes have a way of blooming into beautiful theories. Pure mathematics is, in its way, the poetry of logical ideas. Today mathematics, especially pure mathematics, is not the same as it was a hundred years ago. Many revolutions have occurred, and it has taken new shapes in the due course of time. Until recently the theory of rings and algebras was regarded exclusively as the theory of associative rings and algebras. This was a result of the fact that the first rings encountered in the course of the development of mathematics were associative (and commutative) rings of numbers and rings of functions, and also associative rings of endomorphisms of abelian groups, in particular, rings of linear transformations of vector spaces. This is a survey of one part of the theory of rings: precisely, the theory of rings which, although non-associative, are more or less connected with the theory of associative rings. More precise connections will be mentioned during the discussion of particular classes of rings. A major change took place in the mid-19th century when the concepts of non-associative rings and non-associative algebras were introduced. The theory of non-associative rings and algebras has evolved into an independent branch of algebra, exhibiting many points of contact with other fields of mathematics and also with physics, mechanics, biology, and other sciences. The central part of the theory is the theory of what are known as nearly-associative rings and algebras: Lie, alternative, Jordan, loop rings and algebras, and some of their generalizations. We briefly describe the origins of the theory of non-associative rings. The oldest non-associative operation used by mankind was plain subtraction of natural numbers. The first ever example of a ring that is non-associative is the octonions, constructed by John T. Graves in 1843. 
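The non-associativity (and non-commutativity) of plain subtraction mentioned above can be witnessed in a couple of lines; a trivial Python check:

```python
# Subtraction on the integers is neither associative nor commutative.
a, b, c = 7, 4, 2
left = (a - b) - c    # (7 - 4) - 2 = 1
right = a - (b - c)   # 7 - (4 - 2) = 5
assert left == 1 and right == 5
assert left != right          # associativity fails
assert a - b != b - a         # commutativity fails
```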
On the other hand, the first example of an abstract non-associative system was the Cayley numbers, constructed by Arthur Cayley in 1845. Later they were generalized by Dickson to what we know as Cayley-Dickson algebras. Later, in 1870, a very important non-associative class known as Lie theory was introduced by the Norwegian mathematician Sophus Lie. He employed a novel approach, combining transformations that preserve a type of geometric structure (specifically, a contact structure) and group theory, to arrive at a theory of continuous transformation groups . Since then, Lie theory has been found to have many applications in different areas of mathematics, including the study of special functions, differential and algebraic geometry, number theory, group and ring theory, and topology BIB011 . It has also become instrumental in parts of physics, for some Lie algebras arise naturally from symmetries in physical systems, and is a powerful tool in such areas as quantum and classical mechanics, solid state physics, atomic spectroscopy and elementary particles BIB006 . No doubt Lie theory is a fundamental part of mathematics. The areas it touches include classical, differential, and algebraic geometry, topology, ordinary and partial differential equations, and complex analysis, and it is an essential chapter of contemporary mathematics. A development of it is the Uniformization Theorem for Riemann surfaces; another manifestation is the Lorentz transformation underlying Einstein's special theory of relativity. The applications of Lie theory are astonishing. Moreover, in the 1890s the concept of the hyperbolic quaternion was given by Alexander Macfarlane; it forms a non-associative ring and suggested a mathematical footing for the spacetime theory that followed later. Furthermore, to the best of our knowledge, the first detailed discussion of alternative rings was started in 1930 by the German author Zorn BIB001 . 
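The Cayley-Dickson doubling mentioned above is easy to experiment with. The Python sketch below uses one common sign convention, (a,b)(c,d) = (ac − d*b, da + bc*) with conjugation (a,b)* = (a*, −b) (conventions vary across sources); it reproduces the familiar facts that the quaternions are associative but not commutative, while the octonions are not associative yet still satisfy the alternative law.

```python
from fractions import Fraction
from itertools import product

# Cayley-Dickson doubling: an element of the 2^n-dimensional algebra is a
# pair of elements of the 2^(n-1)-dimensional one; scalars are Fractions.
def is_scalar(x):
    return not isinstance(x, tuple)

def neg(x):
    return -x if is_scalar(x) else (neg(x[0]), neg(x[1]))

def conj(x):
    return x if is_scalar(x) else (conj(x[0]), neg(x[1]))

def add(x, y):
    return x + y if is_scalar(x) else (add(x[0], y[0]), add(x[1], y[1]))

def sub(x, y):
    return add(x, neg(y))

def mul(x, y):
    # (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))
    if is_scalar(x):
        return x * y
    (a, b), (c, d) = x, y
    return (sub(mul(a, c), mul(conj(d), b)), add(mul(d, a), mul(b, conj(c))))

def zero(level):
    return Fraction(0) if level == 0 else (zero(level - 1), zero(level - 1))

def basis(level, i):
    # i-th standard basis element of the 2^level-dimensional algebra
    if level == 0:
        return Fraction(1)
    h = 2 ** (level - 1)
    if i < h:
        return (basis(level - 1, i), zero(level - 1))
    return (zero(level - 1), basis(level - 1, i - h))

Q = [basis(2, i) for i in range(4)]   # quaternion basis 1, i, j, k
O = [basis(3, i) for i in range(8)]   # octonion basis e0, ..., e7

# Quaternions: associative but not commutative.
assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x, y, z in product(Q, repeat=3))
assert any(mul(x, y) != mul(y, x) for x, y in product(Q, repeat=2))

# Octonions: NOT associative ...
assert any(mul(mul(x, y), z) != mul(x, mul(y, z)) for x, y, z in product(O, repeat=3))
# ... but alternative: (xx)y = x(xy), e.g. for x = e1 + e5, y = e2 + e7.
x, y = add(O[1], O[5]), add(O[2], O[7])
assert mul(mul(x, x), y) == mul(x, mul(x, y))
```

Since associativity is trilinear, checking it on basis elements suffices; the search over basis triples is what locates a non-associating octonion triple.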
For more on this non-associative structure we refer the reader to BIB004 BIB005 . Another important class of non-associative structures was introduced in 1932-1933 by the German specialist Pascual Jordan in his algebraic formulation of quantum mechanics. Jordan structures also appear in quantum group theory, and exceptional Jordan algebras play an important role in recent fundamental physical theories, namely, in the theory of superstrings BIB008 . The systematic study of general Jordan algebras was started by Albert in 1946 . In addition, the study of loops started in the 1920s, and they were formally introduced for the first time in the 1930s BIB007 . The theory of loops has its roots in geometry, algebra and combinatorics: in algebra it appears through non-associative products, in combinatorics it is present in Latin squares of a particular form, and in geometry it is connected with the analysis of web structures . A detailed study of the theory of loops can be found in [3, 4, BIB002 . Historically, the concept of a non-associative loop ring was introduced in a paper by Bruck in 1944 BIB003 . Non-associative loop rings appear to have been little more than a curiosity until the 1980s, when the author found a class of non-associative Moufang loops whose loop rings satisfy the alternative laws. After the concept of loop rings (1944), a new class of non-associative rings was given by Yusuf in 2006 . Although the concept of the LA-ring was given in 2006, its systematic study and further developments were started in 2010 by Shah and Rehman in their paper . It is worth mentioning that this new class of non-associative rings, named left almost rings (LA-rings), was introduced after a huge gap of six decades since the introduction of loop rings. The left almost ring (LA-ring) is actually an offshoot of the LA-semigroup and the LA-group. 
It is a non-commutative and non-associative structure and, due to its peculiar characteristics, it has gradually been emerging as a useful non-associative class which, intuitively, should make a reasonable contribution to non-associative ring theory. By an LA-ring we mean a non-empty set R with at least two elements such that (R, +) is an LA-group, (R, .) is an LA-semigroup, and both left and right distributive laws hold. In , the authors have discussed the LA-ring of finitely nonzero functions, which is in fact a generalization of a commutative semigroup ring. Along the way, the first ever definition of an LA-module over an LA-ring was given by Shah and Rehman in the same paper . Moreover, Shah and Rehman discussed some properties of LA-rings through their ideals, and intuitively ideal theory would be a gateway for investigating the applications of fuzzy sets, intuitionistic fuzzy sets and soft sets in LA-rings. For example, Shah et al. have applied the concept of intuitionistic fuzzy sets and established some useful results. In , some computational work through Mace4 has been done and some interesting characteristics of LA-rings have been explored. Further, Shah et al. have promoted the concept of the LA-module and established some results on isomorphism theorems and direct sums of LA-modules. Recently, in 2014, Alghamdi and Sahraoui defined and constructed a tensor product of LA-modules, and they extended some simple results from the ordinary tensor product to the new setting. In 2014, Yiarayong BIB009 gave the new concepts of left primary and weakly left primary ideals in LA-rings; some characterizations of left primary and weakly left primary ideals were obtained. Moreover, in 2015, Hussain and Khan BIB010 characterized LA-rings by congruence relations; they proved that each homomorphism of left almost rings defines a congruence relation on left almost rings. For some more study of LA-rings, we refer the reader to .
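The defining conditions of an LA-ring are easily machine-checked on small examples, in the spirit of the Mace4 experiments mentioned above. The following Python sketch is a toy instance built from a standard trick: on Z_5 take a ⊕ b = b − a (mod 5) together with ordinary multiplication mod 5. Then ⊕ satisfies the left invertive law (a⊕b)⊕c = (c⊕b)⊕a while being non-associative and non-commutative, multiplication (being commutative and associative) also satisfies the left invertive law, and both distributive laws hold.

```python
# Brute-force check of the LA-ring identities on a small candidate example:
# R = Z_5 with a (+) b = b - a (mod 5) and ordinary multiplication mod 5.
n = 5
R = range(n)

def plus(a, b):
    return (b - a) % n

def times(a, b):
    return (a * b) % n

def left_invertive(op):
    # the left invertive law: (a op b) op c == (c op b) op a
    return all(op(op(a, b), c) == op(op(c, b), a)
               for a in R for b in R for c in R)

assert left_invertive(plus)          # (R, +) is an LA-groupoid
assert left_invertive(times)         # commutative semigroups satisfy it too
assert all(plus(0, b) == b for b in R)                    # 0 is a left identity
assert all(any(plus(a, b) == 0 for b in R) for a in R)    # inverses exist
# both distributive laws
assert all(times(a, plus(b, c)) == plus(times(a, b), times(a, c))
           for a in R for b in R for c in R)
assert all(times(plus(a, b), c) == plus(times(a, c), times(b, c))
           for a in R for b in R for c in R)
# and (+) is genuinely non-associative and non-commutative here
assert any(plus(plus(a, b), c) != plus(a, plus(b, c))
           for a in R for b in R for c in R)
assert any(plus(a, b) != plus(b, a) for a in R for b in R)
```

The example is degenerate in that its multiplication is commutative, but it does satisfy every condition in the definition quoted above, which is exactly the kind of finite model a tool like Mace4 searches for.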