doc_id: string (lengths 7-11) | appl_id: string (length 8) | flag_patent: int64 (0 or 1) | claim_one: string (lengths 12-36k)
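The schema above can be exercised with a minimal loading-and-validation sketch. This assumes a hypothetical CSV serialization of the records (the actual storage format and file name are not given here); the inline sample rows are abbreviated copies of two records from this table.

```python
import csv
import io

# Hypothetical CSV serialization of two of the records shown below;
# the claim texts are truncated for brevity.
SAMPLE = """doc_id,appl_id,flag_patent,claim_one
9519643,14740120,1,"1. A computer-implemented process for translating map labels..."
20020085756,09750602,0,"1. A method of recognizing at least one object..."
"""

def load_records(text):
    """Parse rows and validate them against the column summary above."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        row["flag_patent"] = int(row["flag_patent"])
        assert 7 <= len(row["doc_id"]) <= 11   # doc_id: string, 7-11 chars
        assert len(row["appl_id"]) == 8        # appl_id: fixed 8 chars
        assert row["flag_patent"] in (0, 1)    # int64 flag, values 0 or 1
        records.append(row)
    return records

records = load_records(SAMPLE)
print(len(records))  # 2
```

The quoted `claim_one` fields may contain commas, so a proper CSV parser (rather than a plain `split(",")`) is needed.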
9519643
14740120
1
1. A computer-implemented process for translating map labels, comprising using a computing device for: receiving an entity's map label in a first language that is to be translated into a second language; generating translation candidates for each n-gram in the entity's map label and using these translation candidates to generate translation candidate sequences for the map label; selecting a prescribed number of translation candidate sequences; extracting features from the selected translation candidate sequences and the entity's map label by using geospatial and linguistic context information; using a probabilistic classifier trained at least in part with the extracted features to rank the selected translation candidate sequences; and re-ranking the selected translation candidate sequences using neighboring proximity information of the entity's location to disclose the highest re-ranked translation candidate sequence as the translated map label.
8496591
11794182
1
1. A perfusion assessment system comprising: a device configured to: provide an echo-power signal indicative of a perfusion of a contrast agent in a body part under analysis, the contrast agent being administered as a bolus and undergoing a significant destruction during a passage of the contrast agent in the body part, and a processor configured to: associate the echo-power signal to a model function including a mathematical product between a bolus function indicative of the passage of the contrast agent without said destruction and a reperfusion function indicative of a reperfusion of the contrast agent in the body part following the destruction corresponding to a substantially constant inflow of the contrast agent, and estimate at least one perfusion indicator from the model function, the bolus function, or the reperfusion function.
8306684
12506366
1
1. An autonomous moving apparatus arranged to move autonomously in a surrounding environment, the autonomous moving apparatus comprising: a distance measuring sensor arranged to output detection waves, detect a distance between the autonomous moving apparatus and an object which has reflected a detection wave based on a reflection of the detection wave, and acquire distance information about the distance between the autonomous moving apparatus and the object which has reflected the detection wave; an inclination sensor arranged to detect a state of inclination of the distance measuring sensor; and a control unit arranged to: estimate a self-position in the surrounding environment and generate a global map of the surrounding environment based on the distance information when a detection result of the inclination sensor indicates that the distance measuring sensor is in a state having a constant inclination amount, regardless of whether the distance measuring sensor is inclined or not; stop estimating the self-position and generating the global map based on the distance information when the detection result of the inclination sensor indicates that the distance measuring sensor is in a state having a changing inclination amount; and resume estimating the self-position in the surrounding environment and generating the global map of the surrounding environment based on the distance information when the detection result of the inclination sensor indicates that the distance measuring sensor has changed from the state having the changing inclination amount to the state having the constant inclination amount.
9025837
13947095
1
1. An image processing method comprising: using a processor-based image acquisition device to acquire a main image of a scene and two or more further images approximately of a same scene within a time period; wherein said time period begins proximately before said acquiring of said main image and ends proximately after said acquiring of said main image; using one or more computing devices to perform the steps of: determining that a particular object in said main image has a defect; identifying two or more defect free objects, from the two or more further images, that correspond to the particular object; creating a corrective object that corresponds to the particular object by combining together said two or more defect free objects; performing a comparison between said particular object and said corrective object; and determining whether to initiate a further action based on said comparison.
7567712
10510078
1
1. A method of identifying endmember spectra values from a multispectral image comprising multispectral image data, where each multispectral data value is equal to a sum of mixing proportions of each endmember spectrum, said method including the steps of: processing the multispectral image data to obtain a multidimensional simplex having a number of vertices equal to the number of endmembers, the position of each vertex representing a spectrum of one of the endmembers, wherein processing of the data includes: providing starting estimates of each endmember spectrum for each image data value; estimating mixing proportions for each data value from estimates of the spectra of all the endmembers; estimating the spectrum of each endmember from the estimates of the mixing proportions of the spectra of all the endmembers for each image data value; and repeating the estimation of the mixing proportions and the estimation of the spectrum of each endmember until a stopping condition is met, wherein the stopping condition occurs when a relative change in a regularized residual sum of squares determined in the estimation steps attains a threshold, wherein the regularized residual sum of squares comprises a sum of a residual sum of squares and a measure of the size of the simplex, the residual sum of squares being reflective of a difference between the multispectral image data and a calculated value based on the estimated mixing proportions and estimated spectrum of each endmember.
20020085756
09750602
0
1. A method of recognizing at least one object in a digitized representation of an image, comprising: receiving the digitized representation of the image, the representation having a first resolution; creating a reduced-resolution version of the image responsive to the digitized representation of the image, the reduced-resolution version of the image having a second resolution lower than the first resolution; identifying a value of each of at least one recognition initial condition responsive to at least a portion of the reduced-resolution version of the image; and recognizing the at least one object represented in the digitized representation of the image responsive to the value of each of the at least one recognition initial condition identified.
20030179950
10128586
0
1. A method for orthocorrecting a satellite-acquired image, including distortion modeling for making positions of pixels in a corrected image and positions of pixels in an observed image correspond to each other and also resampling for interpolating intensities of said pixels in said observed image in accordance with results of said distortion modeling, wherein said distortion modeling comprises the following steps: (a) setting a plurality of control planes of different elevations relative to a control plane of a zero elevation of a reference rotating ellipsoidal earth model; (b) dividing said corrected image into plural blocks by lattices of equal intervals; (c) determining, with respect to each of said pixels in said corrected image, a corresponding one of said blocks; (d) determining two of said control planes, said two control planes sandwiching an elevation value of said position of said pixel above and below said elevation value in the thus-determined block; (e) calculating two pixel positions in said observed image, which correspond to said elevation value of said pixel position in said block, by using respective distortion model formulas determined as bilinear functions with respect to said two control planes; (f) linearly interpolating the thus-calculated two pixel positions by said elevation value of said pixel position in said corrected image to determine said pixel position in said observed image, which corresponds to said pixel position in said corrected image; and (g) repeating said steps (c) to (f) until pixel positions in said observed image are all determined corresponding to said individual pixel positions in said corrected image.
20140072238
14077109
0
1. A method of decoding an image, the method comprising: obtaining information about whether to divide a coding unit parsed from a received bitstream of an encoded video; determining coding units of a hierarchical structure, based on the information about whether to divide a coding unit; determining whether a current coding unit among the determined coding units of the hierarchical structure comprises a region that deviates from a boundary of a current image; if the current coding unit is determined to not comprise the region that deviates from the boundary of the current image based on the determining whether the current coding unit comprises the region, parsing and decoding data regarding the current coding unit of the hierarchical structure; and if the current coding unit is determined to comprise the region that deviates from the boundary of the current image based on the determining whether the current coding unit comprises the region, determining sub coding units generated by dividing the current coding unit, that comprises the region that deviates from the boundary of the current image.
20170124715
14927248
0
1. A method of measuring a depth map for an object, comprising: positioning multiple projectors at respectively multiple angular positions relative to the object, wherein each of the projectors comprises color channels whose colors differ from that of others of the multiple projectors; positioning an image capture device relative to the multiple projectors, wherein the image capture device comprises a multispectral capture device having a capture sensitivity spectral range that includes the spectral range of the totality of colors of the multiple projectors; projecting a first combined color pattern onto the object using the multiple projectors, wherein each projector projects a color striped pattern whose resolution differs from that of others of the projectors and wherein the resolution of the color striped pattern is related to the angular position of the projector; capturing an image of the object with the first combined color pattern using the image capture device; and recovering the depth map of the object by calculations using the captured image.
8040549
11759017
1
1. An image processing apparatus, comprising: an image reading part configured to read an image of a document; and an image storage part configured to store image data read by the image reading part; wherein the image reading part includes a part configured to read a range of a part of the image of the document; the image processing apparatus further includes: a first calculating part configured to calculate image data of a one surface of the document based on a size of the image data of the range read by the image reading part; a second calculating part configured to calculate the number of pages of image data of the document which can be stored by the image storage part based on the result of calculation by the first calculating part, and a range setting part configured to set the range, wherein the image reading part reads the range as a range defined by a reading starting position and a reading ending position from the head end of the document in the reading direction, wherein the range setting part can set a plurality of the ranges, wherein the image reading part reads the plural ranges in a case where the plural ranges are set by the range setting part, wherein the first calculating part calculates, in a case where image data of the plural ranges are read by the image reading part, an image data size of the one surface of the document, based on sizes of the image data of the plural ranges wherein the first calculating part is configured to calculate image data of the one surface of the document based on the size of the image data of a first range read by the image reading part and the size of the image data of a second range read by the image data, the first range being different than the second range, and wherein the number of pages of image data which can be stored by the image storage part L is based on a memory free capacity of the image storage part N, the size of the image data of the first range m, the size of the image data of the second range n, an 
area of the document, and a sum of a reading area, such that L=N/((m+n)* document area/sum of the reading area).
8970896
14010565
1
1. A method comprising using at least one hardware processor for: analyzing text in a digital document, to identify a text segment referring to a figure of the digital document; mapping said text segment to said figure; identifying, in said text segment, reference to one or more non-grayscale colors of said figure, to determine a level of importance of said one or more non-grayscale colors to legibility of said figure; and printing said digital document in accordance with the level of importance.
7646332
11889197
1
1. A method comprising: determining in real-time a radar cross section (RCS) of an object, the determining comprising: defining a shooting window configured to be illuminated by radar signals; dividing the shooting window into a number of subgrids equal to a number of processing threads allocated to process the shooting window; dividing each of the subgrids into subcells equal to a whole number multiple greater than one of a number of parallel processors; illuminating the shooting window with radar signals; receiving radar signals reflected by the object within the shooting window; and assigning the reflected radar signals received from each subcell to a corresponding one of the parallel processors such that the assignment of parallel processors within a subgrid is evenly distributed amongst the parallel processors and such that the assignment of adjacent subcells to the same processor is avoided.
20030218780
10154546
0
1. A method for color correcting an original halftone bitmap image by a predefined color correction function to produce a color corrected halftone bitmap image comprising: providing an original halftone bitmap image; estimating the dot area percentage of the original halftone bitmap image in a set of sub-image blocks; calculating an aim dot area percentage, based on a predefined color correction function, for each sub-image block in said original halftone bitmap image; calculating the number of halftone bitmap image pixels to convert to on or off states to produce a modified original halftone bitmap image that has the aim dot area percentage, where said value is designated by N, for each sub-image block in said original halftone bitmap image; and converting N pixels in said original halftone bitmap image to either on or off states depending on whether the aim dot area percentage is greater or less than the dot area percentage of said original halftone bitmap image respectively, for each sub-image block in said original halftone bitmap image.
9600860
14786981
1
1. A method for performing super-resolution on an input image having low resolution, comprising steps of generating a training data set of descriptors, or retrieving from a storage a previously generated training data set of descriptors, the descriptors being extracted from regions of training images, the training images comprising low-resolution and corresponding high-resolution images, wherein each descriptor comprises a region identifier and a geometrical feature; dividing the input image into a plurality of input image patches, wherein the patches are smaller than the regions; for each input image patch, performing the steps of determining a defined number of nearest neighbor regions, the nearest neighbor regions being low-resolution regions from the training data set that have geometrical features that are most similar to a current input image patch; from each nearest neighbor region, extracting a plurality of example patches by dense sampling, wherein the dense sampling comprises sampling in regular intervals of r pixels and is independent from the image contents within the region, r being an integer, and collecting the example patches in an example patch data base that is specific for the current input image; determining from the example patch data base a low-resolution example patch, or a combination of two or more low-resolution example patches, that optimally match geometrical features of the current input image patch; and constructing a target high-resolution patch from one or more high-resolution patches corresponding to said one or more low-resolution example patches, according to the determined combination, wherein the target high-resolution patches for all input patches form a super-resolved image.
7899699
10981902
1
1. A method for adaptive forecasting, comprising: receiving a request to forecast a number of tickets that will be sold for a future airline flight, wherein the future airline flight will depart: from a departure market, at a departure time, on a day of the week, and in a month in a quarter; retrieving a first permanent component, the first permanent component being a historical average number of tickets that were sold over a first time period for past airline flights that have departed from the departure market at the departure time; retrieving a first trend component, the first trend component being based on a first trend difference between the first permanent component and a first previous permanent component, the first previous permanent component being a historical average number of tickets that were sold over a second time period for past airline flights that have departed from the departure market at the departure time, wherein the end of the second time period occurred before the end of the first time period, wherein the first trend difference is positive if the first permanent component is greater than the first previous permanent component, and wherein the first trend difference is negative if the first permanent component is less than the first previous permanent component; retrieving a first seasonal component, the first seasonal component being based on a first seasonal difference between the first permanent component and a historical average number of tickets that were sold over the first time period for past airline flights that have departed from the departure market at the departure time, and that have departed on the day of the week, in the month, or in the quarter, wherein the first seasonal difference is negative if the first permanent component is greater than the historical average number of tickets that were sold over the first time period for past airline flights that have departed from the departure market at the departure time, and that 
have departed on the day of the week, in the month, or in the quarter, and wherein the first seasonal difference is positive if the first permanent component is less than the historical average number of tickets that were sold over the first time period for past airline flights that have departed from the departure market at the departure time, and that have departed on the day of the week, in the month, or in the quarter; determining, using a computer, a first forecast of the number of tickets that will be sold for the future airline flight, the first forecast being based on the first permanent component, the first trend component, and the first seasonal component; determining whether the first forecast is valid; and if the first forecast is not valid, then: retrieving a second permanent component, the second permanent component being a historical average number of tickets that were sold over the first time period for past airline flights that have departed from the departure market at either the departure time or one or more other departure times; retrieving a second trend component, the second trend component being based on a second trend difference between the second permanent component and a second previous permanent component, the second previous permanent component being a historical average number of tickets that were sold over the second time period for past airline flights that have departed from the departure market at either the departure time or the one or more other departure times, wherein the second trend difference is positive if the second permanent component is greater than the second previous permanent component, and wherein the second trend difference is negative if the second permanent component is less than the second previous permanent component; retrieving a second seasonal component, the second seasonal component being based on a second seasonal difference between the second permanent component and a historical average number of tickets 
that were sold over the first time period for past airline flights that have departed from the departure market at either the departure time or the one or more other departure times, and that have departed on the day of the week, in the month, or in the quarter; wherein the second seasonal difference is negative if the second permanent component is greater than the historical average number of tickets that were sold over the first time period for past airline flights that have departed from the departure market at either the departure time or the one or more other departure times, and that have departed on the day of the week, in the month, or in the quarter, and wherein the second seasonal difference is positive if the second permanent component is less than the historical average number of tickets that were sold over the first time period for past airline flights that have departed from the departure market at either the departure time or the one or more other departure times, and that have departed on the day of the week, in the month, or in the quarter; and determining, using a computer, a second forecast of the number of tickets that will be sold for the future airline flight, the second forecast being based on the second permanent component, the second trend component, and the second seasonal component.
20060279800
11338807
0
1. An image processing apparatus for detecting a target area including at least a portion of an object from image data generated by capturing at least the portion of the object, the apparatus comprising: a map generation unit that acquires a map of scores in which each score, pertaining to a degree to which a corresponding unit area of the image data is likely to be contained in the target area, is associated with a position of the corresponding unit area in the image data; a temporary area arrangement unit that arranges a temporary area of a predetermined shape in a position on the map of scores, the position being determined in accordance with a predetermined condition; a target area detection unit that performs, at least once, at least one of: (1) processing of changing the position of the temporary area on the map of scores, on a basis of a distribution of scores of unit areas included in the temporary area, and (2) processing of changing a ratio of a length of the temporary area in a predetermined direction to a length of the temporary area in a direction orthogonal to the predetermined direction, on a basis of positions and scores of unit areas included in a predetermined adjacent area adjacent to the temporary area; and a target area determination unit that determines the temporary area as the target area when another predetermined condition is met, and that outputs the determined target area.
20050169652
10769524
0
1. In an electrophotographic printing device, a method of determining a toner development control parameter comprising: (a) receiving a print job that describes an image to be printed; (b) determining if a band of image lines in the image satisfies pre-determined criteria; and (c) determining a development control parameter value based, at least in part, on whether the band is determined to satisfy the pre-determined criteria.
20020035443
09803443
0
1. A method for calculating a multi-trace geometric attribute of a surface at multiple scales, the surface regularly gridded with grid points, comprising the steps of: selecting a window size; selecting a set of the grid points defining grid cells of the selected window size; calculating the geometric attribute using the traces at the set of the grid points; repeating selecting and calculating steps for sets of the grid points defining grid cells of different window sizes; and determining the window size whose calculations best represent the geometric attribute.
9426381
14925225
1
1. An apparatus comprising: a scaling circuit configured to generate a plurality of scaled frames each having a plurality of pixels with a first exposure in response to a first subset of a plurality of frames generated by a sensor; a luma circuit configured to generate an average luminance value for each of a plurality of pixels having a second exposure in each of a second subset of said plurality of frames generated by said sensor; and a blending circuit configured to generate a plurality of output frames, wherein (A) each of said pixels of said output frames are selected in response to one of said average luminance values and (B) portions of each of said output frames provide a gradual linear blend between said pixels having said first exposure and said pixels having said second exposure.
20030004722
09894898
0
1. A method for allocating memory in a speech recognition system comprising the steps of: acquiring a first set of data structures that contain a grammar, a word subgrammar, a phone subgrammar and a state subgrammar, each of the subgrammars related to the grammar; acquiring a speech signal; performing a probabilistic search using the speech signal as an input, and using the grammar and the subgrammars as possible inputs; and allocating memory for one of the subgrammars when a transition to that subgrammar is made during the probabilistic search.
20030212327
10305936
0
1. A method for facilitating breast cancer screening, comprising: acquiring raw ultrasound slices representing sonographic properties of a breast; forming a volumetric representation of said sonographic properties from said raw ultrasound slices; computing a two-dimensional thick-slice ultrasound image from said volumetric representation, said thick-slice ultrasound image representing said sonographic properties within a slab-like subvolume of the breast having a thickness greater than about 2 mm and less than about 20 mm; displaying said thick-slice ultrasound image to a user during a viewing session; computing a planar ultrasound image from said volumetric representation, said planar ultrasound image representing said sonographic properties along a substantially planar portion of the breast substantially nonparallel to said slab-like subvolume; and electronically displaying said planar ultrasound image to the user during the viewing session.
9013496
13526716
1
1. A method under control of one or more processors configured with a computer-readable memory device storing executable instructions, the method comprising: rendering points on surfaces in a scene that includes global light transport; determining attributes associated with each of the points, the attributes including at least a first portion of the attributes and a second portion of the attributes; training a first point regression function based on the first portion of the attributes to create a first trained model, the first point regression function mapping the first portion of the attributes to at least a first component of indirect light; and training a second point regression function based on the second portion of the attributes to create a second trained model, the second point regression function mapping the second portion of the attributes to at least a second component of indirect light.
20080170144
11972427
0
1. A method of assessing the prevalence of noise in an image signal forming the output of an image sensor having an array of pixels, comprising: providing a reference area including a pixel array which is shielded from incident light; defining at least one pair of pixels within the reference area; measuring a first difference in output between the pixels of said pair in a first image frame; measuring a second difference in output between the pixels of said pair in a second image frame displaced in time from said first image frame; and deriving a noise value from a determined change in said first and second measured differences between the first and second image frames.
20130097195
13682714
0
1. A system for measuring potential similarity of apparently diverse binary objects comprises: a reporting module which counts the number of locations of a non-transitory computer-readable similarity store which contain an object identifier of two apparently dissimilar binary objects as a measure of relative similarity; coupled to, the non-transitory computer-readable similarity store which contains at its locations, the object identifiers of any binary object which contains a string which has a pattern corresponding to the location; coupled to, a receiving module which deduplicates binary objects which are essentially similar in any one of name, size, checksum, date, time, source, or destination; coupled to, a string selection module which selects comparable strings from each of N binary objects; coupled to a signature determination module which determines a pattern signature for each string selected from the N binary objects; coupled to a similarity store access module which reads from and writes to a location of similarity store according to the pattern signature determined from a selected string; and a processor and memory containing executable instructions which controls the system to write an object identifier into a location of similarity store determined by the pattern signature determined for each string selected from each of N binary objects, wherein N is an integer number greater than three.
9082063
13944637
1
1. A method for handling a continuous feed web material image receiving medium in an image forming system, comprising: providing a continuous feed web material image receiving medium in an image forming system; providing a multi-mode media handling system in the image forming system, the multi-mode media handling system having at least two separately-selectable modes of operation for handling the continuous feed web material image receiving medium comprising: a nip-based media driving mode of operation in which at least two opposing rollers are positioned in a first position to face each other in a manner that forms a media driving nip, and a dual tensioned-web supporting mode of operation configured with the at least two opposing rollers positioned in a second position offset from each other in a transport direction with the at least two opposing rollers maintaining tensioned contact with the continuous feed web material image receiving medium; selecting one of the at least two separately-selectable modes of operation for handling the continuous feed web material image receiving medium; positioning the at least two opposing rollers according to the selected one of the at least two separately-selectable modes of operation for handling the continuous feed web material image receiving medium with the at least two opposing rollers maintaining tensioned contact with the continuous feed web material image receiving medium in each of the at least two separately-selectable modes of operation for handling the continuous feed web material image receiving medium; and executing image receiving medium handling operations in the image forming system once the at least two opposing rollers are positioned according to the selected one of the at least two separately-selectable modes of operation for handling the continuous feed web material image receiving medium.
7995133
11946264
1
1. A method of correcting an image signal generated by a charge coupled device (CCD) image sensor of an imaging system, the charge coupled device (CCD) image sensor comprising a plurality of effective pixels for producing the image signal and a plurality of optical black (OB) pixels for determining a dark reference of the image signal, the imaging system comprising a memory for storing a plurality of gamma correction curves, each of which has a respective correction factor to increase contrast in a dark portion of the image signal, the method comprising: measuring a plurality of gray scale values of the plurality of effective pixels and the plurality of optical black (OB) pixels in the CCD image sensor; estimating a contrast level of an object scene to be imaged using the plurality of measured gray scale values; and correcting the image signal using a corresponding gamma correction curve depending on the estimated contrast level of the object scene, wherein the estimating of the contrast level of the object scene comprises: determining whether a smear effect induced in the CCD image sensor is acceptable; counting an amount of effective pixels that are in a dark portion of the image signal, if the smear effect is acceptable, and estimating a contrast level of the object scene using the amount of the effective pixels in the dark portion of the image signal, if the smear effect is acceptable, and estimating the contrast level of the object scene according to the level of the smear effect, if the smear effect is not acceptable.
20100305979
12474472
0
1. A computerized method comprising: receiving data, via a processor, indicative of a vehicle make and model, the data comprising one or more make fields and one or more model fields; preparing, using the processor, the one or more model fields for translation by: applying one or more rules from a plurality of rules; and associating each field of the one or more model fields with a class from a list of classes; preparing, using the processor, the one or more make fields for translation by applying one or more rules from the plurality of rules; associating, using the processor, each field of the one or more make fields with one or more make model entries from a plurality of predetermined make model entries; associating, using the processor, each field of the one or more model fields with one or more make model entries from the plurality of predetermined make model entries; and automatically translating, using the processor, the data into one or more vehicle identifiers based on the associated make model entries.
5517640
07957393
1
1. A data retrieving computer apparatus for retrieving data satisfying a user request from data stored in a computer file, comprising: a term defining means having quantitatively defined therein fuzzy terms assigned to each item to be retrieved in a form of rate ranges, a previously set retrieving condition range for each of said terms and a distribution curve having a function associated therewith and representing a satisfaction data distribution within said retrieving condition range for each of said terms; a condition analyzing means responsive to an input fuzzy term of an item to be retrieved for analyzing an input retrieving condition equation including said input fuzzy term and converting said input retrieving condition equation into a value-based retrieving condition equation using information from said term defining means, said input fuzzy term included in said input retrieving condition equation being changed to numerical values in said value-based retrieving condition equation; a data retrieving means for retrieving data from said computer file in accordance with said value-based retrieving condition equation supplied from said condition analyzing means; a satisfaction degree evaluating means coupled to said data retrieving means, for calculating a satisfaction degree for each retrieved data from said data retrieving means using a corresponding said distribution curve, and for outputting a calculated said satisfaction degree; and a data narrowing means for narrowing said retrieved data from said data retrieving means to obtain narrowed data representing a sub-set of said retrieved data, said data narrowing means performing said narrowing using a same said value-based retrieving condition equation upon said retrieved data.
8625674
13754337
1
1. A method of estimating a motion vector of a current block, the method comprising: selecting at least one partition, which is compared with the current block for obtaining a motion vector difference, from among a plurality of neighboring partitions around the current block based on spatial similarities between the current block and the plurality of neighboring partitions; estimating a motion vector of the selected partition as the motion vector of the current block and generating the motion vector difference; and transmitting information on the motion vector difference and partition information for reconstruction of the motion vector of the current block, wherein the spatial similarities are obtained based on an average value of pixels of the current block and an average value of pixels of each of the neighboring partitions.
20150330146
14435888
0
1. A utility vehicle, in particular a firefighting vehicle ( 10 ), comprising an aerial apparatus like a turntable ladder ( 12 ) and/or an aerial rescue platform and lateral ground supports ( 16 ) that are movable between retracted positions and extended operating positions in which the ends of the supports ( 16 ) rest on the ground, characterized by a monitoring system ( 28 ) for monitoring the position of the vehicle ( 10 ), comprising: surveillance cameras ( 22 ) at the sides of the vehicle ( 10 ), each camera ( 22 ) being allocated to one support ( 16 ) to monitor the ground area ( 24 ) on which the end ( 20 ) of this support ( 16 ) rests in its operating position and to take a real-time image ( 32 ) of the respective ground area ( 24 ), and a visual display ( 30 ) presenting the images ( 32 ) of all cameras ( 22 ) at the same time in different screen areas, superposed by visual markings ( 34 ) representing expected operating positions of the supports ( 16 ).
20050031194
10636355
0
1. A method for synthesizing heads from 3D models of heads and 2D silhouettes of heads, comprising: generating a 3D statistical model from a plurality of heads, the 3D statistical model including a model parameter; acquiring a plurality of 2D silhouettes of a particular head; fitting the 3D statistical model to the plurality of 2D silhouettes to determine a particular value of the model parameter corresponding to the plurality of 2D silhouettes; and rendering the 3D statistical model according to the particular value of the model parameter to reconstruct the particular head.
20170257653
15057455
0
1. A method, comprising: receiving a plurality of video shots of a video, wherein each video shot includes one or more frames of the video; generating a hierarchy based on the plurality of video shots, wherein the hierarchy comprises a plurality of superclusters that include at least one of a plurality of shot clusters and describes relationships between the plurality of video shots, wherein each of the shot clusters includes at least one of the plurality of video shots; computing a value for a statistic for the video based on the hierarchy; and computing, by operation of a computer processor, an expected value for a metric based on the statistic.
8315438
12011808
1
1. A method of reproducing an image of an image file on an electronic map including the image file so that position information of the image file matches position information on the electronic map, the method comprising the steps: (a) receiving a selection of a target point to be observed on the electronic map on a display of a computer comprising a processor and obtaining position information of the target point; (b) setting a search range within the displayed electronic map; (c) searching for image files including an image of the target point within the set search range using the processor; and (d) displaying images of searched image files overlaid on top of the electronic map via the computer display; wherein step (b) comprises setting the search range within a predetermined radius range based on the target point; and wherein the predetermined radius range varies based on an altitude of an image taken at the target point.
5559961
08520904
1
1. A graphical password arrangement comprising: a display; first means, responsive to an initial request of a user, for displaying on the display one or more position indicators along with an image; second means, for moving the displayed position indicators on the display, relative to the displayed image, under control of the user; third means, cooperative with the second means, for determining the user's positioning of the displayed position indicators in the displayed image; a memory; fourth means, responsive to the determined positioning of the position indicators in the displayed image, for storing positions of the position indicators in the displayed image in the memory as a password; fifth means, responsive to a subsequent request of the user, for displaying on the display the image without the one or more position indicators; sixth means, responsive to the user's selection of one or more locations in the displayed image without the one or more position indicators, for determining positions of the selected locations in the displayed image; seventh means, for determining whether the positions of the selected locations correspond to the positions that are stored in the memory as the password; and eighth means, responsive to a determination of a lack of correspondence between the positions of the selected locations and the positions that are stored in the memory as the password, for denying the user access to a resource protected by the password.
8180637
11949044
1
1. A method of compensating for additive and convolutive distortions applied to a signal indicative of an utterance, comprising: receiving the signal indicative of an utterance; setting an initial value of a channel mean vector to zero; utilizing a portion of frames from the signal to set initial values of a noise mean vector and a diagonal covariance matrix; utilizing the initial values of the channel mean vector, the noise mean vector, and the diagonal covariance matrix to calculate a Gaussian dependent compensation matrix; utilizing the Gaussian dependent compensation matrix to determine new values of the channel mean vector, the noise mean vector, and the diagonal covariance matrix; updating Hidden Markov Model (HMM) parameters based on the new values of the channel mean vector, the noise mean vector, and the diagonal covariance matrix to account for the additive and the convolutive distortions applied to the signal; decoding the utterance using the updated HMM parameters; re-calculating the Gaussian dependent compensation matrix utilizing information obtained during the utterance decoding; adapting the HMM parameters based upon the re-calculated Gaussian dependent compensation matrix; and applying the adapted HMM parameters to decode the utterance and provide a transcription of the utterance.
20150302615
14732790
0
1. An image processing device, comprising: an acquiring section that acquires a plurality of projection images in which a subject between a radiation detector and a radiation applying unit has, as a result of the radiation applying unit being moved to thereby change an angle of incidence, with respect to the subject, of radiation applied from the radiation applying unit, been imaged at each different angle of incidence; a processing section that performs frequency processing that attenuates, relative to a high-frequency component, a low-frequency component of projection images in which the angle of incidence is equal to or greater than a first threshold; and a tomographic image generating section that generates tomographic images of the subject by image reconstruction from projection images in which the angle of incidence is less than the first threshold and from the frequency-processed projection images.
9280717
13894247
1
1. A method for operating a computing device, the method being performed by one or more processors and comprising: processing one or more images of a scene captured by an image capturing device of the computing device, the scene including a self-propelled device that is in motion and can be controlled wirelessly by the computing device, the self-propelled device having a characteristic rounded shape; detecting the self-propelled device in the one or more images; determining position information based on a relative position of the self-propelled device in the one or more images; and implementing one or more processes that utilize the position information determined from the relative position of the self-propelled device; wherein implementing the one or more processes includes (i) determining a reference point for the self-propelled device, (ii) enabling a user to interact with a displayed representation of the self-propelled device on a touch-sensitive display of the computing device, and (iii) controlling a movement of the self-propelled device based on the reference point.
8683197
12075323
1
1. A method of providing rapid resumption of a video file, comprising: receiving a first user instruction to initiate the video playback of the video file; in response to the received first user instruction, loading the video file into a main memory component; after the loading, initiating the video playback of the video file by initially playing back the video file from the main memory component; during the initially playing back the video file, loading a cache memory component with video frame data of the video file; during the initially playing back the video file, receiving a second user instruction to interrupt the video playback of the video file and to access a non-video playback function; subsequent to the received second user instruction, unloading at least a portion of the video file from the main memory component; subsequent to receiving the second user instruction, providing access to the non-video playback function while preserving at least a portion of the video frame data loaded in the cache memory component; after the providing, receiving a third user instruction to resume the video playback of the video file; in response to the received third user instruction, initiating the resumption of the video playback of the video file by initially playing back the video frame data of the video file from the cache memory component; during the initially playing back the video frame data, reloading the at least a portion of the video file into the main memory component; and after the reloading, playing back the video file from the main memory component.
20120008859
12831499
0
1. For a particular pixel in a matrix of pixels, the particular pixel having a value for a first color but not for a second color, a machine implemented method for calculating a value for the second color for the particular pixel, comprising: selecting a first set of neighboring pixels that are situated on a first side of the particular pixel, the first set of neighboring pixels comprising a first subset of pixels having one or more values for the first color and a second subset of pixels having one or more values for the second color; based upon the one or more values for the first color from the first subset of pixels and the one or more values for the second color from the second subset of pixels, determining a first representative relationship between values for the second color and values for the first color for the first set of neighboring pixels; selecting a second set of neighboring pixels that are situated on a second side of the particular pixel which is opposite the first side, the second set of neighboring pixels comprising a third subset of pixels having one or more values for the first color and a fourth subset of pixels having one or more values for the second color; based upon the one or more values for the first color from the third subset of pixels and the one or more values for the second color from the fourth subset of pixels, determining a second representative relationship between values for the second color and values for the first color for the second set of neighboring pixels; based upon the first and second representative relationships, determining a target relationship between the value for the second color for the particular pixel and the value for the first color for the particular pixel; and based upon the target relationship and the value for the first color for the particular pixel, calculating the value for the second color for the particular pixel.
8279409
12536425
1
1. A method for calibrating a model of a photolithography process, comprising: providing a computational model of a photolithography process, the computational model having an adjustable parameter; printing a first pattern with the photolithography process as a printed pattern; measuring an aspect of the printed pattern; using the computational model to calculate an image intensity at a location determined according to the measured aspect; minimizing a first cost function that comprises a first difference between the calculated image intensity and an intensity threshold; calculating a second cost function comprising a second difference between a measured critical dimension of the printed pattern and a critical dimension simulated by the computational model; minimizing the first cost function with respect to a model parameter; determining if the computational model predicts that a portion of a first pattern will not print; removing from a definition of the second cost function, the critical dimension of the portion of the first pattern that will not print; and minimizing the second cost function with respect to the model parameter.
20120100500
12911895
0
1. A full anatomy model (FAM) for image-guided dental implant treatment planning, which at least contains the anatomy components of bones and soft tissues, may contain teeth and nerves if applicable, may contain any other anatomical or artificial structures, represents multiple anatomical components in one or more geometric models using any geometric representation, but preferably, triangulated models, is created from the data acquired for implant treatment planning including the patient 3D scan (such as CT and Cone-beam CT), radiographic guide CT or optical scan, optical scans of conventional physical dental models, optical scans of impressions, or intra-oral scan of patients, registers all component models into one coordinate system, preferably, that of the patient 3D scan, and may or may not assemble all the individual components into one geometric assembly.
20130076910
13625493
0
1. A focal plane imaging array having a dynamic range, comprising: a detector with a large pixel array having a plurality of large pixels, each of said large pixels having a large pixel area and a large pixel signal contact, to create a first signal that travels to said large pixel signal contact, and a small pixel array having a plurality of small pixels, each of said small pixels having a small pixel area and a small pixel signal contact, to create a second signal that travels to said small pixel signal contacts, wherein said plurality of small pixels is larger than said plurality of large pixels, wherein said large pixel array and said small pixel array are aligned and vertically stacked on a monolithic semiconductor substrate; a readout integrated circuit operably interconnected to said large pixel signal contacts and said small pixel signal contacts; and a clock operably connected to said large pixel signal contacts and said small pixel signal contacts to read said first signals at a first clock rate having a first integration time and a first reset time, and to read said second signals at a second clock rate having a second integration time and a second reset time; wherein said first clock rate is faster than said second clock rate; whereby reading of said first signals at said first clock rate, and said second signals at said second clock rate, extends said dynamic range.
8792698
12919129
1
1. A medical image processing device comprising: medical image information acquisition means configured to acquire plural sets of medical image information indicating a tomographic image of a femoral region in an object to be examined; evaluation region extraction means configured to extract an evaluation region from each of the plural sets of medical image information; evaluation region display means configured to display the evaluation region on a display device; and evaluation region comparative display means configured to specify a femur region and a muscle region from said evaluation region, execute a parallel transfer process, rotational transfer process and scaling process on at least one of the plural sets of medical image information to match the femur region, and display the medical image information that matched the femur region to compare each muscle region, wherein the evaluation region comparative display means sets a femur protrusion as a reference point, sets a reference line including the reference point and barycenter of the femur region, generates a graph with (i) an angle between a radius and the reference line on the horizontal axis and (ii) a distance on the radius from the reference point to the border of the muscle region on the vertical axis, and displays the graph on the display device.
7831911
11370816
1
1. A computer-implemented method of ranking replacement target strings for a misspelled source string, the computer-implemented method comprising: converting the misspelled source string into a source phoneme sequence using a letter-to-sound system; utilizing a computer processor that is a component of the computer to traverse at least one phoneme-based trie structure so as to select a plurality of different candidate phoneme sequences based on a comparison of phonemes in the phoneme-based trie structure to the source phoneme sequence but without doing a direct comparison of every component of the plurality of different candidate phoneme sequences to a component of the source phoneme sequence; generating a count for each different candidate phoneme sequence in said plurality of different candidate phoneme sequences, the count being indicative of a quantity of edit operations required to transform the candidate phoneme sequence into the source phoneme sequence; utilizing the computer processor to select a limited number of the plurality of different candidate phoneme sequences based at least in part on said counts, the limited number being less than all of the different candidate phoneme sequences included in said plurality of different candidate phoneme sequences; utilizing the computer processor to select a first set of replacement target strings, each replacement target string in the first set being selected based on direct correspondence to a candidate phoneme sequence in said limited number of different candidate phoneme sequences; utilizing the computer processor to traverse at least one letter-based trie structure so as to select a plurality of different candidate letter sequences, wherein selecting the plurality of different candidate letter sequences comprises traversing the letter-based trie structure so as to identify the plurality of different candidate letter sequences without doing a direct comparison of every component of the plurality of different candidate letter sequences with every component of the misspelled source string; utilizing the computer processor to select a limited number of the plurality of different candidate letter sequences based at least in part on a count of a quantity of edit operations required to transform each different candidate letter sequence included in the limited number of different candidate letter sequences into the misspelled source string, the limited number of different candidate letter sequences being less than all of the different candidate letter sequences included in said plurality of different candidate letter sequences; utilizing the computer processor to select a second set of replacement target strings, each replacement target string in the second set being one of the candidate letter sequences in said limited number of the plurality of different candidate letter sequences; and utilizing the computer processor to rank the replacement strings in the first and/or second sets based on a summation of the count of the quantity of edit operations required to transform a particular different candidate phoneme sequence included in the limited number of the plurality of different candidate phoneme sequences into the source phoneme sequence plus the count of the quantity of edit operations required to transform a particular different candidate letter sequence included in the limited number of the plurality of different candidate letter sequences into the misspelled source string.
20120220824
13359440
0
1. An endoscope apparatus comprising: a light source unit that is capable of adjusting quantity of light and irradiates illumination light to a subject; an imaging element that captures an image by return light from a living body that is the subject in the illumination light and outputs a captured image signal; a density calculating section that calculates density of each pixel of the captured image on the basis of the captured image signal; and an image processing section that performs a predetermined image processing on the captured image, wherein the image processing section changes frequency processing conditions with respect to the captured image such that detection and enhancement degree of a structure and components of the living body in the subject are changed according to at least the density of each pixel of the captured image.