{
    "paper_id": "W98-0205",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T06:05:17.976773Z"
    },
    "title": "Visualization for Large Collections of Multimedia Information",
    "authors": [
        {
            "first": "Dave",
            "middle": [],
            "last": "Himmel",
            "suffix": "",
            "affiliation": {},
            "email": "[email protected]"
        },
        {
            "first": "Mark",
            "middle": [],
            "last": "Greaves",
            "suffix": "",
            "affiliation": {
                "laboratory": "Natural Language Processing Applied Research and Technology Boeing Shared Services Group",
                "institution": "",
                "location": {
                    "postBox": "P.O. Box 3707",
                    "postCode": "7L-43, 98124",
                    "settlement": "Seattle",
                    "region": "MS, WA"
                }
            },
            "email": "mark.t.greaves@boeing.com"
        },
        {
            "first": "Anne",
            "middle": [],
            "last": "Kao",
            "suffix": "",
            "affiliation": {
                "laboratory": "Natural Language Processing Applied Research and Technology Boeing Shared Services Group",
                "institution": "",
                "location": {
                    "postBox": "P.O. Box 3707",
                    "postCode": "7L-43, 98124",
                    "settlement": "Seattle",
                    "region": "MS, WA"
                }
            },
            "email": ""
        },
        {
            "first": "Steve",
            "middle": [],
            "last": "Poteet",
            "suffix": "",
            "affiliation": {
                "laboratory": "Natural Language Processing Applied Research and Technology Boeing Shared Services Group",
                "institution": "",
                "location": {
                    "postBox": "P.O. Box 3707",
                    "postCode": "7L-43, 98124",
                    "settlement": "Seattle",
                    "region": "MS, WA"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Organizations that make use of large amounts of multimedia material (especially images and video) require easy access to such information. Recent developments in computer hardware and algorithm design have made possible content indexing of digital video information and efficient display of 3D data representations. This paper describes collaborative work between Boeing Applied Research & Technology (AR&T), Carnegie Mellon University (CMU), and the Battelle Pacific Northwest National Laboratories (PNNL) to integrate media indexing with computer visualization to achieve effective content-based access to video information. Text metadata, representing video content, was extracted from the CMU Informedia system, processed by AR&T's text analysis software, and presented to users via PNNL's Starlight 3D visualization system. This approach shows how to make multimedia information accessible to a text-based visualization system, facilitating a global view of large collections of such data. We evaluated our approach by making several experimental queries against a library of eight hours of video segmented into several hundred \"video paragraphs.\" We conclude that search performance of Informedia was enhanced in terms of ease of exploration by the integration with Starlight.",
    "pdf_parse": {
        "paper_id": "W98-0205",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Organizations that make use of large amounts of multimedia material (especially images and video) require easy access to such information. Recent developments in computer hardware and algorithm design have made possible content indexing of digital video information and efficient display of 3D data representations. This paper describes collaborative work between Boeing Applied Research & Technology (AR&T), Carnegie Mellon University (CMU), and the Battelle Pacific Northwest National Laboratories (PNNL) to integrate media indexing with computer visualization to achieve effective content-based access to video information. Text metadata, representing video content, was extracted from the CMU Informedia system, processed by AR&T's text analysis software, and presented to users via PNNL's Starlight 3D visualization system. This approach shows how to make multimedia information accessible to a text-based visualization system, facilitating a global view of large collections of such data. We evaluated our approach by making several experimental queries against a library of eight hours of video segmented into several hundred \"video paragraphs.\" We conclude that search performance of Informedia was enhanced in terms of ease of exploration by the integration with Starlight.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Boeing uses very large collections of data in the process of engineering and manufacturing commercial jet airplanes and in delivering complex military and space systems. The company also creates large amounts of information in the form of manuals and other documents to be delivered with these products. The need to control the cost of creating and accessing this information is motivation for new methods of indexing and searching computerized digital data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "In 1996, the Natural Language Processing Group at Boeing AR&T began a collaboration with PNNL to jointly develop the Starlight information visualization system. AR&T developed the Text Processing Toolset (TPT) to perform indexing and querying operations in high-dimensional document spaces, and PNNL developed software for 3D graphics presentations. Although Starlight has a rich visual presentation, it processes only text and static imagery, and lacks any inherent capability for extracting content from multimedia data. Concurrently, as part of ongoing research into digital libraries, the AR&T Multimedia Group began support of the CMU Informedia project and obtained a demonstration system for searching indexed digital video information. At the heart of Informedia is a subsystem that creates text metadata that is descriptive of digital video content. This suggested a way to integrate the two systems.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The Informedia Project has established a large on-line digital video library, incorporating video assets from WQED/Pittsburgh. The project is creating intelligent, automatic mechanisms for populating the library and allowing for its full-content and knowledge-based search and segment retrieval. Our approach applies several techniques for content-based searching and video-sequence retrieval. Content is conveyed in both the narrative (speech and language) and the image. Only by the collaborative interaction of image, speech, and natural-language understanding technology can we successfully populate, segment, index, and search diverse video collections with satisfactory recall and precision.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Informedia",
                "sec_num": "1"
            },
            {
                "text": "The Informedia Project uses the Sphinx-II speech recognition system to transcribe narratives and dialogues automatically. The resulting transcript is then processed with methods of natural language understanding to extract subjective descriptions and mark potential segment boundaries where significant semantic changes occur. Comparative difference measures are used in processing the video to mark potential segment boundaries. Images with small histogram disparity are considered to be relatively equivalent. By detecting significant changes in the weighted histogram of each successive frame, a sequence of images can be grouped into a segment. This simple and robust method for segmentation is fast and can detect 90% of the scene changes in video.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Informedia",
                "sec_num": "1"
            },
            {
                "text": "Segment breaks produced by image processing are examined along with the boundaries identified by the natural language processing of the transcript, and an improved set of segment boundaries are heuristically derived to partition the video library into sets of segments, or \"video paragraphs.\" The reader can find more in-depth discussions of the Informedia project and technologies in References 1-4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Informedia",
                "sec_num": "1"
            },
            {
                "text": "Starlight was originally developed as an interactive information visualization environment for the US Army Intelligence and Security Command (INSCOM). It is designed to integrate several types of data (unstructured and structured text documents, geographic information, and digital imagery) into a single analysis space for rapid comparison of content and interrelationships (see reference 5). In this section, we will concentrate on the Starlight text processing and indexing functions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Starlight",
                "sec_num": "2"
            },
            {
                "text": "A major problem with incorporating free-text documents into a visualization environment is that each document must be coded so that it can be clustered with other documents. The Boeing TPT is a prototype software engine that supports automatic coding and categorization of documents, concept-based querying, and visualization over large text document databases. The TPT combines techniques from statistics, linear algebra, and computational linguistics in order to take account of the total context in which words occur in a given document or query; it statistically compares a document context with similar contexts from other documents in the database. Through this technique, document sets can be represented in a way that supports visualization and analysis by the presentation components of Starlight.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Starlight",
                "sec_num": "2"
            },
            {
                "text": "The TPT performs two functions: First, it provides a powerful and flexible mechanism for concept-based searching over large text databases; second, it automatically assigns individual text units to coordinates in a user-configurable 3D semantic space. Both of these functions derive from the TPT core technique of representing large numbers of text units as points in a higher-dimensional space and performing similarity calculations in this space. Conceptually, the flow of data through the TPT is diagrammed in Figure 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Starlight",
                "sec_num": "2"
            },
            {
                "text": "Prior to TPT indexing, a text collection must be preprocessed in three stages involving manual intervention: First, the text is divided into units with topical granularity that best correlates with the expected query patterns. The units can be titles, subject lines, abstracts, individual paragraphs, or an entire document. A unit can also be a caption or a piece of transcribed text from a video. Next, the text is \"tokenized\" into individual words and phrases. Finally, a list of \"stopwords,\" or ignored terms, is chosen. These include determiners (e.g., a, the), conjunctions (e.g., and, or), relatives (e.g., what, which), and certain domain-specific terms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Figure 2 -TPT Processing Flow Diagram",
                "sec_num": null
            },
            {
                "text": "After preprocessing, operation of the TPT indexing system is automatic (see Figure 2). The software builds a document/term matrix, performs several transformation and dimension reduction calculations, and stores output matrices in an object-oriented database for use by the Starlight visualization component.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 76,
                        "end": 85,
                        "text": "Figure 2)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Figure 2 -TPT Processing Flow Diagram",
                "sec_num": null
            },
            {
                "text": "Users of Starlight can visually explore the topical structure of a large text database by navigating through a 3D topic space where each item is represented as a point in a scatterplot. Items with close visual proximity have similar content. The TPT provides a selection of dimensions (with associated topic words), any three of which can potentially be selected for axes of the scatterplot. Display of video paragraphs occurs in the context of a web page containing a video viewer and the text transcript; Figure 5 shows an example. When the user selects a particular document within Starlight, a browser displays the HTML page. The browser was coded in Java and the MPEG viewer is a Microsoft ActiveMovie control. Each digital video file was kept intact; that is, the video paragraphs were not partitioned into separate files; rather, paragraphs are viewed by playing from the specified \"In\" frame to the \"Out\" frame of the appropriate MPEG file. Neither system was designed for narrow, precise data querying; rather, Starlight features all-inclusive views and Informedia facilitates iterative, progressive narrowing of focus. Motivation for integrating the systems was twofold: Give Starlight access to multimedia data, and investigate possible advantages of the global overview that Starlight can bring to the Informedia database.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 507,
                        "end": 515,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Figure 2 -TPT Processing Flow Diagram",
                "sec_num": null
            },
            {
                "text": "We collected the following list of video titles from diverse sources at Boeing: ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Figure 2 -TPT Processing Flow Diagram",
                "sec_num": null
            },
            {
                "text": "\u2022 PBS",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Figure 2 -TPT Processing Flow Diagram",
                "sec_num": null
            },
            {
                "text": "Although both Informedia and Starlight separately exhibit powerful capabilities for accessing large collections of data, it was interesting to speculate about synergistic advantages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": null
            },
            {
                "text": "The video data was indexed by Informedia, then we passed the resulting metadata to Starlight as described above. Starlight presents the entire collection of hundreds of video paragraphs as a scatterplot of points for viewing, which can be color-coded by video title. We immediately observed that items with like colors tended to cluster together, verifying that TPT processing was effective in bringing together topically related items.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": null
            },
            {
                "text": "In an experiment to evaluate the further effects of integration, we posed several queries to both Informedia and Starlight, noted commonalities and differences in response, then used the global view of Starlight to find other items with similar content and to discover additional search terms. After Starlight reported query \"hits,\" we observed the region in 3D space encompassed by the video file containing the most hits. Using the cursor brush to see abstracts, we examined ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": null
            },
            {
                "text": "The last three columns indicate a positive result by showing additional video paragraphs and search terms relating to the initial query. The additional items varied in degree of relevance, but the cost of the new information is low in that only a few seconds were required to examine each. This illustrates the capacity of the integrated approach as an interactive tool for ready access to video content.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Informedia Hits",
                "sec_num": null
            },
            {
                "text": "This effort clearly succeeded in the goal of providing access to multimedia content for the Starlight system. We have also shown the usefulness of Starlight global visualization of text metadata for video content. We examined all items (of any color) within this region, and noted interesting terms. Because the axes were topically labeled, additional terms were also available from the axis in the vicinity of the cluster. Table 1 shows the results of this experiment.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 424,
                        "end": 431,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": null
            },
            {
                "text": "The concept of bringing text metadata into Starlight is extensible to image, sound, animation, and other media, which suggests further experimentation with other forms of multimedia information and methods of generating metadata.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Starlight Hits",
                "sec_num": null
            },
            {
                "text": "Both Informedia indexing and Starlight processing require some manual intervention. In order for these approaches to be efficient and cost-effective, we must develop fully automatic methods for creating and processing text metadata for multimedia information. It may be possible to do this by compromising the quality of metadata (perhaps by using unedited speech recognition from Informedia); a future experiment would be to attempt this compromise and discover the effect on search performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Starlight Hits",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "Our thanks go to Ricky Houghton and Bryan Maher at the Carnegie Mellon University Informedia project, and to John Risch, Scott Dowson, Brian Moon, and Bruce Rex at the Battelle Pacific Northwest National Laboratories Starlight project for their excellent work leading to this result. The Boeing team also includes Dean Billheimer, Andrew Booker, Fred Holt, Michelle Keim, Dan Pierce, and Jason Wu.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Intelligent Access to Digital Video: Informedia Project",
                "authors": [
                    {
                        "first": "Howard",
                        "middle": [
                            "D"
                        ],
                        "last": "Wactlar",
                        "suffix": ""
                    },
                    {
                        "first": "Takeo",
                        "middle": [],
                        "last": "Kanade",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "A"
                        ],
                        "last": "Smith",
                        "suffix": ""
                    },
                    {
                        "first": "Scott",
                        "middle": [
                            "M"
                        ],
                        "last": "Stevens",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "IEEE Computer",
                "volume": "",
                "issue": "",
                "pages": "46--52",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wactlar, Howard D., Kanade, Takeo, Smith, Michael A., and Stevens, Scott M. Intelligent Access to Digital Video: Informedia Project, IEEE Computer, May 1996, pp. 46-52.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Some Results on Search Complexity vs Accuracy",
                "authors": [
                    {
                        "first": "Mosur",
                        "middle": [
                            "K"
                        ],
                        "last": "Ravishankar",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of DARPA Spoken Systems Technology Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ravishankar, Mosur K. Some Results on Search Complexity vs Accuracy, Proceedings of DARPA Spoken Systems Technology Workshop, Feb. 1997.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Multimedia Abstractions for a Digital Video Library",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [
                            "G"
                        ],
                        "last": "Christel",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [
                            "B"
                        ],
                        "last": "Winkler",
                        "suffix": ""
                    },
                    {
                        "first": "Roy",
                        "middle": [
                            "C"
                        ],
                        "last": "Taylor",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of ACM Digital Libraries '97",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christel, Michael G., Winkler, David B., and Taylor, Roy C. Multimedia Abstractions for a Digital Video Library, Proceedings of ACM Digital Libraries '97, Philadelphia, PA, July 1997.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Immersion into Visual Media: New Applications of Image Understanding",
                "authors": [
                    {
                        "first": "Takeo",
                        "middle": [],
                        "last": "Kanade",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "IEEE Expert",
                "volume": "",
                "issue": "",
                "pages": "73--80",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kanade, Takeo. Immersion into Visual Media: New Applications of Image Understanding, IEEE Expert, February 1996, pp. 73-80.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "A Virtual Environment for Multimedia Intelligence Data Analysis",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Risch",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "May",
                        "suffix": ""
                    },
                    {
                        "first": "Scott",
                        "middle": [],
                        "last": "Dowson",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Thomas",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "IEEE Computer Graphics and Applications",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Risch, John, May, Richard, Dowson, Scott, and Thomas, James. A Virtual Environment for Multimedia Intelligence Data Analysis, IEEE Computer Graphics and Applications, November 1997.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "illustrates the video searching facilities of Informedia, which include: \u2022 Filmstrip (lower-left of Figure 1) -Select thumbnail images to view video paragraph. \u2022 Selective play (upper right of Figure 1) -Prev/next paragraph, prev/next term hit. \u2022 Cursor browse -Abstracts and search terms available in filmstrip and play window. \u2022 Text query (upper left of Figure 1) -Terms parsed from natural language query. \u2022 Skim (not shown) -View video in 10% of normal time.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF1": {
                "text": "Informedia Search Screen",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF2": {
                "text": "TPT processing flow: term-by-document frequency matrix, matrix decomposition, document components, and new term space.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF3": {
                "text": "shows a Starlight visualization screen containing a 3D scatterplot of the 322-item video metadata extracted from Informedia: each axis is labeled with the dominant topics measured by the TPT. At the right of Figure 3 is an example query with results.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF4": {
                "text": "Integration of the two systems required the insertion of two processing steps, each involving new software development. Figure 4 shows the flow of data in and between the two systems; new integration elements are shown as shaded boxes. The two key elements are: (1) Extract video paragraph text metadata from Informedia, and (2) Display selected video paragraphs.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF5": {
                "text": "Integration of Informedia and Starlight. Extraction of metadata is accomplished by a C program that reads ASCII text and control parameters from several files in the Informedia system and writes a collection of items compatible with TPT processing. Each item represents one video paragraph and includes the following fields: \u2022 Video file name \u2022 \"In\" and \"Out\" frames in the video file (for viewing the video paragraph) \u2022 Title of video paragraph \u2022 Abstract of video paragraph \u2022 Transcript text",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF6": {
                "text": "Figure 5 -Video Paragraph Display",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "TABREF1": {
                "text": "Experimental Results",
                "num": null,
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td>Initial Query</td><td/></tr><tr><td>How do I use the emergency exits?</td><td>6</td></tr><tr><td>What is fly by wire?</td><td>4</td></tr><tr><td>Tell me about scientific visualization.</td><td>12</td></tr><tr><td>What do you have on the Boeing merger with Rockwell?</td><td/></tr></table>"
            }
        }
    }
}