professor a: ! maybe it 's just , how many t u how many times you crash in a day . phd g: or maybe it 's once you ' ve done enough meetings it wo n't crash on you anymore . professor a: that 's that 's great . do we have an agenda ? liz and andreas ca n't sh ca n't , ca n't come . grad b: i have no idea but got it a few minutes ago . right when you were in my office it arrived . grad b: so , does anyone have any a agenda items other than me ? i actually have one more also which is to talk about the digits . professor a: , right , so i was just gon na talk briefly about the nsf itr . professor a: , i wo n't say much , but then , you said wanna talk about digits ? grad b: i have a short thing about digits and then i wanna talk a little bit about naming conventions , although it 's unclear whether this is the right place to talk about it . so maybe just talk about it very briefly and take the details to the people who for whom it 's relevant . professor a: if we , we should n't add things in just to add things in . i ' m actually pretty busy today , so if we can we grad b: so the only thing i wanna say about digits is , we are done with the first test set . there are probably forms here and there that are marked as having been read that were n't really read . so i wo n't really know until i go through all the transcriber forms and extract out pieces that are in error . so i wa . two things . the first is what should we do about digits that were misread ? my opinion is , we should just throw them out completely , and have them read again by someone else . , the grouping is completely random , grad b: so it 's perfectly fine to put a group together again of errors and have them re - read , just to finish out the test set . grad b: , the other thing you could do is change the transcript to match what they really said . so those are the two options . professor a: but there 's often things where people do false starts . i know i ' ve done it , where i say a grad b: what the transcribers did with that is if they did a correction , and they eventually did read the right string , you extract the right string . phd g: , you 're talking about where they completely read the wrong string and did n't correct it ? postdoc f: , and s and you 're talking string - wise , you 're not talking about the entire page ? grad b: and so the two options are change the transcript to match what they really said , but then the transcript is n't the aurora test set anymore . i do n't think that really matters because the conditions are so different . and that would be a little easier . professor a: , i would , tak do the easy way , it it 's kinda , wh who knows what studies people will be doing on speaker - dependent things professor a: so that 's a couple hours of , speech , probably . which is a reasonable test set . grad b: and , jane , i do have a set of forms which you have copies of somewhere . grad b: , i was just wond i had all of them back from you . and then the other thing is that , the forms in front of us here that we 're gon na read later , were suggested by liz grad b: because she wanted to elicit some different prosodics from digits . and so , wanted people to , take a quick look at the instructions grad b: and the way it wa worked and see if it makes sense and if anyone has any comments on it . professor a: i see . and the decision here , was to continue with the words rather than the numerics . grad b: , yes , although we could switch it back . the problem was o and zero . 
although we could switch it back and tell them always to say " zero " or always to say " o " . professor a: or neither . but it 's just two thing ways that you can say it . professor a: right ? that 's the only thought i have because if you t start talking about these , u tr she 's trying to get at natural groupings , but it there 's nothing natural about reading numbers this way . grad b: the the problem also is she did want to stick with digits . i ' m speaking for her since she 's not here . but , the other problem we were thinking about is if you just put the numerals , they might say forty - three instead of four three . postdoc f: , if there 's space , though , between them . , you can with when you space them out they do n't look like , forty - three anymore . grad b: , she and i were talking about it , and she felt that it 's very , very natural to do that chunking . professor a: she 's right . it 's it 's a different problem . it 's a it 's an interesting problem , we ' ve done with numbers before , and sometimes people if you say s " three nine eight one " sometimes people will say " thirty - nine eighty - one " or " three hundred eighty - nine one " , or i do n't think they 'd say that , but th professor a: but , th thirty - eight ninety - one is probably how they 'd do it . grad b: so . , this is something that liz and i spoke about and , since this was something that liz asked for specifically , we need to defer to her . professor a: - . ok . , we 're probably gon na be collecting meetings for a while and if we decide we still wanna do some digits later we might be able to do some different ver different versions , professor a: but this is the next suggestion , so . ok , so e l i , let me , get my short thing out about the nsf . i sent this actually this is maybe a little side thing . , i sent to what we had , in some previous mail , as the right joint thing to send to , which was " m mtg rcdr hyphen joint " . grad b: it 's that 's because they set the one up at uw that 's not on our side , that 's on the u - dub side . and so u - uw set it up as a moderated list . grad b: and , i have no idea whether it actually ever goes to anyone so you might just wanna mail to mari professor a: no no , th i got , little excited notes from mari and jeff and so on , grad b: so the moderator actually did repost it . cuz i had sent one earlier actually the same thing happened to me i had sent one earlier . the message says , " you 'll be informed " and then i was never informed but i got replies from people indicating that they had gotten it , so . it 's just to prevent spam . professor a: so o ok . , anyway , i everybody here are y are you are on that list , right ? so you got the note ? ok . so this was , a , proposal that we put in before on more higher level , issues in meetings , from i higher level from my point of view . , and , meeting mappings , and , so is i for it was a proposal for the itr program , information technology research program 's part of national science foundation . it 's the second year of their doing , these grants . they 're they 're a lot of them are some of them anyway , are larger grants than the usual , small nsf grants , and . so , they 're very competitive , and they have a first phase where you put in pre - proposals , and we , got through that . and so th the next phase will be we 'll actually be doing a larger proposal . and i ' m i hope to be doing very little of it . 
and , which was also true for the pre - proposal , there 'll be bunch of people working on it . so . grad b: i that 's a good thing cuz that way i got my papers done early . professor a: my favorite is was when one reviewer says , " , this should be far more detailed " , and the nex the next reviewer says , " , there 's way too much detail " . grad b: or " this is way too general " , and the other reviewer says , " this is way too specific " . this is way too hard , way too easy . grad b: it sounded like they the first gate was pretty easy . is that right ? that they did n't reject a lot of the pre - proposals ? professor a: i should go back and look . i did n't i do n't think that 's true . professor a: but they have to weed out enough so that they have enough reviewers . so , , maybe they did n't r weed out as much as usual , but it 's usually a pretty but it . it 's it 's certainly not i ' m that it 's not down to one in two of what 's left . i ' m it 's , professor a: there 's different numbers of w awards for different size they have three size grants . this one there 's , see the small ones are less than five hundred thousand total over three years and that they have a fair number of them . and the large ones are , boy , i forget , more than a million and a half , more than two million like that . and and we 're in the middle category . we 're , , i forget what it was . but , i do n't remember , but it 's pr probably along the li i could be wrong on this , but probably along the lines of fifteen or that they 'll fund , or twenty . when they do you do how many they funded when they f in chuck 's , that he got last year ? grad b: it was smaller , that it was like four or five , was n't it ? professor a: last time they just had two categories , small and big , and this time they came up with a middle one , so it 'll there 'll be more of them that they fund than of the big . phd g: if we end up getting this , what will it mean to icsi in terms of , w wh where will the money go to , what would we be doing with it ? professor a: , it i none of it will go for those yachts that we ' ve talking about . grad b: it 's go higher level than we ' ve been talking about for meeting recorder . professor a: the other things that we have , been working on with , the c with communicator , especially with the newer things with the more acoustically - oriented things are lower level . and , this is dealing with , mapping on the level of , the conversation of mapping the conversations professor a: to different planes . so . but , . so it 's all that none of us are doing right now , or none of us are funded for , so it 's it would be new . phd g: so assuming everybody 's completely busy now , it means we 're gon na hafta , hire more students , or , something ? professor a: there 's evenings , and there 's weekends , there would be new hires , and there would be expansion , but , also , there 's always for everybody there 's always things that are dropping off , grants that are ending , or other things that are ending , so , professor a: there 's a continual need to bring in new things . but but there definitely would be new , students , professor a: we got we have , two of them are two in the c there 're two in the class already here , and then and , then there 's a third who 's doing a project here , who , but he wo n't be in the country that long , maybe another will end up . actually there is one other guy who 's looking that 's that guy , jeremy ? . 
anyway , that 's all i was gon na say is that 's , that 's and we 're sorta preceding to the next step , and , it 'll mean some more work , , in march in getting the proposal out , and then , it 's , we 'll see what happens . , the last one was that you had there , was about naming ? grad b: it just , we ' ve been cutting up sound files , in for ba both digits and for , doing recognition . and liz had some suggestions on naming and it just brought up the whole issue that has n't really been resolved about naming . so , one thing she would like to have is for all the names to be the same length so that sorting is easier . , same number of characters so that when you 're sorting filenames you can easily extract out bits and pieces that you want . and that 's easy enough to do . and i do n't think we have so many meetings that 's a big deal just to change the names . so that means , instead of calling it " mr one " , " mr two " , you 'd call it " mrm zero one " , mrm zero two , things like that . just so that they 're all the same length . postdoc f: but , when you , do things like that you can always as long as you have , you can always search from the beginning or the end of the string . grad b: alright , so we have th we 're gon na have the speaker id , the session , information on the microphones , grad b: and so if each one of those is a fixed length , the sorting becomes a lot easier . grad d: she wanted to keep them the same lengths across different meetings also . so like , the nsa meeting lengths , all filenames are gon na be the same length as the meeting recorder meeting names ? grad b: and as i said , the it 's we just do n't have that many that 's a big deal . grad b: and so , , at some point we have to take a few days off , let the transcribers have a few days off , make no one 's touching the data and reorganize the file structures . and when we do that we can also rationalize some of the naming . postdoc f: i would think though that the transcribe the transcripts themselves would n't need to have such lengthy names . so , you 're dealing with a different domain there , and with start and end times and all that , and channels and , grad b: right . so the only thing that would change with that is just the directory names , grad b: i would change them to match . so instead of being mr one it would be mrm zero one . but i do n't think that 's a big deal . grad b: so for m the meetings we were thinking about three letters and three numbers for meeting i ds . , for speakers , m or f and then three numbers , for , and , that also brings up the point that we have to start assembling a speaker database so that we get those links back and forth and keep it consistent . and then , the microphone issues . we want some way of specifying , more than looking in the " key " file , what channel and what mike . what channel , what mike , and what broadcaster . or i how to s say it . so with this one it 's this particular headset with this particular transmitter w as a wireless . and that one is a different headset and different channel . and so we just need some naming conventions on that . and , that 's gon na become especially important once we start changing the microphone set - up . we have some new microphones that i 'd like to start trying out , once i test them . and then we 'll need to specify that somewhere . so i was just gon na do a fixed list of , microphones and types . 
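The fixed-length naming scheme sketched in the discussion above (three letters plus three digits for the meeting ID, "m"/"f" plus three digits for the speaker, plus a microphone code) is easy to illustrate. Below is a minimal Python sketch of the idea; the separator and the exact microphone-code field are assumptions for illustration, not the convention the group actually adopted.

```python
# Minimal sketch of the fixed-width naming idea discussed above.
# Only "three letters + three digits" meeting IDs (e.g. mrm001) and
# "m/f + three digits" speaker tags come from the discussion; the
# separator and microphone-code field are hypothetical.

def make_chunk_name(meeting: str, session: int, speaker: str, mic: str) -> str:
    """Build a fixed-length name such as 'mrm001_m012_c03'."""
    return f"{meeting}{session:03d}_{speaker}_{mic}"

def parse_chunk_name(name: str) -> dict:
    # Fixed widths let fields be recovered purely by position,
    # which is also what makes sorting on filenames line up.
    return {"meeting": name[0:6], "speaker": name[7:11], "mic": name[12:15]}

names = [make_chunk_name("mrm", n, "m012", "c03") for n in (2, 10, 1)]
print(sorted(names))  # ['mrm001_...', 'mrm002_...', 'mrm010_...'] -- numeric order
```

Zero-padding is what makes the sort come out right: without it, "mr10" sorts before "mr2", which is exactly the problem the fixed-length convention avoids.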
so , as i said professor a: , since we have such a short agenda list i wi i will ask how are the transcriptions going ? postdoc f: the the news is that i ' ve i s so in s so i ' ve switched to start my new sentence . i switched to doing the channel - by - channel transcriptions to provide , the , tighter time bins for partly for use in thilo 's work and also it 's of relevance to other people in the project . and , i discovered in the process a couple of interesting things , which , one of them is that , it seems that there are time lags involved in doing this , , using an interface that has so much more complexity to it . and i and i wanted to maybe ask , chuck to help me with some of the questions of efficiency . maybe i was thinking maybe the best way to do this in the long run may be to give them single channel parts and then piece them together later . and i have a script , piece them together . , so it 's like , i know that take them apart and put them together and i 'll end up with the representation which is where the real power of that interface is . and it may be that it 's faster to transcribe a channel at a time with only one , sound file and one , set of , utterances to check through . professor a: i ' m a little confused . that one of the reason we thought we were so much faster than , the other transcription , thing was that we were using the mixed file . postdoc f: , yes . ok . but , with the mixed , when you have an overlap , you only have a choice of one start and end time for that entire overlap , which means that you 're not tightly , tuning the individual parts th of that overlap by different speakers . so someone may have only said two words in that entire big chunk of overlap . and for purposes of , things like , so things like training the speech - nonspeech segmentation thing . th - it 's necessary to have it more tightly tuned than that . and w and , is a it would be wonderful if , it 's possible then to use that algorithm to more tightly tie in all the channels after that but , , i ' ve th the so , i exactly where that 's going at this point . but m i was experimenting with doing this by hand and i really do think that it 's wise that we ' ve had them start the way we have with , m y working off the mixed signal , having the interface that does n't require them to do the ti , the time bins for every single channel at a t , through the entire interaction . , i did discover a couple other things by doing this though , and one of them is that , once in a while a backchannel will be overlooked by the transcriber . as you might expect , because when it 's a b backchannel could happen in a very densely populated overlap . and if we 're gon na study types of overlaps , which is what i wanna do , an analysis of that , then that really does require listening to every single channel all the way through the entire length for all the different speakers . now , for only four speakers , that 's not gon na be too much time , but if it 's nine speakers , then that i that is more time . so it 's li , wondering it 's like this it 's really valuable that thilo 's working on the speech - nonspeech segmentation because maybe , we can close in on that wi without having to actually go to the time that it would take to listen to every single channel from start to finish through every single meeting . phd e: , but those backchannels will always be a problem . 
especially if they 're really short and they 're not very loud and so it can it will always happen that also the automatic s detection system will miss some of them , so . postdoc f: so then , maybe the answer is to , listen especially densely in places of overlap , just so that they 're not being overlooked because of that , and count on accuracy during the sparser phases . cuz there are large s spaces of the that 's a good point . there are large spaces where there 's no overlap . someone 's giving a presentation , or whatever . that 's that 's a good thought . and , let 's see , there was one other thing i was gon na say . it 's really interesting data to work with , i have to say , it 's very enjoyable . i really , not a problem spending time with these data . really interesting . and not just because i ' m in there . no , it 's real interesting . professor a: , it 's a short meeting . , you 're still in the midst of what you 're doing from what you described last time , i assume , phd c: i have n't results , yet but , i ' m continue working with the mixed signal now , after the last experience . phd c: and and i ' m tried to , adjust the to improve , an harmonicity , detector that , i implement . but i have problem because , i get , , very much harmonics now . , harmonic possi possible harmonics , and now i ' m trying to find , some a , of h of help , using the energy to distinguish between possible harmonics , and other fre frequency peaks , that , corres not harmonics . and , i have to talk with y with you , with the group , about the instantaneous frequency , because i have , an algorithm , and , i get , mmm , t results similar results , like , the paper , that i am following . but , the rules , that , people used in the paper to distinguish the harmonics , is does n't work . and i not that i , the way o to ob the way to obtain the instantaneous frequency is right , or it 's not right . i have n't enough file feeling to distinguish what happened . professor a: , i 'd like to talk with you about it . if if , if i do n't have enough time and y you wanna discuss with someone else some someone else besides us that you might want to talk to , might be stephane . phd g: is is this the algorithm where you hypothesize a fundamental , and then get the energy for all the harmonics of that fundamental ? phd c: no . i do n't proth process the fundamental . i , ehm i calculate the phase derivate using the fft . and the algorithm said that , if you change the , nnn the x - the frequency " x " , using the in the instantaneous frequency , you can find , how , in several frequencies that proba probably the harmonics , the errors of peaks the frequency peaks , move around these , frequency harmonic the frequency of the harmonic . and , if you compare the instantaneous frequency , of the , continuous , , filters , that , they used , to get , the instantaneous frequency , it probably too , you can find , that the instantaneous frequency for the continuous , the output of the continuous filters are very near . and in my case i in equal with our signal , it does n't happened . professor a: . i 'd hafta look at that and think about it . it 's it 's i have n't worked with that either so i ' m not the way the simple - minded way i suggested was what chuck was just saying , is that you could make a sieve . 
, y you actually say that here is let 's let 's hypothesize that it 's this frequency or that frequency , maybe you could use some other cute methods to , short cut it by , making some guesses , but uh , i would , you could make some guesses from , from the auto - correlation but then , given those guesses , try , only looking at the energy at multiples of the of that frequency , and see how much of the take the one that 's maximum . call that the phd c: but , i know many people use , low - pass filter to get , the pitch . phd g: but i but the harmonics are gon na be , , i what the right word is . they 're gon na be dampened by the , vocal tract , right ? the response of the vocal tract . and so just looking at the energy on those at the harmonics , is that gon na ? phd g: i m what you 'd like to do is get rid of the effect of the vocal tract . right ? and just look at the signal coming out of the glottis . professor a: but i do n't need if you need to get rid of it . that 'd be but i if it 's ess if it 's essential . , cuz the main thing is that , you 're trying wha what are you doing this for ? you 're trying distinguish between the case where there is , where there are more than , where there 's more than one speaker and the case where there 's only one speaker . so if there 's more than one speaker , i you could i you 're so you 're not distinguished between voiced and unvoiced , so , i if you do n't care about that see , if you also wanna just determine if you also wanna determine whether it 's unvoiced , then you want to look at high frequencies also , because the f the fact that there 's more energy in the high frequencies is gon na be an ob obvious cue that it 's unvoiced . but , i but , other than that i as far as the one person versus two persons , it would be primarily a low frequency phenomenon . and if you looked at the low frequencies , yes the higher frequencies are gon na there 's gon na be a spectral slope . the higher frequencies will be lower energy . but so what . that 's w phd c: i will prepare for the next week , all my results about the harmonicity and will try to come in and to discuss here , because , i have n't enough feeling to u many time to understand what happened with the with , so many peaks , , and i see the harmonics there many time but , there are a lot of peaks , that , they are not harmonics . i have to discover what is the w the best way to c to use them professor a: , but i do n't think you can you 're not gon na be able to look at every frame , so i really thought that the best way to do it , and i ' m speaking with no experience on this particular point , but , my impression was that the best way to do it was however you you ' ve used instantaneous frequency , whatever . however you ' ve come up you with your candidates , you wanna see how much of the energy is in that as coppo as opposed to all of the all the total energy . and , if it 's voiced , i so y maybe you do need a voiced - unvoiced determination too . but if it 's voiced , and the , e the fraction of the energy that 's in the harmonic sequence that you 're looking at is relatively low , then it should be then it 's more likely to be an overlap . phd c: is height . this this is the idea i had to compare the ratio of the energy of the harmonics with the , total energy in the spectrum and try to get a ratio to distinguish between overlapping and speech . professor a: but you 're looking a y you 're looking at let 's take a second with this . 
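The "sieve" being discussed here, i.e. hypothesize a fundamental, collect the spectral energy near its integer multiples, keep the best-scoring candidate, and compare that harmonic energy against the total, can be sketched roughly as follows. This is a hedged illustration of the idea as stated in the conversation, not the meeting's actual implementation; the frame handling, candidate grid, and bin tolerance are assumed values.

```python
import numpy as np

def harmonic_energy_ratio(frame, sr, f0_min=80.0, f0_max=400.0, tol_bins=1):
    """Sieve sketch: for each candidate fundamental, sum the spectral
    energy near its integer multiples and return the best candidate's
    share of the total frame energy. On a voiced frame, a low ratio
    means energy outside any one harmonic series, i.e. possible overlap."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    bin_hz = sr / len(frame)                      # width of one FFT bin in Hz
    total = spec.sum()
    best = 0.0
    for f0 in np.arange(f0_min, f0_max, 5.0):     # coarse candidate grid (assumed)
        bins = set()
        for h in np.arange(f0, sr / 2, f0):       # harmonics of this candidate
            i = int(round(h / bin_hz))
            bins.update(range(max(0, i - tol_bins),
                              min(len(spec), i + tol_bins + 1)))
        best = max(best, spec[sorted(bins)].sum())
    return best / total if total > 0 else 0.0
```

A single speaker's voiced frame should put most of its energy into one harmonic series; a second simultaneous voice leaves more energy outside the best series and pulls the ratio down, which is the one-speaker-versus-overlap cue described above (with a separate voiced/unvoiced decision still needed, as the professor notes).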
you 're looking at f at the phase derivative , in , what domain ? this is in bands ? or or phd c: it 's a it 's a o i w the band is , from zero to four kilohertz . and i ot i phd c: . i u m t i used two m two method two methods . , one , based on the f , ftt . to fft to obtain the or to study the harmonics from the spectrum directly , and to study the energy and the multiples of frequency . and another algorithm i have is the in the instantaneous frequency , based on the fft to calculate the phase derivate in the time . , n the d i have two algorithms . but , in m i in my opinion the instantaneous frequency , the behavior , was th it was very interesting . because i saw , how the spectrum concentrate , around the harmonic . but then when i apply the rule , of the in the instantaneous frequency of the ne of the continuous filter in the near filter , the rule that , people propose in the paper does n't work . and i why . professor a: but the instantaneous frequency , would n't that give you something more like the central frequency of the , of the where most of the energy is ? , if you does i does it why would it correspond to pitch ? phd c: i get the spectrum , and i represent all the frequency . and when ou i obtained the instantaneous frequency . and i change the @ , using the instantaneous frequency , here . professor a: , so you scale you s you do a scaling along that axis according to instantaneous phd c: because when , when i i use these frequency , the range is different , and the resolution is different . i observe more or less , thing like this . the paper said that , these frequencies are probably , harmonics . but , they used , a rule , based in the because to calculate the instantaneous frequency , they use a hanning window . and , they said that , if these peak are , harmonics , f of the contiguous , w , filters are very near , or have to be very near . but , phh ! i do n't i don i and i what is the distance . and i tried to put different distance , to put difference , length of the window , different front sieve , pfff ! and i not what happened . professor a: ok , i ' m not following it enough . i 'll probably gon na hafta look at the paper , but which i ' m not gon na have time to do in the next few days , but i ' m curious about it . postdoc f: i did i it did occur to me that this is , the return to the transcription , that there 's one third thing i wanted to ex raise as a to as an issue which is , how to handle breaths . so , i wanted to raise the question of whether people in speech recognition want to know where the breaths are . and the reason i ask the question is , aside from the fact that they 're very time - consuming to encode , the fact that there was some i had the indication from dan ellis in the email that i sent to you , and about , that in principle we might be able to , handle breaths by accessi by using cross - talk from the other things , be able that in principle , maybe we could get rid of them , so maybe and i was i , we had this an and i did n't could n't get back to you , but the question of whether it 'd be possible to eliminate them from the audio signal , which would be the ideal situation , professor a: we - see , we 're dealing with real speech and we 're trying to have it be as real as possible and breaths are part of real speech . 
postdoc f: , except that these are really truly , ther there 's a segment in o the one i did n the first one that i did for i for this , where truly w we 're hearing you breathing like as if we 're you 're in our ear , and it 's like i y i , breath is natural , but not postdoc f: except that we 're trying to mimic , i see what you 're saying . you 're saying that the pda application would have , have to cope with breath . grad b: but more people than just pda users are interested in this corpus . so so mean you 're right grad b: but we do n't wanna w remove it from the corpus , in terms of delivering it because the people will want it in there . professor a: i right . if if it gets in the way of what somebody is doing with it then you might wanna have some method which will allow you to block it , but you it 's real data . you do n't wanna b but you do n't professor a: if s , if there 's a little bit of noise out there , and somebody is talking about something they 're doing , that 's part of what we accept as part of a real meeting , even and we have the f the fan and the in the projector up there , and , this is it 's this is actual that we wanna work with . postdoc f: this is in very interesting because i it has a i it shows very clearly the contrast between , speech recognition research and discourse research because in discourse and linguistic research , what counts is what 's communit communicative . and breath , everyone breathes , they breathe all the time . and once in a while breath is communicative , but r very rarely . ok , so now , i had a discussion with chuck about the data structure and the idea is that the transcripts will that get stored as a master there 'll be a master transcript which has in it everything that 's needed for both of these uses . and the one that 's used for speech recognition will be processed via scripts . , like , don 's been writing scripts and , to process it for the speech recognition side . discourse side will have this side over he the we 'll have a s ch , not being very fluent here . but , this the discourse side will have a script which will stri strip away the things which are non - communicative . ok . so then the then let 's think about the practicalities of how we get to that master copy with reference to breaths . so what i would r what i would wonder is would it be possible to encode those automatically ? could we get a breath detector ? postdoc f: , you just have no idea . , if you 're getting a breath several times every minute , and just simply the keystrokes it takes to negotiate , to put the boundaries in , to type it in , i it 's just a huge amount of time . postdoc f: and you wanna be it 's used , and you wanna be it 's done as efficiently as possible , and if it can be done automatically , that would be ideal . postdoc f: , ok . so now there 's another possibility which is , the time boundaries could mark off words from nonwords . and that would be extremely time - effective , if that 's sufficient . professor a: i ' m think if it 's too hard for us to annotate the breaths per se , we are gon na be building up models for these things and these things are somewhat self - aligning , so if so , we i if we say there is some a thing which we call a " breath " or a " breath - in " or " breath - out " , the models will learn that thing . , so but you do want them to point them at some region where the breaths really are . so postdoc f: and that would n't be a problem to have it , pause plus breath plus laugh plus sneeze ? 
professor a: , i there is there 's this dynamic tension between marking everything , as , and marking just a little bit and counting on the statistical methods . the more we can mark the better . but if there seems to be a lot of effort for a small amount of reward in some area , and this might be one like this although i 'd be interested to h get input from liz and andreas on this to see if they cuz they ' ve - they ' ve got lots of experience with the breaths in , their transcripts . professor a: actually , yes they do , but we can handle that without them here . but but , you were gon na say something about phd g: , , one possible way that we could handle it is that , as the transcribers are going through , and if they get a hunk of speech that they 're gon na transcribe , u th they 're gon na transcribe it because there 's words in there or whatnot . if there 's a breath in there , they could transcribe that . postdoc f: that 's what they ' ve been doing . so , within an overlap segment , they do this . phd g: but right . but if there 's a big hunk of speech , let 's say on morgan 's mike where he 's not talking , do n't worry about that . so what we 're saying is , there 's no guarantee that , so for the chunks that are transcribed , everything 's transcribed . but outside of those boundaries , there could have been that was n't transcribed . so you just somebody ca n't rely on that data and say " that 's perfectly clean data " . do you see what i ' m saying ? phd g: so i would say do n't tell them to transcribe anything that 's outside of a grouping of words . phd e: , and that 's that quite co corresponds to the way i try to train the speech - nonspeech detector , as i really try to not to detect those breaths which are not within a speech chunk but with which are just in a silence region . and they so they hopefully wo n't be marked in those channel - specific files . professor a: u i wanted to comment a little more just for clarification about this business about the different purposes . professor a: see , in a way this is a really key point , that for speech recognition , research , e a it 's not just a minor part . , the i would say the core thing that we 're trying to do is to recognize the actual , meaningful components in the midst of other things that are not meaningful . so it 's critical it 's not just incidental it 's critical for us to get these other components that are not meaningful . because that 's what we 're trying to pull the other out of . that 's our problem . if we had nothing if we had only linguistically - relevant things if we only had changes in the spectrum that were associated with words , with different spectral components , and , we did n't have noise , we did n't have convolutional errors , we did n't have extraneous , behaviors , and moving your head and all these sorts of things , then , actually speech recognition i is n't that bad right now . you can it 's the technology 's come along pretty . the the reason we still complain about it is because is when you have more realistic conditions then things fall apart . postdoc f: ok , fair enough . i , what i was wondering is what at what level does the breathing aspect enter into the problem ? 
because if it were likely that a pda would be able to be built which would get rid of the breathing , so it would n't even have to be processed at thi at this computational le , let me see , it 'd have to be computationally processed to get rid of it , but if there were , like likely on the frontier , a good breath extractor then , and then you 'd have to professor a: that and we do n't either . so it 's it right now it 's just raw d it 's just data that we 're collecting , and so we do n't wanna presuppose that people will be able to get rid of particular degradations because that 's actually the research that we 're trying to feed . so , an and maybe in five years it 'll work really , and it 'll only mess - up ten percent of the time , but then we would still want to account for that ten percent , postdoc f: i there 's another aspect which is that as we ' ve improved our microphone technique , we have a lot less breath in the more recent , recordings , so it 's in a way it 's an artifact that there 's so much on the earlier ones . phd g: one of the , just to add to this one of the ways that we will be able to get rid of breath is by having models for them . , that 's what a lot of people do nowadays . and so in order to build the model you need to have some amount of it marked , so that where the boundaries are . so , i do n't think we need to worry a lot about breaths that are happening outside of a , conversation . we do n't have to go and search for them to mark them , but , if they 're there while they 're transcribing some hunk of words , i 'd say put them in if possible . postdoc f: ok , and it 's also the fact that they differ a lot from one channel to the other because of the way the microphone 's adjusted .
Topics discussed by the Berkeley Meeting Recorder group included the status of the first test set of digits data, naming conventions for files, speaker identification tags, and encoding files with details about the recording. The group also discussed a proposal for a grant from the NSF's ITR (Information Technology Research) program, transcriptions, and efforts by speaker mn005 to detect speaker overlap using harmonicity-related features. Particular focus was paid to questions about transcription procedures, i.e. how to deal with overlooked backchannels and audible breaths.

A small percentage of transcripts will be changed to reflect misread, uncorrected digits. A speaker database will be compiled to establish consistent links between speakers and their corresponding identification tags. Sections of densely overlapping speech will require hand-checking so that overlooked backchannels may be manually segmented and labelled. The transcribers should only code audible breaths within a grouping of words, not outside regions of continuous speech. It was further determined that audible breaths are an important facet of recorded speech, and that removing them from the corpus would be contrary to the aims of the project. Speaker mn005 will prepare his results for detecting speaker overlap and present them at the next meeting.

During digits readings, subjects tend to chunk numbers together rather than reading each number separately. When working from the mixed channel, transcribers may select only one start and end time for overlapping speech, resulting in points of overlap that are less tightly tuned. Transcribers are likely to overlook backchannels in densely populated sections of speaker overlap. Speaker mn014 reported that this is also problematic for the automatic detection of speech and non-speech, as backchannels that are very short and not loud enough will inevitably be overlooked. Speaker mn005 reported problems distinguishing between possible harmonics and other frequency peaks, and in creating an algorithm for obtaining the instantaneous frequency. The encoding of all audible breaths is too time-consuming.

The first test set of digits is complete and includes 4,000 lines, each comprising between 1 and 10 digits. New digits forms were distributed for eliciting different prosodic groupings of numbers. New naming conventions were discussed as a means of facilitating the sorting process. Existing files will be renamed so that all filenames are of equal length, and similar changes will be made to speaker identification tags. Filenames will also specify channel, microphone, and broadcaster information. A proposal is being drafted for a grant from the NSF's ITR program to extend the research initiatives of the Meeting Recorder project. Speaker fe008 is performing channel-by-channel transcriptions to create tighter time bins; tentative plans are to assign single channels to the transcriber pool and then piece them together afterwards. Efforts by speaker mn005 are in progress to detect speaker overlap in the mixed signal using harmonicity-related features. For determining the instantaneous frequency, speaker me013 recommended deriving the maxima from the energy at multiples of a given frequency. It was also suggested that speaker mn005 should determine whether portions of the signal are voiced or unvoiced, as voiced intervals in which a relatively low fraction of energy falls in the harmonic sequence are likely to indicate sections of overlap.
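For readers following the harmonicity thread, the phase-derivative ("instantaneous frequency") estimate that mn005 describes is, in its generic textbook form, roughly the following. This is a sketch under assumed parameters, not the algorithm of the specific paper he is following (whose harmonic-grouping rule he reports being unable to reproduce).

```python
import numpy as np

def instantaneous_frequency(x, sr, n=512, hop=1):
    """Generic phase-derivative IF estimate per FFT bin: take two FFTs
    'hop' samples apart, measure how far each bin's phase advance
    deviates from its nominal value, and convert that deviation to Hz.
    Bins that lock onto a partial all report (nearly) the same IF,
    which is the clustering around harmonics described in the dialogue."""
    w = np.hanning(n)
    X1 = np.fft.rfft(w * x[:n])
    X2 = np.fft.rfft(w * x[hop:hop + n])
    nominal = np.fft.rfftfreq(n, d=1.0 / sr)        # bin centre frequencies (Hz)
    expected = 2 * np.pi * nominal * hop / sr       # nominal phase advance per bin
    dphi = np.angle(X2) - np.angle(X1) - expected
    dphi = np.mod(dphi + np.pi, 2 * np.pi) - np.pi  # wrap deviation into (-pi, pi]
    return nominal + dphi * sr / (2 * np.pi * hop)  # IF estimate per bin, in Hz
```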
###dialogue: professor a: ! maybe it 's just , how many t u how many times you crash in a day . phd g: or maybe it 's once you ' ve done enough meetings it wo n't crash on you anymore . professor a: that 's that 's great . do we have an agenda ? liz and andreas ca n't sh ca n't , ca n't come . grad b: i have no idea but got it a few minutes ago . right when you were in my office it arrived . grad b: so , does anyone have any a agenda items other than me ? i actually have one more also which is to talk about the digits . professor a: , right , so i was just gon na talk briefly about the nsf itr . professor a: , i wo n't say much , but then , you said wanna talk about digits ? grad b: i have a short thing about digits and then i wanna talk a little bit about naming conventions , although it 's unclear whether this is the right place to talk about it . so maybe just talk about it very briefly and take the details to the people who for whom it 's relevant . professor a: if we , we should n't add things in just to add things in . i ' m actually pretty busy today , so if we can we grad b: so the only thing i wanna say about digits is , we are done with the first test set . there are probably forms here and there that are marked as having been read that were n't really read . so i wo n't really know until i go through all the transcriber forms and extract out pieces that are in error . so i wa . two things . the first is what should we do about digits that were misread ? my opinion is , we should just throw them out completely , and have them read again by someone else . , the grouping is completely random , grad b: so it 's perfectly fine to put a group together again of errors and have them re - read , just to finish out the test set . grad b: , the other thing you could do is change the transcript to match what they really said . so those are the two options . professor a: but there 's often things where people do false starts . i know i ' ve done it , where i say a grad b: what the transcribers did with that is if they did a correction , and they eventually did read the right string , you extract the right string . phd g: , you 're talking about where they completely read the wrong string and did n't correct it ? postdoc f: , and s and you 're talking string - wise , you 're not talking about the entire page ? grad b: and so the two options are change the transcript to match what they really said , but then the transcript is n't the aurora test set anymore . i do n't think that really matters because the conditions are so different . and that would be a little easier . professor a: , i would , tak do the easy way , it it 's kinda , wh who knows what studies people will be doing on speaker - dependent things professor a: so that 's a couple hours of , speech , probably . which is a reasonable test set . grad b: and , jane , i do have a set of forms which you have copies of somewhere . grad b: , i was just wond i had all of them back from you . and then the other thing is that , the forms in front of us here that we 're gon na read later , were suggested by liz grad b: because she wanted to elicit some different prosodics from digits . and so , wanted people to , take a quick look at the instructions grad b: and the way it wa worked and see if it makes sense and if anyone has any comments on it . professor a: i see . and the decision here , was to continue with the words rather than the numerics . grad b: , yes , although we could switch it back . the problem was o and zero . 
although we could switch it back and tell them always to say " zero " or always to say " o " . professor a: or neither . but it 's just two thing ways that you can say it . professor a: right ? that 's the only thought i have because if you t start talking about these , u tr she 's trying to get at natural groupings , but it there 's nothing natural about reading numbers this way . grad b: the the problem also is she did want to stick with digits . i ' m speaking for her since she 's not here . but , the other problem we were thinking about is if you just put the numerals , they might say forty - three instead of four three . postdoc f: , if there 's space , though , between them . , you can with when you space them out they do n't look like , forty - three anymore . grad b: , she and i were talking about it , and she felt that it 's very , very natural to do that chunking . professor a: she 's right . it 's it 's a different problem . it 's a it 's an interesting problem , we ' ve done with numbers before , and sometimes people if you say s " three nine eight one " sometimes people will say " thirty - nine eighty - one " or " three hundred eighty - nine one " , or i do n't think they 'd say that , but th professor a: but , th thirty - eight ninety - one is probably how they 'd do it . grad b: so . , this is something that liz and i spoke about and , since this was something that liz asked for specifically , we need to defer to her . professor a: - . ok . , we 're probably gon na be collecting meetings for a while and if we decide we still wanna do some digits later we might be able to do some different ver different versions , professor a: but this is the next suggestion , so . ok , so e l i , let me , get my short thing out about the nsf . i sent this actually this is maybe a little side thing . , i sent to what we had , in some previous mail , as the right joint thing to send to , which was " m mtg rcdr hyphen joint " . grad b: it 's that 's because they set the one up at uw that 's not on our side , that 's on the u - dub side . and so u - uw set it up as a moderated list . grad b: and , i have no idea whether it actually ever goes to anyone so you might just wanna mail to mari professor a: no no , th i got , little excited notes from mari and jeff and so on , grad b: so the moderator actually did repost it . cuz i had sent one earlier actually the same thing happened to me i had sent one earlier . the message says , " you 'll be informed " and then i was never informed but i got replies from people indicating that they had gotten it , so . it 's just to prevent spam . professor a: so o ok . , anyway , i everybody here are y are you are on that list , right ? so you got the note ? ok . so this was , a , proposal that we put in before on more higher level , issues in meetings , from i higher level from my point of view . , and , meeting mappings , and , so is i for it was a proposal for the itr program , information technology research program 's part of national science foundation . it 's the second year of their doing , these grants . they 're they 're a lot of them are some of them anyway , are larger grants than the usual , small nsf grants , and . so , they 're very competitive , and they have a first phase where you put in pre - proposals , and we , got through that . and so th the next phase will be we 'll actually be doing a larger proposal . and i ' m i hope to be doing very little of it . 
and , which was also true for the pre - proposal , there 'll be bunch of people working on it . so . grad b: i that 's a good thing cuz that way i got my papers done early . professor a: my favorite is was when one reviewer says , " , this should be far more detailed " , and the nex the next reviewer says , " , there 's way too much detail " . grad b: or " this is way too general " , and the other reviewer says , " this is way too specific " . this is way too hard , way too easy . grad b: it sounded like they the first gate was pretty easy . is that right ? that they did n't reject a lot of the pre - proposals ? professor a: i should go back and look . i did n't i do n't think that 's true . professor a: but they have to weed out enough so that they have enough reviewers . so , , maybe they did n't r weed out as much as usual , but it 's usually a pretty but it . it 's it 's certainly not i ' m that it 's not down to one in two of what 's left . i ' m it 's , professor a: there 's different numbers of w awards for different size they have three size grants . this one there 's , see the small ones are less than five hundred thousand total over three years and that they have a fair number of them . and the large ones are , boy , i forget , more than a million and a half , more than two million like that . and and we 're in the middle category . we 're , , i forget what it was . but , i do n't remember , but it 's pr probably along the li i could be wrong on this , but probably along the lines of fifteen or that they 'll fund , or twenty . when they do you do how many they funded when they f in chuck 's , that he got last year ? grad b: it was smaller , that it was like four or five , was n't it ? professor a: last time they just had two categories , small and big , and this time they came up with a middle one , so it 'll there 'll be more of them that they fund than of the big . phd g: if we end up getting this , what will it mean to icsi in terms of , w wh where will the money go to , what would we be doing with it ? professor a: , it i none of it will go for those yachts that we ' ve talking about . grad b: it 's go higher level than we ' ve been talking about for meeting recorder . professor a: the other things that we have , been working on with , the c with communicator , especially with the newer things with the more acoustically - oriented things are lower level . and , this is dealing with , mapping on the level of , the conversation of mapping the conversations professor a: to different planes . so . but , . so it 's all that none of us are doing right now , or none of us are funded for , so it 's it would be new . phd g: so assuming everybody 's completely busy now , it means we 're gon na hafta , hire more students , or , something ? professor a: there 's evenings , and there 's weekends , there would be new hires , and there would be expansion , but , also , there 's always for everybody there 's always things that are dropping off , grants that are ending , or other things that are ending , so , professor a: there 's a continual need to bring in new things . but but there definitely would be new , students , professor a: we got we have , two of them are two in the c there 're two in the class already here , and then and , then there 's a third who 's doing a project here , who , but he wo n't be in the country that long , maybe another will end up . actually there is one other guy who 's looking that 's that guy , jeremy ? . 
anyway , that 's all i was gon na say is that 's , that 's and we 're sorta preceding to the next step , and , it 'll mean some more work , , in march in getting the proposal out , and then , it 's , we 'll see what happens . , the last one was that you had there , was about naming ? grad b: it just , we ' ve been cutting up sound files , in for ba both digits and for , doing recognition . and liz had some suggestions on naming and it just brought up the whole issue that has n't really been resolved about naming . so , one thing she would like to have is for all the names to be the same length so that sorting is easier . , same number of characters so that when you 're sorting filenames you can easily extract out bits and pieces that you want . and that 's easy enough to do . and i do n't think we have so many meetings that 's a big deal just to change the names . so that means , instead of calling it " mr one " , " mr two " , you 'd call it " mrm zero one " , mrm zero two , things like that . just so that they 're all the same length . postdoc f: but , when you , do things like that you can always as long as you have , you can always search from the beginning or the end of the string . grad b: alright , so we have th we 're gon na have the speaker id , the session , information on the microphones , grad b: and so if each one of those is a fixed length , the sorting becomes a lot easier . grad d: she wanted to keep them the same lengths across different meetings also . so like , the nsa meeting lengths , all filenames are gon na be the same length as the meeting recorder meeting names ? grad b: and as i said , the it 's we just do n't have that many that 's a big deal . grad b: and so , , at some point we have to take a few days off , let the transcribers have a few days off , make no one 's touching the data and reorganize the file structures . and when we do that we can also rationalize some of the naming . postdoc f: i would think though that the transcribe the transcripts themselves would n't need to have such lengthy names . so , you 're dealing with a different domain there , and with start and end times and all that , and channels and , grad b: right . so the only thing that would change with that is just the directory names , grad b: i would change them to match . so instead of being mr one it would be mrm zero one . but i do n't think that 's a big deal . grad b: so for m the meetings we were thinking about three letters and three numbers for meeting i ds . , for speakers , m or f and then three numbers , for , and , that also brings up the point that we have to start assembling a speaker database so that we get those links back and forth and keep it consistent . and then , the microphone issues . we want some way of specifying , more than looking in the " key " file , what channel and what mike . what channel , what mike , and what broadcaster . or i how to s say it . so with this one it 's this particular headset with this particular transmitter w as a wireless . and that one is a different headset and different channel . and so we just need some naming conventions on that . and , that 's gon na become especially important once we start changing the microphone set - up . we have some new microphones that i 'd like to start trying out , once i test them . and then we 'll need to specify that somewhere . so i was just gon na do a fixed list of , microphones and types . 
so , as i said professor a: , since we have such a short agenda list i wi i will ask how are the transcriptions going ? postdoc f: the the news is that i ' ve i s so in s so i ' ve switched to start my new sentence . i switched to doing the channel - by - channel transcriptions to provide , the , tighter time bins for partly for use in thilo 's work and also it 's of relevance to other people in the project . and , i discovered in the process a couple of interesting things , which , one of them is that , it seems that there are time lags involved in doing this , , using an interface that has so much more complexity to it . and i and i wanted to maybe ask , chuck to help me with some of the questions of efficiency . maybe i was thinking maybe the best way to do this in the long run may be to give them single channel parts and then piece them together later . and i have a script , piece them together . , so it 's like , i know that take them apart and put them together and i 'll end up with the representation which is where the real power of that interface is . and it may be that it 's faster to transcribe a channel at a time with only one , sound file and one , set of , utterances to check through . professor a: i ' m a little confused . that one of the reason we thought we were so much faster than , the other transcription , thing was that we were using the mixed file . postdoc f: , yes . ok . but , with the mixed , when you have an overlap , you only have a choice of one start and end time for that entire overlap , which means that you 're not tightly , tuning the individual parts th of that overlap by different speakers . so someone may have only said two words in that entire big chunk of overlap . and for purposes of , things like , so things like training the speech - nonspeech segmentation thing . th - it 's necessary to have it more tightly tuned than that . and w and , is a it would be wonderful if , it 's possible then to use that algorithm to more tightly tie in all the channels after that but , , i ' ve th the so , i exactly where that 's going at this point . but m i was experimenting with doing this by hand and i really do think that it 's wise that we ' ve had them start the way we have with , m y working off the mixed signal , having the interface that does n't require them to do the ti , the time bins for every single channel at a t , through the entire interaction . , i did discover a couple other things by doing this though , and one of them is that , once in a while a backchannel will be overlooked by the transcriber . as you might expect , because when it 's a b backchannel could happen in a very densely populated overlap . and if we 're gon na study types of overlaps , which is what i wanna do , an analysis of that , then that really does require listening to every single channel all the way through the entire length for all the different speakers . now , for only four speakers , that 's not gon na be too much time , but if it 's nine speakers , then that i that is more time . so it 's li , wondering it 's like this it 's really valuable that thilo 's working on the speech - nonspeech segmentation because maybe , we can close in on that wi without having to actually go to the time that it would take to listen to every single channel from start to finish through every single meeting . phd e: , but those backchannels will always be a problem . 
especially if they 're really short and they 're not very loud and so it can it will always happen that also the automatic s detection system will miss some of them , so . postdoc f: so then , maybe the answer is to , listen especially densely in places of overlap , just so that they 're not being overlooked because of that , and count on accuracy during the sparser phases . cuz there are large s spaces of the that 's a good point . there are large spaces where there 's no overlap . someone 's giving a presentation , or whatever . that 's that 's a good thought . and , let 's see , there was one other thing i was gon na say . it 's really interesting data to work with , i have to say , it 's very enjoyable . i really , not a problem spending time with these data . really interesting . and not just because i ' m in there . no , it 's real interesting . professor a: , it 's a short meeting . , you 're still in the midst of what you 're doing from what you described last time , i assume , phd c: i have n't results , yet but , i ' m continue working with the mixed signal now , after the last experience . phd c: and and i ' m tried to , adjust the to improve , an harmonicity , detector that , i implement . but i have problem because , i get , , very much harmonics now . , harmonic possi possible harmonics , and now i ' m trying to find , some a , of h of help , using the energy to distinguish between possible harmonics , and other fre frequency peaks , that , corres not harmonics . and , i have to talk with y with you , with the group , about the instantaneous frequency , because i have , an algorithm , and , i get , mmm , t results similar results , like , the paper , that i am following . but , the rules , that , people used in the paper to distinguish the harmonics , is does n't work . and i not that i , the way o to ob the way to obtain the instantaneous frequency is right , or it 's not right . i have n't enough file feeling to distinguish what happened . professor a: , i 'd like to talk with you about it . if if , if i do n't have enough time and y you wanna discuss with someone else some someone else besides us that you might want to talk to , might be stephane . phd g: is is this the algorithm where you hypothesize a fundamental , and then get the energy for all the harmonics of that fundamental ? phd c: no . i do n't proth process the fundamental . i , ehm i calculate the phase derivate using the fft . and the algorithm said that , if you change the , nnn the x - the frequency " x " , using the in the instantaneous frequency , you can find , how , in several frequencies that proba probably the harmonics , the errors of peaks the frequency peaks , move around these , frequency harmonic the frequency of the harmonic . and , if you compare the instantaneous frequency , of the , continuous , , filters , that , they used , to get , the instantaneous frequency , it probably too , you can find , that the instantaneous frequency for the continuous , the output of the continuous filters are very near . and in my case i in equal with our signal , it does n't happened . professor a: . i 'd hafta look at that and think about it . it 's it 's i have n't worked with that either so i ' m not the way the simple - minded way i suggested was what chuck was just saying , is that you could make a sieve . 
, y you actually say that here is let 's let 's hypothesize that it 's this frequency or that frequency , maybe you could use some other cute methods to , short cut it by , making some guesses , but uh , i would , you could make some guesses from , from the auto - correlation but then , given those guesses , try , only looking at the energy at multiples of the of that frequency , and see how much of the take the one that 's maximum . call that the phd c: but , i know many people use , low - pass filter to get , the pitch . phd g: but i but the harmonics are gon na be , , i what the right word is . they 're gon na be dampened by the , vocal tract , right ? the response of the vocal tract . and so just looking at the energy on those at the harmonics , is that gon na ? phd g: i m what you 'd like to do is get rid of the effect of the vocal tract . right ? and just look at the signal coming out of the glottis . professor a: but i do n't need if you need to get rid of it . that 'd be but i if it 's ess if it 's essential . , cuz the main thing is that , you 're trying wha what are you doing this for ? you 're trying distinguish between the case where there is , where there are more than , where there 's more than one speaker and the case where there 's only one speaker . so if there 's more than one speaker , i you could i you 're so you 're not distinguished between voiced and unvoiced , so , i if you do n't care about that see , if you also wanna just determine if you also wanna determine whether it 's unvoiced , then you want to look at high frequencies also , because the f the fact that there 's more energy in the high frequencies is gon na be an ob obvious cue that it 's unvoiced . but , i but , other than that i as far as the one person versus two persons , it would be primarily a low frequency phenomenon . and if you looked at the low frequencies , yes the higher frequencies are gon na there 's gon na be a spectral slope . the higher frequencies will be lower energy . but so what . that 's w phd c: i will prepare for the next week , all my results about the harmonicity and will try to come in and to discuss here , because , i have n't enough feeling to u many time to understand what happened with the with , so many peaks , , and i see the harmonics there many time but , there are a lot of peaks , that , they are not harmonics . i have to discover what is the w the best way to c to use them professor a: , but i do n't think you can you 're not gon na be able to look at every frame , so i really thought that the best way to do it , and i ' m speaking with no experience on this particular point , but , my impression was that the best way to do it was however you you ' ve used instantaneous frequency , whatever . however you ' ve come up you with your candidates , you wanna see how much of the energy is in that as coppo as opposed to all of the all the total energy . and , if it 's voiced , i so y maybe you do need a voiced - unvoiced determination too . but if it 's voiced , and the , e the fraction of the energy that 's in the harmonic sequence that you 're looking at is relatively low , then it should be then it 's more likely to be an overlap . phd c: is height . this this is the idea i had to compare the ratio of the energy of the harmonics with the , total energy in the spectrum and try to get a ratio to distinguish between overlapping and speech . professor a: but you 're looking a y you 're looking at let 's take a second with this . 
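( editor's note: a rough numeric sketch of the sieve just described — hypothesize candidate fundamentals , sum the spectral energy near their multiples , and keep the best candidate's share of the total energy , which is also the overlap ratio phd c mentions . frame size and search range are assumptions . )

```python
import numpy as np

def harmonic_sieve(frame, sr, f0_min=80.0, f0_max=300.0, step=5.0):
    """For each candidate fundamental, sum spectral energy near its
    harmonics; return the best candidate and its fraction of the total
    energy. On a voiced frame, a low fraction would hint at overlapped
    speakers (the ratio idea mentioned above)."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    total = spec.sum()
    best_f0, best_ratio = None, 0.0
    for f0 in np.arange(f0_min, f0_max, step):
        bins = np.searchsorted(freqs, np.arange(f0, sr / 2.0, f0))
        bins = np.clip(bins, 1, len(spec) - 2)
        # sum a 3-bin neighborhood per harmonic to absorb window smearing
        energy = sum(spec[b - 1:b + 2].sum() for b in bins)
        if energy / total > best_ratio:
            best_f0, best_ratio = f0, energy / total
    return best_f0, best_ratio

sr = 8000
t = np.arange(2048) / sr
frame = sum(np.sin(2 * np.pi * k * 120.0 * t) for k in range(1, 6))
print(harmonic_sieve(frame, sr))   # should find f0 = 120 with a high ratio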
you 're looking at f at the phase derivative , in , what domain ? this is in bands ? or or phd c: it 's a it 's a o i w the band is , from zero to four kilohertz . and i ot i phd c: . i u m t i used two m two method two methods . , one , based on the f , ftt . to fft to obtain the or to study the harmonics from the spectrum directly , and to study the energy and the multiples of frequency . and another algorithm i have is the in the instantaneous frequency , based on the fft to calculate the phase derivate in the time . , n the d i have two algorithms . but , in m i in my opinion the instantaneous frequency , the behavior , was th it was very interesting . because i saw , how the spectrum concentrate , around the harmonic . but then when i apply the rule , of the in the instantaneous frequency of the ne of the continuous filter in the near filter , the rule that , people propose in the paper does n't work . and i why . professor a: but the instantaneous frequency , would n't that give you something more like the central frequency of the , of the where most of the energy is ? , if you does i does it why would it correspond to pitch ? phd c: i get the spectrum , and i represent all the frequency . and when ou i obtained the instantaneous frequency . and i change the @ , using the instantaneous frequency , here . professor a: , so you scale you s you do a scaling along that axis according to instantaneous phd c: because when , when i i use these frequency , the range is different , and the resolution is different . i observe more or less , thing like this . the paper said that , these frequencies are probably , harmonics . but , they used , a rule , based in the because to calculate the instantaneous frequency , they use a hanning window . and , they said that , if these peak are , harmonics , f of the contiguous , w , filters are very near , or have to be very near . but , phh ! i do n't i don i and i what is the distance . and i tried to put different distance , to put difference , length of the window , different front sieve , pfff ! and i not what happened . professor a: ok , i ' m not following it enough . i 'll probably gon na hafta look at the paper , but which i ' m not gon na have time to do in the next few days , but i ' m curious about it . postdoc f: i did i it did occur to me that this is , the return to the transcription , that there 's one third thing i wanted to ex raise as a to as an issue which is , how to handle breaths . so , i wanted to raise the question of whether people in speech recognition want to know where the breaths are . and the reason i ask the question is , aside from the fact that they 're very time - consuming to encode , the fact that there was some i had the indication from dan ellis in the email that i sent to you , and about , that in principle we might be able to , handle breaths by accessi by using cross - talk from the other things , be able that in principle , maybe we could get rid of them , so maybe and i was i , we had this an and i did n't could n't get back to you , but the question of whether it 'd be possible to eliminate them from the audio signal , which would be the ideal situation , professor a: we - see , we 're dealing with real speech and we 're trying to have it be as real as possible and breaths are part of real speech . 
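( editor's note: for reference , one standard way to compute the instantaneous frequency phd c describes — the phase advance between two windowed ffts taken one sample apart . this is a generic textbook construction , not necessarily the algorithm of the paper he is following ; the harmonic - grouping rule is the part left open in the discussion . )

```python
import numpy as np

def instantaneous_frequency(frame, sr):
    """Per-bin instantaneous frequency from the phase advance between two
    windowed FFTs one sample apart (generic method; not necessarily the
    paper's). Bins near a strong harmonic report an IF that clusters at
    the harmonic's frequency -- the "concentration" effect noted above."""
    n = len(frame) - 1
    win = np.hanning(n)
    ph0 = np.angle(np.fft.rfft(frame[:-1] * win))
    ph1 = np.angle(np.fft.rfft(frame[1:] * win))
    k = np.arange(len(ph0))
    expected = 2.0 * np.pi * k / n            # nominal advance per sample
    dev = ph1 - ph0 - expected
    dev = np.mod(dev + np.pi, 2.0 * np.pi) - np.pi   # wrap to (-pi, pi]
    return (expected + dev) * sr / (2.0 * np.pi)

sr = 8000
t = np.arange(1025) / sr
ifreq = instantaneous_frequency(np.sin(2 * np.pi * 120.0 * t), sr)
print(ifreq[12:20].round(1))   # bins around 120 Hz report an IF near 120
```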
postdoc f: , except that these are really truly , ther there 's a segment in o the one i did n the first one that i did for i for this , where truly w we 're hearing you breathing like as if we 're you 're in our ear , and it 's like i y i , breath is natural , but not postdoc f: except that we 're trying to mimic , i see what you 're saying . you 're saying that the pda application would have , have to cope with breath . grad b: but more people than just pda users are interested in this corpus . so so mean you 're right grad b: but we do n't wanna w remove it from the corpus , in terms of delivering it because the people will want it in there . professor a: i right . if if it gets in the way of what somebody is doing with it then you might wanna have some method which will allow you to block it , but you it 's real data . you do n't wanna b but you do n't professor a: if s , if there 's a little bit of noise out there , and somebody is talking about something they 're doing , that 's part of what we accept as part of a real meeting , even and we have the f the fan and the in the projector up there , and , this is it 's this is actual that we wanna work with . postdoc f: this is in very interesting because i it has a i it shows very clearly the contrast between , speech recognition research and discourse research because in discourse and linguistic research , what counts is what 's communit communicative . and breath , everyone breathes , they breathe all the time . and once in a while breath is communicative , but r very rarely . ok , so now , i had a discussion with chuck about the data structure and the idea is that the transcripts will that get stored as a master there 'll be a master transcript which has in it everything that 's needed for both of these uses . and the one that 's used for speech recognition will be processed via scripts . , like , don 's been writing scripts and , to process it for the speech recognition side . discourse side will have this side over he the we 'll have a s ch , not being very fluent here . but , this the discourse side will have a script which will stri strip away the things which are non - communicative . ok . so then the then let 's think about the practicalities of how we get to that master copy with reference to breaths . so what i would r what i would wonder is would it be possible to encode those automatically ? could we get a breath detector ? postdoc f: , you just have no idea . , if you 're getting a breath several times every minute , and just simply the keystrokes it takes to negotiate , to put the boundaries in , to type it in , i it 's just a huge amount of time . postdoc f: and you wanna be it 's used , and you wanna be it 's done as efficiently as possible , and if it can be done automatically , that would be ideal . postdoc f: , ok . so now there 's another possibility which is , the time boundaries could mark off words from nonwords . and that would be extremely time - effective , if that 's sufficient . professor a: i ' m think if it 's too hard for us to annotate the breaths per se , we are gon na be building up models for these things and these things are somewhat self - aligning , so if so , we i if we say there is some a thing which we call a " breath " or a " breath - in " or " breath - out " , the models will learn that thing . , so but you do want them to point them at some region where the breaths really are . so postdoc f: and that would n't be a problem to have it , pause plus breath plus laugh plus sneeze ? 
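( editor's note: the master - transcript plan just described — one fully marked - up transcript , with scripts deriving the recognition and discourse views — could be as small as a token filter . the tag inventory below is made up for illustration ; the real markup may differ . )

```python
# Hypothetical event tags; the actual master transcript's markup may differ.
NON_COMMUNICATIVE = {"{breath}", "{breath-in}", "{breath-out}", "{sneeze}"}

def discourse_view(utterance: str) -> str:
    """Strip non-communicative event tags for discourse research; the
    speech-recognition view would keep them, since they feed the models."""
    return " ".join(t for t in utterance.split()
                    if t.lower() not in NON_COMMUNICATIVE)

print(discourse_view("so {breath} maybe we reorganize {breath-in} the files"))
```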
professor a: , i there is there 's this dynamic tension between marking everything , as , and marking just a little bit and counting on the statistical methods . the more we can mark the better . but if there seems to be a lot of effort for a small amount of reward in some area , and this might be one like this although i 'd be interested to h get input from liz and andreas on this to see if they cuz they ' ve - they ' ve got lots of experience with the breaths in , their transcripts . professor a: actually , yes they do , but we can handle that without them here . but but , you were gon na say something about phd g: , , one possible way that we could handle it is that , as the transcribers are going through , and if they get a hunk of speech that they 're gon na transcribe , u th they 're gon na transcribe it because there 's words in there or whatnot . if there 's a breath in there , they could transcribe that . postdoc f: that 's what they ' ve been doing . so , within an overlap segment , they do this . phd g: but right . but if there 's a big hunk of speech , let 's say on morgan 's mike where he 's not talking , do n't worry about that . so what we 're saying is , there 's no guarantee that , so for the chunks that are transcribed , everything 's transcribed . but outside of those boundaries , there could have been that was n't transcribed . so you just somebody ca n't rely on that data and say " that 's perfectly clean data " . do you see what i ' m saying ? phd g: so i would say do n't tell them to transcribe anything that 's outside of a grouping of words . phd e: , and that 's that quite co corresponds to the way i try to train the speech - nonspeech detector , as i really try to not to detect those breaths which are not within a speech chunk but with which are just in a silence region . and they so they hopefully wo n't be marked in those channel - specific files . professor a: u i wanted to comment a little more just for clarification about this business about the different purposes . professor a: see , in a way this is a really key point , that for speech recognition , research , e a it 's not just a minor part . , the i would say the core thing that we 're trying to do is to recognize the actual , meaningful components in the midst of other things that are not meaningful . so it 's critical it 's not just incidental it 's critical for us to get these other components that are not meaningful . because that 's what we 're trying to pull the other out of . that 's our problem . if we had nothing if we had only linguistically - relevant things if we only had changes in the spectrum that were associated with words , with different spectral components , and , we did n't have noise , we did n't have convolutional errors , we did n't have extraneous , behaviors , and moving your head and all these sorts of things , then , actually speech recognition i is n't that bad right now . you can it 's the technology 's come along pretty . the the reason we still complain about it is because is when you have more realistic conditions then things fall apart . postdoc f: ok , fair enough . i , what i was wondering is what at what level does the breathing aspect enter into the problem ? 
because if it were likely that a pda would be able to be built which would get rid of the breathing , so it would n't even have to be processed at thi at this computational le , let me see , it 'd have to be computationally processed to get rid of it , but if there were , like likely on the frontier , a good breath extractor then , and then you 'd have to professor a: that and we do n't either . so it 's it right now it 's just raw d it 's just data that we 're collecting , and so we do n't wanna presuppose that people will be able to get rid of particular degradations because that 's actually the research that we 're trying to feed . so , an and maybe in five years it 'll work really , and it 'll only mess - up ten percent of the time , but then we would still want to account for that ten percent , postdoc f: i there 's another aspect which is that as we ' ve improved our microphone technique , we have a lot less breath in the more recent , recordings , so it 's in a way it 's an artifact that there 's so much on the earlier ones . phd g: one of the , just to add to this one of the ways that we will be able to get rid of breath is by having models for them . , that 's what a lot of people do nowadays . and so in order to build the model you need to have some amount of it marked , so that where the boundaries are . so , i do n't think we need to worry a lot about breaths that are happening outside of a , conversation . we do n't have to go and search for them to mark them , but , if they 're there while they 're transcribing some hunk of words , i 'd say put them in if possible . postdoc f: ok , and it 's also the fact that they differ a lot from one channel to the other because of the way the microphone 's adjusted . ###summary: topics discussed by the berkeley meeting recorder group included the status of the first test set of digits data , naming conventions for files , speaker identification tags , and encoding files with details about the recording. the group also discussed a proposal for a grant from the nsf's itr ( information technology research ) program , transcriptions , and efforts by speaker mn005 to detect speaker overlap using harmonicity-related features. particular focus was paid to questions about transcription procedures , i.e . how to deal with overlooked backchannels , and audible breaths. a small percentage of transcripts will be changed to reflect mis-read , uncorrected digits. a speaker database will be compiled to establish consistent links between speakers and their corresponding identification tags. sections of densely overlapping speech will require hand-checking so that overlooked backchannels may be manually segmented and labelled. the transcribers should only code audible breaths within a grouping of words , and not outside regions of continuous speech. it was further determined that audible breaths are an important facet of recorded speech , and that removing them from the corpus would be contrary to the aims of the project. speaker mn005 will prepare his results for detecting speaker overlap and present them in the next meeting. during digits readings , subjects tend to chunk numbers together rather than reading each number separately. when working from the mixed channel , transcribers may select only one start and end time for overlapping speech , resulting in points of overlap that are less tightly tuned. transcribers are likely to overlook backchannels in densely populated sections of speaker overlap. 
speaker mn014 reported that this is also problematic for the automatic detection of speech and non-speech , as backchannels that are very short and not loud enough will inevitably be overlooked. speaker mn005 reported problems distinguishing between possible harmonics and other frequency peaks , and creating an algorithm for obtaining the instantaneous frequency. the encoding of all audible breaths is too time-consuming. the first test set of digits is complete and includes 4,000 lines , each comprising between 1-10 digits. new digits forms were distributed for eliciting different prosodic groupings of numbers. new naming conventions were discussed as means for facilitating the sorting process. existing files will be changed so that all filenames are of equal length. similar changes will be made to speaker identification tags. files will also contain information specifying channel , microphone , and broadcaster information. a proposal is being drafted for a grant from the nsf's itr program for extending the research initiatives of the meeting recorder project. speaker fe008 is performing channel-by-channel transcriptions to create tighter time bins. tentative plans are to assign single channels to the transcriber pool and then piece them together afterwards. efforts by speaker mn005 are in progress to detect speaker overlap in the mixed signal using harmonicity-related features. for determining the instantaneous frequency , speaker me013 recommended deriving the maxima from energy multiples of a given frequency. it was also suggested that speaker mn005 should determine whether portions of the signal are voiced or unvoiced , as voiced intervals reflecting a relatively low fraction of energy in the harmonic sequence are likely to indicate sections of overlap.
professor a: we 're going ? ok . sh - close your door on the way out ? professor a: probably wanna get this other door , too . so . what are we talking about today ? professor a: the both the sri system and the oth and for one thing that shows the difference between having a lot of training data or not , professor a: , the the best number we have on the english on near microphone only is three or four percent . professor a: and it 's significantly better than that , using fairly simple front - ends on the , with the sri system . so i th that the but that 's using a pretty huge amount of data , mostly not digits , but then again , . , mostly not digits for the actual training the h m ms whereas in this case we 're just using digits for training the h m professor a: did anybody mention about whether the sri system is a is doing the digits the wor as a word model or as a sub s sub - phone states ? phd e: . so , because it 's their very d huge , their huge system . and . but . so . there is one difference , the sri system the result for the sri system that are represented here are with adaptation . so there is it 's their complete system and including on - line unsupervised adaptation . phd e: and if you do n't use adaptation , the error rate is around fifty percent worse , if i remember . professor a: still . but but what i 'd be interested to do given that , is that we should take i that somebody 's gon na do this , right ? is to take some of these tandem things and feed it into the sri system , phd e: but i the main point is the data because i am not . our back - end is fairly simple but until now , the attempts to improve it or have fail , what chuck tried to do professor a: , but he 's doing it with the same data , right ? so to so there 's two things being affected . professor a: . one is that , there 's something simple that 's wrong with the back - end . we ' ve been playing a number of states i if he got to the point of playing with the number of gaussians yet but , but , so far he had n't gotten any big improvement , but that 's all with the same amount of data which is pretty small . and . professor a: , you could do that , but i ' m saying even with it not with that part not retrained , just using having the h m ms much better h m professor a: . but just train those h m ms using different features , the features coming from our aurora . phd e: but what would be interesting to see also is what perhaps it 's not related , the amount of data but the recording conditions . i . because it 's probably not a problem of noise , because our features are supposed to be robust to noise . it 's not a problem of channel , because there is normalization with respect to the channel . so professor a: i ' m . what what is the problem that you 're trying to explain ? phd e: the the fact that the result with the tandem and aurora system are so much worse . professor a: that the so much worse ? i but i ' m almost certain that it , that it has to do with the amount of training data . professor a: but but having a huge if if you look at what commercial places do , they use a huge amount of data . this is a modest amount of data . professor a: so . , ordinarily you would say " , given that you have enough occurrences of the digits , you can just train with digits rather than with , " but , if you have a huge in other words , do word models but if you have a huge amount of data then you 're going to have many occurrences of similar allophones . and that 's just a huge amount of training for it . 
so it 's it has to be that , because , as you say , this is near - microphone , it 's really pretty clean data . now , some of it could be the fact that let 's see , in the in these multi - train things did we include noisy data in the training ? , that could be hurting us actually , for the clean case . phd e: , actually we see that the clean train for the aurora proposals are better than the multi - train , professor a: it is if cuz this is clean data , and so that 's not too surprising . but . phd e: , o i what i meant is that , let 's say if we add enough data to train on the meeting recorder digits , i we could have better results than this . phd e: what i meant is that perhaps we can learn something from this , what 's wrong what is different between ti - digits and these digits and professor a: so in the actual ti - digits database we 're getting point eight percent , and here we 're getting three or four three , let 's see , three for this ? , but , point eight percent is something like double or triple what people have gotten who ' ve worked very hard at doing that . and and also , as you point out , there 's adaptation in these numbers also . so if you , put the ad adap take the adaptation off , then it for the english - near you get something like two percent . and here you had , something like three point four . and i could easily see that difference coming from this huge amount of data that it was trained on . so it 's , i do n't think there 's anything magical here . it 's , we used a simple htk system with a modest amount of data . and this is a , modern system has a lot of points to it . so . , the htk is an older htk , even . it 's not that surprising . but to me it just meant a practical point that if we want to publish results on digits that people pay attention to we probably should cuz we ' ve had the problem before that you get show some improvement on something that 's , it seems like too large a number , and people do n't necessarily take it so . so the three point four percent for this is so why is it it 's an interesting question though , still . why is why is it three point four percent for the d the digits recorded in this environment as opposed to the point eight percent for the original ti - digits database ? professor a: just looking at the ti - di the tandem system , if we 're getting point eight percent , which , yes , it 's high . it 's , it 's not awfully high , but it 's , it 's high . why is it four times as high , or more ? professor a: right ? , there 's even though it 's close - miked there 's still there really is background noise . and i suspect when the ti - digits were recorded if somebody fumbled or said something wrong that they probably made them take it over . it was not there was no attempt to have it be realistic in any sense . phd e: and acoustically , it 's q it 's i listened . it 's quite different . ti - digit is it 's very , very clean and it 's like studio recording whereas these meeting recorder digits sometimes you have breath noise professor a: bless you . i . it 's so . yes . it 's it 's the indication it 's harder . , i that 's true either way . so take a look at the , the sri results . , they 're much better , but still you 're getting something like one point three percent for things that are same data as in t ti - digits the same text . and , i ' m the same system would get , point three or point four on the actual ti - digits . so this , on both systems the these digits are showing up as harder . 
which i find interesting this is closer to it 's still read . but i still think it 's much closer to what people actually face , when they 're dealing with people saying digits over the telephone . i do n't think , i ' m they would n't release the numbers , but i do n't think that the companies that do telephone speech get anything like point four percent on their digits . i ' m they get , for one thing people do phone up who do n't have middle america accents and it 's a we it 's us . it has many people who sound in many different ways . that was that topic . what else we got ? did we end up giving up on , any eurospeech submissions , or ? i know thilo and dan ellis are submitting something , but . phd e: i e the only thing with these the meeting recorder and , so , we gave up . professor a: . now , actually for the aur - we do have for aurora , right ? because because we have ano an extra month . professor a: , that 's fine . so th so we have a couple little things on meeting recorder and we have we do n't we do n't have to flood it with papers . we 're not trying to prove anything to anybody . so . that 's fine . anything else ? phd e: . so . perhaps that we ' ve been working on is , we have put the good vad in the system and it really makes a huge difference . so , , this is perhaps one of the reason why our system was not the best , because with the new vad , it 's very the results are similar to the france telecom results and perhaps even better sometimes . so there is this point . the problem is that it 's very big and we still have to think how to where to put it and , because it , this vad either some delay and we if we put it on the server side , it does n't work , because on the server side features you already have lda applied from the f from the terminal side and so you accumulate the delay so the vad should be before the lda which means perhaps on the terminal side and then smaller and phd e: it 's from ogi . so it 's the network trained it 's the network with the huge amounts on hidden of hidden units , and nine input frames compared to the vad that was in the proposal which has a very small amount of hidden units and fewer inputs . professor a: this is the one they had originally ? , but they had to get rid of it because of the space , did n't they ? phd e: but the abso assumption is that we will be able to make a vad that 's small and that works fine . and . so we can professor a: but the other thing is to use a different vad entirely . , i if there 's a if i what the thinking was amongst the etsi folk but if everybody let 's use this vad and take that out of there phd e: they just want , they do n't want to fix the vad because they think there is some interaction between feature extraction and vad or frame dropping but they still want to just to give some requirement for this vad because it 's it will not be part of they do n't want it to be part of the standard . so it must be at least somewhat fixed but not completely . so there just will be some requirements that are still not yet ready . professor a: determined . but i was thinking that s " , there may be some interaction , but i do n't think we need to be stuck on using our or ogi 's vad . we could use somebody else 's if it 's smaller or , as long as it did the job . so that 's good . phd e: . so there is this thing . 
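( editor's note: for scale , the vad under discussion is an mlp over nine stacked feature frames with a large hidden layer . a toy forward pass with invented sizes , just to show why the hidden - layer weight matrix dominates the footprint that makes the terminal - side placement awkward . )

```python
import numpy as np

# Toy MLP voice-activity detector over a 9-frame context window.
# All sizes are invented; the point is that W1 (hidden x feats*context)
# dominates the memory footprint.
rng = np.random.default_rng(0)
n_feats, context, hidden = 15, 9, 200
W1 = rng.normal(scale=0.01, size=(hidden, n_feats * context))
b1 = np.zeros(hidden)
w2 = rng.normal(scale=0.01, size=hidden)

def p_speech(frames: np.ndarray) -> float:
    """frames: (context, n_feats) window -> posterior P(speech)."""
    h = np.tanh(W1 @ frames.reshape(-1) + b1)
    return float(1.0 / (1.0 + np.exp(-(w2 @ h))))

print(W1.size + b1.size + w2.size, "parameters")
print(p_speech(rng.normal(size=(context, n_feats))))
```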
there is i designed a new filter because when i designed other filters with shorter delay from the lda filters , there was one filter with fif sixty millisecond delay and the other with ten milliseconds and hynek suggested that both could have sixty - five sixty - s it 's sixty - five . both should have sixty - five because phd e: and . so i did that and it 's running . so , let 's see what will happen . but the filter is closer to the reference filter . . professor a: so that means logically , in principle , it should be better . so probably it 'll be worse . or in the basic perverse nature of reality . phd e: , and then we ' ve started to work with this of voiced - unvoiced . and next week we will perhaps try to have a new system with msg stream also see what happens . so , something that 's similar to the proposal too , but with msg stream . phd d: no , i w i begin to play with matlab and to found some parameter robust for voiced - unvoiced decision . but only to play . and we they we found that maybe w is a classical parameter , the sq the variance between the fft of the signal and the small spectrum of time we after the mel filter bank . and , is more or less robust . is good for clean speech . is quite good for noisy speech . but we must to have bigger statistic with timit , and is not ready yet to use on , i . phd d: i have here . i have here for one signal , for one frame . the the mix of the two , noise and unnoise , and the signal is this . clean , and this noise . these are the two the mixed , the big signal is for clean . professor a: , i ' m s there 's none of these axes are labeled , so i what this what 's this axis ? phd d: , this is energy , log - energy of the spectrum . of the this is the variance , the difference { nonvocalsound } between the spectrum of the signal and fft of each frame of the signal and this mouth spectrum of time after the f may fit for the two , phd d: this big , to here , they are to signal . this is for clean and this is for noise . phd d: and this is the noise portion . and this is more or less like this . but i meant to have see @ two the picture . this is , for one frame . the spectrum of the signal . and this is the small version of the spectrum after ml mel filter bank . phd d: and this is this is not the different . this is trying to obtain with lpc model the spectrum but using matlab without going factor and s phd d: and the that this is good . this is quite similar . this is another frame . ho how i obtained the envelope , { nonvocalsound } this envelope , with the mel filter bank . professor a: so now i wonder , do you want to i know you want to get at something orthogonal from what you get with the smooth spectrum . but if you were to really try and get a voiced - unvoiced , do you want to ignore that ? , do you , clearly a very big cues for voiced - unvoiced come from spectral slope and so on , phd d: because when did noise clear { nonvocalsound } in these section is clear if s @ { nonvocalsound } val value is indicative that is a voice frame and it 's low values professor a: , you probably want , certainly if you want to do good voiced - unvoiced detection , you need a few features . each each feature is by itself not enough . but , people look at slope and first auto - correlation coefficient , divided by power . or or there 's i we prob probably do n't have enough computation to do a simple pitch detector ? 
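( editor's note: a guess at how the " variance between the fft of the signal and the smoothed spectrum after the mel filter bank " could be computed . the triangular filterbank and all parameters below are assumptions , not phd d's exact implementation . )

```python
import numpy as np

def mel(f):      # Hz -> mel
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):  # mel -> Hz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def excitation_variance(frame, sr, n_filters=23):
    """Variance of (log FFT power spectrum - mel-smoothed envelope).
    Voiced frames keep strong harmonic ripple around the envelope, so the
    variance comes out high; flatter noise-like frames come out low."""
    n = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n))) ** 2 + 1e-10
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    edges = mel_inv(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    energies = []
    for i in range(1, n_filters + 1):
        lo, c, hi = edges[i - 1], edges[i], edges[i + 1]
        w = np.clip(np.minimum((freqs - lo) / (c - lo),
                               (hi - freqs) / (hi - c)), 0.0, None)
        energies.append((w * spec).sum() / max(w.sum(), 1e-10))
    envelope = np.interp(freqs, edges[1:-1], np.log(energies))
    return float(np.var(np.log(spec) - envelope))

sr = 8000
t = np.arange(512) / sr
voiced = sum(np.sin(2 * np.pi * k * 120.0 * t) for k in range(1, 10))
noise = np.random.default_rng(0).normal(size=512)
print(excitation_variance(voiced, sr), excitation_variance(noise, sr))
```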
with a pitch detector you could have a an estimate of what the or maybe you could you just do it going through the p fft 's figuring out some probable harmonic structure . and and . phd d: you have read up and you have a paper , the paper that you s give me yesterday . they say that yesterday they are some { nonvocalsound } problem phd e: , there is th this fact actually . if you look at this spectrum , what 's this again ? is it the mel - filters ? phd e: ok . so the envelope here is the output of the mel - filters and what we clearly see is that in some cases , and it clearly appears here , and the harmonics are resolved by the f , there are still appear after mel - filtering , and it happens for high pitched voice because the width of the lower frequency mel - filters is sometimes even smaller than the pitch . it 's around one hundred , one hundred and fifty hertz nnn . and so what happens is that this , add additional variability to this envelope so we were thinking to modify the mel - spectrum to have something that 's smoother on low frequencies . professor a: separate thing ? maybe so . so , what what i was talking about was just , starting with the fft you could do a very rough thing to estimate pitch . and , given that , you could come up with some estimate of how much of the low frequency energy was explained by those harmonics . it 's a variant on what you 're s what you 're doing . the , the mel does give a smooth thing . but as you say it 's not that smooth here . and and so if you just subtracted off your of the harmonics then something like this would end up with quite a bit lower energy in the first fifteen hundred hertz or so and our first kilohertz , even . and if was noisy , the proportion that it would go down would be if it was unvoiced . so you oughta be able to pick out voiced segments . at least it should be another cue . so . anyway . ok ? that 's what 's going on . what 's up with you ? grad b: our t i went to talk with mike jordan this week { nonvocalsound } and shared with him the ideas about extending the larry saul work and i asked him some questions about factorial h m so like later down the line when we ' ve come up with these feature detectors , how do we , model the time series that happens and we talked a little bit about factorial h m ms and how when you 're doing inference or w when you 're doing recognition , there 's like simple viterbi that you can do for these h m and the great advantages that a lot of times the factorial h m ms do n't over - alert the problem there they have a limited number of parameters and they focus directly on the sub - problems at hand so you can imagine five or so parallel features transitioning independently and then at the end you couple these factorial h m ms with undirected links based on some more data . so he seemed like really interested in this and said this is something very do - able and can learn a lot and , i ' ve just been continue reading about certain things . thinking of maybe using m modulation spectrum to as features also in the sub - bands because it seems like the modulation spectrum tells you a lot about the intelligibility of certain words and so , . just that 's about it . grad c: ok . 
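( editor's note: phd e's observation — that the lowest mel filters can be about as narrow as a 100 - 150 hz pitch spacing , so harmonics survive the smoothing — is easy to check numerically . filter count and band edges below are assumed . )

```python
import numpy as np

def mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Base width of each triangular filter in an assumed 23-filter bank, 0-4 kHz.
edges = mel_inv(np.linspace(mel(0.0), mel(4000.0), 23 + 2))
widths = edges[2:] - edges[:-2]
print(np.round(widths[:4]))
# -> roughly [121. 130. 141. 153.] Hz: comparable to a 100-150 Hz harmonic
# spacing, so a single harmonic can dominate a low filter and the
# "envelope" still carries pitch ripple, motivating a smoother
# low-frequency design.
```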
and so i ' ve been looking at avendano 's work and i 'll try to write up in my next stat status report a description of what he 's doing , but it 's an approach to deal with reverberation or that the aspect of his work that i ' m interested in the idea is that normally an analysis frames are too short to encompass reverberation effects in full . you miss most of the reverberation tail in a ten millisecond window and so you 'd like it to be that the reverberation responses simply convolved in , but it 's not really with these ten millisecond frames cuz you j but if you take , say , a two millisecond window i ' m a two second window then in a room like this , most of the reverberation response is included in the window and the then it then things are l more linear . it is it is more like the reverberation response is simply c convolved and you can use channel normalization techniques like in his thesis he 's assuming that the reverberation response is fixed . he just does mean subtraction , which is like removing the dc component of the modulation spectrum and that 's supposed to d deal pretty with the reverberation and the neat thing is you ca n't take these two second frames and feed them to a speech recognizer so he does this method training trading the spectral resolution for time resolution and come ca synthesizes a new representation which is with say ten second frames but a lower s frequency resolution . so i do n't really know the theory . i it 's these are called " time frequency representations " and h he 's making the time sh finer grained and the frequency resolution less fine grained . s so i ' m i my first stab actually in continuing his work is to re - implement this thing which changes the time and frequency resolutions cuz he does n't have code for me . so that 'll take some reading about the theory . i do n't really know the theory . , and , another f first step is , so the way i want to extend his work is make it able to deal with a time varying reverberation response we do n't really know how fast the reverberation response is varying the meeting recorder data so we have this block least squares imp echo canceller implementation and i want to try finding the response , say , between a near mike and the table mike for someone using the echo canceller and looking at the echo canceller taps and then see how fast that varies from block to block . that should give an idea of how fast the reverberation response is changing . grad c: s so y you do you read some of the zeros as o 's and some as zeros . is there a particular way we 're supposed to read them ? professor a: no . " o " o " o " and " zero " are two ways that we say that digit . phd e: perhaps in the sheets there should be another sign for the if we want to the guy to say " o " or professor a: no . people will do what they say . it 's ok . in digit recognition we ' ve done before , you have two pronunciations for that value , " o " and " zero " . phd e: but it 's perhaps more difficult for the people to prepare the database then , if because here you only have zeros professor a: they write down oh . or they write down zero a and they each have their own pronunciation . phd e: but if the sh the sheet was prepared with a different sign for the " o " . professor a: but people would n't that wa there is no convention for it . see . , you 'd have to tell them " ok when we write this , say it tha " , and you just they just want people to read the digits as you ordinarily would and people say it different ways . 
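( editor's note: a schematic of the core mechanism in the approach described above , under the stated assumptions — a fixed room response and roughly two - second frames . when the window contains most of the reverberation tail , the room acts approximately as a per - frame convolution , i.e. an additive constant in the log spectrum , which per - bin mean subtraction removes . the resolution - swapping resynthesis step is deliberately omitted . )

```python
import numpy as np

def long_window_mean_subtraction(signal, sr, win_s=2.0):
    """Log-spectral mean subtraction over ~2 s frames. With frames this
    long, a fixed room response is (roughly) convolved within each frame,
    so it appears as an additive per-bin constant in the log spectrum,
    and subtracting the mean over frames removes it. (Schematic only:
    the time/frequency resolution trade that turns these long frames
    back into recognizer-friendly short-frame features is not shown.)"""
    n = int(win_s * sr)
    hop = n // 2
    win = np.hanning(n)
    frames = [signal[i:i + n] * win
              for i in range(0, len(signal) - n + 1, hop)]
    logspec = np.log(np.abs(np.fft.rfft(frames)) + 1e-10)
    return logspec - logspec.mean(axis=0, keepdims=True)

sr = 8000
x = np.random.default_rng(0).normal(size=10 * sr)   # stand-in signal
print(long_window_mean_subtraction(x, sr).shape)    # (frames, bins)
```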
grad c: is this a change from the last batch of forms ? because in the last batch it was spelled out which one you should read . professor a: yes . that 's right . it was it was spelled out , and they decided they wanted to get at more the way people would really say things . professor a: that 's also why they 're bunched together in these different groups . so so it 's so it 's everything 's fine . actually , let me just s since you brought it up , i was just it was hard not to be self - conscious about that when it after we since we just discussed it . but i realized that when i ' m talking on the phone , certainly , and saying these numbers , i almost always say zero . and cuz because i it 's two syllables . it 's it 's more likely they 'll understand what i said . so that 's the habit i ' m in , but some people say " o " and
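( editor's note: the two - way pronunciation of the digit discussed above is ordinarily just a lexicon entry with variants . the arpabet - ish phone strings below are illustrative , not taken from any of the systems mentioned . )

```python
# Illustrative digit lexicon with pronunciation variants for "0".
LEXICON = {
    "0": [["z", "iy", "r", "ow"],   # "zero"
          ["ow"]],                  # "oh"
    "1": [["w", "ah", "n"]],
    "2": [["t", "uw"]],
}

def pronunciations(digits: str):
    """Expand a digit string into every phone sequence the recognizer
    should accept, one per combination of variants."""
    seqs = [[]]
    for d in digits:
        seqs = [s + p for s in seqs for p in LEXICON[d]]
    return seqs

for s in pronunciations("201"):
    print(" ".join(s))   # "t uw z iy r ow w ah n" and "t uw ow w ah n"
```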
###summary: the main purpose of the meeting of icsi's meeting recorder group at berkeley was to discuss the recent progress of its members. this includes reports on the progress of the group's main digit recogniser project , with interest on voice-activity detectors and voiced/unvoiced detection , work on acoustic feature detection , and research into dealing with reverberation. there was also talk of comparing different recognition systems and training datasets , and a discussion of the pronunciation of the digit zero for the recording at the end of the meeting. in his next status report , me026 will summarise the work he has been researching. the digit recognition system is still not working well enough , they must get better results if they want to publish and be noticed. they have not really made many improvements , which may be due to their comparatively small training set , or the conditions the data is recorded under. the new vad is quite a large network , and adds a delay to the process. this caused ogi to drop it , though speaker mn007 is assuming that a smaller and equally effective system can be developed. the alternative is to get yet another vad from somewhere else , though it's not clear if they will even be required in the final system. there are some problems with the voiced/unvoiced feature detection , because some pitches are slipping through the filtering. the group have been comparing their recognition system to a few others , and theirs has not come off favourably. there could be many reasons for this , including smaller training set , more realistic data , or older technology. speaker mn007 has put the best voice activity detector into the system , to great improvement , along with designing new filters that run at the correct latency. speaker fn002 has started to find parameters for voiced/unvoiced feature detection , and has found some classic ones , although there are other things she wishes to look at. me013 offers a few ideas of simple things she may want to try , as he is not confident with everything she is trying. speaker me006 is continuing with the idea of extending work on acoustic feature detection. he is continuing to read , and has discussed the suitability of factorial hmms with a colleague. speaker me026 has been learning more about previous work on reverberation , and is ready to start with a re-implementation of the theory. from there he wants to extend the work to look at time-varying reverb.
###dialogue: professor a: we 're going ? ok . sh - close your door on the way out ? professor a: probably wanna get this other door , too . so . what are we talking about today ? professor a: the both the sri system and the oth and for one thing that shows the difference between having a lot of training data or not , professor a: , the the best number we have on the english on near microphone only is three or four percent . professor a: and it 's significantly better than that , using fairly simple front - ends on the , with the sri system . so i th that the but that 's using a pretty huge amount of data , mostly not digits , but then again , . , mostly not digits for the actual training the h m ms whereas in this case we 're just using digits for training the h m professor a: did anybody mention about whether the sri system is a is doing the digits the wor as a word model or as a sub s sub - phone states ? phd e: . so , because it 's their very d huge , their huge system . and . but . so . there is one difference , the sri system the result for the sri system that are represented here are with adaptation . so there is it 's their complete system and including on - line unsupervised adaptation . phd e: and if you do n't use adaptation , the error rate is around fifty percent worse , if i remember . professor a: still . but but what i 'd be interested to do given that , is that we should take i that somebody 's gon na do this , right ? is to take some of these tandem things and feed it into the sri system , phd e: but i the main point is the data because i am not . our back - end is fairly simple but until now , the attempts to improve it or have fail , what chuck tried to do professor a: , but he 's doing it with the same data , right ? so to so there 's two things being affected . professor a: . one is that , there 's something simple that 's wrong with the back - end . we ' ve been playing a number of states i if he got to the point of playing with the number of gaussians yet but , but , so far he had n't gotten any big improvement , but that 's all with the same amount of data which is pretty small . and . professor a: , you could do that , but i ' m saying even with it not with that part not retrained , just using having the h m ms much better h m professor a: . but just train those h m ms using different features , the features coming from our aurora . phd e: but what would be interesting to see also is what perhaps it 's not related , the amount of data but the recording conditions . i . because it 's probably not a problem of noise , because our features are supposed to be robust to noise . it 's not a problem of channel , because there is normalization with respect to the channel . so professor a: i ' m . what what is the problem that you 're trying to explain ? phd e: the the fact that the result with the tandem and aurora system are so much worse . professor a: that the so much worse ? i but i ' m almost certain that it , that it has to do with the amount of training data . professor a: but but having a huge if if you look at what commercial places do , they use a huge amount of data . this is a modest amount of data . professor a: so . , ordinarily you would say " , given that you have enough occurrences of the digits , you can just train with digits rather than with , " but , if you have a huge in other words , do word models but if you have a huge amount of data then you 're going to have many occurrences of similar allophones . 
and that 's just a huge amount of training for it . so it 's it has to be that , because , as you say , this is near - microphone , it 's really pretty clean data . now , some of it could be the fact that let 's see , in the in these multi - train things did we include noisy data in the training ? , that could be hurting us actually , for the clean case . phd e: , actually we see that the clean train for the aurora proposals are better than the multi - train , professor a: it is if cuz this is clean data , and so that 's not too surprising . but . phd e: , o i what i meant is that , let 's say if we add enough data to train on the meeting recorder digits , i we could have better results than this . phd e: what i meant is that perhaps we can learn something from this , what 's wrong what is different between ti - digits and these digits and professor a: so in the actual ti - digits database we 're getting point eight percent , and here we 're getting three or four three , let 's see , three for this ? , but , point eight percent is something like double or triple what people have gotten who ' ve worked very hard at doing that . and and also , as you point out , there 's adaptation in these numbers also . so if you , put the ad adap take the adaptation off , then it for the english - near you get something like two percent . and here you had , something like three point four . and i could easily see that difference coming from this huge amount of data that it was trained on . so it 's , i do n't think there 's anything magical here . it 's , we used a simple htk system with a modest amount of data . and this is a , modern system has a lot of points to it . so . , the htk is an older htk , even . it 's not that surprising . but to me it just meant a practical point that if we want to publish results on digits that people pay attention to we probably should cuz we ' ve had the problem before that you get show some improvement on something that 's , it seems like too large a number , and people do n't necessarily take it so . so the three point four percent for this is so why is it it 's an interesting question though , still . why is why is it three point four percent for the d the digits recorded in this environment as opposed to the point eight percent for the original ti - digits database ? professor a: just looking at the ti - di the tandem system , if we 're getting point eight percent , which , yes , it 's high . it 's , it 's not awfully high , but it 's , it 's high . why is it four times as high , or more ? professor a: right ? , there 's even though it 's close - miked there 's still there really is background noise . and i suspect when the ti - digits were recorded if somebody fumbled or said something wrong that they probably made them take it over . it was not there was no attempt to have it be realistic in any sense . phd e: and acoustically , it 's q it 's i listened . it 's quite different . ti - digit is it 's very , very clean and it 's like studio recording whereas these meeting recorder digits sometimes you have breath noise professor a: bless you . i . it 's so . yes . it 's it 's the indication it 's harder . , i that 's true either way . so take a look at the , the sri results . , they 're much better , but still you 're getting something like one point three percent for things that are same data as in t ti - digits the same text . and , i ' m the same system would get , point three or point four on the actual ti - digits . 
so this , on both systems the these digits are showing up as harder . which i find interesting this is closer to it 's still read . but i still think it 's much closer to what people actually face , when they 're dealing with people saying digits over the telephone . i do n't think , i ' m they would n't release the numbers , but i do n't think that the companies that do telephone speech get anything like point four percent on their digits . i ' m they get , for one thing people do phone up who do n't have middle america accents and it 's a we it 's us . it has many people who sound in many different ways . that was that topic . what else we got ? did we end up giving up on , any eurospeech submissions , or ? i know thilo and dan ellis are submitting something , but . phd e: i e the only thing with these the meeting recorder and , so , we gave up . professor a: . now , actually for the aur - we do have for aurora , right ? because because we have ano an extra month . professor a: , that 's fine . so th so we have a couple little things on meeting recorder and we have we do n't we do n't have to flood it with papers . we 're not trying to prove anything to anybody . so . that 's fine . anything else ? phd e: . so . perhaps that we ' ve been working on is , we have put the good vad in the system and it really makes a huge difference . so , , this is perhaps one of the reason why our system was not the best , because with the new vad , it 's very the results are similar to the france telecom results and perhaps even better sometimes . so there is this point . the problem is that it 's very big and we still have to think how to where to put it and , because it , this vad either some delay and we if we put it on the server side , it does n't work , because on the server side features you already have lda applied from the f from the terminal side and so you accumulate the delay so the vad should be before the lda which means perhaps on the terminal side and then smaller and phd e: it 's from ogi . so it 's the network trained it 's the network with the huge amounts on hidden of hidden units , and nine input frames compared to the vad that was in the proposal which has a very small amount of hidden units and fewer inputs . professor a: this is the one they had originally ? , but they had to get rid of it because of the space , did n't they ? phd e: but the abso assumption is that we will be able to make a vad that 's small and that works fine . and . so we can professor a: but the other thing is to use a different vad entirely . , i if there 's a if i what the thinking was amongst the etsi folk but if everybody let 's use this vad and take that out of there phd e: they just want , they do n't want to fix the vad because they think there is some interaction between feature extraction and vad or frame dropping but they still want to just to give some requirement for this vad because it 's it will not be part of they do n't want it to be part of the standard . so it must be at least somewhat fixed but not completely . so there just will be some requirements that are still not yet ready . professor a: determined . but i was thinking that s " , there may be some interaction , but i do n't think we need to be stuck on using our or ogi 's vad . we could use somebody else 's if it 's smaller or , as long as it did the job . so that 's good . phd e: . so there is this thing . 
there is i designed a new filter because when i designed other filters with shorter delay from the lda filters , there was one filter with fif sixty millisecond delay and the other with ten milliseconds and hynek suggested that both could have sixty - five sixty - s it 's sixty - five . both should have sixty - five because phd e: and . so i did that and it 's running . so , let 's see what will happen . but the filter is closer to the reference filter . . professor a: so that means logically , in principle , it should be better . so probably it 'll be worse . or in the basic perverse nature of reality . phd e: , and then we ' ve started to work with this of voiced - unvoiced . and next week we will perhaps try to have a new system with msg stream also see what happens . so , something that 's similar to the proposal too , but with msg stream . phd d: no , i w i begin to play with matlab and to found some parameter robust for voiced - unvoiced decision . but only to play . and we they we found that maybe w is a classical parameter , the sq the variance between the fft of the signal and the small spectrum of time we after the mel filter bank . and , is more or less robust . is good for clean speech . is quite good for noisy speech . but we must to have bigger statistic with timit , and is not ready yet to use on , i . phd d: i have here . i have here for one signal , for one frame . the the mix of the two , noise and unnoise , and the signal is this . clean , and this noise . these are the two the mixed , the big signal is for clean . professor a: , i ' m s there 's none of these axes are labeled , so i what this what 's this axis ? phd d: , this is energy , log - energy of the spectrum . of the this is the variance , the difference { nonvocalsound } between the spectrum of the signal and fft of each frame of the signal and this mouth spectrum of time after the f may fit for the two , phd d: this big , to here , they are to signal . this is for clean and this is for noise . phd d: and this is the noise portion . and this is more or less like this . but i meant to have see @ two the picture . this is , for one frame . the spectrum of the signal . and this is the small version of the spectrum after ml mel filter bank . phd d: and this is this is not the different . this is trying to obtain with lpc model the spectrum but using matlab without going factor and s phd d: and the that this is good . this is quite similar . this is another frame . ho how i obtained the envelope , { nonvocalsound } this envelope , with the mel filter bank . professor a: so now i wonder , do you want to i know you want to get at something orthogonal from what you get with the smooth spectrum . but if you were to really try and get a voiced - unvoiced , do you want to ignore that ? , do you , clearly a very big cues for voiced - unvoiced come from spectral slope and so on , phd d: because when did noise clear { nonvocalsound } in these section is clear if s @ { nonvocalsound } val value is indicative that is a voice frame and it 's low values professor a: , you probably want , certainly if you want to do good voiced - unvoiced detection , you need a few features . each each feature is by itself not enough . but , people look at slope and first auto - correlation coefficient , divided by power . or or there 's i we prob probably do n't have enough computation to do a simple pitch detector ? 
with a pitch detector you could have a an estimate of what the or maybe you could you just do it going through the p fft 's figuring out some probable harmonic structure . and and . phd d: you have read up and you have a paper , the paper that you s give me yesterday . they say that yesterday they are some { nonvocalsound } problem phd e: , there is th this fact actually . if you look at this spectrum , what 's this again ? is it the mel - filters ? phd e: ok . so the envelope here is the output of the mel - filters and what we clearly see is that in some cases , and it clearly appears here , and the harmonics are resolved by the f , there are still appear after mel - filtering , and it happens for high pitched voice because the width of the lower frequency mel - filters is sometimes even smaller than the pitch . it 's around one hundred , one hundred and fifty hertz nnn . and so what happens is that this , add additional variability to this envelope so we were thinking to modify the mel - spectrum to have something that 's smoother on low frequencies . professor a: separate thing ? maybe so . so , what what i was talking about was just , starting with the fft you could do a very rough thing to estimate pitch . and , given that , you could come up with some estimate of how much of the low frequency energy was explained by those harmonics . it 's a variant on what you 're s what you 're doing . the , the mel does give a smooth thing . but as you say it 's not that smooth here . and and so if you just subtracted off your of the harmonics then something like this would end up with quite a bit lower energy in the first fifteen hundred hertz or so and our first kilohertz , even . and if was noisy , the proportion that it would go down would be if it was unvoiced . so you oughta be able to pick out voiced segments . at least it should be another cue . so . anyway . ok ? that 's what 's going on . what 's up with you ? grad b: our t i went to talk with mike jordan this week { nonvocalsound } and shared with him the ideas about extending the larry saul work and i asked him some questions about factorial h m so like later down the line when we ' ve come up with these feature detectors , how do we , model the time series that happens and we talked a little bit about factorial h m ms and how when you 're doing inference or w when you 're doing recognition , there 's like simple viterbi that you can do for these h m and the great advantages that a lot of times the factorial h m ms do n't over - alert the problem there they have a limited number of parameters and they focus directly on the sub - problems at hand so you can imagine five or so parallel features transitioning independently and then at the end you couple these factorial h m ms with undirected links based on some more data . so he seemed like really interested in this and said this is something very do - able and can learn a lot and , i ' ve just been continue reading about certain things . thinking of maybe using m modulation spectrum to as features also in the sub - bands because it seems like the modulation spectrum tells you a lot about the intelligibility of certain words and so , . just that 's about it . grad c: ok . 
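Two of the voiced/unvoiced cues from the last few turns, sketched in numpy: the variance of the log FFT spectrum around its mel-smoothed envelope (resolved harmonics ripple strongly around the envelope in voiced frames, which is also the high-pitch artifact just mentioned), and the suggested rough FFT-based pitch estimate plus the fraction of low-frequency energy explained by its harmonics. The filter count, FFT sizes, search ranges, and tolerances are assumptions for illustration, not the actual front-end's values.

```python
import numpy as np

def mel_filterbank(n_filters=23, n_fft=512, sr=8000):
    """Triangular mel filters (unit area each) and their centre FFT bins."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / float(c - l)
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / float(r - c)
        fb[i] /= fb[i].sum() + 1e-10        # unit area -> comparable log scale
    return fb, bins[1:-1]

def envelope_variance(frame, fb, centers, eps=1e-10):
    """Ripple of the log spectrum around its mel-smoothed envelope;
    high values suggest resolved harmonics, i.e. a voiced frame."""
    mag = np.abs(np.fft.rfft(frame, 512))
    env = np.interp(np.arange(mag.size), centers, np.log(fb @ mag + eps))
    return np.var(np.log(mag + eps) - env)

def rough_pitch_hz(frame, sr=8000, fmin=60.0, fmax=400.0, n_harm=4):
    """Crude pitch estimate via a harmonic product spectrum on the FFT."""
    spec = np.abs(np.fft.rfft(frame, 2048))
    freqs = np.fft.rfftfreq(2048, 1.0 / sr)
    hps = spec.copy()
    for h in range(2, n_harm + 1):
        hps[: spec[::h].size] *= spec[::h]  # downsampled spectra align harmonics
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(hps[band])]

def harmonic_energy_fraction(frame, sr, f0, fmax=1000.0, tol_hz=20.0):
    """Share of low-frequency energy lying near the harmonics of f0."""
    pow_spec = np.abs(np.fft.rfft(frame, 2048)) ** 2
    freqs = np.fft.rfftfreq(2048, 1.0 / sr)
    low = freqs <= fmax
    near = np.zeros(freqs.size, dtype=bool)
    for k in range(1, int(fmax / f0) + 1):
        near |= np.abs(freqs - k * f0) <= tol_hz
    return pow_spec[low & near].sum() / (pow_spec[low].sum() + 1e-10)
```

As the discussion itself notes, no single cue suffices: a real detector would combine these with features such as spectral slope and the first autocorrelation coefficient divided by power.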
and so i ' ve been looking at avendano 's work and i 'll try to write up in my next stat status report a description of what he 's doing , but it 's an approach to deal with reverberation or that the aspect of his work that i ' m interested in the idea is that normally an analysis frames are too short to encompass reverberation effects in full . you miss most of the reverberation tail in a ten millisecond window and so you 'd like it to be that the reverberation responses simply convolved in , but it 's not really with these ten millisecond frames cuz you j but if you take , say , a two millisecond window i ' m a two second window then in a room like this , most of the reverberation response is included in the window and the then it then things are l more linear . it is it is more like the reverberation response is simply c convolved and you can use channel normalization techniques like in his thesis he 's assuming that the reverberation response is fixed . he just does mean subtraction , which is like removing the dc component of the modulation spectrum and that 's supposed to d deal pretty with the reverberation and the neat thing is you ca n't take these two second frames and feed them to a speech recognizer so he does this method training trading the spectral resolution for time resolution and come ca synthesizes a new representation which is with say ten second frames but a lower s frequency resolution . so i do n't really know the theory . i it 's these are called " time frequency representations " and h he 's making the time sh finer grained and the frequency resolution less fine grained . s so i ' m i my first stab actually in continuing his work is to re - implement this thing which changes the time and frequency resolutions cuz he does n't have code for me . so that 'll take some reading about the theory . i do n't really know the theory . , and , another f first step is , so the way i want to extend his work is make it able to deal with a time varying reverberation response we do n't really know how fast the reverberation response is varying the meeting recorder data so we have this block least squares imp echo canceller implementation and i want to try finding the response , say , between a near mike and the table mike for someone using the echo canceller and looking at the echo canceller taps and then see how fast that varies from block to block . that should give an idea of how fast the reverberation response is changing . grad c: s so y you do you read some of the zeros as o 's and some as zeros . is there a particular way we 're supposed to read them ? professor a: no . " o " o " o " and " zero " are two ways that we say that digit . phd e: perhaps in the sheets there should be another sign for the if we want to the guy to say " o " or professor a: no . people will do what they say . it 's ok . in digit recognition we ' ve done before , you have two pronunciations for that value , " o " and " zero " . phd e: but it 's perhaps more difficult for the people to prepare the database then , if because here you only have zeros professor a: they write down oh . or they write down zero a and they each have their own pronunciation . phd e: but if the sh the sheet was prepared with a different sign for the " o " . professor a: but people would n't that wa there is no convention for it . see . , you 'd have to tell them " ok when we write this , say it tha " , and you just they just want people to read the digits as you ordinarily would and people say it different ways . 
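A minimal sketch of the channel-normalization idea in the reverberation work just described, under stated assumptions: with analysis windows long enough (about two seconds) to contain most of the room's reverberation response, the response behaves approximately as a fixed convolved channel, so subtracting each band's time-averaged log spectrum, equivalent to removing the DC component of the modulation spectrum, cancels it to first order. Window and hop sizes are illustrative, and the later step of resynthesizing a fine-time-resolution representation from these long frames is not shown.

```python
import numpy as np

def long_window_logspec(x, sr, win_s=2.0, hop_s=0.25):
    """Log magnitude spectra over ~2 s windows (assumes len(x) >= window)."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    w = np.hanning(win)
    frames = np.array([x[i:i + win] * w
                       for i in range(0, len(x) - win + 1, hop)])
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-10)

def mean_subtract(logspec):
    # A fixed convolved channel (the reverberation response) is an additive
    # per-band constant in the log domain, so removing each band's time
    # average -- the DC of the modulation spectrum -- largely cancels it.
    return logspec - logspec.mean(axis=0, keepdims=True)
```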
grad c: is this a change from the last batch of forms ? because in the last batch it was spelled out which one you should read . professor a: yes . that 's right . it was it was spelled out , and they decided they wanted to get at more the way people would really say things . professor a: that 's also why they 're bunched together in these different groups . so so it 's so it 's everything 's fine . actually , let me just s since you brought it up , i was just it was hard not to be self - conscious about that when it after we since we just discussed it . but i realized that when i ' m talking on the phone , certainly , and saying these numbers , i almost always say zero . and cuz because i it 's two syllables . it 's it 's more likely they 'll understand what i said . so that 's the habit i ' m in , but some people say " o " and
###summary: the main purpose of the meeting of icsi's meeting recorder group at berkeley was to discuss the recent progress of its members. this includes reports on the progress of the group's main digit recogniser project , with interest in voice-activity detectors and voiced/unvoiced detection , work on acoustic feature detection , and research into dealing with reverberation. there was also talk of comparing different recognition systems and training datasets , and a discussion of the pronunciation of the digit zero for the recording at the end of the meeting. in his next status report , me026 will summarise the work he has been researching. the digit recognition system is still not working well enough ; they must get better results if they want to publish and be noticed. they have not really made many improvements , which may be due to their comparatively small training set , or the conditions the data is recorded under. the new vad is quite a large network , and adds a delay to the process. this caused ogi to drop it , though speaker mn007 is assuming that a smaller and equally effective system can be developed. the alternative is to get yet another vad from somewhere else , though it's not clear if they will even be required in the final system. there are some problems with the voiced/unvoiced feature detection , because some pitches are slipping through the filtering. the group have been comparing their recognition system to a few others , and theirs has not come off favourably. there could be many reasons for this , including a smaller training set , more realistic data , or older technology. speaker mn007 has put the best voice activity detector into the system , to great improvement , along with designing new filters that run at the correct latency. speaker fn002 has started to find parameters for voiced/unvoiced feature detection , and has found some classic ones , although there are other things she wishes to look at. me013 offers a few ideas for simple things she may want to try , as he is not confident in everything she is trying. speaker me006 is continuing with the idea of extending work on acoustic feature detection. he is continuing to read , and has discussed the suitability of factorial hmms with a colleague. speaker me026 has been learning more about previous work on reverberation , and is ready to start with a re-implementation of the theory. from there he wants to extend the work to look at time-varying reverb.
__index_level_0__: 1
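Since the reading forms now leave the zero/oh choice to the speaker, a digit recognizer handles it by carrying both pronunciations for the same value in its lexicon. The toy entry below is only illustrative; the phone strings and structure are assumptions, not the project's actual dictionary.

```python
# Hypothetical digit lexicon: "0" accepts both spoken variants.
DIGIT_LEXICON = {
    "0": [("zero", "z iy r ow"), ("oh", "ow")],  # two pronunciations, one value
    "1": [("one", "w ah n")],
    "2": [("two", "t uw")],
    # ... remaining digits each have a single variant
}
```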
"grad a: ok , we 're on . so just make that th your wireless mike is on , if you 're wearing a wirel(...TRUNCATED)
"the initial task of the edu group is to work on inferring intentions through context. in the naviga(...TRUNCATED)
"###dialogue: grad a: ok , we 're on . so just make that th your wireless mike is on , if you 're we(...TRUNCATED)
__index_level_0__: 38
"phd b: i . do you have news from the conference talk ? , that was programmed for yesterday i . prof(...TRUNCATED)
"the berkeley meeting recorder group discussed the progress of several of their members. the progres(...TRUNCATED)
"###dialogue: phd b: i . do you have news from the conference talk ? , that was programmed for yeste(...TRUNCATED)
__index_level_0__: 29
"grad f: , i should n't say it 's a good mike . all i really know is that the signal level is ok . i(...TRUNCATED)
"although the meeting recorder group only list two agenda items , this meeting explores transcriptio(...TRUNCATED)
"###dialogue: grad f: , i should n't say it 's a good mike . all i really know is that the signal le(...TRUNCATED)
__index_level_0__: 21
"phd e: so it 's , it 's spectral subtraction or wiener filtering , depending on if we put if we squ(...TRUNCATED)
"icsi's meeting recorder group have returned from a meeting with some important decisions to make. t(...TRUNCATED)
"###dialogue: phd e: so it 's , it 's spectral subtraction or wiener filtering , depending on if we (...TRUNCATED)
__index_level_0__: 35
"grad f: let 's see . so . what ? i ' m supposed to be on channel five ? her . nope . does n't seem (...TRUNCATED)
"minor technical issues,such as format conversions for xml and javabayes and the full translation of(...TRUNCATED)
"###dialogue: grad f: let 's see . so . what ? i ' m supposed to be on channel five ? her . nope . d(...TRUNCATED)
__index_level_0__: 3
"professor d: , let 's get started . hopefully nancy will come , if not , she wo n't . grad b: , rob(...TRUNCATED)
"the main focus of the meeting was firstly on the structure of the belief-net , its decision nodes a(...TRUNCATED)
"###dialogue: professor d: , let 's get started . hopefully nancy will come , if not , she wo n't . (...TRUNCATED)
__index_level_0__: 2
"grad d: and we already got the crash out of the way . it did crash , so i feel much better , earlie(...TRUNCATED)
"the berkeley meeting recorder group discussed digits data , recent asr results , the status of tran(...TRUNCATED)
"###dialogue: grad d: and we already got the crash out of the way . it did crash , so i feel much be(...TRUNCATED)
__index_level_0__: 14
"professor d: ok . so , you can fill those out , after , actually , so so , i got , these results fr(...TRUNCATED)
"the berkley meeting recorder group discussed the most recent progress with their current project , (...TRUNCATED)
"###dialogue: professor d: ok . so , you can fill those out , after , actually , so so , i got , the(...TRUNCATED)
__index_level_0__: 26
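For reading these rows programmatically, a hedged sketch with the Hugging Face datasets library follows; the dataset id is a placeholder, but the column layout matches the rows above: src is the full meeting transcript, tgt its abstractive summary, text their concatenation under the "###dialogue:"/"###summary:" markers, and __index_level_0__ an integer row index.

```python
from datasets import load_dataset

# "user/meeting-summarization" is a placeholder id, not the real dataset name.
ds = load_dataset("user/meeting-summarization", split="train")
row = ds[0]
print(row["__index_level_0__"])   # integer index, e.g. 1
print(row["tgt"][:200])           # start of the abstractive summary
assert row["text"].startswith("###dialogue:")  # text = dialogue + summary
```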