With voice cloning, you can narrate your own content with your own voice.
This includes making realistic voiceovers for podcasts,
audiobooks or personal assistants. In this video,
I'll explain how to make high-quality voiceovers using just 20 minutes of audio.
I'll start by explaining the basics of how text-to-speech models work.
Then I'll cover dataset preparation and, last of all,
explain how to fine-tune a model using the dataset you've prepared.
If you're new to text-to-speech models,
don't worry, because I will go through the Jupyter notebooks step by step from start
to finish. And if you're more advanced,
I'll describe in detail how I handle very challenging voices,
like my Irish accent. And here we go with text to speech.
So I'll start off describing how to build a really simple model.
And I'll do that with a few neural network approaches: transformers,
diffusers, and generative adversarial networks.
That's going to be useful because it will lay the foundation for explaining
how to control style.
And I'll explain how to tweak style in generated audio, first in a naive
way, and then using the StyleTTS approach,
or rather StyleTTS2, actually.
Then I want to highlight a little how voice cloning works versus fine-tuning:
what the differences are,
and the trade-offs of when you might want to use one or the other, before getting into
the meaty part of the presentation,
where I'll run through a notebook for data preparation and then one for
fine-tuning the StyleTTS2 model. And as usual,
I'll finish off with a few final tips.
So here's a very simple text-to-speech model.
It takes in "the quick brown fox" and it generates an output sound
wave over here. Now let's
add some quick pre-processing steps here on the input.
So rather than just putting in text,
we're actually going to split it into tokens, and those will go into our neural
network. This should look familiar if you've done any work with models like Llama:
you tokenize into subwords, and those subwords are what go into the neural
network. Now, because this is sound,
you might want to tokenize not just with words or subwords,
but with what are called phonemes. What are phonemes? Well,
they're characters used to represent the sounds of different parts of words.
So here, for example,
these are the sounds for "quick", "brown", and then "fox".
Now, these sounds here are easily generated.
You can check a notebook like this one here, which I've linked
and you can access for free, where I import what's called a phonemizer.
And then I put the text through this phonemizer.
You do need to tell it what language, and it will then create some
phonemes. And these are the phonemes I got back out.
And these are one-to-one representations of the sounds that are in words.
For example, if I make a sound like "puh",
there's a specific phoneme that represents that.
And you can see intuitively why that might be a useful representation if you
ultimately want to generate a sound.
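To make that concrete, here's a minimal sketch of that phonemization step in Python. It assumes the open-source phonemizer package with an espeak backend installed; the linked notebook does essentially the same thing.

```python
# Minimal phonemization sketch (assumes the `phonemizer` package
# and an espeak backend are installed).
from phonemizer import phonemize

text = "The quick brown fox"

# You need to tell the phonemizer which language to use.
phonemes = phonemize(text, language="en-us", backend="espeak", strip=True)
print(phonemes)  # IPA characters, something like: ðə kwɪk bɹaʊn fɑːks
```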
Now let's take a quick look at the right-hand side here,
where we're generating a sound wave. And of course,
we're going to generate what is our best estimate for the sound
that represents "the quick brown fox".
And this generated best estimate is what I've shown here in red.
And this red sound wave that we generate,
we're going to compare to the actual sound wave that we presumably have in our
training dataset for "the quick brown fox".
And here I've just represented that, or a very small snippet of it, in blue.
And the idea of this graph is to show you a comparison of the original wave with
the generated wave.
And that comparison is important because we need to grade the quality of our
neural network and the quality of its outputs.
And one simple way we can do that is just by comparing the distance between the
red and the blue lines. More specifically,
we could look at, say, the mean squared error,
which is just the difference between red and blue, squared,
and then averaged across all of the sound wave here.
And that value, if it's large, it means there's a large discrepancy.
And if it's zero, it means, well, we've matched the generated and the original.
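As a toy sketch of that comparison, with made-up waveforms standing in for the red and blue lines:

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.standard_normal(24_000)                    # the blue wave: 1 s at 24 kHz
generated = original + 0.1 * rng.standard_normal(24_000)  # the red wave: an imperfect estimate

# Difference between red and blue, squared, averaged over every sample point.
mse = np.mean((generated - original) ** 2)
print(mse)  # ~0.01 here; 0 would mean a perfect match
```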
And so that value of the mean squared error, or of the loss,
which is what we commonly call it,
is useful because with that loss, the calculated mean squared error,
we can back-propagate through the neural network and update the weights to
produce a higher-quality neural network.
And that's the idea of training these neural networks.
You take some kind of input where you know what the truth is, the blue,
you generate the red, you compare,
you calculate how good or bad it is, which we call the loss,
and you back-propagate that loss to update the weights and improve the quality of the neural network.
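Here's what one round of that loop might look like in PyTorch, with a deliberately tiny stand-in model; the real text-to-speech network would be far more elaborate.

```python
import torch
import torch.nn as nn

# Toy stand-in: 16 phoneme token IDs in, 24,000 waveform magnitudes out.
model = nn.Sequential(
    nn.Embedding(100, 64),       # phoneme IDs -> embeddings
    nn.Flatten(),
    nn.Linear(16 * 64, 24_000),  # embeddings -> sound-wave magnitudes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

tokens = torch.randint(0, 100, (1, 16))  # fake phoneme IDs for one utterance
target = torch.randn(1, 24_000)          # the "blue" ground-truth waveform

generated = model(tokens)                         # the "red" generated waveform
loss = nn.functional.mse_loss(generated, target)  # how good or bad it is
loss.backward()                                   # back-propagate the loss
optimizer.step()                                  # update the weights
optimizer.zero_grad()
```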
And if you keep doing this and you've got enough data and you've got the right
design for your network, in principle,
you can eventually converge on a model that will produce a red line here that's
pretty closely matched to the blue. Now, to make that specific,
you might have a neural network that takes in up to 512 tokens.
Maybe each token represents a phoneme.
And perhaps this network is capable of producing 10 seconds of audio.
So if you're producing 10 seconds of a sound wave at 24 kilohertz,
24 kilohertz means 24,000 sample points per second.
That means the network would have to produce 240,000 output magnitudes
to be able to represent 10 seconds of audio.
Now, actually, if you have 512 input phonemes,
that's probably closer to like a minute or more.
So you would actually need maybe six times that: more like one and a half
million output magnitudes to be predicted in order to generate the sound wave.
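As a quick sanity check on those numbers:

```python
sample_rate = 24_000     # 24 kHz = 24,000 samples per second
print(sample_rate * 10)  # 240,000 magnitudes for 10 seconds
print(sample_rate * 60)  # 1,440,000 -- roughly 1.5 million for a minute
```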
All right. So let's keep going deeper into how this model works.
Here, I've got some text input and its phoneme representation,
and I'm getting into more detail now on exactly what this neural network is.