Spaces: Running on T4 (runtime error)
100%|██████████| 23/23 [00:09<00:00, 2.30it/s]
2024-09-30 16:25:43.748 | INFO | __main__:generate_podcast:151 - Generating audio for Guest: We're excited to explore new applications for our models, like multimodal learning and natural language generation. We're also working on improving the efficiency and scalability of our training algorithms. The possibilities are endless!
/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/bark/generation.py:175: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with InferenceContext(), torch.inference_mode(), torch.no_grad(), autocast():
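The warning above comes from Bark's use of the old CUDA-specific autocast entry point. For reference, the one-line migration PyTorch is asking for looks like this (a minimal sketch of the deprecated vs. current call, not a patch to bark itself):

```python
import torch

# Deprecated form (emits the FutureWarning above on recent PyTorch):
#   with torch.cuda.amp.autocast():
#       ...

# Current form: pass the device type explicitly.
with torch.amp.autocast("cuda"):
    # run mixed-precision inference here, e.g. model(inputs)
    pass
```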
100%|██████████| 691/691 [00:04<00:00, 150.67it/s]
100%|██████████| 35/35 [00:15<00:00, 2.25it/s]
2024-09-30 16:26:05.616 | INFO | __main__:generate_podcast:151 - Generating audio for Host (Jane): Well, Tomas, it's been an absolute blast having you on the show. Thanks for sharing your work with us, and we can't wait to see what the future holds for this research!
/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/bark/generation.py:175: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with InferenceContext(), torch.inference_mode(), torch.no_grad(), autocast():
100%|██████████| 578/578 [00:03<00:00, 148.78it/s]
100%|██████████| 29/29 [00:13<00:00, 2.19it/s]
2024-09-30 16:26:23.938 | INFO | __main__:generate_podcast:151 - Generating audio for Guest: Thanks, Jane! It's been a pleasure sharing my work with your listeners.
/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/bark/generation.py:175: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with InferenceContext(), torch.inference_mode(), torch.no_grad(), autocast():
100%|██████████| 548/548 [00:03<00:00, 146.63it/s]
100%|██████████| 28/28 [00:12<00:00, 2.26it/s]
2024-09-30 16:26:42.313 | INFO | __main__:generate_podcast:186 - Generated 2431 characters of audio
Caching example 2/3
2024-09-30 16:27:28.546 | INFO | __main__:generate_podcast:143 - Generated dialogue: scratchpad="Let's explore the key points and ideas to present in the podcast. We can start with the founding of Hugging Face, its evolution, and notable achievements. We can also discuss its products and services, such as the Transformers library and the Hugging Face Hub. To make the content engaging, we can use analogies, storytelling techniques, and hypothetical scenarios to explain complex topics like machine learning and natural language processing. We can also explore the company's partnerships and collaborations, such as with Amazon Web Services and UNESCO." name_of_guest='Clément Delangue' dialogue=[DialogueItem(speaker='Host (Jane)', text="Welcome to our podcast, where we dive into the fascinating world of artificial intelligence and machine learning. I'm your host, Jane, and today we have the amazing Clément Delangue, CEO of Hugging Face, joining us to share the story of how his company became a rockstar in the field. Welcome, Clément!"), DialogueItem(speaker='Guest', text="Thanks for having me, Jane! I'm super excited to share our journey with your listeners. It's been a wild ride!"), DialogueItem(speaker='Host (Jane)', text="So, let's start from the beginning. What sparked the idea for Hugging Face, and how did it all come together?"), DialogueItem(speaker='Guest', text="Well, we founded Hugging Face back in 2016, initially as a chatbot app for teenagers. But we soon realized that our true passion was in developing tools for machine learning and natural language processing. We open-sourced our model, and the community just took off! It was like a snowball effect – more and more people started contributing, and we were like, 'Wow, this is happening!'"), DialogueItem(speaker='Host (Jane)', text="That's so cool! And what about your Transformers library? I've heard it's a total game-changer. Can you tell us more about that?"), DialogueItem(speaker='Guest', text="Ah, yes! Our Transformers library is like the Swiss Army knife of machine learning. It's a Python package that contains open-source implementations of transformer models for text, image, and audio tasks. And the best part? It's compatible with popular deep learning libraries like PyTorch, TensorFlow, and JAX. We're really proud of how it's helped democratize access to machine learning – it's like we're giving superpowers to developers!"), DialogueItem(speaker='Host (Jane)', text="I love that! And I've heard that Hugging Face has partnered with some big names in the industry. Can you spill the beans about those collaborations?"), DialogueItem(speaker='Guest', text="We've been fortunate to partner with companies like Amazon Web Services, UNESCO, and others to advance the field of AI and machine learning. These partnerships have helped us develop new products and services, like our Private Hub, which allows enterprises to build and deploy machine learning models securely. It's like we're building a dream team of AI innovators!"), DialogueItem(speaker='Host (Jane)', text='Wow, that sounds like a match made in heaven! And what about your recent funding round? Congratulations on that – you must be thrilled!'), DialogueItem(speaker='Guest', text="Thanks, Jane! We're over the moon to have raised $235 million in our Series D funding round. It's a huge vote of confidence from our investors, and we're excited to use this funding to continue innovating and growing our ecosystem. The future is bright, and we're just getting started!"), DialogueItem(speaker='Host (Jane)', text="Well, Clément, it's been an absolute blast having you on the show. Before we go, can you summarize the key takeaways from our conversation?"), DialogueItem(speaker='Guest', text="Sure thing! I think the main points are that Hugging Face has come a long way from its humble beginnings as a chatbot app, and we've been able to make a significant impact in the field of machine learning and natural language processing through our products and partnerships. We're all about empowering developers and innovators to build a better future with AI – and we're just getting started!"), DialogueItem(speaker='Host (Jane)', text='Thanks, Clément, for sharing your insights with us today. And to our listeners, thank you for tuning in! If you want to learn more about Hugging Face and its products, be sure to check out their website and follow them on social media. Until next time, stay curious and keep on learning!')]
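The repr in that log line implies a small structured-output schema for the generated dialogue. A minimal sketch of what those models likely look like (field names are taken from the repr; the Pydantic base class, the container's name, and the exact types are assumptions, since the app's own code isn't shown here):

```python
from pydantic import BaseModel

class DialogueItem(BaseModel):
    speaker: str  # e.g. "Host (Jane)" or "Guest"
    text: str     # the line of dialogue to synthesize

class Dialogue(BaseModel):  # hypothetical name for the top-level object
    scratchpad: str               # the LLM's planning notes, logged above
    name_of_guest: str            # e.g. "Clément Delangue"
    dialogue: list[DialogueItem]  # the turns iterated over for TTS
```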
2024-09-30 16:27:28.547 | INFO | __main__:generate_podcast:151 - Generating audio for Host (Jane): Welcome to our podcast, where we dive into the fascinating world of artificial intelligence and machine learning. I'm your host, Jane, and today we have the amazing Clément Delangue, CEO of Hugging Face, joining us to share the story of how his company became a rockstar in the field. Welcome, Clément!
Traceback (most recent call last):
  File "/home/user/app/app.py", line 191, in <module>
    demo = gr.Interface(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/interface.py", line 532, in __init__
    self.render_examples()
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/interface.py", line 880, in render_examples
    self.examples_handler = Examples(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/helpers.py", line 81, in create_examples
    examples_obj.create()
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/helpers.py", line 340, in create
    self._start_caching()
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/helpers.py", line 391, in _start_caching
    client_utils.synchronize_async(self.cache)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio_client/utils.py", line 855, in synchronize_async
    return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs)  # type: ignore
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
    raise return_result
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
    result[0] = await coro
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/helpers.py", line 517, in cache
    prediction = await Context.root_block.process_api(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
    result = await self.call_function(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
    return await future
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 914, in run
    result = context.run(func, *args)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/app.py", line 160, in generate_podcast
    audio_file_path = generate_podcast_audio(
  File "/home/user/app/utils.py", line 105, in generate_podcast_audio
    result = hf_client.predict(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio_client/client.py", line 468, in predict
    ).result()
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio_client/client.py", line 1499, in result
    return super().result(timeout=timeout)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio_client/client.py", line 1121, in _inner
    predictions = _predict(*data)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio_client/client.py", line 1233, in _predict
    raise AppError(
gradio_client.exceptions.AppError: The upstream Gradio app has raised an exception but has not enabled verbose error reporting. To enable, set show_error=True in launch().
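As the AppError says, the underlying exception is being swallowed by the upstream Space called through hf_client (apparently the MeloTTS endpoint, judging by the "Loaded as API" line further down). If you control that upstream app, surfacing the real error is a one-argument change; a sketch, where the function and I/O types are hypothetical stand-ins for whatever that Space actually serves:

```python
import gradio as gr

# Hypothetical stand-in for the upstream Space's synthesis function.
def tts(text: str) -> str:
    raise RuntimeError("whatever actually failed upstream")

demo = gr.Interface(fn=tts, inputs="text", outputs="audio")

# show_error=True makes the Space report the real exception to API
# callers instead of the generic AppError seen in the traceback above.
demo.launch(show_error=True)
```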
/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/pypdf/_crypt_providers/_cryptography.py:32: CryptographyDeprecationWarning: ARC4 has been moved to cryptography.hazmat.decrepit.ciphers.algorithms.ARC4 and will be removed from this module in 48.0.0.
  from cryptography.hazmat.primitives.ciphers.algorithms import AES, ARC4
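This one is pypdf's to fix (or avoidable by pinning cryptography below 48.0.0), but the new location the warning names can be handled forward-compatibly. A hedged sketch of such an import:

```python
# Forward-compatible ARC4 import: try the new "decrepit" location the
# warning points to, fall back to the pre-48.0.0 path.
try:
    from cryptography.hazmat.decrepit.ciphers.algorithms import ARC4
except ImportError:
    from cryptography.hazmat.primitives.ciphers.algorithms import ARC4
```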
Loaded as API: https://mrfakename-melotts.hf.space ✔
/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/bark/generation.py:212: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  checkpoint = torch.load(ckpt_path, map_location=device)
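The change this warning recommends is a one-argument fix at Bark's checkpoint-loading call site. A minimal sketch, assuming the checkpoint contains only tensors/state dicts (the path and device are placeholders; bark resolves its own):

```python
import torch

ckpt_path = "model.pt"  # placeholder; bark passes its own checkpoint path
device = "cpu"          # placeholder; bark picks this from CUDA availability

# Safe default: restrict unpickling to tensors and other allowlisted types.
checkpoint = torch.load(ckpt_path, map_location=device, weights_only=True)

# If a checkpoint stores custom objects, they must be allowlisted first, e.g.:
# torch.serialization.add_safe_globals([SomeCustomClass])
```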
Should be fixed - thanks!