from huggingface_hub import InferenceClient
import gradio as gr

client = InferenceClient("mistralai/Mixtral-8x7B-Instruct-v0.1")

experiences = '''

Experiences:

'''

communities = '''

Communities:

'''

recommendations = '''

Recommendations:

Sayak Paul
Machine Learning Engineer at Hugging Face, Google Developer Expert in ML, GSoC Mentor at TensorFlow

Rishiraj and I worked together on a Kaggle competition. I had already known Rishiraj and all his achievements by that time, as he is my college junior. But after working together I got to witness how humble and how intelligent Rishiraj is.

I found Rishiraj to be a great communicator, an out-of-the-box and creative thinker, and a passionate, hard-working individual. His quest to apply ML skills creatively is infectious. I vividly remember how quickly he was able to incorporate an idea I had casually suggested into our competition pipeline notebook. In no time he studied many relevant resources on object-detection-specific augmentation policies and resolution discrepancy, and applied them in practice. In short, I learned a lot from him, and I am even applying some of those learnings in my own projects.

Besides being great at ML, he’s also a chess player and is just as passionate about it. I wish Rishiraj an amazing career ahead.

'''

conferences = '''

Conferences:

Google I/O Extended 2023 by GDG Cloud Kolkata

Saturday, August 19, 2023, 11:00 AM (IST)

Techno India University - Salt Lake Sector V, Kolkata, 700091
'''

# Function to handle dynamic content display
def show_info(section):
    if section == "Experiences":
        return experiences
    elif section == "Communities":
        return communities
    elif section == "Recommendations":
        return recommendations
    else:
        return "Select a section to display information."


# Creating Gradio Interface: "About" tab
with gr.Blocks() as app:
    with gr.Row():
        with gr.Column():
            gr.Markdown("# Hi 👋, I'm [Rishiraj Acharya](https://rishirajacharya.com/) (ঋষিরাজ আচার্য্য)")
            gr.Markdown("## Google Developer Expert in ML ✨ | Hugging Face Fellow 🤗 | GSoC '22 at TensorFlow 👨🏻‍🔬 | TFUG Kolkata Organizer 🎙️ | Kaggle Master 🧠 | Dynopii ML Engineer 👨🏻‍💻")
            gr.Markdown("**I work with natural language understanding, machine translation, named entity recognition, question answering, topic segmentation, and automatic speech recognition. My work typically relies on very large quantities of data and innovative methods in deep learning to tackle user challenges around the world — in languages from around the world. My areas of work include Natural Language Engineering, Language Modeling, Text-to-Speech Software Engineering, Speech Frameworks Engineering, Data Science, and Research.**")
            gr.Markdown("⚡ Fun fact: **I’m a national-level chess player, a swimming champion, and I can lecture for hours on the outer reaches of space and the craziness of astrophysics.**")
            gr.HTML(value='rishirajacharya')
            section_dropdown = gr.Dropdown(
                ["Experiences", "Communities", "Recommendations"],
                label="Select Information to Display",
            )
        with gr.Column():
            gr.Image("profile.png")
    with gr.Row():
        info_display = gr.HTML()
    section_dropdown.change(show_info, inputs=section_dropdown, outputs=info_display)


def format_prompt(message, history):
    prompt = ""
    for user_prompt, bot_response in history:
        prompt += f"[INST] {user_prompt} [/INST]"
        prompt += f" {bot_response} "
    prompt += f"[INST] {message} [/INST]"
    return prompt


def generate(
    prompt,
    history,
    system_prompt,
    temperature=0.9,
    max_new_tokens=512,
    top_p=0.95,
    repetition_penalty=1.0,
):
    temperature = float(temperature)
    if temperature < 1e-2:
        temperature = 1e-2
    top_p = float(top_p)

    generate_kwargs = dict(
        temperature=temperature,
        max_new_tokens=max_new_tokens,
        top_p=top_p,
        repetition_penalty=repetition_penalty,
        do_sample=True,
        seed=42,
    )

    formatted_prompt = format_prompt(f"{system_prompt}, {prompt}", history)
    stream = client.text_generation(
        formatted_prompt,
        **generate_kwargs,
        stream=True,
        details=True,
        return_full_text=False,
    )
    output = ""

    for response in stream:
        output += response.token.text
        yield output
    return output


additional_inputs = [
    gr.Textbox(
        label="System Prompt",
        max_lines=1,
        interactive=True,
    ),
    gr.Slider(
        label="Temperature",
        value=0.9,
        minimum=0.0,
        maximum=1.0,
        step=0.05,
        interactive=True,
        info="Higher values produce more diverse outputs",
    ),
    gr.Slider(
        label="Max new tokens",
        value=512,
        minimum=0,
        maximum=1048,
        step=64,
        interactive=True,
        info="The maximum number of new tokens",
    ),
    gr.Slider(
        label="Top-p (nucleus sampling)",
        value=0.90,
        minimum=0.0,
        maximum=1,
        step=0.05,
        interactive=True,
        info="Higher values sample more low-probability tokens",
    ),
    gr.Slider(
        label="Repetition penalty",
        value=1.2,
        minimum=1.0,
        maximum=2.0,
        step=0.05,
        interactive=True,
        info="Penalize repeated tokens",
    ),
]

examples = [
    ["Can you explain how the QuickSort algorithm works and provide a Python implementation?", None, None, None, None, None],
    ["What are some unique features of Rust that make it stand out compared to other systems programming languages like C++?", None, None, None, None, None],
]

llm = gr.ChatInterface(
    fn=generate,
    chatbot=gr.Chatbot(
        show_label=True,
        show_share_button=True,
        show_copy_button=True,
        likeable=True,
        layout="bubble",
    ),
    additional_inputs=additional_inputs,
    title="Hi 👋, I'm Rishiraj Acharya (ঋষিরাজ আচার্য্য)",
    examples=examples,
    concurrency_limit=20,
)

# Creating Gradio Interface: "Talks" tab
with gr.Blocks() as talks:
    with gr.Row():
        with gr.Column():
            gr.Markdown("# Hi 👋, I'm [Rishiraj Acharya](https://rishirajacharya.com/) (ঋষিরাজ আচার্য্য)")
    with gr.Row():
        gr.HTML(value=conferences)

demo = gr.TabbedInterface([app, llm, talks], ["About", "Chat", "Talks"])
demo.launch()
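
# Illustrative sketch (not part of the app's behavior; demo.launch() above
# blocks, so this only runs at shutdown): the prompt layout that format_prompt
# produces for a one-turn history, reproduced here as a hypothetical
# self-contained copy so it can be checked in isolation. Note that Mixtral's
# full chat template also wraps turns in <s>...</s> tokens; this mirrors only
# the bracketed [INST] layout used in this script.
def _sketch_format_prompt(message, history):
    prompt = ""
    for user_prompt, bot_response in history:
        # Each past exchange: "[INST] user [/INST] bot "
        prompt += f"[INST] {user_prompt} [/INST] {bot_response} "
    # Current message, left open for the model to answer
    prompt += f"[INST] {message} [/INST]"
    return prompt

assert _sketch_format_prompt("How are you?", [("Hi", "Hello!")]) == (
    "[INST] Hi [/INST] Hello! [INST] How are you? [/INST]"
)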