---
license: unknown
tags:
- summarization
- PyTorch
- text2text
model-index:
- name: t5-base-weekly-diary-summarization
  results:
  - task:
      type: summarization
      name: Summarization
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 0.639237038471346
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 0.45630749696717915
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 0.5747263252831926
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 0.5747263252831925
      verified: true
metrics:
- rouge
base_model: google-t5/t5-base
pipeline_tag: summarization
---

# t5-base-weekly-diary-summarization

This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the private daily logs of the Bangkit bootcamp in Indonesia.

It achieves the following ROUGE scores:

- ROUGE-1: 0.639237038471346
- ROUGE-2: 0.45630749696717915
- ROUGE-L: 0.5747263252831926
- ROUGE-LSUM: 0.5747263252831925

## Intended use and limitations

This model can be used to summarize a daily diary log into a weekly summary.

## How to use

```python
# pip install transformers sentencepiece
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the model and tokenizer
model_name = "avisena/t5-base-weekly-diary-summarization"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Set up generation arguments
model_args = {
    "max_length": 512,       # allow longer summaries
    "length_penalty": -9.7,  # a negative penalty favors shorter sequences during beam search
    "num_beams": 5,          # use beam search for better results
    "early_stopping": True,
    "temperature": 1.7,      # only takes effect if do_sample=True is also passed
}

# Input text (T5 expects the "summarize: " task prefix)
input_text = """summarize: - I organized a large-scale professional conference and managed all logistical details, including venue selection, scheduling, and coordination with speakers. I ensured all necessary permits and insurance were in place to cover the event.
- I conducted a detailed review of the conference objectives to ensure they aligned with the industry’s standards and goals.
This involved working with the conference committee to define the agenda, target audience, and key outcomes.
- I coordinated with a diverse group of speakers and panelists, reviewing their presentations and ensuring they were aligned with the conference themes. I also worked with suppliers to arrange audiovisual equipment, catering, and other event essentials.
- The conference was structured into three main segments, starting with the most intensive one, which required meticulous planning due to its complexity and the need for precise timing and coordination.
- In our final planning session, we reviewed the conference layout, assigned roles to team members, and established backup plans for potential issues such as speaker cancellations or technical failures.
- We developed extensive contingency plans, including alternative session formats and additional technical support, to address any potential disruptions.
- To ensure the conference ran smoothly, I organized several rehearsals and pre-event briefings to test all aspects of the event and make necessary adjustments. We also coordinated with volunteers to ensure everyone was prepared for their roles.
- I managed the marketing and promotion of the conference, including designing promotional materials, managing social media outreach, and engaging with industry publications to boost attendance and interest.
- On the day of the conference, I oversaw all activities, ensured that the schedule was adhered to, and addressed any issues that arose promptly. I worked closely with speakers, staff, and attendees to ensure a successful and productive event.
- The setup for the first segment was particularly challenging due to its complexity and the need for precise execution. Despite facing several hurdles, I implemented effective solutions and worked closely with the team to ensure a successful start to the conference.
- After the conference, I conducted a thorough review to evaluate its success and gather feedback from attendees, speakers, and staff. This feedback provided valuable insights for future conferences and highlighted areas for improvement.
"""

# Tokenize input text (long inputs are truncated to 250 tokens)
input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=250, truncation=True)

# Generate summary
summary_ids = model.generate(
    input_ids,
    max_length=model_args["max_length"],
    length_penalty=model_args["length_penalty"],
    num_beams=model_args["num_beams"],
    early_stopping=model_args["early_stopping"],
    temperature=model_args["temperature"],
)

# Decode summary (decode does not take a max_length argument)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```
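Because the example above truncates inputs to 250 tokens, very long weekly logs can lose entries before they ever reach the model. One workaround, shown here as a minimal model-free sketch (the `chunk_entries` helper is hypothetical and not part of this repo), is to split the log into its bullet entries and build several shorter `summarize:` inputs that can each be summarized separately:

```python
def chunk_entries(log: str, entries_per_chunk: int = 4):
    """Split a '- ' bulleted diary log into summarize-ready chunks.

    Assumes entries are delimited by '- ' as in the example input above.
    Each returned chunk carries the T5 'summarize: ' task prefix and holds
    at most `entries_per_chunk` entries, keeping it within the token limit.
    """
    entries = [e.strip() for e in log.split("- ") if e.strip()]
    return [
        "summarize: " + " ".join("- " + e for e in entries[i:i + entries_per_chunk])
        for i in range(0, len(entries), entries_per_chunk)
    ]

log = ("- Planned the venue. - Coordinated speakers. - Ran rehearsals. "
       "- Managed marketing. - Gathered feedback.")
chunks = chunk_entries(log, entries_per_chunk=2)
# 5 entries -> 3 chunks of at most 2 entries each
```

Each chunk can then be passed through the same tokenize/generate/decode steps above, and the partial summaries concatenated into the weekly summary.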