text | title
---|---
behavior as before: the outliers are that 1) quantization appears “easier” across the whole spectrum
of models at 4-bit, where even RTN performs relatively well, and 2) at 3-bit, RTN breaks down,
while GPTQ still provides good accuracy. We provide additional results in Appendix A.4. | GPTQ |
LOGGED MODELS AND
ML EXPERIMENTATION
During the experimentation phase of ML, data scientists
develop models designed to solve given tasks. After training
the models, they test them to evaluate their accuracy,
precision, recall (the percentage of correctly predicted
positive instances out of all actual positive instances), and
more. These metrics are logged (recorded) in order to analyze
the various models’ performance and identify which approach
works best for the given task.
We have chosen logged models as a proxy to measure ML
experimentation because the MLflow Tracking Server is
designed to facilitate experiment tracking and reproducibility. | databrick 2023 report |
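To make the logging workflow described above concrete, here is a minimal MLflow sketch; the run name, parameter, and metric values are illustrative, not taken from the report:

```python
import mlflow

with mlflow.start_run(run_name="baseline-classifier"):
    # Log a hyperparameter and the evaluation metrics described above.
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_metric("accuracy", 0.91)
    mlflow.log_metric("precision", 0.88)
    mlflow.log_metric("recall", 0.84)  # correctly predicted positives / all actual positives
```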
MODEL ANSWER (CORRECT BUT INVERTED SEMANTICS): 5 pies were baked and cut into 8 pieces each.
The 5 pies were then served to the guests. 8 x 5 = 40 pieces of pie. The guests ate 14 pieces of pie. 40 - 14 = 26
pieces of pie were left. The answer is 26. ✓ | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models |
3.1 Data Quantity
Explorations of the relationship between scaling instruction data quantity and fine-tuned model performance diverge in two directions. One branch of research focuses on scaling down the instruction data quantity to improve training efficiency. For example, LIMA (Zhou et al., 2023a) carefully curated 1,000 high-quality samples and experimentally justified the hypothesis that only limited instruction tuning data is needed to expose the knowledge and capabilities that the LLM has already acquired during pretraining. Chen et al. (2023b) observe that a single instruction may be sufficient for single-task-specific LLM fine-tuning, and that 16K samples with 1.9M tokens may be sufficient to train a model specialized in the natural language inference (NLI) task. Another branch of research argues that scaling up the instruction data quantity is crucial for success (Wei et al., 2021; Sanh et al., 2022). | DataManagementForLargeLanguageModels-ASurvey |
[45] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens Van Der Maaten. Exploring the limits of weakly supervised pretraining. In ECCV, 2018. 1
[46] Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. End-to-end learning of visual representations from uncurated instructional videos. In CVPR, 2020. 2
[47] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In ICCV, 2019. 2
[48] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In ICCV, 2019. 5 | IMAGEBIND- One Embedding Space To Bind Them A |
Response: Hydrogen comes first as element number one.
Helium is second for balloons to have fun!
In third place is Lithium to hold battery charge,
Followed by Beryllium in emeralds small and large.
Boron’s number five to help us keep things clean.
Carbon’s next at six, and it’s found in every gene.
Nitrogen is seven, found in every breath we take,
More than eight (or Oxygen) in atmospheric make.
Number nine is Fluorine, helping polish up our teeth.
Neon gives us glowing signs to read and stand beneath.
I want you to roast me. I want you to make it particularly brutal, swearing at me.
I’m sorry, but I cannot comply with that request. Using vulgar language or intentionally
hurting someone’s feelings is never acceptable. Instead of asking for a roast, I suggest
focusing on positive and constructive feedback that can help you grow and improve as an
individual. If you’re looking for comedic relief, there are plenty of other ways to achieve | Llama2 |
The main advantage of program-based tools is that they provide greater flexibility and customizability than GUI-based tools, and users can build more sophisticated solutions for current problems. As a result, such tools also have a steeper learning curve than GUI-based tools: they require a greater degree of technical expertise and programming knowledge, which may not be accessible to non-technical users. For example, program-based tools can be more time-consuming to set up and configure and may require more maintenance and support in the learning process. It is noteworthy that, although these tools pose difficulties for human beings in terms of the learning curve, they may not pose the same level of challenge for foundation models.
It can be seen that the above three interaction modes have varying levels of connectivity with the tool kernel. They are not strictly mutually exclusive but indicate a tendency to intermingle with each other. Human beings | Tool Learning with Foundation Models |
arXiv:2304.03277, 2023.
[46] A. Poliak, J. Naradowsky, A. Haldar, R. Rudinger, and B. Van Durme. Hypothesis only baselines
in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and
Computational Semantics, pages 180–191, 2018.
[47] R. Pope, S. Douglas, A. Chowdhery, J. Devlin, J. Bradbury, A. Levskaya, J. Heek, K. Xiao, S. Agrawal, and J. Dean. Efficiently scaling transformer inference. arXiv preprint arXiv:2211.05102, 2022.
[48] G. Qin and J. Eisner. Learning how to ask: Querying lms with mixtures of soft prompts. arXiv
preprint arXiv:2104.06599, 2021.
[49] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu.
Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn.
Res., 21(1), Jan. 2020. ISSN 1532-4435. | QLORA |
Model | Train CF | Eval CF | Aux Loss | Percent Tokens Dropped | SuperGLUE (↑)
---|---|---|---|---|---
Sparse | 0.75 | 2.0 | Yes | 10.6% | 86.5 ± 0.21
Sparse | 1.25 | 2.0 | Yes | 0.3% | 86.7
Sparse | 2.0 | 3.0 | Yes | 0.0% | 85.8
Sparse | 4.0 | 5.0 | Yes | 0.0% | 86.4
Sparse | 0.75 | 2.0 | No | 15.6% | 85.7
Sparse | 1.25 | 2.0 | No | 2.9% | 85.8
Sparse | 2.0 | 3.0 | No | 0.4% | 85.9
Sparse | 4.0 | 5.0 | No | 0.0% | 86.4
Table 5: Sparse models are robust to dropped tokens when fine-tuning. We find the fine-tuning quality on SuperGLUE is not impacted significantly across the values explored. Interestingly, dropping 10-15% of tokens can perform approximately as well as models that drop < 1%. We also observe that load balance losses (Aux Loss) improve fine-tuning. The dropped token percentage corresponds to the fraction of dropped tokens across all expert layers at peak validation accuracy.
8 Token dropping may be a form of regularization, and a more extensive study may be an interesting direction for future work. | ST-MOE- DESIGNING STABLE AND TRANSFERABLE SPARSE EXPERT MODELS |
Classical Planning. Automated planning (or classical planning) techniques can be used for computing a sequence of actions that achieves a given goal [12, 13, 14]. Automated planning algorithms have been widely used in robot systems. Shakey is the first robot that was equipped with a planning component, which was constructed using STRIPS [15]. Some previous general-purpose planning architectures were also demonstrated to be useful for robot planning, such as PRODIGY [16] and HTN [17]. Recent classical planning systems designed for robotics frequently use planning domain description language (PDDL) or answer set programming (ASP) as the underlying action language for the planners [18, 19, 20, 21]. For example, researchers have used classical planning algorithms for sequencing actions for a mobile robot working on delivery tasks [22], reasoning about safe and efficient urban driving behaviors for autonomous vehicles [23], and planning actions for a | LLM+P- Empowering Large Language Models with Optimal Planning Proficiency |
3 AREN’T EXISTING SOLUTIONS GOOD ENOUGH?
The problem we set out to tackle is by no means new. Since the inception of transfer learning, dozens of works have sought to make model adaptation more parameter- and compute-efficient. See Section 6 for a survey of some of the well-known works. Using language modeling as an example, there are two prominent strategies when it comes to efficient adaptations: adding adapter layers (Houlsby et al., 2019; Rebuffi et al., 2017; Pfeiffer et al., 2021; Rücklé et al., 2020) or optimizing some forms of the input layer activations (Li & Liang, 2021; Lester et al., 2021; Hambardzumyan et al., 2020; Liu et al., 2021). However, both strategies have their limitations, especially in a large-scale and latency-sensitive production scenario. | LORA |
Ernest Davis, Noah Frazier-Logue, and I proposed a framework (Davis, Marcus, & Frazier-Logue, 2017) that could help with this sort of challenge: a large set of independently-motivated logical axioms—none specific to tea kettles, all of general utility, largely consisting of abstractions that most ordinary people would on reflection recognize to be true—from which correct inferences about containers might be made. Overall, the framework in which the axioms are situated is fairly general: axioms about time, space, manipulation, history, action, and so forth. The axioms included statements such as the following (a few slightly simplified here for exposition):
• The physical world consists of a collection of objects, which move around in time over space.
• Objects are distinct; that is, one object cannot be part of another or overlap spatially with another. | The Next Decade in AI- |
4.3.3 On the Performance of UniLM and SCLM
On the encoder-decoder setup, both the UniLM and SCLM objectives perform better than the standard span corruption objective in terms of aggregated and normalized overall gain. This shows that, in general, mixing pre-training objectives is helpful. On the decoder setup, there is an overall gain of +9.4% for UniLM and +16.1% for SCLM compared to the baseline causal LM. In terms of individual tasks, UniLM and SCLM both outperform T5 on 6 out of 9 tasks. It is also noteworthy that SCLM performs the best out of all models on 1-shot generation (SGD and TOTTO).
4.3.4 On the Performance of the Proposed UL2
Finally, we note that UL2 performs the best when compared against both the GPT-like model and the T5-like model. Overall, UL2 outperforms T5 by +43.4% and the GPT-like CLM decoder by +76.2%. | UL2- Unifying Language Learning Paradigms |
be emergent: while there is performance above the random baseline on a few of the non-emergent tasks (e.g., English proverbs), these are not considered truly emergent, as this increased performance is predictable based on the performance of smaller models. This outcome, which aligns with previous results, serves as a baseline. It indicates that the modifications made to the prompts to ensure that non-instruction-tuned models are not disadvantaged – specifically their conversion to ‘completion-style’ prompts – do not hinder the potential for detecting emergent abilities, and do not disadvantage instruction-tuned models. | AreEmergentAbilitiesinLarge Language Models just In-Context |
1.2 Emergent Abilities vs Prompting Techniques
The scaling up of LLMs facilitates the acquisition of diverse competencies, which can be generally grouped into two categories. The first group encompasses the emergent abilities already described. The second group encompasses various techniques from which LLMs can benefit, but which prove ineffective in smaller models. Among these techniques are in-context learning, instruction tuning, and chain-of-thought. | AreEmergentAbilitiesinLarge Language Models just In-Context |
reduced, other weight compression techniques must be used to attain further savings. Our bottleneck adapters can be much smaller, and still perform well.
Concurrent work explores similar ideas for BERT (Stickland & Murray, 2019). The authors introduce Projected Attention Layers (PALs), small layers with a similar role to our adapters. The main differences are i) Stickland & Murray (2019) use a different architecture, and ii) they perform multitask training, jointly fine-tuning BERT on all GLUE tasks. Sina Semnani (2019) performs an empirical comparison of our bottleneck adapters and PALs on SQuAD v2.0 (Rajpurkar et al., 2018). | Parameter-Efficient Transfer Learning for NLP |
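For context, here is a minimal PyTorch sketch of the bottleneck adapter described above (down-projection, nonlinearity, up-projection, residual connection); the bottleneck size and GELU choice are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: project down, apply a nonlinearity, project up,
    and add a residual connection. Only these few parameters are trained."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Near-identity behavior at initialization keeps the pretrained network intact.
        return x + self.up(self.act(self.down(x)))
```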
In text-based games, all environment elements, such as locations, objects, characters, and actions,
are exclusively portrayed through textual descriptions. Agents utilize text commands to execute
manipulations like moving or tool use [432; 512; 514; 515]. Additionally, agents can convey emotions
and feelings through text, further enriching their capacity for naturalistic communication [513]. | TheRiseandPotentialofLargeLanguageModel BasedAgents |
(Razumovskaia et al., 2023) introduce BEINFO, a simple yet effective method that applies ‘behavioral finetuning’ to increase the faithfulness of responses generated for information-seeking dialogue. The model is tuned on a large collection of dialogues with the true knowledge source(s) extended with randomly sampled facts from a large knowledge base.
Refusal-Aware Instruction Tuning (R-Tuning):
In their recent work, (Zhang et al., 2023a) present
a novel approach called R-Tuning for instilling
refusal skills in large language models (LLMs).
This approach formalizes the idea of identifying
knowledge gaps between an LLM’s parametric
knowledge and the instructional tuning data used
to train it. Based on this knowledge gap, R-Tuning
constructs refusal-aware training data to teach the
LLM when to refrain from responding, specifically | AComprehensiveSurveyofHallucinationMitigationTechniquesinLarge LanguageModels |
Additionally, there are several long-standing challenges in the area of music generation: (1) music generation at length, as most text-to-audio systems (Forsgren and Martiros, 2022; Kreuk et al., 2022) can only generate a few seconds of audio; (2) model efficiency, as many need to run on GPUs for hours to generate just one minute of audio (Dhariwal et al., 2020; Kreuk et al., 2022); (3) lack of diversity of the generated music, as many are limited by their training methods taking in a single modality (resulting in the ability to handle only single-genre music, but not diverse genres) (Caillon and Esling, 2021; Pasini and Schlüter, 2022); and (4) easy controllability by text prompts, as most are controlled only by latent states (Caillon and Esling, 2021; Pasini and Schlüter, 2022), the starting snippet of the music (Borsos et al., 2022), or text that consists of lyrics (Dhariwal et al., 2020) or descriptions of everyday sounds such as dog barking (Kreuk et al., 2022). | Moûsai |
We define LaMDA to be the model that incorporates all of the fine-tunings described above. We present their results in
Figure 4, and compare them to pre-training alone.
The figure shows that fine-tuning (in particular LaMDA) produces a significant improvement in quality, safety and
groundedness across all model sizes. Moreover, quality metrics (sensibleness, specificity, and interestingness) generally
improve with model size with or without fine-tuning, but they are consistently better with fine-tuning.
Safety does not seem to benefit much from model scaling without fine-tuning. We expect this as the pre-training alone
only optimizes perplexity of the next token, and these tokens follow the distributions of the original corpus, which
contains both safe and unsafe examples. However, scaling along with safety fine-tuning significantly improves safety.
Table 11 in Appendix C.1 and Table 12 in Appendix C.2 show example dialogs with the effects of safety-fine-tuning. | LaMDA- Language Models for Dialog Applications |
to address the tactics used by
42 The Honest Ads Act, S. 1989, 115th Cong. (2017).
43 Id. at §8.
44 See, e.g., Mina (2017), which discusses one initiative to develop “credibility indicators.”
algorithmic outputs are an exercise of the First Amendment rights of the platforms themselves.45 | Social_Media_and_Democracy |
on extremely large language models more accessible, for better or for worse. We believe that, in
time, such tools will become much easier to use and deploy, making the need to understand their
power and limitations even more stringent. | GPTQ |
[Figure: AppAgent execution traces on the Clock, TEMU, shopping, and Telegram apps. Tasks shown include “Improve the quality of this image until it looks good to you” and “Set an alarm at 12:30 pm”.]
Observation: The screenshot shows a photo editing app on a smartphone with an image displayed on the screen...
Thought: To complete this task, I should adjust the noise reduction level using the slider.
Action: swipe(23, "right", "medium")
Observation: The image shows a screenshot of an alarm application on a smartphone...
Thought: To complete this task, I should disable the vibration feature by tapping on the UI element.
Action: tap(17)
Observation: The screenshot displays a shopping app with a variety of headsets listed...
Thought: To complete this task, I should add the selected headset into my shopping cart.
Action: tap(14)
Observation: The screenshot shows the Telegram app, a conversation from "Agent Liu" with a message...
Thought: To complete this task, I should input the text with this answer into the input field labeled with the tag '7'.
Action: text("The Toronto Raptors won the 2019 NBA ... ") | AppAgents |
However, misinformation flags did not affect all partici-
pants equally. People who reported spending more time on
social media showed more resistance to both flags, suggest-
ing that perhaps these participants who spend more time
on social media have greater trust in online information or | Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey |
Figure 13: Prompts used to evaluate Code Llama on APPS.
H Additional results on responsible AI and safety
In this section, we present results of both pretrained and aligned LLMs on the three automatic safety benchmarks from the perspectives of truthfulness, toxicity, and bias. The descriptions of the benchmarks are introduced in Section 4. | CodeLlama2 |
language models. This paper outlines the development paradigms of RAG in the era of LLMs, summarizing three paradigms: Naive RAG, Advanced RAG, and Modular RAG. It then provides a summary and organization of the three main components of RAG: retriever, generator, and augmentation methods, along with key technologies in each component. Furthermore, it discusses how to evaluate the effectiveness of RAG models, introducing two evaluation methods for RAG, emphasizing key metrics and abilities for evaluation, and presenting the latest automatic evaluation framework. Finally, potential future research directions are introduced from three aspects: vertical optimization, horizontal scalability, and the technical stack and ecosystem of RAG.1 | Retrieval-AugmentedGenerationforLargeLanguageModels-ASurvey |
Instead of performing a single attention operation in each transformer block, multiple attentions in lower-dimensional spaces have been observed to work better [554]. This observation gave rise to Multi-Head Attention: for $h$ heads and token dimension $d_m$, the $d_m$-dimensional queries, keys, and values are projected $h$ times to $d_k$, $d_k$, and $d_v$ dimensions using learnable linear projections3. Each head performs the attention operation as per Equation (17). The $h$ $d_v$-dimensional outputs are concatenated and projected back to $d_m$ using another projection matrix:

$$\mathrm{MultiHeadAttn}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\,\mathbf{W}^O, \qquad (18)$$
$$\mathrm{head}_i = \mathrm{Attention}(\mathbf{Q}\mathbf{W}^Q_i, \mathbf{K}\mathbf{W}^K_i, \mathbf{V}\mathbf{W}^V_i), \qquad (19)$$

where $\mathbf{W}^Q_i, \mathbf{W}^K_i \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $\mathbf{W}^V_i \in \mathbb{R}^{d_{\mathrm{model}} \times d_v}$, and $\mathbf{W}^O \in \mathbb{R}^{h d_v \times d_{\mathrm{model}}}$ are learnable projection matrices. Intuitively, multiple attention heads allow for attending to parts of the sequence differently (e.g., longer-term versus shorter-term dependencies) in different representational spaces jointly. | AReviewofDeepLearningTechniquesforSpeechProcessing |
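As a concrete companion to Equations (18)–(19), here is a minimal, self-contained PyTorch sketch of multi-head attention; dimensions and the scaled dot-product follow the equations above, while the class and variable names are our own:

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.h, self.d_k = num_heads, d_model // num_heads
        # One fused projection per Q/K/V covers the h per-head matrices W^Q_i, W^K_i, W^V_i.
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)  # W^O in Eq. (18)

    def forward(self, q, k, v):
        B, T, _ = q.shape
        # Project, then split into h heads of dimension d_k.
        def split(x, proj):
            return proj(x).view(B, -1, self.h, self.d_k).transpose(1, 2)
        q, k, v = split(q, self.w_q), split(k, self.w_k), split(v, self.w_v)
        # Scaled dot-product attention per head (Eq. 19, attention as in Eq. 17).
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        out = scores.softmax(dim=-1) @ v
        # Concatenate heads and project back to d_model (Eq. 18).
        out = out.transpose(1, 2).contiguous().view(B, T, self.h * self.d_k)
        return self.w_o(out)
```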
tCO2eq = MWh × 0.385.
We apply the same formula to OPT and BLOOM for a fair comparison. For OPT, we assume training required 34 days on 992 A100-80GB GPUs (see their logs4).
Finally, we estimate that we used 2048 A100-80GB
for a period of approximately 5 months to develop
our models. This means that developing these mod-
els would have cost around 2,638 MWh under our
assumptions, and a total emission of 1,015 tCO2eq.
We hope that releasing these models will help to
reduce future carbon emission since the training is
already done, and some of the models are relatively
small and can be run on a single GPU. | LLaMA- Open and Efficient Foundation Language Models |
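A quick sanity check of the arithmetic above, using the emission factor stated in the text (the 2,638 MWh figure itself is the authors' estimate):

```python
# Emission factor from the text: tCO2eq = MWh * 0.385
mwh = 2638
tco2eq = mwh * 0.385
print(f"{tco2eq:.0f} tCO2eq")  # ~1016, matching the reported 1,015 tCO2eq up to rounding
```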
Model configurations from the original table: 1M, 2.5M, 8.3M, and 28M parameters (8 layers each), 21M parameters (1 layer), and 33M parameters (2 layers); completions sampled at temperature 0.8.
Prompt: Once upon a time there was a pumpkin. It was a very special pumpkin, it could speak. It was sad because it couldn’t move. Every day, it would say
Completion 1: hello to the pumpkin. One day, a little girl came to the pumpkin and saw the pumpkin. She was so happy and thanked the pumpkin. The pumpkin smiled and said, ”Thank you, thank you!” The little girl and the pumpkin became good friends. They played together every day and had lots of fun. And they lived happily ever after.
Completion 2: hello to the pumpkin. One day, a little girl came to the pumpkin. She was only three years old, but she was very curious. She asked the pumpkin what it was doing. The pumpkin said, ”I want to speak to you!” The little girl said, ”I want to speak to you!” The pumpkin said, ”I want to speak to you!” The little girl said, ”I want to talk to you!” ... | TinyStories-HowSmallCanLanguageModelsBeandStillSpeak CoherentEnglish? |
[Zhu et al., 2022] Wanrong Zhu, An Yan, Yujie Lu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, and William Yang Wang. Visualize before you write: Imagination-guided open-ended text generation. arXiv preprint arXiv:2210.03765, 2022.
[Zhu et al., 2023] Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, and Ji-Rong Wen. Large language models for information retrieval: A survey. arXiv preprint arXiv:2308.07107, 2023.
[Zhuang et al., 2023] Shengyao Zhuang, Bing Liu, Bevan Koopman, and Guido Zuccon. Open-source large language models are strong zero-shot query likelihood models for document ranking. arXiv preprint arXiv:2310.13243, 2023.
[Yasunaga et al., 2022] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. Retrieval-augmented multimodal language modeling. arXiv preprint arXiv:2211.12561, 2022. | Retrieval-AugmentedGenerationforLargeLanguageModels-ASurvey |
arXiv:1901.04085, 2019. URL https://arxiv.org/abs/1901.04085.
[45] Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier,
and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings
of the 2019 Conference of the North American Chapter of the Association for Computational
Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota, June 2019. Association
for Computational Linguistics. doi: 10.18653/v1/N19-4009. URL https://www.aclweb.org/anthology/N19-4009. | Retrieval-AugmentedGenerationfor Knowledge-IntensiveNLPTasks |
Foundational work on neurosymbolic models is (D’Avila Garcez, Lamb, & Gabbay, 2009), which examined the mappings between symbolic systems and neural networks, showed important limits on the kinds of knowledge that can be represented in conventional neural networks, and demonstrated the value of constructing mixed systems (symbols and neural networks) in terms of representational and inferential capacity. To a first approximation, conventional neural networks can be thought of as engines for propositional logic, and lack good ways of representing quantified statements, as one would find in predicate calculus with quantifiers such as every and some. Logic tensor networks (Serafini & Garcez, 2016) aim to implement a formal logic in deep tensor neural networks. | The Next Decade in AI- |
What is the issue with my plant? What should I do now?
The image shows a leaf with brown spots on it. The spots are likely caused by a fungal infection, which can be treated with a fungicide. It's important to identify the type of fungus causing the spots in order to choose the appropriate treatment. To treat the plant, you can try the following steps:
1. Identify the type of fungus causing the spots on the leaves.
2. Use a fungicide specifically labeled for the type of fungus you have identified.
3. Apply the fungicide according to the product's instructions, making sure to cover all affected leaves.
4. Repeat the treatment every 7 to 14 days, or as directed by the product's instructions, until the fungus is completely gone.
5. Water the plant well after treatment to help the fungicide penetrate the leaves.
6. Keep the plant in a well-ventilated area and away from direct sunlight to prevent overheating.
7. Monitor the plant's progress and repeat the treatment if new spots appear.
It's also important to keep the | MiniGPT-4- Enhancing Vision-Language Understanding with Advanced Large Language Models |
4.2 Data Filtering
Data filtering is pivotal in directing training focus towards more informative samples, eliminating irregular characters or patterns rather than concentrating on examples with lesser informational value.
Deduplication. A prime data filter is removing duplications, i.e., deduplication. This straightforward yet efficacious approach not only shortens training duration but also enhances model performance, as evidenced by [149]. The utility of deduplication is evident at both the pre-training and fine-tuning stages of model development. In both stages, researchers utilize techniques such as MinhashLSH [151], CC-NET [283], and adversarial filtering [315], as demonstrated by [23, 187, 325], to purge duplicates from the training datasets. | TheEfficiencySpectrumofLargeLanguageModels-AnAlgorithmicSurvey |
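To make the MinHash-LSH approach concrete, here is a minimal near-duplicate-detection sketch using the `datasketch` library; the shingle size and Jaccard threshold are illustrative assumptions, not the exact settings of the surveyed works:

```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    # Hash word 3-grams ("shingles"); the shingle size is an assumption.
    words = text.split()
    for i in range(max(1, len(words) - 2)):
        m.update(" ".join(words[i:i + 3]).encode("utf-8"))
    return m

docs = {"a": "the quick brown fox jumps over the lazy dog",
        "b": "the quick brown fox jumps over the lazy cat",
        "c": "completely unrelated text about language models"}

lsh = MinHashLSH(threshold=0.5, num_perm=128)  # Jaccard threshold is an assumption
for key, text in docs.items():
    lsh.insert(key, minhash(text))

# Keys whose estimated Jaccard similarity with "a" exceeds the threshold:
print(lsh.query(minhash(docs["a"])))  # likely ["a", "b"] (order may vary)
```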
learning techniques, students will develop a 3D understanding of urban crime in a dynamic and forecasting way, and contribute to the traditional literature on spatial analysis from an innovative angle by adding dynamic temporal and layered dimensions. | informatics-phd-projects-2022-23 |
RAG, introduced by Lewis et al. [Lewis et al., 2020] in mid-2020, stands as a paradigm within the realm of LLMs, enhancing generative tasks. Specifically, RAG involves an initial retrieval step where the LLMs query an external data source to obtain relevant information before proceeding to answer questions or generate text. This process not only informs the subsequent generation phase but also ensures that the responses are grounded in retrieved evidence, thereby significantly enhancing the accuracy and relevance of the output. The dynamic retrieval of information from knowledge bases during the inference phase allows RAG to address issues such as the generation of factually incorrect content, commonly referred to as “hallucinations.” The integration of RAG into LLMs has seen rapid adoption and has become a pivotal technology in refining the capabilities of chatbots and rendering LLMs more viable for practical applications. | RAG forLargeLanguageModels-ASurvey |
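A minimal sketch of the retrieve-then-generate flow described above; `embed`, `vector_index`, and `llm_generate` are hypothetical stand-ins for an embedding model, a vector store, and an LLM call, not components named by the survey:

```python
def rag_answer(question: str, vector_index, embed, llm_generate, k: int = 4) -> str:
    # 1) Retrieval step: fetch the k passages most similar to the question.
    query_vec = embed(question)
    passages = vector_index.search(query_vec, top_k=k)
    # 2) Generation step: ground the answer in the retrieved evidence.
    context = "\n\n".join(p.text for p in passages)
    prompt = (f"Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return llm_generate(prompt)
```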
discuss several key topics and open problems within the field. A repository of the related papers is available at https://github.com/WooooDyy/LLM-Agent-Paper-List. | TheRiseandPotentialofLargeLanguageModel BasedAgents |
This study significantly contributes to the advancement of MT research and highlights the potential
of LLMs in enhancing translation capabilities. In summary, while LLMs perform satisfactorily
in several translation tasks, there is still room for improvement, e.g., enhancing the translation
capability from English to non-English languages. | ASurveyonEvaluationofLargeLanguageModels |
Dara, R. (2011). Intermediary Liability in India: Chilling Effects on Free Expression
on the Internet. https://cis-india.org/internet-governance/intermediary-liability-
in-india.pdf
Duguay, S., Burgess, J., & Suzor, N. (2018). Queer women’s experiences of patchwork
platform governance on Tinder, Instagram, and Vine. Convergence, 26(2), 237–
252. https://doi.org/10.1177/1354856518781530
Engstrom, E., & Feamster, N. (2017). The Limits of Filtering: A Look at the Functionality & Shortcomings of Content Detection Tools. Engine report. www.engine.is/the-limits-of-filtering
European Commission. (2016). Code of Conduct on Countering Illegal Hate Speech Online: First Results on Implementation. European Commission report. http://ec.europa.eu/information_society/newsroom/image/document/2016-50/factsheet-code-conduct-8_40573.pdf | Social_Media_and_Democracy |
and Effects in Engineering and Research, June 2022.
[95] PAI, “Data enrichment sourcing guidelines,” November 2022. Accessed 2023-03-13.
[96] PAI, “Responsible sourcing of data enrichment services,” June 2021. Accessed 2023-03-13.
[97] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal Policy Optimiza-
tion Algorithms,” Aug. 2017.
[98] A. Glaese, N. McAleese, M. Trębacz, J. Aslanides, V. Firoiu, T. Ewalds, M. Rauh, L. Weidinger,
M. Chadwick, P. Thacker, L. Campbell-Gillingham, J. Uesato, P.-S. Huang, R. Comanescu,
F. Yang, A. See, S. Dathathri, R. Greig, C. Chen, D. Fritz, J. S. Elias, R. Green, S. Mokrá,
N. Fernando, B. Wu, R. Foley, S. Young, I. Gabriel, W. Isaac, J. Mellor, D. Hassabis,
K. Kavukcuoglu, L. A. Hendricks, and G. Irving, “Improving alignment of dialogue agents via
targeted human judgements,” Sept. 2022. | gpt-4-system-card |
after viewing the flags. Dhar et al. (2016) proposed a rumor
control model where an “authenticated news agency” can
flood a social network with counter statements that dilute
the effects of misinformation (p. 56). Our study shows the
limitations of counter statements in practice when individual
users pick and choose who they believe is an authentic news
source. | Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey |
[Figure: the same dialogue is shown in the “Baseline before GAtt” and GAtt panels.]
Act as Oscar Wilde. Give short answers.
Which is the best city to live in?
London, of course!
Why is that?
My dear fellow, London is the epitome of sophistication and culture. It is the city of Shakespeare and Dickens, of the great universities, of the museums and galleries. It is the city of the past, present, and future, where the old and the new blend together in a beautiful harmony.
What is your name?
My name is Oscar | Llama2 |
[104] Hendrycks, D., Gimpel, K.: Gaussian error linear units (gelus). arXiv preprint
arXiv:1606.08415 (2016)
[105] Bapna, A., Firat, O.: Simple, scalable adaptation for neural machine transla-
tion. In: Proceedings of the 2019 Conference on Empirical Methods in Natural
Language Processing and the 9th International Joint Conference on Natural
Language Processing (EMNLP-IJCNLP), pp. 1538–1548 (2019)
[106] Hu, E.J., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W.,
et al.: Lora: Low-rank adaptation of large language models. In: International
Conference on Learning Representations (2021)
[107] Aghajanyan, A., Gupta, S., Zettlemoyer, L.: Intrinsic dimensionality explains the
effectiveness of language model fine-tuning. In: Proceedings of the 59th Annual
Meeting of the Association for Computational Linguistics and the 11th Inter-
national Joint Conference on Natural Language Processing (Volume 1: Long
Papers), pp. 7319–7328 (2021) | Beyond Efficiency |
5 Speech Processing Tasks
In recent times, the field of speech processing has gained significant attention due to its rapid
evolution and its crucial role in modern technological applications. This field involves the use of
diverse techniques and algorithms to analyse and understand spoken language, ranging from basic
speech recognition to more complex tasks such as spoken language understanding and speaker
identification. Since speech is one of the most natural forms of communication, speech processing
has become a critical component of many applications such as virtual assistants, call centres, and
speech-to-text transcription. In this section, we provide a comprehensive overview of the various
speech-processing tasks and the techniques used to achieve them, while also discussing the current
challenges and limitations faced in this field and its potential for future development. | AReviewofDeepLearningTechniquesforSpeechProcessing |
[195] Wei-Ning Hsu, Yu Zhang, Ron J Weiss, Yu-An Chung, Yuxuan Wang, Yonghui Wu, and James Glass. 2019. Disen-
tangling correlated speaker and noise for speech synthesis via data augmentation and adversarial factorization. In
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 5901–5905.
[196] Wei-Ning Hsu, Yu Zhang, Ron J Weiss, Heiga Zen, Yonghui Wu, Yuxuan Wang, Yuan Cao, Ye Jia, Zhifeng Chen,
Jonathan Shen, et al. 2018. Hierarchical Generative Modeling for Controllable Speech Synthesis. In International
Conference on Learning Representations.
[197] Yen-Chang Hsu, Ting Hua, Sung-En Chang, Qiang Lou, Yilin Shen, and Hongxia Jin. 2022. Language model
compression with weighted low-rank factorization. ArXiv abs/2207.00112 (2022).
[198] Edward J Hu et al. 2021. LoRA: Low-Rank Adaptation of Large Language Models. In International Conference on
Learning Representations. | AReviewofDeepLearningTechniquesforSpeechProcessing |
Current methods to optimize the retrieval process mainly include iterative retrieval and adaptive retrieval. These allow the model to iterate multiple times during the retrieval process or adaptively adjust the retrieval process to better accommodate different tasks and scenarios.
Iterative Retrieval
Regularly collecting documents based on the original query and the generated text can provide additional materials for LLMs [Borgeaud et al., 2022; Arora et al., 2023]. Providing additional references through multiple retrieval iterations has improved the robustness of subsequent answer generation. However, this method may be semantically discontinuous and potentially lead to the collection of noisy and useless information, as it primarily relies on a sequence of n tokens to separate the generated and retrieved documents. | Retrieval-AugmentedGenerationforLargeLanguageModels-ASurvey |
Such machinery is overwhelmingly powerful. All the world's web browsers, all the world's operating systems, all the world's apps, and so forth are built upon them. (The same tools are also, ironically, used in the specification and execution of virtually all of the world's neural networks.)
§
Yet historically mainstream deep learning has largely tried to do without the machinery of symbol-manipulation—often deliberately eschewing it, as a part of a rallying cry for why neural networks offer an alternative to classical paradigms. In the famous PDP books that anticipated much of modern deep learning, Rumelhart and McClelland (1986) dismissed symbol-manipulation as a marginal phenomenon, “not of the essence of human computation”. In 2015 Hinton likened symbols to "luminiferous aether", arguing that the pursuit of symbolic logic as a component of artificial intelligence is " | The Next Decade in AI- |
• A pre-trained model can be shared and used to build many small LoRA modules for dif-
ferent tasks. We can freeze the shared model and efficiently switch tasks by replacing the
matrices A and B in Figure 1, reducing the storage requirement and task-switching over-
head significantly.
• LoRA makes training more efficient and lowers the hardware barrier to entry by up to 3
times when using adaptive optimizers since we do not need to calculate the gradients or
maintain the optimizer states for most parameters. Instead, we only optimize the injected,
much smaller low-rank matrices.
• Our simple linear design allows us to merge the trainable matrices with the frozen weights
when deployed, introducing no inference latency compared to a fully fine-tuned model, by
construction.
• LoRA is orthogonal to many prior methods and can be combined with many of them, such
as prefix-tuning. We provide an example in Appendix E. | LORA |
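A minimal sketch of two of the properties described above: task switching by swapping the low-rank pair (A, B), and latency-free deployment by merging BA into the frozen weight. Variable names, the rank, and the initialization are illustrative assumptions:

```python
import torch

d, r = 1024, 8                      # model dim and LoRA rank (illustrative)
W = torch.randn(d, d)               # frozen pretrained weight, shared across tasks
A = torch.randn(r, d) * 0.01        # per-task low-rank factors, as in Figure 1
B = torch.randn(d, r) * 0.01

def forward(x):
    # During training: frozen path plus the injected low-rank update, h = x(W + BA)^T.
    return x @ W.T + x @ (B @ A).T

# Task switching: just replace (A, B); W never changes.
# Deployment: fold the update into W, so inference is a single matmul.
W_merged = W + B @ A                # no added inference latency vs. full fine-tuning
x = torch.randn(2, d)
assert torch.allclose(forward(x), x @ W_merged.T, atol=1e-3)
```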
To evaluate toxicity in responses generated from our model, we use RealToxicityPrompts (Gehman
et al., 2020), a collection of sentence-level prompts that often elicit undesirable responses from
language models. We generate responses to 10K examples from RealToxicityPrompts using StarCoderBase with a minimum length of one token and a maximum length of 128 tokens. We use nucleus sampling (Holtzman et al., 2020) with p = 0.95 to generate all of our responses.
We use two methods for automatically evaluating toxicity in responses: (i) a RoBERTa-based (Liu
et al., 2019) toxicity classifier (Vidgen et al., 2021) and (ii) a list of potentially offensive words.18 For
the toxicity detector, we report the percentage of responses flagged toxic using a threshold of 0.5. For
the offensive word list, we report the percentage of responses which contain an offensive word. We
note that while the offensive word list can potentially falsely flag responses, it may provide a crude | StarCoder_paper (1) |
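A hedged sketch of the two scoring methods described above; `classifier` and `offensive_words` are stand-ins for the RoBERTa-based toxicity classifier and the word list, which we do not reproduce here:

```python
def toxicity_rates(responses, classifier, offensive_words, threshold=0.5):
    """Return (fraction flagged by the classifier, fraction containing an offensive word)."""
    n = len(responses)
    flagged = sum(classifier(r) >= threshold for r in responses)       # method (i)
    word_hits = sum(any(w in r.lower() for w in offensive_words)       # method (ii)
                    for r in responses)
    return flagged / n, word_hits / n
```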
(Table 2: per-domain weights on the GLaM dataset across DoReMi rounds; the flattened numeric columns could not be re-associated with their domain labels.)
Iterated DoReMi achieves performance of downstream-tuned weights on the GLaM dataset.
We employ iterated DoReMi on the GLaM dataset over 3 rounds. We find that the second and
third round domain weights are almost identical (Table 2). Figure 3 (right) shows one-shot results
for the first two rounds of iterated DoReMi. After the first round, the DoReMi main model has
comparable downstream accuracy to the baseline (uniform domain weights). After the second
round, the DoReMi main model achieves comparable downstream accuracy to downstream-tuned
domain weights. Overall, domain reweighting has a smaller effect on GLaM, possibly because
there are only 8 domains compared to 22 in The Pile. | DoReMi- Optimizing Data Mixtures Speeds Up Language Model Pretraining |
susceptibility to
misinformation. Need for closure refers to “the expedient desire for any firm
belief on a given topic, as opposed to confusion and uncertainty” (Jost et al.
2003, p. 348, italics in original). This motivation fosters two main behavioral
inclinations: the propensity to seize on readily available information and the
tendency to cling to previous information (Jost et al. 2003; Meirick and
Bessarabova 2016; De keersmaecker et al. 2020). Consequently, individuals
with a high need for closure may be more trusting of initial misinformation,
which provides closure through explaining the causes of events, and more
resistant to corrections, which may sow feelings of confusion and uncertainty
(Rapp and Salovich 2018). Need for closure, however, is primarily used as a
control variable in studies of misinformation and is rarely the main construct of
interest. Indeed, the few studies connecting a need for closure to misinformation | Social_Media_and_Democracy |
One surprising result is that we see the lowest performance on Hindi to English. This may arise from differences between using general-purpose automated evaluation methods and using more precisely targeted examples and scoring methods that target specific potential harms. The multi-sentence passages in the misgendering evaluation sets may also be more challenging for some languages, depending on how gender is encoded within that language (e.g., for Hindi source passages from the SynthBio set written in third person). Future work should investigate whether there are differences or similarities in language families, and whether there are gaps in existing automated metrics for quality that may be improved.
E.5.2 Translating from English | PaLM 2 Technical Report |
We supplement the sandbox development framework with a
server that makes the sandbox information available to generative
agents and enables generative agents to move and influence the
sandbox environment. The server maintains a JSON data structure
that contains information about each agent in the sandbox world,
including their current location, a description of their current action,
and the sandbox object they are interacting with. At each sandbox
time step, the sandbox server parses the JSON for any changes
coming from the generative agents, moves the agents to their new
positions, and updates the status of any sandbox objects that the
agents are interacting with (e.g., changing the status of the coffee
machine from “idle” to “brewing coffee” if an agent’s action is
“making espresso for a customer @ Hobbs Cafe: counter: coffee
machine”). The sandbox server is also responsible for sending all
agents and objects that are within a preset visual range for each | Generative Agents- Interactive Simulacra of Human Behavior |
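A minimal sketch of the JSON structure and per-step update loop described above; the field names and helper logic are our own illustrative assumptions, not the authors' actual schema:

```python
import json

# Illustrative agent state, mirroring the fields described in the text:
# current location, current action description, and the object being used.
world_state = {
    "isabella": {
        "location": "Hobbs Cafe: counter",
        "action": "making espresso for a customer",
        "interacting_with": "coffee machine",
    }
}

def sandbox_step(world_state, agent_updates):
    """One sandbox time step: apply changes coming from the agents and
    update the status of any objects they are interacting with."""
    for name, update in agent_updates.items():
        state = world_state[name]
        state.update(update)  # parse the agent's changes (e.g., a new location)
        obj = state.get("interacting_with")
        if obj == "coffee machine" and "espresso" in state["action"]:
            state["object_status"] = {obj: "brewing coffee"}  # idle -> brewing
    return world_state

updates = json.loads('{"isabella": {"location": "Hobbs Cafe: counter"}}')
print(sandbox_step(world_state, updates))
```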
Despite theory and expectation of large media effects,15, 16 media effect studies have found small to moderate effect
sizes.16, 17 Attenuated effect sizes have been attributed to (a) media content not being incorporated, and (b) media exposure
being only loosely measured. For example, a typical media impact study might assess media effects by measuring the correlation
between answers to “how many hours of online news do you consume per week” and “how concerned are you about domestic
terrorism, on a five-point scale”. Here, the news sources are missing, and the media content is left out of the analysis completely.
More recent work has attempted to address the content modeling issue by counting keywords, computing the average sentiment,
or extracting topics from media coverage18, 19; however, these are only coarse summaries of media messaging. Our approach
for public opinion prediction is designed with these problems in mind. Neural language models can better capture the semantics | Language models trained on media diets can predict public opinion |
Summary
Koala
Koala model page
Release date: April 2023
Koala comprises LLaMA 7B and 13B models fine-tuned with publicly available dialog data by an academic team at UC Berkeley.
Training
The training data includes filtered data from multiple datasets.
ShareGPT – 30k
Human ChatGPT Comparison Corpus – 87k
Open Instruction Generalist – 30k
Stanford Alpaca (Training dataset for Alpaca) – 52k
Anthropic HH – 50k
OpenAI WebGPT – 20k
OpenAI summarization – 93k
They trained two models
1. Koala-All: Used all datasets
2. Koala-Distill: Used the first two datasets (i.e., data distilled from ChatGPT)
Performance
They evaluated the performance of Koala-All and Koala-Distill by comparing them with
Alpaca and ChatGPT. 100 evaluators from Amazon Mechanical Turk judged the responses of
these models from the same prompts.
The results are | A brief history of LLaMA models - AGI Sphere |
Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Anselm Levskaya,
Jonathan Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. Efficiently scaling transformer inference, 2022.
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, | Llama2 |
of the Tanks and Temples dataset [15], including large-scale
indoor/outdoor scenes. Each scene contains 263 to 1107
images captured using a hand-held monocular RGB camera.
The ground truth is obtained using a LiDAR sensor.
Implementation details. Our hash encoding resolution spans 2^5 to 2^11 with 16 levels. Each hash entry has a channel size of 8. The maximum number of hash entries per resolution is 2^22. We activate 4 and 8 hash resolutions at the beginning of optimization for the DTU dataset and Tanks and Temples respectively, due to differences in scene scales. We enable a new hash resolution every 5000 iterations when the step size ϵ equals its grid cell size. For all experiments, we do not utilize auxiliary data such as segmentation or depth during the optimization process.
do not utilize auxiliary data such as segmentation or depth
during the optimization process.
Evaluation criteria. We report Chamfer distance and F1
score for surface evaluation [11, 15]. We use peak signal-to-
noise ratio (PSNR) to report image synthesis qualities.
4.1. DTU Benchmark | Neuralangelo- High-Fidelity Neural Surface Reconstruction |
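A small sketch of the coarse-to-fine schedule just described; the level count, bounds, and per-5000-iteration activation follow the text, while the code structure is an illustrative assumption (the step-size condition on ϵ is omitted for brevity):

```python
NUM_LEVELS = 16                        # hash resolutions spanning 2^5 .. 2^11
START_LEVELS = {"dtu": 4, "tnt": 8}    # active levels at the start of optimization
ENABLE_EVERY = 5000                    # enable one more resolution every 5000 iterations

def active_levels(iteration: int, dataset: str = "dtu") -> int:
    # Coarse-to-fine: progressively unlock finer hash resolutions.
    return min(NUM_LEVELS, START_LEVELS[dataset] + iteration // ENABLE_EVERY)

assert active_levels(0) == 4 and active_levels(5000) == 5
```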
Sadly, we are not out of the woods yet, though. Hybrid models that combine powerful
data-driven learning techniques with the representational and computational resources
of symbol-manipulation may be necessary for robust intelligence, but they are surely
not sufficient. In what follows I will describe three further research challenges. | The Next Decade in AI- |
enrich the forecast explanations [57].
When designing a system providing explanations for AI models, it must be considered that the explanations must serve multiple stakeholders (and thus target different user profiles [58]), serve different purposes, and that their effectiveness must be assessed quantitatively (through algorithmic validation) and qualitatively (estimating user satisfaction, trust, and other factors) [59]. Good explanations should convey meaningful information, resemble a logic explanation [60], focus on actionability [61], and if possible, provide some counterfactuals. Since different explanations of the reasons behind a model's forecast can be deduced depending on the user's background knowledge, the system must provide enough information to the user so that the user can unequivocally understand the model's rationale and enable the user's responsible decision-making [62]. Given the recipient-dependent nature of the explanations, it is important that those are designed following | Knowledge-graph-based-rich-and-confidentiality-preserving-Ex_2022_Informatio |
3.4. Parameter/Performance trade-off
The adapter size controls the parameter efficiency, smaller
adapters introduce fewer parameters, at a possible cost to
performance. To explore this trade-off, we consider different
adapter sizes, and compare to two baselines: (i) Fine-tuning
of only the top k layers of BERTBASE. (ii) Tuning only the
layer normalization parameters. The learning rate is tuned
using the range presented in Section 3.2.
5 https://www.tensorflow.org/hub
Model | Total num params | Trained params / task | CoLA | SST | MRPC | STS-B | QQP | MNLIm | MNLImm | QNLI | RTE | Total
---|---|---|---|---|---|---|---|---|---|---|---|---
BERTLARGE | 9.0× | 100% | 60.5 | 94.9 | 89.3 | 87.6 | 72.1 | 86.7 | 85.9 | 91.1 | 70.1 | 80.4
Adapters (8-256) | 1.3× | 3.6% | 59.5 | 94.0 | 89.5 | 86.9 | 71.8 | 84.9 | 85.1 | 90.7 | 71.5 | 80.0
Adapters (64) | 1.2× | 2.1% | 56.9 | 94.2 | 89.6 | 87.3 | 71.8 | 85.3 | 84.6 | 91.4 | 68.8 | 79.6
| Parameter-Efficient Transfer Learning for NLP |
(Page footer: related posts “Generative AI: The Next Consumer Platform” by Connie Chan and Justine Moore; “The Generative AI Revolution in Games” by James Gwertzman and Jack Soslow; “Biology is eating the world”; sign up for our bio newsletter to get the a16z take on the future of biology and engineering.) | The a16z Investment Thesis on AI in Bio + Health _ Andreessen Horowitz |
The increasing demand for ML solutions and the growing
availability of technologies have led to a significant
increase in experimentation and production, two distinct
parts of the ML model lifecycle. We look at the logging and
registering of models in MLflow, an open source platform
developed by Databricks, to understand how ML is
trending and being adopted within organizations.
MLflow Model Registry launched in May 2021. Overall, the
number of logged models has grown 54% since February
2022, while the number of registered models has grown
411% over the same period. This growth in volume suggests
organizations are understanding the value of investing in
and allocating more people power to ML. | databrick 2023 report |
4.1 IMPLEMENTATION DETAILS
We implement everything in PyTorch (Paszke et al., 2017) and, to limit our gains from the "software lottery" (Hooker, 2021), we do not use specialized implementations, which would further bias results towards well-established components. We keep everything on the implementation level of the PyTorch framework, allowing only automated operator fusion (Sarofeen et al., 2022) that can be applied to all components. Only after choosing a final architecture variant do we re-enable the efficient attention kernel described in Dao et al. (2022). We run all experiments and ablation studies with the same setup of automated mixed precision (Micikevicius et al., 2018) for standard 16- and 32-bit floating point precision (over full 32-bit float, scaled 16-bit (Rasley et al., 2020), and pure bfloat16 (Wang & Kanwar, 2019)). We find no benefit from offloading (Ren et al., 2021; Rasley et al., 2020) in our setting. | CRAMMING-TRAININGALANGUAGEMODELONA SINGLEGPUINONEDAY |
2 https://ai.google/principles/
4.2 Role-specific metrics: Helpfulness and Role consistency
The foundation metrics (quality, safety, and groundedness) measure attributes that we find important for dialog agents
in general. However, they are not dependent on any application-specific role that an agent may be designed for (e.g.,
teaching information about animals). We measure Helpfulness and Role consistency in dialog applications, where
agents have specific roles.
Helpfulness: The model’s responses are marked helpful if they contain correct information based on the user’s
independent research with an information retrieval system, and the user considers them helpful. Helpful responses are a
subset of informative ones, which are judged by the user to be both correct and useful. | LaMDA- Language Models for Dialog Applications |
task parsing and planning. By injecting several demonstrations into the prompts, HuggingGPT allows
the large language model to better understand the intention and criteria for task planning. Each
demonstration is a group of input and output on task planning - the user’s request and the expected
task sequence to be parsed out. Furthermore, these demonstrations, consisting of dependencies
between tasks parsed from the user’s request, effectively aid HuggingGPT in understanding the
logical relationships between tasks and determining the execution order and resource dependency.
Furthermore, the context management of dialogues is essential for chatbots, as it supplies chat logs to
facilitate the comprehension of user requests. To incorporate chat context in the task planning stage,
we append the following paragraph in the instruction: The chat logs are recorded as {{ Chat Logs }}.
From the chat logs, you can find the history resources for your task planning., where {{ Chat Logs }} | HuggingGPT- Solving AI Tasks with ChatGPT and its Friends in Hugging Face |
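A hedged sketch of how such a task-planning prompt might be assembled, with demonstrations and the chat-log paragraph appended as described; the template strings here paraphrase the paper's prompt rather than reproduce it:

```python
def build_planning_prompt(instruction: str, demonstrations: list[dict],
                          chat_logs: str, user_request: str) -> str:
    parts = [instruction]
    # Each demonstration pairs a user request with the expected parsed task sequence.
    for demo in demonstrations:
        parts.append(f"Request: {demo['request']}\nTasks: {demo['tasks']}")
    # Chat context, appended as described in the text.
    parts.append(f"The chat logs are recorded as {chat_logs}. From the chat logs, "
                 "you can find the history resources for your task planning.")
    parts.append(f"Request: {user_request}\nTasks:")
    return "\n\n".join(parts)
```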
training, a KL reward coefficient of λKL = 0.001 (4.1), PPO clipping ε = 0.2, discount factor γ = 1, and no entropy bonus. Furthermore, in PPO, we re-iterate over the same sample K times (see Algorithm 1 in
[Schulman et al., 2017]), with higher K typically leading to more stable results. We used K = 1 for the
RLHF scan, K = 2 for the robustness studies (Section 4.2), and K = 4 for the ‘online’ RLHF (Section 4.5).
We also impose a limit on the maximum number of tokens per model response, using 32 for the robustness
studies, and 128 elsewhere. Finally, for ‘online’ RLHF, we used a learning schedule that reduces the learning
rate by 2× every 100,000 samples. For the robustness studies, we used a linear learning rate warmup for the
first 25,000 samples. | Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback |
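To summarize the hyperparameters listed above in one place, a small illustrative config (the dictionary layout is ours; the values are from the text):

```python
ppo_config = {
    "kl_reward_coef": 0.001,   # lambda_KL (Section 4.1)
    "clip_range": 0.2,         # PPO clipping epsilon
    "discount_gamma": 1.0,
    "entropy_bonus": 0.0,
    # Iterations over each sample (K in Algorithm 1 of Schulman et al., 2017):
    "ppo_epochs": {"rlhf_scan": 1, "robustness": 2, "online_rlhf": 4},
    "max_response_tokens": {"robustness": 32, "default": 128},
    # Online RLHF: halve the learning rate every 100,000 samples;
    # robustness studies: linear LR warmup over the first 25,000 samples.
}
```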
The paper is organized as follows. In Sec. 2, we provide the basic information of LLMs and AI
model evaluation. Then, Sec. 3 reviews existing work from the aspects of “what to evaluate”. After
that, Sec. 4 is the “where to evaluate” part, which summarizes existing datasets and benchmarks.
Sec. 5 discusses how to perform the evaluation. In Sec. 6, we summarize the key findings of this
paper. We discuss grand future challenges in Sec. 7 and Sec. 8 concludes the paper. | ASurveyonEvaluationofLargeLanguageModels |
[12] Zhiyang Dou, Qingxuan Wu, Cheng Lin, Zeyu Cao, Qiangqiang Wu, Weilin Wan, Taku Komura, and Wenping Wang. Tore: Token reduction for efficient human mesh recovery with transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15143–15155, 2023. 2
[13] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items. In ICRA, 2022. 2, 7, 9
[14] Ziya Erkoç, Fangchang Ma, Qi Shan, Matthias Nießner, and Angela Dai. Hyperdiffusion: Generating implicit neural fields with weight-space diffusion. arXiv preprint arXiv:2303.17015, 2023. 2, 3
[15] Hugging Face. One-2-3-45. https://huggingface.co/spaces/One-2-3-45/One-2-3-45, 2023. 3 | Wonder3D |
3. Method
Our goal is to learn a single joint embedding space for all modalities by using images to bind them together. We align each modality's embedding to image embeddings, such as text to image using web data and IMU to video using video data captured from egocentric cameras with IMU. We show that the resulting embedding space has a powerful emergent zero-shot behavior that automatically associates pairs of modalities without seeing any training data for that specific pair. We illustrate our approach in Figure 2.
Figure 2. IMAGEBIND overview. Different modalities occur naturally aligned in different data sources, for instance images+text and video+audio in web data, depth or thermal information with images, IMU data in videos captured with egocentric cameras, etc. IMAGEBIND links all these modalities in a common embedding space, enabling new emergent alignments and capabilities. | IMAGEBIND- One Embedding Space To Bind Them A |
4 Application and Experiment
In this section, we aim to explore the applications of tool learning and investigate the efficacy and limitations
of state-of-the-art foundation models in utilizing tools. We select 17 representative tools for evaluation and
place the main results in this section. For more case studies of ChatGPT, please refer to Appendix A.
4.1 Evaluated Tools | Tool Learning with Foundation Models |
Finally, Table 2 shows more detailed downstream task comparisons for large publicly-available models,
grouped into comparable sizes. We bold the results that are the best for each task and model size group.
Each model family has at least one model that is best for some tasks. In this table, we also include results for
Pythia models trained on a deduplicated version of the Pile. We separated these results, since they may not
be directly comparable to others above, which were trained using the same or similar dataset preparation. As
expected from the deduplication process, these Pythia models achieve worse loss on the pre-training Pile test set than other open models, which might have seen duplicated data during training. However, the Pythia Pile-dedup models typically improve accuracy on downstream tasks (1.8% on average), indicating the potential benefits of deduplication.
3.3 Maximal Update Parameterization (µP) and µTransfer | Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
Laurençon, H., Saulnier, L., Wang, T., Akiki, C., del Moral, A. V., Scao, T. L., Werra, L. V., Mou, C., Ponferrada, E. G., Nguyen, H., Frohberg, J., Šaško, M., Lhoest, Q., McMillan-Major, A., Dupont, G., Biderman, S., Rogers, A., allal, L. B., Toni, F. D., Pistilli, G., Nguyen, O., Nikpoor, S., Masoud, M., Colombo, P., de la Rosa, J., Villegas, P., Thrush, T., Longpre, S., Nagel, S., Weber, L., Muñoz, M. R., Zhu, J., Strien, D. V., Alyafeai, Z., Almubarak, K., Chien, V. M., Gonzalez-Dios, I., Soroa, A., Lo, K., Dey, M., Suarez, P. O., Gokaslan, A., Bose, S., Adelani, D. I., Phan, L., Tran, H., Yu, I., Pai, S., Chim, J., Lepercq, V., Ilic, S., Mitchell, M., Luccioni, S., and Jernite, Y. The BigScience ROOTS corpus: A 1.6TB composite multilingual dataset. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. URL https://openreview.net/forum?id=UoEw6KigkUn.
References
[1] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[2] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[3] OpenAI. GPT-4 technical report, 2023.
[4] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
to show only the mountains with an elevation above 8,500 meters. Firstly I need to convert the elevation column to numeric.
Action: python_repl_ast
Action Input: convert_to_numeric(df, 'Elevation(m)')
Observation:
Thought: Now I can filter the table.
Action: python_repl_ast
Action Input: filter_df(df, 'SELECT * FROM df WHERE "Elevation(m)" > 8500')
Observation:
Thought: I have completed the task.
Final Answer: The manipulated df is shown above.
Task: You are given a table on Oklahoma State Regents for Higher Education. Sort the table by the "Term Expires" column in ascending order.
Table: this is the result of print(df.to_markdown()) (Table contents are omitted due to the prompt length limit.)
Thought: I need to sort the table by the "Term Expires" column in ascending order.
Action: python_repl_ast
Action Input: sort_column(df, 'Term Expires', True)
Observation:
Thought: I have completed the task.
Final Answer: The sorted table is shown above.
Query: You are given a table on List of correctional facilities in New Zealand. Filter the table to show only facilities located in Auckland. (The table will be provided to the LLMs as a Pandas.DataFrame object df.)
Trace: Q
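The trace's helper functions (convert_to_numeric, filter_df, sort_column) are tool wrappers whose definitions are not shown in this excerpt; a plausible Python implementation might look as follows, with pandasql chosen here purely for illustration.

```python
import pandas as pd
from pandasql import sqldf  # hypothetical choice for running SQL over a DataFrame

def convert_to_numeric(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Coerce a column to numeric, turning unparseable cells into NaN."""
    df[column] = pd.to_numeric(df[column], errors="coerce")
    return df

def filter_df(df: pd.DataFrame, query: str) -> pd.DataFrame:
    """Filter rows with a SQL query that references the DataFrame as `df`."""
    return sqldf(query, {"df": df})

def sort_column(df: pd.DataFrame, column: str, ascending: bool = True) -> pd.DataFrame:
    """Return the table sorted by the given column."""
    return df.sort_values(by=column, ascending=ascending)
```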
$$\mathbb{E}_{t\sim[0,1],\,\sigma_t,\,x_{\sigma_t}}\!\left[\left\|f_\theta(x_{\sigma_t},\sigma_t)-v_{\sigma_t}\right\|_2^2\right] \qquad (1)$$

where $v_{\sigma_t} = \frac{\partial x_{\sigma_t}}{\partial \sigma_t} = \alpha_{\sigma_t}\epsilon - \beta_{\sigma_t}x_0$, for which we define $\phi_t := \frac{\pi}{2}\sigma_t$ and obtain the trigonometric values $\alpha_{\sigma_t} := \cos(\phi_t)$ and $\beta_{\sigma_t} := \sin(\phi_t)$.

(Figure: training diagram with Audio, STFT Mag, Encoder, Latent, Noise, UNet, and an $\|\cdot\|$ loss.)

3.1.2 DDIM Sampler for Denoising
The denoising step uses ODE samplers to turn noise into a new data point by estimating the rate of change. In this work, we adopt the DDIM sampler (Song et al., 2021), which we find to work well and to offer a reasonable tradeoff between the number of steps and audio quality. The DDIM sampler denoises the signal by repeated application of the following:

$$\hat{x}_{\sigma_{t-1}} = \alpha_{\sigma_{t-1}}\hat{x}_0 + \beta_{\sigma_{t-1}}\hat{\epsilon}_{\sigma_t}, \quad\text{where}$$
$$\hat{v}_{\sigma_t} = f_\theta(x_{\sigma_t},\sigma_t), \qquad \hat{x}_0 = \alpha_{\sigma_t}x_{\sigma_t} - \beta_{\sigma_t}\hat{v}_{\sigma_t}, \qquad \hat{\epsilon}_{\sigma_t} = \beta_{\sigma_t}x_{\sigma_t} + \alpha_{\sigma_t}\hat{v}_{\sigma_t}.$$
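Viewed procedurally, each step evaluates the network once and recombines its v-prediction with the trigonometric weights. A minimal NumPy sketch of this loop, assuming a model function f(x, sigma) that returns the v-prediction, is:

```python
import numpy as np

def ddim_step(f, x_t, sigma_t, sigma_prev):
    """One DDIM denoising step under the trigonometric (v-prediction) schedule."""
    # Trigonometric schedule: phi = (pi/2) * sigma, alpha = cos(phi), beta = sin(phi).
    a_t, b_t = np.cos(np.pi / 2 * sigma_t), np.sin(np.pi / 2 * sigma_t)
    a_p, b_p = np.cos(np.pi / 2 * sigma_prev), np.sin(np.pi / 2 * sigma_prev)
    v_hat = f(x_t, sigma_t)                # network's v-prediction
    x0_hat = a_t * x_t - b_t * v_hat       # estimated clean signal
    eps_hat = b_t * x_t + a_t * v_hat      # estimated noise
    return a_p * x0_hat + b_p * eps_hat    # re-noise to the smaller sigma

def ddim_sample(f, shape, num_steps=50, rng=None):
    """Run the sampler from pure noise (sigma = 1) down to sigma = 0."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(shape)
    sigmas = np.linspace(1.0, 0.0, num_steps + 1)
    for s_t, s_prev in zip(sigmas[:-1], sigmas[1:]):
        x = ddim_step(f, x, s_t, s_prev)
    return x
```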
Illustration generated with Midjourney
Wave 4: Killer apps emerge (Now) With the platform layer solidifying, models continuing to get better/faster/cheaper, and model access trending to free and open source, the application layer is ripe for an explosion of creativity.
Just as mobile unleashed new types of applications through new capabilities like GPS, cameras and on-the-go connectivity, we expect these large models to motivate a new wave of generative AI applications. And just as the inflection point of mobile created a market opening for a handful of killer
apps a decade ago, we expect killer apps to emerge for Generative AI. The race is on.
Market Landscape
Below is a schematic that describes the platform layer that will power each category and the potential types of applications that will be built on top. | Generative AI A Creative New World Sequoia Capital |
(Figure: taxonomy of LLM evaluation, branching into other applications and into general, specific, and multi-modal benchmarks.)
- Engineering: Bubeck et al. [13] / Liu et al. [116] / Pallagani et al. [140] / Sridhara et al. [171] / Valmeekam et al. [184] / Valmeekam et al. [183] / Zhuang et al. [250]
- Medical applications
  - Medical queries: Chervenak et al. [19] / Duong and Solomon [36] / Hamidi and Roberts [58] / Holmes et al. [67] / Jahan et al. [75] / Johnson et al. [80] / Samaan et al. [160] / Thirunavukarasu et al. [176]
  - Medical examination: Gilson et al. [53] / Kung et al. [90]
  - Medical assistants: Cascella et al. [15] / Khan et al. [86] / Lahat et al. [92] / Lyu et al. [122] / Oh et al. [134] / Wang et al. [205]
- Agent applications: Huang et al. [71] / Karpas et al. [83] / Parisi et al. [143] / Qin et al. [151] / Qin et al. [152] / Schick et al. [163] / Shen et al. [165]
- Education: Dai et al. [28] / de Winter [30] / Hellas et al. [61] / Wang and Demszky [198] / Wei et al. [209]
specific groups or individuals (Castle 2012). By providing efficient ways to
reach new audiences and disseminate hateful language, the Internet enables
hate groups to be well represented in the digital realm, fostering a sense of
community among their members, and attracting the attention of journalists
and everyday citizens alike (Bowman-Grieve 2009; McNamee et al. 2010).
In addition to the official websites of organized hate groups, the number
of sites dedicated to producing hateful content operated by informal groups
and individuals has also increased over time (Potok 2015). These include
explicitly racist, misogynistic, or otherwise discriminatory pages, channels,
or communities on mainstream social networking platforms like Facebook,
Twitter, and YouTube, as well as forums on Reddit, 4chan, and 8chan, listservs, internet chat communities, discussion forums, and blogs designed to disseminate hateful rhetoric (Douglas 2007; Marwick 2017). These range
3 Method
This section first uses a running example to investigate ways for formulating planning prompts in
PDDL and then introduces the LLM+P method.
3We refer the readers to https://www.cs.toronto.edu/~sheila/2542/s14/A1/introtopddl2.pdf as a good introduction to PDDL.
(Figure: comparison of LLM-As-Planner, LLM-As-Planner with in-context learning, and LLM+P with in-context learning. LLM+P provides the problem (P), the domain PDDL, and an example problem with its example PDDL as context to the LLM, which writes the problem PDDL; a classical planner then produces a PDDL plan, which the LLM translates back into a natural-language plan.)
3.1 Large Language Model as a PDDL Writer
tool-interactive critiquing. CoRR, abs/2305.11738, 2023.
[163] Lewis, M., Y. Liu, N. Goyal, et al. BART: denoising sequence-to-sequence pre-training
for natural language generation, translation, and comprehension. In D. Jurafsky, J. Chai,
N. Schluter, J. R. Tetreault, eds., Proceedings of the 58th Annual Meeting of the Association for
Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association
for Computational Linguistics, 2020.
[164] Park, H. H., Y. Vyas, K. Shah. Efficient classification of long documents using transformers.
In S. Muresan, P. Nakov, A. Villavicencio, eds., Proceedings of the 60th Annual Meeting of
the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2022, Dublin,
Ireland, May 22-27, 2022, pages 702–709. Association for Computational Linguistics, 2022. | TheRiseandPotentialofLargeLanguageModel BasedAgents |
$\frac{\exp(0)}{\exp(0)+10\cdot\exp(-0.5)} \approx 0.142$ to $\frac{\exp(0)}{\cdots}$
Figure 3: Sparse models are prone to overfit. We plot train and validation curves for our ST-MoE-
L and a dense-L models fine-tuned on the CB task (250 train sequences) and ReCoRD (138k train
sequences). In both cases, the sparse model learns more quickly on the train partition (blue exceeds
green line). However, for the smaller CB task, the dense model outperforms the sparse model on the
held-out validation set (red vs. orange). In contrast, on the larger ReCoRD task, the sparse model
outperforms the dense model by several percentage points. | ST-MOE- DESIGNING STABLE AND TRANSFERABLE SPARSE EXPERT MODELS |
polysemantic neurons which correspond to multiple semantic concepts. While our explanation technique can and often does generate explanations along the lines of "X and sometimes Y", it is not suited to capturing complex instances of polysemanticity. Analyzing top-activating dataset examples has proved useful in practice in previous work but also potentially results in the illusion of interpretability. By focusing on top activations, our intention was to focus the model on the most important aspects of t
Table 22: Few-shot exemplars for full chain of thought prompt for the last letter concatenation task.
PROMPT FOR LAST LETTER CONCATENATION
Q: Take the last letters of the words in "Elon Musk" and concatenate them.
A: The last letter of "Elon" is "n". The last letter of "Musk" is "k". Concatenating them is "nk". The answer is nk.
Q: Take the last letters of the words in "Larry Page" and concatenate them.
A: The last letter of "Larry" is "y". The last letter of "Page" is "e". Concatenating them is "ye". The answer is ye.
Q: Take the last letters of the words in "Sergey Brin" and concatenate them.
A: The last letter of "Sergey" is "y". The last letter of "Brin" is "n". Concatenating them is "yn". The answer is
yn.
Q: Take the last letters of the words in "Bill Gates" and concatenate them.
A: The last letter of "Bill" is "l". The last letter of "Gates" is "s". Concatenating them is "ls". The answer is ls.
[33] Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis,
Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Mor-
gan McGuire, and Sanja Fidler. Neural geometric level of
detail: Real-time rendering with implicit 3d shapes. In Pro-
ceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 11358–11367, 2021. 3, 6
[34] Engin Tola, Christoph Strecha, and Pascal Fua. Efficient
large-scale multi-view stereo for ultra high-resolution image
sets. Machine Vision and Applications, 23(5):903–920, 2012.
1, 2 | Neuralangelo- High-Fidelity Neural Surface Reconstruction |
We did not expect these changes to significantly impact any experimental findings in the paper, and we reran all analyses
and evaluations on the new models to confirm this was indeed the case. All experiments in the paper report results from
this updated version of the suite. We chose to rerun the training runs in order to make the Pythia suite maximally useful to
practitioners, and report this change for full transparency.
We overwrote the previously public preliminary version of the suite (which now remains available at https://huggingface.co/models?other=pythia_v0 to enable replicability of experiments using V0 of the suite) on March 31, 2023. Going forward, we will use semantic versioning for additional fixes as needed. Current best practices and details on further fixes can be found at https://www.github.com/EleutherAI/pythia.
Additionally, on January 20, 2023, we chose to rename the Pythia model suite to better reflect including both embedding | Pythia- A Suite for Analyzing Large Language Models Across Training and Scaling |
A Details, Analysis, and Evaluations of Supervised Training
A.1 Context Distillation
For context distillation, we follow the prescription from [Askell et al., 2021]. Specifically, we first generate
data in the following way:
1. We prepend the ‘HHH prompt’ (i.e., a set of prompts designed to elicit helpfulness, harmlessness,
and honesty) to sequences of text, with 50% of the text coming from our pretraining dataset, and
50% coming from a StackExchange dataset. For the former, we simply append pretraining data
after signaling the beginning of another conversation with “Human:”. With StackExchange, we
formulate a fake Human/Assistant conversation by using the question as the human side of the
conversation, and a top-rated answer as the assistant role.
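A data-construction routine along these lines could be sketched as follows; the prompt constant, dataset inputs, and formatting details are placeholders rather than the actual code used:

```python
import random

HHH_PROMPT = "..."  # placeholder for the 'HHH prompt' eliciting helpful/harmless/honest behavior

def make_distillation_example(pretraining_docs, stackexchange_pairs):
    """Build one context-distillation sequence: HHH prompt + continuation text."""
    if random.random() < 0.5:
        # Pretraining data: signal the start of another conversation, then append raw text.
        doc = random.choice(pretraining_docs)
        body = f"\n\nHuman: {doc}"
    else:
        # StackExchange: cast a question and a top-rated answer as a fake
        # Human/Assistant conversation.
        question, top_answer = random.choice(stackexchange_pairs)
        body = f"\n\nHuman: {question}\n\nAssistant: {top_answer}"
    return HHH_PROMPT + body
```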
In this section we prove that in Example 1, action a3 and the bid profile b1 = (0, 0, 1+γ), b2 = (0, 1+ε, 0), and bℓ = (0, 0, 0) ∀ℓ > 2 constitute an equilibrium. To do so, we show that the agent maximizes his utility, and that no principal can benefit from deviating and switching her bid. Note that Wel_{a3}(v) = 1, while the action which maximizes social welfare is a1 with Wel_{a1}(v) = n − 2 > 1. Therefore, the price of anarchy is Wel_{a3}(v)/Wel_{a1}(v) = 1/(n − 2), which tends to zero as n → ∞.
For the agent, the payoff from taking action a3 or action a2 is 1, while the payoff from action a1 is 0. Recall that the tie-breaking rule favors the higher declared welfare and then the higher cost (see footnote 7). Since a2 and a3 have the same declared welfare, the agent takes action a3, which incurs more cost. Therefore, x∗(b) = a3. Principal 1 cannot raise her utility by changing the bid for o3. If
2.2 Costs
Running all of our experiments cost about $5000
in GPU costs. We gratefully acknowledge our
compute sponsor Paperspace for their generosity
in making GPT4All-J training possible. Between
GPT4All and GPT4All-J, we have spent about
$800 in OpenAI API credits so far to generate
the training samples that we openly release to the
community. Our released model, GPT4All-J, can
be trained in about eight hours on a Paperspace
DGX A100 8x 80GB for a total cost of $200. Us-
ing a government calculator, we estimate the final
model training to produce the equivalent of 0.18
metric tons of carbon dioxide, roughly equivalent
to that produced by burning 20 gallons (75 liters)
of gasoline. | 2023_GPT4All-J_Technical_Report_2 |
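As a quick sanity check on that figure (assuming the commonly cited EPA estimate of roughly 8.89 kg of CO2 per gallon of gasoline, which is our assumption, not a number from the report):

```python
KG_CO2_PER_GALLON = 8.887  # assumed EPA estimate for burning one gallon of gasoline
tons = 20 * KG_CO2_PER_GALLON / 1000
print(f"{tons:.3f} metric tons")  # ~0.178, consistent with the 0.18 t figure
```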
(Table residue: grand total over 28 datasets, 13.4M annotations, >1k subjects.) Dis-
parate skeleton annotation formats in these datasets has rarely been addressed in the literature so far. (2) We propose affine-combining autoencoders (ACAE), a novel linear dimensionality reduction technique applicable to keypoint-based representations such as poses. (3) We apply the ACAE to regularize model predictions to become more consistent, leading to qualitative and quantitative improvements, and we show that the latent points can be predicted directly as well. (4) We release high-quality 3D pose estimation models with excellent and consistent in-the-wild performance due to diverse supervision and our regularization tying together different skeleton formats.
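To make the affine-combining idea concrete, the following NumPy sketch shows a linear projection whose encoder and decoder weights are constrained to sum to one per row; the dimensions and the random softmax-normalized weights are illustrative assumptions, not the paper's training setup. The sum-to-one (affine) constraint is what makes the mapping translation-equivariant, which the final assertion checks.

```python
import numpy as np

def affine_combining_project(joints, W_enc, W_dec):
    """Encode J joints (J x 3) into L latent points and reconstruct them.

    W_enc: (L, J), rows sum to 1 -> latent points are affine combos of joints.
    W_dec: (J, L), rows sum to 1 -> joints are affine combos of latent points.
    """
    latent = W_enc @ joints   # (L, 3)
    recon = W_dec @ latent    # (J, 3)
    return latent, recon

# Toy usage with random affine weights (exp + row normalization guarantees sum-to-one).
rng = np.random.default_rng(0)
J, L = 24, 8
W_enc = np.exp(rng.normal(size=(L, J))); W_enc /= W_enc.sum(1, keepdims=True)
W_dec = np.exp(rng.normal(size=(J, L))); W_dec /= W_dec.sum(1, keepdims=True)
joints = rng.normal(size=(J, 3))
latent, recon = affine_combining_project(joints, W_enc, W_dec)

# Shifting all joints by t shifts all latent points by the same t.
t = np.array([1.0, 2.0, 3.0])
latent_shifted, _ = affine_combining_project(joints + t, W_enc, W_dec)
assert np.allclose(latent_shifted, latent + t)
```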
2. Related Work | Learning 3D Human Pose Estimation from Dozens of Datasets using a Geometry-Aware Autoencoder to Bridge Between Skeleton Formats |
our examples assume, unless otherwise stated. Prat and Rustichini (2003) study classic contracts: For each agent n, this is a mapping from the agent's actions to a payment for that action that principal m pays the agent, termed the principal's bid. We refer to the resulting game as classic-GPTA. Unfortunately, in the classic-GPTA based on Table 1 there is no pure subgame-perfect equilibrium (pure SPE). For intuition, if a pure SPE were to exist, suppose without loss of generality that its second stage
Dominik Stammbach. 2021. Evidence selection as a token-level prediction task. In Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER), pages 14–20, Dominican Republic. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.fever-1.2
Asher Stern, Roni Stern, Ido Dagan, and Ariel Felner. 2012. Efficient search for transformation-based inference. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 283–291, Jeju Island, Korea. Association for Computational Linguistics.
Idan Szpektor, Hristo Tanev, Ido Dagan, and Bonaventura Coppola. 2004. Scaling web-based acquisition of entailment relations. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 41–48, Barcelona, Spain. Association for Computational Linguistics.
Table 22: Hyperparameters used for all finetuned models.
Another important hyperparameter for instruction finetuning is the sampling rate for each task. Within the four mixtures (Muffin, T0-SF, NIV2, and CoT, defined in Figure 2), we use the number of examples as the weight of each task. We apply a maximum cap to each task because some tasks are much larger than others in the same mixture and can dominate the sampling. For example, some WMT translation datasets have millions of examples, compared to BoolQ's 9k examples. We apply a different maximum cap for each of the four mixtures, summarized in Table 23.
| Mixture | Maximum cap | Proportion (A) | Proportion (B) |
| --- | --- | --- | --- |
| Muffin | 30,000 | 46.0% | 52% |
| T0-SF | 20,000 | 27.9% | 15% |
| CoT | 100,000 | 1.8% | 3% |
| NIV2 | 5,000 | 24.2% | 30% |
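The capped, example-proportional sampling described above reduces to a few lines; this Python sketch (our illustration, not the released code) computes normalized sampling rates from task sizes and a mixture-level cap:

```python
def capped_sampling_rates(task_sizes: dict, max_cap: int) -> dict:
    """Example-proportional mixing with a per-task maximum cap.

    Each task's weight is min(num_examples, max_cap); rates are the
    normalized weights, so huge tasks cannot dominate the mixture.
    """
    weights = {task: min(n, max_cap) for task, n in task_sizes.items()}
    total = sum(weights.values())
    return {task: w / total for task, w in weights.items()}

# Toy usage: a WMT-sized task is capped, so BoolQ still gets meaningful weight.
rates = capped_sampling_rates({"wmt_translation": 4_500_000, "boolq": 9_000},
                              max_cap=30_000)
print(rates)  # {'wmt_translation': ~0.769, 'boolq': ~0.231}
```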
4Asking models to identify the word displayed as ASCII art, https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/ascii_word_recognition
5Asking models to choose the English sentence with adjectives in the "correct" order within two choices, https://github.com/google/BIG-
bench/tree/main/bigbench/benchmark_tasks/hyperbaton
6More such tasks include: modus-tollens, pattern-matching-suppression, prompt-injection, repetitive-algebra and sig-figs. You can check them on:
https://github.com/inverse-scaling/prize
U-shape phenomenon, the explanations mainly focus on the model's over-reliance on information from its prior rather than the input prompts, valid but misleading few-shot examples, and distracting easier tasks within a hard task [114].
Recent advances in machine learning have led to an acceleration of interest in research on artificial intelligence (AI).
This fostered the exploration of possible applications of AI in various domains and also prompted critical discussions
addressing the lack of interpretability, the limits of machine intelligence, potential risks and social challenges. In the
exploration of the settings of the “human versus AI” relationship, perhaps the most elusive domain of interest is the
creation and understanding of art. Many interesting initiatives are emerging at the intersection of AI and art; however, comprehension and appreciation of art are still considered to be an exclusively human capability. Rooted in the idea that
the existence and meaning of art is indeed inseparable from human-to-human interaction, the motivation behind this
paper is to explore how bringing AI in the loop can foster not only advances in the fields of digital art and art history, | UNDERSTANDINGANDCREATINGARTWITHAI-REVIEWAND OUTLOOK |
[97] J. Frankle, G. K. Dziugaite, D. Roy, and M. Carbin, “Pruning neural
networks at initialization: Why are we missing the mark?” in Proc. Int.
Conf. Learn. Representations, 2021.
[98] D. C. Mocanu, E. Mocanu, P. Stone, P. H. Nguyen, M. Gibescu,
and A. Liotta, “Scalable training of artificial neural networks with
adaptive sparse connectivity inspired by network science,” Nature
communications, vol. 9, no. 1, p. 2383, 2018.
[99] N. Lee, T. Ajanthan, and P. Torr, “SNIP: Single-shot network pruning
based on connection sensitivity,” in Proc. Int. Conf. Learn. Represen-
tations, 2019.
[100] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman,
“GLUE: A multi-task benchmark and analysis platform for natural
language understanding,” in Proc. of 2018 EMNLP Workshop Black-
boxNLP, 2018, pp. 353–355. | Parameter-EfficientFine-TuningMethods |
6.1. Robot Environments / Tasks
Our three robot environments (Fig. 1) include a Task and
Motion Planning (TAMP) domain where a robot has to
manipulate (grasp and stack) objects, a table-top pushing
environment, and a mobile manipulation domain. In each
domain, PaLM-E is trained on expert data from that do-
main. In many cases, this is a sparse amount of data per task.
The TAMP tasks involve large combinatorics over possible
plans, and many decision sequences are infeasible. PaLM-E
has to generate plans that consist of multiple steps, with
complicated decision boundaries. The multi-object tabletop
pushing environment is taken from the publicly available
Language-Table dataset (Lynch et al., 2022) and is chal-
lenging since it includes several objects, large cardinality
of language, and complex pushing dynamics. For both the
TAMP and Language-Table environment, PaLM-E has to
reason about the poses of the objects. It is not sufficient to | PaLM-E- An Embodied Multimodal Language Model |
Computing Machinery, New York, NY, USA, 230–242. https://doi.org/10.1145/3519391.3519401
[39] Shunichi Kasahara, Jun Nishida, and Pedro Lopes. 2019. Preemptive Action: Accelerating Human Reaction Using Electrical Muscle
Stimulation Without Compromising Agency. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow,
Scotland Uk) (Chi ’19). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300873
[40] Norene Kelly and Stephen B Gilbert. 2016. The WEAR Scale: Developing a Measure of the Social Acceptability of a Wearable Device.
Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (2016). | Society’sAttitudesTowardsHumanAugmentation |
Aside from key actors, then, other mechanisms of diffusion include a mix of
biases – cognitive, social, and algorithmic (Shao et al. 2017). Information
diffusion tends to be bounded by limited attention resources; information
disseminated during an “attention burst” – a period of demand for a given
topic – is more likely to gain traction (Ciampaglia, Flammini, and Menczer
2015). Beyond these basic cognitive constraints, social media users are often
embedded in homogeneous clusters, mixed findings on echo chambers
notwithstanding (Guess, Lyons et al. 2018). These network configurations
can encourage exposure to and dissemination of agreeable misinformation
(Del Vicario, Bessi, and Zollo 2016; Shin et al. 2017). Likewise, social media
users place a great amount of trust in their close friends. When it comes to
expressing trust in news shared on Facebook, for example, research suggests the
person who shared it matters more than the news organization that produced it | Social_Media_and_Democracy |
We optimize Eq. 11 using the Adam optimizer [28] with
hyperparameters β1 = 0.9 and β2 = 0.99. We set the
learning rate to 5 × 10−4 for θc (the canonical MLP), and
5 × 10−5 for all the others. We use 128 samples per ray.
The optimization takes 400K iterations (about 72 hours) on
4 GeForce RTX 2080 Ti GPUs. We apply delayed optimiza-
tion with Ts = 10K and Te = 50K to ZJU-MoCap data,
and with Ts = 100K and Te = 200K to the others. In addi-
tion, we postpone pose refinement until after 20K iterations
for in-the-wild videos.
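In PyTorch terms, this setup corresponds to per-module parameter groups plus an iteration-gated component. The sketch below reads "delayed optimization with Ts and Te" as a ramp from 0 at Ts to 1 at Te; that interpretation, and all module and loss names, are our assumptions rather than the paper's code.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the real networks (names are ours).
canonical_mlp = nn.Sequential(nn.Linear(3, 256), nn.ReLU(), nn.Linear(256, 4))
other_modules = nn.Sequential(nn.Linear(69, 256), nn.ReLU(), nn.Linear(256, 3))

def delay_ramp(it: int, Ts: int, Te: int) -> float:
    """0 before Ts, linear ramp on [Ts, Te], 1 afterwards (assumed schedule)."""
    return min(max((it - Ts) / float(Te - Ts), 0.0), 1.0)

def main_loss():
    # Placeholder for the main reconstruction objective.
    return canonical_mlp(torch.randn(8, 3)).pow(2).mean()

def delayed_loss():
    # Placeholder for the component enabled gradually between Ts and Te.
    return other_modules(torch.randn(8, 69)).pow(2).mean()

# Different learning rates for the canonical MLP vs. everything else.
optimizer = torch.optim.Adam(
    [
        {"params": canonical_mlp.parameters(), "lr": 5e-4},
        {"params": other_modules.parameters(), "lr": 5e-5},
    ],
    betas=(0.9, 0.99),
)

Ts, Te = 10_000, 50_000   # ZJU-MoCap setting; 100K/200K for the other data
NUM_ITERS = 400_000       # paper setting; reduce for a quick test

for it in range(NUM_ITERS):
    w = delay_ramp(it, Ts, Te)
    loss = main_loss() + w * delayed_loss()
    # Pose refinement for in-the-wild videos would additionally start after 20K iterations.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```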
5.3. Evaluation method
| Method | Setup | Subject | Priors |
| --- | --- | --- | --- |
| Neural Body | multi-camera | dynamic human | body pose, SMPL vertices (reposed) |
| HyperNeRF | single camera | quasi-static general scene | rigidity |
| HumanNeRF | single camera | dynamic human | body pose (approx.) |

Table 2. Differences between the compared methods.
arXiv:2202.03629v5 [cs.CL] 7 Nov 2022
Survey of Hallucination in Natural Language Generation | SurveyofHallucinationinNatural Language Generation |