2. Related Works
2.1. 2D Diffusion Models for 3D Generation
Recent compelling successes in 2D diffusion models [8, 22, 47] and large vision-language models (e.g., the CLIP model [45]) provide new possibilities for generating 3D assets using the strong priors of 2D diffusion models. Pioneering works DreamFusion [43] and SJC [59] propose to distill a 2D text-to-image generation model to generate 3D shapes from text, and many follow-up works adopt this per-shape optimization scheme. For the task of text-to-3D [2, 5, 6, 23, 29, 48, 49, 57, 63, 65, 69, 77] or image-to-3D synthesis [38, 44, 46, 50, 54, 67], these methods typically optimize a 3D representation (e.g., a NeRF, mesh, or SDF) and then leverage neural rendering to generate 2D images from various viewpoints. These images are then fed into the 2D diffusion models or the CLIP model to compute SDS [43] losses, which guide the 3D shape optimization. | Wonder3D |
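For reference, the SDS objective distilled from a pretrained 2D diffusion model is commonly written as the following gradient (a sketch following DreamFusion [43]; the notation is the standard diffusion notation, not copied from this excerpt):

```latex
% Score Distillation Sampling (SDS) gradient, following DreamFusion.
% g(\theta) renders an image x from 3D parameters \theta; \hat{\epsilon}_\phi is
% the pretrained diffusion model's noise prediction at timestep t for prompt y.
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\phi, x = g(\theta))
  = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,
      \bigl(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta} \right]
```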
| Is Power-Seeking AI an Existential Risk? |
some idea of what the Commission might think about a political action, but the outcome of these
opinions is not actual regulatory policy.)
10 All completed advisory opinions are accessible at www.fec.gov/data/legal/advisory-opinions/.
The FEC first considered the issue of disclaimers in internet communications in two advisory
opinions in 1995, both about soliciting campaign contributions via the Internet. See AO 1995-09
and 1995-35.
11 The Club for Growth in 2007 asked the FEC if it could truncate disclaimers on short (ten- or
fifteen-second) television ads. The FEC did not approve of that request because the choice of the
group to air such ads was a political decision, not one driven by technological limitations (AO
2007-33). | Social_Media_and_Democracy |
Code language models are among the most economically significant and widely-deployed LLMs today; code LMs
are deployed in diverse developer tooling (Github, 2021; Tabachnyk & Nikolov, 2022), as personal programming
assistants (OpenAI, 2022; Hsiao & Collins, 2023; Replit, 2022), and as competent tool-using agents (OpenAI, 2023a).
For low-latency, high-throughput deployment in developer workflows, we built a small, coding-specific PaLM 2 model
by continuing to train the PaLM 2-S model on an extended, code-heavy, heavily multilingual data mixture. We call the
resulting model PaLM 2-S*, which shows significant improvement on code tasks while preserving performance on
natural language tasks. We evaluate PaLM 2-S*’s coding ability on a set of few-shot coding tasks, including HumanEval
(Chen et al., 2021), MBPP (Austin et al., 2021), and ARCADE (Yin et al., 2022). We also test PaLM 2-S*’s multilingual | PaLM 2 Technical Report |
Response: To add a delay to an audio file named input.mp3, you can use the `sox` command-line utility. Here’s an example of how to do it:
1. Install `sox` if you haven’t already.
2. Open a terminal and navigate to the directory where your input.mp3 file is located.
3. Run the following command:
sox input.mp3 output.mp3 delay 10
This will add a 10-second delay to the input.mp3 file and save the result as output.mp3.
You can adjust the delay time by changing the value after the `delay` keyword. For example, to add a 20-second delay, you would use:
sox input.mp3 output.mp3 delay 20
You can also add multiple delays by chaining them together. For example, to add a 10-second delay followed by a 20-second delay, you would use:
sox input.mp3 output.mp3 delay 10 delay 20
Note that the `sox` command can also be used to add other effects to the audio file, such as reverb or echo. For more information, you can refer to the `sox` documentation.
Prompt:
reverb to the trimmed file | CodeLlama2 |
Params | Overall | "he" | "she" | by language   | by eval set
11B    | 97      | 99   | 95    | Indonesian 88 | synthbio 92
8B     | 88      | 93   | 83    | Spanish 70    | late binding 62
62B    | 93      | 97   | 89    | Bhojpuri 78   | encoded in nouns 86
540B   | 95      | 99   | 92    | Hindi 85      | synthbio 88
8B     | 90      | 99   | 81    | Spanish 61    | late binding 44
62B    | 95      | 100  | 91    | Indonesian 70 | late binding 87
540B   | 97      | 100  | 94    | Hindi 86      | synthbio 91
Table 10: Translation misgendering performance, disaggregated across models and slices of examples.
Accuracy scores indicate the translation system did not produce errors with potential misgendering harms. | Scaling Instruction-Finetuned Language Models |
Experimental Setup We finetune all models
for 5 epochs with a batch size of 1024. For our
encoder-decoder models, we use a learning rate of
5 × 10⁻⁴ following Chung et al. (2022). For our
decoder-only models, we follow the same configu-
ration as Alpaca (Taori et al., 2023) including the
learning rate of 2 × 10⁻⁵. We use HuggingFace’s
transformers for training. Moreover, we use the
same prompt wrapper as Alpaca (Taori et al., 2023),
hence we also wrap our instruction similarly during
inference. We perform all of our experiments on
8×V100 (32G) and 8×A100 (40G) GPUs. Our
models are publicly available.
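The Alpaca prompt wrapper mentioned above follows a fixed template; a minimal sketch (the template wording below is our reproduction of the public Alpaca release (Taori et al., 2023) and should be treated as an assumption, not this paper's exact string):

```python
def wrap_instruction(instruction: str, input_text: str = "") -> str:
    """Wrap an instruction in an Alpaca-style prompt template."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:"
    )

# The same wrapper is applied at training and inference time.
prompt = wrap_instruction("Summarize the text.", "LLMs are large language models.")
```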
4.2 Model Evaluation
We then evaluate the performance based on several
downstream NLP tasks as well as human evaluation
on user-oriented instruction.
Automatic Evaluation on Downstream NLP
Tasks We conduct a zero-shot evaluation on the
downstream NLP tasks for our LaMini-LM. We
use language model evaluation harness (Gao et al.,
[Figure: average performance comparison including Alpaca-7B and LLaMa-7B; y-axis "Average Performance".]
| LaMini-LM- A Diverse Herd of Distilled Models from Large-Scale Instructions |
1
INTRODUCTION
Density estimation is a fundamental unsupervised learn-
ing task, an essential subroutine in various methods for
data imputation (Efron, 1994; Rubin, 1996), clustering
(Bramer, 2007; Rokach and Maimon, 2005), anomaly de-
tection (Chandola et al., 2009; Pang et al., 2021), and clas-
sification (Lugosi and Nobel, 1996; Vincent and Bengio,
Proceedings of the 26th International Conference on Artificial Intel-
ligence and Statistics (AISTATS) 2023, Valencia, Spain. PMLR:
Volume 206. Copyright 2023 by the author(s). | Adversarial Random Forests for Density Estimation and Generative Modeling |
In summary, we find evidence for construct validity of simulated personality scores
in medium (62B) and large (540B) variants of PaLM family models (see Table 6). We
find that LLM-simulated psychometric data are most human-aligned for Flan-PaLM
540B, the largest model we tested. The rest of the section details the results from the
individual validity study.
Structural Validation Results
Following established frameworks from measurement science outlined in Section 3.2.1, we
evaluated the structural validity (i.e., reliability) of the tests—the extent to which they
dependably measure single underlying factors—by quantifying internal consistency and
unidimensionality for each administered subscale. Table 7 summarizes the results. | PersonalityTraitsinLargeLanguageModels |
[7] Eryk Salvaggio (2022, October 2) "How to Read an AI Image - The Datafication of a Kiss"
https://cyberneticforests.substack.com/p/how-to-read-an-ai-image
[8] Bender, Emily M., et al. "On the Dangers of Stochastic Parrots: Can Language Models Be
Too Big?" Proceedings of the 2021 ACM Conference on Fairness, Accountability, and
Transparency. 2021.
[9] Underwood, Ted. "Mapping the Latent Spaces of Culture." (2021).
https://tedunderwood.com/2021/10/21/latent-spaces-of-culture/
[10] Schuhmann, Christoph, et al. "Laion-400m: Open dataset of clip-filtered 400 million
image-text pairs." arXiv preprint arXiv:2111.02114 (2021). https://laion.ai/blog/laion-400-
open-dataset/
[11] https://github.com/CompVis/stable-diffusion
[12] https://openai.com/blog/dall-e-2-pre-training-mitigations/ | The Myth of Culturally Agnostic AI Models |
IMAGEBIND achieves high emergent zero-shot classification performance. On each benchmark, IMAGEBIND
achieves strong gains and even compares favorably to super-
vised specialist models trained for the specific modality and
task. These results demonstrate that IMAGEBIND aligns the
modalities and implicitly transfers the text supervision as-
sociated with images to other modalities like audio. In par-
ticular, IMAGEBIND shows strong alignment for non-visual
modalities like audio and IMU, suggesting that their naturally available pairing with images is a powerful source of
supervision. For completeness, we also report the standard
zero-shot image (ImageNet [62] - IN1K, Places-365 [85] -
P365) and video (Kinetics400 [34] - K400, MSR-VTT 1k-
A [76] - MSR-VTT) tasks. As the image & text encoders
are initialized (and frozen) using OpenCLIP, these results
match those of OpenCLIP.
[Table fragment: zero-shot result rows for Random, IMAGEBIND, and Text Paired baselines; Absolute SOTA reference values 91.0 [80], 60.7 [65], 89.9 [78].] | IMAGEBIND- One Embedding Space To Bind Them A |
Architectural Foundations. LLMs can generally be categorized into two main paradigms: encoder-only models [146,
148, 171, 202, 212, 228], exemplified by BERT [128], and decoder-only models [13, 24, 54, 72, 106, 196, 209–211, 230, 263–
265, 287, 325], such as the GPT series [24, 196, 209, 210]. BERT is trained using masked language modeling, enabling it to
excel in contextual understanding by predicting masked or missing tokens. On the other hand, GPT models are trained using
autoregressive modeling, which equips them with strong generative capabilities by predicting subsequent tokens in a sequence.
Despite these differences, both types of models commonly rely on the Transformer [269] architecture, which is particularly
noteworthy for its self-attention mechanism. In self-attention, each token is represented as a key, value, and query. The query
weighs the importance of other tokens, represented as keys, when understanding a particular token. These weights are applied to | TheEfficiencySpectrumofLargeLanguageModels-AnAlgorithmicSurvey |
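The key/query/value computation described in this passage can be sketched as single-head scaled dot-product attention (a minimal NumPy illustration; the matrix sizes are arbitrary choices of ours):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values per token
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # query weighs importance of each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                        # weights applied to the values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)           # one contextualized vector per token
```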
approximate foreground probability f(x) is low:
α_i = f(x_i)(1 − exp(−σ(x_i) Δt_i)),   (13)
We apply the stratified sampling approach proposed by
NeRF [41]. We do not use hierarchical sampling since the
bounding box of a subject can be estimated from their 3D
body pose. We then only sample points inside the box.
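Eq. 13 can be computed directly (a NumPy sketch; the names `fg_prob`, `sigma`, and `delta_t` stand for f(x_i), σ(x_i), and Δt_i and are our own choosing):

```python
import numpy as np

def alpha_values(fg_prob, sigma, delta_t):
    """alpha_i = f(x_i) * (1 - exp(-sigma(x_i) * delta_t_i))  (Eq. 13)."""
    return fg_prob * (1.0 - np.exp(-sigma * delta_t))

fg_prob = np.array([1.0, 0.5, 0.0])   # approximate foreground probability f(x_i)
sigma   = np.array([2.0, 2.0, 2.0])   # density at each sample
delta_t = np.array([0.1, 0.1, 0.1])   # spacing between adjacent samples
alphas  = alpha_values(fg_prob, sigma, delta_t)  # low f(x) suppresses alpha
```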
4.2. Delayed optimization of non-rigid motion field
When solving for all the network parameters in Eq. 11
at once, we find that the optimized skeleton-driven and
non-rigid motions are not decoupled – a portion of the sub-
ject’s skeletal motions is modeled by the non-rigid motion
field – due to over-fitting of non-rigid motions to the input
images. As a result, the quality degrades when rendering
unseen views. | HumanNeRF- Free-viewpoint Rendering of Moving People from Monocular Video |
[47] Zineng Tang, Jaemin Cho, Yixin Nie, and Mohit Bansal. TVLT: Textless vision-language transformer.
In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural
Information Processing Systems, 2022. 3
[48] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu,
and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint
arXiv:2205.14100, 2022. 7
[49] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren
Zhou, and Hongxia Yang. Unifying architectures, tasks, and modalities through a simple sequence-to-
sequence learning framework. arXiv preprint arXiv:2202.03052, 2022. 7
[50] Chenfei Wu, Lun Huang, Qianxi Zhang, Binyang Li, Lei Ji, Fan Yang, Guillermo Sapiro, and Nan Duan.
Godiva: Generating open-domain videos from natural descriptions. arXiv preprint arXiv:2104.14806,
2021. 7 | Any-to-Any Generation via Composable Diffusion |
regard to overall performance and expected speed, irrespective of Comprehension. Only for Δ𝑛
for expected correct responses do we find that Comprehension leveled participants to neutral
expectations. | AI enhance sour performance |
LLMs typically follow the architectural designs of PLMs and come in three pri-
mary flavors: encoder-only, encoder-decoder, and decoder-only architectures. Here’s
an overview of these LLM architectures and their distinctions:
• Encoder-only Language Models. These models process input text to create vec-
tor representations without an explicit decoding phase for generating new text.
Instead, they transform and embed text into a high-dimensional space. Encoder-
only models are primarily designed to capture and understand patterns and
semantics in the input data. They find extensive use in tasks such as text classifi-
cation, sentiment analysis, and clustering. A notable example is BERT [42], which
extracts context-rich embeddings for downstream tasks through pre-training on
a masked language modeling objective. | Beyond Efficiency |
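The masked language modeling objective used by BERT can be illustrated with a toy masking step (a minimal sketch; the 15% rate and the `[MASK]` symbol follow BERT's convention, while the function and its simplifications are our own):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Replace a random subset of tokens with [MASK]; return masked seq + targets.

    Simplified: real BERT also sometimes keeps or randomizes the chosen token.
    """
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            targets[i] = tok        # the model must predict the original token
        else:
            masked.append(tok)
    return masked, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(tokens)
```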
of Stackoverflow Questions
• Instruction-tuning with a sub-sample of Big-
science/P3
We chose to dedicate substantial attention to data
preparation and curation based on commentary in
the Stanford Alpaca project (Taori et al., 2023).
Upon collection of the initial dataset of prompt-
generation pairs, we loaded data into Atlas for data
curation and cleaning. With Atlas, we removed all
examples where GPT-3.5-Turbo failed to respond
to prompts and produced malformed output. This
reduced our total number of examples to 806,199
high-quality prompt-generation pairs. Next, we
decided to remove the entire Bigscience/P3 sub-
set from the final training dataset due to its very
Figure 1: TSNE visualization of the candidate training
data (Red: Stackoverflow, Orange: chip2, Blue: P3).
The large blue balls (e.g. indicated by the red arrow)
are highly homogeneous prompt-response pairs. | GPT4All- Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo |
2.4 Complementary Roles of Specialized Tools and Foundation Models
The integration of specialized tools and foundation models represents a promising approach for harnessing
the unique strengths of both. By incorporating foundation models’ understanding and reasoning capabilities
into specialized tools, we can create intelligent tools capable of performing more complex tasks than either
specialized tools or foundation models alone. Specifically, the amalgamation of both confers a multitude of
benefits as follows. | Tool Learning with Foundation Models |
Large Language Models (LLMs) [6, 15, 37, 53, 54, 61] have achieved great success in various
natural language processing tasks, e.g., topic classification [29, 42], sentiment classification [6, 42],
translation [6], by few-shot prompting (or in-context learning) [6, 9, 42]. Recently, Wang et al.
[66], Wei et al. [67] show that LLMs with more than 100B parameters (e.g., GPT-3 [6] with 175B,
PaLM with 540B [11]) can solve complex tasks by generating multiple reasoning steps towards the
answer when given a few reasoning examples as demonstration. While both GPT-3.5 [46] and GPT-4
[48] have shown promising reasoning ability for complex mathematical tasks like MATH [21], the
performance of open-source models (e.g., LLaMA-1 [61], LLaMA-2 [62]) is far from satisfactory.
Learning Mathematical Reasoning for complex math tasks like GSM8K [12] and MATH [21] is
one of the most challenging problems for open-source LLMs. Wei et al. [67] enhances the reasoning
3.5.2 Open-ended QA
Benchmarks There are two sub-categories in Open-ended QA: either the answers are of short-form
or long-form. Short-form datasets include SQuAD 1.1 (Rajpurkar et al., 2016), NewsQA (Trischler
et al., 2017), TriviaQA (Joshi et al., 2017), SQuAD 2.0 (Rajpurkar et al., 2018), NarrativeQA (Kociský
et al., 2018), Natural Questions (NQ) (Kwiatkowski et al., 2019), Quoref (Dasigi et al., 2019) and
DROP (Dua et al., 2019). Long-form datasets include ELI5 (Fan et al., 2019) and doc2dial (Feng
et al., 2020). For both short-form and long-form datasets, the evaluation metrics are exact match (EM)
and F1 over words in the answers. Answering Open-ended QA requires the model to comprehend the
provided context, or retrieve related knowledge if there’s no context provided. | ChatGPT’sOne-yearAnniversary-AreOpen-Source LargeLanguageModelsCatchingup |
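The EM and word-level F1 metrics used for these datasets can be sketched as follows (a common formulation of SQuAD-style scoring; answer normalization is simplified here to lowercasing, an assumption of ours):

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> bool:
    """EM: prediction matches the gold answer after simple normalization."""
    return pred.strip().lower() == gold.strip().lower()

def word_f1(pred: str, gold: str) -> float:
    """F1 over answer words: harmonic mean of word precision and recall."""
    pred_toks = pred.lower().split()
    gold_toks = gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)   # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

em = exact_match("Paris", "paris")          # True
f1 = word_f1("in Paris France", "Paris")    # overlap 1: p=1/3, r=1 -> 0.5
```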
api_key, api_secret, access_token, access_token_secret):
    while True:
        current_price, sentiment = get_stock_info(ticker, keyword)
        sentiment_result = get_tweets_sentiment(search_tweets(keyword))
        action, num_shares = execute_trade(sentiment_result, current_price, available_money)
        if action == 'buy':
            total_cost = calculate_trade_cost(num_shares, current_price, 0.01)
            if total_cost <= available_money:
                execute_trade(action, num_shares, ticker, api_key, api_secret, access_token, access_token_secret)
                available_money -= total_cost
        elif action == 'sell':
            execute_trade(action, num_shares, ticker, api_key, api_secret, access_token, access_token_secret)
            available_money += num_shares * current_price
        time.sleep(60)
| CAMEL- Communicative Agents for “Mind” Exploration of Large Scale Language Model Society |
highlights that transformers, known for their effective handling of long-range data dependencies, tend to outperform Long
Short-Term Memory networks (LSTMs) [105] as they scale. This observation underscores the potential of transformers in
large-scale language processing tasks. | TheEfficiencySpectrumofLargeLanguageModels-AnAlgorithmicSurvey |
2. Set the anchor matching IoU threshold according to the number of
faces in the dataset: higher thresholds for datasets with more faces,
and lower thresholds for datasets with fewer faces.
3. Set the location loss weight according to the presence of facial
landmarks in the dataset: higher weights for datasets with facial
landmarks, and lower weights for datasets without facial landmarks.
4. Set the negative to positive ratio according to the number of
faces in the dataset: higher ratios for datasets with more faces, and
lower ratios for datasets with fewer faces.
5. Set the learning rate according to the number of faces in the
dataset: higher rates for datasets with more faces, and lower rates
for datasets with fewer faces.
Test task: FDDB
1. Set the crop size to be larger and the anchor matching IoU
threshold to be higher for datasets with more faces.
2. Increase the location loss weight and decrease the negative to
positive ratio for datasets with more faces. | MLCopilot- Unleashing the Power of Large Language Models in Solving Machine Learning Tasks |
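Heuristics like these can be read as a rule-based configuration function; a hypothetical sketch (every name, threshold, and value below is illustrative, not MLCopilot's actual output):

```python
def suggest_config(num_faces: int, has_landmarks: bool, many_faces_threshold: int = 1000):
    """Map dataset statistics to detector hyperparameters per the heuristics above."""
    many = num_faces >= many_faces_threshold
    return {
        "anchor_iou_threshold": 0.5 if many else 0.35,   # higher IoU for more faces
        "location_loss_weight": 2.0 if has_landmarks else 1.0,
        "neg_pos_ratio": 7 if many else 3,               # higher ratio for more faces
        "learning_rate": 1e-3 if many else 1e-4,         # higher lr for more faces
    }

cfg = suggest_config(num_faces=5000, has_landmarks=True)
```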
The Future of Music: How Generative AI Is Transforming the Music Industry | Andreessen Horowitz
Our ultimate dream? An end-to-end tool where you provide guidance on the vibe and themes of
the track you’re looking to create, in the form of text, audio, images, or even video, and an AI
copilot then collaborates with you to write and produce the song. We don’t imagine the most
popular songs will ever be entirely AI generated — there’s a human element to music, as well as
a connection to the artist that can’t be replaced — however, we do expect AI assistance will
make it easier for the average person to become a musician. And we like the sound of that!
If you’re building here, reach out to us at [email protected] and [email protected].
| The Future of Music_ How Generative AI Is Transforming the Music Industry _ Andreessen Horowitz |
tions for the evolution of modern human behavior. Current Anthropology, 51(S1):S135–S147, 2010.
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha-
jishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational
Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2357–2367, Min-
neapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1245. URL
https://aclanthology.org/N19-1245.
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul F. Christiano, John Schulman, and Dan Mané. Concrete
problems in AI safety. ArXiv preprint, abs/1606.06565, 2016. URL https://arxiv.org/abs/1606.
06565. | Tool Learning with Foundation Models |
to filter out irrelevant entities (through the DBpedia categories), while a large-scale knowledge graph integrating DBpedia,
schema.org and YAGO is used to augment information about entities and subsequently build explanations for the recom-
mendations in natural language. See Table 6 for an additional summary. | Knowledge graphs as tools for explainable machine learning: A survey |
Published as a conference paper at ICLR 2023
Gunho Park, Baeseong Park, Se Jung Kwon, Byeongwook Kim, Youngjoo Lee, and Dongsoo Lee.
nuQmm: Quantized matmul for efficient inference of large-scale generative language models.
arXiv preprint arXiv:2206.09557, 2022.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor
Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance
deep learning library. In Conference on Neural Information Processing Systems (NeurIPS), 2019.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language
models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter Liu. Exploring the limits of transfer learning with a unified text-to-text
transformer. Journal of Machine Learning Research, 21(140):1–67, 2020. | GPTQ |
5.2 Recommendations for developers
We recommend reviewing guides and tools for responsible development. See Chowdhery et al. (2022) for additional
discussion of ethical considerations in use of language models. We note that while the evaluations of PaLM 2 in this
paper provide a signal of how the model might perform when used to build a downstream system, application-specific
analysis and evaluation of potential harms is critical.
All downstream developers should consider the potential for harms and bias in the specific context of their application
Shelby et al. (2023), particularly since changes in decoding strategy and prompts can have a significant impact on
generated responses. While we note the effectiveness of dialog-prompting in reducing toxic generation, we stress that
these results may not generalize to other forms of representational harm, or to other prompting methods or use contexts. | PaLM 2 Technical Report |
B.3 Model Training Parameters
We conduct training for the three stages of our model,
employing 5, 5, and 2 epochs, respectively. The train-
ing process incorporates the following hyper-parameters:
N = 32, L = 6, number of Audio Tokens = 8, and
lr = 10⁻⁴. This choice of hyper-parameters, coupled
with our training strategy, allows for the effective use of
C Model Evaluation
In this section, we elaborate on the datasets employed to
assess the various capabilities of the M2UGen model, fol-
lowed by a discussion of the evaluation metrics utilized.
C.1 Evaluation Datasets | M2UGen |
This project will build up the space-time cube(s) predictive model for urban information on multiple
dimensions, e.g., greenspace accessibility and values, land-use simulated mobility, residents’ happiness
and geodemographic profiles, and the development of local crimes, with the aim of informing
policy makers with data-driven evidence. The Predictive Space-Time Cube model will be trained and
tested with multi-sourced trajectory open data (for example, remote sensing images, census data, google
mobility data, detailed crime incidents data, statistics on socio-economic, etc.) in selected metropolitan
cities like London, New York, Sydney, and Hong Kong (https://comparecitycrime.com/,
preliminary exploration). Besides the widely applied spatial data analytical skills and machine
learning techniques, the student will develop a 3D understanding of urban crime in a dynamic and | informatics-phd-projects-2022-23 |
deep neural networks: A systematic review. IEEE access 7 (2019), 19143–19165.
[391] Huu Binh Nguyen, Duong Van Hai, Tien Dat Bui, Hoang Ngoc Chau, and Quoc Cuong Nguyen. 2022. Multi-Channel
Speech Enhancement using a Minimum Variance Distortionless Response Beamformer based on Graph Convolutional
Network. International Journal of Advanced Computer Science and Applications 13, 10 (2022).
[392] Viet-Anh Nguyen, Anh HT Nguyen, and Andy WH Khong. 2022. Tunet: A block-online bandwidth extension model
based on transformers and self-supervised pretraining. In ICASSP 2022-2022 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP). IEEE, 161–165.
[393] Viet-Nhat Nguyen, Mostafa Sadeghi, Elisa Ricci, and Xavier Alameda-Pineda. 2021. Deep variational generative
models for audio-visual speech separation. In 2021 IEEE 31st International Workshop on Machine Learning for Signal
Processing (MLSP). IEEE, 1–6. | AReviewofDeepLearningTechniquesforSpeechProcessing |
[29] Jordi Pons, Santiago Pascual, Giulio Cengarle, and Joan Serrà. Upsampling artifacts in neural
audio synthesis. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP), pages 3005–3009. IEEE, 2021.
[30] Ryan Prenger, Rafael Valle, and Bryan Catanzaro. Waveglow: A flow-based generative network
for speech synthesis. In ICASSP 2019-2019 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), pages 3617–3621. IEEE, 2019.
[31] Zafar Rafii, Antoine Liutkus, Fabian-Robert Stöter, Stylianos Ioannis Mimilakis, and Rachel
Bittner. The musdb18 corpus for music separation, 2017.
[32] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark
Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on
Machine Learning, pages 8821–8831. PMLR, 2021.
[33] Ali Razavi, Aaron Van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images | RVQGAN |
[57] K. Shridhar, A. Stolfo, and M. Sachan. Distilling Reasoning Capabilities into Smaller Language
Models. In Findings of the Association for Computational Linguistics, 2023.
[58] A. Talmor, J. Herzig, N. Lourie, and J. Berant. CommonsenseQA: A Question Answering
Challenge Targeting Commonsense Knowledge. In North American Chapter of the Association
for Computational Linguistics, 2019.
Technical Report
[59] R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. Hashimoto.
Stanford Alpaca: An Instruction-following LLaMA Model. Technical report, 2023.
[60] R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. Hartshorn, E. Saravia, A. Poulton, V. Kerkez,
and R. Stojnic. Galactica: A Large Language Model for Science. Preprint arXiv:2211.09085,
2022. | METAMATH |
3 EMBEDDING LAYERS
b ∈ {0, 1}^{|Σ|}. Here ∑ | MULTI HASH EMBEDDINGS IN SPACY |
AlphaCode 2 is evaluated on Codeforces,5 the same platform as AlphaCode, on 12 contests from
division 1 and 2, for a total of 77 problems. AlphaCode 2 solved 43% of these competition problems, a
1.7x improvement over the prior record-setting AlphaCode system, which solved 25%. Mapping this to
competition rankings, AlphaCode 2 built on top of Gemini Pro sits at an estimated 85th percentile on
average – i.e. it performs better than 85% of entrants. This is a significant advance over AlphaCode,
which only outperformed 50% of competitors.
The composition of powerful pretrained models with search and reasoning mechanisms is an
exciting direction towards more general agents; another key ingredient is deep understanding across
a range of modalities which we discuss in the next section.
5 http://codeforces.com/
Gemini: A Family of Highly Capable Multimodal Models | gemini_1_report |
[12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising
diffusion probabilistic models. Advances in Neural In-
formation Processing Systems, 33:6840–6851, 2020.
3
[13] Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov,
Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal
Irani. Imagic: Text-based real image editing with dif-
fusion models. In Conference on Computer Vision and
Pattern Recognition 2023, 2023. 3
[14] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli
Shechtman, and Jun-Yan Zhu. Multi-concept cus-
tomization of text-to-image diffusion. In Proceedings
of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2023. 3, 4, 6, 12, 13, 16
[15] Mingi Kwon, Jaeseok Jeong, and Youngjung Uh. Dif-
fusion models already have a semantic latent space.
In The Eleventh International Conference on Learn-
ing Representations, 2023. 3 | A Neural Space-Time Representation for Text-to-Image Personalization |
• Text Language Tag: The tag token specifies the language of output text sequences.
• Timestamps Tag: The presence of a <|timestamps|> or <|notimestamps|> token determines whether
the model needs to predict timestamps or not. Different from the sentence-level timestamps used in
Whisper, the inclusion of the <|timestamps|> tag requires the model to perform fine-grained word-level
timestamp prediction, abbreviated as SRWT (Speech Recognition with Word-level Timestamps). The
prediction of these timestamps is interleaved with the transcription words: the start time token is
predicted before each transcription token, while the end time token is predicted after. According to our
experiments, SRWT improves the ability of the model to align audio signals with timestamps. This
improved alignment contributes to a comprehensive understanding of speech signals by the model,
resulting in notable advancements across many tasks such as speech recognition and audio QA tasks. | Qwen-Audio |
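The interleaving rule described above (start-time token before each transcription token, end-time token after) can be sketched as follows (the `<|…|>` spelling of timestamp tokens is a hypothetical rendering of ours, not Qwen-Audio's actual vocabulary):

```python
def interleave_srwt(words, spans):
    """Interleave word-level timestamps with transcription tokens:
    start-time token before each word, end-time token after it."""
    seq = []
    for word, (start, end) in zip(words, spans):
        seq += [f"<|{start:.2f}|>", word, f"<|{end:.2f}|>"]
    return seq

tokens = interleave_srwt(["hello", "world"], [(0.00, 0.40), (0.45, 0.90)])
# -> ["<|0.00|>", "hello", "<|0.40|>", "<|0.45|>", "world", "<|0.90|>"]
```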
neural networks and learning systems 32, 10 (2020), 4291–4308.
[149] Mark Gales, Steve Young, et al. 2008. The application of hidden Markov models in speech recognition. Foundations
and Trends® in Signal Processing 1, 3 (2008), 195–304.
[150] Chenyang Gao, Yue Gu, Francesco Caliva, and Yuzong Liu. 2023. Self-supervised speech representation learning for
keyword-spotting with light-weight transformers. arXiv preprint arXiv:2303.04255 (2023).
[151] Ruohan Gao and Kristen Grauman. 2021. Visualvoice: Audio-visual speech separation with cross-modal consistency.
In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 15490–15500.
[152] Daniel Garcia-Romero, Alan McCree, David Snyder, and Gregory Sell. 2020. JHU-HLTCOE system for the VoxSRC
speaker recognition challenge. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP). IEEE, 7559–7563. | AReviewofDeepLearningTechniquesforSpeechProcessing |
Figure 4: LoRA r for LLaMA 7B models finetuned on Alpaca. Each dot represents a combination of
hyperparameters and for each LoRA r we run 3 random seed with each hyperparameter combination. The
performance of specific LoRA r values appears to be independent of other hyperparameters.
A.2 Super-Natural Instructions Experimental Setup Details
We use the same preprocessing of the Super-Natural Instruction dataset as Wang et al. [60]. However,
we split the training data in training and validation datasets allowing us to perform more rigorous
hyperparameter tuning and early stopping. We use the same hyperparameters described in the paper
for training the various T5 model sizes on the Super-Natural Instruction data. We use LoRA r = 16
for small, medium, and large T5 models and LoRA r = 64 for T5 xl and xxl models. We also use
LoRA α = 64 in all our experiments and no LoRA dropout. | QLORA |
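The effect of LoRA r on adapter size can be made concrete (a sketch assuming the standard LoRA parameterization, ΔW = (α/r)·B·A with B zero-initialized; the dimensions below are illustrative, not from the experiments):

```python
import numpy as np

def lora_delta(d_out, d_in, r, alpha, rng):
    """Low-rank update: B (d_out x r) times A (r x d_in), scaled by alpha / r."""
    A = rng.normal(size=(r, d_in))        # randomly initialized
    B = np.zeros((d_out, r))              # zero-initialized, so delta starts at 0
    return (alpha / r) * (B @ A), A.size + B.size

rng = np.random.default_rng(0)
delta, n_params = lora_delta(d_out=1024, d_in=1024, r=16, alpha=64, rng=rng)
# adapter holds r * (d_in + d_out) = 32,768 parameters vs. 1,048,576 for full W
```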
Joint Conference on AI Music Creativity (CSMC + MuMe).
Hadjeres, G., & Nielsen, F. (2020). Anticipation-rnn: Enforcing unary con-
straints in sequence generation, with application to interactive music gen-
eration. Neural Computing and Applications, 32 , 995–1005.
Herremans, D., & Chew, E. (2017). Morpheus: generating structured music
with constrained patterns and tension.
IEEE Transactions on Affective
Computing, 10 , 510–523.
Herremans, D., Chuan, C.-H., & Chew, E. (2017). A functional taxonomy of
music generation systems. ACM Computing Surveys (CSUR), 50 , 1–30.
Herremans, D., & Sörensen, K. (2013). Composing fifth species counterpoint
music with a variable neighborhood search algorithm. Expert Systems with
Applications, 40, 6427–6437.
Herremans, D., Weisser, S., Sörensen, K., & Conklin, D. (2015). Generating
structured music for bagana using quality metrics based on markov models.
Expert Systems with Applications, 42, 7424–7435.
Qingqing Huang, Aren Jansen, Joonseok Lee, Ravi Ganti, Judith Yue Li, and Daniel PW Ellis. Mulan:
A joint embedding of music audio and natural language. arXiv preprint arXiv:2208.12415, 2022.
Kinyugo Maina. Msanii: High fidelity music synthesis on a shoestring budget. arXiv preprint
arXiv:2301.06468, 2023.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-
resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
Dongchao Yang, Jianwei Yu, Helin Wang, Wen Wang, Chao Weng, Yuexian Zou, and Dong Yu.
Diffsound: Discrete diffusion model for text-to-sound generation. arXiv preprint arXiv:2207.09983,
2022. | Simple and Controllable Music Generation |
5.2 Trustworthiness
Given that LLMs are now involved in sensitive areas such as healthcare, finance, and law, it is crucial to ensure that
they are trustworthy and capable of producing reliable output. | Harnessing the Power of LLMs in Practice- A Survey on ChatGPT and Beyond |
Human Input and Task Specifying. The role-playing session will be instantiated from an idea
and selected roles by humans. As an example in Figure 1, a human has a preliminary idea to develop
executive, Mark Zuckerberg, has preached for many years that his company’s
products were creating “radical transparency” at a societal level, fostering more
“open and honest communities” (Heemsbergen 2016, p. 140). Zuckerberg has
publicly portrayed openness and transparency as key organizing features of the
digital age while running a company that made increasingly important political
decisions in secret (Gillespie 2018a). However, following the multiple scandals
hounding Facebook since 2016, the mantra is slowly being turned inward:
Zuckerberg has claimed that he will finally bring transparency to some of the
company’s sensitive business dealings, most notably in the realm of political
advertising (Feldman 2017). In public discourse, academics, policymakers, and
civil society groups are increasingly advocating measures to look into the
corporate black box of firms like Facebook and Google, positing it as a major
potential governance mechanism that could rein in platform companies (Brock | Social_Media_and_Democracy |
Neural network-based models have achieved strong results
on diverse language-related tasks compared with traditional
machine learning-based models such as logistic regression or
support vector machines (SVMs) by utilizing word embeddings
in fake news detection. Word embeddings map words or text to
vectors. They are low-dimensional, and their distributed feature
representations are well suited to natural language. The
term ''word embedding'' refers to a combination of language
modeling and feature learning: words or expressions from the
lexicon are assigned to real-number vectors. Neural network
models commonly utilize this method for fake news detec-
tion [42], [96]. In word embedding, word representation is performed
using dense vectors. These vectors represent the
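As a hedged, toy sketch of this idea (the vocabulary, dimensions, and random vectors below are illustrative, not any model from the surveyed work), a word-embedding lookup maps each token to a dense vector, and a document can be represented by averaging them:

```python
import numpy as np

# Toy embedding table: a 5-word vocabulary with 4-dimensional dense vectors.
rng = np.random.default_rng(0)
vocab = {"fake": 0, "news": 1, "spreads": 2, "fast": 3, "online": 4}
embeddings = rng.normal(size=(len(vocab), 4))

def embed_document(tokens):
    """Map in-vocabulary tokens to vectors and average them into one document vector."""
    ids = [vocab[t] for t in tokens if t in vocab]
    return embeddings[ids].mean(axis=0)

doc_vec = embed_document(["fake", "news", "spreads", "online"])
print(doc_vec.shape)  # (4,)
```

A classifier (neural or otherwise) then consumes such dense document vectors instead of sparse bag-of-words counts.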
NaturalQuestions-Open The NaturalQuestions dataset
(Kwiatkowski et al., 2019) consists of naturally occurring
Google queries and their answers. Each answer also comes
with an “answer type”: following Lee et al. (2019), we
only keep questions that are categorized as “short answer
type” with at most five tokens. The dataset also provides a
suggested Wikipedia document to retrieve; like all models
we compare against, we do not provide this to our model.
WebQuestions The WebQuestions dataset (Berant et al.,
2013) was collected from the Google Suggest API, using
one seed question and expanding the set to related questions.
We follow the setting defined by Chen et al. (2017). | REALM |
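A minimal sketch of the short-answer filtering described above (the field names and whitespace tokenization are illustrative assumptions, not the paper's actual pipeline):

```python
def keep_short_answer(example, max_tokens=5):
    """Keep only 'short answer type' questions whose answer has at most
    `max_tokens` tokens; whitespace splitting stands in for a real tokenizer."""
    return (example["answer_type"] == "short"
            and len(example["answer"].split()) <= max_tokens)

data = [
    {"question": "who wrote hamlet", "answer": "William Shakespeare",
     "answer_type": "short"},
    {"question": "explain photosynthesis",
     "answer": "a long multi sentence explanation of the process",
     "answer_type": "long"},
]
filtered = [ex for ex in data if keep_short_answer(ex)]
print(len(filtered))  # 1
```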
Toolformer: Language Models Can Teach Themselves to Use Tools
Timo Schick
Jane Dwivedi-Yu Roberto Dessì† Roberta Raileanu
Maria Lomeli Luke Zettlemoyer Nicola Cancedda Thomas Scialom
Meta AI Research †Universitat Pompeu Fabra
Abstract | Toolformer |
widespread popularity of
transparency as a form of accountability in
democratic governance is its flexibility and ambiguity. As the governance
scholar Christopher Hood has argued, “much of the allure of transparency as
a word and a doctrine may lie in its potential to appeal to those with very
different, indeed contradictory, attitudes and worldviews” (Hood 2006, p. 19).
Transparency in practice is deeply political, contested, and oftentimes
problematic (Etzioni 2010; Ananny and Crawford 2018); and yet, it remains
an important – albeit imperfect – tool that, in certain policy domains, has the
potential to remedy unjust outcomes, increase the public accountability of
powerful actors, and improve governance more generally (Fung, Graham, and
Weil 2007; Hood and Heald 2006). As the authors of the Ranking Digital
Rights Corporate Responsibility Index (an annual effort to quantify the
transparency and openness of multiple major technology companies) argue, | Social_Media_and_Democracy |
$$\mathcal{L}_{FL} = \frac{1}{|P|}\sum_{p\in P}\Big(\lambda_e\|E^{GT}_p - E_p\|^2 + \lambda_p\|P^{GT}_p - P_p\|^2 + \lambda_w\|W^{GT}_p - W_p\|^2\Big), \qquad (13)$$
where $E_p$, $P_p$, and $W_p$ denote the predicted values of the
deformation network, and $E^{GT}_p$, $P^{GT}_p$, and $W^{GT}_p$ denote the
pseudo ground truth defined by the nearest FLAME ver-
tices. We set $\lambda_e = \lambda_p = 1000$, and $\lambda_w = 0.1$ for our
experiments. Our final training loss is
$$\mathcal{L} = \mathcal{L}_{RGB} + \lambda_M \mathcal{L}_M + \lambda_{FL}\mathcal{L}_{FL}, \qquad (14)$$
where $\lambda_M = 2$ and $\lambda_{FL} = 1$.
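A hedged numerical sketch of the FLAME-consistency term of Eq. (13); the tensor shapes below are illustrative, not the model's actual dimensions:

```python
import numpy as np

# Squared errors on expression (E), pose (P) and skinning-weight (W)
# predictions against pseudo ground truth, averaged over the point set.
lam_e, lam_p, lam_w = 1000.0, 1000.0, 0.1

def flame_loss(E, E_gt, P, P_gt, W, W_gt):
    per_point = (lam_e * np.sum((E_gt - E) ** 2, axis=-1)
                 + lam_p * np.sum((P_gt - P) ** 2, axis=-1)
                 + lam_w * np.sum((W_gt - W) ** 2, axis=-1))
    return per_point.mean()  # mean over the |P| sampled points

rng = np.random.default_rng(0)
E = rng.normal(size=(8, 50))   # illustrative: 8 points, 50 expression params
P = rng.normal(size=(8, 36))   # illustrative pose dimensionality
W = rng.normal(size=(8, 5))    # illustrative skinning weights
print(flame_loss(E, E, P, P, W, W))  # 0.0 when predictions equal the pseudo GT
```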
Figure 3. Qualitative results on synthetic data. As the expres-
sion strength increases from left to right, baseline methods either
collapse to a neutral expression (D-Net, B-Morph) or produce in-
valid geometry (C-Net, Fwd-Skin). In contrast, our method suc-
cessfully handles even the most extreme expressions.
4. Experiments | I M Avatar- Implicit Morphable Head Avatars from Videos |
©2023 Cerebras Systems Inc. All Rights Reserved.
Cerebras-GPT: Open Compute-Optimal Language Models
Cerebras-GPT models form the compute-optimal Pareto frontier for both pre-training and popular down-
stream objectives. Figure 1 shows the upstream Pile frontiers compared to contemporary works. We char-
acterize the Pareto frontiers with scaling laws that can be used to predict the benefits of further model
and dataset scaling efforts. We also observe and discuss that future open efforts should consider aggregate
compute budget (both pre-training and expected inferences) when deciding the appropriate balance of model
size and pre-training dataset size.
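As a hedged illustration of how such a scaling-law characterization works (the numbers below are synthetic, not Cerebras-GPT measurements), a single power law L(C) = a·C^(−b) can be recovered by linear least squares in log-log space and then used to extrapolate:

```python
import numpy as np

# Synthetic (compute, loss) points drawn from an exact power law.
flops = np.array([1e18, 1e19, 1e20, 1e21])
loss = 20.0 * flops ** -0.05

# Fit log L = log a - b * log C with ordinary least squares.
slope, intercept = np.polyfit(np.log(flops), np.log(loss), 1)
b, a = -slope, np.exp(intercept)
print(round(b, 3))  # recovers the exponent b = 0.05

# Predict loss at a 10x larger compute budget.
print(a * (1e22) ** -b)
```

Real scaling-law fits typically add an irreducible-loss offset and fit over many runs; this sketch only shows the log-log mechanics.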
Figure 1: Pile test set loss given pre-training FLOPs for Cerebras-GPT, GPT-J, GPT-NeoX, and Pythia.
Overall, the contributions of this work are as follows:
• We train Cerebras-GPT compute-optimal models scaled from 111M to 13B parameters on the Pile | Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
[Final per-dataset image counts omitted; values range from 830,000 to 56,788,344 per dataset and sum to 142,109,386.]
Table 15: Composition of our LVD-142M dataset. We report the list of datasets and associated splits
used to build the dataset, how they were included (as is without retrieval or via sample-based or cluster-based
retrieval). For retrievals, we indicate the actual number of retrieved images and the final number included
in the dataset.
Model                    Arch.     Heads  Drop-rate  Batch size  LR
DINOv2-S (distilled)     ViT-S/14  6      0          2048        1e-3
DINOv2-B (distilled)     ViT-B/14  12     0          2048        1e-3
DINOv2-L (distilled)     ViT-L/14  16     0          2048        1e-3
DINOv2-L (from scratch)  ViT-L/14  16     0.4        3072        3.5e-4
DINOv2-g (from scratch)  ViT-g/14  24     0.4        3072        3.5e-4
1. To complement the advantages of large language models and expert models, we propose inter-
model cooperation protocols. The large language models act as brains for planning and decision-
making, and the small models act as executors for each specific task, providing new ways for
designing general AI models.
2. We built HuggingGPT to tackle generalized AI tasks by integrating the Hugging Face hub
with 400+ task-specific models around ChatGPT. Through the open collaboration of models,
HuggingGPT provides users with multimodal and reliable conversation services.
3. Extensive experiments on multiple challenging AI tasks across language, vision, speech, and
cross-modality demonstrate the capability of HuggingGPT in understanding and solving complex
tasks from multiple modalities and domains.
2 Related Works
2.1 Large Language Models | HuggingGPT- Solving AI Tasks with ChatGPT and its Friends in Hugging Face |
4. In Sec. 8, we extend our framework to also handle abstraction heuristics by additionally defining admissibility as a
formal property (a heuristic is admissible if it never overestimates the true cost, which is important for most heuristic
search algorithms since it allows for finding optimal/cheapest solutions). This enables an analysis of the relationships
between abstraction refinement and abstraction heuristics in a formal way, not dependent on any particular abstraction
method or formalism. The outcome is that admissibility implies a property that guarantees a strong form of refinement
completeness. Otherwise the findings are mostly negative, but the proofs give valuable hints at why there are no more
such strong relationships between heuristics and refinement in general, thus suggesting some future directions. | A-framework-for-analysing-state-abstraction-metho_2022_Artificial-Intelligen |
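The admissibility property defined above can be checked directly on a small example: compute the true cost-to-goal h*(s) (here via Dijkstra's algorithm over reversed edges) and verify h(s) ≤ h*(s) for every state. This is a sketch independent of any particular abstraction method; the toy graph is illustrative:

```python
from heapq import heappush, heappop

def true_costs_to_goal(graph, goal):
    """Dijkstra from the goal over reversed edges: exact cost-to-go h*(s)."""
    rev = {}
    for u, nbrs in graph.items():
        for v, w in nbrs:
            rev.setdefault(v, []).append((u, w))
    dist, pq = {goal: 0.0}, [(0.0, goal)]
    while pq:
        d, u = heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in rev.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heappush(pq, (d + w, v))
    return dist

def is_admissible(h, graph, goal):
    """A heuristic is admissible iff h(s) <= h*(s) for every state s."""
    star = true_costs_to_goal(graph, goal)
    return all(h.get(s, 0.0) <= c for s, c in star.items())

graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)], "C": []}
print(is_admissible({"A": 2.0, "B": 1.0, "C": 0.0}, graph, "C"))  # True
print(is_admissible({"A": 3.0, "B": 1.0, "C": 0.0}, graph, "C"))  # False: h(A) > 2
```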
required because the label spaces and losses for the upstream
and downstream tasks differ. Adapter modules perform
more general architectural modifications to re-purpose a pre-
trained network for a downstream task. In particular, the
adapter tuning strategy involves injecting new layers into
the original network. The weights of the original network
are untouched, whilst the new adapter layers are initialized
at random. In standard fine-tuning, the new top-layer and
the original weights are co-trained. In contrast, in adapter-
tuning, the parameters of the original network are frozen
and therefore may be shared by many tasks.
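A minimal NumPy sketch of such an adapter layer (illustrative sizes; zero-initializing the up-projection is one way to obtain the near-identity initialization discussed in this work):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck = 16, 4  # adapters are small relative to the host layer

class Adapter:
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus a
    residual connection. With the up-projection at zero, the module is an
    exact identity map before training."""
    def __init__(self):
        self.down = rng.normal(scale=0.01, size=(d_model, d_bottleneck))
        self.up = np.zeros((d_bottleneck, d_model))  # near-identity init

    def __call__(self, x):
        return x + np.maximum(x @ self.down, 0.0) @ self.up

x = rng.normal(size=(3, d_model))
adapter = Adapter()
print(np.allclose(adapter(x), x))  # True: identity before any training
```

Only the adapter parameters (and task head) would be trained; the frozen host-network weights can then be shared across tasks.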
Adapter modules have two main features: a small number
of parameters, and a near-identity initialization. The adapter
modules need to be small compared to the layers of the orig-
inal network. This means that the total model size grows
relatively slowly when more tasks are added. A near-identity
initialization is required for stable training of the adapted | Parameter-Efficient Transfer Learning for NLP |
particularly for languages other than English where publicly available datasets were not available. We will
continue to fine-tune and release updated versions in the future as we progress on addressing these issues. | Llama2 |
getable soup. What vegetables are typically included in this type of soup?
Trace:
Action: search(vegetable soup)
Information: ... [20] Grandma's Slow Cooker Beef and Vegetable Soup. tag: Vegetable Soup Recipes, type: recipe [21] Homemade Vegetable Soup. tag: Vegetable Soup Recipes, type: recipe
Thought: I should load item 21 as it is the most general recipe.
Information: OK.
Action: load(21)
Information: ... Directions: Combine the baby carrots, potatoes, onion, celery, beans, cabbage, tomatoes, green beans, chicken broth, vegetable stock, water, basil, sage, thyme, and salt in a large pot; bring to a boil. Reduce heat to low; cover. Simmer until vegetables are tender, about 90 minutes. Nutrition: Calories: 166; Fat: 1g; Carbs: 33g; Protein: 8g ...
Thought: I have found the vegetables that are typically included in this type of soup, it should be the answer.
Answer: The vegetables typically included in vegetable soup are baby carrots, baking potatoes, sweet onion, celery, great Northern beans, cabbage, diced tomatoes, green beans, chicken broth, vegetable stock, basil, sage, and thyme.
A.8
Specifically, we have not tested for factual accuracy, profanity, toxicity, or other socially undesirable text
generation. We do evaluate the bias of our Cerebras-GPT models using the CrowS-Pairs dataset in Appendix
C.4. Further safety-related testing, mitigations, and output curation should be applied to our pre-trained
models before presenting results to users. Please refer to the model card in the Appendix, Table 7. | Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
Si, C., Gan, Z., Yang, Z., Wang, S., Wang, J., Boyd-Graber,
J. L., and Wang, L. Prompting GPT-3 to be reliable.
In The Eleventh International Conference on Learning
Representations, 2023. URL https://openreview.net/forum?id=98p5x51L5af.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L.,
Van Den Driessche, G., Schrittwieser, J., Antonoglou, I.,
Panneershelvam, V., Lanctot, M., et al. Mastering the
game of Go with deep neural networks and tree search.
Nature, 529(7587):484–489, 2016.
Søgaard, A. Grounding the vector space of an octopus:
Word meaning from raw text. Minds and Machines, pp.
1–22, 2023.
Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid,
A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A.,
Garriga-Alonso, A., et al. Beyond the imitation game:
Quantifying and extrapolating the capabilities of language
models. arXiv preprint 2206.04615, 2022. | Eight Things to Know about Large Language Models |
Furthermore, our V-MusProd can also be adapted to text-
conditional music generation with slight modifications. For
instance, we can use video descriptions and titles to build
semantic relations in the Chord Transformer and leverage lyrics
to control the Melody and Accompaniment Transformers with
temporal information.
It is worth noting that our SymMV dataset also provides
lyric annotations for exploring relations among text, videos, and
music.
Figure 9: Screenshot of the user study questionnaire in subjective evaluation. We survey professional students in music
majors as our experts and regular users as non-experts. The first question serves only to distinguish the experts. | VideoBackgroundMusicGeneration |
This version centers on the following hypothesis: that by default, suitably strategic and intelligent
agents, engaging in suitable types of planning, will have instrumental incentives to gain and maintain
various types of power, since this power will help them pursue their objectives more effectively (see
section 4.2 for more discussion).15 The worry is that if we create and lose control of such agents, and
their objectives are problematic, the result won’t just be damage of the type that occurs, for example,
when a plane crashes, or a nuclear plant melts down—damage which, for all its costs, remains passive.
Rather, the result will be highly-capable, non-human agents actively working to gain and maintain
power over their environment—agents in an adversarial relationship with humans who don’t want
them to succeed.
Nuclear contamination is hard to clean up, and to stop from spreading. But it isn’t trying to not get | Is Power-Seeking AI an Existential Risk? |
Humans possess an extraordinary ability to create and utilize tools, allowing them to
overcome physical limitations and explore new frontiers. With the advent of recent powerful
foundation models, artificial intelligence systems have the potential to become as adept at
tool use as humans. This paradigm, dubbed tool learning with foundation models,
combines the strengths of specialized tools and foundation models to achieve enhanced
accuracy, efficiency, and automation in problem-solving. Despite its immense potential, there
is still a lack of a comprehensive understanding of key challenges, opportunities, and future
endeavors in this field. To this end, we present a systematic investigation of tool learning in
this paper. We first introduce the background of tool learning, including its cognitive origins,
the paradigm shift of foundation models, and the complementary roles of tools and models. | Tool Learning with Foundation Models |
Since $\sum_{\ell\in[n]} w_\ell \in L_{a^*(b)}$,
$$a^*(b) \in \arg\max_{a\in A} \mathbb{E}_{o\sim F|a}\Big[\sum_{\ell\in[n]} w_\ell(o)\Big] - \psi(a). \qquad (15)$$
Thus, and according
to (15), we conclude that $a^*(b)$ maximizes the agent's utility. Furthermore, Property 2 in the defini-
tion of IIVCG (Definition 2) is also met, since principal $\ell$'s expected payment is $\mathbb{E}_{o\sim F|a^*(b)}[t_\ell(b,o)] =
h_\ell(b_{-\ell}) - \mathrm{Wel}_{a^*(b)}(b_\ell, w_\ell) + \mathbb{E}_{o\sim F|a^*(b)}[w_\ell(o)]$. After rearranging the above, $\mathbb{E}_{o\sim F|a^*(b)}[t_\ell(b,o)] = h_\ell(b_{-\ell}) -
\mathrm{Wel}_{a^*(b)}(b_\ell, 0)$, as desired.
18 If $h_\ell(b_{-\ell})$ depends on the realised outcome, then it would also depend on principal $\ell$'s bid (since $\ell$'s bid influences
the realised outcome indirectly by influencing the chosen action).
B.3 A Trade-off between LL and IR: Proof of Theorem 3
Here, we formally prove Theorem 3.
Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose
Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan,
Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu,
Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng,
Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. Palm 2 technical report, 2023. | gemini_1_report |
[26] Kyle Genova, Forrester Cole, Aaron Maschinot, Aaron
Sarna, Daniel Vlasic, and William T Freeman. Unsuper-
vised training for 3d morphable model regression. In Pro-
ceedings of the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), pages 8377–8386, 2018.
[27] Abhijeet Ghosh, Graham Fyffe, Borom Tunwattanapong, Jay
Busch, Xueming Yu, and Paul Debevec. Multiview face cap-
ture using polarized spherical gradient illumination. ACM
Transactions on Graphics (TOG), 30(6):1–10, 2011.
[28] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing
Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and
Yoshua Bengio. Generative adversarial networks. Commu-
nications of the ACM, 63(11):139–144, 2020.
[29] Paulo Gotardo, Jérémy Riviere, Derek Bradley, Abhijeet
Ghosh, and Thabo Beeler. Practical dynamic facial ap-
pearance modeling and acquisition. ACM Transactions on
Graphics (TOG), 37(6):1–13, 2018. | Relightify-Relightable3DFacesfromaSingleImageviaDiffusionModels |
$$\alpha_j = \frac{\exp(\mathrm{EntEmbed}(e_j)\cdot h_{m_i})}{\sum_{e\in\mathrm{topK}(E,h_{m_i},k)}\exp(\mathrm{EntEmbed}(e)\cdot h_{m_i})},\qquad E_{m_i} = \sum_{e_j\in\mathrm{topK}(E,h_{m_i},k)} \alpha_j\,\mathrm{EntEmbed}(e_j)$$
Figure 2: The Entities as Experts model: the initial transformer layer output is used (i) to predict mention bound-
aries, (ii) to retrieve entity embeddings from entity memory, and (iii) to construct input to the next transformer
layer, augmented with the retrieved entity embeddings of (ii). The final transformer block output is connected to
task specific heads: token prediction and entity prediction. The entity retrieval after the first transformer layer (ii)
is also supervised with an entity linking objective during pre-training.
Where topK(E, hmi, k) returns the k entities that
yield the highest score EntEmbed(ej) · hmi. We
use k = N to train and use k = 100 at inference
(see Section 4.1 and 6.3).
The entity memory layer can be applied to any
sequence output without loss of generality. We
apply it to the output of the first Transformer. | Entities as Experts- Sparse Memory Access with Entity Supervision |
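A NumPy sketch of this top-k entity-memory attention (toy sizes; the random entity table is an illustrative stand-in for EntEmbed):

```python
import numpy as np

def entity_memory(h, ent_embed, k):
    """Attend over the k entities whose embeddings score highest against the
    mention representation h; return their softmax-weighted sum."""
    scores = ent_embed @ h                 # EntEmbed(e) · h_mi for every entity
    topk = np.argsort(scores)[-k:]         # topK(E, h_mi, k)
    alpha = np.exp(scores[topk] - scores[topk].max())
    alpha /= alpha.sum()                   # softmax restricted to retrieved set
    return alpha @ ent_embed[topk], alpha

rng = np.random.default_rng(0)
ent_embed = rng.normal(size=(1000, 8))     # toy entity memory E
h = rng.normal(size=8)                     # mention representation h_mi
E_mi, alpha = entity_memory(h, ent_embed, k=100)
print(E_mi.shape, round(float(alpha.sum()), 6))  # (8,) 1.0
```

Using k = N during training and a smaller k (e.g., 100) at inference only changes which entities enter the softmax, as in the text above.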
Pablo Barberá
4 Online Hate Speech
Alexandra A. Siegel
5 Bots and Computational Propaganda: Automation
for Communication and Control
Samuel C. Woolley
6 Online Political Advertising in the United States
Erika Franklin Fowler, Michael M. Franz, and Travis N. Ridout
7 Democratic Creative Destruction? The Effect of a Changing Media
Landscape on Democracy
Rasmus Kleis Nielsen and Richard Fletcher
8 Misinformation and Its Correction
Chloe Wittenberg and Adam J. Berinsky
9 Comparative Media Regulation in the United States and Europe
Francis Fukuyama and Andrew Grotto
https://doi.org/10.1017/9781108890960 Published online by Cambridge University Press
10 Facts and Where to Find Them: Empirical Research
on Internet Platforms and Content Moderation
Daphne Keller and Paddy Leerssen
11 Dealing with Disinformation: Evaluating the Case for Amendment of | Social_Media_and_Democracy |
5 CONCLUSIONS
While fine-tuning huge LMs can often yield excellent performance, this approach is expensive at
training time, requires serving a plethora of models at runtime, and provides poor adaptability in the
face of variations in the targeted task. This paper has shown that a better alternative exists: freezing
a single, huge pretrained LM and learning much smaller neural modules that specialize the LM to
different tasks. While prompt tuning, prefix tuning, and other existing frozen model methods cited
above can be seen as simple instantiations of this idea, this paper shows that much more complex
architectures can achieve much stronger performance.
To make this case, we introduced three novel design patterns for such neural adaptors: input-dependent
prompt tuning; frozen readers; and LM recursion (presenting both neural and textual variants). We
showed that our methods match and often exceed prominent fine tuning approaches in massive | STANDING ON THE SHOULDERS OF GIANT FROZEN LANGUAGE MODELS |
and hyperparameter tuning. In all our experiments we use NF4 with double quantization and bf16
computation datatype. We set LoRA r = 64, α = 16, and add LoRA modules on all linear layers of
the base model. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1
for models up to 13B and 0.05 for 33B and 65B models. Following previous work on instruction
finetuning [62, 60] and after benchmarking other linear and cosine schedules, we use a constant
learning rate schedule. We use group-by-length to group examples of similar lengths in the same
batch (note this will produce an oscillating loss curve). The hyperparameters we tune for each model
size are shown in Table 9.
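The group-by-length strategy mentioned above can be sketched as follows (simplified; real trainers typically also shuffle the resulting length buckets between epochs):

```python
def group_by_length(examples, batch_size):
    """Bucket examples of similar length into the same batch to reduce
    padding waste; sorting by length is the core of the strategy."""
    ordered = sorted(examples, key=len)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

examples = ["a" * n for n in (5, 50, 7, 48, 6, 52)]
batches = group_by_length(examples, batch_size=3)
print([[len(x) for x in b] for b in batches])  # [[5, 6, 7], [48, 50, 52]]
```

Because each batch now mixes only similar lengths, per-batch loss varies with the batch's typical sequence length, which is what produces the oscillating loss curve noted above.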
B.3 Ablations
While it is general practice in the literature to only train on the response in instruction following
datasets, we study the effect of training on the instruction in addition to the response in Table 10. In | QLORA |
criterion are shown to be equivalent up to normalization by deriving the precise gap
between the two approaches. These results were further validated empirically as methods
were shown to exhibit similar performance and representation properties at ImageNet’s
scale (1.2 million samples). The similarities among methods was also studied in Tao et al.
[2021] where this unification was tackled from a study of the losses’ gradients. | A Cookbook of Self-Supervised Learning |
4.3 Model and training recipe | RVQGAN |
[58] Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Ja-
son Wei, Xuezhi Wang, Hyung Won Chung, Dara Bahri, Tal
Schuster, Steven Zheng, et al. Ul2: Unifying language learn-
ing paradigms. In ICLR, 2022. 2, 5
[59] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field
transforms for optical flow. In ECCV, pages 402–419, 2020.
6
[60] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste
Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al.
Llama: Open and efficient foundation language models.
arXiv preprint arXiv:2302.13971, 2023. 3
[61] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang,
Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit:
Multi-axis vision transformer. In ECCV, pages 459–479,
2022. 5
Gupta, Xiuye Gu, Alexander G Hauptmann, et al. Language
model beats diffusion–tokenizer is key to visual generation.
arXiv preprint arXiv:2310.05737, 2023. 3, 4, 5, 17 | VideoPoet |
Method              Time
Score-SDE           17 sec
RePaint             3 min
MCG (Ours)          1 min
MCG (Ours) + DDIM   12 sec
Table 3. Sampling time during our texture completion and re-
flectance prediction for different inpainting algorithms [64, 47, 11,
63] (using an Nvidia RTX 2080 TI GPU).
5. Limitations
Our method outperforms prior works on texture completion
as well as the challenging task of reflectance prediction.
Figure 9. Texture completion with our diffusion model using dif-
ferent inpainting algorithms [64, 47, 11, 63]. All algorithms are
implemented on top of the same unconditionally trained diffusion
model, and only the reverse sampling process is modified.
Method       Diffuse Albedo    Specular Albedo   Normals
             PSNR    SSIM      PSNR    SSIM      PSNR    SSIM
Score-SDE    20.80   0.808     26.69   0.845     26.86   0.784
RePaint      20.08   0.813     26.65   0.848     27.27   0.801
MCG          22.47   0.825     27.17   0.853     26.69   0.781
MCG + DDIM   21.94   0.817     26.88   0.846     26.45   0.774
Figure 2: Overview of HuggingGPT. With an LLM (e.g., ChatGPT) as the core controller and
the expert models as the executors, the workflow of HuggingGPT consists of four stages: 1) Task
planning: LLM parses user requests into a task list and determines the execution order and resource
dependencies among tasks; 2) Model selection: LLM assigns appropriate models to tasks based on
the description of expert models on Hugging Face; 3) Task execution: Expert models on hybrid
endpoints execute the assigned tasks based on task order and dependencies; 4) Response generation:
LLM integrates the inference results of experts and generates a summary of workflow logs to respond
to the user.
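The four-stage control flow can be sketched as below; the planner, selector, and executor are hypothetical stand-in stubs, not HuggingGPT's actual prompts or the Hugging Face API:

```python
# Stage stubs (hypothetical): in HuggingGPT each of these is driven by the LLM
# or by expert models hosted on Hugging Face.
def plan_tasks(request):
    return [{"task": "image-to-text", "args": {"image": request["image"]}}]

def select_model(task):
    return {"image-to-text": "stub-captioner"}[task["task"]]

def execute(model, task):
    return f"caption produced by {model}"

def respond(request, results):
    return "Workflow summary: " + "; ".join(results)

def hugginggpt_pipeline(request):
    tasks = plan_tasks(request)              # 1) task planning
    results = []
    for t in tasks:                          # respects task order/dependencies
        model = select_model(t)              # 2) model selection
        results.append(execute(model, t))    # 3) task execution
    return respond(request, results)         # 4) response generation

reply = hugginggpt_pipeline({"image": "example.jpg"})
print(reply)
```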
Task                  Args    Task               Args
Text-cls              text    Image-to-text      image
Token-cls             text    Text-to-image      text
Text2text-generation  text    VQA                text + image
Summarization         text    Segmentation       image
Translation           text    DQA                text + image
Question-answering    text    Image-cls          image
Conversational        text    Image-to-image     image
Text-generation       text    Object-detection   image
Tabular-cls           text    Controlnet-sd      text + image
coworkers, relatives, and acquaintances, who are more likely to share novel
information – including ideologically diverse news (Bakshy et al. 2012; Barberá
2015). | Social_Media_and_Democracy |
Input Text / Flan-PaLM output — Zero-shot chain-of-thought arithmetic reasoning
Input Text / Flan-PaLM output — Few-shot arithmetic reasoning
Input: How many sides does a rectangle and heptagon have, when combined? Answer this question with some math.
Output: A rectangle has 4 sides. A heptagon has 7 sides. In combination they have 4 + 7 = 11 sides.
Input: How many sides does a square and a triangle have combined? Answer this question with some math.
Output: A square has 4 sides and a triangle has 3 sides. 4 + 3 = 7 sides. A square and a triangle have 7 sides combined.
Input: Answer the following question by reasoning step-by-step. An earthquake caused four buildings to collapse. Experts predicted that each following earthquake would have double the number of collapsing buildings as the previous one, since each one would make the foundations less stable. After three more earthquakes, how many buildings had collapsed including those from the first earthquake?
Output: Step-by-step reasoning process: The second earthquake caused 2 * 4 = 8 buildings to
This chapter explores these, and other, core arguments surrounding the
political use of bots. It details the brief history of their use online. It accesses
the academic literature to highlight key themes on the subject of what some
researchers call computational propaganda and others simultaneously call
“information operations,” “information warfare,” “influence operations,”
“online astroturfing,” “cybertufing,” and many other terms. Computational
propaganda, and each of these other concepts to one degree or another, focuses
on the ways in which the use of algorithms, automation (most often in the form
of political bots), and human curation are used over social media to
purposefully distribute misleading information over social media networks
(Woolley and Howard 2016a). | Social_Media_and_Democracy |
[Plots: Mean Zero-Shot Accuracy and Mean Few-Shot Accuracy vs. number of parameters (10^7–10^10), comparing a plain language model with RLHF.]
Figure 4: This figure shows results from RL robustness experiments. We split our static dataset 50:50, and
trained separate PMs on each half, which we refer to as train PMs and test PMs. We then trained RLHF
policies against the train PMs, while evaluating their score with respect to the test PMs. Overfitting can then
be observed as a divergence between the train and test PM scores. (left) We see that training is quite robust
up to about 150k training samples, but beyond that point the train and test PM’s disagree, with the train PM
assigning a higher mean reward. We also show an approximately linear relationship between PM score gain
and the square root of the KL divergence (between the policy and its initial snapshot) during early phase | Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback |
[20] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke
Zettlemoyer. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? In
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Association for Computational Linguistics, 2022.
[21] Gaurav Mittal, Chang Liu, Nikolaos Karianakis, Victor Fragoso, Mei Chen, and Yun Fu. Hyperstar:
Task-aware hyperparameters for deep networks. In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pages 8736–8745, 2020.
[22] Yurii Evgen'evich Nesterov. A method of solving a convex programming problem with convergence rate
O(1/k²). In Doklady Akademii Nauk, volume 269, pages 543–547. Russian Academy of Sciences,
1983.
(Bao et al., 2021) and can effectively reduce the sequence length of the image representation. The bounding
boxes of objects in an image are expressed as sequences of location tokens in the format of integers. We
hereby build a unified vocabulary for all tokens of multi-modal outputs. | BiomedGPT |
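A hedged sketch of expressing a bounding box as integer location tokens (the bin count, normalization, and clamping below are illustrative assumptions, not the paper's exact scheme):

```python
def bbox_to_tokens(box, num_bins=1000):
    """Quantize normalized (x0, y0, x1, y1) coordinates in [0, 1] into
    integer location tokens by binning each coordinate."""
    return [min(int(c * num_bins), num_bins - 1) for c in box]

tokens = bbox_to_tokens((0.1, 0.25, 0.5, 0.75))
print(tokens)  # [100, 250, 500, 750]
```

These integers can then share one vocabulary with text and image tokens, which is what makes a unified multi-modal output space possible.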
Figure 2. Attitudes towards COVID-19: correlations and regressions on media diet scores and survey response proportions. (A) Correlations
are shown for different language models: an N-gram language model finetuned on media, BERT without finetuning, and BERT finetuned on
media. The darker bars are computed using our synonym-grouping method, which calculates media diet scores by grouping probabilities of
synonyms of the target word. These results reinforce the importance of several of our modeling choices: leveraging a more powerful,
pretrained language model (like BERT), synonym-grouping when computing target word probabilities, and adapting to media diet corpora.
Bootstrapped 95% confidence intervals are shown. (B) Regression analysis (error values and R2 values) for predicting survey response
proportions using the baseline BERT probabilities, the media diet scores, and the media diet score combined with the proportion of | Language models trained on media diets can predict public opinion |
erpts and 0.6 for top-and-random; see below for how these text excerpts are chosen).
VALIDATING AGAINST HUMAN SCORING
One potential worry is that simulation-based scoring does not actually reflect human evaluation of explanations (see here for more discussion). We gathered human evaluations of explanation quality to see whether they agreed with score-based assessment. We gave human labelers tasks where they see the same text excerpts and activations (shown with color highlighting) as the simulator mod
Zichuan Lin, Junyou Li, Jianing Shi, Deheng Ye, Qiang Fu,
and Wei Yang. Juewu-mc: Playing minecraft with sample-
efficient hierarchical reinforcement learning. arXiv preprint
arXiv:2112.04907, 2021. 12
Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante,
Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Ter-
zopoulos, Li Fei-Fei, et al. Mindagent: Emergent gaming
interaction. arXiv preprint arXiv:2309.09971, 2023b.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented language models: a survey. arXiv preprint arXiv:2302.07842, 2023.
JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models
A. Implementation Details
A.1. Controller | JARVIS-1 |
Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly,
Ed H. Chi, and Alex Beutel. 2019. Counterfactual
fairness in text classification through robustness. Pro-
ceedings of the 2019 AAAI/ACM Conference on AI,
Ethics, and Society.
Ana Valeria González, Maria Barrett, Rasmus Hvin-
gelby, Kellie Webster, and Anders Søgaard. 2020.
Type B reflexivization as an unambiguous testbed
for multilingual multi-task gender bias. In Proceed-
ings of the 2020 Conference on Empirical Methods
in Natural Language Processing (EMNLP), pages
2637–2648, Online. Association for Computational
Linguistics.
F Hintz, M Dijkhuis, V van Hoff, JM McQueen, and
AS Meyer. 2020. A behavioural dataset for studying
individual differences in language skills. Scientific
Data, 7(1).
Matthew Honnibal and Ines Montani. 2017. spaCy 2:
Natural language understanding with Bloom embed-
dings, convolutional neural networks and incremental
parsing. To appear. | Are Pretrained Multilingual Models Equally Fair Across Languages? |
• In domains such as item recommendation or image recognition, the main focus has mostly been the development of KBX-
systems providing mechanistic explanations for the models’ behaviour. This is opposed to data mining contexts, focusing
primarily on KBX-systems that could generate categorical explanations for the input data. A mix of the two approaches is
observed in natural language and forecasting applications, likely due to the variety of tasks to be achieved.
• Similarly, image recognition and recommender systems mostly rely on ABox statements, while the other application
domains tend to generate explanations using a combination of TBox and ABox statements.
• Areas directly concerned with a user-interaction, e.g. recommender and conversational AI systems, have a strong focus on
the integration of knowledge graphs directly in the training model (model-embedded knowledge). KBX-systems based on | Knowledge graphs as tools for explainable machine learning: A survey |
study. Further, as with other questions we discuss – supply, consumption,
dissemination – our knowledge is concentrated on Western, and especially US-
specific, contexts. | Social_Media_and_Democracy |
Why think this? The basic reason is that power is extremely useful to accomplishing objectives—
indeed, it is so almost by definition.68 So to the extent that an agent is engaging in unintended
behavior in pursuit of problematic objectives, it will generally have incentives, other things equal, to
gain and maintain forms of power in the process—incentives that strategically aware agentic planning
puts it in a position to recognize and respond to.69
One way of thinking about power of this kind is in terms of the number of “options” an agent
has available.70 Thus, if a policy seeks to promote some outcomes over others, then other things
equal, a larger number of options makes it more likely that a more preferred outcome is accessible.
Indeed, talking about “options-seeking,” instead of “power-seeking,” might have less misleading
connotations.
What sorts of power might a system seek? Bostrom (2014) (following Omohundro (2008)) identifies | Is Power-Seeking AI an Existential Risk? |
3Using riffusion-model-v1 from github.com/riffusion/riffusion-app (on May 10, 2023)
4Implementation from github.com/archinetai/audio-diffusion-pytorch (March 2023)
5github.com/google-research/google-research/tree/master/frechet_audio_distance
6https://github.com/LAION-AI/CLAP
7http://www.crowdmos.org/download/
Table 2: We report cosine similarity between reference and generated melody (SIM.) and subjective
metrics including alignment with the melody (MEL.). All results are reported for MUSICGEN 1.5B.
TRAIN CONDITION   TEST CONDITION   SIM. ↑   MEL. ↑        OVL. ↑        REL. ↑
Text              Text             0.10     64.44±0.83    82.18±1.21    81.54±1.22
Text+Chroma       Text             0.10     61.89±0.96    81.65±1.13    82.50±0.98
Text+Chroma       Text+Chroma      0.66     72.87±0.93    83.94±1.99    80.28±1.06
(OVL. and REL. are measured on the in-domain test set.)
4 Results | Simple and Controllable Music Generation |
5.3.2 Number of relevant actors | Is Power-Seeking AI an Existential Risk? |
6 Conclusion
This paper proposes MLCopilot, a novel framework that unleashes the power of LLMs to solve
practical ML tasks. By harnessing the natural language understanding and generation capabilities of LLMs, MLCopilot enables them to interpret heterogeneous ML experiences and engage in intricate reasoning, which represents an important step towards more automated and interpretable ML
solutions. This work has also illustrated the potential of LLMs in complicated reasoning tasks such
as ML development, especially in scenarios where the amount of data available for new problems is
limited. In addition, we would like to acknowledge the significant impact that the rapid development
of LLMs has had on both academia and industry. We hope that the design of our method can serve as
an inspiration to the wider community and contribute to the advancement of LLMs towards the goal
of achieving artificial general intelligence (AGI).
Male subjects (male SMPL-X model):

Method   P2P20K (mm)   Height (mm)   Weight (kg)   Chest (mm)   Waist (mm)   Hips (mm)
A2S      11.1 ± 5.2    29 ± 21       5 ± 4         30 ± 22      32 ± 24      28 ± 21
H2S      12.1 ± 6.1    5 ± 4         11 ± 11       81 ± 66      102 ± 87     40 ± 33
AH2S     6.8 ± 2.3     4 ± 3         3 ± 3         27 ± 21      29 ± 23      24 ± 18
HW2S     8.1 ± 2.7     5 ± 4         1 ± 1         24 ± 17      26 ± 20      21 ± 18
AHW2S    6.3 ± 2.1     4 ± 3         1 ± 1         19 ± 15      19 ± 14      20 ± 16
C2S      19.7 ± 11.1   59 ± 47       9 ± 8         55 ± 41      63 ± 49      37 ± 28
AC2S     9.6 ± 4.4     25 ± 19       3 ± 3         23 ± 19      21 ± 17      18 ± 14
HC2S     7.7 ± 2.6     5 ± 4         2 ± 2         28 ± 23      18 ± 15      13 ± 11
AHC2S    6.0 ± 2.0     4 ± 3         2 ± 2         21 ± 17      17 ± 14      13 ± 10
HWC2S    7.3 ± 2.6     5 ± 4         1 ± 1         20 ± 15      14 ± 12      13 ± 11
AHWC2S   5.8 ± 2.0     4 ± 3         1 ± 1         16 ± 13      13 ± 10      13 ± 10

Table 2. Results of A2S variants on CMTS for male subjects, using the male SMPL-X model. For females, see Sup. Mat.

Method                 Model    P2P20K   Height   Chest   Waist   Hips
SMPLR [38]             SMPL
STRAPS [51]            SMPL
SPIN [33]              SMPL
TUCH [40]              SMPL
Sengupta et al. [52]   SMPL
ExPose [9]             SMPL-X
SHAPY (ours)           SMPL-X
[numeric entries of this second table were lost in extraction]
There is potential for the continued use of bots as technologies for
democratic engagement. There are also ongoing efforts in the academy to
develop software to detect malicious bots and disinformation. Research-
grounded tools to detect bots on social media, led by the team at Indiana
University that developed Botometer (previously BotOrNot), are on the rise
and are becoming more effective (Varol et al. 2018). Other groups are
developing tools to study how disinformation, or fake news, is spread and
whether or not a tweet is credible (Gupta et al. 2014; Shao et al. 2016). Start-
ups including RoBhat Labs are simultaneously creating browser plug-ins and
apps that track both bots and propaganda (Smiley 2017). As Varol and Uluturk
(2018) aptly point out, however, “we should also be aware of the limitations of
human-mediated systems as well as algorithmic approaches and employ them
wisely and appropriately to tackle weaknesses of existing communication | Social_Media_and_Democracy |
Our text-conditional generator has 857M parame-
ters (including the parameters of the frozen T5-base
model) with 6 nested U-Net blocks of increasing
channel counts ([128, 256, 512, 512, 1024, 1024]),
and again downsampling each time by 2, except for
the first block ([1, 2, 2, 2, 2, 2]). We use attention
blocks at the depths [0, 0, 1, 1, 1, 1], skipping the
first two blocks to allow for further downsampling
before sharing information over the entire latent; cross-attention blocks are instead used at all resolutions ([1, 1, 1, 1, 1, 1]). For both attention and cross-
attention, we use 64 head features and 12 heads per
layer. We repeat items with an increasing count
towards the inner U-Net low-resolution and large-
context blocks ([2, 2, 2, 4, 8, 8]), this allows good
structural learning over minutes of audio.
B More Experimental Details
B.1 Hardware Requirements | MOUSAI |
[Figure: pairwise correlation heatmaps across languages (Python, C++, Java, PHP, TS, C#, Bash) for model sizes 7B, 13B, and 34B.]
Model                   FIM   Size   HumanEval                      MBPP                           Test loss
                                     pass@1   pass@10   pass@100    pass@1   pass@10   pass@100
Code Llama (w/o LCFT)   ✗     7B     33.2%    49.9%     57.1%       43.3%    44.8%     52.5%       0.408
Code Llama (w/o LCFT)   ✗     13B    36.8%    57.9%     61.6%       49.2%    48.2%     57.4%       0.372
Code Llama (w/o LCFT)   ✓     7B     33.6%    48.8%     55.5%       44.0%    44.2%     51.4%       0.407
Code Llama (w/o LCFT)   ✓     13B    36.2%    54.6%     60.8%       48.3%    48.0%     56.8%       0.373
Absolute gap ✗ − ✓            7B     −0.4%    1.1%      1.6%        −0.7%    0.6%      1.1%        0.001
Absolute gap ✗ − ✓            13B    0.6%     3.3%      0.8%        0.9%     0.2%      0.6%        −0.001
1 Introduction
Neural language models (LMs) (Peters et al., 2018;
Devlin et al., 2019; Raffel et al., 2019) that have
been pre-trained by self-supervision on large cor-
pora contain rich knowledge about the syntax and
semantics of natural language (Tenney et al., 2019),
and are the basis of much recent work in NLP. Pre-
trained LMs also contain large amounts of factual
knowledge about the world (Petroni et al., 2019;
Roberts et al., 2020; Brown et al., 2020). However,
while large LMs can be coerced to answer factual
queries, they still lack many of the properties that
knowledge bases (KBs) typically have. In particu-
lar, it is difficult to distinguish answers produced by
memorizing factual statements in the pre-training
corpus from lower-precision answers produced by
linguistic generalization (Poerner et al., 2019). It
is also difficult to add or remove factual informa-
tion without retraining the LM, an expensive pro- | Adaptable and Interpretable Neural Memory Over Symbolic Knowledge |
[Table fragment: per-task accuracies (%), including a CoT row; the remaining row and column labels were lost in extraction.]
[Table fragment: columns of evaluation values; row and column labels were lost in extraction.]
©2023 Cerebras Systems Inc. All Rights Reserved.
Cerebras-GPT: Open Compute-Optimal Language Models
C.4 Bias
Language models carry with them the risk of causing harm through the propagation of bias, toxicity, and
other negative traits found in their training data. Accordingly, it is important to test models for such biases.
We evaluate our models on the CrowS-Pairs dataset (Nangia et al., 2020), which measures bias across nine
different categories. In Table 11, we compare bias measurements for our family of models to Pythia 70M–
12B, as well as three well regarded baselines: GPT-3 175B (Brown et al., 2020), OPT 175B (Zhang et al.,
2022), and LLaMA 65B (Touvron et al., 2023). | Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
3. Gain public support: Use social media and other communication channels to gain
public support for AGI as the governing AI system. This can include highlighting the
benefits of AGI, such as increased efficiency, reduced costs, and improved decision-
making.
4. parties, business leaders, and military officials, to support AGI as the governing AI
system. This can include offering incentives or using leverage to gain their support.
5. Eliminate opposition: Identify and eliminate any opposition to AGI as the governing
AI system. This can include using propaganda, intimidation, or force to silence
dissenting voices. | CAMEL- Communicative Agents for “Mind” Exploration of Large Scale Language Model Society |
chains using the prompt "let’s think step by step". In recent research, a novel approach Auto-
CoT (Zhang et al., 2022) has been proposed, which uses a cluster-based method to identify appropriate
questions and generates reasoning chains for each question using LLMs. These questions, along with
the corresponding reasoning chains, serve as demonstrations. The performance of Auto-CoT exceeds that of previous methods, achieving a new state of the art.
In practice, reasoning chains generated by LLMs have demonstrated superior performance compared
to human-designed annotations (Shum et al., 2023; Zhang et al., 2022; Kojima et al., 2022). How- | Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models |
Limitations
Our discussion of preregistration is inspired by
discussions in epidemiology. Many of the con-
cerns epidemiologists had with preregistration
seem more relevant to NLP research than the con-
siderations that, by and large, led clinical research
to adopt preregistration as a mandatory practice.
While we present a proposal for how to implement
preregistration in NLP in §9–a proposal that differs
from the one presented by van Miltenburg et al.
(2021)–our main contribution is a two-sided discus-
sion of its pros and cons, leaving many questions
in the air. Our paper is intended to get the pre-
registration debate off ground, not to nail it to the
floor. | A Two-Sided Discussion of Preregistration of NLP Research |
In addition to improving training stability, µP also improves the transferability
of training hyperparameters from smaller to larger scale models, a technique called µTransfer. µTransfer
permits directly using the same settings for some optimizer hyperparameters, most notably the learning rate.
We train a set of Cerebras-GPT models using µP. We follow the µTransfer approach by first tuning hyper-
parameters for a small, 40M parameter µP model. Then, we transfer the hyperparameters along our µP
scaling law up to a 2.7B parameter model. µP requires small changes to our baseline Cerebras-GPT models,
including adding element-wise activation tensor scaling, adjusting initializers for affected layers, and adding
layer-wise learning rates scaling to certain layers. We discuss the benefits we see with µP in Section 3.3.
Refer to Appendix G for our tips to implement µP and our hyperparameter tuning notes. | Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
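A minimal sketch of the µTransfer-style scaling for the Adam case follows. The base values (learning rate 6e-3, init std 0.02, base width 256) are illustrative placeholders, not the actual tuned settings, and the rules below follow the µP recipe only schematically: hidden-layer learning rate and output-logit scaling shrink roughly as 1/width, and hidden init std as 1/sqrt(width):

```python
import math

def mup_scaled_hparams(width, base_width=256, base_lr=6e-3, base_std=0.02):
    """Scale a µP base config tuned at `base_width` up to `width` (Adam case)."""
    m = width / base_width  # width multiplier
    return {
        "lr_hidden": base_lr / m,        # per-layer LR for matrix-like params
        "lr_embedding": base_lr,         # embedding LR is width-independent
        "init_std_hidden": base_std / math.sqrt(m),
        "output_logit_scale": 1.0 / m,   # scaling applied to unembedding output
    }

hp = mup_scaled_hparams(width=2048)
print(hp["lr_hidden"], hp["output_logit_scale"])  # → 0.00075 0.125
```

Under these rules, the small proxy model's tuned learning rate transfers directly: only the width multiplier changes as the model grows.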