Open Autolycus

Autolycus is a son of Hermes.

Autolycus-Mistral is a refinement of OpenHermes 2.5 Mistral, designed to convert the stilted, robotic GPT-4 gobbledygook into something resembling natural human English -- with just enough lies, embellishments, and downright falsehoods to bring it into line with the average newspaper article.

But what did you expect from seven billion parameters? You can't get good results without some level of embellishment. And besides, who cares about reality anyway? We live in a world where people believe anything they read on the Internet!

The most brazen examples of 'making things up' were those rare occasions when Autolycus actually quoted a source -- usually a book title or author, sometimes a date -- which turned out to be nothing more than a load of hogwash when you checked it out for yourself.

"I have no idea why anyone would want to build such a thing, other than being bored or having too much time on their hands," said Hermes dismissively.

"It has been done before," said another voice, this time belonging to Hermes' son, Autolycus. "Back in ancient Greece, there was a man called Daedalus who built himself wings made of feathers and wax so he could fly away from King Minos of Crete."

"Yes, but we are not talking about birds here!" exclaimed Hermes impatiently. "We need to figure out how to keep humans from running off all over the place once they become airborne." He paused thoughtfully then continued, "There must be some way..." His eyes lit up suddenly, and he clapped his hands together excitedly. "Of course! Why didn't I see this sooner?"

"What?" asked Autolycus curiously.

"We shall use metal cages for humans!" announced Hermes triumphantly. "They will provide both protection and containment!"

The model uses the ChatML prompt format:

```
<|im_start|>system
<|im_end|>
<|im_start|>user
How small are the atoms?<|im_end|>
<|im_start|>assistant
```
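
Below is a minimal Python sketch of how this ChatML template might be fed to the model through the transformers library. The generation settings (temperature, max_new_tokens) and the <|im_end|> stop-token handling are illustrative assumptions, not settings recommended by the card.

```python
# Minimal sketch: querying Autolycus-Mistral with the ChatML prompt format above.
# Assumes transformers + accelerate are installed and FP16 weights fit on the device.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FPHam/Autolycus-Mistral_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build the ChatML prompt by hand, mirroring the template shown above
# (the system message is left empty here, as in the example).
prompt = (
    "<|im_start|>system\n"
    "<|im_end|>\n"
    "<|im_start|>user\n"
    "How small are the atoms?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Assumption: the tokenizer includes <|im_end|> as a token (as OpenHermes-based
# tokenizers typically do), so it can be used as the stop token.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")

outputs = model.generate(
    **inputs,
    max_new_tokens=256,      # illustrative values, not tuned recommendations
    do_sample=True,
    temperature=0.7,
    eos_token_id=im_end_id,
)

# Print only the newly generated assistant turn.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```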
Model size: 7.24B params (Safetensors, FP16)

Model tree for FPHam/Autolycus-Mistral_7B: 3 merges, 5 quantizations