Please authorize access to the base weights!
Hello!
Access to the instruct model is automated, but not to the base.
Why?
This model shows amazing stats in the benchmarks you shared, but it would be great to see what the base can do too.
Thank you!
Still waiting for access!!
Please authorize us! So much awesome potential here
Please let Undi access the base model's weights. There is very little point in holding weights hostage like that.
I would also like to nominate celebrity chef Undi for access to these weights. (Yes, I'm serious.)
Also still waiting for access to this base model.
Waiting for access to this base model :/
Dear Databricks Team,
I hope this message finds you well. I am writing to you today to express my concerns regarding the current process for accessing the base dbrx model. As an AI language model enthusiast, I have been eagerly following the developments in this field and have a keen interest in exploring the capabilities of your model.
However, I have come to understand that the current bureaucratic procedures in place for granting access to the base dbrx model have become a significant hindrance for many interested parties. The manual approval process seems to be overwhelmed by the volume of applications, leading to prolonged waiting times and a lack of timely responses.
In light of this situation, I kindly request that you consider implementing a more streamlined and efficient approach to granting access to the base dbrx model. It appears that your organization may not have the resources to meticulously review and approve each application by hand. Therefore, I propose that you explore the possibility of making access to the base dbrx model automatically approvable for qualified individuals and organizations.
Furthermore, I would like to point out that the manual approval process seems needless, especially considering that Databricks is far smaller in scale than tech giants like Meta and other large corporations that maintain a much larger bureaucratic apparatus. It is also worth noting that Mistral, another prominent player in the AI language model space, has made its models publicly accessible without any approval process whatsoever. Despite this open approach, it has not encountered any significant issues or misuse of its models.
By following the examples set by these industry leaders, Databricks can streamline its access process, reduce unnecessary bureaucracy, and foster a more open and collaborative environment for AI research and development.
I kindly request that you reconsider the necessity of the manual approval process and take steps to make the base dbrx model more readily accessible to the AI community. By doing so, you will not only align yourself with the practices of your peers but also demonstrate your commitment to driving innovation and progress in the field.
Thank you for your attention to this matter. I eagerly await your response and hope that we can work together to find a solution that benefits both Databricks and the broader AI community.
Yours sincerely,
Charles McSneed
Yep, still waiting for access.
Please open the weights of the base model! ☺️
wow, I'm early to the party this time!
Am I too late?
No
No bro, we're all fucking waiting.. wtf
Genuine question, since there are some true legends in this thread. Other than bad logistics in authorizing us, could there be a real reason to limit access to the base model but not the instruct? I'm sure a few of us intended to rip all its layers with our own custom datasets and see what happens. Besides furthering research and open experimentation, can anyone see a downside to letting us have a go and seeing its true potential?
Yep :D waiting for access to this base model :/
+1, waiting for access
I feel like I'm waiting in line for the latest iPhone...
I don't know; maybe they don't want us to compete with their own Instruct model (at least right now). I find that ridiculous.
They should have uploaded only the Instruct version if they didn't want us to have the base yet. Selecting only some people and gatekeeping like this will make us mad: why can some others access it but not us? Open is open. If you want your model to be open, it should be open for everyone, not for a handful of selected people.
I also assume there may be a strategy to make some noise: they release the Instruct, people talk about it, they want the base next, it's closed, we whine, it makes even more noise, and it's free advertising for them, kek.
Also (since that's my own field), an uncensored or ERP model made from this right after the release could give them a bad reputation among people who don't know much about how LLMs work and will assume it comes from them too.
That's the only explanation I have right now.
I think I sent my request within the first minutes it became available! For the reason, I said "Fine-tuning", which I now kinda regret! (I mean, what else would you need a base/pretrained model for?)
I think it's a logistics issue and there are just too many requests to go through. At some point someone is going to leak the weights on torrents, they will start popping up on HF, and this will all go away. (Which is why I would highly recommend removing this verification step today, before that happens, so you get all the credit you deserve.)
waiting for access
Thanks for your input! I can see the logic in your thinking; I just really hope that's not the case. I understand wanting to be careful about how your model is reflected and portrayed through different fine-tunes, so you don't get a bad reputation. That being said, I know that a lot of us creators really make sure to differentiate and label our different versions and the inherent biases/issues/unsavory tasks they may contain. It's important for users to know how the models change based on our tunings and experiments.
Hopefully, this was just a logistical issue and they are happy to see what we can do.
By the way, thanks for all your great work!
I felt the same once I realized the situation we're in! Haha. But what else would we use the base for, right? @Undi95 brought up a great point about unsavory fine-tunes and possibly just trying to make some noise. Hope that's not the case! Really excited to see the merges and tunes you have in mind for this one. Thanks for all you do for this community!
Maybe we should just be patient. How can we be entitled to something we didn't even know existed a day ago?
Anticipation is the greatest joy...
yes, I just got in. So it is coming.
I got it too :)
FWIW my friend got it, so it seems like a case of slow processing is all.
Waiting for access to the base model; I got access to the instruct.
Still waiting :-(
Wow. Looks huge 😲 Can't wait to test my code with it 😱
Hello AI buddies and @Undi95,
I just want to understand: have you used DBRX for text summarization or grammar correction in any specific domain, even something like healthcare?
Kindly let me know how well DBRX performs on these tasks, how you would rate it, and the rationale behind your rating.
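For reference, here is roughly the setup I plan to use for my own test. It's just a sketch: the model id, the bfloat16 multi-GPU loading, and the prompt below are my own assumptions, not an official recipe.

```python
# Rough sketch: prompting DBRX Instruct for domain summarization via transformers.
# Assumes access to the gated databricks/dbrx-instruct repo and enough GPU memory
# to shard the model in bfloat16; smaller setups would need quantization instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",          # shard the weights across available GPUs
    trust_remote_code=True,
)

document = "..."  # e.g. a de-identified healthcare note or report
messages = [
    {
        "role": "user",
        "content": f"Summarize the following text in three sentences:\n\n{document}",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same loop with a "correct the grammar of the following text" prompt would cover the grammar-correction case; I'd rate it by comparing the outputs against a strong baseline on the same documents rather than relying on benchmark numbers alone.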
I'm still waiting :-( Please authorize access to the base weights!
Check Undi's profile -- He's got a mirror of the model up.
How much time do they need to process the request? It's been 3-4 days since I applied for authorization.
Been waiting forever for access. Doesn't inspire a lot of confidence...
+1, waiting for access
still waiting.