Matt

mattding91

AI & ML interests

onnx, gguf, gptq, lora

Recent Activity

liked a model 5 days ago: NexaAIDev/omnivision-968M
liked a model 5 days ago: stepfun-ai/GOT-OCR2_0
liked a model 5 days ago: vikhyatk/moondream2

Organizations

None yet

mattding91's activity

Reacted to prithivMLmods's post with 👍 5 days ago
The (768 x 1024) mix of MidJourney and Flux LoRA is nearly identical to the actual visual design. It hasn't undergone much concept-art development yet. In the meantime, try out the impressive visual designs on:

🥚Midjourney Flux Mix : prithivMLmods/Midjourney-Flux

🥚Adapter for FluxMix : strangerzonehf/Flux-Midjourney-Mix-LoRA

🥚Flux LoRA Collection: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be
@prithivMLmods 🤗
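A minimal sketch of trying the adapter with diffusers, assuming the LoRA targets the FLUX.1-dev base model and a CUDA GPU with enough memory is available; the prompt and output filename are placeholders, and the adapter repo id comes from the post above:

import torch
from diffusers import FluxPipeline

# Load the Flux base pipeline (assumption: the adapter targets FLUX.1-dev).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
# Attach the Midjourney-mix LoRA adapter linked in the post.
pipe.load_lora_weights("strangerzonehf/Flux-Midjourney-Mix-LoRA")
pipe.to("cuda")

# 768 x 1024 matches the mix dimensions mentioned above.
image = pipe(
    "midjourney mix style, concept art of a coastal city at dusk",  # placeholder prompt
    width=768,
    height=1024,
    num_inference_steps=30,
).images[0]
image.save("flux_midjourney_mix.png")  # placeholder filename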
Reacted to rwightman's post with 🚀 5 days ago
New MobileNetV4 weights were uploaded a few days ago -- more ImageNet-12k training at 384x384 for the speedy 'Conv Medium' models.

There are 3 weight variants here for those who like to tinker. On my hold-out eval they are ordered as below; they're not that different, but the Adopt 180-epoch run is closer to AdamW 250 than to AdamW 180.
* AdamW for 250 epochs - timm/mobilenetv4_conv_medium.e250_r384_in12k
* Adopt for 180 epochs - timm/mobilenetv4_conv_medium.e180_ad_r384_in12k
* AdamW for 180 epochs - timm/mobilenetv4_conv_medium.e180_r384_in12k

This was by request, as a user reported impressive results using the 'Conv Large' ImageNet-12k pretrains as object detection backbones. ImageNet-1k fine-tunes are pending; the weights do behave differently with 180 vs 250 epochs and the Adopt vs AdamW optimizer.
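A minimal sketch of loading one of these checkpoints with timm, assuming a recent timm (>= 1.0) with Hub access; the random tensor is a stand-in for a real preprocessed image:

import timm
import torch

# Pull the AdamW 250-epoch variant from the list above.
model = timm.create_model(
    'mobilenetv4_conv_medium.e250_r384_in12k',
    pretrained=True,
).eval()

# Build the matching eval transform from the model's pretrained config.
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

with torch.no_grad():
    x = torch.randn(1, 3, 384, 384)  # stand-in for transform(img).unsqueeze(0)
    logits = model(x)                # ImageNet-12k class logits

# For backbone use, as in the detection report above, features_only=True
# returns feature maps at multiple strides instead of classification logits.
backbone = timm.create_model(
    'mobilenetv4_conv_medium.e250_r384_in12k',
    pretrained=True,
    features_only=True,
).eval()
feats = backbone(x)  # list of tensors, one per feature stage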