Multi-GPU inference

#4
by himasai9711 - opened

Hey @ehartford, can you help me with inference speed? I'm running inference on an AWS g6.12xlarge instance, and extracting data from an image to JSON takes about 42 seconds per image. How can I speed this up? Also, since the instance has 4 GPUs, can we run the model across multiple GPUs? Any help would be appreciated. A sketch of what I have in mind is below.
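A minimal sketch of sharding the model across all four GPUs with `transformers` + `accelerate` (`device_map="auto"`), assuming this is a standard vision-language checkpoint loadable with `AutoModelForCausalLM`/`AutoProcessor`. The model id, image path, prompt, and processor call format below are placeholders/assumptions, not this repo's actual values:

```python
# pip install transformers accelerate pillow
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

# Placeholder id -- substitute the actual checkpoint from this repo.
model_id = "your-org/your-vision-language-model"

# device_map="auto" lets accelerate shard the weights across all visible GPUs
# (4x L4 on a g6.12xlarge); fp16 halves memory and speeds up matmuls.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("document.png")  # placeholder image path
prompt = "Extract the fields from this image and return them as JSON."

# Exact processor signature depends on the model; text+images is the common pattern.
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=512)

print(processor.decode(output[0], skip_special_tokens=True))
```

Note that `device_map="auto"` splits the layers across GPUs (pipeline-style), which mainly helps with memory rather than latency; for lower per-image latency or higher throughput, a serving stack with tensor parallelism (e.g. vLLM with `tensor_parallel_size=4`) or batching several images per `generate` call is usually the bigger win.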

It wasn't me! 😁
