The "Deploy Only" option lets you deploy AI model inference endpoints or self-host models on your own infrastructure, either using your own GPUs or renting compute on demand from our decentralized GPU marketplace.
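Once a model is deployed as an inference endpoint, clients typically query it over HTTP. The sketch below is illustrative only: the endpoint URL, API key, and OpenAI-compatible request shape are assumptions for the example, not our documented API, so substitute the values shown on your deployment's dashboard.

```python
import json
import urllib.request

# Hypothetical values -- replace with the URL and key from your
# deployment's dashboard (assumptions for this example only).
ENDPOINT_URL = "https://your-endpoint.example.com/v1/chat/completions"
API_KEY = "your-api-key"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query_endpoint(body: dict) -> dict:
    """POST the request body to the inference endpoint and parse the JSON reply."""
    req = urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    body = build_chat_request("deepseek-r1", "Summarize this FAQ in one sentence.")
    # reply = query_endpoint(body)  # requires a live deployment
    print(json.dumps(body, indent=2))
```

The same request body works whether the endpoint runs on your own GPUs or on rented marketplace compute, since only the URL changes.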
Yes. You can deploy AI models from our marketplace on your own GPUs for full control over privacy and security. Alternatively, you can rent additional compute from our decentralized GPU marketplace on demand. Models can also be hosted on a distributed cluster that combines your own and rented compute.
We support a diverse and continuously updated catalog of AI models, including the latest foundation models such as DeepSeek R1, DeepSeek V3, Qwen 2.5, Seamless M4T, and SAM. The model marketplace is an open P2P marketplace, so anyone can monetize their models as long as they qualify. You can always test these models in our demo environment before deploying.
Yes! You can test any model in our real-time demo environment, letting you validate performance and suitability before committing resources to deployment.
Absolutely. If you’re a model creator, you can list your models on our model marketplace and monetize them. Other users can then deploy your models for a variety of purposes, such as deploying inference endpoints, auto-labeling, auto-training, and fine-tuning.
If you need to customize a model before deployment, select the "Fine-Tune and Deploy" option instead. This allows you to fine-tune your chosen model with your own data before launching it into production.
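Fine-tuning workflows like the one above usually start from a dataset of prompt/response pairs, often supplied as JSON Lines with one example per line. The snippet below is a generic sketch of that format; the exact schema your "Fine-Tune and Deploy" run expects may differ, and the example records here are invented for illustration.

```python
import json

# Illustrative training examples in a common chat-style JSONL layout.
# The field names ("messages", "role", "content") follow a widely used
# convention, not a documented requirement of this platform.
examples = [
    {"messages": [
        {"role": "user", "content": "What does 'Deploy Only' do?"},
        {"role": "assistant", "content": "It deploys a model without fine-tuning."},
    ]},
    {"messages": [
        {"role": "user", "content": "Can I use my own GPUs?"},
        {"role": "assistant", "content": "Yes, or rent compute from the marketplace."},
    ]},
]

def to_jsonl(records: list) -> str:
    """Serialize records as JSON Lines: one training example per line."""
    return "\n".join(json.dumps(r) for r in records)

if __name__ == "__main__":
    # Write the dataset to upload alongside your fine-tuning job.
    with open("train.jsonl", "w", encoding="utf-8") as f:
        f.write(to_jsonl(examples))
```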