The "Deploy Only" option lets you deploy AI models inference endpoints or self host models on your own infra, either using your own GPUs or renting on-demand from our decentralized GPU marketplace.
Yes, you can deploy AI models from our marketplace on your own GPUs for full control over privacy and security. Alternatively, you can rent additional compute power on demand from our decentralized GPU marketplace. Models can also be hosted on a distributed cluster that combines your own compute with rented compute.
Yes! Before deployment, you can test any model in our real-time demo environment. This lets you validate performance and suitability before committing resources.
Absolutely. If you’re a model creator, you can list your models on our model marketplace and monetize them. Other users can then deploy your models for different purposes, such as serving inference endpoints, auto-labeling, auto-training, and fine-tuning.
If you need to customize a model before deployment, select the "Fine-Tune and Deploy" option instead. This allows you to fine-tune your chosen model with your own data before launching it into production.