# Inference API
Use Inference API when you want programmatic access to RemoteGPU models without operating Kubernetes directly.
This product path is designed for developers, application teams, and integrators who want HTTP-based model access with API keys.
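As a sketch of what "HTTP-based model access with API keys" looks like in practice, the snippet below assembles an authenticated request with Python's standard library. The base URL, endpoint path, and payload shape are placeholders, not the real RemoteGPU values; take the actual endpoint from the console's Inference API / Image page and the key from Settings / API Keys.

```python
import json
import urllib.request

# Placeholder values -- substitute the endpoint and key shown in your console.
API_BASE = "https://remotegpu.example.invalid/v1"
API_KEY = "your-api-key"

def build_image_request(prompt: str) -> urllib.request.Request:
    """Assemble an authenticated image-generation request (not sent here)."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/images/generations",  # hypothetical path
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_image_request("a watercolor fox")
print(req.get_method())                 # POST
print(req.get_header("Authorization"))  # Bearer your-api-key
```

Sending the request is then a single `urllib.request.urlopen(req)` call (or the equivalent in your HTTP client of choice); no cluster access or Kubernetes tooling is involved.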
## Open in the console
| Task | Console page |
|---|---|
| Review image models and runtime status | Inference API / Image |
| Create the required API key | Settings / API Keys |
## Choose this path when
| You want to... | Use Inference API if... |
|---|---|
| call models from your app or backend | you want an HTTP API instead of a hosted application |
| automate prompts and request parameters | you are comfortable with API keys and request payloads |
| avoid Kubernetes operations | you do not want to run namespace-scoped workloads yourself |
## How this product path works
| Area | What to expect |
|---|---|
| Main interface | HTTP API, with console support for visibility |
| Authentication | API key |
| Kubernetes knowledge required | No |
| Runtime model | RemoteGPU serves the model and executes requests for you |
## Current API guides
| Guide | Use it for |
|---|---|
| Image inference | Send image-generation requests and poll job status |
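Image generation is asynchronous: you submit a job, then poll its status until it finishes. The sketch below shows the polling pattern only; the status-fetching call, polling interval, and status strings (`pending`, `running`) are assumptions standing in for whatever the Image inference guide specifies, so the real HTTP call is abstracted behind a callable.

```python
import time
from typing import Callable

def poll_job(fetch_status: Callable[[], str],
             interval_s: float = 2.0,
             max_attempts: int = 30) -> str:
    """Poll until the job leaves the in-progress states, then return its status.

    fetch_status is whatever call returns the job's current status string --
    for the Image inference API this would be a GET on the job's status URL.
    The in-progress status names used here are assumptions; check the guide.
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status not in ("pending", "running"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("job did not finish within the polling budget")

# Stubbed status source standing in for the real HTTP call:
states = iter(["pending", "running", "succeeded"])
print(poll_job(lambda: next(states), interval_s=0.0))  # succeeded
```

Bounding the loop with `max_attempts` (rather than polling forever) keeps a stuck job from hanging your client; tune the interval to whatever the guide recommends to avoid hammering the endpoint.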
## How this differs from the other product paths
| Product path | Best for | You manage |
|---|---|---|
| Application | Guided hosted workflows | Very little beyond normal console actions |
| Inference API | Semi-professional and programmatic use | API calls, request payloads, and keys |
| Kubernetes | Professional operators | Native Kubernetes workloads and networking resources |
## Read next
- Read Authentication overview if you need to decide which API key to create.
- Read Image inference to send your first request.
