Launch
Launch swiftly pairs AI tasks with the most economical GPU resources, auto-provisions, and effortlessly runs the job, eliminating complex environment setup and management. It supports a range of compute-intensive tasks for generative AI and LLMs, such as large-scale training, serverless deployments, and vector DB searches. TensorOpera Launch also facilitates on-prem cluster management and deployment on private or hybrid clouds.

Quickstart

1. Set up the fedml library

If you run into any issues, please refer to the Installation guide: https://docs.tensoropera.ai/open-source/installation.
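The library is published on PyPI as `fedml`, so a standard pip install is all the setup typically required (use a virtual environment if you prefer isolation):

```shell
# Install the fedml library, which provides the `fedml` CLI used below
pip install fedml
```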

2. You can now initiate your run with a single command:
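A minimal sketch of that single command, following the Launch documentation; the API-key environment variable name and the `job.yaml` filename are placeholders for your own values:

```shell
# One-time setup: authenticate the CLI with your TensorOpera account API key
fedml login $TENSOROPERA_API_KEY

# Launch the job described in the YAML file; Launch matches it to GPU
# resources, provisions them, and starts the run
fedml launch job.yaml
```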

3. The job YAML file is defined as follows:
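A minimal example of the job YAML, with field names as described in the Launch docs; the workspace path, commands, and resource values below are illustrative and should be replaced with your own:

```yaml
# Example job.yaml (illustrative values; adjust for your own job)
workspace: hello_world            # directory containing your source code

job: |                            # the command(s) that run the job itself
  echo "Starting the job..."
  python hello_world.py

bootstrap: |                      # environment setup run before the job
  pip install -r requirements.txt

computing:
  minimum_num_gpus: 1             # minimum number of GPUs to match
  maximum_cost_per_hour: $1.75    # maximum price you are willing to pay
  resource_type: A100-80G         # GPU type to match on
```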

4. You can monitor your runs in the UI or with the command line:

With UI:

- For a Train job, go to the Run page to see the run details (metrics, logging, etc.).
- For a Deploy job, go to the Endpoint page to see the run details.
- For a Federate job, go to the corresponding page to see the run details.

With CLI:
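A sketch of the relevant CLI commands, assuming the `fedml run` subcommands documented for the fedml CLI (verify with `fedml run --help` in your installed version); `<run_id>` is a placeholder for an ID from your own run list:

```shell
# List your recent runs and their statuses
fedml run list

# Fetch the logs of a specific run by its run ID
fedml run logs -rid <run_id>
```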

Please refer to https://docs.tensoropera.ai/launch for more guidance.