FAQ
Straightforward answers to general queries.
How is TES AI different from AWS?
TES AI offers a fundamentally different approach to cloud computing, leveraging a distributed and decentralized model that gives users more control and flexibility. Our services are also permissionless and cost-efficient. The combination of these factors sets TES AI in its own league of decentralized providers.
Who are your target customers?
Ultimately, anyone looking to create or operate an ML model or AI app is a potential customer. Given the explosion of “no-code tools” like Predibase and user-friendly model creation platforms like Hugging Face, this will eventually be a massive market.
How do you manage availability and allocation across your global network of GPUs?
TES AI connects a global network of clients to a global network of suppliers. We deploy our container on each worker machine, allowing the TES AI Virtual Network to integrate and monitor every device's availability across the network. Our algorithm intelligently groups resources matching the selections made by the engineer and glues them into a cluster, all within 90 seconds. Our networking solution has been thoroughly tested and found reliable.
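As a rough illustration of the matching step described above, the sketch below filters a pool of workers by an engineer's selections and groups the matches into a cluster. All names here (the `Worker` fields, the selection keys) are illustrative assumptions, not TES AI's actual API.

```python
# Hypothetical sketch: filter the worker pool by the engineer's selections,
# then group the matches into a cluster. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Worker:
    gpu_model: str
    region: str
    green: bool
    available: bool

def build_cluster(pool, selections, size):
    """Return up to `size` available workers matching every selection."""
    matches = [
        w for w in pool
        if w.available
        and w.gpu_model == selections["gpu_model"]
        and w.region == selections["region"]
        and (w.green or not selections["green_only"])
    ]
    return matches[:size]

pool = [
    Worker("A100", "us-east", green=True, available=True),
    Worker("A100", "us-east", green=False, available=True),
    Worker("H100", "eu-west", green=True, available=True),
]
cluster = build_cluster(
    pool,
    {"gpu_model": "A100", "region": "us-east", "green_only": True},
    size=2,
)
print(len(cluster))  # → 1 (only one available green A100 in us-east)
```

In practice the real scheduler would also weigh connectivity, compliance, and pricing, but the core idea is the same: select, group, and hand back a ready cluster.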
How flexibly can clients create their GPU clusters?
Clients can create their cluster with unmatched flexibility through a set of selections and options: cluster type by use case, sustainability (e.g., “Green GPUs” powered by 100% clean energy), geographic location, security compliance level (SOC2, HIPAA, end-to-end encryption), connectivity tier, and cluster purpose (we are expanding into other use cases). Our out-of-the-box configuration requires no additional setup by our clients to deploy the cluster.
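To make the selection dimensions above concrete, a cluster request might look like the fragment below. The field names and values are assumptions for illustration, not TES AI's actual request schema.

```python
# Illustrative cluster request covering the selection dimensions above.
# Field names are hypothetical, not TES AI's real schema.
cluster_request = {
    "use_case": "training",           # cluster type by use case
    "sustainability": "green",        # 100% clean-energy "Green GPUs"
    "location": "us-east",            # geographic location
    "compliance": ["SOC2", "HIPAA"],  # plus end-to-end encryption
    "connectivity_tier": "ultra",
    "gpu_count": 8,
}
print(cluster_request["use_case"])  # → training
```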
What's the maximum number of GPUs allowed in a single cluster?
There is no maximum number; your cluster is limited only by the supply available on the network.
How long does it take to create a cluster of GPUs?
Creating a Cluster with TES AI takes less than 90 seconds.
How do you actually parallelize? / How are you connecting all the GPUs together?
Through distribution and decentralization: we leverage specialized libraries for data streaming, training, fine-tuning, hyperparameter tuning, and serving. Combined with our technology, this simplifies developing and deploying large-scale AI models over a massive grid of GPUs.
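The distributed-training libraries mentioned above generalize a scatter/gather pattern across machines: split the work into shards, run the shards in parallel, then combine the partial results. The stdlib sketch below shows that pattern on a single machine; it is a conceptual illustration only, not TES AI's actual stack, and `train_shard` is a hypothetical stand-in for a per-GPU work unit.

```python
# Minimal scatter/gather sketch: split work into shards, run them in
# parallel, combine the partial results ("all-reduce"). Illustrative only.
from concurrent.futures import ProcessPoolExecutor

def train_shard(shard):
    # Stand-in for a per-GPU work unit (e.g., one gradient step on a shard).
    return sum(x * x for x in shard)

def parallel_train(data, workers=4):
    shards = [data[i::workers] for i in range(workers)]  # scatter
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(train_shard, shards))   # parallel compute
    return sum(partials)                                 # gather / reduce

if __name__ == "__main__":
    print(parallel_train(list(range(10))))  # → 285
```

Over a real GPU grid, the sharding, transport, and reduction steps are handled by the specialized libraries rather than a local process pool.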
How do you preserve data privacy and security?
Our TES AI agent ensures that no unauthorized containers run on a hired GPU, eliminating that class of risk. When a node is hired, data passed between worker nodes is encrypted within the Docker file system, and all network traffic travels over a mesh VPN, ensuring maximum security.