Local AI (NVidia CUDA 12)

    Local AI is an open-source, self-hostable, OpenAI API compatible service with a WebUI for chatting, generating images, and more. It works best with the DataCrunch provider and GPU instances.
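
    Because the service exposes an OpenAI-compatible API, a quick way to verify a deployment is to list the models it can serve. The sketch below is a minimal example, assuming the API is reachable on port 8080 (LocalAI's default), that no API key is required, and that "your-server-ip" is replaced with your instance's address.

        import requests

        # Placeholder address of the deployed instance
        # (assumptions: port 8080, no authentication configured)
        BASE_URL = "http://your-server-ip:8080"

        # The OpenAI-compatible /v1/models endpoint lists available models
        response = requests.get(f"{BASE_URL}/v1/models", timeout=10)
        response.raise_for_status()

        for model in response.json().get("data", []):
            print(model["id"])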

    Requirements

    RAM: 4096 MB
    CPU: 2 vCPU
    Disk: 60 GB

    Template Provider: DollarDeploy

    Approx. deploy time: ~20 minutes

    About Local AI (NVidia CUDA 12)

    Local AI is an open-source, self-hostable, OpenAI API compatible service with a chatbot UI. This template is compatible with servers that have an NVidia GPU with CUDA 12 configured. We tested it with the DataCrunch provider in DollarDeploy, and it works well with GPU instances of type A6000 and better (1€/hr).
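
    Since the service speaks the OpenAI API, existing OpenAI client libraries can be pointed at it by overriding the base URL. The sketch below is illustrative only: it assumes the instance is reachable at http://your-server-ip:8080/v1, that no real API key is configured (the client still requires a non-empty string), and that a model named "gpt-4" has been set up on the instance; adjust these values to your deployment.

        from openai import OpenAI

        # Point the official OpenAI client at the self-hosted instance
        # (assumptions: host address, port 8080, and the configured model name)
        client = OpenAI(
            base_url="http://your-server-ip:8080/v1",
            api_key="not-needed",  # placeholder; use a real key if the instance requires one
        )

        completion = client.chat.completions.create(
            model="gpt-4",  # must match a model configured on the instance
            messages=[{"role": "user", "content": "Say hello from my own GPU server."}],
        )

        print(completion.choices[0].message.content)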

    About DollarDeploy

    DollarDeploy allows you to easily deploy and manage apps on your own VPS without the need for SSH access. Deploy Local AI (NVidia CUDA 12) with just a few clicks and start building your solution today!