NVIDIA has introduced Chat with RTX, allowing users to build their own personalized chatbot experience. Unlike many cloud-based solutions, Chat with RTX operates entirely on a local Windows PC or workstation, offering enhanced data privacy and control.
This new app empowers users to personalize a large language model with their own data, including documents, notes, YouTube video transcripts, and more. By feeding the app personal content, users can cultivate a chatbot tailored to their specific needs and knowledge base, unlocking a new level of personalized interaction.
Leveraging advanced technologies like Retrieval-Augmented Generation (RAG), TensorRT-LLM, and RTX acceleration, the app delivers rapid and accurate responses to user queries. This powerful combination enables efficient retrieval of relevant information from the personalized dataset, resulting in contextual and insightful answers.
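The RAG pattern described above can be sketched in a few lines. The following minimal example is illustrative only and does not reflect NVIDIA's actual implementation: it scores document chunks by word overlap (a stand-in for the embedding similarity a production pipeline would use) and prepends the best match to the prompt, so the model answers from the user's local data.

```python
def score(query, chunk):
    """Rank a chunk by word overlap with the query (a simple stand-in
    for the vector similarity a real RAG pipeline would compute)."""
    q_words = set(query.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words)

def retrieve(query, chunks, top_k=1):
    """Return the top_k most relevant chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]

def build_prompt(query, chunks):
    """Augment the user query with retrieved context before it is
    handed to the language model."""
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical local "dataset" of user content
chunks = [
    "Meeting notes: the launch is scheduled for March.",
    "Recipe: mix flour, eggs, and milk for pancakes.",
]
prompt = build_prompt("When is the launch scheduled?", chunks)
```

In a full pipeline, the augmented prompt would then be passed to the locally hosted model; retrieval quality, not model size, often determines how grounded the answers are.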
Furthermore, Chat with RTX emphasizes data protection. Running locally eliminates the need for cloud storage, keeping information under direct user control. This localized processing provides a significant advantage over many cloud-based chatbot solutions, especially for those who prioritize data privacy.
Chat with RTX also supports a wide range of file formats, including text, PDF, DOC/DOCX, and XML, ensuring compatibility with various content types. Additionally, the app integrates YouTube video transcripts, expanding the chatbot's knowledge base with information from preferred channels.
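Before documents or transcripts can be queried, they are typically split into overlapping chunks for indexing. The sketch below is a generic preprocessing example, not NVIDIA's code; the chunk and overlap sizes are arbitrary illustrations.

```python
def chunk_text(text, size=50, overlap=10):
    """Split text into chunks of `size` words, overlapping by `overlap`
    words so a sentence cut at a boundary still appears whole in at
    least one chunk."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + size])
        if chunk:
            chunks.append(chunk)
        if start + size >= len(words):
            break
    return chunks

# A stand-in for a saved YouTube transcript
transcript = " ".join(f"word{i}" for i in range(120))
chunks = chunk_text(transcript)
```

Overlap between chunks is a common design choice: it trades a little index size for better recall when relevant phrases straddle chunk boundaries.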
Developers can also tap into the potential of Chat with RTX. Built upon the open-source TensorRT-LLM RAG Developer Reference Project, the app serves as a springboard for crafting custom RAG-based applications that further harness the power of RTX acceleration.
The news of NVIDIA's Chat with RTX app has received positive responses from the developer community. Gradio, a visual toolkit for building user interfaces for machine learning models, expressed its excitement on X:
We are super excited to see Gradio featured on official announcements from @NVIDIAStudio on the Chat with RTX tech demo! You can also locally explore RAG using Nvidia and Gradio.
Chat with RTX is seen as a valuable tool for building and interacting with custom language models, and leveraging open-source projects can further enhance its potential.
Chat with RTX works like a local web server with a Python instance. After downloading it, users have to download the Mistral or Llama 2 models separately, which the app then runs against the data provided by the user.