Between October 28th and 31st, VESSL AI had the opportunity to co-sponsor the highly anticipated Agentic RAG-a-Thon 2024↗ in the heart of Silicon Valley. Held at the 500 Global headquarters, the event was one of the highlights of SF Tech Week, featuring major tech firms and AI startups including OpenAI, SAP, Together AI, Mistral AI, and Arize.
This hackathon, hosted in collaboration with LlamaIndex and Pinecone, brought together top enterprise developers and AI enthusiasts to explore the potential of Retrieval-Augmented Generation (RAG) and agentic systems. It was a space for innovation, creativity, and, most importantly, pushing the boundaries of enterprise AI.
The Energy of the Hackathon
RAG-a-Thon was filled with energy from the very start. Over 560 developers signed up, and with about 200 participants attending in person, the event was buzzing with activity.
Participants came from diverse backgrounds, mostly enterprise developers and product managers from companies based in the South Bay. Their passion for AI and eagerness to tackle complex problems were evident, and it was inspiring to see these teams work tirelessly to create cutting-edge solutions.
As a Sponsor
VESSL AI was proud to be a key sponsor at the RAG-a-Thon. As part of our sponsorship, we hosted a hands-on workshop on AI and ML workloads and introduced developers to VESSL AI's ML operations platform. Throughout the event, we supported participants with our tools, showcasing how to seamlessly run custom AI workloads with VESSL features such as multi-agent orchestration, LLM serving, and on-the-fly model fine-tuning.
One highlight was our demo, "How to Run AI Workloads on VESSL AI," which showed developers how to fine-tune Llama 3.1 models↗ and serve them as OpenAI-compatible APIs. It was rewarding to see teams leverage VESSL's control plane to simplify complex workflows and accelerate their projects.
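Because the served model speaks the standard OpenAI chat-completions protocol, any OpenAI-compatible client can talk to it. The sketch below shows the idea in plain Python; the endpoint URL and model name are placeholders, not real VESSL values — the actual URL comes from your own deployment.

```python
import json
import urllib.request

# Placeholder endpoint and model name for a fine-tuned Llama 3.1 model
# served behind an OpenAI-compatible API (hypothetical values -- substitute
# the URL and model name from your own deployment).
API_BASE = "https://example-endpoint.vessl.ai/v1"
MODEL = "llama-3.1-8b-finetuned"

def build_chat_request(prompt: str, api_key: str = "EMPTY") -> urllib.request.Request:
    """Build a standard OpenAI-style /chat/completions request."""
    body = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        url=f"{API_BASE}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending the request (urllib.request.urlopen(req)) returns the same JSON
# shape the OpenAI API does, so existing client code works unchanged.
req = build_chat_request("Summarize RAG in one sentence.")
```

This compatibility is what lets teams swap a hosted model for a self-served fine-tune without rewriting their application code.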
Highlight: Best Use of VESSL AI Award - Team SteamScape AI
During the RAG-a-Thon 2024, Team SteamScape AI earned the Best Use of VESSL AI award for their impressive integration of our platform, showcasing its unique capabilities in real-world applications. Their project tackled the challenge of real-time game discovery by using VESSL’s multi-agent infrastructure to build a dynamic recommendation system for trending Steam games, demonstrating the power of our platform’s scalability and agility.
By integrating LlamaIndex and Pinecone through VESSL, the team achieved fast, accurate querying and real-time data handling with minimal latency. Their setup streamlined complex AI workflows and showed how VESSL's control plane can manage intricate datasets and AI-driven tasks in real time, delivering a responsive user experience. The project stood out to the judges for both technical excellence and creativity, making full use of the platform to push the boundaries of agentic systems.
For more details on their project, check out the project page↗. You can also explore their presentation on YouTube↗ and slides.com↗.
Lessons Learned
Through our participation in the RAG-a-Thon hackathon, we gained valuable insights into how developers are leveraging VESSL to solve real-world challenges. Here’s a look at some key lessons we’ve learned from the event and how they are shaping our future direction.
Building a RAG Pipeline Rapidly and Efficiently
Participants were able to easily and quickly set up RAG pipelines by leveraging Pinecone and LlamaIndex examples available on VESSL Hub. The success of these examples highlighted the growing demand among developers for ready-made RAG and LLM templates, which simplify complex workflows and speed up the development process. To meet this demand, we plan to expand VESSL Hub by introducing advanced RAG examples, such as Hybrid RAG, specifically tailored for production-level use cases.
Simplified LLM Development Experience
One of the most positive pieces of feedback we received was how easy it is to develop and deploy effective LLMs with VESSL. Participants fine-tuned models like Llama 3.2 and deployed them as APIs using just a few simple configurations on VESSL.
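For a sense of what "a few simple configurations" means, a fine-tune-and-serve run can be declared in a short YAML fragment like the one below. To be clear, the field names here are hypothetical placeholders for illustration only, not VESSL's actual configuration schema — see the official VESSL documentation for the real format.

```yaml
# Hypothetical fine-tune-and-serve configuration (illustrative only --
# these field names are placeholders, not VESSL's real schema).
name: llama-3.2-finetune
resources:
  gpu: 1                      # a single GPU is enough for a small fine-tune
model:
  base: meta-llama/Llama-3.2-3B
  dataset: ./data/train.jsonl
serve:
  format: openai-compatible   # expose the result as a chat API
  port: 8000
```

The point is less the exact keys than the workflow: one declarative file covers compute, training, and serving, so there is no separate deployment step to script by hand.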
Real-World Applications Matter
Beyond VESSL's robust GPU cluster and workload management systems, we heard directly from participants, including the award-winning Team SteamScape AI, about how invaluable the platform was for rapid development. One of the team members, Rui↗, shared, "VESSL allowed us to streamline complex AI workflows in ways we hadn't experienced before. Within just two days, we went from an idea to a fully functioning RAG solution." By using VESSL, teams not only accelerated their project timelines but also successfully deployed solutions that tackled real-world problems. With this feedback in mind, we're excited to continue expanding VESSL's use cases and fostering more opportunities for innovation across diverse applications.
Looking Forward
The RAG-a-Thon hackathon reinforced our belief in the future of RAG and agentic AI systems. We are excited to continue supporting the developer community, refining our platform based on this experience, and bringing more opportunities for innovation.
We are also thrilled to announce our new partnership with LlamaIndex, and we're already working on several exciting projects together. Looking to the future, we aim to deepen this partnership and foster further collaboration, securing VESSL AI’s role as a key contributor to the advancement of enterprise AI solutions. Stay tuned for what's to come!
A huge thank you to LlamaIndex, Pinecone, and all the incredible participants for making this event a success. We can’t wait to see how RAG and agentic systems evolve in the future!
—
Kelly, Head of Global Operations
Lucas, Software Engineer
Wayne, Technical Communicator