Blog

Key Highlights from the 5th Annual MLOps World Conference

Community

20 November 2024

VESSL AI proudly sponsored the 5th MLOps World Conference, showcasing our dedication to scalable and efficient AI systems.

VESSL AI's ODSC 2024 Review

Community

20 November 2024

At ODSC 2024, VESSL showcased its MLOps platform, gaining industry attention for unified hybrid infrastructure and cost-effective LLM solutions.

RAG-a-Thon 2024: VESSL AI's First Silicon Valley Hackathon with LlamaIndex and Pinecone

Community

28 October 2024

Dive into VESSL AI’s debut at RAG-a-Thon 2024! Discover how developers used our platform to build real-world AI solutions, including the award-winning project by Team SteamScape AI. Key insights and future plans await!

Unveiling the Future of AI: Insights from Oracle CloudWorld 2024

Community

02 October 2024

Join us as we delve into our experiences at Oracle CloudWorld 2024 in Las Vegas.

VESSL AI Partners with Oracle to Gear Up for the Next Era of MLOps

Company

06 September 2024

We are excited to announce that we have partnered with Oracle Cloud Infrastructure.

Introducing Serverless Mode — Zapping Ahead with Fast Inference

Product

23 August 2024

Introducing “Serverless Mode,” a new feature on VESSL AI.

How To Build & Serve Private LLMs - (4) Deployment

Machine Learning

14 August 2024

This post delves deep into the deployment of LLMs.

How To Build & Serve Private LLMs - (3) Fine-Tuning

Machine Learning

02 August 2024

This post delves deep into the fine-tuning of LLMs.

How To Build & Serve Private LLMs - (2) RAG

Machine Learning

12 July 2024

This post delves deep into retrieval-augmented generation (RAG) techniques.

VESSL 2.0 is Here: Experience Our Sleek New Interface

Product

11 July 2024

Unveiling the Future of Machine Learning Operations with VESSL 2.0

How To Build & Serve Private LLMs - (1) Introduction

Machine Learning

10 July 2024

An introduction to building and serving private LLMs.

Mastering MLOps: A Step-by-Step Guide with VESSL

Machine Learning

08 July 2024

Discover how MLOps streamlines the Machine Learning lifecycle, with a detailed look at VESSL's capabilities

4 Key Trends in CVPR 2024

Community

28 June 2024

VESSL AI was at CVPR 2024 — Here are the 4 trends & highlights

Introducing VESSL Serve — Deploy custom models & generative AI applications and scale inference with ease

Product

24 April 2024

Deploy any model to any cloud at any scale in minutes without wasting hours on API servers, load balancing, automatic scaling, and more

New in March 2024

Product

25 March 2024

Improvements to cloud storage & volume mounts, fine-tuning Mixtral 8x7B, and more

Fine-tuning Mixtral 8x7B with an LLM-generated Q&A dataset on a single GPU

Tutorials

14 March 2024

Learn how to use VESSL Run to fine-tune Mixtral-8x7B with GPT-generated custom datasets

Build an interactive chatbot application with Gemma 2b-IT using Streamlit and VESSL Run

Tutorials

12 March 2024

Learn to host Gemma 2b-IT for interactive conversations on the GPU cloud using Streamlit and VESSL Run.

[Insights from MLOps Now] Project Pluto — Leveraging LLMs & LLMOps in financial media

Community

28 February 2024

Project Pluto automates content generation for its financial news outlet with GPT-4 and a dedicated LLMOps platform & automation pipelines

3 Multi-modal models & papers from NeurIPS 2023 — Try them out now at VESSL Hub

Machine Learning

24 January 2024

Run LLaVA, MusicGen, and MotionGPT on VESSL Hub

VESSL AI achieves ISO 27001 and more

Company

22 January 2024

Our ISO 27001, 27701, 27017 & 27018 certifications and our commitment to building a more secure AI/ML platform

New models on VESSL Hub — December 2023

Machine Learning

16 January 2024

5 new models we added to VESSL Hub — Llama2-7B, Mistral-7B, SSD-1B, and more

Build an AI image animation app with VESSL Run and Streamlit

Tutorials

16 January 2024

Learn how to quickly host Thin-Plate Spline Motion Model for Image Animation on the GPU cloud

10 Highlights from NeurIPS 2023

Machine Learning

11 January 2024

VESSL AI was at NeurIPS 2023 — Here are the 10 trends & highlights

Scatter Lab — Building more human sLLM & Personal AI with VESSL AI

Customers

08 January 2024

VESSL AI’s end-to-end LLMOps platform helps Scatter Lab scale Asia’s most advanced sLLM "Pingpong-1"

5 Highlight papers from EMNLP 2023 — Try them out at VESSL Hub

Machine Learning

28 December 2023

VESSL AI was at EMNLP 2023 — Here are 5 papers that we found the most interesting

Announcing VESSL Hub: One-click recipes for the latest open-source models

Product

28 December 2023

Fine-tune and deploy Llama 2, Stable Diffusion, and more with just a single click

Unveiling VESSL Run: Bringing Unified Interfaces and Reproducibility to Machine Learning

Product

29 November 2023

Discover VESSL Run: a versatile ML tool streamlining training and deployment across multiple infrastructures with YAML for easy reproducibility and integration.

Deploying Docs AI with LlamaIndex, BentoML, and VESSL Serve

Tutorials

24 November 2023

Build a minimally viable LLM-powered document Q&A app with VESSL AI

3 YAMLs we shared at MLOps World — Llama2c playground to production

Machine Learning

17 November 2023

LLMOps infrastructure for prototyping, fine-tuning, and deploying LLMs, simplified into 3 YAMLs

AI Infrastructure for LLMs

Product

30 October 2023

Scalable infrastructure for putting LLMs into production faster — without the massive cost

VESSL AI at Google for Startups Accelerator: Cloud North America cohort

Company

26 October 2023

Sharing our time at Google's 10-week startup accelerator program and how it shaped VESSL AI's vision for AI cloud

Run CVPR 2023 highlights with VESSL Run

Community

22 June 2023

Run CVPR 2023 highlight models and papers with a single YAML file

Introducing VESSL Run: a unified YAML interface for running any AI models

Product

19 June 2023

VESSL Run makes fine-tuning and scaling the latest open-source models easier than ever

Introducing tvault

Machine Learning

27 April 2023

A lightweight local registry for storing and comparing PyTorch models

Hybrid Cloud for ML, Simplified

Tutorials

09 January 2023

Set up a hybrid ML infrastructure with a single-line command using VESSL Clusters

Seoul National University accelerates ML for MRI research with an open competition using VESSL AI

Customers

03 January 2023

With VESSL Run, participants in the SNU fastMRI Challenge can focus on building state-of-the-art MRI reconstruction models

10 Highlights from NeurIPS 2022

Community

08 December 2022

Large language models and reinforcement learning were the two main themes at NeurIPS 2022.

Train Balloon Segmentation Model on VESSL

Machine Learning

21 November 2022

Getting started with MLOps using git-committed code, versioned datasets, and experiment tracking

KAIST AI orchestrates 800+ GPUs and enables instant access to its HPCs for ML with VESSL

Customers

06 July 2022

With VESSL, KAIST AI orchestrates 800+ GPUs and gives its researchers instant access to its HPC clusters for ML.

VESSL for Academics

Customers

14 February 2022

Introducing a free academic plan for research teams — our small token of support for graduate students and faculty members

VESSL AI — Heading into 2022

Company

11 January 2022

Announcing our $4.4M seed round and sharing what’s next
