05 March 2026
Dashboard gives users and admins visibility into GPU workloads, utilization, VRAM, temperature, and spend rate to spot idle resources and reduce waste.

Until now, there was no single place to see whether your GPU workloads were running as expected — or whether your organization's GPU budget was being used efficiently. We built the Dashboard to change that.
GPU compute is expensive. But knowing how well it's actually being used has always required workarounds.
Individual users had to open each workspace one by one, check logs, or run low-level tools like nvidia-smi to understand what their GPU was doing. Questions like "Is my training job actually computing anything?" or "Am I about to run out of VRAM?" had no easy answer from the platform.
For organization admins, the blind spot was even larger. There was no way to see GPU usage at an organizational level — no breakdown by team or user, no sense of how much was being spent in real time, and no signal when resources were sitting idle. Making informed decisions about resource allocation meant relying on manual check-ins or out-of-band tooling.
We heard this repeatedly in interviews with users across academia and industry: a dashboard is table stakes for a paid GPU platform. The Dashboard is our answer to both problems.

The Home Dashboard is the first thing you see when you log in. It answers the question: "What are my workloads doing right now?"
At the top of the page, four summary cards give you a quick health check, including your current spend rate (e.g., $21.85/hr).
Below the summary, your workloads are listed in a single table. Use the All teams and All statuses filters at the top to narrow down the list.
Each row shows the workload's key details, including its resource allocation: GPUs (e.g., 1x A100 GPU) or CPU only.
When a workload has been running with low GPU utilization for 1 hour, it's visually flagged in the table.
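The flagging rule above boils down to a trailing-window check. Here is a minimal sketch, not VESSL's actual implementation: the function name, sample format, and the 30% cutoff for "low utilization" (borrowed from the Metrics-page banners) are all assumptions.

```python
# Hypothetical sketch of the dashboard's idle-flag rule.
# The 30% "low utilization" cutoff is an assumption; the real
# threshold used by the dashboard may differ.
LOW_UTIL_THRESHOLD = 30.0          # percent
LOW_UTIL_WINDOW_SECONDS = 60 * 60  # flagged after 1 hour

def should_flag(samples: list[tuple[float, float]]) -> bool:
    """samples: (unix_timestamp, gpu_util_percent) pairs, oldest first.

    Returns True when the workload has been running for at least an
    hour and every sample in the trailing 1-hour window is below the
    low-utilization threshold.
    """
    if not samples:
        return False
    latest_ts = samples[-1][0]
    cutoff = latest_ts - LOW_UTIL_WINDOW_SECONDS
    # Require at least one sample at or before the window start,
    # i.e. the workload has actually been running for a full hour.
    if samples[0][0] > cutoff:
        return False
    recent = [util for ts, util in samples if ts >= cutoff]
    return all(util < LOW_UTIL_THRESHOLD for util in recent)
```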

The Org Dashboard is accessible from Manage > Organization > Dashboard and is only visible to org admins. It answers: "How is our organization using GPU resources, and where is our budget going?"

If any workloads have had 0% GPU utilization for the past 3 hours, a banner appears at the top of the page flagging them as idle.
Click Review to jump to the filtered workload list showing idle workloads only.


The spend trend chart shows your organization's credit usage over time, broken down by team. Each team appears as its own line, alongside an "All teams" aggregate line, so you can see exactly where spend is coming from.
Use the date range picker to zoom into a specific window.
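Conceptually, a per-team spend trend is just credit-usage events grouped by team and time bucket, plus an aggregate series. A minimal sketch, assuming hypothetical field names (`team`, `timestamp`, `credits`) rather than VESSL's actual data model:

```python
from collections import defaultdict
from datetime import datetime

def spend_by_team(events, bucket: str = "%Y-%m-%d"):
    """events: iterable of dicts like
    {"team": "research", "timestamp": datetime, "credits": 1.25}.

    Returns {team: {bucket_label: total_credits}}, including an
    "All teams" aggregate series like the dashboard's chart.
    """
    series = defaultdict(lambda: defaultdict(float))
    for e in events:
        label = e["timestamp"].strftime(bucket)  # e.g. daily buckets
        series[e["team"]][label] += e["credits"]
        series["All teams"][label] += e["credits"]
    return {team: dict(buckets) for team, buckets in series.items()}
```

Changing the `bucket` format string (e.g. `"%Y-%m-%d %H:00"` for hourly) plays the role of the dashboard's date range picker in this toy model.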

The Team Breakdown table gives admins a per-team view of GPU usage.
Click any team row to drill down and see individual user-level utilization within that team.

The full workload table lets admins filter by team, user, or idle-only status, with per-workload details in each row.

The Workspace Metrics page (accessible from any workspace's detail view → Metrics tab) has been updated with two new charts, an improved time range selector, and contextual banners.

When you arrive at the Metrics page, a banner appears at the top based on the workload's GPU utilization state.
Low utilization (GPU Util < 30%)

Low GPU utilization — GPU is running below 30%. Want to save costs? You can downscale using Pause → Edit.
High utilization (GPU Util > 90%)

Great GPU utilization! Need more power or worried about OOM? You can scale up using Pause → Edit.
The Dashboard brings together GPU utilization, VRAM, temperature, cost, and team-level usage into surfaces designed for your role — whether you're an engineer debugging a training run or an admin allocating resources across teams.
Ready to take a look? Log in and head to Home in the left sidebar to get started.
We'd love your feedback. If you have questions or suggestions, reach out to us at support@vessl.ai.
Happy training! 🚀
