
The Reports dashboard gives Admins a complete view of how AI is being used across the organization — who is using it, which models are being used, how much it costs, and where engagement is high or low. Use Reports to optimize LLM spend, identify adoption gaps, track governance events, and demonstrate the value of WorkLLM to stakeholders.
Reports is available to users with the Admin role only. Members and Viewers do not have access to the Reports dashboard.

Accessing the Reports dashboard

Click Profile, then go to WS Reports.

Available metrics

The Reports dashboard is organized into several sections:

Usage overview

Total messages

The total number of messages sent across all users, models, and features during the selected period, broken down by day on a trend chart.

Active users

The number of unique users who sent at least one message during the period. Helps you track adoption across the organization.

Output volume

Total AI-generated output by character count or token estimate — useful for understanding workload and value produced.

Session duration

Average time users spend in active AI sessions, showing depth of engagement beyond just message counts.

LLM spend and model usage

The Model Usage section breaks down activity by model, showing:
  • Messages per model — how many messages were sent to each model
  • Estimated cost per model — LLM spend attributed to each model based on token consumption and current pricing
  • Cost by team — total LLM spend broken down by team, so you can see where resources are being used
  • Cost by user — per-user spend for individual-level accountability and budgeting
Use the Cost by team view to identify teams spending disproportionately on expensive models. You may find that switching certain workflows to a lighter model (such as Mistral Large instead of GPT-4o) significantly reduces cost with minimal impact on output quality.
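If you export the Model Usage data to CSV (see Exporting report data below), a short script can surface the heaviest-spending teams. This is a minimal sketch only: the column names (`team`, `model`, `estimated_cost_usd`) are assumptions for illustration, not the real export schema, so check the headers of an actual downloaded file first.

```python
# Sketch: aggregating a hypothetical Model Usage export to rank teams by spend.
# Column names (team, model, estimated_cost_usd) are assumed, not the real schema.
import csv
import io
from collections import defaultdict

sample_export = """team,model,estimated_cost_usd
Engineering,GPT-4o,412.50
Engineering,Mistral Large,38.20
Marketing,GPT-4o,96.00
Marketing,Mistral Large,12.75
"""

def cost_by_team(csv_text: str) -> dict[str, float]:
    """Sum estimated spend per team across all models in the export."""
    totals: dict[str, float] = defaultdict(float)
    for row in csv.DictReader(io.StringIO(sample_export if csv_text is None else csv_text)):
        totals[row["team"]] += float(row["estimated_cost_usd"])
    return dict(totals)

# Print teams sorted by spend, highest first
for team, total in sorted(cost_by_team(sample_export).items(), key=lambda kv: -kv[1]):
    print(f"{team}: ${total:.2f}")
```

From here you could join the totals against active-user counts to get a per-user cost figure for each team.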

Feature and content engagement

  • Top used prompts — the shared prompts run most frequently, showing which templates your team relies on
  • Top used tools — the AI Tools used most often, useful for identifying where tooling investment pays off
  • Agent run volume — the number of agent runs completed during the period, broken down by agent
  • Agent success rate — the percentage of agent runs that completed without error

Governance events

The Governance section logs events relevant to security and compliance:
  • Data access events (file uploads, document chat sessions)
  • Integration connection and disconnection events
  • Role changes (member promoted, access revoked)
  • Prompt or tool publishing events
Governance event logs are available for export and can be forwarded to your SIEM or compliance tooling via your Admin settings. Contact your WorkLLM account team for enterprise log forwarding options.
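Before forwarding exported governance logs to a SIEM, you may want to filter them down to the event types your compliance team cares about. The sketch below assumes a hypothetical export layout with `timestamp`, `event_type`, `actor`, and `detail` columns; the real export's headers may differ.

```python
# Sketch: filtering a hypothetical governance log export by event type.
# The column layout here is an assumption for illustration purposes.
import csv
import io

sample_log = """timestamp,event_type,actor,detail
2024-05-01T09:12:00Z,file_upload,alice@example.com,quarterly_report.pdf
2024-05-01T10:03:00Z,role_change,admin@example.com,bob promoted to Member
2024-05-02T14:45:00Z,integration_connect,carol@example.com,Google Drive
"""

def events_of_type(csv_text: str, event_type: str) -> list[dict]:
    """Return only the rows matching the given event type."""
    return [row for row in csv.DictReader(io.StringIO(csv_text))
            if row["event_type"] == event_type]

for event in events_of_type(sample_log, "role_change"):
    print(event["timestamp"], event["actor"], event["detail"])
```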

Filtering reports

Use the filter bar at the top of the dashboard to narrow the data:
Filter        Options
Date range    Last 7 days, 30 days, 90 days, or custom date range
Team          Filter all metrics to a specific team
User          Filter to an individual user's activity
Model         Show activity for a specific AI model only
Feature       Filter by feature: Chat, Team AI, Document Chat, Agents, Tools
Filters compound — for example, you can filter to a specific team and date range simultaneously. Click Clear filters to reset to the organization-wide default view.

Exporting report data

Export any view in the Reports dashboard to CSV for further analysis in Excel, Google Sheets, or your BI tooling.
1. Apply your filters
   Set the date range, team, and any other filters to scope the data you want to export.
2. Click Export
   Click the Export button in the top-right corner of the section you want to export. Each section (Usage, Model Costs, Governance) exports independently.
3. Download the CSV
   WorkLLM generates the file and downloads it to your browser. Large exports (90-day organization-wide data) may take a few seconds to prepare.

Using reports to optimize AI spend

Filter by team and compare cost against active users. Teams with high spend but low user counts may benefit from training, better tool coverage, or model guidance to reduce unnecessary model calls.
Review the Model Usage breakdown. If the majority of messages are going to premium models (GPT-4o, Claude) for tasks that lighter models handle well (Mistral, Llama), consider setting default model recommendations or creating AI Tools that specify the appropriate model for common tasks.
Use the 90-day view with the Active users trend chart to measure whether adoption is growing. Dips may correlate with team changes, new feature releases, or training gaps.
The Agent run volume and Agent success rate metrics help you evaluate whether your automated agents are delivering consistent value. A high failure rate on a specific agent signals a configuration issue worth investigating.
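One way to act on the agent metrics is to compute per-agent success rates from an export and flag agents below a threshold. This is a sketch under assumptions: a per-run export with `agent` and `status` columns is hypothetical, and the 90% threshold is an arbitrary example value.

```python
# Sketch: computing per-agent success rates from a hypothetical per-run export.
# The columns (agent, status) and the 90% threshold are illustrative assumptions.
import csv
import io

sample_runs = """agent,status
weekly-digest,success
weekly-digest,success
weekly-digest,error
lead-enricher,success
lead-enricher,success
"""

def success_rates(csv_text: str) -> dict[str, float]:
    """Map each agent to its fraction of runs that completed without error."""
    counts: dict[str, tuple[int, int]] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        total, ok = counts.get(row["agent"], (0, 0))
        counts[row["agent"]] = (total + 1, ok + (row["status"] == "success"))
    return {agent: ok / total for agent, (total, ok) in counts.items()}

THRESHOLD = 0.9  # example cutoff: flag agents succeeding less than 90% of the time
for agent, rate in success_rates(sample_runs).items():
    flag = "  <- investigate" if rate < THRESHOLD else ""
    print(f"{agent}: {rate:.0%}{flag}")
```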
Export files contain usage data tied to individual users. Handle exported CSVs in accordance with your organization’s data handling and employee privacy policies.