The Reports dashboard gives Admins a complete view of how AI is being used across the organization: who is using it, which models are in use, how much it costs, and where engagement is high or low. Use Reports to optimize LLM spend, identify adoption gaps, track governance events, and demonstrate the value of WorkLLM to stakeholders.
Reports is available to users with the Admin role only. Members and Viewers do not have access to the Reports dashboard.
Accessing the Reports dashboard
Click on Profile, then go to WS Reports.

Available metrics

The Reports dashboard is organized into several sections.

Usage overview
Total messages
The total number of messages sent across all users, models, and features during the selected period. Broken down by day on a trend chart.
Active users
The number of unique users who sent at least one message during the period. Helps you track adoption across the organization.
Output volume
Total AI-generated output by character count or token estimate — useful for understanding workload and value produced.
Session duration
Average time users spend in active AI sessions, showing depth of engagement beyond just message counts.
LLM spend and model usage
The Model Usage section breaks down activity by model, showing:

- Messages per model — how many messages were sent to each model
- Estimated cost per model — LLM spend attributed to each model based on token consumption and current pricing
- Cost by team — total LLM spend broken down by team, so you can see where resources are being used
- Cost by user — per-user spend for individual-level accountability and budgeting
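As an illustration of how per-model cost estimates like these are typically derived, here is a minimal sketch. The model names, token counts, and per-token prices below are hypothetical examples, not WorkLLM's actual pricing or data:

```python
# Sketch: attributing LLM spend to each model from token consumption.
# All model names, token counts, and prices are hypothetical examples,
# not actual WorkLLM pricing data.

# Hypothetical per-1M-token prices (input, output) in USD.
PRICING = {
    "premium-model": (5.00, 15.00),
    "light-model": (0.25, 1.25),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate spend for one model based on token consumption and pricing."""
    in_price, out_price = PRICING[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical usage records: (model, input tokens, output tokens).
usage = [
    ("premium-model", 2_000_000, 500_000),
    ("light-model", 10_000_000, 3_000_000),
]
by_model = {m: estimate_cost(m, i, o) for m, i, o in usage}
for model, cost in by_model.items():
    print(f"{model}: ${cost:.2f}")
```

The same shape of calculation (tokens times per-token price, summed per model) is what "estimated cost" columns in most usage dashboards reflect.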
Feature and content engagement
- Top used prompts — the shared prompts run most frequently, showing which templates your team relies on
- Top used tools — the AI Tools used most often, useful for identifying where tooling investment pays off
- Agent run volume — the number of agent runs completed during the period, broken down by agent
- Agent success rate — the percentage of agent runs that completed without error
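The success rate metric is simply runs completed without error divided by total runs. A small sketch, with made-up agent names and run records:

```python
# Sketch: computing an agent success rate from run outcomes.
# The agent name and run records below are hypothetical.

runs = [
    {"agent": "triage-bot", "error": None},
    {"agent": "triage-bot", "error": "timeout"},
    {"agent": "triage-bot", "error": None},
    {"agent": "triage-bot", "error": None},
]

def success_rate(runs: list) -> float:
    """Percentage of runs that completed without error."""
    ok = sum(1 for r in runs if r["error"] is None)
    return 100.0 * ok / len(runs) if runs else 0.0

print(f"{success_rate(runs):.1f}%")  # 3 of 4 runs succeeded -> 75.0%
```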
Governance events
The Governance section logs events relevant to security and compliance:

- Data access events (file uploads, document chat sessions)
- Integration connection and disconnection events
- Role changes (member promoted, access revoked)
- Prompt or tool publishing events
Governance event logs are available for export and can be forwarded to your SIEM or compliance tooling via your Admin settings. Contact your WorkLLM account team for enterprise log forwarding options.
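Exported governance logs can be reshaped for downstream tooling. As a sketch, here is one way to convert an exported CSV into JSON Lines, a format many SIEMs accept for ingestion. The column names below are assumptions for illustration, not WorkLLM's documented export schema:

```python
# Sketch: converting an exported governance-events CSV into JSON Lines.
# The column names are assumed for illustration, not the documented
# WorkLLM export schema.
import csv
import io
import json

# Stand-in for a file downloaded from the Reports dashboard.
exported_csv = """timestamp,event_type,actor,detail
2024-05-01T09:12:00Z,role_change,admin@example.com,member promoted
2024-05-01T10:03:00Z,integration,ops@example.com,connector disconnected
"""

def csv_to_jsonl(text: str) -> str:
    """Emit one JSON object per CSV row, keyed by the header columns."""
    rows = csv.DictReader(io.StringIO(text))
    return "\n".join(json.dumps(row) for row in rows)

print(csv_to_jsonl(exported_csv))
```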
Filtering reports
Use the filter bar at the top of the dashboard to narrow the data:

| Filter | Options |
|---|---|
| Date range | Last 7 days, 30 days, 90 days, or custom date range |
| Team | Filter all metrics to a specific team |
| User | Filter to an individual user’s activity |
| Model | Show activity for a specific AI model only |
| Feature | Filter by feature: Chat, Team AI, Document Chat, Agents, Tools |
Exporting report data
Export any view in the Reports dashboard to CSV for further analysis in Excel, Google Sheets, or your BI tooling.

Apply your filters
Set the date range, team, and any other filters to scope the data you want to export.
Click Export
Click the Export button in the top-right corner of the section you want to export. Each section (Usage, Model Costs, Governance) exports independently.
Using reports to optimize AI spend
Identify high-cost, low-engagement teams
Filter by team and compare cost against active users. Teams with high spend but low user counts may benefit from training, better tool coverage, or model guidance to reduce unnecessary model calls.
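One way to surface such teams from an exported report is to compute cost per active user and sort descending. The team names and figures below are hypothetical:

```python
# Sketch: flagging high-cost, low-engagement teams from exported report data.
# Team names and figures are hypothetical.

teams = [
    {"team": "Growth", "spend": 1200.0, "active_users": 30},
    {"team": "Legal", "spend": 900.0, "active_users": 3},
    {"team": "Support", "spend": 400.0, "active_users": 25},
]

def cost_per_user(t: dict) -> float:
    return t["spend"] / t["active_users"]

# Highest cost per active user first: candidates for training,
# better tool coverage, or model guidance.
for t in sorted(teams, key=cost_per_user, reverse=True):
    print(f'{t["team"]}: ${cost_per_user(t):.2f} per active user')
```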
Right-size model selection
Review the Model Usage breakdown. If the majority of messages are going to premium models (GPT-4o, Claude) for tasks that lighter models handle well (Mistral, Llama), consider setting default model recommendations or creating AI Tools that specify the appropriate model for common tasks.
Track adoption over time
Use the 90-day view with the Active users trend chart to measure whether adoption is growing. Dips may correlate with team changes, new feature releases, or training gaps.
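A simple way to quantify the trend from exported daily active-user data is to compare the first and last 30 days of the 90-day window. The series below is synthetic, purely for illustration:

```python
# Sketch: checking whether adoption grew over a 90-day window by comparing
# the average of the first 30 days against the last 30 days.
# The daily series is synthetic, not real export data.

daily_active_users = [20 + i // 3 for i in range(90)]  # synthetic upward trend

first_month_avg = sum(daily_active_users[:30]) / 30
last_month_avg = sum(daily_active_users[-30:]) / 30
growth_pct = 100.0 * (last_month_avg - first_month_avg) / first_month_avg
print(f"Adoption change: {growth_pct:+.1f}%")
```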
Validate agent value
The Agent run volume and Agent success rate metrics help you evaluate whether your automated agents are delivering consistent value. A high failure rate on a specific agent signals a configuration issue worth investigating.