
We built collaborative AI before anyone

Enterprise teams were buying individual AI plans. We built CollabGPT to make AI a shared platform where teams could build documents together and develop organizational intelligence. It served users across six countries before purpose-built tools from Google and others absorbed the category.

Active users: 4,200+

Users across 6+ countries.

Team output velocity: 2.8×

Increase in document edit speed.

Knowledge reuse rate: 64%

Share of projects that referenced or built upon prior collaborative threads, creating compounding institutional context.

Situation

The breaking point

By early 2023, AI adoption inside organizations had become widespread and entirely individual. Every team member had their own ChatGPT window open, their own prompts, their own conversation history. The result was predictable: three people on the same team would ask a model the same question and get three different answers, with no way to reconcile them. Institutional knowledge was being generated at scale and immediately lost. We saw this pattern across the teams we worked with and inside our own organization. The gap was clear. AI needed to become a collaborative surface, the same way documents and whiteboards had become collaborative a decade earlier. We built CollabGPT to close that gap before any major platform had recognized it.

  • Individual AI usage created knowledge silos where insights generated in one conversation were invisible to the rest of the team, leading to duplicated effort and contradictory outputs.
  • Teams had no way to build on previous AI-assisted work. Every conversation started from zero context, which meant the AI never got smarter about the team's domain, terminology, or preferences.
  • Sensitive projects required controlled access, but existing AI tools offered no concept of shared workspaces, permission tiers, or project-level privacy boundaries.
  • Document creation workflows were fractured: teams would generate content in an AI tool, copy it into a document editor, and lose the reasoning chain that produced the output.

Approach

The build

Build a collaborative AI platform where teams can work with language models together in real time, maintain persistent project context, control access at multiple levels, and produce polished documents without leaving the environment.

We designed CollabGPT as a workspace where AI was a native collaborator inside every team interaction. The core unit was a collaborative thread: a real-time conversation where multiple team members and the AI model participated simultaneously. Each thread carried full context, meaning the AI could reference earlier exchanges, uploaded documents, and project-level knowledge when responding. Around the thread layer, we built a document editor that allowed teams to refine AI-generated content into finished deliverables without switching tools. Projects served as the organizational container, with configurable access levels so teams could run sensitive workstreams alongside open collaboration spaces. The architecture supported multiple LLM providers, giving teams the ability to select models based on task requirements. We shipped the platform in 11 weeks and onboarded users across six countries within the first quarter.

System blueprint

Under the hood

The core components that make the system work, and why each one matters.

Collaborative Threads

Real-time multi-user AI conversations

Multiple team members could participate in the same AI conversation simultaneously. The model maintained awareness of all participants, their contributions, and the evolving direction of the discussion. This transformed AI from a private utility into a shared thinking environment where the best prompt from any team member benefited everyone.
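To make the shared-transcript idea concrete, here is a minimal sketch of how a multi-participant thread might be represented. All names here (`Thread`, `post`, `as_prompt`) are illustrative, not CollabGPT's actual API: the key idea is that every human turn is labeled by author before the transcript reaches the model.

```python
from dataclasses import dataclass, field


@dataclass
class Message:
    author: str  # a participant's name, or "assistant" for the model
    text: str


@dataclass
class Thread:
    # One shared transcript; every participant writes into the same history.
    participants: set = field(default_factory=set)
    messages: list = field(default_factory=list)

    def post(self, author: str, text: str) -> None:
        if author != "assistant":
            self.participants.add(author)
        self.messages.append(Message(author, text))

    def as_prompt(self) -> list:
        # The model receives one interleaved transcript, with each human
        # turn labeled by author so it can track who contributed what.
        prompt = []
        for m in self.messages:
            if m.author == "assistant":
                prompt.append({"role": "assistant", "content": m.text})
            else:
                prompt.append({"role": "user", "content": f"{m.author}: {m.text}"})
        return prompt
```

Because the model sees one labeled transcript rather than per-user sessions, a strong prompt from any participant immediately shapes the conversation for the whole team.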

Document Workspace

AI-assisted collaborative editing

Teams could transition from exploratory AI conversations into structured document editing within the same environment. The AI served as a writing partner during editing, offering revisions, expansions, and restructuring while preserving the reasoning context from the thread that originated the content.

Project Architecture

Scoped workspaces with granular access control

Every project maintained its own knowledge boundary. Teams could upload reference files, set access permissions at the owner, editor, and viewer level, and ensure that sensitive workstreams remained isolated. The AI operated within project scope, drawing on uploaded materials and prior threads to deliver contextually grounded responses.
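The owner/editor/viewer tiers described above form an ordered hierarchy, which keeps permission checks simple. A minimal sketch, with illustrative names (`Role`, `Project.can`) rather than the production schema:

```python
from enum import IntEnum


class Role(IntEnum):
    VIEWER = 1
    EDITOR = 2
    OWNER = 3


class Project:
    def __init__(self, owner: str):
        # The creator is the sole owner until access is explicitly granted.
        self.members = {owner: Role.OWNER}

    def grant(self, actor: str, user: str, role: Role) -> None:
        if self.members.get(actor) != Role.OWNER:
            raise PermissionError("only owners can grant access")
        self.members[user] = role

    def can(self, user: str, required: Role) -> bool:
        # Roles are ordered, so an editor implicitly has viewer rights,
        # and a user absent from the project has no rights at all.
        return self.members.get(user, 0) >= required
```

Treating roles as an ordered enum means a single comparison answers "can this user do X", and anyone outside the project fails every check by default, which is the isolation guarantee sensitive workstreams rely on.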

Context Engine

Persistent memory across sessions and participants

CollabGPT maintained a structured context layer that persisted across conversations within a project. When a new thread was started, the AI already understood the project's domain, previous decisions, key terminology, and ongoing workstreams. This compounding context was the mechanism that made team AI usage more valuable over time.
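One way such a context layer can work is to assemble the project's accumulated knowledge into a system preamble whenever a new thread opens. The sketch below is a simplified illustration under that assumption; the field names (`decisions`, `glossary`) are hypothetical:

```python
def build_system_context(project: dict) -> str:
    """Fold persistent project knowledge into a system preamble so a
    brand-new thread starts with the project's domain already loaded."""
    parts = [f"Project: {project['name']}"]
    if project.get("decisions"):
        parts.append(
            "Prior decisions:\n"
            + "\n".join(f"- {d}" for d in project["decisions"])
        )
    if project.get("glossary"):
        parts.append(
            "Terminology:\n"
            + "\n".join(f"- {term}: {meaning}"
                        for term, meaning in project["glossary"].items())
        )
    return "\n\n".join(parts)
```

Each finished thread appends to these structures, so the preamble grows richer over time: the compounding effect the section describes.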

Performance shift

The numbers that moved

Key metrics before and after launch.

Time to first draft (team documents)

Lower is better
Before
4.2 hrs
After
1.5 hrs

Knowledge reuse across projects

Higher is better
Before
11%
After
64%

Duplicate AI queries within teams

Lower is better
Before
73%
After
18%

Context retained between sessions

Higher is better
Before
0%
After
94%

Delivery path

How we shipped it

Every phase delivered something real. Here's the timeline.

Phase 01 (Weeks 1-3)

Collaboration model research and architecture design

We studied how teams were actually using AI tools in their daily workflows, identified the fragmentation patterns, and designed the collaborative architecture. The core decisions around real-time thread synchronization, project-scoped context, and multi-tier permissions were locked during this phase.

Workflow fragmentation analysis · Collaboration architecture spec · Permission model design · Context persistence strategy

Phase 02 (Weeks 4-7)

Core platform build and real-time infrastructure

The collaborative thread engine, real-time synchronization layer, document workspace, and project management system were built in parallel. The LLM integration layer was designed to be provider-agnostic from the start, with OpenAI and Anthropic models available at launch. WebSocket infrastructure ensured that multi-user conversations felt instantaneous.
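A provider-agnostic integration layer typically amounts to one uniform interface plus a registry of adapters. A minimal sketch of that pattern, with illustrative names (`ChatModel`, `register`, `complete`) and a stub provider standing in for real vendor adapters:

```python
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, messages: list) -> str: ...


PROVIDERS: dict = {}


def register(name: str, model: ChatModel) -> None:
    PROVIDERS[name] = model


def complete(provider: str, messages: list) -> str:
    # The thread engine only ever sees this uniform interface, never a
    # vendor SDK directly, so providers are swappable per task.
    return PROVIDERS[provider].complete(messages)


class EchoModel:
    # Stand-in provider for local testing; a real adapter would call a
    # vendor API (e.g. OpenAI or Anthropic) and map its response here.
    def complete(self, messages: list) -> str:
        return messages[-1]["content"]


register("echo", EchoModel())
```

Because each vendor is isolated behind one adapter, adding a provider at launch (or later) is a registry entry, not a change to the thread engine.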

Thread engine · Real-time sync layer · Document editor · Project management system · Multi-model integration

Phase 03 (Weeks 8-9)

Access controls, file handling, and context engine

The permission system was implemented with project-level, thread-level, and document-level controls. File upload and parsing infrastructure was built to support PDFs, spreadsheets, and text documents as project context. The persistent context engine was connected across all project artifacts so the AI could draw on the full knowledge base when responding.

Granular access controls · File upload and parsing · Context engine integration · Knowledge base indexing

Phase 04 (Weeks 10-11)

Launch, international rollout, and adoption tracking

CollabGPT launched with onboarding flows designed for three distinct user segments: corporate teams, university research groups, and marketing agencies. Usage analytics and adoption tracking were built into the platform from day one. Within the first quarter, active users spanned six countries with organic growth driven by team-level referrals.

Production launch · Segment-specific onboarding · Analytics dashboard · International deployment

What shipped

What we delivered

  • Real-time collaborative AI thread engine with multi-user participation
  • AI-powered document editor with inline revision and expansion capabilities
  • Project workspace system with owner, editor, and viewer access tiers
  • Persistent context engine that compounds project knowledge over time
  • Multi-model LLM integration supporting provider selection per task
  • File upload and parsing system for PDFs, spreadsheets, and text documents
  • Usage analytics and team adoption tracking dashboard
  • Segment-specific onboarding flows for corporate, academic, and agency users

Integrations

Connected systems

  • OpenAI GPT-4 API
  • Anthropic Claude API
  • WebSocket real-time infrastructure
  • Cloud file storage (AWS S3)
  • SSO and identity management
  • PDF and document parsing pipeline

Governance

Guardrails

  • Project-level data isolation ensuring that conversations and files in one workspace were never accessible from another without explicit permission grants
  • All AI interactions logged with full audit trails, allowing team leads to review how the AI contributed to key decisions and document outputs
  • User-level data export and deletion capabilities aligned with GDPR requirements across all operating regions
  • Model provider selection gave teams control over where their data was processed, supporting organizations with specific vendor compliance requirements
  • Rate limiting and usage controls at the project and organization level to prevent runaway costs and ensure equitable access across team members
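Project-level rate limiting of the kind described in the last guardrail is commonly implemented as a token bucket. A minimal sketch, assuming a per-project budget that refills continuously (the class and parameter names are illustrative):

```python
import time


class TokenBucket:
    """Per-project request budget: refills at `rate` tokens/second,
    never exceeding `capacity`, so bursts are bounded and sustained
    usage converges to the refill rate."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full budget
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Refill lazily based on elapsed time, then spend if possible.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Giving each project (and each organization) its own bucket caps runaway costs while still allowing short bursts, and a per-member `cost` can weight expensive model calls more heavily than cheap ones.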

Outcomes

The payoff

  • Teams moved from fragmented individual AI usage to a shared environment where every conversation built on the last. The 2.8× increase in document completion speed came from eliminating the cycle of redundant prompting and context re-entry that defined individual AI workflows.
  • The 64% knowledge reuse rate proved the core thesis: when AI conversations are collaborative and persistent, organizations develop compounding intelligence. Teams stopped starting from scratch and began treating their AI workspace as institutional memory.
  • Adoption spread organically across six countries within a single quarter. Corporate teams, university research groups, and marketing agencies all found value in the same core workflow, validating that collaborative AI was a horizontal need rather than a vertical feature.
  • When Google and other major platforms launched their own collaborative AI features in late 2023 and 2024, CollabGPT had already proven the model, refined the interaction patterns, and demonstrated the value to thousands of users. The product was retired gracefully as the category matured, with the architectural patterns and product insights feeding directly into subsequent Ellenox engagements.
We saw teams treating AI like a private calculator when it should have been a shared whiteboard. CollabGPT proved that the real value of AI in organizations is unlocked when people think with it together, and the work compounds over time instead of evaporating after every session.
Ellenox Product Team, CollabGPT by Ellenox

Reflecting on the product after category absorption by major platform players

Next case study

The World's Largest Golf Course Database

Course Finder now drives 20% of all traffic to GOLF.com, a site with 3.5 million monthly users