This release represents the largest single sprint we've shipped since private deployment became generally available. We've added support for GPT-5 models across the platform, built first-class tooling for Excel and CSV analysis, extracted document conversion into a dedicated service, and delivered dozens of performance, stability, and usability refinements requested by innovation teams. Every change stays true to our core commitment: all data, all processing, and all inference remain inside your perimeter.
GPT-5 Model Support
AtlasAI now supports OpenAI's GPT-5 series models (GPT-5.4 and related variants) across chat, tabular review, and assistant workflows. Customers running Azure OpenAI within their tenant can now route queries to these next-generation models with no platform changes required. The system intelligently handles model-specific formatting instructions and token budget allocation, ensuring responses are structured, citation-rich, and compliant with existing privilege controls.
For firms evaluating GPT-5 deployments, this update removes integration friction. You control model availability at the tenant level; AtlasAI adapts. Model selection persists per user as either their most recently used or favorited model, reducing cognitive overhead during live sessions. Administrators retain full visibility into which models are invoked for cost allocation and compliance reporting.
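The per-user persistence described above can be pictured as a simple resolution order. The sketch below is illustrative only, with hypothetical names (`ModelPrefs`, `resolve_model`), assuming the preference order is favorite, then last used, then the tenant default, always constrained to what the administrator has enabled:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelPrefs:
    favorite: Optional[str] = None
    last_used: Optional[str] = None

def resolve_model(prefs: ModelPrefs, enabled: set, tenant_default: str) -> str:
    # Preference order: favorited model, then most recently used, then the
    # tenant default -- skipping anything the administrator has not enabled
    # at the tenant level.
    for candidate in (prefs.favorite, prefs.last_used, tenant_default):
        if candidate and candidate in enabled:
            return candidate
    raise LookupError("no enabled model available for this tenant")
```

Because the tenant's enabled set is checked on every resolution, disabling a model at the admin level takes effect immediately, even for users who had favorited it.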
Native Tabular Data Analysis
We've introduced full-stack support for analyzing Excel and CSV files directly in chat and tabular review. The platform now ingests structured data, detects column headers using a lightweight inference pass, generates queryable schemas, and allows natural-language interrogation of datasets. Results surface inline with proper citations linking back to source rows and cells.
This capability runs entirely within your deployment using DuckDB for in-memory query execution. No data leaves your environment. Files with no headers are handled gracefully, and schema generation adapts to irregular column structures common in litigation datasets and regulatory filings. For CIOs managing discovery workflows, this eliminates the need to export data to external BI tools or SaaS analytics platforms. Queries, results, and intermediate schemas remain inside the privilege boundary.
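The ingestion flow above can be sketched in a few lines. The platform itself uses DuckDB; this portable stand-in uses Python's stdlib `sqlite3` for the in-memory table, and the header heuristic shown (non-numeric first row over numeric second row) is a hypothetical illustration of a "lightweight inference pass," not the production logic:

```python
import csv
import io
import sqlite3

def looks_like_header(first_row, second_row):
    # Heuristic: treat row 1 as a header when some cell is non-numeric
    # while the corresponding cell in row 2 parses as a number.
    def numeric(cell):
        try:
            float(cell)
            return True
        except ValueError:
            return False
    return any(numeric(b) and not numeric(a) for a, b in zip(first_row, second_row))

def load_csv(text, table="dataset"):
    rows = list(csv.reader(io.StringIO(text)))  # assumes at least two rows
    if looks_like_header(rows[0], rows[1]):
        headers, data = rows[0], rows[1:]
    else:
        headers = [f"col{i}" for i in range(len(rows[0]))]  # synthesize names
        data = rows
    conn = sqlite3.connect(":memory:")  # nothing leaves the process
    cols = ", ".join(f'"{h}"' for h in headers)
    conn.execute(f'CREATE TABLE {table} ({cols})')
    conn.executemany(
        f'INSERT INTO {table} VALUES ({",".join("?" * len(headers))})', data)
    return conn

conn = load_csv("matter,amount\nA-101,1200\nA-102,3400\n")
total = conn.execute("SELECT SUM(amount) FROM dataset").fetchone()[0]  # 4600
```

A headerless file falls through to synthesized column names (`col0`, `col1`, ...), which mirrors the graceful handling described above.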
Tabular review now supports multi-column extraction initiated directly from chat. Users can highlight fields of interest during a conversation, trigger a structured review session, and export results without switching contexts. Based on measurements shared by early-access customers, this integration reduces the median time-to-insight for structured document review by approximately 40%.
Dedicated Document Conversion Service
We've separated document conversion into a standalone microservice with its own deployment profile, health probes, and retry logic. Previously, document conversion ran inline within the web application tier, creating resource contention during high-volume ingestion. The new architecture uses a queue-based model: uploads are registered, dispatched to conversion workers, and completed files are returned to the originating collection or assistant context.
This change improves resilience and horizontal scaling. Conversion failures are isolated and retried without impacting active user sessions. For firms processing large briefs, discovery sets, or regulatory filings, this means faster throughput and fewer timeout errors. The service handles legacy Office formats and expanded MIME type detection, improving compatibility with documents created in older versions of Word, Excel, and PowerPoint.
Converted documents retain full metadata lineage, including original file type, conversion timestamp, and user attribution. This audit trail supports privilege log generation and compliance review.
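The queue-based model and per-job retry isolation can be sketched as below. Names (`ConversionJob`, `worker`, `MAX_RETRIES`) and the lineage fields are illustrative assumptions, not the service's actual schema:

```python
import queue
import time
from dataclasses import dataclass

MAX_RETRIES = 3  # assumed retry budget for illustration

@dataclass
class ConversionJob:
    doc_id: str
    source_type: str
    user: str
    attempts: int = 0

def worker(jobs: queue.Queue, convert, completed: list, failed: list):
    # Drain the queue; a failure is retried in isolation, so one bad
    # document never blocks the rest of the batch.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        try:
            result = convert(job)
        except Exception:
            job.attempts += 1
            if job.attempts < MAX_RETRIES:
                jobs.put(job)  # re-queue for another attempt
            else:
                failed.append(job.doc_id)
        else:
            # Record lineage alongside the converted artifact: original
            # file type, conversion timestamp, and user attribution.
            completed.append({
                "doc_id": job.doc_id,
                "original_type": job.source_type,
                "converted_at": time.time(),
                "user": job.user,
                "result": result,
            })
```

Because each job carries its own attempt counter, a transient failure is retried while a persistently failing document is parked after the retry budget is exhausted, without touching any other upload.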
Collection Merging and Translation Controls
Collections can now be merged, consolidating document sets and metadata without re-uploading files. This is particularly useful for matter-based workflows where work product accumulates across multiple collections over time. Merging preserves all document-level metadata, citation graphs, and access controls. The operation is logged for audit purposes and completes asynchronously to avoid blocking user activity.
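The metadata-preserving merge can be pictured with a minimal sketch. This is a simplified stand-in, assuming collections are keyed document maps and that the target collection's copy wins when the same document appears in both; the real operation also carries citation graphs and access controls:

```python
def merge_collections(target: dict, source: dict) -> dict:
    # Document-level metadata rides along with each entry; when the same
    # document exists in both collections, the target's copy is kept.
    docs = dict(source["documents"])
    docs.update(target["documents"])  # target wins on conflict
    merged = dict(target)
    merged["documents"] = docs
    return merged
```

No file content moves during the merge; only the document maps and their metadata are consolidated, which is why re-uploading is unnecessary.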
Translation workflows now include cancellation controls and improved deletion handling. Users can terminate in-progress translations, and the system correctly cleans up partial artifacts. Translation deletion now cascades to associated metadata, preventing orphaned records that previously required manual database cleanup.
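Cascading deletion of this kind is typically enforced at the schema level. The sketch below uses a hypothetical two-table layout with SQLite's `ON DELETE CASCADE` to show the pattern: deleting the parent translation removes its metadata rows in the same statement, so no orphans are left to clean up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
conn.execute("CREATE TABLE translation (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("""
    CREATE TABLE translation_meta (
        id INTEGER PRIMARY KEY,
        translation_id INTEGER NOT NULL
            REFERENCES translation(id) ON DELETE CASCADE,
        key TEXT,
        value TEXT)
""")
conn.execute("INSERT INTO translation VALUES (1, 'in_progress')")
conn.execute("INSERT INTO translation_meta VALUES (1, 1, 'lang', 'de')")

# Cancel/delete the translation; metadata goes with it automatically.
conn.execute("DELETE FROM translation WHERE id = 1")
orphans = conn.execute("SELECT COUNT(*) FROM translation_meta").fetchone()[0]  # 0
```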
Performance, Stability, and UX Refinements
This sprint included more than 20 targeted improvements to performance and user experience. SQL query timeouts are now enforced at the application layer to prevent long-running queries from blocking connection pools. Chat reconnection logic has been improved to handle network interruptions gracefully, with visual feedback and automatic retry. Embedding test components are now properly unloaded to prevent memory leaks during extended sessions.
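Enforcing a query timeout at the application layer, rather than trusting the database to give the connection back, looks roughly like this. The sketch uses SQLite's progress handler as a portable stand-in for whatever cancellation hook the production database driver exposes; `run_with_timeout` is a hypothetical name:

```python
import sqlite3
import time

def run_with_timeout(conn, sql, timeout_s):
    # Abort the query from the application side once the budget is spent,
    # instead of letting it hold a pooled connection indefinitely.
    deadline = time.monotonic() + timeout_s
    def check():
        # Returning nonzero from the progress handler interrupts the query.
        return 1 if time.monotonic() > deadline else 0
    conn.set_progress_handler(check, 1000)  # called every ~1000 VM opcodes
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.set_progress_handler(None, 0)
```

An interrupted query surfaces as a driver error (`sqlite3.OperationalError` here), which the caller can map to a friendly "query took too long" message while the connection returns to the pool.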
Dark mode styling has been corrected across the platform, particularly in code blocks, prompt improver interfaces, and tabular review panels. Input fields now resize dynamically when long prompts are selected from the library, and folder counts include subfolder totals for easier navigation in large document hierarchies. External website errors (for integration-based assistants) are now handled with user-friendly messaging and do not propagate stack traces to the client.
Chat input has been redesigned to support multi-line composition with inline action buttons, improving usability during complex query formulation. DeepThink mode is now automatically disabled when no documents are attached to a session, preventing unnecessary inference overhead.
What's Next
The next sprint focuses on expanded agent framework capabilities, graph-based citation enrichment, and RBAC refinements for integration roles. We're also exploring lawstronaut semantic search integration for jurisdiction-specific legal research, built on the same private-deployment architecture that governs all AtlasAI inference.
As always, everything we ship remains inside your perimeter. No telemetry, no phone-home, no SaaS dependencies. If you're running AtlasAI in production and want to discuss deployment-specific optimizations, reach out to your customer success contact.
See it in your environment.
AtlasAI deploys inside your Azure tenant. Private by architecture, not policy.
Request a demo →