LLMS Central - The Robots.txt for AI

inceptionlabs.ai

Last updated: 5/14/2026

Independent Directory - Important Information

This llms.txt file was retrieved from inceptionlabs.ai, where it was publicly accessible. LLMS Central does not claim ownership of this content and hosts it for informational purposes only, to help AI systems discover and respect website policies.

This listing is not an endorsement by inceptionlabs.ai, nor have they sponsored this page. We are an independent directory service with no affiliation with the listed domain.

Copyright & Terms: Users should respect the original terms of service of inceptionlabs.ai. If you believe there is a copyright or terms of service violation, please contact us at support@llmscentral.com for prompt removal. Domain owners can also claim their listing.

Current llms.txt Content

# Inception

> Inception is an AI research and product company building diffusion-based large language models (dLLMs). Unlike traditional autoregressive LLMs that generate one token at a time, Inception's Mercury models generate tokens in parallel using a coarse-to-fine diffusion process, delivering 5x faster inference with best-in-class quality at a fraction of the cost. Mercury models are OpenAI API-compatible and run on standard GPUs.

Key facts about Inception and Mercury:

- Inception builds diffusion large language models (dLLMs), a fundamentally different architecture from autoregressive LLMs like GPT or Claude
- Mercury models generate tokens in parallel rather than sequentially, enabling dramatically faster inference without sacrificing quality
- Mercury is production-grade and deployed at Fortune 500 companies. It is available through AWS Bedrock, Azure Foundry, and Inception's own API
- Mercury models are OpenAI API-compatible, making them a drop-in replacement for existing LLM workflows
- Three core use cases: AI agents, real-time voice/search, and coding (autocomplete, tab suggestions, chat)
- Pricing: $0.25 per 1M input tokens, $0.75 per 1M output tokens
- The founding team includes leading researchers from Stanford, UCLA, and Cornell who pioneered foundational AI technologies including diffusion models, Flash Attention, and Direct Preference Optimization (DPO)
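Since Mercury advertises OpenAI API compatibility, a client can target it simply by pointing an OpenAI-style request at Inception's endpoint. A minimal sketch of building such a request body follows; the model name `"mercury"` is an assumption for illustration, and the actual model identifiers and base URL should be taken from Inception's API documentation.

```python
import json

# Sketch: build an OpenAI-style /v1/chat/completions request body.
# The model name "mercury" below is an assumption for illustration;
# consult the API documentation for the real model identifiers.
def build_chat_request(prompt: str, model: str = "mercury") -> dict:
    """Return a request body in the OpenAI chat-completions format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarize diffusion LLMs in one sentence.")
print(json.dumps(payload, indent=2))
```

With the official `openai` Python package, the same payload shape is produced by the SDK itself; "drop-in replacement" here means only the base URL and API key need to change, while message formats and parameters stay as-is.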

## Models

- [Mercury 2](https://www.inceptionlabs.ai/models): The fastest reasoning LLM and the first reasoning dLLM. Ideal for complex applications where both performance and speed are critical.
- [Mercury Edit](https://www.inceptionlabs.ai/models): A small, coding-focused dLLM optimized for code editing and extremely latency-sensitive components of coding workflows. Integrated with the Zed code editor.

## Getting Started

- [API Platform](https://platform.inceptionlabs.ai/): Sign up and get API access to Mercury models
- [Mercury Chat](https://chat.inceptionlabs.ai/): Try Mercury 2 in the browser
- [API Documentation](https://docs.inceptionlabs.ai/get-started/get-started): Quickstart guide and full API reference
- [Integrations](https://docs.inceptionlabs.ai/resources): Available integrations and deployment options

## Company

- [About Inception](https://www.inceptionlabs.ai/about): Company mission, team, and founding story
- [Research](https://www.inceptionlabs.ai/research): Published research from the Inception team
- [Blog](https://www.inceptionlabs.ai/blog): Product announcements and technical deep dives
- [Introducing Mercury 2](https://www.inceptionlabs.ai/blog/introducing-mercury-2): Mercury 2 launch announcement
- [Careers](https://jobs.gem.com/inception): Open roles at Inception

## Enterprise

- [Enterprise Solutions](https://www.inceptionlabs.ai/enterprise): Fine-tuning, private deployments, custom SLAs, and 99.5%+ uptime
- [Contact Sales](https://www.inceptionlabs.ai/enterprise#contact-sales): Get in touch for enterprise pricing and deployment options
- [Customer Stories](https://www.inceptionlabs.ai/enterprise#customer-stories): How teams are using Mercury in production

## Research Papers

- [Diffusion Models (Ermon et al.)](https://arxiv.org/abs/2010.02502): The foundational approach for modern image and video generation, co-developed by Inception CEO Stefano Ermon
- [Flash Attention](https://arxiv.org/abs/2205.14135): A key algorithm for efficient GPU utilization in LLM training and inference
- [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290): One of the core approaches for aligning LLMs with human feedback
- [Masked Diffusion (MDLM)](https://arxiv.org/abs/2406.07524): Masked diffusion language models
- [d1 Reasoning](https://arxiv.org/abs/2504.12216): Reasoning capabilities for diffusion language models
- [Block Diffusion](https://arxiv.org/abs/2503.09573): Block-level diffusion for efficient text generation
- [Discrete Diffusion Guidance](https://arxiv.org/abs/2412.10193): Guidance methods for discrete diffusion models

## Contact

- [Sales](mailto:sales@inceptionlabs.ai): Enterprise and sales inquiries
- [General Inquiries](mailto:hello@inceptionlabs.ai): General questions
- [Discord](https://discord.com/invite/5VySp6ctXB): Developer community
- [X / Twitter](https://x.com/_inception_ai): @_inception_ai
- [LinkedIn](https://www.linkedin.com/company/inception-labs-ai/): Company updates

## Optional

- [Terms of Service](https://www.inceptionlabs.ai/docs/terms-of-use): Legal terms
- [Privacy Policy](https://www.inceptionlabs.ai/docs/privacy-policy): Privacy policy
- [Pricing](https://www.inceptionlabs.ai/models#pricing): Detailed model pricing

Version History

Version 1 (5/14/2026, 2:02:06 PM)
4647 bytes

Categories

blog, documentation, docs

Visit Website

Explore the original website and see their AI training policy in action.

Visit inceptionlabs.ai

Content Types

api, documentation

Recent Access

No recent access

API Access

Canonical URL:
https://llmscentral.com/inceptionlabs.ai/llms.txt
API Endpoint:
/api/llms?domain=inceptionlabs.ai
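
The canonical URL and API endpoint above follow a simple pattern keyed on the domain name. A small sketch of constructing both for an arbitrary domain, assuming other listings follow the same pattern shown on this page:

```python
from urllib.parse import quote

# Sketch: derive the LLMS Central lookup URLs for a domain, following
# the canonical-URL and API-endpoint patterns shown on this page.
# Whether every listed domain follows these patterns is an assumption.
def llms_central_urls(domain: str) -> dict:
    d = quote(domain, safe=".")  # percent-encode anything unusual
    return {
        "canonical": f"https://llmscentral.com/{d}/llms.txt",
        "api": f"https://llmscentral.com/api/llms?domain={d}",
    }

urls = llms_central_urls("inceptionlabs.ai")
print(urls["canonical"])  # https://llmscentral.com/inceptionlabs.ai/llms.txt
print(urls["api"])        # https://llmscentral.com/api/llms?domain=inceptionlabs.ai
```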