

Technical FAQs: Lexopedia AI by DeepBrainz AI

Written by Arunkumar Venkataramanan
Updated today

Technical Specifications

What models does Lexopedia AI use?

Lexopedia AI optimizes and orchestrates multiple advanced LLMs (Large Language Models), including:

  • DeepBrainz-R1: Our custom-built agentic reasoning model based on DeepSeek-R1. We’ve significantly enhanced its capabilities to reduce hallucinations while maintaining and improving reasoning, math, and coding performance. We invite users to experience these improvements firsthand through Lexopedia AI.

  • OpenAI (o4-mini, o3, o1 series, GPT-4.1, GPT-4.5, GPT-4o)

  • Anthropic Claude 4, Claude 3.7, and Claude 3.5 series

  • Google Gemini 2.5 and Gemini 2.0 series

  • xAI Grok 3 series

  • DeepSeek-R1 and DeepSeek V3-0324

  • Meta Llama 4 series and Llama 3.3 series

  • Microsoft MAI-DS-R1 & Phi-4 Reasoning Plus

  • DeepMind Gemma 3 series

  • Our advanced custom efficient models

  • Open-source (SOTA) AI models, and more

These models are continuously updated to ensure optimal speed, accuracy, and relevance for various research and problem-solving tasks.

Frequently Asked Questions (FAQ) about DeepBrainz-R1

Here are some common questions about DeepBrainz-R1:

What is DeepBrainz-R1?

DeepBrainz-R1 is our custom-built agentic reasoning model developed by DeepBrainz AI & Labs, based on DeepSeek-R1. We've significantly enhanced its capabilities to reduce hallucinations while maintaining and improving reasoning, math, and coding performance.

What is the current status of DeepBrainz-R1?

DeepBrainz-R1 is currently in its pre-alpha stage. This means it is actively undergoing post-training and rigorous internal testing to further enhance its capabilities and stability.

How can I access DeepBrainz-R1?

At this time, DeepBrainz-R1 is accessible exclusively within Lexopedia AI. It serves as Lexopedia AI's Default Flagship Thinking model, powering the advanced agentic reasoning capabilities you experience on the platform.

Will DeepBrainz-R1 be available via an API for developers?

Yes, we have plans to launch DeepBrainz-R1 via an API for developers. Once we have a more stable and robust version of DeepBrainz-R1 that meets our high standards, we will make it available for external integration. Please stay tuned for future announcements regarding API access.

What is the Triple Ops Mode architecture?

Lexopedia AI's Triple Ops Mode is its core agentic architecture, powering research workflows across three operational modes:

  1. Autopilot and Expert Search Mode:

    • Handles query execution and performs automated web searches

    • Offers both Quick Search (Efficiency and Speed) and Deep Search (Quality and Accuracy) options

  2. Deep Thinker Mode:

    • Provides various thinking-first research modes including Pro, Flash, Edge, Flow, Essential, and Default with their variants

  3. Deep Research Mode:

    • Delivers specialized reasoning-first deep research modes:

      • Access Pro*, Pro+, Flash+, Edge+, Flow+, Expert, and Fusion with their variants

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open standard designed to connect AI models directly with external data sources and tools. It uses a host-client-server architecture (typically over JSON-RPC) so that AI applications, such as those built on large language models, can access real-time, contextual data, much like plugging into a "USB-C port for AI." This standardized connection enables more accurate and dynamic responses by providing the necessary context while ensuring secure and efficient communication.

Lexopedia AI's MCP host-client-server integration is based on Anthropic's open standard protocol. It is part of Lexopedia AI's architecture and helps ensure access to the most accurate, current, and reliable information while maintaining privacy and security. It is currently being built out to enhance the platform's ability to handle complex research queries, support multi-step reasoning, and streamline integration between various systems.
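For illustration, MCP messages are JSON-RPC 2.0 envelopes exchanged between a client and a server. The Python sketch below shows the general shape of such an exchange; the `tools/call` method and `web_search` tool name are illustrative assumptions, not Lexopedia AI's actual implementation:

```python
import json

def make_request(request_id, method, params):
    """Build a JSON-RPC 2.0 request envelope as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

def make_response(request_id, result):
    """Build the matching JSON-RPC 2.0 response envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "result": result,
    })

# A client asks a server to run a (hypothetical) web-search tool;
# the "id" field lets the client pair the response with its request.
request = make_request(
    1, "tools/call",
    {"name": "web_search", "arguments": {"query": "vector search"}},
)
response = make_response(
    1, {"content": [{"type": "text", "text": "search results here"}]},
)
```

In a real MCP deployment these envelopes travel over a transport such as stdio or HTTP, and the host mediates which servers each client may reach.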

What are Vector Similarity Search and Embeddings?

Vector Similarity Search is a technique that identifies similar items by comparing their positions in a multi-dimensional space. These positions are represented by numerical vectors.

Embeddings are those numerical vectors, generated by machine learning models, that capture the underlying semantic meaning of data such as words or images. Together, they enable systems to find information based on meaning rather than exact text or keyword matches.

Lexopedia AI uses vector similarity search and embeddings to understand the meaning behind queries instead of just matching keywords. Embeddings convert text into numerical vectors that capture semantic content, and vector similarity search finds items with similar vectors. This approach delivers highly relevant, context-aware results, enhancing the accuracy of research outcomes.
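To make the idea concrete, here is a minimal pure-Python sketch of cosine similarity over toy 3-dimensional vectors. Real embedding models produce vectors with hundreds or thousands of dimensions, and production systems use approximate nearest-neighbor indexes rather than the linear scan shown here:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, index):
    """Return the key in `index` whose vector is closest to the query."""
    return max(index, key=lambda k: cosine_similarity(query_vec, index[k]))

# Toy 3-dimensional "embeddings" (invented values for illustration).
index = {
    "cheap flights to Paris": [0.90, 0.10, 0.00],
    "history of aviation":    [0.40, 0.80, 0.10],
    "Paris travel deals":     [0.85, 0.15, 0.05],
}

# Embedding of a query like "affordable airfare to Paris": despite sharing
# no keywords with the winning entry, its vector points the same way,
# so both Paris entries score far above "history of aviation".
query = [0.88, 0.12, 0.02]
best = most_similar(query, index)
```

This is why semantically related phrases match even when their words differ, which keyword search alone cannot do.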

Features and Functionality

What are the 125+ Autopilot modes?

Lexopedia AI provides access to over 125 autopilot modes tailored for various research needs, including:

  • Web Search, Academic Search, Computing Search, Forum Search

  • And many more domain-specific expert autopilots

How does Lexopedia AI handle file uploads?

The Files Upload & Analysis feature allows you to easily upload documents and files by clicking the 'Clip' icon in the file upload section. Lexopedia AI then performs deep analysis to extract key insights and contextual information from your uploaded materials.

How does Lexopedia AI handle mathematics and coding?

Lexopedia AI seamlessly processes complex mathematical equations and coding challenges through:

  • LaTeX support for precise mathematical expressions

  • Integrated computational tools for data analysis

  • Code interpretation, debugging, and optimization

  • Math problem-solving capabilities with step-by-step solutions
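As an example of the kind of expression LaTeX support makes precise, the snippet below (illustrative, not actual Lexopedia AI output) states the quadratic formula and applies it to a small instance:

```latex
% Quadratic formula, then one worked application of it.
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
  \qquad \text{so for } x^2 - 5x + 6 = 0:\quad
  x = \frac{5 \pm \sqrt{25 - 24}}{2} \in \{2, 3\}
\]
```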

Image Uploads

  • Image Upload: Coming soon. You will be able to upload images to be analyzed and integrated into research outputs.

Can Lexopedia AI generate images?

Image generation features are currently in development and will be available soon. Once launched, Lexopedia AI will be able to generate images based on textual descriptions, which will be useful for visual research and documentation.

Advanced Usage and Settings

Personalize Your Research

Add your profile on the Settings page to tailor Lexopedia AI's responses to your needs, such as your preferred location, language, and style.

How do I give Lexopedia AI custom instructions?

Lexopedia AI can be customized through the Settings page. For additional information on custom instructions, please consult the Personalization section under General FAQs. The Custom Autopilot feature will be available soon to enable further customization of Lexopedia Autopilots.

Does Lexopedia AI support conversation and chat?

Yes, Lexopedia AI is designed with conversational capabilities, supporting interactive and context-aware dialogue-based research. It remembers the context of previous queries for a seamless conversation experience.

What is AI Data Retention and how does it work?

AI Data Retention is a setting that allows Lexopedia AI to use your research data to improve its models. You have full control over this setting and can turn it off to exclude your data from being used for model improvement.

What are tokens and how does Lexopedia AI process them?

Tokens are units of text used in natural language processing. Lexopedia AI can process a substantial number of tokens simultaneously, providing detailed responses to complex queries and research requests.
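As a rough illustration only: production LLM tokenizers use subword schemes such as byte-pair encoding, so the whitespace-and-punctuation split below is a simplification of the idea, not how Lexopedia AI actually tokenizes text:

```python
import re

def rough_token_count(text):
    """Very rough token estimate: each word and each punctuation mark
    counts as one unit. Real tokenizers split text into subword pieces,
    so actual counts will differ."""
    return len(re.findall(r"\w+|[^\w\s]", text))

# 9 units: 8 words/punctuation-free pieces plus the comma and period.
rough_token_count("Lexopedia AI processes tokens, not raw characters.")  # → 9
```

Longer queries consume more tokens, which is why models advertise a context window measured in tokens rather than characters.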

Can users adjust model parameters?

Standard users cannot directly adjust parameters like temperature, top p, or stop tokens. These settings are optimized automatically based on the chosen research mode. For advanced customization, developers will be able to explore Lexopedia AI's API when it becomes available.

Model Updates

Lexopedia AI continuously optimizes its existing models and regularly adds new models trained on up-to-date data to enhance performance and capabilities.

Data Privacy and Security

Are third-party model providers training on my data?

No, Lexopedia AI ensures user data is not used for training third-party models, adhering to stringent privacy measures and India's data localization norms.

What data does Lexopedia AI collect?

Lexopedia AI collects minimal user data necessary to improve service quality and personalize your experience. All data is protected under stringent privacy policies, and you have control over how your data is used through the AI Data Retention settings.

How does Lexopedia AI ensure privacy and security?

Lexopedia AI prioritizes privacy and security through:

  • A fully sovereign, privacy-centric architecture with no user tracking

  • Compliance with India's data localization norms

  • Stringent measures to protect user data

  • Collection of minimal user data, protected under strict privacy policies

  • Adherence to robots.txt directives, respecting website rules on automated access

Does Lexopedia AI comply with robots.txt?

Yes, Lexopedia AI adheres to robots.txt directives, respecting website rules on automated access.

Reporting Issues and Support

How do I report technical issues?

For technical issues or bugs, you can:

- Use the In-app messenger at the bottom right corner of the app/site

- Email support@deepbrainz.com with a detailed description of the issue, steps to reproduce it, and relevant screenshots

How do I report security issues?

Report security issues via the In-app messenger at the bottom right corner of the app/site.

Information on any bug bounty programs will be available on our security page.

Integration with Other Services

Can I integrate Lexopedia AI with ChatGPT or other AI platforms?

Currently, Lexopedia AI operates independently and does not support integration with ChatGPT accounts or other third-party AI platforms. It uses its own set of plugins and tools designed specifically for its agentic-reasoning-first approach.

Will there be API access for developers?

Yes, API access is planned for future release. This will provide integration opportunities with open-source LLMs and various research tools, enabling developers to build custom solutions using Lexopedia AI's capabilities.

What browser extensions or tools are available for Lexopedia AI?

Browser extensions and native mobile/desktop apps are currently in development and will be announced soon. These will enhance the accessibility and usability of Lexopedia AI across different platforms and devices.

For further information, subscriptions, and support, visit Lexopedia AI.

Upcoming Features and Development Roadmap

What new features are being developed for Lexopedia AI?

Upcoming features include everything listed under Upcoming Features in the Getting Started and General FAQs, including advanced multi-agent workflows and task automation.

When will subscriptions be available?

The official launch of subscription plans (Starter, Pro, Pro+, Academia Pro+, Enterprise Pro+, Academia Pro*, and Enterprise Pro*) is scheduled for June 2025, per the Getting Started guide.

When will mobile apps be available?

Mobile apps for Android and iOS are currently under development.

For additional technical support, please contact us via the In-app messenger or at support@deepbrainz.com.
