As language models become an everyday tool for developers, 1C developers increasingly face the question: how do you turn AI into a truly useful assistant rather than just a pretty chat? The answer lies in a simple truth: a model is only as good as the context you provide it. This is where the Model Context Protocol (MCP) comes in, a protocol that changes the very nature of interaction between AI and enterprise systems.
Why Context Is Everything
Imagine asking a language model to help you develop a report in 1C. You describe the task, but the model doesn't know what documents are available in your configuration, what the directory structure is, or what information registers exist. The result? The generated code is inaccurate, requires revision, and the time savings evaporate.
It's like hiring a contractor and giving them a vague project description instead of a full technical specification, estimates, and plans. Naturally, they'll ask clarifying questions, work more slowly, and make mistakes.
The Model Context Protocol solves this problem elegantly. MCP is a universal way to automatically provide language models with the information they need to solve your problems. It's a kind of bridge between your AI chat and the outside world: your files, the internet, API services, and database data.
What is MCP and how does it all work?
MCP has existed in the AI ecosystem for quite some time, but it was unavailable to 1C developers until recently. The protocol allows for interaction between two types of participants: MCP clients and MCP servers.
MCP clients are AI applications: Cursor IDE, Claude Desktop, and various web chats. They support the MCP protocol and allow MCP servers to be connected to language models.
MCP servers are services that provide information on request in a specific format. They act as the "layer" between your corporate system and the AI assistant.
What does this look like in action? The user opens Cursor, connects to the 1C MCP server, and sets the task: "Compare the current month's sales with the previous month." The language model understands that it needs data. It knows there's a connected MCP server that can provide it. The model automatically generates a request, the server sends the data, and the model generates a high-quality response based on it.
Important: this occurs in parallel with the user's interaction with the model, and the user sees the entire process. Moreover, if the model is sufficiently intelligent, it can make multiple calls to various MCP server tools, automatically gathering the necessary context.
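Under the hood, this exchange is a series of JSON-RPC 2.0 messages: the client first asks the server which tools exist, then calls one of them. The method names below follow the MCP specification; the tool name and arguments are hypothetical stand-ins for whatever a 1C-side server actually registers.

```python
import json

# The client discovering available tools (MCP "tools/list" method).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A call the model might then generate for the sales-comparison task.
# "get_sales" and its arguments are illustrative, not real tool names.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_sales",                       # hypothetical tool name
        "arguments": {"period": "current_month"},  # hypothetical argument
    },
}

print(json.dumps(call_request, indent=2))
```

A "sufficiently intelligent" model simply issues several such tools/call messages in a row, feeding each result back into its reasoning.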
Technical realities – why is 1C more complicated?
This is where things get interesting. MCP offers two main transport options:
STDIO — This is a console application launched automatically by the client itself. All settings are located in the client configuration. It runs locally, and only you interact with it.
HTTP — a web server that runs separately and can be accessed remotely by multiple users. It's more versatile, but requires security considerations.
1C doesn't support STDIO transport because a configuration can't be run as a console application. HTTP would seem to be the solution, but a second problem arises: the MCP protocol requires either HTTP streaming or Server-Sent Events, both of which rely on long-lived connections with partial data output. 1C can't do this: it always returns a complete response at once.
Sounds like a dead end? Well, it's not. There are solutions.
Architectural solutions: direct connection and proxy
There are two integration schemes:
Option 1: Direct connection
You publish the 1C extension's HTTP service and connect it directly as an MCP server. It's simple, fast, and requires no additional dependencies. This option works with most modern MCP clients.
Option 2: Python Proxy
A small Python script acts as an intermediary between the MCP client and 1C. The proxy fully implements all transport options, including STDIO and HTTP streaming, and communicates with 1C via standard HTTP services. This option is necessary if you work with clients that only support STDIO, or if you want to avoid publishing an HTTP service without authorization.
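Conceptually, the proxy's job is tiny: accept JSON-RPC messages from the client (for example, over STDIO), forward each one to the published 1C HTTP service, and relay the reply back. The sketch below is a simplified illustration, not the actual script from the repository; the endpoint URL and the line-delimited framing are assumptions.

```python
import json
from urllib import request

ONEC_URL = "https://database-address/HS/MCP"  # hypothetical publication URL


def forward(message: dict, post=None) -> dict:
    """Send one JSON-RPC message to the 1C HTTP service, return its reply.

    'post' is injectable so the forwarding logic can be tested offline."""
    if post is None:
        def post(body: bytes) -> bytes:
            req = request.Request(
                ONEC_URL, data=body,
                headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                return resp.read()
    reply = post(json.dumps(message).encode("utf-8"))
    return json.loads(reply)

# In a real STDIO proxy, a loop like this bridges the two transports:
#   for line in sys.stdin:
#       print(json.dumps(forward(json.loads(line))), flush=True)
```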
Enough with the theory. On to practice.
The open-source project on GitHub contains everything you need: the 1C extension, sample configurations, and a Python proxy script. Installation takes just a few steps.
Step 1: Connecting the extension
You download the project, take the extension from the Build folder, and add it to your configuration. The extension contains an HTTP service that needs to be published on the web server.
Step 2: Publishing the HTTP Service
When publishing, make sure the "Publish extension HTTP services by default" option is enabled. Then, test the functionality in your browser by accessing the URL base/HS/MCP/Health. If the status is "Ok," everything is working.
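You can run the same check from a script. Only the /HS/MCP/Health path comes from the article; the exact response body and the helper names below are assumptions.

```python
from urllib import request


def health_url(publication_base: str) -> str:
    """Build the health-check URL for a published MCP extension."""
    return publication_base.rstrip("/") + "/HS/MCP/Health"


def is_healthy(publication_base: str, fetch=None) -> bool:
    """Return True if the service answers 'Ok'; 'fetch' is injectable for tests."""
    if fetch is None:
        def fetch(url: str) -> str:
            with request.urlopen(url) as resp:
                return resp.read().decode("utf-8")
    return "ok" in fetch(health_url(publication_base)).lower()

# Example (hypothetical base URL):
# is_healthy("https://database-address")
```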
Step 3: Setting up authorization
Here's an important point: in the default.vrd file created during publication, you need to explicitly specify a username and password so that the HTTP service can be accessed without interactive authorization.
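As an illustration, the credentials typically go into the ib connection string of the publication descriptor. The fragment below is a sketch; all attribute values are placeholders, so compare it against the default.vrd your own web server publication actually generated.

```xml
<!-- default.vrd (fragment, placeholder values): usr/pwd in the ib string
     let the HTTP service run without prompting the client for credentials. -->
<point xmlns="http://v8.1c.ru/8.2/virtual-resource-system"
       base="/mcpdemo"
       ib="Srvr=&quot;app-server&quot;;Ref=&quot;mcpdemo&quot;;usr=WebService;pwd=secret;">
</point>
```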
Step 4: Connecting to the MCP Client
Open the MCP.json configuration file from the repository. Select the configuration for direct 1C connection and copy it. In Cursor (or another client), create a new MCP connection with your database address: https://database-address/HS/MCP.
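For orientation, a Cursor-style MCP configuration for the direct connection looks roughly like this. The server name and URL are placeholders; use the ready-made entry from the repository's MCP.json rather than this sketch.

```json
{
  "mcpServers": {
    "1c-mcp": {
      "url": "https://database-address/HS/MCP"
    }
  }
}
```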
Step 5: First Launch
Once connected, you immediately gain access to three built-in tools, including a list of configuration metadata and the structure of individual objects. This is already a huge help.
What AI sees initially
Here's the key point: even before you write your first custom tool, the MCP server provides the language model with a complete picture of your configuration structure. The model knows what documents are available, how the references are organized, and the structure of each object.
Testing: open Cursor and ask, "Which MCP tools are available to you?" The model lists them with descriptions. Then ask, "What documents are in the configuration?" Cursor calls the MCP tool, receives the data, and generates a response.
Now imagine the power of this approach when writing code. You ask the model to write a query to retrieve order data. It can automatically access the MCP server, learn the document structure, and write a correct query tailored specifically to your configuration. No typos in the attribute names, no errors in the query logic.
Expanding functionality
The built-in tools are a good start, but the real power of MCP is revealed when you add your own tools.
The mechanism is similar to adding print forms in BSP-based configurations via an extension. You create a data processor of a specific format and include it in the "Tool Containers" subsystem.
The data processor must implement two export methods:
AddTools() — here you describe the tools this processing adds. For each tool, specify its name, description, and required parameters. MCP provides a special JSON schema for this, but the extension encapsulates everything—you simply work with familiar 1C structures.
ExecuteTool() — executes the tool's logic and returns a result. The result is typically a string (often Markdown), but it can also be an image or binary data.
Example: You want to add a "Get Latest Sales" tool. In AddTools(), you specify that the tool requires the following parameters: organization, period. In ExecuteTool(), you execute the corresponding query against the information register and return the data as a Markdown table.
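The real implementation lives in 1C export methods, but the shape of the contract is easy to model. The sketch below mirrors, in Python, what AddTools() declares and what ExecuteTool() returns for the hypothetical "Get Latest Sales" tool; every name and the stub dataset are illustrative.

```python
# What AddTools() effectively declares: a name, a description, and parameters.
TOOL = {
    "name": "get_latest_sales",  # hypothetical tool name
    "description": "Latest sales for an organization over a period",
    "parameters": ["organization", "period"],
}


def execute_tool(organization: str, period: str) -> str:
    """What ExecuteTool() effectively does: run a query, return Markdown.

    A stub dataset stands in for the real information-register query."""
    rows = [("Widget A", 120), ("Widget B", 75)]  # stand-in query result
    lines = [
        f"Sales for {organization}, {period}",
        "",
        "| Item | Amount |",
        "|------|--------|",
    ]
    lines += [f"| {item} | {amount} |" for item, amount in rows]
    return "\n".join(lines)


print(execute_tool("Romashka LLC", "May 2025"))
```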
Now the model can call this tool itself, get the data and use it in its response.
Application in real development
The technology is interesting, but what does it give you in practice?
Scenario 1: AI assistant in development
A developer opens Cursor and starts working on a configuration. They need to create a sales report. Instead of digging through documentation or trying to remember attribute names, they ask the AI: "Write a query to retrieve all 'Customer Order' documents for the last month." The model queries the MCP server, learns the document structure, and writes correct code tailored to the configuration.
Scenario 2: Automating Data Analysis
You connect a tool that provides configuration usage statistics. The model can analyze this data and suggest optimizations.
Scenario 3: Training newcomers
A novice developer connects to an MCP server with tools that provide information about the configuration structure. Now they can ask the AI about any part of the system, and the AI will provide a precise answer based on the actual structure, not on general knowledge.
Scenario 4: Integration with external systems
You add tools that send data to external APIs and receive information from partner systems. The MCP server becomes the central hub for interaction between 1C and AI systems.
Conclusions and prospects
The Model Context Protocol isn't just another technological innovation. It's a fundamental shift in how language models can work with enterprise systems. Instead of relying on general knowledge and hoping the model will guess how your system is structured, you give it direct access to the information it needs to perform well.
This is especially significant for 1C developers. The platform is known for its specificity and quirks. AI models often perform worse with it than with more standard technologies. MCP levels the playing field, allowing models to work with 1C as effectively as with any other system.
The GitHub project is actively developing, and the community is growing. This is a good time to start experimenting, adding your own tools, and sharing ideas on how to apply MCP in your projects.
The future of 1C development will look like this: a developer creates a requirement or writes initial code, an AI assistant automatically retrieves all the necessary context via MCP, and the result is achieved on the first try. This era has already arrived. All that remains is to start using it.
Just contact us, and we'll help you choose the solution that best fits your needs.





