Building Our MCP Server for AI Agents: A Developer's Journey

Learn how we built Semantic Pen's MCP server to enable AI agents to create and manage content, the challenges we faced, and the solutions we implemented.
Six months ago, most of us hadn't even heard of MCP (Model Context Protocol).
There was no standard playbook, no "right" way to expose tools to LLMs. Just a few GitHub repos, some early experiments, and a lot of shared frustration in Discord threads. But something about it clicked — the idea that AI agents could go beyond chat and actually do things, securely and reliably.
As a content automation platform, we knew we had to take MCP seriously. Because when agents start taking actions like creating articles and managing content projects, everything comes back to trust, authentication, and access control.
So we rolled up our sleeves and built an MCP server for Semantic Pen. Along the way, we learned from the community's growing pains, contributed back fixes, and most importantly, shaped the infrastructure we'd want other teams (and their agents) to rely on.
Good news! We've open-sourced our MCP server implementation. You can find it on GitHub at https://github.com/pushkarsingh32/semantic-pen-mcp-server.
What is MCP and Why Does It Matter?
The Model Context Protocol (MCP) is an emerging standard that enables AI agents to interact with external tools and services. It's what allows Claude, ChatGPT, or any other AI assistant to not just talk about doing things, but to actually do them.
For Semantic Pen, this meant creating a bridge between AI agents and our content creation platform. With MCP, we could let agents:
- Browse content projects
- Create SEO-optimized articles
- Manage article generation
- Access and use generated content
This isn't just a convenience feature — it's a fundamental shift in how content can be created and managed. Instead of switching between an AI assistant and our platform, users can now have their AI assistant directly interact with Semantic Pen.
Step 1: Getting the First Tools Working
We started small — just a couple of basic tools wired up to the MCP server. These weren't production-critical yet. The goal was to validate:
Can agents call our tools, and can we control what happens when they do?
Each tool had a clear structure:
- A name
- A short description
- An input schema
- A run function to actually do the thing
Here's what one of our earliest tools looked like — get_projects:
```typescript
{
  name: "get_projects",
  description: "Get all projects from your article queue",
  inputSchema: {
    type: "object",
    properties: {}
  }
}
```
The implementation would fetch projects from our API and format them for the agent:
```typescript
private async getProjects() {
  const result = await this.makeRequest<ProjectQueueResponse>('/article-queue');

  if (!result.success || !result.data) {
    // Surface the failure to the agent instead of silently returning nothing
    return {
      content: [{ type: "text", text: `❌ Failed to fetch projects: ${result.error}` }]
    };
  }

  const projects = result.data.data.projects;

  // The queue returns one row per article; collapse rows into unique
  // projects keyed by project_id, counting articles as we go
  const uniqueProjects = projects.reduce(
    (acc: Record<string, Project & { totalArticles: number }>, project) => {
      if (!acc[project.project_id]) {
        acc[project.project_id] = {
          ...project,
          totalArticles: 1
        };
      } else {
        acc[project.project_id].totalArticles += 1;
      }
      return acc;
    },
    {}
  );

  const projectList = Object.values(uniqueProjects).map(project =>
    `📁 **${project.project_name}** (${project.totalArticles} articles)\n   Project ID: ${project.project_id}\n   Latest Article: ${project.extra_data.targetArticleTopic}\n   Created: ${new Date(project.created_at).toLocaleDateString()}\n   Status: ${project.status}`
  ).join('\n\n');

  return {
    content: [
      {
        type: "text",
        text: `📋 **Your Projects** (${Object.keys(uniqueProjects).length} projects, ${result.data.count} total articles)\n\n${projectList || 'No projects found.'}`
      }
    ]
  };
}
```
We used TypeScript interfaces to define our data structures and ensure type safety:
```typescript
interface Project {
  id: string;
  created_at: string;
  status: string;
  statusDetails: string;
  progress: number;
  error: string | null;
  project_id: string;
  project_name: string;
  extra_data: {
    targetArticleTopic: string;
  };
  article_count: number;
}
```
Having all tools registered declaratively made it easy to reason about what we were exposing and how.
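To make that concrete, here's a minimal sketch of the registration pattern using the official MCP TypeScript SDK (the tool list and handler wiring are abbreviated; the full version is in the repo):

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "semantic-pen", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// One declarative list: what we expose here is exactly what agents can discover
const TOOLS = [
  {
    name: "get_projects",
    description: "Get all projects from your article queue",
    inputSchema: { type: "object", properties: {} }
  }
  // ...the other tools
];

server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: TOOLS }));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  switch (request.params.name) {
    case "get_projects":
      return getProjects(); // the handler shown above
    default:
      throw new Error(`Unknown tool: ${request.params.name}`);
  }
});

await server.connect(new StdioServerTransport());
```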
Step 2: Authentication and Security
With a few tools wired up, it was time to protect them.
Since the MCP server was going to be a public-facing surface, callable by real agents over the internet, we had no business leaving it open. This wasn't just for demo use; it was a real integration point, and we treated it like one from the start.
We implemented a straightforward but effective authentication system:
- API Key Verification: Every request to our MCP server requires a valid Semantic Pen API key
- Environment Variables: The API key is securely passed via environment variables
- Automatic Verification: The server verifies the API key on startup
```typescript
private async initializeApiKey(): Promise<void> {
  if (!this.apiKey) {
    // Logs go to stderr: stdout is reserved for MCP protocol messages
    // when the server runs over stdio
    console.error("⚠️ SEMANTIC_PEN_API_KEY environment variable not set");
    return;
  }

  try {
    const result = await this.makeRequest<ApiKeyVerificationResponse>('/verify-key');
    if (result.success && result.data) {
      this.isApiKeyVerified = true;
      console.error(`✅ Semantic Pen API key verified for user: ${result.data.userId}`);
    } else {
      console.error(`❌ API key verification failed: ${result.error}`);
    }
  } catch (error) {
    console.error(`❌ Failed to verify API key: ${error}`);
  }
}
```
This approach ensures that only authorized users can access the MCP server, and that all actions are properly attributed to the correct user account.
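The makeRequest helper that every tool goes through is where the key actually gets attached. Here's a simplified sketch of what that method looks like inside our server class (the Bearer scheme and error shape are illustrative; see the repo for the real implementation):

```typescript
interface ApiResult<T> {
  success: boolean;
  data?: T;
  error?: string;
}

// Simplified sketch: every request carries the API key, and failures are
// normalized into { success: false, error } so tool handlers never have
// to unpack raw fetch errors themselves.
private async makeRequest<T>(endpoint: string): Promise<ApiResult<T>> {
  try {
    const response = await fetch(`${this.baseUrl}${endpoint}`, {
      headers: {
        Authorization: `Bearer ${this.apiKey}`, // illustrative auth scheme
        'Content-Type': 'application/json'
      }
    });
    if (!response.ok) {
      return { success: false, error: `HTTP ${response.status}: ${response.statusText}` };
    }
    return { success: true, data: await response.json() as T };
  } catch (error) {
    return { success: false, error: String(error) };
  }
}
```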
Step 3: Designing for Agent Experience
One thing we learned quickly was that agents have different needs than human users. When designing our tools, we had to consider:
- Clear, descriptive responses: Agents need well-structured information they can reason about
- Rich formatting: Using markdown formatting to highlight important information
- Contextual guidance: Providing next steps and related actions
- Error handling: Clear error messages that explain what went wrong
For example, when an agent creates a new article, we don't just return a success message. We provide:
- The article ID for future reference
- The current status of the article
- All the settings that were applied
- Instructions on how to check progress and retrieve the content
```typescript
return {
  content: [
    {
      type: "text",
      text: `✅ **Article Created Successfully!**\n\n**Topic:** ${args.targetArticleTopic}\n**Article ID:** ${result.data.id}\n**Status:** ${result.data.status}\n**Settings:**\n- Keyword: ${args.targetKeyword || 'None'}\n- Word Count: ${args.wordCount || 1000}\n- Language: ${args.language || 'English'}\n- Type: ${args.articleType || 'Article'}\n- Tone: ${args.toneOfVoice || 'Professional'}\n\n🔄 Your article is being generated. Use \`get_article\` with ID \`${result.data.id}\` to check progress and retrieve the content.`
    }
  ]
};
```
This rich context helps agents understand what happened and what they can do next, making for a much smoother interaction.
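Errors get the same treatment. Instead of letting an exception leak through as a stack trace, we return structured, readable failures. A sketch of the pattern (MCP tool results support an isError flag that clients use to distinguish failures from successes):

```typescript
// Sketch: a failed tool call returns actionable text plus MCP's isError
// flag, so the agent knows both that it failed and what to try next
return {
  isError: true,
  content: [
    {
      type: "text",
      text: `❌ **Failed to create article**\n\n${result.error}\n\n💡 Check that your API key is valid and the project exists, then try again.`
    }
  ]
};
```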
Step 4: Testing with Real Agents
Once our tools were wired up and secured, we needed to answer the real question: "Can agents actually use this?"
It's one thing to build a working endpoint. It's another to see how an actual agent interacts with your tool — what it discovers, what it fails on, and how it reacts to unclear definitions or missing information.
So we tested with real clients:
- Claude Desktop
- Windsurf
- ChatGPT (via plugins)
What we caught during testing:
- Unclear descriptions: Tools that weren't being used because their purpose wasn't clear
- Missing guidance: Agents not knowing what to do with the returned data
- Format issues: Responses that were technically correct but not optimized for agent consumption
We'd tweak a tool, test it with an agent, and immediately see how it reacted. That feedback loop — build, test, refine — helped us debug fast, ship safely, and shape better defaults.
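Most of the fixes were small. A typical one: a tool description that made sense to us but gave the agent nothing to match a user's request against. Here's an illustrative before-and-after (hypothetical wording, but representative of the changes we made):

```typescript
// Before: accurate, but gives the agent no hint about when to reach for it
description: "Get article"

// After: says what comes back, when to use it, and documents the one
// parameter so the agent can fill it in correctly
description: "Retrieve a specific article by ID, including its full generated content. Use this after create_article to check progress and fetch the result.",
inputSchema: {
  type: "object",
  properties: {
    articleId: {
      type: "string",
      description: "The article ID returned by create_article"
    }
  },
  required: ["articleId"]
}
```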
Our MCP Server Today
Today, our MCP server provides five core tools:
- get_projects: Browse all your content projects
- get_project_articles: View articles within a specific project
- search_projects: Find projects by name
- create_article: Generate a new SEO-optimized article
- get_article: Retrieve a specific article with full content
These tools enable AI agents to perform the most common content creation and management tasks directly, without having to switch contexts.
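As an example of the surface area, here's a sketch of create_article's definition, reconstructed from the parameters echoed in the success message earlier (exact descriptions in the repo may differ):

```typescript
// Sketch of create_article's schema, reconstructed from the parameters the
// success message echoes back; only the topic is required, everything else
// has a sensible default
{
  name: "create_article",
  description: "Generate a new SEO-optimized article",
  inputSchema: {
    type: "object",
    properties: {
      targetArticleTopic: { type: "string", description: "What the article should be about" },
      targetKeyword: { type: "string", description: "Primary SEO keyword (optional)" },
      wordCount: { type: "number", description: "Target length; defaults to 1000" },
      language: { type: "string", description: "Defaults to English" },
      articleType: { type: "string", description: "Defaults to Article" },
      toneOfVoice: { type: "string", description: "Defaults to Professional" }
    },
    required: ["targetArticleTopic"]
  }
}
```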
Integration Made Simple
We've made it incredibly easy to integrate our MCP server with popular AI coding assistants:
For Cursor
```json
{
  "mcpServers": {
    "semantic-pen": {
      "command": "npx",
      "args": ["-y", "semantic-pen-mcp-server@latest"],
      "env": {
        "SEMANTIC_PEN_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
For Claude Code
```json
{
  "mcpServers": {
    "semantic-pen": {
      "command": "npx",
      "args": ["-y", "semantic-pen-mcp-server@latest"],
      "env": {
        "SEMANTIC_PEN_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
For Windsurf
```json
{
  "mcpServers": {
    "semantic-pen": {
      "command": "npx",
      "args": ["-y", "semantic-pen-mcp-server@latest"],
      "env": {
        "SEMANTIC_PEN_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
What's Next
We're far from done. This first version was about standing something up, proving that a secure, agent-friendly MCP server could be more than a demo. Now we're pushing further.
Next on our list:
- More tools: Expanding our capabilities to include more content operations
- Smarter responses: Enhancing the quality and usefulness of information returned to agents
- Better error handling: More graceful handling of edge cases and unexpected inputs
- Usage analytics: Understanding how agents are using our tools to guide future development
A Few Things We Learned
If you're thinking of building your own MCP server or even just experimenting, here's what we'd tell you:
- Start with authentication: Don't leave security for later
- Think like an agent: They need different information than humans
- Format matters: Rich, well-structured responses make a huge difference
- Test with real agents: There's no substitute for seeing how they actually interact with your tools
- Iterate quickly: The standard is still evolving, so be prepared to adapt
We're still building, but we're excited about where this is going. If the first era of software was humans calling APIs, and the second is agents calling tools, then MCP is where it all gets real.
Summing up
Building an MCP server for Semantic Pen has been an exciting journey into the future of AI-human collaboration. By enabling AI agents to directly interact with our content creation platform, we're opening up new possibilities for automation and productivity. The MCP standard is still evolving, but it's already clear that this approach to tool integration will be fundamental to how we work with AI in the coming years.
We're proud to have open-sourced our MCP server implementation, allowing developers to learn from our work and contribute to the growing MCP ecosystem. Check out our repository at https://github.com/pushkarsingh32/semantic-pen-mcp-server.
Frequently Asked Questions
What is an MCP server?
An MCP (Model Context Protocol) server provides a standardized way for AI agents to interact with external tools and services, allowing them to take actions beyond just conversation.
How do I get started with Semantic Pen's MCP server?
Simply add the configuration to your AI coding assistant (like Cursor, Claude Code, or Windsurf), and provide your Semantic Pen API key. You can find detailed setup instructions in our GitHub repository.
Is the MCP server open-source?
Yes, we've open-sourced our MCP server implementation. You can find it on GitHub at https://github.com/pushkarsingh32/semantic-pen-mcp-server.
Is the MCP server secure?
Yes, our MCP server requires API key authentication and all actions are properly attributed to your Semantic Pen account.
What can AI agents do with Semantic Pen's MCP server?
Agents can browse your content projects, create new SEO-optimized articles, manage article generation, and access generated content.
