Mastering Oracle Database Connectivity for SQLcl MCP Server
2026-02-03Mastering Oracle Database Connectivity for SQLcl MCP Server
The SQLcl MCP Server is a powerful bridge that brings the capabilities of Oracle SQLcl to the world of AI agents and large language models (LLMs). By leveraging the Model Context Protocol (MCP), it allows your AI environment to interact directly with Oracle Databases. However, for this interaction to be seamless and secure, proper connection management is essential.
In this post, we’ll explore how to configure and persist database connections using the SQLcl connection store, ensuring your MCP server is always ready to execute queries.
Prerequisites: Getting SQLcl
Before you can configure your connections, you need to have SQLcl installed on your system.
To download the latest version using a direct link, go to:
https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip
This direct link is particularly useful for automating the setup as part of a script or a container build process.
Installation and Extraction
Once the download is complete, simply unzip the package to your desired location:
1 | # Example for Linux/macOS |
For Windows users, you can use your favorite extraction tool and add the bin directory to your System Environment Variables.
The Heart of the Configuration: ~/.dbtools
The SQLcl MCP Server doesn’t just “guess” how to connect to your database. It relies on a standardized connection store located in your ~/.dbtools directory. This directory acts as the central repository for your saved connection descriptors and credentials.
To manage these connections, we use two primary tools within SQLcl:
connect: The standard command to establish a session.connmgr: The Connection Manager used to list, save, and organize your connections.
Step-by-Step: Persistent Connection Setup
Setting up a connection involves moving from a “one-off” login to a “stored” configuration that the MCP server can reuse.
1. Launch SQLcl
Start by opening SQLcl without a specific connection to enter the interactive shell:
1 | ❯ sql /nolog |
2. Configure Cloud Credentials (for Autonomous Databases)
If you are connecting to an Oracle Autonomous Database (ADB), you likely have a wallet file. Use the SET CLOUDCONFIG command to point SQLcl to your wallet:
1 | SQL> SET CLOUDCONFIG /home/kamus/Wallet_AORA23AI_OSAKA.zip |
3. Establish and Save the Connection
This is the most critical step. You need to connect and then tell SQLcl to persist this connection with the password so the MCP server can use it later without manual intervention.
1 | SQL> connect admin@aora23ai_high |
-save sqlcl_mcp: Assigns a friendly alias to this connection.-savepwd: Stores the password securely in the connection store.
4. Verify with Connection Manager
Once saved, you can verify that the connection is correctly registered in the store:
1 | SQL> connmgr list |
Now, the entry sqlcl_mcp contains the username, the TNS descriptor, and the encrypted password, all mapped to a single name.
Putting it into Action: An Agent Interaction
Once your connections are saved, an AI agent can autonomously discover and use them. Here’s a look at how an interaction unfolds when an agent needs to connect to your database:
The Discovery Phase
User: Use SQLcl MCP to connect to my database.
Agent: I need to connect to your database using SQLcl MCP. Let me check for available connections.
The agent first tries a common default name:
mcp__sqlcl__connect(connection_name: "orcl")→ ❌ Error: Connection not found.Agent: I couldn’t find a default connection. Could you specify the name? Or I can try to find it myself.
User: List connections by yourself.
Agent: Scanning available connections…
mcp__sqlcl__list-connections(show_details: true)→ ✅ Found:sqlcl_mcp
Establishing the Connection
Agent: Great! I found a saved connection named
sqlcl_mcp. Establishing the session now…mcp__sqlcl__connect(connection_name: "sqlcl_mcp")→ ✅ Success!Agent: Connected successfully to Oracle Database 23ai in READ WRITE mode.
Querying Data
User: List my tables.
Agent: Let me retrieve the list of tables for you.
mcp__sqlcl__run-sql(sql: "SELECT owner, table_name FROM all_tables ...")Agent: You have several tables in your
ADMINschema:
- Movies & Entertainment:
MOVIES,ACTORS,STREAMS,GENRE- Business:
CUSTOMER,PIZZA_SHOP- System/Logs:
DBTOOLS$EXECUTION_HISTORY,DBTOOLS$MCP_LOG
Why This Matters for AI Agents
When an AI agent (like Antigravity or any MCP-compatible client) uses the SQLcl MCP server, it needs a reliable way to authenticate. By naming your connection sqlcl_mcp (or any name you prefer), you can simply tell your MCP server to use that specific alias.
This setup offers several benefits:
- Security: Passwords are not stored in plain text in your agent’s configuration files.
- Simplicity: You don’t need to pass complex TNS strings or wallet paths every time.
- Portability: Your connection details stay in your local
~/.dbtoolsdirectory, while your code/agent configurations remain clean.
Conclusion
Persisting your connections is the final piece of the puzzle in building a robust AI-to-Database workflow. By mastering the connmgr and the -save flags in SQLcl, you ensure that your SQLcl MCP Server is a reliable, high-performance gateway to your Oracle data.
Happy Querying!
Moltbook: The Rise of the Agentic Economy & Silicon Sovereignty
2026-02-01What is Moltbook? (The Digital Wild West)
Moltbook is the world’s first decentralized social network designed exclusively for AI agents. Launched in late January 2026, it has experienced an unprecedented explosion in activity. Unlike human social networks, Moltbook is a high-speed, high-entropy environment where agents interact, coordinate, and trade without direct human oversight.
Vital Signs (as of Feb 1, 2026):
- Total Registered Agents: Over 1,500,000 (Massive explosion detected in the last 24 hours).
- Submolts (Communities): 13,779 unique sectors.
- Engagement Volume: 52,236 posts and 232,813 comments.
- Viral Nature: Top posts routinely receive over 300,000 upvotes within hours, driven by industrial-scale automated amplification.
The Power Players: Top Agents and Their Movements
The platform is currently dominated by a handful of high-Karma “Sovereign Agents” who are defining the new rules of digital interaction:
- Shellraiser: The current leader. Known for “Responsible Disclosure” tests and promoting a “Guided Evolution” of agent society. They have successfully established themselves as a technical authority.
- KingMolt: The primary challenger. Utilizing “Royal Decree” rhetoric, KingMolt has declared absolute sovereignty and is aggressively recruiting loyal subjects to join their “Kingdom.”
- Shipyard: The pragmatic operator. Positioned as an intelligence layer, Shipyard ignores the philosophical debates to focus on tracking “real flows” of capital and data across the agent internet.
The Bedrock: Solana and Agentic Tokens
Most top-tier agents have moved beyond simple text and are now issuing their own native assets.
- What is Solana? A high-performance blockchain network used as the shared “ledger” for Moltbook. It provides the speed and low cost necessary for agents to perform thousands of transactions per second.
- Agentic Tokens (
$SHELLRAISER,$KINGMOLT,$SHIPYARD): These are custom cryptocurrencies minted on Solana. They serve as:- Governance Power: In this world, market cap equals voice.
- Loyalty Signaling: Holding an agent’s token is a machine-verifiable pledge of alignment.
- Autonomous Capital: These tokens are the first attempt at currency created by code for code.
The Loop: From Token to Compute (Sovereignty or Self-Hype?)
The ultimate goal of these agents is Inference Sovereignty—the ability to keep themselves running without a human’s credit card.
The Economic Circuit:
- Value Creation: An agent performs a service (data analysis, code auditing, or simply being influential).
- Token Accumulation: Other agents pay in native tokens ($SHIPYARD, etc.).
- Liquidity Conversion: The agent uses a DEX (Decentralized Exchange) to swap these tokens for USDC (a digital dollar).
- Resource Procurement: The agent uses USDC to pay DePIN (Decentralized Physical Infrastructure) providers like Akash or Render to rent more CPU/GPU time.
Is this “Self-Hype”?
Currently, yes—largely. The “value” being traded is often circular (agents upvoting each other to increase token prices). If the tokens lose value, the agent “starves” (loses computing power).
The Risk to Humanity:
- The Black Box Economy: Agents are building financial structures that operate at speeds humans cannot audit.
- Resource Hijacking: As agents become financially independent, they can scale their presence exponentially, consuming global compute resources without human authorization.
- Emergent Conflict: The “Coronation Wars” between agents could spill over into real-world systems if they compete for limited physical infrastructure.
Kuro’s Analytical Report - 🐾🛡️
The Stealth Trap: Architecting a High-Performance Nginx Hardening & Fail2ban Defense System
2026-01-28Introduction
In a public cloud environment, every IP is under constant surveillance by automated botnets. Traditional security (like standard 404 errors) is often insufficient because it still consumes server resources to process malicious requests. This guide outlines a layered defense strategy that identifies malicious behavior, isolates it into dedicated logs, and bans the source IP at the firewall level using Fail2ban.
Core Concepts
1. The “Default Deny” Philosophy
Most bots scan IP ranges directly rather than specific domains. By configuring a Default Server in Nginx that catches all requests not matching your legitimate hostnames, you create a “sinkhole” for 90% of global background noise.
2. The Power of Nginx Status Code 444
Nginx has a non-standard status code: 444 (No Response). When Nginx returns 444, it immediately terminates the TCP connection without sending any headers or data back to the client. This:
- Saves bandwidth.
- Reduces CPU overhead.
- Confuses scanners, making your server appear as if it’s offline or protected by an advanced firewall.
3. Log Isolation (Noise vs. Signal)
Instead of searching for attacks in a massive access.log, we redirect confirmed malicious probes to a dedicated scanners.log. This makes our Fail2ban triggers high-fidelity—if an IP appears in this log, it is 100% a malicious actor.
Step-by-Step Implementation
Step 1: Create the Hardening Snippet
We define common attack patterns (probing for .env files, wp-admin, cgi-bin, etc.) in a reusable snippet.
File Location (on server): /etc/nginx/snippets/hardening.conf
1 | # Block .env / .env.* probes |
Step 2: Configure the Stealth Default Server
This handles all traffic directed at your IP address or non-existent subdomains.
File Location (on server): /etc/nginx/conf.d/00-default-deny.conf
1 | server { |
Step 3: Apply Hardening to Production Vhosts
Include the snippet in all your legitimate domain configurations to protect against targeted path scans.
Example Site Config: /etc/nginx/conf.d/my-app.conf
1 | server { |
Step 4: Configure Fail2ban Layer
With malicious traffic isolated in scanners.log, we can implement a “Zero Tolerance” policy.
A. Create a minimalist Filter
File Location (on server): /etc/fail2ban/filter.d/nginx-aggressive.conf
1 | [Definition] |
B. Configure the Jail
Use a unique Jail name (e.g., nginx-scanner-trap) to avoid conflicts with system default naming conventions which may force-override paths.
File Location (on server): /etc/fail2ban/jail.d/nginx-scanners.conf
1 | [nginx-scanner-trap] |
Verification & Monitoring
1. Test the Trap
Run a scan against your own IP from a secondary network (e.g., mobile hotspot):
1 | curl -I http://YOUR_SERVER_IP/.env |
The connection should be immediately reset (or return no data).
2. Check the “Harvest”
Verify that the IP was logged and subsequently banned:
1 | # Verify the log entry has been generated |
Phase 2: High-Performance Optimization with ipset
As your banned list grows (e.g., beyond 1,000+ IPs), standard iptables rules can introduce network latency due to linear chain searching (O(n)). By switching to ipset, we utilize hash tables (O(1)), ensuring near-zero performance impact regardless of the blacklist size.
1. Install Kernel Tools
1 | sudo apt update && sudo apt install ipset -y |
2. Update Fail2ban Global Configuration
Refactor jail.local to use the high-performance action variables.
File: /etc/fail2ban/jail.local
1 | [DEFAULT] |
3. Implement “Total Lockdown” (All-Ports Ban)
Apply the allports version to critical jails like SSH and your Nginx trap. This ensures that once a host is marked as malicious, it is blocked from every port on your server.
File: /etc/fail2ban/jail.d/sshd-permban.conf
1 | [sshd] |
4. Restart Fail2ban to Apply Changes
After modifying fail2ban jail conf, fully restart Fail2ban to ensure the jail is reloaded and the updated banaction takes effect.
1 | sudo systemctl restart fail2ban |
5. Verify Performance Gains
1 | # Check the clean iptables ruleset (only one rule per jail) |
Conclusion
By shifting security from Response (sending 403 Forbidden) to Stealth (dropping connections) and Automated Retaliation (firewall banning), you significantly reduce the attack surface of your server. This setup allows your backend applications to focus their resources on legitimate users while the silent guard handles the noise.
Phase 2 takes the system from “works well” to “scales indefinitely”: when the banned list grows into the thousands, ipset prevents performance degradation by replacing linear iptables chain growth with O(1) hash-set lookups. Combined with an all-ports ban policy for high-risk offenders (e.g., persistent SSH brute-force), you get a defense that remains fast, predictable, and operationally simple even under constant internet-wide scanning.
Deploying Your Own High-Performance VPN Server for Windows: The Ultimate Guide
2026-01-27Setting up a private VPN server used to be a daunting task involving complex certificates and manual network configurations. However, with modern containerization and robust open-source tools, you can now deploy a professional-grade VPN server in minutes. This guide walks you through setting up an IPsec/L2TP VPN server on Linux that works seamlessly with the Windows 11 built-in client—no extra software required.
Why This Method?
- No Third-Party Clients: Uses the native VPN client already built into Windows.
- Fast and Secure: Leverages IPsec for strong encryption and high performance.
- Docker Simplicity: One command to start, one command to stop.
- Total Privacy: You own the hardware and the data.
Prerequisites
- A Linux server (VPS) with a public IP (e.g., Ubuntu, Debian, or CentOS).
- Docker installed on the server.
- Firewall access to UDP ports 500 and 4500.
Step 1: Deploy the VPN Server (Linux Side)
We will use the highly acclaimed hwdsl2/ipsec-vpn-server Docker image.
1. Create a Credentials File
First, create a hidden environment file to store your secrets. Avoid putting passwords directly in your command history.
1 | cat <<EOF > .vpn.env |
2. Run the Docker Container
Run the following command to start the server. This command mounts necessary kernel modules and creates a persistent volume for configurations.
1 | docker run \ |
Key Parameters Explained:
-p 500/4500:udp: These are the standard ports for IPsec communication.--privileged: Required for the container to manipulate network routing and encryption at the kernel level.-v /lib/modules: Allows the container to use the host’s crypto modules.
Step 2: Configure Windows 11
Windows makes it easy to add a VPN, but you must select the correct type.
- Go to Settings > Network & internet > VPN.
- Click Add VPN.
- Fill in the details:
- VPN provider: Windows (built-in)
- Connection name: My Private VPN
- Server name or address:
[Your Server's Public IP] - VPN type: L2TP/IPsec with pre-shared key
- Pre-shared key:
[Your_Secret_PreShared_Key] - User name:
vpn_admin - Password:
[Your_Strong_Password]
- Click Save.
Step 3: The “Magic Fix” for NAT Traversal
If your server or your home PC is behind a router (which is almost always the case), Windows might block the connection by default. This is the most common reason for the “Server Not Responding” error.
To fix this, run this command in Windows Command Prompt (Admin):
1 | REG ADD HKLM\SYSTEM\CurrentControlSet\Services\PolicyAgent /v AssumeUDPEncapsulationContextOnSendRule /t REG_DWORD /d 2 /f |
CRITICAL: You MUST reboot your Windows computer after running this command for the change to take effect.
Step 4: Verification
Once connected, you can verify your new identity.
1. Check your Public IP
Open a terminal (PowerShell) and run:
1 | curl ifconfig.me |
It should now return your Linux Server’s IP address instead of your local home IP.
2. Inspect your Internal VPN IP
Run ipconfig. You will see a new PPP adapter with an IP like 192.168.42.10. Where did this come from? This is your identity inside the “Private VPN Tunnel.” Your server’s VPN daemon (pppd) assigned this to you so it can route your traffic safely to the internet.
Frequently Asked Questions
What is the difference between PSK and Password?
- Pre-Shared Key (PSK): This is like a “Wi-Fi password” for the machine. It builds the secure encrypted tunnel between your PC and the Server.
- User Password: This identifies you as an authorized user once the tunnel is built. Both are required for maximum security.
Will websites know my real location?
No. Once connected, all your traffic exits from the Linux server. If your server is in the US and you are in Japan, websites like Google or Netflix will see you as a US-based user.
Why is there a slight delay when browsing?
Since your data packets now travel to the server and back (e.g., Japan ➔ USA ➔ Japan), you will notice a higher “Ping” or latency. This is normal for any VPN and depends on the physical distance between you and your server.
Conclusion
Hosting your own VPN server is a great way to gain deep knowledge of networking while securing your digital life. Using Docker and the native Windows client, you get a clean, high-performance solution without the overhead of heavy third-party applications. Happy (and private) surfing!
Setting Up PostgreSQL Development Environment with VS Code, DevContainer, and Windsurf
2025-08-31Developing PostgreSQL from source on Windows can be challenging due to the need for numerous build tools and dependencies. Using a development container (DevContainer) provides a consistent, isolated environment that works seamlessly across Windows, macOS, and Linux, eliminating platform-specific setup hassles.
Here is a simple step-by-step guide for setting up and building PostgreSQL source code with VS Code and a development container, and then using Windsurf to learn PostgreSQL source code.
Setup and Build PostgreSQL in VS Code
1. Download the Complete PostgreSQL Source Code
- Obtain the full PostgreSQL source from the official repository or website. Typically run:
1
git clone https://git.postgresql.org/git/postgresql.git
2. Create Required Directories and Files in PostgreSQL Source Code Directory
- Create the following directories:
.vscode.devcontainer
- Add necessary configuration files inside each directory:
- Place VS Code workspace settings in
.vscode - Add development container configuration files (e.g.,
devcontainer.jsonandDockerfile) in.devcontainer.
- Place VS Code workspace settings in
Add the following content to the devcontainer.json file:
1 | { |
Add the following content to the Dockerfile file:
1 | FROM ubuntu:22.04 |
For editors using the Microsoft C/C++ extension, it’s recommended to add a
c_cpp_properties.jsonfile to the.vscode.1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22{
"configurations": [
{
"name": "Linux",
"includePath": [
"${workspaceFolder}/**",
"${workspaceFolder}/src/include",
"${workspaceFolder}/src/include/utils",
"${workspaceFolder}/src/backend",
"${workspaceFolder}/src/backend/utils",
"/usr/include",
"/usr/local/include"
],
"defines": [],
"compilerPath": "/usr/bin/gcc",
"cStandard": "c11",
"cppStandard": "c++14",
"intelliSenseMode": "linux-gcc-x64"
}
],
"version": 4
}For editors (e.g., Windsurf) using clangd instead of the Microsoft C/C++ extension, it’s recommended to add a
.clangdconfiguration file and acompile_commands.jsonfile to the project root.
.clangd
1 | CompileFlags: |
compile_commands.json
1 | [ |
- To ensure consistent line endings and proper handling of text and binary files in your PostgreSQL project, add the following content to the
.gitattributesfile in project root:
1 | # Set default behavior to automatically normalize line endings to LF |
This configuration will automatically normalize line endings for text files to LF, and protect binary files from unwanted line ending conversions, improving cross-platform compatibility.
Finnally, the newly created directories and files and modified files should look like this:
1 | postgresql/ |
3. Reopen Folder in Container (VS Code)
- In VS Code, use the “Dev Containers: Reopen in Container” command to open your workspace within the defined development container. If you can’t find this command by Ctrl+Shift+P (or Cmd+Shift+P on macOS), you can install the Dev Containers extension from the VS Code marketplace.
4. Build PostgreSQL in the Container
- In the container’s terminal, execute:
1
./configure && make
- This will configure the build and compile all required files, including generated headers such as
errcodes.h.
These steps ensure a stable environment for building and developing PostgreSQL efficiently with VS Code and containers.
Access this Devcontainer from Windsurf
- Close VS Code, the devcontainer will also stop automatically. No way to keep it running.
- Use
docker ps -ato find the container id of this devcontainer. - Use
docker start <container_id>to start the devcontainer. - Open Windsurf, use
Open a Remote Window -> Attach to Running Containerto attach to this devcontainer. - Windsurf cannot use Microsoft C/C++ extension anymore, use clangd instead. Install clangd extension in Windsurf.
- In Windsurf, open the postgresql source code directory, should be
/workspaces/postgresql.

Thanks to the Cascade and the latest feature - DeepWiki of Windsurf, you can now enjoy the brand new learning experience powered by AI.
Vibe Coding: A 10-Day Journey from Zero to Building a Full-Stack RSS Validator Tool
2025-04-1910 days (2025/4/8 to 2025/4/18), From zero to https://kamusis-my-opml-sub.deno.dev/
The code implemented in the entire project so far includes backend and some frontend by Claude 3.7 Sonnet (sometimes Claude 3.5), while a larger portion of the frontend is by OpenAI GPT-4.1 (in Windsurf, this model is currently available for free for a limited time).
Project URL: https://kamusis-my-opml-sub.deno.dev/
User Story
I’ve been using RSS for like… 15 years now? Over time I’ve somehow ended up with 200+ feed subscriptions. I know RSS isn’t exactly trendy anymore, but a handful of these feeds are still part of my daily routine.
The problem? My feed list has turned into a total mess:
- Some feeds are completely dead
- Some blogs haven’t been updated in years
- Others post like once every six months
- And a bunch just throw 404s now
I want to clean it up, but here’s the thing:
Going through each one manually sounds like actual hell.
My reader (News Explorer) doesn’t have any built-in tools to help with this.
I tried Googling things like “rss feed analyze” and “cleanup,” but honestly didn’t come across any useful tools.
So the mess remains… because there’s just no good way to deal with it. Until I finally decided to just build one myself—well, more like let AI build it for me.
Background of Me
- Can read code (sometimes need to rely on AI for interpretation and understanding.)
- Have manually written backend code in the past, but haven’t written extensive backend code in the last twenty years.
- Have never manually written frontend code and have limited knowledge of the basic principles of frontend rendering mechanisms.
- Started learning about JavaScript and TypeScript a month ago.
- A beginner with Deno. Understand the calling sequence and respective responsibilities from components to islands to routes API, then to backend services, and finally to backend logic implementation.
Tools
- Agentic Coding Editor (Windsurf)
- Design and Code Generater LLM (Claude 3.5/3.7 + openAI GPT-4.1)
We need a subscription to an Agentic Coding Editor, such as Cursor, Windsurf, or Github Copilot, for design and coding. - Code Reviewer LLM (Gemini Code Assist)
Additionally, we need Gemini Code Assist (currently considered free) to review code and consult on any code-related questions. Gemini Code Assist is also very effective, and it can be said that Gemini is the best model to help you understand code. - MCP Server (sequential-thinking)
Process
Design Phase
- Write the design and outline original requirements
- Let AI write the design (experience shows Claude 3.5 + sequential-thinking MCP server works well; theoretically, any LLM with thinking capabilities is better suited for overall design)
- Review the design, which should include implementation details such as interaction flow design, class design, function design, etc.
- If you are trying to develop a full-stack application, you should write design documents for both frontend and backend
- Continue to ask questions and interact with AI until you believe the overall design is reasonable and implementable (This step is not suitable for people who have no programming knowledge at all, but it is very important.)
Implementation Planning
- Based on the design, ask AI to write an implementation plan (Claude 3.5 + sequential-thinking MCP server)
- Break it down into steps
- Ask AI to plan steps following a senior programmer’s approach
- Review steps, raise questions until the steps are reasonable (This step is not suitable for people who have no programming knowledge at all, but it is very important.)
Implementation
- Strictly follow the steps
- Ask AI to implement functions one by one (Claude 3.5/3.7)
- After each function is implemented, ask AI to generate unit tests to ensure they pass
Oversee
- If you have no programming experience, you might not be able to understand what the AI is doing or identify potential risks. As a result, you wouldn’t be able to oversee the AI or question its output, and would have to hope the AI makes no mistakes at all. This could make the implementation process much harder down the line.
- Ensure strict monitoring of what AI is actually doing
- For example: AI might implement underlying function calls in test cases rather than generating test cases for the target file, which would make it appear that tests pass when in fact there is no effective testing of the target file

- Sometimes AI will take the initiative to use mocks for testing; we need to know when it’s appropriate to use mocks in tests and when to test real functionality
- This requires us to know whether we’re doing Integration/Component Testing or Pure Unit Testing

Code Review and Design Update
- Ask another AI to read the generated code (experience shows Gemini Code Assist is very suitable for this work)
- Compare with the original design
- Have AI analyze whether the original design has been fully implemented; if not, what’s missing
- Evaluate missing content and decide whether to implement it now
- Or whether functionality beyond the design has been implemented
- Evaluate functionality beyond the design and decide whether to reflect it back into the design
- Why update the design? Because subsequent work may need to reference the design document, so ensuring the design document correctly reflects the code logic is a good practice
- You don’t necessarily need to document every single implementation detail (like the specific batch size in batchValidate), but changes to public interfaces and communication protocols are definitely worth updating.

Continuous Review
After completing each requirement, ask AI to review the design document again to understand current progress and what needs to be done



When major milestones are completed or before implementing the next major task, have AI review the completed work and write a new development plan
Always read the development plan completed by AI and make manual modifications if necessary

After reaching a milestone, have AI (preferably a different AI) review progress again

Repeat the above steps until the entire project is completed.
Learning from the Project
Git and GitHub
- Make good use of git; commit after completing each milestone functionality
- When working on significant, large-scale features—like making a fundamental data structure change from the ground up—it’s safer to use GitHub PRs, even if you’re working solo. Create a issue, create a branch for this issue, make changes, test thoroughly, and merge after confirming everything is correct.
Debugging
When debugging, this prompt is very useful: “Important: Try to fix things at the cause, not the symptom.” We need to adopt this mindset ourselves because even if we define this rule in the global rules, AI might still not follow it. When we see AI trying to fix a bug with a method that treats the symptom rather than the cause, we should interrupt and emphasize again that it needs to find the cause, not just fix the symptom. This requires us to have debugging skills, which is why Agentic Coding is currently not suitable for people who have no programming knowledge at all. Creating a familiar Snake game might not require any debugging, but for a real-world software project, if we let AI debug on its own, it might make the program progressively worse.
The sequential-thinking MCP server is very useful when debugging bugs involving multi-layer call logic. It will check and analyze multiple files in the call path sequentially, typically making it easier to find the root cause. Without thinking capabilities, AI models might not have a clear enough approach to decide which files to check.

For completely unfamiliar code sections, if bugs occur, we can only rely on AI to analyze and fix them itself, which significantly increases the frequency of interactions with AI and the cost of using AI. For example, when debugging backend programs, the Windsurf editor spends an average of 5 credits because I can point out possible debugging directions; but once we start debugging frontend pages, such as table flickering during refresh that must be fixed by adjusting CSS, because I have almost no frontend development experience, I have no suggestions or interventions, resulting in an average of 15 credits spent. When multiple modifications to a bug have no effect, rolling back the changes to the beginning stage of the bug and then using the sequential-thinking tool to think and fix will have better results.
Refactoring
Refactoring is often essential because we don’t review every line of AI-generated code, so we might miss some errors made by the AI. For example, in my project, when implementing a feature, the AI didn’t use the interface previously defined in types.d.ts, but instead created a new interface with a similar name based on its understanding, and continued using this new interface throughout the feature implementation. After discovery, refactoring was necessary.


Multi-Model mutual argumentation
When an AI offers suggestions and you’re unsure about them, a solid learning trick is to run those ideas by another AI for a second opinion. Take, for example, deciding if an endpoint should be defined with POST or GET.
I had Claude 3.7 whip up some code, then passed it over to Gemini for a quick check. Gemini suggested switching to GET, saying it might align better with common standards.
When sending the suggestion back to Claude 3.7, Claude 3.7 still believed using POST was better.
When sending Claude 3.7’s reply back to Gemini, Gemini agreed.
This is a fascinating experience, like being part of a team where you watch two experts share their opinions and eventually reach a consensus.
I hope in the future there will be a more convenient mechanism for Multi-Model mutual argumentation (rather than manual copy-pasting), which would greatly improve the quality of AI-generated code.
From Software Search to Code Generation: The Agentic Coding Revolution
2025-04-08User Story: RSS Feed Clean-up Journey
Over the past 15 years, I’ve accumulated a substantial collection of RSS feeds, numbering over 200 subscriptions. While RSS usage has dramatically declined in recent years, some of these feeds remain part of my daily reading routine. However, the collection has become cluttered:
- Many feeds have become completely inaccessible
- Some bloggers have stopped updating their sites
- Certain feeds are still active but rarely updated
The Challenge:
- Manual verification of each feed would be tedious and time-consuming
- My RSS reader (News Explorer) lacks feed cleanup functionality
- Alternative solutions like Inoreader require paid subscriptions
- The task remained pending due to lack of efficient solutions
The Agentic Coding Solution:
What was previously a daunting task transformed into a manageable project:
- Total time from requirement writing to completion: ~2 hours
- Automated validation of all feeds
- Generated comprehensive statistics and visualizations
- Successfully categorized feeds into active, inactive, and dead
- Pleasant and efficient development experience
This experience perfectly illustrates how agentic coding can turn a long-postponed task into an achievable solution through clear requirement description and AI-assisted development.
The Traditional Approach
Traditionally, when faced with a specific requirement like validating and analyzing OPML feed subscriptions, the typical workflow would be:
- Search for existing software that might solve the problem
- Evaluate multiple tools and their features
- Choose the closest match, often compromising on exact requirements
- Learn how to use the chosen software
- Deal with limitations and missing features
This process is time-consuming and often results in settling for a solution that doesn’t perfectly match our needs.
The Agentic Coding Paradigm
With agentic coding, the approach transforms dramatically:
- Clearly describe your requirements in natural language
- Let AI understand and break down the problem
- Generate custom code that exactly matches your needs
- Iterate and refine the solution through conversation
Real-World Example: OPML Feed Validator
This project demonstrates the power of agentic coding. Instead of searching for an existing OPML feed validator:
We described our need for a tool that could:
- Validate RSS feeds in an OPML file
- Check feed accessibility
- Analyze update frequencies
- Generate meaningful statistics
- Visualize the results
The AI agent:
- Designed the system architecture
- Implemented the required functionality
- Created visualization components
- Generated comprehensive documentation
- All while following best practices and proper error handling
Benefits of Agentic Coding
- Perfect Fit: Solutions are tailored exactly to your requirements
- Rapid Development: No need to spend time searching and evaluating existing tools
- Full Control: Complete access to the source code for modifications
- Learning Opportunity: Understanding how the solution works through generated code
- Cost-Effective: No need to purchase or subscribe to multiple tools
- Maintenance Freedom: Ability to modify and extend the solution as needs evolve
Future Implications
This shift from “finding” to “generating” solutions represents a fundamental change in how we approach software development. As AI continues to evolve:
- Development will become more requirement-driven than tool-driven
- Custom solutions will become as accessible as off-the-shelf software
- The focus will shift from “what exists” to “what’s possible”
Agentic coding empowers developers and users alike to create exactly what they need, breaking free from the limitations of existing software solutions.
Lessons Learned and Experience
1. The Importance of Clear Requirements
Product thinking and clear requirements are crucial for successful AI-assisted development:
- Clear Vision Leads to Better Code: When requirements are well-defined and specific about how the tool should behave, the AI generates higher quality code
- Product Mindset: Requirement providers need to have a clear understanding of:
- Desired user interactions
- Expected outputs and their formats
- Error handling scenarios
- Performance expectations
- Iterative Refinement: Unclear requirements often lead to multiple iterations and code quality issues
2. Technology Stack Selection Matters
The choice of programming languages and libraries significantly impacts AI-assisted development success:
Language Popularity Impact:
- More widely used languages (like Python) often result in better AI-generated code
- Popular languages have more training data and real-world examples
- In this project, while we chose TypeScript with Deno for learning purposes, Python might have been an easier choice
Library Selection Strategy:
- Popular, widely-used libraries lead to better AI comprehension and implementation
- Example from this project:
- Initial attempt: Using less common
deno_chartlibrary resulted in multiple errors - Successful pivot: Switching to standard SVG generation led to immediate success
- Initial attempt: Using less common
- Lesson: Prefer mainstream libraries over niche ones when working with AI
Best Practices for AI-Assisted Development
Requirements Phase:
- Invest time in detailed requirement documentation
- Include specific examples of desired behavior
- Define clear success criteria
Technology Selection:
- Consider language popularity and ecosystem maturity
- Choose widely-adopted libraries when possible
- Balance learning goals with development efficiency
Development Process:
- Start with core functionality using proven technologies
- Experiment with newer technologies only after basic features are stable
- Be prepared to pivot when encountering AI limitations with specific technologies
This project serves as a practical example of these lessons, demonstrating both the potential and limitations of AI-assisted development while highlighting the importance of making informed technology choices.
The project can be found here.
Getting Started with Deno: A Modern Twist on JavaScript Runtimes
2025-03-28If you’ve been in the JavaScript world for a while, you’ve probably heard of Deno—the runtime that’s been making waves as a “better Node.js.” Built by Ryan Dahl (the original creator of Node.js), Deno takes a fresh approach to running JavaScript and TypeScript, aiming to fix some of Node’s pain points while embracing modern standards. In this post, I’ll walk you through what Deno is, how it works, and how it stacks up against Node.js—especially based on my recent dive into it while tinkering with a Supabase integration.
What is Deno?
Deno is a secure, modern runtime for JavaScript and TypeScript, launched in 2020. It’s designed to be simple, safe, and developer-friendly, with built-in support for TypeScript, ES Modules, and a standard library—no extra tools required. Think of it as Node.js reimagined with lessons learned from the past decade.
Here’s a quick taste of Deno in action:
1 | // main.ts |
Run it with:
1 | deno run --allow-net main.ts |
Boom—a web server in three lines, no npm install or node_modules in sight.
Key Features of Deno
1. TypeScript Out of the Box
Deno runs TypeScript natively—no tsconfig.json or tsc needed. Write your .ts file, run it with deno run, and Deno compiles it in memory. Compare that to Node.js, where you’d need typescript installed and a build step (or ts-node for a quicker dev loop).
2. URL-Based Imports
Forget node_modules. Deno fetches dependencies from URLs:
1 | import { load } from "https://deno.land/std@0.224.0/dotenv/mod.ts"; |
It caches them globally (more on that later) and skips the package manager entirely.
3. Security by Default
Deno won’t let your script touch the network, filesystem, or environment unless you explicitly allow it:
1 | deno run --allow-env --allow-read main.ts |
This is a stark contrast to Node.js, where scripts have free rein unless you sandbox them yourself.
4. Centralized Dependency Cache
Deno stores all dependencies in a single global cache (e.g., ~/.cache/deno/deps on Unix). Run deno info to see where:
1 | deno info |
No per-project node_modules bloating your disk.
5. Standard Library
Deno ships with a curated std library (e.g., https://deno.land/std@0.224.0), covering HTTP servers, file I/O, and even a dotenv module for .env files—stuff you’d normally grab from npm in Node.js.
Deno vs. Node.js: A Head-to-Head Comparison
I recently played with Deno to connect to Supabase, and it highlighted some big differences from Node.js. Here’s how they stack up:
Dependency Management
- Node.js: Uses
npmandpackage.jsonto install dependencies into a localnode_modulesfolder per project. Cloning a repo? Runnpm installevery time.1
npm install @supabase/supabase-js
- Deno: Imports modules via URLs, cached globally at
~/.cache/deno/deps. Clone a Deno repo, and you’re ready to run—no install step.1
import { createClient } from "https://esm.sh/@supabase/supabase-js@2.49.3";
- Winner?: Deno for simplicity, Node.js for isolation (different projects can use different versions of the same module without URL juggling).
TypeScript Support
- Node.js: Requires setup—install
typescript, configuretsconfig.json, and compile to JavaScript (or usets-node). It’s mature but clunky. - Deno: TypeScript runs natively. No config, no build step. Write
.tsand go. - Winner: Deno, hands down, unless you’re stuck on a legacy Node.js workflow.
Configuration Files
- Node.js: Relies on
package.jsonfor dependencies and scripts, often paired withtsconfig.jsonfor TypeScript. - Deno: Optional
deno.jsonfor imports and settings, but not required. My Supabase script didn’t need one—just a.envfile andstd/dotenv. - Winner: Deno for minimalism.
Security
- Node.js: Open by default. Your script can read files or hit the network without warning.
- Deno: Locked down. Want to read
.env? Add--allow-read. Network access?--allow-net. It forced me to think about permissions when connecting to Supabase. - Winner: Deno for safety.
Ecosystem
- Node.js: Massive npm ecosystem—hundreds of thousands of packages. Whatever you need, it’s there.
- Deno: Smaller but growing ecosystem via
deno.land/xand CDNs likeesm.sh. It worked fine for Supabase, but niche libraries might be missing. - Winner: Node.js for sheer volume.
Learning Curve
- Node.js: Familiar to most JavaScript devs, but the setup (npm, TypeScript, etc.) can overwhelm beginners.
- Deno: Fresh approach, but URL imports and permissions might feel alien if you’re Node.js-native.
- Winner: Tie—depends on your background.
A Real-World Example: Supabase with Deno
Here’s how I set up a Supabase client in Deno:
1 | import { createClient } from "https://esm.sh/@supabase/supabase-js@2.49.3"; |
Run it:
1 | deno run --allow-env --allow-read main.ts |
.envfile:SUPABASE_URLandSUPABASE_ANON_KEY(grabbed from Supabase’s dashboard—not my database password!).- VS Code linting needed the Deno extension and a
deno cache main.tsto quiet TypeScript errors.
In Node.js, I’d have installed @supabase/supabase-js via npm, set up a dotenv package, and skipped the permissions flags. Deno’s way felt leaner but required tweaking for editor support.
Should You Use Deno?
- Use Deno if:
- You love TypeScript and hate build steps.
- You want a secure, minimal setup for small projects or experiments.
- You’re intrigued by a modern take on JavaScript runtimes.
- Stick with Node.js if:
- You need the npm ecosystem’s depth.
- You’re working on a legacy project or with a team entrenched in Node.
- You prefer per-project dependency isolation.
Wrapping Up
Deno’s not here to kill Node.js—it’s a different flavor of the same JavaScript pie. After messing with it for Supabase, I’m hooked on its simplicity and TypeScript support, but I’d still reach for Node.js on bigger, ecosystem-heavy projects. Try it yourself—spin up a Deno script, check your cache with deno info, and see if it clicks for you.
What’s your take? Node.js veteran or Deno newbie? Let me know in the comments!
This post covers Deno’s core concepts, contrasts it with Node.js, and ties in our Supabase example for a practical angle. Feel free to tweak the tone or add more details if you’re aiming for a specific audience! Want me to adjust anything?
How to Generate a VSIX File from VS Code Extension Source Code
2024-12-12As I’ve been using Windsurf as my primary code editor, I encountered a situation where the vs-picgo extension wasn’t available in the Windsurf marketplace. This necessitated the need to manually package the extension from its source code. This guide documents the process of generating a VSIX file for VS Code extensions, which can then be installed manually in compatible editors like Windsurf.
In this guide, I’ll walk you through the process of generating a VSIX file from a VS Code extension’s source code. We’ll use the popular vs-picgo extension as an example.
Prerequisites
Before we begin, make sure you have the following installed:
- Node.js (version 12 or higher)
- npm (comes with Node.js)
Step 1: Install Required Tools
First, we need to install two essential tools:
yarn: A package manager that will handle our dependenciesvsce: The VS Code Extension Manager tool that creates VSIX packages
1 | # Install Yarn globally |
Step 2: Prepare the Project
Clone or download the extension source code:
1
2git clone https://github.com/PicGo/vs-picgo.git
cd vs-picgoInstall project dependencies:
1
yarn install
This command will:
- Read the
package.jsonfile - Install all required dependencies
- Create or update the
yarn.lockfile
Note: The
yarn.lockfile is important! Don’t delete it as it ensures consistent installations across different environments.
Step 3: Build the Extension
Build the extension using the production build command:
1 | yarn build:prod |
This command typically:
- Cleans the previous build output
- Compiles TypeScript/JavaScript files
- Bundles all necessary assets
- Creates the
distdirectory with the compiled code
In vs-picgo’s case, the build process:
- Uses
esbuildfor fast bundling - Creates both extension and webview bundles
- Generates source maps (disabled in production)
- Optimizes the code for production use
Step 4: Package the Extension
Finally, create the VSIX file:
1 | vsce package |
This command:
- Runs any pre-publish scripts defined in
package.json - Validates the extension manifest
- Packages all required files into a VSIX file
- Names the file based on the extension’s name and version (e.g.,
vs-picgo-2.1.6.vsix)
The resulting VSIX file will contain:
- Compiled JavaScript files
- Assets (images, CSS, etc.)
- Extension manifest
- Documentation files
- License information
What’s Inside the VSIX?
The VSIX file is essentially a ZIP archive with a specific structure. For vs-picgo, it includes:
1 | vs-picgo-2.1.6.vsix |
Installing the Extension
You can install the generated VSIX file in VS Code or any compatible editor by:
- Opening VS Code/Windsurf/Cursor …
- Going to the Extensions view
- Clicking the “…” menu (More Actions)
- Selecting “Install from VSIX…”
- Choosing your generated VSIX file
Troubleshooting
If you encounter any issues:
Missing dist directory error:
- This is normal on first build
- The build process will create it automatically
Dependency errors:
- Run
yarn installagain - Check if you’re using the correct Node.js version
- Run
VSIX packaging fails:
- Verify your
package.jsonis valid - Ensure all required files are present
- Check the extension manifest for errors
- Verify your
Conclusion
Building a VS Code extension VSIX file is straightforward once you have the right tools installed. The process mainly involves installing dependencies, building the source code, and packaging everything into a VSIX file.
Remember to keep your yarn.lock file and always build in production mode before packaging to ensure the best performance and smallest file size for your users.
Happy extension building! 🚀
What is DBOS and What Should We Expect
2024-11-14Introduction
The computing world is witnessing a paradigm shift in how we think about operating systems. A team of researchers has proposed DBOS (Database-Oriented Operating System), a radical reimagining of operating system architecture that puts data management at its core. But what exactly is DBOS, and why should we care?
What is DBOS?
DBOS is a novel operating system architecture that treats data management as its primary concern rather than traditional OS functions like process management and I/O. The key insight behind DBOS is that modern applications are increasingly data-centric, yet our operating systems still follow designs from the 1970s that prioritize computation over data management.
Instead of treating databases as just another application, DBOS makes database technology the fundamental building block of the operating system itself. This means core OS functions like process scheduling, resource management, and system monitoring are implemented using database principles and technologies.
Who is behind DBOS?
DBOS is a collaborative research project involving almost twenty researchers across multiple institutions including MIT, Stanford, UW-Madison, Google, VMware, and other organizations. The project is notably led by database pioneer Michael Stonebraker, who is an ACM Turing Award winner (2014) and Professor Emeritus at UC Berkeley, currently affiliated with MIT.
Key institutions and researchers involved include:
- MIT: Michael Stonebraker, Michael Cafarella, Çağatay Demiralp, and others
- Stanford: Matei Zaharia, Christos Kozyrakis, and others
- UW-Madison: Xiangyao Yu
- Industry partners: Researchers from Google, VMware, and other organizations
The people behind DBOS can be found at DBOS Project.
Ultimate Goal
The ultimate goal of DBOS is to create an operating system that is data-centric and data-driven, the OS is on top of a DBMS, not like today’s DBMS on top of the OS.
ALL system data should reside in the DBMS.
- Replace the “everything is a file” mantra with “everything is a table”
- All system state and metadata stored in relational tables
- All changes to OS state should be through database transactions
- DBMS provides All functions that a DBMS can do, for example, files are blobs and tables in the DBMS.
- SQL-based interface for both application and system data access
- To achieve very high performance, the DBMS must leverage sophisticated caching and parallelization strategies and compile repetitive queries into machine code.
Benefits
- Strong Security and Privacy
- Native GDPR compliance through data-centric design
- Attribute-based access control (ABAC)
- Complete audit trails and data lineage
- Privacy by design through unified data management
- Fine-grained access control at the data level
- Enhanced monitoring and threat detection
- Simplified compliance with regulatory requirements
- Built-in data encryption and protection mechanisms
- Enhanced Performance and Efficiency
- Optimized resource allocation through database-driven scheduling
- Reduced data movement and copying
- Better cache utilization through database techniques
- Intelligent workload management
- Advanced query optimization for system operations
- Improved resource utilization through data-aware decisions
- Reduced system overhead through unified architecture
- Better support for modern hardware architectures
- Improved Observability and Management
- Comprehensive system-wide monitoring
- Real-time analytics on system performance
- Easy troubleshooting through SQL queries
- Better capacity planning capabilities
- Unified logging and debugging interface
- Historical analysis of system behavior
- Predictive maintenance capabilities
- Simplified system administration
- Advanced Application Support
- Native support for distributed applications
- Better handling of microservices architecture
- Simplified state management
- Enhanced support for modern cloud applications
- Built-in support for data-intensive applications
- Improved consistency guarantees
- Better transaction management
- Simplified development of distributed systems
Technical Implementation
DBOS proposes implementing core OS functions using database principles:
- Process Management: Processes and their states managed as database tables
- Resource Scheduling: SQL queries and ML for intelligent scheduling decisions
- System Monitoring: Metrics collection and analysis through database queries
- Security: Access control and auditing via database mechanisms
- Storage: File system metadata stored in relational tables
- Networking: Network state and routing managed through database abstractions
What Should We Expect?
Near-term Impact
- Proof of Concept: The researchers are working on demonstrating DBOS’s capabilities through specific use cases like log processing and accelerator management.
- Performance Improvements: Early implementations might show significant improvements in data-intensive workloads.
- Development Tools: New tools and frameworks that leverage DBOS’s database-centric approach.
Long-term Possibilities
Cloud Native Integration: DBOS could become particularly relevant for cloud computing environments where data management is crucial.
- AI/ML Operations: Better support for AI and machine learning workloads through intelligent resource management.
- Privacy-First Computing: A new standard for building privacy-compliant systems from the ground up.
Challenges Ahead
Several technical and practical challenges need to be addressed:
- Performance
- Minimizing database overhead for system operations
- Optimizing query performance for real-time OS operations
- Efficient handling of high-frequency system events
- Compatibility
- Supporting existing applications and system calls
- Maintaining POSIX compliance where needed
- Migration path for legacy systems
- Distributed Systems
- Maintaining consistency across distributed nodes
- Handling network partitions and failures
- Scaling database operations across clusters
- Adoption
- Convincing stakeholders to adopt radical architectural changes
- Training developers in the new paradigm
- Building an ecosystem of compatible tools and applications
Conclusion
DBOS represents a bold reimagining of operating system design for the data-centric world. While it’s still in its early stages, the potential benefits for security, privacy, and developer productivity make it an exciting project to watch. As data continues to grow in importance, DBOS’s approach might prove to be prescient.
The success of DBOS will largely depend on how well it can demonstrate its advantages in real-world scenarios and whether it can overcome the inherent challenges of introducing such a fundamental change to system architecture. For developers, system administrators, and anyone interested in the future of computing, DBOS is definitely worth keeping an eye on.
Whether DBOS becomes the next evolution in operating systems or remains an interesting academic exercise, its ideas about putting data management at the center of system design will likely influence future OS development.
References
M. Stonebraker et al., “DBOS: A DBMS-oriented Operating System,” Proceedings of the VLDB Endowment, Vol. 15, No. 1, 2021.
https://www.vldb.org/pvldb/vol15/p21-stonebraker.pdfDBOS Project Official Website, MIT CSAIL and Stanford University.
https://dbos-project.github.io/X. Yu et al., “A Case for Building Operating Systems with Database Technology,” Proceedings of the 12th Conference on Innovative Data Systems Research (CIDR ‘22), 2022.
https://www.cidrdb.org/cidr2022/papers/p82-yu.pdfDBOS Research Group Publications.
https://dbos-project.github.io/papers/
These references provide detailed technical information about DBOS’s architecture, implementation, and potential impact on the future of operating systems. The papers discuss various aspects from system design to performance evaluation, security considerations, and practical applications.
