Hacker News | jngiam1's comments

Exactly. And even if so, how are you going to safeguard tool access?

Imagine your favorite email provider has a CLI for reading and sending email - you're cool with the agent reading, but not sending. What are you going to do? Make 2 API keys? Make N API keys for each possible tool configuration you care about?

MCPs make this problem simple and easy to solve. CLIs don't.
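To make the read-vs-send distinction concrete, here's a minimal sketch of per-agent tool scoping in the spirit of what an MCP server can do: expose only a permitted subset of tools to each agent. The names (`TOOLS`, `make_scoped_tools`) are illustrative assumptions, not a real API.

```python
# Hypothetical sketch: scope tool access per agent instead of
# minting a separate API key for every tool combination.

TOOLS = {
    "read_email": lambda: "inbox contents",
    "send_email": lambda: "sent",
}

def make_scoped_tools(allowed: set) -> dict:
    """Return only the tools this agent is permitted to call."""
    return {name: fn for name, fn in TOOLS.items() if name in allowed}

# A read-only agent sees read_email but never send_email.
reader_tools = make_scoped_tools({"read_email"})
assert "send_email" not in reader_tools
```

The point is that the scoping lives in one declarative place, rather than in N credentials.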

I don't think OpenClaw will last that long without security solved well - and MCPs seem to be the obvious solution, but one actively rejected by that community.


Supposedly, you make a Skill for it, but even that is out of scope for chat agents. I didn't scroll far, but I wouldn't be surprised if more people in this thread have made the mistake of giving that answer.

I think if you want background agents with sandboxes and well scoped permissions, you want MCP to be your data protocol and security layer.

If you’re vibing and doing the open claw thing without any security concerns, then you’re absolutely right.


The Skills I have for Claude are all based on personal preferences and reflect the setup I have going. It's a way to narrow the probability space to the specific set which works really well for me.


I've a simple setup with Claude Code and MCPs, and I get real benefits from better task management, email management, calendar, health/food/fitness tracking, and working together with Claude on tasks (that go into md files).

I don't think we need ClawdBot, but we do need a way to easily interact with the model so that it can create long-term memories (likely as files).
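A file-based memory like that can be surprisingly simple. Here's a hypothetical sketch: each note is appended to a markdown file that the model can re-read later. The file path and entry format are illustrative assumptions.

```python
# Hypothetical sketch of file-backed long-term memory for an agent:
# append dated bullets to a markdown file, re-read them on demand.

from pathlib import Path
from datetime import date

MEMORY_FILE = Path("memories.md")

def remember(note: str) -> None:
    """Append a dated bullet to the memory file."""
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")

def recall() -> str:
    """Load everything remembered so far (empty if nothing yet)."""
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

remember("User prefers tasks tracked in md files")
assert "md files" in recall()
```

Because the memories are plain markdown, the same file works for the model's context window and for a human skimming it.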


The mirror test is cool!


Subtle detail, but the little table casts a shadow from the light in the window, and the shadow remains unchanged after the mirror replaces the window.


More obviously, the objects in the mirror aren't actually reversed!


That one's on me! It was still using the old NB image.

Updated the mirror test to use the NB Pro version.


Store/version it with git, throw Claude Code at it, and it’ll be amazing


Lutra is a native code-mode AI agent that: (a) converts all tools into functions available in a coding environment, (b) uses LLMs to produce code that orchestrates across those tools, (c) has a custom stateless code interpreter to make it all work securely and well.
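The "code mode" pattern described in (a) and (b) can be sketched in a few lines: tools become plain functions, the model emits a short program that orchestrates them, and that program runs with only the exposed tools in scope. The tool names here are made up for illustration, and the `exec`-based scope restriction is a toy stand-in, not a claim about Lutra's actual interpreter.

```python
# Illustrative sketch of code-mode orchestration: tools as functions,
# model-generated code chaining them, executed with a restricted scope.

def search_contacts(name: str) -> str:
    """Toy tool: look up an email address for a contact."""
    return f"{name.lower()}@example.com"

def draft_email(to: str, body: str) -> dict:
    """Toy tool: build an email draft."""
    return {"to": to, "body": body}

# Code the model might generate to orchestrate across the two tools:
generated_code = """
email = draft_email(search_contacts("Ada"), "Meeting at 3pm")
"""

# Run it with only the exposed tools visible (no builtins).
scope = {"search_contacts": search_contacts, "draft_email": draft_email}
exec(generated_code, {"__builtins__": {}}, scope)
assert scope["email"]["to"] == "ada@example.com"
```

The appeal of the pattern is that one generated program can chain many tool calls without a model round-trip between each one.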


There’s too much rage baiting on the internet now; the headlines that take the extreme position get reshared, while the truth is more in the middle.


This is the key. MCP encapsulates tools, auth, instructions.

We always need something for that - and it needs to work for non-tech users too.


I hypothesize that these AI agents are all now likely above human performance.

