There are good arguments for it, but it's also not a coincidence that they happen to align with Google's business objectives. E.g., it's hard to issue a TLS cert without notifying Google of it.


I don't get your logic/reasoning here... could you explain?


There are public logs of every TLS cert issued by the major providers. This benefits Google.

Kinda like how Wikipedia benefits Google. Or public roads benefit Uber. Or clean water benefits restaurants


Certificate transparency logs are public. How does this benefit Google?


Google also knows about every domain name that gets renewed or registered... How does knowing that a website has TLS help in any meaningful way that would detract from society as a whole?


The certificate transparency log lets everyone know which domains are active as their certificates get renewed (likely more often than the domain registration itself), and also which subdomains are active if those are not secured using a wildcard certificate.

Not just Google: AI bots could use the information to look for juicy new data to scrape and ingest.

Probably not a significant thing, since the information can be derived in other ways too if someone wants to track it, but it is a thing.
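
For the curious, the logs are easy to query. A minimal sketch against crt.sh's public JSON endpoint (the query parameters and field names here are assumptions about that particular frontend and may change):

  import json
  import urllib.request

  # Ask crt.sh (a public CT log search frontend) for certificates matching a
  # domain; %25 is a URL-encoded '%' wildcard.
  url = "https://crt.sh/?q=%25.example.com&output=json"
  with urllib.request.urlopen(url) as resp:
      certs = json.load(resp)

  # Each entry lists the names on the certificate and its validity window,
  # which is how newly issued (sub)domains become publicly visible.
  for cert in certs[:10]:
      print(cert["name_value"], cert["not_before"])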


This doesn't feel like much of an argument in favor of not using https though.


Not at all IMO, unless you are really paranoid about Google & friends. I was just saying that what was being questioned does (or could) benefit them a tiny bit.


That will not work with many of the world's most important documents because of information density. For example, dense tables or tables with lots of row/col spans, or complex forms with checkboxes, complex real-world formatting, and features like strikethroughs, etc.

To solve this generally you need to chunk not by page, but by semantic chunks that don't exceed the information density threshold of the model, given the task.

This is not a trivial problem at all. And sometimes there is no naive way to chunk documents so that every element can fit within the information density limit. A really simple example is a table that spans hundreds of pages. Solving that generally is an open problem.
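
To make the chunking idea concrete, here's a rough sketch in Python; the element list and token counter are stand-ins for whatever layout parser and tokenizer you actually use:

  # Rough sketch: split a parsed document into chunks that respect element
  # boundaries (tables, form sections, paragraphs) instead of page breaks,
  # while staying under a per-chunk "information density" budget.
  def chunk_by_elements(elements, max_tokens=4000, count_tokens=len):
      # elements: list of (kind, text) tuples from a layout parser (assumed);
      # count_tokens defaults to character count as a crude stand-in.
      chunks, current, used = [], [], 0
      for kind, text in elements:
          cost = count_tokens(text)
          if current and used + cost > max_tokens:
              chunks.append(current)
              current, used = [], 0
          current.append((kind, text))
          used += cost
      if current:
          chunks.append(current)
      return chunks

The hard cases are exactly the ones this naive version can't handle: a single table or form section that is itself larger than the budget has no clean split point, which is the open problem mentioned above.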


> How many degrees of freedom do you real need to represent API cost.

The amount that your users care about.

At a large enough scale, users will care about the cost differences between extraction and classification (very different!) and finding the right spot on the accuracy-latency curve for their use case.


Exactly correct! We've had users migrate over from other providers because our granular pricing enabled new use cases that weren't feasible before.

One interesting thing we've learned is that most production pipelines end up using a combination of the two (e.g. cheap classification and splitting paired with performance extraction).


I don't understand this criticism by the author. MCP supports JSON Schema, and server responses must conform to the schema. If the schema requires an ISO-8601 timestamp (e.g. by specifying a "date-time" format in the schema) but the server sends a Unix epoch timestamp, then it is violating the protocol.

The author even later says that MCP supports JSON Schema, but also claims "you can't generate type-safe clients". That is plainly untrue; there are plenty of JSON Schema code generators.
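
To make that concrete, here is a small sketch with Python's jsonschema package (format assertions are opt-in in most validators, and full "date-time" checking needs jsonschema's optional format extras):

  from jsonschema import FormatChecker, validate
  from jsonschema.exceptions import ValidationError

  schema = {
      "type": "object",
      "properties": {"created_at": {"type": "string", "format": "date-time"}},
      "required": ["created_at"],
  }

  # A conforming ISO-8601 / RFC 3339 timestamp passes:
  validate({"created_at": "2024-05-01T12:00:00Z"}, schema,
           format_checker=FormatChecker())

  # The same instant as a Unix epoch integer is rejected deterministically:
  try:
      validate({"created_at": 1714564800}, schema,
               format_checker=FormatChecker())
  except ValidationError as e:
      print(e.message)   # 1714564800 is not of type 'string'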


Except that any properly written software will respond to protocol and schema violations by throwing an error.

Claude will happily cast your int into a 2023 Toyota Yaris and keep on hallucinating things.


For the curious:

> Cast an integer into the type of a 2023 Toyota Yaris using Javascript

(GPT-4o mini)

> To cast an integer into the type of a 2023 Toyota Yaris in JavaScript, you would typically create a class or a constructor function that represents the Toyota Yaris. Then, you can create an instance of that class using the integer value. Here's an example of how you might do this:

  // Define a class for the 2023 Toyota Yaris
  class ToyotaYaris {
      constructor(modelYear, value) {
          this.modelYear = modelYear;
          this.value = value;
      }
  
      displayInfo() {
          console.log(`Model Year: ${this.modelYear}, Value: ${this.value}`);
      }
  }
  
  // Function to cast an integer into a Toyota Yaris object
  function castToYaris(integerValue) {
      const modelYear = 2023; // Set the model year
      return new ToyotaYaris(modelYear, integerValue);
  }
  
  // Example usage
  const integerValue = 20000; // Example integer value
  const yaris = castToYaris(integerValue);
  yaris.displayInfo(); // Output: Model Year: 2023, Value: $20000


You really tried to inflict the maximum amount of damage on the reader by choosing GPT-4o mini


Billy, it's becoming increasingly harder to believe you are writing this code yourself


I just tried this in Claude Code. I made an MCP server whose tool output is declared as an integer but it returns a string at runtime.

Claude Code validated the response against the schema and did not pass the response to the LLM.

     test - test_tool (MCP)(input: "foo")
      ⎿  Error: Output validation error: 'bar' is not of type 'integer'


How many times does this need to be repeated?

It works in this instance. On this run. It is not guaranteed to work next time. There is an error percentage here that makes it _INEVITABLE_ that eventually, with enough executions, the validation will pass when it should fail.

It will choose not to pass this to the validator, at some point in the future. It will create its own validator, at some point in the future. It will simply pretend like it did any of the above, at some point in the future.

This might be fine for your B2B use case. It is not fine for underlying infrastructure for a financial firm or communications.


Every time the LLM uses this tool, the response schema is validated--deterministically. The LLM will never see a non-integer value as output from the tool.


Can you please diagram out, using little text arrows ("->"), what you think is happening so I can just fill in the gap for you?


I write these as part of my job, I know how they work. I'm not going to spend more time explaining to you (and demonstrating!) what is in the spec. Read the spec and let the authors know that they don't understand what they wrote. I've run out of energy in this conversation.


I gave you the chance to be explicit about your mental model of these systems and you ran away with some very unoriginal grandstanding.


llm tool call -> mcp client validates the schema -> mcp client calls the tool -> mcp server validates the schema -> mcp server responds with the result -> mcp client passes the tool result into llm


not a developer.

what happens if this schema validation fails here - what will the mcp server respond with and what will the llm do next (in a deterministic sense)?

llm tool call -> mcp client validates the schema -> mcp client calls the tool -> mcp server validates the schema


They often do fail. At the client level you can just feed the schema validation error message back into the LLM and it corrects itself most of the time. If not, the LLM throws itself into a loop until its caller times it out and sends an error message back to the user.

At the server level it's just a good old JSON API at this point, and the server would send the usual error message it would send out to anyone.
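
Sketched in (pseudo-)Python, with hypothetical call_llm and call_tool helpers standing in for the real client internals, that loop looks roughly like this:

  from jsonschema import validate
  from jsonschema.exceptions import ValidationError

  def agent_turn(messages, tools, call_llm, call_tool):
      # Hypothetical toolchain loop: the LLM only ever sees validated results.
      while True:
          reply = call_llm(messages, tools)
          messages.append(reply)          # keep the tool-call request in the transcript
          if not reply.tool_calls:
              return reply                # plain answer, we're done
          for call in reply.tool_calls:
              tool = tools[call.name]
              try:
                  result = call_tool(tool, call.arguments)
                  validate(result, tool.output_schema)    # deterministic check
                  messages.append({"role": "tool", "id": call.id,
                                   "content": result})
              except ValidationError as err:
                  # feed the schema error back so the model can retry or give up
                  messages.append({"role": "tool", "id": call.id,
                                   "is_error": True, "content": str(err)})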


This time.

Can you guarantee it will validate it every time? Can you guarantee that the way MCPs/tool calling are implemented (which is already an incredible joke that only python-brained developers would inflict upon the world) will always go through the validation layer? Are you even sure which part of Claude handles this validation? Sure, it didn't cast an int into a Toyota Yaris. Will it cast "70Y074" into one? Maybe a 2022 one. What if there are embedded parsing rules in a string, will it respect them every time? What if you use it outside of Claude Code and just ask nicely through the API, can you guarantee this validation still works? Or that they won't break it next week?

The whole point of it is, whichever LLM you're using is already too dumb to not trip when lacing its own shoes. Why you'd trust it to reliably and properly parse input badly described by a terrible format is beyond me.


> Can you guarantee it will validate it every time?

Yes, to the extent you can guarantee the behavior of third-party software, you can (which you can't really guarantee no matter what spec the software supposedly implements, so the gaps aren't an MCP issue), because "the app enforces schema compliance before handing the results to the LLM" is deterministic behavior in the traditional app that provides the toolchain, i.e. the interface between the tools (and the user) and the LLM, not non-deterministic behavior driven by the LLM. Hence, "before handing the results to the LLM".

> The whole point of it is, whichever LLM you're using is already too dumb to not trip when lacing its own shoes. Why you'd trust it to reliably and properly parse input badly described by a terrible format is beyond me.

The toolchain is parsing, validating, and mapping the data into the format preferred by the chosen model's prompt template; the LLM has nothing to do with doing that, because that by definition has to happen before it can see the data.

You aren't trusting the LLM.


>The toolchain is parsing, validating, and mapping the data into the format preferred by the chosen model's prompt template; the LLM has nothing to do with doing that

The LLM has everything to do with that. The LLM is literally choosing to do that. I don't know why this point keeps getting missed or side-stepped.

It WILL, at some point in the future and given enough executions, as a matter of statistical certainty, simply not do the above, or pretend to do the above, or do something totally different.


> The LLM has everything to do with that. The LLM is literally choosing to do that.

No, the LLM doesn't control on a case-by-case basis what the toolchain does between the LLM putting a tool call request in an output message and the toolchain calling the LLM afterwards.

If the toolchain is programmed to always validate tool responses against the JSON schema provided by the MCP server before mapping them into the LLM prompt template and calling the LLM again to handle the response, that is going to happen 100% of the time. The LLM doesn't choose it. It CAN'T, because the only way it even knows that the data has come back from the tool call is that the toolchain has already done whatever it is programmed to do, ending with mapping the response into a prompt and calling the LLM again.

Even before MCP, or even models specifically trained and with vendor-provided templates for tool calling (but after the ReAct architecture was described), it was like a weekend project to implement a basic framework supporting tool calling around a local or remote LLM. I don't think you need to do that to understand how silly the claim is that the LLM controls what the toolchain does with each response and might make it skip validation, but doing it will certainly give you a visceral understanding of how silly it is.


I think you are, for whatever reason, missing a fact of causality here and I'm not sure I can fix that over text. I mean that in the most respectful way possible.


Are you two talking at cross-purposes because you don't have a shared understanding of control and data flow?

The pieces here are:

* Claude Code, a Node (Javascript) application that talks to MCP server(s) and the Claude API

* The MCP server, which exposes some tools through stdio or HTTP

* The Claude API, which is more structured than "text in, text out".

* The Claude LLM behind the API, which generates a response to a given prompt

Claude Code is a Node application. CC is configured in JSON with a list of MCP servers. When CC starts up, CC's Javascript initialises each server and as part of that gets a list of callable functions.

When CC calls the LLM API with a user's request, it's not just "here is the user's words, do it". There are multiple slots in the request object, one of which is a "tools" block, a list of the tools that can be called. Inside the API, I imagine this is packaged into a prefix context string like "you have access to the following tools: tool(args) ...". The LLM API probably has a bunch of prompts it runs through (figure out what type of request the user has made, maybe using different prompts to make different types of plan, etc.) and somewhere along the way the LLM might respond with a request to call a tool.

The LLM API call then returns the tool call request to CC, in a structured "tool_use" block separate from the freetext "hey good news, you asked a question and got this response". The structured block means "the LLM wants to call this tool."

CC's JS then calls the server with the tool request and gets the response. It validates the response (e.g., JSON schemas) and then calls the LLM API again bundling up the success/failure of the tool call into a structured "tool_result" block. If it validated and was successful, the LLM gets to see the MCP server's response. If it failed to validate, the LLM gets to see that it failed and what the error message was (so the LLM can try again in a different way).

The idea is that if a tool call is supposed to return a CarMakeModel string ("Toyota Tercel") and instead returns an int (42), JSON Schemas can catch this. The client validates the server's response against the schema, and calls the LLM API with

  {
    "type": "tool_result",
    "tool_use_id": "abc123",
    "is_error": true,
    "content": [
      {
        "type": "text",
        "text": "Expected string, got integer."
      }
    ]
  }
So the LLM isn't choosing to call the validator, it's the deterministic Javascript that is Claude Code that chooses to call the validator.

There are plenty of ways for this to go wrong: the client (Claude Code) has to validate; int vs string isn't the same as "is a valid timestamp/CarMakeModel/etc"; if you helpfully put the thing that failed into the error message ("Expected string, got integer (42)") then the LLM gets 42 and might choose to interpret that as a CarMakeModel if it's having a particularly bad day; the LLM might say "well, that didn't work, but let's assume the answer was Toyota Tercel, a common car make and model", ... We're reaching here, yet these are possible.

But the basic flow has validation done in deterministic code and hiding the MCP server's invalid responses from the LLM. The LLM can't choose not to validate. You seemed to be saying that the LLM could choose not to validate, and your interlocutor was saying that was not the case.

I hope this helps!


>Are you two talking at cross-purposes because you don't have a shared understanding of control and data flow?

No, they're literally just skipping an entire step in how LLMs actually "use" MCP.

MCP is just a standard, largely for humans. LLMs do not give a singular fuck about it. Some might be fine tuned for it to decrease erroneous output, but at the end of the day it's just system prompts.

And respectfully, your example misunderstands what is going on:

>* The Claude API, which is more structured than "text in, text out".

>* The Claude LLM behind the API, which generates a response to a given prompt

No. That's not what "this" is. LLMs use MCP to discover tools they can call, aka function/tool calling. MCP is just an agreed-upon format; it doesn't do anything magical. It's just a way of aligning the structure across companies, teams, and people.

There is not an "LLM behind the API". While a specific tool might implement its overall feature set using LLMs, that's totally irrelevant to what's being discussed and to the principal point of contention.

Which is this: an LLM interacting with other tools via MCP still needs system prompts or fine tuning to do so. Both of those things are not predictable or deterministic. They will fail at some point in the future. That is indisputable. It is a matter of statistical certainty.

It's not up for debate. And an agreed upon standard between humans that ultimately just acts as convention is not going to change that.

It is GRAVELY concerning that so many people are trying to use technical jargon that they are clearly ill-equipped to use. The magic rules all.


> No, they're literally just skipping an entire step in how LLMs actually "use" MCP.

No, you are literally misunderstanding the entire control flow of how an LLM toolchain uses both the model and any external tools (whether specified via MCP or not, but the focus of the conversation is MCP).

> MCP is just a standard, largely for humans.

The standard is for humans implementing both tools and the toolchains that call them.

> LLMs do not give a singular fuck about it.

Correct. LLM toolchains (which, if they can connect to tools via MCP, are also MCP clients) care about it. LLMs don't care about it because the toolchain is the thing that actually calls both the LLM and the tools. And that's true whether the toolchain is a desktop frontend with a local, in-process llama.cpp backend for running the LLM, or the Claude Desktop app with a remote connection to the Anthropic API for calling the LLM, or whatever.

> Some might be fine tuned for it to decrease erroneous output,

No, they aren't. Most models that are used to call tools now are specially trained for tool calling with a well-defined format for requesting tool calls from the toolchain and receiving results back from it (though this isn't necessary for tool calling to work; people were using the ReAct pattern in toolchains to do it with regular chat models, without any training or prespecified prompt/response format for tool calls, just by having the toolchain inject tool-related instructions in the prompt and read LLM responses to see if it was asking for tool calls), but none of them that exist now are fine tuned for MCP, nor do they need to be, because they literally never see it. The toolchain reads LLM responses, identifies tool call requests, takes any that map to tools defined via MCP and routes them down the channel (HTTP or subprocess stdio) specified by the MCP, and does the reverse with responses from the MCP server, validating responses and then mapping them into a prompt template that specifies where tool responses go and how they are formatted. It does the same thing (minus the MCP parts) for tools that aren't specified by MCP (frontends might have their own built-in tools, or have other mechanisms for custom tools that predate MCP support). The LLM doesn't see any difference between MCP tools, other tools, or a human reading the message with the tool request and manually creating a response that goes directly back.

> LLMs use MCP to discover tools they can call,

No, they don't. LLM frontends, which are traditional deterministic programs, use MCP to do that, and to find schemas for what should be sent to and expected from the tools. LLMs don’t see the MCP specs, and get information from the toolchain in prompts in formats that are model-specific and unrelated to MCP that tell them what tools they can request calls be made to and what they can expect back.

> an LLM interacting with other tools via MCP still needs system prompts or fine tuning to do so. Both of those things are not predictable or deterministic. They will fail at some point in the future. That is indisputable.

That's not, contrary to your description, a point of contention.

The point of contention is whether the validation of data returned by an MCP server against the schema provided by the server is predictable and deterministic. Confusing these two issues can only happen if you think the model does something with each response that controls whether or not the toolchain validates it, which is impossible, because the toolchain does whatever validation it is programmed to do before the model sees the data. The model has no way to know there is a response until that happens.

Now, can the model make requests that don't fit the toolchain's expectations due to unpredictable model behavior? Sure. Can the model do dumb things with the post-validation response data after the toolchain has validated it, mapped it into the model's prompt template, and called the model with that prompt, for the same reason? Abso-fucking-lutely.

Can the model do anything to tell the toolchain not to validate response data for a tool call that the toolchain decided to make on behalf of the model, if the toolchain is programmed to validate the response data against the schema provided by the tool server? No, it can't. It can't even know that the tool was provided by an MCP server and that that might be an issue, nor can it know that the toolchain made the request, nor can it know that the toolchain received a response until the toolchain has done what it is programmed to do with the response, through the point of populating the prompt template and calling the model with the resulting prompt, by which point any validation it was programmed to do has been done and is an immutable part of history.


>No, they don't. LLM frontends, which are traditional deterministic programs, use MCP to do that, and to find schemas for what should be sent to and expected from the tools.

You are REALLY, REALLY misunderstanding how this works. Like severely.

You think MCP is being used for some other purpose despite the one it was explicitly designed for... which is just weird and silly.

>Confusing these two issues can only happen if you think the model does something with each response that controls whether or not the toolchain validates it

No, you're still just arguing against something no one is arguing, for the sake of pretending that MCP does something it literally cannot do or fundamentally fix about how LLMs operate.

I promise you if you read this a month from now with a fresh pair of eyes you will see your mistake.


What do you think the `tools/call` MCP flow is between the LLM and an MCP server? For example, if I had the GitHub MCP server configured on Claude Code and prompted "Show me the most recent pull requests on the torvalds/linux repository".


Hmm, I'm not sure if everyone is simply unable to understand what you are saying, including me, but if the MCP client validates the MCP server response against the schema before passing the response to the LLM, the model doesn't even matter; your MCP client could choose to report an error and interrupt the agentic flow.

That will depend on what MCP client you are using and how they've handled it.


You missed the MCP client/host distinction :p jk, great explanation.


I don’t know how this works, just to start off.

How does the AI bypass the MCP layer to make the request? The assumption is (as I understand it) the AI says “I want to make MCP request XYZ with data ABC” and it sends that off to the MCP interface which does the heavy lifting.

If the MCP interface is doing the schema checks, and tossing errors as appropriate, how is the AI routing around this interface to bypass the schema enforcement?


>How does the AI bypass the MCP layer to make the request

It doesn't. I don't know why the other commenters are pretending this step does not happen.

There is a prompt that basically tells the LLM to use the generated manifest/configuration files. The LLM still has to not hallucinate in order to properly call the tools with JSON-RPC and properly follow the MCP protocol. It then also has to make sense of the structured prompts that define the tools in the MCP manifest/configuration file.

It's system prompts all the way down. Here's a good read of some the underlying/supporting concepts: https://huggingface.co/docs/hugs/en/guides/function-calling

Why this fact is seemingly being lost in this thread, I have no idea, but I don't have anything nice to say about it so I won't :). Other than we're all clearly quite screwed, of course.

MCP is to make things standard for humans, with expected formats. The LLMs really couldn't give a shit and don't have anything super special about how they interact with MCP configuration files or the protocol (other than some additional fine-tuning, again, to make it less likely to get the wrong output).


> There is a prompt that basically tells the LLM to use the generated manifest/configuration files.

No, there isn't. The model doesn't see any difference between MCP-supplied tools, tools built in to the toolchain, and tools supplied by any other method. The prompt simply provides tool names, arguments, and response types to the model. The toolchain, a conventional deterministic program, reads the model response, finds things that match the model's defined format for tool calls, parses out the call names and arguments, looks up its own internal list of tools to find matching names and see if they are internal, MCP-supplied, or other tools, routes the calls appropriately, gathers responses, does any validation it is designed to do, then maps the validated results into where the model's prompt template specifies tool results should go, and calls the model again with a new message appended to the previous conversation context containing the tool results.


Do you have any technical diagrams or specs that describe this flow? I've been reading the LangChain[0] and MCP docs[1] and cannot find the behavior you're proposing anywhere.

[0]- https://langchain-ai.github.io/langgraph/agents/mcp/

[1]- https://docs.anthropic.com/en/docs/mcp


Because it's about the MCP Host <-> LLM interaction. Not how a vanilla server and client communicate to each other and have done so for the last 5+ decades.

This really is not that hard to understand. The LLM must be "bootstrapped" with tool definitions and it must retain stable enough context to continue to call those tools into the future.

This will fail at some point, with any model. It will pretend to do a tool call, it will simply not do the tool call, or it will attempt to call a tool that does not exist, or any of the above or anything else not listed here. It is a statistical certainty.

I don't know why people are pretending MCP does something to fix this, or that MCP is special in anyway. It won't, and it's not.

Make sure you have a good understanding of the overall model: https://hackteam.io/blog/your-llm-does-not-care-about-mcp/

Then take a look at research like this: https://www.archgw.com/blogs/detecting-hallucinations-in-llm...


Oh, so you're not talking about JSON validation inside the MCP server; you're talking about the contract between the LLM and the MCP server potentially changing. This is a valid issue, the same as with any other external API that must be written against. MCP does not solve this, correct, just the same as Swagger does not solve it.

As for your comments on LLM pretending to do tool calls, sure. That's not what the original thread comments were discussing. There are ways to mitigate this with proper context and memory management but it is more advanced.


>That's not what the original thread comments were discussing. There are ways to mitigate this with proper context and memory management but it is more advanced.

That is what the original article is describing, and what the comments misunderstood or purposefully over-simplified, and it extends to being able to trace these issues across a large number of calls/invocations at scale.

>MCP has none of this richness. No machine-readable contracts beyond basic JSON schemas means you can’t generate type-safe clients or prove to auditors that AI interactions follow specified contracts.

>MCP ignores this completely. Each language implements MCP independently, guaranteeing inconsistencies. Python’s JSON encoder handles Unicode differently than JavaScript’s JSON encoder. Float representation varies. Error propagation is ad hoc. When frontend JavaScript and backend Python interpret MCP messages differently, you get integration nightmares. Third-party tools using different MCP libraries exhibit subtle incompatibilities only under edge cases. Language-specific bugs require expertise in each implementation, rather than knowledge of the protocol.

>Tool invocations can’t be safely retried or load-balanced without understanding their side effects. You can’t horizontally scale MCP servers without complex session affinity. Every request hits the backend even for identical, repeated queries.

Somehow the comments confused a server <-> client interaction (which has been a non-issue for decades) with making the rest of the "call stack" dependable. What leads to that level of confusion? I can only guess it's inexperience and religious zealotry.

It's also pretty laughable that certain commenters say I "should" (I'm using this word on purpose) read the spec, considering how vague the "protocol" itself is.

>Clients SHOULD validate structured results against this schema.

Have fun with that one. MCP could have at least copied the XML/SOAP process around this and we'd be better off.

Which, again, leads back to the article's ultimate premise. MCP does a lot of talking and not a lot of walking; it's pointless at best and is going to lead to A LOT of integration headaches.


I don't think people in this thread are really confused about MCP. They are confused that you claimed, or at least insinuated, that an LLM might skip the schema validation portion of an MCP tool call request/response, which was originally demonstrated via Claude Code. Hopefully you can understand why everyone seems so confused, since that claim doesn't make any sense when the LLM doesn't really have anything to do with schema validation at all.


What you described is essentially how it works. The LLM has no control over how the inputs & outputs are validated, nor in how the result is fed back into it.

The MCP interface (Claude Code in this case) is doing the schema checks. Claude Code will refuse to provide the result to the LLM if it does not pass the schema check, and the LLM has no control over that.


>The LLM has no control over how the inputs & outputs are validated

Which is completely fucking irrelevant to what everyone else is discussing.


> > The LLM has no control over how the inputs & outputs are validated

> Which is completely fucking irrelevant to what everyone else is discussing.

Not sure what you think is going on, but that is literally the question this subthread is debating, starting with an exchange in which the salient claims were:

From: https://news.ycombinator.com/item?id=44849695

> Claude Code validated the response against the schema and did not pass the response to the LLM.

From: https://news.ycombinator.com/item?id=44850894

> This time.

> Can you guarantee it will validate it every time?


This is deterministic: it is validating the response using a JSON Schema validator and refusing to pass it to an LLM inference.

I can't guarantee that behavior will remain the same any more than for any other software. But all this happens before the LLM is even involved.

> The whole point of it is, whichever LLM you're using is already too dumb to not trip when lacing its own shoes. Why you'd trust it to reliably and properly parse input badly described by a terrible format is beyond me.

You are describing why MCP supports JSON Schema. It requires parsing & validating the input using deterministic software, not LLMs.


>This is deterministic: it is validating the response using a JSON Schema validator and refusing to pass it to an LLM inference.

No. It is not. You are still misunderstanding how this works. It is "choosing" to pass this to a validator or some other tool, _for now_. As a matter of pure statistics, it will simply not do this at some point in the future on some run.

It is inevitable.


I'd encourage you to read the MCP specification: https://modelcontextprotocol.io/specification/2025-06-18/ser...

Or write a simple MCP server and a client that uses it. FastMCP is easy: https://gofastmcp.com/getting-started/quickstart

You are quite wrong. The LLM "chooses" to use a tool, but the input (provided by the LLM) is validated with JSON Schema by the server, and the output is validated by the client (Claude Code). The output is not provided back to the LLM if it does not comply with the JSON Schema; instead an error is surfaced.


> The LLM "chooses" to use a tool

I think the others are trying to point out that, statistically speaking, in at least one run the LLM might do something other than choose to use the correct tool. I.e., 1 out of (say) 1 million runs it might do something else.


No, the discussion is about whether validation is certain to happen when the LLM produces something that the frontend recognizes as a tool request and calls a tool on behalf of the LLM, not whether the LLM can choose not to make a tool call at all.

The question is whether, having observed Claude Code validating a tool response before handing the response back to the LLM, you can count on that validation on future calls, not whether you can count on the LLM calling a tool in a similar situation.


Why do you think anything you said contradicts what I'm saying? I promise you I'm probably far more experienced in this field than you are.

>The LLM "chooses" to use a tool

Take a minute to just repeat this a few times.


MCP requires that servers providing tools must deterministically validate tool inputs and outputs against the schema.

LLMs cannot decide to skip this validation. They can only decide not to call the tool.

So is your criticism that MCP doesn't specify if and when tools are called? If so then you are essentially asking for a massive expansion of MCP's scope to turn it into an orchestration or workflow platform.


The LLM chooses to call a tool; it doesn't choose how the frontend handles anything about that call between the LLM making a tool request and the frontend, after having done its processing of the response (including any validation), mapping the result into a new prompt and calling the LLM with it.


> It is "choosing" to pass this to a validator or some other tool, _for now_.

No, it's not. The validation happens at the frontend before the LLM sees the response. There is no way for the LLM to choose anything about what happens.

The cool thing about having coded a basic ReAct pattern implementation (before MCP, or even models trained on any specific prompt format for tool calls, was a thing, but none of that impacts the basic pattern) is that it gives a pretty visceral understanding of what is going on here, and all that's changed since is per model standardization of prompt and response patterns on the frontend<->LLM side and, with MCP, of the protocol for interacting on the frontend<->tool side.


Claude Code isn't a pure LLM, it's a regular software program that calls out to an LLM with an API. The LLM is not making any decisions about validation.


As an example.

"1979010112345" is a unix timestamp that looks like it might be Jan 1 1979 datetime formatted as an integer, but is really Sep 17 2032 05:01:52.


This can even be a feature: https://github.com/Zorokee/ArtificialCast


IMHO it's a fantasy to expect type-safe protocols except in the case that both client and server are written in the same (type-safe) language. Actually even that doesn't work. What language actually allows a type definition for "ISO-8601 timestamp" that's complete? Everything ends up being some construction of strings and numbers, and it's often not possible to completely describe the set of valid values except by run-time checking, certainly beyond trivial cases like "integer between 0 and 10".


> What language actually allows a type definition for "ISO-8601 timestamp" that's complete?

It is absolutely possible to do this, and to generate client code which complies with ISO-8601 in JS/TS. Large amounts of financial services would not work if this was not the case.

See the c# support for ISO-8601 strings: https://learn.microsoft.com/en-us/dotnet/standard/base-types...


You've misunderstood his statement and proven his point.

`DateTime` is not an ISO-8601 type. It can _parse_ an ISO-8601 formatted string.

And even past that, there are Windows-specific idiosyncrasies with how the `DateTime` class implements the parsing of these strings and how it stores the resulting value.


> `DateTime` is not an ISO-8601 type. It can _parse_ an ISO-8601 formatted string.

This is exactly the point: a string is just a data interchange format in the context of a DateTime, and C# provides (as far as I can tell) a complete way of accessing the ISO-8601 specification on the language object. It also supports type-safe generation of clients and client object (or struct) generation from the ISO-8601 string format.

> And even past that, there are Windows-specific idiosyncrasies with how the `DateTime` class implements the parsing of these strings and how it stores the resulting value.

Not really. The Windows statements in the article (and I use this on Linux for financial services software) are related to automated settings of the preferences for generated strings. All of these may be set within the code itself.


>The Windows statements in the article

That was based on decades of experience in .NET and Windows. Not the article ;).


Ah. For what it's worth, the only Windows machine I have is used pretty much only to run Steam. I can't really comment on C# on Windows.


Generally you'd use a time library to model ISO-8601 dates in a typesafe way. Some fancier languages might have syntactic support for it, but they ultimately serve the same purpose.

Related but distinct from serialization.
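
In Python terms, the usual pattern is to parse at the boundary and hand only typed values to the rest of the program; the string is just the wire format:

  from datetime import datetime

  def parse_timestamp(raw: str) -> datetime:
      # Raises ValueError on anything that isn't an ISO-8601 string
      # (fromisoformat covers a subset of the standard), so downstream
      # code only ever sees a real datetime, never a raw string.
      return datetime.fromisoformat(raw)

  ts = parse_timestamp("2025-01-31T09:30:00+00:00")
  print(ts.year, ts.tzinfo)   # 2025 UTC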


Not only is it possible, but it has been done many times before, and a lot of modern business software is still built on that. You know that old-fashioned XML thing? It has schemas, and those schemas come with a lot of predefined types, including, specifically, a whole family of ISO 8601-related types:

https://www.w3.org/TR/xmlschema-2/#isoformats

And so you can write a web service in Java and use it from C#, or whatever.


Let's not focus on 100% or nothing. Let's focus on better and worse.


At its core, the article was just ramblings from someone being upset that LLMs didn't make things more complicated so that they could charge more billable hours to solve invented corporate problems... Which some people built their career on.

The merchants of complexity are disappointed. It turns out that even machines don't care for 'machine-readable' formats; even the machines prefer human-readable formats.

The only entities on this planet who appreciate so-called 'machine-readability' are bureaucrats; and they like it for the same reason that they like enterprise acronyms... Literally the opposite of readability.


I look forward to waiting a decade and seeing what MCP ends up reinventing.


For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether the APIs are "RESTful", with no practical advantages, and then having to steer the conversation--and their motivation--towards more useful things without alienating them. It's tiresome.


It was buried towards the bottom of the article, but the reason, to me:

Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.

Of course, OpenAPI (and perhaps to some extent now AI) also means that clients don't need to be written; they are just generated.

However, it is perhaps important to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system, it was beginning to gain traction. HATEOAS is a much more general yet simple and comprehensive system in comparison.

Of course, you don't need any of this. So people built the APIs they did need, which were not RESTful but had an acronym that their bosses thought sounded better than SOAP, and the rest is history.


> Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.

That was the theory, but it was never true in practice.

The oft-made comparisons to the browser really missed the mark. The browser was driven by advanced AI wetware.

Given the advancements in LLMs, it's not even clear that RESTish interfaces would be easier for them to consume (say vs. gRPC, etc.)


Then let developer-Darwin win and fire those people. Let the natural selection of the hiring process win against pedantic assholes. The days are too short to argue over issues that are not really issues.


Can we just call them HTTP APIs?


"Everyone should support education" is an empty platitude, it doesn't help answer questions like "how much funding?" and "who gets funding and who doesn't?". That's where the sides arise.

The author (and Nature) pretends that those aren't real problems and that scientists should get unconditional support. That's never been the case.


Cursor does all that stuff too perfectly fine.


Meanwhile multiple non-technical people that I know pay $20/mo to OpenAI and have long, verbal conversations with ChatGPT every day to learn new things, explore ideas, reflect, etc.

These are obviously what voice assistants should do; the research was just not there. Amazon was unwilling to invest in the long-term research to make that a reality, because of a myopic focus on easy-to-measure KPIs. After pouring billions of dollars into Alexa. A catastrophic management failure.


Are they talking to ChatGPT, or are they typing? More and more we're seeing that users don't even want to use a phone for phone calls, so maybe a voice interface really isn't the way to go.

Edit: Oh, you wrote "verbal"; that seems weird to me. Most people I know certainly don't want to talk to their devices.


My wife paid for ChatGPT and is loving it - she only types to it so far (and sends it images and screenshots), but I've had a go at talking to it and it was much better than I thought.

If I'm alone I don't mind talking if it is faster, but there is no way I'm talking to AI in the office or on the train (yet...)


> If I'm alone I don't mind talking if it is faster

When is talking faster than text? I only ever use it when my hands are tied (usually for looking up how to do things while playing a video game).


When you can talk at your normal pace?

People talk at about 120-160 WPM naturally; few can type that fast, which is why stenographers have a special keyboard and notation.


I struggle to have naturally flowing conversation with an AI for much the same reason people don't use most of Siri's features - it's awkward and feels strange.

As such I can maintain about five minutes of slow pace before giving up and typing. I have to believe others have similar experiences. But perhaps I'm an outlier.


I feel tiredness in my throat when I talk to bots like Alexa as you have to enunciate in a special way to get across to them.


Sure, it definitely doesn't work for everyone. I think it's accent-dependent or something, as some people's natural voices come across fine.


I know quite a few folks who chat with the GPTs, especially while commuting in the car. There are also niche uses like language practice.


How does that eliminate the need for the graceful shutdown the author discusses?


In the same way that GC eliminates the need for manual memory management.

Sometimes it's not enough and you have to 'do it by hand', but generally if you're working in a system that has GC, freeing memory is not something that you think of often.

The BEAM is designed for building distributed, fault-tolerant systems in the sense that these types of concerns are first-class objects, as compared to having them as external libraries (e.g. Kafka) or completely outside of the system (e.g. Kubernetes).

The three points the author lists at the beginning of the article are already built in and their behavior is described rather than implemented, which is what I think OP meant by not having to 'intentionally create graceful shutdown routines'.


I really don't see how what you are describing has anything to do with the graceful shutdown strategies/tips mentioned in the post.

- Some applications want to terminate instantly upon receiving kill signals, others want to handle them; OP shows how to handle them

- In the case of HTTP servers, you want to stop listening for new requests but finish handling current ones under a timer. To be fair, OP's post actually handles that badly, with a time.Sleep when there's a running connection instead of a sync.WaitGroup like most applications would want (see the sketch at the end of this comment)

- Regardless of whether the application is GC'd or not, you probably still want to manually close connections so you can handle any possible errors (a lot of connection code flushes data on close)
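
For the second point, the general pattern is: trap the signal, stop accepting new work, then drain in-flight work under a deadline. A minimal sketch in Python with asyncio (not the post's Go, and with made-up ports and timeouts):

  import asyncio
  import signal

  inflight = set()   # tasks for requests currently being handled

  async def handle(reader, writer):
      task = asyncio.current_task()
      inflight.add(task)
      try:
          await reader.read(1024)          # toy "request"
          await asyncio.sleep(1)           # simulate work
          writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
          await writer.drain()
      finally:
          writer.close()
          inflight.discard(task)

  async def main():
      server = await asyncio.start_server(handle, "127.0.0.1", 8080)
      stop = asyncio.Event()
      loop = asyncio.get_running_loop()
      for sig in (signal.SIGINT, signal.SIGTERM):   # Unix-only signal handling
          loop.add_signal_handler(sig, stop.set)    # 1. trap the shutdown signal

      await stop.wait()
      server.close()                                # 2. stop accepting new connections
      await server.wait_closed()
      if inflight:                                  # 3. wait for in-flight requests,
          await asyncio.wait(inflight, timeout=10)  #    but only for a bounded time

  asyncio.run(main())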


The thread OP's comment was pointing out that in Elixir there is no need to manually implement these strategies, as they already exist within OTP as first-class members on the BEAM.

The blog post author has to hand-roll these, including picking the wrong solution with time.Sleep, as you mentioned.

My analogy with GC was in that spirit; if GC is built in, you don't need custom allocators, memory debuggers etc 99% of the time because you won't be poking around memory the same way that you would in say C. Malloc/free still happens.

Likewise, graceful shutdown, trapping signals, restarting queues, managing restart strategies for subsystems, service monitoring, timeouts, retries, fault recovery, caching, system wide (as in distributed) error handling, system wide debugging, system wide tracing... and so on, are already there on the BEAM.

This is not the case for other runtimes. Instead, to the extent that you can achieve these functionalities from within your runtime at all (without relying on completely external software like Kubernetes, Redis, Datadog etc), you do so by glueing together a tonne of libraries that might or might not gel nicely.

The BEAM is built specifically for the domain "send many small but important messages across the world without falling over", and it shows. They've been incrementally improving it for some ~35 years; there are very few known unknowns left.


> They have much superior product compared to VSCode in terms of pretty much everything, except AI

Disagree, I keep trying JetBrains once in a while and keep walking away disappointed (I used to be a hardcore user). I use VS Code because it is seamlessly polyglot. JetBrains wants me to launch a whole separate IDE for different use cases, which is just horrible UX for me. Why would I pay hundreds for a worse UX?


I use IntelliJ for all languages at work.


You can install nearly all of their supported language plugins in your editor, FYI. You just lose some of the language-specific integrations if you use the Python plugin via IntelliJ, for example.


Which isn't a problem with VSCode. And it's free.


> Which isn't a problem with VSCode

you also have to install plugins in vscode

> And it's free

not relevant to my comment

