
> You don't get reams of wishy-washy code unless you ask for it.

I've found that they get pretty wishy-washy when you correct them. As an example, yesterday I was working on porting a function from the open-source CUE4Parse project from C# to Python (for a hobby project), and the LLM (Gemini 2.5 Pro) suggested the following translation of a C# method:

    @staticmethod
    def from_name(name: str) -> 'FPackageId':
        # C# Encoding.Unicode is UTF-16 LE in Python
        lower_name_bytes = name.lower().encode('utf-16-le')
        hashed_id = CityHash64(lower_name_bytes)
        return FPackageId(hashed_id)

I noted that the original used a custom ToLower() implementation:

> This custom ToLower(), does that produce the same result as .lower() in Python?

Gemini answered with a lot of text and concluded: "You should use Python's standard lower() method for your port."

I pushed back with:

> Right, but for this to work (looking up an asset by its hash as contained in global.utoc), I probably have to match the behavior of Unreal Engine...

  Gemini: You are absolutely right. My apologies, your insight is spot on. For the purpose of matching hashes that are pre-calculated by Unreal Engine and stored in files like `global.utoc`, you **must** replicate the engine's behavior exactly. My previous advice to use Python's `.lower()` was incorrect in this specific context.

  You need to implement a function that behaves identically to the C# `ToLower` you highlighted. The engine uses this specific, non-Unicode-aware, ASCII-only lowercasing function before hashing the path to generate the `FPackageId`. Using Python's standard `.lower()` would produce a different byte sequence for any path containing non-English characters, leading to a hash mismatch.
In my experience, this is representative of typical LLM interactions once one ventures into niche topics like Unreal Engine modding.
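To make the hash-mismatch point concrete, here is a minimal Python sketch. It assumes (as the corrected answer suggests, but without having checked CUE4Parse's actual source) that the engine's custom ToLower maps only ASCII 'A'-'Z' and leaves every other code point untouched; the function names are hypothetical, chosen for illustration:

```python
def unreal_to_lower(name: str) -> str:
    # Assumed ASCII-only lowercasing: only 'A'-'Z' are mapped,
    # all other code points (including accented letters) pass through.
    return ''.join(
        chr(ord(c) + 32) if 'A' <= c <= 'Z' else c
        for c in name
    )

def package_id_input_bytes(name: str) -> bytes:
    # The lowercased path is encoded as UTF-16 LE (C# Encoding.Unicode)
    # before being hashed, e.g. with CityHash64 from the third-party
    # `cityhash` package as in the snippet above.
    return unreal_to_lower(name).encode('utf-16-le')

# A non-ASCII character exposes the difference from str.lower():
path = "Game/Maps/Ärena"
print(unreal_to_lower(path))  # game/maps/Ärena  ('Ä' stays uppercase)
print(path.lower())           # game/maps/ärena
```

Any path containing a non-ASCII uppercase letter thus produces different bytes, and therefore a different CityHash64, under the two lowercasing schemes.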


Yes, if they make a mistake and you point it out, they will often make another mistake. In many of those cases it's better to scrap the session and start with a fresh context, try a different prompt, or provide a smaller context.

Also, more importantly, they tend to ignore negative directives. Telling one "don't do X" will get ignored; you are better off using positive directives instead.


It's pretty difficult to have a useful back-and-forth with an LLM, because they're heavily fine-tuned to be agreeable (and also they're not particularly smart, just knowledgeable; their "system 1" is much better than their "system 2", to borrow from Thinking, Fast and Slow). Generally speaking, if they don't get a useful answer in one shot or with relatively simple, objective feedback, they'll just flop around and agree with whatever you last suggested.


Exactly.

But to make a comparison with Claude Code: I was initially impressed with Gemini's ability to keep a conversation on track, but it rarely gets the hint when I express annoyance with its output. Claude has an uncanny ability to guess what I find wrong with its output (even when I just respond with "WTF!") and will try to fix it, often in actually useful ways; Gemini just keeps repeating its last output after acknowledging my annoyance.



