drcxd's comments

A leetcode-style automated assignment is like a test. We all took tests in school, so I guess most people do not feel there is anything wrong with that.

However, an interview, which should be conducted by a human but is instead run by an AI pretending to be one, would naturally disgust most people.

Is there any evidence that an AI-conducted interview reveals more than a pencil-and-paper test? Or any scientific research on the question? I doubt there will be any in the near future. Until then, using AI-conducted interviews is simply an act of faith.


You misread. The parent comment said:

> the women has am easier time not to commit and just sleep around

That part is about women. The second time they mentioned sleeping around:

> Dating apps are available, statistically women all try to get into a relationship with the same 1% of men - who sleep around and cause toxicity all around.

Here they mean the 1% of men are the ones who sleep around. Also, I think "sleep around" is better interpreted as being in a non-committed sexual relationship with a non-marital partner. It describes a fact rather than makes an accusation, though the wording may sound harsh.


In my opinion, reproduction amounts to borrowing against the future, like a Ponzi scheme: it rests on the idea that "we will have a better future," but in fact we will not.


Yeah, and if there really is that much boilerplate, can't we programmers come up with a more deterministic solution, like a framework? I don't know.


We have, haven't we? That was what was so nice about Rails 20 years ago: you just ran `rails new myapp` and had a functioning web app that could even read your database and generate all the model files.


This reminds me of the video game crash of 1983: https://en.wikipedia.org/wiki/Video_game_crash_of_1983

There were so many poor-quality video games that consumers simply refused to spend time picking the high-quality ones out of the enormous pile of bad ones.

I wonder whether the software industry will experience something similar.


I like your idea of finding the pattern of those "embarrassing LLM questions". However, I do not understand your example. What is a random program? Is it a program that compiles/executes without error but can literally do anything? Also, how do you translate a program to plain English?


A randomly generated program from a space of programs defined by a set of generating actions.

A simple example is a programming language that can only operate on integers, do addition, subtraction, and multiplication, and check for equality. You can create an infinite number of programs of this sort. Once generated, these programs are evaluated within a split second. You can translate them all to English programmatically, ensuring grammatical and semantic correctness, using a rule set that maps the program to English. The LLM can then provide its own evaluation of the output.

For example:

program:

1 + 2 * 3 == 7

evaluates to true in its machine-readable, non-LLM form.

LLM-readable English form:

Is one plus two times three equal to seven?

The LLM will evaluate this to either true or false. You then compare its answer with what classical execution produced.

Now take this principle, and create a much more complex system which can create more advanced interactions. You could talk about geometry, colors, logical sequences in stories, etc.
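A minimal sketch of this generate-evaluate-translate loop for the integer-arithmetic example (all names here are mine, not from the thread, and the English rendering is deliberately naive):

```python
import random

# Operator table: (machine symbol, English word).
OPS = [("+", "plus"), ("-", "minus"), ("*", "times")]
WORDS = ["zero", "one", "two", "three", "four",
         "five", "six", "seven", "eight", "nine"]

def random_program(n_terms, rng):
    """Generate one program, its English rendering, and the ground truth."""
    digits = [rng.randrange(10) for _ in range(n_terms)]
    ops = [rng.choice(OPS) for _ in range(n_terms - 1)]
    expr, english = str(digits[0]), WORDS[digits[0]]
    for (sym, word), d in zip(ops, digits[1:]):
        expr += f" {sym} {d}"
        english += f" {word} {WORDS[d]}"
    value = eval(expr)  # classical, non-LLM evaluation; expr is machine-generated
    claimed = rng.choice([value, value + rng.randrange(1, 5)])  # sometimes wrong
    program = f"{expr} == {claimed}"
    question = f"Is {english} equal to {claimed}?"
    return program, question, value == claimed

rng = random.Random(0)
program, question, truth = random_program(3, rng)
# Feed `question` to the LLM and compare its yes/no answer with `truth`.
```

Since the generator sometimes emits a false equality claim, the LLM cannot score well by always answering "yes."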


news.ycombinator.com/item?id=46670279

There was recently this post, which was largely generated by Claude Code. Read it.


But the dashboard is not important at all, because everyone can have the same dashboard the same way you got yours. It's like generating a static website with Hugo and applying an off-the-shelf theme. The end product is something built on an assembly line: no taste, no soul, no effort. (Of course, there is effort behind designing and building the assembly line, just not in the products it turns out.)

Now, if you want to use the dashboard to do something genuinely brilliant, it is good enough as a means. Just make sure the dashboard is not the end.


The dashboard is just an example. The gist is how much of the know-how we use in our work can be replaced by AI transforming other people's existing work. I think it hinges on how many new problems or new business demands show up. If we only work on small variations of existing business, our know-how will quickly converge (e.g., building a dashboard or a vanilla linear regression model), and AI will spew out such code for many of us.


Strictly speaking, Lua is not global by default. All free names, that is, all names not declared with `local`, are actually indexed from a table `_ENV`, which by default is set to `_G`, the global environment. So all free names are effectively global by default, but you can change this behavior by putting this line at the top of your file: `local _G = _G; _ENV = {}`. This way, all free names are indexed from the new empty table, and access to the real globals must go explicitly through `_G`, which is now a local variable. However, I have never seen this practiced. Maybe it is just simpler to accept that all free names are global and to mark locals explicitly.
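A minimal sketch of that trick (Lua 5.2+, where `_ENV` exists):

```lua
-- Replace the environment so free names no longer hit the real globals.
local _G = _G   -- keep a local handle on the old global table
_ENV = {}       -- from here on, free names resolve in this empty table

x = 1           -- creates _ENV.x, not a real global
_G.print(x)     -- prints 1 (x is found in the new _ENV)
_G.print(_G.x)  -- prints nil (the real global table was never touched)
```

Note that even `print` now has to be reached through `_G`, which is part of why this style is rarely used.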


Thanks to Lua's great metaprogramming facilities, and the fact that _G is just a table, another workaround is to give _G a metatable whose `__newindex` metamethod throws an error when you try to create a global. That way you can still declare globals with `rawset` if you really want them, but it prevents you from creating them accidentally in a function body.
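A minimal sketch of that guard, assuming stock Lua with no libraries:

```lua
-- Install a metatable on _G that rejects accidental global creation.
setmetatable(_G, {
  __newindex = function(_, name)
    error("attempt to create global '" .. tostring(name) .. "'", 2)
  end,
})

local ok = pcall(function() oops = 1 end)
print(ok)  -- prints false: the accidental global was blocked

rawset(_G, "intended", 1)  -- rawset bypasses __newindex for deliberate globals
print(intended)            -- prints 1
```

Reads of existing globals are unaffected, since `__newindex` only fires on assignment to a missing key.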


Obviously, since the training material for such esoteric languages is scarce. (That's why they are esoteric!) So, almost by definition, LLMs will never be good at esoteric languages.

