Hacker News | idopmstuff's comments

Good for you. Quitting alcohol isn't easy, and you should feel proud.

Happy New Year!

2025: My first full year of running my own business, and things really went well. Also had my second child, a beautiful daughter who is currently not sleeping well at night. I am so tired. I lost my dog suddenly - he was my first dog and my companion for nine years. I love that guy so much.


I can feel your pain, my brother. I too lost my dog (my brother), who was with me for 14 years.

That is terrible - I am really sorry.

best of luck to you big guy, best to you and yours

> But when they do that, Draft One erases the initial draft, and with it any evidence of what portions of the report were written by AI and what portions were written by an officer. That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that."

This seems solvable by passing a law that makes the officer legally responsible for the report as if he had written it. He doesn't get to use this excuse in the courtroom and it gets stricken from the record if he tries. That honestly seems like a better solution than storing the original AI-generated version, because that can reinforce the view that AI wrote it to jurors, even if the officer reviewed it and decided it was correct at the time.


Yeah, this seems like an obvious solution, which Axon ought to be on board with, since it protects them.

When juniors use the excuse “oh, Claude wrote that” in a PR, I tell them that if the PR has their name on it, they wrote it - and their PRs are part of their performance review. This is no different.


I own a business and am constantly working on using AI in every part of it, both for actual time savings and also as my very practical eval. On the "can this successfully be used to do work that I do or pay someone else to do more quickly/cheaply/etc." eval, I can confirm that models are progressing nicely!


I work in construction. GPT-5.2 is the first model that has been able to make a quantity takeoff for concrete and rebar from a set of drawings. I've been testing this since o1.


This is just semantics. You can say they don't understand, but I'm sitting here with Nano Banana Pro creating infographics, and it's doing as good of a job as my human designer does with the same kinds of instructions. Does it matter if that's understanding or not?


> This is just semantics.

Precisely my point:

  semantics: the branch of linguistics and logic concerned with meaning.
> You can say they don't understand, but I'm sitting here with Nano Banana Pro creating infographics, and it's doing as good of a job as my human designer does with the same kinds of instructions. Does it matter if that's understanding or not?

Understanding, when used in its unqualified form, implies people possessing same. As such, it is a metaphysical property unique to people and defined wholly therein.

Excel "understands" well-formed spreadsheets by performing specified calculations. But who defines those spreadsheets? And who determines the result to be "right?"

Nano Banana Pro "understands" instructions to generate images. But who defines those instructions? And who determines the result to be "right?"

"They" do not understand.

You do.


"This is just semantics" is a set phrase in English and it means that the issue being discussed is merely about definitions of words, and not about the substance (the object level).

And generally the point is that it does not matter whether we call what they do "understanding" or not. It will have the same kind of consequences in the end, economic and otherwise.

This is basically the number one hangup that people have about AI systems, all the way back since Turing's time.

The consequences will come from AI's ability to produce certain types of artifacts and perform certain types of transformations of bits. That's all we need for all the scifi stuff to happen. Turing realized this very quickly, and his famous Turing test is exactly about making this point. It's not an engineering kind of test. It's a thought experiment trying to prove that it does not matter whether it's just "simulated understanding". A simulated cake is useless, I can't eat it. But simulated understanding can have real world effects of the exact same sort as real understanding.


> "This is just semantics" is a set phrase in English and it means that the issue being discussed is merely about definitions of words, and not about the substance (the object level).

I understand the general use of the phrase and used same as an entryway to broach a deeper discussion regarding "understanding."

> And generally the point is that it does not matter whether we call what they do "understanding" or not. It will have the same kind of consequences in the end, economic and otherwise.

To me, when the stakes are significant enough to already see the economic impacts of this technology, it is important for people to know where understanding resides. It exists exclusively within oneself.

> A simulated cake is useless, I can't eat it. But simulated understanding can have real world effects of the exact same sort as real understanding.

I agree with you in part. Simulated understanding absolutely can have real world effects when it is presented and accepted as real understanding. When simulated understanding is known to be unrelated to real understanding and treated as such, its impact can be mitigated. To wit, few believe parrots understand the sounds they reproduce.


Your view on parrots is wrong! Parakeets don't understand, but some parrots are exceptionally intelligent.

African grey parrots do understand the words they use; they don't merely reproduce them. Once mature, they have the intelligence (and temperament) of a four- to six-year-old child.


> Your view on parrots is wrong!

There's a good chance of that.

> African grey parrots do understand the words they use; they don't merely reproduce them. Once mature, they have the intelligence (and temperament) of a four- to six-year-old child.

I did not realize I could discuss with an African grey parrot the shared experience of how difficult it was to learn how to tie my shoelaces and what the feeling was like to go to a place every day (school) which was not my home.

I stand corrected.


You can, of course, define understanding as a metaphysical property that only people have. If you then try to use that definition to determine whether a machine understands, you'll have a clear answer for yourself. The whole operation, however, does not lead to much understanding of anything.


>> Understanding, when used in its unqualified form, implies people possessing same.

> You can, of course, define understanding as a metaphysical property that only people have.

This is not what I said.

What I said was that unqualified use of "understanding" implies the understanding people possess. Thus it is by definition a metaphysical property, existing strictly within a person.

Many other entities possess their own form of understanding. Most would agree mammals do. Some would say any living creature does.

I would make the case that every program compiler (C, C#, C++, D, Java, Kotlin, Pascal, etc.) possesses understanding of a particular sort.

All of the aforementioned examples differ from the kind of understanding people possess.


The visual programming language for programming human and object behavior in The Sims is called "SimAntics".

https://simstek.fandom.com/wiki/SimAntics


Speaking of programming languages...

Just saw your profile and it reminded me of a book my mentor bequeathed to me which we both referred to as "the real blue book":

  Starting FORTH[0]
Thanks for bringing back fond memories.

0 - https://www.goodreads.com/book/show/2297758.Starting_FORTH


> it is a metaphysical property unique to people

So basically your thesis is also your assumption.


I have always found writing documentation to be incredibly helpful for clarifying my thinking. It prevents me from doing mental hand-waving around details, and oftentimes writing down a process that I have done a thousand times is the thing that makes me realize how I can cut steps or improve it.

I'm now in the process of trying to hand off chunks of the work I do to run my business to AI (both to save time but also just as my very broad, practical eval). It really is all about documentation. I buy small e-commerce brands, and they're simple enough that current SOTA models have more than enough intelligence to take a first pass at listings + financials to determine whether I should take a call with the seller. To make that work, though, I've got a prompt - currently at six pages - that is just every single thing I look at when evaluating a business, codified.

Using that has really convinced me that people are overrating the importance of intelligence in LLMs in terms of driving real economic value. Most work is like my evaluations - it requires intelligence, but there's a ceiling to how much you need. Someone with a 150 IQ wouldn't do any better at this task than someone with a 100 IQ.

Instead, I think what's going to drive actual change is the scaffolding that lets LLMs take on increasing numbers of tasks. My big issue right now is that I have to go to the listing page for a business that's for sale, screenshot the page, download the files, upload that all to ChatGPT and then give it the prompt. I'm still waiting for a web browsing agent that can handle all of that for me, so I can automate the full flow and just get an analysis of each listing sent to me without having to do anything.
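The flow above (screenshot the listing, grab the files, run the evaluation prompt over everything) could be sketched as code. This is purely illustrative - every name here is invented, and each step is passed in as a callable so real implementations (say, a Selenium screenshot helper or an LLM API call) could be slotted in:

```python
# Rough sketch of the manual listing-analysis flow. All names are
# hypothetical; the steps are injected as callables so any concrete
# tooling (browser automation, an LLM API) can be plugged in later.

def analyze_listing(url, take_screenshot, download_files, run_prompt,
                    evaluation_prompt):
    """Screenshot the listing page, gather its attached files, and run
    the evaluation prompt over everything; returns the analysis."""
    screenshot = take_screenshot(url)   # listing page captured as an image
    files = download_files(url)         # financials, asset lists, etc.
    attachments = [screenshot] + files
    return run_prompt(evaluation_prompt, attachments)
```

With that shape, swapping in a real browsing agent later would only mean replacing the two fetch callables.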


Question: could you use something like (example) Selenium to perform some or all of those pre-LLM tasks?


I could (in theory, at least - practically, I'm not technically proficient enough to do so), and in fact one of the most promising web browsing agents I've tested is director.ai, which just writes Stagehand code on the fly to achieve the objectives you give it. Unfortunately, it can't be invoked via API yet, so it doesn't work for my use case.

Honestly, it takes such a relatively small amount of time that it makes sense to just do it myself until there's an agent that can easily handle it; I'm really only spending time trying to automate it now as a test of AI capabilities. If I actually wanted to get it automated tomorrow, the most time-efficient way to do that would just be to hire a VA from somewhere cheap for the work I'm doing.


Yeah, I've done it with industry-specific acronyms and this works well. Generate a list of company names and other terms it gets wrong, and give it definitions and any other useful context. For industry jargon, example sentences are good, but that's probably not relevant for company names.

Feed it that list and the transcript along with a simple prompt along the lines of "Attached is a transcript of a conversation created from an audio file. The model doing the transcription has trouble with company names/industry terms/acronyms/whatever else and will have made errors with those. I have also attached a list of company names/etc. that may have been spoken in the transcribed audio. Please review the transcription, and output a corrected version, along with a list of all corrections that you made. The list of corrections should include the original version of the word that you fixed, what you updated it to, and where it is in the document." If it's getting things wrong, you can also ask it to give an explanation of why it made each change that it did and use that to iterate on your prompt and the context you're giving it with your list of words.
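As a sketch of the recipe above (names invented, not anything anyone actually runs), assembling that correction prompt from a glossary of problem terms might look like:

```python
# Illustrative only: build the transcript-correction prompt from a
# glossary of frequently mis-transcribed terms. All names hypothetical.

INSTRUCTIONS = (
    "Attached is a transcript of a conversation created from an audio "
    "file. The model doing the transcription has trouble with company "
    "names, industry terms, and acronyms, and will have made errors "
    "with those. Below is a list of terms that may have been spoken. "
    "Please review the transcription and output a corrected version, "
    "along with a list of every correction you made: the original "
    "word, what you updated it to, and where it is in the document."
)

def build_correction_prompt(glossary, transcript):
    """glossary maps each term to a definition or example sentence."""
    terms = "\n".join(f"- {t}: {ctx}" for t, ctx in glossary.items())
    return (f"{INSTRUCTIONS}\n\nTerms:\n{terms}"
            f"\n\nTranscript:\n{transcript}")
```

The glossary entries are where the example sentences for industry jargon would go; for company names a one-line definition is usually enough.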


Which specific model do you use?


I buy and operate small e-commerce brands, and since GPT-3.5, I've been attempting to vibe code software to help me manage the business. With GPT-5 Codex I have finally managed to create something legitimately useful for myself. The code may be (almost certainly is) not of great quality, but for the purposes of an internal application that only I use, it's doing the job just fine.


I honestly think we had a pretty good middle ground with people having to go to Vegas/Reno/AC to gamble. It can be fun, but you have to go out of your way to do it. If you have a fun Vegas weekend and blow a bunch of money once in a while, that seems pretty okay relative to being able to constantly bet on anything from your phone.


> Let's start with the obvious- in all forms of gambling the gamblers make a net loss. The games are hosted by very sophisticated companies, that have better mathematicians, and make money.

> $x is pumped into the system by the punters, $y is extracted, $z is returned. The 'house' is the only winner.

This is incorrect, specifically with regard to sports betting. Sports betting and poker are both winnable games. Most people don't win in the long run, but unlike in table games (Blackjack, etc.) there are absolutely winners that are not the house.

To be clear, that doesn't mean they're good or should be allowed. I used to be a poker player and enjoy putting some bets on football now, but I've come around to the general idea that sports betting in particular is a net negative for society. Still, if you're going to make an argument against it, it's always going to be a better argument if it isn't built on a basis that's just factually untrue.


Net loss means that if you add all the players together, the losses exceed the winnings.

Yes, some individuals win (at least occasionally). But as a group it's always a net loss (because the house takes a cut).
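To put toy numbers on the aggregate point (all figures invented for illustration):

```python
# With a 5% house cut, the bettors as a group always end down in
# total, even though any individual bettor can finish ahead.

total_wagered = 100_000                     # staked by all bettors combined
house_cut = 0.05                            # vig/rake kept by the house
returned = total_wagered * (1 - house_cut)  # 95,000 paid back out

group_net = returned - total_wagered        # -5,000: a net loss overall
```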


The statement "in all forms of gambling the gamblers make a net loss" is ambiguous as to whether it's the gamblers as a whole or each of the gamblers making a net loss, and while I grant that it reads more like the way you're describing, the later context pushes the intended meaning in the other direction.

He says "The 'house' is the only winner," which is, as a point of fact, untrue - there are also individual gamblers who are winners. Saying this implies he thinks that each of the gamblers is a net loser.

