
ChatGPT says "Boston is a city located in Massachusetts, United States. It is not possible for Boston to be a distance away from itself. If you are asking about the distance from one part of the city to another, it will depend on the specific locations within the city. The distance within the city can be measured in miles or kilometers and can be determined using a map or a GPS device."

So it seems like this particular implementation isn't very good, but the general chat-as-search approach might fare better, though there are definitely many ways to get ChatGPT to say nonsense.



I agree. Chat-as-search is very promising, but I think it will cause big issues because people will not check the veracity of its statements. Right now, you actually have to go to a page to find the info and make up your mind on whether you trust the source. (Say, I generally trust the CDC.) Chat simply linking to sources will not solve this problem because we are lazy. But, since we are lazy, I think it will take over the current model.


Disagreed. If you've ever watched how non-technical people use Google, you'll find that ChatGPT already does a much better job. You're not seeing as much SEO blogspam as non-technical people do because you unconsciously avoid using words that are commonly found in spam.


What part do you disagree with?

I am also wondering:

1) how resilient chat-as-search will be to SEO spam in the future, and 2) whether people will be less likely to publish content if chat is supposed to spit out its summaries and, potentially, reduce traffic to their sites.


Why would SEO spam matter if it's not linking to any sources and is seemingly generating original content? There's no benefit to trying to optimize content for a generalized AI, because there's no way to know how it will be leveraged, and the provenance of the data won't be tied to the content creator anyway.


Surely if this style of search becomes very popular there will be people dedicating tons of effort to gaming the system to ensure the chat bot is primed to answer questions like “what’s the best kind of mattress for a side sleeper?” in a way that helps them sell more mattresses, regardless of whether the bot actually links anywhere.



I'm disagreeing with the idea that this will cause big issues. It's a known problem and it's hard to imagine SEO spam getting any worse than it already is.

> whether people will be less likely to publish content if chat is supposed to spit out its summaries and, potentially, reduce traffic to their sites.

Good. Today, kind humans summarize long winded articles in the comment section and often save me a click. Sometimes I'm that human.


Do you have ideas for how we can make the citations more likely to be used?

It does seem like the biggest failure cases of chat are happening when we have not yet incorporated one of our apps (like weather or directions).

Richard from YOU here.


It would be great if, when clicking through the link, the relevant text could be highlighted in the webpage, similar to the featured snippets in Google search. E.g. when searching "What were the causes of the Swiss civil war?" Google returns:

https://en.wikipedia.org/wiki/Old_Swiss_Confederacy#:~:text=....
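
For what it's worth, that highlight is just the URL text-fragment syntax (#:~:text=), so links like this are cheap to generate. A minimal sketch in Python; the snippet string is a hypothetical placeholder and simply needs to match exact text on the target page:

    from urllib.parse import quote

    def text_fragment_url(page_url: str, snippet: str) -> str:
        # Build a link that asks supporting browsers to scroll to and
        # highlight `snippet` on the target page (the #:~:text= syntax).
        return f"{page_url}#:~:text={quote(snippet)}"

    # Hypothetical snippet; any exact run of text from the page works.
    print(text_fragment_url(
        "https://en.wikipedia.org/wiki/Old_Swiss_Confederacy",
        "some exact text from the page",
    ))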


Great idea! We'll look into that.


>Chat-as-search is very promising but I think it will cause big issues because people will not check the veracity of its statements.

What, you think they're checking the veracity of sources they find on Google? Get a grip...


I guess it depends on the person, doesn't it?


At this point I believe ChatGPT is more sentient than some of the people I work with.


Scored 83 IQ, 50th percentile on the SAT, and shows sophomore-level skill for math degree coursework.


Another way to frame it: it is smarter than 1 in 7 adults, and the median high school student. Not bad for a robot. How long until they can cook or do household tasks as well as a twelve-year-old, unattended?


I've been experimenting with having it make recipes for me. It provided me a cake recipe; however, I didn't have the correct size cake tin, so I asked it to change the recipe to use the tin I had. It was able to correctly adjust the ingredients.
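
The arithmetic behind that adjustment is simple enough to sanity-check: for round tins of similar depth, quantities scale with the ratio of the tin areas. A minimal sketch (the tin sizes and quantities here are hypothetical):

    def scale_recipe(ingredients, old_diameter_cm, new_diameter_cm):
        # For round tins of similar depth, batter volume scales with
        # the tin's area, i.e. the square of the diameter ratio.
        factor = (new_diameter_cm / old_diameter_cm) ** 2
        return {name: round(qty * factor) for name, qty in ingredients.items()}

    # Hypothetical example: adapting a 23 cm recipe to a 20 cm tin.
    print(scale_recipe({"flour_g": 250, "sugar_g": 200, "butter_g": 125}, 23, 20))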

I didn't end up following the recipe because (just like the compilation problems with its code suggestions) I wasn't willing to spend a few hours only to have it turn out badly. I would love to see a YouTube channel of someone following AI recipes; I wonder if it could come up with some unique foods.

One thing I do enjoy using it for is providing healthy meal ideas.


That would require a body: robotic advancements, not "just" AI.


Any speculation on why it did better on the SAT than on the IQ test? Somewhat more knowledge-based vs. reasoning-based, I guess?


I bet there are a lot of actual SAT question/answer pairs in its training data.


source?



My sole problem with ChatGPT is that I can’t talk to people about it in French, because “GPT” is a swearword.


For non-francophones, GPT sounds like "J'ai pété" (I farted). Now that I think about it, the term seems quite appropriate given the occasional brain farts produced by ChatGPT.


It might be the Anglo-Saxon revenge for the programming language / theorem prover Coq.


It is mind-blowing to me that ChatGPT can come up with this answer. It seems inconceivable that this exact question has been asked before on any corner of the internet.


The correct answer is a question: "Which Boston from which Boston?"

Both Chat and You illustrate garbage in, garbage out: you throw in something nonsensical or without context, and it doesn't know what you mean, but it also doesn't know that it doesn't know. Both assume you are asking for the distance from a city to itself.


It handled that for me (ChatGPT):

>How many miles away is Boston from Boston?

>Boston is located in Massachusetts, United States, and it is approximately 0 miles away from itself. It is not possible for a city or location to be a certain distance away from itself, as it is the point of reference for measuring distance. If you are asking about the distance between two different locations both named Boston, it would be necessary to specify which two Bostons you are referring to.


...

There are tons of places named Boston, yet the model assumes you want to know the distance from the US city to itself. This seems pretty stupid, but I guess you somehow find the question and the answer meaningful. Care to explain your reasoning?

(Unless you are a language model too, in which case it all makes sense.)

Literally the only instance where a question about distance makes sense is in the context of two different places.


I'm a human.

The model needs to reply with something, and here it tells the user the assumptions it made. The only alternative I can think of would be to arbitrarily choose two Bostons, but my guess is that people making this request are more likely to intend the same location in both.

One use for ChatGPT is to give it a prompt with lists of data. It is convenient if ChatGPT handles the trivial edge cases correctly so that the user doesn't need to do preprocessing (see the sketch after the example below).

"Can you give the distance in miles between a list of pairs of cities? Want the output formatted like this:

city a, city b, 10

city b, city c, 100

...

The pairs of cities are:

New York City, Los Angeles

Chicago, Los Angeles

Chicago, Paris

Chicago, Boston

Boston, Boston

Boston, Los Angeles"

Response from ChatGPT:

"Sure! Here is the list of pairs of cities and their distances in miles:

New York City, Los Angeles, 2,798

Chicago, Los Angeles, 2,054

Chicago, Paris, 4,973

Chicago, Boston, 1,099

Boston, Boston, 0

Boston, Los Angeles, 2,905

Please note that these are approximate distances and may vary slightly due to factors such as route taken and Earth's curvature."

Me again: if it asked you to clarify exactly which city you are referring to for each pair, without making assumptions, making requests would be pretty tedious. Sometimes you are processing data that starts out ambiguous to you anyway.
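
Here is a sketch of what that trivial-case handling looks like if you do the preprocessing yourself: great-circle distances via the haversine formula, with a hypothetical coordinate table standing in for a real geocoder (which would also have to decide which Boston you mean). Note that these straight-line numbers won't match ChatGPT's figures above, which look closer to driving distances:

    from math import radians, sin, cos, asin, sqrt

    # Hypothetical coordinate table standing in for a real geocoder.
    COORDS = {
        "New York City": (40.7128, -74.0060),
        "Los Angeles": (34.0522, -118.2437),
        "Chicago": (41.8781, -87.6298),
        "Paris": (48.8566, 2.3522),
        "Boston": (42.3601, -71.0589),
    }

    def miles_between(a, b):
        # Great-circle distance via the haversine formula;
        # the trivial a == b case falls out naturally as 0.
        lat1, lon1 = map(radians, COORDS[a])
        lat2, lon2 = map(radians, COORDS[b])
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 3958.8 * asin(sqrt(h))  # mean Earth radius in miles

    for a, b in [("New York City", "Los Angeles"),
                 ("Chicago", "Paris"), ("Boston", "Boston")]:
        print(f"{a}, {b}, {miles_between(a, b):,.0f}")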


The reason I think you're an LLM is that you somehow believe that knowing the distance between a place and itself is meaningful. That's still the case.

> The model needs to reply with something and here it tells the user the assumptions it made.

No, it didn't "tell" you its assumption (where did it output "I assume you mean you want to know the distance from the US city to itself"?). It lacks the awareness even to know that an assumption was made, let alone that it's nonsensical. But it's clear from its responses what the assumption was: that you want to know the distance between a city and itself.

In your second prompt you provided the context (a list of well-known major cities). With context established, as I said in the first place, it's no longer pure garbage in.

Honestly, you don't need to copy-paste more of that stuff; it does nothing to counter my argument that if you give it garbage, it will respond with garbage.


>No, it didn't "tell" its assumption (where did it output "I assume you mean you want to know the distance from US city to itself"?)

Just reread what it told me: it said it is zero miles away from itself, that it was a nonsensical query, and that one should be more specific if the cities being compared are different.

> But it's clear from its responses what the assumption was: that you want to know the distance between a city and itself.

If you ask for the distance between just Boston and London, it assumes Boston, MA and London, UK, just like Google does. Neither asks if I want the distance between London, UK and Boston, England. Same thing for Portland and Boston, even though Portland, Maine is much closer to Boston, MA than Portland, Oregon is. I think both ChatGPT and Google just assume the largest cities.

Not sure if you got my point about the list. In that situation I'd find the zero miles useful, because otherwise I'd need to remember to exclude some entries from the list and then add those records back with the trivial transformation afterward.

Another way to think of it: if I wrote you a function that checks whether a number n is divisible by 17 and ran it over a CSV, wouldn't you want it to handle the trivial case of "Is 17 divisible by 17?" rather than excluding the trivial examples and handling them manually? Trivial-case handling is important for LLMs. It is also useful if you're trying to test their logical consistency.
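
A toy version of what I mean, with a made-up one-column CSV:

    import csv
    import io

    def divisible_by_17(n):
        # True when n is divisible by 17 -- including the trivial n == 17.
        return n % 17 == 0

    # Hypothetical one-column CSV; 17 itself is just another row,
    # so no rows need to be excluded and re-added by hand.
    data = io.StringIO("n\n17\n34\n40\n170\n")
    for row in csv.DictReader(data):
        n = int(row["n"])
        print(n, divisible_by_17(n))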

>The reason I think you're a LLM is that you somehow believe that knowing a distance between a place and itself is meaningful. It's still the case.

Based on my posting history (13 years) and username (my real name), chances are pretty good I'm human. I mentioned my reasons; sorry I couldn't communicate them clearly enough. Very likely I will disengage from this thread now. Also, I don't appreciate being called a bot; I think people doing that makes Hacker News worse.


> Just reread what it told me: it said it is zero miles away from itself, that it was a nonsensical query, and that one should be more specific if the cities being compared are different.

It made the least sane assumption. If it were aware of that, it would have used a more reasonable one (e.g. two different cities).



