Hacker News

Strangely, this sounds like a great use case for LLMs: grinding through entire datasets to surface prior art.

Edit: Found this with a search, so it can be done: https://xlscout.ai/novelty-checker-llm/

(also, thanks Cloudflare! Keep on grinding patent trolls!)



After I quit the USPTO, I tried using ChatGPT 3.5 for some basic patent examining activity out of curiosity, and I can say that it did an absolutely horrendous job. This wasn't prior art search, just analyzing the text to do a rejection based on the text alone (35 USC 112).

And the AI search technologies I used tended not to be particularly good. They typically find "background" documents that are related but can't be used in a rejection.

I don't anticipate LLMs being able to examine patents well in general. Many times a detailed understanding of things not in the text is necessary to examine. For the technologies I examined, search was often basically flipping through drawings. I'd love to see an AI search technology focus specifically on patent drawings, which can be quite difficult. Often I'd have to understand the topology of a circuit (electrical or flow) and find a specific combination of elements. Of course, each drawing could be laid out differently but be topologically equivalent... this surely can be handled with computers in some way, but it's going to require a big effort right now.
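The "laid out differently but topologically equivalent" problem is graph isomorphism on netlists rather than image matching. A minimal sketch of the idea, with made-up component and net names and a brute-force check that only works for tiny graphs (real patent-scale search would need proper canonical forms or a VF2-style matcher):

```python
# Sketch: two circuit drawings are "the same" if their netlists are
# isomorphic as labeled graphs, regardless of how they were drawn.
# All names (R1, Ca, IN, n2, ...) are hypothetical examples.
from itertools import permutations

def is_isomorphic(adj_a, adj_b):
    """Brute-force isomorphism check for small node-labeled graphs.

    adj_a / adj_b map node -> (label, set of neighbor nodes).
    Exponential in node count; illustration only.
    """
    nodes_a, nodes_b = sorted(adj_a), sorted(adj_b)
    if len(nodes_a) != len(nodes_b):
        return False
    for perm in permutations(nodes_b):
        mapping = dict(zip(nodes_a, perm))
        if all(
            adj_a[a][0] == adj_b[mapping[a]][0]  # same component type
            and {mapping[n] for n in adj_a[a][1]} == adj_b[mapping[a]][1]
            for a in nodes_a
        ):
            return True
    return False

# Two differently laid-out drawings of the same RC low-pass filter:
drawing1 = {
    "R1": ("resistor", {"IN", "OUT"}),
    "C1": ("capacitor", {"OUT", "GND"}),
    "IN": ("net", {"R1"}),
    "OUT": ("net", {"R1", "C1"}),
    "GND": ("net", {"C1"}),
}
drawing2 = {
    "Rb": ("resistor", {"n1", "n2"}),
    "Ca": ("capacitor", {"n2", "n3"}),
    "n1": ("net", {"Rb"}),
    "n2": ("net", {"Rb", "Ca"}),
    "n3": ("net", {"Ca"}),
}
print(is_isomorphic(drawing1, drawing2))  # True: same topology
```

The hard part the sketch skips is extracting a netlist from a drawing in the first place, which is where the real research effort would go.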


The patent office is also horrendous at evaluating novelty, so I suppose ChatGPT has already reached human level performance on this task!


Similar to the way in which software developers are terrible at delivering quality software on-time and on-budget, so I suppose ChatGPT has already reached human level performance on this task!


ChatGPT is a mirror where we don't look too good ...


My point was more that just because humans are terrible at something doesn't mean ChatGPT can't be much worse.


As others have said, ChatGPT is great for writing fluff content that has no right or wrong answer. But it is still weak when a correct answer is needed, like in legal analysis. It can write a great 10 page summary of the history of the use of strawberries. But when it comes to telling how many r's are in the word strawberry, it's not very trustworthy.
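The strawberry check is the kind of question that is trivial for ordinary deterministic code, which is the point of the contrast:

```python
# Counting letters is exact string processing, not pattern completion.
word = "strawberry"
print(word.count("r"))  # 3
```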


I wonder if most people realize that your observation is a fundamental problem with LLMs. LLMs simply have no means to evaluate factuality. Keep asking ChatGPT "Are you sure?" and it will break eventually.

The inability to answer basic facts should be a dealbreaker.


Then you need to go over each item with just as much care as you would any probably-irrelevant item pulled from a keyword search, because the LLM is incapable of evaluating it in any way other than correlation.

Also, you don't necessarily have a real dataset to begin with: prior art doesn't need to be patented, it just needs to be published/public/invented sufficiently before the patent. Searching the existing patent database is insufficient.


> Also, you don't necessarily have a real dataset to begin with: prior art doesn't need to be patented, it just needs to be published/public/invented sufficiently before the patent. Searching the existing patent database is insufficient.

I would caution against making assumptions about dataset access and size. I agree that the effectiveness of the effort I mention would be a function not only of gen-AI engineering but also of dataset size and scope.


Going over a better curated list is a significant upgrade and time saver.

Let’s not pretend that “correlation” isn’t very powerful.


There are, in fact, startups working on using AI for legal matters. I know one of the principals in one personally.

I don't know if they're tackling this issue, though.



