Hacker News | grantcarthew's comments

I posted a Show HN recently and it didn't get a single glance of interest. I spent months on the project I posted, but no one was interested. I think it's dead now anyway.

Well, were they interesting?

Yes, Atlassian's CLI support is shocking.

This is for Confluence. Markdown all the way:

https://github.com/grantcarthew/acon

Jira, same, markdown all the way:

https://github.com/grantcarthew/ajira

Other agent tools:

https://github.com/grantcarthew/homebrew-tap


There are a few of these around. I originally had a bash script (https://github.com/grantcarthew/scripts/blob/main/lib/archiv...); this one is a little more developed.


Built for AI agents to consume web content efficiently.


This is a pattern I'm using with AI agents to increase context population speed for commonly used documentation.


I've used it to read authenticated pages with Chromium. It can run Chromium headless and convert the HTML to markdown, but I generally open Chromium, authenticate to the system, then let the CLI agent interact with the page.

https://github.com/grantcarthew/scripts/blob/main/get-webpag...
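The HTML-to-markdown step can be sketched as a toy converter. This is a hypothetical helper, not the linked script; a real pipeline would use a proper converter (pandoc, turndown, etc.) rather than regexes, and this only handles a few common tags.

```javascript
// Minimal HTML -> markdown sketch. Handles headings, links, list items,
// and paragraphs; strips everything else.
function htmlToMarkdown(html) {
  return html
    .replace(/<h1[^>]*>(.*?)<\/h1>/gis, "# $1\n\n")
    .replace(/<h2[^>]*>(.*?)<\/h2>/gis, "## $1\n\n")
    .replace(/<a[^>]*href="([^"]*)"[^>]*>(.*?)<\/a>/gis, "[$2]($1)")
    .replace(/<li[^>]*>(.*?)<\/li>/gis, "- $1\n")
    .replace(/<p[^>]*>(.*?)<\/p>/gis, "$1\n\n")
    .replace(/<[^>]+>/g, "")    // drop any remaining tags
    .replace(/\n{3,}/g, "\n\n") // collapse blank runs
    .trim();
}

console.log(htmlToMarkdown('<h1>Docs</h1><p>See <a href="https://example.com">here</a>.</p>'));
// -> "# Docs\n\nSee [here](https://example.com)."
```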



I put an unsecured open FTP server on the internet about 20 years ago, just to see what would happen.

Within half a day some pirate had "marked" their claim to my FTP server, then started uploading a game. I deleted everything and left it open again.

It was a long time ago, so I don't remember all the details, but the pirates would create directories inside directories, upload files, then tag them with their mark. All of this was scripted, I gather.

After a while, I set up a file system watcher that deleted subdirectories. This gave me an FTP server I could use for anything. I shut it down a few months later.

Interesting though.


I recently automated this process, though in a very different way.

- CLI run for every job you want to apply for (this is important)

- JavaScript (Deno) with Puppeteer to run the JS for the page

- Create a directory for all the artefacts <yyyy-mm-dd-ms-pagetitle>

- Save the webpage link (artefact)

- Take a screenshot of the page (artefact)

- Extract the HTML (artefact)

- Convert HTML to Markdown with a CLI (artefact)

- Send Markdown to the Grok API to extract just the Job Description as Markdown (artefact)

- Send Job Description and Autobiography to Grok API to generate a Resume (artefact)

- Send Job Description and Autobiography to Grok API to generate a Cover Letter (artefact)

- Use pandoc to convert the Markdown Resume and Cover Letter into Open Document Format (LibreOffice) (artefacts)
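The artefact directory naming above can be sketched as a small function. Assumptions (my own, not confirmed by the comment): "ms" is milliseconds since midnight UTC, and the page title is slugified; the real CLI may differ.

```javascript
// Build an artefact directory name of the form <yyyy-mm-dd-ms-pagetitle>.
function artefactDirName(pageTitle, now = new Date()) {
  const date = now.toISOString().slice(0, 10); // yyyy-mm-dd (UTC)
  const ms =
    now.getTime() -
    Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate());
  const slug = pageTitle
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // non-alphanumeric runs become hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `${date}-${ms}-${slug}`;
}

console.log(artefactDirName("Senior DevOps Engineer @ Acme"));
```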

The important differences here are:

- You need to find the job you are interested in. Why automate this?

- Run the CLI `job-hunter https://job.site/jobid` (50sec runtime)

- Open the ODF documents, review, edit, save (human involvement is important)

- Use a bash script running LibreOffice CLI to convert ODF documents to PDF

- Review the PDFs

- Manually click the apply button on the site and upload the documents

I also keep a spreadsheet with the details for each job I apply for so I can track interactions; think CRM for job applications and recruiters. This could be automated; however, I got a job, so I've lost interest.

Points of interest:

- Markdown is a fantastic format in general, but for LLMs as prompts and documents, it's awesome.

- If you just curl the page HTML, you don't get the recruiters' email addresses in most cases, hence the use of Puppeteer.

- Having all the artefacts saved on disk is important for review before and after the application, including adding notes.

- By using an extremely detailed Autobiography, the LLM can add anything and everything about you to the documents.

- Use Grok and support Elon. OpenAI can stick their "Open" where it fits.

- I don't have to format the generated ODF documents; they look great as-is.

I can apply for around 10 to 20 jobs in a day if I try hard. Most of the time it's around 5 because I'm doing other things. They are only jobs I'm interested in, though, and I can customise the documents. Also, if I'm applying for a job that involves AI, I add a note at the bottom stating the documents were generated by an LLM and customised.

There's probably more interesting points, but you get the idea.

My TODO list includes a CLI switch to only open the page in a Firefox profile so I can authenticate to the page. This removes the stupid "automate auth on every job site" issue. Simply authenticate once and keep the cookie in the hunter profile.

The repo is private for the time being, but I could make it public.

Edit: formatting.


I disagree; the landing page is just fantastic.

I like the product as well; however, I'd need to use it for a week before commenting.


I think it would help if there were more information about it, along with demos, to clearly communicate what it's about. At the moment I'm still confused because I see Ghostty has gotten so much attention, and I'm not clear why it's special.

I am sold thanks to the author's authenticity, but if I had not seen this, I wouldn't know what I'm looking at. So I think it would be very helpful if there were more info on the landing page.


Performance, of course, but being native is a big part of Ghostty's appeal; there's an article by a Neovim dev that expands on the native part a bit - https://gpanders.com/blog/ghostty-is-native-so-what/


I have to enable JavaScript to make it load. It seems the animation on the page requires JavaScript. So, I agree with @pzo that a GIF would be better in this case.

