Hacker News | tcoff91's comments

I have that with URQL+gql.tada.

What else does relay give me that URQL does not?


I may be wrong on the details, but with URQL:

- you don't have a normalized cache. You may not want one! But if you find yourself annoyed that modifying one entity in one location doesn't automatically cause another view into that same entity to update, it's due to a lack of a normalized cache. And this is a more frequent problem than folks admit. You might go from a detail view to an edit view, modify a few things, then press the back button. You can't reuse cached data without a normalized cache, or without custom logic to keep these items in sync. At scale, it doesn't work.

- Since you don't have a normalized cache, you presumably just refetch instead of updating items in the cache. So you will presumably re-render an entire page in response to changes. Relay will just re-render components whose data has actually changed. In https://quoraengineering.quora.com/Choosing-Quora-s-GraphQL-..., the engineer at Quora points out that as one paginates, one can get hundreds of components on the screen. And each pagination slows the performance of the page, if you're re-rendering the entire page from root.

- Fragments are great. You really want data masking, and not just at the type level. If you stop selecting some data in one component, it may affect the behavior of other components, if they do something like JSON.stringify or Object.keys on the result. But admittedly, type-level data masking + colocation is substantially better than nothing.

- Relay will also generate queries for you. For example, pagination queries, or refetch queries (where you refetch part of a tree with different variables.)
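
To make the normalized-cache point concrete, here's a minimal sketch (hypothetical store, not URQL's or Relay's actual API) of why two views of the same entity stay in sync when records are stored once by key:

```typescript
// Minimal normalized cache: every entity is stored exactly once, keyed by
// `${__typename}:${id}`; views read the canonical record instead of copies.
type Entity = { __typename: string; id: string; [field: string]: unknown };

class NormalizedCache {
  private records = new Map<string, Entity>();

  private keyOf(e: Entity): string {
    return `${e.__typename}:${e.id}`;
  }

  write(entity: Entity): void {
    const key = this.keyOf(entity);
    // Merge into the single canonical record for this entity.
    this.records.set(key, { ...this.records.get(key), ...entity });
  }

  read(typename: string, id: string): Entity | undefined {
    return this.records.get(`${typename}:${id}`);
  }
}

// A "detail view" and an "edit view" both read User:1 from the cache.
const cache = new NormalizedCache();
cache.write({ __typename: 'User', id: '1', name: 'Ada', bio: 'v1' });

// The edit view writes an update...
cache.write({ __typename: 'User', id: '1', bio: 'v2' });

// ...and the detail view sees it on its next read, with no refetch and no
// custom sync logic: there is only one copy of User:1.
const user = cache.read('User', '1');
console.log(user?.name, user?.bio); // Ada v2
```

Without the normalization (a plain per-query document cache), the detail view and the edit view each hold their own snapshot, which is exactly the back-button staleness described above.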

There are lots of great reasons to adopt Relay!

And if you don't like the complexity of Relay, check out isograph (https://isograph.dev), which (hopefully) has better DevEx and a much lower barrier to entry.

https://www.youtube.com/watch?v=lhVGdErZuN4 goes into more detail about the advantages of Relay
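
The Object.keys hazard above is easy to reproduce. A hypothetical sketch (component names and fields are made up) of how removing a field from one component's selection silently changes another component's behavior when data isn't masked at runtime:

```typescript
// Without runtime data masking, every component receives the full query
// result, so code can accidentally depend on fields that some *other*
// component selected.
type QueryResult = Record<string, unknown>;

// Hypothetical component: renders one row per field in the result. It
// doesn't "own" any fields; it just reflects over whatever arrived.
function summaryRowCount(data: QueryResult): number {
  return Object.keys(data).length;
}

// Today a sibling component selects { name, email, avatarUrl }:
const before = summaryRowCount({ name: 'Ada', email: 'a@x.io', avatarUrl: '/a.png' });
console.log(before); // 3

// Tomorrow someone stops selecting avatarUrl in that sibling component,
// and this component's output changes even though its code didn't:
const afterEdit = summaryRowCount({ name: 'Ada', email: 'a@x.io' });
console.log(afterEdit); // 2
```

Runtime data masking (as in Relay) hands each component only the fields its own fragment declared, so an edit to one component's selection can't change what another component observes.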


URQL has normalized caching and it works great! You opt into it by adopting their graphcache exchange.

https://nearform.com/open-source/urql/docs/graphcache/

It also works great with fragments.

And its exchange system is super powerful and flexible. I’ve even seen an offline-first sync engine built as a custom URQL exchange in a react native app. The frontend could be written as if the app always is online but it would handle offline capabilities within the exchange.
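
For reference, opting in is roughly this: a configuration sketch assuming URQL v4 and the @urql/exchange-graphcache package (check the docs above for the current API):

```typescript
import { Client, fetchExchange } from 'urql';
import { cacheExchange } from '@urql/exchange-graphcache';

// graphcache's cacheExchange replaces URQL's default document cache
// with a normalized one; everything else about the client stays the same.
const client = new Client({
  url: 'https://example.com/graphql', // placeholder endpoint
  exchanges: [
    cacheExchange({
      // Optional: custom keys, resolvers, and updaters go here.
    }),
    fetchExchange,
  ],
});
```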


Try gql.tada, it's much better than GraphQL Codegen


I did. I really wanted to like it. I think it broke due to something I was doing with fragments, or with splitting up code in my monorepo. I may give it another shot; from first principles it is a better approach.


URQL and gql.tada are great client side tooling innovations.


Not getting measles, polio, etc… seems like a pretty big benefit to the individual.


But with jj there are better workflows that aren’t really doable with git.


Last time I saw this claimed (maybe from steve's tutorial?) it was just autosquash. Do you have another example?


https://ofcr.se/jujutsu-merge-workflow

With `jjui` this strategy takes only a few keystrokes to do operations like adding/removing parents from merge commits.

It's so nice to have like 4 parallel PRs in flight and then rebase all of them and all the other experimental branches you have on top onto main in 1 command.

Also, I cannot stress enough how much of a game changer first-class conflicts are. Seriously, you do NOT understand how much better it is to not have to resolve conflicts immediately when rebasing, and to be able to come back and resolve them whenever you want. It cannot be overstated how much better this is than git.

Also, anonymous branches are SOOOO much better than git stashes.


> Also, anonymous branches are SOOOO much better than git stashes.

You can do anonymous branches in Git as well. I use both for different use cases.


The UX around anonymous branches in git is not nearly as good as jj though.

Also git has no equivalent to the operation log. `jj undo` and `jj op restore` are so sweet.


I can't comment on the UX of jj, but with git you literally just specify the commit; it doesn't feel tedious to me.

> Also git has no equivalent to the operation log.

For easy cases it's just `git reset @{1}`, but sure, the oplog is a cool thing. I think it will be added to git eventually; it can't be that hard.


You can specify a commit, yes, but how do you remember your set of unnamed commits? Once HEAD no longer points to a commit, it will not show up in `git log`.

I agree that Git could gain an operation log. I haven't thought much about it but it feels like it could be done in a backwards-compatible way. It sounds like a ton of work, though, especially if it's going to be a transition from having the current ref storage be the source of truth to making the operation log the source of truth.


The last one is always available via `git checkout -`, and if you want to go further back you can do `git checkout @{4}`, etc. It will also show up in `git log --reflog`. I honestly don't see the problem with naming things. Typing a chosen name is just so much more convenient than looking up the commit hash, even when you only need to type the unique prefix. When I don't want to think of a name yet, I just do `git tag a, b, c, ...`

I also tend to have the builtin GUI log equivalent (gitk) open. It has the behaviour that no commit vanishes on refresh, even when it isn't on a branch anymore; to stop showing a commit you need to do a hard reload. It also automatically puts the currently selected commit into the clipboard selection, so all you need to do is press Insert in the terminal.

> It sounds like a ton of work, though, especially if it's going to be a transition from having the current ref storage be the source of truth to making the operation log the source of truth.

I don't think it needs to be implemented like that. The only thing you need to do is record the list of commands and write a resolver that outputs the inverse of any given command.


Yeah but in jj every time you run ‘jj log’ you see all your anonymous branches and you can rebase all of them at once onto main in 1 command.

When I’m exploring a problem I end up with a complex tree of many anonymous branches as I try different solutions, and they all show up in my jj log, and it’s so easy to refer to them by stable change IDs. Often I’ll like part of a solution but not another part, so I split the change into 2 commits, branch off the part I like, and try something else for the other part. This way of working is not nearly as frictionless with git. A lot of the time I don’t even bother with editor undo, unless it’s just a small amount of undoing, because I have this workflow.

Git is to jj like asm is to C: you can do everything with git that you can do with jj, but it’s all a lot easier in jj.


I guess I never had complex trees from such an action, just a bunch of parallel branches, but I would say splitting and picking commits from different branches is not exactly hard with git either. You can see them in git too, but they won't have change IDs, of course.


I know how to do everything in git that I can do in jj, but the thing is I would never bother with most of these workflows in git, because it’s way more of a pain in the ass than with jj. I work with version control in a totally different way now because of how easy jj makes it to edit the graph.

Within a day of switching I was fully up to speed with jj and I never see myself going back. I use colocated repos so I can still use git tools in my editor for blaming and viewing file history.

Sure, even rebasing a complex tree in git can be done by creating an octopus merge of all the leaf nodes and rebasing with `--rebase-merges`, but that’s such a pain.


Isn't `jj undo` the equivalent of git reflog (+ reset/checkout)?


Mostly, yes. It also covers changes to the working copy (because jj automatically snapshots changes in the working copy). It's also much easier to use, especially when many refs were updated together. But, to be fair, it's kind of hard to update many refs at once with Git in the first place (there's `git rebase --update-refs` but not much else?), so undoing multiple ref-updates is not as relevant there.


There is a VS Code jj GUI extension


Perhaps something like TLA+ or PlusCal specs could be the specs in terms of 'specs are the new code'.


I’ve been looking into this idea for a couple of weeks, with some success generating Alloy specs as an intermediate layer between high-level architecture docs and product code.


Any automation/agents/etc around that which you could share, or just a pretty manual process? I'm working on something similar.

After hitting the inevitable problems with LLMs trying to read/write more obscure targets like alloy, I've been trying to decide whether it's better to a) create a python-wrapper for the obscure language, b) build the MCP tool-suite for validate/analyze/run, or c) go all the way towards custom models, fine-tuning, synthetic data and all that.


I’m purely in experiment stage, no automation, no agents; just ‘these are the design docs, this is the existing code base, let’s get a simple alloy model started’ and interactively building from there. I was concerned about the same things you mention, but starting very small with a tight development loop worked well with GPT 5.1 high. I wouldn’t try to zero shot the whole model unsupervised… yet.

The first step before a python/TS wrapper would be to put a single file manual into the context as is customary for non-primary targets, but I didn’t even reach the stage where this is necessary ;)


I think getting a model to do this without hurting alignment significantly will be very difficult.


At the time that there's something as good as sonnet 4.5 available locally, the frontier models in datacenters may be far better.

People are always going to want the best models.


Compared to 2025 GitHub, yeah, I do think most self-hosted CI systems would be more available. GitHub goes down weekly lately.


Aren't they halting all other work to migrate to Azure? That does not sound like an easy thing to do, and it feels like it could easily cause unexpected problems.


I recall the Hotmail acquisition and the failed attempts to migrate the service to Windows servers.


Yes, this is not the first time GitHub has tried to migrate to Azure. It's like the fourth time or something.

