
I really think library hunting is a habit you inherit from other languages, though database drivers are something you do need to go looking for. The standard library has an adequate HTTP router (though I prefer grpc-gateway, as it autogenerates docs, types, etc.) and logger (slog, but honestly plain log is fine).

For your database driver, just use pgx. For migrations, tern is fine. For the tiniest bit of sugar around scanning database results into structs, use sqlx instead of database/sql.

I wouldn't recommend using a testing framework in Go: https://go.dev/wiki/TestComments#assert-libraries

Here's how I do dependency injection:

   func main() {
       foo := &Foo{
           Parameter: goesHere,
       }
       bar := &Bar{
           SomethingItNeeds: canJustBeTypedIn,
       }
       app := &App{
           Foo: foo,
           Bar: bar,
       }
       app.ListenAndServe()
   }
If you need more complexity, you can add more complexity. I like "zap" over "slog" for logging. I am interested in some of the DI frameworks (dig), but it's never been a clear win to me over a little bit of hand-rolled complexity like the above.

A lot of people want some sort of mocking framework. I just do this:

    - func foo(x SomethingConcrete) {
    -     x.Whatever()
    - }
    + type Whateverer interface{ Whatever() }
    + func foo(x Whateverer) {
    +     x.Whatever()
    + }
Then in the tests:

    type testWhateverer struct {
        n int
    }
   var _ Whateverer = (*testWhateverer)(nil)
   func (w *testWhateverer) Whatever() { w.n++ }
   func TestFoo(t *testing.T) {
       x := &testWhateverer{}
       foo(x)
       if got, want := x.n, 1; got != want {
           t.Errorf("expected Whatever to have been called: invocation count:\n  got: %v\n want: %v", got, want)
       }
   }
It's easy. I typed it in an HN comment in like 30 seconds. Whether or not a test that counts how many times you called Whatever is useful is up to you, but if you need it, you need it, and it's easy to do.


I've been writing Golang for years now, and I heavily endorse everything written here.

Only exception is you should use my migration library [0] instead of tern — you don't need down migrations, and you can stop worrying about migration number conflicts.

One other suggestion I'll make is you probably at some point should write a translation layer between your API endpoints and the http.Handler interface, so that your endpoints return `(result *T, error)` and your tests can avoid worrying about serde/typeasserting the results.

[0] https://github.com/peterldowns/pgmigrate
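A sketch of what such a translation layer can look like, assuming Go 1.18+ generics; the names Handle, User, and getUser are invented for illustration:

```go
// Sketch: endpoints return (*T, error), and a small generic adapter turns
// them into http.Handlers. Tests can then call the endpoint function
// directly and assert on *T, with no JSON decoding or type assertions.
package main

import (
	"encoding/json"
	"net/http"
)

// Handle adapts a typed endpoint into an http.Handler that JSON-encodes
// the result and maps errors to a 500 response.
func Handle[T any](fn func(r *http.Request) (*T, error)) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		result, err := fn(r)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(result)
	})
}

type User struct {
	ID int64 `json:"id"`
}

// getUser is a typed endpoint: no http.ResponseWriter in sight.
func getUser(r *http.Request) (*User, error) {
	return &User{ID: 1}, nil
}

func main() {
	http.Handle("/user", Handle(getUser))
	// http.ListenAndServe(":8080", nil) // omitted so the sketch runs standalone
}
```

Error handling (status codes, error response bodies) can live in one place inside the adapter instead of being repeated in every handler.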


This looks excellent!

The go tools for managing DB schema migrations have always felt lacking to me, and it seems like your tool ticks all of the boxes I had.

Except for one: lack of support for CREATE INDEX CONCURRENTLY (usually done by detecting that and skipping the transaction for that migration). How do you handle creating indexes without this?


Thanks for taking a look!

Long-running index creation is a problem for pgmigrate and anyone else doing “on-app-startup” or “before-app-deploys” migrations.

Even at moderate scale (normal webapp stuff, not megaco size) building indexes can take a long time — especially for the tables where it’s most important to have indexes.

But if you’re doing long-running index building in your migrations step, you can’t deploy a new version of your app until the migration step finishes. (Big problem for lots of reasons.)

The way I’ve dealt with this in the past is:

- the database connection used to perform migrations has a low statement timeout of 10 seconds.

- a long-running index creation statement gets its own migration file and is written as: “CREATE INDEX … IF NOT EXISTS”. This definition does not include the “CONCURRENTLY” directive. When migrations run on a local dev server or during tests, the table being indexed is small so this happens quickly.

- Manually, before merging the migration in and deploying so that it’s applied in production, you open a psql terminal to prod and run “CREATE INDEX … CONCURRENTLY”. This may take a long time; it can even fail and need to be retried after hours of waiting. Eventually, it’s complete.

- Merge your migration and deploy your app. The “CREATE INDEX … IF NOT EXISTS” migration runs and immediately succeeds because the index exists.

I’m curious what you think about this answer. If you have any suggestions for how pgmigrate should handle this better, I’d seriously appreciate it!


I think that’s the safest approach, but it’s inconvenient for the common case of an index that’ll be quick enough in practice.

The approach I’ve seen flyway take is to allow detecting / opting out of transactions on specific migrations.

As long as you always apply migrations before deploying and abort the deploy if they time out or fail, then this approach is perfectly safe.

On the whole I think flyway does a decent job of making the easy things easy and the harder things possible - it just unfortunately comes with a bunch of JVM baggage - so a Go-based tool seems like a good alternative.


Makes sense — Flyway is so good, copying their behavior is usually a smart choice. Thanks for the feedback!


I can definitely get behind using some other migration library! Thank you for writing and sharing this!


Thanks :) if you have the time, I’d sincerely appreciate feedback on it and especially on its docs/readme, even a simple github issue for “this is confusing” or “this typo is weird” would be really helpful.


Bold of you to flat out drop down migrations.

I guess having a new up migration to cover the case is better, but it's nice to have a documented way of rolling back (which would be the down migration) without applying it programmatically. Ideally, it helps if other team members can see how a change should be rolled back.


Glenjamin gave a great answer. I’ll just add that in my experience (being responsible for the team’s database, at a few companies over the years), down migrations are NEVER helpful when a migration goes wrong. Roll-forward is the only correct model. Besides, there are plenty of migrations that can’t be safely rolled back, so most “down migrations” either destroy data or don’t work (or both).


The key here is that in production it's almost always not safe to actually apply a down migration - so it's better to make that clear than to pretend there's some way to reverse an irreversible operation.



