Hacker News | LennyWhiteJr's comments

I love Detekt! It's particularly good for enforcing project conventions that can't be covered with standard linters. With access to the full AST, you can basically enforce any kind of rule you want with custom analyzers. And LLMs take out 90% of the effort for creating new analyzers.


Do you use it in CI? Do you have a template or something to share?


Yeah, it runs as one of the Gradle tasks, which fails the build in CI if the rules don't pass.

No template, as it's specific to my team's project, but one example is that we enforce that classes within our core domain don't import types from outside the project. You could accomplish this with separate projects, but that comes with its own complexities. This rule basically says "any domain type shall not import a non-domain type".
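A rule like that can be sketched against the detekt 1.x rule API by visiting import directives. The package names below (`com.example.domain`, `com.example.`) are placeholders, not the actual project's packages, and the allow-list logic is a simplification of what a real rule would need:

```kotlin
import io.gitlab.arturbosch.detekt.api.CodeSmell
import io.gitlab.arturbosch.detekt.api.Config
import io.gitlab.arturbosch.detekt.api.Debt
import io.gitlab.arturbosch.detekt.api.Entity
import io.gitlab.arturbosch.detekt.api.Issue
import io.gitlab.arturbosch.detekt.api.Rule
import io.gitlab.arturbosch.detekt.api.Severity
import org.jetbrains.kotlin.psi.KtImportDirective

// Hypothetical rule: flag imports of types from outside the project
// inside the core domain package. Package prefixes are illustrative.
class DomainLayerImports(config: Config = Config.empty) : Rule(config) {

    override val issue = Issue(
        id = "DomainLayerImports",
        severity = Severity.Maintainability,
        description = "Domain types must not import types from outside the project.",
        debt = Debt.FIVE_MINS,
    )

    override fun visitImportDirective(importDirective: KtImportDirective) {
        super.visitImportDirective(importDirective)
        val imported = importDirective.importedFqName?.asString() ?: return
        val inDomain = importDirective.containingKtFile.packageFqName
            .asString()
            .startsWith("com.example.domain")
        if (inDomain &&
            !imported.startsWith("com.example.") &&
            !imported.startsWith("kotlin.")
        ) {
            report(
                CodeSmell(
                    issue,
                    Entity.from(importDirective),
                    "Domain code imports external type $imported",
                )
            )
        }
    }
}
```

Because the rule sees the full AST via the PSI, the same approach extends to anything you can express over import directives, call sites, or declarations.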


there's an official template here: https://github.com/detekt/detekt-custom-rule-template/tree/m...

and here's the diff for a 'real world' rule I implemented: https://github.com/michaelbull/kotlin-result/compare/master....


Almost my entire org uses it for backend server development at Amazon. There is very strong support for Kotlin within the Amazon dev community.


Someone who actually knows what they're talking about.

Even with the Customer Obsession LP, it's not much of a stretch to arrive at a conclusion where more ads are shown. Better and worse are, in many respects, quite subjective in these areas.


There is organic matter buried in and around the site that can be used to date the point at which it was filled in.


In my Amazon team, we use PostgreSQL as a queue, using SKIP LOCKED to implement the transactional outbox pattern for our database inserts. People commenting "just use a queue" are totally missing the need for transactional consistency. I agree with the author, it's an amazing tool and scales quite well.
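The core of the pattern is that the business write and the outbox row commit in one transaction, and relay workers then claim rows with `FOR UPDATE SKIP LOCKED` so they don't block each other. A minimal JDBC sketch, with table and column names (`orders`, `outbox`, `payload`, `processed_at`) that are assumptions rather than the actual schema:

```kotlin
import java.sql.Connection

// Producer side: the business insert and the outbox insert commit
// atomically. This is the consistency a separate queue cannot give you.
fun placeOrder(conn: Connection, orderJson: String) {
    conn.autoCommit = false
    try {
        conn.prepareStatement("INSERT INTO orders (payload) VALUES (?::jsonb)").use {
            it.setString(1, orderJson)
            it.executeUpdate()
        }
        conn.prepareStatement("INSERT INTO outbox (payload) VALUES (?::jsonb)").use {
            it.setString(1, orderJson)
            it.executeUpdate()
        }
        conn.commit()
    } catch (e: Exception) {
        conn.rollback()
        throw e
    }
}

// Relay side: SKIP LOCKED lets concurrent workers each claim different
// rows without queueing behind one another's row locks.
fun claimBatch(conn: Connection): List<Pair<Long, String>> =
    conn.prepareStatement(
        """
        SELECT id, payload FROM outbox
        WHERE processed_at IS NULL
        ORDER BY id
        LIMIT 10
        FOR UPDATE SKIP LOCKED
        """.trimIndent()
    ).use { stmt ->
        stmt.executeQuery().use { rs ->
            buildList { while (rs.next()) add(rs.getLong("id") to rs.getString("payload")) }
        }
    }
```

Each worker would publish its claimed batch downstream and then mark the rows processed (or delete them) before committing its own transaction.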


Nothing about the outbox pattern guarantees ordering.


If you use Postgres logical replication, that is not true.


Even if you are "down to several nanoseconds", a slight clock drift can be the difference between corrupt data and not, and when running at scale, it's only a matter of time before you start running into race conditions.

For a small web app, fine, but if you're running enterprise-level software processing billions of DB transactions per day, clocks just don't cut it.


That’s why you buy NICs with hardware timestamp support and enable PTP. You can detect clock drift within a few packets.

Race conditions are mitigated not by clocks, but by other logic. The clock work was just something done after frustration reading distributed logs and seeing them out of order. Logs are basically never out of order any more, and there is sanity.


This is precisely why large tech companies generally offer parallel career progression paths for individual contributors who don't wish to go into management. A Senior or Principal engineer at Amazon carries nearly as much weight, sometimes more, in software related decision making as their SDM counterparts.


A couple of ways. If the need is analytical rather than real-time, you feed the data from multiple services into a separate BI database, which can do slower and more complex joins across data from multiple sources. If the need is real-time, you build a paginated API with a page limit that can always be processed within the API SLA. Then you build workflows on top of the paginated API to operate on that data.

Generally, unbounded operations have to be broken up at some point. It just depends on how big the data set is.
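The workflow side of that can be sketched as a cursor loop that turns one unbounded operation into a series of bounded, SLA-sized calls. The `Page`/`nextToken` shape below is a hypothetical response format, not a specific API:

```kotlin
// Hypothetical paginated response: a bounded page of items plus an
// opaque cursor for the next page (null when the data set is exhausted).
data class Page<T>(val items: List<T>, val nextToken: String?)

// Drive an unbounded operation as repeated bounded calls; each call
// is small enough to finish within the API's SLA.
fun <T> forEachItem(fetchPage: (token: String?) -> Page<T>, process: (T) -> Unit) {
    var token: String? = null
    do {
        val page = fetchPage(token)
        page.items.forEach(process)
        token = page.nextToken
    } while (token != null)
}
```

A real workflow engine would also checkpoint the token between calls so the operation can resume after a failure instead of starting over.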


What does it mean for a field to be required in the new API version but optional in the old version?


Author here! Are you thinking fields in a request or fields in a response?


