I do hear what you're saying, and I've wrestled with "not everything should / can be an app". That being said, I'm still trying to solve food (for myself) with computers, haha.
Right now, that looks like trying to create a nutritionally-optimal "dog food for humans", using combinatorial optimization solvers. I think I'm going to write something up as a post when it becomes a bit more feature-complete.
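For the curious, the core idea is basically the classic "diet problem": pick quantities of foods that meet nutrient targets at minimum cost. Here's a minimal brute-force sketch; the food names, nutrient numbers, and targets are made up for illustration, and a real LP/MIP solver would replace the exhaustive search:

```python
from itertools import product

# Hypothetical foods: (name, cost per serving, protein g, fiber g).
# Real data would have dozens of nutrients and foods.
foods = [
    ("oats",    0.30, 13.0, 10.0),
    ("lentils", 0.50, 24.0, 11.0),
    ("spinach", 0.80,  2.9,  2.2),
]
targets = {"protein": 60.0, "fiber": 30.0}  # illustrative daily minimums

def cheapest_plan(max_servings=6):
    """Exhaustively try serving counts and keep the cheapest feasible plan."""
    best = None
    for counts in product(range(max_servings + 1), repeat=len(foods)):
        protein = sum(n * f[2] for n, f in zip(counts, foods))
        fiber = sum(n * f[3] for n, f in zip(counts, foods))
        if protein < targets["protein"] or fiber < targets["fiber"]:
            continue  # plan doesn't meet the nutrient minimums
        cost = sum(n * f[1] for n, f in zip(counts, foods))
        if best is None or cost < best[0]:
            best = (cost, counts)
    return best  # (total cost, servings per food), or None if infeasible
```

The real thing uses a proper solver instead (brute force blows up combinatorially), but the structure of the problem is the same.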
It's living at chow.seanjohnsen.com if you're curious! Would love feedback from someone who has thought along these lines.
What I would really want is for this to be a hardware button that, without requiring any input on the phone itself, starts and stops a recording on my iPhone using my AirPods as the mic.
This would actually be super helpful in the lab for dictating notes on a protocol ("I did something weird in this step") without needing to stop to write things down (sometimes the protocol is quite time-sensitive).
Not really. If you know about uv, you know how to use "uv tool run", so you know how to use any formatter of your choice (which you can find easily on Google, arguably easier than reading the documentation and learning about uv format).
Well, it is arguably worse to run an unknown, non-version-pinned, unconfigured formatter over your code and expect it to improve things, unless the code is an utter catastrophe in terms of formatting.
_You_ may find it irrelevant, but speak for yourself. I don't want dependencies that aren't version-pinned and checksummed running over my code. It is certainly not irrelevant to me.
For doing laboratory work in my PhD, I've found no better app than OmniFocus. It's particularly valuable for its ability to create tasks via a templating system. This is crucial, for example, for managing 10+ genetic crosses at a time. Each cross takes weeks to move to the next step, but when that next step occurs, I need to be on top of the cross 2x / day. Juggling different crosses at different stages would be impossible for my brain without a system I can rely on. Other lab work follows similar workflows.
Instead of writing the counting tool he could have used the Multi-Point Tool in ImageJ [1] [2]. I used it just this morning for counting some embryos I collected.
It sounds like this may have been one of the pieces of software the author intentionally chose not to use:
> There are some clunky old Windows programs, niche scientific tools, and image analysis software that assumes you’re trying to count cells under a microscope...
C. elegans is nice for this since you can freeze stocks in glycerol. Labs routinely go and thaw out the main wild-type reference stock if the lab stock has been around for too long.
Now I'm in a fly lab, and no one's really figured out a good way to freeze a fly stock down for long-term storage. So we're left to just accept some degree of background mutation and generally assume that it's not impacting our experiments too much...
It's worth noting that we've found genetic differences between the N2 wild type strains used by different labs as well, so this is still a problem for C. elegans.
Before companies like Plasmidsaurus started offering whole-plasmid nanopore sequencing relatively cheaply, people generally only sequenced a region of interest using Sanger sequencing. The rest of the plasmid was assumed to be mostly correct, as long as the bacteria grew under antibiotic selection. As noted in the article, the rise of nanopore-based whole-plasmid sequencing has reduced a lot of these types of errors.
Would it be possible to consider them separately, though? Maybe it will turn out that, say, 10% of them are beneficial, 65% of them are neutral (either they do nothing at all, or they're a mixture of benefit and harm), and 25% are slightly bad for us (they can't be too harmful or we would have already known, I guess).
Delivering gene therapies into brain cells is a non-trivial task. Also, there are alternatives to cutting the original sequence out; you can also knock down the transcribed RNA with downstream therapies.
'Bad' is notoriously hard to figure out. It might be good for the group to have a few people with major psychiatric disorders even if it's not ideal for that individual or the people who have to directly interact with them.