I am a big fan of MAUI, but I really wish they'd fix existing issues instead of extending it further. 3.9k open issues and counting. I've got 5 open, verified bugs, some from 2023 :(
Perhaps it's just my eyes, but when I zoom in enough to make the text readable, it gets blurry. A higher-res image would be much appreciated. Great idea otherwise.
This might be the case for a hobby project or a start-up MVP being created in a rush, but in reality, there are a few points we may want to take into account:
1. The software teams I work with maintain the usual review practices. Even if a feature is created entirely by AI, it goes through the usual PR review process. The dev may choose "Accept All" (and I am not saying this is good practice), but the change still gets reviewed by a human.
2. From my experience, sub-agents intended for code and security review do a good job. It is even possible to use another model to review the code, which can provide a different perspective.
3. A year ago, code written by AI often failed to run on the first attempt, requiring a painful joint troubleshooting effort. Now it works 95% of the time, though it may not be optimal. Given the speed at which it is improving, it is safe to expect that in 6-9 months' time, it will not only work but will also be written to a good quality.
I understand the argument, and there are some really good points.
My biggest concern would be that adopting the CLI method requires the LLM to have permission to execute binaries on the filesystem. This is a non-issue in an OpenClaw-type scenario, where that permission is there by design, but it would be more difficult to adopt in an enterprise setting. There are ways to limit an LLM to a directory tree where only allowed CLIs live, but there will always be hacks to break out of it. Not to mention, the LLM would need an MCP server or another local tool to execute CLI commands, making it a two-step process.
I am a supporter of human tools for humans and AI tools for AI. The best example is something like WebMCP versus the current method of screenshotting webpages and trying to find buttons, input boxes, etc.
If we keep them separate, we can allow them to evolve to fully support each use case. Otherwise, the CLIs would soon start to include LLM-specific switches and arguments, e.g., to provide information in JSON.
Tools like awscli are good examples of where an LLM can use a CLI. But then we need to remember that these are partly, if not mostly, intended for machine use, so that CI/CD pipelines can drive them.
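To make that concrete: awscli already ships a machine-oriented `--output json` switch, and anything (a pipeline or an LLM) can consume it. A minimal sketch, where the JSON payload below is a simplified stand-in, not real awscli output:

```python
import json

# Stand-in for the output of `aws ec2 describe-instances --output json`.
# The `--output json` flag is real; this payload is a simplified example.
aws_output = '{"Reservations": [{"Instances": [{"InstanceId": "i-0abc"}]}]}'

data = json.loads(aws_output)
instance_ids = [
    inst["InstanceId"]
    for res in data["Reservations"]
    for inst in res["Instances"]
]
print(instance_ids)  # structured output a script (or an LLM) can consume directly
```

This is exactly the kind of "LLM-specific switch" that human-first CLIs would otherwise grow over time.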
I don't think their intention is mining your data (easy to opt out of) or hoping that you maintain the subscription after 6 months. It is rather to get maintainers of large open-source projects to give AI a proper go.
Believe it or not, there are still a large number of great tech professionals out there who are sceptical about AI. Many tried AI a year ago and are left with the impression that "it was alright but had limitations". AI has come a long way since then, and it is going to improve even faster over the next 6 months. So this is Anthropic's invitation for you to join that journey.
In turn, of course, this fuels adoption, with superstars (maintainers) endorsing the models.
The use of data for model training is a simple toggle, very easy to opt out of during the initial setup.
Also, the end product is open source anyway, so there is no case of IP being leaked into training data. What remains is that they can use, with your permission, the overall coding practices of a great programmer to fine-tune Claude Code and the underlying models: how one approaches planning or troubleshooting, for example. Is this a bad thing? Perhaps every maintainer should decide for themselves whether they want to contribute back or not.
Assuming they've got reasonable programming skills, they can simply find an open-source project they are passionate about, spend time understanding its overall structure, then pick up an issue raised by the community and prepare a fix as a pull request.
The first PR is unlikely to be merged the next day; however, it sparks lots of productive discussions with the rest of the community, allowing your kid to build a mental model of the project's best practices and sensitivities.
The more they contribute, the more integral they become to the community. After gaining enough experience through small issues, they can even consider working on a new feature.
As a byproduct, it makes a great addition to the CV if they are also looking to go commercial.
I don't think this is an AI issue. It is about the terms of use: they don't allow a second account, even if it's intended for ad management. The recommended way is to use Meta Business Manager via the existing account.
Low-quality AI-created PRs submitted to open-source repositories are prompted by humans. And those are the same humans who fail to review the AI's output properly before submitting it (or letting the AI submit it) as a PR. Let's blame the bad workmanship, not the tools.
A smaller number of PRs are generated by OpenClaw-type bots, which also act on their owners' direct or implied instructions. I mean, someone is giving them GitHub credentials and letting them loose.
AI is also allowing the creation of many new open-source projects, led by responsible developers.
Given the exponential speed at which AI is progressing, surely the quality of such PRs is going to improve. But there are also opportunities for the open-source community to improve their response. It may sound controversial, but AI can be used to perform an initial review of PRs, suggest improvements, and, in extreme cases, reject them.
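For illustration, an initial AI review gate could be as simple as the sketch below. This is a hypothetical policy, not any project's real tooling; `call_llm` is a stand-in for whatever model call you wire in, and the one-word verdict protocol is an assumption for the example.

```python
# Hypothetical sketch of a first-pass AI review gate for incoming PRs.
# `call_llm` is a stand-in for any model call (e.g. via an API client);
# only the surrounding policy logic is defined here.

def triage_pr(diff: str, call_llm) -> str:
    """Return 'approve-for-human-review', 'request-changes', or 'reject'."""
    verdict = call_llm(
        "Review this diff for obvious defects and spam. "
        "Answer with one word: ok, fix, or spam.\n\n" + diff
    ).strip().lower()
    if verdict == "ok":
        return "approve-for-human-review"  # still goes to a human reviewer
    if verdict == "fix":
        return "request-changes"           # suggest improvements to the author
    return "reject"                        # extreme cases only

# Usage with a stub standing in for the model:
print(triage_pr("+ print('hello')", lambda prompt: "ok"))
```

Note the design choice: "ok" never means auto-merge; it only means the PR earns a human's attention, which keeps the human review step intact.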
Does the article have a strong marketing vibe? Absolutely.
Does the research performed move the needle, however small, in theoretical physics? Yes.
Could we have expected this to happen a year ago? Not really.
My personal opinion is that things will only accelerate from here.