Hacker News | nijaar's comments

for a sec i thought DEI was going too far

is the title like that on purpose?


MCP Dockmaster is a straightforward tool designed to help you easily install, manage, and monitor AI applications using MCP.

MCP is an open-source standard created by Anthropic that allows AI apps like Claude Desktop or Cursor to seamlessly access data from platforms such as Slack or Google Drive, interact with other applications, and connect to APIs.
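For context, hooking an MCP server into an app like Claude Desktop is typically just a JSON entry telling the app how to launch the server. A minimal sketch (the server name and package below are placeholders, not a real server):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```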

Next up, we want to add payment integrations to make it easier to monetize MCPs.

Any feedback is very welcome!


same here. pro in the US and still no access. i even logged in using my phone and a different browser


This is from 2014. What’s the point of resharing it?


It's new to me and I've found the comments thread enthralling.


"The ability to plan a course of action that achieves a desired state of affairs has long been considered a core competence of intelligent agents and has been an integral part of AI research since its inception. With the advent of large language models (LLMs), there has been considerable interest in the question of whether or not they possess such planning abilities. PlanBench, an extensible benchmark we developed in 2022, soon after the release of GPT3, has remained an important tool for evaluating the planning abilities of LLMs. Despite the slew of new private and open source LLMs since GPT3, progress on this benchmark has been surprisingly slow. OpenAI claims that their recent o1 (Strawberry) model has been specifically constructed and trained to escape the normal limitations of autoregressive LLMs--making it a new kind of model: a Large Reasoning Model (LRM). Using this development as a catalyst, this paper takes a comprehensive look at how well current LLMs and new LRMs do on PlanBench. As we shall see, while o1's performance is a quantum improvement on the benchmark, outpacing the competition, it is still far from saturating it. This improvement also brings to the fore questions about accuracy, efficiency, and guarantees which must be considered before deploying such systems."


if this works, would it make the rust compiler considerably smaller / faster?


Smaller? Yes. Faster? Almost certainly not.

It really doesn't make sense to optimize anything in a bootstrapping compiler. Usually the only code that will ever be compiled by this compiler will be rustc itself. And rustc doesn't need to run fast - just fast enough to recompile itself. So, the output also probably won't have any optimisations applied either.


if it is smaller, doesn't that mean it has less code to execute, and hence it should be faster? Trying to understand better -- this is something completely new for me


Not necessarily. In fact, one of the most important optimizations for compilers is inlining code (copy-pasting function bodies into call sites), which results in more code being generated (more space) but faster wallclock times (more speed). Most optimizations trade off size for speed in some way, and compilers have flags to control it (e.g. -Os vs -O3 tells most C compilers to optimize for size instead of speed).

Where optimizing for size is optimizing for speed is when it's faster (in terms of wall clock time) for a program to compute data than to read it from memory, disk, or other I/O, because I/O bandwidth is generally much lower than execution bandwidth. The processor does more work, but it takes less time because it isn't waiting for data to load through the cache or from memory.


great explanation. thank you!


Why would a program run faster just because it’s smaller?


Example: this is a small program

int main() { for(;;); }


Oh, I suppose I’m imagining two implementations that both do the same work. (Like two rust compilers).

Eg, quicksort vs bubble sort. Quicksort is usually more code but faster.

Or a linked list vs a btree. The linked list is less code, but the btree will be faster.

Or substring search, with or without simd. The simd version will be longer and more complex, but run faster.

Even in a for loop - if the compiler unrolls the loop, it takes up more code but often runs faster.

If you have two programs that both do the same work, and one is short and simple, I don’t think that tells you much about which will run faster.


No, this won't change rustc at all. The purpose of this project is to be able to bootstrap a current version of rustc without having to do hundreds of intermediate compilations to go from TinyCC -> Guile -> OCaml -> Rust 0.7 -> ...... Rust current. (Or bootstrap a C++ compiler to be able to build mrustc, which can compile Rust 1.56, which will give you Rust current after another 25 or so compilations.)

Ultimately the final rustc you get will be more or less identical to the one built and distributed through rustup.


> will be more or less identical

What could cause differences between the bootstrapped rustc and rustup’s?


In theory there shouldn’t be any. The official Rust builds, I believe, have one level of bootstrapping: the previous official build is used to build an intermediate build of the current version, which is then used to build itself. So the distributed binaries are the current version built with the current version. A longer from-source bootstrapping process should also end by building the current version with the current version, and that should lead to a bit-for-bit identical result.

In practice, you’ll have to make sure the build configuration, the toolchain components not part of rustc itself (e.g. linker), and the target sysroot (e.g. glibc .so file) are close or identical to what the official builds are using. Also, while rustc is supposed to be reproducible, and thus free of the other usual issues with reproducible builds (like paths or timestamps being embedded into the build), there might be bugs. And I’m not sure if reproducibility requires any special options which the official builders might not be passing. Hopefully not.

See also: https://github.com/rust-lang/rust/issues/75362


It could be even easier: we implemented a two-click-install, open-source local AI manager (+RAG and other cool stuff) for Windows / Mac / Linux. You can check it out at shinkai com or check out the code at https://github.com/dcspark/shinkai-apps


What exactly does the P2P network do? Does my node communicate with the nodes of strangers?


only when you aren't looking


Are they planning to acquihire Taiwan?


$7T is ~10x Taiwan’s GDP :)


10x revenue is a typical multiple these days.


I believe it's larger for real estate


Not really, because the IRS actually has put out guidelines (although kind of slowly) for crypto, unlike the SEC


For mobile, performance may be very important in some cases (very repetitive work or just heavy load), so using Rust allows you to compile to the specific mobile architecture. The same is not possible with TypeScript, which just runs in the JS virtual machine.

