I'm a bit leery of teleoperated machinery, rather than physically present people. Not saying it can't be done, but it adds a few more things that can go wrong. Outsourced coding just incurs extra delays if something goes wrong with the link or the power supply; I don't want a robot with a drill in my mouth when that happens.
So what would I need before I'd go to a teleoperated dentist?
-Reliable direct satellite link?
-The clinic having its own power supply in case of blackouts?
-Similarly reliable supplies at the operator's end, or enough people on hand to take over the operation if something went wrong with them?
Also, any surgeons/dentists in the house? How important is reaction time? Would latency be an issue?
Yeah, I'd be fine with a teleoperated robot surgeon if the human surgeon was in the next room in scrubs and gloves, ready to rush in at a moment's notice. Because while the robot may be more precise and steadier-handed than a meatbag, the meatbag, I suspect, would have better error recovery.
The above site has <meta http-equiv="Objective" content="Hash Exploit"> in it as well.
"Poder" is "power" in Spanish and Portuguese (from the Latin posse). "Cibernetico" might be meant as a Latin translation of "cybernetic"... I'm expecting a Latin motto, as per usual on these things. Still can't get the hash; I've tried a number of different capitalisations.
So 128 bits? I'd guess they wouldn't use MD5 (or anything in the MD family), which according to the wiki leaves HAVAL/RIPEMD/Tiger. I'd go for RIPEMD-128 (based on what the wiki says), although you'd expect them to use an NSA-blessed algorithm.
I don't think they're expecting it to be cracked, per se, just guessed. I don't have to defeat the SHA-256 algorithm to find your hashed password in a rainbow table.
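To make the guessing point concrete, here's a toy dictionary attack in Python (the target hash and wordlist are invented for illustration; a rainbow table is the same idea with the candidate hashes precomputed and compressed):

```python
import hashlib

def guess_hash(target_hex, candidates):
    """Hash each candidate and compare against the target digest."""
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_hex:
            return word
    return None  # not in our dictionary; guessing got us nowhere

# Hypothetical puzzle answer, hashed the way the puzzle might have done it.
target = hashlib.sha256(b"poder").hexdigest()
print(guess_hash(target, ["potestas", "cibernetico", "poder"]))  # → poder
```

No algorithm is "defeated" here: the attack works whenever the input space is small enough to enumerate, regardless of how strong the hash is.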
x86 assembly is full of old cruft; that 'cld' there is the kind of thing that, if you've ever forgotten it, probably cost you a lot of time.
99% of the time the flag is clear... unless it isn't.
So for a seasoned assembly programmer that cld is idiomatic; axod probably typed the instruction reflexively, because he knows he can't rely on the state of the direction flag, even if it has nothing to do with the problem per se.
Is there a market for long-time-scale shorts on ETFs? I know long-term oil futures generally go out 7-8 years.
I wouldn't place bets except on a time scale of 20-40 years, and even then I'm not sure it's worth it in terms of the likely payoff and the time taken to research.
As well as the potential overwriting, file carvers have trouble recovering fragmented files. They might produce corrupted files, or nothing at all, depending on the details of the file format.
A lot of computer forensics is hacking commoditized for law enforcement. If there is data you need to analyse that you can't get at, you need to "hack" it.
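As a sketch of why fragmentation defeats carving: a minimal "carver" just scans for a format's header and footer signatures and assumes everything between them is one contiguous file (JPEG markers used here; the disk image is fabricated, and real tools like foremost or scalpel are far more sophisticated):

```python
SOI = b"\xff\xd8\xff"  # JPEG start-of-image signature
EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(raw):
    """Extract byte ranges that look like contiguous JPEGs."""
    out, pos = [], 0
    while (start := raw.find(SOI, pos)) != -1:
        end = raw.find(EOI, start)
        if end == -1:
            break  # truncated or fragmented: we get nothing (or garbage)
        out.append(raw[start:end + 2])
        pos = end + 2
    return out

# Fake "disk image": slack space, one intact JPEG, more slack.
disk = b"\x00" * 8 + SOI + b"\xe0...image data..." + EOI + b"\x00" * 4
print(len(carve_jpegs(disk)))  # → 1
```

If the file's blocks were scattered across the disk, the bytes between the two markers would be a mix of file and unrelated data, which is exactly the corrupted-output case.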
Basically it is too unconventional (chemical), too faddish, and not focused on producing something usable by the average geek.
That sort of stuff is still interesting (for computing in odd situations) but is not what I am looking for. I suppose I'm wondering why there isn't a computing equivalent of the space elevator: something most people know about, that can't be done with current tech, but that is physically plausible (though it might still be too hard to do). Something that might spark the equivalent of the Spaceward Foundation, but for computers.
The Fleet architecture represents a different face of unconventional computing, one that geeks can get behind. However, it concentrates on speed of processing. Looking at the costs of computing, increasing computational power per watt or per FLOP is useful, but it does not address the dominant cost of owning and running a computer. The dominant costs, I think, are the costs of learning the system, administering it and programming it. Neither of the above threads of research addresses these.
I have my own oddball ideas, which I'm excited about. I just wanted to gauge the opinion of HN-type people.
It seems like what you're interested in is more like UI or UX research than hardware innovation? The universality of the machine, strengthened by the ubiquity of compilers and software written in high-level languages, almost totally disconnects the user experience from the computing hardware, except for efficiency differences; instead it's tied to the I/O devices and the user interaction techniques, and increasingly, to the data the user is interacting with.
But I do see a fair bit of discussion of researchy and novel UIs here, don't you? On the front page right now I see Heroku (reducing the cost of administering systems), Hummingbird (real-time web site analytics visualization), Android vs. iPhone (which is largely about ubiquity and UI), Chatroulette, the death of files in the iPhone/iPad UI (which sounds like it goes right to the core of the "dominant costs" you're talking about), Nielsen's report on iPad usability, and UI design in Basecamp. And that's just above the fold!
There are three ways to tackle the human costs of computing.
1) Make the things humans have to do easier (UI/UX).
2) Reduce the number of things humans have to do. While all modern hardware can calculate the same things (it's all universal), different machines have different security models, and those affect how much maintenance the user has to do. Take capability-based security, an old idea implemented in hardware in the IBM AS/400. Languages based on it (E, Joe-E) are currently being touted as a way to reduce the risk from malware: even if malware does get onto the system, it can't do much, because the language VMs operate under the principle of least privilege.
If we are changing the architecture for performance (e.g. Fleet) and can't make use of that performance with standard software, we may want to change it in this way as well, to take advantage of the system.
To give a concrete example of how computer architectures could be changed for the better: if Windows had capability-based security at the low level, it could pass bits of memory to a user-land process by sharing a capability that gave write access. The user-land process could then populate it, and once it had finished and the kernel wanted to read it, the writable permission could be revoked. This would prevent this sort of attack.
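In object-capability jargon that revocation trick is the "caretaker" (revocable forwarder) pattern. A toy sketch in Python, with all names invented; a real kernel would do this with page-table entries rather than closures:

```python
class Revoked(Exception):
    pass

def make_revocable_writer(buf):
    """Return (write, revoke). `write` is the capability handed to the
    user-land process; `revoke` is kept by the kernel."""
    alive = True

    def write(offset, data):
        if not alive:
            raise Revoked("write capability was revoked")
        buf[offset:offset + len(data)] = data

    def revoke():
        nonlocal alive
        alive = False

    return write, revoke

shared = bytearray(16)
write, revoke = make_revocable_writer(shared)
write(0, b"request")  # user land fills the shared buffer
revoke()              # kernel is about to read: cut the writer off
# any further write(...) now raises Revoked, so the contents can't be
# swapped out from under the kernel mid-read
```

The point is that authority travels with the reference, not with an identity check, so taking the reference away ends the authority.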
3) Make the computer do the work for the human. Yes, this is mainly an AI problem, but it is also an architecture problem. If you want the system to manage things like your graphics card drivers for you, you have to make some decisions about the hardware: which programs are allowed to try to manage the graphics card drivers, and how the user can communicate what she wants from them in a way that the computer will find unambiguous.
So yep, UI and UX are important, but they're only one possible angle of attack, and not the one I'm interested in, because people are doing fine work there while the others languish a bit.
> it could pass bits of memory to the user land process
> by sharing a capability that gave it write access.
> Then the userland process could populate it,
> once it had finished and the kernel wanted to read it,
> they could revoke the writeable permission.
> This would prevent this sort of attack [apparently,
> confusing auditors with TOCTOU attacks on system call arguments]
Virtual memory mapping hardware is already roughly a capability system. The CPU doesn't maintain a list of ownerships and permissions for every page of physical memory; it puts capabilities to those pages into page tables. That's how KeyKOS was able to run efficiently on stock hardware.
Capability systems are indeed better for security in several ways, but this isn't one of them. The problem here is that the memory page is shareable between different user threads. You can solve this problem in a variety of ways, including the one you suggest. However, unmapping the page that a system-call argument lives in before invoking an auditor does not constitute implementing a capability system.
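For anyone who hasn't met the attack being referenced: it's a time-of-check-to-time-of-use (TOCTOU) race, where the argument page stays writable by the user between the auditor's check and the kernel's use. A deterministic Python simulation (a real attack uses a second thread flipping the shared memory; the class and path names here are made up):

```python
class SharedArg:
    """Stands in for a syscall argument living in a page the user
    can still write: each read may see a different value."""
    def __init__(self):
        self.reads = 0

    @property
    def path(self):
        self.reads += 1
        # the "attacker thread" swaps the value after the first read
        return "/tmp/harmless" if self.reads == 1 else "/etc/shadow"

def audited_syscall(arg):
    checked = arg.path          # time of check: the auditor sees a safe path
    assert checked == "/tmp/harmless"
    return arg.path             # time of use: the kernel re-reads and loses

print(audited_syscall(SharedArg()))  # → /etc/shadow
```

The standard fixes are to copy the argument into kernel-private memory before checking it, or, as suggested above, to make the user's mapping unwritable for the duration.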
To a great extent, it seems like the move toward web apps is exactly a move toward a different security model in order to reduce the maintenance the user has to do, a model in which most apps are fairly limited in their authority. The same-origin policy still falls far short of full POLA, but it's a step. The project in this area I'm most excited about is Caja, which is what MarkM's working on these days.
I thought about mapping. Wouldn't you get into trouble if you unmapped a section of memory that still had to be readable while the kernel was using it? Or can you modify a read-write mapping into a read-only one? I'm just getting into Windows internals.
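On the read-write-to-read-only question: yes, protections on an existing mapping can be tightened in place; on Windows that's VirtualProtect(), on POSIX it's mprotect(). A rough Linux/ctypes sketch (the PROT_READ constant and the anonymous page-aligned mapping are assumptions of this demo, not Windows internals):

```python
import ctypes
import mmap

libc = ctypes.CDLL(None, use_errno=True)
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]
PROT_READ = 1  # value from <sys/mman.h>

page = mmap.mmap(-1, mmap.PAGESIZE)  # anonymous, page-aligned mapping
page[:5] = b"hello"                  # writable for now

addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
assert libc.mprotect(addr, mmap.PAGESIZE, PROT_READ) == 0

print(page[:5])  # reads still work: b'hello'
# writing to the page now would fault, which is exactly the point:
# the "kernel" can keep reading while the "user" can no longer scribble
```

So nothing needs to be unmapped while the kernel is reading; only the write permission is withdrawn.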
Heh, I didn't know there were fellow people interested in KeyKOS-type stuff here. I'm fairly new to it, and more interested in the third way of reducing the cost of ownership, since I have an adaptive-computing background.
If you submit a link to Caja here, let me know and I'll upvote it. The cap-like stuff the Marks were working on for delegating authority to web apps was also interesting. It does reduce the amount of maintenance the user has to do; they still have to pay for the web apps, though, so depending on the user's income and the cost of the service it might not reduce the total cost by much.
http://www.youtube.com/watch?v=YBv79LKfMt4