Hacker News | VogonPoetry's comments

I am not sure using sandbox-exec is a good security architecture for AI agents. It sure is convenient and available to everyone right now. I've made another comment elsewhere in this discussion about what I think "deprecated" means - it is a sharp tool that can break if you are not tracking everything that changes, including every change in a software update. It is also easy to get wrong if the profile does not start with "(deny default)". An agent could escape if it can find a Mach service or some other system-call-mediated proxy service. Java, Silverlight, and Flash all had backdoor communication mechanisms with other instances of themselves that could be abused.
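For reference, a deny-by-default seatbelt profile in SBPL looks roughly like this. This is an illustrative sketch, not a complete or current profile - the exact operation names and the rules a real program needs vary by OS release:

```scheme
(version 1)
;; start from nothing: any operation not explicitly allowed is denied
(deny default)
;; selectively allow reading a specific directory tree
(allow file-read* (subpath "/usr/lib"))
;; with no explicit allow rules, operations like mach-lookup and
;; network-outbound remain denied
```

Omitting the deny-default line inverts the whole model: the profile then only blocks what it names, which is the "easy to get wrong" failure mode described above.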

The Sandboxing and Entitlements mechanisms are very different. Sandboxing can only drop access to resources; it cannot grant access that was not already there [1]. Entitlements are all about granting additional selective privileges, or making the sandbox NOT remove access (like full disk access or debug ability). Entitlements are bound to processes only and are non-transferable. This is in contrast to a capability-based system, where they can be passed around. Reasoning about capabilities is challenging because analysis effectively requires global knowledge of the system. Binding entitlements to libraries or Frameworks would turn them into capabilities.
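For comparison, entitlements are declared at code-signing time in a property list that the signature binds to the executable. A minimal illustrative example - the keys shown are real App Sandbox entitlement keys, but the combination is just a sketch:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- opt the process into the App Sandbox -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- selectively grant back outbound network access -->
    <key>com.apple.security.network.client</key>
    <true/>
</dict>
</plist>
```

Because these are baked into the code signature, they apply to the signed process and cannot be handed to another process at runtime - which is exactly the non-transferability point above.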

[1] a GUI app can restore access to files by using a trusted external selection process.

Edit: change footnote reference to prevent markup error.


This is true. I was being brash. Let me say instead that the split in reasoning and evaluation as it exists on macOS in this area is rough and potentially not needed. Granted, I don't have a better answer in my back pocket, and the fact that Apple has kicked the can for 15 years on trying to harmonize these is a sign it's hard.

Does this mean you tried to ship an App in the Apple App Store but could not because of some restriction?

Why would it mean that?

I took the "granularity doesn't cut it" comment to mean there aren't enough entitlements to eliminate the need for custom SBPL. It was followed by a sentence about apps that have temporary-exception SBPL. Combining the two seems to imply that if there were more entitlements, the custom SBPL might not be necessary. In the followup you noted that the split in reasoning and evaluation is rough and potentially not needed. I read this as a conclusion of wanting to do something but being unable to, because there were not enough entitlements to make it work, so custom SBPL would be necessary.

The runtime engine is not known to be Turing-complete. It has no expressions and cannot loop; only forward jumps are permitted.
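The termination argument is easy to see: if the program counter can only move forward, execution is bounded by the program length. A toy evaluator sketch with a hypothetical instruction set (not Apple's actual filter bytecode - just an illustration of why forward-jump-only programs always halt):

```python
def run(program, facts):
    """Evaluate a forward-jump-only filter program.

    program: list of (op, arg) tuples with hypothetical ops:
      ("match", name)   -> if name in facts, fall through; else skip next
      ("jump", offset)  -> jump forward by offset (must be >= 1)
      ("return", value) -> terminate with "allow" or "deny"
    """
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "return":
            return arg
        if op == "jump":
            assert arg >= 1, "only forward jumps are permitted"
            pc += arg
        elif op == "match":
            pc += 1 if arg in facts else 2
    return "deny"  # fell off the end: default deny

# Every instruction strictly advances pc, so the loop runs at most
# len(program) steps -- the language cannot express a loop.
prog = [
    ("match", "file-read"),   # if the requested operation is file-read...
    ("return", "allow"),      # ...allow it
    ("return", "deny"),       # otherwise deny
]
```

Because no instruction can move the program counter backwards, every run is bounded, which is one way to keep an in-kernel policy evaluator from being abused.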

"sandbox-exec" is deprecated in the sense of "please don't use this method to run sandboxes" rather than the mechanism going away.

If you are using "sandbox-exec" then you are likely maintaining your own seatbelt profile. Keeping those up to date can be challenging, especially for third parties, as any change to the underlying Frameworks and libraries can break a hand-crafted profile.

If you are using it to secure your own stuff, and you accept this without complaint, even for minor software updates, then you are going to be fine. Don't ship things to third parties without also accepting this. That is what this deprecation means.


I've written to <voxmeditantis@gmail.com> about how deceptive it was to put the Editorial Note at the end instead of up front. I stopped reading because sections felt fabricated - yet it was presented as an oral history or actual interview. What a terrible way to present the work of a pioneer.


I have received feedback from Vox. The article has been updated with a new leading paragraph indicating the fictional nature of the article.


There is something off about this piece. Particularly the section that starts "You passed away on your eighty-eighth birthday – 4th August 2020. Do you reflect on mortality?" I stopped reading after that.


An Editorial Note is at the bottom (as others have now noted); it should have been at the top. Had I not seen other comments I would likely have believed everything was made up. This is a terrible way to recount the memory of Frances Allen.


I have received feedback from Vox. The article has been updated with a new leading paragraph indicating the fictional nature of the article.


I did a maths undergrad degree and the way my blind, mostly deaf friend and I communicated was using a stylized version of TeX markup. I typed on a terminal and he read / wrote on his braille terminal. It worked really well.


Thanks! Did you communicate in "raw" TeX, or was it compiled / encoded for braille? Can you point me at the software you used?


Yes, mostly raw TeX, just plain ASCII - not specially coded for Braille. This was quite a long time ago, the mid-1980s, so not long after TeX had started to spread in the computer science and maths communities. My friend was using a "Versa Braille" terminal hooked via a serial port to a BBC Micro running a terminal program that I'd written. I cannot completely remember how we came to an understanding of the syntax to use. We did shorten some items because the Versa Braille only had 20 chars per "line".

He is still active and online and has a contact page: see https://www.foneware.net. I have been a poor correspondent with him - he will not know my HN username. I will try to reach out to him.


Now that I've been recalling more memories of this, I do remember there being encoding or "escaped" character issues - particularly with brackets and parentheses.

There was another device between the BBC Micro and the "Versa Braille" unit. The interposing unit was a matrix switch that could multiplex between different serial devices - I now suspect it might also have been doing some character escaping / translation.

For those not familiar with Braille, it uses a 2x3 array (6 bits) to encode everything. The "standard" (ahem, by country) Braille encodings are super-sub-optimal for pretty much any programming language or mathematics.

After a bit of a memory refresh: in "standard" Braille you only get ( and ) - and they both encode to the same 2x3 pattern! So in Braille ()() and (()) would "read" as the same thing.
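The collision is easy to demonstrate with Unicode Braille cells, where dots 2-3-5-6 (⠶) is the single cell traditional English literary Braille uses for both parentheses. A small illustrative sketch - the full letter table is elided, only enough to show the ambiguity:

```python
# Unicode Braille patterns start at U+2800; each of the six dots is a bit:
# dot1=0x01, dot2=0x02, dot3=0x04, dot4=0x08, dot5=0x10, dot6=0x20.
def cell(*dots):
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))

# In traditional English literary Braille, the opening and closing
# parenthesis are the SAME cell: dots 2-3-5-6.
BRAILLE = {"(": cell(2, 3, 5, 6), ")": cell(2, 3, 5, 6)}

def transcribe(text):
    # pass through anything we have no cell for (letters elided here)
    return "".join(BRAILLE.get(ch, ch) for ch in text)

# transcribe("()()") and transcribe("(())") produce identical output:
# the nesting information is simply gone.
```

Any scheme for reading nested expressions by touch therefore has to layer a convention on top, which matches the scoping convention mentioned below.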

I now understand why you were asking about the software used. I do not recall how we completely worked this out. We had to have added some sort of convention for scoping.

I now also remember that the Braille terminal aggressively compressed whitespace. My friend liked to use (physical) touch to build a picture, but it was not easy to send spatial / line-by-line information to the Braille terminal.

Being able to rely on spatial information has always stuck with me. It is for this reason I've always had a bias against Python, it is one of the few languages that depends on precise whitespace for statement syntax / scope.


Thank you so much for all this detail. This is very interesting & quite helpful, and it's great you were able to communicate all this with your friend.

For anyone else interested: I wanted to be able to typeset mathematics (actual formulas) for the students that's as automated as possible. There are 1 or 2 commercial products that can typeset math in Braille (I can't remember the names but can look them up) but not priced for individual use. My university had a license to one of them but only for their own use (duh) and they did not have the staff to dedicate to my students (double duh).

My eventual solution was to compile LaTeX to HTML, which the students could use with a screen reader. But screen readers were not fully reliable, and very, very slow to use (compared to Braille), making homework and exams take much longer than they needed to. I also couldn't include figures this way. I looked around but did not find an easy open-source solution for converting documents to Braille. It would be fantastic to be able to do this, formulas and figures included, but I would've been very happy with just the formulas. (This was single-variable calculus; I shudder to think what teaching vector calc would have been like.)

FYI Our external vendor was able to convert figures to printed Braille, but I imagine that's a labor intensive process.

Partway through the term we found funding for dedicated "learning assistants" (an undergraduate student who came to class and helped explain what was going on, and also met with the students outside of class). This, as much as or more than any tech, was probably the single most impactful thing.


I have experienced some similar issues. I think some of it is related to the "locked" state of the device. Siri needs context data to answer, particularly for the "mom" or destination-style questions - specifically, contacts or recent-places data. This context isn't stored remotely, but is provided by the device to Siri each time. I think when the phone is locked it doesn't have access to that data (for reading or writing). When I say "Siri", I mean both the on-device and remote parts of it.

I think this also interacts with countries and states that have (possibly misguided) strict laws forbidding the "touching" of phones "while driving". My experience suggests that when I use Siri while driving and the device is locked, it just gives up - I sort of see the start of it working, then, bam, it stops. If I retry, I suspect that I've somehow "looked" at the phone in frustration, and it saw my attention and unlocked. I now wonder if where I have placed the device makes a difference.

It does seem to work much better (when driving) if the device is already unlocked.

I also see odd things when using Shortcuts for navigation. If I've previously asked for walking directions and then speak the shortcut while driving, it won't give directions until I switch to the "car" icon in Maps. I think it might be trying to calculate the 15 km walking directions, but it doesn't complete before I, frustrated, tell it to stop.

When Siri doesn't work, it is usually at the times when I need it most. This is definitely a multiplier in dissatisfaction.


After writing this I decided to look at my shortcut. The action seems to have been a simple "get directions to <place>" and sent verbatim to Siri.

I was not able to edit / update it! However, there was now a new "maps" option for `Open <type> directions from <Start> to <Destination>`

Where <type> can now be {driving, walking, biking, transit} and <Start> is Current Location by default.

After updating, this now seems to correctly set actual driving directions, even if I'd previously set up a walking route!


Perhaps using AI assistance is good OPSEC. It could help to shield the author from stylometry or author profiling.


And then the author posts it himself to Hacker News. Nah, that's not opsec.


To get feedback / commentary you likely need to change the permissions on the repository, currently it seems to be private.


You mean the page doesn't open? Thank you for that… and for all the fish.


I see a different error now - a 404 with "There isn't a GitHub Pages site here".


Ah yes, when I switched it to public the old “obfuscated” URL went away. I posted it normally, but it still needs a bit of work. You can play with it, though.

