
Perhaps the filtering would also be done by a third party. This isn't entirely foreign now: if you write an article online, you can't control the comments on Reddit, HN, or 4chan. However, users have some control over which comments they see, in that they can select which sources of comments they want, and each source has different moderation policies. The roles of publishing content and moderating comments are fully decoupled.


>SIGSEGV is a very important signal. It happens when your program tries to access memory that it does not have. An appropriate reaction might be to

    allocate more memory
    read some data from disk into that memory
    do something with garbage collection (but what? I'm confused about this still.)
What? Are there any Real World Programs which do anything other than print a stacktrace and exit? I don't think this person gets what a segfault is.


While I agree that it'd be unusual for regular applications to resort to SIGSEGV to implement features, low-level systems code, and VMs in particular, often does so for performance reasons. The HotSpot JVM, for instance, uses SIGSEGV to force a thread into a safepoint. The JIT inserts a read instruction, at backward branches among other places, which tries to read from a page in memory called the polling page. Said page is mapped during normal operation of the application. When the VM needs to bring threads to a safepoint, say to perform a GC, it does so by unmapping the polling page. This causes each active thread to fault on the read and enter the SIGSEGV handler, which notices that the faulting address falls within the polling page and executes the appropriate safepoint actions. Libc implementations use a similar technique to lazily commit pages for a thread's stack.


Windows uses page faults in the stack guard page to lazily commit stack pages. Compilers allocating large structures on the stack need to generate loops touching each allocated page in turn to guarantee the allocation. On Windows the lazy allocation can be done entirely in user code - it doesn't need to be an OS feature. I believe pthreads uses the same technique on Linux; very far from sure though.

Generational GC can use segfaults to detect writes to older generations and mark pages that need scanning for references to younger allocations. They can also act as a way of triggering a safe point without polluting the branch prediction cache: unmap a page when you want an interrupt, and periodically touch the page in code that needs interrupting (loops etc.). Virtual machines for languages like Java can and do use these techniques.


If you had a green-threaded program and one of the green threads segfaulted, you would probably want to catch SIGSEGV and kill just that green thread (not the OS thread running it).

I've also seen it used to implement a distributed malloc. When a segfault occurs, the handler messages the program's peers asking if they have the data for that address. If so, a peer sends the page, and the handler maps in a new page at that address with the correct data in it. This is essentially implementing a page fault handler in user space (for network-backed memory).


Why would you only want to kill that green thread? On any thread implementation I'm aware of, an unhandled segfault kills the whole process. Anything else is disaster waiting to happen.


I've read that one of the original Unix shells (Thompson's or Bourne's) used a combination of the sbrk()/brk() system calls and SIGSEGV to do dynamic memory allocation for itself. I can't find a reference to this via Google, as any information about old shells and SIGSEGV is swamped by modern people talking about bash and bad programs, or trapping SIGSEGV in scripts, or some such. The "heirloom sh" code doesn't have anything like that, but it's clearly been tinkered with, as it uses sigaction(), a later interface standardized by POSIX (descended from BSD's sigvec()).

So, feel free to ignore this vague memory.


I think most arguments for abortion rely heavily on the right of the mother to bodily autonomy. That is, once the baby is delivered, those arguments fall apart. Breastfeeding your child because you want the "experience" is completely different.


breastfeeding is a hell of a lot more than 'an experience'


Breastfeeding is objectively in the best interests of the child. The "birthing experience" is what was being mocked.


> [The media] simply do not see that as their role. For most of them, a posture of “neutrality” and “opinion-free” blankness are the highest values.

> ... that’s all one can expect from large sectors of the U.S. media: cowardly neutrality, feigned analytical objectivity ...

> Shortly before this article was published this morning, Cuomo re-appeared on Twitter and apparently had a change of heart from last night’s proclamation. ... Sometimes, social media shaming works.

Am I the only one really bothered by what the author of this article thinks news should be? As an instrument to shut down political ideas you don't like? I find it ironic that he criticizes the idea of limiting freedom of speech to fight terrorism, but then thinks we should turn around and silence bigots.


I think journalists have opinions - and sharing those opinions is a big part of 'news', whether you like it or not. It's constantly done, either through direct words or through more subtle means. That's why you have 'conservative' and 'liberal' news outlets.

Moreover, the media also decides what's important - think about how they've covered Trump vs. Sanders. Trump sells ratings, so they promote his message in a way that's unequal to other candidates, and then say 'oh, we're being neutral.'


I agree with you that Trump gets a ridiculously disproportionate amount of coverage -- not a surprise, considering how much money covering him makes -- but that's not how I interpreted the article's complaints. It read to me like the author was upset that the media weren't actively denouncing him, as if they had some moral obligation to. If I've misinterpreted that, then that's my error.


Yeah, don't know. Agree w/ you that journalists/ reporters do not need to have moral obligations, though [unless they make moral statements, and are not electing to do so in certain cases] - but even that's a slightly different issue.


It seems like everyone here has this mentality that "of course he's evil, he's a thief!" Can someone explain what actual harm this causes anyone? I think it's ridiculous that people pay so much for his "work", but that's their choice. It's not like people are going to him as an alternative to the original source; if he didn't copy it, nobody would give a damn about the original.

EDIT: I'm sorry if this came off as aggressive, but somebody's downvoted me without answering my question. I seriously want to know why his actions are so terrible. What harm does this cause to anyone?


Does ohshit stand for something else clever? I know the Android API uses "What a Terrible Failure" to serve a similar function.


There's also ohshite() :-)


This is what confuses me, so maybe someone with a better understanding of how this market works can elaborate. How is it that the drug is not patented but the manufacturer has the exclusive right to sell it? It blows my mind that one company can overnight make such a huge change and there is no competitor to turn to.


In the US, you still have to get FDA approval to sell drugs that claim to be reformulations of another manufacturer's product. This isn't entirely baseless - demonstrating your version's bioequivalence to the original is part of the process, which doesn't seem like an unreasonable thing to require.

Unfortunately, the whole approval process still costs in the low millions of dollars at a minimum, so out-of-patent drugs that are a pain to manufacture and/or have small numbers of patients just aren't financially viable once you factor in the approval costs.

Part of what you see when a drug company steps in and jacks up prices like this is that the new owners know that the FDA approval process, combined with their pre-existing manufacturing capability, gives them a moat too expensive for other companies to cross given the size of the market. So if they're willing to take the reputational hit that the previous owners weren't, they can raise prices enormously, since the patients have nowhere else to go.


I am several steps removed from the industry but to my knowledge -

There is still a regulatory approval process to be able to manufacture the drug. The cost and time required to receive this approval; the need to then market the drug as a generic alternative to a customer base that often doesn't see itself as the ultimate payor (i.e. the consumer thinks the insurance companies / government are paying) and may prefer known names; and the original manufacturer's ability to lower its price in response all combine to push competition away from low-volume drugs like this one.


I'm not very familiar with the drug market myself, but game theory suggests why they can do this. From what I have read, drugs cost a lot of money and take a few years to get to market even when they are not patent protected. This means that when a competitor starts investing in a competing product, the incumbent can simply cut the price to a point where that investment is no longer profitable. Potential competitors know this, and that's why they won't enter the market.


According to a CNBC interview last week the product is protected via trade secrets. People know what's in it, but not how to make it. And it's some weird stuff (I think pituitary glands from pigs or something equally weird are key components - not sure there's a lot of open-source knowledge on how to deal with that ingredient pipeline.)

The CEO's pitch was that they believe the drug is useful in more roles than it's currently used for, so they're taking the increased money to perform the studies required to allow the drug to be used to treat more diseases for which there is no alternative.

And right now this is a last-ditch drug - it's used once all other therapies have failed.

So they're trying to increase market share by doing expensive studies, which is why this drug is still produced and where the money is going.

At least according to the CEO, and my memory.


Gcc with default options does. I was curious about this and tested it, expecting to get a segmentation fault, but to my surprise string constants are executable. A mutable string, however (char data[] instead of char *data), is not executable. I don't think this is a totally insane vulnerability or anything, since no memory is both writeable and executable at once.


You may be interested to know that there has recently been a notable movement in systems security research to push for an "execute-only" permission, which makes executable memory unreadable as well as unwritable. This has come in response to certain attacks (e.g. http://www.ieee-security.org/TC/SP2013/papers/4977a574.pdf) that use scripting languages (such as JS or Flash's ActionScript) to read all of memory at exploit time and use this knowledge to craft a payload that bypasses ASLR.

So, works such as http://www.ics.uci.edu/~sjcrane/papers/sjcrane15_readactor.p... (and several others) are attempting to come up with systems that can prevent this type of attack by preventing executable memory from being read in the first place. This is made difficult not only because many processors can't support such a permission in any efficient fashion, but also because many compilers frequently mix together executable code and static data, such as strings. The second paper I linked is about instrumenting LLVM to ensure that it always outputs readable data and code in separate sections.

Having been involved in such research myself, I can confidently answer the parent's question too: if anything, a majority of modern compilers freely mix code and data. In addition, there is often data that is directly related to code, such as the tables of addresses used for a switch statement, but that is never intended to be directly executed. Even though it would work just fine to place such tables in a read-only section, it may make logical sense to the compiler authors to place them in the vicinity of where they are used, that is, in the executable section alongside the code.


Something to keep in mind:

Of all the Tox clients, uTox is written in C, using its own UI framework that directly interfaces with X11 and the WinAPI, and this makes the code itself a mess. The reasoning behind it is that it's somewhat of a meme on /g/ that anything but pure C is "bloat". I tried contributing a bit last year, did some work on copy/pasting inline images, and found a remote code execution vuln. Then I got fed up with how terribly confusing the codebase was for something so simple. I'm not a professional programmer or anything, just a student, but it seems like it's the same for everyone else in the project.


I have no experience using Active Directory. Is this common practice? I would personally not even classify this as a bug; it seems like common sense that running code downloaded over an unauthenticated connection is bad. How is this different from saying there are critical security bugs in HTTP/FTP, since the same type of attack is possible (but well known)?

