I think so, this looks like the feature where adding a new linked device transfers messages from your phone.
What still isn't possible as far as I am aware is transferring messages from Android to iOS or vice versa. Last time I looked into this was a few weeks ago.
It is just one comment on Y Combinator's link aggregation service. People who haven't tried starting a serious FOSS project do not understand how unsustainable it is. The funny thing is, as you mentioned, the monetisation isn't even imposed on the software itself; the software is entirely free. It applies only to the otherwise *gratis* service that hosts it.
Entitlement knows no bounds. Don't worry about those disheartening comments, they are not coming from a place of genuine concern.
An organization's failure to meet productivity expectations without resorting to questionable psychophysiological manipulation of its employees should not be welcomed with open arms.
Yes, deploying such methods can be effective given the limited physical access to employees, but ultimately it is an unsustainable form of control, and one that arguably breeds a cohort of disinterested middle managers over time. An organization that succeeds at its goals has the most motivated workers and no need for such methods of control.
Ads are manipulation; that is not a reputation, that is a definition. There is no logical path by which a message can be an ad without also being manipulative. One can't come without the other. An ad is non-consensual. You don't ask for ads. Ads don't answer questions you specifically are asking; they exploit your demographic, using keywords and phrases their research shows to be effective at steering your decision making toward doing exactly what they want (buying something).
If you really think ads are not by definition intrusive, I'm curious to hear why, and open to reconsidering my stance.
What a strong negative opinion; it really proves my point. "Ads" is by definition a very broad term. Maybe you only see ads in the form of TV / YouTube / mobile game ads that halt your activity and force you to consume the ad's content.
Ads also come in the form of storefront banners. By your definition, there shouldn't be a McDonald's logo above the shop's front door. So how would you know that building is a McDonald's?
Yes, ads are generally intrusive and non-consensual. But if you say nobody asks for ads, you're very wrong. If you go to a forum for game recommendations, you're looking to be advertised to. Steam's recommended section, hot deals, and storefront pages are all advertisements. Game trailers are advertisements. They just don't feel as intrusive as YouTube ads, and they're an example of good ads in my book.
Google is pretty much unusable today unless you're looking for a specific website. If you use it to look up information, learn, or discover new things on the web, it is just SEO and LLM spam. Features like shopping and LLM-powered Q&A are quite misleading and potentially dangerous for a trusting user.
I understand, and even agree with, the notion that deep societal distrust is unhealthy and problematic. However, that doesn't really answer the question of why we need that trust in the first place [to regulate]. A company with that much power is in fact harder to regulate, which in turn means we have to trust the public institutions even more to do their jobs.
I don't see why we should put ourselves in a position where we need that kind of trust. Another way to put it is, why burden the government with an unsustainable uncompetitive market? For what?
OpenAI is a for-profit private corporation offering a commercial service that has no bearing on the most important concerns governments are elected to tackle.
>I don't see why we should put ourselves in a position where we need that kind of trust. Another way to put it is, why burden the government with an unsustainable uncompetitive market? For what?
I'm not sure I follow this exactly; isn't regulation supposed to help prevent an `unsustainable uncompetitive market`?
The market has shown over and over that, left to its own devices, things will not balance out.
> Another way to put it is, why burden the government with an unsustainable uncompetitive market? For what?
Because the societal costs of certain industries' unregulated activities do more harm than the economic cost of doing that regulation.
Despite what the Libertarian Party's pamphlet might say, regulation is invariably reactive rather than proactive; the saying is "safety codes are written in blood", after all.
Note that I'm not advocating we "regulate AI" now; I believe we're still in the "wait-and-see" phase (whereas we're definitely past that for social media services like Facebook, but that's another story). The risks are hypothetical but plausible, and in the event they become real, we (society) need to be prepared to respond appropriately.
I'm not an expert in this area; I don't need to be: I trust people who do know better than me to come up with workable proposals. How about that?
If you'll excuse my departure from this site's normal lexicon, I believe that without pre-emptive regulation of AI technology advancement and mergers, the "wait and see" phase quickly becomes a "fuck around and find out" phase.
Regulatory bodies have long lagged behind in their understanding of technology, as they did for the first few decades of the world wide web's development (and arguably still do). I don't think we can afford that reactionary lag time with a technology capable of so profoundly transforming our societies.
I hope we can nudge developments in a positive direction before there is an all-out AI arms race. I understand the nuance in balancing regulation of your own country's AI efforts against making sure you are not outstripped. Perhaps we need something akin to the international treaties that prevented a dash to colonize outer space.
I'm well aware of the common usage; try to see how it applies here in the abstract sense. Those who believe no regulation is necessary will have the "finding out" delivered to them by brilliantly and hilariously malicious agents.
And how many 'innocent' didn't-test-it-enough targeted user agent incidents do we have to witness before we call it what it is and stop making excuses?