A bit tangential, but in Poland we also had such traffic cameras with public access (it wasn't a live feed, but a snapshot updated every minute or so). It was provided by a company which won a lot of tenders for IT infrastructure around roads (https://www.traxelektronik.pl/pogoda/kamery/).
What is interesting to me is that public access to the cameras was blocked a few months after the war in Ukraine started. For a few months I could watch the large convoys of equipment heading towards Ukraine, and my personal theory is that the Russian MoD was watching too. I haven't seen any reports about that, just my personal observation.
Would have been a good opportunity to inject misinformation after they noticed (assuming that's what happened)... Convoy passing by? Quick, splice in alternative footage with equivalent traffic/weather conditions. (Or an infinite convoy to scare them.)
I don't know how much you have used Telegram, but it's riddled with absolutely vile stuff.
You open the "Telegram nearby" feature anywhere and it's full of people selling drugs and running scams. When I mistyped something in the search bar, I ended up in an ISIS propaganda channel (which was straight up calling for violence/terrorism). All of this in unencrypted public groups/channels, of course (I'm pretty sure it's the same with CP, although I'm afraid to check for obvious reasons).
I think there is a line between "protecting free speech" and being complicit in crime. This line has been crossed by Telegram.
I use it a lot, and I run some large groups on it. I don't see any of that stuff, I've never gone looking for it, and I'm not even sure how to look for it. Can you tell me some examples of what to search for to see what you're talking about?
Not OP, but have to use the cloud version of Jira and Confluence. My biggest complaint is that they put the "Yes! Send me news and offers from Atlassian about products, events, and more." checkbox in the place where I would expect the "Remember me" checkbox.
Many measuring devices used in e.g. Germany have both proper units and Freedom units printed on them. It's probably just easier to have one model that you can sell anywhere on the globe. Economies of scale and all that.
I'm Norwegian, and it's very common in Norway as well to have e.g. rulers and other measuring devices with both inches and metric units. It's if anything pretty rare to have just one or the other, unless it's a format where displaying both affects usability - e.g. makes the writing too small.
I know everybody says how bad SMS 2FA is, and how we should replace it with the next cool thing $BIGCORP invented (thus requiring you to have an account with them, which only defers the problem).
But couldn't we pressure the telecoms to improve it?
I have an idea that would make SIM swaps way harder to execute. Namely, a website that wants to authenticate you should be able to query the telecom for some kind of SIM card ID. This would happen before sending a 2FA code.
With such a feature it would be easy to store the SIM card ID in a database when enrolling the phone number. Later, when the user tries to authenticate and the ID does not match what was saved before, the account is locked. For enterprise accounts you would need to explain yourself to IT, and for personal accounts a fallback 2FA would have to be used. Alternatively, the authentication could be delayed for a few days to give the legitimate owner of the SIM card time to react.
Another thing that could be added on top of this is sending an SMS to the old "inactive" SIM, alerting the original owner of the attack.
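Here's a rough sketch of what the check could look like on the website's side. The telecom lookup (`get_sim_id`, returning e.g. the ICCID) is purely hypothetical - no such standard API exists today - and all the other names are made up for illustration:

```
import datetime

def get_sim_id(phone_number: str) -> str:
    """Hypothetical telecom API: returns the ID (e.g. ICCID) of the SIM
    currently active for this subscriber. Does not exist today."""
    raise NotImplementedError

class User:
    def __init__(self, phone_number: str, enrolled_sim_id: str):
        self.phone_number = phone_number
        self.enrolled_sim_id = enrolled_sim_id  # stored when 2FA was enrolled
        self.locked_until = None

def may_send_2fa_code(user: User) -> bool:
    """Compare the current SIM against the enrolled one before sending a code."""
    if get_sim_id(user.phone_number) != user.enrolled_sim_id:
        # SIM changed since enrollment: suspected SIM swap. Delay authentication
        # to give the legitimate owner time to react, and require fallback 2FA.
        user.locked_until = datetime.datetime.now() + datetime.timedelta(days=3)
        return False
    return True  # IDs match, safe to deliver the SMS code as usual
```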
EDIT: To add to this, here are some advantages of SMS 2FA over time-based OTP or passkeys:
1. My grandma can use it with her dumb phone and poor digital skills.
2. Your SIM card will most likely survive if your phone is destroyed due to water or physical damage. (Sadly not true for eSIM)
3. You can dictate an SMS/OTP code over the phone, or forward it to somebody you trust.
4. Banks can append a short description of what you are currently authorizing. It can tip you off in case your computer is infected with malware, or you are the victim of one of those TeamViewer scams.
I think this is conceptually wrong from a layering perspective, because you're punching through the abstraction and making it leaky on purpose. This just moves the problem down one layer in the stack - there will be legitimate new use cases for “SIM card ID spoofing” and then we're back to square one. Also, from a usability standpoint, “getting a new phone” is precisely the wrong time to lock users out of their accounts.
A perfect analogy would be trying to implement security with MAC addresses, but applied to the internet. It just makes a mess of an abstraction layer, and then you have to rebuild it because those abstractions were useful (MAC address spoofing has legitimate uses precisely because MAC addresses were used for security, and then people realized they needed to be able to transparently swap things out).
In your scheme, how do I transfer money from my bank after my phone is stolen and I have to get a new phone without access to the original SIM? Or access my email?
If that's just impossible, how do I fix the issue? What exactly is a “fallback 2FA”?
Probably one-time-use recovery codes you are supposed to print and keep in a safe place. In the case of a bank, this could also mean a trip to the nearest branch for ID verification.
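For illustration, generating such codes is trivial; a minimal sketch (the format and count here are arbitrary):

```
import secrets

# Ten one-time codes like "3f9a-c21d-77be"; the server would store only
# their hashes and invalidate each code after a single successful use.
codes = ["-".join(secrets.token_hex(2) for _ in range(3)) for _ in range(10)]
print("\n".join(codes))
```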
The same issue you mentioned applies to other 2FA methods. Your TOTP codes and passkeys also live on your phone, and YubiKeys can be stolen too.
Is that true? Large companies producing software usually have bespoke infrastructure, which barely anyone monitors. See: the SolarWinds hack. Similarly to the xz compromise, they added a trojan to the binary artifacts by hijacking the build infrastructure. According to Wikipedia, "around 18,000 government and private users downloaded compromised versions", and it took almost a year for somebody to detect the trojan.
Thanks to the tiered updates of Linux distros, the backdoor was caught in testing releases, not in stable versions, so only a very low percentage of people were impacted. Also, the whole situation happened because distros used the release tarball with an effectively closed-source generated script, instead of generating it themselves from the git repo. Again proving that it's easier to hide stuff in closed-source software that nobody inspects.
Same with getting hired. Don't companies hire cheap contractors from Asia? There it would be easy to sneak in some crooked or even fake person to do some dirty work. I was even emailed by a guy from China who asked if I was willing to "lend" him my identity so he could work at western companies, and he would share the money with me. Of course I didn't agree, but I'm not sure everybody whose email he found on GitHub did the same.
I'm not sure where you live (probably the US), but here in Europe you can easily get GPON ONTs from different manufacturers. There are even whole communities dedicated to replacing your ISP's ONT+modem combo: https://hack-gpon.org/quick-start
In some countries (e.g. Germany) it's super easy, because there are laws forcing ISPs to allow customer-provided equipment, while in other countries you need to do some hackery, like spoofing the serial number and other identifiers of the original modem. People even make utilities to scrape that information via the administrative interface and make the process semi-automated: https://github.com/StephanGR/GO-BOX
The biggest problem for me with ISP routers is their sheer size. They probably make them big so that they seem "powerful" to the average customer, who then picks that ISP believing its router provides superior Wi-Fi. New apartments built here (in Poland) even have nice boxes with the incoming fiber and an electrical socket where you are supposed to hide your router, but the shoebox-sized devices don't fit there, and you have to put them on the floor or somewhere else.

I myself have bought an SFP+ GPON transceiver (LEOX LXT-010S-H), which is the smallest form factor you can get. It goes inside my Banana Pi R3 router, together with an LTE modem for backup connectivity. And this setup is still smaller than the box provided by my ISP, which only served as a bridge between GPON and my router.
> The biggest problem for me with ISP routers is their sheer size. They probably make them big so that they seem "powerful" to the average customer, who then picks that ISP believing its router provides superior Wi-Fi.
Size is not just for show: if you have a MU-MIMO-capable device with multiple antennas, you need distance between them. Same with the spider-like gaming routers; it's not just aesthetics.
Where I live, I just plug the fiber straight into a Ubiquiti EdgeRouter X. All the setup needed, which was documented on a local forum, was to buy the correct SFP module and to set a specific VLAN tag on the WAN port.
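For reference, on EdgeOS that amounts to a couple of commands. This is only a sketch: eth5 is the SFP port on the ER-X-SFP, and the VLAN ID (100 here) is made up - yours comes from your ISP:

```
configure
set interfaces ethernet eth5 vif 100 description 'ISP WAN'
set interfaces ethernet eth5 vif 100 address dhcp
commit
save
exit
```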
For me as a person who learned programming in the times of Github/lab/whatever, the idea of sending patches via email is fucking ridiculous.
The typical interface for handling merge/pull requests adds so many useful things over just sending a patch - if the project has CI, I can immediately see whether the change even builds before going into the details of the PR.
Same for reviewing: each comment can be replied to separately or resolved, which serves as a nice TODO list for the original author.
I know there are some things people don't like (I think Linus was pretty vocal about it), but it seems to me they could easily be fixed by modifying the available open-source forges. This proposal here, for example, fixes the concern about centralisation, so I guess it's a good step forward.
Or maybe I'm just young and like shiny things and will eventually have a spiritual awakening and learn about the virtues of sending in patches via email.
> For me as a person who learned programming in the times of Github/lab/whatever, the idea of sending patches via email is fucking ridiculous.
For me as a person who learned programming before the Internet was a thing, and has worked both on projects that do patches by email and on projects that use web-based pull requests, I also prefer the web-based pull requests in every possible way. The email based workflow is baroque, painful both to send and to receive, lacking in features, and error-prone.
I think we can all agree that the main reason developers need distributed source control is to facilitate parallel development.
So, as a maintainer, the purpose of such a request for collaboration (be it a PR or a patch) is to determine whether: a) it does what is expected of it, and b) it matches the conventions of the existing code.
I, personally, can make a judgement about both of these things better with a patch that I apply locally than with a PR.
The main issue with PRs (in my opinion) is that they severely limit the context in which the changes are viewed. If I want to properly review a piece of code, I have to check it out and follow the diff in its proper context, either while debugging or even while just reading it.
Source forges, through the PR mechanism, encourage superficial reviews and insufficient attention to the merged code.
> I, personally, can make a judgement about both of these things better with a patch that I apply locally than with a PR.
How come? How is a text .patch file easier in this regard than a UI for essentially that same .patch? Can't you check out the PR in the same way you would 'apply' a patch to review it?
For what it's worth, you can just add .patch onto the end of a github PR URL to get that.
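For example (owner, repo and PR number made up), you can then apply it locally just like an emailed patch:

```
curl -L https://github.com/OWNER/REPO/pull/1234.patch | git am
```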
> I can of course do the same with the PR, but then it loses its convenience. :)
Not at all. A PR still retains all the other useful features, like separate threads for comments which can be individually marked as done, and CI checks. And not all PRs need a local checkout for a review.
> I, personally, can make a judgement about both of these things better with a patch that I apply locally than with a PR.
`gh pr checkout NNN` works very well to give a local copy for review, by pulling and checking out the PR branch. There are equivalent commands for gitlab.
Also:
> The main issue with PRs (in my opinion) is that they severely limit the context in which the changes are viewed. If I want to properly review a piece of code, I have to check it out and follow the diff in its proper context, either while debugging or even while just reading it.
Both PRs and emailed patches encourage reading and reviewing just the patch. With emailed patches, you need to prepare a local branch with the patches applied if you want to do the kind of review you describe; with a PR, you need to fetch and checkout the PR branch. I would argue that checking out the latter is substantially easier than the former, especially given the availability of command-line tools like `gh pr checkout`.
Also keep in mind that Josh is so experienced with the email workflow that he made a tool to manage multiple versions of a “patch series”: git-series
(git(1) doesn’t help you with maintaining patch series)
> I, personally, can make a judgement about both of these things better with a patch that I apply locally than with a PR.
FWIW, my git is configured in such a way that pulling from GitHub also fetches all PRs for that repo, so, in effect, all PRs are available locally, and I can review them however I want.
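(Presumably something like the well-known fetch refspec below; the repo URL is a placeholder. With it, a plain `git fetch` mirrors every PR head as a remote-tracking ref, so `git checkout origin/pr/1234` gives you any PR locally.)

```
[remote "origin"]
    url = https://github.com/OWNER/REPO.git
    fetch = +refs/heads/*:refs/remotes/origin/*
    fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
```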
I do think that email threads (when all parties are disciplined and have their clients properly configured) are superior to the PR+comments format for discussion, but applying patches from a mailbox has never seemed to me like a pretty and reliable way to go, so at least GitHub helps with that.
At any rate, git sucks for reviews in general because it still lacks Mercurial's "mutable-history"/evolve approach to safe and distributed history rewriting (or "diffs of diffs"), and that, to me, is saddening as one more piece of evidence that git's monopoly is causing stagnation and unnecessary pain in this space.
You can still get a “diff of diffs” with git range-diff (if I interpret you correctly). For example, it can tell you that in this iteration a paragraph was added to one commit, another commit was dropped, and another was expanded.
But you have to do most of the work of lining up what the previous version was yourself. (Inconvenient if you rebased and can't seem to find the previous version.)
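For example, with two iterations of a series on separate branches (names made up):

```
# Both iterations based on main:
git range-diff main topic-v1 topic-v2

# v2 was rebased onto a newer main, so give each range its own base:
git range-diff old-main..topic-v1 main..topic-v2
```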
Mercurial's evolve is more about tracking history rewrites and their context. If I submit a series of commits, and over the course of review you have me amend one mid-series, then rebase a few others on top of that, then add some new changes and split some existing ones, Mercurial will still know how every single commit in the resulting series relates to the ones from the original PR. As such, answering questions like "show me how commit 3 was updated several times over multiple series submissions" is trivial.
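Concretely, with the evolve extension enabled, that question is a single command (the revision is made up):

```
# Show the full rewrite history (amends, rebases, splits) of a changeset:
hg obslog -r 3f2e1d
```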
As I replied in a sibling comment: I agree that I can review PRs on my local machine, which is sometimes what I do, but that means that the upside of viewing them directly on the source forge is lost, and I might as well have received a patch. :)
> is saddening as one more evidence that git's monopoly is causing stagnation and unnecessary pain in this space.
I disagree strongly with this type of sentiment. Mercurial works, mercurial source forges exist. It's entirely possible to use them for development. Complaining about git's success despite that feels a bit disingenuous.
>> The main issue with PRs (in my opinion) is that they severely limit the context in which the changes are viewed. If I want to properly review a piece of code, I have to check it out and follow the diff in its proper context, either while debugging or even while just reading it.
> but that means that the upside of viewing them directly on the source forge is lost, and I might as well have received a patch. :)
I don't follow; GitHub gives it to you both ways, so I don't see the downside here. And on top of that, you can pull comments into your IDE of choice should you need to. I hardly see the problem; I suspect it isn't technical.
> I disagree strongly with this type of sentiment. Mercurial works, mercurial source forges exist.
Your disagreement doesn't align with reality, unfortunately. Could you name one Mercurial forge? There aren't many left anymore, and you have to go out of your way to host your Mercurial code somewhere. And while it's hard to name a Mercurial forge, it's easy to name high-profile projects which reluctantly converted to git: Python, Mozilla, PyPy. It only got worse over time, "interestingly", about as fast as git consolidated its monopoly. And that's the main flaw in your point: the assumption that success is based on merit alone, with no influence from peer/social pressure or network effects.
I would prefer `git request-pull etc` to actually make a pull request that anyone could see inside their editor, on GitHub, in their mailbox or wherever else they want.
The only way sending patches through email would actually work is if there were some interface on top of that process that managed it. At that point, it's probably easier to just use ActivityPub/HTTP as your protocol rather than SMTP/IMAP.
Absolutely, but it's depressing that we have to build a whole new system instead of dovetailing with a system that not only works but is already universally deployed.
Does it work for touchscreens too? When I plug a portable touchscreen monitor into my macOS laptop, the touch input gets sent to the screen where the cursor is (i.e. I touch the touchscreen, but it clicks something on the internal display, because that's where I left the cursor), instead of always going to the physical monitor associated with the touchscreen.