
How often are mongo instances exposed to the internet? I'm more of an SQL person and for those I know it's pretty uncommon, but does happen.




From my experience, MongoDB's entire raison d'être is "laziness".

* Don't worry about a schema.

* Don't worry about persistence or durability.

* Don't worry about reads or writes.

* Don't worry about connectivity.

This is basically the entire philosophy, so it's not surprising at all that users would also not worry about basic security.


To the extent that any of this was ever true, it hasn’t been true for at least a decade. After the WiredTiger acquisition they really got their engineering shit together. You can argue it was several years too late but it did happen.

I got heavily burned pre-wiredtiger and swore to never use it again. Started a new job which uses it and it’s been… Painless, stable and fast with excellent support and good libraries. They did turn it around for sure.

Although interestingly, for all the mongo deployments I managed, the first time I saw a cluster publicly exposed without SSL was postgres :)

Not only that, but authentication is much harder than it needs to be to set up (and is off by default).
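
For what it's worth, turning it on isn't much code once you know the incantation. A rough sketch with pymongo, assuming mongod was started with --auth (user names and passwords here are invented):

    # Sketch only: create the first admin user via the localhost exception,
    # then connect with credentials like every other client has to.
    from pymongo import MongoClient

    admin = MongoClient("mongodb://localhost:27017/").admin
    admin.command("createUser", "siteAdmin",
                  pwd="change-me",
                  roles=[{"role": "userAdminAnyDatabase", "db": "admin"}])

    client = MongoClient(
        "mongodb://siteAdmin:change-me@localhost:27017/?authSource=admin")

The point stands, though: none of this happens unless you go looking for it.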

Most of your points are wrong. Maybe only the first one is valid-ish.

I'm sure there are publicly exposed MySQLs too

There are many more exposed MySQLs than MongoDBs:

https://www.shodan.io/search?query=mongodb

https://www.shodan.io/search?query=mysql

https://www.shodan.io/search?query=postgresql

But this must be proportional to the overall popularity.


Ultimate webscale!

A highly cited reason for using mongo is that people would rather not figure out a schema. (N=3/3 for “serious” orgs I know using mongo).

That sort of inclination to skip doing the right thing now, even if it means a headache down the line, probably overlaps with "let's just make the db publicly exposed" instead of doing the work of setting up an internal network.


> A highly cited reason for using mongo is that people would rather not figure out a schema.

Which is such a cop out, because there is always a schema. The only questions are whether it's designed and documented, and where it's implemented. Mongo still requires some very explicit schema decisions, otherwise performance will quickly degrade.
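
For example (collection and field names invented), even a "schemaless" store needs deliberate index decisions or reads degrade into collection scans as data grows - roughly, in pymongo:

    # Rough sketch: index choices are schema decisions, whatever the marketing says.
    from pymongo import MongoClient, ASCENDING, DESCENDING

    posts = MongoClient()["app"]["posts"]
    posts.create_index([("author_id", ASCENDING), ("created_at", DESCENDING)])

    # Served by the compound index above; without it, a full collection scan.
    recent = posts.find({"author_id": 42}).sort("created_at", -1).limit(20)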


Fowler describes it as Implicit vs Explicit schema, which feels right.

Kleppmann chooses "schema-on-read" vs "schema-on-write" for the same concept, which I find harder to grasp mentally, but it describes when schema validation needs to occur.
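
A rough illustration of the two styles in pymongo (collection and field names invented):

    from pymongo import MongoClient

    db = MongoClient()["app"]

    # Schema-on-write: the database validates documents at insert time.
    db.create_collection("users", validator={
        "$jsonSchema": {
            "required": ["email", "created_at"],
            "properties": {"email": {"bsonType": "string"}},
        }
    })

    # Schema-on-read: the database accepts anything; the reader checks the shape.
    doc = db.events.find_one({"type": "signup"})
    email = doc.get("email") if doc else None  # may or may not be present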


I would have hoped that there would be no important data in mongoDB.

But now we can at least rest assured that the important data in MongoDB is just very hard to read, given the lack of schemas.

Probably all of that nasty "schema" work and tech debt will finally be done by hackers trying to make use of that information.


There is a surprising amount of important data in various Mongo instances around the world. Particularly within high finance, with multi-TB setups sprouting up here and there.

I suspect that this is in part due to historical inertia and exposure to SecDB designs.[0] Financial instruments can be hideously complex and they certainly are ever-evolving, so I can imagine a fixed schema for an essentially constantly shifting time-series universe would be challenging. When financial institutions began to adopt the SecDB model, MongoDB was available as a high-volume, "schemaless" KV store, with a reasonably good scaling story.

Combine that with the relatively incestuous nature of finance (they tend to poach and hire from within their own ranks) and the average tenure of an engineer in one organisation being less than 4 years, and you have an osmotic process of spreading "this at least works in this type of environment" knowledge. Add the naturally risk-averse nature of finance[ß] and you can see how one successful early adoption will quickly proliferate across the industry.

0: This was discussed at HN back in the day too: https://calpaterson.com/bank-python.html

ß: For an industry that loves to take financial risks - with other people's money of course, they're not stupid - the players in high finance are remarkably risk-averse when it comes to technology choices. Experimentation with something new and unknown carries a potentially unbounded downside with limited, slowly emerging upside.


I'd argue that there's a schema; it's just defined dynamically by the queries themselves. Given how much of the industry seems fine with dynamic typing in languages, it's always been weird to me how diehard people seem to be about this with databases. There have been plenty of legitimate reasons to be skeptical of mongodb over the years (especially in the early days), but this one really isn't any more of a big deal than using Python or JavaScript.

Yes there's a schema, but it's hard to maintain. You end up with 200 separate code locations rechecking that the data is in the expected shape. I've had to fix too many such messes at work after a project ground to a halt. Ironically, some people will do schemaless but use a statically typed lang for regular backend code, which doesn't buy you much. I'd totally do dynamic there. But a DB schema is so little effort for the strong foundation it sets for your code.

Sometimes it comes from a misconception that your schema should never have to change as features are added, and so you need to cover all cases with 1-2 omni tables. Often named "node" and "edge."


The adage I always tell people is that in any successful system, the data will far outlive the code. People throw away front ends and middle layers all the time. This becomes so much harder to do if the schema is defined across a sprawling middle layer like you describe.

We just sit a data persistence service in front of mongo, so we can enforce some controls for everything there if we need them, but quite often we don't.

It's probably better to check what you're working on than to blindly assume this thing you've gotten from somewhere is the right shape anyway.


As someone who has done a lot of Ruby coding, I would say using a statically typed database is almost a must when using a dynamically typed language. The database enforces the data model, and the Ruby code is mostly just glue on top of that data model.

What's weird to me is when dynamic typers don't acknowledge the tradeoff of quality vs upfront work.

I never said mongodb was wrong in that post, I just said it accumulated tech debt.

Let's stop feeling attacked over the negatives of tradeoffs


Whatever horrors there are with mongo, it's still better than the shitshow that is Zope's ZODB.

Are you guys serious with these takes?

You very often have both NoSQL and SQL at scale.

NoSQL is used for high availability of data at scale - iMessage famously uses it for message threads, EA famously uses it for gaming matchmaking.

What you do is have both SQL and NoSQL. The NoSQL is basically caches of resources for high availability. Imagine you are making a social media app... Yes of course you have a SQL database that stores all the data, but you maintain API caches of posts in NoSQL.

Why? This gets to some of your other black vs white insults: NoSQL is typically WAY FASTER than SQL. That's why you use it. It's way faster to read a JSON file from a hard drive than it is to query a SQL database, always has been. So why not use NoSQL for EVERYTHING? Well, because you have duplicated data everywhere since it's not relational, it's just giant caches essentially. You also will get slow queries when the documents get huge.
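
As a rough sketch of the pattern (all names invented, Postgres as the system of record, Mongo as the denormalized read cache):

    import psycopg2
    from pymongo import MongoClient

    pg = psycopg2.connect("dbname=app")
    feeds = MongoClient()["cache"]["feeds"]

    def publish_post(user_id, body):
        # 1. Write to the relational source of truth.
        with pg, pg.cursor() as cur:
            cur.execute(
                "INSERT INTO posts (user_id, body) VALUES (%s, %s) RETURNING id",
                (user_id, body),
            )
            post_id = cur.fetchone()[0]

        # 2. Push the pre-joined document into the cache the API reads from.
        feeds.update_one(
            {"_id": user_id},
            {"$push": {"posts": {"id": post_id, "body": body}}},
            upsert=True,
        )
        return post_id

Reads hit the cache document directly; the SQL side stays the place you go for joins, consistency, and rebuilding the cache.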

Anyway you need both. It's not an either/or thing. I cannot believe this many years later people do not know the purpose of SQL and NoSQL and do not understand that it is not a competition at all. You want both!


Because nobody uses mongo for the reasons you listed. They use redis, dynamo, scylla or any number of enriched KV stores.

Mongo has spent its entire existence pretending to be a SQL database by poorly reinventing everything you get for free in postgres or mysql or cockroach.


Redis and Dynamo are NoSQL, genius, and I said NoSQL the entire time.

Yeah fair, I was being a bit lazy here when writing my comment. I've used nosql professionally quite a bit, but always set up by others. When working on personal projects I reach for SQL first because I can throw something together and don't need ideal performance. You're absolutely right that they both have their place.

That being said, the question was genuine - because I don't keep up with the ecosystem, I don't know whether it's ever valid practice to have a nosql db exposed to the internet.


What they wrote was pretty benign. They just asked how common it is for Mongo to be exposed. You seem to have taken that as a completely different statement.

I mean, they said it's rarely used when in fact it's widely used by some of the world's biggest companies at the highest scale the internet knows. The other guy had a harsher comment, sure, maybe I should duplicate my reply to them, but who knows what kinds of rules that breaks on this site lmao. Happy Christmas & New Year buddy!

They did not say it's rarely used at all.

The article links to a shodan scan reporting 213K exposed instances https://www.shodan.io/search?query=Product%3A%22MongoDB%22

It could be because when you leave an SQL server exposed it often turns into much worse things. For example, without additional hardening, an exposed PostgreSQL superuser account can own the entire host machine: features like COPY ... FROM PROGRAM let a superuser run arbitrary shell commands on the server, and they aren't disabled by default.

The end result is "everyone" kind of knows that if you put a PostgreSQL instance up publicly facing without a password or with a weak/default password, it will be popped in minutes, and you'll find out about it because the attackers are lazy and just run crypto-mining malware, etc.


My university has one exposed to the internet, and it's still not patched. Everyone is on holiday and I have no idea who to contact.

No one. If you aren't in the administration's good graces and something shitty happens, even unrelated to you, you've put a target on your back as suspect #1.

"Look at me. I'm the DBA now"

-JS devs after "Signing In With Facebook" to MongoDB Atlas

AKA me

Sorry guys, I broke it


For a long time, the default install had it binding to all interfaces and with authentication disabled.
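
You can still see the legacy of that default today. A quick way to check whether an instance is wide open (the address is a placeholder, sketch only):

    from pymongo import MongoClient
    from pymongo.errors import OperationFailure

    client = MongoClient("mongodb://203.0.113.10:27017/",
                         serverSelectionTimeoutMS=3000)
    try:
        print(client.list_database_names())  # succeeds only if auth is off
    except OperationFailure:
        print("auth is enabled")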

Often. Lots of data leaks happened because of this. People spin it up in a cloud VM and forget it has a public IP all the time.


