Database Review 2021 (bytebase.com)
80 points by miles21 on Nov 11, 2022 | hide | past | favorite | 67 comments


My prediction: SQLite will keep gaining popularity.

Especially among pragmatic software builders who run their own business and do not work for the man. A demographic that I expect to grow.

Talking about SQLite: Is there any downside to partitioning an SQLite db into multiple files?

For example, one of my systems has a table 'details' which is not vital for the system to work. It's just nice to have the data in this table. And it is pretty big, growing fast.

When I copy the DB over to another system, I don't need that table. So it would be nice to have like primary.db and secondary.db. With 'details' in secondary.db. Any downside to this approach? Are JOINS slower across two files than across two tables in the same file?


SQLite has seen runaway popularity on HN lately and I bought into the hype too for a while, but when I look under the hood, the 3rd party backup and replication stories just seem janky, tedious, and not yet mature. It's the kind of thing where a misconfiguration could wipe out everything and/or cost you hours of time.

>Especially among pragmatic software builders who run their own business and do not work for the man.

That's the perfect use case for a SaaS database. Administering a database adds zero business value and you'd be doing it to save at most $50 a month.


> SQLite has seen runaway popularity on HN lately and I bought into the hype too for a while, but when I look under the hood, the 3rd party backup and replication stories just seem janky, tedious, and not yet mature. It's the kind of thing where a misconfiguration could wipe out everything and/or cost you hours of time.

If you treat it as a database outside of your application code, yes. Its "database replication" tools are far behind.

But that's "using it wrong". Outside of the application code using it, a sqlite database should be treated as a file, that's the whole magic of it.

Backup and replication tools for files are great and mature, far more mature than for most databases. Something as simple as rsync already covers 99% of the use cases you'll need.

If you need "live replication across multiple servers" or something like that, you're completely outside the scope of what sqlite is made for.
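One caveat to the rsync point: a live database with open writers can be caught mid-write by a plain file copy. SQLite's online backup API takes a consistent snapshot instead; a minimal sketch via Python's built-in sqlite3 module (file names and schema are hypothetical):

```python
import sqlite3

# Stand-in for the live application database (hypothetical schema).
src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")
src.execute("INSERT INTO t (v) VALUES ('hello')")
src.commit()

# The online backup API copies a consistent snapshot, even while other
# connections are reading or writing the source.
dst = sqlite3.connect("backup.db")
src.backup(dst)
dst.close()
src.close()
```

For a cold file with no open connections, rsync of the single .db file is indeed all there is to it.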


I also find SQLite to be a poor solution for a backend database.

In particular - it makes it incredibly frustrating to manage multiple instances accessing it, and has some very strict limitations around how the underlying FS is mounted.

SQLite is an incredible tool - but the right place for it is in a deployed client application (where - seriously - it's a first class project and is an incredible joy). It's not really designed to be your web db.


That's absolutely not true, these solutions have all kinds of costs in terms of training, maintenance, and overall system complexity.


Prior art warning: every shack that rents out a /home/$customerid slice running phpMyAdmin is already on the seller side of the SaaS database market. That market is just not very interesting.


> When I copy the DB over to another system, I don't need that table. So it would be nice to have like primary.db and secondary.db. With 'details' in secondary.db. Any downside to this approach? Are JOINS slower across two files than across two tables in the same file?

I'm in the middle of refactoring my personal project such that "shared" data is in one database, and "personal" data is in a separate database; the idea being that every user will have a separate SQLite "connection", with their own "personal" data ATTACHed. I had reasonably extensive functional testing before the refactor, and after the refactor I didn't have any issues from a functional perspective.

Potential advantages:

- Each user can download their own "personal" database whenever they want

- This is essentially a form of "sharding", which should go a long way towards mitigating the "single writer" bottleneck; as the "shared data" will change much less frequently than the "personal" data. It should also make it fairly straightforward to distribute the workload across multiple servers / regions, should my project ever get that big.

Haven't done any performance testing yet.

Main issues I've encountered so far:

- Foreign key constraints across the databases aren't enforced; that's just a reduction in safety, however.

- Golang's "automatic connection management" doesn't play well with SQLite's "ATTACH" command: it expects to automatically open new connections, but the secondary connections won't have the ATTACHed databases. This is solvable, but something to watch out for.

As implied, I'm still in the middle of changing things over, so it's early days; but so far things seem positive.
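The shared/personal split described above can be sketched with Python's built-in sqlite3 (file and table names are hypothetical; note that ATTACH is per-connection, which is exactly what trips up pool-based drivers like Go's database/sql):

```python
import sqlite3

con = sqlite3.connect("shared.db")  # hypothetical file names
con.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")

# ATTACH must be re-issued on every new connection a pool opens.
con.execute("ATTACH DATABASE 'personal.db' AS personal")
con.execute("CREATE TABLE IF NOT EXISTS personal.favorites (item_id INTEGER)")

# Cross-database JOIN, with the attached schema qualified by its alias.
rows = con.execute("""
    SELECT items.name
    FROM personal.favorites
    JOIN items ON items.id = personal.favorites.item_id
""").fetchall()
```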


The big problem with SQLite: single user only. Although SQLite does have WAL, it still doesn't allow you to do concurrent writes unless you want to see file corruption.

This means SQLite is very much limited to things that work for one specific purpose and almost nothing else. Sure, you can go read-only, but then you have to run alongside the app on that specific node, too.

Another problem (although without solving the single-user mindset this wouldn't be a problem at all) is high availability. You want to make sure that your database won't get lost, don't you?

Things like Litestream [1] attempt to solve the SQLite backup problem by continuously saving the database state and shipping it to S3-compatible storage or a file system, but that's just half the story. You also want to make sure your operations don't stop. This is where HA comes in, saving you from an emergency fixup while you are enjoying your holiday.

It doesn't mean that nobody has tried to solve both of these problems though. Ahem, introducing rqlite [2]. Although my own experience was not great because the memory usage is quite high and doesn't fit my needs (the embedded device only has 512MB on it, and every byte counts, sorry), I guess that's the price to pay if you want to turn a non-multiuser, non-concurrent-access database into one... Another honorable mention would be LiteFS [3], but I haven't used it yet so I can't speak to it.

[1]: https://litestream.io/

[2]: https://github.com/rqlite/rqlite

[3]: https://github.com/superfly/litefs


Many of us here writing web apps for enterprises can use SQLite with WAL with no issues.

Number of concurrent users range from tens and rarely hit hundreds. SQLite can handle that kind of traffic without any issues.
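For reference, the usual recipe is one pragma plus a busy timeout; a minimal sketch in Python (the file name is hypothetical; WAL persists in the database file, so it only needs to be set once per database):

```python
import sqlite3

# timeout makes writers wait on a locked database instead of
# immediately failing with "database is locked".
con = sqlite3.connect("app.db", timeout=5.0)  # hypothetical file name

# WAL lets readers run concurrently with the single writer.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
con.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL
print(mode)  # 'wal' for a file-backed database
```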


why would you do all this work when postgres sorted this all out over a decade ago?


Setting up Postgres is a PITA compared to SQLite, which comes bundled with Python these days. Obviously it's going to be a trade-off as to which one causes you more pain.


No docker access? postgres images are available in seconds.


So the solution is pulling down additional gigabytes of images and runtime to run the database?


Yes. It works great.


I mainly work as a sysadmin for small companies. (Most people don't call us when they install new tools, they call when things have exploded.) All my hatred goes out to the (Windows) programs (and their creators) that think they need MS SQL Express or the like to save their two bytes of dust. All my love goes to the programs that just run/save from/to a UNC path.


Have SMB locking and oplocks become that much more reliable?

Pretty much one of the classic desktop support calls for me was "my access database on the shared drive is corrupt"

I'm out of touch now, but one of the failure modes seemed to be that a client would take out an oplock so it could do local caching etc. Then someone else opens the file, the server sends the oplock break to the original client, but that message gets lost/ignored, and we end up with 2 or more clients making unsynchronised changes to the file.

Any smb client access to shared data more complicated than documents and spreadsheets just makes me twitch these days.

It was a long time ago now, and it probably was more prevalent in larger environments.. just more chances for things to go wrong I'd guess.

If something is designed for shared filesystems that's different, but my experience was that at the low midrange things aren't. They seem to work, until they don't.

What's wrong with sql express? Assuming you fit within its size constraints?


> Pretty much one of the classic desktop support calls for me was "my access database on the shared drive is corrupt"

Yes, that one >12-year-old Access database with two concurrent users breaks biweekly. Everything else: no problems.

Just this Thursday I "moved" a program from an old PC to a new one. We've more or less supported this client for 7 years. The PC is from before that. Got the call. First time hearing about that lab software. Right click on the desktop icon. Oh, it just points to a UNC path on the "main server" (Linux Samba share). Took a look into the settings. It has a path to a SQLite DB. Also just a UNC path. Only two clients using it. It worked unattended for years. Since it's stored on the "main server" it's also covered by backup. No client-side installation necessary. Copy the link. Update our documentation so we know it exists, what it's used for, and how it works. Done.

I see it time and time again: desktop software that runs on one PC, produces/saves tiny amounts of data (like a temperature monitor for one fridge), but installs a SQL server. A random technician shows up, installs it and leaves. We don't know that it exists. It's therefore not part of a backup concept. It's usually only discovered when it explodes or when the PC is replaced (or when a second PostgreSQL install creates a TCP port conflict). I have the feeling only 1 out of 10 "softwares" even has a proper concept for export and import of data.

A client with 7 or 8 workers called me because he wanted me to move his new time-tracking software. The technician who also installed the NFC reader had apparently installed it on the wrong PC. It took over 3 hours! It downloaded GBs and GBs of .NET, SQL Express and what not. I didn't know that going in/clicking the setup.exe. I had to call support to get the Express DB over. I can still hear the hard drive screaming. ...man... for stuff a Z80 and a CSV file would be enough. ...And I am the bad one for explaining to the client that the DB needs to be backed up. I sound like I want to upsell him something.

I know, I know. This rant is not about databases. It's about the state of IT and arrogant developers who don't live in the trenches. The "pulling down additional gigabytes of images and runtime to run the database" > "Yes. It works great." triggered me. I'm sorry. You know. Not everything is a unicorn webapp.

> Any smb client access to shared data more complicated than documents and spreadsheets just makes me twitch these days.

I know of multiple large install bases of an old medical software. It's still actively developed/supported, but it is so old that it doesn't use a SQL database but an ISAM one. It just lies there, on a file share. No active server component to speak of. No client-side installation necessary.

> What's wrong with sql express? Assuming you fit within its size constraints?

If it is not used for something a CSV file would suffice for. If the customer is told that it needs to be installed "on a server". If we get involved from the beginning so we know what is what. If the customer doesn't expect me to be able to move it out of the blue within seconds. Then SQL Express is fine. ...which reminds me of that one time, when that hotshot developer of software for the energy sector didn't know that Express can only allocate up to 2GB. Obviously he blamed us for delivering a faulty server...


postgres alpine is 90MB.

I can really recommend it, using a declarative docker-compose.yml file and then docker-compose command.


agreed, running pg locally is a pain. I use a cloud postgres instance (even for local dev). They're dirt cheap and it's not worth the hassle of working with a local pg.


What OS are you on? Running postgres on Linux is as trivial as it gets, and for Windows they have a nice installer. Or just use docker.


Does the docker have users set up? I seem to remember that being a pain point in the initial setup.


I love Postgres.app on mac, makes running locally a breeze. Running Postgres fast on the cloud is what I always struggled with (without paying enormous sums)


>> agreed, running pg locally is a pain

If you have a single process accessing the database, use SQLite. If not, use Postgres.


which cloud do you use for local dev?


aws, gcp, digital ocean and supabase all have pretty good free tiers


There is also Cloudflare D1, which is now in public alpha.


> Any downside to this approach?

None that I can think of unless you need foreign key constraints between both.

> Are JOINS slower across two files than across two tables in the same file?

I was recently debugging a slow JOIN with ATTACH'ed databases, and the query plan looked the same as when both tables were in the same database. I don't think it makes any difference.

But in these situations, the solution is measuring and benchmarking for your use case.
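One way to check for your own queries is EXPLAIN QUERY PLAN, which shows the same SCAN/SEARCH steps whether the joined table is local or ATTACHed (hypothetical tables, Python's sqlite3):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER PRIMARY KEY)")
con.execute("ATTACH DATABASE ':memory:' AS aux")
con.execute("CREATE TABLE aux.b (a_id INTEGER)")

# The plan for a cross-database join looks like any single-file join.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM aux.b JOIN a ON a.id = aux.b.a_id"
).fetchall()
for _, _, _, detail in plan:
    print(detail)  # e.g. a SCAN step for b, a SEARCH step on a's primary key
```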


I keep reaching for SQLite and it keeps working. Although I've been needing a better review of what other embedded databases I should be considering in 2022. I tried Genji[1] recently and tore it out as it wasn't doing ORDER BY with multiple columns.

1. https://genji.dev/


> Especially among pragmatic software builders who run their own business and do not work for the man. A demographic that I expect to grow.

From the FAQ; there are lots of caveats (especially the last).

> Situations Where A Client/Server RDBMS May Work Better

> Client/Server Applications

> If there are many client programs sending SQL to the same database over a network, then use a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, file locking logic is buggy in many network filesystem implementations (on both Unix and Windows). If file locking does not work correctly, two or more clients might try to modify the same part of the same database at the same time, resulting in corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.

> A good rule of thumb is to avoid using SQLite in situations where the same database will be accessed directly (without an intervening application server) and simultaneously from many computers over a network.

> High-volume Websites

> SQLite will normally work fine as the database backend to a website. But if the website is write-intensive or is so busy that it requires multiple servers, then consider using an enterprise-class client/server database engine instead of SQLite.

> Very large datasets

> An SQLite database is limited in size to 281 terabytes (2^48 bytes, 256 tebibytes). And even if it could handle larger databases, SQLite stores the entire database in a single disk file, and many filesystems limit the maximum size of files to something less than this. So if you are contemplating databases of this magnitude, you would do well to consider using a client/server database engine that spreads its content across multiple disk files, and perhaps across multiple volumes.

> High Concurrency

> SQLite supports an unlimited number of simultaneous readers, but it will only allow one writer at any instant in time. For many situations, this is not a problem. Writers queue up. Each application does its database work quickly and moves on, and no lock lasts for more than a few dozen milliseconds. But there are some applications that require more concurrency, and those applications may need to seek a different solution.


For me an important caveat is the typing. With all respect for the original author of SQLite -- he has done an outstanding job -- I think he underestimates the value of a good typing system. I have seen some databases that had all kinds of messy data. Back in the day MySQL was also quite loose with regard to checking data. Undoing the damage is in most cases not possible. For a business, data is more important than code, so be strict up front.

I know, SQLite has added the option to enforce type checking. The authors still don't believe in the value of it, and the available types are quite limited and thus loose. I think this is something that pgsql got quite right, where you can have your domain types at the database level.

On the other hand, if you keep this as a replacement for your config file (I thought this was the original purpose?), then yeah, you get an awesome deal. I wouldn't dare to build my business on it, just like I don't believe in MongoDB or any untyped language for serious purposes.


As others have pointed out, there's the strict mode now which is still quite restricted (pun intended), but what you most often don't hear is that you can also use check constraints, as in

    sqlite> create table t ( id integer primary key, n integer check ( typeof( n ) = 'integer' ) );
    sqlite> insert into t ( n ) values ( 1 );
    sqlite> insert into t ( n ) values ( '1' );
    sqlite> insert into t ( n ) values ( true );
    sqlite> insert into t ( n ) values ( 'x' );
    Runtime error: CHECK constraint failed: typeof( n ) = 'integer' (19)
    sqlite> select * from t;
    ┌────┬───┐
    │ id │ n │
    ├────┼───┤
    │ 1  │ 1 │
    │ 2  │ 1 │
    │ 3  │ 1 │
    └────┴───┘
    sqlite> select ( select n from t where id = 1 ) = ( select n from t where id = 2 );
    1  -- i.e. true
Check constraints do have the advantage over more classical types that additional constraints can be declared such as valid ranges for numerical types etc.


>I think this is something that pgsql got quite right

I don't think so. For example, pgsql had an array type before it got JSON, so the drivers can't automatically convert arrays that you want to insert into JSON. With my SQLite ORM, you can just insert arrays and objects and it knows to convert them automatically to JSON.

I like that SQLite just has a few primitive types. My ORM will be able to build on top of them. For example, JavaScript will soon be adding new date types (Temporal), and I will create new types for that, which will be stored as text ultimately.
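The ORM in question is the commenter's own, but the mechanism can be sketched with Python sqlite3 adapters, which transparently convert Python values on insert (illustrative only):

```python
import json
import sqlite3

# Teach the driver to serialize lists and dicts to JSON text automatically.
sqlite3.register_adapter(list, json.dumps)
sqlite3.register_adapter(dict, json.dumps)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, payload TEXT)")
con.execute("INSERT INTO docs (payload) VALUES (?)", ([1, 2, 3],))

raw = con.execute("SELECT payload FROM docs").fetchone()[0]
print(json.loads(raw))  # [1, 2, 3]
```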


SQLite has strict mode now


Which is quite limited in scope and does not allow for boolean (faux-boolean, of course) or json columns. It also affects certain operations in ways that might not be immediately obvious.

Not sure if this has received any further work since its release.

https://sqlite.org/src/wiki/StrictMode

https://sqlite.org/stricttables.html


I think I mentioned that, or I don't understand what you mean.


>If there are many client programs sending SQL to the same database over a network

I believe this is a reference to enterprises that have different users querying the database directly with SQL that they wrote over a network to a central database.


My prediction: Database gatekeeping will continue into next year.

Lots of X is a toy database, Y is all you need for every use case, nobody really needs scalability, high-availability etc and above all else never use an ORM. Real engineers write SQL by hand.


Recently I stumbled upon BedrockDB[0] from Expensify. It is based on SQLite and has very interesting idea on HA and distributed DB.

[0] https://bedrockdb.com


I use an app with 3 different SQLite databases, but since I never have to join tables from different files, I haven't found a downside.


This is a review of companies making databases, not the databases themselves.


Correct. And Databricks/Spark is not really a database, it's a processing engine which can connect to many databases.


I think you could view SparkSQL with catalyst optimizer as a DB.


I concur with the author's words here

> and the performance of your product can't be exponentially different from your opponent's, or you won't even have a fight

... but then the same author goes on to include the table where Firebolt claims to beat the dust out of Snowflake by a factor of 50x to 6000x.

I am not affiliated with either of those two companies nor any other DB vendor, and the same type of wishful thinking can be found in blogs from pretty much any other DB vendor, but I'd really wish this type of thing would become a thing of the past.

I get that they have to sell something to their investors and customers, but anybody who knows a thing or two about the domain knows that these types of speed-ups aren't possible without trade-offs. People aren't getting any smarter any time soon.


If pg was olap there wouldn’t exist any competition out there. IMO hybrid oltp and olap is what most companies need for their user facing apps and whoever nails it will become #1 in zero time.

Ps. I would like to test firebolt, sounds promising.


There's too wide a gap between OLAP and OLTP DBs, but hybrids will definitely appear. Vanilla pg is great but it's far away from analytical workloads. I use Vertica at work; it's an OLAP DB, but it handles hundreds of concurrent users in a light transactional workload without any issues (even though its locking isn't as granular as the one in postgresql). I would say that there's a spectrum between the two types of DBs and there's a market for all.


I'm not that familiar with the requirements of a big OLAP setup but can you explain why they can't be met by creating a logical replica using something like pg_logical?

I understand that the shape of the schema has to be somewhat different for analytics (though I think that point is overstated and many use cases could probably get accomplished with the same table layout but just different index placements)


Because transactional workloads (many small reads and writes that require consistency) look different from analytical workloads (giant reads with a small number of columns with looser consistency guarantees).

OLAP databases typically use a column store which is amazing for reading a subset of columns because of much better compression and use of vectorized execution as opposed to Postgres’ tuple-at-a-time execution. The tradeoff is it’s expensive to update a column store since you have to rewrite a chunk of the column at least.
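A toy sketch of that layout difference (illustrative only; real engines add compression and vectorized execution on top):

```python
# Row store: one tuple per record; reading one field touches every tuple.
rows = [(1, "a", 10.0), (2, "b", 20.0), (3, "c", 30.0)]
row_total = sum(r[2] for r in rows)

# Column store: each column is its own contiguous array; an analytical
# query over one column scans just that array (and compresses far better).
columns = {"id": [1, 2, 3], "name": ["a", "b", "c"], "value": [10.0, 20.0, 30.0]}
col_total = sum(columns["value"])

assert row_total == col_total == 60.0
```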

Postgres is moving towards better OLAP. AlloyDB is a recent commercial DB that swaps out the Postgres storage engine to better support OLAP.

That said, you can coerce Postgres into doing a reasonable job at OLAP for a surprising amount of data.


> If pg was olap there wouldn’t exist any competition out there.

What does this actually mean? If PG were an OLAP database it probably wouldn't have full ACID support, because pure OLAP workloads typically don't do a lot of updates. And it would have a lot of hooks around storage like compression codecs that don't give a lot of value in an OLTP database. Finally it would probably be eventually consistent because it's expensive to ensure data consistency across a large, distributed dataset.

So it wouldn't really be PG at that point.


there's a few projects that are creating something like that, most include some type of column based storage extension for OLAP stuff


What about graph databases? Where do they fit? Is there a report or comparison for them?


It turns out that Postgres is good enough for most things. Unless you're doing, e.g. heavy-duty network analysis, you don't really need anything else.


> There will be a new database coming out, and the main selling point will be developer workflow.

I think Xata is headed towards this. I’ve played with it a bit and it has some potential imo. The branching feature could be quite useful.


> It already seems to be an afterthought after Gigapipe and Firebolt debuts.

This is referring to Clickhouse Inc. and is way off the mark. If you're looking for hosted Clickhouse, why would a company whose CTO is the creator of Clickhouse be an "afterthought"? Their cloud product offers separation of storage and compute, but you still have an escape hatch to self-hosting if you need it (which you wouldn't have with Firebolt).

Clickhouse has only one major flaw: joins. If Clickhouse improves joins, it will become the standard open source OLAP database.


Couldn't agree more with the join statement. Working on a data-heavy SaaS, we've looked to overhaul our database for almost a year now, and Clickhouse hits every nail on the head for our needs except this. Most OLAP databases, and especially Clickhouse, seem to suffer from the syndrome of being structurally incompatible with relational data - understandably enough for mostly columnar databases - but still have rather weak solutions to bridge the gap.

The author talks about an AP/TP hybrid being the industry killer, and it seems to ring true if we measure success by the profitability of the products that will be built on it. Data is where the money is - not in end-result data like IoT performance metrics or generally 1-dimensional data, but in the intersections of high volumes of data.

This ends up giving you two options right now if you're building in this space: A. very expensive and slow/tedious feature building on OLTP databases with lots of preprocessing, or B. low cost and high performance but limited features on OLAP.


I hadn't heard of Clickhouse, so had a look. It seems to be an OLAP backing store and not what one would call an OLAP database - because it doesn't support MDX. You have to use Mondrian as the OLAP server middleware. Or am I misunderstanding?

And why would one use Clickhouse vs Sql Server as the backing store for OLAP? Is it that much cheaper?


It's both faster and cheaper. ClickHouse gets a lot of migrations off SQL Server. Not every dataset works, of course. As noted elsewhere ClickHouse does not handle complex joins well.


ClickHouse is ridiculously fast, and its architecture is versatile while still being easy to set up. It's just a very well made product.


Considered posting an Ask HN, and this is a bit of a segue, but seems a relevant place to ask... What do people think of DynamoDB? I think it fits my use case, and apparently, there's enough overlap with Cassandra to support a migration, if it turns out to be a bad idea. But I rarely see much writing about teams using it, so wondered about support / popularity / resources?


Dynamodb is very scalable and effective at keeping operational cost down. It succeeds at its goals by trading off on lots of things, though. As a developer it can feel almost hostile. Ad hoc queries are limited, almost forcing you to have a complete second copy of the data. Access patterns must be designed up front, and you hope you don't need to change much when you hit scale. The tooling is meh. It's easier to make a scalable database when you push all those problems onto your devs. Constraints are tricky. Transactions must be carefully designed. It's nice to know that read and write throughput will almost definitely scale and you can reduce the DBA/DevOps staff because AWS handles it.


The hard 400kb per item limit does sort of force a pause to make sure it will still work well over the lifetime of the app. Not that huge "rows" are a good idea, but that's roughly 2 pages of text.


I think it's more like 200 pages of text -- 2kB/page is what I've always worked off.


Heh, yep. Missed some magnitude there. Argh. Though still something to consider, like pulling from SQS could use most of the item space for example.


> That's why PG has been adopted by Heroku early on, to the new Heroku-like render, and Supabase. They can offer a low or even free database plan because they can serve many users with a single PG instance.

Just a note to say every Supabase free tier deployment is a dedicated instance


I recently started to play with BigQuery for the first time and it is kind of an unreal piece of software. I've been able to ingest and query hundreds of GB of data in a matter of seconds.


It's curious to see news about many database products that have come out only recently, since in my personal experience I've seen some companies and their products just evaporate (for example, Clusterpoint a number of years back).

Of course, if there's a sufficient amount of hype/mindshare from the industry towards a certain technology, it feels like the critical mass of attention might perhaps be there to sustain the projects and ensure adoption that's widespread enough for the companies behind them to profit and stay around.

It's also nice to see mentions of MySQL, albeit sadly there's nothing of MariaDB, which has become the "replacement" for MySQL in some projects out there due to the pretty good compatibility: https://mariadb.org/

They even seem to have their own cloud offering, though I cannot comment on it personally (it feels enterprisey and generally I run my own DBs, or let someone manage them in other projects): https://mariadb.com/products/skysql/

Though it's also nice to see PostgreSQL remain popular and it's generally one of the better options for most of the projects out there in my eyes. It has decent tooling, good driver support and lots of useful functionality and plugins (things like PostGIS, for example). I think the article puts it nicely, even if someone could nitpick about the wording:

> PG is batteries included. When a company chooses PG, it gets a database with OLTP, OLAP, Document (JSON-B), FTS (tsvector/tsquery), Time-series (TimescaleDB), Geospatial (PostGIS), Multi-tenancy capabilities (batteries included). To summarize it in a simple formula:

> PostgreSQL = MySQL + Poor man version of (ClickHouse + MongoDB* + Elasticsearch + InfluxDB) + Geospatial + Multi-tenancy

Then again, I come from Latvia and generally the technology choices that I've seen are a bit on the "boring" side (which isn't always a bad thing). Lots of PostgreSQL, some MySQL (sometimes MariaDB), though also proprietary offerings like Oracle DB or MS SQL in enterprise projects. Here, certain solutions are comparatively rare, like MongoDB, Redis or RabbitMQ/Kafka.

CockroachDB, ClickHouse, Yugabyte, CouchDB and a bunch of others are basically unheard of. I suspect that in certain capacity it's "Nobody ever got fired for choosing IBM", in part the fact that the tech scene here is lukewarm at best and people aren't interested in experimenting that much, or maybe just choose whatever has worked decently in the past.

I wonder whether one can read more into this trend, with countries like mine being a few years behind in adoption of certain new things. Only recently (the past few years) seeing Kubernetes at scale comes to mind.

Oh, I also have to support the argument that SQLite is pretty nice for simpler setups, or as an application format. My homepage actually runs on SQLite, since it doesn't need to scale much.


CockroachDB and Yugabyte are both wire-compatible with Postgres and follow its syntax and feature set to a large extent. They offer a reasonably clean upgrade path to a distributed database while still using the same tools you use with vanilla Postgres.

Seeing more and more projects that are either compatible with or are a value-add on top of Postgres. Not seeing the same level of ecosystem variability in the MySQL world. There's MariaDB, which adds cool features like temporal tables, but PG has the aforementioned distributed DBs along with Citus, TimescaleDB for time series, AWS Redshift for OLAP, ZomboDB for enhanced full text search, Apache MADlib for big data machine learning within the database, etc.

I 100% agree that most projects don't need anything beyond SQLite (or its OLAP cousin DuckDB). More and more it looks like a relatively clean migration path from the small to the heavily concurrent to the high end niche. Great time to be a data worker.





