First Impressions of GitHub Codespaces (aristotlemetadata.com)
109 points by legostormtroopr on Sept 1, 2020 | 93 comments


While it's impressive technology, I still get that eerie feeling of my computer's ownership being slowly taken away from me. The problem is not GitHub Codespaces offering us an alternative to traditional dev environments; the problem may be 10 years from now, when someone says: "All the coding is done on the web nowadays, why should we allow users to install compilers and dev tools on their machine? They may use those for hacking and compromising the security of our systems. They may hurt themselves in the process and sue us! Better not take risks". I'm afraid of heading into a future where this is normal...


> All the coding is done on the web nowadays, why should we allow users to install compilers and dev tools on their machine?

Currently, web-based development is unsuitable for video streaming or for native GUIs that operate on local filesystem files. I don't think streaming video over networks is easy. If development shifts such that people stop wanting local filesystems and native GUIs, I'll be sad...


10 years? I'm fairly certain some companies are already asking for these features for precisely this reason: to have better control.


It’s the same rationale as having Citrix for developers in big banks, so they don’t have anything on their local machine. And this way you don’t have to keep a big machine with a bunch of GPUs around.


That sounds like 'The future will be no-code development, will all the developers lose their jobs?'. I can see it happening in some business-centric shops (outsourcing?), but I don't see it happening at widespread scale.


>They may hurt themselves in the process and sue us!

Has a hacker ever hurt themselves by hacking and then sued the company whose computer they were using?

Anyway I guess people can also buy their own computers if they want to play outside the sandbox.


Fair enough. But that was not my rationale, it was just a made-up corporate rationale that is not too far from what I can observe in reality. Take, for example, lobbyists in right-to-repair hearings. They use arguments like this: "If we let users repair their smartphones, they might hurt themselves, so it must not be allowed and should instead always be performed by a skilled technician". The implication always being that if users hurt themselves, they will sue.

> Anyway I guess people can also buy their own computers if they want to play outside the sandbox.

My point was, precisely, that we might get to some point where this is no longer possible. Imagine they stopped selling what we today call a "PC", and instead everything is closer to smartphones or tablets. There would be no way to set up a development environment on the machine. There's no sudo access, no compiler toolchain...


A hacker? Probably not. A random Joe who copy/pasted some random things found on some sketchy website? I wouldn't be surprised.

Also, what matters isn't whether someone actually sued as much as if some executive somewhere thinks that someone might.


There is also coder.com which provides more or less the same thing (running Visual Studio Code in the browser).

The code is actually open source and you can run it on your own machine: https://github.com/cdr/code-server

Hint: Set the following environment variables before starting code-server to get access to the official Visual Studio Code extension marketplace. This is against the marketplace's TOS, but some extensions are missing/broken on the alternative marketplaces.

  SERVICE_URL=https://marketplace.visualstudio.com/_apis/public/gallery
  ITEM_URL=https://marketplace.visualstudio.com/items
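Put together, a launch might look like this (the `--bind-addr` flag is from code-server's CLI; the port is arbitrary, and the launch line is shown as a comment):

```shell
# Point code-server at the official marketplace (against its TOS, as noted),
# then start the server.
export SERVICE_URL="https://marketplace.visualstudio.com/_apis/public/gallery"
export ITEM_URL="https://marketplace.visualstudio.com/items"
# code-server --bind-addr 127.0.0.1:8080
```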


I also just got access to the beta. It works very seamlessly. You can install all the VSCode extensions you want.

It came preinstalled with some Python 3.8 venv. The venv was put into the Git repo root directory, which is a bit strange, as I first needed to modify my .gitignore to ignore it.

Git push from the VSCode menu worked directly and pushed the change back to GitHub, without further setup.

I did not expect to get a full virtual machine, but it seems I did. I edited a LaTeX project, and pdflatex was missing, so I ran `sudo apt install texlive-full`; it installed 4 GB of packages, and then everything worked fine.

I also edited some TensorFlow stuff, and `pip3 install tensorflow` also just worked.

The side preview (e.g. for Markdown or LaTeX) is somehow broken; it stays blank. However, you can, for example, open the LaTeX PDF preview in a new browser window, which works.

The speed just felt native. There was no latency. It feels exactly like native VSCode.

I very much like it. I guess the side preview will get fixed soon. I will probably use it for a couple of smaller projects of different kinds, like LaTeX, Markdown notes, smaller Python stuff, or other smaller code projects where VSCode is fine.

Otherwise, for e.g. Python, I still prefer PyCharm, which has far superior code browsing and auto-completion. Codespaces with PyCharm would be really nice! Or maybe VSCode can improve on that. Or maybe VSCode can even reuse the language server from PyCharm. Or at least the inspections. I think that part is open source, right?


From the blog post I understand that my vscode configuration is somehow stored within my browser, not connected to my github account. Is it also stored per project or globally?

Because either choice brings its own problems. I want the vim plugin to behave the same on all instances but language-specific plugins or similar only make sense in the context of a specific project. How does this work currently?


It stores settings as usual in .vscode/settings.json in your project. (I usually have .vscode in my .gitignore, but maybe it makes sense to include it in the repo.)
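A minimal project-local settings file along those lines might look like this (keys and values are purely illustrative; VSCode's settings.json tolerates comments):

```json
{
    // .vscode/settings.json — committed per project if you drop it from .gitignore
    "editor.tabSize": 4,
    "files.exclude": { "**/__pycache__": true }
}
```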

Additionally, there is some "Settings-Sync" feature, which I have not really explored yet. Maybe this is global for all instances.

Btw, the startup time of the Codespaces VM is a bit annoying. It takes approx. 30-60 seconds when it was inactive. And it gets automatically suspended after 30 minutes of inactivity.


In addition to the project-local .vscode/settings.json and the still-beta Settings Sync, it should also pick up your user settings if you set up a dotfiles repository.


The big draw for me is working on the iPad. If you attach the keyboard it becomes a pretty awesome 'do anything' device, now that you can code on it as well. Can't build native apps yet, but seems like a future opportunity. Web apps / PWA development works great, I can just have the app running in a hovering window / on a quarter of the screen.


It's such a contrast to see this and the post by setzer22 right next to each other.


I'm in the process of switching a lot of my work onto iPad.

I don't see a conflict here - I'm still using a "compiler" API, it's just somewhere else (i.e. not on my local device), in somebody's cloud. I can't see why that would go away in the future, and I don't really care where the compiler actually is (if I did, I could self-host).


Yes and no. So I still have a giant desktop, it’s unlikely that I’ll give that up any time soon. The advantage of being able to code from a portable device is additive, not subtractive.

The setzer22 post is a contemplation of future subtraction, but today my opportunities to code have increased, not reduced.


Quite exciting. I've been keen to use my iPad for this sort of thing, and have experimented with code-server.

Does make me lament the fact that I can't tap into the perfectly viable OS beneath the hood, but oh well.


You’ll never be able to build native apps on your iPad, and if you could a web app like this wouldn’t be necessary.


Or on the Oculus Quest :-)


> as a co-founder who is spending less time as a developer, and more time in meetings, writing emails, strategy papers, [...] this is just amazing [...] I’ve been spending a lot of time getting Windows Subsystem for Linux working

lol, just SSH into a $3 VPS as a dev machine: real Ubuntu, real tmux/(n)vim, no setup, no hassle

Edit: to be fair, Cloud9 was my gateway drug into remote development, bringing me to my setup above. The idea is awesome, and GitHub Codespaces is too, but there are just too many limitations; having a real, ever-running Linux VPS everywhere, even on your phone, is just magic, even when you write strategy papers...


>lol, just SSH into a $3 VPS as a dev machine: real Ubuntu, real tmux/(n)vim, no setup, no hassle

Infamous Dropbox comment: https://news.ycombinator.com/item?id=9224


Meh. The difference is that rsync doesn't really do what Dropbox does (handle syncing both ways), otherwise they would have been right. SSH+tmux really does do everything this does, and it does it in a way that doesn't tie you to any single organization. The current company I work for uses the SSH-to-a-VM style setup and I was able to just jump in. Everything was where I expected it to be.


Yeah, but the idea isn't new; Cloud9 did this ages ago. GitHub has the advantage that they own a huge dev-focused platform. My point is, if you deal a bit with code, even as a founder who writes strategy papers all day long, getting into tmux/vim isn't harder than a vscode-like interface, and it pays off in the long run.


XDrive -> Dropbox


I respect developers who make that work, but the development experience over SSH (vim, etc.) is never as good as local development. You can't use VS Code. You have to deal with lag delaying keystrokes. If you download a file and want to copy it into your dev environment, you need to SCP it over. If you want to test a network service, it's nontrivial to point your local browser at the remote machine. If you sleep your laptop and open it again, you need to reconnect your SSH sessions. Etc. etc.

A lot of these issues can be solved with enough extra work setting everything up - but that completely defeats the point. "no setup, no hassle" is never my experience.


>You can't use vs code

Have I got news for you https://code.visualstudio.com/docs/remote/ssh


That requires installing VS Code locally whereas I thought we were talking about a "no install" dev environment?


vim, once you get into it, is as powerful as vscode, and runs just fine on a machine without a GUI installed.

Funnily enough, your description is exactly how I work, except the Java part, but then vscode doesn't have an edge with Java either.


> vim once you get into it is as powerful as vscode

Maybe, with a lot of effort to set it up like that, and a steep and long learning curve with lots and lots of idiosyncrasies, because it's really ancient tech that was made for a completely different kind of computer, and you have to actually like living in a terminal and vim's idiosyncrasies in particular.


You're probably right, but the advantages of vim in my mind compared to IDE/VSCode/Atom is:

1. The ability to open any sized file. Anyone who has accidentally clicked on a 15 MB log file in a directory knows what I'm talking about.

2. Commands, specifically :%s, :norm, etc., applied over visual mode. :norm is particularly powerful when you want to batch edit.

3. Macros: the ability to define macros both beforehand and on the fly is really useful.

4. Availability. No Linux box I've ever SSHed into has failed to have vim.

^^ Those are the features that imo are important and won't be available as a vim plugin in other IDEs. People also don't realize that vim has features that match other IDEs:

1. Plugins: the vim plugin world is as rich as that of other IDEs like vscode.

2. Language servers: these can run in the background and provide autocomplete, code-smell detection, etc., e.g. jedi, tsserver-vim.
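For anyone unfamiliar with points 2 and 3 above, a few illustrative vim commands (the patterns and registers are made up):

```vim
" Substitute across the whole file:
:%s/foo/bar/g
" Batch edit with :norm: prefix every line of a visual selection with '- ':
:'<,'>norm I- 
" Record a macro into register q (qq ... q), then replay it 10 times:
10@q
```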


TIL about norm. Thanks!


I guess that's where vscode's remote ssh extension comes into play.

There are things to be said about it being closed source, but it _definitely_ also removes, IMO, the pain points of remote editing.


> vs code

check coc.nvim, uses VSC's native LSP, 99% of LSP's features

> delaying keystrokes

I have 16 ms latency, the same as my 60 Hz screen, and definitely less latency than a local vscode

just use a VPS close to where you are


Or use Mosh, which hides the keystroke lag with predictive local echo.


Some people don't want to use vim, they want to use VSCode.


All of this frankly just sounds like you're unfamiliar with the typical workflow over SSH and have different tooling preferences.

Nothing wrong with that - it's a matter of taste. But personally the reason I often work over ssh connections is that I find the local-only development experience deeply deficient. I package up environments in containers that I can bring up anywhere by dropping in a systemd unit file to pull the images, which means I don't have to worry about whether I have my laptop with me as long as I have ssh access, or about environments changing when I switch laptops. I can also bring those environments up in containers on my laptop, of course. Once I got into the habit of packaging up everything that way it became easy to do, and keep cutting down on the setup effort when I change laptops or make other changes.

The flexibility to be able to access the same environment and editor anywhere is an important reason why I insist on using editors that work in a terminal, because that flexibility is far more important to me than any specific editor features. While most of the time I work locally, when I need to ssh in somewhere, not having to deal with a different environment matters. And ssh vs. entering a container is a close enough equivalent that the same workflows apply for the most part.

For the last couple of years I've used my own client-server based editor that holds all the buffers in a separate process (and traps all exceptions and forwards them to the client, and checkpoints its internal state - I have a couple of years' worth of open buffers in RAM, but it only adds up to ~24 MB), because I wanted an editor I could get exactly how I wanted in exactly the language I wanted, and it was worth it.

Lag is an issue if you're on a slow phone connection or something, but you surprisingly quickly get used to working even with ~100ms+ lag, and that's a rare exception. As long as you pick a VPS provider reasonably close, 15-30 ms is not hard. I'm in London, and mostly use Hetzner in Germany, and get a consistent <20 ms to them. It's not noticeable when I log into them. There are few places in the world where you can't find VPS providers close enough. Of course it depends on having decent internet access, so I'm sure there are places where that isn't viable. That said, years back I worked over SSH from Beijing to servers in Texas, and even that worked well enough despite the lag.

And if you work on things remotely, you quickly get used to downloading things straight to the remote machine rather than downloading and copying over whenever possible, and it usually is possible - wget, curl, lynx and links can take a while to get used to, but it's rare that I need to resort to downloading anything locally. Not that it is usually a problem if I do, but of course that is more dependent on upload speeds.

Pointing a local browser to a remote machine requires no setup, just knowing the IP. If you want an encrypted tunnel, all it takes is a flag to your ssh client to port-forward. First thing I'll do if I need to do work against a remote server regularly is to set up an alias in .ssh/config to pass the right options.
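The kind of `.ssh/config` alias described above might look like this (host name, user, and ports are hypothetical):

```
# ~/.ssh/config
Host dev
    HostName myvps.example.com
    User me
    # Forward the remote service so it's reachable at http://localhost:8080 locally
    LocalForward 8080 localhost:8080
```

With that in place, a plain `ssh dev` sets up the encrypted tunnel as a side effect.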

Similarly setting up autossh or similar to automatically re-establish ssh connections (and re-attach to screen or tmux) is a simple one-time affair if you often need to re-attach, but if you run screen or tmux anyway (and I couldn't imagine not doing that on any machines I do actual work on), it's trivial to re-attach to the same state anyway. I run bspwm - a programmable tiling wm, which also means I can easily set up workspaces where it automatically triggers suitable scripts to set up the right windows ssh'd in to the right screen sessions on login, as well.

It's true some of this is extra work, but it's extra work once and you have setups you can copy everywhere, and there's extra work to get used to any new tool, but once you get used to these workflows they are extraordinarily flexible, not least because they're also scriptable in ways that allow you to add more and more customised shortcuts for the things you do regularly in a lasting way.

I've carried my current client-side set of configs and convenience-scripts through half a dozen laptops by now.


Nice read, and I also recommend anyone try this path. Living 24/7 in the shell doesn't just have productivity benefits; you are closer to the bare metal and will learn so much more. Something we lost in the decades of GUIs.


It's the opposite for me. Not living in the shell 24/7, for me personally, doesn't just have big sanity benefits, it also makes me so much more productive.

I find shell environments incredibly limiting, I feel like my visual brain starves when limited to what feels like a visual desert to me for longer durations. Yes, there are a couple of things they absolutely excel at, and I use the shell for those, have one open all the time in a window – but everything else? Not my cup of tea. I'm a fairly visual person and I like manipulating things directly, I like visual cues, I like discoverable software, I'm into well-done(!) motion cues big time. I like using the mouse. I may have to hand in my hacker card after writing this.

I like how IntelliJ shows a million small annotations and cues that a terminal just can't possibly support, and how I can just stumble over features via the UI. I like how graphical text editors aren't just a featureless wall of text, that there is a UI for the eye to use for structure, that I usually have much more information immediately visible, and that the UI allows me to do things without having everything in my head or looking it up in huge manpages that as often as not are hardly fit for human consumption.

Lots of things I do wouldn't even be possible in a shell, like graphics/design work, 3d modelling, spreadsheets, and the rest wouldn't be much fun. Yes, I'm sure there is a way to do complex 3d modelling in emacs. No, I don't think that will work for me. Yes, I've tried vim and neovim and emacs, and I really don't like their paradigm.

As to what has been lost, I firmly believe I've gained much, much more, and I doubt I personally have actually lost anything substantial, but maybe we won't agree there. Not being bare-metal all the time is an advantage in my book; I've done some bare-metal and it isn't for me, huge respect to those who work with assembly all the time. A shell isn't very close to the metal for me; it's a really heavyweight, very idiosyncratic abstraction over bare metal, once you peel apart all the layers in between, and it just makes it easier to manipulate other heavyweight abstractions directly. What I've learned in years of doing lots of work in shells is mostly limited to idiosyncrasies of the shell I'm using (so far fish, sh, bash, a bit of zsh) and the various tools involved; macOS especially doesn't expose that much of its internals via the shell, so there's not that much to be learned there.

Why stop at that level of abstraction, though? Why not put something on top that allows me to use all those areas of my brain that do complex visual and motion stuff? I strongly feel those are still way under-utilized, or rather badly over-utilized, or mis-used in general in all current mainstream GUIs; maybe that's why some people prefer theirs to be extremely simplistic. Better use of motion might involve less motion than e.g. macOS currently has, but applied way more judiciously and effectively, possibly with a solid neuroscientific backing. Reality isn't a solid monochromatic wall of even-sized glyphs, why would UIs have to be?

Sadly, the only place where I feel motion in UIs is done well currently is the very rare game that gets it just right, but those paradigms wouldn't translate well to productivity UIs. But while not ideal by any means, I still find the macOS UI to be way ahead of any terminal UI in that respect; iOS even more so, but I can't use iOS devices for day-to-day work, not yet at any rate. I gather it's the other way around for some people, which is fine of course. You do you, and I stick to my animated GUIs, and I hope someone will figure out why we see these things so very differently some day, and we'll be able to make new things work better for all of us with that knowledge.

Another thing: discoverability and rich input. Take the Touchbar, fantastic thing – I'd absolutely love to have an external one that I can attach to my mechanical keyboard. With the Touchbar I can do all sorts of stuff right away (in apps that support it) that I'd have to memorize arcane key combinations for otherwise; I hate doing that. IntelliJ supports the Touchbar pretty well. It can double as a sort-of analog slider, without any finger smudges on my main display like on Surfaces with touchscreens (though that's neat too – love to do this on the iPad). There are menus that contain pretty much everything an app can do, and on macOS they're even searchable. I'm sure there are ways to replicate some of that in shell environments, but I doubt it would get very close – and then I wouldn't want to invest hours and hours to set it up like that, because, in the end, my visual brain would still starve. I've toyed with using my iPad and Pencil as a graphics tablet via Sidecar; I'll definitely do a lot more of that in the future. I wish there was a collaborative whiteboard app that supports this really well and also can get approved at my work. That would get close to whiteboarding my thoughts like I do all the time in the physical office, another thing I can't translate to a terminal workflow at all; I can't even translate that to a graphical workflow without a physical pen or similar.

I get that a higher-stimulus, "richer", more immersive, more physical user experience is the opposite of what some desire, but I believe there are lots of people who enjoy that sort of thing, when done well. I realize there are lots of upsides to shell environments as well (super stable software, highly portable, runs everywhere, relatively consistent, highly configurable unless you want anything not made of solid glyphs, etc. pp.) but those just don't rank all that high for me. Like, I have my Macbook set up the way I want it, I can move everything to another one just by restoring from backup, I don't need everything to be highly portable. I rarely work on remote machines – one of the upsides of doing everything as infrastructure-as-code, and when I do, it's via JupyterLab and the like.


> Reality isn't a solid monochromatic wall of even-sized glyphs, why would UIs have to be?

I work almost exclusively in a terminal, but I never work in "a solid monochromatic wall of even-sized glyphs". Well, I usually use even-sized glyphs, but certainly not walls of them, and I rely heavily on tools that use unicode creatively to annotate text and colours.

Terminals have been able to handle graphics of various levels for decades (e.g. Sixel and ReGIS), though unfortunately it's not as widely used as it could be. E.g. my repo contains a script in bin/ to spit out images to the terminal using Sixel, which makes it transparent to ssh.

But the point for me at least is not an objection to graphics, but an objection to not having a command line to manipulate everything. Including graphics when I use it. And an objection to being unable to access data and code remotely without taking special steps.

The terminals gives me that.

It's not at odds with graphics at all. But it's at odds with a focus on graphics at the expense of function. And it's at odds with accepting a world where your app and your user interface need to live on the same machine.


> graphics/design work, 3d modelling, spreadsheets,

Here I agree: these just don't work in the shell. But too many programs are GUI-based which shouldn't be. A shell is the foundation for coding, and the reason so many non-tech people think of coding as some wizardry is the loss of the shell. Remember DOS and its .bat files? The latter were no more than some batched commands, and eventually a program.

Stuff like settings, task managers, text editors, and any kind of servers should be text-based. Everyone is 100x faster skimming through VSCode's text-based settings than Blender's preferences. I am not against GUIs, but fewer of them.


it IS trivial to port forward though, lol


>... no setup...

Wow, that's a neat trick, how do you manage it?


?

click "new VPS" at your hoster (60 sec), SSH into your new VPS, git clone your tmux+nvim config from GitHub (3 sec), and you're ready to go
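Spelled out, that bootstrap is roughly this (the repo URL and file names are hypothetical; the clone is shown as a comment and simulated locally so the sketch stands alone):

```shell
# On the fresh VPS, fetch your dotfiles:
# git clone https://github.com/you/dotfiles ~/dotfiles
mkdir -p ~/dotfiles ~/.config/nvim               # stand-in for the clone
touch ~/dotfiles/.tmux.conf ~/dotfiles/init.vim
ln -sf ~/dotfiles/.tmux.conf ~/.tmux.conf        # link configs into place
ln -sf ~/dotfiles/init.vim ~/.config/nvim/init.vim
```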


So it's "no setup" if you've already done all the work to make a portable setup that enables you to do work over SSH, but actually a ton of setup if you haven't.


You've done all the work to commit a setup that already works for you, for the sake of avoiding having to manually create one whenever you need to set up a new machine or VM.

It's one of those things that repays itself by the second install or so.


shouldn't everyone who has touched code on a server have some basic tmux/vim config, at least for vim? it's not rocket science and doesn't have to be perfect


> shouldn't everyone who has touched code on a server have a basic tmux/vim config, at least for vim?

No. Obviously not. There are millions of developers who don't work that way.


Yeah. I still go with nano those rare times when I need to edit remotely. Vim just never seemed to fit my brain, and I don't do this enough to make it worth getting over the hump.

The infamous "can't work out how to quit Vim" might have soured my initial experiences. That's pretty much all I remember from the first few dozen times I accidentally launched it because it was set as the default $EDITOR.


I’ve been developing for a decade and have never used tmux, and have used vim for probably ten hours.


The specific tools are not really the point. The same applies whichever tools you need - sooner or later you'll need to cleanly set up a new machine, at which point committing your configs pays for itself.


I can't think of anything that I need in a config that isn't a) a luxury or b) specific to my local machine.

There's a few tweaks I miss when working in a remote shell but none are essential.


So you can think of things that'd be useful, you just see it as a luxury. But the point is it's not a luxury once you commit the configs and setup scripts and turn it into a single command to deploy anywhere you want it.

And what I found when I started doing that was that suddenly I was a lot less hesitant to let myself become dependent on time-saving tools because I knew they'd be available anywhere with ease.

I can't imagine suffering through a default shell environment or a default editor config any more.

But committing them is not just about remote machines, but about having a clean record of what is needed to set up any new machine I acquire as well.


I've had to set up a new machine like, maybe four times in recent years I think. Except for one time (new employer) I just restored a new Macbook from a backup and was good to go. I don't edit code on servers unless in dire emergencies that are extremely rare (as they should), and nano will do fine for those. Everything is infrastructure-as-code and I like it that way. What work I do on remote hosts happens through e.g. JupyterLab.


I don't edit code on servers either, but I frequently need to debug systems, or do experiments or develop in containers set up for that purpose. As such it's not unusual for me to work with dozens of different environments in the course of even a week.

It's not just about having my editor available, but being able to trivially easily pull in the basic tools to do what I want, and that is especially important during emergencies, but it's also about convenience in other situations.

It sounds like you're lucky enough to deal with very simple environments. That's great. I'd still argue the cost in time of committing your config is so low that it'd be worth it. It takes pretty much one reinstall before it's paid for itself, or one instance of being somewhere without your usual environment.

Or one instance of doing something silly to a config and forgetting what you changed.

The nice thing about it is that it can start extremely light-weight - just literally a "git init" and .gitignore everything, and then gradually force add and commit.
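That lightweight start might look like this (file names are illustrative; in practice you'd run it in $HOME, a scratch directory is used here):

```shell
mkdir -p /tmp/dotfiles-demo && cd /tmp/dotfiles-demo
touch .bashrc .vimrc secrets.env
git init -q
echo '*' > .gitignore                  # ignore everything by default
git add -f .gitignore .bashrc .vimrc   # force-add only what you want tracked
git -c user.email=me@example.com -c user.name=me commit -qm "initial dotfiles"
git check-ignore -q secrets.env && echo "secrets.env stays untracked"
```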


> I’ve been spending a lot of time getting Windows Subsystem for Linux working

I wouldn't spend a minute on it. Why bother with such an abomination when you can just use Linux the way it is intended to be used, with less hassle and without corporate superstructure that you most likely will never need?


Well, if you're actually asking... I do it so I can use Windows tools like Unity (the Editor just runs best on Windows) and good graphics drivers, while still having access to bash and Linux tooling.


> so I can use Windows tools like Unity (Editor just runs the best on Windows)

Wasn't Unity a Mac-exclusive tool to start out with? I know they support Windows at this point, and I'd understand if they've refocused on Windows since then.


Unity feels like it runs best on Windows to me. Metal bugs caused quite a few Unity issues for me when I was developing on a Mac. That was a year or so ago, so maybe the situation has improved.

Also a lot of useful 3d content creation software is Windows only, or only supports CUDA acceleration, so implicitly rules out Mac from being a first class citizen (e.g. substance painter)

Obviously not every game is the same, or needs fancy tooling. But Windows feels like it just works for game dev.


It doesn't shock me that it runs best on Windows, and you should of course use whatever works for you. I'm sorry if I implied anything else.

I have to agree. I love me some *nix (excluding Mac/iOS, actually), but game dev is the one thing I played with that just felt easier on Windows.


Metal, Catalina's 32-bit changes, and other things have made everything but the latest version of Unity unstable on Mac.


I didn't see a question mark in there.

If the Unity Editor runs best on Windows, then maybe petition the authors to improve it? Otherwise you might end up in a Photoshop/Apple situation, and we all know how that ended.

Good graphics drivers are available for Linux, and have been for years.

Access to bash tooling is the norm on any Linux system.


[flagged]


"Use SSH rather than WSL" is not the same as the Dropbox comment. Dropbox was dead simple to use, and it did something complicated, keeping files on various machines synced. WSL requires setup and there's friction while you use it. SSH is actually less setup than WSL if you're using a standard Ubuntu Digital Ocean instance.


WSL has one advantage over SSH. I can continue working if my signal drops.

(Setup and friction are actually pretty low - but that's another discussion)


What are the specs? How much memory, CPU, disk space?

Of course it's in beta so nothing's final, but it would be good to get a sense of order-of-magnitude.

MS Codespaces starts at 4 GB RAM, 2 cores, 64 GB storage.


I'm surprised nobody mentioned Gitpod[1] - it's been in business for quite a while, powering my GitHub projects: VSCode-based, with a terminal, pretty much everything Codespaces does... Any thoughts on such a comparison? Is GitHub reinventing the wheel here?

[1]https://www.gitpod.io/


FYI, the Gitpod blog has some thoughts on this comparison: https://www.gitpod.io/blog/github-codespaces/


Gitpod is powered by Eclipse Theia.


The GitPod own page says "Gitpod is an open-source Kubernetes application providing prebuilt, collaborative development environments in your browser - powered by VS Code."

Could it be they mean that Eclipse Theia can run VSCode extensions?


Eclipse Theia is a wrapper on VS Code for users who want to build upon it.


The pieces are starting to fall in place for a decent virtual first development experience.

I don't expect my recent laptop purchase to last more than 3 years; it was a combination of form for comfort, form for clients to see, and perf for working. At a $2500 purchase (most expensive damn laptop I've ever bought), I'm aware that half that price or less would get me everything but the perf.

So if a cloud solution was good enough, I have about $400 a year to spend on it.

It's about $0.50 an hour to match perf on EC2, but we should consider:

- I only need the top perf for minutes at a time. A lot of work could be browser based, but browsers can't do the heavy lifting

- I also have a desktop, which I use more than the laptop, and which is significantly cheaper and higher perf. It's most useful to me if both envs are the same.
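A quick back-of-envelope check of those numbers (all figures taken from the comment above; the $0.50/hour EC2 rate is the commenter's rough estimate, not a quoted price):

```python
# Budget math from the comment: a $2500 laptop replaced every 3 years,
# where roughly half that price would buy everything but the perf.
laptop_cost = 2500                    # USD
baseline_cost = 1250                  # USD, "half that price or less"
yearly_budget = (laptop_cost - baseline_cost) / 3   # perf premium per year
ec2_rate = 0.50                       # USD/hour, commenter's EC2 estimate
hours_per_year = yearly_budget / ec2_rate
print(f"${yearly_budget:.0f}/year buys about {hours_per_year:.0f} cloud hours")
```

That works out to roughly 16 hours a week of full-perf cloud time, which fits the "minutes at a time" usage pattern described above with plenty of headroom.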


I still don't understand why people want their code dependent on the web.

I want to share my one true dev environment, but I want to be able to do this on my private premises setup without internet access.

I don't trust the internet and things like codespaces not to leak my company's code whether nominally in a private repository or not.

I don't trust using internet dependencies either.

Even something like pair programming etc with a shared view of a project is very appealing, but again, it needs to be appealing in a private context and not dependent on public access.

What might be OK for an open source project is unlikely to be true for a company's valuable IP, but people just seem to get carried away.

Secure first, managed dependencies first. Please.


“Most of my daily tools are Windows based, so I’ve been spending a lot of time getting Windows Subsystem for Linux working alongside the rest of my daily workflow, which comes with its own headaches. So being able to quickly checkout a branch, make changes and take some load of my local machine seems very promising.”

Making it easier for more devs to use and stick with Microsoft tech is, I think, a core driver behind this initiative.


I'm pretty excited about what Codespaces is going to bring to the table. I can already see a lot of good use cases.

I've been using an iPad Pro as a laptop replacement for the past year and Codespaces makes it a pretty viable development machine. I'd been mulling over switching back to a laptop, but I'll probably stick to the iPad for now.

Before I was using Blink shell to ssh into my desktop and run Emacs. Not really ideal for me since I mainly use VS Code on my desktop (which is my main computer) and hardly touch Emacs anymore.


do you use codespaces in safari? doesn't work that well for me, scroll using a touchpad doesn't work, and I guess not having a real fullscreen mode is kinda annoying too


AFAIK this is a bug that was introduced earlier this year when WebKit changed how scroll events are handled: https://github.com/cdr/code-server/issues/1455

If you add a shortcut to the dev environment to your home screen, it'll load the app without Safari's control elements, which comes relatively close to fullscreen. (Not sure if this will work the way Codespaces is built.)


FYI, gitpod.io is similar and works pretty well on Safari and iPad


"Most of my daily tools are Windows based, so I’ve been spending a lot of time getting Windows Subsystem for Linux working alongside the rest of my daily workflow, which comes with its own headaches."

My workplace is considering switching from providing developers with MacBook Pros to Dell Windows machines. At one point we were all allowed Linux, which was awesome. Is WSL still painful to use? My prior experience came to a dead end when I had irreconcilable Docker issues; I don't quite remember the fine details now, other than it had something to do with accessing the host network. All good now, or still meh?


With the release of WSL2 and the new versions of Docker that take advantage of it, I would say that most of the pain around the WSL toolchain has been resolved.

Compared with trying to get anything working in PowerShell or dealing with the slowness of Git Bash or Cygwin, WSL2 is a breeze.

The only pain point I still have is running Linux GUI applications. It requires running an X Window server on the Windows side and letting WSL talk to it over TCP. Apparently MS is working on that now and hopes to have a solution later this year on the slow ring of Windows Update. That'll make running things like Cypress a hell of a lot easier and (fingers crossed) prevent me from squinting at HiDPI displays.
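For reference, a common interim workaround is to point `DISPLAY` at the Windows host, whose IP WSL2 exposes as the nameserver line in its generated `/etc/resolv.conf` (sketch only; assumes an X server such as VcXsrv is already listening on the Windows side, and the IP below is a stand-in):

```python
# Derive the Windows host IP from WSL2's resolv.conf content and build an
# X display address from it. sample_resolv stands in for /etc/resolv.conf.
sample_resolv = "# generated by WSL2\nnameserver 172.28.96.1\n"

def x_display_from_resolv(resolv_text: str) -> str:
    for line in resolv_text.splitlines():
        if line.startswith("nameserver"):
            host_ip = line.split()[1]
            return f"{host_ip}:0"
    raise ValueError("no nameserver line found")

print(x_display_from_resolv(sample_resolv))  # prints "172.28.96.1:0"
```

In a real WSL2 shell the equivalent one-liner would read the file directly, e.g. `export DISPLAY="$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf):0"`.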


Actually, you can self-host VS Code on a server and access it via browser: https://github.com/cdr/code-server


One of the things I hope to use Codespaces for is a CMS editor directly hooked into source control and its permissions (given static sites already get generated by the CI), instead of hosting some admin UI on a third-party server that needs to deal with all the authentication flows, without sacrificing ease of use for non-devs.

If all goes well, it could even customize the Codespace with VSCode plugins to fit customer-specific needs (but I have yet to see what limitations there are, luckily by now I'm quite used to writing plugins for my local VSCode).

So I hope to be in the beta sooner or later.


My problem (so far) with GitHub for CMS is no preview. Maybe preview could be added to Codespaces as a VSC plugin. It needs to access all the media I'm adding as well. Hmmm, maybe I will look into this :)


AFAIK VSC is explicitly not iOS/Android friendly. Maybe Codespaces will end up changing that, but they make a point of Monaco (the editor portion of VS Code) not being designed for tablets/mobile:

https://microsoft.github.io/monaco-editor/


As long as it allows creating custom plugins, the sky is the limit.

And when I say editor, I don't just mean editing marketing blurb for some website, you could have something like Three.js's online editor to make 3D scenes for example.

I'd imagine it would be useful for remote learning as well, like a super-powered Codepen.


I’d bet this stuff is mostly popular because of really really broken platforms like iOS.

Also OT: Wow. That’s an entirely new level of broken web design. The links don’t even work! How can you mess up HTML this badly?


Which ones? We're a pretty lean startup, so if you've found some errors, I'd like to know.


(Sorry for the long delay, I struggle with compulsive Hacker News usage and have noprocrast on.)

On iOS none of the links on that page appeared to work; coming back with Firefox they all do.


Thanks for coming back to clarify, glad to hear it's not broken. I'll check the iOS compatibility anyway.


The pendulum swings again.

Remember mainframes? Remember PCs? Remember thin clients? Remember PCs again? Remember this in a couple of years as the pendulum starts to swing back.


It’s not a pendulum; the industry is always moving towards two goals: miniaturization (for portability) and centralization (for efficiency).

Sometimes it appears to go “backwards” towards decentralization when there’s a shift in form factor. This is just to avoid getting stuck at a local optimum.

Eventually we’ll get to the global optimum where everything’s centralized but available from everywhere.


I love how the pendulum always goes way too far to one side, ending up with enough force to go overboard on the other side again.

I guess it's the most natural way to find balance.

Same story for testing. I think we are at the end of "move fast and break things".


That is a very badly chosen name. Code Spaces was a company that went under because they were (or claimed to have been) hacked.

https://threatpost.com/hacker-puts-hosting-service-code-spac...

If you want to instill confidence this name is not the best start you could have picked.


Nobody (but you) remembers a minor company that went under six years ago. And those that do are unlikely to believe there’s a connection.

