I want to build a large format, real-time, physical music visualizer that could orchestrate an artistic light symphony for any song.
I'm imagining physical visualizers that are columns of multiple, discrete light nodes, each able to have variable brightness and color.
The real-time music processing is the hard part (for me) to crack.
There are some standard tricks here: FFTs, bandpass filters, etc.
But I want to do more: Real-time stem separation, time signature and downbeat tracking, etc.
Imagine hearing Sweet Caroline and, when the horns kick in, the whole installation 'focuses' on the horns and bright yellow light jumps between each column on each horn note, before returning to tracking the bass line or something.
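The "standard tricks" baseline is compact enough to sketch. A minimal example, assuming numpy, a mono PCM frame at 44.1 kHz, and illustrative band edges (everything here is a sketch, not a real installation pipeline):

```python
import numpy as np

SAMPLE_RATE = 44100
FRAME_SIZE = 2048  # ~46 ms per analysis frame

def band_energies(frame, bands=((20, 250), (250, 2000), (2000, 8000))):
    """Return the spectral energy in each (low_hz, high_hz) band of one frame."""
    windowed = frame * np.hanning(len(frame))      # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2  # power spectrum
    freqs = np.fft.rfftfreq(len(frame), 1 / SAMPLE_RATE)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

# Fake "frame": a 100 Hz tone should land almost entirely in the bass band.
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
bass, mid, high = band_energies(np.sin(2 * np.pi * 100 * t))
```

Each band's energy could drive the brightness of one segment of a column; stem separation and downbeat tracking would layer on top of frames like these.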
I've been noodling on this idea for a long time and slowly digging into the music and CS fundamentals. The rise of LLMs might finally be the piece that enables me to close my intelligence gap and finally build this thing...
- Set up VMs locally, on your development machine. (This eliminates hosting costs while still giving you all the technical learning opportunities.) My development machine is macOS, and UTM has been an excellent app for managing these VMs. You can eventually model your VM's configuration on the resources your VPS will have on AWS/DO (e.g. 1GB RAM, 2 vCPUs, etc).
- Learn the basics of Ansible, in order to provision a server (local or remote). I did the course on KodeKloud.com and found it great for getting me going quickly.
- Write Ansible playbooks to provision your local VM as you would want your VPS on AWS/DO/etc to work. Ansible Galaxy is a repository of many community-supplied roles for common tasks/services. You could consult these for best practices on building your own playbooks or totally offload provisioning onto those roles.
- Once you're comfortable getting your local VM setup, point your Ansible playbook at an AWS/DO VM and put it online!
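For flavor, a minimal playbook along those lines might look like this (package list and module choices are just placeholders, not a hardening baseline):

```yaml
# site.yml — provision a local VM the same way you'd provision the VPS
- hosts: all
  become: true
  tasks:
    - name: Install baseline packages
      ansible.builtin.apt:
        name: [ufw, fail2ban, unattended-upgrades]
        state: present
        update_cache: true

    - name: Allow SSH through the firewall
      community.general.ufw:
        rule: allow
        name: OpenSSH

    - name: Enable the firewall
      community.general.ufw:
        state: enabled
```

Run it against the local VM first (`ansible-playbook -i inventory site.yml`), then swap the inventory to point at the cloud host.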
My high-level roadmap has been to build my own Ansible playbook to provision a Ubuntu server to CIS Level 2.
CIS benchmarks define security controls for a few of the more common aspects of DevOps work (e.g. Ubuntu OS hardening, AWS account security, Docker hosts, etc). They're freely available, and there are many well-maintained scripts that can both audit a host and provision it to the standard. I've been using the benchmarks as an easy way to self-teach security (and to validate I've done it correctly). Level 2 is the standard used to handle financial information and medical records, so it's probably the most secure you'll ever need to go.
Once I have a provisioning playbook to stand up a secure host with some services (Nginx, Redis, etc), the next goal on my roadmap is to learn Terraform to configure and deploy a personal cloud of services to AWS/DO/etc.
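The Terraform half could be as small as this sketch, assuming the DigitalOcean provider and an SSH-key variable declared elsewhere:

```hcl
# main.tf — one droplet, then hand off to the same Ansible playbook
resource "digitalocean_droplet" "host" {
  image    = "ubuntu-22-04-x64"
  name     = "personal-cloud"
  region   = "nyc3"
  size     = "s-1vcpu-1gb"
  ssh_keys = [var.ssh_key_fingerprint]
}

output "host_ip" {
  value = digitalocean_droplet.host.ipv4_address
}
```

`terraform apply` stands up the droplet, and the `host_ip` output feeds straight into the Ansible inventory from the provisioning step.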
Similarly, Google purchased reCaptcha and ended up harnessing the stream of human interactions it receives to, among other things, classify all of their Street View content (e.g. select the stop lights/bridges/license plates/etc).
I've always wondered: wouldn't they need to have already classified those captchas for them to determine whether the user has made the right selections? If so, doesn't that defeat its "real" purpose of getting people to do that classification work for them?
IIRC, back when it was text, you were shown two words and could type anything for one of the words (typically the easier to read word) and the other word would be a word they’d intentionally blurred a bit to use for the actual captcha check.
That's why you have to do multiple tasks in one verification. Some are against a known ground truth and used as verification, but you don't know which one.
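That grading scheme is simple to model. A toy sketch (not reCAPTCHA's actual implementation — names and logic are illustrative):

```python
def grade(challenges, answers):
    """challenges: list of (image_id, known_label_or_None) pairs.
    Items with a known label verify the user; items with None harvest
    new labels. The user can't tell which items are which."""
    # User passes only if every known-answer item matches ground truth.
    verified = all(
        answer == label
        for (_, label), answer in zip(challenges, answers)
        if label is not None
    )
    # Answers to unknown items become candidate labels for the dataset.
    harvested = {
        image_id: answer
        for (image_id, label), answer in zip(challenges, answers)
        if label is None
    }
    return verified, harvested

challenges = [("img1", "bridge"), ("img2", None), ("img3", "stoplight")]
ok, new_labels = grade(challenges, ["bridge", "crosswalk", "stoplight"])
# ok is True; "crosswalk" is recorded as a candidate label for img2
```

In practice a label would only be trusted once many independent users agree on it, which is also how the service bootstraps new ground truth.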
Looking at the Beeper Mini announcement [1], they clearly state that a user doesn't need an Apple ID to register their phone number and send/receive iMessages. Also, they describe direct, device-to-Apple interactions.
However, this article says:
> IDS is used as a keyserver for iMessage...
> The first step in registering for IDS is getting an authentication token. This requires giving the API your Apple ID Username and Password.
> After registering with IDS, you will receive an “identity keypair”. This keypair can then be used to perform public key lookups.
So how does the Beeper Mini app take an arbitrary Android phone number, register public keys for it with IDS, and perform public key lookup of recipients... all without ever using an Apple ID?
EDIT - It looks like the answer here is the 'SMS Gateway' which is virtually undescribed in the OP article or anywhere on [1]. Guess that's the secret sauce.
I just downloaded Beeper Mini on my Android phone, and after showing a failure error when trying to send an SMS to Apple to register my phone, it popped up asking for my Apple ID.
I've hesitated to ever attempt this because every residential ISP I've had refuses to offer static IP addresses.
Likewise, deploying a server in a Google/Amazon/Microsoft datacenter that could be surreptitiously monitored defeats the theoretical privacy benefits of hosting a mail server on-premises in one's own residence.
However, today I looked into the newish movement of 'confidential computing' in the cloud (where data in use - i.e., in memory - is encrypted and cannot be observed from the OS or hypervisor).
I wonder, then, if one solution is to build a secure VM that acts as a simple forwarding proxy to one's home server, is assigned a static IP by the datacenter, and is deployed on one of these confidential computing instances, ensuring full end-to-end data privacy and control.
Is confidential computing needed if all you're doing is forwarding packets? Your cloud provider can see the packets as they leave and enter your VM.
If I were building this, I'd stand up a VPN (choose your favourite protocol) between the cloud VM and the home server. For the cloud end, pick something from lowendbox/lowendtalk or just use the cheapest Vultr instance. NAT port forwarding down the tunnel back to your server at home - just a few iptables rules. Job done. Bonus points if you get an IPv6 /64 and route that down the tunnel too.
It's possible to use policy routing at home so that traffic that needs to go down the VPN does, and traffic that can egress through your home internet can too. Replies to incoming connections that came down the tunnel go back up the tunnel. Outgoing SMTP connections go down the tunnel. Outgoing HTTP goes out your normal internet.
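Under those assumptions, the "few iptables rules" on the cloud VM might look roughly like this (interface names and the tunnel address are placeholders; requires root):

```shell
# On the cloud VM: forward inbound SMTP down the tunnel to the home server.
# eth0 = public interface, wg0 = VPN tunnel, 10.0.0.2 = home server's tunnel IP.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 \
         -j DNAT --to-destination 10.0.0.2:25
iptables -A FORWARD -i eth0 -o wg0 -p tcp --dport 25 -j ACCEPT

# Masquerade so replies route back through the VM automatically.
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE

# Don't forget to enable forwarding:
sysctl -w net.ipv4.ip_forward=1
```

Masquerading hides the original client IP from the home server; an alternative is to drop the MASQUERADE rule and rely on the policy routing described above, so replies to tunneled connections go back up the tunnel with client IPs intact.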
If surreptitiously monitoring your stuff in a cloud is in your threat model, what makes you think that anything you can do in a general home environment is beyond the reach of a dedicated adversarial actor?
NoIP/DDNS/etc still means a dynamic IP address, with possibly broken reverse DNS, from a dynamic DNS pool.
To send email you need a static IP with correct reverse DNS, or other people's servers will reject your mail (best case) or silently mark it as spam. Welcome to the real world of email deliverability, the worst part of running your own mail server.
So use an SMTP relay service for outgoing mail. Most of them even have free tiers. I've been using one with a dynamic IP for years, albeit one where the IP doesn't change often.
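Assuming Postfix on the home server, the relay hookup is only a few lines of main.cf (relay hostname and credentials path are placeholders):

```ini
# /etc/postfix/main.cf — send all outbound mail via the relay service
relayhost = [smtp.relay.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

The relay credentials go in /etc/postfix/sasl_passwd, hashed with `postmap`; the relay's static IP and reverse DNS then handle deliverability for you.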
On the receiving end I use a super inexpensive spam filtering service too, MX Guarddog. If my IP suddenly changes, it queues up mail until host resolution succeeds again.
Hi, I'm Josh. I've been developing iOS apps for the last 7 years including Tinder, CrowdRise, and LivingSocial. I've also spent time at Apple, Google, Microsoft, and an acquired startup. (I'm most proud of rebuilding the Tinder card stack and watching it spend over a year in production (2015-2016).)
I can help your team with...
- Designing and developing stable iOS features, quickly
- Mentoring developers on best practices in mobile app development
- Setting up a CI pipeline, code reviews, and unit testing
When I worked at Apple, it was fun to walk in some buildings and notice the California 'cancer warning' sign posted in the lobby. From that, you knew there was probably a hardware engineering lab somewhere in that building, because I believe they had to post those due to carcinogens released from soldering. (And, inside of Apple, that kind of knowledge would only be shared on a need-to-know basis, otherwise.)
No, they put the signs on every building everywhere because it’s cheaper than not putting them up and possibly getting fined $2,500 for everyone who ever walked inside.
It’s impossible to construct a building which would not contain some element which would trigger the legal requirement of posting the sign.
The sign conveys absolutely no information. Look closer and you'll see they're posted on every commercial building run by someone smart enough to know to post them. The more entertaining places I've seen them are grocery stores and preschools.
I'd like an OSS solution that allows me to deploy an arbitrary server-side service - be it a Ruby or Python application, or even a package like OpenVPN - to some cloud infrastructure, easily.
This solution should spin up a hardened OS distro (CIS-compliant, maybe?), provision it with my arbitrary services (using Ansible or Chef or something), and deploy it to AWS or some cloud infrastructure for me (using Terraform or something).
All these component pieces exist, but nothing ties all of them together for an easy deploy for a front-end developer like me.
(And, I know things like Heroku and CodeDeploy exist, but I dislike lock-in and they nearly universally come with their own restrictions, like lack of support for server-side Swift applications or custom services like OpenVPN or git-annex.)
EDIT - I'm strongly considering taking some time off to write this soon, so get in touch if this is something you're interested in! Contributing or using!
There's an overlap with what BOSH does. Starts with a stemcell, compiles your packages against it, spins up whole VMs configured with those packages.
It also adds monitoring for both VMs and processes.
Don't underestimate what you get from a PaaS like Heroku, though. If you're able to stick to 12-factor apps, the lock-in is pretty mild -- you should be able to hoist your skirts and move to Cloud Foundry or even OpenShift without too much pain.
Disclosure: I work for Pivotal, we work on BOSH and Cloud Foundry. We compete with Red Hat and Heroku.
We're actually about to release a product that does exactly this! We're in private beta right now, but will have something available to the public soon. If you're interested, check out www.nucleus.codes. And if you want to join the beta, email us at hi@nucleus.codes.