You linked to the OpenCog linkparser. Just out of curiosity, what are you using it for?
I also used to keep a beefy EC2 or GCP instance set up with Jupyter, custom kernels (Common Lisp, etc.), and so on. Being able to start and stop it quickly made it really cost-effective.
I ended up getting a System76 GPU laptop a year ago, though, and it is so nice to be able to iterate locally; the speedup is especially noticeable for quick experiments.
That said, a custom EC2 or GCP instance that is fully set up and easy to start/stop is probably the cost-effective route, unless a vanilla Colab environment already does everything you need.
I just hate n-grams, is the short answer: tear apart the input with Link Grammar, then use the arc and modifier info as input to the statistical processes. So far it's too early to say it works better, but it sure as hell is more interesting. :-) Sadly I have to keep my instances running all the time; they eat all of the public Reuters news feeds. Side note: additional information can be obtained by taking the top N parse candidates for a given sentence and collapsing them into a graph. I get a pretty good "the parser has gone insane" signal out of that.
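For anyone curious what "collapsing the top-N parses into a graph" could look like: here's a minimal sketch, assuming each parse candidate is represented as a set of (word, word, link-label) edges. The agreement metric and all names here are my own guesses at the idea, not the commenter's actual code; the intuition is just that when the candidates barely share any links, the parser has probably gone insane.

```python
from collections import Counter

def collapse_parses(parses):
    """Union the link sets of the top-N parse candidates into one
    weighted graph: edge -> number of candidates containing it."""
    graph = Counter()
    for links in parses:
        graph.update(set(links))
    return graph

def agreement(parses):
    """Rough agreement score over candidates: 1.0 when every candidate
    produces identical links, approaching 1/N when they all disagree."""
    if not parses:
        return 0.0
    graph = collapse_parses(parses)
    n = len(parses)
    total = sum(graph.values())
    if total == 0:
        return 0.0
    # For each edge, c/n of the candidates agree on it; weight each
    # edge by its multiplicity c, giving sum(c*c) / (total * n).
    return sum(c * c for c in graph.values()) / (total * n)

# Three candidates that mostly agree -> high score.
sane = [
    {("the", "dog", "D"), ("dog", "barks", "S")},
    {("the", "dog", "D"), ("dog", "barks", "S")},
    {("the", "dog", "D"), ("dog", "barks", "Ss")},
]
# Candidates with no overlap at all -> low score: parser gone insane.
insane = [
    {("a", "b", "X")},
    {("b", "c", "Y")},
    {("c", "d", "Z")},
]
```

With these toy inputs the `sane` set scores well above the `insane` one, so a simple threshold on the score works as the sanity check.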