I remember being taught to use yacc in our compiler course because "writing it by hand is too hard". But it looks like Ruby joins the growing list of languages with hand-written parsers; apparently, working with generated parsers turned out to be even harder in the long run.
That said, replacing a ~16k line parse.y[1] with a 22k line prism.c[2] is a pretty bold move.
[1] https://github.com/ruby/ruby/blob/master/parse.y
[2] https://github.com/ruby/prism/blob/main/src/prism.c
It seems like you're ignoring the context/environment. Ruby has enough advanced developers, a large enough test suite, and enough people who care about performance that it can tackle the parser as a longer project regardless of its complexity. The same will apply to other popular languages. But it won't apply to smaller projects with very localised parser use. In those cases writing anything custom would be a waste of time (and would potentially introduce bugs solved years ago in generators).
Having tried both on solo projects, I disagree: like other commenters here, I've found parser generators to be a big waste of time.
Writing a parser by hand requires understanding the theory of parsing and understanding your implementation language. Writing a parser with a parser generator requires understanding the theory of parsing, your implementation language, and a gigantic black box that tries unsuccessfully to abstract away the theory of parsing.
The time spent learning and troubleshooting the black box is almost always better spent putting together your own simple set of helper methods for writing a parser and then using them to write your own. The final result ends up being far easier to maintain than the version where you pull in a generator as a dependency.
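For a concrete sense of what those "helper methods" can look like, here is a minimal sketch (all names made up, not from any particular project): a token cursor plus accept/expect, with one method per grammar rule on top.

```
class Parser
  def initialize(tokens)
    @tokens = tokens
    @pos = 0
  end

  def peek = @tokens[@pos]

  # Consume the next token if it matches; otherwise stay put and return nil.
  def accept(kind)
    tok = peek
    return nil unless tok && tok[:kind] == kind
    @pos += 1
    tok
  end

  def expect(kind)
    accept(kind) or raise "expected #{kind}, got #{peek.inspect}"
  end

  # expr := NUMBER ('+' NUMBER)*
  def parse_expr
    value = expect(:number)[:value]
    value += expect(:number)[:value] while accept(:plus)
    value
  end
end

tokens = [{ kind: :number, value: 1 }, { kind: :plus }, { kind: :number, value: 2 }]
Parser.new(tokens).parse_expr # => 3
```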
Parser generators handle a lot of edge cases for you and are battle tested.
Unless I had a relatively simple grammar or had very strict performance requirements (like in the case of Ruby), I would not trust a hand rolled parser on a CFG by someone who isn’t dedicated to the craft. (PEGs are simpler so maybe).
I’ve written recursive descent parsers by hand and they look very simple until you have to deal with ambiguous cases.
The big plus of parser generators is that they report ambiguities. Handling conflicts is a pain but explicit.
Do people who write predictive recursive descent parsers (LL(k)) really calculate first/follow sets by hand? What if the grammar requires backtracking?
Arbitrary backtracking in a recursive descent parser is very easy with exceptions.
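A hedged sketch of that idea in Ruby (the helper names are invented): wrap each alternative in a block, and on a parse failure rewind the position and fall through to the next alternative.

```
class ParseError < StandardError; end

State = Struct.new(:tokens, :pos)

# Try one alternative; if it raises, rewind so the next alternative starts fresh.
def attempt(state)
  saved = state.pos
  yield
rescue ParseError
  state.pos = saved
  nil
end

def parse_number(state)
  tok = state.tokens[state.pos]
  raise ParseError, "expected a number" unless tok.is_a?(Integer)
  state.pos += 1
  tok
end

def parse_word(state)
  tok = state.tokens[state.pos]
  raise ParseError, "expected a word" unless tok.is_a?(String)
  state.pos += 1
  tok
end

state = State.new(["hello", 42], 0)
# Ordered alternatives: the first success wins, failures back up cleanly.
attempt(state) { parse_number(state) } || attempt(state) { parse_word(state) } # => "hello"
```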
However, there's also an argument here that if the grammar is too complicated to be parsed with recursive descent, it's probably just too complicated in general and should be simplified if possible. Obviously you don't always have this option when you're dealing with an external grammar, but for your own PL, you can design around that. Most of Wirth's languages are good examples; Pascal is famously LL(1).
Recursive descent parsers can have ambiguities, but like PEGs, they just resolve the ambiguity by arbitrarily picking one of the alternatives.
A famous example is ALGOL 60, which had the dangling else ambiguity (https://en.wikipedia.org/wiki/Dangling_else), but this was not discovered until the language had already been published. If they had been using a parser generator tool, it would have warned them that the grammar was not LL or LR.
Overall I do still prefer hand-written recursive descent parsers, but I do find this to be one of the biggest downsides of not using parser generators.
Of course the Algol 60 committee could not have used a parser generator tool because they were not invented until later in the 1960s, but your point is valid for anyone developing a language after that time.
There's nothing to stop you from writing out a grammar in some form that is intelligible to a verification tool and then implementing the grammar by hand. I almost always write out the grammar anyway because that's the design—without it I'm flying blind. The cost of the generator isn't writing out the grammar, it's in using the runtime code it generates, which is optional even if you want to use it for verification.
Ruby's syntax is also not trivial to parse and isn't set in stone either. At some point it was simply decided that trying to maintain the status quo was worse than attempting a rewrite which could bring in some extra gains, either through performance or having an easier time tweaking the grammar.
The biggest improvement from this (besides maybe performance) is that it should enable much better manually programmed syntax error messages. Those generated by yacc were pretty shit.
This is generally my #1 reason for using a manual parser — nobody has yet made a pretty good syntax error handling / reporting for parser generators or parser combinators.
It's genuinely very complex — I read the whole literature on that as of 2019 (there's surprisingly little). You basically have to inject custom logic, though there are a few heuristics that you can prepackage and can be useful in a lot of places. But the custom aspect of it means this doesn't play nice with traditional LL/LR parser generators. It could be done for parser combinators (PEG etc) however. Didn't have enough time in my PhD thesis to play with this, and I moved on to other things, but I'm hoping someone will make this eventually.
I'm pretty sure the only reason people ever used parser generators is that it allows a language that vaguely resembles the formal description of the target language. I always found them very confusing to write, confusing to debug, and much less efficient than writing your own. It's actually pretty straightforward once you get the tokenization and lookahead working.
Agreed. Parser generators are a magic black box. Parsing is not too difficult, there is some actual computer science in some spots, but I think parsing should be a core complacency of a programming language to unlock full potential.
Line count, like any metric, gives you a quick idea of some quantity, and that's it. To start making sense of what it means, you need to be more acquainted with the topic at hand.
It's not that uncommon to have an implementation that is lengthier but follows an obvious pattern, while the smarter, more compressed implementation is not necessarily trivial to grasp even for people seasoned in metaprogramming, reflection and so on.
Not to say that's what happened here; the point was just to recall that line count is not an absolute, linear metric.
> I remember being taught to use yacc in our compiler course because "writing it by hand is too hard". But looks like Ruby joins the growing list of languages that have hand-written parsers, apparently working with generated parsers turned out to be even harder in the long run.
I've been writing parsers for simple (and sometimes not so simple) languages ever since I was in middle school and learned about recursive descent parsing from a book (I didn't know it was called that back then; the book had a section on writing an expression parser and I just kept adding stuff) - that was in the 90s.
I wonder why yacc, etc were made in the first place since to me they always felt more complicated and awkward to work with than writing a simple recursive descent parser that works with the parsed text or builds whatever structure you want.
Was it resource constraints that didn't really exist anymore by the 90s, but whose presence in earlier decades ended up shaping how parsers were meant to be written?
Parser generators will tell you whether the grammar given to it is well-formed (according to whatever criteria the parser generator uses).
When hand-rolling a parser, there could be accidental ambiguities in the definition of your grammar, which you don't notice because the recursive descent parser just takes whatever possibility happened to be checked first in your particular implementation.
When that happens, future or alternative implementations will be harder to create because they need to be bug-for-bug compatible with whatever choice the reference implementation takes for those obscure edge cases.
> When hand-rolling a parser, there could be accidental ambiguities in the definition of your grammar, which you don't notice because the recursive descent parser just takes whatever possibility happened to be checked first in your particular implementation.
Is that a problem? Just use a grammar formalism with ordered choice.
My hot take is that the allure of parser-generators is mostly academic. If you're designing a language it's good practice to write out a formal grammar for it, and then it feels like it should be possible to just feed that grammar to a program and have it spit out a fully functional parser.
In practice, parser generators are always at least a little disappointing, but that nagging feeling that it _should_ work remains.
Edit: also the other sense of academic, if you have to teach students how to do parsing, and need to teach formal grammar, then getting two birds with one stone is very appealing.
It is not academic. It is very practical to actually have a grammar and thus the possibility to use any language that has a parser generator. It is very annoying to have a great format but no parser and no official grammar available, and to be stuck with whatever tooling exists, because you would have to come up with a completely new grammar to implement a parser.
I fully agree that you need to have a grammar for your language.
> and thus the possibility to use any language that has a parser generator.
See, this is where it falls down in my experience. You can't just feed "the grammar" straight into each generator, and you need to account for the quirks of each generator anyway. So the practical, idk, "reusability"... is much lower than it seems like it should be.
If you could actually just write your grammar once and feed it to any parser generator and have it actually work then that would be cool. I just don't think it works out that way in practice.
Good error reporting gets really tricky with generated parsers. That said, it can be a nice time saver for smaller things like DSLs and languages early on.
Even then, yacc and bison are pretty solid overall. I believe Postgres still uses a yacc grammar today, as another high profile example. I'd argue the parsing of SQL is one of the least interesting things an RDBMS does, though.
I can only imagine working with generated parsers becoming more difficult if the syntax of a language is highly ad hoc or irregular, not elegant like concatenative or lispy languages or Smalltalk style, which is ironic, given Ruby's history. Maybe they added too many bells and whistles.
In every other case, having a grammar in the form of parser generator macros should be better and preferred, since it is portable to other languages and tools and lends itself to being more readable (with good naming).
Ruby is getting more and more awesome these last few years, especially when it comes to performance. Since 3.3 I've been running all my apps with --yjit, it makes a tremendous difference!
No wait, I know Oracle has a bad rep, which is deserved, but TruffleRuby and GraalVM are truly open-source, not open-core. They actually did something great this time.
> You will need to sign the Oracle Contributor Agreement (using an online form) for us to able to review and merge your work.
Read my lips:
N. O.
Read the CLA. This is a trap, do not get yourself or your company caught in it. It is open-source for now, until it gets enough traction. Then the rug will be pulled, the code will be relicensed as well as any further development or contributions.
This is insane, I cannot believe anyone can read and understand this and not consider the abuses of power it allows:
> 2. With respect to any worldwide copyrights, or copyright applications and registrations, in your contribution:
> ...
> you agree that each of us can do all things in relation to your contribution as if each of us were the sole owners, and if one of us makes a derivative work of your contribution, the one who makes the derivative work (or has it made) will be the sole owner of that derivative work;
> you agree that you will not assert any moral rights in your contribution against us, our licensees or transferees;
> you agree that we may register a copyright in your contribution and exercise all ownership rights associated with it; and
> you agree that neither of us has any duty to consult with, obtain the consent of, pay or render an accounting to the other for any use or distribution of your contribution.
I would go as far as to state that anyone who contributes any code to this works against open source (by helping out an obvious rugpull/embrace-extend-extinguish scheme that diverts adoption and contribution from cruby/jruby) and against their fellow developers (by working for free for Oracle).
For what it's worth, in France so-called moral rights are "inaliénables", so you legally can't give them up, and I wouldn't be surprised if this holds in most Roman civil law countries (most countries in the world). Just like you can't decide to give up all your civil rights and become a slave of the nice company that promised to treat you well and free you of the hurdle of taking decisions by yourself. So IANAL, but this contract is not only ignominious; it is actually trying to require authors to make promises that they cannot legally make.
That's completely normal for cathedral-style open source development. The FSF themselves required copyright assignment (not just a CLA) if you wanted to contribute to GNU projects (e.g. GCC) for many years; several GNU projects still do.
You only need to sign the CLA if you want to contribute to upstream, you can maintain your own fork if you want, and the code that is open source today will always be open source. Frankly I'd say Oracle is less likely to close it up in a scramble to try to monetise their open-source assets than smaller companies like Redis Labs - Oracle has plenty of products and makes their money from consulting rather than from selling code directly.
How can you compare the FLA with Oracle's CLA??? From the FSFA:
> An FLA offers a special clause against this kind of situation, in order to protect the Free Software project against potentially malicious intentions of the Trustee. According to this provision, if the Trustee acts against the principles of Free Software, all granted rights and licences return to their original owners. That means that the Trustee will be effectively prevented from continuing any activity which is contrary to the principles of Free Software.
You can name a few more rugpulls made possible by contributor agreements that permitted blatant abuse of power, and Oracle is also not innocent in this. Off the top of my head I remember the VirtualBox extensions fiasco. Oracle changed the license then started sending bills to companies.
I don't know what the "FLA" or "FSFA" is (are they real things or is this an AI-generated comment?), but the FSF traditionally required copyright assignment, which gave them all the rights in Oracle's CLA and more.
A programme that seemingly only existed for one significant project, and is not open to new projects. Interesting, but hardly representative of the free software movement in general.
MySQL was forked and the fork is the defacto standard shipped by linux distros. To me the only MySQL that existed was the one by Sun, now MariaDB has completely succeeded it.
Do you see the licensing/distribution clusterfuck with Java as a good example of open-source stewardship by Oracle? Which Java distro are you using?[1]
Do you see the Google v. Oracle Java API copyright case as a good example of open-source stewardship by Oracle?
You know what else is prudently (/s) stewarded by Oracle? ZFS. That is why it is still not a part of the Linux kernel. A company that is basically a meme with the amount of lawyers it employs would easily find a safe way to allow integration into the Linux kernel if only they wanted to contribute.
The examples above show exactly why Oracle has a decidedly bad reputation. On top of that, their CLA enshrines their shit treatment of the open-source movement and their free slave labour^W^W^W open-source contributors.
> Do you see the Google v. Oracle Java API copyright case as a good example of open-source stewardship by Oracle?
100% YES, given the clusterfuck support of standard Java on Android.
It is no different from what Microsoft did with J++ on Windows, and just as Microsoft came up with .NET, Google came up with the Kotlin migration; ironically, they keep relying on the standard Java that they don't support for IntelliJ, Gradle, and everything else that powers the Android SDK on the desktop.
Google could have avoided the lawsuit if they had bought Sun. After torpedoing it with Android, they thought no one would buy the company and that they were safe from paying anyone - wrong call.
Certainly! I bet it was part of a sort of "this is my Christmas present to all" sort of thing; and, as happens sometimes, little mistakes, like the need for the quick bugfix happen :)
That has been the story of every dynamic language since forever, thankfully the whole AI focus has made JITs finally matter in CPython world as well.
Personally I have learnt this lesson back in 2000's, in the age of AOLServer, Vignette, and our own Safelayer product. All based on Apache, IIS and Tcl.
We were early adopters of .NET, when it was only available to MSFT Partners and never again, using scripting languages without compilers, for full blown applications.
Those learnings are the foundations of OutSystems, same ideas, built with a powerful runtime, with the hindsight of our experiences.
The push for Python performance and JIT compilation has little to do with AI and more to do with Python's explosion in adoption for backend server applications in the 2010s, as well as the dedication of smaller projects like PyPy that existed largely because it was possible to make them exist. The ML/AI boom helped spread Python even farther and wider, yes, but none of the core language performance improvements are all that relevant for ML or AI.
As another commenter pointed out, the performance bottlenecks in AI specifically have essentially nothing to do with CPython runtime performance. The only exception is in the pre-processing of very large text corpora, and even that has hardly been a blip on the radar of the people working on CPython performance.
Moreover, most of the "Python performance" projects that do sit closer to machine learning use cases (Cython-Numpy integration, Numba, Nuitka) are more or less orthogonal to the more recent push for Python interpreter performance.
Cython itself and MypyC are mainly relevant because they are intended to be general-ish purpose performance boosters for CPython, and in doing so helped fill the need for greater performance in "hot and loopy" code such as network protocols, linters, and iterators. Cython also acted as a convenient glue layer for ad-hoc C library binding. But neither project is all that closely related to AI or to the various JIT compilers that have arisen over the years.
Not at all, given that Facebook's and Microsoft's involvement in making CPython folks finally accept a JIT has to be part of the story, coupled with NVidia's and Intel's work on GPU JIT DSLs for Python.
Yeah but how much of the Microsoft and Facebook effort was due to AI directly, as opposed to the general popularity of Python? which is undoubtedly driven nowadays by AI, but indirectly.
The C/C++ is shipped in the form of well-established libraries like Numpy and PyTorch. Very few end users ever interact with the C/C++ parts, except for specialists with special requirements, and library contributors themselves.
Can you name specific "un-fashionable" AI projects that are dependent on Python code for things that have any significant performance impact, which are seeing significant benefits from Python JIT implementations?
> Personally I have learnt this lesson back in 2000's, in the age of AOLServer, Vignette, and our own Safelayer product. All based on Apache, IIS and Tcl.
Woah, your mention of “Vignette” just brought back a flood of memories I think my subconscious may have blocked out to save my sanity.
What's a scripting language? Also I'm not sure for TCL (https://news.ycombinator.com/item?id=24390937 claims it's had a bytecode compiler since around 2000) but the main python and Ruby implementations have compilers (compile to bytecode then interpret the bytecode). Apparently ruby got an optional (has to be enabled) jit compiler recently and python has an experimental jit in the last release (3.13).
"... the distinguishing feature of interpreted languages is not that they are not compiled, but that any eventual compiler is part of the language runtime and that, therefore, it is possible (and easy) to execute code generated on the fly."
No, I worked with the founders at a previous startup, Intervento, which became part of an EasyPhone acquisition, which got later renamed into Altitude Software alongside other acquisitions.
They eventually left and founded OutSystems with what we learned since the Intervento days; OutSystems is one of the greatest startup stories in the Portuguese industry.
This was all during the dotcom wave of the 2000s; I instead left for CERN.
During their black friday / cyber monday load peak, Shopify averaged between ~0.85 and ~1.94 back-to-back RPS per CPU core. Take from that what you will.
You seem to imply that everything they run is Ruby, but they're talking about 2.4 million CPU cores on their K8s cluster, where maybe other stuff runs as well, like their Kafka clusters [1] and Airflow [2]?
Obviously you meant for the whole infrastructure: ruby / rails workers, Mysql, Kafka, whatever other stuff their app needs (redis, memcache, etc), loadbalancers, infrastructure monitoring, etc.
Just to reiterate stuff said in the other comments because your comment is maybe deliberately misrepresenting what was said in the thread.
Their entire cluster was 2.4 million CPU cores (without more info on what the cores were). This includes not only Ruby web applications that handle requests, but also other infrastructure. Asynchronous processing, database servers, message queue processing, data workflows etc, etc, etc. You cannot run a back of the envelope calculation and say 0.85 requests per second per core and that is why they're optimising Ruby. While that might be the end result and a commentary on contemporary software architecture as a whole, it does not tell you much about the performance of the Ruby part of the equation in isolation.
They had bursts of 280 million rpm (4.6 million rps) with average of 2.8 million rps.
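Back-of-the-envelope with the figures quoted in this thread (and with all the caveats above about what else runs on those ~2.4 million cores):

```
peak_rpm = 280_000_000
cores    = 2_400_000

peak_rps = peak_rpm / 60.0   # => ~4.67 million requests per second at peak
peak_rps / cores             # => ~1.94 requests per second per core at peak
```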
> It does not tell you much about the performance of the Ruby part of the equation in isolation.
Indeed, it doesn't. However, it would be a fairly safe bet to assume it was the slowest part of their architecture. I keep wondering how the numbers would change if Ruby were to be replaced with something else.
Shopify invest heavily in Ruby and write plenty of stuff in lower level languages where they need to squeeze out that performance. They were heavily involved in Ruby's new JIT architecture and invested in building their own tooling to try and make Ruby act more like a static language (Sorbet, Bootsnap).
Runtime performance is just one part of a complex equation in a tech stack. It's actually a safe bet that their Ruby stack is pretty fucking solid because they've invested in that, and hiring ruby and JS engineers is still 1000x easier than hiring a C++ or Rust expert to do basic CRUD APIs.
Since we're insinuating, I bet you that Ruby is not their chief bottleneck. You won't get much more RPS if you wait on an SQL query or RPC/HTTP API call.
In my experience when you have a bottleneck in the actual Ruby code (not speaking about n+1s or heavy SQL queries or other IO), the code itself is written in such a way that it would be slow in whichever language. Again, in my experience this involves lots of (oft unnecessary) allocations and slow data transformations.
Usually this is preceded by a slow heavy SQL query. You fix the query and get a speed-up of 0.8 rps to 40 rps, add a TODO entry "the following code needs to be refactored" but you already ran out of estimation and mark the issue as resolved. Couple of months later the optimization allowed the resultset to grow and the new bottleneck is memory use and the speed of the naive algorithm and lack of appropriate data structures in the data transformation step... Again in the same code you diligently TODOed... Tell me how this is Ruby's fault.
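To make the "wrong data structure" point concrete, here is an illustrative (entirely made-up) before/after of the kind of transformation that ends up dominating once the resultset grows - slow in any language, and fixable in any language:

```
Customer = Struct.new(:id, :name)
Order    = Struct.new(:id, :customer_id)

customers = 3.times.map { |i| Customer.new(i, "customer-#{i}") }
orders    = 6.times.map { |i| Order.new(i, i % 3) }

# The O(n * m) shape: a linear scan through customers for every order.
slow = orders.map do |order|
  customer = customers.find { |c| c.id == order.customer_id }
  [order.id, customer.name]
end

# Index once, then O(1) lookups -- same result, very different scaling.
by_id = customers.to_h { |c| [c.id, c] }
fast  = orders.map { |o| [o.id, by_id[o.customer_id].name] }

slow == fast # => true
```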
Another example is one of the 'Oh we'll just introduce Redis-backed cache to finally make use of shared caching and alleviate the DB bottleneck'. Implementation and validation took weeks. Finally all tests are green. The test suite runs for half an hour longer. Issue was traced to latency to the Redis server and starvation due to locking between parallel workers. The task was quietly shelved afterwards without ever hitting production or being mentioned again in a prime example of learned helplessness. If only we had used an actual real programming language and not Ruby, we would not be hitting this issue (/s)
I wish most performance problems would be solved by just using a """fast language"""...
Effective use of IO at such scale implies high-quality DB driver accompanied by performant concurrent runtime that can multiplex many outstanding IO requests over few threads in parallel. This is significantly influenced by the language of choice and particular patterns it encourages with its libraries.
I can assure you - databases like MySQL are plenty fast and e.g. single-row queries are more than likely to be bottlenecked on Ruby's end.
> the code itself is written in such a way that it would be slow in whichever language. Again, in my experience this involves lots of (oft unnecessary) allocations and slow data transformations.
Inefficient data transformations with high amount of transient allocations will run at least 10 times faster in many of the Ruby's alternatives. Good ORM implementations will also be able to optimize the queries or their API is likely to encourage more performance-friendly choices.
> I wish most performance problems would be solved by just using a """fast language"""...
Many testimonies on Rust do just that. A lot of it comes down to particular choices Rust forces you to make. There is no free lunch or a magic bullet, but this also replicates to languages which offer more productivity by means of less decision fatigue heavy defaults that might not be as performant in that particular scenario, but at the same time don't sacrifice it drastically either.
You know, if I was flame-baiting, I would go ahead and say 'there goes the standard 'performance is more important than actually shipping' comment. I won't and I will address your notes even though unsubstantiated.
> Effective use of IO at such scale implies high-quality DB driver accompanied by performant concurrent runtime that can multiplex many outstanding IO requests over few threads in parallel. This is significantly influenced by the language of choice and particular patterns it encourages with its libraries.
In my experience, the bottleneck is mostly on the 'far side' of the IO from the app's PoV.
> I can assure you - databases like MySQL are plenty fast and e.g. single-row queries are more than likely to be bottlenecked on Ruby's end.
I can assure you, Ruby apps have no issues whatsoever with single-row queries. Even if they did, the speed-up would be at most constant if written in a faster language.
> Inefficient data transformations with high amount of transient allocations will run at least 10 times faster in many of the Ruby's alternatives. Good ORM implementations will also be able to optimize the queries or their API is likely to encourage more performance-friendly choices.
Or it could be o(n^2) times faster if you actually stop writing shit code in the first place.
Good ORMs do not magically fix shit algorithms or DB schema design. Rails' ORM does in fact point out common mistakes like trivial n+1 queries. It does not ask you "Are you sure you want me to execute this query that seq scans the ever-growing-but-currently-20-million-record table to return 5000 records as a part of your artisanal hand-crafted n+1 masterpiece(of shit) for you to then proceed to manually cross-reference and transform and then finally serialise as JSON just to go ahead and blame the JSON lib (which is in C btw) for the slowness".
> Many testimonies on Rust do just that. A lot of it comes down to particular choices Rust forces you to make. There is no free lunch or magic bullet, but this also replicates to languages which offer more productivity by means of less decision fatigue heavy defaults that might not be as performant in that particular scenario, but at the same time don't sacrifice it drastically either.
I am by no means going to dunk on Rust as you do on Ruby as I've just toyed with it, however I doubt that I could right now make the performance/productivity trade-off in Rust's favour for any new non-trivial web application.
To summarise, my points were that whatever language you write in, if you have IO you will be from the get go or later bottlenecked by IO and this is the best case. The realistic case is that you will not ever scale enough for any of this to matter. Even if you do you will be bottlenecked by your own shit code and/or shit architectural decisions far before even IO; both of these are also language-agnostic.
Just-in-time compilation of Ruby allowing you to elide a lot of the overhead of dynamic language features + executing optimized machine code instead of running in the VM / bytecode interpreter.
For example, doing some loop unrolling for a piece of code with a known & small-enough fixed-size iteration. As another example, doing away with some dynamic dispatch / method lookup for a call site, or inlining methods - especially handy given Ruby's first class support for dynamic code generation, execution, redefinition (monkey patching).
> In particular, YJIT is now able to better handle calls with splats as well as optional parameters, it’s able to compile exception handlers, and it can handle megamorphic call sites and instance variable accesses without falling back to the interpreter.
> We’ve also implemented specialized inlined primitives for certain core method calls such as Integer#!=, String#!=, Kernel#block_given?, Kernel#is_a?, Kernel#instance_of?, Module#===, and more. It also inlines trivial Ruby methods that only return a constant value such as #blank? and specialized #present? from Rails. These can now be used without needing to perform expensive method calls in most cases.
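A minimal way to see that in action, assuming a Ruby build with YJIT support (official 3.3+ releases can also enable it at runtime rather than only via the command line): turn the JIT on and let a pure-Ruby hot path warm up.

```
RubyVM::YJIT.enable unless RubyVM::YJIT.enabled?   # or start ruby with --yjit

def hot_path(n)
  (1..n).sum { |i| i * i }   # dynamic dispatch the JIT can specialize and inline
end

100_000.times { hot_path(100) }
p RubyVM::YJIT.enabled?   # => true
```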
It makes Ruby code faster than the equivalent C code in CRuby, so they are moving toward rewriting a lot of the core Ruby stuff in Ruby to take advantage of it. The runtime performance improvements make the language much faster.
Same as the benefits of JIT compilers for any dynamic language; makes a lot of things faster without changing your code, by turning hot paths into natively compiled code.
That's certainly not what I get out of what they said.
Shopify has introduced a bunch of very nice improvements to the usability of the Ruby language and their introductions have been seen in a very positive light.
Also, I'm pretty sure both Shopify for Ruby and Facebook for their custom PHP stuff are both considered good moves.
Years back I took over the ownership of the third-party Arch Linux package for ruby-build because the maintainer at the time wasn't using it anymore and was looking to pass it off. At the time, I had no idea that Ruby did a release every Christmas, but I found out a few months later when I got an email mentioning the package was out of date that day. Even though I haven't done much Ruby dev for years now, it's been a small little tradition of mine since then to update the package first thing every Christmas morning and push out the update (basically, just updating the version number in a file in a git repo and then running a couple commands to update the checksums and push the changes; nothing anywhere close to the amount of work that people who actually develop that tool do, let alone the people who work on the language!). I can't help but feel like that farmer from the meme saying "it ain't much, but it's honest work"; I've enjoyed the little tradition I've built up and like thinking that maybe every now and then someone might have noticed and been pleased to get the updates without having to file a notice to remind me to update things (although it's happened a few times since that time years ago, I hope it hasn't been that often!).
Just now, I was surprised to see that the package seems to be getting put into the official Arch repos, so my eight years of very minimal volunteer service seem to be at an end. I still think I'm going to remember doing this and smile a little every Christmas morning for years to come!
1. Wondering 3.4 JIT performance vs 3.3 JIT on production rails.
2. Also wondering what upside could Ruby / Rails gain on a hypothetical Java Generational ZGC like GC? Or if current GC is even a bottleneck anywhere in most Rails applications.
> Also wondering what upside could Ruby / Rails gain on a hypothetical Java Generational ZGC like GC? Or if current GC is even a bottleneck anywhere in most Rails applications.
Ruby's GC needs are likely to be very far from the needs of JVM and .NET languages, so I expect it to be both much simpler but also relatively sufficient for the time being. Default Ruby implementation uses GIL so the resulting allocation behavior is likely to be nowhere near the saturation of throughput of a competent GC design.
Also, if you pay attention to the notes discussing the optimizations implemented in Ruby 3.4, you'll see that such JIT design is effectively in its infancy - V8, RyuJIT (and its predecessors) and OpenJDK's HotSpot did all this as a bare minimum more than 10 years ago.
This is a welcome change for the Ruby ecosystem itself I guess but it's not going to change the performance ladder.
I would expect some measurable improvement given how object-happy rails programming is. It's not uncommon to see 3 layers of models just wrapping a single variable - "objects that could've been functions". Some kind of tiers like generations or per-request pools would be amazing.
I'd have thought allowing _ as a synonym for _1 would have been more aesthetically consistent. That's the path I went with when designing my CL #λ reader macro, personally.
I don't understand the point of it when the `.map(&:upcase)` syntax is shorter. This just seems like yet another syntactic sugar Rubyism that doesn't really add anything.
If it's an alternative to the `|x|` syntax when using only one block variable, then I like that.
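They cover slightly different cases. `&:upcase` only works when the block is a single bare method call; `it` (new in 3.4), like `_1`, also covers blocks that are expressions, while staying shorter than naming the parameter. A quick comparison:

```
words = %w[foo bar baz]

words.map { |w| w.upcase }   # explicit block parameter
words.map { _1.upcase }      # numbered parameter (2.7+)
words.map { it.upcase }      # `it` (3.4)
words.map(&:upcase)          # Symbol#to_proc -- shortest for a bare method call

prices = [10, 20, 30]
prices.map { it * 1.2 }      # an expression: &: can't do this
```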
I started at a company 3 years ago that was on Rails 5.1. After 3 years on and off work I've managed to get it to Rails 6.1. The process is such an incredible nightmare on a large app.
At some point you just have to rip the bandaid off and put any ongoing work on pause until the upgrade is done. Otherwise it'll be another 3 years on and off while you try to do the upgrade but the codebase keeps changing underneath you.
And if that isn't happening and there's no other development on the codebase, why bother upgrading it?
Same thing for me—4 years ago, Rails 4.2. Now on 6.0, work for 6.1 is wrapped up. I did just finish going from Ruby 2.7 to 3.3. Any particular issues you’re having, or just working through the process?
Don’t have the exact details on me but it was just the change for the method params hash thing. The stack trace seems to be pointing places that aren’t the source of the issue, just where it got triggered in some dynamic way.
Probably just need to spend more time understanding exactly what changed and how to convert stuff.
I haven’t worked in ruby or rails in a few years but both seem like they’re in great spots and I’ll be spinning up a new project with Rails 8 soon. Hype
Ruby has the nicest object-oriented design (everything is an object) outside of smalltalk (IMHO).
In contrast to the mess that is Python. For instance, in Ruby it is natural that each or map are methods of Array or Hash rather than global functions which receive an Array or Hash argument.
This goes as far as having the not operator '!' as a method on booleans:
false.! == true
Once you have understood it, it is a very beautiful language.
Stuff like map() is generic iteration, over any structure that exposes iteration. When it's a member function, it means that every collection has to implement map itself basically. When it's separate, the collections only need to provide the interface needed to iterate over it, and the generic map() will just use that.
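Ruby's answer to that is the Enumerable mixin, which sits somewhere in between: the generic map/select/sum live in one module, and a collection only has to provide #each to get all of them. A small sketch (the class name is made up):

```
class NumberBag
  include Enumerable

  def initialize(numbers)
    @numbers = numbers
  end

  # The only iteration primitive the collection itself has to supply.
  def each(&block)
    @numbers.each(&block)
  end
end

bag = NumberBag.new([1, 2, 3, 4])
bag.map { |x| x * 10 }   # => [10, 20, 30, 40]
bag.select(&:even?)      # => [2, 4]
bag.sum                  # => 10
```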
Taking OOP more seriously, this kind of thing should be implemented through inheritance, interfaces, mixins, etc.
Even though I've got used to it, Python has these inconsistencies that sometimes it wants to be more OOP, sometimes it wants to be more FP.
Ruby has chosen OOP as its side, and implements those functional operations nicely as methods (same for Kotlin, for example). That makes it easy to compose them:
# Sum of the squares of the even elements of a list, in Ruby
my_list.filter{|x| x % 2 == 0}.map{|x| x ** 2}.sum
Python could do something like that, but we quickly fall in a parentheses hell:
# The same code, in Python
sum(map(lambda x: x ** 2, filter(lambda x: x % 2 == 0, my_list)))
Languages that have chosen the FP path more seriously implement function composition. They could do the same way as Python, but composition makes it more readable:
# The same code, in Haskell
sum . map (^ 2) . filter (\x -> x `mod` 2 == 0) $ my_list
PS: I know it would be better to use comprehensions in both Python and Haskell, but these are general examples
Kotlin is still rather different though in that it still implements stuff like map and filter outside of specific collection classes. If you look at the definition of, say, List, there's no map & filter there, only iterator. Instead, map & filter are defined as extension functions that work on any Iterable.
So semantically it's actually closer to Python, with the only difference that, since Python doesn't have extension methods, it has to use global functions for this, while Kotlin lets you pretend that those methods are actually members. But this is pure syntactic sugar, not a semantic difference.
And some will say the exact opposite: contrary to what would seem obvious, code is primarily meant to be read by humans, then written by humans. Because you'll spend way more time unfucking code than actually spitting it.
Yes, but it is not fully OO. Something like `if.class` generates an error, as opposed to returning some type such as "Syncategoreme".
That might look really anecdotal, but in practice it's probably the biggest obstacle to providing a fully localized version of Ruby, for example.
The second biggest challenge would probably be the convention of using an initial capital to mark a constant, which requires a bicameral writing system. That is rather ironic given that none of the three writing systems of Japanese is bicameral (it seems fair to exclude romaji here). Though this can be somewhat circumvented with tricks like
```
# Define a global method dynamically
Object.send(:define_method, :lowercase_constant) do
  "This is a constant-like value"
end
```
It's very powerful though which is a bit terrifying. You can literally monkey patch Object at runtime and add methods to every single instantiated object! (I believe this is how rspec works..)
Awesome, but with great power comes great responsibility ;)
RSpec moved from that quite some time ago. Monkey patching nowadays is usually frowned upon, even refinements, which could simulate monkey patching in a limited scope, are rarely used.
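For readers who haven't seen them, a refinement is roughly a monkey patch that is only visible where you opt in with `using` - a rough sketch:

```
module Shouty
  refine String do
    def shout
      upcase + "!"
    end
  end
end

using Shouty      # activates the refinement for the rest of this file
"hello".shout     # => "HELLO!"
# In code that doesn't say `using Shouty`, String#shout simply doesn't exist.
```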
Oh I'm extremely out of date, I was into ruby back when Why's guide was a thing. Maybe I'll revisit it someday if I ever get bored of go paying the rent.
It's the language with the highest ratio of (useful work / LOC), so it's the least verbose language. This makes it very suitable to write and understand complex scripts, because the reduced boilerplate means less cognitive overhead for the programmer. As a result, experienced programmers can be extremely productive with it.
The well-known Rails framework uses this to great effect, however, some people argue that the choice of "convention over configuration" and extensive use of meta-programming, derisively called "magic", make it less suitable for inexperienced teams because they get too much rope to hang themselves and the lack of explicitness starts working against you if you're not careful.
> It's the language with the highest ratio of (useful work / LOC), so it's the least verbose language.
That's not even close to true. Even setting aside APL and its descendants, even setting aside Perl, any of the functional programming languages like Haskell and Scala are less verbose.
(The relative lack of success of those languages should indicate why minimizing verbosity is a poor aim to target.)
Don't just focus on the language syntax, the high ratio of useful work to verbosity is in large part owing to the excellent design of the standard library, which is available without including any headers or downloading third party libraries. This is where it handily beats out any of the alternatives you mention.
I see some people saying that Ruby is too much "magic", while what is magic is Rails. Ruby itself can have its high useful work / LoC ratio thanks to its syntax. For example, you can spawn a thread with:
thread = Thread.new do
# thread code
end
...
thread.join
In this example we can see that it's not magic, only concise.
Fair. I must admit that I'm not aware of the recent features of Java. The last time I really needed it was back when we had to instantiate an anonymous class for callbacks. I still find the block syntax in Ruby cleaner though.
Kotlin is another language that has this Ruby-style blocks for callbacks.
It depends which kind of magic. Everybody love some magic to be in its life, as long as it doesn't reveal to be a curse unintentionally coming out from a good willing wish.
Also you don't want all and everything being the results of spells you don't have a clue how they are cast.
Ruby is something like an "improved" Python, with a better OO system, a code block syntax that makes it easy to use callbacks, more consistent standard libraries, etc. It could be what Python is today.
I wouldn't say niche, but the killer app of Ruby is Rails, a web framework similar to Django. In fact, many people treat them as they are the same. But there are big projects that use Ruby and that are not related to Rails. As far as I remember: Metasploit, Homebrew, Vagrant and Jekyll.
Personally I think Ruby is an amazing language for writing shell scripts. I wrote a blog post about it; you can see it and its discussion here: https://news.ycombinator.com/item?id=40763640
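Not from the linked post, but a flavour of the kind of throwaway script meant here - the standard library alone gets you quite far:

```
#!/usr/bin/env ruby
# Summarise disk usage per file extension under the current directory.
usage = Hash.new(0)

Dir.glob("**/*").each do |path|
  next unless File.file?(path)
  usage[File.extname(path)] += File.size(path)
end

usage.sort_by { |_, bytes| -bytes }.first(10).each do |ext, bytes|
  printf("%-10s %8.1f KiB\n", ext.empty? ? "(none)" : ext, bytes / 1024.0)
end
```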
Can you name one way Ruby has parity with Python? Ruby is a dead language that uses sponsored posts here. Nobody actually uses this since like 2018 but some people are paid to hype it up. Just look at the empty praise. No real applications mentioned.
It's not just the well-known GitHub, Shopify, Chime, Figma, Zendesk, Convertkit (Kit), Coinbase etc.
See the "few" companies here, actively hiring Rubyists for no reason https://rubyonremote.com/remote-companies/
Square, Gitlab, Cisco, Figma, Instacart, Block, Calendly, 1password, and so on
huh. I'm not sure if I understood you right, do you script and configure those in ruby, or have you written them in ruby from scratch? Are the sources available to read/learn from?
Beware that one of the joys of writing these for my own use is that I've only added the features I use, and fixed bugs that matter to me, and "clean enough to be readable for me" is very different from best practice for a bigger project.
I'm slowly extracting the things I'm willing to more generally support into gems, though.
The wm was actually discussed on HN way back. I think once some of my other projects, like the terminal, are a bit more mature (it works for me and I use it for 99%+ of my terminal needs) I might post those too.
The biggest issue with these projects is that I feel uncomfortable pushing a few of them because I make a living of providing development and devops work, and my personal "only has to work on my machine and certain bugs are fine to overlook" projects are very different to work projects in how clean they are etc... But as I clean things up so they're closer to meeting my standards for publication I'll post more.
Rails has some very, very good features that make standing up a CRUD app with an administrative backend _very easy_.
It's also got a bunch of semi-functional-programming paradigms throughout that make life quite a bit easier when you get used to using them.
Honestly, if it had types by default and across all / most of its packages easily (no. Sorbet + Rails is pain, or at least was last I tried), I'd probably recommend it over a lot of other languages.
It's not a 100% compatible replacement, but I've ported a few things with only trivial changes. I didn't say it's a drop-in, just that it's a fine choice.
Compile/test time is ok. It's a few extra seconds to run tests, but hasn't been an issue in practice for me.
I've tended to find Kotlin to be the direction I'm happier going with. It speaks to my particular itches, personally, more effectively. I can absolutely see how it's a very effective choice.
I love Rails and spent a good chunk of my career using it - and I'd recommend it more if only the frontend story wasn't that bumpy over the years with all the variations of asset pipelines.
I wish the TypeScript/React integration was easier. Say what you will but there's no way you can achieve interactivity and convenience of React (et al) UIs with Turbo/Hotwire in a meaningful time.
I converted from webpacker (or rather shakapacker, the continuation after rails moved away from webpacker) to vite_rails recently, and it's been such a breath of fresh air. It's easy to set up, and easier to maintain. Strongly recommended.
Can you elaborate more on this? Years ago, I used to primarily do Rails development. Recently I built some web apps that use a JVM backend (one app uses Java & Spring and the other Kotlin & Micronaut) and a React frontend. One thing I ended up really missing is that the frameworks, especially with a disjointed frontend, don't solve the standard issue of a request submitting an invalid form entry and showing the validation errors on the form. I ended up building my own implementation of that, which of course also requires a convention on message format. Since most apps need to solve this, it's weird to me that frameworks nowadays don't solve it out of the box.
I definitely suggest using vite and the vite ruby gem. Create your Rails app, Create your TS + React app with vite, add the vite gem and done. It does not get better than that. Super fantastic.
The language is incredibly flexible and allows for "DSLs" that are just ruby libraries.
A simple example: `3.days.ago` is a very commonly used idiom in Rails projects. Under the hood, it extends the base Number class with `def days` to produce a duration and then extends duration with `def ago` to apply the duration to the current time.
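A stripped-down sketch of that mechanism (ActiveSupport's real Duration is far more involved; the names here are simplified):

```
class Duration
  def initialize(seconds)
    @seconds = seconds
  end

  def ago(now = Time.now)
    now - @seconds
  end
end

class Integer
  def days
    Duration.new(self * 24 * 60 * 60)
  end
end

3.days.ago # => a Time three days in the past
```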
`yyyy-mm-dd(datestr)` will parse a date str that matches yyyy-mm-dd format. It looks like a special DSL, but it's just Ruby. `dd(datestr)` produces a `DatePart`. Then it's just operator overloading on subtraction to capture the rest of the format and return the parsed date.
That library feels unnecessary, but the entire thing is 100 lines of code. The ease of bending the language to fit a use case led to a very rich ecosystem. The challenge is consistency and predictability, especially with a large team.
Ruby is optimized for developer happiness, and it is not a small thing. Ruby on Rails is optimized to build successful web application businesses as a highly efficient team. It minimizes boilerplate code, and thus, time to market, while giving guidance (the Rails Way) on how to design for growth and scale.
Not really a niche language. Fantastic web server development. A more flexible and powerful language than python—the metaprogramming can be ridiculously powerful (when done well)—without the nonsense of white space sensitivity. ActiveRecord is perhaps the best ORM out there. Rails has tons of functionality to get running.
Overall, a pleasant and expressive language with an incredible community. Python ends up "winning" because of pytorch + pandas, but is (imo) a worse language to work in + with.
...but ruby is whitespace sensitive too. It's hard to notice, because the rules mostly follow intuition, but there are cases where not only a missing newline but the absence or addition of a single space changes the resulting syntax. Currently I only remember the difference in parsing unary vs binary operators, like + and *, and the ternary operator ? : vs the : in symbols, but there are certainly more cases.
Sure, like `a ?b :c` is nothing like `a ? b : c` (I guess the former is actually invalid), but that's obviously not what the previous message was referring to when speaking of Python which uses spaces as main facility to determine block scope.
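For anyone curious, the unary-vs-binary case mentioned above looks like this - whether the space sits before or after the minus changes how the call is parsed:

```
def foo(x = 10)
  x
end

foo - 1   # binary minus: foo() - 1  => 9
foo -1    # unary minus:  foo(-1)    => -1  (Ruby can warn about the ambiguity with -w)
```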
There was a recent thread about parsing Ruby wherein I learned that the % quoting character accepts any character following it as the opening/closing delimiter <https://news.ycombinator.com/item?id=42032212> Crazypants.
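Concretely, that thread is about literals like these - the character right after % (or after %q/%w/etc.) becomes the delimiter:

```
%{a string}           # => "a string"
%!another string!     # => "another string"
%q(single-quoted)     # => "single-quoted"
%w[a list of words]   # => ["a", "list", "of", "words"]
```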
it's by far the best language I've found for writing a quick bit of code to explore a problem, or to do a one-off task. perhaps something about it just fits the way my brain works, but I find it incredibly easy to convert ideas to working ruby without having to think too hard about "okay, how do I actually express this in code".
If I want to make a SSR site using Ruby, are there any good frontend UI libraries that make doing this easier? It’d be nice if there was some Ruby abstraction for writing HTML, CSS, and JS that makes building interactive UIs easier (possibly built on top of HTMX, AlpineJS, etc).
I would suggest taking a look at Phlex (https://www.phlex.fun/). This kind of ruby maximalism is very pleasing to the dev process. For the interaction I'm using hotwire and stimulus. Been using pure Phlex views in production for 2 years now. I wrote Protos (https://github.com/inhouse-work/protos) which is built on top of Phlex and adds a bunch of quality of life features I wanted.
People say frontend/backend parity, and that’s true, but I also remember there was a time in 2011 or so where single thread/async was this new hot thing.
Nginx was starting to get popular and overtake Apache on installs, and people were enamored with its performance and idea of “no blocking, ever” and “callbacks for everything”, which the nginx codebase sorta takes to the extreme. The c10k problem and all that.
When JavaScript got a good engine in v8, Node was lauded as this way to do what nginx was doing, but automatically and by default: you simply couldn’t write blocking code so waiting on I/O will never bottleneck your incoming connections. Maximum concurrency because your web server could go right back to serving the next request concurrently while any I/O was happening. But no “real” multithreading so you didn’t have to worry about mutexes or anything. I remember being slightly jealous of that as a Rails developer, because webrick/unicorn/etc had a worker pool and every worker could only handle one request at a time, and fixing that could only happen if everything was async, which it basically wasn’t.
JavaScript becoming a popular language in its own right due to frontend was certainly the most important factor, but it wasn’t the only one.
Not sure why this is considered a "classic" piece. It reads as if the author has just discovered the difference between preemptive vs cooperative scheduling, but hasn't yet found the words to describe his "discovery". Yes, you can write a `while(true){}` loop and block the event loop. That's not some damning indictment of Node. The point is that you don't have to block on IO, so your program doesn't have to halt the entire world, and sit around doing nothing while you're waiting for a hard drive to spin or a network request to complete.
Heh, he's so right in every regard although I use Node.
Worst of all, they made npm packages dead easy, so most of them don't even have a readme file, not to mention inline docs like POD or RDoc. This is how you end up with spam packages, malware in npm and left-pad disasters.
Given the popularity of Github, and the fact that a readme file is the first thing you see when pulling up a project on Github, most projects these days do in fact have readme files.
To add, front-end developers and other people that learned in JavaScript (because a web browser is something everyone has, turns out it's a pretty great runtime environment, has live editing with dev tools, etc. It's honestly a fantastic way to 'get into programming') could write the icky backend code to make their slick websites, SPAs and games have internet-based savestate.
God forbid we reuse knowledge instead of drudging through never-ending learning of the same concepts with different syntaxes and 10x costs for supporting every special native snowflake toolchain.
And of course the right way to do that is to take an extremely mediocre, rushed, incomplete language that is currently constrained to the browser, and make it run everywhere else, ironically having to reinvent many wheels and re-learn very old lessons the hard way along the way. Mission "reuse knowledge" is a hearty failure in node.js-land.
To be fair, js interpreters are available out of the box in all digital devices out there that embed a browser. That's a huge deal, as far as portability is concerned.
Because it’s not as good. Why would I want two languages and two runtimes when I can just have one, all while delivering a demonstrably better user experience?
Rails wants to be the UI framework, and a lot of devs didn't want to do server side UI and state, especially OOP style. So it was easier to do JS for your APIs, etc. DHH's opinions kind of made it an all or nothing choice for many folks.
I want to try Ruby since the news of Rails 8 came out, but it's been so difficult that I just gave up. Installing Ruby on Mac and Windows and actually getting the 3.3 version required for Rails 8 was a huge mission and test of patience because every installer defaulted to older versions of both Ruby and Rails even one month after the release. And yes, even Docker required tweaking to get the versions and I had issues with devContainers anyway...
I finally got it installed and then followed some tutorials only to see that Rails' html.erb files have completely broken syntax highlighting in VSCode and other editors. I facepalmed and though I tried to search for a fix online, I couldn't find one. I saw posts mentioning it in forums and yet not a single solution posted.
So I gave up. I tried in Mac, Windows and Linux. If someone here knows how to fix the broken highlighter, that can be my Christmas gift today, but for the most part I've moved on.
Like psychoslave suggested, try out mise (https://github.com/jdx/mise). I used asdf for years, did the switch to mise and have never looked back for package management. It supports a huge number of languages and is performant.
I used to use ruby a lot - mostly just because it's the nicest language for scripting things on unix. I can remember trying to get it set up a year or so ago and finding the process difficult (think I was using rvm).
probably a good idea to point people here before they install ruby, since it'll compile for minutes, then tell you it's missing a dependency, and you have to start the whole process over.
I've found the easiest way to have a nice, consistent, working Ruby installation is to install from source. Ubuntu, Debian or Fedora are the easiest. There are a bunch of one-liners to install all the dependencies on various distros floating around. The Ruby website has instructions but the gist of it is, run ./configure, then make, then make install. Actually pretty easy. Gem is great for managing libraries, certainly better than any Python solution for that ecosystem.
On Mac, rbenv or asdf are both great. Also other commenters here have good suggestions. I never had problems with VSCode; curious what you ran into here.
Ruby itself works okay on bare-metal Windows, but virtually guaranteed any decent size Rails project will use some native gem that's a nightmare to get to build on Windows.
Most gems with native extensions won't work. Gems that listen to filesystem changes like guard can be buggy. I recommend using Mac or Linux for Ruby on Rails development.
That being said, Matz also isn't a fan of static typing. Static type annotations exist in the form of RBS, but no one that matters in the Ruby ecosystem is pushing static type annotations in .rb files themselves.
Also, after seeing TypeScript, I'm very happy about that.
I myself am unsure where I stand on RBS. I wouldn't mind more use of it in my gem dependencies, but would probably not like it if it was enforced everywhere.
For now I'll stick with improving my test/spec-writing skills, and maybe some runtime type checking like https://literal.fun/
I think RBS is a decent tool, I don't mind it as long as it never becomes a requirement for anything. I hate the trend of statically typed dynamic languages because it's all of the pain without the main benefit (native speed).
I don't know, but I still fully take the criticism: Google Trends is not an indicator of absolute usage, but (if anything) of relative usage. It's not clear which of the two the parent referred to.
Those are relative positions. We can't talk about a "nosedive" from that. It may be the case, but also maybe Ruby was just the slowest growing out of a number of languages growing in popularity. We don't have enough data from there.
I am most excited about the parser change, previously discussed here:
https://news.ycombinator.com/item?id=36310130 - Rewriting the Ruby parser (2023-06-13, 176 comments)
There's nothing to stop you from writing out a grammar in some form that is intelligible to a verification tool and then implementing the grammar by hand. I almost always write out the grammar anyway because that's the design—without it I'm flying blind. The cost of the generator isn't writing out the grammar, it's in using the runtime code it generates, which is optional even if you want to use it for verification.
Ruby's syntax is also not trivial to parse and isn't set in stone either. At some point it was simply decided that trying to maintain the status quo was worse than attempting a rewrite which could bring in some extra gains, either through performance or having an easier time tweaking the grammar.
The biggest improvement from this (besides maybe performance) is that it should enable much better manually programmed syntax error messages. Those generated by yacc were pretty shit.
This is generally my #1 reason for using a manual parser — nobody has yet made a pretty good syntax error handling / reporting for parser generators or parser combinators.
It's genuinely very complex — I read the whole literature on that as of 2019 (there's surprisingly little). You basically have to inject custom logic, though there are a few heuristics that you can prepackage and can be useful in a lot of places. But the custom aspect of it means this doesn't play nice with traditional LL/LR parser generators. It could be done for parser combinators (PEG etc) however. Didn't have enough time in my PhD thesis to play with this, and I moved on to other things, but I'm hoping someone will make this eventually.
I'm pretty sure the only reason people ever used parser generators is that it allows a language that vaguely resembles the formal description of the target language. I always found them very confusing to write, confusing to debug, and much less efficient than writing your own. It's actually pretty straightforward once you get the tokenization and lookahead working.
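To make that concrete, here's a toy sketch of the "tokenize, then one method per grammar rule" shape of a recursive descent parser, in Ruby (my own illustration, nothing to do with prism; the grammar and class name are made up, and it assumes Ruby 3.x syntax):

```
class Calc
  def initialize(src)
    @tokens = src.scan(/\d+|[+*()]/)  # trivial tokenizer
    @pos = 0
  end

  def parse
    value = expr
    raise "unexpected #{peek.inspect}" if peek
    value
  end

  private

  def peek    = @tokens[@pos]
  def advance = @tokens[@pos].tap { @pos += 1 }

  # expr := term ('+' term)*
  def expr
    value = term
    value += term while peek == "+" && advance
    value
  end

  # term := factor ('*' factor)*
  def term
    value = factor
    value *= factor while peek == "*" && advance
    value
  end

  # factor := NUMBER | '(' expr ')'
  def factor
    if peek == "("
      advance                               # consume "("
      inner = expr
      raise "expected )" unless advance == ")"
      inner
    else
      Integer(advance)
    end
  end
end

puts Calc.new("2+3*(4+1)").parse  # => 17
```

One method per rule, one token of lookahead; that's the whole trick for a grammar this simple.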
Agreed. Parser generators are a magic black box. Parsing is not too difficult, there is some actual computer science in some spots, but I think parsing should be a core complacency of a programming language to unlock full potential.
A very, very long list of CVEs disagrees that parsers are "not too difficult".
Binary format parsing != Programming language parsing.
Binary formats are rarely context free.
Agreed, but there are different things being prioritized with each approach.
"core complacency" was an excellent typo in the context of this conversation.
It's rather compelling even without the context. :(
[dead]
Line count, like any metric, gives you a quick idea of some quantity, and that's it. To start having a sense of what it means, you need to be more acquainted with the topic at hand.
It's not that uncommon to have an implementation whose code is lengthier but follows an obvious pattern, while the smarter, more compressed implementation is not necessarily trivial to grasp, even for people seasoned in metaprogramming, reflection and so on.
Not to say that's what happened here; the point was just to recall that the number of lines is not an absolute, linear metric.
> I remember being taught to use yacc in our compiler course because "writing it by hand is too hard". But looks like Ruby joins the growing list of languages that have hand-written parsers, apparently working with generated parsers turned out to be even harder in the long run.
I've been writing parsers for simple (and sometimes not so simple) languages ever since I was in middle school and learned about recursive descent parsing from a book (I didn't know it was called that back then; the book had a section on writing an expression parser and I just kept adding stuff) - that was in the 90s.
I wonder why yacc, etc were made in the first place since to me they always felt more complicated and awkward to work with than writing a simple recursive descent parser that works with the parsed text or builds whatever structure you want.
Was it resource constraints that no longer really existed by the 90s, but whose presence in previous decades ended up shaping how parsers were meant to be written?
Parser generators will tell you whether the grammar given to it is well-formed (according to whatever criteria the parser generator uses).
When hand-rolling a parser, there could be accidental ambiguities in the definition of your grammar, which you don't notice because the recursive descent parser just takes whatever possibility happened to be checked first in your particular implementation.
When that happens, future or alternative implementations will be harder to create because they need to be bug-for-bug compatible with whatever choice the reference implementation takes for those obscure edge cases.
> When hand-rolling a parser, there could be accidental ambiguities in the definition of your grammar, which you don't notice because the recursive descent parser just takes whatever possibility happened to be checked first in your particular implementation.
Is that a problem? Just use a grammar formalism with ordered choice.
My hot take is that the allure of parser-generators is mostly academic. If you're designing a language it's good practice to write out a formal grammar for it, and then it feels like it should be possible to just feed that grammar to a program and have it spit out a fully functional parser.
In practice, parser generators are always at least a little disappointing, but that nagging feeling that it _should_ work remains.
Edit: also the other sense of academic, if you have to teach students how to do parsing, and need to teach formal grammar, then getting two birds with one stone is very appealing.
It is not academic. It is very practical to actually have a grammar, and thus the possibility to use any language that has a parser generator. It is very annoying to have a great format but no parser and no official grammar for the format available, and to be stuck with whatever tooling exists, because you would have to come up with a completely new grammar to implement a parser.
> It is very practical to actually have a grammar
I fully agree that you need to have a grammar for your language.
> and thus the possibility to use any language that has a parser generator.
See, this is where it falls down in my experience. You can't just feed "the grammar" straight into each generator, and you need to account for the quirks of each generator anyway. So the practical, idk, "reusability"... is much lower than it seems like it should be.
If you could actually just write your grammar once and feed it to any parser generator and have it actually work then that would be cool. I just don't think it works out that way in practice.
Good error reporting gets really tricky with generated parsers. That said, it can be a nice time saver for smaller things like DSLs and languages early on.
Even then, yacc and bison are pretty solid overall. I believe Postgres still uses a yacc grammar today, as another high profile example. I'd argue the parsing of SQL is one of the least interesting things an RDBMS does, though.
To reinforce your point on good error reporting, though, SQL errors are notoriously unhelpful.
"Yeah there's an unbalanced parentheses...somewhere near this point... might actually be unbalanced, or you missed a comma or semicolon. You tell me."
It's not a "growing list" - the vast majority of languages use hand-written parsers.
I can only imagine working with generated parsers becoming more difficult when the syntax of a language is highly ad hoc or irregular, rather than elegant like concatenative or lispy languages, or Smalltalk style, which is ironic given Ruby's history. Maybe they added too many bells and whistles.
In every other case, having a grammar in the form of parser generator macros should be better and preferred, since it is portable to other languages and tools and lends itself to being more readable (with good naming).
Ruby is getting more and more awesome these last few years, especially when it comes to performance. Since 3.3 I've been running all my apps with --yjit, it makes a tremendous difference!
Wait until you hear about TruffleRuby
Thanks, but no thanks. Never touching anything by oracle.
No wait, I know Oracle has a bad rep, which is deserved, but TruffleRuby and GraalVM are truly open-source, not open-core. They actually did something great this time.
Someone pointed this out https://news.ycombinator.com/item?id=42323293
> You will need to sign the Oracle Contributor Agreement (using an online form) for us to able to review and merge your work.
Read my lips:
N. O.
Read the CLA. This is a trap, do not get yourself or your company caught in it. It is open-source for now, until it gets enough traction. Then the rug will be pulled, the code will be relicensed as well as any further development or contributions.
This is insane, I cannot believe anyone can read and understand this and not consider the abuses of power it allows:
> 2. With respect to any worldwide copyrights, or copyright applications and registrations, in your contribution:
> ...
> you agree that each of us can do all things in relation to your contribution as if each of us were the sole owners, and if one of us makes a derivative work of your contribution, the one who makes the derivative work (or has it made) will be the sole owner of that derivative work;
> you agree that you will not assert any moral rights in your contribution against us, our licensees or transferees;
> you agree that we may register a copyright in your contribution and exercise all ownership rights associated with it; and
> you agree that neither of us has any duty to consult with, obtain the consent of, pay or render an accounting to the other for any use or distribution of your contribution.
I would go as far as to state that anyone who contributes any code to this works against open source (by helping out an obvious rugpull/embrace-extend-extinguish scheme that diverts adoption and contribution from cruby/jruby) and against their fellow developers (by working for free for Oracle).
For what it's worth, in France so-called moral rights are "inaliénables", so you legally can't get rid of them, and I wouldn't be surprised if this holds in most Roman civil law countries (most countries in the world). Just like you can't decide to get rid of all your civil rights and become a slave of the nice company that promised to treat you well and free you of the hurdle of taking decisions by yourself. So IANAL, but this contract is not only ignominious, it is actually trying to require authors to make promises that they cannot legally make.
https://en.m.wikipedia.org/wiki/Civil_law_(legal_system)
That's completely normal for cathedral-style open source development. The FSF themselves required copyright assignment (not just a CLA) if you wanted to contribute to GNU projects (e.g. GCC) for many years; several GNU projects still do.
You only need to sign the CLA if you want to contribute to upstream, you can maintain your own fork if you want, and the code that is open source today will always be open source. Frankly I'd say Oracle is less likely to close it up in a scramble to try to monetise their open-source assets than smaller companies like Redis Labs - Oracle has plenty of products and makes their money from consulting rather than from selling code directly.
How can you compare the FLA with Oracle's CLA??? From the FSFA:
> An FLA offers a special clause against this kind of situation, in order to protect the Free Software project against potentially malicious intentions of the Trustee. According to this provision, if the Trustee acts against the principles of Free Software, all granted rights and licences return to their original owners. That means that the Trustee will be effectively prevented from continuing any activity which is contrary to the principles of Free Software.
You can name a few more rugpulls made possible by contributor agreements that permitted blatant abuse of power, and Oracle is also not innocent in this. Off the top of my head I remember the VirtualBox extensions fiasco. Oracle changed the license then started sending bills to companies.
I don't know what the "FLA" or "FSFA" is (are they real things or is this an AI-generated comment?), but the FSF traditionally required copyright assignment, which gave them all the rights in Oracle's CLA and more.
I meant FSFE.
https://fsfe.org/activities/fla/fla.en.html
A programme that seemingly only existed for one significant project, and is not open to new projects. Interesting, but hardly representative of the free software movement in general.
Oh wow! Thanks for the clarification! Is the case same with JRuby?
JRuby -- no CLA.
Java is problematic though. See my other comment.
Thank you. Never trust Oracle, ever. They will betray you.
I mean Java and MySQL are from Oracle as well.
MySQL was forked and the fork is the defacto standard shipped by linux distros. To me the only MySQL that existed was the one by Sun, now MariaDB has completely succeeded it.
Do you see the licensing/distribution clusterfuck with Java as a good example of open-source stewardship by Oracle? Which Java distro are you using?[1]
Do you see the Google v. Oracle Java API copyright case as a good example of open-source stewardship by Oracle?
You know what else is prudently (/s) stewarded by Oracle? ZFS. That is why it is still not a part of the Linux kernel. A company that is basically a meme for the amount of lawyers it employs would easily find a safe way to allow integration into the Linux kernel if only they wanted to contribute.
The examples above show exactly why Oracle has a decidedly bad reputation. On top of that, their CLA enshrines their shit treatment of the open-source movement and their free slave labour^W^W^W open-source contributors.
[1] https://en.wikipedia.org/wiki/OpenJDK#OpenJDK_builds
> Do you see the Google v. Oracle Java API copyright case as a good example of open-source stewardship by Oracle?
100% YES, given the clusterfuck support of standard Java on Android.
It is no different from what Microsoft did with J++ on Windows; and just as Microsoft came up with .NET, Google came up with the Kotlin migration. Ironically, they keep relying on the standard Java they don't support for IntelliJ, Gradle, and everything else that powers the Android SDK on the desktop.
Google could have avoided the lawsuit if they had bought Sun. After torpedoing it with Android, they thought no one would buy the company and that they were safe from paying anyone; wrong call.
GraalVM tends toward open core. They have entire test suites and test tools that are internal-only that make developing it kind of difficult.
Already outdated: https://www.ruby-lang.org/en/news/2024/12/25/ruby-3-4-1-rele...
I had just hit this issue (3.4.0 showing as 3.4.0dev in rbenv) and am having a coffee before looking at what is going on -- thanks for posting
[flagged]
Christmas Day is a traditional Ruby release day: https://en.wikipedia.org/wiki/History_of_Ruby#Table_of_versi...
Certainly! I bet it was part of a sort of "this is my Christmas present to all" sort of thing; and, as happens sometimes, little mistakes, like the need for the quick bugfix happen :)
Shopify strategy aka the story of YJIT
If I cannot refactor my services, I shall refactor Ruby instead.
That has been the story of every dynamic language since forever; thankfully the whole AI focus has made JITs finally matter in the CPython world as well.
Personally I learnt this lesson back in the 2000's, in the age of AOLServer, Vignette, and our own Safelayer product. All based on Apache, IIS and Tcl.
We were early adopters of .NET, back when it was only available to MSFT Partners, and decided never again to use scripting languages without compilers for full-blown applications.
Those learnings are the foundations of OutSystems, same ideas, built with a powerful runtime, with the hindsight of our experiences.
> AI
The push for Python performance and JIT compilation has little to do with AI and more to do with Python's explosion in adoption for backend server applications in the 2010s, as well as the dedication of smaller projects like PyPy that existed largely because it was possible to make them exist. The ML/AI boom helped spread Python even farther and wider, yes, but none of the core language performance improvements are all that relevant for ML or AI.
As another commenter pointed out, the performance bottlenecks in AI specifically have essentially nothing to do with CPython runtime performance. The only exception is in the pre-processing of very large text corpora, and that alone has hardly been a blip on the radar of the people working on CPython performance.
Moreover, most of the "Python performance" projects that do sit closer to machine learning use cases (Cython-Numpy integration, Numba, Nuitka) are more or less orthogonal to the more recent push for Python interpreter performance.
Cython itself and MypyC are mainly relevant because they are intended to be general-ish purpose performance boosters for CPython, and in doing so helped fill the need for greater performance in "hot and loopy" code such as network protocols, linters, and iterators. Cython also acted as a convenient glue layer for ad-hoc C library binding. But neither project is all that closely related to AI or to the various JIT compilers that have arisen over the years.
Not at all, given Facebook's and Microsoft's involvement in making CPython folks finally accept that a JIT has to be part of the story, coupled with NVidia's and Intel's work on GPU JIT DSLs for Python.
Yeah but how much of the Microsoft and Facebook effort was due to AI directly, as opposed to the general popularity of Python? which is undoubtedly driven nowadays by AI, but indirectly.
What Python projects do they have outside AI?
>thankfully the whole AI focus has made JITs finally matter in CPython world as well.
Isn't most of the work in Python AI projects done in C or C++ extensions anyway?
Yes, but not everyone loves having dual-stack development. I surely didn't, back in the Tcl days; eventually we asked ourselves, for how long?
That's not how it works in Python.
The C/C++ is shipped in the form of well-established libraries like Numpy and PyTorch. Very few end users ever interact with the C/C++ parts, except for specialists with special requirements, and library contributors themselves.
It is definitely how it works in Python.
As if there is nothing else to choose from regarding Python performance issues and the libraries folks actually use.
Not everything is fashionable AI.
The comment thread was specifically about AI, so my comments were specifically meant for that context. I wasn't clear enough, sorry for the confusion.
Can you name specific "un-fashionable" AI projects that are dependent on Python code for things that have any significant performance impact, which are seeing significant benefits from Python JIT implementations?
I guess you will have to ask Microsoft, Facebook, NVidia and Intel why they are bothering then.
Can you name projects at those companies which meet the description?
He cannot
Microsoft, Facebook, NVidia and Intel apparently can.
> Personally I have learnt this lesson back in 2000's, in the age of AOLServer, Vignette, and our own Safelayer product. All based on Apache, IIS and Tcl.
Woah, your mention of “Vignette” just brought back a flood of memories I think my subconscious may have blocked out to save my sanity.
What's a scripting language? Also, I'm not sure about Tcl (https://news.ycombinator.com/item?id=24390937 claims it's had a bytecode compiler since around 2000), but the main Python and Ruby implementations have compilers (compile to bytecode, then interpret the bytecode). Apparently Ruby got an optional (has to be enabled) JIT compiler recently, and Python has an experimental JIT in the last release (3.13).
"... the distinguishing feature of interpreted languages is not that they are not compiled, but that any eventual compiler is part of the language runtime and that, therefore, it is possible (and easy) to execute code generated on the fly."
p57 https://www.lua.org/pil/#1ed
https://en.m.wikipedia.org/wiki/Scripting_language
Hey, I have worked on the Outsystems platform. Developed some applications. Do you work at Outsystems?
No, I worked with the founders at a previous startup, Intervento, which became part of an EasyPhone acquisition, which was later renamed Altitude Software alongside other acquisitions.
They eventually left and founded OutSystems with what we had learned since the Intervento days; OutSystems is one of the greatest startup stories in the Portuguese industry.
This was all during the dotcom wave of the 2000's; I instead left for CERN.
HHVM has raised its head.
Which happens to have a JIT compiler, and contributed to standard PHP having one as well.
[dead]
Classic story. Didn't Dropbox do the same for Python? And Facebook for PHP (and then forked it)?
Roblox did the same with luau https://luau.org/performance
And cPanel for perl
During their black friday / cyber monday load peak, Shopify averaged between ~0.85 and ~1.94 back-to-back RPS per CPU core. Take from that what you will.
Reference: https://x.com/ShopifyEng/status/1863953413559472291
You seem to imply that everything they run is Ruby, but they're talking about 2.4 million CPU cores on their K8s cluster, where maybe other stuff runs as well, like their Kafka clusters [1] and Airflow [2]?
[1] https://shopify.engineering/running-apache-kafka-on-kubernet...
[2] https://shopify.engineering/lessons-learned-apache-airflow-s...
Obviously you meant for the whole infrastructure: ruby / rails workers, Mysql, Kafka, whatever other stuff their app needs (redis, memcache, etc), loadbalancers, infrastructure monitoring, etc.
This is correct! I thought this was clear but I guess not...
It is not, because this is the first time I've heard of back-to-back RPS. Which, come to think of it, isn't too bad of a metric from a business POV.
We can also infer from that how much saving YJIT provides. At this point Shopify is likely already getting a return on investment from YJIT.
Just to reiterate stuff said in the other comments because your comment is maybe deliberately misrepresenting what was said in the thread.
Their entire cluster was 2.4 million CPU cores (without more info on what the cores were). This includes not only Ruby web applications that handle requests, but also other infrastructure. Asynchronous processing, database servers, message queue processing, data workflows etc, etc, etc. You cannot run a back of the envelope calculation and say 0.85 requests per second per core and that is why they're optimising Ruby. While that might be the end result and a commentary on contemporary software architecture as a whole, it does not tell you much about the performance of the Ruby part of the equation in isolation.
They had bursts of 280 million rpm (4.6 million rps) with average of 2.8 million rps.
> It does not tell you much about the performance of the Ruby part of the equation in isolation.
Indeed, it doesn't. However, it would be a fairly safe bet to assume it was the slowest part of their architecture. I keep wondering how the numbers would change if Ruby were to be replaced with something else.
Shopify invest heavily in Ruby and write plenty of stuff in lower level languages where they need to squeeze out that performance. They were heavily involved in Ruby's new JIT architecture and invested in building their own tooling to try and make Ruby act more like a static language (Sorbet, Bootsnap).
Runtime performance is just one part of a complex equation in a tech stack. It's actually a safe bet that their Ruby stack is pretty fucking solid because they've invested in that, and hiring ruby and JS engineers is still 1000x easier than hiring a C++ or Rust expert to do basic CRUD APIs.
Since we're insinuating, I bet you that Ruby is not their chief bottleneck. You won't get much more RPS if you wait on an SQL query or RPC/HTTP API call.
In my experience when you have a bottleneck in the actual Ruby code (not speaking about n+1s or heavy SQL queries or other IO), the code itself is written in such a way that it would be slow in whichever language. Again, in my experience this involves lots of (oft unnecessary) allocations and slow data transformations.
Usually this is preceded by a slow heavy SQL query. You fix the query and get a speed-up of 0.8 rps to 40 rps, add a TODO entry "the following code needs to be refactored" but you already ran out of estimation and mark the issue as resolved. Couple of months later the optimization allowed the resultset to grow and the new bottleneck is memory use and the speed of the naive algorithm and lack of appropriate data structures in the data transformation step... Again in the same code you diligently TODOed... Tell me how this is Ruby's fault.
Another example is one of the 'Oh we'll just introduce Redis-backed cache to finally make use of shared caching and alleviate the DB bottleneck'. Implementation and validation took weeks. Finally all tests are green. The test suite runs for half an hour longer. Issue was traced to latency to the Redis server and starvation due to locking between parallel workers. The task was quietly shelved afterwards without ever hitting production or being mentioned again in a prime example of learned helplessness. If only we had used an actual real programming language and not Ruby, we would not be hitting this issue (/s)
I wish most performance problems would be solved by just using a """fast language"""...
Here comes the "IO" excuse :)
Effective use of IO at such scale implies a high-quality DB driver accompanied by a performant concurrent runtime that can multiplex many outstanding IO requests over a few threads in parallel. This is significantly influenced by the language of choice and the particular patterns it encourages with its libraries.
I can assure you - databases like MySQL are plenty fast and e.g. single-row queries are more than likely to be bottlenecked on Ruby's end.
> the code itself is written in such a way that it would be slow in whichever language. Again, in my experience this involves lots of (oft unnecessary) allocations and slow data transformations.
Inefficient data transformations with a high amount of transient allocations will run at least 10 times faster in many of Ruby's alternatives. Good ORM implementations will also be able to optimize the queries, or their API is likely to encourage more performance-friendly choices.
> I wish most performance problems would be solved by just using a """fast language"""...
Many testimonies on Rust do just that. A lot of it comes down to particular choices Rust forces you to make. There is no free lunch or a magic bullet, but this also replicates to languages which offer more productivity by means of less decision fatigue heavy defaults that might not be as performant in that particular scenario, but at the same time don't sacrifice it drastically either.
> Here comes the "IO" excuse :)
You know, if I were flame-baiting, I would go ahead and say 'there goes the standard 'performance is more important than actually shipping' comment'. I won't, and I will address your notes even though they're unsubstantiated.
> Effective use of IO at such scale implies high-quality DB driver accompanied by performant concurrent runtime that can multiplex many outstanding IO requests over few threads in parallel. This is significantly influenced by the language of choice and particular patterns it encourages with its libraries.
In my experience, the bottleneck is mostly on the 'far side' of the IO from the app's PoV.
> I can assure you - databases like MySQL are plenty fast and e.g. single-row queries are more than likely to be bottlenecked on Ruby's end.
I can assure you, Ruby apps have no issues whatsoever with single-row queries. Even if they did, the speed-up would be at most constant if written in a faster language.
> Inefficient data transformations with high amount of transient allocations will run at least 10 times faster in many of the Ruby's alternatives. Good ORM implementations will also be able to optimize the queries or their API is likely to encourage more performance-friendly choices.
Or it could be O(n^2) times faster if you actually stop writing shit code in the first place.
Good ORMs do not magically fix shit algorithms or DB schema design. Rails' ORM does in fact point out common mistakes like trivial n+1 queries. It does not ask you "Are you sure you want me to execute this query that seq scans the ever-growing-but-currently-20-million-record table to return 5000 records as a part of your artisanal hand-crafted n+1 masterpiece(of shit) for you to then proceed to manually cross-reference and transform and then finally serialise as JSON just to go ahead and blame the JSON lib (which is in C btw) for the slowness".
> Many testimonies on Rust do just that. A lot of it comes down to particular choices Rust forces you to make. There is no free lunch or magic bullet, but this also replicates to languages which offer more productivity by means of less decision fatigue heavy defaults that might not be as performant in that particular scenario, but at the same time don't sacrifice it drastically either.
I am by no means going to dunk on Rust as you do on Ruby as I've just toyed with it, however I doubt that I could right now make the performance/productivity trade-off in Rust's favour for any new non-trivial web application.
To summarise, my points were that whatever language you write in, if you have IO you will be from the get go or later bottlenecked by IO and this is the best case. The realistic case is that you will not ever scale enough for any of this to matter. Even if you do you will be bottlenecked by your own shit code and/or shit architectural decisions far before even IO; both of these are also language-agnostic.
Ouch. I had no idea it was that much of a resource hog.
For a stranger to the Ruby ecosystem, what are the benefits of YJIT?
Just-in-time compilation of Ruby allowing you to elide a lot of the overhead of dynamic language features + executing optimized machine code instead of running in the VM / bytecode interpreter.
For example, doing some loop unrolling for a piece of code with a known & small-enough fixed-size iteration. As another example, doing away with some dynamic dispatch / method lookup for a call site, or inlining methods - especially handy given Ruby's first class support for dynamic code generation, execution, redefinition (monkey patching).
From https://railsatscale.com/2023-12-04-ruby-3-3-s-yjit-faster-w...,
> In particular, YJIT is now able to better handle calls with splats as well as optional parameters, it’s able to compile exception handlers, and it can handle megamorphic call sites and instance variable accesses without falling back to the interpreter.
> We’ve also implemented specialized inlined primitives for certain core method calls such as Integer#!=, String#!=, Kernel#block_given?, Kernel#is_a?, Kernel#instance_of?, Module#===, and more. It also inlines trivial Ruby methods that only return a constant value such as #blank? and specialized #present? from Rails. These can now be used without needing to perform expensive method calls in most cases.
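For anyone who just wants to try it: since Ruby 3.3 YJIT can also be switched on from inside the process, not only via the command-line flag. A minimal sketch:

```
# Enable YJIT at runtime (Ruby 3.3+); equivalent to starting ruby with --yjit
# or setting RUBY_YJIT_ENABLE=1 in the environment.
RubyVM::YJIT.enable if defined?(RubyVM::YJIT) && !RubyVM::YJIT.enabled?

puts RubyVM::YJIT.enabled?  # => true if this Ruby build includes YJIT
```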
It makes Ruby code fast enough to compete with the C implementations of core Ruby methods, so they are moving toward rewriting a lot of the core Ruby stuff in Ruby to take advantage of it. The runtime performance gains make the language much faster.
Same as the benefits of JIT compilers for any dynamic language; makes a lot of things faster without changing your code, by turning hot paths into natively compiled code.
Since when is contributing back to the community considered a bad-faith move?
That's certainly not what I get out of what they said.
Shopify has introduced a bunch of very nice improvements to the usability of the Ruby language and their introductions have been seen in a very positive light.
Also, I'm pretty sure both Shopify for Ruby and Facebook for their custom PHP stuff are both considered good moves.
Always looking forward to the Christmas tradition of Ruby releases
Years back I took over the ownership of the third-party Arch Linux package for ruby-build because the maintainer at the time wasn't using it anymore and was looking to pass it off. At the time, I had no idea that Ruby did releases every Christmas, but I found out a few months later when I got an email mentioning the package was out of date that day. Even though I haven't done much Ruby dev for years now, it's been a small little tradition of mine since then to update the package first thing every Christmas morning and push out the update (basically, just updating the version number in a file in a git repo and then running a couple commands to update the checksums and push the changes; nothing anywhere close to the amount of work that the people who actually develop that tool do, let alone the people who work on the language!). I can't help but feel like that farmer from the meme saying "it ain't much, but it's honest work". I've enjoyed the little tradition I've built up, and I like thinking that maybe every now and then someone noticed and was pleased to get the updates without having to file a notice to remind me (although that has happened a few times since then, I hope it hasn't been too often!).
Just now, I was surprised to see that the package seems to be getting put into the official Arch repos, so my eight years of very minimal volunteer service seem to be at an end. I still think I'm going to remember doing this and smile a little every Christmas morning for years to come!
Thank you for the quiet work, appreciated
I am liking all the performance improvement goodies on JIT and GC level.
1. Wondering 3.4 JIT performance vs 3.3 JIT on production rails.
2. Also wondering what upside could Ruby / Rails gain on a hypothetical Java Generational ZGC like GC? Or if current GC is even a bottleneck anywhere in most Rails applications.
> Also wondering what upside could Ruby / Rails gain on a hypothetical Java Generational ZGC like GC? Or if current GC is even a bottleneck anywhere in most Rails applications.
Ruby's GC needs are likely to be very far from the needs of JVM and .NET languages, so I expect it to be both much simpler but also relatively sufficient for the time being. Default Ruby implementation uses GIL so the resulting allocation behavior is likely to be nowhere near the saturation of throughput of a competent GC design.
Also, if you pay attention to the notes discussing the optimizations implemented in Ruby 3.4, you'll see that such JIT design is effectively in its infancy - V8, RyuJIT (and its predecessors) and OpenJDK's HotSpot did all this as a bare minimum more than 10 years ago.
This is a welcome change for the Ruby ecosystem itself I guess but it's not going to change the performance ladder.
https://speed.yjit.org/
Railsbench is 5.8% faster with 3.4 over 3.3
Turns out YJIT is already 100%+ faster than non-JIT on Railsbench and ActiveRecord.
I would expect some measurable improvement given how object-happy rails programming is. It's not uncommon to see 3 layers of models just wrapping a single variable - "objects that could've been functions". Some kind of tiers like generations or per-request pools would be amazing.
There's ongoing work to allow pluggable GCs, and specifically to allow using MMTk which would be IBM's Jikes GC
https://bugs.ruby-lang.org/issues/20470
https://www.mmtk.io/
Not quite, MMTk was rewritten from Java into Rust.
So already something else from Jikes days.
There's TruffleRuby (built on Graal) and JRuby if you want to explore that. They're not viable for everything, but they can be much faster.
Most likely not yet there, but for applications implemented in Ruby it is certainly an improvement for overall usability experience.
Congratulations guys. Thank you for the hard work.
‘it’ is a welcome addition!
Truly is, much nicer than that lonely `_1`
I'd have thought allowing _ as a synonym for _1 would have been more aesthetically consistent. That's the path I went with when designing my CL #λ reader macro, personally.
_2 can be as bad as _1
I don't understand the point of it when the `.map(&:upcase)` syntax is shorter. This just seems like yet another syntactic sugar Rubyism that doesn't really add anything.
If it's an alternative to the `|x|` syntax when using only one block variable, then I like that.
`arr.map { it.thing.blah.stuff }`
The `&:` doesn't work in that context
Not to point any fingers, but this shows that the previous commenter has not struggled with this :)
&: is very nice, but not enough.
That only works when calling a method on the things you're iterating through; `it` is indeed a replacement for the single-variable block example you gave there.
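A small sketch of the difference, for anyone who hasn't hit it (Ruby 3.4 for `it`; the strings are just placeholders):

```
words = ["foo", "bar"]

words.map(&:upcase)                 # fine: one method call per element
words.map { it.upcase.reverse }     # `it` (3.4): works once you chain or add logic
words.map { _1.upcase.reverse }     # the older numbered-parameter spelling
words.map { |w| w.upcase.reverse }  # the classic explicit block parameter
```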
Does `it` conflict with Rspec's `it`? Surely they've thought of this, but to my eye it looks like it would get confusing.
Nope it doesn’t, they did take that into account during development.
Don't say that word!
https://www.youtube.com/watch?v=RZvsGdJP3ng
Every year-end, I update my Rails app. Lately, it's been stable, and the updates just improve performance, so it's gotten easier.
I started at a company 3 years ago that was on Rails 5.1. After 3 years on and off work I've managed to get it to Rails 6.1. The process is such an incredible nightmare on a large app.
Currently stuck on trying to get Ruby 3 working.
At some point you just have to rip the bandaid off and put any ongoing work on pause until the upgrade is done. Otherwise it'll be another 3 years on and off while you try to do the upgrade but the codebase keeps changing underneath you.
And if that isn't happening and there's no other development on the codebase, why bother upgrading it?
Same thing for me—4 years ago, Rails 4.2. Now on 6.0, work for 6.1 is wrapped up. I did just finish going from Ruby 2.7 to 3.3. Any particular issues you’re having, or just working through the process?
Don’t have the exact details on me but it was just the change for the method params hash thing. The stack trace seems to be pointing places that aren’t the source of the issue, just where it got triggered in some dynamic way.
Probably just need to spend more time understanding exactly what changed and how to convert stuff.
Curious what specifically you’re running into?
I haven’t worked in ruby or rails in a few years but both seem like they’re in great spots and I’ll be spinning up a new project with Rails 8 soon. Hype
Sounds good. Rails is still my first choice for personal development (though I've been using Next.js more often recently).
What does ruby do well that other languages don't? What is the niche it's trying to fill?
Ruby has the nicest object-oriented design (everything is an object) outside of smalltalk (IMHO).
In contrast to the mess that is Python. For instance, in Ruby it is natural that each or map are methods of Array or Hash rather than global functions which receive an Array or Hash argument.
This goes as far as having the not operator '!' as a method on booleans:
false.! == true
Once you have understood it, it is a very beautiful language.
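A few throwaway lines showing what that looks like in practice (nothing special about these particular methods, they're just illustrations):

```
puts 1.class            # => Integer -- literals are objects
puts nil.to_a.inspect   # => []      -- even nil responds to messages
puts false.!            # => true    -- `!` is just a method call
3.times { |i| print i } # prints 012 -- iteration is a message sent to the number
```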
Everything is an object in Python, as well.
Stuff like map() is generic iteration, over any structure that exposes iteration. When it's a member function, it means that every collection has to implement map itself basically. When it's separate, the collections only need to provide the interface needed to iterate over it, and the generic map() will just use that.
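For what it's worth, Ruby's answer to that same genericity concern is the Enumerable mixin: a collection only defines `#each` and gets `map`, `filter`, `sum` and friends for free. A minimal sketch with a made-up collection class:

```
class NumberBag
  include Enumerable   # pulls in map, filter, sum, sort, etc.

  def initialize(*nums)
    @nums = nums
  end

  # The only method Enumerable actually requires us to provide.
  def each(&block)
    @nums.each(&block)
  end
end

bag = NumberBag.new(1, 2, 3, 4)
puts bag.filter(&:even?).map { |x| x * x }.sum  # => 20
```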
> over any structure that exposes iteration
Taking OOP more seriously, this kind of thing should be implemented through inheritance, interfaces, mixins, etc. Even though I've got used to it, Python has these inconsistencies that sometimes it wants to be more OOP, sometimes it wants to be more FP.
Ruby has chosen the OOP side, and nicely implements those functional operations as methods (same for Kotlin, for example). That makes it easy to compose them:
# Sum of the square of even elements of a list, in Ruby
my_list.filter{|x| x % 2 == 0}.map{|x| x ** 2}.sum
Python could do something like that, but we quickly fall in a parentheses hell:
# The same code, in Python
sum(map(lambda x: x ** 2, filter(lambda x: x % 2 == 0, my_list)))
Languages that have chosen the FP path more seriously implement function composition. They could do the same way as Python, but composition makes it more readable:
# The same code, in Haskell
sum . map (^ 2) . filter (\x -> x `mod` 2 == 0) $ my_list
PS: I know it would be better to use comprehensions in both Python and Haskell, but these are general examples
Kotlin is still rather different though in that it still implements stuff like map and filter outside of specific collection classes. If you look at the definition of, say, List, there's no map & filter there, only iterator. Instead, map & filter are defined as extension functions that work on any Iterable.
So semantically it's actually closer to Python, with the only difference that, since Python doesn't have extension methods, it has to use global functions for this, while Kotlin lets you pretend that those methods are actually members. But this is pure syntactic sugar, not a semantic difference.
Still looks much less beautiful. Python feels to me like it can't make up its mind.
I'll take "properly generalized" over "looks beautiful" any time. At the end of the day, the purpose of the code is to do something, not to be pretty.
FWIW you can have both in this case; you just need to make dotted method calls syntactic sugar for global function invocations.
And some will say the exact opposite: contrary to what would seem obvious, code is primarily meant to be read by humans, then written by humans. Because you'll spend way more time unfucking code than actually spitting it.
Yes, but it is not fully OO. Something like `if.class` generates an error, as opposed to returning some type such as "Syncategoreme".
That might look really anecdotal, but in practice it's probably the biggest obstacle to providing a fully localized version of Ruby, for example.
The second biggest challenge would probably be the convention of using a majuscule to mark a constant, which thus requires a bicameral writing system. That is rather ironic given that none of the three writing systems of Japanese is bicameral (it seems fair to exclude romaji here). Though this can be somewhat circumvented with tricks like:
```
# Define a global method dynamically
Object.send(:define_method, :lowercase_constant) do
  "This is a constant-like value"
end

# Usage
puts lowercase_constant
```
It's very powerful though which is a bit terrifying. You can literally monkey patch Object at runtime and add methods to every single instantiated object! (I believe this is how rspec works..)
Awesome, but with great power comes great responsibility ;)
Actually, learning Ruby is a great way to see the light and stop trying to be creative when writing code.
You end up feeling steered toward the right idiomatic way of doing things, and that ends up being the satisfying way.
RSpec moved away from that quite some time ago. Monkey patching nowadays is usually frowned upon; even refinements, which can simulate monkey patching in a limited scope, are rarely used.
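For the curious, a tiny sketch of what a refinement looks like (the `Shouty` module is made up, and this is not how RSpec is implemented today):

```
# A refinement scopes a monkey patch to the files/scopes that opt in with `using`.
module Shouty
  refine String do
    def shout
      upcase + "!"
    end
  end
end

using Shouty
puts "hello".shout  # => "HELLO!"
# In a file without `using Shouty`, "hello".shout raises NoMethodError.
```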
Oh I'm extremely out of date, I was into ruby back when Why's guide was a thing. Maybe I'll revisit it someday if I ever get bored of go paying the rent.
It's the language with the highest ratio of (useful work / LOC), so it's the least verbose language. This makes it very suitable to write and understand complex scripts, because the reduced boilerplate means less cognitive overhead for the programmer. As a result, experienced programmers can be extremely productive with it.
The well-known Rails framework uses this to great effect, however, some people argue that the choice of "convention over configuration" and extensive use of meta-programming, derisively called "magic", make it less suitable for inexperienced teams because they get too much rope to hang themselves and the lack of explicitness starts working against you if you're not careful.
> It's the language with the highest ratio of (useful work / LOC), so it's the least verbose language.
That's not even close to true. Even setting aside APL and its descendants, even setting aside Perl, any of the functional programming languages like Haskell and Scala are less verbose.
(The relative lack of success of those languages should indicate why minimizing verbosity is a poor aim to target.)
Don't just focus on the language syntax, the high ratio of useful work to verbosity is in large part owing to the excellent design of the standard library, which is available without including any headers or downloading third party libraries. This is where it handily beats out any of the alternatives you mention.
I see some people saying that Ruby is too much "magic", while what is magic is Rails. Ruby itself can have its high useful work / LoC ratio thanks to its syntax. For example, you can spawn a thread with:
thread = Thread.new do
  # thread code
end
...
thread.join
In this example we can see that it's not magic, only concise.
I wrote more about it here: https://news.ycombinator.com/item?id=40763640
Creating a new thread is equally simple in Java: `Thread t = new Thread(() -> { /* thread code */ }); t.start(); t.join();` or similar (modulo the checked InterruptedException on join).
Yes, but it's a good example of how Java is about twice as verbose to do the same work. The Ruby version of that can be written as simply:
Thread.new{ ... thread code ...}.join
The extra verboseness quickly adds up if every statement takes twice as much code.
Fair. I must admit that I'm not aware of the recent features of Java. The last time I really needed it was back when we had to instantiate an anonymous class for callbacks. I still find the block syntax in Ruby cleaner, though.
Kotlin is another language that has these Ruby-style blocks for callbacks.
Experienced teams love magic?
It depends which kind of magic. Everybody loves having some magic in their life, as long as it doesn't turn out to be a curse unintentionally coming out of a well-meaning wish.
Also, you don't want each and every thing to be the result of spells you have no clue how they were cast.
Experienced teams know to be careful and sparing with its use.
>It's the language with the highest ratio of (useful work / LOC), so it's the least verbose language
Why doesn't clojure fit the bill here?
)))))))))))))))))
Ruby is something like an "improved" Python, with a better OO system, a code block syntax that makes it easy to use callbacks, more consistent standard libraries, etc. It could be what Python is today.
I wouldn't say niche, but the killer app of Ruby is Rails, a web framework similar to Django. In fact, many people treat them as they are the same. But there are big projects that use Ruby and that are not related to Rails. As far as I remember: Metasploit, Homebrew, Vagrant and Jekyll.
Personally I think Ruby is an amazing language for writing shell scripts. I wrote a blog post about it; you can see it and its discussion here: https://news.ycombinator.com/item?id=40763640
Can you name one way Ruby has parity with Python? Ruby is a dead language that uses sponsored posts here. Nobody actually uses this since like 2018 but some people are paid to hype it up. Just look at the empty praise. No real applications mentioned.
It's not just the well-known GitHub, Shopify, Chime, Figma, Zendesk, Convertkit (Kit), Coinbase etc. See the "few" companies here, actively hiring Rubyists for no reason https://rubyonremote.com/remote-companies/ Square, Gitlab, Cisco, Figma, Instacart, Block, Calendly, 1password, and so on
Yes, nothing real, just some githubs and shopifys
The yearly ruby release announcement getting to the top of hackernews every year certainly seems to imply that it’s not a dead language
> Nobody actually uses this since like 2018 but some people are paid to hype it up.
What’s the conspiracy theory here? Why would anyone be paying people to hype Ruby? What could possibly be the end goal?
> Why would anyone be paying people to hype Ruby? What could possibly be the end goal?
Hiring increasingly disinterested junior devs.
Being concise and pleasant to work with.
I wouldn't have had this much control of my own environment with another language, so that all of these are pure Ruby:
- My window manager
- My shell
- My terminal, including the font renderer
- My editor
- My desktop manager
That's less than 10k lines of code. I've taken it a bit to the extreme, but I wouldn't have had the time to if I had to fight a more verbose language.
huh. I'm not sure if I understood you right, do you script and configure those in ruby, or have you written them in ruby from scratch? Are the sources available to read/learn from?
They're written in Ruby from scratch. Some are available, e.g. the window manager is here:
https://github.com/vidarh/rubywm
Beware that one of the joys of writing these for my own use is that I've only added the features I use, and fixed bugs that matter to me, and "clean enough to be readable for me" is very different from best practice for a bigger project.
I'm slowly extracting the things I'm willing to more generally support into gems, though.
Nice!
That's something that you could submit as a post here in HN
The wm was actually discussed on HN way back. I think once some of my other projects, like the terminal, are a bit more mature (it works for me and I use it for 99%+ of my terminal needs), I might post those too.
The biggest issue with these projects is that I feel uncomfortable pushing a few of them because I make a living providing development and devops work, and my personal "only has to work on my machine and certain bugs are fine to overlook" projects are very different from work projects in how clean they are, etc. But as I clean things up so they're closer to meeting my standards for publication, I'll post more.
wow, thank you for publishing this!
thanks! I love ruby but I'd be afraid to do anything but web backends and shell scripting with it. it's people like you who move language adoption!
Rails has some very, very good features that make standing up a CRUD app with an administrative backend _very easy_.
It's also got a bunch of semi-functional-programming paradigms throughout that make life quite a bit easier when you get used to using them.
Honestly, if it had types by default and across all / most of its packages easily (no. Sorbet + Rails is pain, or at least was last I tried), I'd probably recommend it over a lot of other languages.
If you're happy to trade the ecosystem and a bit of compilation speed for types, then Crystal is a perfectly cromulent choice.
Except it's not because:
1) It has differences in behavior with certain classes and is not a drop-in replacement.
2) It always compiles, so it's kind of slow to compile-test
It's not a 100% compatible replacement, but I've ported a few things with only trivial changes. I didn't say it's a drop-in, just that it's a fine choice.
Compile/test time is ok. It's a few extra seconds to run tests, but hasn't been an issue in practice for me.
I've heard good things, yeah :)
I've tended to find Kotlin to be the direction I'm happier going in. It scratches my particular itches, personally, more effectively. I can absolutely see how it's a very effective choice, though.
I love Rails and spent a good chunk of my career using it - and I'd recommend it more if only the frontend story wasn't that bumpy over the years with all the variations of asset pipelines.
I wish the TypeScript/React integration was easier. Say what you will but there's no way you can achieve interactivity and convenience of React (et al) UIs with Turbo/Hotwire in a meaningful time.
Agreed re asset pipelines. I definitely have Webpacker related scar tissue.
Have you tried either Inertia (https://github.com/inertiajs/inertia-rails) or vite-ruby (https://vite-ruby.netlify.app/)? Both look very promising.
I converted from webpacker (or rather shakapacker, the continuation after rails moved away from webpacker) to vite_rails recently, and it's been such a breath of fresh air. It's easy to set up, and easier to maintain. Strongly recommended.
Can you elaborate more on this? Years ago, I used to primarily do Rails development. Recently I built some web apps that use a JVM backend (one app uses Java & Spring and the other Kotlin & Micronaut) and a React frontend. One thing I ended up really missing is that those frameworks, especially with a disjointed frontend, don't solve the standard problem of a request sending an invalid form entry and showing the validation errors on the form. I ended up building my own implementation of that, which of course also requires a convention on message format. Since most apps need to solve this, it's so weird to me that frameworks nowadays don't solve it out of the box.
I definitely suggest using vite and the vite ruby gem. Create your Rails app, Create your TS + React app with vite, add the vite gem and done. It does not get better than that. Super fantastic.
Try React on Rails [1]. I’ve found it to be a very pleasant development experience.
[1] https://github.com/shakacode/react_on_rails
The language is incredibly flexible and allows for "DSLs" that are just ruby libraries.
A simple example: `3.days.ago` is a very commonly used idiom in Rails projects. Under the hood, it extends the base Numeric class with `def days` to produce a duration, and then extends the duration with `def ago` to apply the duration to the current time.
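Roughly how that works, as a stripped-down sketch (this is not ActiveSupport's actual implementation, which returns a proper Duration object and handles calendar math):

  class Integer
    def days
      self * 24 * 60 * 60   # keep the duration naively as seconds
    end

    def ago
      Time.now - self
    end
  end

  3.days.ago   # => a Time roughly three days in the past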
Taking that concept to a bigger extreme is this mostly unnecessary library: https://github.com/sshaw/yymmdd
`yyyy-mm-dd(datestr)` will parse a date str that matches yyyy-mm-dd format. It looks like a special DSL, but it's just Ruby. `dd(datestr)` produces a `DatePart`. Then it's just operator overloading on subtraction to capture the rest of the format and return the parsed date.
That library feels unnecessary, but the entire thing is 100 lines of code. The ease of bending the language to fit a use case led to a very rich ecosystem. The challenge is consistency and predictability, especially with a large team.
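For a flavour of how little machinery that kind of DSL needs, here's a hypothetical mini version in the same spirit (illustrative only, not the actual yymmdd code; the endless method definitions need Ruby 3.0+):

  require "date"

  DatePart = Struct.new(:fmt) do
    def -(other)                 # overload subtraction to build up a format string
      DatePart.new("#{fmt}-#{other.fmt}")
    end
  end

  def yyyy = DatePart.new("%Y")
  def mm   = DatePart.new("%m")
  def dd   = DatePart.new("%d")

  format = (yyyy - mm - dd).fmt            # => "%Y-%m-%d"
  Date.strptime("2024-12-25", format)      # => 2024-12-25 as a Date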
It’s a general purpose language with some very mature frameworks.
I don’t think it needs a niche. :)
It was a niche for a time, but now it's way more a general purpose lang.
Where it shines now is in its breadth and depth. There are thousands of well-documented libraries built by millions of devs.
If you want to do something, nearly anything, Ruby has a gem for it. Its power today is that it is omni.
Ruby is optimized for developer happiness, and it is not a small thing. Ruby on Rails is optimized to build successful web application businesses as a highly efficient team. It minimizes boilerplate code, and thus, time to market, while giving guidance (the Rails Way) on how to design for growth and scale.
Not really a niche language. Fantastic web server development. A more flexible and powerful language than python—the metaprogramming can be ridiculously powerful (when done well)—without the nonsense of white space sensitivity. ActiveRecord is perhaps the best ORM out there. Rails has tons of functionality to get running.
Overall, a pleasant and expressive language with an incredible community. Python ends up "winning" because of pytorch + pandas, but is (imo) a worse language to work in + with.
...but Ruby is whitespace sensitive too. It's hard to notice, because the rules mostly follow intuition, but there are cases where not only a missing newline but the absence or addition of a single space changes the resulting syntax. Off the top of my head there's the difference in parsing unary vs binary operators, like + and *, and the ternary operator ? : vs : in symbols, but there are certainly other cases.
Sure, like `a ?b :c` is nothing like `a ? b : c` (I guess the former is actually invalid), but that's obviously not what the previous message was referring to when speaking of Python, which uses spaces as its main facility to determine block scope.
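A concrete example of the unary vs binary case (a minimal sketch; `foo` is just a made-up method):

  def foo(arg = 10)
    arg
  end

  b = 1
  foo + b    # => 11, parsed as foo() + b   (binary plus)
  foo +b     # => 1,  parsed as foo(+b)     (unary plus on the argument)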
There was a recent thread about parsing Ruby wherein I learned that the % quoting character accepts any character following it as the opening/closing delimiter <https://news.ycombinator.com/item?id=42032212> Crazypants.
yeah, you can combine it with % formatting operator to do stuff like `% %% % %%%`
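For anyone who hasn't run into it, % literals take nearly any non-alphanumeric delimiter:

  %(a string)    # => "a string"
  %[a string]    # => "a string"
  %!a string!    # => "a string"
  %~a string~    # => "a string"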
it's by far the best language I've found for writing a quick bit of code to explore a problem, or to do a one-off task. perhaps something about it just fits the way my brain works, but I find it incredibly easy to convert ideas to working ruby without having to think too hard about "okay, how do I actually express this in code".
If you write if 0... end around your code, it runs!
As the name suggests, it's just a Perl alt... everyday chores, data analysis, some automation tools.
You'll need to learn it if you want to use Rails.
If I want to make a SSR site using Ruby, are there any good frontend UI libraries that make doing this easier? It’d be nice if there was some Ruby abstraction for writing HTML, CSS, and JS that makes building interactive UIs easier (possibly built on top of HTMX, AlpineJS, etc).
You might want to look at view components:
https://viewcomponent.org/
https://evilmartians.com/chronicles/viewcomponent-in-the-wil...
https://thoughtbot.com/blog/hotwire-turbo-streaming-viewcomp...
Very intriguing, thanks for sharing this.
Do you mean like Hotwire?
https://hotwire.io/
Yes that is quite close to what I had in mind, thanks
I would suggest taking a look at Phlex (https://www.phlex.fun/). This kind of ruby maximalism is very pleasing to the dev process. For the interaction I'm using hotwire and stimulus. Been using pure Phlex views in production for 2 years now. I wrote Protos (https://github.com/inhouse-work/protos) which is built on top of Phlex and adds a bunch of quality of life features I wanted.
Also, you can check out the list of UI libraries for Hotwire https://hotwire.io/ecosystem/ui-frameworks
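If it helps to picture it, a Phlex view is just a Ruby class, roughly like this (from memory, so check the Phlex docs; in older Phlex versions the method is named `template` rather than `view_template`):

  class Greeting < Phlex::HTML
    def initialize(name:)
      @name = name
    end

    def view_template
      div(class: "greeting") do
        h1 { "Hello, #{@name}!" }
      end
    end
  end

  # rendered from a Rails controller or another view:
  # render Greeting.new(name: "world")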
Thank you! This is exactly what I was looking for.
Why did NodeJS take off on the backend while Rails was still popular? I'll never understand it.
People say frontend/backend parity, and that’s true, but I also remember there was a time in 2011 or so where single thread/async was this new hot thing.
Nginx was starting to get popular and overtake Apache on installs, and people were enamored with its performance and idea of “no blocking, ever” and “callbacks for everything”, which the nginx codebase sorta takes to the extreme. The c10k problem and all that.
When JavaScript got a good engine in v8, Node was lauded as this way to do what nginx was doing, but automatically and by default: you simply couldn’t write blocking code so waiting on I/O will never bottleneck your incoming connections. Maximum concurrency because your web server could go right back to serving the next request concurrently while any I/O was happening. But no “real” multithreading so you didn’t have to worry about mutexes or anything. I remember being slightly jealous of that as a Rails developer, because webrick/unicorn/etc had a worker pool and every worker could only handle one request at a time, and fixing that could only happen if everything was async, which it basically wasn’t.
JavaScript becoming a popular language in its own right due to frontend was certainly the most important factor, but it wasn’t the only one.
“Node makes it impossible to write blocking code” reminds me of this classic and hilarious piece by Ted Dziuba:
http://widgetsandshit.com/teddziuba/2011/10/node-js-is-cance...
Not sure why this is considered a "classic" piece. It reads as if the author has just discovered the difference between preemptive vs cooperative scheduling, but hasn't yet found the words to describe his "discovery". Yes, you can write a `while(true){}` loop and block the event loop. That's not some damning indictment of Node. The point is that you don't have to block on IO, so your program doesn't have to halt the entire world, and sit around doing nothing while you're waiting for a hard drive to spin or a network request to complete.
Heh, he's so right in every regard although I use Node.
Worst of all, they made npm packages dead easy, so most of them don't even have a readme file, not to mention inline docs like POD or RDoc. This is how you end up with spam packages, malware in npm and left-pad disasters.
> most of them don't even have a readme file
Given the popularity of Github, and the fact that a readme file is the first thing you see when pulling up a project on Github, most projects these days do in fact have readme files.
> inline docs like POD or RDoc
JSDoc is relatively popular.
Using the same language to write your frontend and backend is desirable for many people / teams.
To add, front-end developers and other people who learned on JavaScript (because a web browser is something everyone has, it turns out to be a pretty great runtime environment, it has live editing with dev tools, etc.; it's honestly a fantastic way to get into programming) could write the icky backend code to make their slick websites, SPAs and games have internet-based save state.
but why does that language have to be one as braindead as javascript. path-dependency is the root of all evil.
Because nobody championed another language for the browser.
Good news is we have WASM now, so you can write backend and frontend code in basically whatever language you want.
Google hired all the best JIT engineers and set them to work on v8. If you want better performance you'd have to choose an AOT compiled language.
OpenJDK HotSpot and .NET RyuJIT both produce much faster code :)
And the latter lets you operate on the same level of abstraction as Rust and C++ compilers do.
> OpenJDK HotSpot and .NET RyuJIT both produce much faster code :)
For dynamic languages? Stuff like Clojure, JRuby, Boo, are definitely not faster than V8 JavaScript...
Google also made Dart, which is better than JS in every regard.
That seems like a meme from 10 years ago. I don’t think that’s really true anymore is it?
I mean TruffleRuby is as fast as V8 already, and MRI's YJIT and JRuby are catching up fast.
It could also be argued that JVM is the gold standard JIT.
I think that shows that Google doesn’t have a monopoly on great JIT engineers.
> Truffle Ruby is as fast as V8 already and MRI yJIT and jRuby are catching up fast
According to which benchmark? In my own benchmarking [1], node is ~60% faster than TruffleRuby and over an order of magnitude faster than YJIT on Ruby 3.3.0.
[1] https://github.com/attractivechaos/plb2?tab=readme-ov-file#a...
> I mean Truffle Ruby is as fast as V8 already
v8 and node are 15 years old. That's when this actually mattered and js on the backend took off.
Because of the JavaScript Everywhere crowd. When you have a hammer, everything looks like a problem for JavaScript.
God forbid we reuse knowledge instead of drudging through never-ending learning of the same concepts with different syntaxes, and paying 10x the cost to support every special native snowflake toolchain.
And of course the right way to do that is to take an extremely mediocre, rushed, incomplete language that is currently constrained to the browser, and make it run everywhere else, ironically having to reinvent many wheels and re-learn very old lessons the hard way along the way. Mission "reuse knowledge" is a hearty failure in node.js-land.
That’s why you use a language where the build tool of choice changes every month?
If that's the goal, the problem itself has been re-implemented by the Javascript ecosystem.
> knowledge
To be fair, js interpreters are available out of the box in all digital devices out there that embed a browser. That's a huge deal, as far as portability is concerned.
That said you do have things like https://opalrb.com/
Because it’s not as good. Why would I want two languages and two runtimes when I can just have one, all while delivering a demonstrably better user experience?
nonblocking IO
Node is web scale.
Rails wants to be the UI framework, and a lot of devs didn't want to do server side UI and state, especially OOP style. So it was easier to do JS for your APIs, etc. DHH's opinions kind of made it an all or nothing choice for many folks.
Because Node.js is the most bad ass rock star tech to come out, since Ruby on Rails.
I've wanted to try Ruby since the news of Rails 8 came out, but it's been so difficult that I just gave up. Installing Ruby on Mac and Windows and actually getting the 3.3 version required for Rails 8 was a huge mission and test of patience, because every installer defaulted to older versions of both Ruby and Rails even one month after the release. And yes, even Docker required tweaking to get the right versions, and I had issues with devcontainers anyway...
I finally got it installed and then followed some tutorials only to see that Rails' html.erb files have completely broken syntax highlighting in VSCode and other editors. I facepalmed and though I tried to search for a fix online, I couldn't find one. I saw posts mentioning it in forums and yet not a single solution posted.
So I gave up. I tried in Mac, Windows and Linux. If someone here knows how to fix the broken highlighter, that can be my Christmas gift today, but for the most part I've moved on.
Use asdf (https://asdf-vm.com/) to manage your Ruby versions.
You should be able to do
$ asdf plugin add ruby
$ asdf list all ruby (you'll see that 3.4.1, the latest, is available)
$ asdf install ruby 3.4.1
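$ asdf global ruby 3.4.1 (selects the version you just installed; on newer asdf releases this may instead be `asdf set ruby 3.4.1`)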
And now you can use Ruby 3.4.1 with no issues. Follow that up with
$ gem install bundler
$ gem install rails
$ rails new ...
Like psychoslave suggested, try out mise (https://github.com/jdx/mise). I used asdf for years, did the switch to mise and have never looked back for package management. It supports a huge number of languages and is performant.
Or mise, https://mise.jdx.dev/
Thanks for this.
I used to use ruby a lot - mostly just because it's the nicest language for scripting things on unix. I can remember trying to get it set up a year or so ago and finding the process difficult (think I was using rvm).
https://github.com/rbenv/ruby-build/wiki#suggested-build-env...
Probably a good idea to point people here before they install Ruby, since otherwise it'll compile for minutes, then tell you it's missing a dependency, and you have to start the whole process over.
I've found the easiest way to have a nice, consistent, working Ruby installation is to install from source. Ubuntu, Debian or Fedora are the easiest. There are a bunch of one-liners to install all the dependencies on various distros floating around. The Ruby website has instructions but the gist of it is, run ./configure, then make, then make install. Actually pretty easy. Gem is great for managing libraries, certainly better than any Python solution for that ecosystem.
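Something along these lines, with the prefix and flags as examples only:

$ ./configure --prefix=/usr/local
$ make -j"$(nproc)"
$ sudo make install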
On Mac, rbenv or asdf are both great. Also other commenters here have good suggestions. I never had problems with VSCode; curious what you ran into here.
Use rvm to install ruby. Ruby dev sucks on Windows, mac only.
I think this is a major reason Ruby had trouble taking off compared to Python. Most desktops were Windows, especially for businesses.
Typical HN comments, ignoring the elephant in the room.
It actually works quite well, if you use WSL.
Ruby itself works okay on bare-metal Windows, but virtually guaranteed any decent size Rails project will use some native gem that's a nightmare to get to build on Windows.
Ruby and rvm also suck on Linux (at least on the Steam Deck).
What sucks about it?
Most gems with native extensions won't work. Gems that listen to filesystem changes like guard can be buggy. I recommend using Mac or Linux for Ruby on Rails development.
The listen gem works on windows: https://github.com/guard/listen?tab=readme-ov-file#listen-ad... . Not sure whether guard builds on top of it.
The only way to reliably use Ruby seems to be Docker.
I hope Ruby 4 has a static type system like TypeScript.
I hope not; that would turn Ruby into just another of the myriad statically typed languages.
Try Crystal if you want less dynamic typing.
You can also check out Sorbet, the type checker for Ruby built by Stripe: https://sorbet.org/
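For anyone curious what that looks like in practice, Sorbet signatures sit inline next to the methods, roughly like this (a minimal sketch):

  # typed: true
  require "sorbet-runtime"

  class Greeter
    extend T::Sig

    sig { params(name: String).returns(String) }
    def greet(name)
      "Hello, #{name}"
    end
  end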
I would love that!
DHH is the issue: he intends Rails to be a one-person framework, while he thinks static typing is only for enterprisey software.
DHH has no say in Ruby development.
That being said, Matz also isn't a fan of static typing. Static type annotations exist in the form of RBS, but no one who matters in the Ruby ecosystem is pushing static type annotations in .rb files themselves.
Also, after seeing TypeScript, I'm very happy about that.
https://github.com/soutaro/rbs-inline
I myself am unsure where I stand on RBS. I wouldn't mind more use of it in my gem dependencies, but would probably not like it if it was enforced everywhere.
For now I'll stick with improving my test/spec-writing skills, and maybe some runtime type checking like https://literal.fun/
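For reference, RBS keeps the annotations in separate .rbs files alongside the plain Ruby, roughly like this (a minimal sketch):

  # lib/calculator.rb
  class Calculator
    def add(a, b)
      a + b
    end
  end

  # sig/calculator.rbs
  class Calculator
    def add: (Integer a, Integer b) -> Integer
  end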
I think RBS is a decent tool, I don't mind it as long as it never becomes a requirement for anything. I hate the trend of statically typed dynamic languages because it's all of the pain without the main benefit (native speed).
(Removed)
Neither of those is the case.
[flagged]
Let me guess, you're the creator of chat-to.dev?
Do you have any evidence that it's been used less and less?
https://trends.google.com/trends/explore?date=all&q=%2Fm%2F0...
Do you reckon Ubuntu is used by fewer people than it was 20 years ago?
https://trends.google.com/trends/explore?q=ubuntu&date=all
I don't know, but I still fully take the criticism: Google Trends is not an indicator of absolute usage, but (if anything) of relative usage. It's not clear which of the two the parent referred to.
Trend is not equivalent to usage.
The number of pushes seems to be steady if you look at Ruby: https://madnight.github.io/githut/#/pushes/2024/1
Just because something is not hyped up or talked about constantly doesn't mean it's dying.
Glad to see Estonia being number 4 in that list. We have some nice successful businesses built on Ruby.
Same. Dropping this related list here: https://ruby.ee/members/ (This list is incomplete; you can help by expanding it :)
this is evidence of google searches. nothing more.
There's also the TIOBE index: https://www.tiobe.com/tiobe-index/ruby/
It conveys a similar trend to the Google search chart.
Because it uses the same metrics.
If you want to see real usage statistics you need to consult GitHub, JetBrains, or RedMonk rankings.
Okay:
https://octoverse.github.com/2022/top-programming-languages
Ruby took a nosedive from 5th "top used" programming language in 2016 to 10th in 2022
Those are relative positions. We can't talk about a "nosedive" from that. It may be the case, but also maybe Ruby was just the slowest growing out of a number of languages growing in popularity. We don't have enough data from there.
> since ruby is a language that is less and less used by many programmers
But this statement from the thread starter is about Ruby's relative position.
If Ruby's rank in a top-used-languages ranking drops, then we can say that the language is less used by programmers in the survey pool.
> thread starter is about Ruby's relative position.
I guess we disagree on that part.
"nothing more" doing a lot of work.
I'm up. A beginner enthusiast in ruby.
[flagged]