I tend to think that the sophistication of the tools reflects the extent to which the language is broken. Valgrind doesn't need to exist for the Java user. You don't need class boilerplate getter/setter generators for Scala. Nobody needs NPE static analyzers for Rust. The need for rename refactoring tools is drastically diminished by the existence of type inference, almost to the point where `:%s/Foo/Bar/g` is entirely adequate.
The non-existence of tools that you find in other languages might signal an immature ecosystem, but it might also signal a diminished need for them.
Or, as Avdi has rightly pointed out, it means that the language is not conducive to tooling. Tools have to operate on the static representation of code, and the more it diverges from runtime reality, the harder it becomes to understand the codebase structurally. The extreme opposite of all this would be Kotlin, which was built from the ground up to have first-class IDE support.
In retrospect my statement may have come off as a claim to an absolute truth, but I didn't mean it that way. It is entirely possible for Ruby to be in a spot where it needs better tools while the language doesn't facilitate their existence. I was just explaining how a lack of tooling isn't necessarily (or even commonly) a bad thing.
It's been a while since I wrote much Ruby, but I can vividly recall wanting to find the definition of a method and being frustrated at how difficult it often was.
> It's been a while since I wrote much Ruby, but I can vividly recall wanting to find the definition of a method and being frustrated at how difficult it often was.
Given Ruby's open classes and per-object metaclasses (which are themselves open), I'm pretty sure a general solution to "find the definition of the method being called at this point in the code" is impossible. And not just "there is a single correct answer, but no general solution is available" impossible; there is no guarantee that a single correct answer even exists, and even for the subset of cases where one does, no general solution is available.
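A minimal sketch of the point (class and method names are made up): which `hello` runs at a given call site depends on which definitions have already executed and on the specific receiver, neither of which is visible in the static source.

```ruby
class Greeter
  def hello; "original"; end
end

class Greeter          # reopened: silently replaces the earlier definition
  def hello; "reopened"; end
end

g = Greeter.new
def g.hello; "singleton"; end   # per-object metaclass override

other = Greeter.new

puts g.hello       # => "singleton"
puts other.hello   # => "reopened"
```

Two instances of the same class, the same call syntax, and two different "correct" definitions; a third could be added at any time by any loaded file.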
No, nor have I missed the particular function described when working with Ruby.
But, OTOH, I am aware that it's possible to do good-enough-for-many-uses implementations of things for which a complete, general solution is impossible, including the function described.
Personally, I haven't -- is there a way that the benefits of these tools could be translated into a ruby-based environment? Are such solutions already available? Would be interested to know more.
This is definitely the downside of ruby's dynamic/duck typing.
On the flip side, the limitations of ruby forces its devs to adjust the way they write code. Knowing that I may need to use ⌘-shift-f (find in project) to look up method definitions changes some of my conventions, and because operations like refactor-rename (which I use all the time in C#) are very tedious, it forces me to think a bit harder up front when designing a class's API surface.
I'm not arguing that it's better than having good tooling -- just that the lack of tooling adds some interesting constraints that, in some ways, force us to write simpler code.
I dare say that most Ruby code is written in Rails projects, and there we almost always know where a method is defined, because of conventions (models, views, lib, concerns). It's when the developer is too smart that we get into trouble. The only project where I needed ctags in the last few years was a PHP Zend application that I had to get familiar with in order to design its v2.
So it might be another issue with Rails' influence over Ruby: not many chances to move beyond the fence (me too.)
> This is definitely the downside of ruby's dynamic/duck typing.
No, IDEs were pioneered (and if you check out Pharo you'll see they are still pushing the boundaries) by dynamic languages that allow runtime reflection, like Smalltalk.
Just because some IDEs push the boundary when it comes to dynamic languages doesn't mean it still isn't a downside. By that I mean, on other languages it's not even a problem you have to solve. The solutions to 'go to definition' or 'rename all references' are trivial in languages like the C-variants.
You are replying as if I had addressed your other statement. I pointed out that the general tooling woes are not due to its dynamic typing, by providing a counterexample.
As for your other statement I also disagree:
> I'm not arguing that it's better than having good tooling -- just that the lack of tooling adds some interesting constraints that, in some ways, force us to write simpler code.
What a lack of tooling (and of introspection in general) deprives us of is perspective: a clear picture of what is going on. To illustrate my point I'll quote an anecdote from Kent Pitman:
> Last night on my Lisp Machine I was frustrated by the absence of NT file support. I had no idea how file support was implemented, but I had a general working understanding of how the Lisp Machine was organized, and within a small number of hours I had learned about how to extend the operating system to talk to NT over TCP and implemented a new set of classes so that I could transparently talk to my NT box over my local ethernet. A lot of my ability to do this was enabled by the ability to point and click on objects in order to debug them and inspect them without any fear or confusion about whether I was using the right sources, the right packages, etc.
However I think this is a more nuanced point, therefore I have no interest in arguing it over the internet. I see the case for constraints resulting in simpler code.
What I objected to is using Ruby's dynamic typing as a cop-out and trying to pass it off as a trade-off.
You're confusing "sophisticated" with "annoying or difficult to use". Boilerplate generators are annoying, and Valgrind is difficult to use, but they're unrefined: they solve problems that are better avoided than solved.
A better measure of sophistication is how much bang you get for the buck, and how much of this bang is made possible by the language itself.
In my opinion, the best example of this is GHCi: Using GHCi almost feels like asking the computer "Is what I'm doing right?". GHCi itself is made possible by Haskell's excellent balance between type system expressiveness and making inference feasible. To see why maximizing raw expressive power won't help, in most dependently typed languages, it's hard to construct nontrivial meaningful utterances, so you don't even get to the point where you can ask "Is what I'm doing right?" - the type checker will tell you "It isn't even wrong". That being said, I'm aware that the situation w.r.t. tooling is improving a lot.
Another example is Rust's borrow checker: It will tell you exactly at which point an equivalent C or C++ program would have had a dangling pointer. It doesn't quite have the same "fluid conversation" feel as using GHCi, but it saves the enormous amount of time that would've been otherwise spent chasing memory bugs in a debugger, especially in a concurrent program.
Ruby isn't defined by poor tools. Ruby is defined by a design that makes static analysis intractable. The lack of sophisticated tools is just a corollary.
I agree about diminished need, but I think the article was addressing quality not quantity. Coverage is what matters, and that's a joint effect of the language+tools.
Do you know Clojure? I'm curious what you think about Clojure error stacktraces. Even though the language seems to be designed very well, one thing that is underwhelming is how hard understanding errors can be for Clojure beginners.
> And yet, in the early 2000s, long before Rails drove throngs of people to learn Ruby, very smart people who knew about Lisp and Haskell and good tooling, fell in love with Ruby. People like Dave Thomas and Martin Fowler and Jim Weirich.
This is probably because very smart people don't need tools, so they can let themselves fall for the appeal of raw, textbook-style examples of source code.
Very smart people probably have written programs in low-level languages and gotten them right with only the help of print traces, and raw stack/register dumps.
Print traces and dumps were the only tools we had for a long time, and quite frankly there are a lot of areas in programming where this is still the case (lots of embedded programming, for instance, and it's not likely you'll even get a print there).
Tools are nice; this isn't an old man shaking his fist at the wind. It's more important to note that not all tools are software. Some tools are mental. We have to force ourselves to build those tools over time.
"Very smart" probably means different things to different people, but, for me, it means proactive laziness: the ability to foresee anything that might create repetitive work in the future, and work now to prevent that from happening.
So I can't regard a programmer as being "very smart" if they don't have the ability to abstract over every recurring pattern in their code and turn it into a nice reusable library. And I also can't regard a programmer as being "very smart" if they design abstractions that leak: in my experience, abstraction leaks create more work than anything else.
I have no idea whether it correlates with "very smart" in any particular way, and suspect that it probably doesn't, but I have definitely observed significant differences in preference when it comes to the use of extralinguistic tools. It's easy to see how there are some people better suited to early adoption of new languages than others, because they are comfortable with and may actually prefer a minimal-tooling environment, while others will find it so unbearably primitive that they won't be able to get anything done.
I've had coworkers who believed anything that can be automated should be, and therefore one should learn and use all available tools. In this view, anyone failing to take advantage of all possible automation is wasting their time on grunt work and is therefore less productive than they could be. One guy at my last job was notorious for his incredibly long and frequently misspelled identifiers; I could not understand how he was failing to notice these problems until I realized that he literally never typed out a full identifier more than once, and typically generated as many bytes of source code via intellisense as he did by typing.
I am out on the opposite end of the spectrum; when I'm coding I don't want to think about tools at all, and a tool had better be absolutely spot-on perfect or its rough edges will waste more of my time by distracting and irritating me than its features will save by automating something I would otherwise have had to do with my brain. I sometimes use IDEs depending on the project, but I always disable code indexing / completion, because the little timing hiccups you get as the tool does its thing break my flow and irritate me. Instead, I put more time into making consistent names and simple APIs so there is less to remember, and do a bit of extra grepping when I can't recall what I need. For me this feels easy and natural; for other people it seems like going the hard way, and they don't understand why I won't let the machine help.
These differences extend all up and down the tool chain; some people love their sophisticated debuggers and write all kinds of complex scripts that produce interesting results, while I tend to just throw in a lot of printfs and debug by reading logs. Not to say I won't use a debugger when it's available - just that I know the log approach will always work, and it can be less work to fall back on something simple that I don't have to think about rather than taking the time to remember where all the debugger's knobs and switches are. On the flip side, I will make heavy use of all available language tools for abstraction and code simplification - lambdas, namespaces, interfaces, whatever's handy, my goal is to build every interface, no matter how minor or internal, in such a way that it's difficult to use it improperly. This has irritated people I've worked with who are happy with simple boilerplate because they rely on automated refactoring tools; to them, these esoteric abstractions were aggravating because it left them struggling to see the forest for all the unfamiliar trees.
Some of this may be a function of experience, particularly experience with low-level environments. I've written plenty of debugging, logging, grepping, and reporting tools over the years, so I know I can always just bang together what I need in the moment, and the prospective payback from learning some existing do-it-all tool feels correspondingly reduced. But is this a smarts thing? I am not so sure - I mean, I do think I'm pretty smart, but it could just be preference; I really like digging into the foundations of things and have therefore invested a lot of time learning how to work in those environments. Someone who actually prefers to live at a higher abstraction level might well be wasting their time by thinking about this sort of thing, and while that's not my bag I certainly wouldn't argue that high-level work is unnecessary or of lesser importance.
Some of it may also be a function of age. I've seen so many tools come and go that it has come to feel like a waste of time to learn one deeply until it has become stable and ubiquitous enough that I can reasonably expect the time I invest in it to pay back over at least 5-10 more years. If it doesn't meet that threshold then why bother? There will just be something else coming along later. I'll learn it if I need to use it, but because of the above-described confidence that I can get the job done regardless of tooling, I'm less likely to hit the bar for "need to use it".
The weakness of the heavy-tooling approach is that tools always have flaws and blind spots, and one can get stuck thinking through the tool's mindset and have trouble when there's a problem it doesn't solve particularly well. But the weakness of the light-tooling approach is that tools shape mental models, and mental models shape language, and software development is really more about communication between humans than it is about communication with machines; if you don't adopt the mental frameworks your coworkers are using it may be more difficult to interact with the systems they are building.
(also, a moment of brag: one of my proudest debugging feats was when I wrote a bootloader for a brand new embedded architecture using no feedback but a single blinking LED. There was no stack trace, not even a serial line for a printf log, because none of the drivers had loaded yet.)
Regarding misspelled identifiers, I've experienced the exact opposite problem. Code I had to work with that was clearly written in simple editors often contained misspelled words (though it was also usually written by non-native English speakers) and even bugs arising from using a wrong variable name. These are errors that with proper tooling would be near impossible to make, because the tool would immediately underline the variable as either containing a spelling error or not being defined. Not to mention it also led people to use generic, non-descriptive variable names and abbreviations.
I also watched some people trying to printf-debug a problem, and it's usually a fest of "Was the problem here? Hm, not here, ok, let's put these statements over there. Hm, not over there either, what about here... (repeat several more times, depending on how deep the calls go)". Having to do something like this would really feel like the stone age to me, when you could just get all the relevant information in one quick pass without having to modify any code.
Another thing is documentation. When you know that you can just bring up a method's documentation with a single keystroke, you are much more motivated to document your own methods in the same way. If it's just "some generator will create a bunch of static html pages later on that most people likely won't ever read", some people will not bother.
I do agree about flaws in tools being a huge issue though. Especially with refactoring tools, if you can't trust it to not accidentally mess up parts of your code, you might as well not use it at all. Similarly, if you can't trust a "find usages of this method" to find ALL of its usages, the utility diminishes severely.
EDIT: oh, and perhaps the most simple, but most important feature of all, being able to Ctrl+click a function to navigate to its definition means that people are going to dive into the code a lot more often, even framework/library code, which is obviously beneficial to learning how things work under the hood. The less friction there is to doing something, the more readily people are going to do it.
Why did my request for a TL;DR get downvoted? I thought that, if you had that much to say, it could be an interesting opinion, but I didn't have enough time to go through it all.
I didn't downvote, but I read a TL;DR request, especially one directed at the commenter, as saying "my time is more important than yours; could you spend your time purely for my benefit, despite my not being willing to put in my time to read what you've said?" All else aside, it probably takes longer to ask for, and then complain about downvotes on, the request than it does to read 4 short paragraphs.
Since when do TL;DRs have an obligatory hidden meaning? You can speculate on that, but it's just your speculation and nothing more, and it can be the complete opposite of my intentions. So don't tell me what you read into my two words, "TL;DR".
There is no way I should have to justify a request for a summary in the first place. I only did that afterwards in order to understand the reasons for the downvoting, and if the reason was that someone else gave their own interpretation to my request, then it's not a valid reason for me.
Their downvote doesn't teach me anything new; I just observe how obtuse and flawed their mental process was. They were probably angry at having had to read the whole long comment I referred to, and couldn't accept that someone else could get away with a shorter version of it. It's called envy, and it sucks.
TL;DR:
I don't give a shit about your downvotes if you give your own interpretation to my request in order to support your envy for the fact that I could get away with a shorter version of the original text.
Ruby's tooling is obviously not as advanced as that of a language like Java, for many reasons. But terrible isn't the adjective I'd use. It's pretty easy to use editor integration to provide pretty decent automatic formatting, linting and code completion. Pry-debugger is pretty great, and generally I don't find working in Ruby to be markedly more taxing than other languages.
> automatic formatting, linting and code completion
You are describing tooling that is used to write code, but not to read or debug it. And Ruby is exceptionally easy to write without any tooling whatsoever.
The hard problem with dynamic languages is reading the code and understanding what it is doing at run time. And that is where TFA suggests Ruby is terrible.
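To be fair, the answers Ruby can give you tend to be runtime answers rather than static ones. For instance, `Method#source_location` (available since Ruby 1.9) will tell you where a method was defined, but only once the program is actually running and the definitions have executed:

```ruby
# A made-up class to demonstrate runtime introspection.
class Example
  def compute(x)
    x * 2
  end
end

# Ask the running VM where the method currently in effect was defined.
file, line = Example.instance_method(:compute).source_location
puts "compute defined in #{file}:#{line}"
```

An editor can build "go to definition" on top of this, but only by talking to a live process, which is exactly the gap between dynamic reality and the static source that TFA is pointing at.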
On the way to reading about something else, I read an essay by a language developer complaining about type erasure in functional languages.
In short, by the time you get to the point in implementing a language where you can do type erasure, you're golden, because it usually means you have things nailed down. Unfortunately, then people actually implement type erasure, and now there is no way to link data structures in memory with a piece of code at run time.
Or at least no easy way.
As an example, C and C++ do erase type information: there is often nothing that tells you what an object in memory is. But compilers take great pains to preserve that information in the generated output formats, which is what allows you to write debuggers. I'm sure languages like Java and .NET do the same.
I used to think that it was fine, but then I started working with LISP and SLIME, and the difference between that and what's available in the Ruby-world is striking.
Pry debugger is great when, or if, it works. On Windows this is generally impossible; even on Ubuntu it is spotty. Debuggers for other languages just work.
Then there are also a ton of issues Avdi didn't touch on: silent failures from gem, bundler not being installed by default, editing code at runtime limiting what kinds of inspection can be done, many gems being platform-specific, how underspecified Ruby is; in general I find few things can be relied on.
In other languages I find that the set of things I can rely on is much larger than in Ruby.
If it didn't pay so well I would have dropped Ruby a long time ago because the language is composed of so many half measures.
Do you have an example of when Pry was being spotty? I've never had a single issue with it (on Ubuntu, too).
There are certain scenarios where you can expect Pry not to work "as expected" due to the nature of the language. If you expected it to work, it means you're just not acquainted with the tool/language. Same can be said about any tool or language, not just Pry or Ruby.
Don't know if it's still valid, but when Ruby 2 came out, pry wasn't working with it. Not just that it wasn't working: it would actually start, and then when I printed some variable, the VM would crash. I spent a few hours thinking my code was wrong before finally finding out that pry didn't support the version of Ruby I was using.
"Tools" can mean different things in different contexts, and this isn't the type of tooling Avdi is talking about. For example, I agree with Avdi's post, but Bundler is an amazing, wonderful tool.
I write both Ruby and Lisp frequently, and I must agree that Lisp tools are vastly superior. Ruby, despite being a dynamic language, has a sorry excuse for a REPL which really hampers my workflow. For example, Rails watches your Ruby files for changes and reloads them in development mode, but if it was easy to re-evaluate arbitrary expressions from your editor (a la SLIME or Geiser) such a feature would never be needed. Then, of course, there's typically so much magic happening that it's often impossible to find the place in which a symbol was defined by searching for that symbol, which makes debugging overly difficult.
When you say "sorry excuse for a REPL", you are referring to the built-in REPL and not pry (http://pryrepl.org/), right? Because pry is freaking awesome!
The same issue is there. This is a language issue. I want to edit a file and re-evaluate arbitrary expressions with a keystroke. Writing the code and then copy/pasting it into a REPL prompt is not sufficient. Lisp programmers have had excellent live coding environments for a very long time, but with Ruby I must make do with Rails auto-reloading files for me.
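In fairness, the raw building block does exist: `Kernel#load` (unlike `require`) re-executes a file every time it's called, and redefines whatever the file defines; Rails' auto-reloading is essentially machinery around this. What's missing is the SLIME-style editor integration. A crude sketch, using a temp file so it's self-contained:

```ruby
require 'tempfile'

f = Tempfile.new(['snippet', '.rb'])
f.write("def answer; 1; end"); f.flush
load f.path            # evaluate the file's definitions
puts answer            # => 1

# "Edit" the file and re-evaluate it, as an editor keybinding might.
f.rewind; f.truncate(0)
f.write("def answer; 2; end"); f.flush
load f.path            # the method is redefined in the live process
puts answer            # => 2
```

Note the well-known caveat: `load` only adds and replaces definitions; a method deleted from the file lingers in memory, which is one reason naive reloading schemes drift away from reality.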
> I want to edit a file and re-evaluate arbitrary expressions with a keystroke. Writing the code and then copy/pasting it into a REPL prompt is not sufficient.
This suggests to me you've not spent much time with Pry. Are you aware of this:
It's not perfect (doing it perfectly in Ruby is a hard problem, since a method can have been defined and redefined in multiple places), but it works very well.
Better in what way? How is it better than a buffer to put arbitrary code in to get executed?
So far, from what you've pointed at, you seem to have picked the most restrictive, simplistic examples of what Pry is capable of as a counter-example to explain why it isn't enough for you, rather than explaining what you believe is actually missing.
Point is: You have access to much better than the examples you've pointed at with Pry as well.
I wrote a Smalltalk-esque System Browser for manipulating class definitions without the need for a filesystem a while ago (here's a rather crappy screencast of it: https://www.youtube.com/watch?v=2xC5B5t5Rq8)
It was a LOT of fun, and I wish I had the time to resume it. The feedback loop was so much faster, and I can say that there is serious cognitive overhead we all carry by manipulating the definitions of things in files rather than just the objects and classes themselves.
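The core trick is plain Ruby (class and method names here are made up): a browser like that just edits the live class object, and every existing instance picks up the change immediately, with no file round-trip.

```ruby
class Calculator
  def add(a, b); a + b; end
end

calc = Calculator.new
puts calc.add(2, 2)   # => 4

# "Edit" the live class the way a Smalltalk-style browser would:
Calculator.class_eval do
  def add(a, b); a + b + 100; end   # deliberately changed, to show it took effect
end

puts calc.add(2, 2)   # => 104, even for the pre-existing instance
```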
This is one of the things I've always liked about JRuby. You can use the excellent tooling of the JVM ecosystem (e.g. YourKit, Coverity Dynamic Analyzer) to understand the behavior of your JRuby applications, and even attach directly to running production apps to debug them.
I've found tools like this essential for debugging multithreaded JRuby applications, and there's simply nothing else like them available for MRI.
How would people who use both Python and Ruby compare the tooling for the two languages? I'm only familiar with Python, and I end up using IPython most of the time. I'm curious whether I'm missing out, or maybe Python also has "terrible" tooling..
I'd say Python tooling may be OK at the language level, even slightly better (code introspection abilities are more recent in Ruby, I think: the ability to get method docs, method definitions, etc.). The equivalent of IPython (the shell version, not notebooks) in Ruby is pry, and it's pretty good.
On the other side, in Python higher-level tools are horrible from an operational perspective, and much worse than Ruby's if you ask me (pip vs bundler, rbtrace vs.. gdb-python(?), pry/pry_remote vs pdb, ...).
Also +1 on boardwaalk's comment, this side of Python is pretty good.
Python is slightly easier in my experience to refactor and manage as the codebase grows, somewhat easier to get performance wins out of, somewhat easier to debug. But it's approximately in the same bucket as Ruby when it comes to tooling - put another way, you probably wouldn't switch to Python from Ruby because of the relative improvement in tool support.
I think Avdi's right in the sense that weak tooling is inevitable due to the language's properties. With Ruby and Python, what you gain in near term productivity and effectiveness you pay for later - your tipping point may vary.
This is pretty specific, but Ruby has pretty terrible support for parsing Ruby. Libraries are either unmaintained or broken or for old versions. Python is great for parsing ('import ast'), compiling ('compile(...)') and executing ('exec(...)'). So it's much easier to write code generators, write linters and autocompletion tools and such.
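For what it's worth, Ruby's standard library does ship a parser, Ripper, which turns source into s-expressions; the complaint stands in that its output format is far less documented and stable than Python's `ast` module, but it exists:

```ruby
require 'ripper'

# Parse a Ruby expression into a nested-array s-expression.
sexp = Ripper.sexp("1 + 2 * x")
p sexp.first   # => :program
p sexp         # full tree: binary :+ over an int and a binary :* node
```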
I had looked into the 'ast' module a little while back, and I do agree that it's pretty neat. The main use I've seen is type checking, or adding language-level features (things that would typically be done with macros in LISP).
I am not aware of many tools written using 'ast', that don't try to extend the language in some way. Any thoughts there?
One of the biggest success stories in recent times where language support has made tooling awesome is JSX.
The success of JSX shows that developers are happy to write HTML inside code, and there are clear benefits to doing so compared to using a dedicated templating language. But the reason this wasn't popular until JSX showed up is that HTML built up in code through string munging is impossible to parse. JSX makes XML-inside-code parseable, which makes it possible to use IntelliJ's excellent IDEs to work seamlessly with it.
> JSX is a statically-typed, object-oriented programming language designed to run on modern web browsers. Being developed at DeNA as a research project, …
> People in the industry are very excited about various ideas that nominally help you deal with
> large code bases, such as IDEs that can manipulate code as "algebraic structures", and search
> indexes, and so on. These people tend to view code bases much the way construction workers view
> dirt: they want great big machines that can move the dirt this way and that. There's
> conservation of dirt at work: you can't compress dirt, not much, so their solution set consists
> of various ways of shoveling the dirt around. There are even programming interview questions,
> surely metaphorical, about how you might go about moving an entire mountain of dirt, one truck
> at a time.
> Industry programmers are excited about solutions to a big non-problem. It's just a mountain of
> dirt, and you just need big tools to move it around. The tools are exciting but the dirt is not.
> My minority opinion is that a mountain of code is the worst thing that can befall a person, a
> team, a company. I believe that code weight wrecks projects and companies, that it forces
> rewrites after a certain size, and that smart teams will do everything in their power to keep
> their code base from becoming a mountain. Tools or no tools. That's what I believe.
I'm not quoting this to refute the idea that Ruby has terrible tooling, or to suggest that tooling doesn't matter. But rather, to give some context for this idea:
When we first embraced Ruby, we were writing web apps in Java. Ruby allowed us to write those same apps with fewer lines of code, by fewer people, in less time. That was a triple-win, and there were all sorts of secondary positive effects.
Fewer people on a team means less necessary process. It means less money required. YCombinator wouldn't exist today if you couldn't have done something meaningful with $18,000 over a few months. Less time means you can be lean and iterate, rather than spending time and tons of money trying to analyze what a market wants in advance.
Those factors were massive wins for the startup culture. There are similar effects within companies, but I needn't list them out here.
But one negative effect is that with small teams writing fewer lines of code, in a startup culture, there is much less emphasis on tooling. It's a "nice-to-have," because app complexity does not make the difference between shipping and not shipping.
So yeah, there's some tooling, but it is not very good. Really, it's not. But it was good enough.
But as time went on, our apps and teams grew. Most businesses are in a Red Queen's Race against their competition. It is very hard to stay small and focused and have a single responsibility in a market. So our code grows and grows, until it reaches a kind of equilibrium with how many lines of code a company can support. Which means we expand until we're back to having big code bases, with many engineers.
And guess what? One million lines of Java is easier to understand than one million lines of Ruby. Doubly so when you have better Java tooling than Ruby tooling, and when Ruby does things that demand better tooling.
So now we're in this place where successful companies have big apps, but we don't have a "big app" culture, so we don't have the tooling required to support this many lines of Java, much less this many lines of Ruby.
Even if I knew nothing about Java and Ruby (which is not true), have you noticed that you have managed to assert these three things?
"we were writing web apps in Java. Ruby allowed us to write those same apps with fewer lines of code"
"So our code grows and grows, until it reaches a kind of equilibrium with how many lines of code a company can support."
"One million lines of Java is easier to understand than one million lines of Ruby. Doubly so when you have better Java tooling than Ruby tooling"
This suggests to me that if, line for line, Ruby is more difficult to understand then the codebase will not reach the size of the Java codebase. But though smaller, the codebase does more. So is it a wash? Maybe.
Nobody would deny that it is harder to build tooling for a dynamically typed language than for a statically typed one. But Java has been around longer, and it also has/had huge enterprise backing, with tens[1] of implementations versus, what, a handful for Ruby[2]? I think it is fairer to compare tooling in the ecosystems of Ruby and Python and Perl and so on before using the word "terrible".
I don't disagree with the thrust of this post, but
> When we first embraced Ruby, we were writing web apps in Java.
Avdi's reference to people coming to embrace Ruby is specifically in the pre-Rails time period. So you're talking about different, though absolutely connected, things.
I was there too. You know... It was possible to write web apps in Ruby before Rails and Sinatra, although today that feels like a party trick, like writing FizzBuzz in Lambda Calculus.
Anyway, I'm hoping that the new static typing proposals can help make Ruby's tooling better. Ruby's greatest strength is also a huge weakness: it's just so hard to tell what's happening with all the dynamism. As someone who's been back on static typing for a while, but wrote a lot of JavaScript over the past few weeks, that's the angle I've been thinking about in this area.
I haven't been able to properly articulate why I never feel the need for these tools in Ruby, yet in other languages, they feel indispensable. Human brains are amazing: you can hold two completely contradictory opinions simultaneously. (Though maybe the different context means that they're not actually contradictory, even if it feels like it)
> Anyway, I'm hoping that the new static typing proposals can help make Ruby's tooling better.
Probably not. Keeping the language in place and focusing on tooling would make the tooling better; as others have pointed out, there are dynamic languages with better tooling.
And it's not like people who want the costs and benefits of static typing are ill-served by the current marketplace of computer languages.
The author touches on something important when comparing Ruby to Unix, and why we love them more than more structured systems.
Rigorous language is the enemy of expressive language.
That's not to say you can't be expressive within a rigorous system - it's just more difficult. And you can only express the things that the rigorous language is capable of expressing. So less rigorous, less formal environments, like Ruby and Unix, allow for a great deal of fast and intuitive expressiveness.
Immutability is rigidity. Of course, it comes with tremendous benefits as well, but as an effort latency vs bandwidth thing, there's significant latency with using Lisp intellectually. Would you use it for shell scripting or a simple glue language? I wouldn't, and I love Lisp.
Lisp is the language of the future. It's always been the language of the future, for nearly 60 years now. It always will be the language of the future. We should think hard about why Lisp has both never become mainstream and never died out. The fact that it never became mainstream, especially if you exclude conspiracy theories and "most programmers are idiots unworthy of our wisdom", is telling.
Rigidity may not be the reason Lisp never broke out. But there is some human factors reason that's worth understanding.
But Lisp is no more inherently immutable than any other language—depending, I guess, on what you mean by 'immutability'. The runtime is certainly highly mutable (although I guess that would usually be called dynamic). It's good practice to use lots of immutable data structures, but that's hardly required, nor an inherent feature of the language.
I don't know whether the statement is true, since I haven't really worked with Ruby until now.
But I would definitely support this statement:
> Programming languages cannot be considered separately from their ecosystems
I think one of the main reasons for the success of the C ecosystem was not the language itself, but the standard C library. Standard Pascal did not have such a thing (Borland Pascal did, and was more successful than any other Pascal), Modula-2 -- a superior language for its time -- did not have it (the library that was standard for Modula-2 was just too clunky for wide usage), and other languages did not have it either.
C, when it started, had not only the library but also a development environment that was, for its time, superior: Unix.
From this, I would strongly agree that the ecosystem is decisive -- and the bar has risen since the times of C. Today, you need top compilers or interpreters and top-notch libraries -- best of all a framework (like Rails) for a good use-case of the language, plus tools around it that support your programming.
With all those good languages around today, any new language must have appeal, an ecosystem, and if possible a community, to be successful.
This argument rings true and has been in the back of my mind for quite a while. To take this into a different context: say you were building a house.
Houses were built out of wood for years (and still are), but we slowly built tools to make it easier (hand-held saws, levels, nail guns). Now a new material has hit the market... plastic (for instance), and it is a better and more pliable medium to build a home, but we lack the tools to make it easier. It is as though we have no hand tools that will cut through it, and no nail gun that will penetrate it. Yes, it (Ruby) seems to be a better material to build a structure with, but when you are building a lot of homes, or even a mansion, it seems silly to use a material that limits what you can do to manipulate it. The argument that "Ruby is so cool, I don't need tools" seems quite silly and misguided. That may be OK if you are building a treehouse in your back yard, but if you are building an entire housing district, you are going to lose money in the development costs.
The best quality, long term lowest cost roofing material is copper. But we use asphalt shingles, which are short-lived and expensive in the long term. Why? Because they're cheap in the short run, they require little expertise to install, and they're easy to customize.
Some people would say "Python", since the tooling for Python is generally excellent, but I don't have enough experience with Python web stacks to say that they're better or worse than Rails. There's certainly less "magic" with Python, but I'd argue that's a good thing.
So wrong. The author has probably never tried RubyMine (or IntelliJ with the Ruby + Rails + etc. plugins).
I was hacking with Pharo Smalltalk just yesterday, so I get what the author means by the niceties of Smalltalk, but Ruby tools from JetBrains are very nice.
Might be, but (regarding Ruby) not if your codebase is designed[1] and not grown. Sandi Metz does an extremely good job of explaining how to use 'single responsibility' to make even huge codebases maintainable.
I believe the same is true for Smalltalk and any OO language.
EDIT: I realize I left out my main point: I definitely agree with several of the points in the article. I have long wished for some way to traverse ObjectSpace easily and to know what code is actually being executed at any point in time. The lack of use, or availability, of comprehensive static analysis tools in our programming today is a huge problem, IMO. </end edit>
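For what it's worth, Ruby does ship an ObjectSpace API for this kind of introspection, though it answers "what objects exist right now", not the static questions tooling needs. A small sketch (the `Widget` class is made up for illustration):

```ruby
# Runtime introspection with Ruby's built-in ObjectSpace:
# enumerate the live instances of a class on the heap.
class Widget; end

widgets = Array.new(3) { Widget.new } # hold references so GC can't reclaim them

# each_object walks the live heap -- a runtime answer, not a static one.
live = ObjectSpace.each_object(Widget).count
puts "live Widget instances: #{live}"
```

For the "what code is executing" half, `TracePoint` plays a similar runtime-only role; neither helps a tool reason about code that hasn't run yet, which is exactly the gap the article describes.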
I don't have much experience with Ruby debuggers, as I rarely use them.
I do use RubyMine, every day. It's got powerful inspections and is an incredibly valuable tool for me. But I originally used IDEA many years ago, and I don't think that RubyMine is anywhere near as powerful as IDEA, because of all the dynamism inherent in Ruby.
I personally think that Ruby is far too flexible a language. I use metaprogramming, monkey patching, and `method_missing` exceedingly sparingly, if at all. I try never to make code any less traceable than necessary. But I use Rails, and while I don't dive into the Rails source very often, I'm sure there's a lot of hidden stuff in there: all sorts of methods added to classes (ActiveSupport -- inflections, time zone helpers, ... -- and all the core extensions that have made their way into gems that don't even require ActiveSupport), and all of my URL helpers, which use method_missing to "define methods" that never existed. In a word, Ruby's dangerous (and can be made very hard to reason about).
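To make the URL-helper point concrete, here's a toy sketch of the method_missing pattern (this is not Rails' actual routing code; the `UrlHelpers` class and `ROUTES` table are invented for illustration):

```ruby
# Toy illustration of the method_missing pattern described above.
class UrlHelpers
  ROUTES = { users: '/users', posts: '/posts' }.freeze

  # Intercept calls like `users_path` -- no such method is defined anywhere,
  # so no "find definition" tool can jump to it.
  def method_missing(name, *args)
    route = name.to_s.sub(/_path\z/, '').to_sym
    if name.to_s.end_with?('_path') && ROUTES.key?(route)
      ROUTES[route]
    else
      super # fall through to the normal NoMethodError
    end
  end

  # Keep respond_to? roughly honest about the phantom interface.
  def respond_to_missing?(name, include_private = false)
    name.to_s.end_with?('_path') || super
  end
end

helpers = UrlHelpers.new
helpers.users_path # => "/users"
```

`users_path` works, `respond_to?(:users_path)` says true, and yet `grep`-ing the codebase for `def users_path` finds nothing. That's the traceability cost in miniature.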
I have been watching http://crystal-lang.org/ every once in a while. I think one of the core strengths of Ruby is how concise and elegant it is or at least can be, and how readable it is or at least can be. It's pretty damn nice in a lot of ways. The community support is of course incredible as well (https://github.com/bbatsov/ruby-style-guide is great, http://slim-lang.com/ has made markup a non-arduous task, etc). Maybe we can do better.
EDIT: Incidentally: a lot of my complaints about Ruby are things that I would say about JS as well, but I have considerably less experience there. I just know that I've half-learned sooo many JS patterns. While I don't know Python, one of the things I've really liked hearing about it is how there's supposed to be "one right way" to do stuff. I think this is good. Code serves a purpose, it's a functional tool - IMO, it's not like written language where you really need a bunch of similar-but-different ways to get the same meaning across.
The Python "one right way" is a bit of a myth. It's a design goal that is unrealizable in practice. However, the documentation is very good and (in my experience) it lends itself to solving problems in very obvious ways.
Everything within our knowledge in the material world can be fixed with enough resources. Maybe for Ruby you would need more resources at the beginning, given how it has been designed, but once you build up a good base upon which it will be easier to make other changes (possibly even with a Kickstarter for a new kickass Ruby implementation?), then the resources required for other changes will be comparable with those needed in other languages.
In the meanwhile, in Ruby we have been profiting from a nice syntax and object model, which in the end, IMHO, is what tipped the balance and still makes us use it. This article was great in the sense that it highlighted the right approach that we would need in Ruby to make it grow into a "state of the art" language. I really hope this will happen, but for some reason I feel a great amount of opposing force to it, and I am not sure it only comes from rational thought. I'd like to see someone have their say on this aspect at a deeper level.
Refactoring my own comment above (I apologize but I can't delete it anymore).
Everything in the material world within our knowledge can be fixed with enough resources.
Given the current design, maybe for Ruby we would need more resources at the start of this process, but once a good base is laid out it will be easier to make other changes. Or could it all start with a Kickstarter for a new Ruby implementation???
Anyway, at that point, the resources required for successive changes will be comparable with those needed by other languages.
So far in Ruby we have been profiting from a nice syntax and object model, which in the end is what tipped the balance and still makes us use it.
This article was great in the sense that it highlighted the right approach that we would need in Ruby to make it grow into a better implementation. I really hope this will happen, but for some reason I feel a great amount of opposing force to it, and I am not sure it only comes from a rational mindset. I'd like to see someone have their say on this on a deeper level.
Considering that the author's primary source of income (to the best of my knowledge) is a video series on tips and tricks for Ruby development, I wouldn't chalk this up to "backlash" so much as "call to arms".