What have been the drawbacks of static typing for you?


Hi everyone,

I’m set to make the case at work for transitioning to TypeScript from JavaScript.

We currently use Rails and React. I know from years of experience with Ruby and JS that it can be difficult to track down where something is defined or used, and the most common error you’ll see in logs is calling methods on null.

Meanwhile, Rust offers a plethora of incredible developer tools made possible by its type system: looking up definitions, looking up uses, knowing for certain the type of an object at a particular line of code, and so on. I take Rust as a gold standard for type systems, so, wanting to be fair in my assessment, I want to know what the pain points have been for you.

I’ve been keeping in mind an analysis of what is possible in the super-dynamic languages of Ruby and JavaScript while working on features and such, trying to find patterns that just wouldn’t be possible with static typing. I can’t think of much that I’ve seen done via metaprogramming in a Rails app, for example, that wouldn’t be just as well served by compile-time macros or even regular modules and function calls.

Now, I’ve heard the argument that static typing requires more keyboard typing. Or that it’s not worth adding types to small projects. I consider that tradeoff to fall pretty trivially in favor of static typing in any large project. I don’t mind being more explicit in code in order to make it easier on myself later.

What I want to know is if you’ve seen drawbacks to your workflow, the kind of patterns you can express in code, or other issues because of strong typing. What has become more painful, awkward, or contrived?

Discussion of what becomes more pleasant is alright, but I’m mostly looking for drawbacks.


As far as I know there are no definitive studies showing that dynamic or static typing is overall better. These are different approaches, with different trade-offs.

It’s great not ever getting an “undefined is not a function” error, but there is an up-front cost to that. You’ll have to structure programs in a more rigid way. If you have an existing codebase that takes advantage of dynamism (even if unintentionally), and programmers who don’t “think in types”, then there might be an extra cost to switching.

However, of all the objections, “more typing” is probably the weakest one. It used to be a big issue in the old C++ or Java, but as languages with type inference have shown, that was just a “bug” in the design of older languages. Nowadays you can have static types and not type* them all the time (*not a pun).
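
For instance, modern type inference lets most annotations disappear while everything stays statically typed. A small Rust sketch:

```rust
fn main() {
    let xs = vec![1, 2, 3]; // inferred as Vec<i32>, no annotation written
    let doubled: Vec<_> = xs.iter().map(|x| x * 2).collect(); // element type inferred
    assert_eq!(doubled, vec![2, 4, 6]); // still fully type-checked at compile time
}
```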

Some specifics:

  • In JS I keep forgetting to add await when calling async methods, so I end up trying to use a Promise of the result as if it were the result.

    const user = /*await*/ get_user(); // without await, user is a Promise
    if (user) {}                       // always truthy: a Promise is an object
    if (!user.has_email) {}            // always true: a Promise has no has_email field

    Neither line throws an error, but the type mix-up silently changes the meaning of the code and performs nonsense operations.

  • There’s a whole class of errors when your front-end prints [object Object] somewhere.

  • That’s very Rust-specific, but I got used to fn foo() {1} returning 1 in Rust. I keep doing that in JS and get undefined :slight_smile:
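
    For reference, the Rust behavior being described: the final expression of a function body is its return value (though the return type still has to be declared):

    ```rust
    fn foo() -> i32 {
        1 // implicit return: the last expression is the function's value
    }

    fn main() {
        assert_eq!(foo(), 1); // in JS, a body with no `return` yields undefined instead
    }
    ```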

Drawbacks of static typing:

  • It’s mainly about the upfront cost of declaring types and then sticking to them. It is quicker to use duck typing instead of defining structures, interfaces, unions, etc.

  • With powerful type systems there’s no end to how far you can go to guarantee things about your program, but you might create a complex monster.

Both of these are probably harsher in Rust, where you have a more advanced type system and it’s not optional. With TypeScript you have gradual typing, so you still have the option of not using types where it doesn’t make sense.



I’m not an expert in TypeScript or JavaScript.
But I’ve done some TypeScript, and in the beginning I hated it :slight_smile:
But now I love it.

TypeScript cons:

  • can’t run it without a transpiler

TypeScript (+tslint) pros:

  • finds bugs you never saw (fewer bugs)
  • easier to read others’ code
  • easier to refactor
  • since you transpile anyway, you can use pretty much all the new cool stuff

3 projects I did for fun in typescript to learn more:

Hopefully I’ll have more time and start learning some more Rust :slight_smile:
Finally got debugging working on Windows; very useful, it’s easier to see weird bugs when I can see what’s going on.


Anecdotally and only tangentially related to the question, but I think null-safety is a problem mostly orthogonal to static vs dynamic typing. In my personal experience, I’ve seen many more NPEs in Java than in Python, presumably because Python prefers to throw an exception where Java would happily return null. It’s also interesting that Erlang, a dynamic language, does not have null; instead, optional values are represented as tagged values like {ok, Value} or error, so you have to explicitly unwrap the happy case.


Debugging. If you get an exception in a dynamically typed language, you get enough information to know where it came from. With Rust, you only know that something returned an error, and the details can be lost. In Python, I can set a debugger breakpoint wherever I want, and once I’m there, I can introspect everything to figure out the cause. With Rust, it’s not as easy.


Just so you know, debugging works in Rust (on every platform I’ve written it on?), and RUST_BACKTRACE=full exists. It can be annoying to set up breakpoints and other things (hello Windows!), but I’ve never had backtracing fail on me (even if you have 64 threads running it’ll give you the backtrace (well, it’ll spam you)).

# Unix shells:
RUST_BACKTRACE=full cargo run

# Windows (cmd):
set RUST_BACKTRACE=full
cargo run

The details are there, and are super useful. You can infer what’s actually happened even if it’s code outside of your own (e.g. a slice failure from your code, coupled with the error, will tell you what happened in great detail), and provided the external/third-party code isn’t broken, it’ll usually be that you passed in a bad argument. From there, if you can’t set up debugging, or don’t want to, debug prints work.

Type inference has already been brought up, but I think the bigger thing is that even if/when I do have to explicitly type it (and I actually explicitly type it more often than not for complex code), if it’s written I have a visual reminder of what it is.

In Rust, when I use a Vec of something simple (e.g. a primitive, or a two-value struct representing a point or vertex) I often don’t type it, the reason being that the internal type can then be swapped out and it’s a non-issue. But if it’s a piece of code I know I’ll need to come back to, or possibly step through later, I’ll explicitly write Vec<MyType>. (Though usually that only applies to key/value collections; I think I’ve only explicitly typed the Vec type a handful of times in the last year, and most of those were because I hadn’t finished the code when I pushed it into the repository, or I wanted to back up the code, usually due to workflow interruptions.)

I come from a mixture of OOP and functional programming, so I’m more than used to typing things. I actually really dislike code that’s underdocumented, and sometimes a type name is all it takes to make the code easier to follow. This is particularly true of things like Python, Lua, and/or JavaScript, which are used to prototype something that’s later to be rewritten for performance. Porting it later really hurts if you just rely on the system to work out the best route for you (as you may have to go back multiple functions to find the type definition).

I can/will happily write dynamic code when needed, but once you get used to the presence of the information you need being tied to the local declaration, it will grow on you. Couple that with Rust’s type inference and it’s great. For languages without type inference (and there are a lot), it’s a bit painful when you move across from a dynamic language, or something like Rust, but that quickly fades with experience :slight_smile:


Oh man, I do appreciate Ruby’s debugging options, but at the same time, it’s aggravating when the stacktraces don’t even point to the right place in code. This happens fairly often when there’s any sort of metaprogramming going on. I never quite figured out what in particular causes it.

But yeah, debugging with an interpreter is a lot more pleasant than with, say, GDB.


I decided to respond via blog post to share one of my war stories about this:


Elm goes a lot further than TypeScript: it should be almost impossible to have runtime errors if the code compiles.
As you can see here https://medium.freecodecamp.org/a-real-world-comparison-of-front-end-frameworks-with-benchmarks-2018-update-e5760fb4a962 it can get to 3 times the size of some others, and still a lot larger than Angular/React.
Another thing I really like about ClojureScript is that you can keep state and still reload the code; that’s a lot harder to do with a statically typed language.

  • Compile times.

  • It’s true that the things you can do with metaprogramming in dynamic languages can often be replicated in statically typed languages using macros and/or fancy type system features. But – in my experience – implementing that way takes a lot more work. And that tends to dissuade people from using them at all. Some people might call that a benefit…

  • Statically typed languages encourage you to design your data structures top down; dynamically typed languages encourage you to design them bottom up. What I mean is, when you write a struct or class definition –

    • First of all, in most languages it has to have a name. As silly as it sounds, when writing in dynamic languages I often start by stuffing some fields into a dictionary, maybe calling it info, and only once more of the program has been written does the overarching purpose of the data structure reveal itself (at which point I might replace it with a class).
    • After the name, you’re prompted to think of all the fields you might need and what types they should have, which might require defining further structs/classes, and so on. Of course, you don’t have to do this – you could just start with no fields and add them as needed – but adding a field requires flipping between at least two points in the code (where the field is defined and where you’re using it; if there’s a constructor or destructuring, there might be more points), so it’s natural to want to save the effort by figuring it out up front. Anyway, you already gave it a name, so maybe you have an idea what fields it needs?
    • But in dynamically typed languages, adding a field is just a matter of foo.bar = baz, no need for further ceremony.
  • In general, I agree that “being more explicit in code in order to make it easier on myself later” tends to be the best approach for large projects; it’s a worthy tradeoff. But it’s still a tradeoff, which means there is a drawback: increasing the cost of experimentation.

    • On the other hand, experimentation tends to involve heavy refactoring, and the type checker can often help with that by identifying spots you’ve missed. So it’s not a pure win for dynamic typing.


I overall agree with your points, but I feel like I must point this out:

Compile times are largely related to optimizations and such. If you were to compile JS with an optimizing compiler (say, by lowering it to LLVM IR somehow), it would take a good while as well.

Now, whether or not this would take longer than rustc to produce a fast binary… I’m not sure. I’d tend to guess JS+LLVM would take less time anyway, since the optimal output for a random JS project would be much lower-performance than rustc’s. But I know barely anything here, so I might be entirely off :sweat_smile:

But yeah, I just wanted to add to your point about compile times: low compile times usually come at the price of lower performance at runtime :3


I come from several centuries of Perl. :wink:

For me, it’s mostly been:

  • working with unknown or loose data, like parsing a bunch of JSON logfile entries or web-API returns to try and figure out what’s in them, or just passing them through with a few fixups and annotations. Yes, serde_json has a way to do this via its Value enum, but it’s nowhere near as ergonomic as just slurping stuff into a native data structure. Related: numbers that are sometimes “strings” in CSV or JSON or other things made by humans.
  • constructing data, in particular populating or initialising maps of lookup values. This hasn’t been quite as restrictive as I expected, because at least some of the cases where I’d have used a Perl lookup hash are now handled by match blocks. And as I learn more, some more are handled by enums with custom de/serialisations or Display impls, but it’s still more work.
  • collectively: closures, futures & event-loop callbacks vs error handling. Duck typing makes this easy: I can chain Promises and callbacks and pass closures from different libraries together happily (and store them in hash entries for lookup-based actions, as above), whereas in Rust I have spent a lot of time with .map_err() and very long type signatures in errors trying to figure out what’s going on.


One thing I’ve had trouble with in Rust is integration tests of a component relying on IO.

Let’s take as an example the case of testing how a service recovers from an error on its first attempt followed by success. In Ruby, one can easily test in such a manner:
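
The original snippet is missing here, so what follows is only a hedged reconstruction of the idea in plain Ruby, without a mocking library (the fetch!/fetch_with_retry names are illustrative): Ruby’s dynamism lets you stub a method on any object in place, replaying a scripted sequence of responses.

```ruby
# Illustrative stand-in for the missing example: a stub whose fetch! method
# replays a scripted sequence (first a failure, then a success), plus the
# retry logic under test.
service = Object.new
responses = [-> { raise "connection reset" }, -> { "payload" }]
service.define_singleton_method(:fetch!) { responses.shift.call }

def fetch_with_retry(service)
  service.fetch!
rescue StandardError
  service.fetch! # the retry; in this scenario the second response succeeds
end

puts fetch_with_retry(service) # prints "payload"
```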



We’ve mocked not just a single response by the network service, but a sequence. It’s possible to mock in Rust by defining interfaces and dependency injection, and indeed one could even achieve the above by implementing a state machine in the mock, but it’s a lot heavier.
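
For comparison, a minimal sketch of what that trait-plus-dependency-injection approach can look like in Rust (NetworkService, SequenceMock, and fetch_with_retry are illustrative names, not from any particular crate):

```rust
// The interface the real service and the mock both implement.
trait NetworkService {
    fn fetch(&mut self) -> Result<String, String>;
}

// The mock is a small state machine: each call consumes the next scripted response.
struct SequenceMock {
    responses: Vec<Result<String, String>>,
}

impl NetworkService for SequenceMock {
    fn fetch(&mut self) -> Result<String, String> {
        self.responses.remove(0)
    }
}

// The code under test: retry once on failure.
fn fetch_with_retry(service: &mut dyn NetworkService) -> Result<String, String> {
    service.fetch().or_else(|_| service.fetch())
}

fn main() {
    let mut mock = SequenceMock {
        responses: vec![Err("connection reset".to_string()), Ok("payload".to_string())],
    };
    assert_eq!(fetch_with_retry(&mut mock), Ok("payload".to_string()));
}
```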

@ExpHP, I hadn’t even begun to consider going overboard on type system cleverness as a potential pitfall, but that’s certainly something I’m vulnerable to. It’s intriguing that one of the main things you lose with strong, static typing is runtime metaprogramming cleverness, yet one thing you potentially gain is compile-time type system cleverness.


To phrase it another way: programming languages in the same family as JavaScript or Python have one, and only one, data type. Maybe two, if you count the separate “function namespace” and “value namespace” in Erlang and Elixir as separate types, but anyway they don’t allow the language user to define their own.


Which is just one less thing you need to worry about. Sure, having a non-degenerate type system allows you to statically verify things for correctness and optimizations at compile time, but it’s also another feature that the programmer can get wrong and end up boxing themselves in or tangling themselves in type system level spaghetti. Static types are nice when they’re done well, but worse than useless when done wrong.

Polytyped languages [like Rust] have two systems the programmer must create a mental model of in their head. The first is the computation model. This is what happens when code is executed. The second is the type model. This is how the types in the language are used and manipulated by the compiler. In a unityped system there is no type model, and the computation model is the only thing to learn. If it is simple, as in languages such as Lua, this can be really nice.


What he says about the type model and the computation model may be correct, but only up to a point. As far as I understand it, if you have dependent types, the types themselves are as complex and powerful as the computation model.

Also: nice links! They were a good read :3


Lua sells simplicity, but does anyone really know whether they’ve got an array or a table half the time, or an integer or a floating-point number? It’s not supposed to matter… until it does. (e.g. format a non-integer float with %d and you get an error; exceed 64 bits for an integer and it wraps; and what is the length of an array that contains a few nils? (A: it depends on how it was constructed))

Edit: What I’m trying to say is that Lua has a type model you have to learn as well, except it is hidden.


I feel like it’s mostly just shifting where you do the work. In a dynamically typed language like Ruby, you have to do more work to check the types of what you’re getting as input and what you’re returning; otherwise your project will likely end up becoming an unmaintainable, error-prone mess as it gets bigger.

In a statically typed language you have to deal with those type checks explicitly before your code can even run. So you end up with fewer unhandled type errors at runtime, but you pay for it in the time it takes to code and the mental overhead of having to think explicitly about types.


Totally agree. This is exactly how I define productivity. It’s not just about delivering as fast as possible; it’s also about tolerance for after-delivery debugging and failures.

The way I see it:
Despite best intentions and skill, dynamically typed programs carry an inherent risk of runtime errors, which require time and resources to track down, to verify that (for example) nil was interpreted as false when it should not have been, and to fix the code (maybe adding tests). This after-the-fact cost varies and is hard to measure and predict, so I see that people don’t often include it in their estimations of “I’m so productive in X”, but it’s a reality.

However, truth is, most business applications can tolerate shipping fast and handling runtime errors as they come up. I see no fundamental problem with that approach. It’s a write, test, deploy, pray approach, where you hedge your bets that you won’t need to pay much of the after-the-fact cost of runtime errors.

With statically typed programs, and with Rust specifically, you don’t get a choice: you always pay the cost up-front. But you also have a guarantee that everyone on the team has paid that cost too. So you have higher confidence because of stronger guarantees about runtime safety.

It’s up to experienced devs to know which kind of cost, after-the-fact versus up-front, a project should incur. Both have pros and cons, and both define the true productivity of an application. I think it’s a bit disingenuous when devs use time-to-write-a-program as THE productivity measure, because we all know it’s more complicated than that. Conversely, I find it disingenuous when devs argue that statically typed languages make objectively safer programs (e.g., consider unsafe in Rust or Go); we all know it’s more complicated than that.


For me personally, the biggest drawbacks are the time spent to experiment or do some ad-hoc analysis.
I also spend more time designing and/or thinking about how data is structured. For example, do I create a specific type for both centimeters and inches so a calculation is guaranteed to use the right unit? In Ruby I’d just use a float, be done with it, and not spend any time thinking about it. It takes discipline in a statically typed language to know when to take the time to think through examples like the one above. In an ideal world, one might say that we should all think deeply about our programs. But the reality is that thinking about creating centimeter and inch types for a calculation, versus just using a float, is, from a business point of view, a distraction… unless of course it’s a core part of the app. These small distractions can add up; even without writing any code, just knowing you can do those things can really tank focus sometimes.
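
To make the trade-off concrete, here is a hedged Rust sketch of that units example (Centimeters, Inches, and total_length are illustrative names): the newtype wrappers are exactly the kind of small up-front design work being described, in exchange for making mixed-unit arithmetic a compile error.

```rust
// Newtype wrappers: each unit gets its own distinct type.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Centimeters(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Inches(f64);

// Conversion has to be explicit.
impl From<Inches> for Centimeters {
    fn from(i: Inches) -> Self {
        Centimeters(i.0 * 2.54)
    }
}

fn total_length(a: Centimeters, b: Centimeters) -> Centimeters {
    Centimeters(a.0 + b.0)
}

fn main() {
    let shelf = Centimeters(10.0);
    let bracket = Centimeters::from(Inches(2.0));
    // total_length(shelf, Inches(2.0)) would be rejected at compile time.
    let total = total_length(shelf, bracket);
    assert!((total.0 - 15.08).abs() < 1e-9);
}
```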


I’ve been keeping in mind an analysis of what is possible in the super-dynamic languages of Ruby and JavaScript while working on features and such, trying to find patterns that just wouldn’t be possible in Rust (which I’m taking as a standard for a great type system). I can’t think of much that I’ve seen done via metaprogramming in a Rails app, for example, that wouldn’t be just as well served (nay, better) by compile-time macros or even regular modules and function calls.

To be clear, it is quite possible to program in an extremely dynamic style, even in a statically typed language like Rust!

For example, you can create a Vec where each element is either a string or float, and you can then do dynamic type checks to determine whether an element is a string or float:

enum StringOrFloat {
    DString(String),
    DFloat(f64),
}

let x = vec![
    StringOrFloat::DString("hello".to_string()),
    StringOrFloat::DFloat(1.5),
];

match &x[0] {
    StringOrFloat::DString(value) => println!("It's a string! {}", value),
    StringOrFloat::DFloat(value) => println!("It's a float! {}", value),
}

What we’ve done here is create a new static type called StringOrFloat. At runtime a StringOrFloat value is either a DString or a DFloat (which are short for “dynamic string” and “dynamic float”).

We can then put both DString and DFloat inside of the Vec (even though Vec has the restriction that all of its elements must be of the same type!)

And when we extract elements out of the Vec (such as by using x[0]), we can do dynamic runtime checks (using match) to determine whether it’s a DString or DFloat.

There’s no trickery or complicated metaprogramming here: just simple ADTs/enums.

To give a comparison, the above Rust program is equivalent to this JavaScript program:

let x = [
    "hello",
    1.5,
];

let value = x[0];

if (typeof value === "string") {
    console.log("It's a string!", value);
} else {
    console.log("It's a float!", value);
}
As you can see, the Rust program does indeed require more keyboard typing (you have to declare a new StringOrFloat enum, and you need to use StringOrFloat::DString and StringOrFloat::DFloat).

However, fundamentally it is doing the same thing as JavaScript: when you create a StringOrFloat::DString or StringOrFloat::DFloat, it creates a runtime tag which “remembers” whether it’s a DString or DFloat. And then match uses that tag to do runtime checks. This is exactly the same as the type tag which is used by dynamically-typed languages.

So, in principle, anything that dynamically typed languages can do, Rust can do, because Rust has ADTs/enums: Rust functions can accept dynamically typed arguments, and they can also return dynamically typed values. And Rust structs/enums can have fields which are dynamically typed. It just requires some extra work (to declare and use an enum).

Whether it’s convenient or idiomatic is a different question, but at least it’s possible (without needing to write an interpreter).

There’s also even more advanced stuff, such as the Any trait, which can be used to do some pretty crazy things (notice that the is_string function in the example can be called with any 'static Rust type, and it will do a dynamic type check to determine whether that argument is a String or not!)
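
The linked example isn’t reproduced here, but a minimal sketch of that kind of check might look like this (is_string is an illustrative name; Any and its is method are from the standard library):

```rust
use std::any::Any;

// Accepts a reference to any 'static type and checks its concrete type at runtime.
fn is_string(value: &dyn Any) -> bool {
    value.is::<String>()
}

fn main() {
    assert!(is_string(&String::from("hello")));
    assert!(!is_string(&1.5_f64));
}
```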

With that out of the way, let me answer the OP’s question. I’ve been programming in JavaScript for over 12 years (and I’ve used many other dynamically typed languages).

I eventually fell in love with statically typed languages (well, good statically typed languages…) because they let me confidently refactor code. In my opinion this is by far the most important benefit of static typing: doing significant refactoring in a large code-base in a dynamically typed language is possible, but it takes forever and often creates new bugs. That is not the case with (good) statically typed languages.

However, there are downsides of static types:

  • You are required to name your static types, and naming things is hard. So increasing the number of things you need to name is really not great.

  • Static typing strongly pushes you into certain designs, which is usually a good thing, but it sometimes requires a lot of effort to make certain programs fit within those designs.

    For example, a lot of programs are easier to write with some sort of duck-typing. But static typing doesn’t have that, so you have to completely change the design to fit within static typing.

    Another example is certain kinds of programs which are fundamentally dynamic, such as retrieving some JSON from a web server and then parsing it.

    I had to write a Rust program which opens a CSV file and then extracts some data from it. I used some CSV-parsing crates to do the bulk of the work, but it still required some ugly code on my part:

    for result in reader.deserialize() {
        let (character1, character2, winner, _strategy, _prediction, tier,   mode,   odds,   duration, _crowd_favorite, _illuminati_favorite, _date):
            (String,     String,     String, String,    String,      String, String, String, u32,      String,          String,               String) = result?;

    I imagine parsing JSON (with some complex data) will be much worse.

  • Static typing often requires a lot more type casting/type conversion functions. Dynamically typed languages let you delay the conversion until the final moment, whereas statically typed languages often require you to do the conversion immediately. In some cases dynamically typed languages don’t need to do type conversion at all!

  • Static typing is more annoying when you are quickly slapping together a prototype, or you’re half-way through a big refactoring, and the compiler forces you to fix all of the errors, even if you know that the program is fine at runtime (lemme just test my program, dammit!)

  • Static typing forces you to design your program up-front. This is usually a good thing! But it does mean that the initial prototyping stage takes longer, because you can’t just slap things together.

    I find that I spend a lot more time thinking about things with static typing. Usually this thinking is about “how can I make the compiler happy?”. But I don’t mean that in a bad way: the compiler is usually unhappy because my code is wrong! So making the compiler happy is the same as writing correct code.

    Nonetheless, the fact that you are forced to do things correctly does take more time up-front (with big pay-offs in the long-term).

    On the other hand, static typing makes it easier to refactor (even during the prototyping stage!), and the fact that it forces you to design things properly means that it’s actually reasonable to push your prototype into production (which inevitably happens with every language).

  • This isn’t actually a problem with static typing per se, but in my experience statically typed languages take a long time to compile. This does slow down the development experience a lot.

  • This also isn’t actually a problem with static typing per se, but in my experience certain languages (I’m looking at you Haskell and PureScript) have a tendency to go really crazy with the type-level stuff.

    This isn’t always a bad thing: static proofs are great! But the complexity can be very overwhelming, especially if you aren’t used to it yet.

    Thankfully Rust has managed to (mostly) avoid this problem, so its type system remains accessible even to non-functional programmers.

    To be clear, I love Haskell and PureScript, but I feel like they sometimes go a bit too far in their pursuit of purity and correctness at the expense of practicality.

  • I’m not sure how true it is, but to me it “feels” like it’s easier for a beginner to learn a dynamically typed language rather than a statically typed language. This doesn’t matter when you’re experienced, but it is a downside of statically typed languages for beginners.

Those are the ones I can think of off the top of my head.

After I embraced static typing, I actually haven’t had too many problems with it. It does help a lot that the statically typed languages I use (including Rust) have typeclasses and ADTs/enums. Those two features dramatically increase the flexibility of static typing, making it quite enjoyable overall.

Most of my problems with Rust are due to its strict memory model (including references), not really with the static type system itself.

If we’re talking about problems with a specific statically-typed language (as opposed to static typing in general), I could talk for hours about the problems I’ve had with TypeScript…