So much goodness in this release. Struct redefinition combined with Revise.jl makes development much smoother. Package apps are also an amazing (and long awaited) feature!
I can't wait to try out trimming and see how well it actually works in its current experimental instantiation.
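(As an aside, a rough sketch of what the struct-redefinition feature enables at the REPL, assuming 1.12's new behavior; the struct here is made up. On earlier versions the second definition errors unless it is identical.)

julia> struct Params
           lr::Float64
       end

julia> # ... later in the same session, without restarting ...

julia> struct Params
           lr::Float64
           momentum::Float64
       end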
How's the Julia ecosystem these days? I used it for a couple of years in the early days (2013-2016ish) and things initially felt like they were going somewhere, but since then I haven't seen it make much inroads.
Any thoughts from someone more plugged in to the community today?
My company (a hedge fund) has been using Julia for our major data/numeric pipelines for 4 years. It's been great. Very easy to translate math/algorithms into code, lots of syntactical niceties, parallelism/concurrency is easy, macros for the very rare cases you need them. It's easy to get high performance and possible to get extremely high performance.
It does have some well-known issues (like slow startup/compilation time) but if you're using it for long-running data pipelines it's great.
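(Not the parent's code, just an illustration of what "parallelism is easy" tends to mean in practice; expensive_model is a stand-in for whatever per-element work you have.)

using Base.Threads

function score_all!(out::Vector{Float64}, signals)
    # each iteration runs on one of the available Julia threads
    @threads for i in eachindex(signals, out)
        out[i] = expensive_model(signals[i])
    end
    return out
end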
What kind of library stack do you use? Julia has lots of interesting niche libraries for online inference, e.g. Gen.jl, which can be quite relevant for a hedge fund.
If you can't talk about library stacks, it'd be at least interesting to hear your thoughts about how you minimize memory allocation.
Very little actually, we try to minimize dependencies, especially for the core inference engine. We just have some basic stuff like Statistics and LinearAlgebra. We use a lot more libraries for offline analysis, but even there it's just popular stuff like DataFrames combined with our own code.
We control memory allocation the boring, manual way – we preallocate all our arrays, and then just modify them, so that we have very little continued allocation in production.
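(A minimal sketch of that preallocation pattern, not their actual code: allocate buffers once, then mutate them with in-place and dotted operations so the hot loop allocates nothing.)

using LinearAlgebra

struct Buffers
    y::Vector{Float64}
    z::Vector{Float64}
end

Buffers(n) = Buffers(zeros(n), zeros(n))

function step!(buf::Buffers, A::Matrix{Float64}, x::Vector{Float64})
    mul!(buf.y, A, x)          # in-place matrix-vector product, no allocation
    buf.z .= max.(buf.y, 0.0)  # broadcast into the existing array
    return buf.z
end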
In my experience starting with Julia in 2025, the main thing missing from the ecosystem tends to be boring glue-type packages, like a production-grade gRPC client/server. I heard HTTP.jl is also slow, but I haven't sufficiently dug into this myself. At least we have an excellent ProtoBuf implementation, so you can roll your own performant RPC protocol.
As for the actual numerical stuff, I tend to roll my own implementations of most algorithms to better control the relevant tradeoffs. There are sometimes issues where a particular algorithm is implemented by a Julia package but has performance issues or bugs in edge cases. For example, in my testing I wasn't able to get ImageContrastAdjustment's CLAHE to run very fast, and it throws an exception on an image of all zeros. You also can't easily call the OpenCV version, as CLAHE is implemented in OpenCV via an object that doesn't have a binding available in Julia. After not getting anywhere within the ecosystem I just wrote my own optimized CLAHE implementation in Julia, which I'm very happy with; this is truly where Julia shines. It's worth noting, however, that there are many excellent packages to build on, such as InterprocessCommunication, ResumableFunctions, StaticArrays, ThreadPinning, Makie, and more. If you don't mind filling in some gaps here and there, it's completely serviceable.
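(Nothing like a full CLAHE implementation, but a flavor of the kind of tight, allocation-free kernel that is pleasant to hand-roll in Julia: a simple global contrast stretch with a guard for the all-constant case mentioned above.)

function stretch_contrast!(out, img)
    lo, hi = extrema(img)
    denom = hi > lo ? hi - lo : one(hi)   # guard the all-constant (e.g. all-zeros) case
    @inbounds @simd for i in eachindex(out, img)
        out[i] = (img[i] - lo) / denom
    end
    return out
end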
As for the core language and runtime, we are deploying a Julia service to production next release and haven't had any stability/GC/runtime issues after a fairly extensive testing period. Replacing the Python code gave us a ~40% speedup, and improvements to numerical precision led to measurably better predictions. Development with Revise takes some getting used to, but once you're familiar with it you will miss it in other languages. All in all, the language feels like it's in a good place and is only getting better. I'd like to eventually contribute back to help with some of the ecosystem gaps that affected me.
Disclaimer: I am not plugged into the community.
The other day that old article "Why I no longer recommend Julia" got passed around. On the very same day I encountered my own bug in the Julia ecosystem, in JuliaFormatter, that silently poisoned my results. I went to the GitHub issues and someone else had encountered it on the same day. I'm sure they will fix it (they haven't yet; JuliaFormatter at this very moment is a subtle codebase-destroyer), but as a newcomer to the ecosystem I am not prepared to judge which bog-standard packages can be trusted and which cannot. As an experiment I switched to R, and the language is absolute filth compared to Julia, but I haven't seen anyone complain about bugs (the opposite, in fact), and the packages install fast without my needing to ship prebuilt sysimages like I do in Julia. Those are the only two good things about R, but they're really important.
I think Julia will get there once it has had more time in the oven for everything to stabilize and become battle-hardened, and then Julia will be a force to be reckoned with. An actually good language for analysis! Amazing!
just to be fair, the very first words in the README for JuliaFormatter are a warning that v2 is broken and users should stick to v1. so it is not a "subtle" codebase-destroyer so much as a "loud" codebase-destroyer.
That's fair, and my bug was in 2.x, but it doesn't really make me feel better. If anything, I feel worse knowing this is OffsetArrays again--the ecosystem made cross-cutting changes that it doesn't have the manpower to absorb across the board, so everything is just buggy everywhere as a result. This is now a pattern.
The codebase destruction warning was not super loud, though. Obviously I missed it despite using JuliaFormatter constantly. It doesn't get printed when you install the package nor when you use it. It's not on the docs webpage for JuliaFormatter. 2.x is still the version you get when you install JuliaFormatter without specifying a version. The disclaimer is only in the GitHub readme, and I was reading the docs. What other packages have disclaimers that I'm not seeing because I'm "only" reading the user documentation and not the GitHub developer readme?
> so everything is just buggy everywhere as a result
I don't think this is an accurate summary. the bug here is that JuliaFormatter should put a <=1.9 compatibility bound in its Project.toml if it isn't correct with JuliaSyntax.jl
OffsetArrays was different because it exposed a bunch of buggy and common code patterns that relied on (incorrect) assumptions about the array interface.
You're purposefully being disingenuous. The README says "If you're having issues with v2 outputs use the latest v1". That's a big "if". If it's not ready for production use, say so explicitly in the README, not "maybe use it, but maybe don't".
Going well, regardless of the regular doom and gloom comments on HN.
https://juliahub.com/case-studies
One of those case studies is me at my former company. We ended up moving away from Julia.
Because of Julia flaws, or management decisions completely unrelated to the tooling?
We were a startup and I was the "management", but it was mostly for HR reasons. The original dev who convinced me to try Julia ended up leaving, and when we pivoted to a new niche that required a rethinking of the codebase, we took the opportunity to rewrite in C# (mostly because we _needed_ C# to develop a plugin, and it would simplify things if everything was C#).
Thanks for the clarification.
There's only one case study from 2025, though.
"Keep it steady and slow"
Python started in 1989; it also took its time.
It's not steady, though, when there are 9 projects in 2023, 5 projects in 2024, and 1 project in 2025 (so far). Maybe steady, but a steady decrease. I don't want to exaggerate the importance of the case study count, but overall it's not that promising.
Are you using Julia?
for many types of scientific computing, there's a case to be made it is the best language available. often this type of computing would be in scientific/engineering organizations and not in most software companies. this is its best niche, an important one, but not visible to people with SWE jobs making most software.
it can be used for deep learning but you probably shouldn't, currently, except as a small piece of a large problem where you want Julia for other reasons (e.g. scientific machine learning). They do keep improving this and it will probably be great eventually.
i don't know what the experience is like using it for traditional data science tasks. the plotting libraries are actually pretty nicely designed and no longer have horrible compilation delays.
people who like type systems tend to dislike Julia's type system.
they still have the problem of important packages being maintained by PhD students who graduate and disappear.
as a language it promises a lot and mostly delivers, but those compromises where it can't deliver can be really frustrating. this also produces a social dynamic of disillusioned former true believers.
> people who like type systems tend to dislike Julia's type system.
This is true. As far as I understand it, there is not a type-theory basis for Julia's design (type theory seems to have little to say about subtyping type lattices). Relatedly, another comment mentioned that Julia needs sum types.
It is the same type theory that has powered Common Lisp and Dylan.
We're not using "type theory" the same way, I think. I'm thinking in terms of
- simply typed lambda calculus
- System F
- dependent type theory (MLTT)
- linear types
- row types
- and so on
But it's subtle to talk about. It's not like there is a single type theory that underlies TypeScript or Rust, either. These practical languages have partial (and somewhat post-hoc) formalizations of their systems.
For starters,
"On the use of LISP in implementing denotational semantics"
https://dl.acm.org/doi/10.1145/319838.319866
Type theory in CS isn't synonymous with whatever Haskell happens to do.
I do wonder in particular about the startup time "time-to-plot" issue. I last used Julia about 2021-ish to develop some signal processing code, and restarting the entire application could have easily taken tens of seconds. Both static precompilation and hot reloading were in early development and did not really work well at the time.
On a 5 year old i5-8600, with Samsung PM871b SSD:
$ time julia -e "exit"
real 0m0.156s
user 0m0.096s
sys 0m0.100s
$ time julia -e "using Plots"
real 0m1.219s
user 0m0.981s
sys 0m0.408s
$ time julia -e "using Plots; display(plot(rand(10)))"
real 0m1.581s
user 0m1.160s
sys 0m0.400s
Not a super fair test since everything was already hot in the I/O cache, but it still shows how much things have improved.
That was fixed in 1.9. Indeed, it makes a huge difference now that you can quickly run for the first time.
This was absolutely not "fixed" in 1.9, what are you talking about. It was improved in 1.9, but that's it. Startup time is still unacceptably slow - still tens of seconds for large codebases.
Worse, there are still way too many compilation traps. Splatted a large collection into a function? Compiler chokes. Your code accidentally moves a value from the value domain to the type domain? You end up with millions of new types and the compiler chokes. Accidentally pirate a method? Huge latency. Choose to write type-unstable code? Invalidations tank your latency.
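(A concrete example of the value-to-type-domain trap, with made-up names: each distinct runtime value of n mints a new Val{n} type, so the kernel is recompiled per value.)

kernel(::Val{N}) where {N} = N * N

function sum_kernels(ns)
    s = 0
    for n in ns
        s += kernel(Val(n))   # runtime value pushed into the type domain:
    end                       # dynamic dispatch plus one compilation per distinct n
    return s
end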
> what are you talking about.
It's the most annoying thing about HN that people will regularly declare/proclaim something like this as if it's a Nobel-prize-winning discovery when it's actually just an incremental improvement. I have no idea how this works in these people's lives - aren't we all SWEs, where the specifics actually matter? My hypothesis is these people are just really bad SWEs.
On a Mac mini (i.e. fast RAM), time to display:
- Plots.jl, 1.4 seconds (including package loading)
- CairoMakie.jl, 4 seconds (including package loading)
julia> @time @eval (using Plots; display(plot(rand(3))))
My shop just moved back to Julia for digital signal processing and it’s accelerated development considerably over our old but mature internal C++ ecosystem.
Mine did the same for image processing, but coming from python/numpy/numba. We initially looked at using Rust or C++, but I'm glad we chose to stick it out with Julia despite some initial setbacks. Numerical code flows and reads so nicely in Julia. It's also awesome seeing the core language continuously improve so much.
Can you elaborate on what libraries, platform, and tooling you use? How do you deploy it?
StaticCompiler.jl is the main workhorse.
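(Roughly the canonical StaticCompiler.jl hello-world, for anyone curious what "the main workhorse" looks like in use; treat it as a sketch, since the constraints on what can be compiled this way are significant.)

using StaticCompiler, StaticTools

hello() = println(c"Hello from a standalone binary")   # c"..." is a StaticTools string

# emits a small self-contained executable in the current directory
compile_executable(hello, (), "./")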
I'm excited to see `--trim` finally make it, but it only works when all code reachable from the entry points is statically inferable. In any non-toy Julia program that's not going to be the case. Julia sorely needs a static mode and a static analyzer that can check for correctness. It also needs better sum type support and better error messages (static and runtime).
In 2020, I thought Julia would be _the_ language to use in 2025. Today I think that won't happen until 2030, if even then. The community is growing too slowly, core packages have extremely few maintainers, and Python and Rust are sucking the air out of the room. This talk at JuliaCon was a good summary of how developers found themselves so much more productive in Rust than in Julia that they switched away from Julia:
https://www.youtube.com/watch?v=gspuMS1hSQo
Which is pretty telling. It takes overcoming a certain inertia to move away from any language.
Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
I don't think Julia was designed for minimal-overhead projects in memory-constrained environments, or for squeezing out that last 2% of hardware performance to cut costs, the way C++, Rust or Zig are.
Julia is the language to use in 2025 if what you’re looking for is a JIT-compiled, multiple-dispatch language that lets you write high-performance technical computing code to run on a cluster or on your laptop for quick experimentation, while also being metaprogrammable and highly interactive, whether for modelling, simulation, optimisation, image processing etc.
actually I think it sort of was. I remember Berkeley squeezing a ton of perf out of their Cray for a crazy task, because it was easy to specialize some wild semi-sparse matrix computations onto an architecture with strange memory/cache bottlenecks while being guaranteed that the results were still okay.
Telling what? Did you actually listen to the talk that you linked to, or read the top comment there by Chris Rackauckas?
> Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
See his last slide. And no, they didn't replace their Julia use in its entirety with Rust, despite his organization being a Rust shop. Considering Rust as a replacement for Julia makes as much sense to me as considering C as a replacement for Mathematica; Julia and Mathematica are domain-specific (scientific computation) languages, not general systems programming languages.
Neither Julia nor Mathematica is a good fit for embedded device programming.
I also find it amusing how you criticize Julia while praising Python (which was originally a "toy" scripting language succeeding ABC, but found some accidental "gaps" to fit in historically) within the narrative that you built.
> In any non-toy Julia program that's not going to be the case.
Why?
> Telling what? Did you actually listen to the talk that you linked to, or read the top comment there by Chris Rackauckas?
To clarify exactly where I'm coming from, I'm going to expand on my thoughts here.
What is Julia's central conceit? It aims to solve "the two language" problem, i.e. the problem where prototyping or rapid development is done in a dynamic and interactive language like Python or MATLAB, and then moved for production to a faster and less flexible language like Rust or C++.
This is exactly what the speaker in the talk addresses. They are still using Julia for prototyping, but their production use of Julia was replaced with Rust. I've heard several more anecdotal stories of exactly the same thing occurring. Here's another high-profile instance of Julia not making it to production:
https://discourse.julialang.org/t/julia-used-to-prototype-wh...
Julia is failing at its core conceit.
Julia as a community has to start thinking about what makes a language successful in production.
Quote from the talk:
> "(developers) really love writing Rust ... and I get where they are coming from, especially around the tooling."
Julia's tooling is ... just not good. Try working on a several-hundred-thousand-line project in Julia and it is painful for so many reasons.
If you don't have a REPL open all the time with the state of your program loaded in the REPL and in your head, Julia becomes painful to work in. The language server crashes all the time, completion is slow, linting has so many false positives, TDD is barebones etc. It's far too easy to write type unstable code. And the worst part is you can write code that you think is type stable, but with a minor refactor your performance can just completely tank. Optimizing for maintaining Julia code over a long period of time with a team just feels futile.
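(An example of the kind of silent type instability being described, with toy functions: a one-character change to the accumulator's initial value is enough to give inference a Union type, and @code_warntype is how you catch it.)

function mean_stable(xs::Vector{Float64})
    s = 0.0                       # Float64 accumulator: inferred types stay concrete
    for x in xs
        s += x
    end
    return s / length(xs)
end

function mean_refactored(xs::Vector{Float64})
    s = 0                         # Int literal: s becomes Union{Int, Float64} in the loop
    for x in xs
        s += x
    end
    return s / length(xs)
end

# @code_warntype mean_refactored(rand(100))   # flags the Union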
That said, is Python perfect? Absolutely not. There's so many things I wish were different.
But Python was designed (or at the very least evolved) to be a glue language. Being able to write user friendly interfaces to performant C or C++ code was the reason the language took off the way it did.
And the Python language keeps evolving to make it easier to write correct Python code. Type hinting is awesome, and Python has much better error messages (static and runtime). I'm far more productive prototyping in Python, even if executing code is slower. When I want to make it fast, it is almost trivial to use PyO3 with Rust to speed up what I need. Rust is starting to build up packages used for scientific computing. There are also Numba and Cython, which are pretty awesome and have gotten me out of a pickle more than once.
As a glue language Python is amazing. And jumping into a million line project still feels practical (Julia's `include` feature alone would prevent this from being tenable). The community is growing still, and projects like `uv` and `ty` are only going to make Python proliferate more.
I do think Julia is ideal for an individual researcher, where one person can keep every line of code in their head and for code that is written to be thrown away. But I'm certainly not betting the near future on this language.
Python has a useful and rich ecosystem that grows every day. Julia is mostly a pile of broken promises (it neither reads like Python nor runs like C, at least not without significant effort to produce curated benchmarks) and desperate hype generators.
Since you have a rosy picture of Python, I assume you're young. Python was mostly a fringe/toy language for two decades, until around 2010, when a Python fad started, not too different from the Rust fad of today, and at some point Google started using it seriously and thought they could fix Python but eventually gave up. The fad lived on and kept evolving and somehow found some popularity with SciPy and then ML. I used it a little in the 90s, and I found the language bad for anything other than replacing simple bash scripts, simple desktop applications, or a desktop calculator, and I still think it is (but sure, there are people who disagree and think it is a good language). It was slow and didn't have a type system, you didn't know whether your code would crash or not until you ran that line of code, and the correctness of your program depended on invisible characters.
"Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing, which is unsurprising because that's not what it was designed for.
It is ironic you bring up hype to criticize Julia while praising Python which found popularity thanks to hype rather than technical merit.
What promise are you referring to? Who promised you what? It's a programming language.
> "Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing
Doesn't matter. Languages do not matter, ecosystems do, for they determine what is practically achievable.
And it doesn't matter that the Python ecosystem relies on huge amounts of C/C++ code. Python people made the effort to wrap this code, document it, and maintain those wrappers. Other people use such code through Python APIs. Yes, every language with an FFI could do the same. For some reason none achieved that.
Even people using Julia use PythonCall.jl, that's how much Python is unsuitable.
> What promise are you referring to? Who promised you what? It's a programming language.
Acting dumb is a poor rhetorical strategy, and it ignores such nice rhetorical advice as the principle of charity - it is quite obvious that I didn't mean that a programming language made any promise. Making a promise is something that only people can do. And Julia's creators and the people promoting it made quite bombastic claims throughout the years that turned out not to have much support in reality.
I leave your assumptions about my age or other properties to you.
Ecosystems matter, but runtimes do as well. Take Java, for instance. It didn't have to wrap C/C++ libraries, yet it became synonymous with anything data-intensive, from Apache Hadoop to Flink, from Kafka to Pulsar. Sure, this is mostly ETL, streaming, and databases rather than numeric or scientific computing, but it shows that a language plus a strong ecosystem can drive a movement.
This is why I see Julia as the Java for technical computing. It’s tackling a domain that’s more numeric and math-heavy, not your average data pipeline, and while it hasn’t yet reached the same breadth as Python, the potential is there. Hopefully, over time, its ecosystem will blossom in the same way.
If what determines the value of a language is its libraries (which makes no sense to me at all, but let's play your game), then it is one more argument against Python.
You don't need FFI to use a Fortran library from Fortran, and I (and many physicists) have found Fortran better suited to HPC than Python since... the day Python came into existence. And no, many other scripting languages have wrappers, and no, scientific computing is not restricted to ML, which is the only area where Python can be argued to have the most wrapper libraries to external code.
Language matters, and the two-language problem is a real problem, and you can't make it go away by closing your ears and chanting "doesn't matter! doesn't matter!"
Julia is a real step toward solving this problem, and it allows you to interact with libraries/packages in ways that are not possible in Python + Fortran + C/C++ + others. You are free to keep pretending that the problem doesn't exist.
You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for a source, you cry foul and sadly try to appear smart by saying "you're acting dumb". You should take your own advice and, instead of "acting dumb", explicitly cite what "promises" or "bombastic claims" you are referring to. This is what I asked you to do, but instead of doing it, you are doing what you are doing, which is interesting.
> If what determines the value of a language is its libraries (which makes no sense to me at all, but let's play your game), then it is one more argument against Python
The fact that you can use those nice numerical and scientific libraries from a language that also has a tremendous number of nice libraries from other domains, wide and good IDE support, and is very well documented, with countless tutorials and books available... is an argument against that language? Because you can easily use Fortran code in Fortran?
Nice.
> You don't need FFI to use a Fortran library from Fortran
Wow. Didn't know that.
> And no, many other scripting languages have wrappers,
Always less complete, less documented, with fewer teaching materials available, etc.
But sure, many other languages have wrappers. Julia, for example, wraps the Python API.
> and no, scientific computing is not restricted to ML
Never said it is. I don't do ML, by the way.
> You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for source, you cry foul
Yeah, yeah. My claims about marketing like "Julia writes like Python, runs like C" are hyperbolic and require explicit citation, even though everyone who has had any exposure to this language knows these and similar catch-phrases.
Look, you like Julia, good for you. Have fun with it.
in the early aughts educators loved the shit out of python because "it forced kids to organize their code with indentation". This was about a decade before formatting linters became low key required for languages.
These are exactly the feelings that I left with from the community in ~2021 (along with the AD story, which never really materialized _within_ Julia - Enzyme had to come from outside Julia to “save it” - or materialized in a way (Zygote) whose compilation times were absolutely unacceptable compared to competitors like JAX)
More and more over time, I’ve begun to think that the method JIT architecture is a mistake, that subtyping is a mistake.
Subtyping makes abundant sense when paired with multiple dispatch — so perhaps my qualms are not precise there … but it also seems like several designs for static interfaces have sort of bounced off the type system. Not sure, and can’t defend my claims very well.
Julia has much right, but a few things feel wrong in ways that spiral up to the limitations in features like this one.
Anyways, excited to check back next year to see myself proven wrong.
I basically agree about subtyping (but not about multiple dispatch). More importantly, I think it's important to recognize that Julia has a niche where literally nothing else competes: interactive, dynamic, and high-performance.
Like, what exactly is the alternative? Python? Too slow. Static languages? Unusable for interactive exploration and data science.
That leaves you with hybrids, like Python/Cython, or Python/Rust or Numba, but taken on their own term, these are absolutely terrible languages. Python/Rust is not safe (due to FFI), certainly not pleasant to develop in, and no matter how you cut your code between the languages, you always lose. You always want your Python part to be in Rust so you get static analysis, safety and speed. You always want your Rust part to be in Python, so you can experiment with it easier and introspect.
I think multiple dispatch (useful as it is) is a little overrated. There's a significant portion of the time where I know I have a closed set of cases to cover, and an enum type with a match-like syntax would have worked better for that. For interfaces, multiple dispatch is good but again I would have preferred a trait based approach with static type checking.
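(To make the "closed set of cases" point concrete, with invented types: dispatch handles this fine, but nothing checks exhaustiveness the way a match on a sum type would, so a forgotten case only surfaces as a runtime MethodError.)

abstract type Order end
struct Market <: Order end
struct Limit <: Order
    price::Float64
end

# multiple-dispatch style: open to extension, but no exhaustiveness checking
fill_price(::Market, px) = px
fill_price(o::Limit, px) = min(o.price, px)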
I largely think multiple dispatch works well in Julia, and it enables writing performant code in an elegant manner. I mostly have smaller gripes about subtyping and the patterns it encourages with multiple dispatch in Julia, and larger gripes about the lack of tooling in Julia.
But multiple dispatch is also a hammer that makes every problem in Julia look like a nail. And there isn't enough discussion, official or community-driven, that expands on this. In my experience, the average developer new to Julia tends to reach for multiple dispatch without understanding why, mostly because people keep saying it is the best thing since sliced bread.
With regard to hybrid languages, honestly, I think Python/Cython is extremely underrated. Sure, you can design an entirely new language like Mojo or Julia, but IMO it offers only incremental value over Python/Cython. I would love to peek into another universe where all that money, time, and effort for Mojo and Julia went to Cython instead.
And I personally don't think Python/Rust is as bad. With a little discipline (and some tests), you can ensure your boundary is safe, for you and your team. Rust offers so much value that I would take on the pain of going through FFI. PyO3 simplifies this significantly. The development of `polars` is a good case study for how Rust empowers Python.
I think the Julia community could use some reflection on why it hasn't produced the next `polars`. My personal experience with Julia developers (both in person and online) is that they often believe multiple dispatch is so compelling that anyone who "saw the light" would naturally flock to Julia.
Instead, I think the real challenge is meeting users where they are and addressing their needs directly. The fastest way to grow Julia as a language is to tag along with Python's success.
Would I prefer a single language that solves all my problems? Yes. But that single language is not Julia, yet, for me.
Mojo has a different scope than Julia and Python; it targets inference workloads.
Polars is a dataframe library.
Yes, it features vectorized operations, but it is focused on columnar data manipulation, not numerical algorithm development. I might say that this is narrow framing: people are looking at Julia through the lens of a data scientist and not of an engineer or computational scientist.
There is no "_the_ language to use". I always pick the language based on the project delivery requirements, like what has tier 1 support on a specific SDK; I don't pick the project based on the language.
Python predates Julia by 3 decades. In many ways Julia is a response to Python's shortcomings. Julia could've never taken off "instead of" python but it clearly hopes to become the mature and more performant alternative eventually
Some small additional details: 23 years not 30. Also, I think Julia was started as much in response to Octave/Matlab’s shortcomings. I don’t know if it is written down, but I was told a big impetus was that Edelman had just sold his star-p company to Microsoft, and star-p was based around octave/matlab.
When Julia came out, neither Python nor data science and ML had the popularity they have today. Even 7-8 years ago people were still having Python vs R debates.
In 2012, python was already well-established in ML, though not as dominant as it is today. scikit-learn was already well-established and Theano was pretty popular. Most of the top entries on Kaggle were C++ or Python.
Julia only came onto my radar in scientific computing (not ML) in about 2015-2016, but while I tried it at the time, it was really not very stable, and my view was that it was very immature compared to Python's scientific ecosystem. Looking at the dates, v1.0 came out in 2018, and I remember going to a talk about it at my academic institution where someone showed off the progress. We had a play again in our research group, but it still didn't have many things we needed, and the trade-offs didn't feel great: we were heavy users of IPython, and then of Jupyter once it came out, and while the "Ju" stood for Julia, the kernel development environment didn't work so well because rerunning cells could often cause errors if you'd changed a type, for example.
At the time we were part of the wave, I suppose, that was trying to convince people that open-source Python was a better prospect than MATLAB, which was where many people in physics/engineering were for interpreted languages. At least in my view, it wasn't until much more recently that Julia became a workable alternative to those, regardless of the performance benefits (which were largely attainable in Python and MATLAB anyway - and for us at least, we were happy developing extension modules in C for the flexibility the Python interface gave us on top).
Wow, there are so many amazing practical improvements in this release. It's better at both interactive use _and_ ahead-of-time compilation. Workspaces and apps and trimmed binaries are massive - letting it easily do things normally done in other languages. It will be interesting to see what "traditional" desktop software comes out of that (CLI tools? GUI apps?).
> For example, the all-inference benchmarks improve by about 10%, an LLVM-heavy workload shows a similar ~10% gain, and building corecompiler.ji improves by 13–16% with BOLT. When combined with PGO and LTO, total improvements of up to ~23% have been observed.
> To build a BOLT-optimized Julia, run the following commands
Is BOLT the default build (e.g. fetched by juliaup) on the supported Linux x86_64 and aarch64? I'm assuming not, based on the wording here, but I'm interested in what the blocker is and whether there are plans to make it part of the default build process. Is it considered as yet immature? Are there other downsides to it than the harmless warnings the post mentions?
BOLT isn't on by default. The main problem is that no one has tested it much (because you can only get it by building your own Julia). We should try distributing BOLT by default. It should just work...
This is it. Anyone who's anyone has been waiting for the 1.12 release and its (admittedly experimental) juliac compiler with the --trim feature. This will allow you to create small, redistributable binaries.
Honest take: yes, it's not ready. When I tried it, the generated binary crashed.
For what it's worth, I am able to generate a non-small (>1 GiB) binary with 1.11 that runs on other people's machines. Not shipping in production, but it could be if you're willing to put in the effort. So in a sense, PackageCompiler.jl is all you need. ;)
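(A minimal sketch of the PackageCompiler.jl route mentioned above; "MyApp" is a placeholder project, and the output bundle is large because it carries the Julia runtime and sysimage.)

using PackageCompiler

# compiles the project at ./MyApp into a relocatable bundle at ./MyAppCompiled,
# with bin/MyApp runnable on machines that have no Julia installation
create_app("MyApp", "MyAppCompiled"; force = true)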
This is a fantastic release, been looking forward to --trim since the 2024 JuliaCon presentation. All of the other features look like fantastic QoL additions too - especially redefinition of structs and the introduction of apps.
We use it for binary deployments via building simulations into the FMI standard (i.e. building FMUs) https://fmi-standard.org/. We still need to get libraries updated for the smallest possible trim when using the more computationally difficult implicit methods, but at least the stack has matured for simple methods like explicit RK and Rosenbrock-type solvers already. For folks interested in --trim on SciML, the PR to watch is https://github.com/SciML/NonlinearSolve.jl/pull/665 which is currently held up by https://github.com/SciML/SciMLBase.jl/pull/1074 which is the last remaining nugget to get the vast majority of the SciML solver set trimming well. So hopefully very soon for anyone interested in this part of the world. Note that this does not include inverse problems in the binary as part of a small trim.
But all of this is more about maturing the ecosystem to be more amenable to static compilation and analysis. The whole SciML stack had an initiative starting at the beginning of this summer to add JET.jl testing for type inference everywhere and enforcing this to pass as part of standard unit tests, and using AllocCheck.jl for static allocation-free testing of inner solver loops. With this we have been growing the surface of tools that have static and real-time guarantees. Not done yet, some had to be marked as `@test_broken` for having some branch that can allocate if condition number hits a numerical fallback and such, but generally it's getting locked down. Think of it as "prototype in Julia, deploy in Rust", except instead of re-writing into Rust we're just locking down the behavior with incrementally enforcing the package internals to satisfy more and more static guarantees.
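(Roughly the shape of the tests being described, with a toy kernel; the real SciML test suites are much more involved.)

using Test, JET, AllocCheck

rhs!(du, u) = (du .= 2 .* u; nothing)

@testset "static guarantees" begin
    # JET: fail if inference finds dynamic dispatch or likely errors
    @test_opt rhs!(zeros(8), ones(8))
    # AllocCheck: fail if the compiled method contains any allocation sites
    @test isempty(check_allocs(rhs!, (Vector{Float64}, Vector{Float64})))
end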
I've tried it on some of my julia code. The lack of support for dynamic dispatch severely limits the use of existing libraries. I spent a couple days pruning out dependencies that caused problems, before hitting some that I decided would be more effort to re-implement than I wanted to spend.
So for now we will continue rewriting code that needs to run on small systems rather than deploy the entire julia environment, but I am excited about the progress that has been made in creating standalone executables, and can't wait to see what the next release holds.
it works well --- IF your code is already written in a manner amenable to static analysis. if your coding style is highly dynamic it will probably be difficult to use this feature for the time being (although UX should of course improve over time)
v1.13 already has a good number of improvements to trim in the less analyzable cases, but even then I think the best thing is for the package ecosystem to become really good for trim, JETLS, etc., by statically checking things.
I think Julia missed the boat with Python totally dominating the AI area.
Which is a shame, because now Python has all the same problems with the long startup time. On my computer, it takes almost 15 seconds just to import all the machine-learning libraries. And I have to do that on every app relaunch.
Waiting 15+ seconds to test small changes to my PyTorch training code on NFS is rather annoying. I know there are ways to work around it, but sometimes I wish we could have a training workflow similar to how Revise works. Make changes to the code, Revise patches it, then run it via a REPL on the main node. Not sure if Revise actually works in a distributed context, but that would be amazing if it did. No need to start/fork a million new Python processes every single time.
Of course I would also rather be doing all of the above in Julia instead of Python ;)
Revise can work on your server for hot reloading if you need it - you copy your new code files in place over the old ones.
Of course there are caveats - it won't update actively running code, but if your code is structured reasonably and you are aware of Revise's API and the very basics of Julia's world age, you can do it pretty easily IME.
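(A minimal version of that workflow, with placeholder file and function names: includet is Revise's "tracked include", so edits to the file, or a new file copied over it, are picked up on the next call.)

using Revise

includet("train.jl")        # track the file instead of plain include

run_training(config)        # first run

# ... edit train.jl on disk, or copy a new version over it ...

run_training(config)        # Revise has already patched the changed methods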
So much goodness in this release. Struct redefinition combined with Revise.jl makes development much smoother. Package apps are also an amazing (and long awaited) feature!
I can't wait to try out trimming and see how well it actually works in its current experimental instantiation.
How's the Julia ecosystem these days? I used it for a couple of years in the early days (2013-2016ish) and things initially felt like they were going somewhere, but since then I haven't seen it make much inroads.
Any thoughts from someone more plugged in to the community today?
My company (a hedge fund) has been using Julia for our major data/numeric pipelines for 4 years. It's been great. Very easy to translate math/algorithms into code, lots of syntactical niceties, parallelism/concurrency is easy, macros for the very rare cases you need them. It's easy to get high performance and possible to get extremely high performance.
It does have some well-known issues (like slow startup/compilation time) but if you're using it for long-running data pipelines it's great.
What kind of library stack do you use? Julia has lots of interesting niche libraries for online inference, e.g. Gen.jl, which can be quite relevant for a hedge fund.
If you can't talk about library stacks, it'd be at least interesting to hear your thoughts about how you minimize memory allocation.
Very little actually, we try to minimize dependencies, especially for the core inference engine. We just have some basic stuff like Statistics and LinearAlgebra. We use a lot more libraries for offline analysis, but even there it's just popular stuff like DataFrames combined with our own code.
We control memory allocation the boring, manual way – we preallocate all our arrays, and then just modify them, so that we have very little continued allocation in production.
In my experience starting with Julia in 2025, the main thing missing from the ecosystem tends to be boring glue type packages, like a production grade gRPC client/server. I heard HTTP.jl is also slow, but I havn't sufficiently dug into this myself. At least we have an excellent ProtoBuf implementation so you can roll your own performant RPC protocol.
As for the actual numerical stuff I tend to roll my own implementations of most algorithms to better control relevant tradeoffs. There are sometimes issues where a particular algorithm is implemented by a Julia package, but has performance issues / bugs in edge cases. For example, in my testing I wasn't able to get ImageContrastAdjustment CLAHE to run very fast and it had an issue where it throws an exception with an image of all zeros. You also can't easily call the OpenCV version as CLAHE is implemented in OpenCV using an object which doesn't have a binding available in Julia. After not getting anywhere within the ecosystem I just wrote my own optimized CLAHE implementation in Julia which I'm very happy with, this is truly where Julia shines. It's worth noting however that there are many excellent packages to build on such as InterprocessCommunication, ResumableFunctions, StaticArrays, ThreadPinning, Makie, and more. If you don't mind filling in some gaps here and there its completely serviceable.
As for the core language and runtime we are deploying a Julia service to production next release and haven't had any stability/GC/runtime issues after a fairly extensive testing period. All of the Python code we replaced led to a ~40% speedup while improvements to numerical precision led to measurably improved predictions. Development with Revise takes some getting used to but once you get familiar with it you will miss it in other languages. All in all it feels like the language is in a good place currently and is only getting better. I'd like to eventually contribute back to help with some of the ecosystem gaps that impacted me.
Disclaimer: I am not plugged into the community.
The other day that old article "Why I no longer recommend Julia" got passed around. On the very same day I encountered my own bug in the Julia ecosystem, in JuliaFormatter, that silently poisoned my results. I went to the GitHub issues and someone else encountered it on the same day. I'm sure they will fix it (they haven't yet, JuliaFormatter at this very moment is a subtle codebase-destroyer) but as a newcomer to the ecosystem I am not prepared to understand which bog standard packages can be trusted and which cannot. As an experiment I switched to R and the language is absolute filth compared to Julia, but I haven't seen anyone complain about bugs (the opposite, in fact) and the packages install fast without needing to ship prebuilt sysimages like I do in Julia. Those are the only two good things about R but they're really important.
I think Julia will get there once they have more time in the oven for everything to stabilize and become battle hardened, and then Julia will be a force to be reckoned with. An actually good language for analysis! Amazing!
just to be fair, the very first words in the README for JuliaFormatter is a warning that v2 is broken, and users should stick to v1. so it is not a "subtle" codebase-destroyer so much as a "loud" codebase-destroyer.
That's fair, and my bug was in 2.x, but it doesn't really make me feel better. If anything, I feel worse knowing this is OffsetArrays again--the ecosystem made cross-cutting changes that it doesn't have the manpower to absorb across the board, so everything is just buggy everywhere as a result. This is now a pattern.
The codebase destruction warning was not super loud, though. Obviously I missed it despite using JuliaFormatter constantly. It doesn't get printed when you install the package nor when you use it. It's not on the docs webpage for JuliaFormatter. 2.x is still the version you get when you install JuliaFormatter without specifying a version. The disclaimer is only in the GitHub readme, and I was reading the docs. What other packages have disclaimers that I'm not seeing because I'm "only" reading the user documentation and not the GitHub developer readme?
> so everything is just buggy everywhere as a result
I don't think this is an accurate summary. the bug here is that JuliaFormatter should put a <=1.9 compatibility bound in its Project.toml if it isn't correct with JuliaSyntax.jl
OffsetArrays was different because it exposed a bunch of buggy and common code patterns that relied on (incorrect) assumptions about the array interface.
You're purposefully being disingenuous. README me says "If you're having issues with v2 outputs use the latest v1". That's a big "If". How about If it's not ready for production use, say so explicitly in the README - not maybe use it but maybe don't use it.
Going well, regardless of the regular doom and gloom comments on HN.
https://juliahub.com/case-studies
One of those case studies is me at my former company. We ended up moving away from Julia
Because of Julia flaws, or management decisions completely unrelated to the tooling?
We were a startup and I was the "management", but it was mostly for HR reasons. The original dev who convinced me to try Julia ended up leaving, and when we pivoted to a new niche that required a rethinking of the codebase, we took the opportunity to re-write in C# (mostly because a we _needed_ C# to develop a plugin, and it would simplify things if everything was C#).
Thanks for the clarification.
There's only one case study from 2025, though.
"Keep it steady and slow"
Python started in 1989, it also took its time.
It's not steady, though, when there are 9 projects in 2023, 5 projects in 2024 and 1 project in 2025 (until now). Maybe steady but a steady decrease. I don't want to exaggerate the importance of the case study quantity but overall it's not that promising.
Are you using Julia?
for many types of scientific computing, there's a case to be made it is the best language available. often this type of computing would be in scientific/engineering organizations and not in most software companies. this is its best niche, an important one, but not visible to people with SWE jobs making most software.
it can be used for deep learning but you probably shouldn't, currently, except as a small piece of a large problem where you want Julia for other reasons (e.g. scientific machine learning). They do keep improving this and it will probably be great eventually.
i don't know what the experience is like using it for traditional data science tasks. the plotting libraries are actually pretty nicely designed and no longer have horrible compilation delays.
people who like type systems tend to dislike Julia's type system.
they still have the problem of important packages being maintained by PhD students who graduate and disappear.
as a language it promises a lot and mostly delivers, but those compromises where it can't deliver can be really frustrating. this also produces a social dynamic of disillusioned former true believers.
> people who like type systems tend to dislike Julia's type system.
This is true. As far as I understand it, there is not a type theory basis for Julia's design (type theory seems to have little to say about subtyping type lattices). Relatedly, another comment mentioned that Julia needs sum types.
It is the same type theory that has powered Common Lisp and Dylan.
We're not using "type theory" the same way, I think. I'm thinking in terms of
But it's subtle to talk about. It's not like there is a single type theory that underlies Typescript or Rust, either. These practical languages have partial, (and somewhat post-hoc) formalizations of their systems.For starters,
"On the use of LISP in implementing denotational semantics"
https://dl.acm.org/doi/10.1145/319838.319866
Type theory in CS isn't a synonymous with whatever Haskell happens to do.
I do wonder in particular about the startup time "time-to-plot" issue. I last used Julia about 2021-ish to develop some signal processing code, and restarting the entire application could have easily taken tens of seconds. Both static precompilation and hot reloading were in early development and did not really work well at the time.
On a 5 year old i5-8600, with Samsung PM871b SSD:
Not a super fair test since everything was already hot in i/o cache, but still shows how much things have improved.That was fixed in 1.9. Indeed it makes a huge difference now that you can quickly run for the first time.
This was absolutely not "fixed" in 1.9, what are you talking about. It was improved in 1.9, but that's it. Startup time is still unacceptably slow - still tens of seconds for large codebases.
Worse, there are still way too many compilation traps. Splatted a large collection into a function? Compiler chokes. Your code accidentally moves a value from the value to the type domain? You end up with millions of new types, compiler chokes. Accidentally pirate a method? Huge latency. Chose to write type unstable code? Invalidations tank your latency.
> what are you talking about.
It's the most annoying thing about hn that people will regularly declare/proclaim some thing like this as if it's a Nobel prize winning discovery when it's actually just some incremental improvement. I have no idea how this works in these people's lives - aren't we all SWEs where the specifics actually matter. My hypothesis is these people are just really bad SWEs.
on a macMini (i.e. fast RAM), time to display:
- Plots.jl, 1.4 seconds (include package loading)
- CairoMakie.jl, 4 seconds (including package loading)
julia> @time @eval (using Plots; display(plot(rand(3))))
My shop just moved back to Julia for digital signal processing and it’s accelerated development considerably over our old but mature internal C++ ecosystem.
Mine did the same for image processing but coming from python/numpy/numba. We initially looked at using Rust or C++ but I'm glad we chose to stick it out with Julia despite some initial setbacks. Numerical code flows and read so nicely in Julia. It's also awesome seeing the core language continuously improve so much.
Can you elaborate on what libraries, platform, and tooling you use?
How do you deploy it?
StaticCompiler.jl is the main workhorse.
I'm excited to see `--trim` finally make it, but it only works when all code from entrypoints are statically inferrable. In any non-toy Julia program that's not going to be the case. Julia sorely needs a static mode and a static analyzer that can check for correctness. It also needs better sum type support and better error messages (static and runtime).
In 2020, I thought Julia would be _the_ language to use in 2025. Today I think that won't happen until 2030, if even then. The community is growing too slowly, core packages have extremely few maintainers, and Python and Rust are sucking the air out of the room. This talk at JuliaCon was a good summary of how developers using Rust are so much more productive in Rust than in Julia that they switched away from Julia:
https://www.youtube.com/watch?v=gspuMS1hSQo
Which is pretty telling. It takes a overcoming a certain inertia to move from any language.
Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
I don't think Julia was designed for pure overhead projects in memory-constrained environments, or for squeezing out that last 2% of hardware performance to cut costs, like C++, Rust or Zig.
Julia is the language to use in 2025 if what you’re looking for is a JIT-compiled, multiple-dispatch language that lets you write high-performance technical computing code to run on a cluster or on your laptop for quick experimentation, while also being metaprogrammable and highly interactive, whether for modelling, simulation, optimisation, image processing etc.
actually I think it sort of was, I remember berkeley squeezing a ton of perf out of their cray for a crazy task because it was easy to specialize some wild semi-sparse matrix computations onto an architecture with strange memory/cache bottlenecks, while being guaranteed that the results are still okay.
Telling what? Did you actually listen to the talk that you linked to, or read the top comment there by Chris Rackauckas?
> Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
See his last slide. And no, they didn't replace their Julia use in its entirety with Rust, despite his organization being a Rust shop. Considering Rust as a replacement for Julia makes as much sense to me as to considering C as a replacement for Mathematica; Julia and Mathematica are domain specific (scientific computation) languages, not general systems programming languages.
Neither Julia nor Mathematica is a good fit for embedded device programming.
I also find it amusing how you criticize Julia while praising Python (which was originally a "toy" scripting language succeeding ABC, but found some accidental "gaps" to fit in historically) within the narrative that you built.
> In any non-toy Julia program that's not going to be the case.
Why?
> Telling what? Did you actually listen to the talk that you linked to, or read the top comment there by Chris Rackauckas?
To clarify exactly where I'm coming from, I'm going to expand on my thoughts here.
What is Julia's central conceit? It aims to solve "the two language" problem, i.e. the problem where prototyping or rapid development is done in a dynamic and interactive language like Python or MATLAB, and then moved for production to a faster and less flexible language like Rust or C++.
This is exactly what the speaker in the talk addresses. They are still using Julia for prototyping, but their production use of Julia was replaced with Rust. I've heard several more anecdotal stories of the exact same thing occurring. Here's another high profile instance of Julia not making it to production:
https://discourse.julialang.org/t/julia-used-to-prototype-wh...
Julia is failing at its core conceit.
Julia as a community have to start thinking about what makes a language successful in production.
Quote from the talk:
> "(developers) really love writing Rust ... and I get where they are coming from, especially around the tooling."
Julia's tooling is ... just not good. Try working several hundred thousand line project in Julia and it is painful for so many reasons.
If you don't have a REPL open all the time with the state of your program loaded in the REPL and in your head, Julia becomes painful to work in. The language server crashes all the time, completion is slow, linting has so many false positives, TDD is barebones etc. It's far too easy to write type unstable code. And the worst part is you can write code that you think is type stable, but with a minor refactor your performance can just completely tank. Optimizing for maintaining Julia code over a long period of time with a team just feels futile.
That said, is Python perfect? Absolutely not. There's so many things I wish were different.
But Python was designed (or at the very least evolved) to be a glue language. Being able to write user friendly interfaces to performant C or C++ code was the reason the language took off the way it did.
And the Python language keeps evolving to make it easier to write correct Python code. Type hinting is awesome and Python has much better error messages (static and runtime). I'm far more productive prototyping in Python, even if executing code is slower. When I want to make it fast, it is almost trivial to use PyO3 with Rust to make what I want to run fast. Rust is starting to build up packages used for scientific computing. There's also Numba and Cython, which are pretty awesome and have saved me in a pickle.
As a glue language Python is amazing. And jumping into a million line project still feels practical (Julia's `include` feature alone would prevent this from being tenable). The community is growing still, and projects like `uv` and `ty` are only going to make Python proliferate more.
I do think Julia is ideal for an individual researcher, where one person can keep every line of code in their head and for code that is written to be thrown away. But I'm certainly not betting the near future on this language.
Python has useful and rich ecosystem that grows every day. Julia is mostly pile of broken promises (it neither reads as Python, nor it runs as C, at least not without significant effort required to produce curated benchmarks) and desperate hype generators.
Since you have a rosy picture of Python, I assume you're young. Python has been mostly a fringe/toy language for 2 decades, until around ~2010, when a Python fad started not too different from the Rust fad of today, and at some point Google started using it seriously and thought they can fix Python but gave up eventually. The fad lived on and kept evolving and somehow found some popularity with SciPy and then ML. I used it in 90s a little, and I found the language bad for anything other than replacing simple bash scripts or simple desktop applications or a desktop calculator, and I still think it is (but sure, there are people who disagree and think it is a good language). It was slow and didn't have type system, you didn't know whether your code would crash or not until you run that line of code, and the correctness of your program depended on invisible characters.
"Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing, which is unsurprising because that's not what it was designed for.
It is ironic you bring up hype to criticize Julia while praising Python which found popularity thanks to hype rather than technical merit.
What promise are you referring to? Who promised you what? It's a programming language.
> Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing
Doesn't matter. Languages do not matter, ecosystems do, for they determine what is practically achievable.
And it doesn't matter that Python ecosystem relies on huge amounts of C/C++ code. Python people made the effort to wrap this code, document it and maintain those wrappers. Other people use such code through Python APIs. Yes, every language with FFI can do the same. For some reason none achieved that.
Even people using Julia use PythonCall.jl, that's how much Python is unsuitable.
> What promise are you referring to? Who promised you what? It's a programming language.
Acting dumb is poor rhetorical strategy, and ignores such a nice rhetorical advice as principle of charity - it is quite obvious that I didn't mean that programming language made any promise. Making a promise is something that only people can do. And Julia creators and people promoting it made quite bombastic claims throughout the years that turned out to not have much support in reality.
I leave your assumptions about my age or other properties to you.
Ecosystems matter, but runtimes do as well. Take Java, for instance. It didn’t have to wrap C/C++ libraries, yet it became synonymous with anything data-intensive. From Apache Hadoop to Flink, from Kafka to Pulsar. Sure, this is mostly ETL, streaming, and databases rather than numeric or scientific computing, but it shows that a language plus a strong ecosystem can drive a movement.
This is why I see Julia as the Java for technical computing. It’s tackling a domain that’s more numeric and math-heavy, not your average data pipeline, and while it hasn’t yet reached the same breadth as Python, the potential is there. Hopefully, over time, its ecosystem will blossom in the same way.
If what determines the value of a language is its libraries (which makes no sense to me at all, but let's play your game), then that is one more argument against Python. You don't need FFI to use a Fortran library from Fortran, and I (and many physicists) have found Fortran better suited to HPC than Python since... the day Python came into existence. And no, many other scripting languages have wrappers, and no, scientific computing is not restricted to ML, which is the only area where Python can be argued to have the most wrapper libraries for external code.
Language matters, and the two-language problem is a real problem; you can't make it go away by closing your ears and chanting "doesn't matter! doesn't matter!"
Julia is a real step toward solving this problem, and it lets you interact with libraries/packages in ways that are not possible in Python + Fortran + C/C++ + others. You are free to keep pretending the problem doesn't exist.
You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for a source, you cry foul and, sadly, try to appear smart by saying "you're acting dumb". You should take your own advice and, instead of "acting dumb", explicitly cite the "promises" or "bombastic claims" you are referring to. That is what I asked you to do, but instead of doing it, you are doing what you are doing, which is interesting.
> If what determines the value of a language is its libraries (which makes no sense to me at all, but let's play your game), then that is one more argument against Python
The fact that you can use those nice numerical and scientific libraries from a language that also has a tremendous number of nice libraries in other domains, wide and good IDE support, thorough documentation, and countless tutorials and books available... is an argument against that language? Because you can easily use Fortran code from Fortran?
Nice.
> You don't need FFI to use a Fortran library from Fortran
Wow. Didn't know that.
> And no, many other scripting languages have wrappers,
Always less complete, less documented, with fewer teaching materials available, etc.
But sure, many other languages have wrappers. Julia, for example, wraps the Python API.
> and no, scientific computing is not restricted to ML
Never said it is. I don't do ML, by the way.
> You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for source, you cry foul
Yeah, yeah. My claims about marketing lines like "Julia writes like Python, runs like C" are hyperbolic and require explicit citation, even though everyone who has had any exposure to this language knows these and similar catchphrases.
Look, you like Julia, good for you. Have fun with it.
In the early aughts, educators loved the shit out of Python because "it forced kids to organize their code with indentation". This was about a decade before formatting linters became low-key required for languages.
These are exactly the feelings I left the community with in ~2021 (along with the AD story, which never really materialized _within_ Julia - Enzyme had to come from outside Julia to "save it" - or materialized in a way (Zygote) whose compilation times were absolutely unacceptable compared to competitors like JAX).
More and more over time, I’ve begun to think that the method JIT architecture is a mistake, that subtyping is a mistake.
Subtyping makes abundant sense when paired with multiple dispatch — so perhaps my qualms are not precise there … but it also seems like several designs for static interfaces have sort of bounced off the type system. Not sure, and can’t defend my claims very well.
Julia has much right, but a few things feel wrong in ways that spiral up to the limitations in features like this one.
Anyways, excited to check back next year to see myself proven wrong.
I basically agree about subtyping (though not about multiple dispatch). More importantly, it's worth recognizing that Julia has a niche that literally no one can compete with: interactive, dynamic, and high performance.
Like, what exactly is the alternative? Python? Too slow. Static languages? Unusable for interactive exploration and data science.
That leaves you with hybrids, like Python/Cython, Python/Rust, or Numba, but taken on their own terms these are absolutely terrible languages. Python/Rust is not safe (due to the FFI), certainly not pleasant to develop in, and no matter how you cut your code between the languages, you always lose: you always want your Python part to be in Rust so you get static analysis, safety, and speed, and you always want your Rust part to be in Python so you can experiment with it and introspect it more easily.
I think multiple dispatch (useful as it is) is a little overrated. There's a significant portion of the time where I know I have a closed set of cases to cover, and an enum type with match-like syntax would have worked better. For interfaces, multiple dispatch is good, but again I would have preferred a trait-based approach with static type checking.
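To make the contrast concrete, here's a minimal sketch (names like `Shape` and `area` are made up for illustration). The first half is the idiomatic open-world dispatch style; the second is the closed-set style described above, which in today's Julia ends up as an enum plus an if/elseif chain (or a pattern-matching package such as Match.jl or MLStyle.jl):

```julia
# Idiomatic Julia: open set of cases handled via multiple dispatch.
abstract type Shape end
struct Circle <: Shape
    r::Float64
end
struct Square <: Shape
    side::Float64
end

area(s::Circle) = pi * s.r^2
area(s::Square) = s.side^2

# The "closed set" alternative: an enum plus branching, since base Julia
# has no built-in match syntax.
@enum ShapeKind CIRCLE SQUARE

function area(kind::ShapeKind, x::Float64)
    if kind == CIRCLE
        return pi * x^2
    elseif kind == SQUARE
        return x^2
    else
        error("unreachable")
    end
end

area(Circle(1.0))     # 3.14159...
area(CIRCLE, 1.0)     # same result, but the case set is explicitly closed
```

The dispatch version is trivially extensible by downstream code; the enum version makes the case set explicit but gives up that extensibility, which is exactly the trade-off being debated here.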
I largely think multiple dispatch works well in Julia, and it enables writing performant code in an elegant manner. I mostly have smaller gripes about subtyping and the patterns it encourages with multiple dispatch in Julia, and larger gripes about the lack of tooling in Julia.
But multiple dispatch is also a hammer that makes every problem in Julia look like a nail. And there isn't enough discussion, official or community-driven, that expands on this. In my experience, the average developer new to Julia tends to reach for multiple dispatch without understanding why, mostly because people keep saying it is the best thing since sliced bread.
wrt hybrid languages, honestly, I think Python/Cython is extremely underrated. Sure, you can design an entirely new language like Mojo or Julia, but imo they offer only incremental value over Python/Cython. I would love to peek into another universe where all that money, time, and effort for Mojo and Julia went into Cython instead.
And I personally don't think Python/Rust is that bad. With a little discipline (and some tests), you can keep the boundary safe, for you and your team. Rust offers so much value that I would take on the pain of going through FFI, and PyO3 simplifies this significantly. The development of `polars` is a good case study in how Rust empowers Python.
I think the Julia community could use some reflection on why it hasn't produced the next `polars`. My personal experience with Julia developers (both in person and online) is that they often believe multiple dispatch is so compelling that anyone who "saw the light" would naturally flock to Julia. Instead, I think the real challenge is meeting users where they are and addressing their needs directly. The fastest way to grow Julia as a language is to tag along with Python's success.
Would I prefer a single language that solves all my problems? Yes. But that single language is not Julia, yet, for me.
PS: I really enjoy your blog posts and comments.
Mojo has a different scope than Julia and Python; it targets inference workloads.
Polars is a dataframe library. Yes, it features vectorized operations, but it is focused on columnar data manipulation, not numerical algorithm development. I'd say this is a narrow framing: people are looking at Julia through the lens of a data scientist rather than that of an engineer or computational scientist.
There is no "_the_ language to use". I always pick the language based on the project's delivery requirements, like which SDK has tier 1 support, rather than picking the project based on the language.
I wish a) that I was a Julia programmer and b) that Julia had taken off instead of python for ML. I’m always jealous when I scan the docs.
Python predates Julia by 3 decades. In many ways Julia is a response to Python's shortcomings. Julia could've never taken off "instead of" python but it clearly hopes to become the mature and more performant alternative eventually
Some small additional details: 23 years not 30. Also, I think Julia was started as much in response to Octave/Matlab’s shortcomings. I don’t know if it is written down, but I was told a big impetus was that Edelman had just sold his star-p company to Microsoft, and star-p was based around octave/matlab.
- https://julialang.org/blog/2012/02/why-we-created-julia/
When Julia came out, neither Python nor data science and ML had the popularity they have today. Even 7-8 years ago, people were still having Python vs R debates.
> Even 7-8 years ago, people were still having Python vs R debates.
They still have to this day.
In 2012, python was already well-established in ML, though not as dominant as it is today. scikit-learn was already well-established and Theano was pretty popular. Most of the top entries on Kaggle were C++ or Python.
Julia only came onto my radar in scientific computing (not ML) around 2015-2016. I tried it at the time, but it was really not very stable, and my view was that it was very immature compared to Python's scientific ecosystem. Looking at the dates, v1.0 came out in 2018; I remember going to a talk about it at my academic institution where someone showed off the progress, and we had another play in our research group, but it still didn't have many things we needed and the trade-offs didn't feel great. We were heavy users of IPython and then, when it came out, Jupyter, and while the "Ju" stood for Julia, the Julia kernel/notebook environment didn't work so well because rerunning cells could often cause errors if you'd changed a type, for example.
At the time we were part of the wave, I suppose, that was trying to convince people that open-source Python was a better prospect than MATLAB, which was where many people in physics/engineering were for interpreted languages. In my view at least, it wasn't until much more recently that Julia became a workable alternative to those, regardless of the performance benefits (which were largely attainable in Python and MATLAB anyway - and for us at least, we were happy developing extension modules in C for the flexibility the Python interface gave us on top).
Wow, there are so many amazing practical improvements in this release. It's better at both interactive use _and_ ahead-of-time compilation. Workspaces, apps, and trimmed binaries are massive - they let Julia easily do things normally done in other languages. It will be interesting to see what "traditional" desktop software comes out of that (CLI tools? GUI apps?).
I am so excited - well done everyone!
> For example, the all-inference benchmarks improve by about 10%, an LLVM-heavy workload shows a similar ~10% gain, and building corecompiler.ji improves by 13–16% with BOLT. When combined with PGO and LTO, total improvements of up to ~23% have been observed.
> To build a BOLT-optimized Julia, run the following commands
Is BOLT the default build (e.g. the one fetched by juliaup) on the supported Linux x86_64 and aarch64 platforms? I'm assuming not, based on the wording here, but I'm interested in what the blocker is and whether there are plans to make it part of the default build process. Is it still considered immature? Are there downsides other than the harmless warnings the post mentions?
BOLT isn't on by default. The main problem is that no one has tested it much (because you can only get it by building your own Julia). We should try distributing BOLT by default. It should just work...
This is it. Anyone who's anyone has been waiting for the 1.12 release with the (admittedly experimental) juliac compiler with the --trim feature. This will allow you to create small, redistributable binaries.
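For anyone who hasn't looked at this yet, here's a rough sketch of the kind of program --trim is aimed at. The `(@main)` entry-point convention exists since Julia 1.11, but treat the file as illustrative and check the release notes for the exact juliac invocation rather than trusting anything here:

```julia
# hello.jl -- the sort of code that trims well: one concrete entry point,
# no eval, no dynamic dispatch the compiler can't resolve.
function (@main)(args::Vector{String})::Cint
    name = isempty(args) ? "world" : args[1]
    println("Hello, ", name, "!")
    return 0
end
```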
If it is experimental, it doesn't let you create small, redistributable binaries; it lets you hope that it will eventually create such binaries.
Though it is real progress after years of insisting that the additional package PackageCompiler.jl is all you need.
Honest take: yes, it's not ready. When I tried it, the generated binary crashed.
For what it's worth, I am able to generate a non-small (>1 GiB) binary with 1.11 that runs on other people's machines. Not shipping in production, but it could be if you're willing to put in the effort. So in a sense, PackageCompiler.jl is all you need. ;)
How small are we talking?
~1 MB for hello world (almost all of which is the runtime); fairly complicated simulations come in at ~50 MB (solving 500-state ODEs with implicit solvers).
Being able to redefine structs is what I always wanted when prototyping using Revise.jl :) great to have it
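For anyone who hasn't hit this pain point before, a quick sketch of what changes (as I understand the new behavior; previously the second definition errored with an "invalid redefinition of constant" message and forced a session restart):

```julia
struct Point
    x::Float64
end

# Iterating on the design mid-session: in 1.12 this second definition
# simply replaces the first instead of erroring out.
struct Point
    x::Float64
    y::Float64
end

Point(1.0, 2.0)   # uses the new two-field layout
```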
This is a fantastic release, been looking forward to --trim since the 2024 JuliaCon presentation. All of the other features look like fantastic QoL additions too - especially redefinition of structs and the introduction of apps.
Congrats Julia team!
Has anyone tried the `--trim` option? I wonder how well it works in "real life".
We use it for binary deployments, building simulations into the FMI standard (i.e. building FMUs) https://fmi-standard.org/. We still need to get libraries updated to achieve the smallest possible trim when using the more computationally difficult implicit methods, but the stack has already matured for simple methods like explicit RK and Rosenbrock-type solvers. For folks interested in --trim on SciML, the PR to watch is https://github.com/SciML/NonlinearSolve.jl/pull/665 which is currently held up by https://github.com/SciML/SciMLBase.jl/pull/1074, the last remaining nugget needed to get the vast majority of the SciML solver set trimming well. So hopefully very soon for anyone interested in this part of the world. Note that this does not include inverse problems in the binary as part of the small trim.
But all of this is more about maturing the ecosystem to be more amenable to static compilation and analysis. The whole SciML stack had an initiative starting at the beginning of this summer to add JET.jl testing for type inference everywhere, enforcing it as part of the standard unit tests, and to use AllocCheck.jl for static allocation-free testing of the inner solver loops. With this we have been growing the surface of tools that have static and real-time guarantees. Not done yet, some had to be marked as `@test_broken` for having some branch that can allocate if the condition number hits a numerical fallback and such, but generally it's getting locked down. Think of it as "prototype in Julia, deploy in Rust", except instead of rewriting into Rust we're just locking down the behavior by incrementally enforcing that the package internals satisfy more and more static guarantees.
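For the curious, roughly what that testing pattern looks like. `step!` here is a made-up stand-in for the inner solver loops being described, and the JET/AllocCheck calls are used as I understand their public APIs:

```julia
using Test, JET, AllocCheck

# Hypothetical inner loop standing in for the solver internals above.
function step!(u::Vector{Float64}, du::Vector{Float64}, dt::Float64)
    @inbounds for i in eachindex(u, du)
        u[i] += dt * du[i]
    end
    return u
end

@testset "static guarantees" begin
    # JET: fail if type inference finds runtime dispatch or inference errors.
    @test_opt step!(rand(3), rand(3), 0.1)

    # AllocCheck: fail if the compiled method can allocate on any path.
    @test isempty(check_allocs(step!, (Vector{Float64}, Vector{Float64}, Float64)))
end
```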
I've tried it on some of my julia code. The lack of support for dynamic dispatch severely limits the use of existing libraries. I spent a couple days pruning out dependencies that caused problems, before hitting some that I decided would be more effort to re-implement than I wanted to spend.
So for now we will continue rewriting code that needs to run on small systems rather than deploy the entire julia environment, but I am excited about the progress that has been made in creating standalone executables, and can't wait to see what the next release holds.
It works well --- IF your code is already written in a manner amenable to static analysis. If your coding style is highly dynamic, it will probably be difficult to use this feature for the time being (although the UX should of course improve over time).
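A tiny sketch of what "amenable to static analysis" tends to mean in practice (illustrative names only): concretely typed fields so the compiler can enumerate every call target, instead of abstract fields that force runtime dispatch.

```julia
# Hard for --trim: the abstract field type means every use of `c.val`
# is a dynamic dispatch whose targets can't be enumerated ahead of time.
struct DynamicCell
    val::Number
end

# Friendlier: parametric and concrete, so everything is inferable.
struct StaticCell{T<:Number}
    val::T
end

double(c) = 2 * c.val

double(DynamicCell(1.5))   # works, but via runtime dispatch
double(StaticCell(1.5))    # fully statically resolvable
```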
v1.13 already has a good number of improvements to trim for the less analyzable cases, but even then I think the best thing is for the package ecosystem to start becoming really good for trim, JETLS, etc. by statically checking things.
Somehow I thought Julia had been around for much longer than this.
I think Julia missed the boat with Python totally dominating the AI area.
Which is a shame, because now Python has all the same problems with the long startup time. On my computer, it takes almost 15 seconds just to import all the machine-learning libraries. And I have to do that on every app relaunch.
Waiting 15+ seconds to test small changes to my PyTorch training code on NFS is rather annoying. I know there are ways to work around it, but sometimes I wish we could have a training workflow similar to how Revise works. Make changes to the code, Revise patches it, then run it via a REPL on the main node. Not sure if Revise actually works in a distributed context, but that would be amazing if it did. No need to start/fork a million new Python processes every single time.
Of course I would also rather be doing all of the above in Julia instead of Python ;)
Revise can work on your server for hot reloading if you need it - you copy your new code files in place over the old ones.
Of course there are caveats - it won't update actively running code, but if your code is structured reasonably and you are aware of Revise's API and the very basics of Julia's world age, you can do it pretty easily IME.
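For anyone who wants to try this, a minimal sketch of the workflow (file and function names are hypothetical); `includet` is Revise's tracked include:

```julia
using Revise

# Track the file instead of plain include(); edits to model.jl -- including
# copying a new version over it -- are picked up automatically.
includet("model.jl")        # hypothetical file defining train_step!

train_step!(model, batch)   # runs the current definition

# ...overwrite model.jl on disk, then just call again:
train_step!(model, batch)   # Revise has already applied the new methods
```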