• 6 Posts
  • 20 Comments
Joined 9 months ago
Cake day: February 26th, 2024





  • I don’t know about your workplace, but if at all possible I would try to find time between tasks to spend on learning. If your company doesn’t have a clear policy that employees are free to learn during company time, try estimating your own velocity a bit more conservatively and use the time that frees up for learning.

    About 10 years ago I worked for a company where I was performing quite well. Since that meant I finished my tasks early, I could have taken on even more tasks. But I didn’t really tell our scrum master when I finished early. Instead I spent the time learning, as well as refactoring code to make myself more productive. This added up, and my efficiency only kept increasing, until at some point I needed just one or two days to complete a week’s sprint. I didn’t waste that time; I used it to pick up more architectural stuff on the side, while always learning on the job.

    I’ll admit that when I started down this route I already had a bunch of experience under my belt, and this may not be feasible if you have managers breathing down your neck all the time. But the point is, if you play it smart you can use company time to improve yourself, and they may even appreciate you for it.



  • You can use the regular data structures in Java and run into issues with concurrency, but you can also use unsafe in Rust, so it’s a bit of a moot point.

    In Java it isn’t always clear when something crosses a thread boundary and when it doesn’t. In Rust, it is very explicit when you’re opting into using unsafe, so I think that’s a very clear distinction.

    Java provides classes for thread-safe programming, but the language isn’t thread safe. Just like C++ provides containers for improved memory safety, and yet the language isn’t memory safe.

    The distinction lies between what’s available in the standard library, and what the language enforces.
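
    To make that distinction concrete, here’s a minimal sketch of my own (not code from either standard library) of what language-level enforcement looks like in Rust: sharing mutable state across threads only compiles when the types involved are thread-safe, and swapping the Arc/Mutex pair below for Rc/RefCell would be rejected at compile time rather than misbehaving at runtime.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared counter: Arc gives shared ownership, Mutex gives synchronized access.
        let counter = Arc::new(Mutex::new(0u32));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                // thread::spawn requires the closure to be Send;
                // Arc<Mutex<u32>> is Send, so this compiles.
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        // Replacing Arc<Mutex<u32>> with Rc<RefCell<u32>> would not compile:
        // Rc is not Send, so the compiler refuses to move it into another thread.
        println!("total: {}", counter.lock().unwrap());
    }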




  • Great suggestions! One nitpick:

    But in principle I find this quite workable, as you get to write your CI code in Rust.

    Having used xtask in the past, I’d say this is a downside. CI code is tedious enough to debug as it is, and slowing down the cycle by adding Rust compilation into the mix was a horrible experience. On top of that, CI is an environment where Rust’s focus on correctness isn’t very valuable, since it’s isolated and project-specific anyway.

    I’d rather use Deno, or indeed just (the command runner), for that.



  • I would say at this point in time it’s clearly decided that Rust will be part of the future. Maybe there’s a meaningful place for Zig too, but that’s the only part that’s too early to tell.

    If you think Zig still has a chance at overtaking Rust though, that’s very much wishful thinking. Zig isn’t memory safe, so any areas where security is paramount are out of reach for it. The industry isn’t going back in that direction.

    I actually think Zig might still have a chance in game development, and maybe in specialized areas where Rust’s borrow checker cannot really help anyway, such as JIT compilers.







  • Hehe, yeah, I actually agree in principle, although in the context of web tooling I think it’s at least understandable. For many years, web tooling was almost exclusively written in JavaScript itself, which was hailed as a feature, since it allowed JS developers to easily jump in and help improve their own tooling. And it made the stack relatively simple: All you needed was Node.js and you were good to go.

    Something like the Google Closure Compiler, written in Java, was for many years better than comparable tooling written in JS, but it remained in obscurity, partially because it was cumbersome to set up and people didn’t want to deal with Java.

    Then the JS ecosystem ran into a wall. JS projects were becoming bigger and bigger, and the performance overhead of their homegrown tooling started to frustrate more and more people. That just happened to be the time that Rust came around, and it happened to tick all the boxes:

    • It showed it could solve the performance bottlenecks.
    • It has great support for WASM, which many web developers were taking an interest in.
    • Its syntax is familiar enough for TypeScript developers.
    • It has a good story around interior mutability, which is a common frustration among TypeScript developers, especially those familiar with React.

    I think these things combined helped the language to quickly win the hearts and minds of many in the web community. So now we’re in a position where just name-dropping “Rust” can be a way to quickly resonate with those developers, because they associate it with being fast, reliable and portable. In principle you’re right, it should just be an implementation detail. But through circumstance it seems to have also become an expression of mindshare – i.e. a marketing tool.


  • Finding a Webpack replacement that doesn’t use NPM at all is going to be hard, but there are certainly alternatives that don’t pull in the 1000+ NPM dependencies that Webpack requires.

    Some alternatives you can consider are Rsbuild and Farm. Part of the reason they use so many fewer NPM dependencies is that they’re written in Rust, so they have Cargo dependencies instead, but you shouldn’t notice any of that. Of course, if you want to audit everything it isn’t that much easier, but at least the Cargo ecosystem seems to have avoided quite a few of the mistakes that NPM made. But yes, in the end it still comes down to the extent to which you trust your dependencies.






  • Just keep in mind that inheritance is a heavily contested feature nowadays. Even most people still invested in object-oriented programming recognise that, in hindsight, inheritance was mostly a mistake. The industry as a whole is also shifting towards functional programming, where object orientation takes more of a backseat and inheritance specifically isn’t even supported anymore. So yeah, take the chance to learn, but be cautious before going too deeply into any one direction.
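
    As a small, purely illustrative sketch of my own: in a language without inheritance, such as Rust, the usual substitute is composition plus interfaces (traits), which covers most of what subclassing tends to be used for.

    // Behaviour is described by a trait instead of a base class.
    trait Describe {
        fn describe(&self) -> String;
    }

    struct Engine {
        horsepower: u32,
    }

    // A Car is composed of an Engine rather than inheriting from a Vehicle base class.
    struct Car {
        name: String,
        engine: Engine,
    }

    impl Describe for Car {
        fn describe(&self) -> String {
            format!("{} ({} hp)", self.name, self.engine.horsepower)
        }
    }

    fn main() {
        let car = Car {
            name: "Example".to_string(),
            engine: Engine { horsepower: 90 },
        };
        println!("{}", car.describe());
    }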



  • For a little bit I thought this library might be a subtle joke, seeing the #define _SHITPRESS_H at the start. That, combined with compress() and decompress() not taking any arguments and not having return values, made me think we were being played. Not to mention the library appears to be plain C rather than C++… surely the author should know the difference?

    Then I saw how the interface actually works:

    // interface for the library user, implement these in your program:
    unsigned int SPR_in(); // Return next byte from input or value > 255 on EOF.
    void SPR_out(unsigned char); // Output byte.
    

    This seems extremely poorly thought out. Calling into global functions for input and output means that your library will be a pain to use in any program that has to (de)compress anything more than a single input.
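
    To illustrate why: since SPR_in and SPR_out are free functions, the only way to point them at different inputs and outputs is to juggle global state between calls. For contrast, here’s a rough sketch (in Rust, with made-up names, not the library’s actual API) of a reentrant interface where the streams are simply parameters, so several jobs can coexist:

    use std::io::{self, Read, Write};

    // Hypothetical reentrant interface: input and output are parameters,
    // so two compression jobs can't trample each other's state.
    fn compress(input: &mut impl Read, output: &mut impl Write) -> io::Result<()> {
        let mut buf = [0u8; 4096];
        loop {
            let n = input.read(&mut buf)?;
            if n == 0 {
                break;
            }
            // Real compression would go here; this sketch just copies bytes through.
            output.write_all(&buf[..n])?;
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        // Each call gets its own input and output; no globals involved.
        let mut first = &b"first input"[..];
        let mut out1 = Vec::new();
        compress(&mut first, &mut out1)?;

        let mut second = &b"second input"[..];
        let mut out2 = Vec::new();
        compress(&mut second, &mut out2)?;
        Ok(())
    }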