• by xorvoid on 6/6/2025, 5:32:03 PM

    The real fun thing is when the same application is using “select()” and then somewhere else you open like 5000 files. Then you start getting weird crashes and eventually trace it down to the select bitset having a hardcoded max of 4096 entries and no bounds checking! Fun fun fun.
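
    A minimal sketch of the kind of guard that avoids that silent overflow (assuming the Rust libc crate; the helper name is illustrative): reject any descriptor at or above FD_SETSIZE before FD_SET writes into the fixed-size bitset.

      use std::os::unix::io::RawFd;

      // FD_SET writes into a fixed-size bitset; descriptors at or above
      // FD_SETSIZE would scribble past it, which is the crash described above.
      fn checked_fd_set(fd: RawFd, set: &mut libc::fd_set) -> Result<(), String> {
          if fd < 0 || fd as usize >= libc::FD_SETSIZE as usize {
              return Err(format!("fd {fd} does not fit in an fd_set (FD_SETSIZE = {})",
                                 libc::FD_SETSIZE));
          }
          unsafe { libc::FD_SET(fd, set) };
          Ok(())
      }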

  • by jeroenhd on 6/6/2025, 6:06:08 PM

    I think there's something ironic about combining UNIX's "everything is a file" philosophy with a rule like "every process has a maximum number of open files". Feels a bit like Windows programming back when GDI handles were a limited resource.

    Nowadays Windows seems to have capped the max number of file handles per process at 2^16 (or 8096 if you're using raw C rather than Windows APIs). However, since not everything on Windows is a file, the number of open handles is limited "only by memory", so Windows programs can still do a lot of things that UNIX programs can't once the file handle limit has been reached.

  • by raggi on 6/6/2025, 6:36:11 PM

            use std::io;

            #[cfg(unix)]
            fn raise_file_limit() -> io::Result<()> {
                use libc::{getrlimit, setrlimit, rlimit, RLIMIT_NOFILE};

                unsafe {
                    let mut rlim = rlimit {
                        rlim_cur: 0,
                        rlim_max: 0,
                    };

                    // Read the current soft (rlim_cur) and hard (rlim_max) limits.
                    if getrlimit(RLIMIT_NOFILE, &mut rlim) != 0 {
                        return Err(io::Error::last_os_error());
                    }

                    // Raise the soft limit to the hard limit. Note: on macOS the
                    // hard limit can be RLIM_INFINITY while setrlimit rejects values
                    // above OPEN_MAX, so clamping (e.g. to OPEN_MAX) may be needed there.
                    rlim.rlim_cur = rlim.rlim_max;

                    if setrlimit(RLIMIT_NOFILE, &rlim) != 0 {
                        return Err(io::Error::last_os_error());
                    }
                }

                Ok(())
            }
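
    A minimal usage sketch for the helper above (the fallback message is illustrative): call it once early in main and degrade gracefully if the kernel refuses.

            fn main() {
                #[cfg(unix)]
                if let Err(e) = raise_file_limit() {
                    eprintln!("could not raise RLIMIT_NOFILE: {e}");
                }
                // ... rest of the program ...
            }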

  • by a_t48 on 6/6/2025, 10:23:14 PM

    Years ago I had the fun of hunting down a bug at 3am before a game launch. Randomly, we’d save the game and instead get an empty file. This is pretty much the worst thing a game can do (excepting wiping your hard drive, hello Bungie). Turned out some analytics framework was leaking network connections and thus stealing all our file handles. :(

  • by Izkata on 6/6/2025, 5:01:58 PM

      lsof -p $(echo $$)
    
    The subshell isn't doing anything useful here, could just be:

      lsof -p $$

  • by geocrasher on 6/6/2025, 5:33:42 PM

    Back in the earlier days of web hosting we'd run into this with Apache. In fact, I have a note from 2014 (much later than the early days actually):

      ulimit -n 10000
    
      to set permanently:
      /etc/security/limits.conf
      * - nofile 10000

  • by css on 6/6/2025, 3:58:22 PM

    I ran into this issue recently [0]. Apparently the integrated VSCode terminal sets its own (very high) cap by default, but other shells don't, so all of my testing in the VSCode shell "hid" the bug that other shells exposed.

    [0]: https://github.com/ReagentX/imessage-exporter/issues/314#iss...

  • by database64128 on 6/6/2025, 6:36:03 PM

    This is one of the many things that Go just takes care of automatically. Since Go 1.19, importing the os package raises the open file soft limit to the hard limit at startup: https://github.com/golang/go/commit/8427429c592588af8c49522c...

  • by saagarjha on 6/7/2025, 10:48:02 AM

    As others have mentioned, macOS not only picks low ulimits but also has a fun little poorly-documented limit for sandboxed apps which is not queryable in any way. Unfortunately, how this manifests is that you try to open a file in a sandboxed app and…it just fails, somewhere around the few-thousand range. If you have a sandboxed app, try opening a bunch of files and see how it handles it ;)

    The rationale Apple gives is something about using kernel resources, but a normal file descriptor also uses kernel resources, so I'm leaning towards implementation laziness or legacy like many of the examples here.
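
    A crude way to probe it (a sketch; /etc/hosts is just a convenient file to reopen, and in a sandboxed app the failure reportedly shows up well below the advertised rlimit):

      use std::fs::File;

      fn main() {
          let mut held = Vec::new();
          loop {
              match File::open("/etc/hosts") {
                  // Keep every handle alive so descriptors accumulate.
                  Ok(f) => held.push(f),
                  Err(e) => {
                      println!("open #{} failed: {e}", held.len() + 1);
                      break;
                  }
              }
          }
      }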

  • by trinix912 on 6/6/2025, 4:44:44 PM

    Brings back memories of setting FILES= in config.sys in MS-DOS. I've totally forgotten this can still be a problem nowadays!

  • by userbinator on 6/7/2025, 9:36:37 AM

    > but the solution is to just bump the soft limit of open file descriptors in my shell

    > As you can see the max value reached is around 1600, which is way above the previous limit of 256.

    Without asking and answering "why the hell does it need to keep so many files open", I don't think that's a good way to do things. Raising the limit is justified only if there is a real reason why the process needs to have that many files open simultaneously.

  • by L3viathan on 6/6/2025, 11:08:53 PM

    Nitpick, but:

    > At its core, a file descriptor (often abbreviated as fd) is simply a positive integer

    A _non-negative_ integer.

  • by nasretdinov on 6/6/2025, 3:57:56 PM

    Yeah macOS has a very low default limit, and apparently it affects more than just cargo test, e.g. ClickHouse, and there's even a quite good article about how to increase it permanently: https://clickhouse.com/docs/development/build-osx

  • by loeg on 6/6/2025, 4:37:07 PM

    Also possible to have an fd leak when this error arises. Probably worth investigating a little if that might be the case.
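
    One quick way to check (a sketch; /proc/self/fd is Linux-specific, macOS exposes /dev/fd instead): sample the process's descriptor count over time and see whether it only ever grows.

      use std::fs;

      // Counts entries in /proc/self/fd (the read_dir itself briefly adds one).
      // A count that climbs steadily across samples usually means an fd leak.
      fn open_fd_count() -> std::io::Result<usize> {
          Ok(fs::read_dir("/proc/self/fd")?.count())
      }

      fn main() -> std::io::Result<()> {
          println!("open fds: {}", open_fd_count()?);
          Ok(())
      }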

  • by AdmiralAsshat on 6/6/2025, 5:29:53 PM

    Used to run into this error frequently with a piece of software I supported. I don't remember the specifics, but it was your basic C program to process a record-delimited datafile. A for-loop with an fopen that didn't have a corresponding fclose at the end of the loop. For a sufficiently large datafile, eventually we'd run out of file handles.
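
    The shape of that bug, sketched with the Rust libc crate to mirror the C loop (names are illustrative):

      use std::ffi::CString;

      fn process_records(paths: &[&str]) {
          for p in paths {
              let path = CString::new(*p).unwrap();
              let mode = CString::new("r").unwrap();
              let f = unsafe { libc::fopen(path.as_ptr(), mode.as_ptr()) };
              if f.is_null() {
                  // Once the fd table is exhausted, every fopen lands here (EMFILE).
                  continue;
              }
              // ... process one record ...
              unsafe { libc::fclose(f) }; // the fclose the original loop was missing
          }
      }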

  • by quotemstr on 6/6/2025, 3:55:30 PM

    There's no reason to place an arbitrary cap on the number of file descriptors a process can have. It's neither necessary nor sufficient for limiting the amount of memory the kernel will allocate on behalf of a process. On every Linux system I use, I bump the FD limit to maximum everywhere.

  • by eviks on 6/7/2025, 5:32:20 AM

    > I couldn’t get the script to catch the exact moment the process reached the soft limits,

    That's unfortunate; is there no way to subscribe to open-file events instead of polling for status?

  • by JdeBP on 6/6/2025, 9:00:14 PM

    > Another useful command to check for open file descriptors is lsof,

    ... but the one that comes with the operating system, on the BSDs, is fstat(1).

    > 10u: Another file descriptor [...] likely used for additional terminal interactions.

    The way that ZLE provides its user interface, and indeed what the Z shell does with the terminal in general, is quite interesting; and almost nothing like what one would expect from old books on the Bourne shell.

    > it tries to open more files than the soft limit set by my shell

    Your shell can change limits, but it isn't what is originally setting them. That is either the login program or the SSH daemon. On the BSDs, you can read about the configuration file that controls this in login.conf(5).

  • by mzs on 6/6/2025, 5:57:18 PM

    Is there no way to limit the number of concurrently running tests, as with make -j 128?

  • by gjvc on 6/6/2025, 6:30:33 PM

    seems like the default limits haven't been raised in a very long time.

  • by mbrumlow on 6/7/2025, 3:38:13 PM

    Such a big blog post for “Mac OS has crap max file defaults”

  • by LAC-Tech on 6/6/2025, 10:35:10 PM

    what's a good default for modern 64 bit systems? I know there's some kind of table in the linux kernel.

    1024 on my workstation. seems low.
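
    For reference, the kernel-side ceilings on Linux live in procfs, separate from the per-process rlimit (a sketch; the paths are the standard sysctl files):

      use std::fs;

      fn main() -> std::io::Result<()> {
          // fs.file-max: system-wide cap on open file handles.
          // fs.nr_open:  the highest value the per-process hard limit can be raised to.
          for path in ["/proc/sys/fs/file-max", "/proc/sys/fs/nr_open"] {
              println!("{path}: {}", fs::read_to_string(path)?.trim());
          }
          Ok(())
      }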

  • by amelius on 6/7/2025, 11:19:09 AM

    Why can't we just dynamically allocate file descriptors until memory is physically full?

    I hate to say it but it sounds like the developers of Unix were being lazy here.

  • by JackYoustra on 6/7/2025, 1:09:28 AM

    htop has a setting where you can show how many fds you have open and how many remaining

  • by NooneAtAll3 on 6/6/2025, 8:47:30 PM

    ...but that didn't solve the bug itself, did it?

    what was causing so many open files?

  • by NetOpWibby on 6/7/2025, 8:10:41 AM

    Sublime Text language parsers will crash if you have a lot of windows and files open for some long period of time.

    I really should close all these windows but meh.

  • by jkol36 on 6/7/2025, 1:06:46 AM

    Anyone need any programming done?