• Solving the javascript "Error: EMFILE, too many open files"


    For some days I have searched for a working solution to an error

    Error: EMFILE, too many open files

    It seems that many people have the same problem. The usual answer involves increasing the number of file descriptors. So, I've tried this:

    sysctl -w kern.maxfiles=20480

    The default value is 10240. This seems a little strange to me, because the number of files I'm handling in the directory is under 10240. Even stranger, I still receive the same error after I've increased the number of file descriptors.

    Second question:

    After a number of searches I found a workaround for the "too many open files" problem:

    var fs = require('fs');

    var requestBatches = {};
    function batchingReadFile(filename, callback) {
      // First check to see if there is already a batch
      if (requestBatches.hasOwnProperty(filename)) {
        requestBatches[filename].push(callback);
        return;
      }
    
      // Otherwise start a new one and make a real request
      var batch = requestBatches[filename] = [callback];
      fs.readFile(filename, onRealRead);
    
      // Flush out the batch on complete
      function onRealRead() {
        delete requestBatches[filename];
        for (var i = 0, l = batch.length; i < l; i++) {
          batch[i].apply(null, arguments);
        }
      }
    }
    
    function printFile(file){
        console.log(file);
    }
    
    dir = "/Users/xaver/Downloads/xaver/xxx/xxx/"
    
    var files = fs.readdirSync(dir);
    
    for (var i in files){
        var filename = dir + files[i];
        console.log(filename);
        batchingReadFile(filename, printFile);
    }

    Unfortunately I still receive the same error. What is wrong with this code?

    One last question (I'm new to javascript and node): I'm in the process of developing a web application with a lot of requests for about 5000 daily users. I have many years of experience in programming with other languages like Python and Java, so originally I thought to develop this application with Django or the Play framework. Then I discovered node, and I must say that the idea of the non-blocking I/O model is really nice, seductive, and most of all very fast!

    But what kind of problems should I expect with node? Is it a production proven web server? What are your experiences?

    Tags: javascript, osx, node.js, file-descriptor
    asked Jan 22 '12 by xaverras; edited Jan 23 '12 by jlafay

    Comment:
    "Is it a production proven web server?" May be a bit pedantic, but node isn't a web server as such. – UpTheCreek




    8 Answers

    Using the graceful-fs module by Isaac Schlueter (node.js maintainer) is probably the most appropriate solution. It does incremental back-off if EMFILE is encountered. It can be used as a drop-in replacement for the built-in fs module.
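    For illustration, a minimal sketch of the drop-in usage (the filename here is hypothetical; gracefulify comes from the module's documented API and patches the core fs module so that third-party code benefits too):

    // graceful-fs can be required in place of fs; it queues and retries
    // with incremental back-off when EMFILE is encountered.
    var fs = require('graceful-fs');

    fs.readFile('somefile.txt', function(err, data) {
        if (err) throw err;
        console.log(data.toString());
    });

    // Alternatively, patch the core fs module once so that modules
    // requiring 'fs' directly are covered as well:
    var realFs = require('fs');
    var gracefulFs = require('graceful-fs');
    gracefulFs.gracefulify(realFs);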


    answered Apr 10 '13 by Braveg1rl

    Comments:
    Saved me, why is this not the node default? Why do I need to install some 3rd party plugin to solve the issue? – Anthony Webb
    I think, generally speaking, Node tries to expose as much to the user as possible. This gives everyone (not just Node core developers) the opportunity to solve any problems arising from the use of this relatively raw interface. At the same time, it's really easy to publish solutions, and download those published by others through npm. Don't expect a lot of smarts from Node itself. Instead, expect to find the smarts in packages published on npm. – Braveg1rl
    That's fine if it's your own code, but plenty of npm modules don't use this. – UpTheCreek
    This module solved all my issues! I agree that node appears to be a little raw still, but mainly because it's really hard to understand what is going wrong with so little documentation and right solutions to known issues. – sidonaldson
    How do you npm it? How do I combine this in my code instead of the regular fs? – Aviram Netanel

    For when graceful-fs doesn't work, or you just want to understand where the leak is coming from, follow this process.

    (e.g. graceful-fs isn't gonna fix your wagon if your issue is with sockets.)

    From My Blog Article: http://www.blakerobertson.com/devlog/2014/1/11/how-to-determine-whats-causing-error-connect-emfile-nodejs.html

    How To Isolate

    This command will list the open network handles for nodejs processes:

    lsof -i -n -P | grep nodejs

    COMMAND     PID    USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
    ...
    nodejs    12211    root 1012u  IPv4 151317015      0t0  TCP 10.101.42.209:40371->54.236.3.170:80 (ESTABLISHED)
    nodejs    12211    root 1013u  IPv4 151279902      0t0  TCP 10.101.42.209:43656->54.236.3.172:80 (ESTABLISHED)
    nodejs    12211    root 1014u  IPv4 151317016      0t0  TCP 10.101.42.209:34450->54.236.3.168:80 (ESTABLISHED)
    nodejs    12211    root 1015u  IPv4 151289728      0t0  TCP 10.101.42.209:52691->54.236.3.173:80 (ESTABLISHED)
    nodejs    12211    root 1016u  IPv4 151305607      0t0  TCP 10.101.42.209:47707->54.236.3.172:80 (ESTABLISHED)
    nodejs    12211    root 1017u  IPv4 151289730      0t0  TCP 10.101.42.209:45423->54.236.3.171:80 (ESTABLISHED)
    nodejs    12211    root 1018u  IPv4 151289731      0t0  TCP 10.101.42.209:36090->54.236.3.170:80 (ESTABLISHED)
    nodejs    12211    root 1019u  IPv4 151314874      0t0  TCP 10.101.42.209:49176->54.236.3.172:80 (ESTABLISHED)
    nodejs    12211    root 1020u  IPv4 151289768      0t0  TCP 10.101.42.209:45427->54.236.3.171:80 (ESTABLISHED)
    nodejs    12211    root 1021u  IPv4 151289769      0t0  TCP 10.101.42.209:36094->54.236.3.170:80 (ESTABLISHED)
    nodejs    12211    root 1022u  IPv4 151279903      0t0  TCP 10.101.42.209:43836->54.236.3.171:80 (ESTABLISHED)
    nodejs    12211    root 1023u  IPv4 151281403      0t0  TCP 10.101.42.209:43930->54.236.3.172:80 (ESTABLISHED)
    ....
    

    Notice the 1023u (last line): that's the 1024th file handle, which is the default maximum.

    Now look at the last column, which indicates which resource is open. You'll probably see a number of lines all with the same resource name. Hopefully, that tells you where to look in your code for the leak.

    If you're running multiple node processes, first look up which process has pid 12211; that will tell you which process the handles belong to.

    In my case above, I noticed that there were a bunch of very similar IP addresses. They were all 54.236.3.###. By doing IP address lookups, I was able to determine that in my case it was PubNub related.

    Command Reference

    To get a count of open files for a certain pid, use this syntax. I used this command to test the number of files that were opened after doing various events in my app.

    lsof -i -n -P | grep "8465" | wc -l

    # lsof -i -n -P | grep "nodejs.*8465" | wc -l
    28
    # lsof -i -n -P | grep "nodejs.*8465" | wc -l
    31
    # lsof -i -n -P | grep "nodejs.*8465" | wc -l
    34
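
    If you'd rather watch this count from inside the app, here is a small sketch (assuming a Unix-like system with lsof on the PATH; logOpenHandles is a hypothetical helper name):

    var exec = require('child_process').exec;

    // Shell out to lsof and log how many handles this process has open.
    function logOpenHandles() {
        exec('lsof -p ' + process.pid + ' | wc -l', function(err, stdout) {
            if (err) return console.error(err);
            console.log('open handles: ' + stdout.trim());
        });
    }

    setInterval(logOpenHandles, 5000); // poll every 5 seconds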
    

    What is your process limit?

    ulimit -a

    The line you want will look like this:

    open files                      (-n) 1024


    edited Nov 7 '16 by commanda; answered Jan 12 '14 by blak3r

    Comments:
    How can you change open files limit? – 2619
    ulimit -n 2048 to allow 2048 files open – Gael




    I ran into this problem today, and finding no good solutions for it, I created a module to address it. I was inspired by @fbartho's snippet, but wanted to avoid overwriting the fs module.

    The module I wrote is Filequeue, and you use it just like fs:

    var Filequeue = require('filequeue');
    var fq = new Filequeue(200); // max number of files to open at once
    
    fq.readdir('/Users/xaver/Downloads/xaver/xxx/xxx/', function(err, files) {
        if(err) {
            throw err;
        }
        files.forEach(function(file) {
            fq.readFile('/Users/xaver/Downloads/xaver/xxx/xxx/' + file, function(err, data) {
                // do something here
            });
        });
    });
    


    answered Mar 8 '13 by Trey Griffith




    You're reading too many files at once. Node reads files asynchronously, so the loop starts a read for every file before any of them finish; you're probably opening close to 10240 files at once.

    See if this works:

    var fs = require('fs')
    var events = require('events')
    var util = require('util')
    var path = require('path')
    
    var FsPool = module.exports = function(dir) {
        events.EventEmitter.call(this)
        this.dir = dir;
        this.files = [];
        this.active = [];
        this.threads = 1; // max number of concurrent file operations
        this.on('run', this.runQuota.bind(this));
    };
    // So will act like an event emitter
    util.inherits(FsPool, events.EventEmitter);
    
    FsPool.prototype.runQuota = function() {
        if(this.files.length === 0 && this.active.length === 0) {
            return this.emit('done');
        }
        if(this.files.length > 0 && this.active.length < this.threads) {
            var name = this.files.shift()
    
            this.active.push(name)
            var fileName = path.join(this.dir, name);
            var self = this;
            fs.stat(fileName, function(err, stats) {
                if(err)
                    throw err;
                if(stats.isFile()) {
                    fs.readFile(fileName, function(err, data) {
                        if(err)
                            throw err;
                        self.active.splice(self.active.indexOf(name), 1)
                        self.emit('file', name, data);
                        self.emit('run');
    
                    });
                } else {
                    self.active.splice(self.active.indexOf(name), 1)
                    self.emit('dir', name);
                    self.emit('run');
                }
            });
        }
        return this
    };
    FsPool.prototype.init = function() {
        var dir = this.dir;
        var self = this;
        fs.readdir(dir, function(err, files) {
            if(err)
                throw err;
            self.files = files
            self.emit('run');
        })
        return this
    };
    var fsPool = new FsPool(__dirname)
    
    fsPool.on('file', function(fileName, fileData) {
        console.log('file name: ' + fileName)
        console.log('file data: ', fileData.toString('utf8'))
    
    })
    fsPool.on('dir', function(dirName) {
        console.log('dir name: ' + dirName)
    
    })
    fsPool.on('done', function() {
        console.log('done')
    });
    fsPool.init()
    


    edited Jul 25 '16 by atc; answered Jan 23 '12 by Tim P.




    I just finished writing a little snippet of code to solve this problem myself; all of the other solutions appear way too heavyweight and require you to change your program structure.

    This solution just stalls any fs.readFile or fs.writeFile calls so that there are no more than a set number in flight at any given time.

    // Queuing reads and writes, so your nodejs script doesn't overwhelm system limits catastrophically
    var fs = require('fs');

    global.maxFilesInFlight = 100; // Set this value to some number safeish for your system
    var origRead = fs.readFile;
    var origWrite = fs.writeFile;
    
    var activeCount = 0;
    var pending = [];
    
    var wrapCallback = function(cb){
        return function(){
            activeCount--;
            cb.apply(this,Array.prototype.slice.call(arguments));
            if (activeCount < global.maxFilesInFlight && pending.length){
                console.log("Processing Pending read/write");
                pending.shift()();
            }
        };
    };
    fs.readFile = function(){
        var args = Array.prototype.slice.call(arguments);
        if (activeCount < global.maxFilesInFlight){
            if (args[1] instanceof Function){
                args[1] = wrapCallback(args[1]);
            } else if (args[2] instanceof Function) {
                args[2] = wrapCallback(args[2]);
            }
            activeCount++;
            origRead.apply(fs,args);
        } else {
            console.log("Delaying read:",args[0]);
            pending.push(function(){
                fs.readFile.apply(fs,args);
            });
        }
    };
    
    fs.writeFile = function(){
        var args = Array.prototype.slice.call(arguments);
        if (activeCount < global.maxFilesInFlight){
            if (args[1] instanceof Function){
                args[1] = wrapCallback(args[1]);
            } else if (args[2] instanceof Function) {
                args[2] = wrapCallback(args[2]);
            }
            activeCount++;
            origWrite.apply(fs,args);
        } else {
            console.log("Delaying write:",args[0]);
            pending.push(function(){
                fs.writeFile.apply(fs,args);
            });
        }
    };
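
    For illustration, once the patch above has been loaded, calling code needs no changes; a sketch (the directory path is hypothetical):

    var fs = require('fs');
    var dir = '/tmp';

    fs.readdir(dir, function(err, files) {
        if (err) throw err;
        files.forEach(function(name) {
            // Reads beyond maxFilesInFlight are queued automatically
            // by the patched fs.readFile above.
            fs.readFile(dir + '/' + name, function(err, data) {
                if (err) throw err;
                console.log(name, data.length);
            });
        });
    });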
    


    answered Dec 2 '12 by fbartho

    Comments:
    You should make a repo for this on github. – Nick
    This works very well if graceful-fs is not working for you. – Ceekay




    With bagpipe, you just need to change

    fs.readFile(filename, onRealRead);
    

    =>

    var bagpipe = new Bagpipe(10);
    
    bagpipe.push(fs.readFile, filename, onRealRead);
    

    Bagpipe helps you limit the number of parallel calls. More details: https://github.com/JacksonTian/bagpipe
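
    For illustration, a self-contained sketch under bagpipe's documented API (the file names are hypothetical; the constructor takes the concurrency limit, and push takes the function, its arguments, and the callback):

    var fs = require('fs');
    var Bagpipe = require('bagpipe');

    // At most 10 readFile calls run concurrently; the rest wait in a queue.
    var bagpipe = new Bagpipe(10);

    ['a.txt', 'b.txt', 'c.txt'].forEach(function(filename) {
        bagpipe.push(fs.readFile, filename, function(err, data) {
            if (err) throw err;
            console.log(filename, data.length);
        });
    });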


    answered Nov 20 '12 by user1837639

    Comments:
    It's all in Chinese or another Asian language. Is there any documentation written in English? – Fatih Arslan
    @FatihArslan An English doc is available now. – user1837639
    Or use async.js – Melbourne2991




    I had the same problem when running the nodemon command, so I reduced the number of files open in Sublime Text and the error disappeared.


    answered Dec 9 '15 by Buhiire Keneth

    Comment:
    I, too, was getting EMFILE errors and through trial and error noticed that closing some Sublime windows resolved the issue. I still don't know why. I tried adding ulimit -n 2560 to my .bash_profile, but that didn't solve the issue. Does this indicate a need to change to Atom instead? – The Qodesmith




    cwait is a general solution for limiting concurrent executions of any functions that return promises.

    In your case the code could be something like:

    var Promise = require('bluebird');
    var cwait = require('cwait');
    
    // Allow max. 10 concurrent file reads.
    var queue = new cwait.TaskQueue(Promise, 10);
    var read = queue.wrap(Promise.promisify(batchingReadFile));
    
    Promise.map(files, function(filename) {
        console.log(filename);
        return read(filename);
    });
    


    answered May 10 '16 by jjrv



